1. Beitner J, Helbing J, David EJ, Võ MLH. Using a flashlight-contingent window paradigm to investigate visual search and object memory in virtual reality and on computer screens. Sci Rep 2024; 14:8596. [PMID: 38615047] [DOI: 10.1038/s41598-024-58941-8]
Abstract
A popular technique to modulate visual input during search is to use gaze-contingent windows. However, these are often rather discomforting, providing the impression of visual impairment. To counteract this, we asked participants in this study to search through illuminated as well as dark three-dimensional scenes using a more naturalistic flashlight with which they could illuminate the rooms. In a surprise incidental memory task, we tested the identities and locations of objects encountered during search. Importantly, we tested this study design in both immersive virtual reality (VR; Experiment 1) and on a desktop-computer screen (Experiment 2). As hypothesized, searching with a flashlight increased search difficulty and memory usage during search. We found a memory benefit for identities of distractors in the flashlight condition in VR but not in the computer screen experiment. Surprisingly, location memory was comparable across search conditions despite the enormous difference in visual input. Subtle differences across experiments only appeared in VR after accounting for previous recognition performance, hinting at a benefit of flashlight search in VR. Our findings highlight that removing visual information does not necessarily impair location memory, and that screen experiments using virtual environments can elicit the same major effects as VR setups.
Affiliation(s)
- Julia Beitner: Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Jason Helbing: Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Erwan Joël David: Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany; LIUM, Le Mans Université, Le Mans, France
- Melissa Lê-Hoa Võ: Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
2. Zhou Z, Geng JJ. Learned associations serve as target proxies during difficult but not easy visual search. Cognition 2024; 242:105648. [PMID: 37897882] [DOI: 10.1016/j.cognition.2023.105648]
Abstract
The target template contains information in memory that is used to guide attention during visual search and is typically thought of as containing features of the actual target object. However, when targets are hard to find, it is advantageous to use other information in the visual environment that is predictive of the target's location to help guide attention. The purpose of these studies was to test if newly learned associations between face and scene category images lead observers to use scene information as a proxy for the face target. Our results showed that scene information was used as a proxy for the target to guide attention but only when the target face was difficult to discriminate from the distractor face; when the faces were easy to distinguish, attention was no longer guided by the scene unless the scene was presented earlier. The results suggest that attention is flexibly guided by both target features as well as features of objects that are predictive of the target location. The degree to which each contributes to guiding attention depends on the efficiency with which that information can be used to decode the location of the target in the current moment. The results contribute to the view that attentional guidance is highly flexible in its use of information to rapidly locate the target.
Affiliation(s)
- Zhiheng Zhou: Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA 95618, USA
- Joy J Geng: Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA 95618, USA; Department of Psychology, University of California, One Shields Ave, Davis, CA 95616, USA
3. Malpica S, Martin D, Serrano A, Gutierrez D, Masia B. Task-Dependent Visual Behavior in Immersive Environments: A Comparative Study of Free Exploration, Memory and Visual Search. IEEE Trans Vis Comput Graph 2023; 29:4417-4425. [PMID: 37788210] [DOI: 10.1109/tvcg.2023.3320259]
Abstract
Visual behavior depends on both bottom-up mechanisms, where gaze is driven by the visual conspicuity of the stimuli, and top-down mechanisms, guiding attention towards relevant areas based on the task or goal of the viewer. While this is well-known, visual attention models often focus on bottom-up mechanisms. Existing works have analyzed the effect of high-level cognitive tasks like memory or visual search on visual behavior; however, they have often done so with different stimuli, methodology, metrics and participants, which makes drawing conclusions and comparisons between tasks particularly difficult. In this work we present a systematic study of how different cognitive tasks affect visual behavior in a novel within-subjects design scheme. Participants performed free exploration, memory and visual search tasks in three different scenes while their eye and head movements were being recorded. We found significant, consistent differences between tasks in the distributions of fixations, saccades and head movements. Our findings can provide insights for practitioners and content creators designing task-oriented immersive applications.
4. Nachtnebel SJ, Cambronero-Delgadillo AJ, Helmers L, Ischebeck A, Höfler M. The impact of different distractions on outdoor visual search and object memory. Sci Rep 2023; 13:16700. [PMID: 37794077] [PMCID: PMC10551016] [DOI: 10.1038/s41598-023-43679-6]
Abstract
We investigated whether and how different types of search distractions affect visual search behavior and target memory while participants searched in a real-world environment. They searched either undistracted (control condition), listened to a podcast (auditory distraction), counted down aloud at intervals of three while searching (executive working memory load), or were forced to stop the search on half of the trials (time pressure). In line with findings from laboratory settings, participants searched longer but made fewer errors when the target was absent than when it was present, regardless of distraction condition. Furthermore, compared to the auditory distraction condition, the executive working memory load led to higher error rates (but not longer search times). In a surprise memory test after the end of the search tasks, recognition was better for previously present targets than for absent targets. Again, this held regardless of the previous distraction condition, although participants in the executive working memory load condition remembered significantly fewer targets than those in the control condition. The findings suggest that executive working memory load, but likely not auditory distraction or time pressure, affected visual search performance and target memory in a real-world environment.
Affiliation(s)
- Linda Helmers: Department of Psychology, University of Graz, Universitätsplatz 2/III, 8010 Graz, Austria
- Anja Ischebeck: Department of Psychology, University of Graz, Universitätsplatz 2/III, 8010 Graz, Austria
- Margit Höfler: Department of Psychology, University of Graz, Universitätsplatz 2/III, 8010 Graz, Austria; Department for Dementia Research, University for Continuing Education Krems, Dr.-Karl-Dorrek-Straße 30, 3500 Krems, Austria
5. Mahr JB, Schacter DL. A language of episodic thought? Behav Brain Sci 2023; 46:e283. [PMID: 37766653] [DOI: 10.1017/s0140525x2300198x]
Abstract
We propose that episodic thought (i.e., episodic memory and imagination) is a domain where the language-of-thought hypothesis (LoTH) could be fruitfully applied. On the one hand, LoTH could explain the structure of what is encoded into and retrieved from long-term memory. On the other, LoTH can help make sense of how episodic contents come to play such a large variety of different cognitive roles after they have been retrieved.
Affiliation(s)
- Johannes B Mahr: Department of Psychology, Harvard University, Cambridge, MA, USA
6. Cárdenas-Miller N, O'Donnell RE, Tam J, Wyble B. Surprise! Draw the scene: Visual recall reveals poor incidental working memory following visual search in natural scenes. Mem Cognit 2023. [PMID: 37770695] [DOI: 10.3758/s13421-023-01465-9]
Abstract
Searching within natural scenes can induce incidental encoding of information about the scene and the target, particularly when the scene is complex or repeated. However, recent evidence from attribute amnesia (AA) suggests that in some situations, searchers can find a target without building a robust incidental memory of its task-relevant features. Through drawing-based visual recall and an AA search task, we investigated whether search in natural scenes necessitates memory encoding. Participants repeatedly searched for and located an easily detected item in novel scenes for numerous trials before being unexpectedly prompted to draw either the entire scene (Experiment 1) or their search target (Experiment 2) directly after viewing the search image. Naïve raters assessed the similarity of the drawings to the original information. We found that surprise-trial drawings of the scene and search target were both poorly recognizable, but the same participants produced highly recognizable drawings on the next trial, when they had an expectation to draw the image. Experiment 3 further showed that the poor surprise-trial memory could not merely be attributed to interference from the surprising event. Our findings suggest that even for searches done in natural scenes, it is possible to locate a target without creating a robust memory of either it or the scene it was in, even when it was attended just a few seconds earlier. This disconnection between attention and memory might reflect a fundamental property of cognitive computations designed to optimize task performance and minimize resource use.
Affiliation(s)
- Ryan E O'Donnell: Pennsylvania State University, University Park, PA, USA; Drexel University, Philadelphia, PA, USA
- Joyce Tam: Pennsylvania State University, University Park, PA, USA
- Brad Wyble: Pennsylvania State University, University Park, PA, USA
7. Sasin E, Markov Y, Fougnie D. Meaningful objects avoid attribute amnesia due to incidental long-term memories. Sci Rep 2023; 13:14464. [PMID: 37660090] [PMCID: PMC10475071] [DOI: 10.1038/s41598-023-41642-z]
Abstract
Attribute amnesia describes the failure to report an attribute of an attended stimulus when unexpectedly probed, likely reflecting a lack of working memory consolidation. Previous studies have shown that unique meaningful objects are immune to attribute amnesia. However, these studies used highly dissimilar foils to test memory, raising the possibility that good performance at the surprise test was based on an imprecise (gist-like) form of long-term memory. In Experiment 1, we explored whether a more sensitive memory test would reveal attribute amnesia for meaningful objects. We used a four-alternative forced-choice test with foils having mismatched exemplar (e.g., apple pie/pumpkin pie) and/or state (e.g., cut/full) information. Errors indicated intact exemplar, but not state, information. Thus, meaningful objects are vulnerable to attribute amnesia under the right conditions. In Experiments 2A-2D, we manipulated the familiarity signals of test items by introducing a critical object as a pre-surprise target. In the surprise trial, this critical item matched one of the foil choices. Participants selected the critical object more often than other items. By demonstrating that familiarity influences responses in this paradigm, we suggest that meaningful objects are not immune to attribute amnesia but instead side-step its effects.
Affiliation(s)
- Edyta Sasin: Department of Psychology, New York University Abu Dhabi, Abu Dhabi, UAE
- Yuri Markov: Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Daryl Fougnie: Department of Psychology, New York University Abu Dhabi, Abu Dhabi, UAE
8. Klever L, Islam J, Võ MLH, Billino J. Aging attenuates the memory advantage for unexpected objects in real-world scenes. Heliyon 2023; 9:e20241. [PMID: 37809883] [PMCID: PMC10560015] [DOI: 10.1016/j.heliyon.2023.e20241]
Abstract
Across the adult lifespan, memory processes are subject to pronounced changes. Prior knowledge and expectations might critically shape functional differences; however, corresponding findings have remained ambiguous so far. Here, we chose a tailored approach to scrutinize how schema (in-)congruencies affect older and younger adults' memory for objects embedded in real-world scenes, a scenario close to everyday memory demands. A sample of 23 older (52-81 years) and 23 younger adults (18-38 years) freely viewed 60 photographs of scenes in which target objects were included that were either congruent or incongruent with the given context information. After a delay, recognition performance for those objects was determined. In addition, recognized objects had to be matched to the scene context in which they were previously presented. While we found schema violations beneficial for object recognition across age groups, the advantage was significantly less pronounced in older adults. We moreover observed an age-related congruency bias for matching objects to their original scene context. Our findings support a critical role of predictive processes for age-related memory differences and indicate enhanced weighting of predictions with age. We suggest that recent predictive processing theories provide a particularly useful framework to elaborate on age-related functional vulnerabilities as well as stability.
Affiliation(s)
- Lena Klever: Experimental Psychology, Justus Liebig University Giessen, Germany; Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
- Jasmin Islam: Experimental Psychology, Justus Liebig University Giessen, Germany
- Melissa Le-Hoa Võ: Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Jutta Billino: Experimental Psychology, Justus Liebig University Giessen, Germany; Center for Mind, Brain, and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Germany
9. Peacock CE, Singh P, Hayes TR, Rehrig G, Henderson JM. Searching for meaning: Local scene semantics guide attention during natural visual search in scenes. Q J Exp Psychol (Hove) 2023; 76:632-648. [PMID: 35510885] [PMCID: PMC11132926] [DOI: 10.1177/17470218221101334]
Abstract
Models of visual search in scenes include image salience as a source of attentional guidance. However, because scene meaning is correlated with image salience, it could be that the salience predictor in these models is driven by meaning. To test this proposal, we generated meaning maps that represented the spatial distribution of semantic informativeness in scenes and salience maps that represented the spatial distribution of conspicuous image features, and tested their influence on fixation densities from two object search tasks in real-world scenes. The results showed that meaning accounted for significantly greater variance in fixation densities than image salience, both overall and in early attention, across both studies. Meaning explained 58% and 63% of the theoretical ceiling of variance in attention in the two studies, respectively. Furthermore, both studies demonstrated that fast initial saccades were not more likely than slower initial saccades to be directed to higher-salience regions, and that initial saccades of all latencies were directed to regions containing higher meaning than salience. Together, these results demonstrate that even though meaning was task-neutral, the visual system still selected meaningful over salient scene regions for attention during search.
Affiliation(s)
- Candace E Peacock: Center for Mind and Brain, University of California, Davis, Davis, CA, USA; Department of Psychology, University of California, Davis, Davis, CA, USA
- Praveena Singh: Center for Neuroscience, University of California, Davis, Davis, CA, USA
- Taylor R Hayes: Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Gwendolyn Rehrig: Department of Psychology, University of California, Davis, Davis, CA, USA
- John M Henderson: Center for Mind and Brain, University of California, Davis, Davis, CA, USA; Department of Psychology, University of California, Davis, Davis, CA, USA
10. Maharjan S, Dhakal L, George L, Shrestha B, Coombe H, Bhatta S, Kristensen S. Socio-culturally adapted educational videos increase maternal and newborn health knowledge in pregnant women and female community health volunteers in Nepal's Khotang district. Womens Health (Lond) 2022; 18:17455057221104297. [PMID: 35748586] [PMCID: PMC9234840] [DOI: 10.1177/17455057221104297]
Abstract
OBJECTIVES While Nepal has made significant improvements in maternal and newborn health overall, the lack of maternal and newborn health-related knowledge in the more rural parts of the country has led to significant disparities in terms of both maternal and newborn health service utilization and maternal and newborn health outcomes. This study aimed to assess whether viewing culturally adapted maternal and newborn health educational films had a positive impact on (1) the maternal and newborn health knowledge levels among pregnant women and (2) the postpartum hemorrhage-related knowledge levels among Female Community Health Volunteers in rural Nepal. METHODS Four locations were selected for their remoteness and comparatively high number of pregnancies. A convenience sample of 101 pregnant women and 39 Female Community Health Volunteers were enrolled in the study. A pre- and post-test design was employed to assess this intervention. Paired t-tests were used to analyze the change in number of correct responses by knowledge domain for multi-film participants, producing a numeric "mean knowledge score," and McNemar's tests were used to calculate the change and significance among select questions grouped into distinct themes, domains, and points of "maternal and newborn health-related knowledge" based on the priorities outlined in Nepal's maternal and newborn health 2030 goals. RESULTS There was a significant improvement in knowledge scores on maternal and newborn health issues after watching the educational films for both types of participants. The mean knowledge score for pregnant women improved from 10 to 15 (P < 0.001) for the Understanding Antenatal Care (ANC) film, 3 to 10 (P < 0.001) for the Warning Signs in Pregnancy film, and 6 to 14 (P < 0.001) for the Newborn Care film. For the Female Community Health Volunteers, knowledge also significantly improved (P < 0.05) in all except one category after watching the postpartum hemorrhage film. The percentage who correctly answered when to administer misoprostol (80%-95%) was the only measure on which knowledge improvement was not significant (P = 0.057). CONCLUSION Using culturally adapted educational films is an effective intervention to improve short-term maternal and newborn health-related knowledge among rural populations with low educational levels. The authors recommend additional larger-scale trials of this type of intervention in Nepal and other low- and middle-income countries to determine the impact on long-term maternal and newborn health knowledge and behaviors among rural populations.
Affiliation(s)
- Sajana Maharjan: One Heart Worldwide, Lalitpur, Nepal. Correspondence: Sajana Maharjan, One Heart Worldwide, Bagdol, Ward No. 4, P.O. Box 3764, Lalitpur 44600, Nepal
11. Ramzaoui H, Faure S, Spotorno S. Age-related differences when searching in a real environment: The use of semantic contextual guidance and incidental object encoding. Q J Exp Psychol (Hove) 2021; 75:1948-1958. [PMID: 34816760] [DOI: 10.1177/17470218211064887]
Abstract
Visual search is a crucial, everyday activity that declines with aging. Here, referring to the environmental support account, we hypothesized that semantic contextual associations between the target and the neighboring objects (e.g., a teacup near a tea bag and a spoon), acting as external cues, may counteract this decline. Moreover, when searching for a target, viewers may encode information about the co-present distractor objects, by simply looking at them. In everyday life, where viewers often search for several targets within the same environment, such distractor objects may often become targets of future searches. Thus, we examined whether incidentally fixating a target during previous trials, when it was a distractor, may also modulate the impact of aging on search performance. We used everyday object arrays on tables in a real room, where healthy young and older adults had to search sequentially for multiple objects across different trials within the same array. We showed that search was quicker: (1) in young than older adults, (2) for targets surrounded by semantically associated objects than unassociated objects, but only in older adults, and (3) for incidentally fixated targets than for targets that were not fixated when they were distractors, with no differences between young and older adults. These results suggest that older viewers use both environmental support based on object semantic associations and object information incidentally encoded to enhance efficiency of real-world search, even in relatively simple environments. This reduces, but does not eliminate, search decline related to aging.
Affiliation(s)
- Sara Spotorno: School of Psychology, Keele University, United Kingdom
12. Bainbridge WA, Kwok WY, Baker CI. Disrupted object-scene semantics boost scene recall but diminish object recall in drawings from memory. Mem Cognit 2021; 49:1568-1582. [PMID: 34031795] [PMCID: PMC8568627] [DOI: 10.3758/s13421-021-01180-3]
Abstract
Humans are highly sensitive to the statistical relationships between features and objects within visual scenes. Inconsistent objects within scenes (e.g., a mailbox in a bedroom) instantly jump out to us and are known to catch our attention. However, it is debated whether such semantic inconsistencies result in boosted memory for the scene, impaired memory, or have no influence on memory. Here, we examined the influence of scene-object consistency on memory representations measured through drawings made during recall. Participants (N = 30) were eye-tracked while studying 12 real-world scene images with an added object that was either semantically consistent or inconsistent. After a 6-minute distractor task, they drew the scenes from memory while pen movements were tracked electronically. Online scorers (N = 1,725) rated each drawing for diagnosticity, object detail, spatial detail, and memory errors. Inconsistent scenes were recalled more frequently, but contained less object detail. Further, inconsistent objects elicited more errors reflecting looser memory binding (e.g., migration across images). These results point to a dual effect in memory of boosted global (scene) but diminished local (object) information. Finally, we observed that participants fixated longest on inconsistent objects, but these fixations during study were not correlated with recall performance, time, or drawing order. In sum, these results show a nuanced effect of scene inconsistencies on memory detail during recall.
Affiliation(s)
- Wilma A Bainbridge: Department of Psychology, University of Chicago, 5848 South University Ave, 303 Beecher Hall, Chicago, IL 60637, USA; Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20814, USA
- Wan Y Kwok: Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20814, USA; University of Cincinnati College of Medicine, Cincinnati, OH 45267, USA
- Chris I Baker: Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD 20814, USA
13. Peacock CE, Cronin DA, Hayes TR, Henderson JM. Meaning and expected surfaces combine to guide attention during visual search in scenes. J Vis 2021; 21:1. [PMID: 34609475] [PMCID: PMC8496418] [DOI: 10.1167/jov.21.11.1]
Abstract
How do spatial constraints and meaningful scene regions interact to control overt attention during visual search for objects in real-world scenes? To answer this question, we combined novel surface maps of the likely locations of target objects with maps of the spatial distribution of scene semantic content. The surface maps captured likely target surfaces as continuous probabilities. Meaning was represented by meaning maps highlighting the distribution of semantic content in local scene regions. Attention was indexed by eye movements during the search for target objects that varied in the likelihood they would appear on specific surfaces. The interaction between surface maps and meaning maps was analyzed to test whether fixations were directed to meaningful scene regions on target-related surfaces. Overall, meaningful scene regions were more likely to be fixated if they appeared on target-related surfaces than if they appeared on target-unrelated surfaces. These findings suggest that the visual system prioritizes meaningful scene regions on target-related surfaces during visual search in scenes.
Affiliation(s)
- Candace E Peacock: Center for Mind and Brain, University of California, Davis, Davis, CA, USA; Department of Psychology, University of California, Davis, Davis, CA, USA
- Deborah A Cronin: Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- Taylor R Hayes: Center for Mind and Brain, University of California, Davis, Davis, CA, USA
- John M Henderson: Center for Mind and Brain, University of California, Davis, Davis, CA, USA; Department of Psychology, University of California, Davis, Davis, CA, USA
14. Enders LR, Smith RJ, Gordon SM, Ries AJ, Touryan J. Gaze Behavior During Navigation and Visual Search of an Open-World Virtual Environment. Front Psychol 2021; 12:681042. [PMID: 34434140] [PMCID: PMC8380848] [DOI: 10.3389/fpsyg.2021.681042]
Abstract
Eye tracking has been an essential tool within the vision science community for many years. However, the majority of studies involving eye-tracking technology employ a relatively passive approach through the use of static imagery, prescribed motion, or video stimuli. This is in contrast to our everyday interaction with the natural world, where we navigate our environment while actively seeking and using task-relevant visual information. For this reason, an increasing number of vision researchers are employing virtual environment platforms, which offer interactive, realistic visual environments while maintaining a substantial level of experimental control. Here, we recorded eye movement behavior while subjects freely navigated through a rich, open-world virtual environment. Within this environment, subjects completed a visual search task in which they were asked to find and count occurrences of specific targets among numerous distractor items. We assigned each participant to one of four target conditions: Humvees, motorcycles, aircraft, or furniture. Our results show a statistically significant relationship between gaze behavior and target objects across target conditions, with increased visual attention toward assigned targets. Specifically, we observed an increase in the number of fixations and an increase in dwell time on target relative to distractor objects. In addition, we included a divided attention task to investigate how search changed with the addition of a secondary task. With increased cognitive load, subjects slowed their speed, decreased gaze on objects, and increased the number of objects scanned in the environment. Overall, our results confirm previous findings and support that complex virtual environments can be used for active visual search experimentation, maintaining a high level of precision in the quantification of gaze information and visual attention. This study contributes to our understanding of how individuals search for information in a naturalistic (open-world) virtual environment. Likewise, our paradigm provides an intriguing look into the heterogeneity of individual behaviors when completing an untimed visual search task while actively navigating.
Affiliation(s)
- Anthony J Ries
- DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, United States
- Warfighter Effectiveness Research Center, U.S. Air Force Academy, Colorado Springs, CO, United States
- Jonathan Touryan
- DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, United States
15. The detail is in the difficulty: Challenging search facilitates rich incidental object encoding. Mem Cognit 2021; 48:1214-1233. [PMID: 32562249] [DOI: 10.3758/s13421-020-01051-3]
Abstract
When searching for objects in the environment, observers necessarily encounter other, nontarget, objects. Despite their irrelevance for search, observers often incidentally encode the details of these objects, an effect that is exaggerated as the search task becomes more challenging. Although it is well established that searchers create incidental memories for targets, less is known about the fidelity with which nontargets are remembered. Do observers store richly detailed representations of nontargets, or are these memories characterized by gist-level detail, containing only the information necessary to reject the item as a nontarget? We addressed this question across two experiments in which observers completed multiple-target (one to four potential targets) searches, followed by surprise alternative forced-choice (AFC) recognition tests for all encountered objects. To assess the detail of incidentally stored memories, we used similarity rankings derived from multidimensional scaling to manipulate the perceptual similarity across objects in 4-AFC (Experiment 1a) and 16-AFC (Experiments 1b and 2) tests. Replicating prior work, observers recognized more nontarget objects encountered during challenging, relative to easier, searches. More importantly, AFC results revealed that observers stored more than gist-level detail: When search objects were not recognized, observers systematically chose lures with higher perceptual similarity, reflecting partial encoding of the search object's perceptual features. Further, similarity effects increased with search difficulty, revealing that incidental memories for visual search objects are sharpened when the search task requires greater attentional processing.
16. Kristjánsson Á, Draschkow D. Keeping it real: Looking beyond capacity limits in visual cognition. Atten Percept Psychophys 2021; 83:1375-1390. [PMID: 33791942] [PMCID: PMC8084831] [DOI: 10.3758/s13414-021-02256-7]
Abstract
Research within visual cognition has made tremendous strides in uncovering the basic operating characteristics of the visual system by reducing the complexity of natural vision to artificial but well-controlled experimental tasks and stimuli. This reductionist approach has, for example, been used to assess the basic limitations of visual attention, visual working memory (VWM) capacity, and the fidelity of visual long-term memory (VLTM). The assessment of these limits is usually made in a pure sense, irrespective of goals, actions, and priors. While it is important to map out the bottlenecks our visual system faces, we focus here on selected examples of how such limitations can be overcome. Recent findings suggest that during more natural tasks, capacity may be higher than reductionist research suggests, and that separable systems subserve different actions, such as reaching and looking, which might provide important insights about how pure attentional or memory limitations could be circumvented. We also review evidence suggesting that the closer we get to naturalistic behavior, the more we encounter implicit learning mechanisms that operate "for free" and "on the fly." These mechanisms provide a surprisingly rich visual experience, which can support capacity-limited systems. We consider whether natural tasks may yield different estimates of the limitations of VWM, VLTM, and attention, and propose that capacity measurements should also pass the real-world test within naturalistic frameworks. Our review highlights various approaches for this and suggests that our understanding of visual cognition will benefit from incorporating the complexities of real-world cognition in experimental approaches.
Affiliation(s)
- Árni Kristjánsson
- School of Health Sciences, University of Iceland, Reykjavík, Iceland.
- School of Psychology, National Research University Higher School of Economics, Moscow, Russia.
- Dejan Draschkow
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK.
17. Conti F, Irish M. Harnessing Visual Imagery and Oculomotor Behaviour to Understand Prospection. Trends Cogn Sci 2021; 25:272-283. [PMID: 33618981] [DOI: 10.1016/j.tics.2021.01.009]
Abstract
Much of the rich internal world constructed by humans is derived from, and experienced through, visual mental imagery. Despite growing appreciation of visual exploration in guiding episodic memory processes, extant theories of prospection have yet to accommodate the precise role of visual mental imagery in the service of future-oriented thinking. We propose that the construction of future events relies on the assimilation of perceptual details originally experienced, and subsequently reinstantiated, predominantly in the visual domain. Individual differences in the capacity to summon discrete aspects of visual imagery can therefore account for the diversity of content generated by humans during future simulation. Our integrative framework provides a novel testbed to query alterations in future thinking in health and disease.
Affiliation(s)
- Federica Conti
- Institut des Neurosciences de la Timone, Aix-Marseille University, 27 Boulevard Jean Moulin, 13005 Marseille, France; The University of Sydney, Brain and Mind Centre and School of Psychology, 94 Mallett Street, Camperdown, NSW 2050, Australia.
- Muireann Irish
- The University of Sydney, Brain and Mind Centre and School of Psychology, 94 Mallett Street, Camperdown, NSW 2050, Australia.
18.
Abstract
In this paper, we define a new method for analyzing object-scene contextual relationships using computational linguistics: Linguistic Analysis of Scene Semantics, or LASS. LASS uses linguistic semantic similarity relationships between scene object and context labels embedded in a vector-space language model: Facebook Research's fastText. Importantly, the use of fastText permits semantic similarity score calculation between any set of strings, and thus between elements of any set of image data for which labels are available. Scene semantic similarity scores are then embedded in object segmentation mask locations in the image, creating a semantic similarity map. LASS can also be fully automated by generating context and object labels, as well as object segmentation masks, using deep learning. We compare semantic similarity maps built from human- and neural network-generated annotations on a corpus of images taken from the LabelMe database. Semantic similarity maps produced by the fully automated LASS have a number of desirable properties while maintaining a high degree of spatial and semantic similarity to the human-derived maps. Finally, we use LASS to evaluate the distribution of semantically consistent scene elements in space. Both human- and network-derived maps show relatively uniform distributions of semantic relatedness to scene context, suggesting that contextually appropriate objects are likely to be found in all image regions. Taken together, these results suggest that LASS is accurate, automatic, flexible, and useful in a number of research contexts, such as scene grammar and novelty detection.
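The core computation this abstract describes — cosine similarity between label embeddings, written into each object's segmentation-mask locations — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy 2D vectors and the labels "kitchen", "stove", and "surfboard" are placeholders standing in for real fastText word vectors and LabelMe annotations.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder embeddings standing in for fastText word vectors.
embeddings = {
    "kitchen":   np.array([0.9, 0.1]),
    "stove":     np.array([0.8, 0.2]),
    "surfboard": np.array([0.1, 0.9]),
}

def semantic_similarity_map(context_label, objects, shape):
    # objects: list of (label, boolean segmentation mask) pairs.
    # Each masked location receives that object's similarity to the scene context;
    # unlabeled pixels stay at zero.
    sim_map = np.zeros(shape)
    for label, mask in objects:
        sim_map[mask] = cosine_similarity(embeddings[label], embeddings[context_label])
    return sim_map

# Two non-overlapping 2x2 object masks in a 4x4 "image".
mask_stove = np.zeros((4, 4), dtype=bool); mask_stove[0:2, 0:2] = True
mask_surf = np.zeros((4, 4), dtype=bool); mask_surf[2:4, 2:4] = True
m = semantic_similarity_map("kitchen",
                            [("stove", mask_stove), ("surfboard", mask_surf)],
                            (4, 4))
```

With these toy vectors, the contextually consistent object ("stove" in a "kitchen") receives a higher map value than the inconsistent one ("surfboard"), which is the property LASS exploits.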
19. Võ MLH. The meaning and structure of scenes. Vision Res 2021; 181:10-20. [PMID: 33429218] [DOI: 10.1016/j.visres.2020.11.003]
Abstract
We live in a rich, three-dimensional world with complex arrangements of meaningful objects. For decades, however, theories of visual attention and perception have been based on findings generated from lines and color patches. While these theories have been indispensable for our field, the time has come to move on from this rather impoverished view of the world and (at least try to) get closer to the real thing. After all, our visual environment consists of objects that we not only look at, but constantly interact with. Incorporating the meaning and structure of scenes, i.e., their "grammar", allows us to easily understand objects and scenes we have never encountered before. Studying this grammar provides us with the fascinating opportunity to gain new insights into the complex workings of attention, perception, and cognition. In this review, I will discuss how the meaning and the complex, yet predictive structure of real-world scenes influence attention allocation, search, and object identification.
Affiliation(s)
- Melissa Le-Hoa Võ
- Department of Psychology, Johann Wolfgang-Goethe-Universität, Frankfurt, Germany. https://www.scenegrammarlab.com/
20. Beitner J, Helbing J, Draschkow D, Võ MLH. Get Your Guidance Going: Investigating the Activation of Spatial Priors for Efficient Search in Virtual Reality. Brain Sci 2021; 11:44. [PMID: 33406655] [PMCID: PMC7823740] [DOI: 10.3390/brainsci11010044]
Abstract
Repeated search studies are a hallmark in the investigation of the interplay between memory and attention. Because response times are usually averaged across searches, the substantial decrease occurring between the first and second search through the same search environment is rarely discussed. This search initiation effect is often the most dramatic decrease in search times in a series of sequential searches. The nature of this initial lack of search efficiency has thus far remained unexplored. We tested the hypothesis that the activation of spatial priors leads to this search efficiency profile. Before searching repeatedly through scenes in VR, participants either (1) previewed the scene, (2) saw an interrupted preview, or (3) started searching immediately. The search initiation effect was present in the latter condition but in neither of the preview conditions. Eye movement metrics revealed that the locus of this effect lies in search guidance rather than search initiation or decision time, and that it goes beyond effects of object learning or incidental memory. Our study suggests that upon visual processing of an environment, a process of activating spatial priors to enable orientation is initiated, which takes a toll on search time at first; once activated, however, these priors can be used to guide subsequent searches.
Affiliation(s)
- Julia Beitner
- Scene Grammar Lab, Institute of Psychology, Goethe University, 60323 Frankfurt am Main, Germany; (J.H.); (M.L.-H.V.)
- Jason Helbing
- Scene Grammar Lab, Institute of Psychology, Goethe University, 60323 Frankfurt am Main, Germany; (J.H.); (M.L.-H.V.)
- Dejan Draschkow
- Brain and Cognition Laboratory, Department of Psychiatry, University of Oxford, Oxford OX3 7JX, UK;
- Melissa L.-H. Võ
- Scene Grammar Lab, Institute of Psychology, Goethe University, 60323 Frankfurt am Main, Germany; (J.H.); (M.L.-H.V.)
21. Geyer T, Rostami P, Sogerer L, Schlagbauer B, Müller HJ. Task-based memory systems in contextual-cueing of visual search and explicit recognition. Sci Rep 2020; 10:16527. [PMID: 33020507] [PMCID: PMC7536208] [DOI: 10.1038/s41598-020-71632-4]
Abstract
Visual search is facilitated when observers encounter targets in repeated display arrangements. This 'contextual-cueing' (CC) effect is attributed to incidental learning of spatial distractor-target relations. Prior work has typically used only one recognition measure (administered after the search task) to establish whether CC is based on implicit or explicit memory of repeated displays, with the outcome depending on the diagnostic accuracy of the test. The present study compared two explicit memory tests to tackle this issue: yes/no recognition of a given search display as repeated versus generation of the quadrant in which the target (which was replaced by a distractor) had been located during the search task, thus closely matching the processes involved in performing the search. While repeated displays elicited a CC effect in the search task, both tests revealed above-chance knowledge of repeated displays, though explicit-memory accuracy and its correlation with contextual facilitation in the search task were more pronounced for the generation task. These findings argue in favor of a one-system, explicit-memory account of CC. Further, they demonstrate the superiority of the generation task for revealing the explicitness of CC, likely because both the search and the memory task involve overlapping processes (in line with 'transfer-appropriate processing').
Affiliation(s)
- Thomas Geyer
- Department Psychologie, Lehrstuhl für Allgemeine Und Experimentelle Psychologie, Ludwig-Maximilians-Universität München, Leopoldstraße 13, 80802, München, Germany.
- Munich Center for Neurosciences - Brain and Mind, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany.
- Pardis Rostami
- Department Psychologie, Lehrstuhl für Allgemeine Und Experimentelle Psychologie, Ludwig-Maximilians-Universität München, Leopoldstraße 13, 80802, München, Germany
- Lisa Sogerer
- Department Psychologie, Lehrstuhl für Allgemeine Und Experimentelle Psychologie, Ludwig-Maximilians-Universität München, Leopoldstraße 13, 80802, München, Germany
- Bernhard Schlagbauer
- Department Psychologie, Lehrstuhl für Allgemeine Und Experimentelle Psychologie, Ludwig-Maximilians-Universität München, Leopoldstraße 13, 80802, München, Germany
- Hermann J Müller
- Department Psychologie, Lehrstuhl für Allgemeine Und Experimentelle Psychologie, Ludwig-Maximilians-Universität München, Leopoldstraße 13, 80802, München, Germany
- Munich Center for Neurosciences - Brain and Mind, Ludwig-Maximilians-Universität München, Planegg-Martinsried, Germany
22. Greater discrimination difficulty during perceptual learning leads to stronger and more distinct representations. Psychon Bull Rev 2020; 27:768-775. [PMID: 32462637] [DOI: 10.3758/s13423-020-01751-6]
Abstract
Despite the conventional wisdom that it is more difficult to find a target among similar distractors, this study demonstrates that this disadvantage is short-lived, and that high target-to-distractor (TD) similarity during visual search training can have beneficial effects for learning. Participants with no prior knowledge of Chinese performed 12 hour-long sessions over 4 weeks, where they had to find a briefly presented target character among a set of distractors. At the beginning of the experiment, high TD similarity hurt performance, but the effect reversed during the first session and remained positive throughout the remaining sessions. This effect was due primarily to reducing false alarms on trials in which the target was absent from the search display. In addition, making an error on a trial with a specific character was associated with slower visual search response times on the subsequent repetition of the character, suggesting that participants paid more attention in encoding the characters after false alarms. Finally, the benefit of high TD similarity during visual search training transferred to a subsequent N-back working-memory task. These results suggest that greater discrimination difficulty likely induces stronger and more distinct representations of each character.
23. Ryan JD, Shen K, Liu Z. The intersection between the oculomotor and hippocampal memory systems: empirical developments and clinical implications. Ann N Y Acad Sci 2020; 1464:115-141. [PMID: 31617589] [PMCID: PMC7154681] [DOI: 10.1111/nyas.14256]
Abstract
Decades of cognitive neuroscience research have shown that where we look is intimately connected to what we remember. In this article, we review findings from human and nonhuman animals, using behavioral, neuropsychological, neuroimaging, and computational modeling methods, to show that the oculomotor and hippocampal memory systems interact in a reciprocal manner, on a moment-to-moment basis, mediated by a vast structural and functional network. Visual exploration serves to efficiently gather information from the environment for the purpose of creating new memories, updating existing memories, and reconstructing the rich, vivid details from memory. Conversely, memory increases the efficiency of visual exploration. We call for models of oculomotor control to consider the influence of the hippocampal memory system on the cognitive control of eye movements, and for models of hippocampal and broader medial temporal lobe function to consider the influence of the oculomotor system on the development and expression of memory. We describe eye movement-based applications for the detection of neurodegeneration and delivery of therapeutic interventions for mental health disorders for which the hippocampus is implicated and memory dysfunctions are at the forefront.
Affiliation(s)
- Jennifer D. Ryan
- Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Toronto, Ontario, Canada
- Kelly Shen
- Rotman Research Institute, Baycrest, Toronto, Ontario, Canada
- Zhong-Xu Liu
- Department of Behavioral Sciences, University of Michigan-Dearborn, Dearborn, Michigan
24. Helbing J, Draschkow D, Võ MLH. Search superiority: Goal-directed attentional allocation creates more reliable incidental identity and location memory than explicit encoding in naturalistic virtual environments. Cognition 2020; 196:104147. [PMID: 32004760] [DOI: 10.1016/j.cognition.2019.104147]
Abstract
We use representations and expectations formed during life-long learning to support attentional allocation and perception. In comparison to traditional laboratory investigations, real-world memory formation is usually achieved without explicit instruction and on-the-fly as a by-product of natural interactions with our environment. Understanding this process and the quality of naturally formed representations is critical to understanding how memory is used to guide attention and perception. Utilizing immersive, navigable, and realistic virtual environments, we investigated incidentally generated memory representations by comparing them to memories for items which were explicitly memorized. Participants either searched for objects embedded in realistic indoor environments or explicitly memorized them for follow-up identity and location memory tests. We show for the first time that memory for the identity of naturalistic objects and their location in 3D space is higher after incidental encoding compared to explicit memorization, even though the subsequent memory tests came as a surprise to participants. Relating gaze behavior to memory performance revealed that encoding time was more predictive of subsequent memory when participants explicitly memorized an item, compared to incidentally encoding it. Our results suggest that the active nature of guiding attentional allocation during proactive behavior allows for behaviorally optimal formation and utilization of representations. This highlights the importance of investigating cognition under ecologically valid conditions and shows that understanding the most natural processes for encoding and maintaining information is critical for understanding adaptive behavior.
Affiliation(s)
- Jason Helbing
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Dejan Draschkow
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany; Department of Psychiatry, University of Oxford, Oxford, UK.
- Melissa L-H Võ
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
25. Functional Imaging of Visuospatial Attention in Complex and Naturalistic Conditions. Curr Top Behav Neurosci 2020. [PMID: 30547430] [DOI: 10.1007/7854_2018_73]
Abstract
One of the ultimate goals of cognitive neuroscience is to understand how the brain works in the real world. Functional imaging with naturalistic stimuli provides us with the opportunity to study the brain in situations similar to everyday life. This includes the processing of complex stimuli that can trigger many types of signals related both to the physical characteristics of the external input and to the internal knowledge that we have about natural objects and environments. In this chapter, I will first outline different types of stimuli that have been used in naturalistic imaging studies. These include static pictures, short video clips, full-length movies, and virtual reality, each comprising specific advantages and disadvantages. Next, I will turn to the main issue of visual-spatial orienting in naturalistic conditions and its neural substrates. I will discuss different classes of internal signals, related to objects, scene structure, and long-term memory. All of these, together with external signals about stimulus salience, have been found to modulate the activity and the connectivity of the frontoparietal attention networks. I will conclude by pointing out some promising future directions for functional imaging with naturalistic stimuli. Although this field of research is still in its early days, I believe it will play a major role in bridging the gap between standard laboratory paradigms and mechanisms of brain functioning in the real world.
26. Williams CC. Looking for your keys: The interaction of attention, memory, and eye movements in visual search. Psychology of Learning and Motivation 2020. [DOI: 10.1016/bs.plm.2020.06.003]
27. Relating Visual Production and Recognition of Objects in Human Visual Cortex. J Neurosci 2019; 40:1710-1721. [PMID: 31871278] [DOI: 10.1523/jneurosci.1843-19.2019]
Abstract
Drawing is a powerful tool that can be used to convey rich perceptual information about objects in the world. What are the neural mechanisms that enable us to produce a recognizable drawing of an object, and how does this visual production experience influence how this object is represented in the brain? Here we evaluate the hypothesis that producing and recognizing an object recruit a shared neural representation, such that repeatedly drawing the object can enhance its perceptual discriminability in the brain. We scanned human participants (N = 31; 11 male) using fMRI across three phases of a training study: during training, participants repeatedly drew two objects in an alternating sequence on an MR-compatible tablet; before and after training, they viewed these and two other control objects, allowing us to measure the neural representation of each object in visual cortex. We found that: (1) stimulus-evoked representations of objects in visual cortex are recruited during visually cued production of drawings of these objects, even throughout the period when the object cue is no longer present; (2) the object currently being drawn is prioritized in visual cortex during drawing production, while other repeatedly drawn objects are suppressed; and (3) patterns of connectivity between regions in occipital and parietal cortex supported enhanced decoding of the currently drawn object across the training phase, suggesting a potential neural substrate for learning how to transform perceptual representations into representational actions. Together, our study provides novel insight into the functional relationship between visual production and recognition in the brain.
SIGNIFICANCE STATEMENT: Humans can produce simple line drawings that capture rich information about their perceptual experiences. However, the mechanisms that support this behavior are not well understood. Here we investigate how regions in visual cortex participate in the recognition of an object and the production of a drawing of it. We find that these regions carry diagnostic information about an object in a similar format both during recognition and production, and that practice drawing an object enhances transmission of information about it to downstream regions. Together, our study provides novel insight into the functional relationship between visual production and recognition in the brain.
28. Nobre AC, Stokes MG. Premembering Experience: A Hierarchy of Time-Scales for Proactive Attention. Neuron 2019; 104:132-146. [PMID: 31600510] [PMCID: PMC6873797] [DOI: 10.1016/j.neuron.2019.08.030]
Abstract
Memories are about the past, but they serve the future. Memory research often emphasizes the former aspect: focusing on the functions that re-constitute (re-member) experience and elucidating the various types of memories and their interrelations, timescales, and neural bases. Here we highlight the prospective nature of memory in guiding selective attention, focusing on functions that use previous experience to anticipate the relevant events about to unfold-to "premember" experience. Memories of various types and timescales play a fundamental role in guiding perception and performance adaptively, proactively, and dynamically. Consonant with this perspective, memories are often recorded according to expected future demands. Using working memory as an example, we consider how mnemonic content is selected and represented for future use. This perspective moves away from the traditional representational account of memory toward a functional account in which forward-looking memory traces are informationally and computationally tuned for interacting with incoming sensory signals to guide adaptive behavior.
Affiliation(s)
- Anna C Nobre
- Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK.
- Mark G Stokes
- Department of Experimental Psychology, University of Oxford, Oxford, UK; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
29. Guevara Pinto JD, Papesh MH. Incidental memory following rapid object processing: The role of attention allocation strategies. J Exp Psychol Hum Percept Perform 2019; 45:1174-1190. [PMID: 31219283] [PMCID: PMC7202240] [DOI: 10.1037/xhp0000664]
Abstract
When observers search for multiple (rather than singular) targets, they are slower and less accurate, yet have better incidental memory for nontarget items encountered during the task (Hout & Goldinger, 2010). One explanation for this may be that observers titrate their attention allocation based on the expected difficulty suggested by search cues. Difficult search cues may implicitly encourage observers to narrow their attention, simultaneously enhancing distractor encoding and hindering peripheral processing. Across three experiments, we manipulated the difficulty of search cues preceding passive visual search for real-world objects, using a Rapid Serial Visual Presentation (RSVP) task to equate item exposure durations. In all experiments, incidental memory was enhanced for distractors encountered while participants monitored for difficult targets. Moreover, in key trials, peripheral shapes appeared at varying eccentricities off center, allowing us to infer the spread and precision of participants' attentional windows. Peripheral item detection and identification decreased when search cues were difficult, even when the peripheral items appeared before targets. These results were not an artifact of sustained vigilance in miss trials, but instead reflect top-down modulation of attention allocation based on task demands. Implications for individual differences are discussed.
30.
Affiliation(s)
- Katja Fiehler
- Department of Psychology, Justus Liebig University, Giessen, Germany
- Center for Mind, Brain, and Behavior (CMBB), Universities of Marburg and Giessen, Germany
| | - Eli Brenner
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, The Netherlands
| | - Miriam Spering
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, Canada
| |
Collapse
|
31
|
Facilitation of allocentric coding by virtue of object-semantics. Sci Rep 2019; 9:6263. [PMID: 31000759] [PMCID: PMC6472393] [DOI: 10.1038/s41598-019-42735-4]
Abstract
In the field of spatial coding it is well established that we mentally represent objects for action not only relative to ourselves, egocentrically, but also relative to other objects (landmarks), allocentrically. Several factors facilitate allocentric coding, for example, when objects are task-relevant or constitute stable and reliable spatial configurations. What is unknown, however, is how object-semantics facilitate the formation of these spatial configurations and thus allocentric coding. Here we demonstrate that (i) we can quantify the semantic similarity of objects and that (ii) semantically similar objects can serve as a cluster of landmarks that are allocentrically coded. Participants arranged a set of objects based on their semantic similarity. These arrangements were then entered into a similarity analysis. Based on the results, we created two semantic classes of objects, natural and man-made, that we used in a virtual reality experiment. Participants were asked to perform memory-guided reaching movements toward the initial position of a target object in a scene while either semantically congruent or incongruent landmarks were shifted. We found that the reaching endpoints systematically deviated in the direction of landmark shift. Importantly, this effect was stronger for shifts of semantically congruent landmarks. Our findings suggest that object-semantics facilitate allocentric coding by creating stable spatial configurations.
32
Bainbridge WA, Hall EH, Baker CI. Drawings of real-world scenes during free recall reveal detailed object and spatial information in memory. Nat Commun 2019; 10:5. [PMID: 30602785] [PMCID: PMC6315028] [DOI: 10.1038/s41467-018-07830-6]
Abstract
Understanding the content of memory is essential to teasing apart its underlying mechanisms. While recognition tests have commonly been used to probe memory, it is difficult to establish what specific content is driving performance. Here, we instead focus on free recall of real-world scenes, and quantify the content of memory using a drawing task. Participants studied 30 scenes and, after a distractor task, drew as many images in as much detail as possible from memory. The resulting memory-based drawings were scored by thousands of online observers, revealing numerous objects, few memory intrusions, and precise spatial information. Further, we find that visual saliency and meaning maps can explain aspects of memory performance and observe no relationship between recall and recognition for individual images. Our findings show that not only is it possible to quantify the content of memory during free recall, but those memories contain detailed representations of our visual experiences.
Affiliation(s)
- Wilma A Bainbridge
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, 20814, USA
- Elizabeth H Hall
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, 20814, USA
- Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, 20814, USA
33
Boettcher SEP, Draschkow D, Dienhart E, Võ MLH. Anchoring visual search in scenes: Assessing the role of anchor objects on eye movements during visual search. J Vis 2018; 18:11. [DOI: 10.1167/18.13.11]
Affiliation(s)
- Dejan Draschkow
- Department of Psychology, Johann Wolfgang Goethe-Universität, Frankfurt, Germany
- Eric Dienhart
- Department of Psychology, Johann Wolfgang Goethe-Universität, Frankfurt, Germany
- Melissa L.-H. Võ
- Department of Psychology, Johann Wolfgang Goethe-Universität, Frankfurt, Germany
34
Draschkow D, Reinecke S, Cunningham CA, Võ MLH. The lower bounds of massive memory: Investigating memory for object details after incidental encoding. Q J Exp Psychol (Hove) 2018; 72:1176-1182. [DOI: 10.1177/1747021818783722]
Abstract
Visual long-term memory capacity appears massive and detailed when probed explicitly. In the real world, however, memories are usually built from chance encounters. Therefore, we investigated the capacity and detail of incidental memory in a novel encoding task, instructing participants to detect visually distorted objects among intact objects. In a subsequent surprise recognition memory test, lures of a novel category, another exemplar, the same object in a different state, or exactly the same object were presented. Lure recognition performance was above chance, suggesting that incidental encoding resulted in reliable memory formation. Critically, performance for state lures was worse than for exemplars, which was driven by a greater similarity of state as opposed to exemplar foils to the original objects. Our results indicate that incidentally generated visual long-term memory representations of isolated objects are more limited in detail than recently suggested.
Affiliation(s)
- Dejan Draschkow
- Department of Psychology, Goethe University, Frankfurt am Main, Germany
- Saliha Reinecke
- Department of Psychology, Goethe University, Frankfurt am Main, Germany
- Corbin A Cunningham
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Melissa L-H Võ
- Department of Psychology, Goethe University, Frankfurt am Main, Germany
35
Visual search for changes in scenes creates long-term, incidental memory traces. Atten Percept Psychophys 2018; 80:829-843. [PMID: 29427122] [DOI: 10.3758/s13414-018-1486-y]
Abstract
Humans are very good at remembering large numbers of scenes over substantial periods of time. But how good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1-3, scenes were learned incidentally during visual search for change. In Experiment 4, observers explicitly memorized scenes. At test, after two weeks observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or to detect a newly introduced change in the memorization experiment. Next, they performed a change detection task, usually looking for the same change as in the study period. Scene recognition memory was found to be similar in all experiments, regardless of the study task. In Experiment 1, more difficult change detection produced better scene memory. Experiments 2 and 3 supported a "depth-of-processing" account for the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection was faster during the test phase than during the study phase, even when the observer had no explicit memory of having found that change previously. This result was replicated in two of our three change detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.
36
Draschkow D, Võ MLH. Scene grammar shapes the way we interact with objects, strengthens memories, and speeds search. Sci Rep 2017; 7:16471. [PMID: 29184115] [PMCID: PMC5705766] [DOI: 10.1038/s41598-017-16739-x]
Abstract
Predictions of environmental rules (here referred to as "scene grammar") can come in different forms: seeing a toilet in a living room would violate semantic predictions, while finding a toilet brush next to the toothpaste would violate syntactic predictions. The existence of such predictions has usually been investigated by showing observers images containing such grammatical violations. Conversely, the generative process of creating an environment according to one's scene grammar and its effects on behavior and memory has received little attention. In a virtual reality paradigm, we either instructed participants to arrange objects according to their scene grammar or against it. Subsequently, participants' memory for the arrangements was probed using a surprise recall (Exp1), or repeated search (Exp2) task. As a result, participants' construction behavior showed strategic use of larger, static objects to anchor the location of smaller objects which are generally the goals of everyday actions. Further analysis of this scene construction data revealed possible commonalities between the rules governing word usage in language and object usage in naturalistic environments. Taken together, we revealed some of the building blocks of scene grammar necessary for efficient behavior, which differentially influence how we interact with objects and what we remember about scenes.
Affiliation(s)
- Dejan Draschkow
- Scene Grammar Lab, Johann Wolfgang Goethe-Universität, Frankfurt, Germany
- Melissa L-H Võ
- Scene Grammar Lab, Johann Wolfgang Goethe-Universität, Frankfurt, Germany
37
Of "what" and "where" in a natural search task: Active object handling supports object location memory beyond the object's identity. Atten Percept Psychophys 2017; 78:1574-1584. [PMID: 27165170] [DOI: 10.3758/s13414-016-1111-x]
Abstract
Looking for as well as actively manipulating objects that are relevant to ongoing behavioral goals are intricate parts of natural behavior. It is, however, not clear to what degree these two forms of interaction with our visual environment differ with regard to their memory representations. In a real-world paradigm, we investigated if physically engaging with objects as part of a search task influences identity and position memory differently for task-relevant versus irrelevant objects. Participants equipped with a mobile eye tracker either searched for cued objects without object interaction (Find condition) or actively collected the objects they found (Handle condition). In the following free-recall task, identity memory was assessed, demonstrating superior memory for relevant compared to irrelevant objects, but no difference between the Handle and Find conditions. Subsequently, location memory was inferred via times to first fixation in a final object search task. Active object manipulation and task-relevance interacted in that location memory for relevant objects was superior to irrelevant ones only in the Handle condition. Including previous object recall performance as a covariate in the linear mixed-model analysis of times to first fixation allowed us to explore the interaction between remembered/forgotten object identities and the execution of location memory. Identity memory performance predicted location memory in the Find but not the Handle condition, suggesting that active object handling leads to strong spatial representations independent of object identity memory. We argue that object handling facilitates the prioritization of relevant location information, but this might come at the cost of deprioritizing irrelevant information.
38
Li CL, Aivar MP, Kit DM, Tong MH, Hayhoe MM. Memory and visual search in naturalistic 2D and 3D environments. J Vis 2017; 16:9. [PMID: 27299769] [PMCID: PMC4913723] [DOI: 10.1167/16.8.9]
Abstract
The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search in both 2D and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D.
39
Making Sense of Real-World Scenes. Trends Cogn Sci 2016; 20:843-856. [PMID: 27769727] [DOI: 10.1016/j.tics.2016.09.003]
Abstract
To interact with the world, we have to make sense of the continuous sensory input conveying information about our environment. A recent surge of studies has investigated the processes enabling scene understanding, using increasingly complex stimuli and sophisticated analyses to highlight the visual features and brain regions involved. However, there are two major challenges to producing a comprehensive framework for scene understanding. First, scene perception is highly dynamic, subserving multiple behavioral goals. Second, a multitude of different visual properties co-occur across scenes and may be correlated or independent. We synthesize the recent literature and argue that for a complete view of scene understanding, it is necessary to account for both differing observer goals and the contribution of diverse scene properties.
40
Josephs EL, Draschkow D, Wolfe JM, Võ MLH. Gist in time: Scene semantics and structure enhance recall of searched objects. Acta Psychol (Amst) 2016; 169:100-108. [PMID: 27270227] [DOI: 10.1016/j.actpsy.2016.05.013]
Abstract
Previous work has shown that recall of objects that are incidentally encountered as targets in visual search is better than recall of objects that have been intentionally memorized (Draschkow, Wolfe, & Võ, 2014). However, this counter-intuitive result is not seen when these tasks are performed with non-scene stimuli. The goal of the current paper is to determine what features of search in a scene contribute to higher recall rates when compared to a memorization task. In each of four experiments, we compare the free recall rate for target objects following a search to the rate following a memorization task. Across the experiments, the stimuli include progressively more scene-related information. Experiment 1 provides the spatial relations between objects. Experiment 2 adds relative size and depth of objects. Experiments 3 and 4 include scene layout and semantic information. We find that search leads to better recall than explicit memorization in cases where scene layout and semantic information are present, as long as the participant has ample time (2500 ms) to integrate this information with knowledge about the target object (Exp. 4). These results suggest that the integration of scene and target information not only leads to more efficient search, but can also contribute to stronger memory representations than intentional memorization.
Affiliation(s)
- Emilie L Josephs
- Cognitive and Neural Organization Lab, Harvard University, Cambridge, MA, USA
- Dejan Draschkow
- Scene Grammar Lab, Johann Wolfgang Goethe-Universität, Frankfurt, Germany
- Jeremy M Wolfe
- Visual Attention Lab, Brigham and Women's Hospital, Boston, MA, USA
- Harvard Medical School, Cambridge, MA, USA
- Melissa L-H Võ
- Scene Grammar Lab, Johann Wolfgang Goethe-Universität, Frankfurt, Germany
41
Solman GJF, Kingstone A. Arranging Objects in Space: Measuring Task-Relevant Organizational Behaviors During Goal Pursuit. Cogn Sci 2016; 41:1042-1070. [PMID: 27427463] [DOI: 10.1111/cogs.12391]
Abstract
Human behavior unfolds primarily in built environments, where the arrangement of objects is a result of ongoing human decisions and actions, yet these organizational decisions have received limited experimental study. In two experiments, we introduce a novel paradigm designed to explore how individuals organize task-relevant objects in space. Participants completed goals by locating and accessing sequences of objects in a computer-based task, and they were free to rearrange the positions of objects at any time. We measure a variety of organization changes and evaluate how these measures relate to individual differences in performance. In Experiment 1, we show that with weak structure in task demands, changes in object positions that arise through performance of the task lead to improved order, characterized predominantly by a centralization of frequently used items and a peripheralization of infrequently used objects. In Experiment 2, with increased task structure, we observe more refined organizational tendencies, with selective contraction and clustering of interrelated task-relevant objects. We further demonstrate that these more selective organization behaviors are reliably associated with individual differences in task performance. Collectively, these two studies reveal properties of space and of task demands that support and facilitate effective organization of the environment in support of ongoing behavior.
Affiliation(s)
- Alan Kingstone
- Department of Psychology, University of British Columbia
42
Nardo D, Console P, Reverberi C, Macaluso E. Competition between Visual Events Modulates the Influence of Salience during Free-Viewing of Naturalistic Videos. Front Hum Neurosci 2016; 10:320. [PMID: 27445760] [PMCID: PMC4923118] [DOI: 10.3389/fnhum.2016.00320]
Abstract
In daily life the brain is exposed to a large amount of external signals that compete for processing resources. The attentional system can select relevant information based on many possible combinations of goal-directed and stimulus-driven control signals. Here, we investigate the behavioral and physiological effects of competition between distinctive visual events during free-viewing of naturalistic videos. Nineteen healthy subjects underwent functional magnetic resonance imaging (fMRI) while viewing short video-clips of everyday life situations, without any explicit goal-directed task. Each video contained either a single semantically-relevant event on the left or right side (Lat-trials), or multiple distinctive events in both hemifields (Multi-trials). For each video, we computed a salience index to quantify the lateralization bias due to stimulus-driven signals, and a gaze index (based on eye-tracking data) to quantify the efficacy of the stimuli in capturing attention to either side. Behaviorally, our results showed that stimulus-driven salience influenced spatial orienting only in presence of multiple competing events (Multi-trials). fMRI results showed that the processing of competing events engaged the ventral attention network, including the right temporoparietal junction (R TPJ) and the right inferior frontal cortex. Salience was found to modulate activity in the visual cortex, but only in the presence of competing events; while the orienting efficacy of Multi-trials affected activity in both the visual cortex and posterior parietal cortex (PPC). We conclude that in presence of multiple competing events, the ventral attention system detects semantically-relevant events, while regions of the dorsal system make use of saliency signals to select relevant locations and guide spatial orienting.
Affiliation(s)
- Davide Nardo
- Neuroimaging Laboratory, Santa Lucia Foundation, Rome, Italy; Institute of Cognitive Neuroscience, University College London, London, UK
- Paola Console
- Neuroimaging Laboratory, Santa Lucia Foundation, Rome, Italy
- Carlo Reverberi
- Department of Psychology, University of Milano-Bicocca, Milan, Italy; NeuroMi-Milan Center for Neuroscience, University of Milano-Bicocca, Milan, Italy
- Emiliano Macaluso
- Neuroimaging Laboratory, Santa Lucia Foundation, Rome, Italy; Impact Team, Lyon Neuroscience Research Center, Lyon, France
43
Wynn JS, Bone MB, Dragan MC, Hoffman KL, Buchsbaum BR, Ryan JD. Selective scanpath repetition during memory-guided visual search. Vis Cogn 2016; 24:15-37. [PMID: 27570471] [PMCID: PMC4975086] [DOI: 10.1080/13506285.2016.1175531]
Abstract
Visual search efficiency improves with repetition of a search display, yet the mechanisms behind these processing gains remain unclear. According to Scanpath Theory, memory retrieval is mediated by repetition of the pattern of eye movements or "scanpath" elicited during stimulus encoding. Using this framework, we tested the prediction that scanpath recapitulation reflects relational memory guidance during repeated search events. Younger and older subjects were instructed to find changing targets within flickering naturalistic scenes. Search efficiency (search time, number of fixations, fixation duration) and scanpath similarity (repetition) were compared across age groups for novel (V1) and repeated (V2) search events. Younger adults outperformed older adults on all efficiency measures at both V1 and V2, while the search time benefit for repeated viewing (V1-V2) did not differ by age. Fixation-binned scanpath similarity analyses revealed repetition of initial and final (but not middle) V1 fixations at V2, with older adults repeating more initial V1 fixations than young adults. In young adults only, early scanpath similarity correlated negatively with search time at test, indicating increased efficiency, whereas the similarity of V2 fixations to middle V1 fixations predicted poor search performance. We conclude that scanpath compression mediates increased search efficiency by selectively recapitulating encoding fixations that provide goal-relevant input. Extending Scanpath Theory, results suggest that scanpath repetition varies as a function of time and memory integrity.
Affiliation(s)
- Jordana S. Wynn
- Department of Psychology, University of Toronto, Toronto, ON, Canada M5S 3G3
- Rotman Research Institute, Baycrest Health Sciences, Toronto, ON, Canada M6A 2E1
- Michael B. Bone
- Department of Psychology, University of Toronto, Toronto, ON, Canada M5S 3G3
- Rotman Research Institute, Baycrest Health Sciences, Toronto, ON, Canada M6A 2E1
- Kari L. Hoffman
- Department of Biology, York University, Toronto, ON, Canada M3J 1P3
- Department of Psychology, York University, Toronto, ON, Canada M3J 1P3
- Centre for Vision Research, York University, Toronto, ON, Canada M3J 1P3
- Bradley R. Buchsbaum
- Department of Psychology, University of Toronto, Toronto, ON, Canada M5S 3G3
- Rotman Research Institute, Baycrest Health Sciences, Toronto, ON, Canada M6A 2E1
- Jennifer D. Ryan
- Department of Psychology, University of Toronto, Toronto, ON, Canada M5S 3G3
- Rotman Research Institute, Baycrest Health Sciences, Toronto, ON, Canada M6A 2E1
44
Abstract
Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. Although a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes.
Affiliation(s)
- Melissa Le-Hoa Võ
- Scene Grammar Lab, Department of Cognitive Psychology, Goethe University Frankfurt, Frankfurt, Germany