1. Saltzmann SM, Eich B, Moen KC, Beck MR. Activated long-term memory and visual working memory during hybrid visual search: Effects on target memory search and distractor memory. Mem Cognit 2024;52:2156-2171. [PMID: 38528298] [DOI: 10.3758/s13421-024-01556-1]
Abstract
In hybrid visual search, observers must maintain multiple target templates and subsequently search for any one of those targets. If the number of potential target templates exceeds visual working memory (VWM) capacity, then the target templates are assumed to be maintained in activated long-term memory (aLTM). Observers must search the array for potential targets (visual search), as well as search through memory (target memory search). Increasing the target memory set size reduces accuracy, increases search response times (RTs), and increases dwell time on distractors. However, the extent of observers' memory for distractors during hybrid search is largely unknown. In the current study, we measured the impact of hybrid search on target memory search (indexed by dwell time on distractors, false alarms, and misses) and on distractor memory (indexed by distractor revisits and recognition memory for recently viewed distractors). Specifically, we aimed to better understand how changes in behavior during hybrid search impact distractor memory. Increased target memory set size led to an increase in search RTs, distractor dwell times, false alarms, and target identification misses. Increasing target memory set size also increased revisits to distractors, suggesting impaired distractor location memory, but had no effect on a two-alternative forced-choice (2AFC) distractor recognition memory test presented during the search trial. These results suggest a lack of interference between the memory stores maintaining target template representations (aLTM) and distractor information (VWM): loading aLTM with more target templates does not impact VWM for distracting information.
Affiliations
- Stephanie M Saltzmann: Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA 70803, USA
- Brandon Eich: Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA 70803, USA
- Katherine C Moen: Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA 70803, USA; Department of Psychology, University of Nebraska at Kearney, 2504 9th Ave, Kearney, NE 68849, USA
- Melissa R Beck: Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA 70803, USA
2. Aivar MP, Li CL, Tong MH, Kit DM, Hayhoe MM. Knowing where to go: Spatial memory guides eye and body movements in a naturalistic visual search task. J Vis 2024;24(9):1. [PMID: 39226069] [PMCID: PMC11373708] [DOI: 10.1167/jov.24.9.1]
Abstract
Most research on visual search has used simple tasks presented on a computer screen. However, in natural situations visual search almost always involves eye, head, and body movements in a three-dimensional (3D) environment. The different constraints imposed by these two types of search tasks might explain some of the discrepancies in our understanding of how memory resources are used and what role contextual objects play during search. To explore this issue, we analyzed a visual search task performed in an immersive virtual reality apartment. Participants searched for a series of geometric 3D objects while eye movements and head coordinates were recorded. Participants explored the apartment to locate target objects whose location and visibility were manipulated. For objects with reliable locations, we found that repeated searches led to a decrease in search time and number of fixations and to a reduction of errors. Searching for objects that had been visible in previous trials but were only tested at the end of the experiment was also easier than finding objects for the first time, indicating incidental learning of context. More importantly, body movements changed in ways that reflected memory for target location: trajectories were shorter and movement velocities higher, but only for objects that had been searched for multiple times. We conclude that memory for 3D space and target location is a critical component of visual search and also modifies movement kinematics. In natural search, memory is used to optimize movement control and reduce energetic costs.
Affiliations
- M Pilar Aivar: Facultad de Psicología, Universidad Autónoma de Madrid, Madrid, Spain (https://www.psicologiauam.es/aivar/)
- Chia-Ling Li: Institute of Neuroscience, The University of Texas at Austin, Austin, TX, USA (present address: Apple Inc., Cupertino, CA, USA)
- Matthew H Tong: Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA (present address: IBM Research, Cambridge, MA, USA)
- Dmitry M Kit: Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA (present address: F5, Boston, MA, USA)
- Mary M Hayhoe: Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA
3. Davis EE, Tehrani EK, Campbell KL. Some young adults hyper-bind too: Attentional control relates to individual differences in hyper-binding. Psychon Bull Rev 2024;31:1809-1820. [PMID: 38302792] [DOI: 10.3758/s13423-024-02464-w]
Abstract
Hyper-binding - the erroneous encoding of target and distractor information into associative pairs in memory - has been described as a unique age effect caused by declines in attentional control. Previous work has found that, on average, young adults do not hyper-bind. However, if hyper-binding is caused by reduced attentional control, then young adults with poor attention regulation should also show evidence of hyper-binding. We tested this prediction with an individual-differences approach, relating performance on a battery of attentional control tasks to individual differences in hyper-binding. Participants (N = 121) completed an implicit associative memory test measuring memory for both target-distractor (i.e., hyper-binding) and target-target pairs, followed by a series of tasks measuring attentional control. Our results show that, on average, young adults do not hyper-bind, but as predicted, those with poor attentional control show a larger hyper-binding effect than those with good attentional control. Exploratory analyses also suggest that individual differences in attentional control relate to susceptibility to interference at retrieval. These results support the hypothesis that hyper-binding in older adults is due to age-related declines in attentional control, and they demonstrate that hyper-binding may be an issue for any individual with poor attentional control, regardless of age.
Affiliations
- Emily E Davis: Department of Psychology, Brock University, 1812 Sir Isaac Brock Way, St. Catharines, Ontario L2S 3A1, Canada
- Edyta K Tehrani: Department of Psychology, Brock University, 1812 Sir Isaac Brock Way, St. Catharines, Ontario L2S 3A1, Canada
- Karen L Campbell: Department of Psychology, Brock University, 1812 Sir Isaac Brock Way, St. Catharines, Ontario L2S 3A1, Canada
4. Brook L, Kreichman O, Masarwa S, Gilaie-Dotan S. Higher-contrast images are better remembered during naturalistic encoding. Sci Rep 2024;14:13445. [PMID: 38862623] [PMCID: PMC11166978] [DOI: 10.1038/s41598-024-63953-5]
Abstract
It is unclear whether memory for images of poorer visibility (such as low contrast or small size) will be lower due to the weak signals they elicit in early visual processing stages, or perhaps better, since their processing may entail top-down processes (such as effort and attention) associated with deeper encoding. We have recently shown that during naturalistic encoding (free viewing without task-related modulations), for image sizes between 3° and 24°, bigger images, which stimulate more visual system processing resources at early processing stages, are better remembered. As with size, higher contrast leads to higher activity in early visual processing. We therefore hypothesized that during naturalistic encoding, at critical visibility ranges, higher-contrast images will produce a higher signal-to-noise ratio and better signal quality flowing downstream, and will thus be better remembered. Indeed, we found that during naturalistic encoding, higher-contrast images were remembered better than lower-contrast ones (~15% higher accuracy, ~1.58 times better) for images in the 7.5-60 RMS contrast range. Although image contrast and size modulate early visual processing very differently, our results further substantiate that at poor visibility ranges, during naturalistic non-instructed visual behavior, physical image dimensions (which contribute to image visibility) impact image memory.
Affiliations
- Limor Brook: School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel; The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Olga Kreichman: School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel; The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Shaimaa Masarwa: School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel; The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
- Sharon Gilaie-Dotan: School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan, Israel; The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel; UCL Institute of Cognitive Neuroscience, London, UK
5. Beitner J, Helbing J, David EJ, Võ MLH. Using a flashlight-contingent window paradigm to investigate visual search and object memory in virtual reality and on computer screens. Sci Rep 2024;14:8596. [PMID: 38615047] [PMCID: PMC11379806] [DOI: 10.1038/s41598-024-58941-8]
Abstract
A popular technique for modulating visual input during search is to use gaze-contingent windows. However, such windows are often rather discomforting, as they create the impression of visual impairment. To counteract this, we asked participants in this study to search through illuminated as well as dark three-dimensional scenes using a more naturalistic flashlight with which they could illuminate the rooms. In a surprise incidental memory task, we tested the identities and locations of objects encountered during search. Importantly, we tested this study design both in immersive virtual reality (VR; Experiment 1) and on a desktop computer screen (Experiment 2). As hypothesized, searching with a flashlight increased search difficulty and memory usage during search. We found a memory benefit for the identities of distractors in the flashlight condition in VR, but not in the computer-screen experiment. Surprisingly, location memory was comparable across search conditions despite the enormous difference in visual input. Subtle differences across experiments appeared in VR only after accounting for previous recognition performance, hinting at a benefit of flashlight search in VR. Our findings highlight that removing visual information does not necessarily impair location memory, and that screen experiments using virtual environments can elicit the same major effects as VR setups.
Affiliations
- Julia Beitner: Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Jason Helbing: Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Erwan Joël David: Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany; LIUM, Le Mans Université, Le Mans, France
- Melissa Lê-Hoa Võ: Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
6. Salo SK, Harries CA, Riddoch MJ, Smith AD. Visuospatial memory in apraxia: Exploring quantitative drawing metrics to assess the representation of local and global information. Mem Cognit 2024. [PMID: 38334870] [DOI: 10.3758/s13421-024-01531-w]
Abstract
Neuropsychological evidence suggests that visuospatial memory is subserved by two separable processing systems, with dorsal underpinnings for global form and ventral underpinnings for the integration of part elements. Previous drawing studies have explored the effects of Gestalt organisation upon memory for hierarchical stimuli, and here we present an exploratory study of the performance of an apraxic patient (MH) with dorsal stream damage. We presented MH with a stimulus set (previously reported by Riddoch et al., Cognitive Neuropsychology, 20(7), 641-671, 2003) and devised a novel quantitative scoring system to obtain a finer-grained insight into performance. Stimuli possessed either good or poor Gestalt qualities and were reproduced in a copy condition and two visual memory conditions (with unlimited viewing before the model was removed, or with 3 s viewing). MH's copying performance was impaired in comparison to younger adult and age-matched older adult controls, with a variety of errors at the local level but relatively few at the global level. However, his performance in the visual memory conditions revealed impairments at the global level. For all participants, drawing errors were modulated by the Gestalt qualities of the stimuli, with accuracy at the global and local levels being lower for poor global stimuli in all conditions. These data extend previous observations of this patient and support theories that posit interaction between dorsal and ventral streams in the representation of hierarchical stimuli. We discuss the implications of these findings for our understanding of visuospatial memory in neurological patients, and we also evaluate the application of quantitative metrics to the interpretation of drawings.
Affiliations
- Sarah K Salo: School of Psychology, University of Plymouth, Plymouth, UK; Brain Research and Imaging Centre, University of Plymouth, Plymouth, UK
- M Jane Riddoch: Department of Experimental Psychology, University of Oxford, Oxford, UK; School of Psychology, University of Birmingham, Birmingham, UK
- Alastair D Smith: School of Psychology, University of Plymouth, Plymouth, UK; Brain Research and Imaging Centre, University of Plymouth, Plymouth, UK
7. Moriya J. Long-term memory for distractors: Effects of involuntary attention from working memory. Mem Cognit 2024;52:401-416. [PMID: 37768481] [DOI: 10.3758/s13421-023-01469-5]
Abstract
In a visual search task, attention to task-irrelevant distractors impedes search performance. But is this attention maladaptive for future performance? Here, I show that attended distractors in a visual search task were better remembered in long-term memory (LTM) in a subsequent surprise recognition task than non-attended distractors. In four experiments, participants performed a visual search task using real-world objects of a single color. They encoded a color in working memory (WM) during the task; because each object had a different color, participants directed their attention to the distractor whose color matched the contents of WM. Then, in the surprise recognition task, participants were required to indicate whether an object had been shown in the earlier visual search task, regardless of its color. The results showed that attended distractors were remembered better in LTM than non-attended distractors (Experiments 1 and 2). Moreover, the more participants directed their attention to distractors, the better they explicitly remembered them. Participants did not explicitly remember the color of the attended distractors (Experiment 3) but did remember integrated object-and-color information (Experiment 4). When the color of a distractor in the recognition task did not match its color in the visual search task, LTM performance decreased compared to color-matched distractors. These results suggest that attention to distractors impairs search for a target but helps in remembering the distractors in LTM. When task-irrelevant distractors become task-relevant information in the future, attention to them becomes beneficial.
Affiliations
- Jun Moriya: Faculty of Sociology, Kansai University, 3-3-35 Yamate-cho, Suita-shi, Osaka, Japan
8. Parimoo S, Choi A, Iafrate L, Grady C, Olsen R. Are older adults susceptible to visual distraction when targets and distractors are spatially separated? Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 2024;31:38-74. [PMID: 36059213] [DOI: 10.1080/13825585.2022.2117271]
Abstract
Older adults show preserved memory for previously distracting information, owing to reduced inhibitory control. In some previous studies, targets and distractors overlapped both temporally and spatially. We investigated whether age differences in attentional orienting and disengagement affect recognition memory when targets and distractors are spatially separated at encoding. In Experiments 1 and 2, eye movements were recorded while participants completed an incidental encoding task under covert (i.e., restricted viewing) and overt (i.e., free-viewing) conditions, respectively. The encoding task consisted of pairs of target and distractor item-color stimuli presented in separate visual hemifields. Prior to stimulus onset, a central cue indicated the location of the upcoming target. Participants were subsequently tested on their recognition of the items, their locations, and the associated colors. In Experiment 3, targets were validly cued on 75% of the encoding trials; on invalid trials, participants had to disengage their attention from the distractor and reorient to the target. Associative memory for colors was reduced among older adults across all experiments, though their location memory was only reduced in Experiment 1. In Experiment 2, older and younger adults directed a similar proportion of fixations toward targets and distractors. Explicit recognition of distractors did not differ between age groups in any of the experiments. However, older adults were slower to correctly recognize distractors than to false-alarm to novel items in Experiment 2, suggesting some implicit memory for distraction. Together, these results demonstrate that older adults may be vulnerable to encoding visual distraction only when viewing behavior is unconstrained.
Affiliations
- Shireen Parimoo: Department of Psychology, University of Toronto, Toronto, ON, Canada; Rotman Research Institute, Toronto, ON, Canada
- Anika Choi: Rotman Research Institute, Toronto, ON, Canada
- Cheryl Grady: Department of Psychology, University of Toronto, Toronto, ON, Canada; Rotman Research Institute, Toronto, ON, Canada
- Rosanna Olsen: Department of Psychology, University of Toronto, Toronto, ON, Canada; Rotman Research Institute, Toronto, ON, Canada
9. Stefani M, Sauter M. Relative contributions of oculomotor capture and disengagement to distractor-related dwell times in visual search. Sci Rep 2023;13:16676. [PMID: 37794059] [PMCID: PMC10551035] [DOI: 10.1038/s41598-023-43604-x]
Abstract
In visual search, attention is reliably captured by salient distractors and must be actively disengaged from them to reach the target. In such attentional capture paradigms, dwell time is measured on distractors that appear in the periphery (e.g., at a random location on a circle). Distractor-related dwell time is typically thought to be largely due to stimulus-driven processes related to oculomotor capture dynamics. However, the extent to which oculomotor capture and oculomotor disengagement contribute to distractor dwell time has not been known, because standard attentional capture paradigms cannot decouple these processes. In the present study, we used a novel paradigm combining classical attentional capture trials and delayed disengagement trials. We measured eye movements to dissociate the capture and disengagement mechanisms underlying distractor dwell time. We found that only about two-thirds of distractor dwell time (~52 ms) can be explained by oculomotor capture, while the remainder is explained by oculomotor disengagement (~18 ms), which has been neglected or underestimated in previous studies. Thus, goal-directed oculomotor disengagement processes play a more significant role in distractor dwell times than previously thought.
Affiliations
- Maximilian Stefani: Institute of Psychology, General Psychology, Bundeswehr University Munich, Werner-Heisenberg-Weg 39, 85577 Neubiberg, Germany
- Marian Sauter: Institute of Psychology, General Psychology, Bundeswehr University Munich, Werner-Heisenberg-Weg 39, 85577 Neubiberg, Germany; Institute of Psychology, General Psychology, Ulm University, 89069 Ulm, Germany
10. Sakata C, Ueda Y, Moriguchi Y. Visual memory of a co-actor's target during joint search. Psychol Res 2023;87:2068-2085. [PMID: 36976364] [PMCID: PMC10043510] [DOI: 10.1007/s00426-023-01819-7]
Abstract
Studies of joint action show that when two actors take turns attending to each other's targets, which appear one at a time, a partner's target is accumulated in memory. In the real world, however, actors may not be certain that they are attending to the same object, because multiple objects often appear simultaneously. In this study, we asked participant pairs to search in parallel for different targets among multiple objects and investigated memory for a partner's target. We employed the contextual cueing paradigm, in which repeated search forms associative memory between a target and a configuration of distractors that facilitates search. During the learning phase, exemplars of three target categories (i.e., bird, shoe, and tricycle) were presented among unique objects, and participant pairs searched for them. In Experiment 1, the learning phase was followed by a memory test on target exemplars. The partner's target was recognized better than the target that nobody had searched for. In Experiments 2a and 2b, the memory test was replaced with a transfer phase, in which one individual from the pair searched for the category that nobody had searched for while the other searched for the category the partner had searched for in the learning phase. The transfer phase did not show search facilitation underpinned by associative memory between the partner's target and distractors. These results suggest that when participant pairs search for different targets in parallel, they accumulate the partner's target in memory but may not form the associative memory between that target and the distractors that would facilitate search.
Affiliations
- Chifumi Sakata: Graduate School of Letters, Kyoto University, Yoshida Hon-Machi, Sakyo-Ku, Kyoto 606-8501, Japan
- Yoshiyuki Ueda: Institute for the Future of Human Society, Kyoto University, 46 Yoshida Shimoadachi-Cho, Sakyo-Ku, Kyoto 606-8501, Japan
- Yusuke Moriguchi: Graduate School of Letters, Kyoto University, Yoshida Hon-Machi, Sakyo-Ku, Kyoto 606-8501, Japan
11. Cárdenas-Miller N, O'Donnell RE, Tam J, Wyble B. Surprise! Draw the scene: Visual recall reveals poor incidental working memory following visual search in natural scenes. Mem Cognit 2023. [PMID: 37770695] [DOI: 10.3758/s13421-023-01465-9]
Abstract
Searching within natural scenes can induce incidental encoding of information about the scene and the target, particularly when the scene is complex or repeated. However, recent evidence from attribute amnesia (AA) suggests that in some situations, searchers can find a target without building a robust incidental memory of its task relevant features. Through drawing-based visual recall and an AA search task, we investigated whether search in natural scenes necessitates memory encoding. Participants repeatedly searched for and located an easily detected item in novel scenes for numerous trials before being unexpectedly prompted to draw either the entire scene (Experiment 1) or their search target (Experiment 2) directly after viewing the search image. Naïve raters assessed the similarity of the drawings to the original information. We found that surprise-trial drawings of the scene and search target were both poorly recognizable, but the same drawers produced highly recognizable drawings on the next trial when they had an expectation to draw the image. Experiment 3 further showed that the poor surprise trial memory could not merely be attributed to interference from the surprising event. Our findings suggest that even for searches done in natural scenes, it is possible to locate a target without creating a robust memory of either it or the scene it was in, even if attended to just a few seconds prior. This disconnection between attention and memory might reflect a fundamental property of cognitive computations designed to optimize task performance and minimize resource use.
Affiliations
- Ryan E O'Donnell: Pennsylvania State University, University Park, PA, USA; Drexel University, Philadelphia, PA, USA
- Joyce Tam: Pennsylvania State University, University Park, PA, USA
- Brad Wyble: Pennsylvania State University, University Park, PA, USA
12. Sasin E, Markov Y, Fougnie D. Meaningful objects avoid attribute amnesia due to incidental long-term memories. Sci Rep 2023;13:14464. [PMID: 37660090] [PMCID: PMC10475071] [DOI: 10.1038/s41598-023-41642-z]
Abstract
Attribute amnesia is the failure to report an attribute of an attended stimulus when the report is unexpectedly requested, likely reflecting a lack of working memory consolidation. Previous studies have shown that unique meaningful objects are immune to attribute amnesia. However, these studies used highly dissimilar foils to test memory, raising the possibility that good performance at the surprise test was based on an imprecise (gist-like) form of long-term memory. In Experiment 1, we explored whether a more sensitive memory test would reveal attribute amnesia for meaningful objects. We used a four-alternative forced-choice test with foils having mismatched exemplar (e.g., apple pie/pumpkin pie) and/or state (e.g., cut/full) information. Errors indicated intact exemplar, but not state, information. Thus, meaningful objects are vulnerable to attribute amnesia under the right conditions. In Experiments 2A-2D, we manipulated the familiarity signals of test items by introducing a critical object as a pre-surprise target. In the surprise trial, this critical item matched one of the foil choices. Participants selected the critical object more often than other items. By demonstrating that familiarity influences responses in this paradigm, we suggest that meaningful objects are not immune to attribute amnesia but instead sidestep its effects.
Affiliations
- Edyta Sasin: Department of Psychology, New York University Abu Dhabi, Abu Dhabi, UAE
- Yuri Markov: Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Daryl Fougnie: Department of Psychology, New York University Abu Dhabi, Abu Dhabi, UAE
13. Fernandez-Duque M, Hayakawa S, Marian V. Speakers of different languages remember visual scenes differently. Sci Adv 2023;9:eadh0064. [PMID: 37585537] [PMCID: PMC10431704] [DOI: 10.1126/sciadv.adh0064]
Abstract
Language can have a powerful effect on how people experience events. Here, we examine how the languages people speak guide attention and influence what they remember from a visual scene. When hearing a word, listeners activate other similar-sounding words before settling on the correct target. We tested whether this linguistic coactivation during a visual search task changes memory for objects. Bilinguals and monolinguals remembered English competitor words that overlapped phonologically with a spoken English target better than control objects without name overlap. High Spanish proficiency also enhanced memory for Spanish competitors that overlapped across languages. We conclude that linguistic diversity partly accounts for differences in higher cognitive functions such as memory, with multilinguals providing a fertile ground for studying the interaction between language and cognition.
Affiliations
- Matias Fernandez-Duque: Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL 60208, USA
- Sayuri Hayakawa: Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL 60208, USA; Department of Psychology, Oklahoma State University, Stillwater, OK 74078, USA
- Viorica Marian: Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL 60208, USA
14. Sho H, Morita H. The effects of viewing by scrolling on a small screen on the encoding of objects into visual long-term memory. Front Psychol 2023;14:1191952. [PMID: 37663343] [PMCID: PMC10469673] [DOI: 10.3389/fpsyg.2023.1191952]
Abstract
The perception of an image obtained by scrolling through a small screen can differ from the typical perception of a wide visual field in a stable environment. However, image perception during scrolling on a small screen is not yet well understood in terms of the psychology of visual perception and cognition. This study investigated how screen-size limitations and the image shifts caused by scrolling affect image encoding into visual long-term memory. Participants explored stimulus images under three conditions. In the scrolling condition, they explored each image through a small screen. In the moving-window condition, they explored the image by moving the screen over a masked image, similar to looking through a moving peephole. In the no-window condition, they viewed the entire image at once. Each stimulus comprised 12 objects. After 1 h, participants were tested on object recognition. Memory retention was higher in the scrolling and moving-window conditions than in the no-window condition, with no difference between the scrolling and moving-window conditions. The time participants took to explore the stimulus was shorter in the no-window condition; thus, encoding efficiency (the rate at which information is encoded into memory per unit time) did not differ among the three conditions. An analysis of the scan traces of the scrolling and window movements relative to the image revealed differences between the two conditions in the scan's dynamic features. Moreover, memory retention correlated negatively with image-scrolling speed. We conclude that perceiving images by scrolling on a small screen enables better memory retention than whole-image viewing when viewing time is not limited. Depending on the presentation mode, viewing through a small screen is thus not necessarily disadvantageous for memory encoding, although participants who scrolled quickly tended to retain less. These findings are relevant to school education and suggest that the use of mobile devices in learning has some merit from the viewpoint of cognitive psychology.
Affiliations
- Hayato Sho: Graduate School of Comprehensive Human Sciences, Master's Program in Informatics, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Hiromi Morita: Institute of Library, Information and Media Science, University of Tsukuba, Tsukuba, Ibaraki, Japan
15. How does searching for faces among similar-looking distractors affect distractor memory? Mem Cognit 2023. [PMID: 36849759] [DOI: 10.3758/s13421-023-01405-7]
Abstract
Prior research has shown that searching for multiple targets in a visual search task enhances distractor memory in a subsequent recognition test. Three non-mutually exclusive accounts have been offered to explain this phenomenon. The mental comparison hypothesis states that searching for multiple targets requires participants to make more mental comparisons between the targets and the distractors, which enhances distractor memory. The attention allocation hypothesis states that participants allocate more attention to distractors because a multiple-target search cue leads them to expect a more difficult search. Finally, the partial match hypothesis states that searching for multiple targets increases the amount of featural overlap between targets and distractors, which necessitates greater attention in order to reject each distractor. In two experiments, we examined these hypotheses by manipulating visual working memory (VWM) load and target-distractor similarity of AI-generated faces in a visual search (i.e., RSVP) task. Distractor similarity was manipulated using a multidimensional scaling model constructed from facial landmarks and other metadata of each face. In both experiments, distractors from multiple-target searches were recognized better than distractors from single-target searches. Experiment 2 additionally revealed that increased target-distractor similarity during search improved distractor recognition memory, consistent with the partial match hypothesis.
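The MDS-based similarity manipulation described in this abstract can be illustrated with a short sketch. This is not the authors' code: the landmark matrix below is random stand-in data, and the most_similar_distractors helper is a hypothetical name. It only shows how pairwise distances over facial features could feed a low-dimensional scaling solution from which high-similarity distractors are drawn.

```python
# Minimal sketch of MDS-derived face similarity (illustrative assumptions only).
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
# Stand-in for facial-landmark feature vectors: 50 faces x 68 (x, y) landmarks.
landmarks = rng.normal(size=(50, 136))

# Pairwise dissimilarities between faces, embedded into a 2-D MDS space.
dissim = squareform(pdist(landmarks, metric="euclidean"))
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(dissim)

def most_similar_distractors(target: int, k: int = 5) -> np.ndarray:
    """Return indices of the k faces closest to the target in MDS space."""
    d = np.linalg.norm(embedding - embedding[target], axis=1)
    d[target] = np.inf  # exclude the target itself
    return np.argsort(d)[:k]

print(most_similar_distractors(target=0))  # candidate high-similarity distractors
```

Varying k, or sampling from different distance bands in the embedding, would yield the high- versus low-similarity distractor sets that the two experiments contrast.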
16. Rehrig G, Hayes TR, Henderson JM, Ferreira F. Visual attention during seeing for speaking in healthy aging. Psychol Aging 2023;38:49-66. [PMID: 36395016] [PMCID: PMC10021028] [DOI: 10.1037/pag0000718]
Abstract
As we age, we accumulate a wealth of information about the surrounding world. Evidence from visual search suggests that older adults retain intact knowledge of where objects tend to occur in everyday environments (semantic information), which allows them to successfully locate objects in scenes, but that they may overrely on semantic guidance. We investigated age differences in the allocation of attention to semantically informative and visually salient information in a task in which the eye movements of younger (N = 30, aged 18-24) and older (N = 30, aged 66-82) adults were tracked as they described real-world scenes. We measured the semantic information in scenes using "meaning map" ratings from a norming sample of young and older adults, and image salience as graph-based visual saliency. Logistic mixed-effects modeling was used to determine whether, controlling for center bias, fixated scene locations differed in semantic informativeness and visual salience from locations that were not fixated, and whether these effects differed for young and older adults. Semantic informativeness predicted fixated locations well overall, as did image salience, although unique variance in the model was better explained by semantic informativeness than by image salience. Older adults were less likely to fixate informative locations in scenes than young adults were, though the locations older adults fixated were independently well predicted by informativeness. These results suggest that young and older adults both use semantic information to guide attention in scenes and that older adults do not overrely on semantic information across the board.
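As a rough illustration of the analysis this abstract describes, the following is a minimal sketch of a logistic mixed-effects model predicting fixation from semantic informativeness and salience. Everything here is an assumption: the column names, the simulated data, the random-intercept structure, and the use of statsmodels' Bayesian mixed GLM (the authors' actual model specification and software may well differ).

```python
# Minimal sketch: logistic mixed-effects model of fixated vs. non-fixated locations.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "fixated": rng.integers(0, 2, n),     # 1 = scene location was fixated
    "meaning": rng.normal(size=n),        # semantic informativeness (z-scored)
    "salience": rng.normal(size=n),       # graph-based visual salience (z-scored)
    "center_bias": rng.normal(size=n),    # control for distance from scene center
    "older": rng.integers(0, 2, n),       # age group: 0 = young, 1 = older
    "subject": rng.integers(0, 60, n).astype(str),
})

# Fixed effects for meaning, salience, age group, and their interactions,
# controlling for center bias; random intercepts per subject.
model = BinomialBayesMixedGLM.from_formula(
    "fixated ~ meaning * older + salience * older + center_bias",
    {"subject": "0 + C(subject)"},
    df,
)
result = model.fit_vb()  # fast variational Bayes fit
print(result.summary())
```

In a model of this form, a negative meaning-by-age interaction would correspond to the reported finding that older adults were less likely than young adults to fixate semantically informative locations.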
Affiliations
- John M Henderson: Department of Psychology, University of California, Davis; Center for Mind and Brain, University of California, Davis
17. Clarkson TR, Cunningham SJ, Haslam C, Kritikos A. Is self always prioritised? Attenuating the ownership self-reference effect in memory. Conscious Cogn 2022;106:103420. [PMID: 36274390] [DOI: 10.1016/j.concog.2022.103420]
Abstract
The current study demonstrates that the ownership self-reference effect (OSRE) can be abolished when elaborate details about a distant other-referent are provided. In a 2 (high versus low information) × 2 (self versus other) experimental design, we tested whether the SRE can be modulated by social saliency. Using a well-established ownership paradigm (Collard et al., 2020; Cunningham et al., 2008; Sparks et al., 2016), when the other was made socially salient (i.e., details and characteristics about the other were provided to the participant prior to encoding), no SRE emerged: self-owned and other-owned items were recalled with comparable accuracy. In contrast, when the other was not salient (i.e., no details about them were provided), participants recalled a higher proportion of self-owned items, demonstrating a typical SRE in source memory. The degree of self- or other-referencing was not related to measured variables of closeness, similarity, or shared traits with the other. Although the SRE is an established and robust effect, the current findings illustrate circumstances in which the self is no longer prioritised above the other. In line with our predictions, we suggest that the self carries automatically attributed social salience (e.g., through ownership) and that, by enhancing the other's social salience through elaborated detail, prioritisation can expand beyond the self to encapsulate another person and influence incidental memory.
Affiliations
- T R Clarkson: School of Psychology, University of Queensland, Brisbane, Queensland, Australia
- S J Cunningham: School of Applied Sciences, Abertay University, United Kingdom
- C Haslam: School of Psychology, University of Queensland, Brisbane, Queensland, Australia
- A Kritikos: School of Psychology, University of Queensland, Brisbane, Queensland, Australia
18. Zoellner C, Klein N, Cheng S, Schubotz R, Axmacher N, Wolf OT. Where was the toaster? A systematic investigation of semantic construction in a new virtual episodic memory paradigm. Q J Exp Psychol (Hove) 2022. [PMID: 35848220] [DOI: 10.1177/17470218221116610]
Abstract
Retrieved memories of past events are often inaccurate. The scenario construction model (SCM) postulates that during encoding only the gist of an episode is stored in the episodic memory trace, and that during retrieval, information missing from that trace is constructed from semantic information. The current study aimed to find behavioural evidence for semantic construction in a realistic yet controlled setting by introducing a new paradigm and adjusted memory tests that measure semantic construction. In a desktop virtual reality (VR) environment, participants navigated through a flat in which some household objects appeared in unexpected rooms, creating conflicts between the experienced episode and semantic expectations. This manipulation of congruence enabled us to identify the influence of semantic information in cases of episodic memory failure. In addition, we controlled whether objects were task-relevant or task-irrelevant to the sequence of actions. Alongside an established old/new recognition task, we introduced spatial and temporal recall measures as potentially superior memory measures for quantifying semantic construction. The recognition task and the spatial recall revealed that both congruence and task-relevance predicted correct episodic memory retrieval. In cases of episodic memory failure, semantic construction was more likely than guessing and occurred more frequently for task-irrelevant objects. In the temporal recall, object pairs belonging to the same semantic room category were clustered together more than object pairs from different semantic categories (at the second retrieval). Taken together, our findings support the predictions of the SCM. The new VR paradigm, including the new memory measures, appears to be a promising tool for investigating semantic construction.
Affiliations
- Carina Zoellner: Department of Cognitive Psychology, Ruhr University Bochum, Bochum, Germany
- Nicole Klein: Department of Cognitive Psychology, Ruhr University Bochum, Bochum, Germany
- Sen Cheng: Institute for Neural Computation, Ruhr University Bochum, Bochum, Germany
- Ricarda Schubotz: Department of Psychology, University of Muenster, Muenster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Muenster, Germany
- Nikolai Axmacher: Department of Neuropsychology, Ruhr University Bochum, Bochum, Germany
- Oliver T Wolf: Department of Cognitive Psychology, Ruhr University Bochum, Bochum, Germany
19. Masarwa S, Kreichman O, Gilaie-Dotan S. Larger images are better remembered during naturalistic encoding. Proc Natl Acad Sci U S A 2022;119:e2119614119. [PMID: 35046050] [PMCID: PMC8794838] [DOI: 10.1073/pnas.2119614119]
Abstract
We are constantly exposed to multiple visual scenes, and while we freely view them, without an intentional effort to memorize or encode them, only some are remembered. It has been suggested that image memory is influenced by multiple factors, such as depth of processing, familiarity, and visual category. However, this is typically investigated when people are instructed to perform a task (e.g., to remember or to make some judgment about the images), which may modulate processing at multiple levels and thus may not generalize to naturalistic visual behavior. Visual memory is assumed to rely on high-level visual perception that shows a degree of size invariance and is therefore not assumed to be highly dependent on image size. Here, we reasoned that during naturalistic vision, free of task-related modulations, bigger images stimulate more visual system processing resources (from retina to cortex) and would therefore be better remembered. In an extensive set of seven experiments, naïve participants (n = 182) were asked to freely view presented images (sized 3° to 24°) without any instructed encoding task. Afterward, they were given a surprise recognition test (midsized images, 50% already seen). Larger images were remembered better than smaller ones across all experiments (~20% higher accuracy, or ~1.5 times better). Memory was proportional to image size; faces were remembered best, and outdoor scenes least. Results were robust even when controlling for image set, presentation order, screen resolution, image scaling at test, or the amount of information. While multiple factors affect image memory, our results suggest that low- to high-level processes may all contribute to image memory.
Affiliations
- Shaimaa Masarwa: School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan 5290002, Israel; The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
- Olga Kreichman: School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan 5290002, Israel; The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel
- Sharon Gilaie-Dotan: School of Optometry and Vision Science, Faculty of Life Science, Bar Ilan University, Ramat Gan 5290002, Israel; The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 5290002, Israel; Institute of Cognitive Neuroscience, University College London, London WC1N 3AZ, United Kingdom
20. Ramzaoui H, Faure S, Spotorno S. Age-related differences when searching in a real environment: The use of semantic contextual guidance and incidental object encoding. Q J Exp Psychol (Hove) 2021;75:1948-1958. [PMID: 34816760] [DOI: 10.1177/17470218211064887]
Abstract
Visual search is a crucial everyday activity that declines with aging. Here, drawing on the environmental support account, we hypothesized that semantic contextual associations between the target and neighboring objects (e.g., a teacup near a tea bag and a spoon), acting as external cues, may counteract this decline. Moreover, when searching for a target, viewers may encode information about co-present distractor objects simply by looking at them. In everyday life, where viewers often search for several targets within the same environment, such distractor objects may become the targets of future searches. Thus, we examined whether incidentally fixating a target during previous trials, when it was a distractor, also modulates the impact of aging on search performance. We used everyday object arrays on tables in a real room, where healthy young and older adults searched sequentially for multiple objects across different trials within the same array. Search was quicker: (1) in young than in older adults; (2) for targets surrounded by semantically associated objects rather than unassociated objects, but only in older adults; and (3) for targets that had been incidentally fixated as distractors compared to targets that had not, with no differences between young and older adults. These results suggest that older viewers use both environmental support based on object semantic associations and incidentally encoded object information to enhance the efficiency of real-world search, even in relatively simple environments. This reduces, but does not eliminate, the search decline related to aging.
Affiliations
- Sara Spotorno: School of Psychology, Keele University, United Kingdom
21. Marian V, Hayakawa S, Schroeder SR. Memory after visual search: Overlapping phonology, shared meaning, and bilingual experience influence what we remember. Brain Lang 2021;222:105012. [PMID: 34464828] [PMCID: PMC8554070] [DOI: 10.1016/j.bandl.2021.105012]
Abstract
How we remember the things that we see can be shaped by our prior experiences. Here, we examine how linguistic and sensory experiences interact to influence visual memory. Objects in a visual search that shared phonology (cat-cast) or semantics (dog-fox) with a target were later remembered better than unrelated items. Phonological overlap had a greater influence on memory when targets were cued by spoken words, while semantic overlap had a greater effect when targets were cued by characteristic sounds. The influence of overlap on memory varied as a function of individual differences in language experience: greater bilingual experience was associated with a decreased impact of overlap on memory. We conclude that phonological and semantic features of objects influence memory differently depending on individual differences in language experience, guiding not only what we initially look at but also what we later remember.
Affiliations
- Viorica Marian: Department of Communication Sciences and Disorders, Northwestern University, 2240 North Campus Drive, Evanston, IL 60208, United States
- Sayuri Hayakawa: Department of Communication Sciences and Disorders, Northwestern University, 2240 North Campus Drive, Evanston, IL 60208, United States
- Scott R Schroeder: Department of Speech, Language, Hearing Sciences, Hofstra University, Hempstead, NY 11549, United States
22. The detail is in the difficulty: Challenging search facilitates rich incidental object encoding. Mem Cognit 2020;48:1214-1233. [PMID: 32562249] [DOI: 10.3758/s13421-020-01051-3]
Abstract
When searching for objects in the environment, observers necessarily encounter other, nontarget, objects. Despite their irrelevance for search, observers often incidentally encode the details of these objects, an effect that is exaggerated as the search task becomes more challenging. Although it is well established that searchers create incidental memories for targets, less is known about the fidelity with which nontargets are remembered. Do observers store richly detailed representations of nontargets, or are these memories characterized by gist-level detail, containing only the information necessary to reject the item as a nontarget? We addressed this question across two experiments in which observers completed multiple-target (one to four potential targets) searches, followed by surprise alternative forced-choice (AFC) recognition tests for all encountered objects. To assess the detail of incidentally stored memories, we used similarity rankings derived from multidimensional scaling to manipulate the perceptual similarity across objects in 4-AFC (Experiment 1a) and 16-AFC (Experiments 1b and 2) tests. Replicating prior work, observers recognized more nontarget objects encountered during challenging, relative to easier, searches. More importantly, AFC results revealed that observers stored more than gist-level detail: When search objects were not recognized, observers systematically chose lures with higher perceptual similarity, reflecting partial encoding of the search object's perceptual features. Further, similarity effects increased with search difficulty, revealing that incidental memories for visual search objects are sharpened when the search task requires greater attentional processing.
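A small sketch can make the AFC-lure logic in this abstract concrete. This is not the authors' code: the 2-D embedding is random stand-in data and both helper names are hypothetical. It only shows how MDS-derived distances could select lures spanning the similarity range, and how the similarity rank of an erroneously chosen lure indexes partial memory.

```python
# Minimal sketch: similarity-graded AFC lures from a hypothetical MDS embedding.
import numpy as np

rng = np.random.default_rng(2)
embedding = rng.normal(size=(200, 2))  # stand-in 2-D MDS coordinates, 200 objects

def afc_lures(target: int, n_lures: int = 15) -> np.ndarray:
    """Pick lures evenly spanning the similarity range to the target (16-AFC)."""
    d = np.linalg.norm(embedding - embedding[target], axis=1)
    order = np.argsort(d)[1:]  # every other object, from most to least similar
    picks = np.linspace(0, len(order) - 1, n_lures).astype(int)
    return order[picks]

def chosen_lure_rank(lures: np.ndarray, chosen: int) -> int:
    """Similarity rank of the chosen lure (0 = most target-similar)."""
    return int(np.where(lures == chosen)[0][0])

lures = afc_lures(target=10)
print(lures, chosen_lure_rank(lures, lures[0]))
```

Under this kind of scoring, error trials on which observers systematically pick low-rank (high-similarity) lures reflect the partial encoding of perceptual features that the abstract reports.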
23. Sasin E, Fougnie D. The road to long-term memory: Top-down attention is more effective than bottom-up attention for forming long-term memories. Psychon Bull Rev 2021;28:937-945. [PMID: 33443709] [PMCID: PMC8219582] [DOI: 10.3758/s13423-020-01856-y]
Abstract
Does the strength of representations in long-term memory (LTM) depend on which type of attention is engaged? We tested participants' memory for objects seen during visual search. We compared implicit memory for two types of objects: related-context nontargets, which grabbed attention because they matched the target-defining feature (i.e., color; top-down attention), and salient distractors, which captured attention only because they were perceptually distracting (bottom-up attention). In Experiment 1 the salient distractor flickered, while in Experiment 2 the luminance of the salient distractor alternated. Critically, salient and related-context nontargets produced equivalent attentional capture, yet related-context nontargets were remembered far better than salient distractors (and salient distractors were not remembered better than unrelated distractors). These results suggest that LTM depends not only on the amount of attention but also on the type of attention. Specifically, top-down attention is more effective than bottom-up attention in promoting the formation of memory traces.
Affiliations
- Edyta Sasin: Department of Psychology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Daryl Fougnie: Department of Psychology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
24. Lau JSH, Pashler H, Brady TF. Target templates in low target-distractor discriminability visual search have higher resolution, but the advantage they provide is short-lived. Atten Percept Psychophys 2021;83:1435-1454. [PMID: 33409902] [PMCID: PMC7787128] [DOI: 10.3758/s13414-020-02213-w]
Abstract
When you search repeatedly for a set of items among very similar distractors, does that make you more efficient in locating the targets? To address this, we had observers search for two categories of targets among the same set of distractors across trials. Visual and conceptual similarity of the stimuli were validated with a multidimensional scaling analysis, and separately using a deep neural network model. After a few blocks of visual search trials, the distractor set was replaced. In three experiments, we manipulated the level of discriminability between the targets and distractors before and after the distractors were replaced. Our results suggest that in the presence of repeated distractors, observers generally become more efficient. However, the difficulty of the search task does impact how efficient people are when the distractor set is replaced. Specifically, when the training is easy, people are more impaired in a difficult transfer test. We attribute this effect to the precision of the target template generated during training. In particular, a coarse target template is created when the target and distractors are easy to discriminate. These coarse target templates do not transfer well in a context with new distractors. This suggests that learning with more distinct targets and distractors can result in lower performance when context changes, but observers recover from this effect quickly (within a block of search trials).
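The deep-neural-network check of stimulus similarity mentioned above can be approximated with off-the-shelf tools. The sketch below is an assumption-laden illustration, not the authors' pipeline: it scores pairwise similarity as the cosine similarity of pretrained CNN features (Python, torchvision):

# Pairwise stimulus similarity from penultimate-layer CNN features.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
model.fc = torch.nn.Identity()            # expose the 512-d penultimate features
preprocess = weights.transforms()         # the preprocessing matching these weights

@torch.no_grad()
def pairwise_similarity(images):          # images: (N, 3, H, W) tensor in [0, 1]
    feats = F.normalize(model(preprocess(images)), dim=1)
    return feats @ feats.T                # (N, N) cosine-similarity matrix

sims = pairwise_similarity(torch.rand(4, 3, 224, 224))  # placeholder stimuli
print(sims.shape)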
Affiliation(s)
- Jonas Sin-Heng Lau, Department of Psychology, University of California, San Diego, California 92093-0109, USA
- Hal Pashler, Department of Psychology, University of California, San Diego, California 92093-0109, USA
- Timothy F Brady, Department of Psychology, University of California, San Diego, California 92093-0109, USA
25
Kristjánsson Á, Draschkow D. Keeping it real: Looking beyond capacity limits in visual cognition. Atten Percept Psychophys 2021; 83:1375-1390. [PMID: 33791942 PMCID: PMC8084831 DOI: 10.3758/s13414-021-02256-7] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/23/2020] [Indexed: 11/23/2022]
Abstract
Research within visual cognition has made tremendous strides in uncovering the basic operating characteristics of the visual system by reducing the complexity of natural vision to artificial but well-controlled experimental tasks and stimuli. This reductionist approach has for example been used to assess the basic limitations of visual attention, visual working memory (VWM) capacity, and the fidelity of visual long-term memory (VLTM). The assessment of these limits is usually made in a pure sense, irrespective of goals, actions, and priors. While it is important to map out the bottlenecks our visual system faces, we focus here on selected examples of how such limitations can be overcome. Recent findings suggest that during more natural tasks, capacity may be higher than reductionist research suggests and that separable systems subserve different actions, such as reaching and looking, which might provide important insights about how pure attentional or memory limitations could be circumvented. We also review evidence suggesting that the closer we get to naturalistic behavior, the more we encounter implicit learning mechanisms that operate "for free" and "on the fly." These mechanisms provide a surprisingly rich visual experience, which can support capacity-limited systems. We speculate whether natural tasks may yield different estimates of the limitations of VWM, VLTM, and attention, and propose that capacity measurements should also pass the real-world test within naturalistic frameworks. Our review highlights various approaches for this and suggests that our understanding of visual cognition will benefit from incorporating the complexities of real-world cognition in experimental approaches.
Affiliation(s)
- Árni Kristjánsson, School of Health Sciences, University of Iceland, Reykjavík, Iceland; School of Psychology, National Research University Higher School of Economics, Moscow, Russia
- Dejan Draschkow, Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
26
Lavelle M, Alonso D, Luria R, Drew T. Visual working memory load plays limited to no role in encoding distractor objects during visual search. VISUAL COGNITION 2021. [DOI: 10.1080/13506285.2021.1914256] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Affiliation(s)
- Mark Lavelle, Department of Psychology, University of Utah, Salt Lake City, UT, USA
- David Alonso, Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Roy Luria, The School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Trafton Drew, Department of Psychology, University of Utah, Salt Lake City, UT, USA
27
Abstract
When searching for a specific object, we often form an image of the target, which we use as a search template. This template is thought to be maintained in working memory, primarily because of evidence that the contents of working memory influence search behavior. However, it is unknown whether this interaction applies in both directions. Here, we show that changes in search templates influence working memory. Participants were asked to remember the orientation of a line that changed every trial; on most trials (75%) they searched for that orientation, and on the remaining trials they recalled it. Critically, we manipulated the target template by introducing a predictable context: distractors in the visual search task were always counterclockwise (or clockwise) from the search target. The predictable context produced a large bias in search. Importantly, we also found a similar bias in orientation memory reports, demonstrating that working memory and target templates were not held as completely separate, isolated representations. However, the memory bias was considerably smaller than the search bias, suggesting that, although there is a common source, the two may not be driven by a single, shared process.
28
Grieben R, Tekülve J, Zibner SKU, Lins J, Schneegans S, Schöner G. Scene memory and spatial inhibition in visual search: A neural dynamic process model and new experimental evidence. Atten Percept Psychophys 2020; 82:775-798. [PMID: 32048181 PMCID: PMC7246253 DOI: 10.3758/s13414-019-01898-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
Any object-oriented action requires that the object be first brought into the attentional foreground, often through visual search. Outside the laboratory, this would always take place in the presence of a scene representation acquired from ongoing visual exploration. The interaction of scene memory with visual search is still not completely understood. Feature integration theory (FIT) has shaped both research on visual search, emphasizing the scaling of search times with set size when searches entail feature conjunctions, and research on visual working memory through the change detection paradigm. Despite its neural motivation, there is no consistent neural process account of FIT in both its dimensions. We propose such an account that integrates (1) visual exploration and the building of scene memory, (2) the attentional detection of visual transients and the extraction of search cues, and (3) visual search itself. The model uses dynamic field theory in which networks of neural dynamic populations supporting stable activation states are coupled to generate sequences of processing steps. The neural architecture accounts for basic findings in visual search and proposes a concrete mechanism for the integration of working memory into the search process. In a behavioral experiment, we address the long-standing question of whether both the overall speed and the efficiency of visual search can be improved by scene memory. We find both effects and provide model fits of the behavioral results. In a second experiment, we show that the increase in efficiency is fragile, and trace that fragility to the resetting of spatial working memory.
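As a concrete illustration of the model class, the sketch below integrates a one-dimensional Amari-style neural field, the kind of neural dynamic population such architectures are built from; all parameter values and the localized input are illustrative assumptions, not the published model (Python):

# One-dimensional dynamic neural field: tau * du/dt = -u + h + stim + w convolved with f(u).
import numpy as np

n, dt, tau, h = 181, 1.0, 10.0, -5.0       # field sites, time step, time constant, resting level
x = np.arange(n)
d = np.abs(x[:, None] - x[None, :])
w = 2.0 * np.exp(-d**2 / 18.0) - 1.0 * np.exp(-d**2 / 162.0)  # local excitation, broader inhibition

def f(u, beta=1.5):                        # sigmoidal output nonlinearity
    return 1.0 / (1.0 + np.exp(-beta * u))

u = np.full(n, float(h))                   # field starts at resting level
stim = 8.0 * np.exp(-(x - 90)**2 / 50.0)   # localized input, e.g., a search cue

for _ in range(300):                       # Euler integration of the field dynamics
    u += (dt / tau) * (-u + h + stim + w @ f(u))

print(u.argmax(), round(u.max(), 2))       # a self-stabilized activation peak near x = 90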
Affiliation(s)
- Raul Grieben, Institut für Neuroinformatik, Ruhr-Universität Bochum, Universitätsstraße 150, 44780 Bochum, Germany
- Jan Tekülve, Institut für Neuroinformatik, Ruhr-Universität Bochum, Universitätsstraße 150, 44780 Bochum, Germany
- Stephan K. U. Zibner, Institut für Neuroinformatik, Ruhr-Universität Bochum, Universitätsstraße 150, 44780 Bochum, Germany
- Jonas Lins, Institut für Neuroinformatik, Ruhr-Universität Bochum, Universitätsstraße 150, 44780 Bochum, Germany
- Gregor Schöner, Institut für Neuroinformatik, Ruhr-Universität Bochum, Universitätsstraße 150, 44780 Bochum, Germany
29
Helbing J, Draschkow D, Võ MLH. Search superiority: Goal-directed attentional allocation creates more reliable incidental identity and location memory than explicit encoding in naturalistic virtual environments. Cognition 2020; 196:104147. [PMID: 32004760 DOI: 10.1016/j.cognition.2019.104147] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2019] [Revised: 11/19/2019] [Accepted: 11/20/2019] [Indexed: 01/23/2023]
Abstract
We use representations and expectations formed during life-long learning to support attentional allocation and perception. In comparison to traditional laboratory investigations, real-world memory formation is usually achieved without explicit instruction and on-the-fly as a by-product of natural interactions with our environment. Understanding this process and the quality of naturally formed representations is critical to understanding how memory is used to guide attention and perception. Utilizing immersive, navigable, and realistic virtual environments, we investigated incidentally generated memory representations by comparing them to memories for items which were explicitly memorized. Participants either searched for objects embedded in realistic indoor environments or explicitly memorized them for follow-up identity and location memory tests. We show for the first time that memory for the identity of naturalistic objects and their location in 3D space is higher after incidental encoding compared to explicit memorization, even though the subsequent memory tests came as a surprise to participants. Relating gaze behavior to memory performance revealed that encoding time was more predictive of subsequent memory when participants explicitly memorized an item, compared to incidentally encoding it. Our results suggest that the active nature of guiding attentional allocation during proactive behavior allows for behaviorally optimal formation and utilization of representations. This highlights the importance of investigating cognition under ecologically valid conditions and shows that understanding the most natural processes for encoding and maintaining information is critical for understanding adaptive behavior.
Affiliation(s)
- Jason Helbing, Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Dejan Draschkow, Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany; Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Melissa L-H Võ, Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
30
Williams CC. Looking for your keys: The interaction of attention, memory, and eye movements in visual search. PSYCHOLOGY OF LEARNING AND MOTIVATION 2020. [DOI: 10.1016/bs.plm.2020.06.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
31
Loschky LC, Larson AM, Smith TJ, Magliano JP. The Scene Perception & Event Comprehension Theory (SPECT) Applied to Visual Narratives. Top Cogn Sci 2019; 12:311-351. [PMID: 31486277 PMCID: PMC9328418 DOI: 10.1111/tops.12455] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2018] [Revised: 08/05/2019] [Accepted: 08/05/2019] [Indexed: 11/29/2022]
Abstract
Understanding how people comprehend visual narratives (including picture stories, comics, and film) requires the combination of traditionally separate theories that span the initial sensory and perceptual processing of complex visual scenes, the perception of events over time, and comprehension of narratives. Existing piecemeal approaches fail to capture the interplay between these levels of processing. Here, we propose the Scene Perception & Event Comprehension Theory (SPECT), as applied to visual narratives, which distinguishes between front‐end and back‐end cognitive processes. Front‐end processes occur during single eye fixations and are comprised of attentional selection and information extraction. Back‐end processes occur across multiple fixations and support the construction of event models, which reflect understanding of what is happening now in a narrative (stored in working memory) and over the course of the entire narrative (stored in long‐term episodic memory). We describe relationships between front‐ and back‐end processes, and medium‐specific differences that likely produce variation in front‐end and back‐end processes across media (e.g., picture stories vs. film). We describe several novel research questions derived from SPECT that we have explored. By addressing these questions, we provide greater insight into how attention, information extraction, and event model processes are dynamically coordinated to perceive and understand complex naturalistic visual events in narratives and the real world. Comprehension of visual narratives like comics, picture stories, and films involves both decoding the visual content and construing the meaningful events they represent. The Scene Perception & Event Comprehension Theory (SPECT) proposes a framework for understanding how a comprehender perceptually negotiates the surface of a visual representation and integrates its meaning into a growing mental model.
Affiliation(s)
- Tim J Smith, Department of Psychological Sciences, Birkbeck, University of London
32
Guevara Pinto JD, Papesh MH. Incidental memory following rapid object processing: The role of attention allocation strategies. J Exp Psychol Hum Percept Perform 2019; 45:1174-1190. [PMID: 31219283 PMCID: PMC7202240 DOI: 10.1037/xhp0000664] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
When observers search for multiple (rather than singular) targets, they are slower and less accurate, yet have better incidental memory for nontarget items encountered during the task (Hout & Goldinger, 2010). One explanation for this may be that observers titrate their attention allocation based on the expected difficulty suggested by search cues. Difficult search cues may implicitly encourage observers to narrow their attention, simultaneously enhancing distractor encoding and hindering peripheral processing. Across three experiments, we manipulated the difficulty of search cues preceding passive visual search for real-world objects, using a Rapid Serial Visual Presentation (RSVP) task to equate item exposure durations. In all experiments, incidental memory was enhanced for distractors encountered while participants monitored for difficult targets. Moreover, in key trials, peripheral shapes appeared at varying eccentricities off center, allowing us to infer the spread and precision of participants' attentional windows. Peripheral item detection and identification decreased when search cues were difficult, even when the peripheral items appeared before targets. These results were not an artifact of sustained vigilance in miss trials, but instead reflect top-down modulation of attention allocation based on task demands. Implications for individual differences are discussed.
33
Gorbunova ES, Kozlov KS, Le STT, Makarov IM. The Role of Working Memory in Dual-Target Visual Search. Front Psychol 2019; 10:1673. [PMID: 31417449 PMCID: PMC6684960 DOI: 10.3389/fpsyg.2019.01673] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2019] [Accepted: 07/02/2019] [Indexed: 11/13/2022] Open
Abstract
Visual search (VS) for multiple targets is especially error prone. One of these errors is called subsequent search misses (SSM) and represents a decrease in accuracy at detecting a second target after a first target has been found. One possible explanation of SSM errors is working memory (WM) resource depletion. Three experiments investigated the role of WM in SSM errors using a dual-task paradigm. The first experiment investigated the role of object WM using a classical color change detection task. In the second and third experiments, a modified change detection task was applied, using shape as the relevant feature. The results of our study revealed no effect of an additional WM task on second-target detection in dual-target VS. Thus, SSM errors do not appear to be related to WM resource depletion. On the contrary, WM task performance was impaired by dual-target VS, compared with single-target VS, when the targets in the VS task were defined by the same feature used in the WM task.
Affiliation(s)
- Elena S Gorbunova, School of Psychology, National Research University Higher School of Economics, Moscow, Russia
- Kirill S Kozlov, School of Psychology, National Research University Higher School of Economics, Moscow, Russia
- Sofia Tkhan Tin Le, School of Psychology, National Research University Higher School of Economics, Moscow, Russia
- Ivan M Makarov, School of Psychology, National Research University Higher School of Economics, Moscow, Russia
34
Wiegand I, Wolfe JM. Age doesn't matter much: hybrid visual and memory search is preserved in older adults. AGING NEUROPSYCHOLOGY AND COGNITION 2019; 27:220-253. [PMID: 31050319 DOI: 10.1080/13825585.2019.1604941] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
We tested younger and older observers' attention and long-term memory functions in a "hybrid search" task, in which observers look through visual displays for instances of any of several types of targets held in memory. Apart from a general slowing, search efficiency did not change with age. In both age groups, reaction times increased linearly with the visual set size and logarithmically with the memory set size, with similar relative costs of increasing load (Experiment 1). We replicated the finding and further showed that performance remained comparable between age groups when familiarity cues were made irrelevant (Experiment 2) and target-context associations were to be retrieved (Experiment 3). Our findings are at variance with theories of cognitive aging that propose age-specific deficits in attention and memory. As hybrid search resembles many real-world searches, our results might be relevant to improve the ecological validity of assessing age-related cognitive decline.
Affiliation(s)
- Iris Wiegand, Visual Attention Lab, Brigham & Women's Hospital, Cambridge, MA, USA; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany; Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
- Jeremy M Wolfe, Visual Attention Lab, Brigham & Women's Hospital, Cambridge, MA, USA; Departments of Ophthalmology & Radiology, Harvard Medical School, Boston, MA, USA
35
Levin DT, Seiffert AE, Cho SJ, Carter KE. Are failures to look, to represent, or to learn associated with change blindness during screen-capture video learning? Cogn Res Princ Implic 2018; 3:49. [PMID: 30588561 PMCID: PMC6306372 DOI: 10.1186/s41235-018-0142-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2018] [Accepted: 11/07/2018] [Indexed: 11/10/2022] Open
Abstract
Although phenomena such as change blindness and inattentional blindness are robust, it is not entirely clear how these failures of visual awareness are related to failures to attend to visual information, to represent it, and to ultimately learn in visual environments. On some views, failures of visual awareness such as change blindness underestimate the true extent of otherwise rich visual representations. This might occur if people did represent the changing features but failed to compare them across views. In contrast, other approaches emphasize visual representations that are created only when they are functional. On this view, change blindness may be associated with poor representations of the changing properties. It is possible to compromise and propose that representational richness varies across contexts, but then it becomes important to detail relationships among attention, awareness, and learning in specific, but applicable, settings. We therefore assessed these relationships in an important visual setting: screen-captured instructional videos. In two experiments, we tested the degree to which attention (as measured by gaze) predicts change detection, and whether change detection is associated with visual representations and content learning. We observed that attention sometimes predicted change detection, and that change detection was associated with representations of attended objects. However, there was no relationship between change detection and learning.
Affiliation(s)
- Daniel T. Levin, Department of Psychology and Human Development, Vanderbilt University, Peabody College #552, 230 Appleton Place, Nashville, TN 37203-5721, USA
- Adriane E. Seiffert, Department of Psychology and Human Development, Vanderbilt University, Peabody College #552, 230 Appleton Place, Nashville, TN 37203-5721, USA
- Sun-Joo Cho, Department of Psychology and Human Development, Vanderbilt University, Peabody College #552, 230 Appleton Place, Nashville, TN 37203-5721, USA
- Kelly E. Carter, Department of Psychology and Human Development, Vanderbilt University, Peabody College #552, 230 Appleton Place, Nashville, TN 37203-5721, USA
36
Hutmacher F, Kuhbandner C. Long-Term Memory for Haptically Explored Objects: Fidelity, Durability, Incidental Encoding, and Cross-Modal Transfer. Psychol Sci 2018; 29:2031-2038. [PMID: 30376424 DOI: 10.1177/0956797618803644] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022] Open
Abstract
The question of how many of our perceptual experiences are stored in long-term memory has received considerable attention. The present study examined long-term memory for haptic experiences. Blindfolded participants haptically explored 168 everyday objects (e.g., a pen) for 10 s each. In a blindfolded memory test, they indicated which of two objects from the same basic-level category (e.g., two different pens) had been touched before. As shown in Experiment 1 (N = 26), memory was nearly perfect when tested immediately after exploration (94%) and still high when tested after 1 week (85%). As shown in Experiment 2 (N = 43), when participants explored the objects without the intention to memorize them, memory in a 1-week delayed surprise test was still high (79%), even when assessed with a cross-modal visual memory test (73%). These results indicate that detailed, durable, long-term memory representations are stored as a natural product of haptic perception.
37
Abstract
In visual search of natural scenes, recognition of briefly fixated but task-irrelevant distractor items on the basis of incidental memory is often comparable to recognition following explicit memorization. However, many characteristics of incidental memory remain unclear, including the capacity for its conscious retrieval. Here, we examined incidental memory for faces in either upright or inverted orientation using Rapid Serial Visual Presentation (RSVP). Subjects were instructed to detect a target face in a sequence of 8-15 faces cropped from natural scene photographs (Experiment 1). If the target face was identified within a brief time window, the subject proceeded to an incidental memory task. Here, subjects used incidental memory to discriminate between a probe face (a distractor in the RSVP stream) and a novel, foil face. In Experiment 2 we reduced scene-related semantic coherency by intermixing faces from multiple scenes and contrasted incidental memory with explicit memory, a condition where subjects actively memorized each face from the sequence without searching for a target. In both experiments, we measured objective performance (Type 1 AUC) and metacognitive accuracy (Type 2 AUC), revealing sustained and consciously accessible incidental memory for upright and inverted faces. In novel analyses of face categories, we examined whether accuracy or metacognitive judgments are affected by shared semantic features (i.e., similarity in gender, race, or age). Similarity enhanced the accuracy of incidental memory discriminations but did not influence metacognition. We conclude that incidental memory is sustained and consciously accessible, is not reliant on scene context, and is not enhanced by explicit memorization.
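The two measures named here are easy to make concrete. The sketch below simulates an old/new discrimination and computes Type 1 AUC (how well the decision variable separates probes from foils) and Type 2 AUC (how well confidence separates the observer's correct from incorrect responses); the data, criterion, and confidence definition are simulated assumptions, not the study's data (Python):

# Type 1 vs. Type 2 AUC on simulated recognition data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
is_old = rng.integers(0, 2, 400)                       # 1 = probe (seen), 0 = foil
evidence = rng.normal(loc=is_old.astype(float))        # internal decision variable
chose_old = (evidence > 0.5).astype(int)               # old/new response at a fixed criterion
confidence = np.abs(evidence - 0.5)                    # distance from criterion as confidence

type1_auc = roc_auc_score(is_old, evidence)            # objective discrimination
correct = (chose_old == is_old).astype(int)
type2_auc = roc_auc_score(correct, confidence)         # metacognitive accuracy

print(round(type1_auc, 3), round(type2_auc, 3))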
38
Draschkow D, Reinecke S, Cunningham CA, Võ MLH. The lower bounds of massive memory: Investigating memory for object details after incidental encoding. Q J Exp Psychol (Hove) 2018; 72:1176-1182. [DOI: 10.1177/1747021818783722] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Visual long-term memory capacity appears massive and detailed when probed explicitly. In the real world, however, memories are usually built from chance encounters. Therefore, we investigated the capacity and detail of incidental memory in a novel encoding task, instructing participants to detect visually distorted objects among intact objects. In a subsequent surprise recognition memory test, lures of a novel category, another exemplar, the same object in a different state, or exactly the same object were presented. Lure recognition performance was above chance, suggesting that incidental encoding resulted in reliable memory formation. Critically, performance for state lures was worse than for exemplars, which was driven by a greater similarity of state as opposed to exemplar foils to the original objects. Our results indicate that incidentally generated visual long-term memory representations of isolated objects are more limited in detail than recently suggested.
Affiliation(s)
- Dejan Draschkow, Department of Psychology, Goethe University, Frankfurt am Main, Germany
- Saliha Reinecke, Department of Psychology, Goethe University, Frankfurt am Main, Germany
- Corbin A Cunningham, Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Melissa L-H Võ, Department of Psychology, Goethe University, Frankfurt am Main, Germany
39
Visual search for changes in scenes creates long-term, incidental memory traces. Atten Percept Psychophys 2018; 80:829-843. [PMID: 29427122 DOI: 10.3758/s13414-018-1486-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Humans are very good at remembering large numbers of scenes over substantial periods of time. But how good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1-3, scenes were learned incidentally during visual search for change. In Experiment 4, observers explicitly memorized scenes. At test, after two weeks observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or to detect a newly introduced change in the memorization experiment. Next, they performed a change detection task, usually looking for the same change as in the study period. Scene recognition memory was found to be similar in all experiments, regardless of the study task. In Experiment 1, more difficult change detection produced better scene memory. Experiments 2 and 3 supported a "depth-of-processing" account for the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection was faster during the test phase than during the study phase, even when the observer had no explicit memory of having found that change previously. This result was replicated in two of our three change detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.
40
Liu Q, Ulloa A, Horwitz B. Using a Large-scale Neural Model of Cortical Object Processing to Investigate the Neural Substrate for Managing Multiple Items in Short-term Memory. J Cogn Neurosci 2017; 29:1860-1876. [PMID: 28686137 PMCID: PMC6402487 DOI: 10.1162/jocn_a_01163] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Many cognitive and computational models have been proposed to help understand working memory. In this article, we present a simulation study of cortical processing of visual objects during several working memory tasks using an extended version of a previously constructed large-scale neural model [Tagamets, M. A., & Horwitz, B. Integrating electrophysiological and anatomical experimental data to create a large-scale model that simulates a delayed match-to-sample human brain imaging study. Cerebral Cortex, 8, 310-320, 1998]. The original model consisted of arrays of Wilson-Cowan-type neuronal populations representing primary and secondary visual cortices, inferotemporal (IT) cortex, and pFC. We added a module representing entorhinal cortex, which functions as a gating module. We successfully implemented multiple working memory tasks using the same model and produced neuronal patterns in visual cortex, IT cortex, and pFC that match experimental findings. These working memory tasks can include distractor stimuli or can require that multiple items be retained in mind during a delay period (Sternberg's task). Besides electrophysiology data and behavioral data, we also generated fMRI BOLD time series from our simulation. Our results support the involvement of IT cortex in working memory maintenance and suggest the cortical architecture underlying the neural mechanisms mediating particular working memory tasks. Furthermore, we noticed that, during simulations of memorizing a list of objects, the first and last items in the sequence were recalled best, which may point to the neural mechanism behind the classic primacy and recency effects.
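To give a flavor of the model's building blocks, the sketch below integrates a single Wilson-Cowan excitatory/inhibitory population pair with a transient external drive; the parameter values are textbook-style assumptions, not those of the published large-scale model (Python):

# One Wilson-Cowan E/I unit: tauE * dE/dt = -E + S(wEE*E - wEI*I + P), similarly for I.
import numpy as np

def S(x, a=1.2, theta=2.8):                 # sigmoidal response function
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

wEE, wEI, wIE, wII = 12.0, 4.0, 13.0, 11.0  # coupling weights
tauE, tauI, dt = 10.0, 20.0, 0.1            # time constants and step (ms)
E, I, trace = 0.1, 0.1, []

for step in range(20000):
    P = 1.5 if 5000 <= step < 15000 else 0.0        # external drive during the "stimulus"
    dE = (-E + S(wEE * E - wEI * I + P)) / tauE
    dI = (-I + S(wIE * E - wII * I)) / tauI
    E, I = E + dt * dE, I + dt * dI
    trace.append(E)

print(round(max(trace), 3), round(trace[-1], 3))    # activity during vs. after the drive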
Affiliation(s)
- Qin Liu, Brain Imaging & Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, USA; Physics Department, University of Maryland, College Park, MD, USA
- Antonio Ulloa, Brain Imaging & Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, USA; Neural Bytes LLC, Washington, DC, USA
- Barry Horwitz, Brain Imaging & Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, MD, USA
41
Meaning in learning: Contextual cueing relies on objects' visual features and not on objects' meaning. Mem Cognit 2017; 46:58-67. [PMID: 28770539 DOI: 10.3758/s13421-017-0745-9] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
People easily learn regularities embedded in the environment and utilize them to facilitate visual search. Using images of real-world objects, it has been recently shown that this learning, termed contextual cueing (CC), occurs even in complex, heterogeneous environments, but only when the same distractors are repeated at the same locations. Yet it is not clear what exactly is being learned under these conditions: the visual features of the objects or their meaning. In this study, Experiment 1 demonstrated that meaning is not necessary for this type of learning, as a similar pattern of results was found even when the objects' meaning was largely removed. Experiments 2 and 3 showed that after learning meaningful objects, CC was not diminished by a manipulation that distorted the objects' meaning but preserved most of their visual properties. By contrast, CC was eliminated when the learned objects were replaced with different category exemplars that preserved the objects' meaning but altered their visual properties. Together, these data strongly suggest that the acquired context that facilitates real-world object search relies primarily on the visual properties and the spatial locations of the objects, but not on their meaning.
42
Li CL, Aivar MP, Kit DM, Tong MH, Hayhoe MM. Memory and visual search in naturalistic 2D and 3D environments. J Vis 2017; 16:9. [PMID: 27299769 PMCID: PMC4913723 DOI: 10.1167/16.8.9] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search in both 2D and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D.
43

44
Numerosity estimates for attended and unattended items in visual search. Atten Percept Psychophys 2017; 79:1336-1351. [PMID: 28321798 DOI: 10.3758/s13414-017-1296-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The goal of this research was to examine memories created for the number of items during a visual search task. Participants performed a visual search task for a target defined by a single feature (Experiment 1A), by a conjunction of features (Experiment 1B), or by a specific spatial configuration of features (Experiment 1C). On some trials following the search task, subjects were asked to recall the total number of items in the previous display. In all search types, participants underestimated the total number of items, but the severity of the underestimation varied depending on the efficiency of the search. In three follow-up studies (Experiments 2A, 2B, and 2C) using the same visual stimuli, the participants' only task was to estimate the number of items on each screen. Participants still underestimated the numerosity of the items, although the degree of underestimation was smaller than in the search tasks and did not depend on the type of visual stimuli. In Experiment 3, participants were asked to recall the number of items in a display only once. Subjects still displayed a tendency to underestimate, indicating that the underestimation effects seen in Experiments 1A-1C were not attributable to knowledge of the estimation task. The degree of underestimation depends on the efficiency of the search task, with more severe underestimation in efficient search tasks. This suggests that the lower attentional demands of very efficient searches lead to less encoding of the numerosity of the distractor set.
45
Zhou W, Mo F, Zhang Y, Ding J. Semantic and Syntactic Associations During Word Search Modulate the Relationship Between Attention and Subsequent Memory. The Journal of General Psychology 2017; 144:69-88. [PMID: 28098521 DOI: 10.1080/00221309.2016.1258389] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Two experiments were conducted to investigate how linguistic information influences attention allocation in visual search and memory for words. In Experiment 1, participants searched for the synonym of a cue word among five words. The distractors included one antonym and three unrelated words. In Experiment 2, participants were asked to judge whether the five words presented on the screen comprise a valid sentence. The relationships among words were sentential, semantically related or unrelated. A memory recognition task followed. Results in both experiments showed that linguistically related words produced better memory performance. We also found that there were significant interactions between linguistic relation conditions and memorization on eye-movement measures, indicating that good memory for words relied on frequent and long fixations during search in the unrelated condition but to a much lesser extent in linguistically related conditions. We conclude that semantic and syntactic associations attenuate the link between overt attention allocation and subsequent memory performance, suggesting that linguistic relatedness can somewhat compensate for a relative lack of attention during word search.
Affiliation(s)
- Fei Mo, Capital Normal University
46
Doherty BR, Patai EZ, Duta M, Nobre AC, Scerif G. The functional consequences of social distraction: Attention and memory for complex scenes. Cognition 2017; 158:215-223. [PMID: 27842274 DOI: 10.1016/j.cognition.2016.10.015] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2015] [Revised: 10/12/2016] [Accepted: 10/26/2016] [Indexed: 11/20/2022]
Affiliation(s)
- Brianna Ruth Doherty, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Eva Zita Patai, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; Oxford Centre for Human Brain Activity, University of Oxford, Oxford, United Kingdom
- Mihaela Duta, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Anna Christina Nobre, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; Oxford Centre for Human Brain Activity, University of Oxford, Oxford, United Kingdom
- Gaia Scerif, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
47
Kaunitz LN, Rowe EG, Tsuchiya N. Large Capacity of Conscious Access for Incidental Memories in Natural Scenes. Psychol Sci 2016; 27:1266-77. [PMID: 27507869 DOI: 10.1177/0956797616658869] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2015] [Accepted: 06/17/2016] [Indexed: 11/15/2022] Open
Abstract
When searching a crowd, people can detect a target face only by direct fixation and attention. Once the target is found, it is consciously experienced and remembered, but what is the perceptual fate of the fixated nontarget faces? Whereas introspection suggests that one may remember nontargets, previous studies have proposed that almost no memory should be retained. Using a gaze-contingent paradigm, we asked subjects to visually search for a target face within a crowded natural scene and then tested their memory for nontarget faces, as well as their confidence in those memories. Subjects remembered up to seven fixated, nontarget faces with more than 70% accuracy. Memory accuracy was correlated with trial-by-trial confidence ratings, which implies that the memory was consciously maintained and accessed. When the search scene was inverted, no more than three nontarget faces were remembered. These findings imply that incidental memory for faces, such as those recalled by eyewitnesses, is more reliable than is usually assumed.
Affiliation(s)
- Lisandro N Kaunitz, School of Psychological Sciences, Faculty of Biomedical and Psychological Sciences, Monash University
- Elise G Rowe, School of Psychological Sciences, Faculty of Biomedical and Psychological Sciences, Monash University
- Naotsugu Tsuchiya, School of Psychological Sciences, Faculty of Biomedical and Psychological Sciences, Monash University; Monash Institute of Cognitive and Clinical Neuroscience, Monash University; Decoding and Controlling Brain Information, Japan Science and Technology Agency, Chiyoda-ku, Tokyo, Japan
48
Tenbrink T, Bergmann E, Hertzberg C, Gondorf C. Time will not help unskilled observers to understand a cluttered spatial scene. SPATIAL COGNITION AND COMPUTATION 2016. [DOI: 10.1080/13875868.2016.1143474] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
49

50
Abstract
Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. Although a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes.
Affiliation(s)
- Melissa Le-Hoa Võ, Scene Grammar Lab, Department of Cognitive Psychology, Goethe University Frankfurt, Frankfurt, Germany