1
How does searching for faces among similar-looking distractors affect distractor memory? Mem Cognit 2023. PMID: 36849759. DOI: 10.3758/s13421-023-01405-7.
Abstract
Prior research has shown that searching for multiple targets in a visual search task enhances distractor memory in a subsequent recognition test. Three non-mutually exclusive accounts have been offered to explain this phenomenon. The mental comparison hypothesis states that searching for multiple targets requires participants to make more mental comparisons between the targets and the distractors, which enhances distractor memory. The attention allocation hypothesis states that participants allocate more attention to distractors because a multiple-target search cue leads them to expect a more difficult search. Finally, the partial match hypothesis states that searching for multiple targets increases the amount of featural overlap between targets and distractors, which necessitates greater attention in order to reject each distractor. In two experiments, we examined these hypotheses by manipulating visual working memory (VWM) load and target-distractor similarity of AI-generated faces in a visual search (i.e., RSVP) task. Distractor similarity was manipulated using a multidimensional scaling model constructed from facial landmarks and other metadata of each face. In both experiments, distractors from multiple-target searches were recognized better than distractors from single-target searches. Experiment 2 additionally revealed that increased target-distractor similarity during search improved distractor recognition memory, consistent with the partial match hypothesis.
2
Marian V, Hayakawa S, Schroeder SR. Memory after visual search: Overlapping phonology, shared meaning, and bilingual experience influence what we remember. Brain Lang 2021; 222:105012. PMID: 34464828. PMCID: PMC8554070. DOI: 10.1016/j.bandl.2021.105012.
Abstract
How we remember the things that we see can be shaped by our prior experiences. Here, we examine how linguistic and sensory experiences interact to influence visual memory. Objects in a visual search that shared phonology (cat-cast) or semantics (dog-fox) with a target were later remembered better than unrelated items. Phonological overlap had a greater influence on memory when targets were cued by spoken words, while semantic overlap had a greater effect when targets were cued by characteristic sounds. The influence of overlap on memory varied as a function of individual differences in language experience: greater bilingual experience was associated with a decreased impact of overlap on memory. We conclude that phonological and semantic features of objects influence memory differently depending on individual differences in language experience, guiding not only what we initially look at, but also what we later remember.
Affiliation(s)
- Viorica Marian
- Department of Communication Sciences and Disorders, Northwestern University, 2240 North Campus Drive, Evanston, IL 60208, United States
- Sayuri Hayakawa
- Department of Communication Sciences and Disorders, Northwestern University, 2240 North Campus Drive, Evanston, IL 60208, United States
- Scott R Schroeder
- Department of Speech-Language-Hearing Sciences, Hofstra University, Hempstead, NY 11549, United States
3
The detail is in the difficulty: Challenging search facilitates rich incidental object encoding. Mem Cognit 2020; 48:1214-1233. PMID: 32562249. DOI: 10.3758/s13421-020-01051-3.
Abstract
When searching for objects in the environment, observers necessarily encounter other, nontarget, objects. Despite their irrelevance for search, observers often incidentally encode the details of these objects, an effect that is exaggerated as the search task becomes more challenging. Although it is well established that searchers create incidental memories for targets, less is known about the fidelity with which nontargets are remembered. Do observers store richly detailed representations of nontargets, or are these memories characterized by gist-level detail, containing only the information necessary to reject the item as a nontarget? We addressed this question across two experiments in which observers completed multiple-target (one to four potential targets) searches, followed by surprise alternative forced-choice (AFC) recognition tests for all encountered objects. To assess the detail of incidentally stored memories, we used similarity rankings derived from multidimensional scaling to manipulate the perceptual similarity across objects in 4-AFC (Experiment 1a) and 16-AFC (Experiments 1b and 2) tests. Replicating prior work, observers recognized more nontarget objects encountered during challenging, relative to easier, searches. More importantly, AFC results revealed that observers stored more than gist-level detail: When search objects were not recognized, observers systematically chose lures with higher perceptual similarity, reflecting partial encoding of the search object's perceptual features. Further, similarity effects increased with search difficulty, revealing that incidental memories for visual search objects are sharpened when the search task requires greater attentional processing.
4
Flexible attention allocation dynamically impacts incidental encoding in prospective memory. Mem Cognit 2022; 50:112-128. PMID: 34184211. DOI: 10.3758/s13421-021-01199-6.
Abstract
Remembering to fulfill an intention at a later time often requires people to monitor the environment for cues that it is time to act. This monitoring involves the strategic allocation of attentional resources, ramping attention up more in some contexts than others. In addition to interfering with ongoing task performance, flexibly shifting attention may affect whether task-irrelevant information is later remembered. In the present investigation, we manipulated contextual expectations in event-related prospective memory (PM) to examine the consequences of flexible attention allocation on incidental memory. Across two experiments, participants completed a color-matching task while monitoring for ill-defined (Experiment 1) or specific (Experiment 2) PM targets. To manipulate contextual expectations, some participants were explicitly told about the trial types in which PM targets could (or not) appear, while others were given less precise or no expectations. Across experiments, participants' color-matching decisions were slower in high-expectation trials, relative to trials when targets were not expected. Additionally, participants had better incidental memory for PM-irrelevant items from high-expectation trials, but only when they received explicit contextual expectations. These results confirm that participants flexibly allocate attention based on explicit trial-by-trial expectations. Furthermore, the present study indicates that greater attention to item identity yields better incidental memory even for PM-irrelevant items, irrespective of processing time.
5
Williams CC. Looking for your keys: The interaction of attention, memory, and eye movements in visual search. Psychology of Learning and Motivation 2020. DOI: 10.1016/bs.plm.2020.06.003.
6
Guevara Pinto JD, Papesh MH. Incidental memory following rapid object processing: The role of attention allocation strategies. J Exp Psychol Hum Percept Perform 2019; 45:1174-1190. PMID: 31219283. PMCID: PMC7202240. DOI: 10.1037/xhp0000664.
Abstract
When observers search for multiple (rather than singular) targets, they are slower and less accurate, yet have better incidental memory for nontarget items encountered during the task (Hout & Goldinger, 2010). One explanation for this may be that observers titrate their attention allocation based on the expected difficulty suggested by search cues. Difficult search cues may implicitly encourage observers to narrow their attention, simultaneously enhancing distractor encoding and hindering peripheral processing. Across three experiments, we manipulated the difficulty of search cues preceding passive visual search for real-world objects, using a Rapid Serial Visual Presentation (RSVP) task to equate item exposure durations. In all experiments, incidental memory was enhanced for distractors encountered while participants monitored for difficult targets. Moreover, on key trials, peripheral shapes appeared at varying eccentricities off center, allowing us to infer the spread and precision of participants' attentional windows. Peripheral item detection and identification decreased when search cues were difficult, even when the peripheral items appeared before targets. These results were not an artifact of sustained vigilance on miss trials, but instead reflect top-down modulation of attention allocation based on task demands. Implications for individual differences are discussed.
7
Wiegand I, Wolfe JM. Age doesn't matter much: Hybrid visual and memory search is preserved in older adults. Aging, Neuropsychology, and Cognition 2020; 27:220-253. PMID: 31050319. DOI: 10.1080/13825585.2019.1604941.
Abstract
We tested younger and older observers' attention and long-term memory functions in a "hybrid search" task, in which observers look through visual displays for instances of any of several types of targets held in memory. Apart from a general slowing, search efficiency did not change with age. In both age groups, reaction times increased linearly with the visual set size and logarithmically with the memory set size, with similar relative costs of increasing load (Experiment 1). We replicated this finding and further showed that performance remained comparable between age groups when familiarity cues were made irrelevant (Experiment 2) and when target-context associations had to be retrieved (Experiment 3). Our findings are at variance with theories of cognitive aging that propose age-specific deficits in attention and memory. As hybrid search resembles many real-world search tasks, our results may help improve the ecological validity of assessments of age-related cognitive decline.
Affiliation(s)
- Iris Wiegand
- Visual Attention Lab, Brigham & Women's Hospital, Cambridge, MA, USA; Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany; Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
- Jeremy M Wolfe
- Visual Attention Lab, Brigham & Women's Hospital, Cambridge, MA, USA; Departments of Ophthalmology & Radiology, Harvard Medical School, Boston, MA, USA
8
Draschkow D, Reinecke S, Cunningham CA, Võ MLH. The lower bounds of massive memory: Investigating memory for object details after incidental encoding. Q J Exp Psychol (Hove) 2019; 72:1176-1182. DOI: 10.1177/1747021818783722.
Abstract
Visual long-term memory capacity appears massive and detailed when probed explicitly. In the real world, however, memories are usually built from chance encounters. Therefore, we investigated the capacity and detail of incidental memory in a novel encoding task, instructing participants to detect visually distorted objects among intact objects. In a subsequent surprise recognition memory test, lures of a novel category, another exemplar, the same object in a different state, or exactly the same object were presented. Lure recognition performance was above chance, suggesting that incidental encoding resulted in reliable memory formation. Critically, performance for state lures was worse than for exemplars, which was driven by a greater similarity of state as opposed to exemplar foils to the original objects. Our results indicate that incidentally generated visual long-term memory representations of isolated objects are more limited in detail than recently suggested.
Affiliation(s)
- Dejan Draschkow
- Department of Psychology, Goethe University, Frankfurt am Main, Germany
- Saliha Reinecke
- Department of Psychology, Goethe University, Frankfurt am Main, Germany
- Corbin A Cunningham
- Department of Psychological and Brain Sciences, Johns Hopkins University, Baltimore, MD, USA
- Melissa L-H Võ
- Department of Psychology, Goethe University, Frankfurt am Main, Germany
9
Visual search for changes in scenes creates long-term, incidental memory traces. Atten Percept Psychophys 2018; 80:829-843. PMID: 29427122. DOI: 10.3758/s13414-018-1486-y.
Abstract
Humans are very good at remembering large numbers of scenes over substantial periods of time. But how good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1-3, scenes were learned incidentally during visual search for change. In Experiment 4, observers explicitly memorized scenes. At test, after two weeks observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or to detect a newly introduced change in the memorization experiment. Next, they performed a change detection task, usually looking for the same change as in the study period. Scene recognition memory was found to be similar in all experiments, regardless of the study task. In Experiment 1, more difficult change detection produced better scene memory. Experiments 2 and 3 supported a "depth-of-processing" account for the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection was faster during the test phase than during the study phase, even when the observer had no explicit memory of having found that change previously. This result was replicated in two of our three change detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.