1. Saltzmann SM, Eich B, Moen KC, Beck MR. Activated long-term memory and visual working memory during hybrid visual search: Effects on target memory search and distractor memory. Mem Cognit 2024. PMID: 38528298. DOI: 10.3758/s13421-024-01556-1.
Abstract
In hybrid visual search, observers must maintain multiple target templates and subsequently search for any one of those targets. If the number of potential target templates exceeds visual working memory (VWM) capacity, then the target templates are assumed to be maintained in activated long-term memory (aLTM). Observers must search the array for potential targets (visual search), as well as search through memory (target memory search). Increasing the target memory set size reduces accuracy, increases search response times (RT), and increases dwell time on distractors. However, the extent of observers' memory for distractors during hybrid search is largely unknown. In the current study, the impact of hybrid search on target memory search (measured by dwell time on distractors, false alarms, and misses) and distractor memory (measured by distractor revisits and recognition memory of recently viewed distractors) was measured. Specifically, we aimed to better understand how changes in behavior during hybrid search impact distractor memory. Increased target memory set size led to an increase in search RTs, distractor dwell times, false alarms, and target identification misses. Increasing target memory set size increased revisits to distractors, suggesting impaired distractor location memory, but had no effect on a two-alternative forced-choice (2AFC) distractor recognition memory test presented during the search trial. The results from the current study suggest a lack of interference between the memory stores maintaining target template representations (aLTM) and distractor information (VWM): loading aLTM with more target templates does not impact VWM for distracting information.
Affiliation(s)
- Stephanie M Saltzmann
- Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA, 70803, USA
- Brandon Eich
- Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA, 70803, USA
- Katherine C Moen
- Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA, 70803, USA
- Department of Psychology, University of Nebraska at Kearney, 2504 9th Ave, Kearney, NE, 68849, USA
- Melissa R Beck
- Department of Psychology, Louisiana State University, 236 Audubon Hall, Baton Rouge, LA, 70803, USA.
2. Zsido AN, Hout MC, Hernandez M, White B, Polák J, Kiss BL, Godwin HJ. No evidence of attentional prioritization for threatening targets in visual search. Sci Rep 2024; 14:5651. PMID: 38454142. PMCID: PMC10920919. DOI: 10.1038/s41598-024-56265-1.
Abstract
Throughout human evolutionary history, snakes have been associated with danger and threat. Research has shown that snakes are prioritized by our attentional system, despite many of us rarely encountering them in our daily lives. We conducted two high-powered, pre-registered experiments (total N = 224) manipulating target prevalence to understand this heightened prioritization of threatening targets. Target prevalence refers to the proportion of trials wherein a target is presented; reductions in prevalence consistently reduce the likelihood that targets will be found. We reasoned that snake targets in visual search should experience weaker effects of low target prevalence compared to non-threatening targets (rabbits) because they should be prioritized by searchers despite appearing rarely. In both experiments, we found evidence of classic prevalence effects but, in contrast to prior work, we also found that search for threatening targets was slower and less accurate than for nonthreatening targets. This surprising result is possibly due to methodological issues common in prior studies, including comparatively smaller sample sizes, fewer trials, and a tendency to exclusively examine conditions of relatively high prevalence. Our findings call into question accounts of threat prioritization and suggest that prior attention findings may be constrained to a narrow range of circumstances.
Affiliation(s)
- Andras N Zsido
- Institute of Psychology, University of Pécs, 6 Ifjusag Street, Pécs, 7624, Baranya, Hungary.
- Szentágothai Research Centre, University of Pécs, Pécs, Hungary.
- Michael C Hout
- Department of Psychology, New Mexico State University, Las Cruces, USA
- Marko Hernandez
- Department of Psychology, New Mexico State University, Las Cruces, USA
- Bryan White
- Department of Psychology, New Mexico State University, Las Cruces, USA
- Jakub Polák
- Department of Economy and Management, Ambis University, Prague, Czech Republic
- Faculty of Science, Charles University, Prague, Czech Republic
- Botond L Kiss
- Institute of Psychology, University of Pécs, 6 Ifjusag Street, Pécs, 7624, Baranya, Hungary
- Hayward J Godwin
- School of Psychology, University of Southampton, Southampton, UK
3. Moriya J. Long-term memory for distractors: Effects of involuntary attention from working memory. Mem Cognit 2024; 52:401-416. PMID: 37768481. DOI: 10.3758/s13421-023-01469-5.
Abstract
In a visual search task, attention to task-irrelevant distractors impedes search performance. However, is it maladaptive for future performance? Here, I show that distractors attended during a visual search task were better remembered in long-term memory (LTM) in a subsequent surprise recognition task than non-attended distractors. In four experiments, participants performed a visual search task using real-world objects, each presented in a single color. They encoded a color in working memory (WM) during the task; because each object had a different color, participants directed their attention to the distractor whose color matched the contents of WM. Then, in the surprise recognition task, participants were required to indicate whether an object had been shown in the earlier visual search task, regardless of its color. The results showed that attended distractors were remembered better in LTM than non-attended distractors (Experiments 1 and 2). Moreover, the more participants directed their attention to distractors, the better they explicitly remembered them. Participants did not explicitly remember the color of the attended distractors (Experiment 3) but did remember the integrated object and color information (Experiment 4): when a distractor's color in the recognition task mismatched its color in the visual search task, LTM decreased compared to color-matched distractors. These results suggest that attention to distractors impairs search for a target but helps in remembering the distractors in LTM. When task-irrelevant distractors become task-relevant in the future, attention to them becomes beneficial.
Affiliation(s)
- Jun Moriya
- Faculty of Sociology, Kansai University, 3-3-35 Yamate-cho, Suita-shi, Osaka, Japan.
4. Cárdenas-Miller N, O'Donnell RE, Tam J, Wyble B. Surprise! Draw the scene: Visual recall reveals poor incidental working memory following visual search in natural scenes. Mem Cognit 2023. PMID: 37770695. DOI: 10.3758/s13421-023-01465-9.
Abstract
Searching within natural scenes can induce incidental encoding of information about the scene and the target, particularly when the scene is complex or repeated. However, recent evidence from attribute amnesia (AA) suggests that in some situations, searchers can find a target without building a robust incidental memory of its task relevant features. Through drawing-based visual recall and an AA search task, we investigated whether search in natural scenes necessitates memory encoding. Participants repeatedly searched for and located an easily detected item in novel scenes for numerous trials before being unexpectedly prompted to draw either the entire scene (Experiment 1) or their search target (Experiment 2) directly after viewing the search image. Naïve raters assessed the similarity of the drawings to the original information. We found that surprise-trial drawings of the scene and search target were both poorly recognizable, but the same drawers produced highly recognizable drawings on the next trial when they had an expectation to draw the image. Experiment 3 further showed that the poor surprise trial memory could not merely be attributed to interference from the surprising event. Our findings suggest that even for searches done in natural scenes, it is possible to locate a target without creating a robust memory of either it or the scene it was in, even if attended to just a few seconds prior. This disconnection between attention and memory might reflect a fundamental property of cognitive computations designed to optimize task performance and minimize resource use.
Affiliation(s)
- Ryan E O'Donnell
- Pennsylvania State University, University Park, PA, USA
- Drexel University, Philadelphia, PA, USA
- Joyce Tam
- Pennsylvania State University, University Park, PA, USA
- Brad Wyble
- Pennsylvania State University, University Park, PA, USA.
5. How does searching for faces among similar-looking distractors affect distractor memory? Mem Cognit 2023. PMID: 36849759. DOI: 10.3758/s13421-023-01405-7.
Abstract
Prior research has shown that searching for multiple targets in a visual search task enhances distractor memory in a subsequent recognition test. Three non-mutually exclusive accounts have been offered to explain this phenomenon. The mental comparison hypothesis states that searching for multiple targets requires participants to make more mental comparisons between the targets and the distractors, which enhances distractor memory. The attention allocation hypothesis states that participants allocate more attention to distractors because a multiple-target search cue leads them to expect a more difficult search. Finally, the partial match hypothesis states that searching for multiple targets increases the amount of featural overlap between targets and distractors, which necessitates greater attention in order to reject each distractor. In two experiments, we examined these hypotheses by manipulating visual working memory (VWM) load and target-distractor similarity of AI-generated faces in a visual search (i.e., RSVP) task. Distractor similarity was manipulated using a multidimensional scaling model constructed from facial landmarks and other metadata of each face. In both experiments, distractors from multiple-target searches were recognized better than distractors from single-target searches. Experiment 2 additionally revealed that increased target-distractor similarity during search improved distractor recognition memory, consistent with the partial match hypothesis.
6. Zhang Q, Luo C, Ngetich R, Zhang J, Jin Z, Li L. Visual Selective Attention P300 Source in Frontal-Parietal Lobe: ERP and fMRI Study. Brain Topogr 2022; 35:636-650. PMID: 36178537. DOI: 10.1007/s10548-022-00916-x.
Abstract
Visual selective attention can be divided into bottom-up and top-down attention, and different selective attention tasks rely on different modes of attentional control: the pop-out task requires more bottom-up attention, whereas the search task involves more top-down attention. The P300, a positive potential generated by the brain 300-600 ms after stimulus onset, reflects attentional processing, but there is no consensus on its source. The aim of the present study was to localize the source of the P300 elicited by different forms of visual selective attention. We recorded the P300 elicited by pop-out and search tasks in thirteen participants using event-related potentials (ERPs), and we measured the brain regions activated by the two tasks in twenty-six participants using functional magnetic resonance imaging (fMRI). We then analyzed the sources of the P300 by integrating the ERP and fMRI data, combining high temporal and high spatial resolution. The ERP results indicated that the pop-out task elicited a larger P300 than the search task. The P300 elicited by both tasks was distributed over the frontal and parietal lobes, with the P300 induced by the pop-out task concentrated at the parietal lobe and that induced by the search task at the frontal lobe. Integrated ERP and fMRI analysis showed that the neural sources of the P300 difference were the right precentral gyrus, left superior frontal gyrus (medial orbital), left middle temporal gyrus, left rolandic operculum, right postcentral gyrus, and left angular gyrus. Our study suggests that the frontal and parietal lobes contribute to the P300 component of visual selective attention.
Affiliation(s)
- Qiuzhu Zhang
- MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Cimei Luo
- MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Ronald Ngetich
- MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Junjun Zhang
- MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Zhenlan Jin
- MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Ling Li
- MOE Key Lab for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China.
7. Marian V, Hayakawa S, Schroeder SR. Memory after visual search: Overlapping phonology, shared meaning, and bilingual experience influence what we remember. Brain Lang 2021; 222:105012. PMID: 34464828. PMCID: PMC8554070. DOI: 10.1016/j.bandl.2021.105012.
Abstract
How we remember the things that we see can be shaped by our prior experiences. Here, we examine how linguistic and sensory experiences interact to influence visual memory. Objects in a visual search that shared phonology (cat-cast) or semantics (dog-fox) with a target were later remembered better than unrelated items. Phonological overlap had a greater influence on memory when targets were cued by spoken words, while semantic overlap had a greater effect when targets were cued by characteristic sounds. The influence of overlap on memory varied as a function of individual differences in language experience: greater bilingual experience was associated with a decreased impact of overlap on memory. We conclude that phonological and semantic features of objects influence memory differently depending on individual differences in language experience, guiding not only what we initially look at, but also what we later remember.
Affiliation(s)
- Viorica Marian
- Department of Communication Sciences and Disorders, Northwestern University, 2240 North Campus Drive, Evanston, IL 60208, United States
- Sayuri Hayakawa
- Department of Communication Sciences and Disorders, Northwestern University, 2240 North Campus Drive, Evanston, IL 60208, United States
- Scott R Schroeder
- Department of Speech, Language, Hearing Sciences, Hofstra University, 110, Hempstead, NY 11549, United States
8. Flexible attention allocation dynamically impacts incidental encoding in prospective memory. Mem Cognit 2021; 50:112-128. PMID: 34184211. DOI: 10.3758/s13421-021-01199-6.
Abstract
Remembering to fulfill an intention at a later time often requires people to monitor the environment for cues that it is time to act. This monitoring involves the strategic allocation of attentional resources, ramping attention up more in some contexts than others. In addition to interfering with ongoing task performance, flexibly shifting attention may affect whether task-irrelevant information is later remembered. In the present investigation, we manipulated contextual expectations in event-related prospective memory (PM) to examine the consequences of flexible attention allocation on incidental memory. Across two experiments, participants completed a color-matching task while monitoring for ill-defined (Experiment 1) or specific (Experiment 2) PM targets. To manipulate contextual expectations, some participants were explicitly told about the trial types in which PM targets could (or not) appear, while others were given less precise or no expectations. Across experiments, participants' color-matching decisions were slower in high-expectation trials, relative to trials when targets were not expected. Additionally, participants had better incidental memory for PM-irrelevant items from high-expectation trials, but only when they received explicit contextual expectations. These results confirm that participants flexibly allocate attention based on explicit trial-by-trial expectations. Furthermore, the present study indicates that greater attention to item identity yields better incidental memory even for PM-irrelevant items, irrespective of processing time.
9. Kristjánsson Á, Draschkow D. Keeping it real: Looking beyond capacity limits in visual cognition. Atten Percept Psychophys 2021; 83:1375-1390. PMID: 33791942. PMCID: PMC8084831. DOI: 10.3758/s13414-021-02256-7.
Abstract
Research within visual cognition has made tremendous strides in uncovering the basic operating characteristics of the visual system by reducing the complexity of natural vision to artificial but well-controlled experimental tasks and stimuli. This reductionist approach has, for example, been used to assess the basic limitations of visual attention, visual working memory (VWM) capacity, and the fidelity of visual long-term memory (VLTM). The assessment of these limits is usually made in a pure sense, irrespective of goals, actions, and priors. While it is important to map out the bottlenecks our visual system faces, we focus here on selected examples of how such limitations can be overcome. Recent findings suggest that during more natural tasks, capacity may be higher than reductionist research suggests and that separable systems subserve different actions, such as reaching and looking, which might provide important insights about how pure attentional or memory limitations could be circumvented. We also review evidence suggesting that the closer we get to naturalistic behavior, the more we encounter implicit learning mechanisms that operate "for free" and "on the fly." These mechanisms provide a surprisingly rich visual experience, which can support capacity-limited systems. We speculate whether natural tasks may yield different estimates of the limitations of VWM, VLTM, and attention, and propose that capacity measurements should also pass the real-world test within naturalistic frameworks. Our review highlights various approaches for this and suggests that our understanding of visual cognition will benefit from incorporating the complexities of real-world cognition in experimental approaches.
Affiliation(s)
- Árni Kristjánsson
- School of Health Sciences, University of Iceland, Reykjavík, Iceland.
- School of Psychology, National Research University Higher School of Economics, Moscow, Russia.
- Dejan Draschkow
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK.
10. Lavelle M, Alonso D, Luria R, Drew T. Visual working memory load plays limited, to no role in encoding distractor objects during visual search. Vis Cogn 2021. DOI: 10.1080/13506285.2021.1914256.
Affiliation(s)
- Mark Lavelle
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- David Alonso
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Roy Luria
- The School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Trafton Drew
- Department of Psychology, University of Utah, Salt Lake City, UT, USA