1
Chen W, Ye S, Ding X, Shen M, Gao Z. Selectively maintaining an object's feature in visual working memory: A comparison between highly discriminable and fine-grained features. Mem Cognit 2024. [PMID: 39048836] [DOI: 10.3758/s13421-024-01612-w]
Abstract
Selectively maintaining information is an essential function of visual working memory (VWM). Recent VWM studies have mainly focused on selective maintenance of objects, leaving the mechanisms of selectively maintaining an object's feature in VWM unknown. Based on the interactive model of perception and VWM, we hypothesized that there are distinct selective maintenance mechanisms for objects containing fine-grained features versus objects containing highly discriminable features. To test this hypothesis, we first required participants to memorize a dual-feature object (colored simple shapes vs. colored polygons), and informed them about the target feature via a retro-cue. Then a visual search task was added to examine the fate of the irrelevant feature. The selective maintenance of an object's feature predicted that the irrelevant feature should be removed from the active state of VWM and should not capture attention when presented as a distractor in the visual search task. We found that irrelevant simple shapes impaired performance in the visual search task (Experiment 1). However, irrelevant polygons did not affect visual search performance (Experiment 2), and this could not be explained by decay of polygons (Experiment 3) or by polygons not capturing attention (Experiment 4). These findings suggest that VWM adopts dissociable mechanisms to selectively maintain an object's feature, depending on the feature's perceptual characteristics.
Affiliation(s)
- Wei Chen
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
- Shujuan Ye
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
- Xiaowei Ding
- Department of Psychology, Sun Yat-sen University, Guangzhou, China.
- Mowei Shen
- Department of Psychology and Behavioral Sciences, Zhejiang University, Zhejiang, China.
- Zaifeng Gao
- Department of Psychology and Behavioral Sciences, Zhejiang University, Zhejiang, China.
2
Zheng Y, Lou J, Lu Y, Li Z. Multiple visual items can be simultaneously compared with target templates in memory. Atten Percept Psychophys 2024; 86:1641-1652. [PMID: 38839716] [DOI: 10.3758/s13414-024-02906-6]
Abstract
When we search for something, we often rely on both what we see and what we remember. This process can be divided into three stages: selecting items, identifying those items, and comparing them with what we are trying to find in our memory. It has been suggested that we select items one by one, but that we can identify several items at once. In the present study, we tested whether we need to finish comparing a selected item in the visual display with one or more target templates in memory before we can move on to the next selected item. In Experiment 1, observers looked for either one or two target types in a rapid serial stream of stimuli. The time interval between the onsets of successive items in the stream was varied to obtain a threshold. When searching for one target, the threshold was 89 ms; when searching for either of two targets, it was 192 ms. This threshold difference provided a baseline. In Experiment 2, observers looked for one or two types of target in a search array. If they compared each identified item separately, we should expect a jump in the slope of the RT × Set Size function on the order of the baseline obtained in Experiment 1. However, the slope difference was only 13 ms/item, suggesting that several identified items can be compared at once with target templates in memory. Experiment 3 showed that this slope difference was not just a memory-load cost.
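To make the slope comparison concrete, the sketch below restates the logic in Python using only the numbers quoted above; the assumption that a strictly serial comparison process would add roughly the full baseline cost for each additional item is the prediction under test, not a result.

```python
# Sketch of the serial-comparison prediction tested in Experiment 2.
# Numbers are taken from the abstract; mapping the full threshold difference
# onto the search slope is an illustrative assumption about strict seriality.

one_template_threshold = 89    # ms/item when searching for one target type (Exp 1)
two_template_threshold = 192   # ms/item when searching for two target types (Exp 1)
baseline_cost = two_template_threshold - one_template_threshold  # ~103 ms per item

observed_slope_difference = 13  # ms/item increase in the RT x Set Size slope (Exp 2)

# If each identified item had to finish its comparison against both templates
# before the next item could proceed, the slope increase should approach the
# baseline cost rather than the much smaller observed value.
print(f"Predicted serial cost: ~{baseline_cost} ms/item")
print(f"Observed cost:          {observed_slope_difference} ms/item")
```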
Affiliation(s)
- Yujie Zheng
- Department of Psychology and Behavioral Sciences, Zhejiang University, 148 Tian Mu Shan Road, Hangzhou, 310007, People's Republic of China
- Jiafei Lou
- Department of Psychology and Behavioral Sciences, Zhejiang University, 148 Tian Mu Shan Road, Hangzhou, 310007, People's Republic of China
- Yunrong Lu
- Department of Psychiatry, The Fourth Affiliated Hospital, Zhejiang University School of Medicine and International Institutes of Medicine of Zhejiang University, Yiwu, 322000, People's Republic of China.
- Department of Psychiatry, The Second Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, People's Republic of China.
- Zhi Li
- Department of Psychology and Behavioral Sciences, Zhejiang University, 148 Tian Mu Shan Road, Hangzhou, 310007, People's Republic of China.
- Department of Psychiatry, The Fourth Affiliated Hospital, Zhejiang University School of Medicine and International Institutes of Medicine of Zhejiang University, Yiwu, 322000, People's Republic of China.
3
Shi Y, Zhang Y. Reliability and validity of a novel attention assessment scale (broken ring enVision search test) in the Chinese population. Front Psychol 2024; 15:1375326. [PMID: 38784625] [PMCID: PMC11111916] [DOI: 10.3389/fpsyg.2024.1375326]
Abstract
Background: The correct assessment of attentional function is key to cognitive research. A new attention assessment scale, the Broken Ring enVision Search Test (BReViS), had not previously been validated in China. The purpose of this study was to assess the reliability and validity of the BReViS in the Chinese population. Methods: From July to October 2023, 100 healthy residents of Changzhou were selected and administered the BReViS, Digit Cancellation Test (D-CAT), Symbol Digit Modalities Test (SDMT), and Digit Span Test (DST). Thirty individuals were randomly chosen to undergo the BReViS twice for test-retest reliability assessment. Correlation analyses were conducted between age, education level, gender, and the BReViS sub-tests: Selective Attention (SA), Orientation of Attention (OA), Focal Attention (FA), and Total Errors (Err). Intergroup comparisons and multiple linear regression analyses were performed, as were correlation analyses among the BReViS sub-tests and with the other attention tests. Results: The correlation coefficients of the BReViS sub-tests (except FA) between the two test sessions were greater than 0.600 (p < 0.001), indicating good test-retest reliability. Cronbach's alpha was 0.874, indicating high internal consistency. SA showed a significant negative correlation with the net score of the D-CAT (r = -0.405, p < 0.001) and a significant positive correlation with the D-CAT error rate (r = 0.401, p < 0.001), demonstrating good criterion-related validity. Among the sub-tests, SA correlated with Err at r = 0.532 (p < 0.001) and OA correlated with Err at r = -0.229 (p < 0.05), whereas SA, OA, and FA were not significantly correlated with one another, indicating good content and structural validity. Both SA and Err were significantly correlated with age and years of education, while gender was significantly correlated with OA and Err. Multiple linear regression suggested that Err was mainly affected by age and gender. The above indexes differed significantly across age, education, and gender groups. Correlation analyses with the other attention tests revealed that SA correlated negatively with DST forward and backward scores and with SDMT scores, and that Err correlated positively with D-CAT net scores and negatively with the D-CAT error rate, DST forward and backward scores, and SDMT scores. OA and FA showed no significant correlation with the other attention tests. Conclusion: The BReViS test demonstrates good reliability and validity, assessing not only selective attention but also immediate memory, information processing speed, visual scanning, and hand-eye coordination. Its results are influenced by demographic variables such as age, gender, and education level.
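For readers unfamiliar with the reliability statistics reported here, the following Python sketch shows how a test-retest correlation and Cronbach's alpha are typically computed; the data, seed, and sub-score structure are hypothetical and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sub-score matrix: 100 participants x 4 correlated BReViS-like indexes.
base = rng.normal(size=(100, 1))
scores = base + rng.normal(scale=0.7, size=(100, 4))

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency for a participants x items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Test-retest reliability: correlate session-1 and session-2 scores for one index.
session1 = rng.normal(size=30)
session2 = session1 + rng.normal(scale=0.5, size=30)
retest_r = np.corrcoef(session1, session2)[0, 1]

print(f"Cronbach's alpha: {cronbach_alpha(scores):.3f}")
print(f"Test-retest r:    {retest_r:.3f}")
```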
Affiliation(s)
- Yi Zhang
- Department of Rehabilitation Medicine, Third Affiliated Hospital of Soochow University, Changzhou, China
4
Broken Ring enVision Search (BReViS): A New Clinical Test of Attention to Assess the Effect of Layout and Crowding on Visual Search. Brain Sci 2023; 13:494. [PMID: 36979304] [PMCID: PMC10046675] [DOI: 10.3390/brainsci13030494]
Abstract
The assessment of attention in neuropsychological patients can be performed with visual search tests. The Broken Ring enVision Search test (BReViS), proposed here, is a novel open-access paper-and-pencil tool in which layout and crowding are varied across four cards. These manipulations allow the assessment of different components of attention: a selective component, the visuo-spatial orientation of attention, and focal attention, which is involved in the crowding phenomenon. Our purpose was to determine the characteristics of the BReViS test, provide specific normative data, and assess these components across the lifespan. The test was administered to a sample of 550 participants aged between 20 and 79 years and to a series of patients. Three indexes targeting different components of visuo-spatial attention (selective attention, strategic orientation of visual attention, and focal attention) were obtained by combining execution times and accuracy, together with the total errors. The results showed that age, education, and gender influenced, in different combinations, the four indexes, for which specific norms were developed. Regression-based norms are provided as percentiles and equivalent scores. All patients showed pathological scores and specific patterns of attentional deficits. The BReViS test proved to be a free, easy-to-use, and valuable tool for assessing attentional deficits in neuropsychological patients in the clinical environment.
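As an illustration of how regression-based norms of this kind are generally constructed, the sketch below fits a linear model of an index score on demographics and scores an individual by the percentile of their residual; the variables, coefficients, and cut-offs are invented for the example and do not reproduce the published norms.

```python
import numpy as np

# Hypothetical normative sample: index score modeled from age and education.
rng = np.random.default_rng(3)
n = 550
age = rng.uniform(20, 79, n)
education = rng.uniform(5, 20, n)
raw_index = 50 - 0.2 * age + 0.8 * education + rng.normal(0, 5, n)

# Fit a linear model: raw_index ~ age + education.
X = np.column_stack([np.ones(n), age, education])
beta, *_ = np.linalg.lstsq(X, raw_index, rcond=None)
residuals = raw_index - X @ beta

def adjusted_percentile(age_i, edu_i, raw_i):
    """Percentile of an individual's demographically adjusted score."""
    expected = beta @ np.array([1.0, age_i, edu_i])
    return (residuals < (raw_i - expected)).mean() * 100

print(f"Adjusted percentile for a 70-year-old with 8 years of education scoring 40: "
      f"{adjusted_percentile(70, 8, 40):.1f}")
```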
5
Inhibition of return as a foraging facilitator in visual search: Evidence from long-term training. Atten Percept Psychophys 2023; 85:88-98. [PMID: 36380146] [DOI: 10.3758/s13414-022-02605-0]
Abstract
Inhibition of return (IOR) discourages visual attention from returning to previously attended locations and has been theorized as a mechanism that facilitates foraging in visual search through inhibitory tagging of inspected items. Previous studies using visual search and probe-detection tasks (i.e., the probe-following-search paradigm) found longer reaction times (RTs) for probes appearing at searched locations than for probes appearing at novel locations. This IOR effect was stronger in serial than in parallel search, favoring the foraging facilitator hypothesis. However, evidence for this hypothesis was still lacking because no attempt had been made to study how IOR changes when search efficiency gradually improves. The current study employed the probe-following-search paradigm and long-term training to examine how IOR varied as search efficiency improved across training days. According to the foraging facilitator hypothesis, inhibitory tagging is an after-effect of attentional engagement. Therefore, when attentional engagement in a visual search task is reduced via long-term training, the strength of inhibitory tagging should decrease, predicting a reduced IOR effect. Consistent with this prediction, two experiments showed that IOR decreased while search efficiency improved through training, although IOR reached floor more quickly than search efficiency. These findings support the notion that IOR facilitates search performance via stronger inhibitory tagging in more difficult visual search.
6
Valdois S. The visual-attention span deficit in developmental dyslexia: Review of evidence for a visual-attention-based deficit. Dyslexia 2022; 28:397-415. [PMID: 35903834] [DOI: 10.1002/dys.1724]
Abstract
The visual attention span (VAS) deficit hypothesis in developmental dyslexia posits that a subset of dyslexic individuals shows a multielement parallel processing deficit due to reduced visual attention capacity. However, the attention-based interpretation of poor performance on VAS tasks is hotly debated. The purpose of the present paper is to clarify this issue through a critical review of relevant behavioural and neurobiological findings. We first examine the plausibility of alternative verbal interpretations of VAS performance, evaluating whether performance on VAS tasks might reflect verbal short-term memory, verbal coding or visual-to-verbal mapping skills. We then focus on the visual dimensions of VAS tasks to question whether VAS primarily reflects visuo-attentional rather than more basic visual skills. Scrutiny of the available behavioural and neurobiological findings not only points to a deficit of visual attention in dyslexic individuals with impaired VAS but further suggests a selective endogenous attentional system deficit that relates to atypical functioning of the brain dorsal attentional network. The overview clarifies the debate on what is being measured through VAS tasks and provides insights on how to interpret the VAS deficit in developmental dyslexia.
7
Radhakrishnan A, Balakrishnan M, Behera S, Raghunandhan R. Role of reading medium and audio distractors on visual search. J Optom 2022; 15:299-304. [PMID: 35798673] [PMCID: PMC9537263] [DOI: 10.1016/j.optom.2021.12.004]
Abstract
PURPOSE: Visual search is an active perceptual task influenced by objective factors, such as task difficulty and distractors, and subjective factors, such as attention and familiarity. We studied the effect of different search directions, task medium, and the presence or absence of audio distractors on visual search time in young normal subjects. METHODS: Twenty-four young (19-27 years) subjects with normal ocular health (except refractive error) participated in the study after giving informed consent. Subjects performed a word-search task of ten 7-letter words of medium difficulty. Each subject performed the task in Up-Down, Down-Up, Left-Right, Right-Left, Diagonal, and Random directions, with an equal number of distractors. The task was performed in paper and digital media, with or without audio distractors. The conditions were completed in random order by each subject, and the time taken to accurately complete the word search was documented for each condition. RESULTS: The visual search time (VST) differed significantly across search directions (ANOVA p<0.0001, df=5), considering both digital and non-digital media, with or without audio distractors. The average VST was lowest for the left-right search direction (100±7.2 s) and highest for the random search direction (291±19 s), on a digital medium (VSTdigital: 183±77 s) and in the presence of an audio distractor (VSTaudio: 184±77 s). VST scores were not correlated with age (r=-0.14, p = 0.25). CONCLUSION: Visual search time is significantly delayed for search directions other than left-right, and in the presence of an audio distractor on a digital medium. These factors could play a significant role in visual orientation and in specific tasks such as reading.
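The direction comparison reported above can be illustrated with a simple one-way ANOVA sketch; the data are simulated around the quoted means, and the original study's design (likely repeated measures) is deliberately simplified here.

```python
import numpy as np
from scipy import stats

# Hypothetical search-time data (seconds) for 24 subjects in 6 search directions;
# group means are loosely based on the values quoted in the abstract.
rng = np.random.default_rng(0)
directions = ["left-right", "right-left", "up-down", "down-up", "diagonal", "random"]
means = [100, 150, 170, 180, 210, 291]
vst = {d: rng.normal(m, 20, size=24) for d, m in zip(directions, means)}

# Simple between-groups one-way ANOVA across directions, for illustration only.
f_stat, p_value = stats.f_oneway(*vst.values())
print(f"F(5, {6 * 24 - 6}) = {f_stat:.1f}, p = {p_value:.2g}")
```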
Affiliation(s)
- Aiswaryah Radhakrishnan
- Assistant Professor (Optometry), Department of Ophthalmology, SRM Medical College and Research Center, SRM Institute of Science and Technology, Potheri, Kattankulathur 603203, Chengalpattu District, Tamil Nadu, India.
- Mohan Balakrishnan
- BOptom student, SRM Medical College and Research Center, SRM Institute of Science and Technology, Potheri, Kattankulathur 603203, Chengalpattu District, Tamil Nadu, India
- Soumyasmita Behera
- BOptom student, SRM Medical College and Research Center, SRM Institute of Science and Technology, Potheri, Kattankulathur 603203, Chengalpattu District, Tamil Nadu, India
- Roshini Raghunandhan
- BOptom student, SRM Medical College and Research Center, SRM Institute of Science and Technology, Potheri, Kattankulathur 603203, Chengalpattu District, Tamil Nadu, India
8
Li J, Deng SW. Facilitation and interference effects of the multisensory context on learning: a systematic review and meta-analysis. Psychol Res 2022. [DOI: 10.1007/s00426-022-01733-4]
9
Wolfe JM, Kosovicheva A, Wolfe B. Normal blindness: when we Look But Fail To See. Trends Cogn Sci 2022; 26:809-819. [PMID: 35872002] [PMCID: PMC9378609] [DOI: 10.1016/j.tics.2022.06.006]
Abstract
Humans routinely miss important information that is 'right in front of our eyes', from overlooking typos in a paper to failing to see a cyclist in an intersection. Recent studies on these 'Looked But Failed To See' (LBFTS) errors point to a common mechanism underlying these failures, whether the missed item was an unexpected gorilla, the clearly defined target of a visual search, or that simple typo. We argue that normal blindness is the by-product of the limited-capacity prediction engine that is our visual system. The processes that evolved to allow us to move through the world with ease are virtually guaranteed to cause us to miss some significant stimuli, especially in important tasks like driving and medical image perception.
Affiliation(s)
- Jeremy M Wolfe
- Brigham and Women's Hospital, 900 Commonwealth Avenue, Boston, MA 02215, USA; Harvard Medical School, 25 Shattuck Street, Boston, MA 02115, USA.
- Anna Kosovicheva
- Department of Psychology, University of Toronto Mississauga, 3359 Mississauga Road, Mississauga, Ontario, L5L 1C6, Canada
- Benjamin Wolfe
- Department of Psychology, University of Toronto Mississauga, 3359 Mississauga Road, Mississauga, Ontario, L5L 1C6, Canada
10
Sawada R, Sato W, Nakashima R, Kumada T. How are emotional facial expressions detected rapidly and accurately? A diffusion model analysis. Cognition 2022; 229:105235. [PMID: 35933796] [DOI: 10.1016/j.cognition.2022.105235]
Abstract
Previous psychological studies have shown that people detect emotional facial expressions more rapidly and accurately than neutral facial expressions. However, the cognitive mechanisms underlying the efficient detection of emotional facial expressions remain unclear. To investigate this issue, we used diffusion model analyses to estimate the cognitive parameters of a visual search task in which participants detected faces with normal expressions of anger and happiness and their anti-expressions within a crowd of neutral faces. The anti-expressions were artificially created to control the visual changes of facial features but were usually recognized as emotionally neutral. We tested the hypothesis that the emotional significance of the target's facial expressions modulated the non-decisional time and the drift rate. We also conducted an exploratory investigation of the effect of facial expressions on threshold separation. The results showed that the non-decisional time was shorter, and the drift rate was larger for targets with normal expressions than with anti-expressions. Subjective emotional arousal ratings of facial targets were negatively related to the non-decisional time and positively associated with the drift rate. In addition, the threshold separation was larger for normal expressions than for anti-expressions and positively associated with arousal ratings for facial targets. These results suggest that the efficient detection of emotional facial expressions is accomplished via the faster and more cautious accumulation of emotional information of facial expressions which is initiated more rapidly by enhanced attentional allocation.
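The diffusion-model quantities mentioned above (drift rate, threshold separation, non-decision time) can be illustrated with a minimal random-walk simulation; the parameter values below are illustrative and are not the fitted estimates from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm_rt(v: float, a: float, t0: float, dt: float = 0.001,
                    noise: float = 1.0) -> float:
    """One simulated response time (s): evidence drifts at rate v toward
    symmetric bounds separated by a, then a non-decision time t0 is added."""
    evidence, t = 0.0, 0.0
    while abs(evidence) < a / 2:
        evidence += v * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return t0 + t

# Targets with normal emotional expressions: larger drift rate, shorter t0.
rts_normal = [simulate_ddm_rt(v=2.0, a=1.5, t0=0.25) for _ in range(200)]
# Anti-expression targets: smaller drift rate, longer t0.
rts_anti = [simulate_ddm_rt(v=1.2, a=1.5, t0=0.30) for _ in range(200)]

print(f"Mean RT, normal expressions: {np.mean(rts_normal):.3f} s")
print(f"Mean RT, anti-expressions:   {np.mean(rts_anti):.3f} s")
```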
Affiliation(s)
- Reiko Sawada
- Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Japan.
- Wataru Sato
- Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Japan; Psychological Process Research Team, Guardian Robot Project, RIKEN, Japan
- Ryoichi Nakashima
- Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Japan
- Takatsune Kumada
- Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Japan
11
Lu T, Tang M, Guo Y, Zhou C, Zhao Q, You X. Effect of video game experience on the simulated flight task: the role of attention and spatial orientation. Aust J Psychol 2022. [DOI: 10.1080/00049530.2021.2007736]
Affiliation(s)
- Tianjiao Lu
- Student Mental Health Education Center, Northwestern Polytechnical University, Xi’an, Shaanxi, China
- Menghan Tang
- Shaanxi Key Laboratory of Behavior and Cognitive Neuroscience, The Institute of Psychology, Shaanxi Normal University, Xi’an, Shaanxi, China
- Yu Guo
- Shaanxi Key Laboratory of Behavior and Cognitive Neuroscience, The Institute of Psychology, Shaanxi Normal University, Xi’an, Shaanxi, China
- Chenchen Zhou
- Shaanxi Key Laboratory of Behavior and Cognitive Neuroscience, The Institute of Psychology, Shaanxi Normal University, Xi’an, Shaanxi, China
- Qingxian Zhao
- Shaanxi Key Laboratory of Behavior and Cognitive Neuroscience, The Institute of Psychology, Shaanxi Normal University, Xi’an, Shaanxi, China
- Xuqun You
- Shaanxi Key Laboratory of Behavior and Cognitive Neuroscience, The Institute of Psychology, Shaanxi Normal University, Xi’an, Shaanxi, China
12
Wu CC, Wolfe JM. The Functional Visual Field(s) in simple visual search. Vision Res 2022; 190:107965. [PMID: 34775158] [PMCID: PMC8976560] [DOI: 10.1016/j.visres.2021.107965]
Abstract
During a visual search for a target among distractors, observers do not fixate every location in the search array. Rather, processing is thought to occur within a Functional Visual Field (FVF) surrounding each fixation. We argue that there are three questions that can be asked at each fixation and that these imply three different senses of the FVF. 1) Can I identify what is at location XY? This defines a resolution FVF. 2) To what shall I attend during this fixation? This defines an Attentional FVF. 3) Where should I fixate next? This defines an Exploratory FVF. We examine FVFs 2 and 3 using eye movements in visual search. In three experiments, we collected eye movements during visual search for the target letter T among distractor letter Ls (Exps 1 and 3) or for a color X orientation conjunction (Exp 2). Saccades that do not go to the target can be used to define the Exploratory FVF. The saccade that goes to the target can be used to define the Attentional FVF, since the target was probably covertly detected during the prior fixation. The Exploratory FVF was larger than the Attentional FVF in all three experiments. Interestingly, the probability that the next saccade would go to the target was always well below 1.0, even when the current fixation was close to the target and well within any reasonable estimate of the FVF. Measuring search-based Exploratory and Attentional FVFs sheds light on how we can miss clearly visible targets.
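One simple way to picture the two eye-movement-based FVF estimates is to split saccade amplitudes by whether the saccade landed on the target; the sketch below does this on hypothetical data, and the percentile summary it uses is an illustrative choice rather than the authors' estimator.

```python
import numpy as np

# Hypothetical fixation/saccade data: current fixation positions, next landing
# points (degrees of visual angle), and whether each saccade went to the target.
rng = np.random.default_rng(1)
n_saccades = 500
fixations = rng.uniform(0, 20, size=(n_saccades, 2))
landings = fixations + rng.normal(0, 4, size=(n_saccades, 2))
went_to_target = rng.random(n_saccades) < 0.15

# Saccade amplitudes, split into target-going (Attentional FVF) and
# non-target-going (Exploratory FVF) sets; summarized by the 75th percentile.
amplitudes = np.linalg.norm(landings - fixations, axis=1)
attentional_fvf = np.percentile(amplitudes[went_to_target], 75)
exploratory_fvf = np.percentile(amplitudes[~went_to_target], 75)

print(f"Attentional FVF estimate: {attentional_fvf:.1f} deg")
print(f"Exploratory FVF estimate: {exploratory_fvf:.1f} deg")
```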
Affiliation(s)
- Chia-Chien Wu
- Harvard Medical School, Boston, MA, USA; Brigham & Women's Hospital, Boston, MA, USA.
- Jeremy M Wolfe
- Harvard Medical School, Boston, MA, USA; Brigham & Women's Hospital, Boston, MA, USA
13
Hoffmeister JA, Smit AN, Livingstone AC, McDonald JJ. Diversion of Attention Leads to Conflict between Concurrently Attended Stimuli, Not Delayed Orienting to the Object of Interest. J Cogn Neurosci 2021; 34:348-364. [PMID: 34813660] [DOI: 10.1162/jocn_a_01797]
Abstract
The control processes that guide attention to a visual-search target can result in the selection of an irrelevant object with similar features (a distractor). Once attention is captured by such a distractor, search for a subsequent target is momentarily impaired if the two stimuli appear at different locations. The textbook explanation for this impairment is based on the notion of an indivisible focus of attention that moves to the distractor, illuminates a nontarget that subsequently appears at that location, and then moves to the target once the nontarget is rejected. Here, we show that such delayed orienting to the target does not underlie the behavioral cost of distraction. Observers identified a color-defined target appearing within the second of two stimulus arrays. The first array contained irrelevant items, including one that shared the target's color. ERPs were examined to test two predictions stemming from the textbook serial-orienting hypothesis. Namely, when the target and distractor appear at different locations, (1) the target should elicit delayed selection activity relative to same-location trials, and (2) the nontarget search item appearing at the distractor location should elicit selection activity that precedes selection activity tied to the target. Here, the posterior contralateral N2 component was used to track selection of each of these search-array items and the previous distractor. The results supported neither prediction above, thereby disconfirming the serial-orienting hypothesis. Overall, the results show that the behavioral costs of distraction are caused by perceptual and postperceptual competition between concurrently attended target and nontarget stimuli.
Affiliation(s)
- Andrea N Smit
- Simon Fraser University, Burnaby, British Columbia, Canada
14
Abstract
This paper describes Guided Search 6.0 (GS6), a revised model of visual search. When we encounter a scene, we can see something everywhere. However, we cannot recognize more than a few items at a time. Attention is used to select items so that their features can be "bound" into recognizable objects. Attention is "guided" so that items can be processed in an intelligent order. In GS6, this guidance comes from five sources of preattentive information: (1) top-down and (2) bottom-up feature guidance, (3) prior history (e.g., priming), (4) reward, and (5) scene syntax and semantics. These sources are combined into a spatial "priority map," a dynamic attentional landscape that evolves over the course of search. Selective attention is guided to the most active location in the priority map approximately 20 times per second. Guidance will not be uniform across the visual field. It will favor items near the point of fixation. Three types of functional visual field (FVFs) describe the nature of these foveal biases. There is a resolution FVF, an FVF governing exploratory eye movements, and an FVF governing covert deployments of attention. To be identified as targets or rejected as distractors, items must be compared to target templates held in memory. The binding and recognition of an attended object is modeled as a diffusion process taking > 150 ms/item. Since selection occurs more frequently than that, it follows that multiple items are undergoing recognition at the same time, though asynchronously, making GS6 a hybrid of serial and parallel processes. In GS6, if a target is not found, search terminates when an accumulating quitting signal reaches a threshold. Setting of that threshold is adaptive, allowing feedback about performance to shape subsequent searches. Simulation shows that the combination of asynchronous diffusion and a quitting signal can produce the basic patterns of response time and error data from a range of search experiments.
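The control structure described above (priority-map selection roughly every 50 ms, slower asynchronous identification, and an accumulating quitting signal) can be sketched as follows; all parameter values and the selection rule are illustrative stand-ins, not the actual GS6 implementation.

```python
import heapq
import numpy as np

rng = np.random.default_rng(0)

def guided_search_trial(n_items=12, target_present=True, select_every=50,
                        identify_time=180, quit_threshold=1200):
    """Toy guided-search trial: select the most active unselected location every
    ~50 ms, let identifications finish asynchronously ~180 ms later, and quit
    when an accumulating quitting signal crosses a threshold."""
    priority = rng.random(n_items)            # priority map (guidance + noise)
    target = 0 if target_present else None
    if target_present:
        priority[target] += 0.5               # guidance boosts the target
    finishing = []                            # (finish_time, item) diffusions in flight
    selected, t, quit_signal = set(), 0, 0.0
    while quit_signal < quit_threshold:
        t += select_every
        quit_signal += select_every           # quitting signal grows with time
        remaining = [i for i in range(n_items) if i not in selected]
        if remaining:                         # deploy attention to the peak of the map
            item = max(remaining, key=lambda i: priority[i])
            selected.add(item)
            heapq.heappush(finishing, (t + identify_time, item))
        while finishing and finishing[0][0] <= t:   # asynchronous identifications finish
            _, item = heapq.heappop(finishing)
            if item == target:
                return t, "hit"
    return t, "target absent / quit"

print(guided_search_trial(target_present=True))
print(guided_search_trial(target_present=False))
```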
Affiliation(s)
- Jeremy M Wolfe
- Ophthalmology and Radiology, Brigham & Women's Hospital/Harvard Medical School, Cambridge, MA, USA.
- Visual Attention Lab, 65 Landsdowne St, 4th Floor, Cambridge, MA, 02139, USA.
15
Veríssimo IS, Hölsken S, Olivers CNL. Individual differences in crowding predict visual search performance. J Vis 2021; 21:29. [PMID: 34038508] [PMCID: PMC8164367] [DOI: 10.1167/jov.21.5.29]
Abstract
Visual search is an integral part of human behavior and has proven important to understanding mechanisms of perception, attention, memory, and oculomotor control. Thus far, the dominant theoretical framework posits that search is mainly limited by covert attentional mechanisms, comprising a central bottleneck in visual processing. A different class of theories seeks the cause in the inherent limitations of peripheral vision, with search being constrained by what is known as the functional viewing field (FVF). One of the major factors limiting peripheral vision, and thus the FVF, is crowding. We adopted an individual differences approach to test the prediction from FVF theories that visual search performance is determined by the efficacy of peripheral vision, in particular crowding. Forty-four participants were assessed with regard to their sensitivity to crowding (as measured by critical spacing) and their search efficiency (as indicated by manual responses and eye movements). This revealed substantial correlations between the two tasks, as stronger susceptibility to crowding was predictive of slower search, more eye movements, and longer fixation durations. Our results support FVF theories in showing that peripheral vision is an important determinant of visual search efficiency.
Affiliation(s)
- Inês S Veríssimo
- Cognitive Psychology, Institute for Brain and Behavior, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Stefanie Hölsken
- Cognitive Psychology, Institute for Brain and Behavior, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Christian N L Olivers
- Cognitive Psychology, Institute for Brain and Behavior, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- https://www.vupsy.nl/
16
Wang Y, Yan J, Yin Z, Ren S, Dong M, Zheng C, Zhang W, Liang J. How Native Background Affects Human Performance in Real-World Visual Object Detection: An Event-Related Potential Study. Front Neurosci 2021; 15:665084. [PMID: 33994938] [PMCID: PMC8119748] [DOI: 10.3389/fnins.2021.665084]
Abstract
Visual processing refers to the process of perceiving, analyzing, synthesizing, manipulating, transforming, and thinking about visual objects. It is modulated by both stimulus-driven and goal-directed factors and is manifested in neural activities that extend from visual cortex to high-level cognitive areas. An extensive body of studies has investigated the neural mechanisms of visual object processing using synthetic or curated visual stimuli. However, synthetic or curated images generally do not accurately reflect the semantic links between objects and their backgrounds, and previous studies have not answered the question of how the native background affects visual target detection. The current study bridged this gap by constructing a stimulus set of natural scenes with two levels of complexity and modulating participants' attention to actively or passively attend to the background contents. Behaviorally, decision time was elongated when the background was complex or when participants' attention was distracted from the detection task, and object detection accuracy was decreased when the background was complex. Event-related potential (ERP) analyses explicated the effects of scene complexity and attentional state on brain responses in occipital and centro-parietal areas, which were suggested to be associated with varied attentional cueing and sensory-evidence accumulation effects in the different experimental conditions. Our results imply that efficient visual processing of real-world objects may involve a competition process between context and distractors that co-exist in the native background, and that extensive attentional cues and fine-grained but semantically irrelevant scene information are perhaps detrimental to real-world object detection.
Affiliation(s)
- Yue Wang
- School of Electronic Engineering, Xidian University, Xi'an, China
- Jianpu Yan
- School of Electronic Engineering, Xidian University, Xi'an, China
- Zhongliang Yin
- School of Life Science and Technology, Xidian University, Xi'an, China
- Shenghan Ren
- School of Life Science and Technology, Xidian University, Xi'an, China
- Minghao Dong
- School of Life Science and Technology, Xidian University, Xi'an, China
- Changli Zheng
- Southwest China Research Institute of Electronic Equipment, Chengdu, China
- Wei Zhang
- Southwest China Research Institute of Electronic Equipment, Chengdu, China
- Jimin Liang
- School of Electronic Engineering, Xidian University, Xi'an, China
17
Lee J, Jung K, Han SW. Serial, self-terminating search can be distinguished from others: Evidence from multi-target search data. Cognition 2021; 212:104736. [PMID: 33887651] [DOI: 10.1016/j.cognition.2021.104736]
Abstract
How do people find a target among multiple stimuli? The process of searching for a target among distractors has been a fundamental issue in human perception and cognition, evoking heated debates. Some researchers have argued that search is carried out by serially allocating focal attention to each item until the target is found. Others have claimed that multiple stimuli, sharing a finite amount of processing resource, can be processed in parallel. This strict serial/parallel dichotomy in visual search has been challenged, and many recent theories suggest that visual search tasks involve both serial and parallel processes. However, some search tasks should depend primarily on serial processing, while others would rely on parallel processing to a greater extent. Here, by a simple innovation of an experimental paradigm, we were able to identify a specific behavioral pattern associated with serial, self-terminating search and to clarify which tasks depend on serial processing to a greater extent than others. Using this paradigm, we provide insights regarding the conditions under which search becomes more serial or parallel. We also discuss several recent models of visual search that are capable of accommodating these findings and reconciling the extant controversy.
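A generic quantitative signature of serial, self-terminating search in multi-target displays is that the expected number of inspections before the first target is found is (N+1)/(k+1) for k targets among N items; the sketch below checks this by simulation. This is a textbook prediction used here for illustration, not the specific analysis reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_inspections(n_items: int, n_targets: int, n_trials: int = 20000) -> float:
    """Mean number of items inspected, assuming random-order serial inspection
    that stops at the first target found."""
    counts = []
    for _ in range(n_trials):
        order = rng.permutation(n_items)
        is_target = order < n_targets            # items 0..k-1 are the targets
        counts.append(np.argmax(is_target) + 1)  # position of the first target
    return float(np.mean(counts))

for k in (1, 2, 3):
    analytic = (12 + 1) / (k + 1)
    simulated = simulated_inspections(12, k)
    print(f"N=12, targets={k}: analytic {analytic:.2f}, simulated {simulated:.2f}")
```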
Affiliation(s)
- Jongmin Lee
- Department of Psychology, Chungnam National University, Daejeon, Republic of Korea
- Koeun Jung
- Institute of Basic Science, Daejeon, Republic of Korea.
- Suk Won Han
- Department of Psychology, Chungnam National University, Daejeon, Republic of Korea.
18
Maintaining rejected distractors in working memory during visual search depends on search stimuli: Evidence from contralateral delay activity. Atten Percept Psychophys 2021; 83:67-84. [PMID: 33000442] [DOI: 10.3758/s13414-020-02127-7]
Abstract
The presence of memory for rejected distractors during visual search has been heavily debated in the literature and has proven challenging to investigate behaviorally. In this research, we used an electrophysiological index of working memory (contralateral delay activity) to passively measure working memory activity during visual search. Participants were asked to indicate whether a novel target was present or absent in a lateralized search array with three visual set sizes (2, 4, or 6). If rejected distractors are maintained in working memory during search, working memory activity should increase with the number of distractors that need to be evaluated. Therefore, we predicted the amplitude of the contralateral delay activity would be larger for target-absent trials and would increase with visual set size until WM capacity was reached. In Experiment 1, we found no evidence for distractor maintenance in working memory during search for real-world stimuli. In Experiment 2, we found partial evidence in support of distractor maintenance during search for stimuli with high target/distractor similarity. In both experiments, working memory capacity did not appear to be a limiting factor during visual search. These results suggest the role of working memory during search may depend on the visual search task in question. Maintaining distractors in working memory appears to be unnecessary during search for realistic stimuli. However, there appears to be a limited role for distractor maintenance during search for artificial stimuli with a high degree of feature overlap.
19
Neural Mechanisms of Human Decision-Making. Cogn Affect Behav Neurosci 2021; 21:35-57. [PMID: 33409958] [DOI: 10.3758/s13415-020-00842-0]
Abstract
We present a theory and neural network model of the neural mechanisms underlying human decision-making. We propose a detailed model of the interaction between brain regions, under a proposer-predictor-actor-critic framework. This theory is based on detailed animal data and theories of action-selection. Those theories are adapted to serial operation to bridge levels of analysis and explain human decision-making. Task-relevant areas of cortex propose a candidate plan using fast, model-free, parallel neural computations. Other areas of cortex and medial temporal lobe can then predict likely outcomes of that plan in this situation. This optional prediction- (or model-) based computation can produce better accuracy and generalization, at the expense of speed. Next, linked regions of basal ganglia act to accept or reject the proposed plan based on its reward history in similar contexts. If that plan is rejected, the process repeats to consider a new option. The reward-prediction system acts as a critic to determine the value of the outcome relative to expectations and produce dopamine as a training signal for cortex and basal ganglia. By operating sequentially and hierarchically, the same mechanisms previously proposed for animal action-selection could explain the most complex human plans and decisions. We discuss explanations of model-based decisions, habitization, and risky behavior based on the computational model.
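The serial propose-predict-accept/reject loop described above can be caricatured in a few lines of code; the functions below are placeholders for the cortical, predictive, and basal-ganglia components and do not correspond to the authors' neural network implementation.

```python
import random

def propose_plan(state, options):
    """Fast, model-free candidate proposal (stand-in for cortical proposer)."""
    return random.choice(options)

def predict_outcome(state, plan):
    """Optional model-based evaluation of the proposed plan (illustrative values)."""
    return {"eat": 1.0, "wait": 0.2, "flee": 0.5}[plan]

def basal_ganglia_go(value_estimate, threshold=0.6):
    """Go/NoGo gate standing in for reward-trained basal ganglia acceptance."""
    return value_estimate >= threshold

def decide(state, options, max_iterations=10):
    for _ in range(max_iterations):       # serial iterations over candidate plans
        plan = propose_plan(state, options)
        value = predict_outcome(state, plan)
        if basal_ganglia_go(value):
            return plan                    # accepted plan is enacted
    return None                            # no acceptable plan found in time

print(decide(state="hungry", options=["eat", "wait", "flee"]))
```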
20
Abstract
Spatial averaging of luminances over a variegated region has been assumed in visual processes such as light adaptation, texture segmentation, and lightness scaling. Despite the importance of these processes, how mean brightness can be computed remains largely unknown. We investigated how accurately and precisely mean brightness can be compared for two briefly presented heterogeneous luminance arrays composed of different numbers of disks. The results demonstrated that mean brightness judgments can be made in a task-dependent and flexible fashion. Mean brightness judgments measured via the point of subjective equality (PSE) exhibited a consistent bias, suggesting that observers relied strongly on a subset of the disks (e.g., the highest- or lowest-luminance disks) in making their judgments. Moreover, the direction of the bias flexibly changed with the task requirements, even when the stimuli were completely the same. When asked to choose the brighter array, observers relied more on the highest-luminance disks. However, when asked to choose the darker array, observers relied more on the lowest-luminance disks. In contrast, when the task was the same, observers' judgments were almost immune to substantial changes in apparent contrast caused by changing the background luminance. Despite the bias in PSE, the mean brightness judgments were precise. The just-noticeable differences measured for multiple disks were similar to or even smaller than those for single disks, which suggested a benefit of averaging. These findings implicated flexible weighted averaging; that is, mean brightness can be judged efficiently by flexibly relying more on a few items that are relevant to the task.
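The flexible weighted-averaging account sketched above can be illustrated as follows: the judged mean upweights a task-relevant subset of disks, shifting the judgment toward brighter or darker values depending on the question asked. The array values and weights below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
luminances = rng.uniform(10, 100, size=8)   # hypothetical cd/m^2 for one disk array

def weighted_mean(values: np.ndarray, emphasis: str, top_k: int = 2, w: float = 3.0) -> float:
    """Upweight the top_k highest ('brighter') or lowest ('darker') luminances."""
    weights = np.ones_like(values)
    order = np.argsort(values)
    idx = order[-top_k:] if emphasis == "brighter" else order[:top_k]
    weights[idx] = w
    return float(np.average(values, weights=weights))

print(f"True mean:                     {luminances.mean():.1f}")
print(f"'Which is brighter?' judgment: {weighted_mean(luminances, 'brighter'):.1f}")
print(f"'Which is darker?' judgment:   {weighted_mean(luminances, 'darker'):.1f}")
```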
21
Wang S, Megla EE, Woodman GF. Stimulus-induced Alpha Suppression Tracks the Difficulty of Attentional Selection, Not Visual Working Memory Storage. J Cogn Neurosci 2020; 33:536-562. [PMID: 33054550] [DOI: 10.1162/jocn_a_01637]
Abstract
Human alpha-band activity (8-12 Hz) has been proposed to index a variety of mechanisms during visual processing. Here, we distinguished between an account in which alpha suppression indexes selective attention versus an account in which it indexes subsequent working memory storage. We manipulated two aspects of the visual stimuli that perceptual attention is believed to mitigate before working memory storage: the potential interference from distractors and the size of the focus of attention. We found that the magnitude of alpha-band suppression tracked both of these aspects of the visual arrays. Thus, alpha-band activity after stimulus onset is clearly related to how the visual system deploys perceptual attention and appears to be distinct from mechanisms that store target representations in working memory.
Affiliation(s)
- Sisi Wang
- Vanderbilt University; Beijing Normal University
22
Abstract
Research and theories on visual search often focus on visual guidance to explain differences in search. Guidance is the tuning of attention to target features and facilitates search because distractors that do not show target features can be more effectively ignored (skipping). As a general rule, the better the guidance is, the more efficient search is. Correspondingly, behavioral experiments often interpreted differences in efficiency as reflecting varying degrees of attentional guidance. But other factors such as the time spent on processing a distractor (dwelling) or multiple visits to the same stimulus in a search display (revisiting) are also involved in determining search efficiency. While there is some research showing that dwelling and revisiting modulate search times in addition to skipping, the corresponding studies used complex naturalistic and category-defined stimuli. The present study tests whether results from prior research can be generalized to more simple stimuli, where target-distractor similarity, a strong factor influencing search performance, can be manipulated in a detailed fashion. Thus, in the present study, simple stimuli with varying degrees of target-distractor similarity were used to deliver conclusive evidence for the contribution of dwelling and revisiting to search performance. The results have theoretical and methodological implications: They imply that visual search models should not treat dwelling and revisiting as constants across varying levels of search efficiency and that behavioral search experiments are equivocal with respect to the responsible processing mechanisms underlying more versus less efficient search. We also suggest that eye-tracking methods may be used to disentangle different search components such as skipping, dwelling, and revisiting.
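The three components discussed here (skipping, dwelling, revisiting) can be computed directly from a fixation sequence, as in the hypothetical example below.

```python
from collections import Counter

# One hypothetical search trial: which item was fixated, in order, and for how long.
n_items = 8
fixated_items = [3, 1, 1, 5, 3, 7]          # item index fixated, in order
durations_ms = [180, 220, 150, 240, 200, 260]

visits = Counter(fixated_items)
skipped = [i for i in range(n_items) if i not in visits]          # never fixated
revisited = [i for i, count in visits.items() if count > 1]       # fixated again later
dwell_ms = {i: sum(d for item, d in zip(fixated_items, durations_ms) if item == i)
            for i in visits}                                       # total time per item

print(f"Skipped items:    {skipped}")
print(f"Revisited items:  {revisited}")
print(f"Dwell times (ms): {dwell_ms}")
```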
23
Abstract
The mechanisms guiding visual attention are of great interest within cognitive and perceptual psychology. Many researchers have proposed models of these mechanisms, which serve to both formalize their theories and to guide further empirical investigations. The assumption that a number of basic features are processed in parallel early in the attentional process is common among most models of visual attention and visual search. To date, much of the evidence for parallel processing has been limited to set-size manipulations. Unfortunately, set-size manipulations have been shown to be insufficient evidence for parallel processing. We applied Systems Factorial Technology, a general nonparametric framework, to test this assumption, specifically whether color and shape are processed in parallel or in serial, in three experiments representative of feature search, conjunctive search, and odd-one-out search, respectively. Our results provide strong evidence that color and shape information guides search through parallel processes. Furthermore, we found evidence for facilitation between color and shape when the target was known in advance but performance consistent with unlimited capacity, independent parallel processing in odd-one-out search. These results confirm core assumptions about color and shape feature processing instantiated in most models of visual search and provide more detailed clues about the manner in which color and shape information is combined to guide search.
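For context, the survivor interaction contrast at the heart of Systems Factorial Technology is SIC(t) = [S_LL(t) - S_LH(t)] - [S_HL(t) - S_HH(t)], where H/L denote high and low salience of the two dimensions (here, color and shape). The sketch below computes it on simulated response times, which stand in for real data purely to show the bookkeeping.

```python
import numpy as np

rng = np.random.default_rng(0)

def survivor(rts: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Empirical survivor function S(t) = P(RT > t)."""
    return np.array([(rts > ti).mean() for ti in t])

# Simulated RTs (ms) for the four factorial salience conditions (illustrative only).
rt = {"HH": rng.gamma(20, 20, 2000), "HL": rng.gamma(25, 20, 2000),
      "LH": rng.gamma(25, 20, 2000), "LL": rng.gamma(32, 20, 2000)}

t = np.linspace(0, 1500, 151)
sic = (survivor(rt["LL"], t) - survivor(rt["LH"], t)) \
    - (survivor(rt["HL"], t) - survivor(rt["HH"], t))
print(f"Peak |SIC(t)| = {np.abs(sic).max():.3f} at t = {t[np.abs(sic).argmax()]:.0f} ms")
```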
24
Abstract
In visual search tasks, observers look for targets among distractors. In the lab, this often takes the form of multiple searches for a simple shape that may or may not be present among other items scattered at random on a computer screen (e.g., Find a red T among other letters that are either black or red.). In the real world, observers may search for multiple classes of target in complex scenes that occur only once (e.g., As I emerge from the subway, can I find lunch, my friend, and a street sign in the scene before me?). This article reviews work on how search is guided intelligently. I ask how serial and parallel processes collaborate in visual search, describe the distinction between search templates in working memory and target templates in long-term memory, and consider how searches are terminated.
Affiliation(s)
- Jeremy M. Wolfe
- Department of Ophthalmology, Harvard Medical School, Boston, Massachusetts 02115, USA
- Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115, USA
- Visual Attention Lab, Brigham & Women's Hospital, Cambridge, Massachusetts 02139, USA
25
Fischer M, Moscovitch M, Alain C. Incidental auditory learning and memory-guided attention: Examining the role of attention at the behavioural and neural level using EEG. Neuropsychologia 2020; 147:107586. [PMID: 32818487] [DOI: 10.1016/j.neuropsychologia.2020.107586]
Abstract
The current study addressed the relation between awareness, attention, and memory by examining whether merely presenting a tone and an audio clip, without deliberately associating one with the other, was sufficient to bias attention to a given side. Participants were exposed to 80 different audio clips (half of which included a lateralized pure tone) and were told to classify each clip as natural (e.g., waterfall) or manmade (e.g., airplane engine). A surprise memory test followed, in which participants pressed a button in response to a lateralized faint tone (target) embedded in each audio clip. They also indicated (i) whether the clip was old or new; (ii) whether it was recollected or familiar; and (iii) whether the tone had been on the left, on the right, or not present when they heard the clip at exposure. The results demonstrate good explicit memory for the clips, but not for tone location. Response times were faster for old than for new clips but did not vary according to the target-context associations. Neuro-electric activity revealed an old-new effect at midline-frontal sites and a difference between old clips that had previously been associated with the target tone and those that had not. These results are consistent with the attention-dependent learning hypothesis and suggest that associations were formed incidentally at a neural level (a silent memory trace, or engram), but that these associations did not guide attention at a level that influenced behaviour either explicitly or implicitly.
Affiliation(s)
- Manda Fischer
- Rotman Research Institute, Baycrest Hospital, Toronto, Canada; University of Toronto, Department of Psychology, Toronto, Canada.
- Morris Moscovitch
- Rotman Research Institute, Baycrest Hospital, Toronto, Canada; University of Toronto, Department of Psychology, Toronto, Canada
- Claude Alain
- Rotman Research Institute, Baycrest Hospital, Toronto, Canada; University of Toronto, Department of Psychology, Toronto, Canada
26
Nickel AE, Hopkins LS, Minor GN, Hannula DE. Attention capture by episodic long-term memory. Cognition 2020; 201:104312. [PMID: 32387722] [DOI: 10.1016/j.cognition.2020.104312]
Abstract
Everyday behavior depends upon the operation of concurrent cognitive processes. In visual search, studies that examine memory-attention interactions have indicated that long-term memory facilitates search for a target (e.g., contextual cueing), but the potential for memories to capture attention and decrease search efficiency has not been investigated. To address this gap in the literature, five experiments were conducted to examine whether task-irrelevant encoded objects might capture attention. In each experiment, participants encoded scene-object pairs. Then, in a visual search task, 6-object search displays were presented and participants were told to make a single saccade to targets defined by shape (e.g., diamond among differently colored circles; Experiments 1, 4, and 5) or by color (e.g., blue shape among differently shaped gray objects; Experiments 2 and 3). Sometimes, one of the distractors was from the encoded set, and occasionally the scene that had been paired with that object was presented prior to the search display. Results indicated that eye movements were made, in error, more often to encoded distractors than to baseline distractors, and that this effect was greatest when the corresponding scene was presented prior to search. When capture did occur, participants looked longer at encoded distractors if scenes had been presented, an effect that we attribute to the representational match between a retrieved associate and the identity of the encoded distractor in the search display. In addition, the presence of a scene resulted in slower saccade deployment when participants made first saccades to targets, as instructed. Experiments 4 and 5 suggest that this slowdown may be due to the relatively rare and therefore, surprising, appearance of visual stimulus information prior to search. Collectively, results suggest that information encoded into episodic memory can capture attention, which is consistent with the recent proposal that selection history can guide attentional selection.
Affiliation(s)
- Allison E Nickel
- Department of Psychology, University of Wisconsin - Milwaukee, Milwaukee, WI, USA
- Lauren S Hopkins
- Department of Psychology, University of Wisconsin - Milwaukee, Milwaukee, WI, USA
- Greta N Minor
- Department of Psychology, University of Wisconsin - Milwaukee, Milwaukee, WI, USA
- Deborah E Hannula
- Department of Psychology, University of Wisconsin - Milwaukee, Milwaukee, WI, USA.
27
O’Reilly RC, Nair A, Russin JL, Herd SA. How Sequential Interactive Processing Within Frontostriatal Loops Supports a Continuum of Habitual to Controlled Processing. Front Psychol 2020; 11:380. [PMID: 32210892] [PMCID: PMC7076192] [DOI: 10.3389/fpsyg.2020.00380]
Abstract
We address the distinction between habitual/automatic vs. goal-directed/controlled behavior, from the perspective of a computational model of the frontostriatal loops. The model exhibits a continuum of behavior between these poles, as a function of the interactive dynamics among different functionally-specialized brain areas, operating iteratively over multiple sequential steps, and having multiple nested loops of similar decision making circuits. This framework blurs the lines between these traditional distinctions in many ways. For example, although habitual actions have traditionally been considered purely automatic, the outer loop must first decide to allow such habitual actions to proceed. Furthermore, because the part of the brain that generates proposed action plans is common across habitual and controlled/goal-directed behavior, the key differences are instead in how many iterations of sequential decision-making are taken, and to what extent various forms of predictive (model-based) processes are engaged. At the core of every iterative step in our model, the basal ganglia provides a "model-free" dopamine-trained Go/NoGo evaluation of the entire distributed plan/goal/evaluation/prediction state. This evaluation serves as the fulcrum of serializing otherwise parallel neural processing. Goal-based inputs to the nominally model-free basal ganglia system are among several ways in which the popular model-based vs. model-free framework may not capture the most behaviorally and neurally relevant distinctions in this area.
Collapse
Affiliation(s)
- Randall C. O’Reilly
- Computational Cognitive Neuroscience Lab, Department of Psychology, Computer Science, and Center for Neuroscience, University of California, Davis, Davis, CA, United States
- eCortex, Inc., Boulder, CO, United States
| | | | - Jacob L. Russin
- Computational Cognitive Neuroscience Lab, Department of Psychology, Computer Science, and Center for Neuroscience, University of California, Davis, Davis, CA, United States
| | - Seth A. Herd
- Computational Cognitive Neuroscience Lab, Department of Psychology, Computer Science, and Center for Neuroscience, University of California, Davis, Davis, CA, United States
- eCortex, Inc., Boulder, CO, United States
| |
Collapse
|
28
|
Prefrontal attentional saccades explore space rhythmically. Nat Commun 2020; 11:925. [PMID: 32066740 PMCID: PMC7026397 DOI: 10.1038/s41467-020-14649-7] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2019] [Accepted: 01/25/2020] [Indexed: 01/01/2023] Open
Abstract
Recent studies suggest that attention samples space rhythmically through oscillatory interactions in the frontoparietal network. How these attentional fluctuations coincide with spatial exploration/displacement and exploitation/selection by a dynamic attentional spotlight under top-down control is unclear. Here, we show a direct contribution of prefrontal attention selection mechanisms to continuous exploration of space. Specifically, we provide direct, high spatio-temporal resolution prefrontal population decoding of the covert attentional spotlight. We show that it continuously explores space at a 7-12 Hz rhythm. Sensory encoding and behavioral reports are enhanced at a specific optimal phase with respect to this rhythm. We propose that this prefrontal neuronal rhythm reflects an alpha-clocked sampling of the visual environment in the absence of eye movements. These attentional explorations are highly flexible: how they spatially unfold depends on both within-trial and across-task contingencies. These results are discussed in the context of exploration-exploitation strategies and prefrontal top-down attentional control.
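As an illustration of the phase analysis implied above, the sketch below bins simulated detection outcomes by the phase of an assumed 8 Hz sampling rhythm at target onset; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
freq_hz = 8.0                      # assumed sampling rhythm within the 7-12 Hz band
n_trials = 2000

onset_s = rng.uniform(0.5, 1.5, n_trials)            # simulated target onset times
phase = (2 * np.pi * freq_hz * onset_s) % (2 * np.pi)  # rhythm phase at onset

# Simulate detection whose probability depends on rhythm phase (peak near phase 0).
p_hit = 0.5 + 0.25 * np.cos(phase)
hits = rng.random(n_trials) < p_hit

# Bin hit rate by phase to recover the phasic modulation.
bins = np.linspace(0, 2 * np.pi, 9)
bin_idx = np.digitize(phase, bins) - 1
hit_rate_by_phase = [hits[bin_idx == b].mean() for b in range(8)]
print(np.round(hit_rate_by_phase, 2))
```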
Collapse
|
29
|
Abstract
How do emotional stimuli influence perception, attention, and ultimately memory? This debate at the cross-section of emotion and cognition research has a long tradition. The emotional oddball paradigm (EOP) has frequently been applied to investigate the detection and processing of (emotional) changes (Schlüter & Bermeitinger, 2017). However, the EOP has also been used to reveal the effects of emotional deviants on memory for serially presented stimuli. In this integrative article, we review the results of 29 experiments published between the years 2000 and 2017. Based on these data, we provide an overview of how the EOP is applied in the context of memory research. We also review and integrate the empirical evidence for memory effects in the EOP (with a special focus on retrograde and anterograde emotion-induced effects) and present theories of emotional memory as well as their fit with the results obtained with the EOP. Directions for future research are presented that would help to address important issues in the current debate around emotion-induced memory effects.
Collapse
|
30
|
Djouab S, Albonico A, Yeung SC, Malaspina M, Mogard A, Wahlberg R, Corrow SL, Barton JJS. Search for Face Identity or Expression: Set Size Effects in Developmental Prosopagnosia. J Cogn Neurosci 2020; 32:889-905. [PMID: 31905091 DOI: 10.1162/jocn_a_01519] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
Abstract
The set size effect during visual search indexes the effects of processing load and thus the efficiency of perceptual mechanisms. Our goal was to investigate whether individuals with developmental prosopagnosia show increased set size effects when searching faces for face identity and how this compares to search for face expression. We tested 29 healthy individuals and 13 individuals with developmental prosopagnosia. Participants were shown sets of three to seven faces to judge whether the identities or expressions of the faces were the same across all stimuli or if one differed. The set size effect was the slope of the linear regression between the number of faces in the array and the response time. Accuracy was similar in both controls and prosopagnosic participants. Developmental prosopagnosic participants displayed increased set size effects in face identity search but not in expression search. Single-participant analyses reveal that 11 developmental prosopagnosic participants showed a putative classical dissociation, with impairments in identity but not expression search. Signal detection theory analysis showed that identity set size effects were highly reliable in discriminating prosopagnosic participants from controls. Finally, the set size ratios of same to different trials were consistent with the predictions of self-terminated serial search models for control participants and prosopagnosic participants engaged in expression search but deviated from those predictions for identity search by the prosopagnosic cohort. We conclude that the face set size effect reveals a highly prevalent and selective perceptual inefficiency for processing face identity in developmental prosopagnosia.
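The set size effect defined here is simply the slope of response time on array size. A minimal sketch of that computation, on invented data rather than the study's, might look like this:

```python
import numpy as np

# Hypothetical mean correct response times (ms) for arrays of 3-7 faces.
set_sizes = np.array([3, 4, 5, 6, 7])
mean_rt_ms = np.array([1450.0, 1610.0, 1790.0, 1955.0, 2120.0])

# Set size effect = slope of the linear regression of RT on set size (ms/item).
slope_ms_per_item, intercept_ms = np.polyfit(set_sizes, mean_rt_ms, deg=1)
print(f"set size effect: {slope_ms_per_item:.1f} ms/item, intercept: {intercept_ms:.0f} ms")
```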
Collapse
Affiliation(s)
- Sara Djouab
- University of British Columbia; University of Auvergne, Clermont-Ferrand, France
| | | | | | | | | | | | | | | |
Collapse
|
31
|
Wolfe JM, Utochkin IS. What is a preattentive feature? Curr Opin Psychol 2019; 29:19-26. [PMID: 30472539 PMCID: PMC6513732 DOI: 10.1016/j.copsyc.2018.11.005] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2018] [Revised: 11/01/2018] [Accepted: 11/08/2018] [Indexed: 11/30/2022]
Abstract
The concept of a preattentive feature has been central to vision and attention research for about half a century. A preattentive feature is a feature that guides attention in visual search and that cannot be decomposed into simpler features. While that definition seems straightforward, there is no simple diagnostic test that infallibly identifies a preattentive feature. This paper briefly reviews the criteria that have been proposed and illustrates some of the difficulties of definition.
Collapse
Affiliation(s)
- Jeremy M Wolfe
- Corresponding author. Visual Attention Lab, Department of Surgery, Brigham & Women's Hospital, and Departments of Ophthalmology and Radiology, Harvard Medical School, 64 Sidney St., Suite 170, Cambridge, MA 02139-4170, USA
| | - Igor S Utochkin
- National Research University Higher School of Economics, Moscow, Russian Federation. Address: 101000, Armyansky per. 4, Moscow, Russian Federation
| |
Collapse
|
32
|
Williams LH, Drew T. What do we know about volumetric medical image interpretation?: a review of the basic science and medical image perception literatures. Cogn Res Princ Implic 2019; 4:21. [PMID: 31286283 PMCID: PMC6614227 DOI: 10.1186/s41235-019-0171-6] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2019] [Accepted: 05/19/2019] [Indexed: 11/26/2022] Open
Abstract
Interpretation of volumetric medical images represents a rapidly growing proportion of the workload in radiology. However, relatively little is known about the strategies that best guide search behavior when looking for abnormalities in volumetric images. Although there is extensive literature on two-dimensional medical image perception, it is an open question whether the conclusions drawn from these images can be generalized to volumetric images. Importantly, volumetric images have distinct characteristics (e.g., scrolling through depth, smooth-pursuit eye-movements, motion onset cues, etc.) that should be considered in future research. In this manuscript, we will review the literature on medical image perception and discuss relevant findings from basic science that can be used to generate predictions about expertise in volumetric image interpretation. By better understanding search through volumetric images, we may be able to identify common sources of error, characterize the optimal strategies for searching through depth, or develop new training and assessment techniques for radiology residents.
Collapse
|
33
|
Hemström J, Albonico A, Djouab S, Barton JJS. Visual search for complex objects: Set-size effects for faces, words and cars. Vision Res 2019; 162:8-19. [PMID: 31233767 DOI: 10.1016/j.visres.2019.06.007] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2018] [Revised: 06/07/2019] [Accepted: 06/16/2019] [Indexed: 11/18/2022]
Abstract
To compare visual processing for different object types, we developed visual search tests that generated accuracy and response time parameters, including an object set-size effect that indexes perceptual processing load. Our goal was to compare visual search for two expert object types, faces and visual words, as well as a less expert type, cars. We first asked whether faces and words showed greater inversion effects in search. Second, we determined whether search with upright stimuli correlated with other perceptual indices. Last, we assessed correlations between tests within a single orientation and between orientations for a single object type. Object set-size effects were smaller for faces and words than for cars. All accuracy and temporal measures showed an inversion effect for faces and words, but not cars. Face-search accuracy measures correlated with accuracy on the Cambridge Face Memory Test, and word-search temporal measures correlated with single-word reading times, but car search did not correlate with semantic car knowledge. There were cross-orientation correlations for all object types, as well as cross-object correlations in the inverted orientation, while in the upright orientation face search did not correlate with word or car search. We conclude that object search shows effects of expertise. Compared to cars, words and faces showed smaller object set-size effects and greater inversion effects, and their search results correlated with other indices of perceptual expertise. The correlation analyses provide preliminary evidence supporting contributions from common processes in the case of inverted stimuli, object-specific processes that operate in both orientations, and distinct processing for upright faces.
Collapse
Affiliation(s)
- Jennifer Hemström
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, Canada; Faculty of Medicine, Linköping University, Linköping, Sweden
| | - Andrea Albonico
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, Canada
| | - Sarra Djouab
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, Canada; Faculty of Medicine, University of Auvergne, Clermont-Ferrand, France
| | - Jason J S Barton
- Human Vision and Eye Movement Laboratory, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, Canada.
| |
Collapse
|
34
|
Evans KK, Culpan AM, Wolfe JM. Detecting the "gist" of breast cancer in mammograms three years before localized signs of cancer are visible. Br J Radiol 2019; 92:20190136. [PMID: 31166769 DOI: 10.1259/bjr.20190136] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/13/2023] Open
Abstract
OBJECTIVES After a 500 ms presentation, experts can distinguish abnormal mammograms at above-chance levels even when only the breast contralateral to the lesion is shown. Here, we show that this signal of abnormality is detectable 3 years before localized signs of cancer become visible. METHODS In 4 prospective studies, 59 expert observers from 3 groups viewed 116-200 bilateral mammograms for 500 ms each. Half of the images were prior exams acquired 3 years before the onset of visible, actionable cancer and half were normal. Exp. 1D included cases with visible abnormalities. Observers rated the likelihood of abnormality on a 0-100 scale and categorized breast density. Performance was measured using receiver operating characteristic analysis. RESULTS In all three groups, observers could detect abnormal images at above-chance levels 3 years prior to visible signs of breast cancer (p < 0.001). The results were not due to specific salient cases nor to breast density. Performance was correlated with expertise, quantified as the number of mammographic cases read within a year. In Exp. 1D, with cases having visible actionable pathology included, the full group of readers failed to reliably detect abnormal priors, with the exception of a subgroup of the six most experienced observers. CONCLUSIONS Imaging specialists can detect signals of abnormality in mammograms acquired years before lesions become visible. Detection may depend on expertise acquired by reading large numbers of cases. ADVANCES IN KNOWLEDGE The global gist signal can serve as an imaging risk factor with the potential to identify patients with elevated risk for developing cancer, resulting in improved early cancer diagnosis rates and improved prognosis for females with breast cancer.
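A minimal sketch of the kind of receiver operating characteristic analysis described above, using invented 0-100 ratings and the pairwise-comparison identity for the area under the ROC curve (this is not the authors' analysis code):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical 0-100 abnormality ratings for normal images and for "prior" images
# acquired three years before visible cancer (numbers invented for illustration).
ratings_normal = rng.normal(40, 15, 100).clip(0, 100)
ratings_prior = rng.normal(48, 15, 100).clip(0, 100)

# AUC = P(rating_prior > rating_normal) + 0.5 * P(tie), taken over all pairings.
diff = ratings_prior[:, None] - ratings_normal[None, :]
auc = (diff > 0).mean() + 0.5 * (diff == 0).mean()
print(f"AUC = {auc:.2f}")   # roughly 0.65 for these simulated ratings: above chance
```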
Collapse
Affiliation(s)
- Karla K Evans
- Psychology Department, University of York, York, United Kingdom
| | | | - Jeremy M Wolfe
- Harvard Medical School and Brigham and Women's Hospital, Boston, MA, USA
| |
Collapse
|
35
|
Reichenthal A, Ben-Tov M, Ben-Shahar O, Segev R. What pops out for you pops out for fish: Four common visual features. J Vis 2019; 19:1. [PMID: 30601571 DOI: 10.1167/19.1.1] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Visual search is the ability to detect a target of interest against a background of distracting objects. For many animals, performing this task quickly and accurately is crucial for survival. Typically, visual-search performance is measured by the time it takes the observer to detect a target against a backdrop of distractors. The efficiency of a visual search depends fundamentally on the features of the target, the distractors, and the interaction between them. Substantial efforts have been devoted to investigating the influence of different visual features on visual-search performance in humans. In particular, it has been demonstrated that color, size, orientation, and motion are efficient visual features to guide attention in humans. However, little is known about which features are efficient and which are not in other vertebrates. Given earlier observations that moving targets elicit pop-out and parallel search in the archerfish during visual-search tasks, here we investigate and confirm that all four of these visual features also facilitate efficient search in the archerfish, in a manner comparable to humans. In conjunction with results reported for other species, these findings suggest universality in the way visual search is carried out by animals despite very different brain anatomies and living environments.
Collapse
Affiliation(s)
- Adam Reichenthal
- Life Sciences Department, Ben-Gurion University of the Negev, Beer-Sheva, Israel; Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel
| | - Mor Ben-Tov
- Department of Neurobiology, Duke University, Durham, NC, USA
| | - Ohad Ben-Shahar
- Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel; Department of Computer Science, Ben-Gurion University of the Negev, Beer-Sheva, Israel
| | - Ronen Segev
- Life Sciences Department, Ben-Gurion University of the Negev, Beer-Sheva, Israel; Zlotowski Center for Neuroscience, Ben-Gurion University of the Negev, Beer-Sheva, Israel; Department of Biomedical Engineering, Ben-Gurion University of the Negev, Beer-Sheva, Israel
| |
Collapse
|
36
|
Ouerfelli-Ethier J, Elsaeid B, Desgroseilliers J, Munoz DP, Blohm G, Khan AZ. Anti-saccades predict cognitive functions in older adults and patients with Parkinson's disease. PLoS One 2018; 13:e0207589. [PMID: 30485332 PMCID: PMC6261587 DOI: 10.1371/journal.pone.0207589] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2018] [Accepted: 11/03/2018] [Indexed: 12/12/2022] Open
Abstract
A major component of cognitive control is the ability to act flexibly in the environment by either behaving automatically or inhibiting an automatic behaviour. The interleaved pro/anti-saccade task measures cognitive control because it relies on the ability to switch flexibly between pro- and anti-saccades and to inhibit automatic saccades during anti-saccade trials. Decline in cognitive control occurs during aging and in neurological illnesses such as Parkinson's disease (PD), and indicates decline in other cognitive abilities, such as memory. However, little is known about the relationship between cognitive control and other cognitive processes. Here we investigated whether anti-saccade performance can predict decision-making, visual memory, and pop-out and serial visual search performance. We tested 34 younger adults, 22 older adults, and 20 PD patients on four tasks: an interleaved pro/anti-saccade task, a spatial visual memory task, a decision-making task, and two types of visual search (pop-out and serial). Anti-saccade performance was a good predictor of decision-making and visual memory abilities for both older adults and PD patients, while it predicted visual search performance to a larger extent in PD patients. Our results thus demonstrate the suitability of the interleaved pro/anti-saccade task as a marker of cognitive control in aging and PD populations.
Collapse
Affiliation(s)
| | - Basma Elsaeid
- Centre for Neuroscience Studies, Queen’s University, Kingston, Ontario, Canada
| | - Julie Desgroseilliers
- Department of Medicine and Health Sciences, University of Sherbrooke, Sherbrooke, Quebec, Canada
| | - Douglas P. Munoz
- Centre for Neuroscience Studies, Queen’s University, Kingston, Ontario, Canada
| | - Gunnar Blohm
- Centre for Neuroscience Studies, Queen’s University, Kingston, Ontario, Canada
| | | |
Collapse
|
37
|
Savage SW, Potter DD, Tatler BW. The effects of array structure and secondary cognitive task demand on processes of visual search. Vision Res 2018; 153:37-46. [PMID: 30248367 DOI: 10.1016/j.visres.2018.09.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2018] [Revised: 08/30/2018] [Accepted: 09/05/2018] [Indexed: 11/16/2022]
Abstract
Many aspects of our everyday behaviour require that we search for objects. However, in real situations search is often conducted while internal and external factors compete for our attention resources. Cognitive distraction interferes with our ability to search for targets, increasing search times. Here we consider whether effects of cognitive distraction interfere differentially with three distinct phases of search: initiating search, overtly scanning through items in the display, and verifying that the object is indeed the target of search once it has been fixated. Furthermore, we consider whether strategic components of visual search that emerge when searching items organized into structured arrays are susceptible to cognitive distraction or not. We used Gilchrist & Harvey's (2006) structured and unstructured visual search paradigm with the addition of Savage, Potter, and Tatler's (2013) secondary puzzle task. Cognitive load influenced two phases of search: 1) scanning times and 2) verification times. Under high load, fixation durations were longer and re-fixations of distracters were more common. In terms of scanning strategy, we replicated Gilchrist and Harvey's (2006) findings of more systematic search for structured arrays than unstructured ones. We also found an effect of cognitive load on this aspect of search but only in structured arrays. Our findings suggest that our eyes, by default, produce an autonomous scanning pattern that is modulated but not completely eliminated by secondary cognitive load.
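A simplified sketch of how the three search phases can be read off an eye-movement record; the field names and event times below are hypothetical, not the authors' processing pipeline:

```python
def search_phases(trial):
    """Split a search trial into initiation, scanning, and verification times (ms).

    trial: dict with 'display_onset', 'first_saccade', 'first_target_fixation',
           and 'response' timestamps in ms (hypothetical field names).
    """
    initiation = trial["first_saccade"] - trial["display_onset"]
    scanning = trial["first_target_fixation"] - trial["first_saccade"]
    verification = trial["response"] - trial["first_target_fixation"]
    return initiation, scanning, verification

example_trial = {"display_onset": 0, "first_saccade": 210,
                 "first_target_fixation": 1480, "response": 1930}
print(search_phases(example_trial))   # -> (210, 1270, 450)
```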
Collapse
Affiliation(s)
- Steven William Savage
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Harvard Medical School, 20 Staniford Street, Boston, MA 02114, USA.
| | | | | |
Collapse
|
38
|
Phillips S. Going Beyond the Data as the Patching (Sheaving) of Local Knowledge. Front Psychol 2018; 9:1926. [PMID: 30356817 PMCID: PMC6189483 DOI: 10.3389/fpsyg.2018.01926] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2018] [Accepted: 09/19/2018] [Indexed: 11/13/2022] Open
Abstract
Consistently predicting outcomes in novel situations is colloquially called "going beyond the data," or "generalization." Going beyond the data features in both spatial and non-spatial cognition, raising the question of whether such features have a common basis: a kind of systematicity of generalization. Here, we conceptualize this ability as the patching of local knowledge to obtain non-local (global) information. Tracking the passage from local to global properties is the purview of sheaf theory, a branch of mathematics at the nexus of algebra and geometry/topology. Two cognitive domains are examined: (1) learning cue-target patterns that conform to an underlying algebraic rule, and (2) visual attention requiring the integration of space-based feature maps. In both cases, going beyond the data is obtained from a (universal) sheaf theory construction called "sheaving," i.e., the "patching" of local data attached to a topological space to obtain a representation considered as a globally coherent cognitive map. These results are discussed in the context of a previous (category theory) explanation for systematicity in terms of categorical universal constructions, along with other cognitive domains where going beyond the data is apparent. Analogous to a higher-order function (i.e., a function that takes/returns a function), going beyond the data as a higher-order systematicity property is explained by sheaving, a higher-order (categorical) universal construction.
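For readers unfamiliar with the sheaf-theoretic vocabulary, the gluing ("patching") condition being invoked can be stated compactly. The formulation below is the standard textbook axiom, not notation taken from the cited paper:

```latex
% Gluing (patching) axiom for a sheaf F on a topological space X:
% local sections that agree on overlaps patch together into a unique global section.
\[
\begin{aligned}
&\text{Given an open cover } \{U_i\}_{i\in I} \text{ of } U \subseteq X
 \text{ and sections } s_i \in F(U_i) \text{ with} \\
&\qquad s_i\big|_{U_i \cap U_j} = s_j\big|_{U_i \cap U_j}
 \quad \text{for all } i, j \in I, \\
&\text{there exists a unique } s \in F(U) \text{ such that } s\big|_{U_i} = s_i
 \text{ for every } i \in I.
\end{aligned}
\]
```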
Collapse
Affiliation(s)
- Steven Phillips
- Mathematical Neuroinformatics Group, Human Informatics Research Institute, National Institute of Advanced Industrial Science and Technology, Tsukuba, Japan
| |
Collapse
|
39
|
Won BY, Leber AB. Failure to exploit learned spatial value information during visual search. VISUAL COGNITION 2018. [DOI: 10.1080/13506285.2018.1500502] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Affiliation(s)
- Bo-Yeong Won
- Center for Mind and Brain, University of California, Davis, Davis, USA
| | - Andrew B. Leber
- Department of Psychology, The Ohio State University, Columbus, USA
| |
Collapse
|
40
|
Skrzypulec B. Do we need visual subjects? PHILOSOPHICAL PSYCHOLOGY 2018. [DOI: 10.1080/09515089.2018.1441990] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Affiliation(s)
- Błażej Skrzypulec
- Institute of Philosophy and Sociology, Polish Academy of Sciences, Warsaw, Poland
| |
Collapse
|
41
|
Cherubini P, Burigo M, Bricolo E. Inference-driven attention in symbolic and perceptual tasks: Biases toward expected and unexpected inputs. Q J Exp Psychol (Hove) 2018; 59:597-624. [PMID: 16627358 DOI: 10.1080/02724980443000863] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
The aims of this paper are (a) to gather support for the hypothesis that some basic mechanisms of attentional deployment (i.e., its high efficiency in dealing with expected and unexpected inputs) meet the requirements of the inferential system and have possibly evolved to support its functioning, and (b) to show that these orienting mechanisms function in very similar ways in two perceptual tasks and in a symbolic task. The general hypothesis and its predictions are sketched in the Introduction, after a discussion of current findings concerning visual attention and the generalities of the inferential system. In the empirical section, three experiments are presented where participants tracked visual trajectories (Experiments 1 and 3) or arithmetic series (Experiments 2 and 3), responding to the onset of a target event (e.g., to a specific number) and to the repetition of an event (e.g., to a number appearing twice consecutively). Target events could be anticipated when they were embedded in regular series/trajectories; they could be anticipated, with the anticipation later disconfirmed, when a regular series/trajectory was abruptly interrupted before the target event occurred; and they could not be anticipated when the series/trajectory was random. Repeated events could not be anticipated. Results show a very similar pattern of allocation in tracking visual trajectories and arithmetic series: Attention is focused on anticipated events; it is defocused and redistributed when an anticipation is not confirmed by ensuing events; however, performance decreases when dealing with random series/trajectory—that is, in the absence of anticipations. In our view, this is due to the fact that confirmed and disconfirmed anticipations are crucial events for “knowledge revision”—that is, the fine tuning of the inferential system to the environment; attentional mechanisms have developed so as to enhance detection of these events, possibly at all levels of inferential processing.
Collapse
Affiliation(s)
- Paolo Cherubini
- Department of Psychology, University of Milan-Bicocca, Milan, Italy.
| | | | | |
Collapse
|
42
|
Watson DG, Maylor EA, Bruce LAM. Effects of Age on Searching for and Enumerating Targets that Cannot be Detected Efficiently. ACTA ACUST UNITED AC 2018; 58:1119-42. [PMID: 16194951 DOI: 10.1080/02724980443000511] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
We investigated the effects of old age on search, subitizing, and counting of difficult-to-find targets. In Experiment 1, young and older adults enumerated targets (Os) with and without distractors (Qs). Without distractors, the usual subitization-counting function occurred in both groups, with the same subitization span of 3.3 items. Subitization disappeared with distractors; older adults were slowed more overall by their presence but enumeration rates were not slowed by ageing either with or without distractors. In contrast, search rates for a single target (O among Qs) were twice as slow for older as for young adults. Experiment 2 tested and ruled out one account of age-equivalent serial enumeration based on the need to subvocalize numbers as items are enumerated. Alternative explanations based on the specific task differences between detecting and enumerating stimuli are discussed.
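One way to estimate a subitizing span from such a subitization-counting function is to find the breakpoint that best separates the flat and steep limbs; a minimal sketch on invented enumeration times (not the authors' method) is:

```python
import numpy as np

# Hypothetical mean enumeration RTs (ms) for displays of 1-8 targets without distractors.
n_items = np.arange(1, 9)
rt_ms = np.array([520, 540, 565, 700, 980, 1270, 1560, 1850], dtype=float)

def bilinear_rss(span):
    """Residual sum of squares for separate lines fit below and above a candidate span."""
    rss = 0.0
    for mask in (n_items <= span, n_items > span):
        slope, intercept = np.polyfit(n_items[mask], rt_ms[mask], 1)
        rss += np.sum((rt_ms[mask] - (slope * n_items[mask] + intercept)) ** 2)
    return rss

# Estimate the subitizing span as the breakpoint giving the best two-line fit.
candidate_spans = [2, 3, 4, 5]
best_span = min(candidate_spans, key=bilinear_rss)
print("estimated subitizing span:", best_span)
```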
Collapse
|
43
|
Cherubini P, Mazzocco A, Minelli S. Facilitation and inhibition caused by the orienting of attention in propositional reasoning tasks. Q J Exp Psychol (Hove) 2018; 60:1496-523. [PMID: 17853220 DOI: 10.1080/17470210601066103] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Abstract
In an attempt to study the orienting of attention in reasoning, we developed a set of propositional reasoning tasks structurally similar to Posner's (1980) spatial cueing paradigm, widely used to study the orienting of attention in perceptual tasks. We cued the representation in working memory of a reasoning premise, observing whether inferences drawn using that premise or a different, uncued one were facilitated, hindered, or unaffected. The results of Experiments 1a, 1b, 1c, and 1d, using semantically (1a–1c) or statistically (1d) informative cues, showed a robust, long-lasting facilitation for drawing inferences from the cued rule. In Experiment 2, using uninformative cues, inferences from the cued rule were facilitated with a short stimulus onset asynchrony (SOA), whereas they were delayed when the SOA was longer, an effect that is similar to the “inhibition of return” (IOR) in perceptual tasks. Experiment 3 used uninformative cues, three different SOAs, and inferential rules with disjunctive antecedents, replicating the IOR-like effect with the long SOAs and, at the short SOA, finding evidence of a gradient-like behaviour of the facilitation effect. Our findings show qualitative similarities to some effects typically observed in the orienting of visual attention, although the tasks did not involve spatial orienting.
Collapse
Affiliation(s)
- Paolo Cherubini
- Dipartimento di Psicologia, Università di Milano-Bicocca, Milan, Italy.
| | | | | |
Collapse
|
44
|
When is it time to move to the next map? Optimal foraging in guided visual search. Atten Percept Psychophys 2017; 78:2135-51. [PMID: 27192994 DOI: 10.3758/s13414-016-1128-1] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Suppose that you are looking for visual targets in a set of images, each containing an unknown number of targets. How do you perform that search, and how do you decide when to move from the current image to the next? Optimal foraging theory predicts that foragers should leave the current image when the expected value from staying falls below the expected value from leaving. Here, we describe how to apply these models to more complex tasks, like search for objects in natural scenes where people have prior beliefs about the number and locations of targets in each image, and search is guided by target features and scene context. We model these factors in a guided search task and predict the optimal time to quit search. The data come from a satellite image search task. Participants searched for small gas stations in large satellite images. We model quitting times with a Bayesian model that incorporates prior beliefs about the number of targets in each map, average search efficiency (guidance), and actual search history in the image. Clicks deploying local magnification were used as surrogates for deployments of attention and, thus, for time. Leaving times (measured in mouse clicks) were well-predicted by the model. People terminated search when their expected rate of target collection fell to the average rate for the task. Apparently, people follow a rate-optimizing strategy in this task and use both their prior knowledge and search history in the image to decide when to quit searching.
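The rate-optimizing quitting rule described above can be sketched in a few lines. The single probability input stands in for the paper's full Bayesian combination of prior beliefs, guidance, and search history, and all numbers are illustrative:

```python
def should_quit(p_next_click_finds_target, average_rate_per_click):
    """Marginal-value-style rule: leave the current image when the expected
    instantaneous return (targets per click) drops below the task-wide average."""
    instantaneous_rate = p_next_click_finds_target   # expected targets from one more click
    return instantaneous_rate < average_rate_per_click

# Illustrative call: beliefs plus search history say the next click has an 8% chance
# of revealing a target, while the task as a whole yields 0.12 targets per click.
print(should_quit(0.08, 0.12))   # True -> quit this image and move to the next map
```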
Collapse
|
45
|
Asutay E, Västfjäll D. Exposure to arousal-inducing sounds facilitates visual search. Sci Rep 2017; 7:10363. [PMID: 28871100 PMCID: PMC5583323 DOI: 10.1038/s41598-017-09975-8] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2017] [Accepted: 08/02/2017] [Indexed: 11/23/2022] Open
Abstract
Exposure to affective stimuli can enhance perception and facilitate attention by increasing alertness and vigilance and by decreasing attentional thresholds. However, evidence on the impact of affective sounds on perception and attention is scant. Here, a novel aspect of affective facilitation of attention is studied: whether arousal induced by task-irrelevant auditory stimuli can modulate attention in a visual search. In two experiments, participants performed a visual search task with and without auditory cues that preceded the search. Participants were faster at locating high-salient targets than low-salient targets. Critically, search times and search slopes decreased with increasing auditory-induced arousal when searching for low-salient targets. Taken together, these findings suggest that arousal induced by sounds can facilitate attention in a subsequent visual search. This novel finding supports the alerting function of the auditory system by showing an auditory-phasic alerting effect in visual attention. The results also indicate that stimulus arousal modulates the alerting effect. Attention and perception are our everyday tools for navigating the surrounding world, and the current findings, showing that affective sounds can influence visual attention, provide evidence that we make use of affective information during perceptual processing.
Collapse
Affiliation(s)
- Erkin Asutay
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, SE-58183, Sweden.
| | - Daniel Västfjäll
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, SE-58183, Sweden; Decision Research, Eugene, OR, 97401, USA
| |
Collapse
|
46
|
Need for cognitive closure and attention allocation during multitasking: Evidence from eye-tracking studies. PERSONALITY AND INDIVIDUAL DIFFERENCES 2017. [DOI: 10.1016/j.paid.2017.02.014] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
|
47
|
Mayer KM, Vuong QC, Thornton IM. Humans are Detected More Efficiently than Machines in the Context of Natural Scenes. JAPANESE PSYCHOLOGICAL RESEARCH 2017. [DOI: 10.1111/jpr.12145] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Affiliation(s)
- Katja M. Mayer
- Max Planck Institute for Human Cognitive and Brain Sciences
| | | | | |
Collapse
|
48
|
Horstmann G, Ansorge U. Surprise capture and inattentional blindness. Cognition 2016; 157:237-249. [DOI: 10.1016/j.cognition.2016.09.005] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2016] [Revised: 09/01/2016] [Accepted: 09/10/2016] [Indexed: 10/21/2022]
|
49
|
Abstract
Language learners encounter numerous opportunities to learn regularities, but need to decide which of these regularities to learn, because some are not productive in their native language. Here, we present an account of rule learning based on perceptual and memory primitives (Endress, Dehaene-Lambertz, & Mehler, Cognition, 105(3), 577–614, 2007; Endress, Nespor, & Mehler, Trends in Cognitive Sciences, 13(8), 348–353, 2009), suggesting that learners preferentially learn regularities that are more salient to them, and that the pattern of salience reflects the frequency of language features across languages. We contrast this view with previous artificial grammar learning research, which suggests that infants "choose" the regularities they learn based on rational, Bayesian criteria (Frank & Tenenbaum, Cognition, 120(3), 360–371, 2013; Gerken, Cognition, 98(3), B67–B74, 2006; Gerken, Cognition, 115(2), 362–366, 2010). In our experiments, adult participants listened to syllable strings starting with a syllable reduplication and always ending with the same "affix" syllable, or to syllable strings starting with this "affix" syllable and ending with the reduplication. Both affixation and reduplication are frequently used for morphological marking across languages. We find three crucial results. First, participants learned both regularities simultaneously. Second, affixation regularities seemed easier to learn than reduplication regularities. Third, regularities at sequence offsets were easier to learn than regularities at sequence onsets. We show that these results are inconsistent with previous Bayesian rule-learning models, but mesh well with the perceptual or memory primitives view. Further, we show that the pattern of salience revealed in our experiments reflects the distribution of regularities across languages. Ease of acquisition might thus be one determinant of the frequency of regularities across languages.
Collapse
|
50
|
Reyes G, Sackur J. Introspective access to implicit shifts of attention. Conscious Cogn 2016; 48:11-20. [PMID: 27810726 DOI: 10.1016/j.concog.2016.10.003] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2016] [Revised: 09/27/2016] [Accepted: 10/09/2016] [Indexed: 11/13/2022]
Abstract
The metacognition literature has systematically rejected the possibility of introspective access to complex cognitive processes. This situation derives from the difficulty of experimentally manipulating cognitive processes while satisfying two contradictory constraints. First, participants must not be aware of the experimental manipulation; otherwise, they risk incorporating their knowledge of the manipulation into some rational elaboration. Second, we need external, third-person evidence that the experimental manipulation did impact some relevant cognitive process. Here, we study introspection during visual search and try to overcome this dilemma by presenting a barely visible, "pre-conscious" cue just before the search array. We aim to influence the attentional guidance of the search process without participants noticing that fact. Results show that introspection of the complexity of a search process is driven in part by subjective access to its attentional guidance.
Collapse
Affiliation(s)
- Gabriel Reyes
- Facultad de Psicología, Universidad del Desarrollo, Santiago, Chile.
| | - Jérôme Sackur
- Brain and Consciousness Group (EHESS/CNRS/ENS), École Normale Supérieure, PSL Research University, Paris, France.
| |
Collapse
|