1. Davis G. ATLAS: Mapping ATtention's Location And Size to probe five modes of serial and parallel search. Atten Percept Psychophys 2024. [PMID: 38982008] [DOI: 10.3758/s13414-024-02921-7]
Abstract
Conventional visual search tasks do not address attention directly, and their core manipulation of 'set size' (the number of displayed items) introduces stimulus confounds that hinder interpretation. However, alternative approaches have not been widely adopted, perhaps reflecting their complexity, their assumptions, or their indirect sampling of attention. Here, a new procedure, the ATtention Location And Size ('ATLAS') task, used probe displays to track attention's location, breadth, and guidance during search. Though most probe displays comprised six items, participants reported only the single item they judged themselves to have perceived most clearly, indexing the attention 'peak'. By sampling peaks across variable 'choice sets', the task profiled the size and position of the attention window during search. These indices appeared to distinguish narrow from broad attention, signalled attention to pairs of items where it arose, and tracked evolving attention guidance over time. ATLAS is designed to discriminate five key search modes: serial-unguided, sequential-guided, unguided attention to 'clumps' with local guidance, and broad parallel attention with or without guidance. This initial investigation used only an example set of highly regular stimuli, but its broader potential should be investigated.
Affiliation(s)
- Gregory Davis
- Department of Psychology, University of Cambridge, Downing Street, Cambridge, CB2 3EB, UK.
2. Becker MW, Rodriguez A, Bolkhovsky J, Peltier C, Guillory SB. Activation thresholds, not quitting thresholds, account for the low prevalence effect in dynamic search. Atten Percept Psychophys 2024. [PMID: 38977613] [DOI: 10.3758/s13414-024-02919-1]
Abstract
The low-prevalence effect (LPE) is the finding that target detection rates decline as targets become less frequent in a visual search task. A major source of this effect is thought to be that fewer targets result in lower quitting thresholds; that is, observers respond 'target absent' after inspecting fewer items than they do in searches with a higher prevalence of targets. However, a lower quitting threshold does not directly account for an LPE in searches where observers continuously monitor a dynamic display for targets: in such tasks there are no discrete "trials" to which a quitting threshold could be applied. This study examines whether the LPE persists in this type of dynamic search context. Experiment 1 used a 2 (dynamic/static) × 2 (10%/40% target prevalence) design. Although overall performance was worse in the dynamic task, both tasks showed an LPE of similar magnitude. In Experiment 2, we replicated this effect using a task in which subjects searched for either of two targets (Ts and Ls); one target appeared infrequently (10%) and the other moderately often (40%). Given this method of manipulating prevalence, the quitting-threshold explanation cannot account for the LPE even for static displays. However, replicating Experiment 1, we found an LPE of similar magnitude in both search scenarios, and lower target detection rates with the dynamic displays, demonstrating that the LPE is a potential concern for both static and dynamic searches. These findings suggest that an activation-threshold explanation of the LPE may account for our observations better than the traditional quitting-threshold model.
Affiliation(s)
- Mark W Becker
- Department of Psychology, Michigan State University, East Lansing, MI, 48824, USA.
- Andrew Rodriguez
- Department of Psychology, Michigan State University, East Lansing, MI, 48824, USA
- Jeffrey Bolkhovsky
- Naval Submarine Medical Research Laboratory (NSMRL), Groton, CT, 06349, USA
- Sylvia B Guillory
- Naval Submarine Medical Research Laboratory (NSMRL), Groton, CT, 06349, USA
- Leidos, Inc, New London, CT, 06320, USA
3. Zivony A, Eimer M. A dissociation between the effects of expectations and attention in selective visual processing. Cognition 2024; 250:105864. [PMID: 38906015] [DOI: 10.1016/j.cognition.2024.105864]
Abstract
It is often claimed that probabilistic expectations affect visual perception directly, without mediation by selective attention. However, these claims have been disputed, as effects of expectation and attention are notoriously hard to dissociate experimentally. In this study, we used a new approach to separate expectations from attention. In four experiments (N = 60), participants searched for a target in a rapid serial visual presentation (RSVP) stream and had to identify a digit or a letter defined by a low-level cue (colour or shape). Expectations about the target's alphanumeric category were manipulated probabilistically. Since category membership is a high-level feature, and since the target was embedded among many distractors that shared its category, targets from the expected category should not attract attention more than targets from the unexpected category. In the first experiment, targets from the expected category were nonetheless more likely to be identified than targets from the unexpected category. Importantly, in the following experiments, we also included behavioural and electrophysiological indices of attentional guidance and engagement, which allowed us to examine whether expectations also modulated these or earlier attentional processes. Results showed that category-based expectations had no modulatory effects on attention and only affected processing at later, encoding-related stages. Alternative interpretations of the expectation effects in terms of repetition priming or response bias were also ruled out. These observations provide new evidence for direct, attention-independent expectation effects on perception. We suggest that expectations can adjust the threshold required for encoding expectation-congruent information, thereby affecting the speed with which target objects are encoded in working memory.
Affiliation(s)
- Alon Zivony
- Department of Psychology, University of Sheffield, Portobello, Sheffield S1 4DP, United Kingdom.
- Martin Eimer
- Department of Psychological Sciences, Birkbeck College, University of London, Malet Street, London WC1E 7HX, United Kingdom
4. Wagner J, Zurlo A, Rusconi E. Individual differences in visual search: A systematic review of the link between visual search performance and traits or abilities. Cortex 2024; 178:51-90. [PMID: 38970898] [DOI: 10.1016/j.cortex.2024.05.020]
Abstract
Visual search (VS) comprises a class of tasks that we typically perform several times a day; it requires intentionally scanning the environment (with or without moving the eyes) for a specific target, be it an object or a feature, among distractor stimuli. Experimental research in lab-based or real-world settings has offered insight into its underlying neurocognitive mechanisms from a nomothetic point of view. A lesser-known but rapidly growing body of quasi-experimental and correlational research has explored the link between individual differences and VS performance. This work combines different research traditions and covers a wide range of individual differences in studies deploying a vast array of VS tasks. As such, it is a challenge to determine whether any associations highlighted in single studies are robust when considering the wider literature. Clarifying such relationships systematically and comprehensively would, however, help build more accurate models of VS and highlight promising directions for future research. This systematic review provides an up-to-date and comprehensive synthesis of the existing literature investigating associations between common indices of performance in VS tasks and measures of individual differences, mapped onto four categories of cognitive abilities (short-term working memory, fluid reasoning, visual processing, and processing speed) and seven categories of traits (the Big Five traits, trait anxiety, and autistic traits). Consistent associations were identified for both traits (in particular, conscientiousness, autistic traits, and trait anxiety, the latter limited to emotional stimuli) and cognitive abilities (particularly visual processing). Overall, however, future studies would be more informative if they checked and reported the reliability of all measurement tools, applied multiplicity corrections, used complementary techniques, preregistered their designs, and tested why, rather than only whether, robust relations between certain individual differences and VS performance exist.
Affiliation(s)
- Jennifer Wagner
- Department of Psychology and Cognitive Sciences, University of Trento, Rovereto, Italy
- Adriana Zurlo
- Department of Psychology and Cognitive Sciences, University of Trento, Rovereto, Italy
- Elena Rusconi
- Department of Psychology and Cognitive Sciences, University of Trento, Rovereto, Italy; Centre of Security and Crime Sciences, University of Trento - University of Verona, Trento, Italy.
5. Jeong J, Cho YS. Object-based suppression in target search but not in distractor inhibition. Atten Percept Psychophys 2024. [PMID: 38839715] [DOI: 10.3758/s13414-024-02905-7]
Abstract
The present study investigated the effect of object representation on attentional priority in distractor inhibition and target search processes while the statistical regularities of singleton-distractor location were biased. A color singleton distractor appeared more frequently at one of six stimulus locations, called the 'high-probability location', to induce location-based suppression. Critically, three objects were presented, each of which paired two adjacent stimuli in the target display, either by adding background contours (Experiment 1) or through perceptual grouping (Experiments 2 and 3). The results revealed that attentional capture by singleton distractors was hardly modulated by objects. In contrast, target selection was impeded at the location within the object containing the high-probability location, compared to an equidistant location in a different object. This object-based suppression in target selection was evident when object-related features were part of the task-relevant features. These findings suggest that task-irrelevant objects modulate attentional suppression and, moreover, that different features are engaged in determining attentional priority for distractor inhibition versus target search processes.
Affiliation(s)
- Jiyoon Jeong
- School of Psychology, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul, 02841, Korea
- Yang Seok Cho
- School of Psychology, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul, 02841, Korea.
6. Lemire M, Soulières I, Saint-Amour D. The effect of age on executive functions in adults is not sex specific. J Int Neuropsychol Soc 2024; 30:489-498. [PMID: 38221864] [DOI: 10.1017/s1355617723011487]
Abstract
Objective: Numerous studies have shown a decrease in executive functions (EF) associated with aging. However, few investigations have examined whether this decrease is similar between sexes throughout adulthood. The present study investigated whether age-related decline in EF differs between men and women from early to late adulthood.
Methods: A total of 302 participants (181 women) aged between 18 and 78 years completed four computer-based cognitive tasks at home: an arrow-based flanker task, a letter-based visual search task, the Trail Making Test, and the Corsi task. These tasks measured inhibition, attention, cognitive flexibility, and working memory, respectively. To investigate the potential effects of age, sex, and their interaction on specific EF and on a global EF score, we divided the sample into five age groups (18-30, 31-44, 45-54, 55-64, 65-78) and conducted analyses of covariance (MANCOVA and ANCOVA) with education and pointing device as control variables.
Results: Sex did not significantly affect EF performance across age groups. However, in every task, participants from the three youngest groups (<55 years) outperformed those from the two oldest. Results from the global score also suggest that an EF decrease is distinctly noticeable from 55 years old onward.
Conclusion: Our results suggest that age-related decline in EF, including inhibition, attention, cognitive flexibility, and working memory, becomes apparent around the age of 55 and does not differ between sexes at any age. This study provides additional data regarding the effects of age and sex on EF across adulthood, filling a significant gap in the existing literature.
Affiliation(s)
- Marilou Lemire
- Department of Psychology, Université du Québec à Montréal, Montréal, QC, Canada
- Isabelle Soulières
- Department of Psychology, Université du Québec à Montréal, Montréal, QC, Canada
- CIUSSS NIM Research Center, Hôpital en Santé Mentale Rivière-des-Prairies, Montréal, QC, Canada
- Dave Saint-Amour
- Department of Psychology, Université du Québec à Montréal, Montréal, QC, Canada
- Research Center, Centre Hospitalier Universitaire Sainte-Justine, Montréal, QC, Canada
7. Oor EE, Salinas E, Stanford TR. Location- and feature-based selection histories make independent, qualitatively distinct contributions to urgent visuomotor performance. bioRxiv (preprint) 2024. [PMID: 38853897] [PMCID: PMC11160778] [DOI: 10.1101/2024.05.29.596532]
Abstract
Attention mechanisms that guide visuomotor behaviors are classified into three broad types according to their reliance on stimulus salience, current goals, and selection histories (i.e., recent experience with events of many sorts). These forms of attentional control are clearly distinct and multifaceted, but what is largely unresolved is how they interact dynamically to determine impending visuomotor choices. To investigate this, we trained two macaque monkeys to perform an urgent version of an oddball search task in which a red target appears among three green distracters, or vice versa. By imposing urgency, performance can be tracked continuously as it transitions from uninformed guesses to informed choices, and this, in turn, permits assessment of attentional control as a function of time. We found that the probability of making a correct choice was strongly modulated by the histories of preceding target colors and target locations. Crucially, although both effects were gated by success (or reward), the two variables played dynamically distinct roles: whereas location history promoted an early motor bias, color history modulated the later perceptual evaluation. Furthermore, target color and location influenced performance independently of each other. The results show that, when combined, selection histories can give rise to enormous swings in visuomotor performance even in simple tasks with highly discriminable stimuli.
Affiliation(s)
- Emily E Oor
- Department of Psychology, Wake Forest University, Winston-Salem, North Carolina, United States of America
- Emilio Salinas
- Department of Translational Neuroscience, Wake Forest University School of Medicine, Winston-Salem, North Carolina, United States of America
- Terrence R Stanford
- Department of Translational Neuroscience, Wake Forest University School of Medicine, Winston-Salem, North Carolina, United States of America
8. Halámek F, Světlák M, Malatincová T, Halámková J, Slezáčková A, Barešová Z, Lekárová M. Enhancing patient well-being in oncology waiting rooms: a pilot field experiment on the emotional impact of virtual forest therapy. Front Psychol 2024; 15:1392397. [PMID: 38800677] [PMCID: PMC11117429] [DOI: 10.3389/fpsyg.2024.1392397]
Abstract
Introduction: This study explores the emotional impact of virtual forest therapy delivered through audio-visual recordings shown to patients in oncology waiting rooms, focusing on whether simulated forest walks can positively influence patients' emotional states compared to traditional waiting-room stimuli.
Methods: The study involved 117 participants from a diverse group of oncology patients in the outpatient clinic waiting room at the Masaryk Memorial Cancer Institute. Using a partially randomized controlled trial design, the study assessed basic emotional dimensions (valence and arousal) as well as specific psychological states such as thought control, sadness, anxiety, and pain. These were assessed with the Self-Assessment Manikin and the modified Emotional Thermometer before and after participants watched the three video types (forest, sea, news). Baseline stress levels were measured using the Kessler Psychological Distress Scale (K6).
Results: Participants exposed to the forest and sea videos reported significant improvements in emotional valence and reduced arousal, suggesting a calming and uplifting effect. No significant changes were observed in the control and news groups. Secondary outcomes related to anxiety, sadness, and pain showed no significant interaction effects, though small but significant main effects of time on these variables were noted.
Discussion: The findings suggest that forest and sea videos can be a beneficial intervention in oncology waiting rooms, enhancing patients' emotional well-being. This pilot study underscores the potential of integrating virtual mental-health support elements into healthcare settings to improve the patient care experience.
Affiliation(s)
- Filip Halámek
- Department of Medical Psychology and Psychosomatics, Faculty of Medicine, Masaryk University, Brno, Czechia
- Department of Comprehensive Cancer Care, Masaryk Memorial Cancer Institute, Brno, Czechia
- Miroslav Světlák
- Department of Medical Psychology and Psychosomatics, Faculty of Medicine, Masaryk University, Brno, Czechia
- Department of Comprehensive Cancer Care, Masaryk Memorial Cancer Institute, Brno, Czechia
- Tatiana Malatincová
- Department of Medical Psychology and Psychosomatics, Faculty of Medicine, Masaryk University, Brno, Czechia
- Jana Halámková
- Department of Comprehensive Cancer Care, Masaryk Memorial Cancer Institute, Brno, Czechia
- Alena Slezáčková
- Department of Medical Psychology and Psychosomatics, Faculty of Medicine, Masaryk University, Brno, Czechia
- Zdeňka Barešová
- Department of Medical Psychology and Psychosomatics, Faculty of Medicine, Masaryk University, Brno, Czechia
- Monika Lekárová
- Department of Medical Psychology and Psychosomatics, Faculty of Medicine, Masaryk University, Brno, Czechia
9. Salinas E, Stanford TR. Conditional independence as a statistical assessment of evidence integration processes. PLoS One 2024; 19:e0297792. [PMID: 38722936] [PMCID: PMC11081312] [DOI: 10.1371/journal.pone.0297792]
Abstract
Intuitively, combining multiple sources of evidence should lead to more accurate decisions than considering single sources of evidence individually. In practice, however, the proper computation may be difficult, or may require additional data that are inaccessible. Here, based on the concept of conditional independence, we consider expressions that can serve either as recipes for integrating evidence based on limited data, or as statistical benchmarks for characterizing evidence integration processes. Consider three events, A, B, and C. We find that, if A and B are conditionally independent with respect to C, then the probability that C occurs given that both A and B are known, P(C|A, B), can be easily calculated without the need to measure the full three-way dependency between A, B, and C. This simplified approach can be used in two general ways: to generate predictions by combining multiple (conditionally independent) sources of evidence, or to test whether separate sources of evidence are functionally independent of each other. These applications are demonstrated with four computer-simulated examples, which include detecting a disease based on repeated diagnostic testing, inferring biological age based on multiple biomarkers of aging, discriminating two spatial locations based on multiple cue stimuli (multisensory integration), and examining how behavioral performance in a visual search task depends on selection histories. Besides providing a sound prescription for predicting outcomes, this methodology may be useful for analyzing experimental data of many types.
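The simplified combination rule described in this abstract can be illustrated for a binary event C. A minimal Python sketch (function and variable names are illustrative, not taken from the paper): when A and B are conditionally independent given C (and given not-C), Bayes' rule implies that P(C|A, B) is proportional to P(C|A)·P(C|B)/P(C), so the three-way dependency between A, B, and C never needs to be measured.

```python
def combine_evidence(p_c_given_a, p_c_given_b, p_c):
    """Posterior P(C | A, B) for a binary event C, assuming A and B are
    conditionally independent given C (and given not-C).

    Only three quantities are needed: P(C|A), P(C|B), and the prior P(C).
    """
    # Unnormalized posterior mass for C and for not-C
    w_c = p_c_given_a * p_c_given_b / p_c
    w_not_c = (1 - p_c_given_a) * (1 - p_c_given_b) / (1 - p_c)
    # Normalize over the two outcomes
    return w_c / (w_c + w_not_c)

# Repeated diagnostic testing with made-up numbers: a disease with 1%
# prevalence, where each of two independent positive tests alone yields
# P(disease | positive) = 0.5. Combining both positives gives 0.99.
print(round(combine_evidence(0.5, 0.5, 0.01), 4))  # -> 0.99
```

Note that an uninformative source leaves the posterior unchanged: if P(C|B) equals the prior P(C), the function simply returns P(C|A), which is one sanity check on the rule.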
Affiliation(s)
- Emilio Salinas
- Department of Neurobiology & Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina, United States of America
- Terrence R. Stanford
- Department of Neurobiology & Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina, United States of America
10. Nagy B, Kojouharova P, Protzner AB, Gaál ZA. Investigating the Effect of Contextual Cueing with Face Stimuli on Electrophysiological Measures in Younger and Older Adults. J Cogn Neurosci 2024; 36:776-799. [PMID: 38437174] [DOI: 10.1162/jocn_a_02135]
Abstract
Extracting repeated patterns from our surroundings plays a crucial role in contextualizing information, making predictions, and guiding our behavior implicitly. Previous research showed that contextual cueing enhances visual search performance in younger adults. In this study, we investigated whether contextual cueing could also improve older adults' performance and whether age-related differences in the neural processes underlying implicit contextual learning could be detected. Twenty-four younger and 25 older participants performed a visual search task with contextual cueing. Contextual information was generated using repeated face configurations alongside random new configurations. We measured RT difference between new and repeated configurations; ERPs to uncover the neural processes underlying contextual cueing for early (N2pc), intermediate (P3b), and late (r-LRP) processes; and multiscale entropy and spectral power density analyses to examine neural dynamics. Both younger and older adults showed similar contextual cueing benefits in their visual search efficiency at the behavioral level. In addition, they showed similar patterns regarding contextual information processing: Repeated face configurations evoked decreased finer timescale entropy (1-20 msec) and higher frequency band power (13-30 Hz) compared with new configurations. However, we detected age-related differences in ERPs: Younger, but not older adults, had larger N2pc and P3b components for repeated compared with new configurations. These results suggest that contextual cueing remains intact with aging. Although attention- and target-evaluation-related ERPs differed between the age groups, the neural dynamics of contextual learning were preserved with aging, as both age groups increasingly utilized more globally grouped representations for repeated face configurations during the learning process.
Affiliation(s)
- Boglárka Nagy
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- Department of Cognitive Science, Faculty of Natural Sciences, Budapest University of Technology and Economics, Budapest, Hungary
- Petia Kojouharova
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
- Andrea B Protzner
- Department of Psychology, University of Calgary, Calgary, Alberta, Canada
- Zsófia Anna Gaál
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Budapest, Hungary
11. Chapman AF, Störmer VS. Representational structures as a unifying framework for attention. Trends Cogn Sci 2024; 28:416-427. [PMID: 38280837] [DOI: 10.1016/j.tics.2024.01.002]
Abstract
Our visual system consciously processes only a subset of the incoming information. Selective attention allows us to prioritize relevant inputs, and can be allocated to features, locations, and objects. Recent advances in feature-based attention suggest that several selection principles are shared across these domains and that many differences between the effects of attention on perceptual processing can be explained by differences in the underlying representational structures. Moving forward, it can thus be useful to assess how attention changes the structure of the representational spaces over which it operates, which include the spatial organization, feature maps, and object-based coding in visual cortex. This will ultimately add to our understanding of how attention changes the flow of visual information processing more broadly.
Affiliation(s)
- Angus F Chapman
- Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA.
- Viola S Störmer
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA.
12. Chen S, Müller HJ, Shi Z. Contextual facilitation: Separable roles of contextual guidance and context suppression in visual search. Psychon Bull Rev 2024. [PMID: 38689187] [DOI: 10.3758/s13423-024-02508-1]
Abstract
Visual search is facilitated when targets are repeatedly encountered at a fixed position relative to an invariant distractor layout, compared to random distractor arrangements. However, standard investigations of this contextual-facilitation effect employ fixed distractor layouts that predict a constant target location, which does not always reflect real-world situations where the target location may vary relative to an invariant distractor arrangement. To explore the mechanisms involved in contextual learning, we employed a training-test procedure, introducing not only the standard full-repeated displays with fixed target-distractor locations but also distractor-repeated displays in which the distractor arrangement remained unchanged but the target locations varied. During the training phase, participants encountered three types of display: full-repeated, distractor-repeated, and random arrangements. The results revealed full-repeated displays to engender larger performance gains than distractor-repeated displays, relative to the random-display baseline. In the test phase, the gains were substantially reduced when full-repeated displays changed into distractor-repeated displays, while the transition from distractor-repeated to full-repeated displays failed to yield additional gains. We take this pattern to indicate that contextual learning can improve performance with both predictive and non-predictive (repeated) contexts, employing distinct mechanisms: contextual guidance and context suppression, respectively. We consider how these mechanisms might be implemented (neuro-)computationally.
Affiliation(s)
- Siyi Chen
- Allgemeine und Experimentelle Psychologie, Department Psychologie, LMU München, Leopoldstr. 13, D-80802, Munich, Germany.
- Hermann J Müller
- Allgemeine und Experimentelle Psychologie, Department Psychologie, LMU München, Leopoldstr. 13, D-80802, Munich, Germany
- Zhuanghua Shi
- Allgemeine und Experimentelle Psychologie, Department Psychologie, LMU München, Leopoldstr. 13, D-80802, Munich, Germany
13. Stein N, Watson T, Lappe M, Westendorf M, Durant S. Eye and head movements in visual search in the extended field of view. Sci Rep 2024; 14:8907. [PMID: 38632334] [PMCID: PMC11023950] [DOI: 10.1038/s41598-024-59657-5]
Abstract
In natural environments, head movements are required to search for objects outside the field of view (FoV). Here we investigate whether a salient target in an extended visual search array facilitates faster detection once this item is brought into the FoV by a head movement. We conducted two virtual reality experiments using spatially clustered sets of stimuli to observe target detection and head and eye movements during visual search. Participants completed search tasks under three conditions: (1) the target was in the initial FoV; (2) a head movement was needed to bring the target into the FoV; (3) as in condition 2, but the periphery was initially hidden and appeared only after a head movement had brought the location of the target set into the FoV. We measured search time until participants found a more salient (O) or less salient (T) target among distractors (L). On average, Os were found faster than Ts. Gaze analysis showed that saliency facilitated search, with the target guiding the search, only when the target was within the initial FoV. When targets required a head movement to enter the FoV, participants followed the same search strategy as in trials without a visible target in the periphery. Moreover, the faster search times for salient targets arose entirely from the time required to find the target once the target set was reached. This suggests that the effect of stimulus saliency differs between visual search on fixed displays and active search through an extended visual field.
Affiliation(s)
- Niklas Stein
- Institute for Psychology, University of Münster, 48143, Münster, Germany.
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, 48143, Münster, Germany.
- Tamara Watson
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, 2751, Australia
- Markus Lappe
- Institute for Psychology, University of Münster, 48143, Münster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, 48143, Münster, Germany
- Maren Westendorf
- Institute for Psychology, University of Münster, 48143, Münster, Germany
- Szonya Durant
- Department of Psychology, Royal Holloway, University of London, Egham, TW20 0EX, UK
14
Beitner J, Helbing J, David EJ, Võ MLH. Using a flashlight-contingent window paradigm to investigate visual search and object memory in virtual reality and on computer screens. Sci Rep 2024;14:8596. [PMID: 38615047] [DOI: 10.1038/s41598-024-58941-8]
Abstract
A popular technique to modulate visual input during search is to use gaze-contingent windows. However, these are often rather discomforting, providing the impression of visual impairment. To counteract this, we asked participants in this study to search through illuminated as well as dark three-dimensional scenes using a more naturalistic flashlight with which they could illuminate the rooms. In a surprise incidental memory task, we tested the identities and locations of objects encountered during search. Importantly, we tested this study design in both immersive virtual reality (VR; Experiment 1) and on a desktop-computer screen (Experiment 2). As hypothesized, searching with a flashlight increased search difficulty and memory usage during search. We found a memory benefit for identities of distractors in the flashlight condition in VR but not in the computer screen experiment. Surprisingly, location memory was comparable across search conditions despite the enormous difference in visual input. Subtle differences across experiments only appeared in VR after accounting for previous recognition performance, hinting at a benefit of flashlight search in VR. Our findings highlight that removing visual information does not necessarily impair location memory, and that screen experiments using virtual environments can elicit the same major effects as VR setups.
Affiliation(s)
- Julia Beitner
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany.
- Jason Helbing
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Erwan Joël David
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- LIUM, Le Mans Université, Le Mans, France
- Melissa Lê-Hoa Võ
- Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
15
Khvostov VA, Iakovlev AU, Wolfe JM, Utochkin IS. What is the basis of ensemble subset selection? Atten Percept Psychophys 2024;86:776-798. [PMID: 38351233] [DOI: 10.3758/s13414-024-02850-5]
Abstract
The visual system can rapidly calculate the ensemble statistics of a set of objects; for example, people can easily estimate an average size of apples on a tree. To accomplish this, it is not always useful to summarize all the visual information. If there are various types of objects, the visual system should select a relevant subset: only apples, not leaves and branches. Here, we ask what kind of visual information makes a "good" ensemble that can be selectively attended to provide an accurate summary estimate. We tested three candidate representations: basic features, preattentive object files, and full-fledged bound objects. In four experiments, we presented a target set and several distractor sets of differently colored objects. We found that conditions where a target ensemble had at least one unique color (basic feature) provided ensemble averaging performance comparable to the baseline displays without distractors. When the target subset was defined as a conjunction of two colors, or of color and shape partly shared with distractors (so that they could be differentiated only as preattentive object files), subset averaging was also possible but less accurate than in the baseline and feature conditions. Finally, performance was very poor when the target subset was defined by an exact feature relationship, such as in the spatial conjunction of two colors (spatially bound object). Overall, these results suggest that distinguishable features and, to a lesser degree, preattentive object files can serve as the representational basis of ensemble selection, while bound objects cannot.
Affiliation(s)
- Vladislav A Khvostov
- Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavik, Iceland.
- HSE University, Moscow, Russia.
- Aleksei U Iakovlev
- Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavik, Iceland
- Jeremy M Wolfe
- Visual Attention Laboratory, Brigham and Women's Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Igor S Utochkin
- Institute for Mind and Biology, University of Chicago, Chicago, IL, USA
16
Salsano I, Tain R, Giulietti G, Williams DP, Ottaviani C, Antonucci G, Thayer JF, Santangelo V. Negative emotions enhance memory-guided attention in a visual search task by increasing frontoparietal, insular, and parahippocampal cortical activity. Cortex 2024;173:16-33. [PMID: 38354670] [DOI: 10.1016/j.cortex.2023.12.014]
Abstract
Previous literature demonstrated that long-term memory representations guide spatial attention during visual search in real-world pictures. However, it is currently unknown whether memory-guided visual search is affected by the emotional content of the picture. During functional magnetic resonance imaging (fMRI), participants were asked to encode the position of high-contrast targets embedded in emotional (negative or positive) or neutral pictures. At retrieval, they performed a visual search for targets presented at the same location as during encoding, but at a much lower contrast. Behaviorally, participants detected targets presented in negative pictures more accurately than targets presented in positive or neutral pictures. They were also faster in detecting targets presented at encoding in emotional (negative or positive) pictures than in neutral pictures, or targets not presented during encoding (i.e., memory-guided attention effect). At the neural level, we found increased activation in a large circuit of regions involving the dorsal and ventral frontoparietal cortex, insular and parahippocampal cortex, selectively during the detection of targets presented in negative pictures during encoding. We propose that these regions might form an integrated neural circuit recruited to select and process previously encoded target locations (i.e., memory-guided attention sustained by the frontoparietal cortex) embedded in emotional contexts (i.e., emotional context recollection supported by the parahippocampal cortex and emotional monitoring supported by the insular cortex). Ultimately, these findings reveal that negative emotions can enhance memory-guided visual search performance by increasing neural activity in a large-scale brain circuit, helping to disentangle the complex relationship between emotion, attention, and memory.
Affiliation(s)
- Ilenia Salsano
- Functional Neuroimaging Laboratory, Santa Lucia Foundation IRCCS, Rome, Italy; PhD Program in Behavioral Neuroscience, Sapienza University of Rome, Rome, Italy.
- Rongwen Tain
- Campus Center of Neuroimaging, University of California, Irvine, CA, USA
- Giovanni Giulietti
- Functional Neuroimaging Laboratory, Santa Lucia Foundation IRCCS, Rome, Italy; SAIMLAL Department, Sapienza University of Rome, Rome, Italy
- DeWayne P Williams
- Department of Psychological Science, University of California, Irvine, Irvine, USA
- Gabriella Antonucci
- Department of Psychology, Sapienza University of Rome, Rome, Italy; Santa Lucia Foundation, IRCCS, Rome, Italy
- Julian F Thayer
- Department of Psychological Science, University of California, Irvine, Irvine, USA
- Valerio Santangelo
- Functional Neuroimaging Laboratory, Santa Lucia Foundation IRCCS, Rome, Italy; Department of Philosophy, Social Sciences & Education, University of Perugia, Perugia, Italy.
17
Lladó P, Hyvärinen P, Pulkki V. The impact of head-worn devices in an auditory-aided visual search task. J Acoust Soc Am 2024;155:2460-2469. [PMID: 38578178] [DOI: 10.1121/10.0025542]
Abstract
Head-worn devices (HWDs) interfere with the natural transmission of sound from the source to the ears of the listener, worsening their localization abilities. The localization errors introduced by HWDs have been mostly studied in static scenarios, but these errors are reduced if head movements are allowed. We studied the effect of 12 HWDs on an auditory-cued visual search task, where head movements were not restricted. In this task, a visual target had to be identified in a three-dimensional space with the help of an acoustic stimulus emitted from the same location as the visual target. The results showed an increase in the search time caused by the HWDs. Acoustic measurements of a dummy head wearing the studied HWDs showed evidence of impaired localization cues, which were used to estimate the perceived localization errors using computational auditory models of static localization. These models were able to explain the search-time differences in the perceptual task, showing the influence of quadrant errors in the auditory-aided visual search task. These results indicate that HWDs have an impact on sound-source localization even when head movements are possible, which may compromise the safety and the quality of experience of the wearer.
Affiliation(s)
- Pedro Lladó
- Acoustics Lab, Department of Information and Communication Engineering, Aalto University, Espoo, 00076, Finland
- Petteri Hyvärinen
- Acoustics Lab, Department of Information and Communication Engineering, Aalto University, Espoo, 00076, Finland
- Ville Pulkki
- Acoustics Lab, Department of Information and Communication Engineering, Aalto University, Espoo, 00076, Finland
18
Dietze N, Poth CH. Phasic alerting in visual search tasks. Atten Percept Psychophys 2024;86:707-716. [PMID: 38240893] [PMCID: PMC11062964] [DOI: 10.3758/s13414-024-02844-3]
Abstract
Many tasks require one to search for and find important objects in the visual environment. Visual search is strongly supported by cues indicating target objects to mechanisms of selective attention, which enable one to prioritise targets and ignore distractor objects. Besides selective attention, a major influence on performance across cognitive tasks is phasic alertness, a temporary increase of arousal induced by warning stimuli (alerting cues). Alerting cues provide no specific information on whose basis selective attention could be deployed, but have nevertheless been found to speed up perception and simple actions. It is still unclear, however, how alerting affects visual search. Therefore, in the present study, participants performed a visual search task with and without preceding visual alerting cues. Participants had to report the orientation of a target among several distractors. The target saliency was low in Experiment 1 and high in Experiment 2. In both experiments, we found that visual search was faster when a visual alerting cue was presented before the target display. Performance benefits occurred irrespective of how many distractors had been presented along with the target. Taken together, the findings reveal that visual alerting supports visual search independently of the complexity of the search process and the demands for selective attention.
Affiliation(s)
- Niklas Dietze
- Department of Psychology, Neuro-Cognitive Psychology and Center for Cognitive Interaction Technology, Bielefeld University, P.O. Box 10 01 31, 33501, Bielefeld, Germany.
- Christian H Poth
- Department of Psychology, Neuro-Cognitive Psychology and Center for Cognitive Interaction Technology, Bielefeld University, P.O. Box 10 01 31, 33501, Bielefeld, Germany
19
Makarov I, Unnthorsson R, Kristjánsson Á, Thornton IM. The effects of visual and auditory synchrony on human foraging. Atten Percept Psychophys 2024;86:909-930. [PMID: 38253985] [DOI: 10.3758/s13414-023-02840-z]
Abstract
Can synchrony in stimulation guide attention and aid perceptual performance? Here, in a series of three experiments, we tested the influence of visual and auditory synchrony on attentional selection during a novel human foraging task. Human foraging tasks are a recent extension of the classic visual search paradigm in which multiple targets must be located on a given trial, making it possible to capture a wide range of performance metrics. Experiment 1 was performed online, where the task was to forage for 10 (out of 20) vertical lines among 60 randomly oriented distractor lines that changed color between yellow and blue at random intervals. The targets either changed colors in visual synchrony or not. In another condition, a non-spatial sound additionally occurred synchronously with the color change of the targets. Experiment 2 was run in the laboratory (within-subjects) with the same design. When the targets changed color in visual synchrony, foraging times were significantly shorter than when they randomly changed colors, but there was no additional benefit for the sound synchrony, in contrast to predictions from the so-called "pip-and-pop" effect (Van der Burg et al., Journal of Experimental Psychology, 1053-1065, 2008). In Experiment 3, task difficulty was increased as participants foraged for as many 45° rotated lines as possible among lines of different orientations within 10 s, with the same synchrony conditions as in Experiments 1 and 2. Again, there was a large benefit of visual synchrony but no additional benefit for sound synchronization. Our results provide strong evidence that visual synchronization can guide attention during multiple target foraging. This likely reflects the local grouping of the synchronized targets. Importantly, there was no additional benefit for sound synchrony, even when the foraging task was quite difficult (Experiment 3).
Affiliation(s)
- Ivan Makarov
- Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavik, Iceland.
- Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, University of Iceland, Reykjavik, Iceland.
- Runar Unnthorsson
- Faculty of Industrial Engineering, Mechanical Engineering and Computer Science, University of Iceland, Reykjavik, Iceland
- Árni Kristjánsson
- Faculty of Psychology, School of Health Sciences, University of Iceland, Reykjavik, Iceland
- Ian M Thornton
- Department of Cognitive Science, Faculty of Media & Knowledge Science, University of Malta, Msida, Malta
20
Goldstein AT, Stanford TR, Salinas E. Coupling of saccade plans to endogenous attention during urgent choices. bioRxiv 2024:2024.03.01.583058. [PMID: 38496491] [PMCID: PMC10942325] [DOI: 10.1101/2024.03.01.583058]
Abstract
The neural mechanisms that willfully direct attention to specific locations in space are closely related to those for generating targeting eye movements (saccades). However, the degree to which the voluntary deployment of attention to a location is necessarily accompanied by a corresponding saccade plan remains unclear. One problem is that attention and saccades are both automatically driven by salient sensory events; another is that the underlying processes unfold within tens of milliseconds only. Here, we use an urgent task design to resolve the evolution of a visuomotor choice on a moment-by-moment basis while independently controlling the endogenous (goal-driven) and exogenous (salience-driven) contributions to performance. Human participants saw a peripheral cue and, depending on its color, either looked at it (prosaccade) or looked at a diametrically opposite, uninformative non-cue (antisaccade). By varying the luminance of the stimuli, the exogenous contributions could be cleanly dissociated from the endogenous process guiding the choice over time. According to the measured timecourses, generating a correct antisaccade requires about 30 ms more processing time than generating a correct prosaccade based on the same perceptual signal. The results indicate that saccade plans are biased toward the location where attention is endogenously deployed, but the coupling is weak and can be willfully overridden very rapidly.
Affiliation(s)
- Allison T Goldstein
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, 1 Medical Center Blvd., Winston-Salem, NC 27157-1010, USA
- Terrence R Stanford
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, 1 Medical Center Blvd., Winston-Salem, NC 27157-1010, USA
- Emilio Salinas
- Department of Neurobiology and Anatomy, Wake Forest School of Medicine, 1 Medical Center Blvd., Winston-Salem, NC 27157-1010, USA
21
Jahn CI, Markov NT, Morea B, Daw ND, Ebitz RB, Buschman TJ. Learning attentional templates for value-based decision-making. Cell 2024;187:1476-1489.e21. [PMID: 38401541] [DOI: 10.1016/j.cell.2024.01.041]
Abstract
Attention filters sensory inputs to enhance task-relevant information. It is guided by an "attentional template" that represents the stimulus features that are currently relevant. To understand how the brain learns and uses templates, we trained monkeys to perform a visual search task that required them to repeatedly learn new attentional templates. Neural recordings found that templates were represented across the prefrontal and parietal cortex in a structured manner, such that perceptually neighboring templates had similar neural representations. When the task changed, a new attentional template was learned by incrementally shifting the template toward rewarded features. Finally, we found that attentional templates transformed stimulus features into a common value representation that allowed the same decision-making mechanisms to deploy attention, regardless of the identity of the template. Altogether, our results provide insight into the neural mechanisms by which the brain learns to control attention and how attention can be flexibly deployed across tasks.
Affiliation(s)
- Caroline I Jahn
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA.
- Nikola T Markov
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA
- Britney Morea
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA
- Nathaniel D Daw
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA; Department of Psychology, Princeton University, Princeton, NJ 08540, USA
- R Becket Ebitz
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA; Department of Neurosciences, Université de Montréal, Montréal, QC H3C 3J7, Canada
- Timothy J Buschman
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA; Department of Psychology, Princeton University, Princeton, NJ 08540, USA.
22
Mu Y, Schubö A, Tünnermann J. Adapting attentional control settings in a shape-changing environment. Atten Percept Psychophys 2024;86:404-421. [PMID: 38169028] [PMCID: PMC10805924] [DOI: 10.3758/s13414-023-02818-x]
Abstract
In rich visual environments, humans have to adjust their attentional control settings in various ways, depending on the task. Especially if the environment changes dynamically, it remains unclear how observers adapt to these changes. In two experiments (online and lab-based versions of the same task), we investigated how observers adapt their target choices while searching for color singletons among shape distractor contexts that changed over trials. The two equally colored targets had shapes that differed from each other and matched a varying number of distractors. Participants were free to select either target. The results show that participants adjusted target choices to the shape ratio of distractors: even though the task could be finished by focusing on color only, participants showed a tendency to choose targets matching with fewer distractors in shape. The time course of this adaptation showed that the regularities in the changing environment were taken into account. A Bayesian modeling approach was used to provide a fine-grained picture of how observers adapted their behavior to the changing shape ratio with three parameters: the strength of adaptation, its delay relative to the objective distractor shape ratio, and a general bias toward specific shapes. Overall, our findings highlight that systematic changes in shape, even when it is not a target-defining feature, influence how searchers adjust their attentional control settings. Furthermore, our comparison between lab-based and online assessments with this paradigm suggests that shape is a good choice as a feature dimension in adaptive choice online experiments.
Affiliation(s)
- Yunyun Mu
- Department of Psychology, Cognitive Neuroscience of Perception and Action, Philipps-University Marburg, Gutenbergstraße 18, 35032, Marburg, Germany.
- Anna Schubö
- Department of Psychology, Cognitive Neuroscience of Perception and Action, Philipps-University Marburg, Gutenbergstraße 18, 35032, Marburg, Germany
- Jan Tünnermann
- Department of Psychology, Cognitive Neuroscience of Perception and Action, Philipps-University Marburg, Gutenbergstraße 18, 35032, Marburg, Germany
23
Plater L, Giammarco M, Joubran S, Al-Aidroos N. Control over attentional capture within 170 ms by long-term memory control settings: Evidence from the N2pc. Psychon Bull Rev 2024;31:283-292. [PMID: 37566216] [DOI: 10.3758/s13423-023-02352-9]
Abstract
Observers adopt attentional control settings (ACSs) based on their goals that guide the capture of attention: Searched-for stimuli capture attention, and stimuli that are not searched for do not. While previous behavioural research indicates that observers can adopt long-term memory (LTM) ACSs (Giammarco et al. Visual Cognition, 24, 78-101, 2016), it seems surprising that representations in LTM could guide attention quickly enough to control attentional capture. To assess the claim that LTM ACSs exert control over early attentional orienting, we recorded electroencephalography while participants studied and searched for 30 target objects in an attention cueing task. Participants reported the studied target and ignored the preceding cues. To control for perceptual evoked responses, on each trial we presented two cue objects (one studied and one nonstudied). Even though participants were instructed to ignore the cues, studied cues produced the N2pc event-related potential, indicating early attentional orienting that was preferentially directed towards the studied cue versus the nonstudied cue. Critically, the N2pc was detectable within 170 ms, confirming that LTM ACSs rapidly control early capture. We propose an update to contemporary models of attentional capture to account for rapid attentional guidance by LTM ACSs.
Affiliation(s)
- Lindsay Plater
- Department of Psychology, University of Guelph, Guelph, ON, N1G 2W1, Canada.
- Maria Giammarco
- Department of Psychology, University of Guelph, Guelph, ON, N1G 2W1, Canada
- Samantha Joubran
- Department of Psychology, University of Guelph, Guelph, ON, N1G 2W1, Canada
- Naseem Al-Aidroos
- Department of Psychology, University of Guelph, Guelph, ON, N1G 2W1, Canada
24
van Heusden E, Olivers CNL, Donk M. The effects of eccentricity on attentional capture. Atten Percept Psychophys 2024;86:422-438. [PMID: 37258897] [PMCID: PMC10806068] [DOI: 10.3758/s13414-023-02735-z]
Abstract
Visual attention may be captured by an irrelevant yet salient distractor, thereby slowing search for a relevant target. This phenomenon has been widely studied using the additional singleton paradigm in which search items are typically all presented at one and the same eccentricity. Yet, differences in eccentricity may well bias the competition between target and distractor. Here we investigate how attentional capture is affected by the relative eccentricities of a target and a distractor. Participants searched for a shape-defined target in a grid of homogeneous nontargets of the same color. On 75% of trials, one of the nontarget items was replaced by a salient color-defined distractor. Crucially, target and distractor eccentricities were independently manipulated across three levels of eccentricity (i.e., near, middle, and far). Replicating previous work, we show that the presence of a distractor slows down search. Interestingly, capture as measured by manual reaction times was not affected by target and distractor eccentricity, whereas capture as measured by the eyes was: items close to fixation were more likely to be selected than items presented further away. Furthermore, the effects of target and distractor eccentricity were largely additive, suggesting that the competition between saliency- and relevance-driven selection was modulated by an independent eccentricity-based spatial component. Implications of the dissociation between manual and oculomotor responses are also discussed.
Affiliation(s)
- Elle van Heusden
- Faculty of Behavioral and Movement Sciences, Cognitive Psychology, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081 HV, Amsterdam, The Netherlands.
- Christian N L Olivers
- Faculty of Behavioral and Movement Sciences, Cognitive Psychology, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081 HV, Amsterdam, The Netherlands
- Mieke Donk
- Faculty of Behavioral and Movement Sciences, Cognitive Psychology, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081 HV, Amsterdam, The Netherlands
25
Donk M, van Heusden E, Olivers CNL. Retinal eccentricity modulates saliency-driven but not relevance-driven visual selection. Atten Percept Psychophys 2024. [PMID: 38273181] [DOI: 10.3758/s13414-024-02848-z]
Abstract
Where we move our eyes during visual search is controlled by the relative saliency and relevance of stimuli in the visual field. However, the visual field is not homogeneous, as both sensory representations and attention change with eccentricity. Here we present an experiment investigating how eccentricity differences between competing stimuli affect saliency- and relevance-driven selection. Participants made a single eye movement to a predefined orientation singleton target that was simultaneously presented with an orientation singleton distractor in a background of multiple homogeneously oriented other items. The target was either more or less salient than the distractor. Moreover, each of the two singletons could be presented at one of three different retinal eccentricities, such that both were presented at the same eccentricity, one eccentricity value apart, or two eccentricity values apart. The results showed that selection was initially determined by saliency, followed after about 300 ms by relevance. In addition, observers preferred to select the closer over the more distant singleton, and this central selection bias increased with increasing eccentricity difference. Importantly, it largely emerged within the same time window as the saliency effect, thereby resulting in a net reduction of the influence of saliency on the selection outcome. In contrast, the relevance effect remained unaffected by eccentricity. Together, these findings demonstrate that eccentricity is a major determinant of selection behavior, even to the extent that it modifies the relative contribution of saliency in determining where people move their eyes.
Affiliation(s)
- Mieke Donk
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Van der Boechorststraat 5-7, 1081 BT, Amsterdam, the Netherlands.
- Institute Brain and Behavior (iBBA), Amsterdam, the Netherlands.
- Elle van Heusden
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Van der Boechorststraat 5-7, 1081 BT, Amsterdam, the Netherlands
- Institute Brain and Behavior (iBBA), Amsterdam, the Netherlands
- Christian N L Olivers
- Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Van der Boechorststraat 5-7, 1081 BT, Amsterdam, the Netherlands
- Institute Brain and Behavior (iBBA), Amsterdam, the Netherlands
26
Strappini F, Fagioli S, Mastandrea S, Scorolli C. Sustainable materials: a linking bridge between material perception, affordance, and aesthetics. Front Psychol 2024;14:1307467. [PMID: 38259544] [PMCID: PMC10800687] [DOI: 10.3389/fpsyg.2023.1307467]
Abstract
The perception of material properties, which refers to the way in which individuals perceive and interpret materials through their sensory experiences, plays a crucial role in our interaction with the environment. Affordance, on the other hand, refers to the potential actions and uses that materials offer to users. In turn, the perception of affordances is modulated by the aesthetic appreciation that individuals experience when interacting with the environment. Although material perception, affordances, and aesthetic appreciation are recognized as essential to fostering sustainability in society, only a few studies have systematically investigated this subject matter and the reciprocal influences among these factors. This scarcity is partially due to the challenges posed by the complexity of combining interdisciplinary topics that span psychophysics, neurophysiology, affective science, aesthetics, and the social and environmental sciences. Outlining the main findings across disciplines, this review highlights the pivotal role of material perception in shaping sustainable behaviors. It establishes connections between material perception, affordance, aesthetics, and sustainability, emphasizing the need for interdisciplinary research and integrated approaches in environmental psychology. This integration is essential as it can provide insight into how to foster sustainable and durable changes.
Affiliation(s)
- Francesca Strappini
- Department of Philosophy and Communication, University of Bologna, Bologna, Italy
- Claudia Scorolli
- Department of Philosophy and Communication, University of Bologna, Bologna, Italy
27
Liesefeld HR, Lamy D, Gaspelin N, Geng JJ, Kerzel D, Schall JD, Allen HA, Anderson BA, Boettcher S, Busch NA, Carlisle NB, Colonius H, Draschkow D, Egeth H, Leber AB, Müller HJ, Röer JP, Schubö A, Slagter HA, Theeuwes J, Wolfe J. Terms of debate: Consensus definitions to guide the scientific discourse on visual distraction. Atten Percept Psychophys 2024:10.3758/s13414-023-02820-3. [PMID: 38177944 DOI: 10.3758/s13414-023-02820-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/15/2023] [Indexed: 01/06/2024]
Abstract
Hypothesis-driven research rests on clearly articulated scientific theories. The building blocks for communicating these theories are scientific terms. Obviously, communication - and thus, scientific progress - is hampered if the meaning of these terms varies idiosyncratically across (sub)fields and even across individual researchers within the same subfield. We have formed an international group of experts representing various theoretical stances with the goal of homogenizing the use of the terms that are most relevant to fundamental research on visual distraction in visual search. Our discussions revealed striking heterogeneity, and we had to invest much time and effort to increase our mutual understanding of each other's use of central terms, which turned out to be strongly related to our respective theoretical positions. We present the outcomes of these discussions in a glossary and provide some context in several essays. Specifically, we explicate how central terms are used in the distraction literature and consensually sharpen their definitions in order to enable communication across theoretical standpoints. Where applicable, we also explain how the respective constructs can be measured. We believe that this novel type of adversarial collaboration can serve as a model for other fields of psychological research that strive to build a solid groundwork for theorizing and communicating by establishing a common language. For the field of visual distraction, the present paper should facilitate communication across theoretical standpoints and may serve as an introduction and reference text for newcomers.
Affiliation(s)
- Heinrich R Liesefeld
- Department of Psychology, University of Bremen, Hochschulring 18, D-28359, Bremen, Germany.
- Dominique Lamy
- The School of Psychology Sciences and The Sagol School of Neuroscience, Tel Aviv University, Ramat Aviv 69978, POB 39040, Tel Aviv, Israel.
- Joy J Geng
- University of California Davis, Davis, CA, USA
- Hans Colonius
- Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Anna Schubö
- Philipps University Marburg, Marburg, Germany
- Jeremy Wolfe
- Harvard Medical School, Boston, MA, USA
- Brigham & Women's Hospital, Boston, MA, USA
28
Põder E. CNN-based search model fails to account for human attention guidance by simple visual features. Atten Percept Psychophys 2024; 86:9-15. [PMID: 36977907 DOI: 10.3758/s13414-023-02697-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/11/2023] [Indexed: 03/30/2023]
Abstract
Recently, Zhang et al. (Nature communications, 9(1), 3730, 2018) proposed an interesting model of attention guidance that uses visual features learnt by convolutional neural networks (CNNs) for object classification. I adapted this model for search experiments, with accuracy as the measure of performance. Simulation of our previously published feature and conjunction search experiments revealed that the CNN-based search model proposed by Zhang et al. considerably underestimates human attention guidance by simple visual features. Using target-distractor differences instead of target features for attention guidance or computing attention map at lower layers of the network could improve the performance. Still, the model fails to reproduce qualitative regularities of human visual search. The most likely explanation is that standard CNNs that are trained on image classification have not learnt medium- or high-level features required for human-like attention guidance.
Affiliation(s)
- Endel Põder
- Institute of Psychology, University of Tartu, Näituse 2, 50409, Tartu, Estonia.
29
Zhou Z, Geng JJ. Learned associations serve as target proxies during difficult but not easy visual search. Cognition 2024; 242:105648. [PMID: 37897882 DOI: 10.1016/j.cognition.2023.105648] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2023] [Revised: 10/03/2023] [Accepted: 10/12/2023] [Indexed: 10/30/2023]
Abstract
The target template contains information in memory that is used to guide attention during visual search and is typically thought of as containing features of the actual target object. However, when targets are hard to find, it is advantageous to use other information in the visual environment that is predictive of the target's location to help guide attention. The purpose of these studies was to test if newly learned associations between face and scene category images lead observers to use scene information as a proxy for the face target. Our results showed that scene information was used as a proxy for the target to guide attention but only when the target face was difficult to discriminate from the distractor face; when the faces were easy to distinguish, attention was no longer guided by the scene unless the scene was presented earlier. The results suggest that attention is flexibly guided by both target features as well as features of objects that are predictive of the target location. The degree to which each contributes to guiding attention depends on the efficiency with which that information can be used to decode the location of the target in the current moment. The results contribute to the view that attentional guidance is highly flexible in its use of information to rapidly locate the target.
Affiliation(s)
- Zhiheng Zhou
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA 95618, USA.
- Joy J Geng
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA 95618, USA; Department of Psychology, University of California, One Shields Ave, Davis, CA 95616, USA.
30
Witkowski PP, Geng JJ. Prefrontal Cortex Codes Representations of Target Identity and Feature Uncertainty. J Neurosci 2023; 43:8769-8776. [PMID: 37875376 PMCID: PMC10727173 DOI: 10.1523/jneurosci.1117-23.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2023] [Revised: 09/04/2023] [Accepted: 10/07/2023] [Indexed: 10/26/2023] Open
Abstract
Many objects in the real world have features that vary over time, creating uncertainty in how they will look in the future. This uncertainty makes statistical knowledge about the likelihood of features critical to attention-demanding processes such as visual search. However, little is known about how the uncertainty of visual features is integrated into predictions about search targets in the brain. In the current study, we test the idea that regions of the prefrontal cortex code statistical knowledge about search targets before the onset of search. Across 20 human participants (13 female; 7 male), we observe target identity in the multivariate pattern and uncertainty in the overall activation of dorsolateral prefrontal cortex (DLPFC) and inferior frontal junction (IFJ) in advance of the search display. This indicates that the target identity (mean) and uncertainty (variance) of the target distribution are coded independently within the same regions. Furthermore, once the search display appeared, the univariate IFJ signal scaled with the distance of the actual target from the expected mean, but more so when expected variability was low. These results inform neural theories of attention by showing how the prefrontal cortex represents both the identity and expected variability of features in service of top-down attentional control.
SIGNIFICANCE STATEMENT Theories of attention and working memory posit that when we engage in complex cognitive tasks our performance is determined by how precisely we remember task-relevant information. However, in the real world the properties of objects change over time, creating uncertainty about many aspects of the task. There is currently a gap in our understanding of how neural systems represent this uncertainty and combine it with target identity information in anticipation of attention-demanding cognitive tasks. In this study, we show that the prefrontal cortex represents identity and uncertainty as unique codes before task onset. These results advance theories of attention by showing that the prefrontal cortex codes both target identity and uncertainty to implement top-down attentional control.
Affiliation(s)
- Phillip P Witkowski
- Center for Mind and Brain, University of California, Davis, Davis, California 95618
- Department of Psychology, University of California, Davis, Davis, California 95618
- Joy J Geng
- Center for Mind and Brain, University of California, Davis, Davis, California 95618
- Department of Psychology, University of California, Davis, Davis, California 95618
31
Thayer DD, Sprague TC. Feature-Specific Salience Maps in Human Cortex. J Neurosci 2023; 43:8785-8800. [PMID: 37907257 PMCID: PMC10727177 DOI: 10.1523/jneurosci.1104-23.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2023] [Revised: 09/29/2023] [Accepted: 10/24/2023] [Indexed: 11/02/2023] Open
Abstract
Priority map theory is a leading framework for understanding how various aspects of stimulus displays and task demands guide visual attention. Per this theory, the visual system computes a priority map, a representation of visual space indexing the relative importance, or priority, of locations in the environment. Priority is computed from both salience, defined by image-computable properties, and relevance, defined by an individual's current goals, and is used to direct attention to the highest-priority locations for further processing. Computational theories suggest that priority maps identify salient locations based on individual feature dimensions (e.g., color, motion), which are integrated into an aggregate priority map. While widely accepted, a core assumption of this framework, the existence of independent feature dimension maps in visual cortex, remains untested. Here, we tested the hypothesis that retinotopic regions selective for specific feature dimensions (color or motion) in human cortex act as neural feature dimension maps, indexing salient locations based on their preferred feature. We used fMRI activation patterns to reconstruct spatial maps while male and female human participants viewed stimuli with salient regions defined by relative color or motion direction. Activation in reconstructed spatial maps was localized to the salient stimulus position in the display. Moreover, the strength of the stimulus representation was strongest in the ROI selective for the salience-defining feature. Together, these results suggest that feature-selective extrastriate visual regions highlight salient locations based on local feature contrast within their preferred feature dimensions, supporting their role as neural feature dimension maps.
SIGNIFICANCE STATEMENT Identifying salient information is important for navigating the world. For example, it is critical to detect a quickly approaching car when crossing the street. Leading models of computer vision and visual search rely on compartmentalized salience computations based on individual features; however, there has been no direct empirical demonstration identifying neural regions as responsible for performing these dissociable operations. Here, we provide evidence of a critical double dissociation: neural activation patterns from color-selective regions prioritize the location of color-defined salience while minimally representing motion-defined salience, whereas motion-selective regions show the complementary result. These findings reveal that specialized cortical regions act as neural "feature dimension maps" that index salient locations based on specific features to guide attention.
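The integration step this theory describes can be sketched in a few lines of NumPy. The sketch below is purely illustrative and not from the paper: each feature dimension contributes a salience map computed from local feature contrast (a crude centre-surround), and the aggregate priority map is their weighted sum. The toy 5×5 display, the function names, and the weights are all invented for the example.

```python
import numpy as np

def local_contrast(feature_map):
    """Salience within one feature dimension: how much each location
    differs from the mean of its 8 neighbours (edge-padded)."""
    padded = np.pad(feature_map, 1, mode="edge")
    neigh = sum(
        padded[1 + dy : padded.shape[0] - 1 + dy,
               1 + dx : padded.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)
    ) / 8.0
    return np.abs(feature_map - neigh)

def priority_map(feature_maps, weights=None):
    """Aggregate priority: weighted sum of per-dimension salience maps."""
    weights = weights or [1.0] * len(feature_maps)
    return sum(w * local_contrast(f) for w, f in zip(weights, feature_maps))

# Toy display: a colour singleton at (1, 1) and a motion singleton at (3, 3).
color = np.zeros((5, 5)); color[1, 1] = 1.0
motion = np.zeros((5, 5)); motion[3, 3] = 1.0

agg = priority_map([color, motion], weights=[1.0, 0.5])
peak = np.unravel_index(np.argmax(agg), agg.shape)  # where attention goes first
```

With the colour dimension weighted more heavily, the peak of the aggregate map falls on the colour singleton; the hypothesis tested in the paper concerns where the per-dimension maps (here, the `local_contrast` outputs) live in cortex.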
Affiliation(s)
- Daniel D Thayer
- Department of Psychological and Brain Sciences, University of California-Santa Barbara, Santa Barbara, California 93106
- Thomas C Sprague
- Department of Psychological and Brain Sciences, University of California-Santa Barbara, Santa Barbara, California 93106
32
Grubert A, Eimer M. Do We Prepare for What We Predict? How Target Expectations Affect Preparatory Attentional Templates and Target Selection in Visual Search. J Cogn Neurosci 2023; 35:1919-1935. [PMID: 37713670 DOI: 10.1162/jocn_a_02054] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/17/2023]
Abstract
Visual search is guided by representations of target-defining features (attentional templates) that are activated in a preparatory fashion. Here, we investigated whether these template activation processes are modulated by probabilistic expectations about upcoming search targets. We tracked template activation while observers prepared to search for one or two possible color-defined targets by measuring N2pc components (markers of attentional capture) to task-irrelevant color probes flashed every 200 msec during the interval between search displays. These probes elicit N2pcs only if the corresponding color template is active at the time when the probe appears. Probe N2pcs emerged from about 600 msec before search display onset. They did not differ between one-color and two-color search, indicating that two color templates can be activated concurrently. Critically, probe N2pcs measured during two-color search were identical for probes matching an expected or unexpected color (target color probability: 80% vs. 20%), or one of two equally likely colors. This strongly suggests that probabilistic target color expectations had no impact on search preparation. In marked contrast, subsequent target selection processes were strongly affected by these expectations. We discuss possible explanations for this clear dissociation in the effects of expectations on preparatory search template activation and search target selection, respectively.
33
Wu Y, Wang Q. The Distinctness of Illusory and Non-Illusory Conjunctions in the Perception of Chinese Words: Assessing the Roles of Stimulus Exposure Time. Percept Mot Skills 2023; 130:2430-2449. [PMID: 37905513 DOI: 10.1177/00315125231210584] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2023]
Abstract
Previous studies of illusory conjunction (IC) have mainly focused on alphabetic languages, while the IC mechanism for Chinese words, an ideographic writing system, remains poorly understood. In the present study, we aimed to investigate the dynamic changes of IC effects for Chinese words under different stimulus exposure times and spatial arrangements. We conducted two experiments with a 3 (Condition: IC, non-IC-same, non-IC-different) × 3 (Exposure time: 38 ms, 88 ms, 138 ms) within-subject design. The results showed that in the IC condition, the two characters recombined regardless of exposure time as long as they could form an orthographically correct new word, demonstrating the universality of IC. In non-IC conditions, increasing exposure time decreased response time and significantly reduced error rate, indicating that attention played a decisive role in perceptual processing. Spatial arrangement had no impact on IC production. These findings support the feature confirmation account, suggesting that attention modulates IC through top-down feature confirmation processes. These data expand our understanding of IC mechanisms, validate the role of attention in feature confirmation, and elucidate the distinctive mechanism of Chinese word IC, which is influenced by both low-level visual processing and high-level cognitive control.
Affiliation(s)
- Yanwen Wu
- School of Teacher Education, Tianshui Normal University, Tianshui, China
- Qiangqiang Wang
- School of Teacher Education, Huzhou University, Huzhou, China
34
Zou J, Zhang Y, Li J, Tian X, Ding N. Human attention during goal-directed reading comprehension relies on task optimization. eLife 2023; 12:RP87197. [PMID: 38032825 PMCID: PMC10688971 DOI: 10.7554/elife.87197] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2023] Open
Abstract
The computational principles underlying attention allocation in complex goal-directed tasks remain elusive. Goal-directed reading, that is, reading a passage to answer a question in mind, is a common real-world task that strongly engages attention. Here, we investigate what computational models can explain attention distribution in this complex task. We show that the reading time on each word is predicted by the attention weights in transformer-based deep neural networks (DNNs) optimized to perform the same reading task. Eye tracking further reveals that readers separately attend to basic text features and question-relevant information during first-pass reading and rereading, respectively. Similarly, text features and question relevance separately modulate attention weights in shallow and deep DNN layers. Furthermore, when readers scan a passage without a question in mind, their reading time is predicted by DNNs optimized for a word prediction task. Therefore, we offer a computational account of how task optimization modulates attention distribution during real-world reading.
Affiliation(s)
- Jiajie Zou
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China
- Nanhu Brain-computer Interface Institute, Hangzhou, China
- Yuran Zhang
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China
- Jialu Li
- Division of Arts and Sciences, New York University Shanghai, Shanghai, China
- Xing Tian
- Division of Arts and Sciences, New York University Shanghai, Shanghai, China
- Nai Ding
- Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Sciences, Zhejiang University, Hangzhou, China
- Nanhu Brain-computer Interface Institute, Hangzhou, China
35
Pitt KM, McCarthy JW. Strategies for highlighting items within visual scene displays to support augmentative and alternative communication access for those with physical impairments. Disabil Rehabil Assist Technol 2023; 18:1319-1329. [PMID: 34788177 DOI: 10.1080/17483107.2021.2003455] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2021] [Accepted: 11/02/2021] [Indexed: 10/19/2022]
Abstract
PURPOSE In contrast to the traditional grid-based display, visual scene displays (VSDs) offer a new paradigm for aided communication. For individuals who cannot select items from an AAC display by direct selection due to physical impairments, AAC access can be supported via methods such as item scanning. Item scanning sequentially highlights items on a display until the individual signals for selection. How items are highlighted or scanned for AAC access can impact performance outcomes. Further, the effectiveness of a VSD interface may be enhanced through consultation with experts in visual communication. Therefore, to support AAC access for those with physical impairments, the aim of this study was to evaluate the perspectives of experts in visual communication regarding effective methods for highlighting VSD elements. METHODS Thirteen participants with expertise related to visual communication (e.g., photographers, artists) completed semi-structured interviews regarding techniques for item highlighting. RESULTS Study findings identified four main themes to inform how AAC items may be highlighted or scanned, including (1) use of contrast related to light and dark, (2) use of contrast as it relates to colour, (3) outline highlighting, and (4) use of scale and motion. CONCLUSION By identifying how compositional techniques can be utilized to highlight VSD elements, study findings may inform current practice for scanning-based AAC access, along with other selection techniques where feedback or highlighting is used (e.g., eye-gaze, brain-computer interface). Further, avenues for just-in-time programming are discussed to support effective implementation for those with physical impairments.
IMPLICATIONS FOR REHABILITATION Findings identify multiple potential techniques to improve scanning through items in a photograph for individuals with severe motor impairments using alternative access strategies. Study findings inform current practice for scanning-based AAC access, along with other selection techniques where feedback or highlighting is used (e.g., eye-gaze, brain-computer interface). Avenues for just-in-time programming of AAC displays are discussed to decrease programming demands and support effective implementation of study findings.
Affiliation(s)
- Kevin M Pitt
- Department of Special Education and Communication Disorders, University of Nebraska-Lincoln, Lincoln, NE, USA
- John W McCarthy
- Division of Communication Sciences and Disorders, Ohio University, Athens, OH, USA
36
Biassoni F, Gandola M, Gnerre M. Grounding the Restorative Effect of the Environment in Tertiary Qualities: An Integration of Embodied and Phenomenological Perspectives. J Intell 2023; 11:208. [PMID: 37998707 PMCID: PMC10672635 DOI: 10.3390/jintelligence11110208] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2023] [Revised: 10/18/2023] [Accepted: 10/25/2023] [Indexed: 11/25/2023] Open
Abstract
This paper proposes an integration of embodied and phenomenological perspectives to understand the restorative capacity of natural environments. It emphasizes the role of embodied simulation mechanisms in evoking positive affects and cognitive functioning. Perceptual symbols play a crucial role in generating the restorative potential in environments, highlighting the significance of the encounter between the embodied individual and the environment. This study reviews Stress Reduction Theory (SRT) and Attention Restoration Theory (ART), finding commonalities in perceptual fluency and connectedness to nature. It also explores a potential model based on physiognomic perception, where the environment's pervasive qualities elicit an affective response. Restorativeness arises from a direct encounter between the environment's phenomenal structure and the embodied perceptual processes of individuals. Overall, this integrative approach sheds light on the intrinsic affective value of environmental elements and their influence on human well-being.
Affiliation(s)
- Federica Biassoni
- Traffic Psychology Research Unit, Department of Psychology, Università Cattolica del Sacro Cuore, 20123 Milano, Italy (M.G.)
- Research Center in Communication Psychology, Università Cattolica del Sacro Cuore, 20123 Milan, Italy
- Michela Gandola
- Traffic Psychology Research Unit, Department of Psychology, Università Cattolica del Sacro Cuore, 20123 Milano, Italy (M.G.)
- Martina Gnerre
- Traffic Psychology Research Unit, Department of Psychology, Università Cattolica del Sacro Cuore, 20123 Milano, Italy (M.G.)
37
Matas J, Tokalić R, García-Costa D, López-Iñesta E, Álvarez-García E, Grimaldo F, Marušić A. Tool to assess recognition and understanding of elements in Summary of Findings Table for health evidence synthesis: a cross-sectional study. Sci Rep 2023; 13:18044. [PMID: 37872203 PMCID: PMC10593927 DOI: 10.1038/s41598-023-45359-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Accepted: 10/18/2023] [Indexed: 10/25/2023] Open
Abstract
Summary of Findings (SoF) tables concisely present the main findings of syntheses of health evidence, but how users navigate them to understand and interpret the presented information is not clear. We quantified the interaction of medical students with an SoF table while they answered a knowledge quiz. The Read&Learn tool was used to measure the number of target and non-target table cells visited for each question and the time spent on these cells. Students positively identified target elements for quiz questions and answered simpler questions, but struggled with critical thinking and understanding study outcomes. The question on outcomes with the largest improvement post-intervention had the fewest correct answers, the longest interaction with table cells, and the most opened cells before answering. Students spent a median of 72% of the time reading target table cells. A heatmap of the interactions showed that they were mostly answer-oriented. Further development of the tool and metrics is needed to study the cognitive processes involved in the assessment of health evidence.
Affiliation(s)
- Jakov Matas
- Department of Research in Biomedicine and Health, Center for Evidence-Based Medicine, University of Split School of Medicine, Šoltanska 2, 21000, Split, Croatia
- Ružica Tokalić
- Department of Research in Biomedicine and Health, Center for Evidence-Based Medicine, University of Split School of Medicine, Šoltanska 2, 21000, Split, Croatia
- Emilia López-Iñesta
- Department of Didactics of Mathematics, Universitat de València, Valencia, Spain
- Ana Marušić
- Department of Research in Biomedicine and Health, Center for Evidence-Based Medicine, University of Split School of Medicine, Šoltanska 2, 21000, Split, Croatia.
38
Salinas E, Stanford TR. Conditional independence as a statistical assessment of evidence integration processes. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.05.03.539321. [PMID: 37646001 PMCID: PMC10461915 DOI: 10.1101/2023.05.03.539321] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
Abstract
Intuitively, combining multiple sources of evidence should lead to more accurate decisions than considering single sources of evidence individually. In practice, however, the proper computation may be difficult, or may require additional data that are inaccessible. Here, based on the concept of conditional independence, we consider expressions that can serve either as recipes for integrating evidence based on limited data, or as statistical benchmarks for characterizing evidence integration processes. Consider three events, A, B, and C. We find that, if A and B are conditionally independent with respect to C, then the probability that C occurs given that both A and B are known, P(C|A,B), can be easily calculated without the need to measure the full three-way dependency between A, B, and C. This simplified approach can be used in two general ways: to generate predictions by combining multiple (conditionally independent) sources of evidence, or to test whether separate sources of evidence are functionally independent of each other. These applications are demonstrated with four computer-simulated examples, which include detecting a disease based on repeated diagnostic testing, inferring biological age based on multiple biomarkers of aging, discriminating two spatial locations based on multiple cue stimuli (multisensory integration), and examining how behavioral performance in a visual search task depends on selection histories. Besides providing a sound prescription for predicting outcomes, this methodology may be useful for analyzing experimental data of many types.
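Concretely, when A and B are conditionally independent given C, the joint likelihood factorizes, P(A,B|C) = P(A|C)P(B|C), so Bayes' rule gives P(C|A,B) ∝ P(A|C)P(B|C)P(C) with no three-way dependency to estimate. A minimal sketch of this recipe (not taken from the paper; the repeated-testing numbers below are invented for illustration):

```python
def posterior_given_two_cues(prior, like_a, like_b):
    """P(C=c | A, B) for each outcome c, assuming A and B are
    conditionally independent given C, so P(A, B | C) = P(A|C) * P(B|C)."""
    unnorm = {c: like_a[c] * like_b[c] * p for c, p in prior.items()}
    z = sum(unnorm.values())
    return {c: v / z for c, v in unnorm.items()}

# Repeated diagnostic testing: 1% prevalence; each (conditionally
# independent) test is positive for 90% of patients and 5% of healthy people.
prior = {"disease": 0.01, "healthy": 0.99}
test1 = {"disease": 0.90, "healthy": 0.05}  # P(positive | C)
test2 = {"disease": 0.90, "healthy": 0.05}

post = posterior_given_two_cues(prior, test1, test2)
```

A single positive test would raise P(disease) only to 0.9·0.01 / (0.9·0.01 + 0.05·0.99) ≈ 0.15; combining two positives via the product of likelihoods raises it to about 0.77, without ever measuring the joint behavior of the two tests.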
Affiliation(s)
- Emilio Salinas
- Department of Neurobiology & Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina, United States of America
- Terrence R Stanford
- Department of Neurobiology & Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina, United States of America
39
Nachtnebel SJ, Cambronero-Delgadillo AJ, Helmers L, Ischebeck A, Höfler M. The impact of different distractions on outdoor visual search and object memory. Sci Rep 2023; 13:16700. [PMID: 37794077 PMCID: PMC10551016 DOI: 10.1038/s41598-023-43679-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2023] [Accepted: 09/27/2023] [Indexed: 10/06/2023] Open
Abstract
We investigated whether and how different types of search distractions affect visual search behavior and target memory while participants searched in a real-world environment. Participants either searched undistracted (control condition), listened to a podcast while searching (auditory distraction), counted down aloud in steps of three while searching (executive working memory load), or were forced to stop the search on half of the trials (time pressure). In line with findings from laboratory settings, participants searched longer but made fewer errors when the target was absent than when it was present, regardless of distraction condition. Furthermore, compared to the auditory distraction condition, the executive working memory load led to higher error rates (but not longer search times). In a surprise memory test after the end of the search tasks, recognition was better for previously present targets than for absent targets. Again, this held regardless of the previous distraction condition, although participants in the executive working memory load condition remembered significantly fewer targets than those in the control condition. The findings suggest that executive working memory load, but likely not auditory distraction or time pressure, affected visual search performance and target memory in a real-world environment.
Affiliation(s)
- Linda Helmers
- Department of Psychology, University of Graz, Universitätsplatz 2/III, 8010, Graz, Austria
- Anja Ischebeck
- Department of Psychology, University of Graz, Universitätsplatz 2/III, 8010, Graz, Austria
- Margit Höfler
- Department of Psychology, University of Graz, Universitätsplatz 2/III, 8010, Graz, Austria
- Department for Dementia Research, University for Continuing Education Krems, Dr.-Karl-Dorrek-Straße 30, 3500, Krems, Austria

40
Walle A, Druey MD, Hübner R. Learned cognitive control counteracts value-driven attentional capture. Psychological Research 2023; 87:2048-2067. [PMID: 36763140] [DOI: 10.1007/s00426-023-01792-1]
Abstract
Stimuli formerly associated with monetary reward capture our attention, even if this attraction is contrary to current goals (so-called value-driven attentional capture [VDAC]; see Anderson (Ann N Y Acad Sci 1369:24-39, 2016) for a review). Despite the growing literature on this topic, little is known about the boundary conditions for the occurrence of VDAC. In three experiments, we investigated the role of response conflicts and of spatial uncertainty regarding the target location, in both the training and test phases, for the emergence of value-driven effects. To this end, we varied the occurrence of a response conflict, the search components, and the type of task in both phases. In the training phase, value-driven effects were observed mainly when the location of the value-associated target was not predictable and a response conflict was present. Value-driven effects also occurred only if participants had not yet learned to deal with a response conflict. However, introducing a response conflict while the color-value association was being learned seemed to prevent attention from being captured by this feature in a subsequent test. The study provides new insights not only into the boundary conditions of the learning of value associations, but also into the learning of cognitive control.
Affiliation(s)
- Annabelle Walle
- Department of Psychology, University of Konstanz, 78457, Konstanz, Germany
- Michel D Druey
- Department of Psychology, University of Konstanz, 78457, Konstanz, Germany
- Ronald Hübner
- Department of Psychology, University of Konstanz, 78457, Konstanz, Germany

41
Liang X, Wu Z, Yue Z. The association of targets modulates the search efficiency in multitarget searches. Atten Percept Psychophys 2023; 85:1888-1904. [PMID: 37568033] [DOI: 10.3758/s13414-023-02771-9]
Abstract
Previous studies have found that distractors can affect visual search efficiency when they are associated with the target in a single-target search. However, multitarget searches are frequently necessary in daily life. In the present study, we examined how the association between targets in a multitarget search affected performance when searching for two targets simultaneously (Experiment 1). In addition, we explored whether the association affected switch cost (Experiment 2) and preparation cost (Experiment 3). Participants were required to learn associations between different colors or shapes and then performed feature search and conjunction search tasks. Across all experiments, search efficiency in the conjunction search was significantly higher under the associated condition than under the neutral condition. Similarly, response times in the conjunction search were significantly shorter in the associated condition than in the neutral condition in Experiments 1 and 2. However, in Experiment 3, response times were longer in the associated condition than in the neutral condition. These results indicate that the association between targets can improve the efficiency of multitarget searches. Furthermore, associations can reduce the time spent searching for individual targets and the switch cost; however, the preparation cost increases.
Affiliation(s)
- Xinxian Liang
- Department of Psychology, Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Sun Yat-sen University, Guangzhou, 510006, China
- Zehua Wu
- Department of Psychology, Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Sun Yat-sen University, Guangzhou, 510006, China
- Zhenzhu Yue
- Department of Psychology, Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Sun Yat-sen University, Guangzhou, 510006, China

42
Li Y, Ye B, Bao Y. The same phase creates a unique visual rhythm unifying moving elements in time. Psych J 2023; 12:500-506. [PMID: 36916772] [DOI: 10.1002/pchj.636]
Abstract
Attention can be selectively tuned to particular features at different spatial locations or objects. The deployment of attention can be guided by properties such as color and orientation, which serve as guiding features. What might such guiding features be for visual stimuli under dynamic rhythmic conditions? We asked specifically which parameters attract attention when perceiving a visual rhythm. We used a visual search paradigm in which a dynamic search display consisted of vertically "bouncing balls" with regular rhythms. The search target was defined by a unique visual rhythm (i.e., with either a shorter or longer period) among rhythmic distractors sharing an identical period. We modulated the amplitudes and phases of the distractor balls systematically. The results showed that the crucial factor was the phase, not the amplitude. If the phase is violated, the target suddenly "pops out" as an "oddball," showing an efficient parallel search. The findings indicate the essential role of phase, in conjunction with amplitude and period, for visual rhythm perception. Furthermore, moving objects with a higher frequency component were also found to be more salient.
Affiliation(s)
- Yao Li
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- Peking-Tsinghua Center for Life Sciences, Peking University, Beijing, China
- Biyi Ye
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- Yan Bao
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- Institute of Medical Psychology, Ludwig Maximilian University Munich, Munich, Germany
- Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China

43
Xu P, Wang M, Zhang T, Zhang J, Jin Z, Li L. The role of middle frontal gyrus in working memory retrieval by the effect of target detection tasks: a simultaneous EEG-fMRI study. Brain Struct Funct 2023. [PMID: 37477712] [DOI: 10.1007/s00429-023-02687-y]
Abstract
Maintained working memory (WM) representations have been shown to influence visual target detection, while the effect of the visual target detection process on WM retrieval remains largely unknown. In the current research, we used a dual paradigm combining a visual target detection task and a delayed matching task (DMT), with the following four conditions: the match condition (the DMT target contained the detection target), the mismatch condition (the DMT target contained the detection distractor), the neutral condition (only the detection target was presented), and the catch condition (only the DMT target was presented). Twenty-six subjects were recruited, and simultaneous EEG-fMRI data were collected. Behaviorally, faster responses were found in the mismatch condition than in the match and neutral conditions. The EEG data showed a greater parieto-occipital N1 component in the mismatch condition compared to the neutral condition, and a greater frontal N2 component in the match condition than in the mismatch condition. Moreover, compared to the match and neutral conditions, weaker activations of the bilateral middle frontal gyrus (MFG) were observed in the mismatch condition. Representational similarity analysis (RSA) revealed significant differences in the representational patterns of the bilateral MFG between the mismatch and match conditions, as well as in the representational patterns of the left MFG between the mismatch and neutral conditions. Additionally, the left MFG may be the brain source of the N1 component in the mismatch condition. These findings suggest that a mismatch between the DMT target and the detection target affects early attention allocation and attentional control in WM retrieval, and that the MFG may play an important role in how target detection affects WM retrieval. In conclusion, our work deepens the understanding of the neural mechanisms by which visual target detection affects WM retrieval.
Affiliation(s)
- Ping Xu
- MOE Key Laboratory for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Min Wang
- Bioinformatics and BioMedical Bigdata Mining Laboratory, School of Big Health, Guizhou Medical University, Guiyang, China
- Tingting Zhang
- MOE Key Laboratory for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Junjun Zhang
- MOE Key Laboratory for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Zhenlan Jin
- MOE Key Laboratory for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China
- Ling Li
- MOE Key Laboratory for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, Center for Psychiatry and Psychology, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China

44
Tanabe-Ishibashi A, Ishibashi R, Hatori Y. Control of bottom-up attention in scene cognition contributes to visual working memory performance. Atten Percept Psychophys 2023. [PMID: 37337017] [DOI: 10.3758/s13414-023-02740-2]
Abstract
Several studies have investigated the relationship between working memory and attention. However, most of the relevant studies so far have investigated top-down attention; only a few have examined possible interactions between bottom-up attention and visual working memory. In the present study, we focused on the visual saliency of different parts of pictures as an index of the degree to which bottom-up attention is drawn towards each of them. We administered the Picture Span Test (PST) to investigate whether salient parts of pictures can influence the performance of visual working memory. The task required participants to judge the semantic congruency of objects in pictures and to remember specific parts of the pictures. In Experiment 1, we calculated a saliency map for the PST stimuli and found that salient but task-irrelevant parts of pictures could evoke intrusion errors. In Experiment 2, we demonstrated that longer gazing time at target areas results in a higher probability of correct recognition. In addition, frequent gaze fixation and high normalized scan-path saliency values in task-irrelevant areas were associated with intrusion errors. These results suggest that visual information processed by bottom-up attention may affect working memory.
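The normalized scan-path saliency (NSS) values mentioned in this abstract are a standard way to score a saliency map against observed fixations: the map is z-scored, then averaged at the fixated pixels. A small sketch of that metric (the study's exact pipeline is not given here, so the function name and array layout are illustrative):

```python
import numpy as np

def normalized_scanpath_saliency(saliency_map, fixations):
    """Mean z-scored saliency at fixated pixels; higher values mean
    fixations landed on more salient parts of the map (0 = chance).

    saliency_map: 2-D array; fixations: iterable of (row, col) indices.
    """
    z = (saliency_map - saliency_map.mean()) / saliency_map.std()
    return float(np.mean([z[r, c] for r, c in fixations]))

# Toy map with one bright pixel; a fixation on it scores well above chance.
toy_map = np.array([[0.0, 0.0], [0.0, 10.0]])
print(normalized_scanpath_saliency(toy_map, [(1, 1)]))  # sqrt(3) ≈ 1.732
```

High NSS in task-irrelevant regions, as reported above, means gaze was drawn to salient areas that were not targets.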
Affiliation(s)
- Azumi Tanabe-Ishibashi
- International Research Institute of Disaster Science, Tohoku University, Tohoku, Miyagi, Japan
- Institute of Development, Aging and Cancer, Tohoku University, Miyagi, Japan
- Ryo Ishibashi
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Osaka, Japan
- Yasuhiro Hatori
- Research Institute of Electrical Communication, Tohoku University, Miyagi, Japan

45
Isasi-Isasmendi A, Andrews C, Flecken M, Laka I, Daum MM, Meyer M, Bickel B, Sauppe S. The Agent Preference in Visual Event Apprehension. Open Mind (Camb) 2023; 7:240-282. [PMID: 37416075] [PMCID: PMC10320828] [DOI: 10.1162/opmi_a_00083]
Abstract
A central aspect of human experience and communication is understanding events in terms of agent ("doer") and patient ("undergoer" of action) roles. These event roles are rooted in general cognition and prominently encoded in language, with agents appearing as more salient and preferred over patients. An unresolved question is whether this preference for agents already operates during apprehension, that is, the earliest stage of event processing, and if so, whether the effect persists across different animacy configurations and task demands. Here we contrast event apprehension in two tasks and two languages that encode agents differently: Basque, a language that explicitly case-marks agents ('ergative'), and Spanish, which does not mark agents. In two brief exposure experiments, native Basque and Spanish speakers saw pictures for only 300 ms, and subsequently described them or answered probe questions about them. We compared eye fixations and behavioral correlates of event role extraction with Bayesian regression. Agents received more attention and were recognized better across languages and tasks. At the same time, language and task demands affected the attention to agents. Our findings show that a general preference for agents exists in event apprehension, but it can be modulated by task and language demands.
Affiliation(s)
- Arrate Isasi-Isasmendi
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Caroline Andrews
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Monique Flecken
- Department of Linguistics, Amsterdam Centre for Language and Communication, University of Amsterdam, Amsterdam, The Netherlands
- Itziar Laka
- Department of Linguistics and Basque Studies, University of the Basque Country (UPV/EHU), Leioa, Spain
- Moritz M. Daum
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Department of Psychology, University of Zurich, Zurich, Switzerland
- Jacobs Center for Productive Youth Development, University of Zurich, Zurich, Switzerland
- Martin Meyer
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Cognitive Psychology Unit, University of Klagenfurt, Klagenfurt, Austria
- Balthasar Bickel
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland
- Sebastian Sauppe
- Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich, Switzerland

46
Narhi-Martinez W, Chen J, Golomb JD. Probabilistic visual attentional guidance triggers "feature avoidance" response errors. J Exp Psychol Hum Percept Perform 2023; 49:802-820. [PMID: 37141038] [PMCID: PMC10320923] [DOI: 10.1037/xhp0001095]
Abstract
Spatial attention affects not only where we look, but also what we perceive and remember in attended and unattended locations. Previous work has shown that manipulating attention via top-down cues or bottom-up capture leads to characteristic patterns of feature errors. Here we investigated whether experience-driven attentional guidance, and probabilistic attentional guidance more generally, leads to similar feature errors. We conducted a series of pre-registered experiments employing a learned spatial probability or a probabilistic pre-cue; all experiments involved reporting the color of one of four simultaneously presented stimuli using a continuous response modality. When the probabilistic cues guided attention to an invalid (nontarget) location, participants were less likely to report the target color, as expected. But strikingly, their errors tended to cluster around a nontarget color opposite the color of the invalidly cued nontarget. This "feature avoidance" was found for both experience-driven and top-down probabilistic cues, and appears to be the product of a strategic, but possibly subconscious, behavior, occurring when information about the features and/or feature-location bindings outside the focus of attention is limited. The findings emphasize the importance of considering how different types of attentional guidance can exert different effects on feature perception and memory reports.
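With a continuous (color-wheel) response modality like the one described, raw report errors are typically wrapped into a signed angular difference before looking for clustering around target or nontarget colors. A small sketch of that wrapping step (a hypothetical helper, not code from the paper):

```python
def signed_color_error(report_deg, reference_deg):
    """Angular difference report - reference, wrapped to [-180, 180).

    Useful for checking whether continuous-report errors cluster near a
    target or nontarget color on a 360-degree color wheel.
    """
    return (report_deg - reference_deg + 180.0) % 360.0 - 180.0

print(signed_color_error(10, 350))   # 20.0 (wraps across 0 degrees)
print(signed_color_error(350, 10))   # -20.0
```

"Feature avoidance" would then show up as a dip, rather than a peak, of these wrapped errors near the invalidly-cued nontarget's color.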
47
Kunar MA, Watson DG. Framing the fallibility of Computer-Aided Detection aids cancer detection. Cogn Res Princ Implic 2023; 8:30. [PMID: 37222932] [PMCID: PMC10209366] [DOI: 10.1186/s41235-023-00485-y]
Abstract
Computer-Aided Detection (CAD) has been proposed to help operators search for cancers in mammograms. Previous studies have found that although accurate CAD leads to an improvement in cancer detection, inaccurate CAD leads to an increase in both missed cancers and false alarms. This is known as the over-reliance effect. We investigated whether providing framing statements of CAD fallibility could keep the benefits of CAD while reducing over-reliance. In Experiment 1, participants were told about the benefits or costs of CAD, prior to the experiment. Experiment 2 was similar, except that participants were given a stronger warning and instruction set in relation to the costs of CAD. The results showed that although there was no effect of framing in Experiment 1, a stronger message in Experiment 2 led to a reduction in the over-reliance effect. A similar result was found in Experiment 3 where the target had a lower prevalence. The results show that although the presence of CAD can result in over-reliance on the technology, these effects can be mitigated by framing and instruction sets in relation to CAD fallibility.
Affiliation(s)
- Melina A Kunar
- Department of Psychology, The University of Warwick, Coventry, CV4 7AL, UK
- Derrick G Watson
- Department of Psychology, The University of Warwick, Coventry, CV4 7AL, UK

48
Yu X, Zhou Z, Becker SI, Boettcher SEP, Geng JJ. Good-enough attentional guidance. Trends Cogn Sci 2023; 27:391-403. [PMID: 36841692] [DOI: 10.1016/j.tics.2023.01.007]
Abstract
Theories of attention posit that attentional guidance operates on information held in a target template within memory. The template is often thought to contain veridical target features, akin to a photograph, and to guide attention to objects that match the exact target features. However, recent evidence suggests that attentional guidance is highly flexible and often guided by non-veridical features, a subset of features, or only associated features. We integrate these findings and propose that attentional guidance maximizes search efficiency based on a 'good-enough' principle to rapidly localize candidate target objects. Candidates are then serially interrogated to make target-match decisions using more precise information. We suggest that good-enough guidance optimizes the speed-accuracy-effort trade-offs inherent in each stage of visual search.
Affiliation(s)
- Xinger Yu
- Center for Mind and Brain, University of California Davis, Davis, CA, USA; Department of Psychology, University of California Davis, Davis, CA, USA
- Zhiheng Zhou
- Center for Mind and Brain, University of California Davis, Davis, CA, USA
- Stefanie I Becker
- School of Psychology, University of Queensland, Brisbane, QLD, Australia
- Joy J Geng
- Center for Mind and Brain, University of California Davis, Davis, CA, USA; Department of Psychology, University of California Davis, Davis, CA, USA

49
Kang T, Luo S, Wang P, Tang T. Influence of figure information on attention distribution in Chinese landscape painting. Heliyon 2023; 9:e15036. [PMID: 37082642] [PMCID: PMC10112017] [DOI: 10.1016/j.heliyon.2023.e15036]
Abstract
Chinese landscape painting is a complex form of visual art. Most researchers have focused on the creation process and the intentions expressed in landscape painting. However, the appreciation of works of visual art is often related to cognitive processing and is influenced by the content of the works. This study hypothesized that figure information in landscape paintings can guide the allocation of attention and affect cognitive processing. To test this hypothesis, vertical landscape paintings of the Song and Ming Dynasties were used as experimental materials, and eye-movement technology was applied to record and compare eye-movement indexes for landscape paintings with and without figures. The results showed that the dwell time for landscape paintings with figures was significantly longer than for those without figures, and the dwell time in figure interest areas was significantly longer than in interest areas without figures. However, the duration of the first three fixations in figure interest areas was significantly shorter than in interest areas without figures, and there was no difference in saccade counts or in the distribution of fixation points between the two types of landscape painting. This suggests that figures in landscape paintings attract people's attention but do not have attentional priority. Meanwhile, people tend toward holistic processing when viewing vertical landscape paintings, and this tendency is not influenced by figure information.
50
Barker M, Rehrig G, Ferreira F. Speakers prioritise affordance-based object semantics in scene descriptions. Language, Cognition and Neuroscience 2023; 38:1045-1067. [PMID: 37841974] [PMCID: PMC10572038] [DOI: 10.1080/23273798.2023.2190136]
Abstract
This work investigates the linearisation strategies used by speakers when describing real-world scenes, to better understand production plans for multi-utterance sequences. In this study, 30 participants described real-world scenes aloud. To investigate which semantic features of scenes predict order of mention, we quantified three features (meaning, graspability, and interactability) using two techniques (whole-object ratings and feature map values). We found that object-level semantic features, namely affordance-based ones, predicted order of mention in a scene description task. Our findings provide the first evidence for an object-related semantic feature that guides linguistic ordering decisions and offer theoretical support for the role of object semantics in scene viewing and description.
Affiliation(s)
- M. Barker
- Department of Psychology, University of California, Davis
- G. Rehrig
- Department of Psychology, University of California, Davis
- F. Ferreira
- Department of Psychology, University of California, Davis