1. Luo Y, Li A, Soman D, Zhao J. A meta-analytic cognitive framework of nudge and sludge. R Soc Open Sci 2023; 10:230053. PMID: 38034123; PMCID: PMC10685127; DOI: 10.1098/rsos.230053.
Abstract
Public and private institutions have increasingly developed interventions to alter people's behaviour in predictable ways without limiting freedom of choice or significantly changing the incentive structure. A nudge is an intervention that facilitates actions by minimizing friction, while a sludge is an intervention that inhibits actions by increasing friction; the cognitive mechanisms underlying these interventions, however, remain largely unknown. Here, we develop a novel cognitive framework by organizing these interventions along six cognitive processes: attention, perception, memory, effort, intrinsic motivation and extrinsic motivation. In addition, we conduct a meta-analysis of field experiments (i.e. randomized controlled trials) with real behavioural measures (n = 184 papers, k = 184 observations, N = 2 245 373 participants) from 2008 to 2021 to estimate the effect size of the interventions targeting each cognitive process. Our findings show that interventions changing effort are more effective than interventions changing intrinsic motivation, and that nudge and sludge interventions have similar effect sizes; these results should nonetheless be interpreted with caution due to potential publication bias. This new meta-analytic framework provides cognitive principles for organizing nudge and sludge with corresponding behavioural impacts, and the insights gained from it can inform the design and development of future interventions based on cognitive insights.
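The central quantity in this abstract is a pooled effect size estimated across heterogeneous field experiments. Below is a minimal sketch of how such pooling is commonly done; it uses a generic random-effects model with the DerSimonian-Laird estimator and hypothetical effect sizes, and is not the authors' actual analysis pipeline.

```python
import numpy as np

def dersimonian_laird(d, var_d):
    """Pool per-study effect sizes (e.g., Cohen's d) under a random-effects
    model, estimating between-study variance with DerSimonian-Laird."""
    d, var_d = np.asarray(d, float), np.asarray(var_d, float)
    w = 1.0 / var_d                           # inverse-variance (fixed-effect) weights
    d_fixed = np.sum(w * d) / np.sum(w)       # fixed-effect pooled estimate
    q = np.sum(w * (d - d_fixed) ** 2)        # Cochran's Q (heterogeneity)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)   # between-study variance estimate
    w_re = 1.0 / (var_d + tau2)               # random-effects weights
    d_pooled = np.sum(w_re * d) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return d_pooled, (d_pooled - 1.96 * se, d_pooled + 1.96 * se)

# Hypothetical per-study effects for interventions targeting one process:
d_pooled, ci = dersimonian_laird(d=[0.21, 0.08, 0.35, 0.15],
                                 var_d=[0.004, 0.010, 0.006, 0.003])
print(f"pooled d = {d_pooled:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```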
Affiliations
- Yu Luo: Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
- Andrew Li: Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
- Dilip Soman: Rotman School of Management, University of Toronto, Toronto, Canada
- Jiaying Zhao: Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada; Institute for Resources, Environment and Sustainability, University of British Columbia, Vancouver, British Columbia, Canada
2. Schmidt F, Schürmann L, Haberkamp A. Animal eMotion, or the emotional evaluation of moving animals. Cogn Emot 2022; 36:1132-1148. PMID: 35749075; DOI: 10.1080/02699931.2022.2087600.
Abstract
Responding adequately to the behaviour of human and non-human animals in our environment has been crucial for our survival. This is also reflected in our exceptional capacity to detect and interpret biological motion signals. However, even though our emotions have specifically emerged as automatic adaptive responses to such vital stimuli, few studies have investigated the influence of biological motion on emotional evaluations. Here, we test how the motion of animals affects emotional judgements by contrasting static animal images and videos. We investigated this question (1) in non-fearful observers across many different animals, and (2) in observers afraid of particular animals across four types of animals, including the feared ones. In line with previous studies, we find an idiosyncratic pattern of evoked emotions across different types of animals. These emotions can be explained to different extents by regression models based on relevant predictor variables (e.g. familiarity, dangerousness). Additionally, our findings show a boosting effect of motion on emotional evaluations across all animals, with an additional increase in (negative) emotions for moving feared animals (except snakes). We discuss the implications of our results for experimental and clinical research and applications, highlighting the importance of experiments with dynamic and ecologically valid stimuli.
Affiliations
- Filipp Schmidt: Experimental Psychology, Justus-Liebig-University Giessen, Giessen, Germany
- Lisa Schürmann: Clinical Psychology and Psychotherapy, Philipps-University Marburg, Marburg, Germany
- Anke Haberkamp: Clinical Psychology and Psychotherapy, Philipps-University Marburg, Marburg, Germany
3. Holm SK, Häikiö T, Olli K, Kaakinen JK. Eye movements during dynamic scene viewing are affected by visual attention skills and events of the scene: Evidence from first-person shooter gameplay videos. J Eye Mov Res 2021; 14. PMID: 34745442; PMCID: PMC8566014; DOI: 10.16910/jemr.14.2.3.
Abstract
The role of individual differences during dynamic scene viewing was explored. Participants (N=38) watched a gameplay video of a first-person shooter (FPS) videogame while their eye movements were recorded. In addition, the participants' skills in three visual attention tasks (attentional blink, visual search, and multiple object tracking) were assessed. The results showed that individual differences in visual attention tasks were associated with eye movement patterns observed during viewing of the gameplay video. The differences were noted in four eye movement measures: number of fixations, fixation durations, saccade amplitudes and fixation distances from the center of the screen. The individual differences emerged during specific events of the video as well as across the video as a whole. The results highlight that an unedited, fast-paced and cluttered dynamic scene can bring about individual differences in dynamic scene viewing.
4. Nazaré CJ, Oliveira AM. Effects of audiovisual presentations on visual localization errors: One or several multisensory mechanisms? Multisens Res 2021; 34:1-35. PMID: 33882452; DOI: 10.1163/22134808-bja10048.
Abstract
The present study examines the extent to which temporal and spatial properties of sound modulate visual motion processing in spatial localization tasks. Participants were asked to locate the place at which a moving visual target unexpectedly vanished. Across different tasks, accompanying sounds were factorially varied within subjects as to their onset and offset times and/or positions relative to visual motion. Sound onset had no effect on the localization error. Sound offset was shown to modulate the perceived visual offset location, both for temporal and spatial disparities. This modulation did not conform to attraction toward the timing or location of the sounds but, demonstrably in the case of temporal disparities, to bimodal enhancement instead. Favorable indications of a contextual effect of audiovisual presentations on interspersed visual-only trials were also found. The short sound-leading offset asynchrony had benefits equivalent to audiovisual offset synchrony, suggestive of the involvement of early-level mechanisms, constrained by a temporal window, under these conditions. Yet, we tentatively hypothesize that the whole of the results, and how they compare with previous studies, requires the contribution of additional mechanisms, including learning-detection of auditory-visual associations and cross-sensory spread of endogenous attention.
Affiliations
- Cristina Jordão Nazaré: Instituto Politécnico de Coimbra, ESTESC - Coimbra Health School, Audiologia, Coimbra, Portugal
5. Vaina LM, Calabro FJ, Samal A, Rana KD, Mamashli F, Khan S, Hämäläinen M, Ahlfors SP, Ahveninen J. Auditory cues facilitate object movement processing in human extrastriate visual cortex during simulated self-motion: A pilot study. Brain Res 2021; 1765:147489. PMID: 33882297; DOI: 10.1016/j.brainres.2021.147489.
Abstract
Visual segregation of moving objects is a considerable computational challenge when the observer moves through space. Recent psychophysical studies suggest that directionally congruent, moving auditory cues can substantially improve the parsing of object motion in such settings, but the exact brain mechanisms and visual processing stages that mediate these effects are still incompletely known. Here, we utilized multivariate pattern analyses (MVPA) of MRI-informed magnetoencephalography (MEG) source estimates to examine how crossmodal auditory cues facilitate motion detection during the observer's self-motion. During MEG recordings, participants identified a target object that moved either forward or backward within a visual scene that included nine identically textured objects simulating forward observer translation. Auditory motion cues (1) improved the behavioral accuracy of target localization, (2) significantly modulated the MEG source activity in area V2 and the human middle temporal complex (hMT+), and (3) increased the accuracy with which the target movement direction could be decoded from hMT+ activity using MVPA. The increase of decoding accuracy by auditory cues in hMT+ remained significant when superior temporal activations in or near auditory cortices were regressed out from the hMT+ source activity to control for source estimation biases caused by point spread. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow in the human extrastriate visual cortex can be facilitated by crossmodal influences from the auditory system.
Affiliations
- Lucia M Vaina: Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Neurology, Harvard Medical School, Massachusetts General Hospital and Brigham and Women's Hospital, MA, USA
- Finnegan J Calabro: Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA; Department of Psychiatry and Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Abhisek Samal: Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Kunjan D Rana: Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Fahimeh Mamashli: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Sheraz Khan: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Matti Hämäläinen: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Seppo P Ahlfors: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Jyrki Ahveninen: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
6. Greene CM, Broughan J, Hanlon A, Keane S, Hanrahan S, Kerr S, Rooney B. Visual search in 3D: Effects of monoscopic and stereoscopic cues to depth on the validity of feature integration theory and perceptual load theory. Front Psychol 2021; 12:596511. PMID: 33815197; PMCID: PMC8009999; DOI: 10.3389/fpsyg.2021.596511.
Abstract
Previous research has successfully used feature integration theory to operationalise the predictions of perceptual load theory, while simultaneously testing the predictions of both models. Building on this work, we test the extent to which these models hold up in a 3D world. In two experiments, participants responded to a target stimulus within an array of shapes whose apparent depth was manipulated using a combination of monoscopic and stereoscopic cues. The search task was designed to test the predictions of (a) feature integration theory, as the target was identified by a single feature or a conjunction of features and embedded in search arrays of varying size, and (b) perceptual load theory, as the task included congruent and incongruent distractors presented alongside search tasks imposing high or low perceptual load. Findings from both experiments upheld the predictions of feature integration theory, regardless of 2D/3D condition. Longer search times in conditions with a combination of monoscopic and stereoscopic depth cues suggest that binding features into three-dimensional objects requires greater attentional effort. This additional effort should have implications for perceptual load theory, yet our findings did not uphold its predictions; the effect of incongruent distractors did not differ between conjunction search trials (conceptualised as high perceptual load) and feature search trials (low perceptual load). Individual differences in susceptibility to the effects of perceptual load were evident and likely explain the absence of load effects. Overall, our findings suggest that feature integration theory may be useful for predicting attentional performance in a 3D world.
Affiliations
- Ciara M Greene: School of Psychology, University College Dublin, Dublin, Ireland
- John Broughan: School of Psychology, University College Dublin, Dublin, Ireland
- Anthony Hanlon: School of Psychology, University College Dublin, Dublin, Ireland
- Seán Keane: School of Psychology, University College Dublin, Dublin, Ireland
- Sophia Hanrahan: School of Psychology, University College Dublin, Dublin, Ireland
- Stephen Kerr: School of Psychology, University College Dublin, Dublin, Ireland
- Brendan Rooney: School of Psychology, University College Dublin, Dublin, Ireland
7.
Abstract
Previous studies have demonstrated a complex relationship between ensemble perception and outlier detection. We presented two arrays of heterogeneously oriented stimulus bars with different mean orientations and/or a bar with an outlier orientation, asking participants to discriminate the mean orientations or detect the outlier. Perceptual learning was found in every case, with improved performance accuracy and speeded responses. Testing for improved accuracy through cross-task transfer, we found considerable transfer from training outlier detection to mean discrimination performance, and none in the opposite direction. Implicit learning in terms of increased accuracy was not found in either direction when participants performed one task while the second task's stimulus features were present. Reaction time improvement was found to transfer in all cases. This study adds to the already broad knowledge concerning perceptual learning and cross-task transfer of training effects.
Affiliations
- Shaul Hochstein: ELSC Safra Brain Research Center and Life Sciences Institute, Hebrew University, Jerusalem, Israel
- Marina Pavlovskaya: Lowenstein Rehabilitation Hospital and Tel Aviv University, Tel Aviv, Israel
8. Kristjánsson T, Draschkow D, Pálsson Á, Haraldsson D, Jónsson PÖ, Kristjánsson Á. Moving foraging into three dimensions: Feature- versus conjunction-based foraging in virtual reality. Q J Exp Psychol (Hove) 2020; 75:313-327. PMID: 32519926; DOI: 10.1177/1747021820937020.
Abstract
Visual attention evolved in a three-dimensional (3D) world, yet studies on human attention in three dimensions are sparse. Here we present findings from a human foraging study in immersive 3D virtual reality. We used a foraging task introduced in Kristjánsson et al. to examine how well their findings generalise to more naturalistic settings. The second goal was to examine what effect the motion of targets and distractors has on inter-target times (ITTs), run patterns, and foraging organisation. Observers foraged for 50 targets among 50 distractors in four different conditions. Targets were distinguished from distractors by either a single feature (feature foraging) or a conjunction of features (conjunction foraging). Furthermore, those conditions were performed both with static and moving targets and distractors. Our results replicate previous foraging studies in many aspects, with constant ITTs during a "cruise-phase" within foraging trials and response time peaks at the end of foraging trials. Some key differences emerged, however, such as more frequent switches between target types during conjunction foraging than previously seen and a lack of clear mid-peaks during conjunction foraging, possibly reflecting that differences between feature and conjunction processing are smaller within 3D environments. Observers initiated their foraging in the bottom part of the visual field and motion did not have much of an effect on selection times between different targets (ITTs) or run behaviour patterns except for the end-peaks. Our results cast new light upon visual attention in 3D environments and highlight how 3D virtual reality studies can provide important extensions to two-dimensional studies of visual attention.
Affiliations
- Tómas Kristjánsson: Icelandic Vision Laboratory, School of Health Sciences, University of Iceland, Reykjavík, Iceland
- Dejan Draschkow: Department of Psychiatry, Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK
- Ágúst Pálsson: Icelandic Vision Laboratory, School of Health Sciences, University of Iceland, Reykjavík, Iceland
- Davíð Haraldsson: Icelandic Vision Laboratory, School of Health Sciences, University of Iceland, Reykjavík, Iceland
- Pétur Örn Jónsson: Icelandic Vision Laboratory, School of Health Sciences, University of Iceland, Reykjavík, Iceland
- Árni Kristjánsson: Icelandic Vision Laboratory, School of Health Sciences, University of Iceland, Reykjavík, Iceland; School of Psychology, National Research University Higher School of Economics, Moscow, Russian Federation
9. Yang YH, Wolfe JM. Is apparent instability a guiding feature in visual search? Vis Cogn 2020; 28:218-238. PMID: 33100884; PMCID: PMC7577071; DOI: 10.1080/13506285.2020.1779892.
Abstract
Humans are quick to notice if an object is unstable. Does that assessment require attention, or can instability serve as a preattentive feature that guides the deployment of attention? This paper describes a series of visual search experiments designed to address this question. Experiment 1 shows that less stable images among more stable images are found more efficiently than more stable among less stable, a search asymmetry that supports guidance by instability. Experiment 2 shows efficient search but no search asymmetry when the orientation of the objects is removed as a confound. Experiment 3 independently varies the orientation cues and perceived stability and finds a clear main effect of apparent stability. Experiment 4 shows converging evidence for a role of stability using different stimuli that lack an orientation cue; however, here search for both stable and unstable targets is inefficient. Experiment 5 is a control for Experiment 4, showing that the stability effect in Experiment 4 is not a simple side-effect of the geometry of the stimuli. On balance, the data support a role for instability in the guidance of attention in visual search.
Affiliations
- Yung-Hao Yang: Visual Attention Laboratory, Brigham and Women's Hospital & Harvard Medical School, Boston, MA, USA; Human Information Science Laboratory, NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Japan
- Jeremy M Wolfe: Visual Attention Laboratory, Brigham and Women's Hospital & Harvard Medical School, Boston, MA, USA
10. Wolfe JM. Forty years after feature integration theory: An introduction to the special issue in honor of the contributions of Anne Treisman. Atten Percept Psychophys 2020; 82:1-6. PMID: 31950427; PMCID: PMC7039157; DOI: 10.3758/s13414-019-01966-3.
Affiliations
- Jeremy M Wolfe: Professor of Ophthalmology & Radiology, Harvard Medical School, Boston, MA 02115, USA; Visual Attention Lab, Department of Surgery, Brigham & Women's Hospital, Cambridge, MA 02139, USA
11. Wolfe JM, Utochkin IS. What is a preattentive feature? Curr Opin Psychol 2019; 29:19-26. PMID: 30472539; PMCID: PMC6513732; DOI: 10.1016/j.copsyc.2018.11.005.
Abstract
The concept of a preattentive feature has been central to vision and attention research for about half a century. A preattentive feature is a feature that guides attention in visual search and that cannot be decomposed into simpler features. While that definition seems straightforward, there is no simple diagnostic test that infallibly identifies a preattentive feature. This paper briefly reviews the criteria that have been proposed and illustrates some of the difficulties of definition.
Affiliations
- Jeremy M Wolfe (corresponding author): Visual Attention Lab, Department of Surgery, Brigham & Women's Hospital; Departments of Ophthalmology and Radiology, Harvard Medical School, 64 Sidney St., Suite 170, Cambridge, MA 02139-4170, USA
- Igor S Utochkin: National Research University Higher School of Economics, Armyansky per. 4, 101000 Moscow, Russian Federation
12.
Abstract
In a series of four experiments, standard visual search was used to explore whether the onset of illusory motion pre-attentively guides vision in the same way that the onset of real motion is known to do. Participants searched for target stimuli based on Akiyoshi Kitaoka's classic illusions, configured so that they either did or did not give the subjective impression of illusory motion. Distractor items always contained the same elements as target items, but did not convey a sense of illusory motion. When target items contained illusory motion, they popped out, with flat search slopes that were independent of set size. Search for control items without illusory motion - but with identical structural differences to distractors - was slow and serial in nature (> 200 ms/item). Using a nulling task, we estimated the speed of illusory rotation in our displays to be approximately 2 °/s. Direct comparison of illusory and real-motion targets moving with matched velocity showed that illusory-motion targets were detected more quickly. Blurred target items that conveyed a weak subjective impression of illusory motion gave rise to serial but faster (< 100 ms/item) search than control items. Our behavioral findings of parallel detection across the visual field, together with previous imaging and neurophysiological studies, suggest that relatively early cortical areas play a causal role in the perception of illusory motion. Furthermore, we hope to re-emphasize the way in which visual search can be used as a flexible, objective measure of illusion strength.
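Several abstracts in this list quantify search efficiency as the slope of the Reaction Time × Set Size function (e.g., "> 200 ms/item" and "< 100 ms/item" above). A minimal sketch of that computation follows, using hypothetical mean RTs rather than data from any cited study.

```python
import numpy as np

# Hypothetical mean correct RTs (ms) at four display set sizes
set_sizes = np.array([4, 8, 12, 16])
rt_ms = np.array([1300, 2150, 3080, 3900])

# Least-squares fit: RT = slope * set_size + intercept
slope, intercept = np.polyfit(set_sizes, rt_ms, 1)
print(f"search slope = {slope:.0f} ms/item, intercept = {intercept:.0f} ms")

# Convention: slopes near 0 ms/item indicate efficient, pop-out search;
# slopes of roughly 20-40 ms/item or more indicate inefficient, serial search.
```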
13. Salience from multiple feature contrast: Evidence from saccade trajectories. Atten Percept Psychophys 2018; 80:677-690. DOI: 10.3758/s13414-017-1480-9.
14. Yang L, Yu R, Lin X, Liu N. Shape representation modulating the effect of motion on visual search performance. Sci Rep 2017; 7:14921. PMID: 29097713; PMCID: PMC5668301; DOI: 10.1038/s41598-017-14999-1.
Abstract
The effect of motion on visual search has been extensively investigated, but the effect of uniform linear motion of the display on search performance in tasks with different target-distractor shape representations has rarely been explored. The present study conducted three visual search experiments. In Experiments 1 and 2, participants finished two search tasks that differed in target-distractor shape representations under static and dynamic conditions. Two tasks with clear and blurred stimuli were performed in Experiment 3. The experiments revealed that target-distractor shape representation modulated the effect of motion on visual search performance. For tasks with low target-distractor shape similarity, motion negatively affected search performance, which is consistent with previous studies. However, for tasks with high target-distractor shape similarity, if the target differed from distractors in that a gap with a linear contour was added to the target, while the corresponding part of the distractors had a curved contour, motion positively influenced search performance. Motion blur contributed to the performance enhancement under dynamic conditions. The findings are useful for understanding the influence of target-distractor shape representation on dynamic visual search performance when the display has uniform linear motion.
Affiliations
- Lindong Yang: Department of Industrial Engineering, Tsinghua University, Beijing, 100084, China
- Ruifeng Yu: Department of Industrial Engineering, Tsinghua University, Beijing, 100084, China
- Xuelian Lin: Department of Industrial Engineering, Tsinghua University, Beijing, 100084, China
- Na Liu: Department of Industrial Engineering, Tsinghua University, Beijing, 100084, China
15. Becker L, Smith D, Schenk T. Investigating the familiarity effect in texture segmentation by means of event-related brain potentials. Vision Res 2017; 140:120-132. DOI: 10.1016/j.visres.2017.08.002.
16.
Abstract
Binocular rivalry is a phenomenon of visual competition in which perception alternates between two monocular images. When the two eyes' images differ only in luminance, observers may perceive shininess, a form of rivalry called binocular luster. Does dichoptic information guide attention in visual search? Wolfe and Franzel (Perception & Psychophysics, 44(1), 81-93, 1988) reported that rivalry could guide attention only weakly, but that luster (shininess) "popped out," producing very shallow Reaction Time (RT) × Set Size functions. In this study, we have revisited the topic with new and improved stimuli. By using a checkerboard pattern in rivalry experiments, we found that search for rivalry can be more efficient (16 ms/item) than for a standard rivalrous grating (30 ms/item). The checkerboard may reduce distracting orientation signals that masked the salience of rivalry between simple orthogonal gratings. Lustrous stimuli did not pop out when potential contrast and luminance artifacts were reduced. However, search efficiency was substantially improved when luster was added to the search target. Both rivalry and luster tasks can produce search asymmetries, as is characteristic of guiding features in search. These results suggest that interocular differences that produce rivalry or luster can guide attention, but these effects are relatively weak and can be hidden by other features, like luminance and orientation, in visual search tasks.
17. Petersik JT. Conceptualizations of short-range and long-range processes in apparent movement. Theory Psychol 1994. DOI: 10.1177/0959354394043007.
Abstract
The two-process distinction in apparent movement posits the existence of competitive 'short-range' and 'long-range' processes. It was proposed to account for the fact that the visual system seems to generate different qualitative percepts under different spatiotemporal conditions of stimulation. It has been shown to be in accord with empirical data from random-dot cinematogram experiments and bistable-percept experiments, as well as with subjective experience. Although it has been developed by some into a model of motion perception, the two-process distinction is perhaps best conceptualized as a metatheoretical perspective rather than a theory. That is, the two-process distinction has guided the development of a number of theories, all of which share the notion of a basic processing dichotomy. The present paper elaborates these ideas and addresses criticisms of the two-process distinction, arguing that they inappropriately test the processes against fixed 'criteria'. It is claimed here that, like all complex perceptual processes, those associated with the two-process distinction cannot be easily isolated by manipulations of individual stimulus parameters in the search for criterion behavior. The nature of perceptual theories is discussed in this context, and the notion of modes of perceiving is used as a conceptualization for the two-process distinction. Consistency between the two-process distinction and other theoretical conceptualizations is shown. Conclusions are drawn and suggestions are made for the future of the two-process distinction.
18. Horowitz TS, Wolfe JM, DiMase JS, Klieger SB. Visual search for type of motion is based on simple motion primitives. Perception 2007; 36:1624-34. DOI: 10.1068/p5683.
Abstract
Can we search for items based on their type of motion? We consider here visual search based on three types of motion: (i) ballistic motion, in which objects move in a straight line until they encounter a display boundary; (ii) random-walk motion, in which objects change direction randomly; (iii) composite motion, in which objects move with random fluctuations around a generally ballistic trajectory. The asymmetric pattern of search efficiency can be explained by assuming that visual attention is guided by processes sensitive to the presence of linear motion and change in motion. The results do not reveal a more sophisticated ability to segregate items based on the nature of their motion.
19. Nakayama R, Motoyoshi I, Sato T. The roles of non-retinotopic motions in visual search. Front Psychol 2016; 7:840. PMID: 27313560; PMCID: PMC4887493; DOI: 10.3389/fpsyg.2016.00840.
Abstract
In visual search, a moving target among stationary distracters is detected more rapidly and more efficiently than a static target among moving distracters. Here we examined how this search asymmetry depends on motion signals from three distinct coordinate systems—retinal, relative, and spatiotopic (head/body-centered). Our search display consisted of a target element, distracters elements, and a fixation point tracked by observers. Each element was composed of a spatial carrier grating windowed by a Gaussian envelope, and the motions of carriers, windows, and fixation were manipulated independently and used in various combinations to decouple the respective effects of motion coordinate systems on visual search asymmetry. We found that retinal motion hardly contributes to reaction times and search slopes but that relative and spatiotopic motions contribute to them substantially. Results highlight the important roles of non-retinotopic motions for guiding observer attention in visual search.
Affiliations
- Ryohei Nakayama: Department of Psychology, The University of Tokyo, Tokyo, Japan
- Isamu Motoyoshi: Department of Life Sciences, The University of Tokyo, Tokyo, Japan
- Takao Sato: Department of Psychology, The University of Tokyo, Tokyo, Japan
20. Contingent attentional capture across multiple feature dimensions in a temporal search task. Acta Psychol (Amst) 2016; 163:107-13. PMID: 26637932; DOI: 10.1016/j.actpsy.2015.11.009.
Abstract
The present study examined whether attention can be flexibly controlled to monitor two different feature dimensions (shape and color) in a temporal search task. Specifically, we investigated the occurrence of contingent attentional capture (i.e., interference from task-relevant distractors) and resulting set reconfiguration (i.e., enhancement of a single task-relevant set). If observers can restrict searches to a specific value for each relevant feature dimension independently, the capture and reconfiguration effect should only occur when the single relevant distractor in each dimension appears. Participants identified a target letter surrounded by a non-green square or a non-square green frame. The results revealed contingent attentional capture, as target identification accuracy was lower when the distractor contained a target-defining feature than when it contained a nontarget feature. Resulting set reconfiguration was also obtained, in that accuracy was higher when the current target's feature (e.g., shape) corresponded to the defining feature of the present distractor (shape) than when the current target's feature did not match the distractor's feature (color). This enhancement was not due to perceptual priming. The present study demonstrated that the principles of contingent attentional capture and resulting set reconfiguration held even when multiple target feature dimensions were monitored.
21. Wei H, Zuo Q. A biologically inspired neurocomputing circuit for image representation. Neurocomputing 2015. DOI: 10.1016/j.neucom.2015.01.078.
22.
Abstract
A key tenet of feature integration theory, and of related theories such as guided search (GS), is that the binding of basic features requires attention. This would seem to predict that conjunctions of features of objects that have not been attended should not influence search. However, Found (1998) reported that an irrelevant feature (size) improved the efficiency of search for a Color × Orientation conjunction if it was correlated with the other two features across the display, as compared to the case in which size was not correlated with color and orientation features. We examined this issue with somewhat different stimuli. We used triple conjunctions of color, orientation, and shape (e.g., search for a red, vertical, oval-shaped item). This allowed us to manipulate the number of features that each distractor shared with the target (sharing) and to vary the total number of distractor types (and, thus, the number of groups of identical items: grouping). We found that these triple conjunction searches were generally very efficient, producing very shallow Reaction Time × Set Size slopes, consistent with strong guidance by basic features. Nevertheless, both of the variables, sharing and grouping, modulated performance. These influences were not predicted by previous accounts of GS; however, both can be accommodated in a GS framework. Alternatively, it is possible, though not necessary, to see these effects as evidence for "preattentive binding" of conjunctions.
23. Born S, Zimmermann E, Cavanagh P. The spatial profile of mask-induced compression for perception and action. Vision Res 2015; 110:128-41. PMID: 25748882; DOI: 10.1016/j.visres.2015.01.027.
Abstract
Stimuli briefly flashed just before a saccade are perceived closer to the saccade target, a phenomenon known as saccadic compression of space. We have recently demonstrated that similar mislocalizations of flashed stimuli can be observed in the absence of saccades: brief probes were attracted towards a visual reference when followed by a mask. To examine the spatial profile of this new phenomenon of mask-induced compression, here we used a pair of references that draw the probe into the gap between them. Strong compression was found when we masked the probe and presented it following a reference pair, whereas little or no compression occurred for the probe without the reference pair or without the mask. When the two references were arranged vertically, horizontal mislocalizations prevailed. That is, probes presented to the left or right of the vertically arranged references were "drawn in" to be seen aligned with the references. In contrast, when we arranged the two references horizontally, we found vertical compression for stimuli presented above or below the references. Finally, when participants were to indicate the perceived probe location by making an eye movement towards it, saccade landing positions were compressed in a similar fashion as perceptual judgments, confirming the robustness of mask-induced compression. Our findings challenge pure oculomotor accounts of saccadic compression of space that assume a vital role for saccade-specific signals such as corollary discharge or the updating of eye position. Instead, we suggest that saccade- and mask-induced compression both reflect how the visual system deals with disruptions.
Affiliations
- Sabine Born: Centre Attention & Vision, Laboratoire Psychologie de la Perception, Université Paris Descartes, Sorbonne Paris Cité, CNRS UMR 8242, Paris, France
- Eckart Zimmermann: Cognitive Neuroscience, Institute of Neuroscience and Medicine (INM-3), Research Centre Jülich, Jülich, Germany
- Patrick Cavanagh: Centre Attention & Vision, Laboratoire Psychologie de la Perception, Université Paris Descartes, Sorbonne Paris Cité, CNRS UMR 8242, Paris, France
24. Becker SI, Lewis AJ. Oculomotor capture by irrelevant onsets with and without color contrast. Ann N Y Acad Sci 2015; 1339:60-71. PMID: 25708201; DOI: 10.1111/nyas.12685.
Abstract
It is widely known that irrelevant onsets (i.e., items appearing in previously empty locations) can automatically capture attention and attract our gaze. Some studies have shown that onset capture is stronger when the onset distractor matches the target feature, indicating that onset capture can be modulated by feature-based (top-down) tuning to the target. However, it is less clear whether and to what extent the perceptual saliency of the distractor can further modulate this effect. This study examined the effects of target similarity, competition between target and distractor, and bottom-up color contrast on the ability of an onset distractor to capture the gaze, by varying the color (contrast) and stimulus-onset asynchrony of the onset distractor. The results clearly show that competition and feature-based attention modulate capture by the irrelevant onset to a large extent, whereas bottom-up color contrasts do not modulate onset capture. These results indicate the need to revise current accounts of gaze control.
Affiliations
- Stefanie I Becker: School of Psychology, The University of Queensland, Brisbane, Australia; Center for Interdisciplinary Research, Bielefeld University, Bielefeld, Germany
25. McDonnell GP, Mills M, McCuller L, Dodd MD. How does implicit learning of search regularities alter the manner in which you search? Psychol Res 2014; 79:183-93. PMID: 24558017; DOI: 10.1007/s00426-014-0546-8.
26. Wolfe JM. Guided Search 2.0: A revised model of visual search. Psychon Bull Rev 1994; 1:202-38. PMID: 24203471; DOI: 10.3758/bf03200774.
Abstract
An important component of routine visual behavior is the ability to find one item in a visual world filled with other, distracting items. This ability to perform visual search has been the subject of a large body of research in the past 15 years. This paper reviews the visual search literature and presents a model of human search behavior. Built upon the work of Neisser, Treisman, Julesz, and others, the model distinguishes between a preattentive, massively parallel stage that processes information about basic visual features (color, motion, various depth cues, etc.) across large portions of the visual field and a subsequent limited-capacity stage that performs other, more complex operations (e.g., face recognition, reading, object identification) over a limited portion of the visual field. The spatial deployment of the limited-capacity process is under attentional control. The heart of the guided search model is the idea that attentional deployment of limited resources is guided by the output of the earlier parallel processes. Guided Search 2.0 (GS2) is a revision of the model in which virtually all aspects of the model have been made more explicit and/or revised in light of new data. The paper is organized into four parts: Part 1 presents the model and the details of its computer simulation. Part 2 reviews the visual search literature on preattentive processing of basic features and shows how the GS2 simulation reproduces those results. Part 3 reviews the literature on the attentional deployment of limited-capacity processes in conjunction and serial searches and shows how the simulation handles those conditions. Finally, Part 4 deals with shortcomings of the model and unresolved issues.
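The core guidance idea in this abstract is that attention visits items in decreasing order of an activation map that sums bottom-up feature contrast and top-down matches to the target's features. The toy simulation below illustrates that idea; it is a deliberately simplified sketch (two feature channels, equal channel weights, arbitrary noise level), not Wolfe's actual GS2 simulation.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Toy display: 12 items, each with two feature channels (say, "redness"
# and "verticalness"), coded 0 or 1. Item 0 is the target.
items = rng.integers(0, 2, size=(12, 2)).astype(float)
target = np.array([1.0, 1.0])
items[0] = target

# Top-down activation: match between each item's features and the target.
top_down = items @ target

# Bottom-up activation: a crude local-contrast proxy, the mean absolute
# feature difference between an item and all items in the display.
bottom_up = np.array([np.abs(items - it).mean() for it in items])

# Guidance: noisy weighted sum; attention samples items in descending
# order of activation until the target is found.
activation = top_down + bottom_up + rng.normal(0.0, 0.1, size=len(items))
visit_order = np.argsort(activation)[::-1]
n_deployments = int(np.where(visit_order == 0)[0][0]) + 1
print(f"target attended after {n_deployments} deployment(s)")
```

With guidance intact, the target's high activation places it near the front of the visit order regardless of set size; zeroing the top-down term turns the sketch into an unguided, serial sampler, which is one way to see why guidance produces shallow RT × Set Size slopes.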
27. Li H, Bao Y, Pöppel E, Su YH. A unique visual rhythm does not pop out. Cogn Process 2013; 15:93-7. DOI: 10.1007/s10339-013-0581-1.
28.
Abstract
In five experiments, we examined whether the number of items can guide visual focal attention. Observers searched for the target area with the largest (or smallest) number of dots (squares in Experiment 4 and "checkerboards" in Experiment 5) among distractor areas with a smaller (or larger) number of dots. Results of Experiments 1 and 2 show that search efficiency is determined by target-to-distractor dot ratios. In searches where target items contained more dots than did distractor items, ratios over 1.5:1 yielded efficient search. Searches in which target items contained fewer dots than distractor items were harder; here, ratios needed to be lower than 1:2 to yield efficient search. When the areas of the dots and of the squares containing them were fixed, as they were in Experiments 1 and 2, dot density and total dot area increased as dot number increased. Experiment 3 removed the density and area cues by allowing dot size and total dot area to vary. This produced a marked decline in search performance: efficient search now required ratios above 3:1 or below 1:3. By using more realistic and isoluminant stimuli, Experiments 4 and 5 show that guidance by numerosity is fragile. As is found with other features that guide focal attention (e.g., color, orientation, size), the numerosity differences that are able to guide attention by bottom-up signals are much coarser than the differences that can be detected in attended stimuli.
Affiliations
- Ester Reijnen: Department of Psychology, University of Fribourg, Rue de Faucigny 2, 1700 Fribourg, Switzerland
- Jeremy M. Wolfe: Visual Attention Lab, Harvard Medical School & Brigham and Women's Hospital, Cambridge, MA, USA
- Joseph Krummenacher: Department of Psychology, University of Fribourg, Rue de Faucigny 2, 1700 Fribourg, Switzerland
29. Inhibitory guidance in visual search: The case of movement–form conjunctions. Atten Percept Psychophys 2011; 74:269-84. DOI: 10.3758/s13414-011-0240-5.
30. Burr D, Thompson P. Motion psychophysics: 1985–2010. Vision Res 2011; 51:1431-56. PMID: 21324335; DOI: 10.1016/j.visres.2011.02.008.
Affiliations
- David Burr: Department of Psychology, University of Florence, Florence, Italy
31.
Abstract
While basic visual features such as color, motion, and orientation can guide attention, it is likely that additional features guide search for objects in real-world scenes. Recent work has shown that human observers efficiently extract global scene properties such as mean depth or navigability from a brief glance at a single scene (M. R. Greene & A. Oliva, 2009a, 2009b). Can human observers also efficiently search for an image possessing a particular global scene property among other images lacking that property? Observers searched for scene image targets defined by global properties of naturalness, transience, navigability, and mean depth. All produced inefficient search. Search efficiency for a property was not correlated with its classification threshold time from M. R. Greene and A. Oliva (2009b). Differences in search efficiency between properties can be partially explained by low-level visual features that are correlated with the global property. Overall, while global scene properties can be rapidly classified from a single image, it does not appear to be possible to use those properties to guide attention to one of several images.
32. Aydın M, Herzog MH, Oğmen H. Attention modulates spatio-temporal grouping. Vision Res 2011; 51:435-46. PMID: 21266181; DOI: 10.1016/j.visres.2010.12.013.
Abstract
Dynamic stimuli are ubiquitous in natural viewing conditions implying that grouping operations need to operate, not only in space, but also jointly in space and time. Moreover, in natural viewing, attention plays an important role in controlling how resources are allocated. We investigated how attention interacts with spatio-temporal perceptual grouping by using a bistable stimulus, called the Ternus-Pikler display. Ternus-Pikler displays can give rise to two different motion percepts, called Element Motion (EM) and Group Motion (GM), the former dominating at short Inter-Stimulus Intervals (ISIs) and the latter at long ISIs. Our results indicate that GM grouping requires more attentional resources than EM grouping. Different theoretical accounts of perceptual grouping and attention are discussed and evaluated in the light of the current results.
Affiliations
- Murat Aydın: Department of Electrical and Computer Engineering, University of Houston, Houston, TX 77024-4005, USA
33. Calabro FJ, Soto-Faraco S, Vaina LM. Acoustic facilitation of object movement detection during self-motion. Proc Biol Sci 2011; 278:2840-7. PMID: 21307050; DOI: 10.1098/rspb.2010.2757.
Abstract
In humans, as well as most animal species, perception of object motion is critical to successful interaction with the surrounding environment. Yet, as the observer also moves, the retinal projections of the various motion components add to each other and extracting accurate object motion becomes computationally challenging. Recent psychophysical studies have demonstrated that observers use a flow-parsing mechanism to estimate and subtract self-motion from the optic flow field. We investigated whether concurrent acoustic cues for motion can facilitate visual flow parsing, thereby enhancing the detection of moving objects during simulated self-motion. Participants identified an object (the target) that moved either forward or backward within a visual scene containing nine identical textured objects simulating forward observer translation. We found that spatially co-localized, directionally congruent, moving auditory stimuli enhanced object motion detection. Interestingly, subjects who performed poorly on the visual-only task benefited more from the addition of moving auditory stimuli. When auditory stimuli were not co-localized to the visual target, improvements in detection rates were weak. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow can operate on multisensory object representations.
Affiliations
- F J Calabro: Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston, MA 02215, USA
34.
Abstract
Transient spatial attention refers to the automatic selection of a location that is driven by the stimulus rather than a voluntary decision. Apparent motion is an illusory motion created by stationary stimuli that are presented successively at different locations. In this study we explored the effects of transient attention on apparent motion. The motion target presentation was preceded by either valid attentional cues that attract attention to the target location in advance (experiments 1–4), neutral cues that do not indicate a location (experiments 1, 3, and 4), or invalid cues that direct attention to a non-target location (experiment 2). Valid attentional cues usually improve performance in various tasks. Here, however, an attentional impairment was found. Observers' ability to discriminate the direction of motion diminished at the cued location. Analogous results were obtained regardless of cue type: singleton cue (experiment 1), central non-informative cue (experiment 2), or abrupt onset cue (experiment 3). Experiment 4 further demonstrated that reversed apparent motion is less likely with attention. This seemingly counterintuitive attentional degradation of perceived apparent motion is consistent with several recent findings, and together they suggest that transient attention facilitates spatial segregation and temporal integration but impairs spatial integration and temporal segregation.
Affiliations
- Yaffa Yeshurun: Department of Psychology, University of Haifa, Haifa 31905, Israel
- Elisabeth Hein: Laboratoire Psychologie de la Perception, Université Paris Descartes, Sorbonne Paris Cité, Paris, France; CNRS UMR 8158, Paris, France
35. Modelling visual search with the Selective Attention for Identification Model (VS-SAIM): A novel explanation for visual search asymmetries. Cognit Comput 2010; 3:185-205. PMID: 21475687; PMCID: PMC3059816; DOI: 10.1007/s12559-010-9076-x.
Abstract
In earlier work, we developed the Selective Attention for Identification Model (SAIM [16]). SAIM models the human ability to perform translation-invariant object identification in multiple-object scenes. SAIM suggests that central to this ability is an interaction between parallel competitive processes in a selection stage and an object identification stage. In this paper, we applied the model to visual search experiments involving simple lines and letters. We present successful simulation results for asymmetric and symmetric searches and for the influence of background line orientations. Search asymmetry refers to changes in search performance when the roles of target item and non-target item (distractor) are swapped. In line with other models of visual search, the results suggest that a large part of the empirical evidence can be explained by competitive processes in the brain, which are modulated by the similarity between target and distractor. The simulations also suggest that another important factor is the feature properties of distractors. Finally, the simulations indicate that search asymmetries can be the outcome of interactions between top-down (knowledge about search items) and bottom-up (features of search items) processing. This interaction in VS-SAIM is dominated by a novel mechanism, the knowledge-based on-centre-off-surround receptive field. This receptive field is reminiscent of classical receptive fields, but its exact shape is modulated by both top-down and bottom-up processes. The paper discusses supporting evidence for the existence of this novel concept.
Collapse
|
38
|
Hidaka S, Manaka Y, Teramoto W, Sugita Y, Miyauchi R, Gyoba J, Suzuki Y, Iwaya Y. Alternation of sound location induces visual motion perception of a static object. PLoS One 2009; 4:e8188. [PMID: 19997648 PMCID: PMC2781159 DOI: 10.1371/journal.pone.0008188] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2009] [Accepted: 11/09/2009] [Indexed: 11/19/2022] Open
Abstract
BACKGROUND Audition provides important cues about stimulus motion, although vision may provide the most salient information. It has been reported that a sound of fixed intensity tends to be judged as decreasing in intensity after adaptation to looming visual stimuli, or as increasing in intensity after adaptation to receding visual stimuli. This audiovisual interaction in motion aftereffects indicates that there are multimodal contributions to motion perception at early levels of sensory processing. However, there has been no report that sounds can induce the perception of visual motion. METHODOLOGY/PRINCIPAL FINDINGS A visual stimulus blinking at a fixed location was perceived to be moving laterally when the flash onset was synchronized to an alternating left-right sound source. This illusory visual motion was strengthened with increasing retinal eccentricity (2.5 deg to 20 deg) and occurred more frequently when the onsets of the audio and visual stimuli were synchronized. CONCLUSIONS/SIGNIFICANCE We clearly demonstrated that the alternation of sound location induces illusory visual motion when vision cannot provide accurate spatial information. The present findings strongly suggest that the neural representations of auditory and visual motion processing can bias each other, yielding the best estimates of external events in a complementary manner.
Collapse
Affiliation(s)
- Souta Hidaka
- Department of Psychology, Graduate School of Arts and Letters, Tohoku University, Sendai, Miyagi, Japan.
| | | | | | | | | | | | | | | |
Collapse
|
39
|
|
40
|
Rushton SK, Bradshaw MF, Warren PA. The pop out of scene-relative object movement against retinal motion due to self-movement. Cognition 2007; 105:237-45. [PMID: 17069787 DOI: 10.1016/j.cognition.2006.09.004] [Citation(s) in RCA: 46] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2006] [Revised: 09/06/2006] [Accepted: 09/07/2006] [Indexed: 11/18/2022]
Abstract
An object that moves is spotted almost effortlessly; it "pops out". When the observer is stationary, a moving object is uniquely identified by retinal motion. This is not so when the observer is also moving: as the eye travels through space, all scene objects change position relative to the eye, producing a complicated field of retinal motion. Without the unique identifier of retinal motion, an object moving relative to the scene should be difficult to locate. Using a search task, we investigated this proposition. Computer-rendered objects were moved and transformed in a manner consistent with movement of the observer. Despite the complex pattern of retinal motion, objects moving relative to the scene were found to pop out. We suggest the brain uses its sensitivity to optic flow to "stabilise" the scene, allowing the scene-relative movement of an object to be identified.
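This "stabilisation" account is often described as flow parsing: estimate the global retinal flow produced by self-movement, subtract it, and whatever residual motion remains belongs to objects moving in the scene. The toy sketch below assumes a uniform self-motion flow field and a median as the global estimate; it is a caricature of the idea, not the authors' procedure:

    # Minimal caricature of flow parsing (assumption-laden sketch, not the
    # authors' model): retinal motion = self-motion flow + object motion.
    # Subtracting a robust estimate of the global flow leaves the
    # scene-relative mover as the largest residual.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    self_flow = np.array([2.0, 0.0])              # uniform flow from self-movement (deg/s)
    retinal = np.tile(self_flow, (n, 1)) + rng.normal(0, 0.05, (n, 2))
    retinal[7] += np.array([0.0, 1.5])            # one object also moves in the scene

    global_estimate = np.median(retinal, axis=0)  # robust self-motion estimate
    residual = np.linalg.norm(retinal - global_estimate, axis=1)
    print("pop-out item:", np.argmax(residual))   # -> 7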
Collapse
Affiliation(s)
- Simon K Rushton
- School of Psychology, Cardiff University, Tower Building, Park Place, P.O. Box 901, Cardiff CF10 3YG, Wales, UK.
| | | | | |
Collapse
|
41
|
Aghdaee SM, Cavanagh P. Temporal limits of long-range phase discrimination across the visual field. Vision Res 2007; 47:2156-63. [PMID: 17574644 DOI: 10.1016/j.visres.2007.04.016] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2006] [Revised: 04/11/2007] [Accepted: 04/11/2007] [Indexed: 11/28/2022]
Abstract
When two flickering sources are far enough apart to avoid low-level motion signals, phase judgment relies on the temporal individuation of the light and dark phases of each source. The highest rate at which the individuation can be maintained has been referred to as Gestalt flicker fusion [Van de Grind, W. A., Grüsser, O. -J., & Lunkenheimer, H. U. (1973). Temporal transfer properties of the afferent visual system. Psychophysical, neurophysiological and theoretical investigations. In R. Jung (Ed.), Handbook of sensory physiology (Vol. VII/3, pp. 431-573). Berlin: Springer, Chapter 7] and this has been taken as a measure of the temporal resolution of attention [Verstraten, F. A., Cavanagh, P., & Labianca, A. T. (2000). Limits of attentive tracking reveal temporal properties of attention. Vision Research, 40, 3651-3664; Battelli, L., Cavanagh, P., Intriligator, J., Tramo, M. J., Henaff, M. A., Michel, F., et al. (2001). Unilateral right parietal damage leads to bilateral deficit for high-level motion. Neuron, 32, 985-995]. Here we examine the variation of the temporal resolution of attention across the visual field using phase judgments of widely spaced pairs of flickering dots presented either in the upper or lower visual field and at either 4 degrees or 14 degrees eccentricity. We varied inter-dot separation to determine the spacing at which phase discriminations are no longer facilitated by low-level motion signals. Our data for these long-range phase judgments showed that temporal resolution decreases only slightly with increased distance from the center of gaze (a decrease from 11.4 to 8.9 Hz between 4 degrees and 14 degrees), and does not differ between upper and lower visual fields. We conclude that the variation of the temporal limits of visual attention across the visual field differs markedly from that of the spatial resolution of attention.
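To put these limits in concrete terms: at a flicker rate of f Hz, each full light-dark cycle lasts 1/f seconds, so, assuming a 50% duty cycle, each phase that must be individuated lasts 1/(2f). A quick check of the reported values:

    # Phase durations implied by the reported attentional flicker limits,
    # assuming a 50% duty cycle (light and dark phases of equal length).
    for f_hz in (11.4, 8.9):  # limits at 4 and 14 degrees eccentricity
        print(f"{f_hz:4.1f} Hz -> cycle {1000 / f_hz:5.1f} ms, phase {1000 / (2 * f_hz):4.1f} ms")
    # 11.4 Hz -> phase ~43.9 ms; 8.9 Hz -> phase ~56.2 ms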
Collapse
Affiliation(s)
- S Mehdi Aghdaee
- Department of Psychology, Harvard University, 33 Kirkland Street, Cambridge, MA 02138, USA.
| | | |
Collapse
|
42
|
Matsuno T, Tomonaga M. Visual search for moving and stationary items in chimpanzees (Pan troglodytes) and humans (Homo sapiens). Behav Brain Res 2006; 172:219-32. [PMID: 16790282 DOI: 10.1016/j.bbr.2006.05.004] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2005] [Revised: 05/02/2006] [Accepted: 05/04/2006] [Indexed: 11/26/2022]
Abstract
Four visual search experiments were conducted with human and chimpanzee subjects to investigate the attentional processing of movement and perceptual organization based on the movement of items. In the first experiment, subjects performed visual searches for a moving target among stationary items, and for a stationary target among moving items. Subjects of both species displayed an advantage in detecting the moving item compared to the stationary one, suggesting the priority of movement in attentional processing. A second experiment assessed the effect of the coherent movement of items on the search for a stationary target. Facilitative effects of motion coherence were observed only in the performance of the human subjects. In the third and fourth experiments, the effect of coherent movement of the reference frame on the search for moving and stationary targets was tested. Related target movements significantly influenced the search performance of both species. The results of the second, third, and fourth experiments suggest that perceptual organization based on coherent movement is partially shared by chimpanzees and humans, and is more highly developed in humans.
Collapse
Affiliation(s)
- Toyomi Matsuno
- Primate Research Institute, Kyoto University, Inuyama, Aichi 484-8506, Japan
| | | |
Collapse
|
43
|
Niedeggen M, Hesselmann G, Sahraie A, Milders M. ERPs predict the appearance of visual stimuli in a temporal selection task. Brain Res 2006; 1097:205-15. [PMID: 16730675 DOI: 10.1016/j.brainres.2006.04.087] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2005] [Revised: 04/25/2006] [Accepted: 04/26/2006] [Indexed: 11/25/2022]
Abstract
In contrast to the visual spatial domain, the effect of attention on sensory processing and stimulus appearance in temporal selection tasks is still controversial. Using a rapid serial visual presentation (RSVP) procedure, we examined whether the stimulus onset asynchrony (SOA) between a color cue and a motion target affects the appearance of the latter. Event-related brain potentials (ERPs) recorded simultaneously allowed us to test whether a change in the target's appearance is associated with a modulation of the sensory ERP components. In the experimental condition 'SOA', the temporal interval between the cue and the target was varied between 0 and 300 ms. In a control condition, the physical appearance of the motion target was varied (level of coherence: 25-100%) while the cue-target SOA was held constant (300 ms). On trials in which the participant detected the target motion, his or her task was to report the strength of the perceived motion on a 5-point scale. In both conditions, the mean rating of the target's appearance increased monotonically with increasing SOA and level of coherence, respectively. The psychophysical ratings were associated with an increase in a negative deflection at about 200 ms (N200) related to the sensory processing of visual motion. The physical variation of motion coherence and the variation of the cue-target SOA affected the N200 response in a similar fashion. These results indicate that sensory processing is modulated by attentional resources in temporal selection tasks as well, which in turn affects the appearance of the relevant target stimulus.
Collapse
Affiliation(s)
- Michael Niedeggen
- Institute of Experimental Psychology II, Heinrich-Heine-University, D-40225 Düsseldorf, Germany.
| | | | | | | |
Collapse
|
44
|
Hesselmann G, Niedeggen M, Sahraie A, Milders M. Specifying the distractor inhibition account of attention-induced motion blindness. Vision Res 2006; 46:1048-56. [PMID: 16309728 DOI: 10.1016/j.visres.2005.10.007] [Citation(s) in RCA: 16] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2005] [Revised: 10/11/2005] [Accepted: 10/12/2005] [Indexed: 11/23/2022]
Abstract
There is growing evidence that motion perception is modulated by visual selective attention. In the 'attention-induced motion blindness' paradigm, the detection of coherent motion in a random dot kinematogram (RDK) is impaired in a rapid serial presentation task [Sahraie, A., Milders, M., & Niedeggen, M. (2001). Attention induced motion blindness. Vision Research, 41, 1613-1617]. The effect depends on irrelevant motion episodes (distractors) presented prior to the target. In this study, we show that both the number and the timing of distractors affect detection performance, allowing inferences about the build-up and release of inhibition. Furthermore, we rule out the possibility that subjects falsely classify targets as distractors due to uncertainty about temporal order.
Collapse
Affiliation(s)
- Guido Hesselmann
- Institute of Experimental Psychology II, Heinrich-Heine-Universität, Düsseldorf, Germany.
| | | | | | | |
Collapse
|
45
|
Hay JL, Milders MM, Sahraie A, Niedeggen M. The effect of perceptual load on attention-induced motion blindness: The efficiency of selective inhibition. ACTA ACUST UNITED AC 2006; 32:885-907. [PMID: 16846286 DOI: 10.1037/0096-1523.32.4.885] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Recent visual marking studies have shown that the carry-over of distractor inhibition can impair the ability of singletons to capture attention if the singleton and distractors share features. The current study extends this finding to first-order motion targets and distractors, clearly separated in time by a visual cue (the letter X). Target motion discrimination was significantly impaired, a result attributed to the carry-over of distractor inhibition. Increasing the difficulty of cue detection increased the motion target impairment, as distractor inhibition is thought to increase under demanding (high load) conditions in order to maximize selection efficiency. The apparent conflict with studies reporting reduced distractor inhibition under high load conditions was resolved by distinguishing between the effects of "cognitive" and "perceptual" load.
Collapse
Affiliation(s)
- Julia L Hay
- School of Psychology, College of Life Sciences and Medicine, University of Aberdeen, Aberdeen, United Kingdom.
| | | | | | | |
Collapse
|
46
|
Casco C, Grieco A, Giora E, Martinelli M. Saliency from orthogonal velocity component in texture segregation. Vision Res 2005; 46:1091-8. [PMID: 16289199 DOI: 10.1016/j.visres.2005.09.032] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2005] [Revised: 09/08/2005] [Accepted: 09/08/2005] [Indexed: 10/25/2022]
Abstract
We found that a moving target line that is more vertical than 45-deg-oriented background lines pops out (d' = 1.2), although it moves at the same speed as the background elements and although it is invisible in static presentation (d' = 0.7). We suggest that the moving more-vertical target is more salient because the motion system responds to the orthogonal velocity component, V⊥ = (Δd/Δt)·sin θ, which is larger for the more-vertical target than for the distractors. However, motion does not produce a high d' when the target is more horizontal than the background (d' = 0.6). This result is not expected if saliency resulted from the sum of independently coded orientation and motion saliencies, but it is predicted by visual search asymmetry. A line-length effect on the saliency of the moving target also suggests that V⊥ is extracted over the whole line, an operation facilitated by line length in the same way for more-vertical and more-horizontal targets. Altogether, these results demonstrate that speed-based segmentation operating on V⊥ not only affects the discrimination of speed and direction of motion, as previously demonstrated, but also accounts for the high saliency of image features that would otherwise prove undetectable on the basis of orientation contrast.
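To make the formula concrete: for a line translating at speed Δd/Δt, only the velocity component orthogonal to the line's orientation, V⊥ = (Δd/Δt)·sin θ, where θ is the angle between the line and its direction of motion, drives the motion response. A worked example under assumed orientations (the 65-deg target value is illustrative, not from the paper):

    # Worked example of V_perp = (delta_d / delta_t) * sin(theta), where theta
    # is the angle between a line's orientation and its direction of motion.
    # The 65-deg target orientation is an assumed illustrative value.
    import numpy as np

    speed = 1.0  # deg/s, identical for target and background, as in the study
    for label, theta_deg in [("45-deg background line", 45.0),
                             ("more-vertical target line", 65.0)]:
        v_perp = speed * np.sin(np.radians(theta_deg))
        print(f"{label}: V_perp = {v_perp:.2f} deg/s")
    # Same speed, but the more-vertical line yields the larger orthogonal
    # component (0.91 vs 0.71 deg/s), hence its higher saliency.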
Collapse
Affiliation(s)
- Clara Casco
- Dipartimento di Psicologia Generale, Università di Padova, Via Venezia 8, 35131 Padova, Italy.
| | | | | | | |
Collapse
|
47
|
Aghdaee SM, Zandvakili A. Adaptation to spiral motion: global but not local motion detectors are modulated by attention. Vision Res 2005; 45:1099-105. [PMID: 15707918 DOI: 10.1016/j.visres.2004.11.012] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2004] [Revised: 10/28/2004] [Accepted: 11/03/2004] [Indexed: 11/27/2022]
Abstract
In this study, we investigated the effect of attention on local motion detectors. For this purpose we used logarithmic spirals previously used by Cavanagh and Favreau [Perception, 1980, 9(2), 175-182]. While the adapting stimulus was a rotating logarithmic spiral, the test stimulus was either the same spiral or its mirror image. When superimposed, all contours of the spiral stimulus and its mirror image are 90 degrees apart. Presenting the same spiral during the test period reveals adaptation of both local motion detectors and global rotation detectors, whereas presenting the mirror spiral stimulates a different set of local motion detectors and therefore reveals adaptation at only the global motion level. To manipulate the attentional state of observers, a secondary task was presented during the adaptation phase, and observers either performed the task or ignored it. Motion aftereffect (MAE) duration was measured afterwards. While the effects of attention and of test stimulus type on MAE duration were both significant, the difference in MAE strength between the attention-distracted and attention-not-distracted conditions was the same whether the test stimulus was the same spiral or the mirror spiral, suggesting that attention to spiral motion modulates only global rotation units and does not affect local motion detectors located in V1. Our results are in accord with those reported by Watanabe et al. [Proceedings of the National Academy of Sciences of the USA, 1998, 95(19), 11489-11492], which showed differential modulation of motion processing areas depending on the type of motion being attended. Our data therefore support the notion that attentional modulation of V1 is highly task-dependent.
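A logarithmic spiral has polar form r = a·e^(bθ), and its mirror image is obtained by flipping the sign of b. The contour crosses every circle at a constant pitch angle arctan(b), so with b = 1 (a 45-degree pitch) the superimposed spiral and mirror contours are everywhere 90 degrees apart, matching the property described above. A minimal generation sketch with illustrative parameter values, not those of the original stimuli:

    # Logarithmic spiral r = a * exp(b * theta) and its mirror image
    # (sign of b flipped). b = 1 gives a 45-deg pitch, so spiral and
    # mirror contours are orthogonal when superimposed. Parameter
    # values are illustrative only.
    import numpy as np

    def log_spiral(a=0.01, b=1.0, turns=2, n=1000, mirror=False):
        theta = np.linspace(0.0, 2.0 * np.pi * turns, n)
        r = a * np.exp((-b if mirror else b) * theta)
        return r * np.cos(theta), r * np.sin(theta)

    x, y = log_spiral()                 # adapting spiral
    xm, ym = log_spiral(mirror=True)    # test spiral: local contours rotated 90 deg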
Collapse
Affiliation(s)
- S Mehdi Aghdaee
- School of Cognitive Sciences (SCS), Institute for Studies in Theoretical Physics and Mathematics (IPM), Niavaran, Bahonar Square, P.O. Box 19395-5746, Tehran, Iran.
| | | |
Collapse
|
48
|
Shim WM, Cavanagh P. The motion-induced position shift depends on the perceived direction of bistable quartet motion. Vision Res 2004; 44:2393-401. [PMID: 15246755 DOI: 10.1016/j.visres.2004.05.003] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2003] [Revised: 04/28/2004] [Accepted: 05/05/2004] [Indexed: 11/30/2022]
Abstract
Motion can influence the perceived position of nearby stationary objects (Nature Neuroscience 3 (2000) 954). To investigate the influence of high-level motion processes on this position shift while controlling for low-level motion signals, we measured the position shift as a function of the motion seen in a bistable quartet. In this stimulus, motion can be seen along either one or the other of two possible paths. An illusory position shift was observed only when the flashes were adjacent to the path along which motion was perceived. If the flash was adjacent to the other path, where no motion was perceived, there was no illusory displacement. Thus, for the same physical stimulus, a change in the perceived motion path determined the location where illusory position shifts would be seen. This result indicates that high-level motion processes alone are sufficient to produce the position shift of stationary objects. The effect of the timing of the test flash between the onset and offset of the motion was also examined. The position shifts were greatest at the onset of motion, decreased gradually, and disappeared at the offset of motion. We propose an attentional repulsion explanation for the shift effect.
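The bistable quartet itself is easy to picture: frame 1 presents dots at two opposite corners of an imaginary square, frame 2 at the other two corners, so the correspondence, and hence the perceived motion path, is ambiguous. A minimal sketch of the geometry (coordinates are illustrative):

    # Geometry of a bistable motion quartet; coordinates are illustrative.
    # Frame 1 shows dots at two opposite corners, frame 2 at the other two,
    # so each dot has two equally plausible correspondences.
    frame1 = [(-1, 1), (1, -1)]   # top-left, bottom-right
    frame2 = [(1, 1), (-1, -1)]   # top-right, bottom-left
    # Horizontal solution: (-1, 1) -> (1, 1) and (1, -1) -> (-1, -1)
    # Vertical solution:   (-1, 1) -> (-1, -1) and (1, -1) -> (1, 1)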
Collapse
Affiliation(s)
- Won Mok Shim
- Department of Psychology, Harvard University, 33 Kirkland Street, William James Hall, Room 930, Cambridge, MA 02138, USA.
| | | |
Collapse
|
49
|
Wolfe JM, Horowitz TS. What attributes guide the deployment of visual attention and how do they do it? Nat Rev Neurosci 2004; 5:495-501. [PMID: 15152199 DOI: 10.1038/nrn1411] [Citation(s) in RCA: 708] [Impact Index Per Article: 35.4] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Affiliation(s)
- Jeremy M Wolfe
- Visual Attention Laboratory, Brigham and Women's Hospital and Harvard Medical School, 64 Sidney Street, Cambridge, Massachusetts 02139, USA.
| | | |
Collapse
|
50
|
Abstract
Visual processing by 10-year-old children diagnosed, on the basis of standardised tests, as having developmental 'clumsiness' syndrome, and by a control group of children without motor difficulties, was tested using three different psychophysical tasks. The tasks comprised a measure of global motion processing using a dynamic random dot kinematogram, a measure of static global pattern processing in which the position of the target was randomised, and a measure of static global pattern processing in which the target position was fixed. The most striking finding was that the group of clumsy children, who were diagnosed solely on the basis of their motor difficulties, were significantly less sensitive than the control group on all three tasks of visual sensitivity. Clumsy children may thus have impaired visual sensitivity in both the dorsal and ventral streams, in addition to their obvious problems with motor control. These results support the existence of generalised visual anomalies associated with impairments of cerebellar function.
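Sensitivity in a dynamic task of the kind described here is typically measured by varying the fraction of coherently moving dots in the random dot kinematogram. A minimal sketch of one RDK frame update, under assumed parameters rather than those of the study:

    # One frame update of a random dot kinematogram (RDK): a 'coherence'
    # fraction of dots steps in the common signal direction, the rest step
    # in random directions. Parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    n_dots, coherence, step = 200, 0.4, 0.01
    pos = rng.uniform(-1.0, 1.0, (n_dots, 2))

    signal = rng.random(n_dots) < coherence                            # signal carriers
    angles = np.where(signal, 0.0, rng.uniform(0, 2 * np.pi, n_dots))  # 0 rad = rightward
    pos += step * np.column_stack([np.cos(angles), np.sin(angles)])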
Collapse
Affiliation(s)
- H Sigmundsson
- Research Group for Child Development, Department of Sport Sciences, Norwegian University of Science and Technology, Trondheim 7497, Norway.
| | | | | |
Collapse
|