1
Zhao A, Li L, Liu S. UIDF-Net: Unsupervised Image Dehazing and Fusion Utilizing GAN and Encoder-Decoder. J Imaging 2024;10:164. PMID: 39057735; PMCID: PMC11278268; DOI: 10.3390/jimaging10070164.
Abstract
Haze weather deteriorates image quality, causing images to become blurry with reduced contrast. This makes object edges and features unclear, leading to lower detection accuracy and reliability. To enhance haze removal effectiveness, we propose an image dehazing and fusion network based on the encoder-decoder paradigm (UIDF-Net). This network leverages the Image Fusion Module (MDL-IFM) to fuse the features of dehazed images, producing clearer results. Additionally, to better extract haze information, we introduce a haze encoder (Mist-Encode) that effectively processes different frequency features of images, improving the model's performance in image dehazing tasks. Experimental results demonstrate that the proposed model achieves superior dehazing performance compared to existing algorithms on outdoor datasets.
Affiliation(s)
- Anxin Zhao
- School of Communication and Information Engineering, Xi’an University of Science and Technology, Xi’an 710054, China
2
Krug A, Eberhardt LV, Huckauf A. Transient attention does not alter the eccentricity effect in estimation of duration. Atten Percept Psychophys 2024;86:392-403. PMID: 37550478; PMCID: PMC10806013; DOI: 10.3758/s13414-023-02766-6.
Abstract
Previous research investigating the influence of stimulus eccentricity on perceived duration showed an increasing duration underestimation with increasing eccentricity. Based on studies showing that precueing the stimulus location prolongs perceived duration, one might assume that this eccentricity effect is influenced by spatial attention. In the present study, we assessed the influence of transient covert attention on the eccentricity effect in duration estimation in two experiments, one online and one in a laboratory setting. In a duration estimation task, participants judged whether a comparison stimulus presented near or far from fixation with a varying duration was shorter or longer than a standard stimulus presented foveally with a constant duration. To manipulate transient covert attention, either a transient luminance cue was used (valid cue) to direct attention to the position of the subsequent peripheral comparison stimulus or all positions were marked by luminance (neutral cue). Results of both experiments yielded a greater underestimation of duration for the far than for the near stimulus, replicating the eccentricity effect. Although cueing was effective (i.e., shorter response latencies for validly cued stimuli), cueing did not alter the eccentricity effect on estimation of duration. This indicates that cueing leads to covert attentional shifts but does not account for the eccentricity effect in perceived duration.
Affiliation(s)
- Alina Krug
- Department of General Psychology, Institute of Psychology and Education, Ulm University, 89069 Ulm, Germany
- Lisa Valentina Eberhardt
- Department of General Psychology, Institute of Psychology and Education, Ulm University, 89069 Ulm, Germany
- Anke Huckauf
- Department of General Psychology, Institute of Psychology and Education, Ulm University, 89069 Ulm, Germany
3
Aboutorabi E, Baloni Ray S, Kaping D, Shahbazi F, Treue S, Esghaei M. Phase of neural oscillations as a reference frame for attention-based routing in visual cortex. Prog Neurobiol 2024;233:102563. PMID: 38142770; DOI: 10.1016/j.pneurobio.2023.102563.
Abstract
Selective attention allows the brain to efficiently process the image projected onto the retina, selectively focusing neural processing resources on behaviorally relevant visual information. While previous studies have documented the crucial role of the action potential rate of single neurons in relaying such information, little is known about how the activity of single neurons relative to their neighboring network contributes to the efficient representation of attended stimuli and the transmission of this information to downstream areas. Here, we show in the dorsal visual pathway of monkeys (medial superior temporal area) that neurons fire spikes preferentially at a specific phase of the ongoing population beta (∼20 Hz) oscillations of the surrounding local network. This preferred spiking phase shifts towards a later phase when monkeys selectively attend towards (rather than away from) the receptive field of the neuron. This shift of the locking phase is positively correlated with the speed at which animals report a visual change. Furthermore, our computational modeling suggests that neural networks can manipulate the preferred phase of coupling by imposing differential synaptic delays on postsynaptic potentials. The distinction between the locking phase of neurons activated by the spatially attended stimulus and that of neurons activated by the unattended stimulus may enable the neural system to discriminate relevant from irrelevant sensory inputs and thereby filter out distracting information, by aligning the spikes that convey relevant or irrelevant information to distinct phases linked to periods of better or worse perceptual sensitivity in higher cortices. This strategy may reserve the narrow windows of highest perceptual efficacy for the processing of the most behaviorally relevant information, ensuring highly efficient responses to attended sensory events.
Affiliation(s)
- Ehsan Aboutorabi
- Schulich School of Medicine and Dentistry, Western University, London, Ontario, Canada; Robarts Research Institute, Western University, London, Ontario, Canada
- Sonia Baloni Ray
- Indraprastha Institute of Information Technology, New Delhi, India
- Daniel Kaping
- Helmholtz Centre for Environmental Research - UFZ, Leipzig, Germany
- Farhad Shahbazi
- Department of Physics, Isfahan University of Technology, Isfahan, Iran
- Stefan Treue
- Cognitive Neuroscience Laboratory, German Primate Center - Leibniz Institute for Primate Research, Goettingen, Germany; Faculty for Biology and Psychology, University of Goettingen, Germany; Leibniz ScienceCampus Primate Cognition, Goettingen, Germany
- Moein Esghaei
- Cognitive Neuroscience Laboratory, German Primate Center - Leibniz Institute for Primate Research, Goettingen, Germany; Westa Higher Education Center, Karaj, Iran
4
Chen YR, Zhang YW, Zhang JY. The impact of training on the inner-outer asymmetry in crowding. J Vis 2023;23:3. PMID: 37526622; PMCID: PMC10399601; DOI: 10.1167/jov.23.8.3.
Abstract
Inner-outer asymmetry, where the outer flanker induces stronger crowding than the inner flanker, is a hallmark property of visual crowding. The contribution of inner-outer asymmetry to the pattern of crowding errors (biased predominantly toward the flanker identities), and the role of training in shaping those errors, remain unclear. In a typical radial crowding display, 20 observers were asked to report the orientation of a target Gabor (7.5° eccentricity) flanked by either an inner or outer Gabor along the horizontal meridian. The results showed that the outer flanker condition induced stronger crowding, accompanied by assimilative errors toward the outer flanker for similar target/flanker elements. In contrast, the inner flanker condition exhibited weaker crowding, with no significant pattern of crowding errors. A population coding model showed that the flanker weights in the outer flanker condition were significantly higher than those in the inner flanker condition. Nine observers continued training on the outer flanker condition for four sessions. Training reduced the inner-outer asymmetry and reduced the flanker weights for the outer flanker. The learning effects were retained over 4 to 6 months. Individual differences in the appearance of crowding errors, the strength of inner-outer asymmetry, and the training effects were evident. Nevertheless, our findings indicate that different crowding mechanisms may be responsible for the asymmetric crowding effects induced by inner and outer flankers, with the outer flankers dominating the appearance more than the inner ones. Training reduces inner-outer asymmetry by reducing target/flanker confusion, and learning persists over months, suggesting that perceptual learning has the potential to improve visual performance by promoting neural plasticity.
Affiliation(s)
- Yan-Ru Chen
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Yu-Wei Zhang
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Jun-Yun Zhang
- School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
5
Abstract
Visual perception is limited by spatial resolution, the ability to discriminate fine details. Spatial resolution not only declines with eccentricity but also differs for polar angle locations around the visual field, also known as 'performance fields'. To compensate for poor peripheral resolution, we make rapid eye movements-saccades-to bring peripheral objects into high-acuity foveal vision. Even before saccade onset, visual attention shifts to the saccade target location and prioritizes visual processing. This presaccadic shift of attention improves performance in many visual tasks, but whether it changes resolution is unknown. Here, we investigated whether presaccadic attention sharpens peripheral spatial resolution; and if so, whether any such effect interacts with performance field asymmetries. We measured acuity thresholds in an orientation discrimination task during fixation and saccade preparation around the visual field. The results revealed that presaccadic attention sharpens acuity, which can facilitate a smooth transition from peripheral to foveal representation. This acuity enhancement is similar across the four cardinal locations; thus, the typically robust effect of presaccadic attention does not change polar angle differences in resolution.
6
Metacognition tracks sensitivity following involuntary shifts of visual attention. Psychon Bull Rev 2022. PMCID: PMC9668230; DOI: 10.3758/s13423-022-02212-y.
Abstract
Salient, exogenous cues have been shown to induce a temporary boost of perceptual sensitivity in their immediate vicinity. In two experiments involving uninformative exogenous cues presented at various times before a target stimulus, we investigated whether human observers (N = 100) were able to monitor the involuntary increase in performance induced by such transients. We found that an increase of perceptual sensitivity (in a choice task) and encoding precision (in a free-estimation task) occurred approximately 100 ms after cue onset, and was accompanied by an increase in confidence about the perceptual response. These simultaneous changes in sensitivity and confidence resulted in stable metacognition across conditions. These results suggest that metacognition efficiently tracks the effects of a reflexive attentional mechanism known to evade voluntary control, and illustrate a striking ability of high-level cognition to capture fleeting, low-level sensory modulations.
7
Abstract
A small number of objects can be rapidly and accurately enumerated, whereas a larger number of objects can only be approximately enumerated. These subitizing and estimation abilities, respectively, are both spatial processes relying on extracting information across spatial locations. Nevertheless, whether and how these processes vary across visual field locations remains unknown. Here, we examined if enumeration displays asymmetries around the visual field. Experiment 1 tested small number (1–6) enumeration at cardinal and non-cardinal peripheral locations while manipulating the spacing among the objects. Experiment 2 examined enumeration at cardinal locations in more detail while minimising crowding. Both experiments demonstrated a Horizontal-Vertical Asymmetry (HVA) where performance was better along the horizontal axis relative to the vertical. Experiment 1 found that this effect was modulated by spacing with stronger asymmetry at closer spacing. Experiment 2 revealed further asymmetries: a Vertical Meridian Asymmetry (VMA) with better enumeration on the lower vertical meridian than on the upper and a Horizontal Meridian Asymmetry (HMA) with better enumeration along the left horizontal meridian than along the right. All three asymmetries were evident for both subitizing and estimation. HVA and VMA have been observed in a range of visual tasks, indicating that they might be inherited from early visual constraints. However, HMA is observed primarily in mid-level tasks, often involving attention. These results suggest that while enumeration processes can be argued to inherit low-level visual constraints, the findings are, parsimoniously, consistent with visual attention playing a role in both subitizing and estimation.
8
Prahalad KS, Coates DR. Microsaccadic correlates of covert attention and crowding. J Vis 2022;22:15. PMID: 36121661; PMCID: PMC9503213; DOI: 10.1167/jov.22.10.15.
Abstract
Spatial crowding occurs when an object is cluttered among other objects in space and is a ubiquitous factor affecting object recognition in the peripheral visual field. Crowding is typically tested by presenting crowded stimuli at an eccentric location while having observers fixate at a point in space. However, even during fixation, our eyes are not perfectly steady but instead make small-scale eye movements (microsaccades) that have recently been suggested to be affected by shifts in attentional allocation. In the current study, we monitored microsaccadic behavior (a possible attentional correlate) to understand naturally occurring shifts in attention that occur following the presentation of a crowded stimulus. A tracking scanning laser ophthalmoscope (TSLO) was used to image the right eye of each observer during a psychophysical task. The stimuli consisted of Sloan numbers (0-9) presented briefly, either unflanked or surrounded by Sloan numbers at one of four nominal spacings. The extent of crowding was found to decrease by 26% on trials with the presence of incongruent microsaccades (proposed to suggest attentional capture). These findings complement the existing body of literature on the beneficial impact of explicit shifts of spatial attention to the location of a crowded stimulus.
Affiliation(s)
- Daniel R Coates
- College of Optometry, University of Houston, Houston, TX, USA
9
Pavan A, Koc Yilmaz S, Kafaligonul H, Battaglini L, Blurton SP. Motion processing impaired by transient spatial attention: Potential implications for the magnocellular pathway. Vision Res 2022;199:108080. PMID: 35749832; DOI: 10.1016/j.visres.2022.108080.
Abstract
Spatial cues presented prior to the presentation of a static stimulus usually improve its perception. However, previous research has also shown that transient exogenous cues to direct spatial attention to the location of a forthcoming stimulus can lead to reduced performance. In the present study, we investigated the effects of transient exogenous cues on the perception of briefly presented drifting Gabor patches. The spatial and temporal frequencies of the drifting Gabors were chosen to mainly engage the magnocellular pathway. We found better performance in the motion direction discrimination task when neutral cues were presented before the drifting target compared to a valid spatial cue. The behavioral results support the hypothesis that transient attention prolongs the internal response to the attended stimulus, thus reducing the temporal segregation of visual events. These results were complemented by applying a recently developed model for perceptual decisions to rule out a speed-accuracy trade-off and to further assess cueing effects on visual performance. In a model-based assessment, we found that valid cues initially enhanced processing but overall resulted in less efficient processing compared to neutral cues, possibly caused by reduced temporal segregation of visual events.
Affiliation(s)
- Andrea Pavan
- Department of Psychology, University of Bologna, Viale Berti Pichat 5, 40127 Bologna, Italy
- Seyma Koc Yilmaz
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, 06800 Ankara, Turkey; Interdisciplinary Neuroscience Program, Aysel Sabuncu Brain Research Center, Bilkent University, 06800 Ankara, Turkey
- Hulusi Kafaligonul
- National Magnetic Resonance Research Center (UMRAM), Bilkent University, 06800 Ankara, Turkey; Interdisciplinary Neuroscience Program, Aysel Sabuncu Brain Research Center, Bilkent University, 06800 Ankara, Turkey
- Luca Battaglini
- Dipartimento di Psicologia Generale, University of Padova, Via Venezia 8, 35131 Padova, Italy
- Steven P Blurton
- Department of Psychology, University of Copenhagen, Øster Farimagsgade 2A, 1353 København, Denmark
10
Effects of spatial attention on spatial and temporal acuity: A computational account. Atten Percept Psychophys 2022;84:1886-1900. PMID: 35729455; DOI: 10.3758/s13414-022-02527-x.
Abstract
In our daily lives, the visual system receives a plethora of visual information that competes for the brain's limited processing capacity. Nevertheless, not all visual information is useful for our cognitive, emotional, social, and ultimately survival purposes. Therefore, the brain employs mechanisms to select critical information and thereby optimizes its limited resources. Attention is the selective process that serves such a function. In particular, covert spatial attention - attending to a particular location in the visual field without eye movements - improves spatial resolution and paradoxically deteriorates temporal resolution. The neural correlates underlying these attentional effects still remain elusive. In this work, we tested a neural model's predictions that explain these phenomena based on interactions between channels with different spatiotemporal sensitivities - namely, the magnocellular (transient) and parvocellular (sustained) channels. More specifically, our model postulates that spatial attention enhances activities in the parvocellular pathway, thereby producing improved performance in spatial resolution tasks. However, the enhancement of parvocellular activities leads to decreased magnocellular activities due to parvo-magno inhibitory interactions. As a result, spatial attention hampers temporal resolution. We compared the predictions of the model to psychophysical data, and show that our model can account qualitatively and quantitatively for the effects of spatial attention on spatial and temporal acuity.
11
Mahjoob M, Heravian Shandiz J, Anderson AJ. The effect of mental load on psychophysical and visual evoked potential visual acuity. Ophthalmic Physiol Opt 2022;42:586-593. PMID: 35150443; DOI: 10.1111/opo.12955.
Abstract
PURPOSE: Under real-world conditions, tasks dependent on visual acuity may need to be performed in the presence of a mental load arising from concurrent, non-visual tasks. Therefore, measuring visual acuity concurrently with mentally demanding tasks may reflect a patient's vision more accurately. This study was designed to evaluate the impact of task-induced mental load on high contrast visual acuity, as measured using a letter chart and estimated via sweep visual evoked potentials (sweep VEP).
METHODS: Visual acuity was determined using the Freiburg Vision Test, and also using sweep VEP tested stepwise, from coarse to fine, over 13 spatial frequencies, in 31 healthy participants (aged 22.4 ± 3.6 years). Recordings were repeated while participants concurrently performed an auditory 2-back task. Mental load of the n-back task was confirmed through subjective ratings.
RESULTS: Visual acuity determined with the Freiburg Vision Test worsened from -0.02 ± 0.12 to 0.04 ± 0.15 logMAR under mental load (p = 0.03). Visual acuities estimated by sweep VEPs worsened from 0.38 ± 0.1 to 0.47 ± 0.1 logMAR (p < 0.001). While the slope of the VEP amplitude versus spatial frequency function steepened significantly with mental load (p = 0.01), VEP noise levels were not significantly affected (p = 0.07).
CONCLUSION: Visual acuity reduces significantly with a concurrent task that produces mental load. At least part of this reduction appears to be related to alterations in responses within the visual cortex, rather than being purely attributable to higher-level distraction effects.
Affiliation(s)
- Monireh Mahjoob
- Health Promotion Research Center, Department of Optometry, Rehabilitation Faculty, Zahedan University of Medical Sciences, Zahedan, Iran
- Javad Heravian Shandiz
- Refractive Eye Research Center, Department of Optometry, School of Paramedical Sciences, Mashhad University of Medical Sciences, Mashhad, Iran
- Andrew J Anderson
- Department of Optometry and Vision Sciences, The University of Melbourne, Parkville, Australia
12
No effect of spatial attention on the processing of a motion ensemble: Evidence from Posner cueing. Atten Percept Psychophys 2021;84:1845-1857. PMID: 34811633; DOI: 10.3758/s13414-021-02392-0.
Abstract
The formation of ensemble codes is an efficient means through which the visual system represents vast arrays of information. This has led to the claim that ensemble representations are formed with minimal reliance on attentional resources. However, evidence is mixed regarding the effects of attention on ensemble processing, and researchers do not always make it clear how attention is being manipulated by their paradigm of choice. In this study, we examined the effects of Posner cueing - a well-established method of manipulating spatial attention - on the processing of a global motion stimulus, a naturalistic ensemble that requires the pooling of local motion signals. In Experiment 1, using a centrally presented, predictive attentional cue, we found no effect of spatial attention on global motion performance: Accuracy in invalid trials, where attention was misdirected by the cue, did not differ from accuracy in valid trials, where attention was directed to the location of the motion stimulus. In Experiment 2, we maximized the potential for our paradigm to reveal any attentional effects on global motion processing by using a threshold-based measure of performance; however, despite this change, there was again no evidence of an attentional effect on performance. Together, our results show that the processing of a global motion stimulus is unaffected when spatial attention is misdirected, and speak to the efficiency with which such ensemble stimuli are processed.
13
Inter-individual variations in internal noise predict the effects of spatial attention. Cognition 2021;217:104888. PMID: 34450395; DOI: 10.1016/j.cognition.2021.104888.
Abstract
Individuals differ considerably in the degree to which they benefit from attention allocation. Thus far, such individual differences were attributed to post-perceptual factors such as working-memory capacity. This study examined whether a perceptual factor - the level of internal noise - also contributes to this inter-individual variability in attentional effects. To that end, we estimated individual levels of internal noise from behavioral variability in an orientation discrimination task (with tilted gratings) using the double-pass procedure and the perceptual-template model. We also measured the effects of spatial attention in an acuity task: the participants reported the side of a square on which a small aperture appeared. Central arrows were used to engage sustained attention and peripheral cues to engage transient attention. We found reliable correlations between individual levels of internal noise and the effects of both types of attention, albeit of opposite directions: positive correlation with sustained attention and negative correlation with transient attention. These findings demonstrate that internal noise - a fundamental characteristic of visual perception - can predict individual differences in the effects of spatial attention, highlighting the intricate relations between perception and attention.
14
Jigo M, Heeger DJ, Carrasco M. An image-computable model of how endogenous and exogenous attention differentially alter visual perception. Proc Natl Acad Sci U S A 2021;118:e2106436118. PMID: 34389680; PMCID: PMC8379934; DOI: 10.1073/pnas.2106436118.
Abstract
Attention alters perception across the visual field. Typically, endogenous (voluntary) and exogenous (involuntary) attention similarly improve performance in many visual tasks, but they have differential effects in some tasks. Extant models of visual attention assume that the effects of these two types of attention are identical and consequently do not explain differences between them. Here, we develop a model of spatial resolution and attention that distinguishes between endogenous and exogenous attention. We focus on texture-based segmentation as a model system because it has revealed a clear dissociation between both attention types. For a texture for which performance peaks at parafoveal locations, endogenous attention improves performance across eccentricity, whereas exogenous attention improves performance where the resolution is low (peripheral locations) but impairs it where the resolution is high (foveal locations) for the scale of the texture. Our model emulates sensory encoding to segment figures from their background and predict behavioral performance. To explain attentional effects, endogenous and exogenous attention require separate operating regimes across visual detail (spatial frequency). Our model reproduces behavioral performance across several experiments and simultaneously resolves three unexplained phenomena: 1) the parafoveal advantage in segmentation, 2) the uniform improvements across eccentricity by endogenous attention, and 3) the peripheral improvements and foveal impairments by exogenous attention. Overall, we unveil a computational dissociation between each attention type and provide a generalizable framework for predicting their effects on perception across the visual field.
Affiliation(s)
- Michael Jigo
- Center for Neural Science, New York University, New York, NY 10003
- David J Heeger
- Center for Neural Science, New York University, New York, NY 10003
- Department of Psychology, New York University, New York, NY 10003
- Marisa Carrasco
- Center for Neural Science, New York University, New York, NY 10003
- Department of Psychology, New York University, New York, NY 10003
15
Guzhang Y, Shelchkova N, Ezzo R, Poletti M. Transient perceptual enhancements resulting from selective shifts of exogenous attention in the central fovea. Curr Biol 2021;31:2698-2703.e2. PMID: 33930304; PMCID: PMC8763350; DOI: 10.1016/j.cub.2021.03.105.
Abstract
Exogenous attention, a powerful adaptive tool that quickly and involuntarily orients processing resources to salient stimuli, has traditionally been studied in the lower-resolution parafoveal and peripheral visual field [1-4]. It is not known whether and how it operates across the 1° central fovea where visual resolution peaks [5,6]. Here we investigated the dynamics of exogenous attention in the foveola. To circumvent the challenges posed by fixational eye movements at this scale, we used high-precision eye-tracking and gaze-contingent display control for retinal stabilization [7]. High-acuity stimuli were briefly presented foveally at varying delays following an exogenous cue. Attended and unattended locations were just a few arcminutes away from the preferred locus of fixation. Our results show that for short temporal delays, observers' ability to discriminate fine detail is enhanced at the cued location. This enhancement is highly localized and does not extend to the nearby locations only 16' away. On a longer timescale, instead, we report an inverse effect: paradoxically, acuity is sharper at the unattended locations, resembling the phenomenon of inhibition of return at much larger eccentricities [8-10]. Although exogenous attention represents a mechanism for low-cost monitoring of the environment in the extrafoveal space, these findings show that, in the foveola, it transiently modulates vision of detail with a high degree of resolution. Together with inhibition of return, it may aid visual exploration of complex foveal stimuli [11].
Affiliation(s)
- Yue Guzhang: Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- Natalya Shelchkova: Program in Computational Neuroscience, University of Chicago, Chicago, IL, USA
- Rania Ezzo: Department of Psychology, New York University, New York, NY, USA
- Martina Poletti: Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA; Department of Neuroscience, University of Rochester, Rochester, NY, USA; Center for Visual Science, University of Rochester, Rochester, NY, USA
16
Mirpour K, Bisley JW. The roles of the lateral intraparietal area and frontal eye field in guiding eye movements in free viewing search behavior. J Neurophysiol 2021; 125:2144-2157. PMID: 33949898; DOI: 10.1152/jn.00559.2020.
Abstract
The lateral intraparietal area (LIP) and frontal eye field (FEF) have been shown to play significant roles in oculomotor control, yet most studies have found that the two areas behave similarly. To identify the unique roles each area plays in guiding eye movements, we recorded 200 LIP neurons and 231 FEF neurons from four animals performing a free viewing visual foraging task. We analyzed how neuronal responses were modulated by stimulus identity and the animals' choice of where to make a saccade. We additionally analyzed the comodulation of the sensory signals and the choice signal to identify how the sensory signals drove the choice. We found a clearly defined division of labor: LIP provided a stable map integrating task rules and stimulus identity, whereas FEF responses were dynamic, representing more complex information and, just before the saccade, were integrated with task rules and stimulus identity to decide where to move the eye. NEW & NOTEWORTHY The lateral intraparietal area (LIP) and frontal eye field (FEF) are known to contribute to guiding eye movements, but little is known about the unique roles that each area plays. Using a free viewing visual search task, we found that LIP provides a stable map of the visual world, integrating task rules and stimulus identity. FEF activity is consistently modulated by more complex information but, just before the saccade, integrates all the information to make the final decision about where to move.
Affiliation(s)
- Koorosh Mirpour: Department of Neurobiology, David Geffen School of Medicine at UCLA, Los Angeles, California
- James W Bisley: Department of Neurobiology, David Geffen School of Medicine at UCLA, Los Angeles, California; Jules Stein Eye Institute, David Geffen School of Medicine at UCLA, Los Angeles, California; Department of Psychology and the Brain Research Institute, UCLA, Los Angeles, California
17
De Lestrange-Anginieur E, Leung TW, Kee CS. Joint effect of defocus blur and spatial attention. Vision Res 2021; 185:88-97. PMID: 33964585; DOI: 10.1016/j.visres.2021.04.002.
Abstract
Defocus blur and spatial attention both act on our ability to see clearly over time. However, it is currently unknown how these two factors interact, because studies on spatial resolution have focused only on the separate effects of attention and defocus blur. In this study, eleven participants performed a resolution acuity task along the 135˚/315˚ diagonal relative to horizontal, at 8˚ eccentricity, for clear and blurred Landolt C images under various manipulations of covert endogenous attention. All conditions were interleaved and viewed binocularly on a visual display. We observed that attention not only improves the resolution of clear stimuli but also modulates the resolution of defocused stimuli, compensating for the loss of resolution caused by retinal blur. Our results show, however, that as the degree of attention decreases, the differences between clear and blurred images largely diminish, thus limiting the benefit of an image quality enhancement. Attention also tended to enhance the resolution of clear targets more than blurred targets, suggesting potential variations in the gain of vision correction with the level of attention. This demonstrates that the interaction between spatial attention and defocus blur can play a role in the way we see. In view of these findings, the development of adaptive interventions, which adjust the eye's defocus to attention, may hold promise.
Affiliation(s)
- T W Leung: School of Optometry, Hong Kong Polytechnic University, Hong Kong, China
- C S Kee: School of Optometry, Hong Kong Polytechnic University, Hong Kong, China; Interdisciplinary Division of Biomedical Engineering, Hong Kong Polytechnic University, Hong Kong SAR, China
18
Veríssimo IS, Hölsken S, Olivers CNL. Individual differences in crowding predict visual search performance. J Vis 2021; 21:29. PMID: 34038508; PMCID: PMC8164367; DOI: 10.1167/jov.21.5.29.
Abstract
Visual search is an integral part of human behavior and has proven important to understanding mechanisms of perception, attention, memory, and oculomotor control. Thus far, the dominant theoretical framework posits that search is mainly limited by covert attentional mechanisms, comprising a central bottleneck in visual processing. A different class of theories seeks the cause in the inherent limitations of peripheral vision, with search being constrained by what is known as the functional viewing field (FVF). One of the major factors limiting peripheral vision, and thus the FVF, is crowding. We adopted an individual differences approach to test the prediction from FVF theories that visual search performance is determined by the efficacy of peripheral vision, in particular crowding. Forty-four participants were assessed with regard to their sensitivity to crowding (as measured by critical spacing) and their search efficiency (as indicated by manual responses and eye movements). This revealed substantial correlations between the two tasks, as stronger susceptibility to crowding was predictive of slower search, more eye movements, and longer fixation durations. Our results support FVF theories in showing that peripheral vision is an important determinant of visual search efficiency.
Affiliation(s)
- Inês S Veríssimo: Cognitive Psychology, Institute for Brain and Behavior, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Stefanie Hölsken: Cognitive Psychology, Institute for Brain and Behavior, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Christian N L Olivers: Cognitive Psychology, Institute for Brain and Behavior, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands; https://www.vupsy.nl/
19
Shurygina O, Pooresmaeili A, Rolfs M. Pre-saccadic attention spreads to stimuli forming a perceptual group with the saccade target. Cortex 2021; 140:179-198. PMID: 33991779; DOI: 10.1016/j.cortex.2021.03.020.
Abstract
The pre-saccadic attention shift (a rapid increase in visual sensitivity at the target) is an inevitable precursor of saccadic eye movements. Saccade targets are often parts of the objects that are of interest to the active observer. Although the link between saccades and covert attention shifts is well established, it remains unclear whether pre-saccadic attention selects the location of the eye movement target or rather the entire object that occupies this location. Indeed, several neurophysiological studies suggest that attentional modulation of neural activity in visual cortex spreads across parts of objects (e.g., elements grouped by Gestalt principles) that contain the target location of a saccade. To understand the nature of pre-saccadic attentional selection, we examined how visual sensitivity, measured in a challenging orientation discrimination task, changes during saccade preparation at locations that are perceptually grouped with the saccade target. In Experiment 1, using grouping by color in a delayed-saccade task, we found no consistent spread of attention to locations that formed a perceptual group with the saccade target. However, performance depended on the side of the stimulus arrangement relative to the saccade target location, an effect we discuss with respect to attentional momentum. In Experiment 2, employing stronger perceptual grouping cues (color and motion) and an immediate-saccade task, we obtained a reliable grouping effect: Attention spread to locations that were perceptually grouped with the saccade target while saccade preparation was underway. We also replicated the side effect observed in Experiment 1. These results provide evidence that pre-saccadic attention spreads beyond the target location along the saccade direction and selects scene elements that, based on Gestalt criteria, are likely to belong to the same object as the saccade target.
Affiliation(s)
- Olga Shurygina: Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Germany; Department of Psychology, Humboldt-Universität zu Berlin, Germany; Exzellenzcluster Science of Intelligence, Technische Universität Berlin, Berlin, Germany
- Arezoo Pooresmaeili: Perception and Cognition Group, European Neuroscience Institute Göttingen - A Joint Initiative of the University Medical Center Göttingen and the Max-Planck-Society, Göttingen, Germany
- Martin Rolfs: Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Germany; Department of Psychology, Humboldt-Universität zu Berlin, Germany; Exzellenzcluster Science of Intelligence, Technische Universität Berlin, Berlin, Germany
20
Michel R, Dugué L, Busch NA. Distinct contributions of alpha and theta rhythms to perceptual and attentional sampling. Eur J Neurosci 2021; 55:3025-3039. PMID: 33609313; DOI: 10.1111/ejn.15154.
Abstract
Accumulating evidence suggests that visual perception operates in an oscillatory fashion at an alpha frequency (around 10 Hz). Moreover, visual attention also seems to operate rhythmically, albeit at a theta frequency (around 5 Hz). Both rhythms are often associated with "perceptual snapshots" taken at the favorable phases of these rhythms. However, less is known about the unfavorable phases: do they constitute "blind gaps," requiring the observer to guess, or is information sampled with reduced precision insufficient for the task demands? As simple detection or discrimination tasks cannot distinguish these options, we applied a continuous report task by asking for the exact orientation of a Landolt ring's gap to estimate separate model parameters for precision and the amount of guessing. We embedded this task in a well-established psychophysical protocol by densely sampling such reports across 20 cue-target stimulus onset asynchronies in a Posner-like cueing paradigm manipulating involuntary spatial attention. Testing the resulting time courses of the guessing and precision parameters for rhythmicities using a fast Fourier transform, we found an alpha rhythm (9.6 Hz) in precision for invalidly cued trials and a theta rhythm (4.8 Hz) in the guess rate across validity conditions. These results suggest distinct roles of the perceptual alpha and the attentional theta rhythm. We speculate that both rhythms result in environmental sampling characterized by fluctuating spatial resolution, speaking against a strict succession of blind gaps and perceptual snapshots.
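The spectral test described in this abstract (Fourier-transforming a behavioral time course sampled across SOAs and looking for a dominant frequency) can be sketched in a few lines. This is an illustrative reconstruction with made-up numbers, not the authors' analysis code; the 5 Hz modulation and 50 ms SOA spacing are assumptions for the demo.

```python
import numpy as np

def dominant_frequency(time_course, dt):
    """Return the dominant frequency (Hz) in a behavioral time course.

    time_course : parameter estimates (e.g., precision) at successive SOAs
    dt          : SOA spacing in seconds
    """
    x = np.asarray(time_course) - np.mean(time_course)  # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))                   # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=dt)
    return freqs[1:][np.argmax(spectrum[1:])]           # skip the 0 Hz bin

# Hypothetical example: 20 SOAs spaced 50 ms apart with a 5 Hz (theta) modulation
soas = np.arange(20) * 0.05
precision = 1.0 + 0.3 * np.sin(2 * np.pi * 5 * soas)
print(dominant_frequency(precision, dt=0.05))
```

With 20 samples spanning 1 s, the frequency resolution is 1 Hz, so the peak lands exactly on the 5 Hz bin; in real data one would additionally assess significance, e.g., against a permutation-based null spectrum.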
Affiliation(s)
- René Michel: Institute of Psychology, University of Münster, Münster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Laura Dugué: Université de Paris, INCC UMR 8002, CNRS, Paris, France; Institut Universitaire de France (IUF), Paris, France
- Niko A Busch: Institute of Psychology, University of Münster, Münster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
21
Abstract
The information used by conscious perception may differ from that which drives certain actions. A dramatic illusion caused by an object's internal texture motion has been put forward as one example. The motion causes an illusory position shift that accumulates over seconds into a large effect, but targeting of the grating for a saccade (a rapid eye movement) is not affected by this illusion. While this has been described as a dissociation between perception and action, an alternative explanation is that rather than saccade targeting having privileged access to the correct position, a shift of attention that precedes saccades resets the accumulated illusory position shift to zero. In support of this possibility, we found that the accumulation of illusory position shift can be reset by transients near the moving object, creating an impression of the object returning to near its actual position. Repetitive luminance changes of the object also resulted in reset of the accumulation, but less so when attention to the object was reduced by a concurrent digit identification task. Finally, judgments of the object's positions around the time of saccade onset reflected the veridical rather than the illusory position. These results suggest that attentional shifts, including those preceding saccades, can update the perceived position of moving objects and mediate the previously reported dissociation between conscious perception and saccades.
22
Focal lung pathology detection in radiology: Is there an effect of experience on visual search behavior? Atten Percept Psychophys 2020; 82:2837-2850. PMID: 32367272; DOI: 10.3758/s13414-020-02033-y.
Abstract
In radiology, 60% to 80% of diagnostic errors are perceptual. The use of more efficient visual search behaviors is expected to reduce these errors. We collected eye-tracking data from participants with different levels of experience when interpreting chest X-rays during the completion of a pathology-detection task. Eye-tracking measures were assessed in the context of three existing visual search theories from the literature to understand the association between visual search behavior and underlying processes: the long-term working memory theory, the information-reduction hypothesis, and the holistic model of image perception. The most experienced participants (radiology residents) showed the highest level of performance, although their visual search behaviors did not differ from the intermediate group. This suggests that radiology residents better processed the represented information on the X-ray, using a visual search strategy similar to the intermediate group. Since similar visual search resulted in more information extraction in the radiology residents compared with the intermediates, we suggest that this result might support the long-term working memory theory. Furthermore, compared with novices, intermediates and radiology residents fixated longer on areas that were more important to avoid missing any pathology, which possibly confirms the information-reduction hypothesis. Finally, the larger distances between fixations observed in more experienced participants could support the holistic model of image perception. In addition, measures of generic skills were related to a lower time cost for switching between global and local information processing. Our findings suggest that the three theories may be complementary in chest X-ray interpretation, and a unified theory explaining perceptual-cognitive superiority in radiology should therefore be considered.
23
Junker MS, Park BY, Shin JC, Cho YS. Adaptive Changes in the Dynamics of Visual Attention With Extended Practice. Front Psychol 2020; 11:565288. PMID: 33117232; PMCID: PMC7574854; DOI: 10.3389/fpsyg.2020.565288.
Abstract
Previous research indicates that visual attention can adapt to temporal stimulus patterns utilizing the rapid serial visual presentation (RSVP) task. However, how the temporal dynamics of an attentional pulse adapt to temporal patterns has not been explored. We addressed this question by conducting an attentional component analysis on RSVP performance and explored whether changes in attentional dynamics were accompanied by explicit learning about predictable target timing. We utilized an RSVP task in which a target letter appeared either in one of two possible RSVP positions in fixed-timing conditions or in random positions over 1, 2, or 3 days of training. In a transfer phase, the target appeared in previously presented or new positions. Over 3 days of practice, the target identification rate and the efficacy and precision of a putative attentional pulse increased. These changes reflected general learning in the RSVP task, resulting in attentional dynamics more efficiently focused on the target. Although group performance effects did not support learning of fixed target positions, target identification rates and the measure of the efficacy of an attentional pulse at these positions were positively associated with explicit learning. The current study is the first to provide a detailed description of practice-related adaptation of attentional dynamics and suggests that timing-specific changes might be mediated by explicit temporal learning.
Affiliation(s)
- Matthew S Junker: School of Psychological and Behavioral Sciences, Southern Illinois University, Carbondale, IL, United States
- Bo Youn Park: Department of Psychology, Korea University, Seoul, South Korea
- Jacqueline C Shin: Department of Psychology, Indiana State University, Terre Haute, IN, United States
- Yang Seok Cho: Department of Psychology, Korea University, Seoul, South Korea
24
A single, simple, statistical mechanism explains resource distribution and temporal updating in visual short-term memory. Cogn Psychol 2020; 122:101330. PMID: 32712370; DOI: 10.1016/j.cogpsych.2020.101330.
Abstract
Investigations into the way that information is held and integrated within the visual system provide some basis for understanding how visual information is represented and processed. Just over sixty years ago, Swets, Shipley, McKey, and Green (1959) demonstrated that performance within an auditory detection task increases as a function of the square root of the number of stimulus observation intervals, following the predictions of basic sampling theory, indicating the efficient perceptual integration of stimulus information. This principle of observer performance contingent on a constant rate of stimulus sampling also forms the basis of the sample-size model (Palmer, 1990; Sewell, Lilburn, & Smith, 2014), which seeks to provide an account of how memory resources might be divided among item representations in visual short-term memory (VSTM). In this article, we combine the multiple observations paradigm of Swets and colleagues with the VSTM paradigm of Sewell and colleagues and show that the sample-size relationship accounts for both the increase in performance with the number of presentation intervals and the way that performance changes as a function of the number of items in memory. The model provides an account of both the overall information limit of VSTM and the dynamics of that limit, demonstrating not only that observers can selectively update specific representations in memory but that performance in this task is accounted for by a simple statistical constraint. We discuss the implications for models of VSTM capacity and architecture generally, focusing on the implications for objecthood and the characteristics of encoding to and retrieval from memory.
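The square-root sampling law that the sample-size model rests on can be illustrated with a short simulation: an observer who pools N independent noisy looks at the same stimulus gains sensitivity (d') in proportion to sqrt(N), because the mean of the pooled evidence is unchanged while its standard deviation shrinks by sqrt(N). This is a sketch under arbitrary signal and noise settings, not the authors' model code.

```python
import numpy as np

rng = np.random.default_rng(0)

def dprime(n_intervals, signal=1.0, noise_sd=1.0, n_trials=100_000):
    """Sensitivity of an observer who averages n_intervals independent
    noisy observations of the same stimulus (basic sampling theory)."""
    looks = signal + noise_sd * rng.standard_normal((n_trials, n_intervals))
    pooled = looks.mean(axis=1)          # evidence averaged across intervals
    return pooled.mean() / pooled.std()  # d' of the pooled decision variable

# Doubling the number of observation intervals should multiply d' by ~sqrt(2):
for n in (1, 2, 4):
    print(n, round(dprime(n), 2))        # grows roughly as sqrt(n)
```

The same arithmetic applied to items rather than intervals gives the model's resource-division prediction: with samples split evenly across k memorized items, per-item sensitivity falls as 1/sqrt(k).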
25
Baruch O, Goldfarb L. Mexican Hat Modulation of Visual Acuity Following an Exogenous Cue. Front Psychol 2020; 11:854. PMID: 32499738; PMCID: PMC7242741; DOI: 10.3389/fpsyg.2020.00854.
Abstract
Classical models of exogenous attention suggest that attentional enhancement at the focus of attention degrades gradually with distance from the attended location. On the other hand, the Attentional Attraction Field (AAF) model (Baruch and Yeshurun, 2014) suggests that the shift of receptive fields (RFs) toward the attended location, reported by several physiological studies, leads to a decreased density of RFs at the attentional surround; hence the model predicts that the modulation of performance by spatial attention may have the shape of a Mexican Hat. Motivated by these theories, this study presents behavioral evidence in support of a Mexican Hat-shaped modulation in exogenous spatial tasks that appears only at short latencies. In two experiments, participants had to decide the location of a small gap in a target circle that was preceded by a non-informative attention-capturing cue. The distance between cue and target and the latency between their onsets were varied. At short SOAs the performance curves were cubic; only at longer SOAs did this trend turn linear. Our results suggest that a rapid Mexican Hat modulation is an inherent property of the mechanism underlying exogenous attention and that a monotonically degrading trend, such as advocated by classical models, develops only at later stages of processing. The involvement of bottom-up processes, such as the attraction of RFs to the focus of attention, is further discussed.
Affiliation(s)
- Orit Baruch: The Institute for Information Processing and Decision Making (IIPDM), University of Haifa, Haifa, Israel
- Liat Goldfarb: E. J. Safra Brain Research Center for the Study of Learning Disabilities, University of Haifa, Haifa, Israel
26
Investigating face and house discrimination at foveal to parafoveal locations reveals category-specific characteristics. Sci Rep 2020; 10:8306. PMID: 32433486; PMCID: PMC7239942; DOI: 10.1038/s41598-020-65239-y.
Abstract
Since perceptual and neural face sensitivity is associated with a foveal bias, and neural place sensitivity is associated with a peripheral bias (integration over space), we hypothesized that face perception ability will decline more with eccentricity than place perception ability. We also wanted to examine whether face perception ability would show a left visual field (LeVF) bias due to earlier reports suggesting right hemisphere dominance for faces, or would show an upper or lower visual field bias. Participants performed foveal and parafoveal face and house discrimination tasks for upright or inverted stimuli (≤4°) while their eye movements were monitored. Low-level visual tasks were also measured. The eccentricity-related accuracy reductions were evident for all categories. Through detailed analyses we found (i) a robust face inversion effect across the parafovea, while for houses an opposite effect was found, (ii) higher eccentricity-related sensitivity for face performance than for house performance (via inverted vs. upright within-category eccentricity-driven reductions), (iii) within-category but not across-category performance associations across eccentricities, and (iv) no hemifield biases. Our central to parafoveal investigations suggest that high-level vision processing may be reflected in behavioural performance.
27
Attention amplifies neural representations of changes in sensory input at the expense of perceptual accuracy. Nat Commun 2020; 11:2128. PMID: 32358494; PMCID: PMC7195455; DOI: 10.1038/s41467-020-15989-0.
Abstract
Attention enhances the neural representations of behaviorally relevant stimuli, typically by a push-pull increase of the neuronal response gain to attended vs. unattended stimuli. This selectively improves perception and consequently behavioral performance. However, to enhance the detectability of stimulus changes, attention might also distort neural representations, compromising accurate stimulus representation. We test this hypothesis by recording neural responses in the visual cortex of rhesus monkeys during a motion direction change detection task. We find that attention indeed amplifies the neural representation of direction changes, beyond a similar effect of adaptation. We further show that humans overestimate such direction changes, providing a perceptual correlate of our neurophysiological observations. Our results demonstrate that attention distorts the neural representations of abrupt sensory changes and consequently perceptual accuracy. This likely represents an evolutionary adaptive mechanism that allows sensory systems to flexibly forgo accurate representation of stimulus features to improve the encoding of stimulus change.
28
Donovan I, Shen A, Tortarolo C, Barbot A, Carrasco M. Exogenous attention facilitates perceptual learning in visual acuity to untrained stimulus locations and features. J Vis 2020; 20:18. PMID: 32340029; PMCID: PMC7405812; DOI: 10.1167/jov.20.4.18.
Abstract
Visual perceptual learning (VPL) refers to the improvement in performance on a visual task due to practice. A hallmark of VPL is specificity, as improvements are often confined to the trained retinal locations or stimulus features. We have previously found that exogenous (involuntary, stimulus-driven) and endogenous (voluntary, goal-driven) spatial attention can facilitate the transfer of VPL across locations in orientation discrimination tasks mediated by contrast sensitivity. Here, we investigated whether exogenous spatial attention can facilitate such transfer in acuity tasks that have been associated with higher specificity. We trained observers for 3 days (days 2-4) in a Landolt acuity task (Experiment 1) or a Vernier hyperacuity task (Experiment 2), with either exogenous precues (attention group) or neutral precues (neutral group). Importantly, during pre-tests (day 1) and post-tests (day 5), all observers were tested with neutral precues; thus, groups differed only in their attentional allocation during training. For the Landolt acuity task, we found evidence of location transfer in both the neutral and attention groups, suggesting weak location specificity of VPL. For the Vernier hyperacuity task, we found evidence of location and feature specificity in the neutral group, and learning transfer in the attention group, with similar improvement at trained and untrained locations and features. Our results reveal that, when there is specificity in a perceptual acuity task, exogenous spatial attention can overcome that specificity and facilitate learning transfer to both untrained locations and features simultaneously with the same training. Thus, in addition to improving performance, exogenous attention generalizes perceptual learning across locations and features.
Affiliation(s)
- Ian Donovan: Department of Psychology and Neural Science, New York University, New York, NY, USA
- Angela Shen: Department of Psychology, New York University, New York, NY, USA
- Antoine Barbot: Department of Psychology, New York University, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA
- Marisa Carrasco: Department of Psychology, New York University, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA
29
Prior Experience Alters the Appearance of Blurry Object Borders. Sci Rep 2020; 10:5821. PMID: 32242057; PMCID: PMC7118174; DOI: 10.1038/s41598-020-62728-y.
Abstract
Object memories activated by borders serve as priors for figure assignment: figures are more likely to be perceived on the side of a border where a well-known object is sketched. Do object memories also affect the appearance of object borders? Memories represent past experience with objects; memories of well-known objects include many with sharp borders because they are often fixated. We investigated whether object memories affect appearance by testing whether blurry borders appear sharper when they are contours of well-known objects versus matched novel objects. Participants viewed blurry versions of one familiar and one novel stimulus simultaneously for 180 ms; then made comparative (Exp. 1) or equality judgments regarding perceived blur (Exps. 2–4). For equivalent levels of blur, the borders of well-known objects appeared sharper than those of novel objects. These results extend evidence for the influence of past experience to object appearance, consistent with dynamic interactive models of perception.
30
Exogeneous Spatial Cueing beyond the Near Periphery: Cueing Effects in a Discrimination Paradigm at Large Eccentricities. Vision (Basel) 2020; 4:vision4010013. PMID: 32079326; PMCID: PMC7157755; DOI: 10.3390/vision4010013.
Abstract
Although visual attention is one of the most thoroughly investigated topics in experimental psychology and vision science, most of this research tends to be restricted to the near periphery. Eccentricities used in attention studies usually do not exceed 20° to 30°, and most studies use considerably smaller maximum eccentricities. Thus, empirical knowledge about attention beyond this range is sparse, probably due to a previous lack of suitable experimental devices to investigate attention in the far periphery. This is currently changing due to the development of temporal high-resolution projectors and head-mounted displays (HMDs) that allow displaying experimental stimuli at far eccentricities. In the present study, visual attention was investigated beyond the near periphery (15°, 30°, and 56° in Exp. 1; 15°, 35°, and 56° in Exp. 2) in a peripheral Posner cueing paradigm using a discrimination task with placeholders. Interestingly, cueing effects were revealed across the whole range of eccentricities, although the inhomogeneity of the visual field and its functional subdivisions might lead one to suspect otherwise.
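In a Posner paradigm like this, the cueing effect at each eccentricity is simply the mean response-time cost of invalid relative to valid cues. A minimal sketch with hypothetical RT data (the values below are illustrative, not taken from the study):

```python
from statistics import mean

# Hypothetical RTs (ms) per eccentricity in a cued discrimination task.
rts = {
    15: {"valid": [412, 398, 405], "invalid": [448, 455, 440]},
    30: {"valid": [430, 441, 436], "invalid": [470, 465, 478]},
    56: {"valid": [455, 462, 458], "invalid": [497, 488, 502]},
}

# Cueing effect = mean invalid RT minus mean valid RT; a positive value
# at every eccentricity mirrors the finding that cueing effects persist
# into the far periphery.
effects = {ecc: mean(c["invalid"]) - mean(c["valid"]) for ecc, c in rts.items()}
print(effects)
```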
Collapse
|
31
|
Bonder T, Gopher D. The Effect of Confidence Rating on a Primary Visual Task. Front Psychol 2019; 10:2674. [PMID: 31827456 PMCID: PMC6892355 DOI: 10.3389/fpsyg.2019.02674] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2019] [Accepted: 11/13/2019] [Indexed: 11/13/2022] Open
Abstract
The current study explored the influence of confidence rating on visual acuity. We used brief exposures of the Landolt gap discrimination task, probing the primary visual ability to detect contrast. During 200 practice trials, participants in the Confidence Rating group rated their response-confidence in each trial. A second (Time Delay) group received a short break at the end of each trial, equivalent to the average rating response time of the Confidence Rating group. The third (Standard Task) group performed the Landolt gap task in its original form. During practice, the Confidence Rating group developed an efficient monitoring ability indicated by a significant correlation between accuracy and confidence rating and a moderate calibration index score. Following practice, all groups performed 400 identical test trials of the standard Landolt gap task. In the test trials, the Confidence Rating group responded more accurately than the control groups, though it did not differ from them in response time for correct answers. Remarkably, the Confidence Rating group was significantly slower when making errors, compared to the control groups. An interaction in learning efficiency occurred: the Confidence Rating group significantly improved its reaction times after the initial practice, as compared to both control groups. The findings demonstrate an effect of confidence rating on the formation of processing and response strategies, which granted participants significant benefits in later performance.
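The abstract does not specify the formula behind its calibration index; one common definition is the frequency-weighted mean squared deviation between stated confidence and observed accuracy within confidence bins. A sketch under that assumption, with made-up data (the `conf`/`acc` values and binning granularity are hypothetical):

```python
import statistics

def calibration_index(confidence, correct):
    """Frequency-weighted mean squared deviation between stated confidence
    and observed accuracy, computed within confidence bins.
    `confidence`: ratings in [0, 1]; `correct`: 0/1 trial outcomes.
    0 indicates perfect calibration; larger values, worse calibration."""
    bins = {}
    for c, k in zip(confidence, correct):
        bins.setdefault(round(c, 1), []).append(k)
    n = len(confidence)
    return sum(len(v) * (c - statistics.mean(v)) ** 2
               for c, v in bins.items()) / n

# Hypothetical observer: confidence tracks accuracy fairly well.
conf = [0.9, 0.9, 0.6, 0.6, 0.6, 0.6, 0.5, 0.5]
acc  = [1,   1,   1,   0,   1,   0,   1,   0]
print(calibration_index(conf, acc))
```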
Collapse
Affiliation(s)
- Taly Bonder
- Faculty of Industrial Engineering and Management, Technion - Israel Institute of Technology, Haifa, Israel
| | - Daniel Gopher
- Faculty of Industrial Engineering and Management, Technion - Israel Institute of Technology, Haifa, Israel
| |
Collapse
|
32
|
Wolfe JM, Utochkin IS. What is a preattentive feature? Curr Opin Psychol 2019; 29:19-26. [PMID: 30472539 PMCID: PMC6513732 DOI: 10.1016/j.copsyc.2018.11.005] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2018] [Revised: 11/01/2018] [Accepted: 11/08/2018] [Indexed: 11/30/2022]
Abstract
The concept of a preattentive feature has been central to vision and attention research for about half a century. A preattentive feature is a feature that guides attention in visual search and that cannot be decomposed into simpler features. While that definition seems straightforward, there is no simple diagnostic test that infallibly identifies a preattentive feature. This paper briefly reviews the criteria that have been proposed and illustrates some of the difficulties of definition.
Collapse
Affiliation(s)
- Jeremy M Wolfe
- Corresponding author. Visual Attention Lab, Department of Surgery, Brigham & Women's Hospital, Departments of Ophthalmology and Radiology, Harvard Medical School, 64 Sidney St., Suite 170, Cambridge, MA 02139-4170
| | - Igor S Utochkin
- National Research University Higher School of Economics, Moscow, Russian Federation. Address: 101000, Armyansky per. 4, Moscow, Russian Federation
| |
Collapse
|
33
|
Sagar V, Sengupta R, Sridharan D. Dissociable sensitivity and bias mechanisms mediate behavioral effects of exogenous attention. Sci Rep 2019; 9:12657. [PMID: 31477747 PMCID: PMC6718663 DOI: 10.1038/s41598-019-42759-w] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2018] [Accepted: 04/08/2019] [Indexed: 11/24/2022] Open
Abstract
Attention can be directed endogenously, based on task-relevant goals, or captured exogenously, by salient stimuli. While recent studies have shown that endogenous attention can facilitate behavior through dissociable sensitivity (sensory) and choice bias (decisional) mechanisms, it is unknown if exogenous attention also operates through dissociable sensitivity and bias mechanisms. We tested human participants on a multialternative change detection task with exogenous attention cues, which preceded or followed change events in close temporal proximity. Analyzing participants’ behavior with a multidimensional signal detection model revealed clear dissociations between exogenous cueing effects on sensitivity and bias. While sensitivity was, overall, lower at the cued location compared to other locations, bias was highest at the cued location. With an appropriately designed post-cue control condition, we discovered that the attentional effect of exogenous pre-cueing was to enhance sensitivity proximal to the cue. In contrast, exogenous attention enhanced bias even for distal stimuli in the cued hemifield. Reaction time effects of exogenous cueing could be parsimoniously explained with a diffusion-decision model, in which drift rate was determined by independent contributions from sensitivity and bias at each location. The results suggest a mechanistic schema of how exogenous attention engages dissociable sensitivity and bias mechanisms to shape behavior.
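The sensitivity/bias dissociation rests on the signal detection decomposition of choice behavior; in the standard equal-variance two-alternative form, sensitivity d' and the decision criterion c are computed from hit and false-alarm rates. A minimal illustration (the rates are hypothetical, and this is the textbook univariate form rather than the paper's multidimensional model):

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance Gaussian signal detection:
    sensitivity d' = z(H) - z(FA); criterion c = -(z(H) + z(FA)) / 2.
    Lower c means a more liberal bias (more 'yes' responses overall);
    higher d' means better discrimination of signal from noise."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -(z(hit_rate) + z(fa_rate)) / 2
    return d_prime, criterion

# Hypothetical uncued vs. cued locations: the cued location shows a more
# liberal criterion (stronger bias) alongside slightly lower sensitivity,
# the qualitative pattern the study reports for exogenous cues.
print(sdt_measures(0.80, 0.20))  # uncued
print(sdt_measures(0.85, 0.30))  # cued
```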
Collapse
Affiliation(s)
- Vishak Sagar
- Centre for Neuroscience, Indian Institute of Science, C. V. Raman Avenue, Bangalore, 560012, India
| | - Ranit Sengupta
- Centre for Neuroscience, Indian Institute of Science, C. V. Raman Avenue, Bangalore, 560012, India
| | - Devarajan Sridharan
- Centre for Neuroscience, Indian Institute of Science, C. V. Raman Avenue, Bangalore, 560012, India.
| |
Collapse
|
34
|
How voluntary spatial attention influences feature biases in object correspondence. Atten Percept Psychophys 2019; 82:1024-1037. [PMID: 31254261 DOI: 10.3758/s13414-019-01801-9] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
Abstract
Our visual system is able to establish associations between corresponding images across space and time and to maintain the identity of objects, even though the information our retina receives is ambiguous. It has been shown that lower-level factors (e.g., spatiotemporal proximity) can affect this correspondence problem. In addition, higher-level factors (e.g., semantic knowledge) can influence correspondence, suggesting that correspondence might also be solved at a higher object-based level of processing, which could be mediated by attention. To test this hypothesis, we instructed participants to voluntarily direct their attention to individual elements in the Ternus display. In this ambiguous apparent motion display, three elements are aligned next to each other and shifted by one position from one frame to the next. This shift can be either perceived as all elements moving together (group motion) or as one element jumping across the others (element motion). We created a competitive Ternus display, in which the color of the elements was manipulated in such a way that the percept was biased toward element motion for one color and toward group motion for another color. If correspondence can be established at an object-based level, attending toward one of the biased elements should increase the likelihood that this element determines the correspondence solution and thereby that the biased motion is perceived. Our results were in line with this hypothesis, providing support for an object-based correspondence process that is based on a one-to-one mapping of the most similar elements mediated via attention.
Collapse
|
35
|
Ahveninen J, Ingalls G, Yildirim F, Calabro FJ, Vaina LM. Peripheral visual localization is degraded by globally incongruent auditory-spatial attention cues. Exp Brain Res 2019; 237:2137-2143. [PMID: 31201472 DOI: 10.1007/s00221-019-05578-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2019] [Accepted: 06/07/2019] [Indexed: 11/26/2022]
Abstract
Global auditory-spatial orienting cues help the detection of weak visual stimuli, but it is not clear whether crossmodal attention cues also enhance the resolution of visuospatial discrimination. Here, we hypothesized that if anywhere, crossmodal modulations of visual localization should emerge in the periphery where the receptive fields are large. Subjects were presented with trials where a Visual Target, defined by a cluster of low-luminance dots, was shown for 220 ms at 25°-35° eccentricity in either the left or right hemifield. The Visual Target was either Uncued or it was presented 250 ms after a crossmodal Auditory Cue that was simulated from either the same hemifield as the Visual Target or the opposite one. After a whole-screen visual mask displayed for 800 ms, a pair of vertical Reference Bars was presented ipsilateral to the Visual Target. In a two-alternative forced choice task, subjects were asked to determine which of these two bars was closer to the center of the Visual Target. When the Auditory Cue and Visual Target were hemispatially incongruent, the speed and accuracy of visual localization performance were significantly impaired. However, hemispatially congruent Auditory Cues did not improve the localization of Visual Targets when compared to the Uncued condition. Further analyses suggested that the crossmodal Auditory Cues decreased the sensitivity (d') of the Visual Target localization without affecting post-perceptual decision biases. Our results suggest that in the visual periphery, the detrimental effect of hemispatially incongruent Auditory Cues is far greater than the benefit produced by hemispatially congruent cues. Our working hypothesis for future studies is that auditory-spatial attention cues suppress irrelevant visual locations in a global fashion, without modulating the local visual precision at relevant sites.
Collapse
Affiliation(s)
- Jyrki Ahveninen
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA.
| | - Grace Ingalls
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
| | - Funda Yildirim
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
| | - Finnegan J Calabro
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Department of Psychiatry and Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
| | - Lucia M Vaina
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Department of Neurology, Harvard Medical School, Massachusetts General Hospital and Brigham and Women's Hospital, Boston, MA, USA
| |
Collapse
|
36
|
Effects of multitasking and intention-behaviour consistency when facing yellow traffic light uncertainty. Atten Percept Psychophys 2019; 81:2832-2849. [PMID: 31161494 DOI: 10.3758/s13414-019-01766-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We examined the effects of multitasking on resolving response bistability to yellow traffic lights, using the performance metrics of reaction time and stopping frequency. We also examined whether people's actual behaviours, measured by implicit foot pedal responses, differed from their intentions related to these factors, as measured by explicit verbal commands. In a dual-task paradigm, participants responded to random traffic light changes, presented over a static background photograph of an intersection, using either foot pedals or verbal commands, while simultaneously identifying spoken words as either "animals" or "artefacts" via button pressing. The dual-task condition was found to prolong reaction times relative to a single-task condition. In addition, verbal commands were faster than the foot pedal responses, and conservativeness was the same for both types of responses. A second experiment, which provided a more dynamic simulation of the first experiment, confirmed that conservativeness did not differ between verbal commands and foot pedal responses. We conclude that multitasking affects a person's ability to resolve response bistability to yellow traffic lights. If one considers that prolonged reaction times reduce the amount of distance available to safely stop at intersections, this study underscores how multitasking poses a considerable safety risk for drivers approaching a yellow traffic light.
Collapse
|
38
|
Abstract
Constructing useful representations of our visual environment requires the ability to selectively pay attention to particular locations at specific moments. Whilst there has been much investigation on the influence of selective attention on spatial discrimination, less is known about its influence on temporal discrimination. In particular, little is known about how endogenous attention influences two fundamental and opposing temporal processes: segregation (the parsing of the visual scene over time into separate features) and integration (the binding together of related elements). In four experiments, we tested how endogenous cueing to a location influences each of these opposing processes. Results demonstrate a strong cueing effect on both segregation and integration. These results are consistent with the hypothesis that endogenous attention can influence both of these opposing processes in a flexible manner. The finding has implications for arbitrating between accounts of the multiple modulatory mechanisms comprising selective attention.
Collapse
|
39
|
Cue frequency modulates cuing effect either in the presence or in the absence of distractors. Acta Psychol (Amst) 2019; 193:73-79. [PMID: 30597422 DOI: 10.1016/j.actpsy.2018.12.008] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2018] [Revised: 12/08/2018] [Accepted: 12/14/2018] [Indexed: 11/21/2022] Open
Abstract
A novel, salient stimulus, even though it is not related to a concurrent goal-directed behavior, powerfully captures people's attention. While this stimulus-driven attentional capture has long been presumed to take place in a purely bottom-up or automatic manner, growing evidence shows that a number of top-down factors modulate the stimulus-driven capture of attention. Recent studies pointed out that cue presentation frequency is such a factor: the capture of attention by a salient, task-irrelevant cue increased as its presentation frequency decreased. Expanding on these studies, we investigated how the modulatory effect of the cue frequency differs depending on the level of competition between multiple stimuli. We found that an infrequently presented cue exerted a stronger capture effect than a frequently presented cue, either in the presence or in the absence of distractors. Importantly, in the absence of distractors, the performance difference elicited by the frequently presented cue was due to non-attentional sensory artifacts or decisional noise. However, the same frequent cue evoked a genuine attentional effect when multiple distractors accompanied the target, evoking stimulus-driven competition. Taken together, these results demonstrate that the effect of an attentional cue is modulated by cue frequency, and this modulation is also affected by stimulus-driven competition.
Collapse
|
40
|
Goodhew SC, Edwards M. Translating experimental paradigms into individual-differences research: Contributions, challenges, and practical recommendations. Conscious Cogn 2019; 69:14-25. [PMID: 30685513 DOI: 10.1016/j.concog.2019.01.008] [Citation(s) in RCA: 59] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2018] [Revised: 01/10/2019] [Accepted: 01/14/2019] [Indexed: 12/16/2022]
Abstract
Psychological science has long been cleaved by a fundamental divide between researchers who experimentally manipulate variables and those who measure existing individual differences. Increasingly, however, researchers are appreciating the value of integrating these approaches. Here, we used visual attention research as a case in point for how this gap can be bridged. Traditionally, researchers have predominantly adopted experimental approaches to investigating visual attention. Increasingly, however, researchers are integrating individual-differences approaches with experimental approaches to answer novel and innovative research questions. However, individual-differences research challenges some of the core assumptions and practices of experimental research. The purpose of this review, therefore, is to provide a timely summary and discussion of the key issues. While these are contextualised in the field of visual attention, the discussion of these issues has implications for psychological research more broadly. In doing so, we provide eight practical recommendations for proposed solutions and novel avenues for research moving forward.
Collapse
Affiliation(s)
- Stephanie C Goodhew
- Research School of Psychology, The Australian National University, Australia.
| | - Mark Edwards
- Research School of Psychology, The Australian National University, Australia
| |
Collapse
|
41
|
Ebitz RB, Moore T. Both a Gauge and a Filter: Cognitive Modulations of Pupil Size. Front Neurol 2019; 9:1190. [PMID: 30723454 PMCID: PMC6350273 DOI: 10.3389/fneur.2018.01190] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2018] [Accepted: 12/27/2018] [Indexed: 01/21/2023] Open
Abstract
Over 50 years of research have established that cognitive processes influence pupil size. This has led to the widespread use of pupil size as a peripheral measure of cortical processing in psychology and neuroscience. However, the function of cortical control over the pupil remains poorly understood. Why does visual attention change the pupil light reflex? Why do mental effort and surprise cause pupil dilation? Here, we consider these functional questions as we review and synthesize two literatures on cognitive effects on the pupil: how cognition affects pupil light response and how cognition affects pupil size under constant luminance. We propose that cognition may have co-opted control of the pupil in order to filter incoming visual information to optimize it for particular goals. This could complement other cortical mechanisms through which cognition shapes visual perception.
Collapse
Affiliation(s)
- R. Becket Ebitz
- Department of Neuroscience and Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States
| | - Tirin Moore
- Department of Neurobiology, Stanford University School of Medicine, Stanford, CA, United States
- Howard Hughes Medical Institute, Seattle, WA, United States
| |
Collapse
|
42
|
Dynamic distractor environments reveal classic visual field anisotropies for judgments of temporal order. Atten Percept Psychophys 2018; 81:738-751. [PMID: 30520009 DOI: 10.3758/s13414-018-1628-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Numerous studies have shown that visual performance critically depends on the stimulus' projected retinal location. For example, performance tends to be better along the horizontal relative to the vertical meridian (lateral anisotropy). Another case is the so-called upper-lower anisotropy, whereby performance is better in the upper relative to the lower hemifield. This study investigates whether temporal order judgments (TOJs) are subject to these visual field constraints. In Experiments 1 and 2, subjects reported the temporal order of two disks located along the horizontal or vertical meridians. Each target disk was surrounded by 10 black and white distractor disks, whose polarity remained unchanged (static condition) or reversed throughout the trial (dynamic condition). Results indicate that the mere presence of dynamic distractors elevated thresholds by more than a factor of four and that this elevation was particularly pronounced along the vertical meridian, evidencing the lateral anisotropy. In Experiment 3, thresholds were compared in upper, lower, left, and right visual hemifields. Results show that the threshold elevation caused by dynamic distractors was greatest in the upper visual field, demonstrating an upper-lower anisotropy. Critically, these anisotropies were evident exclusively in dynamic distractor conditions, suggesting that distinct processes govern TOJ performance under these different contextual conditions. We propose that whereas standard TOJs are processed by fast low-order motion mechanisms, the presence of dynamic distractors masks these low-order motion signals, forcing observers to rely more heavily on more sluggish higher order motion processes.
Collapse
|
43
|
van Es DM, Theeuwes J, Knapen T. Spatial sampling in human visual cortex is modulated by both spatial and feature-based attention. eLife 2018; 7:e36928. [PMID: 30526848 PMCID: PMC6286128 DOI: 10.7554/elife.36928] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2018] [Accepted: 11/13/2018] [Indexed: 11/13/2022] Open
Abstract
Spatial attention changes the sampling of visual space. Behavioral studies suggest that feature-based attention modulates this resampling to optimize the attended feature's sampling. We investigate this hypothesis by estimating spatial sampling in visual cortex while independently varying both feature-based and spatial attention. Our results show that spatial and feature-based attention interacted: resampling of visual space depended on both the attended location and feature (color vs. temporal frequency). This interaction occurred similarly throughout visual cortex, regardless of an area's overall feature preference. However, the interaction did depend on spatial sampling properties of voxels that prefer the attended feature. These findings are parsimoniously explained by variations in the precision of an attentional gain field. Our results demonstrate that the deployment of spatial attention is tailored to the spatial sampling properties of units that are sensitive to the attended feature.
Collapse
Affiliation(s)
- Daniel Marten van Es
- Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
| | - Jan Theeuwes
- Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
| | - Tomas Knapen
- Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Spinoza Centre for Neuroimaging, Royal Academy of Sciences, Amsterdam, The Netherlands
| |
Collapse
|
44
|
Li J, Oksama L, Hyönä J. Model of Multiple Identity Tracking (MOMIT) 2.0: Resolving the serial vs. parallel controversy in tracking. Cognition 2018; 182:260-274. [PMID: 30384128 DOI: 10.1016/j.cognition.2018.10.016] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2018] [Revised: 10/03/2018] [Accepted: 10/23/2018] [Indexed: 11/29/2022]
Abstract
The present study investigated whether during tracking of multiple moving objects with distinct identities only one identity is tracked at each moment (serial tracking) or whether multiple identities can be tracked simultaneously (parallel tracking). By adopting the gaze-contingent display change technique, we manipulated in real time the presence/absence of object identities during tracking. The data on performance accuracy revealed a serial tracking pattern for facial images and a parallel pattern for color discs: when tracking faces, the presence/absence of only the currently foveated identity impacted the performance, whereas when tracking colors, the presence of multiple identities across the visual field led to improved tracking performance. This pattern is consistent with the identifiability of the different types of objects in the visual field. The eye movements during multiple identity tracking (MIT) showed a bias towards visiting and dwelling on individual targets when facial identities were present and towards visiting the blank areas between targets when color identities were present. Nevertheless, the eye visits were predominantly on individual targets regardless of the type of objects and the presence of object identities. The eye visits to targets were beneficial for target tracking, particularly in face tracking. We propose the Model of Multiple Identity Tracking (MOMIT) 2.0, which accounts for the results and reconciles the serial vs. parallel controversy. The model suggests that observers cooperatively use attention, eye movements, perception, and working memory for dynamic tracking. Tracking appears more serial when high-resolution information needs to be sampled and maintained for discriminating the targets, whereas it appears more parallel when low-resolution information is sufficient.
Collapse
Affiliation(s)
- Jie Li
- School of Psychology, Beijing Sport University, China.
| | | | - Jukka Hyönä
- Department of Psychology, University of Turku, Finland.
| |
Collapse
|
45
|
Günel B, Thiel CM, Hildebrandt KJ. Effects of Exogenous Auditory Attention on Temporal and Spectral Resolution. Front Psychol 2018; 9:1984. [PMID: 30405479 PMCID: PMC6206225 DOI: 10.3389/fpsyg.2018.01984] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2018] [Accepted: 09/27/2018] [Indexed: 11/25/2022] Open
Abstract
Previous research in the visual domain suggests that exogenous attention in form of peripheral cueing increases spatial but lowers temporal resolution. It is unclear whether this effect transfers to other sensory modalities. Here, we tested the effects of exogenous attention on temporal and spectral resolution in the auditory domain. Eighteen young, normal-hearing adults were tested in both gap and frequency change detection tasks with exogenous cuing. Benefits of valid cuing were only present in the gap detection task while costs of invalid cuing were observed in both tasks. Our results suggest that exogenous attention in the auditory system improves temporal resolution without compromising spectral resolution.
Collapse
Affiliation(s)
- Basak Günel
- Department of Psychology, University of Oldenburg, Oldenburg, Germany
| | - Christiane M Thiel
- Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Cluster of Excellence Hearing4all, University of Oldenburg, Oldenburg, Germany
| | - K Jannis Hildebrandt
- Cluster of Excellence Hearing4all, University of Oldenburg, Oldenburg, Germany
- Department of Neuroscience, University of Oldenburg, Oldenburg, Germany
| |
Collapse
|
46
|
Abstract
Endogenous and exogenous visuospatial attention both alter spatial resolution, but they operate via distinct mechanisms. In texture segmentation tasks, exogenous attention inflexibly increases resolution even when detrimental for the task at hand and does so by modulating second-order processing. Endogenous attention is more flexible and modulates resolution to benefit performance according to task demands, but it is unknown whether it also operates at the second-order level. To answer this question, we measured performance on a second-order texture segmentation task while independently manipulating endogenous and exogenous attention. Observers discriminated a second-order texture target at several eccentricities. We found that endogenous attention improved performance uniformly across eccentricity, suggesting a flexible mechanism that can increase or decrease resolution based on task demands. In contrast, exogenous attention improved performance in the periphery but impaired it at central retinal locations, consistent with an inflexible resolution enhancement. Our results reveal that endogenous and exogenous attention both alter spatial resolution by differentially modulating second-order processing.
Collapse
Affiliation(s)
- Michael Jigo
- Center for Neural Science, New York University, New York, NY, USA
| | - Marisa Carrasco
- Center for Neural Science and Department of Psychology, New York University, New York, NY, USA
| |
Collapse
|
47
|
Hochmitz I, Lauffs MM, Herzog MH, Yeshurun Y. Sustained spatial attention can affect feature fusion. J Vis 2018; 18:20. [PMID: 30029230 DOI: 10.1167/18.6.20] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
When two verniers are presented in rapid succession at the same location, feature fusion occurs. Instead of perceiving two separate verniers, participants typically report perceiving one fused vernier, whose offset is a combination of the two previous verniers, with the later one slightly dominating. Here, we examined the effects of sustained attention, the voluntary component of spatial attention, on feature fusion. One way to manipulate sustained attention is via the degree of certainty regarding the stimulus location. In the attended condition, the stimulus appeared always in the same location, and in the unattended condition it could appear in one of two possible locations. Participants had to report the offset of the fused vernier. Experiments 1 and 2 measured attentional effects on feature fusion with and without eye-tracking. In both experiments, we found a higher rate of reports corresponding to the offset of the second vernier with focused attention than without focused attention, suggesting that attention strengthened the final percept emerging from the fusion operation. In Experiment 3, we manipulated the stimulus duration to encourage a final fused percept that is dominated by either the first or second vernier. We found that attention strengthened the already dominant percept, regardless of whether it corresponded to the offset of the first or second vernier. These results are consistent with an attentional mechanism of signal enhancement at the encoding stage.
Affiliation(s)
- Ilanit Hochmitz
- Department of Psychology, University of Haifa, Haifa, Israel
| | - Marc M Lauffs
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Michael H Herzog
- Laboratory of Psychophysics, Brain Mind Institute, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
| | - Yaffa Yeshurun
- Department of Psychology, University of Haifa, Haifa, Israel
| |
|
48
|
Cutrone EK, Heeger DJ, Carrasco M. On spatial attention and its field size on the repulsion effect. J Vis 2018; 18:8. [PMID: 30029219 PMCID: PMC6012187 DOI: 10.1167/18.6.8] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2017] [Accepted: 05/13/2018] [Indexed: 11/24/2022] Open
Abstract
We investigated the attentional repulsion effect (stimuli appear displaced farther away from attended locations) in three experiments: one with exogenous (involuntary) attention and two with endogenous (voluntary) attention with different attention-field sizes. It has been proposed that differences in attention-field size can account for qualitative differences in the neural responses elicited by attended stimuli. We used psychophysical comparative judgments and manipulated either exogenous attention via peripheral cues or endogenous attention via central cues and a demanding rapid serial visual presentation task. We manipulated the attention-field size of endogenous attention by presenting streams of letters at two specific locations or at two of many possible locations during each block. We found a robust attentional repulsion effect in all three experiments: with both endogenous and exogenous attention and with both attention-field sizes. These findings advance our understanding of the influence of spatial attention on the perception of visual space and help relate the repulsion effect to possible neurophysiological correlates.
Affiliation(s)
| | - David J Heeger
- Department of Psychology, New York University, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
| | - Marisa Carrasco
- Department of Psychology, New York University, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
| |
|
49
|
Li J, Oksama L, Hyönä J. Close coupling between eye movements and serial attentional refreshing during multiple-identity tracking. J Cogn Psychol 2018. [DOI: 10.1080/20445911.2018.1476517] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
Affiliation(s)
- Jie Li
- School of Psychology, Beijing Sport University, Beijing, People’s Republic of China
| | - Lauri Oksama
- Headquarters, National Defence University, Helsinki, Finland
| | - Jukka Hyönä
- Department of Psychology, University of Turku, Turku, Finland
| |
|
50
|
Kothari NB, Wohlgemuth MJ, Moss CF. Dynamic representation of 3D auditory space in the midbrain of the free-flying echolocating bat. eLife 2018; 7:e29053. [PMID: 29633711 PMCID: PMC5896882 DOI: 10.7554/eLife.29053] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2017] [Accepted: 02/27/2018] [Indexed: 11/23/2022] Open
Abstract
Essential to spatial orientation in the natural environment is a dynamic representation of direction and distance to objects. Despite the importance of 3D spatial localization for parsing objects in the environment and guiding movement, most neurophysiological investigations of sensory mapping have been limited to studies of restrained subjects tested with 2D, artificial stimuli. Here, we show for the first time that sensory neurons in the midbrain superior colliculus (SC) of the free-flying echolocating bat encode 3D egocentric space, and that the bat's inspection of objects in the physical environment sharpens the tuning of single neurons and shifts peak responses to represent closer distances. These findings emerged from wireless neural recordings in free-flying bats, combined with an echo model that computes the animal's instantaneous stimulus space. Our research reveals dynamic 3D space coding in a freely moving mammal engaged in a real-world navigation task.
|