1. Cao L. A spatial-attentional mechanism underlies action-related distortions of time judgment. eLife 2024;12:e91825. PMID: 38334366; PMCID: PMC10942542; DOI: 10.7554/elife.91825
Abstract
Temporal binding has been understood as an illusion in timing judgment. When an action triggers an outcome (e.g. a sound) after a brief delay, the action is reported to occur later than if the outcome does not occur, and the outcome is reported to occur earlier than a similar outcome not caused by an action. We show here that an attention mechanism underlies the seeming illusion of timing judgment. In one method, participants watch a rotating clock hand and report event times by noting the clock hand position when the event occurs. We find that visual spatial attention is critically involved in shaping event time reports made in this way. This occurs because action and outcome events result in shifts of attention around the clock rim, thereby biasing the perceived location of the clock hand. Using a probe detection task to measure attention, we show a difference in the distribution of visual spatial attention between a single-event condition (sound only or action only) and a two-event agency condition (action plus sound). Participants accordingly report the timing of the same event (the sound or the action) differently in the two conditions: spatial attentional shifts masquerading as temporal binding. Furthermore, computational modeling based on the attention measure can reproduce the temporal binding effect. Studies that use time judgment as an implicit marker of voluntary agency should first discount the artefactual changes in event timing reports that actually reflect differences in spatial attention. The study also has important implications for related results in mental chronometry obtained with the clock-like method since Wundt, as attention may well be a critical confounding factor in the interpretation of these studies.
Affiliation(s)
- Liyu Cao
- Department of Psychology and Behavioural Sciences, Zhejiang University, Hangzhou, China
- The State Key Lab of Brain-Machine Intelligence, Zhejiang University, Hangzhou, China
2. Hong F, Badde S, Landy MS. Repeated exposure to either consistently spatiotemporally congruent or consistently incongruent audiovisual stimuli modulates the audiovisual common-cause prior. Sci Rep 2022;12:15532. PMID: 36109544; PMCID: PMC9478143; DOI: 10.1038/s41598-022-19041-7
Abstract
To estimate an environmental property such as object location from multiple sensory signals, the brain must infer their causal relationship. Only information originating from the same source should be integrated. This inference relies on the characteristics of the measurements, the information the sensory modalities provide on a given trial, as well as on a cross-modal common-cause prior: accumulated knowledge about the probability that cross-modal measurements originate from the same source. We examined the plasticity of this cross-modal common-cause prior. In a learning phase, participants were exposed to a series of audiovisual stimuli that were either consistently spatiotemporally congruent or consistently incongruent; participants' audiovisual spatial integration was measured before and after this exposure. We fitted several Bayesian causal-inference models to the data; the models differed in the plasticity of the common-cause prior. Model comparison revealed that, for the majority of the participants, the common-cause prior changed during the learning phase. Our findings reveal that short periods of exposure to audiovisual stimuli with a consistent causal relationship can modify the common-cause prior. In accordance with previous studies, both exposure conditions could either strengthen or weaken the common-cause prior at the participant level. Simulations imply that the direction of the prior update might be mediated by the degree of sensory noise (the variability across trials of measurements of the same signal) during the learning phase.
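The Bayesian causal-inference framework referred to here has a standard closed form (after Körding et al., 2007): the posterior probability of a common cause compares the likelihood of a measurement pair under one shared source versus two independent sources. The sketch below is the generic model with Gaussian likelihoods and a zero-centered spatial prior, not necessarily the exact variant fitted in this study; all parameter names are illustrative.

```python
import numpy as np

def posterior_common_cause(x_v, x_a, sigma_v, sigma_a, sigma_p, p_common):
    """Posterior probability that visual (x_v) and auditory (x_a)
    measurements share a common cause, given sensory noise sigma_v,
    sigma_a, spatial prior width sigma_p, and common-cause prior p_common."""
    var_v, var_a, var_p = sigma_v**2, sigma_a**2, sigma_p**2
    # Likelihood of the pair under a single source (source location
    # integrated out analytically).
    var_c = var_v * var_a + var_v * var_p + var_a * var_p
    like_c = np.exp(-0.5 * ((x_v - x_a)**2 * var_p
                            + x_v**2 * var_a + x_a**2 * var_v) / var_c) \
             / (2 * np.pi * np.sqrt(var_c))
    # Likelihood under two independent sources.
    var_i = (var_v + var_p) * (var_a + var_p)
    like_i = np.exp(-0.5 * (x_v**2 / (var_v + var_p)
                            + x_a**2 / (var_a + var_p))) \
             / (2 * np.pi * np.sqrt(var_i))
    # Bayes' rule over the two causal structures.
    return like_c * p_common / (like_c * p_common + like_i * (1 - p_common))
```

Learning in the sense studied here would correspond to updating `p_common` across exposure trials; coincident measurements push the posterior toward a common cause, discrepant ones push it away.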
3. Invalidly cued targets are well localized when detected. Atten Percept Psychophys 2019;81:1757-1766. DOI: 10.3758/s13414-019-01793-6
4.
Abstract
Visual cognition in our 3D world requires understanding how we accurately localize objects in 2D and depth, and what influence both types of location information have on visual processing. Spatial location is known to play a special role in visual processing, but most of these findings have focused on the special role of 2D location. One such phenomenon is the spatial congruency bias (Golomb, Kupitz, & Thiemann, 2014), where 2D location biases judgments of object features but features do not bias location judgments. This paradigm has recently been used to compare different types of location information in terms of how much they bias different types of features. Here we used this paradigm to ask a related question: whether 2D and depth-from-disparity location bias localization judgments for each other. We found that presenting two objects in the same 2D location biased position-in-depth judgments, but presenting two objects at the same depth (disparity) did not bias 2D location judgments. We conclude that an object's 2D location may be automatically incorporated into perception of its depth location, but not vice versa, which is consistent with a fundamentally special role for 2D location in visual processing.
Affiliation(s)
- Nonie J. Finlayson
- Department of Psychology, Center for Cognitive & Brain Sciences, The Ohio State University, Columbus, OH 43210, USA

5. Changes in the distribution of sustained attention alter the perceived structure of visual space. Vision Res 2016;131:26-36. PMID: 28025055; DOI: 10.1016/j.visres.2016.12.002
Abstract
Visual spatial attention is a critical process that allows for the selection and enhanced processing of relevant objects and locations. While studies have shown attentional modulations of perceived location and the representation of distance information across multiple objects, there remains disagreement regarding what influence spatial attention has on the underlying structure of visual space. The present study utilized a method of magnitude estimation in which participants judged the location of briefly presented targets within the boundaries of their individual visual fields in the absence of any other objects or boundaries. Spatial uncertainty of target locations was used to assess perceived locations across distributed and focused attention conditions without the use of external stimuli, such as visual cues. Across two experiments we tested locations along the cardinal and 45° oblique axes. We demonstrate that focusing attention within a region of space can expand the perceived size of visual space, even in cases where doing so makes performance less accurate. Moreover, the results of the present studies show that when fixation is actively maintained, focusing attention along a visual axis leads to an asymmetrical stretching of visual space that is predominantly focused across the central half of the visual field, consistent with an expansive gradient along the focus of voluntary attention. These results demonstrate that focusing sustained attention peripherally during active fixation leads to an asymmetrical expansion of visual space within the central visual field.
6. Wang Y, Ali Z, Subramani S, Biswas S, Fenerty C, Henson DB, Aslam T. Normal Threshold Size of Stimuli in Children Using a Game-Based Visual Field Test. Ophthalmol Ther 2016;6:115-122. PMID: 27885592; PMCID: PMC5449290; DOI: 10.1007/s40123-016-0071-5
Abstract
INTRODUCTION: The aim of this study was to demonstrate and explore the ability of novel game-based perimetry to establish normal visual field thresholds in children.
METHODS: One hundred and eighteen children (aged 8.0 ± 2.8 years) with no history of visual field loss or significant medical history were recruited. Each child had one eye tested using the game-based visual field test 'Caspar's Castle' at four retinal locations 12.7° from fixation (N = 118). Thresholds were established repeatedly using up/down staircase algorithms with stimuli of varying diameter (luminance 20 cd/m2, duration 200 ms, background luminance 10 cd/m2). Relationships between threshold and age were determined along with measures of intra- and intersubject variability.
RESULTS: The game-based visual field test was able to establish threshold estimates in the full range of children tested. Threshold size reduced with increasing age. Intrasubject and intersubject variability were inversely related to age.
CONCLUSIONS: Normal visual field thresholds were established for specific locations in children using a novel game-based visual field test. These could be used as a foundation for developing a game-based perimetry screening test for children.
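The up/down staircase procedure mentioned above can be sketched as a simple 1-up/1-down rule: shrink the stimulus after each "seen" response, grow it after each "missed" one, and average the sizes at which the response direction reverses. This is a generic illustration with a deterministic simulated observer, not the study's actual algorithm; the starting size, step, and reversal count are assumptions.

```python
def staircase_threshold(true_threshold, start=2.0, step=0.1, n_reversals=8):
    """1-up/1-down staircase: decrease stimulus size after a 'seen'
    response, increase it after a 'missed' one; the threshold estimate
    is the mean of the sizes at which the response direction reversed."""
    size = start
    direction = -1                    # start by shrinking the stimulus
    reversals = []
    while len(reversals) < n_reversals:
        seen = size >= true_threshold       # simulated ideal observer
        new_direction = -1 if seen else 1
        if new_direction != direction:      # response flipped: a reversal
            reversals.append(size)
            direction = new_direction
        size = max(step, size + direction * step)
    return sum(reversals) / len(reversals)
```

With a deterministic observer the staircase settles into an oscillation bracketing the true threshold, so the reversal average lands close to it; with real (noisy) responses, more reversals are typically collected before averaging.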
Affiliation(s)
- Yanfang Wang
- Manchester Royal Eye Hospital, CMFT, Manchester Academic Health Sciences Centre, Oxford Road, Manchester, UK
- Division of Pharmacy and Optometry, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Zaria Ali
- Manchester Royal Eye Hospital, CMFT, Manchester Academic Health Sciences Centre, Oxford Road, Manchester, UK
- Siddharth Subramani
- Manchester Royal Eye Hospital, CMFT, Manchester Academic Health Sciences Centre, Oxford Road, Manchester, UK
- Susmito Biswas
- Manchester Royal Eye Hospital, CMFT, Manchester Academic Health Sciences Centre, Oxford Road, Manchester, UK
- Division of Evolution and Genomic Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Cecilia Fenerty
- Manchester Royal Eye Hospital, CMFT, Manchester Academic Health Sciences Centre, Oxford Road, Manchester, UK
- Division of Pharmacy and Optometry, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- David B Henson
- Manchester Royal Eye Hospital, CMFT, Manchester Academic Health Sciences Centre, Oxford Road, Manchester, UK
- Division of Pharmacy and Optometry, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Tariq Aslam
- Manchester Royal Eye Hospital, CMFT, Manchester Academic Health Sciences Centre, Oxford Road, Manchester, UK
- Division of Pharmacy and Optometry, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
- Heriot Watt University, Edinburgh, UK
7. Odegaard B, Wozny DR, Shams L. Biases in Visual, Auditory, and Audiovisual Perception of Space. PLoS Comput Biol 2015;11:e1004649. PMID: 26646312; PMCID: PMC4672909; DOI: 10.1371/journal.pcbi.1004649
Abstract
Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only improves the precision of perceptual estimates, but also the accuracy.
Affiliation(s)
- Brian Odegaard
- Department of Psychology, University of California, Los Angeles, Los Angeles, California, United States of America
- David R. Wozny
- Department of Psychology, University of California, Los Angeles, Los Angeles, California, United States of America
- Ladan Shams
- Department of Psychology, University of California, Los Angeles, Los Angeles, California, United States of America
- Department of BioEngineering, University of California, Los Angeles, Los Angeles, California, United States of America
- Neuroscience Interdepartmental Program, University of California, Los Angeles, Los Angeles, California, United States of America

8.
Abstract
The angular declination of a target with respect to eye level is known to be an important cue to egocentric distance when objects are viewed or can be assumed to be resting on the ground. When targets are fixated, angular declination and the direction of the gaze with respect to eye level have the same objective value. However, any situation that limits the time available to shift gaze could leave to-be-localized objects outside the fovea, and, in these cases, the objective values would differ. Nevertheless, angular declination and gaze declination are often conflated, and the role for retinal eccentricity in egocentric distance judgments is unknown. We report two experiments demonstrating that gaze declination is sufficient to support judgments of distance, even when extraretinal signals are all that are provided by the stimulus and task environment. Additional experiments showed no accuracy costs for extrafoveally viewed targets and no systematic impact of foveal or peripheral biases, although a drop in precision was observed for the most retinally eccentric targets. The results demonstrate the remarkable utility of target direction, relative to eye level, for judging distance (signaled by angular declination and/or gaze declination) and are consonant with the idea that detection of the target is sufficient to capitalize on the angular declination of floor-level targets (regardless of the direction of gaze).
9.
Abstract
Tachistoscopic presentation of scenes has been valuable for studying the emerging properties of visual scene representations. The spatial aspects of this work have generally been focused on the conceptual locations (e.g., next to the refrigerator) and directional locations of objects in 2-D arrays and/or images. Less is known about how the perceived egocentric distance of objects develops. Here we describe a novel system for presenting brief glimpses of a real-world environment, followed by a mask. The system includes projectors with mechanical shutters for projecting the fixation and masking images, a set of LED floodlights for illuminating the environment, and computer-controlled electronics to set the timing and initiate the process. Because a real environment is used, most visual distance and depth cues can be manipulated using traditional methods. The system is inexpensive, robust, and its components are readily available in the marketplace. This article describes the system and the timing characteristics of each component. We verified the system's ability to control exposure durations as short as a few milliseconds.
10. Gajewski DA, Philbeck JW, Wirtz PW, Chichka D. Angular declination and the dynamic perception of egocentric distance. J Exp Psychol Hum Percept Perform 2014;40:361-377. PMID: 24099588; PMCID: PMC4140626; DOI: 10.1037/a0034394
Abstract
The extraction of the distance between an object and an observer is fast when angular declination is informative, as it is with targets placed on the ground. To what extent does angular declination drive performance when viewing time is limited? Participants judged target distances in a real-world environment with viewing durations ranging from 36 to 220 ms. An important role for angular declination was supported by experiments showing that the cue provides information about egocentric distance even on the very first glimpse, and that it supports a sensitive response to distance in the absence of other useful cues. Performance was better at 220-ms viewing durations than for briefer glimpses, suggesting that the perception of distance is dynamic even within the time frame of a typical eye fixation. Critically, performance in limited viewing trials was better when preceded by a 15-s preview of the room without a designated target. The results indicate that the perception of distance is powerfully shaped by memory from prior visual experience with the scene. A theoretical framework for the dynamic perception of distance is presented.
Affiliation(s)
- Philip W. Wirtz
- Department of Psychology, The George Washington University
- Department of Decision Sciences, The George Washington University
- David Chichka
- Department of Mechanical and Aerospace Engineering, The George Washington University

11. Fortenbaugh FC, Sanghvi S, Silver MA, Robertson LC. Exploring the edges of visual space: the influence of visual boundaries on peripheral localization. J Vis 2012;12(2):19. PMID: 22353778; DOI: 10.1167/12.2.19
Abstract
Previous studies of localization of stationary targets in the peripheral visual field have found either underestimations (foveal biases) or overestimations (peripheral biases) of target eccentricity. In the present study, we help resolve this inconsistency by demonstrating the influence of visual boundaries on the type of localization bias. Using a Goldmann perimeter (an illuminated half-dome), we presented targets at different eccentricities across the visual field and asked participants to judge the target locations. In Experiments 1 and 2, participants reported target locations relative to their perceived visual field extent using either a manual or verbal response, with both response types producing a peripheral bias. This peripheral localization bias was a non-linear scaling of perceived location when the visual field was not bounded by external borders induced by facial features (i.e., the nose and brow), but location scaling was linear when visual boundaries were present. Experiment 3 added an external border (an aperture edge placed in the Goldmann perimeter) that resulted in a foveal bias and linear scaling. Our results show that boundaries that define a spatial region within the visual field determine both the direction of bias in localization errors for stationary objects and the scaling function of perceived location across visual space.
12.
Abstract
Shifts of attention due to rapid cue onsets have been shown to distort the perceived location of objects, but are there also systematic distortions in the perceived shapes of the objects themselves from such shifts? The present study demonstrates that there are. In three experiments, oval contours were presented that varied in width and height. Two brief, bright white dots were presented as cues and were positioned horizontally or vertically either inside or outside the oval contour. Observers had to judge whether the oval was taller than wide. The results show that the perceived shape of an oval was changed by visual cues such that the oval contours were repelled by the cues (Exp. 1). This effect only occurred when the cues preceded the ovals, providing sufficient time between the presentations to attract involuntary attention (Exp. 2). Moreover, an explanation based on figural aftereffects was ruled out (Exp. 3).
13. When here becomes there: attentional distribution modulates foveal bias in peripheral localization. Atten Percept Psychophys 2011;73:809-828. PMID: 21264747; PMCID: PMC3063879; DOI: 10.3758/s13414-010-0075-5
Abstract
Much research concerning attention has focused on changes in the perceptual qualities of objects while attentional states were varied. Here, we address a complementary question—namely, how perceived location can be altered by the distribution of sustained attention over the visual field. We also present a new way to assess the effects of distributing spatial attention across the visual field. We measured magnitude judgments relative to an aperture edge to test perceived location across a large range of eccentricities (30°), and manipulated spatial uncertainty in target locations to examine perceived location under three different distributions of spatial attention. Across three experiments, the results showed that changing the distribution of sustained attention significantly alters known foveal biases in peripheral localization.
14. Blanke M, Harsch L, Knöll J, Bremmer F. Spatial perception during pursuit initiation. Vision Res 2010;50:2714-2720. PMID: 20826177; DOI: 10.1016/j.visres.2010.08.037
Abstract
Spatial perception is modulated by eye movements. During smooth pursuit, perceived locations are shifted in the direction of the eye movement. During active fixation, visual space is perceptually compressed towards the fovea. In the present study, we sought to determine the time course of spatial localization during pursuit initiation, i.e. the transition period from fixation to steady-state pursuit. Human observers had to localize briefly flashed targets around the time of pursuit initiation. Our data clearly show that pursuit-like mislocalization starts well before the onset of the eye movement. Our results point towards corollary discharge as the neural source of the observed perceptual effect.
Affiliation(s)
- Marius Blanke
- Department of Neurophysics, Philipps-Universität Marburg, D-35043 Marburg, Germany

15. Salvano-Pardieu V, Wink B, Taliercio A, Fontaine R, Manktelow KI, Ehrenstein WH. Edge-induced illusory contours and visual detection: Subthreshold summation or spatial cueing? Visual Cognition 2010. DOI: 10.1080/13506280902949312
16. Roberts MJ, Thiele A. Attention and contrast differently affect contextual integration in an orientation discrimination task. Exp Brain Res 2008;187:535-549. PMID: 18305931; PMCID: PMC2671221; DOI: 10.1007/s00221-008-1322-z
Abstract
Attention is often regarded as a mechanism by which attended objects become perceptually more salient, akin to increasing their contrast. We demonstrate that attention is better described as a mechanism by which task-relevant information impacts ongoing processing while task-irrelevant information is excluded. We asked subjects to judge the orientation of a target relative to a reference, in single- and dual-task settings. The target orientation percept was systematically influenced by the presentation of prior spatio-temporal context. We found that the sign of the context influence depended on target contrast, but its strength depended on the level of attention devoted to the task. Thus the effects of attention and contrast were fundamentally different: contrast influenced the sign of contextual interactions, while attention suppressed these interactions irrespective of their sign.
Affiliation(s)
- M. J. Roberts
- Department of Psychology, Institute of Neuroscience, Henry Wellcome Building for Neuroecology, University of Newcastle upon Tyne, Newcastle upon Tyne NE2 4HH, UK
- A. Thiele
- Department of Psychology, Institute of Neuroscience, Henry Wellcome Building for Neuroecology, University of Newcastle upon Tyne, Newcastle upon Tyne NE2 4HH, UK