1
The Costs of Paying Overt and Covert Attention Assessed With Pupillometry. Psychol Sci 2023; 34:887-898. PMID: 37314425. DOI: 10.1177/09567976231179378.
Abstract
Attention can be shifted with or without an accompanying saccade (i.e., overtly or covertly, respectively). Thus far, it is unknown how cognitively costly these shifts are, yet such quantification is necessary to understand how and when attention is deployed overtly or covertly. In our first experiment (N = 24 adults), we used pupillometry to show that shifting attention overtly is more costly than shifting attention covertly, likely because planning saccades is more complex. We propose that these differential costs will, in part, determine whether attention is shifted overtly or covertly in a given context. A subsequent experiment (N = 24 adults) showed that relatively complex oblique saccades are more costly than relatively simple saccades in horizontal or vertical directions. This provides a possible explanation for the cardinal-direction bias of saccades. The utility of a cost perspective as presented here is vital to furthering our understanding of the multitude of decisions involved in processing and interacting with the external world efficiently.
2
Abstract
Quiet eye (QE) is the final ocular fixation that precedes critical athletic movements and that enables athletes to gather relevant information and organize their subsequent movement. Although little is known about the factors sustaining performance in table tennis, to date there has been no investigation of QE as a contributor to table tennis performance. Furthermore, there is limited research on how factors known to impact performance, such as task complexity and fatigue, influence QE. In a within-subjects experimental design, we manipulated fatigue (high vs. low) and task complexity (high vs. low). Eleven elite table tennis players (Mage = 14.72 years, Mexperience = 7.27 years) underwent each of the four resulting conditions. Athletes made longer QE before hit versus missed shots (p < .001, ηp² = .795), and both QE and performance decreased under fatigue (p = .02, ηp² = .628; p = .002, ηp² = .62), but we did not detect a significant effect of complexity on QE (p = .352, ηp² = .087). This study is among the first to show that QE sustains performance in a dynamic sport, namely table tennis, and that QE is affected by fatigue.
3
A BCI-Based Study on the Relationship Between the SSVEP and Retinal Eccentricity in Overt and Covert Attention. Front Neurosci 2022; 15:746146. PMID: 34970111. PMCID: PMC8712654. DOI: 10.3389/fnins.2021.746146.
Abstract
Covert attention aids us in monitoring the environment and optimizing performance in visual tasks. Past behavioral studies have shown that covert attention can enhance spatial resolution. However, electroencephalography (EEG) activity related to neural processing between central and peripheral vision has not been systematically investigated. Here, we conducted an EEG study with 25 subjects who performed covert attentional tasks at different retinal eccentricities ranging from 0.75° to 13.90°, as well as tasks involving overt attention and no attention. EEG signals were recorded with a single stimulus frequency to evoke steady-state visual evoked potentials (SSVEPs) for attention evaluation. We found that the SSVEP response when fixating the attended location was generally negatively correlated with stimulus eccentricity, whether eccentricity was characterized as Euclidean distance or as horizontal and vertical distance. Moreover, SSVEP characteristics were more pronounced in overt attention than in covert attention. Furthermore, offline classification of overt attention, covert attention, and no attention yielded an average accuracy of 91.42%. This work contributes to our understanding of the SSVEP representation of attention in humans and may also lead to brain-computer interfaces (BCIs) that allow people to communicate with choices simply by shifting their attention to them.
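The SSVEP readout described in this abstract can be illustrated with a minimal sketch: the response to a single flicker frequency is read off the FFT amplitude spectrum at that frequency. All numbers below (sampling rate, 12 Hz flicker, noise level) are invented for the example and are not taken from the study.

```python
import numpy as np

def ssvep_amplitude(signal, fs, stim_freq):
    """Single-sided FFT amplitude of the signal at the stimulus frequency."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n * 2.0  # single-sided scaling
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - stim_freq))        # nearest frequency bin
    return spectrum[idx]

# Synthetic example: a 12 Hz "flicker response" of amplitude 2 buried in noise.
fs = 250.0                                # hypothetical EEG sampling rate (Hz)
t = np.arange(0, 4.0, 1.0 / fs)           # 4 s epoch, so 12 Hz falls on a bin
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * 12.0 * t) + rng.normal(0.0, 1.0, t.size)
amp = ssvep_amplitude(eeg, fs, 12.0)      # recovers roughly the amplitude 2
```

A real pipeline would average over channels and epochs, but the principle of comparing this amplitude across attention conditions is the same.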
4
Overt and covert attention shifts to emotional faces: Combining EEG, eye tracking, and a go/no-go paradigm. Psychophysiology 2021; 58:e13838. PMID: 33983655. DOI: 10.1111/psyp.13838.
Abstract
In everyday life, faces with emotional expressions quickly attract attention and eye movements. To study the neural mechanisms of such emotion-driven attention by means of event-related brain potentials (ERPs), tasks that employ covert shifts of attention are commonly used, in which participants need to inhibit natural eye movements towards stimuli. It remains, however, unclear how shifts of attention to emotional faces with and without eye movements differ from each other. The current preregistered study aimed to investigate neural differences between covert and overt emotion-driven attention. We combined eye tracking with measurements of ERPs to compare shifts of attention to faces with happy, angry, or neutral expressions when eye movements were either executed (go conditions) or withheld (no-go conditions). Happy and angry faces led to larger EPN amplitudes, shorter latencies of the P1 component, and faster saccades, suggesting that emotional expressions significantly affected shifts of attention. Several ERPs (N170, EPN, LPC) were augmented in amplitude when attention was shifted with an eye movement, indicating an enhanced neural processing of faces if eye movements had to be executed together with a reallocation of attention. However, the modulation of ERPs by facial expressions did not differ between the go and no-go conditions, suggesting that emotional content enhances both covert and overt shifts of attention. In summary, our results indicate that overt and covert attention shifts differ but are comparably affected by emotional content.
5
Let Me Make You Happy, and I'll Tell You How You Look Around: Using an Approach-Avoidance Task as an Embodied Emotion Prime in a Free-Viewing Task. Front Psychol 2021; 12:604393. PMID: 33790829. PMCID: PMC8005526. DOI: 10.3389/fpsyg.2021.604393.
Abstract
The embodied approach to human cognition suggests that concepts are deeply dependent upon, and constrained by, characteristics of an agent's physical body, such as the body movements it performs. In this study, we attempted to broaden previous research on emotional priming by investigating the interaction of emotions and visual exploration. We used the joystick-based approach-avoidance task to influence participants' emotional states; subsequently, we presented pictures of news web pages on a computer screen and measured participants' eye movements. After the bodily congruent priming phase, the number of fixations on images increased, the total dwell time increased, and the average saccade length from outside the images toward the images decreased. The combination of these effects suggests increased attention to the web pages' image content after participants performed bodily congruent actions in the priming phase. Thus, congruent bodily interaction with images in the priming phase fosters visual interaction in the subsequent exploration phase.
6
Eye-movement patterns to social and non-social cues in early deaf adults. Q J Exp Psychol (Hove) 2021; 74:1021-1036. PMID: 33586487. DOI: 10.1177/1747021821998511.
Abstract
Previous research on covert orienting to the periphery suggested that early profound deaf adults were less susceptible to uninformative gaze-cues, though were equally or more affected by non-social arrow-cues. The aim of this work was to investigate whether spontaneous eye movement behaviour helps explain the reduced impact of the social cue in deaf adults. We tracked the gaze of 25 early profound deaf and 25 age-matched hearing observers performing a peripheral discrimination task with uninformative central cues (gaze vs arrow), stimulus-onset asynchrony (250 vs 750 ms), and cue validity (valid vs invalid) as within-subject factors. In both groups, the cue effect on reaction time (RT) was comparable for the two cues, although deaf observers responded significantly slower than hearing controls. While deaf and hearing observers' eye movement pattern looked similar when the cue was presented in isolation, deaf participants made significantly more eye movements than hearing controls once the discrimination target appeared. Notably, further analysis of eye movements in the deaf group revealed that independent of the cue type, cue validity affected saccade landing position, while latency was not modulated by these factors. Saccade landing position was also strongly related to the magnitude of the validity effect on RT, such that the greater the difference in saccade landing position between invalid and valid trials, the greater the difference in manual RT between invalid and valid trials. This work suggests that the contribution of overt selection in central cueing of attention is more prominent in deaf adults and helps determine the manual performance, irrespective of the cue type.
7
Task-Irrelevant Visual Forms Facilitate Covert and Overt Spatial Selection. J Neurosci 2020; 40:9496-9506. PMID: 33127854. PMCID: PMC7724129. DOI: 10.1523/jneurosci.1593-20.2020.
Abstract
Covert and overt spatial selection behaviors are guided both by visual saliency maps derived from early visual features and by priority maps reflecting high-level cognitive factors. However, whether mid-level perceptual processes associated with visual form recognition contribute to covert and overt spatial selection behaviors remains unclear. We hypothesized that if peripheral visual forms contribute to spatial selection behaviors, then they should do so even when the visual forms are task-irrelevant. We tested this hypothesis in male and female human subjects as well as in male macaque monkeys performing a visual detection task. In this task, subjects reported the detection of a suprathreshold target spot presented on top of one of two peripheral images, and they did so with either a speeded manual button press (humans) or a speeded saccadic eye movement response (humans and monkeys). Crucially, the two images, one with a visual form and the other with a partially phase-scrambled visual form, were completely irrelevant to the task. In both manual (covert) and oculomotor (overt) response modalities, and in both humans and monkeys, response times were faster when the target was congruent with a visual form than when it was incongruent. Importantly, incongruent targets were associated with almost all errors, suggesting that forms automatically captured selection behaviors. These findings demonstrate that mid-level perceptual processes associated with visual form recognition contribute to covert and overt spatial selection. This indicates that neural circuits associated with target selection, such as the superior colliculus, may have privileged access to visual form information.
Significance Statement: Spatial selection of visual information either with (overt) or without (covert) foveating eye movements is critical to primate behavior. However, it is still not clear whether spatial maps in sensorimotor regions known to guide overt and covert spatial selection are influenced by peripheral visual forms. We probed the ability of humans and monkeys to perform overt and covert target selection in the presence of spatially congruent or incongruent visual forms. Even when completely task-irrelevant, images of visual objects had a dramatic effect on target selection, acting much like spatial cues used in spatial attention tasks. Our results demonstrate that traditional brain circuits for orienting behaviors, such as the superior colliculus, likely have privileged access to visual object representations.
8
Global visual salience of competing stimuli. J Vis 2020; 20:27. PMID: 32720973. PMCID: PMC7424106. DOI: 10.1167/jov.20.7.27.
Abstract
Current computational models of visual salience accurately predict the distribution of fixations on isolated visual stimuli. It is not known, however, whether the global salience of a stimulus, that is, its effectiveness in the competition for attention with other stimuli, is a function of its local salience or an independent measure. Further, do task and familiarity with the competing images influence eye movements? Here, we investigated the direction of the first saccade to characterize and analyze the global visual salience of competing stimuli. Participants freely observed pairs of images while eye movements were recorded. The pairs balanced the combinations of new and already seen images, as well as task and task-free trials. We then trained a logistic regression model that accurately predicted the location (left or right image) of the first fixation for each stimulus pair, also accounting for the influence of task, familiarity, and lateral bias. The coefficients of the model provided a reliable measure of global salience, which we contrasted with two distinct local salience models, GBVS and DeepGaze. The lack of correlation of the behavioral data with the former, and the small correlation with the latter, indicate that global salience cannot be explained by the feature-driven local salience of images. Further, the influence of task and familiarity was rather small, and we reproduced the previously reported left-sided bias. In summary, we showed that natural stimuli have an intrinsic global salience related to the initial human gaze direction, independent of local salience and little influenced by task and familiarity.
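The model form described here, a logistic regression predicting which image of a pair (left or right) receives the first fixation, with an intercept capturing the lateral bias, can be sketched as follows. The predictors, effect sizes, and simulated data below are hypothetical stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pair-level predictors: salience difference (left - right),
# task vs. free-viewing trial, and whether the left image was already seen.
n = 400
X = np.column_stack([
    rng.normal(0, 1, n),      # global-salience difference (illustrative)
    rng.integers(0, 2, n),    # task trial indicator (illustrative)
    rng.integers(0, 2, n),    # familiarity of left image (illustrative)
]).astype(float)

# Simulated ground truth: salience dominates, plus a left-sided bias (0.5).
logits = 2.0 * X[:, 0] + 0.2 * X[:, 1] + 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)  # 1 = left

# Plain gradient-descent logistic regression with a bias column appended.
Xb = np.column_stack([X, np.ones(n)])
w = np.zeros(Xb.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-Xb @ w))
    w -= 0.1 * Xb.T @ (p - y) / n       # gradient of mean log-loss

accuracy = np.mean((1 / (1 + np.exp(-Xb @ w)) > 0.5) == (y > 0.5))
```

The fitted weight on the salience-difference column plays the role of the "global salience" coefficient the abstract describes; the intercept absorbs the left-sided bias.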
9
Dissociating Attention and Eye Movements in a Quantitative Analysis of Attention Allocation. Front Psychol 2017; 8:715. PMID: 28567024. PMCID: PMC5434143. DOI: 10.3389/fpsyg.2017.00715.
Abstract
In a recent paper, we introduced a method and equation for inferring the allocation of attention on a continuous scale. The size of the stimuli, the estimated size of the fovea, and the pattern of results implied that the subjects' responses reflected shifts in covert attention rather than shifts in eye movements. This report describes an experiment that tests this implication. We measured eye movements. The monitor briefly displayed (e.g., 130 ms) two small stimuli (≈1.0° × 1.2°), situated one atop another. When the stimuli were close together, as in the previous study, fixations that supported correct responses at one stimulus also supported correct responses at the other stimulus, as measured over the entire session. Yet, on any particular trial, correct responses were limited to just one stimulus. This pattern suggests that the constraints on responding within a trial were due to limits on cognitive processing, whereas the ability to respond correctly to either stimulus on different trials must have entailed shifts in attention (that were not accompanied by eye movements). In contrast, when the stimuli were far apart, fixations that had a high probability of supporting correct responses at one stimulus had a low probability of supporting correct responses at the other stimulus. Thus, conditions could be arranged so that correct responses depended on eye movements, whereas in the "standard" procedure, correct responses were independent of eye movements. The results dissociate covert and overt attention and support the claim that our procedure measures covert attention.
10
Preferential Processing of Social Features and Their Interplay with Physical Saliency in Complex Naturalistic Scenes. Front Psychol 2017; 8:418. PMID: 28424635. PMCID: PMC5371661. DOI: 10.3389/fpsyg.2017.00418.
Abstract
According to so-called saliency-based attention models, attention during free viewing of visual scenes is particularly allocated to physically salient image regions. In the present study, we assumed that social features in complex naturalistic scenes would be processed preferentially irrespective of their physical saliency. Therefore, we expected worse prediction of gazing behavior by saliency-based attention models when social information is present in the visual field. To test this hypothesis, participants freely viewed color photographs of complex naturalistic social (e.g., including heads, bodies) and non-social (e.g., including landscapes, objects) scenes while their eye movements were recorded. In agreement with our hypothesis, we found that social features (especially heads) were heavily prioritized during visual exploration. Correspondingly, the presence of social information weakened the influence of low-level saliency on gazing behavior. Importantly, this pattern was most pronounced for the earliest fixations indicating automatic attentional processes. These findings were further corroborated by a linear mixed model approach showing that social features (especially heads) add substantially to the prediction of fixations beyond physical saliency. Taken together, the current study indicates gazing behavior for naturalistic scenes to be better predicted by the interplay of social and physically salient features than by low-level saliency alone. These findings strongly challenge the generalizability of saliency-based attention models and demonstrate the importance of considering social influences when investigating the driving factors of human visual attention.
11
How is visual salience computed in the brain? Insights from behaviour, neurobiology and modelling. Philos Trans R Soc Lond B Biol Sci 2017; 372:20160113. PMID: 28044023. PMCID: PMC5206280. DOI: 10.1098/rstb.2016.0113.
Abstract
Inherent in visual scene analysis is a bottleneck associated with the need to sequentially sample locations with foveating eye movements. The concept of a 'saliency map' topographically encoding stimulus conspicuity over the visual scene has proven to be an efficient predictor of eye movements. Our work reviews insights into the neurobiological implementation of visual salience computation. We start by summarizing the role that different visual brain areas play in salience computation, whether at the level of feature analysis for bottom-up salience or at the level of goal-directed priority maps for output behaviour. We then delve into how a subcortical structure, the superior colliculus (SC), participates in salience computation. The SC represents a visual saliency map via a centre-surround inhibition mechanism in the superficial layers, which feeds into priority selection mechanisms in the deeper layers, thereby affecting saccadic and microsaccadic eye movements. Lateral interactions in the local SC circuit are particularly important for controlling active populations of neurons. This, in turn, might help explain long-range effects, such as those of peripheral cues on tiny microsaccades. Finally, we show how a combination of in vitro neurophysiology and large-scale computational modelling is able to clarify how salience computation is implemented in the local circuit of the SC. This article is part of the themed issue 'Auditory and visual scene analysis'.
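The centre-surround inhibition mechanism attributed to the superficial SC layers is commonly modelled as a difference-of-Gaussians: an excitatory centre minus a broader inhibitory surround, rectified at zero. A minimal sketch (the sigmas and the toy image below are arbitrary choices for illustration, not parameters from the review):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround_salience(image, center_sigma=1.0, surround_sigma=5.0):
    """Difference-of-Gaussians salience: narrow centre minus wide surround,
    half-wave rectified so only locally conspicuous regions survive."""
    center = gaussian_filter(image, center_sigma)
    surround = gaussian_filter(image, surround_sigma)
    return np.clip(center - surround, 0.0, None)

# A single bright spot on a dark field should produce peak salience at the spot.
img = np.zeros((64, 64))
img[20, 40] = 1.0
sal = center_surround_salience(img)
peak = np.unravel_index(np.argmax(sal), sal.shape)   # (20, 40)
```

Full saliency-map models repeat this operation across feature channels (luminance, colour, orientation) and spatial scales before combining the maps; the single-channel version above shows only the core operation.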
12
Abstract
Mechanisms underlying attentional biases towards threat (ABTs), such as attentional avoidance and difficulty of disengagement, are still unclear. To address this issue, we recorded participants' eye movements during a dot detection task in which threatening or neutral stimuli served as peripheral cues. We evaluated response times (RTs) in trials where participants looked at the central fixation cross (not at the cues), as required, and the number and duration of (unwanted) fixations towards threatening or neutral cues; in all analyses, trait anxiety was treated as a covariate. Difficulty in attentional disengagement (longer RTs) was found when peripheral threatening stimuli were presented for 100 ms. Moreover, at longer presentation times we observed significantly shorter (unwanted) fixations on threatening than on neutral peripheral stimuli, compatible with an avoidance bias. These findings demonstrate that, independent of trait anxiety levels, the disengagement bias occurs without eye movements, whereas eye movements are involved in threat avoidance.
13
Cost-sensitive Bayesian control policy in human active sensing. Front Hum Neurosci 2014; 8:955. PMID: 25520640. PMCID: PMC4253738. DOI: 10.3389/fnhum.2014.00955.
Abstract
An important but poorly understood aspect of sensory processing is the role of active sensing, the use of self-motion such as eye or head movements to focus sensing resources on the most rewarding or informative aspects of the sensory environment. Here, we present behavioral data from a visual search experiment, as well as a Bayesian model of within-trial dynamics of sensory processing and eye movements. Within this Bayes-optimal inference and control framework, which we call C-DAC (Context-Dependent Active Controller), various types of behavioral costs, such as temporal delay, response error, and sensor repositioning cost, are explicitly minimized. This contrasts with previously proposed algorithms that optimize abstract statistical objectives such as anticipated information gain (Infomax) (Butko and Movellan, 2010) and expected posterior maximum (greedy MAP) (Najemnik and Geisler, 2005). We find that C-DAC captures human visual search dynamics better than previous models, in particular a certain form of "confirmation bias" apparent in the way human subjects utilize prior knowledge about the spatial distribution of the search target to improve search speed and accuracy. We also examine several computationally efficient approximations to C-DAC that may present biologically more plausible accounts of the neural computations underlying active sensing, as well as practical tools for solving active sensing problems in engineering applications. To summarize, this paper makes the following key contributions: human visual search behavioral data, a context-sensitive Bayesian active sensing model, a comparative study between different models of human active sensing, and a family of efficient approximations to the optimal model.
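The kind of cost-sensitive policy that C-DAC formalizes can be caricatured in a few lines: a Bayesian belief over the target's location is updated from noisy samples, fixating carries a time cost, refixating carries a switch cost, and sampling stops once the expected error cost of committing falls below the cost of one more observation. The two-location setup, the myopic stopping rule, and all cost values below are simplifications for illustration; the actual C-DAC model solves a full dynamic program.

```python
import numpy as np

rng = np.random.default_rng(2)

def run_trial(true_loc, p_signal=0.7, c_time=0.01, c_error=1.0, c_switch=0.05):
    """Myopic sketch of a cost-sensitive active-sensing policy, two locations.
    Returns (guessed location, total cost incurred)."""
    belief = np.array([0.5, 0.5])   # P(target at location 0, 1)
    look = 0                        # current fixation
    total_cost = 0.0
    for _ in range(200):
        # Stop when expected error cost of committing < cost of one more sample.
        if c_error * (1 - belief.max()) < c_time:
            break
        # Fixate the currently more probable location, paying a switch cost.
        target_look = int(np.argmax(belief))
        if target_look != look:
            total_cost += c_switch
            look = target_look
        # Noisy sample: "signal" is more likely when fixating the true location.
        p_obs = p_signal if look == true_loc else 1 - p_signal
        obs = rng.random() < p_obs
        # Bayes update: likelihood of this observation under each hypothesis.
        like = np.where(np.arange(2) == look,
                        p_signal if obs else 1 - p_signal,
                        (1 - p_signal) if obs else p_signal)
        belief = belief * like
        belief /= belief.sum()
        total_cost += c_time
    return int(np.argmax(belief)), total_cost

guess, cost = run_trial(true_loc=1)
```

Varying the relative costs changes the policy's speed-accuracy-effort trade-off, which is the lever C-DAC uses to fit human search dynamics.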
14
Complementary effects of gaze direction and early saliency in guiding fixations during free viewing. J Vis 2014; 14:3. PMID: 25371549. DOI: 10.1167/14.13.3.
Abstract
Gaze direction provides an important and ubiquitous communication channel in daily behavior and social interaction of humans and some animals. While several studies have addressed gaze direction in synthesized simple scenes, few have examined how it can bias observer attention and how it might interact with early saliency during free viewing of natural and realistic scenes. Experiment 1 used a controlled, staged setting in which an actor was asked to look at two different objects in turn, yielding two images that differed only by the actor's gaze direction, to causally assess the effects of actor gaze direction. Over all scenes, the median probability of following an actor's gaze direction was higher than the median probability of looking toward the single most salient location, and higher than chance. Experiment 2 confirmed these findings over a larger set of unconstrained scenes collected from the Web and containing people looking at objects and/or other people. To further compare the strength of saliency versus gaze direction cues, we computed gaze maps by drawing a cone in the direction of gaze of the actors present in the images. Gaze maps predicted observers' fixation locations significantly above chance, although below saliency. Finally, to gauge the relative importance of actor face and eye directions in guiding observer's fixations, in Experiment 3, observers were asked to guess the gaze direction from only an actor's face region (with the rest of the scene masked), in two conditions: actor eyes visible or masked. Median probability of guessing the true gaze direction within ±9° was significantly higher when eyes were visible, suggesting that the eyes contribute significantly to gaze estimation, in addition to the face region. Our results highlight that gaze direction is a strong attentional cue in guiding eye movements, complementing low-level saliency cues, and derived from both face and eyes of actors in the scene. Thus gaze direction should be considered in constructing more predictive visual attention models in the future.
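A cone-shaped gaze map of the kind described here can be sketched as a binary mask opening from the actor's eye position along the estimated gaze direction. The cone half-angle, image size, and coordinate conventions below are illustrative choices, not the paper's parameters.

```python
import numpy as np

def gaze_cone_map(shape, origin, direction_deg, half_angle_deg=9.0):
    """Binary gaze map: pixels lying within a cone that opens from `origin`
    (row, col) along `direction_deg` (0° = rightward, measured from +x)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    dy, dx = ys - origin[0], xs - origin[1]
    angles = np.degrees(np.arctan2(dy, dx))
    # Wrap angular difference into [-180, 180) before thresholding.
    diff = (angles - direction_deg + 180.0) % 360.0 - 180.0
    inside = (np.abs(diff) <= half_angle_deg) & ((dy != 0) | (dx != 0))
    return inside.astype(float)

# Actor at image centre, gazing rightward: the mask covers a narrow wedge.
gm = gaze_cone_map((100, 100), origin=(50, 50), direction_deg=0.0)
```

In an evaluation like the paper's, such a map would be compared against observers' fixation locations (e.g., via ROC analysis) in the same way a saliency map is.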
15
Spatial ranking strategy and enhanced peripheral vision discrimination optimize performance and efficiency of visual sequential search. Eur J Neurosci 2014; 40:2833-41. PMID: 24893753. DOI: 10.1111/ejn.12639.
Abstract
Visual sequential search might use a peripheral spatial ranking of the scene to put the next target of the sequence in the correct order. This strategy, indeed, might enhance the discriminative capacity of the human peripheral vision and spare neural resources associated with foveation. However, it is not known how exactly the peripheral vision sustains sequential search and whether the sparing of neural resources has a cost in terms of performance. To elucidate these issues, we compared strategy and performance during an alpha-numeric sequential task where peripheral vision was modulated in three different conditions: normal, blurred, or obscured. If spatial ranking is applied to increase the peripheral discrimination, its use as a strategy in visual sequencing should differ according to the degree of discriminative information that can be obtained from the periphery. Moreover, if this strategy spares neural resources without impairing the performance, its use should be associated with better performance. We found that spatial ranking was applied when peripheral vision was fully available, reducing the number and time of explorative fixations. When the periphery was obscured, explorative fixations were numerous and sparse; when the periphery was blurred, explorative fixations were longer and often located close to the items. Performance was significantly improved by this strategy. Our results demonstrated that spatial ranking is an efficient strategy adopted by the brain in visual sequencing to highlight peripheral detection and discrimination; it reduces the neural cost by avoiding unnecessary foveations, and promotes sequential search by facilitating the onset of a new saccade.
16
The impact of attentional, linguistic, and visual features during object naming. Front Psychol 2013; 4:927. PMID: 24379792. PMCID: PMC3861867. DOI: 10.3389/fpsyg.2013.00927.
Abstract
Object detection and identification are fundamental to human vision, and there is mounting evidence that objects guide the allocation of visual attention. However, the role of objects in tasks involving multiple modalities is less clear. To address this question, we investigate object naming, a task in which participants have to verbally identify objects they see in photorealistic scenes. We report an eye-tracking study that investigates which features (attentional, visual, and linguistic) influence object naming. We find that the amount of visual attention directed toward an object, its position and saliency, along with linguistic factors such as word frequency, animacy, and semantic proximity, significantly influence whether the object will be named or not. We then ask how features from different modalities are combined during naming, and find significant interactions between saliency and position, saliency and linguistic features, and attention and position. We conclude that when the cognitive system performs tasks such as object naming, it uses input from one modality to constrain or enhance the processing of other modalities, rather than processing each input modality independently.
17
Social orienting of children with autism to facial expressions and speech: a study with a wearable eye-tracker in naturalistic settings. Front Psychol 2013; 4:840. PMID: 24312064. PMCID: PMC3834245. DOI: 10.3389/fpsyg.2013.00840.
Abstract
This study investigates attention orienting to social stimuli in children with Autism Spectrum Conditions (ASC) during dyadic social interactions taking place in real-life settings. We study the effect of social cues that differ in complexity and distinguish between social cues produced by facial expressions of emotion and those produced during speech. We record the children's gazes using a head-mounted eye-tracking device and report a detailed, quantitative analysis of the motion of the gaze in response to the social cues. The study encompasses a group of children with ASC from 2 to 11 years old (n = 14) and a group of typically developing (TD) children between 3 and 6 years old (n = 17). While both groups orient overtly to facial expressions, children with ASC do so to a lesser extent. Children with ASC differ importantly from TD children in the way they respond to speech cues, displaying little overt shifting of attention to speaking faces. When children with ASC orient to facial expressions, they show reaction times and first-fixation lengths similar to those of TD children. However, children with ASC orient to speaking faces more slowly than TD children. These results support the hypothesis that individuals affected by ASC have difficulties processing complex social sounds and detecting intermodal correspondence between facial and vocal information. They also corroborate evidence that people with ASC show reduced overt attention toward social stimuli.
18
Developmental Changes in Natural Viewing Behavior: Bottom-Up and Top-Down Differences between Children, Young Adults and Older Adults. Front Psychol 2010; 1:207. PMID: 21833263. PMCID: PMC3153813. DOI: 10.3389/fpsyg.2010.00207.
Abstract
Despite the growing interest in fixation selection under natural conditions, there is a major gap in the literature concerning its developmental aspects. Early in life, bottom-up processes, such as viewing guided by local image features (color, luminance contrast, etc.), might be prominent but later overshadowed by more top-down processing. Moreover, with the decline in visual functioning in old age, bottom-up processing is known to suffer. Here we recorded eye movements of 7- to 9-year-old children, 19- to 27-year-old adults, and older adults above 72 years of age while they viewed natural, complex images before performing a patch-recognition task. Task performance displayed the classical inverted U-shape, with young adults outperforming the other age groups. The power of local feature values to discriminate fixated from non-fixated locations dropped with age. Whereas children displayed the highest feature values at fixated points, suggesting a bottom-up mechanism, older adults' viewing behavior was less feature-dependent, reminiscent of a top-down strategy. Importantly, we observed a double dissociation between children and older adults regarding the effects of active viewing on feature-related viewing: explorativeness correlated negatively with feature-related viewing in young age, and positively in older adults. The results indicate that, with age, bottom-up fixation selection loses strength and/or the role of top-down processes becomes more important. Older adults who increase their feature-related viewing by being more explorative make use of this low-level information and perform better in the task. The present study thus reveals an important developmental change in natural and task-guided viewing.