1
Weng G, Akbarian A, Clark K, Noudoost B, Nategh N. Neural correlates of perisaccadic visual mislocalization in extrastriate cortex. Nat Commun 2024; 15:6335. [PMID: 39068199] [PMCID: PMC11283495] [DOI: 10.1038/s41467-024-50545-0]
Abstract
When interacting with the visual world using saccadic eye movements (saccades), the perceived location of visual stimuli becomes biased, a phenomenon called perisaccadic mislocalization. However, the neural mechanism underlying this altered visuospatial perception and its potential link to other perisaccadic perceptual phenomena have not been established. Using electrophysiological recordings of extrastriate areas in four male macaque monkeys, combined with a computational model, we quantified spatial bias around the saccade target (ST) based on the perisaccadic dynamics of extrastriate spatiotemporal sensitivity captured by a statistical model. This approach predicted the perisaccadic spatial bias around the ST, consistent with behavioral data, and revealed the precise neuronal response components underlying representational bias. These findings also establish the crucial role of increased sensitivity near the ST, for neurons with receptive fields far from the ST, in driving the ST spatial bias. Moreover, we showed that, by allocating more resources to visual target representation, visual areas enhance their representation of the ST location, even at the expense of transient distortions in spatial representation. This potential neural basis for perisaccadic ST representation also supports a general role for extrastriate neurons in creating the perception of stimulus location.
Affiliation(s)
- Geyu Weng
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, USA
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
- Amir Akbarian
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
- Kelsey Clark
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
- Behrad Noudoost
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
- Neda Nategh
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
- Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT, USA
2
Weng G, Akbarian A, Clark K, Noudoost B, Nategh N. Neural correlates of perisaccadic visual mislocalization in extrastriate cortex. bioRxiv 2023:2023.11.06.565871. [PMID: 37986765] [PMCID: PMC10659380] [DOI: 10.1101/2023.11.06.565871]
Abstract
When interacting with the visual world using saccadic eye movements (saccades), the perceived location of visual stimuli becomes biased, a phenomenon called perisaccadic mislocalization, an exemplar of the brain's dynamic representation of the visual world. However, the neural mechanism underlying this altered visuospatial perception and its potential link to other perisaccadic perceptual phenomena have not been established. Using a combined experimental and computational approach, we quantified spatial bias around the saccade target (ST) based on the perisaccadic dynamics of extrastriate spatiotemporal sensitivity captured by statistical models. This approach predicted the perisaccadic spatial bias around the ST, consistent with psychophysical studies, and revealed the precise neuronal response components underlying representational bias. These findings also established the crucial role of response remapping toward ST representation, for neurons with receptive fields far from the ST, in driving the ST spatial bias. Moreover, we showed that, by allocating more resources to visual target representation, visual areas enhance their representation of the ST location, even at the expense of transient distortions in spatial representation. This potential neural basis for perisaccadic ST representation also supports a general role for extrastriate neurons in creating the perception of stimulus location.
Affiliation(s)
- Geyu Weng
- Department of Biomedical Engineering, University of Utah, Salt Lake City, UT, USA
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
- Amir Akbarian
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
- Kelsey Clark
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
- Behrad Noudoost
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
- Neda Nategh
- Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA
- Department of Electrical and Computer Engineering, University of Utah, Salt Lake City, UT, USA
3
AttentionMNIST: a mouse-click attention tracking dataset for handwritten numeral and alphabet recognition. Sci Rep 2023; 13:3305. [PMID: 36849543] [PMCID: PMC9971057] [DOI: 10.1038/s41598-023-29880-7]
Abstract
Multiple attention-based models that recognize objects via a sequence of glimpses have reported results on handwritten numeral recognition. However, no attention-tracking data for handwritten numeral or alphabet recognition is available. Availability of such data would allow attention-based models to be evaluated in comparison to human performance. We collect mouse-click attention tracking data from 382 participants trying to recognize handwritten numerals and alphabets (upper and lowercase) from images via sequential sampling. Images from benchmark datasets are presented as stimuli. The collected dataset, called AttentionMNIST, consists of a sequence of sample (mouse click) locations, predicted class label(s) at each sampling, and the duration of each sampling. On average, our participants observe only 12.8% of an image for recognition. We propose a baseline model to predict the location and the class(es) a participant will select at the next sampling. When exposed to the same stimuli and experimental conditions as our participants, a highly-cited attention-based reinforcement model falls short of human efficiency.
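The 12.8% figure above is a coverage statistic over a participant's glimpses. As an illustration only (the dataset's actual field names and glimpse geometry are assumptions here, not taken from the paper), coverage can be computed by unioning an assumed square window around each click location:

```python
def observed_fraction(clicks, img_w, img_h, glimpse=16):
    """Fraction of image pixels revealed by a sequence of click-driven
    glimpses, each assumed to expose a square window of side `glimpse`
    pixels centered on the click (clipped at the image border)."""
    half = glimpse // 2
    seen = set()
    for cx, cy in clicks:
        for x in range(max(0, cx - half), min(img_w, cx + half)):
            for y in range(max(0, cy - half), min(img_h, cy + half)):
                seen.add((x, y))
    return len(seen) / (img_w * img_h)

# Two overlapping glimpses on a 28x28 (MNIST-sized) image:
print(observed_fraction([(10, 10), (14, 10)], 28, 28))  # ≈ 0.408
```

The window size is a hypothetical parameter; overlapping glimpses are counted once, which is why average coverage can stay low even over many samples.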
4
Rossheim ME, Peterson MS, Livingston MD, Dunlap P, Trangenstein PJ, Tran K, Emechebe OC, McDonald KK, Treffers RD, Jernigan DH, Thombs DL. Eye-tracking to examine differences in alcohol product appeal by sex among young people. Am J Drug Alcohol Abuse 2022; 48:734-744. [PMID: 36206530] [DOI: 10.1080/00952990.2022.2129062]
Abstract
Background: Advertising of traditional alcopops contains elements that appeal to youth, especially females. Supersized alcopops are marketed differently than traditional alcopops and contain up to 5.5 standard alcoholic drinks. Young females are more likely to underestimate the alcohol content of supersized alcopops, putting them at higher risk of overconsumption. Like supersized alcopops, beer is packaged in large cans and displayed in the same areas of store shelves. Objective: This study examined whether supersized alcopop products, versus beer products, disproportionately appealed to young females. Methods: Eleven adolescents (13-17 years old) and 72 college students (21-26 years old) were recruited during 2019-2020. Participants viewed 19 photos of convenience store display cases containing both supersized alcopop and beer products. While viewing each image, participants were instructed to click on the beverage that looked the "coolest" (i.e., most appealing). Eye-tracking hardware and software measured the amount of time participants visually fixated on each product. Participants completed a survey to record demographic characteristics. Results: Compared to males (n=25), females (n=58) fixated on supersized alcopops for 6.8 seconds longer (95% CI: 0.3, 13.3). Females also had 3.7 times the odds of selecting a supersized alcopop as the most appealing product compared to males (95% CI: 1.68, 8.01), adjusting for the amount of time spent visually fixating on supersized alcopops, which was also a significant predictor. Conclusions: Young females' strong preference for supersized alcopops is concerning given that, relative to males, they disproportionately underestimate these products' potency and are more likely to reach dangerously high BAC levels from consuming one or two supersized alcopops.
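For orientation, the 3.7 odds ratio above came from a model adjusting for fixation time; a minimal unadjusted sketch, using a plain 2x2 table with hypothetical counts (not the study's data), looks like this:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% confidence interval for a
    2x2 table of counts:
                      chose alcopop   chose beer
        females             a              b
        males               c              d
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for illustration only:
or_, lo, hi = odds_ratio_ci(40, 18, 9, 16)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")  # OR ≈ 3.95
```

The study's reported estimate additionally conditioned on fixation time via logistic regression, which this table-based sketch does not do.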
Affiliation(s)
- Matthew E Rossheim
- Department of Health Behavior and Health Systems, University of North Texas Health Science Center School of Public Health, Fort Worth, TX, USA
- M Doug Livingston
- Department of Behavioral Sciences and Health Education, Emory University, Atlanta, GA, USA
- Phenesse Dunlap
- Department of Behavioral Sciences and Health Education, Emory University, Atlanta, GA, USA
- Katherine Tran
- Department of Biology, George Mason University, Fairfax, VA, USA
- Ogechi C Emechebe
- Department of Global and Community Health, George Mason University, Fairfax, VA, USA
- Kayla K McDonald
- Department of Global and Community Health, George Mason University, Fairfax, VA, USA
- Ryan D Treffers
- National Capital Region Center, Pacific Institute for Research and Evaluation (PIRE), Santa Cruz, CA, USA
- David H Jernigan
- Department of Health Law, Policy, and Management, Boston University School of Public Health, Boston, MA, USA
- Dennis L Thombs
- Department of Health Behavior and Health Systems, University of North Texas Health Science Center School of Public Health, Fort Worth, TX, USA
5
Oakes LM. The development of visual attention in infancy: A cascade approach. Adv Child Dev Behav 2022; 64:1-37. [PMID: 37080665] [DOI: 10.1016/bs.acdb.2022.10.004]
Abstract
Visual attention develops rapidly and significantly during the first postnatal years. At birth, infants have poor visual acuity, poor head and neck control, and as a result have little autonomy over where and how long they look. Across the first year, the neural systems that support alerting, orienting, and endogenous attention develop, allowing infants to more effectively focus their attention on information in the environment important for processing. However, visual attention is a system that develops in the context of the whole child, and fully understanding this development requires understanding how attentional systems interact and how these systems interact with other systems across wide domains. By adopting a cascades framework we can better position the development of visual attention in the context of the whole developing child. Specifically, development builds, with previous achievements setting the stage for current development, and current development having cascading consequences on future development. In addition, development reflects changes in multiple domains, and those domains influence each other across development. Finally, development reflects and produces changes in the input that the visual system receives; understanding the changing input is key to fully understand the development of visual attention. The development of visual attention is described in this context.
Affiliation(s)
- Lisa M Oakes
- Department of Psychology and Center for Mind and Brain, University of California, Davis, Davis, CA, United States.
6
Hazard Perception–Response: A Theoretical Framework to Explain Drivers’ Interactions with Roadway Hazards. Safety 2021. [DOI: 10.3390/safety7020029]
Abstract
Research suggests that novice drivers are most susceptible to errors when detecting and responding to hazards. If this is true, then hazard training should be effective in improving novice drivers’ performance; however, there is limited evidence to support this effectiveness. Much of this research has overlooked a fundamental aspect of psychological research: theory. Although four theoretical frameworks have been developed to explain the hazard perception–response process, none has been validated. We propose a theoretical framework to more accurately explain drivers’ behavior when interacting with hazardous situations. This framework is novel in that it draws support from both visual attention and driving behavior research. Hazard-related constructs are defined, and suitable metrics to evaluate the stages in hazard processing are suggested. Individual differences that affect hazard-related skills are also discussed. This new theoretical framework may explain why conflicting findings in current hazard-related research fail to provide evidence that training such behaviors reduces crash risk. Future research is necessary to empirically test this framework.
7
Neural Representations of Covert Attention across Saccades: Comparing Pattern Similarity to Shifting and Holding Attention during Fixation. eNeuro 2021; 8:ENEURO.0186-20.2021. [PMID: 33558269] [PMCID: PMC8026251] [DOI: 10.1523/eneuro.0186-20.2021]
Abstract
We can focus visuospatial attention by covertly attending to relevant locations, moving our eyes, or both simultaneously. How does shifting versus holding covert attention during fixation compare with maintaining covert attention across saccades? We acquired human fMRI data during a combined saccade and covert attention task. On Eyes-fixed trials, participants either held attention at the same initial location (“hold attention”) or shifted attention to another location midway through the trial (“shift attention”). On Eyes-move trials, participants made a saccade midway through the trial, while maintaining attention in one of two reference frames: the “retinotopic attention” condition involved holding attention at a fixation-relative location but shifting to a different screen-centered location, whereas the “spatiotopic attention” condition involved holding attention on the same screen-centered location but shifting relative to fixation. We localized the brain network sensitive to attention shifts (shift > hold attention), and used multivoxel pattern time course (MVPTC) analyses to investigate the patterns of brain activity for spatiotopic and retinotopic attention across saccades. In the attention shift network, we found transient information about both whether covert shifts were made and whether saccades were executed. Moreover, in this network, both retinotopic and spatiotopic conditions were represented more similarly to shifting than to holding covert attention. An exploratory searchlight analysis revealed additional regions where spatiotopic was relatively more similar to shifting and retinotopic more to holding. Thus, maintaining retinotopic and spatiotopic attention across saccades may involve different types of updating that vary in similarity to covert attention “hold” and “shift” signals across different regions.
8
Exploring the temporal dynamics of inhibition of return using steady-state visual evoked potentials. Cogn Affect Behav Neurosci 2020; 20:1349-1364. [PMID: 33236297] [DOI: 10.3758/s13415-020-00846-w]
Abstract
Inhibition of return is characterized by delayed responses to previously attended locations when the interval between stimuli is long enough. The present study employed steady-state visual evoked potentials (SSVEPs) as a measure of attentional modulation to explore the nature and time course of input- and output-based inhibitory cueing mechanisms that each slow response times at previously stimulated locations under different experimental conditions. The neural effects of behavioral inhibition were examined by comparing post-cue SSVEPs between cued and uncued locations measured across two tasks that differed only in the response modality (saccadic or manual response to targets). Grand averages of SSVEP amplitudes for each condition showed a reduction in amplitude at cued locations in the window of 100-500 ms post-cue, revealing an early, short-term decrease in the responses of neurons that can be attributed to sensory adaptation, regardless of response modality. Because primary visual cortex has been found to be one of the major sources of SSVEP signals, the results suggest that the SSVEP modulations observed were caused by input-based inhibition that occurred in V1, or visual areas earlier than V1, as a consequence of reduced visual input activity at previously cued locations. No SSVEP modulations were observed in either response condition late in the cue-target interval, suggesting that neither late input- nor output-based IOR modulates SSVEPs. These findings provide further electrophysiological support for the theory of multiple mechanisms contributing to behavioral cueing effects.
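As a sketch of the SSVEP amplitude measure described above (the sampling rate, stimulation frequency, and window length here are assumptions for illustration, not the study's settings), a single-bin discrete Fourier transform recovers the amplitude of the response at the stimulation frequency:

```python
import math

def ssvep_amplitude(signal, fs, f):
    """Amplitude of the frequency-f component of `signal` (a list of
    samples recorded at fs Hz), via a single-bin discrete Fourier
    transform over the whole window."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * f * i / fs) for i, x in enumerate(signal))
    im = sum(x * math.sin(2 * math.pi * f * i / fs) for i, x in enumerate(signal))
    return 2 * math.sqrt(re ** 2 + im ** 2) / n

fs, f = 500, 10                                   # assumed EEG sampling rate and flicker frequency
sig = [0.8 * math.sin(2 * math.pi * f * i / fs) for i in range(fs)]  # 1 s of a pure 10 Hz response
print(round(ssvep_amplitude(sig, fs, f), 3))      # recovers the 0.8 amplitude
```

In practice the cued-versus-uncued comparison in the study would contrast such amplitudes (from tagged flicker frequencies at each location) across post-cue time windows.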
9
Lovett A, Bridewell W, Bello P. Selection enables enhancement: An integrated model of object tracking. J Vis 2020; 19:23. [PMID: 31868894] [DOI: 10.1167/19.14.23]
Abstract
The diversity of research on visual attention and multiple-object tracking presents challenges for anyone hoping to develop a unified account. One key challenge is identifying the attentional limitations that give rise to competition among targets during tracking. To address this challenge, we present a computational model of object tracking that relies on two attentional mechanisms: serial selection and parallel enhancement. Selection picks out an object for further processing, whereas enhancement increases sensitivity to stimuli in regions where objects have been selected previously. In this model, multiple target locations can be tracked in parallel via enhancement, whereas a single target can be selected so that additional information beyond its location can be processed. In simulations of two psychological experiments, we demonstrate that spatial competition during enhancement and temporal competition for selection can explain a range of findings on multiple-object tracking, and we argue that the interaction between selection and enhancement captured in the model is critical to understanding attention more broadly.
Affiliation(s)
- Paul Bello
- U.S. Naval Research Laboratory, Washington, DC, USA
10
Stereotypes and Structure in the Interaction between Facial Emotional Expression and Sex Characteristics. Adaptive Human Behavior and Physiology 2020. [DOI: 10.1007/s40750-020-00141-5]
11
12
Reuther J, Chakravarthi R, Hunt AR. The eye that binds: Feature integration is not disrupted by saccadic eye movements. Atten Percept Psychophys 2020; 82:533-549. [PMID: 31808114] [PMCID: PMC7246252] [DOI: 10.3758/s13414-019-01873-7]
Abstract
Feature integration theory proposes that visual features, such as shape and color, can only be combined into a unified object when spatial attention is directed to their location in retinotopic maps. Eye movements cause dramatic changes on our retinae and are associated with obligatory shifts in spatial attention. In two experiments, we measured the prevalence of conjunction errors (that is, reporting an object as having an attribute that belonged to another object) for brief stimulus presentations before, during, and after a saccade. Planning and executing a saccade did not itself disrupt feature integration. Motion did disrupt feature integration, leading to an increase in conjunction errors. However, retinal motion of an equal extent that was instead caused by saccadic eye movements was spared this disruption and showed rates of conjunction errors similar to a condition with static stimuli presented to a static eye. The results suggest that extra-retinal signals are able to compensate for the motion caused by saccadic eye movements, thereby preserving the integrity of objects across saccades and preventing their features from mixing or mis-binding.
13
Abstract
In this article, we challenge the usefulness of "attention" as a unitary construct and/or neural system. We point out that the concept has too many meanings to justify a single term, and that "attention" is used to refer to both the explanandum (the set of phenomena in need of explanation) and the explanans (the set of processes doing the explaining). To illustrate these points, we focus our discussion on visual selective attention. It is argued that selectivity in processing has emerged through evolution as a design feature of a complex multi-channel sensorimotor system, which generates selective phenomena of "attention" as one of many by-products. Instead of the traditional analytic approach to attention, we suggest a synthetic approach that starts with well-understood mechanisms that do not need to be dedicated to attention, and yet account for the selectivity phenomena under investigation. We conclude that what would serve scientific progress best would be to drop the term "attention" as a label for a specific functional or neural system and instead focus on behaviorally relevant selection processes and the many systems that implement them.
Affiliation(s)
- Bernhard Hommel
- Institute of Psychology, Cognitive Psychology Unit and Leiden Institute for Brain and Cognition, Leiden University, Leiden, the Netherlands
- Craig S Chapman
- Faculty of Kinesiology, Sport, and Recreation, University of Alberta, Edmonton, Alberta, Canada
- Paul Cisek
- Department of Neuroscience, University of Montreal, Montreal, Quebec, Canada
- Heather F Neyedli
- School of Health and Human Performance, Dalhousie University, Halifax, Nova Scotia, Canada
- Joo-Hyun Song
- Department of Cognitive, Linguistic and Psychological Sciences, Brown University, Providence, RI, USA
- Timothy N Welsh
- Centre for Motor Control, Faculty of Kinesiology and Physical Education, University of Toronto, 55 Harbord Street, Toronto, ON, M5S 2W6, Canada
14
Valéry B, Matton N, Scannella S, Dehais F. Global difficulty modulates the prioritization strategy in multitasking situations. Appl Ergon 2019; 80:1-8. [PMID: 31280792] [DOI: 10.1016/j.apergo.2019.04.012]
Abstract
There has been a considerable amount of research conceptualizing how cognition handles multitasking situations. Despite these efforts, it is still not clear how task parameters shape the allocation of attentional resources: much research has suggested that difficulty levels could explain conflicting observations in this literature, and very few studies have considered other factors such as task importance. In the present study, twenty participants carried out two N-back tasks simultaneously, each subtask having distinct difficulty (0-, 1-, or 2-back) and importance (1 or 3 points) levels. Participants' cumulative dwell times were collected to assess their attentional strategies. Results showed that, depending on the global level of difficulty (the combination of the two subtask difficulty levels), participants' attentional resources were driven either by subtask difficulty (under low global difficulty) or by subtask importance (under high global difficulty), in a non-compensatory way. We discuss these results in terms of decision-making heuristics and metacognition.
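Cumulative dwell time, the attention measure used above, can be sketched as follows; the gaze-sample format and rectangular areas of interest (AOIs) are assumptions for illustration, not the study's actual data layout:

```python
def cumulative_dwell_time(samples, aois):
    """Cumulative dwell time (s) per area of interest.

    samples: list of (t, x, y) gaze samples, assumed uniformly spaced in time
    aois:    dict mapping AOI name -> (xmin, ymin, xmax, ymax) screen rectangle
    """
    dt = samples[1][0] - samples[0][0]          # constant sampling interval assumed
    totals = {name: 0.0 for name in aois}
    for _, x, y in samples:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += dt              # credit one sample interval to this AOI
    return totals
```

Each subtask's on-screen region would be one AOI; comparing per-AOI totals across difficulty and importance conditions gives the attentional-strategy measure the abstract describes.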
Affiliation(s)
- Benoît Valéry
- Institut Supérieur de l'Aéronautique et de l'Espace - Supaéro, Toulouse, France; Institut National Universitaire Champollion, Albi, France.
- Nadine Matton
- École Nationale de l'Aviation Civile, Toulouse, France
- Sébastien Scannella
- Institut Supérieur de l'Aéronautique et de l'Espace - Supaéro, Toulouse, France
- Frédéric Dehais
- Institut Supérieur de l'Aéronautique et de l'Espace - Supaéro, Toulouse, France
15
Alexander RG, Nahvi RJ, Zelinsky GJ. Specifying the precision of guiding features for visual search. J Exp Psychol Hum Percept Perform 2019; 45:1248-1264. [PMID: 31219282] [PMCID: PMC6706321] [DOI: 10.1037/xhp0000668]
Abstract
Visual search is the task of finding things with uncertain locations. Despite decades of research, the features that guide visual search remain poorly specified, especially in realistic contexts. This study tested the role of two features, shape and orientation, both in the presence and absence of hue information. We conducted five experiments to describe preview-target mismatch effects: decreases in performance caused by differences between the image of the target as it appears in the preview and as it appears in the actual search display. These mismatch effects provide direct measures of feature importance, with larger performance decrements expected for more important features. Contrary to previous conclusions, our data suggest that shape and orientation only guide visual search when color is not available. By varying the probability of mismatch in each feature dimension, we also show that these patterns of feature guidance do not change with the probability that the previewed feature will be invalid. We conclude that the target representations used to guide visual search are much less precise than previously believed, with participants encoding and using color and little else.
16
Pereira EJ, Birmingham E, Ristic J. The eyes do not have it after all? Attention is not automatically biased towards faces and eyes. Psychol Res 2019; 84:1407-1423. [PMID: 30603864] [DOI: 10.1007/s00426-018-1130-4]
Abstract
It is commonly accepted that attention is spontaneously biased towards faces and eyes. However, the role of stimulus features and task settings in this finding has not yet been systematically investigated. Here, we tested if faces and facial features bias attention spontaneously when stimulus factors, task properties, response conditions, and eye movements are controlled. In three experiments, participants viewed face, house, and control scrambled face-house images in an upright and inverted orientation. The task was to discriminate a target that appeared with equal probability at the previous location of the face, house, or the control image. In all experiments, our data indicated no spontaneous biasing of attention for targets occurring at the previous location of the face. Experiment 3, which measured oculomotor biasing, suggested a reliable but infrequent saccadic bias towards the eye region of upright faces. Importantly, these results did not reflect our specific laboratory settings, as in Experiment 4, we present a full replication of a classic finding in the literature demonstrating reliable social attention bias. Together, these data suggest that attentional biasing for social information is task and context mediated, and less robust than originally thought.
Affiliation(s)
- Effie J Pereira
- Department of Psychology, McGill University, 1205 Dr. Penfield Avenue, H3A 1B1, Montreal, QC, Canada.
- Elina Birmingham
- Faculty of Education, Simon Fraser University, 8888 University Drive, Burnaby, V5A 1S6, BC, Canada
- Jelena Ristic
- Department of Psychology, McGill University, 1205 Dr. Penfield Avenue, H3A 1B1, Montreal, QC, Canada
17
Abstract
Several times per second, humans make rapid eye movements called saccades which redirect their gaze to sample new regions of external space. Saccades present unique challenges to both perceptual and motor systems. During the movement, the visual input is smeared across the retina and severely degraded. Once completed, the projection of the world onto the retina has undergone a large-scale spatial transformation. The vector of this transformation, and the new orientation of the eye in the external world, is uncertain. Memory for the pre-saccadic visual input is thought to play a central role in compensating for the disruption caused by saccades. Here, we review evidence that memory contributes to (1) detecting and identifying changes in the world that occur during a saccade, (2) bridging the gap in input so that visual processing does not have to start anew, and (3) correcting saccade errors and recalibrating the oculomotor system to ensure accuracy of future saccades. We argue that visual working memory (VWM) is the most likely candidate system to underlie these behaviours and assess the consequences of VWM's strict resource limitations for transsaccadic processing. We conclude that a full understanding of these processes will require progress on broader unsolved problems in psychology and neuroscience, in particular how the brain solves the object correspondence problem, to what extent prior beliefs influence visual perception, and how disparate signals arriving with different delays are integrated.
18
Abstract
The nature of the relationship between spatial attention and eye movements has been the subject of intense debate for more than 40 years. Two ideas have dominated this debate. First is the idea that spatial attention shares common neural mechanisms with eye movement programming, characterizing attention as an eye movement that has been prepared but not executed. Second, based on the observation that attention shifts to saccade targets, several theories have proposed that saccade programming necessarily recruits attentional resources. In this chapter, we review the evidence for each of these ideas and discuss some of the limitations and challenges in confirming their predictions. Although they are clearly dependent under some circumstances, dissociations between spatial attention and eye movements, and clear differences in their basic functions, point to the existence of two interconnected, but separate, systems.
|
19
|
Xing Q, Rong C, Lu Z, Yao Y, Zhang Z, Zhao X. The Effect of the Embodied Guidance in the Insight Problem Solving: An Eye Movement Study. Front Psychol 2018; 9:2257. [PMID: 30534097 PMCID: PMC6275308 DOI: 10.3389/fpsyg.2018.02257] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2018] [Accepted: 10/30/2018] [Indexed: 11/24/2022] Open
Abstract
Insight is an important cognitive process in creative thinking. The present research applied an embodied cognition perspective to explore the effect of embodied guidance on insight problem solving, and its underlying mechanisms, in two experiments. Experiment 1 used the matchstick arithmetic problem to explore the role of embodied gesture guidance in problem solving. The results showed that embodied gestures facilitated participants' performance. Experiment 2 investigated how embodied attention guidance affects insight problem solving; the results showed that participants performed better in the prototypical guidance condition. Experiment 2a adopted Duncker's radiation problem to explore how embodied behavior and prototypical guidance influence problem solving using attention-tracing techniques. Experiment 2b aimed to further examine whether implicit attention transfer was the real cause of participants' better performance in the prototypical guidance condition of Experiment 2a. The results demonstrated that overt physical motion was unnecessary for individuals to experience the benefits of embodied guidance in problem solving, which supports the reciprocal relation hypothesis of saccades and attention. In addition, a questionnaire completed after the experiments showed that participants did not notice the relation between guidance and insight problem solving. Taken together, the current study provides further evidence that embodied gesture and embodied attention both facilitate insight problem solving, and that this facilitation is implicit.
Affiliation(s)
- Qiang Xing
- Department of Psychology, School of Education, Guangzhou University, Guangzhou, China
- Cuiliang Rong
- Department of Psychology, School of Education, Guangzhou University, Guangzhou, China
- Zheyi Lu
- Department of Psychology, School of Education, Guangzhou University, Guangzhou, China
- Yanfeng Yao
- Department of Psychology, School of Education, Guangzhou University, Guangzhou, China; Jiangcun Primary School, Guangzhou, China
- Zhonglu Zhang
- Department of Psychology, School of Education, Guangzhou University, Guangzhou, China
- Xue Zhao
- Shunde Experiment Middle School, Foshan, China
|
20
|
Abstract
Rapid shifts of involuntary attention have been shown to induce mislocalizations of nearby objects. One pattern of mislocalization, termed the Attentional Repulsion Effect (ARE), occurs when the onset of a peripheral pre-cue leads to perceived shifts of subsequently presented stimuli away from the cued location. While the standard ARE configuration utilizes vernier lines, to date all previous ARE studies have assessed distortions along only one direction and tested only one spatial dimension (i.e., position or shape). The present study assessed the magnitude of the ARE using a novel stimulus configuration. Across three experiments, participants judged which of two rectangles on the left or right side of the display appeared wider or taller. Pre-cues were used in Experiments 1 and 2. Results show equivalent perceived expansions in the width and height of the pre-cued rectangle, in addition to baseline asymmetries in left/right relative size under no-cue conditions. Altering cue locations led to shifts in the perceived location of the same rectangles, demonstrating distortions in perceived shape and location using the same stimuli and cues. Experiment 3 demonstrates that rectangles are perceived as larger in the periphery than at fixation, suggesting that eye movements cannot account for the results of Experiments 1 and 2. The results support the hypothesis that the ARE reflects a localized, symmetrical warping of visual space that impacts multiple aspects of spatial and object perception.
|
21
|
Li J, Oksama L, Hyönä J. Model of Multiple Identity Tracking (MOMIT) 2.0: Resolving the serial vs. parallel controversy in tracking. Cognition 2018; 182:260-274. [PMID: 30384128 DOI: 10.1016/j.cognition.2018.10.016] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2018] [Revised: 10/03/2018] [Accepted: 10/23/2018] [Indexed: 11/29/2022]
Abstract
The present study investigated whether, during tracking of multiple moving objects with distinct identities, only one identity is tracked at each moment (serial tracking) or whether multiple identities can be tracked simultaneously (parallel tracking). By adopting the gaze-contingent display change technique, we manipulated in real time the presence/absence of object identities during tracking. The data on performance accuracy revealed a serial tracking pattern for facial images and a parallel pattern for color discs: when tracking faces, the presence/absence of only the currently foveated identity impacted performance, whereas when tracking colors, the presence of multiple identities across the visual field led to improved tracking performance. This pattern is consistent with the identifiability of the different types of objects in the visual field. The eye movements during multiple identity tracking (MIT) showed a bias towards visiting and dwelling on individual targets when facial identities were present, and towards visiting the blank areas between targets when color identities were present. Nevertheless, the eye visits were predominantly on individual targets regardless of the type of objects and the presence of object identities. The eye visits to targets were beneficial for target tracking, particularly in face tracking. We propose the Model of Multiple Identity Tracking (MOMIT) 2.0, which accounts for these results and reconciles the serial vs. parallel controversy. The model suggests that observers cooperatively use attention, eye movements, perception, and working memory for dynamic tracking. Tracking appears more serial when high-resolution information needs to be sampled and maintained to discriminate the targets, whereas it appears more parallel when low-resolution information is sufficient.
Affiliation(s)
- Jie Li
- School of Psychology, Beijing Sport University, China
- Jukka Hyönä
- Department of Psychology, University of Turku, Finland
|
22
|
Indrajeet I, Ray S. Detectability of stop-signal determines magnitude of deceleration in saccade planning. Eur J Neurosci 2018; 49:232-249. [PMID: 30362205 DOI: 10.1111/ejn.14220] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2018] [Revised: 09/23/2018] [Accepted: 10/16/2018] [Indexed: 12/29/2022]
Abstract
Inhibitory control is exerted when the context in which a movement has been planned changes abruptly, making the impending movement inappropriate. Neurons in the frontal eye field and superior colliculus steadily increase activity before a saccadic eye movement, but cease rising below a threshold when an impending saccade is withheld in response to an unexpected stop-signal. This type of neural modulation has largely been considered the outcome of a race between preparatory and inhibitory processes ramping up to reach a decision criterion. An alternative model claims that the rate of saccade planning is diminished exclusively when the stop-signal is detected within a stipulated period. However, owing to a dearth of empirical evidence in support of the latter model, it remains unclear how the detectability of the stop-signal influences saccade inhibition. In our study, human participants selected a visual target to look at by discriminating a go-cue. Infrequently, they cancelled the saccade and reported whether they saw the stop-signal. The go-cue and stop-signal were both embedded in a stream of irrelevant stimuli presented in rapid succession. Participants exhibited difficulty detecting the stop-signal when it was presented almost immediately after the go-cue. We found a robust relationship between the detectability of the stop-signal and the odds of saccade inhibition. Saccade latency increased exponentially with the maximum time available for processing the stop-signal before gaze shifted. A model in which the stop-signal onset spontaneously decelerated progressive saccade planning, with a magnitude proportional to its detectability, accounted for the data.
Affiliation(s)
- Indrajeet Indrajeet
- Centre of Behavioural and Cognitive Sciences, University of Allahabad, Allahabad, India
- Supriya Ray
- Centre of Behavioural and Cognitive Sciences, University of Allahabad, Allahabad, India
|
23
|
Estéphan A, Fiset D, Saumure C, Plouffe-Demers MP, Zhang Y, Sun D, Blais C. Time Course of Cultural Differences in Spatial Frequency Use for Face Identification. Sci Rep 2018; 8:1816. [PMID: 29379032 PMCID: PMC5788938 DOI: 10.1038/s41598-018-19971-1] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2017] [Accepted: 12/15/2017] [Indexed: 11/13/2022] Open
Abstract
Several previous studies of eye movements have suggested that, during face recognition, Easterners spread their attention across a greater part of the visual field than Westerners. Recently, we found that culture's effect on the perception of faces reaches mechanisms deeper than eye movements, thereby affecting the very nature of the information sampled by the visual system: Westerners globally rely more than Easterners on fine-grained visual information (i.e., high spatial frequencies; SFs), whereas Easterners rely more on coarse-grained visual information (i.e., low SFs). These findings suggest that culture influences basic visual processes; however, the temporal onset and dynamics of these culture-specific perceptual differences are still unknown. Here, we investigate the time course of SF use in Western Caucasian (Canadian) and East Asian (Chinese) observers during a face identification task. Firstly, our results confirm that Easterners use relatively lower SFs than Westerners, while the latter use relatively higher SFs. More importantly, our results indicate that these differences arise as early as 34 ms after stimulus onset and remain stable over time. Our research supports the hypothesis that Westerners and Easterners initially rely on different types of visual information during face processing.
Affiliation(s)
- Amanda Estéphan
- Département de psychoéducation et de psychologie, Université du Québec en Outaouais, Québec, Canada; Département de psychologie, Université du Québec à Montréal, Québec, Canada
- Daniel Fiset
- Département de psychoéducation et de psychologie, Université du Québec en Outaouais, Québec, Canada
- Camille Saumure
- Département de psychoéducation et de psychologie, Université du Québec en Outaouais, Québec, Canada
- Ye Zhang
- Institute of Psychological Science, Hangzhou Normal University, Hangzhou, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, China
- Dan Sun
- Institute of Psychological Science, Hangzhou Normal University, Hangzhou, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, China
- Caroline Blais
- Département de psychoéducation et de psychologie, Université du Québec en Outaouais, Québec, Canada
|
24
|
Hannula DE. Attention and long-term memory: Bidirectional interactions and their effects on behavior. PSYCHOLOGY OF LEARNING AND MOTIVATION 2018. [DOI: 10.1016/bs.plm.2018.09.004] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
|
25
|
Stewart EEM, Schütz AC. Attention modulates trans-saccadic integration. Vision Res 2017; 142:1-10. [PMID: 29183779 PMCID: PMC5757795 DOI: 10.1016/j.visres.2017.11.006] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2017] [Revised: 11/13/2017] [Accepted: 11/17/2017] [Indexed: 11/16/2022]
Abstract
With every saccade, humans must reconcile the low-resolution peripheral information available before the saccade with the high-resolution foveal information acquired after it. While research has shown that we are able to integrate peripheral and foveal vision in a near-optimal manner, it is still unclear which mechanisms underpin this important perceptual process. One potential mechanism that may moderate this integration process is visual attention. Pre-saccadic attention is a well-documented phenomenon whereby visual attention shifts to the location of an upcoming saccade before the saccade is executed. While it plays an important role in other peri-saccadic processes such as predictive remapping, the role of attention in the integration process is as yet unknown. This study aimed to determine whether the presentation of an attentional distractor during a saccade impairs trans-saccadic integration, and to measure the time course of this impairment. Results showed that presenting an attentional distractor impaired integration performance both before saccade onset and during the saccade, in those subjects who showed integration in the absence of a distractor. This suggests that visual attention may be a mechanism that facilitates trans-saccadic integration.
Affiliation(s)
- Emma E M Stewart
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany
- Alexander C Schütz
- Allgemeine und Biologische Psychologie, Philipps-Universität Marburg, Marburg, Germany
|
26
|
Jung ES, Lee DG, Lee K, Lee SY. Temporally Robust Eye Movements through Task Priming and Self-referential Stimuli. Sci Rep 2017; 7:7257. [PMID: 28775332 PMCID: PMC5543141 DOI: 10.1038/s41598-017-07641-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/21/2017] [Accepted: 06/28/2017] [Indexed: 11/16/2022] Open
Abstract
Studies have demonstrated connections between eye movements and attention shifts. However, little is known about the general factors that contribute to the self-consistency of idiosyncratic scanpaths as a function of attention shifts over time. The present work repeatedly measured human eye movements at various time intervals that ranged from less than one hour to one year between recording sessions. With and without task context, subjects observed multiple images with multiple areas of interest, including their own sporadically interspersed facial images. As reactions to visual stimuli, the eye movements of individuals were compared within and between subjects. We compared scanpaths with dynamic time warping and identified subjects based on the comparisons. The results indicate that within-subject eye movement comparisons remain more similar than between-subject eye movement comparisons over time and that task context and self-referential stimuli contribute to the consistency of idiosyncrasies in attention shift patterns.
Affiliation(s)
- Eun-Soo Jung
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
- Dong-Gun Lee
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
- Kyeongho Lee
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
- Soo-Young Lee
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea; Brain Science Research Center, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
|
27
|
Daga FB, Macagno E, Stevenson C, Elhosseiny A, Diniz-Filho A, Boer ER, Schulze J, Medeiros FA. Wayfinding and Glaucoma: A Virtual Reality Experiment. Invest Ophthalmol Vis Sci 2017; 58:3343-3349. [PMID: 28687845 PMCID: PMC5499646 DOI: 10.1167/iovs.17-21849] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose: Wayfinding, the process of determining and following a route between an origin and a destination, is an integral part of everyday tasks. The purpose of this study was to investigate the impact of glaucomatous visual field loss on wayfinding behavior using an immersive virtual reality (VR) environment.
Methods: This cross-sectional study included 31 glaucomatous patients and 20 healthy subjects without evidence of overall cognitive impairment. Wayfinding experiments were modeled after the Morris water maze navigation task and conducted in an immersive VR environment. Two rooms were built varying only in the complexity of the visual scene, in order to promote allocentric-based (room A, with multiple visual cues) versus egocentric-based (room B, with a single visual cue) spatial representations of the environment. Wayfinding tasks in each room consisted of revisiting previously visible targets that subsequently became invisible.
Results: For room A, glaucoma patients spent on average 35.0 seconds to perform the wayfinding task, whereas healthy subjects spent an average of 24.4 seconds (P = 0.001). For room B, no statistically significant difference was seen in the average time to complete the task (26.2 seconds versus 23.4 seconds, respectively; P = 0.514). For room A, each 1-dB worsening in binocular mean sensitivity was associated with a 3.4% (P = 0.001) increase in time to complete the task.
Conclusions: Glaucoma patients performed significantly worse on allocentric-based wayfinding tasks conducted in a VR environment, suggesting visual field loss may affect the construction of spatial cognitive maps relevant to successful wayfinding. VR environments may represent a useful approach for assessing functional vision endpoints in clinical trials of emerging therapies in ophthalmology.
Affiliation(s)
- Fábio B Daga
- Visual Performance Laboratory, University of California San Diego, La Jolla, California, United States
- Eduardo Macagno
- Division of Biological Sciences, University of California San Diego, La Jolla, California, United States; Department of Bioengineering, University of California San Diego, La Jolla, California, United States
- Cory Stevenson
- Division of Biological Sciences, University of California San Diego, La Jolla, California, United States; Department of Bioengineering, University of California San Diego, La Jolla, California, United States
- Ahmed Elhosseiny
- Visual Performance Laboratory, University of California San Diego, La Jolla, California, United States; California Institute for Telecommunications and Information Technology (Calit2), University of California San Diego, La Jolla, California, United States
- Alberto Diniz-Filho
- Visual Performance Laboratory, University of California San Diego, La Jolla, California, United States
- Erwin R Boer
- Visual Performance Laboratory, University of California San Diego, La Jolla, California, United States
- Jürgen Schulze
- California Institute for Telecommunications and Information Technology (Calit2), University of California San Diego, La Jolla, California, United States; Department of Computer Science and Engineering, University of California San Diego, La Jolla, California, United States
- Felipe A Medeiros
- Visual Performance Laboratory, University of California San Diego, La Jolla, California, United States
|
28
|
Abstract
Attention and eye movements provide a window into the selective processing of visual information. Evidence suggests that selection is influenced by various factors and is not always under the strategic control of the observer. The aims of this tutorial review are to give a brief introduction to eye movements and attention and to outline the conditions that help determine control. Evidence suggests that the ability to establish control depends on the complexity of the display as well as the point in time at which selection occurs. Stimulus-driven selection is more probable in simple displays than in complex natural scenes, but it critically depends on the timing of the response: Salience determines selection only when responses are triggered quickly following display presentation, and plays no role in longer-latency responses. The time course of selection is also important for the relationship between attention and eye movements. Specifically, attention and eye movements appear to act independently when oculomotor selection is quick, whereas attentional processes are able to influence oculomotor control when saccades are triggered only later in time. This relationship may also be modulated by whether the eye movement is controlled in a voluntary or an involuntary manner. To conclude, we present evidence that shows that visual control is limited in flexibility and that the mechanisms of selection are constrained by context and time. The outcome of visual selection changes with the situational context, and knowing the constraints of control is necessary to understanding when and how visual selection is truly controlled by the observer.
|
29
|
Abstract
Frequently, we use expectations about the likely location of a target to guide the allocation of our attention. Despite the importance of this attentional process in everyday tasks, examination of pre-cueing effects on attention, particularly endogenous pre-cueing effects, has been relatively little explored beyond an eccentricity of 20°. Given that the visual field has functional subdivisions, and that attentional processes can differ significantly among the foveal, perifoveal, and more peripheral areas, how endogenous pre-cues that carry spatial information about targets influence the allocation of attention across a large visual field (especially in the more peripheral areas) remains unclear. We present two experiments examining how the expectation of the target's location shapes the distribution of attention across eccentricities in the visual field. We measured participants' ability to pick out a target among distractors after the presentation of a highly valid cue indicating either the size of the area in which the target was likely to occur, or the likely direction of the target (left or right side of the display). Our first experiment showed that participants had a higher target detection rate with faster responses, particularly at eccentricities of 20° and 30°. There was also a marginal advantage of pre-cueing when trials of the same size cue were blocked compared to when trials were mixed. Experiment 2 demonstrated a higher target detection rate when the target occurred in the cued direction. This pre-cueing effect was greater at larger eccentricities and with a longer cue-target interval. Our findings on endogenous pre-cueing effects across a large visual area are summarized using a simple model to assist in conceptualizing the modifications of the distribution of attention over the visual field. We discuss our findings in light of cognitive penetration of perception, and highlight the importance of examining attentional processes across a large area of the visual field.
Affiliation(s)
- Jing Feng
- Department of Psychology, North Carolina State University, Raleigh, NC, United States
- Ian Spence
- Department of Psychology, University of Toronto, Toronto, ON, Canada
|
30
|
The whereabouts of visual attention: Involuntary attentional bias toward the default gaze direction. Atten Percept Psychophys 2017; 79:1666-1673. [PMID: 28500508 DOI: 10.3758/s13414-017-1332-7] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
This study proposed and verified a new hypothesis on the relationship between gaze direction and visual attention: attentional bias by default gaze direction based on eye-head coordination. We conducted a target identification task in which visual stimuli appeared briefly to the left and right of a fixation cross. In Experiment 1, the direction of the participant's head (aligned with the body) was manipulated to the left, front, or right relative to a central fixation point. In Experiment 2, head direction was manipulated to the left, front, or right relative to the body direction. This manipulation was based on results showing that bias of eye position distribution was highly correlated with head direction. In both experiments, accuracy was greater when the target appeared at a position where the eyes would potentially be directed. Consequently, eye-head coordination influences visual attention. That is, attention can be automatically biased toward the location where the eyes tend to be directed.
|
31
|
Abstract
Search for a target stimulus among distractors is subject to both goal-driven and stimulus-driven influences. Variables that selectively modify these influences have shown strong interaction effects on saccade trajectories toward the target, suggesting the involvement of a shared spatial orienting mechanism. However, subsequent manual response times (RTs) have revealed additive effects, suggesting that different mechanisms are involved. In the present study, we tested the hypothesis that an interaction for RTs is obscured by preceding multisaccade trajectories, promoted by the continuous presence of distractors in the display. In two experiments, we compared a condition in which distractors were removed soon after the presentation of the search display to a standard condition in which distractors were not removed. The results showed additive goal-driven and stimulus-driven effects on RTs in the standard condition, but an interaction when distractors were removed. These findings support the view that both variables influence a shared spatial orienting mechanism.
|
32
|
Horowitz TS, Fine EM, Fencsik DE, Yurgenson S, Wolfe JM. Fixational Eye Movements Are Not an Index of Covert Attention. Psychol Sci 2007; 18:356-63. [PMID: 17470262 DOI: 10.1111/j.1467-9280.2007.01903.x] [Citation(s) in RCA: 61] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022] Open
Abstract
The debate about the nature of fixational eye movements has revived recently with the claim that microsaccades reflect the direction of attentional shifts. A number of studies have shown an association between the direction of attentional cues and the direction of microsaccades. We sought to determine whether microsaccades in attentional tasks are causally related to behavior. Is reaction time (RT) faster when microsaccades point toward the target than when they point in the opposite direction? We used a dual-Purkinje-image eyetracker to measure gaze position while 3 observers (2 of the authors, 1 naive observer) performed an attentional cuing task under three different response conditions: saccadic localization, manual localization, and manual detection. Critical trials were those on which microsaccades moved away from the cue. On these trials, RTs were slower when microsaccades were oriented toward the target than when they were oriented away from the target. We obtained similar results for direction of drift. Cues, not fixational eye movements, predicted behavior.
Affiliation(s)
- Todd S Horowitz
- Visual Attention Laboratory, Brigham and Women's Hospital, 64 Sidney Street, Cambridge, MA 02139, USA
|
33
|
Detecting single-target changes in multiple object tracking: The case of peripheral vision. Atten Percept Psychophys 2016; 78:1004-19. [DOI: 10.3758/s13414-016-1078-7] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
34
|
Robinson MM, Irwin DE. Shifts of attention bias awareness of voluntary and reflexive eye movements. Exp Brain Res 2016; 234:1689-99. [DOI: 10.1007/s00221-016-4588-6] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2015] [Accepted: 01/30/2016] [Indexed: 11/29/2022]
|
35
|
Abstract
Efficient detection of threat provides obvious survival advantages and has resulted in a fast and accurate threat-detection system. Although beneficial under normal circumstances, this system may become hypersensitive and cause threat-processing abnormalities. Past research has shown that anxious individuals have difficulty disengaging attention from threatening faces, but it is unknown whether other forms of threatening social stimuli also influence attentional orienting. Much like faces, human body postures are salient social stimuli, because they are informative of one's emotional state and next likely action. Additionally, postures can convey such information in situations in which another's facial expression is not easily visible. Here we investigated whether there is a threat-specific effect for high-anxious individuals, by measuring the time that it takes the eyes to leave the attended stimulus, a task-irrelevant body posture. The results showed that, relative to nonthreatening postures, threat-related postures hold attention in anxious individuals, providing further evidence of an anxiety-related attentional bias for threatening information. This is the first study to demonstrate that attentional disengagement from threatening postures is affected by emotional valence in those reporting anxiety.
|
36
|
Abstract
Alfred L. Yarbus was among the first to demonstrate that eye movements actively serve our perceptual and cognitive goals, a crucial recognition that is at the heart of today's research on active vision. He realized that what sticks in memory is determined not by changes in fixation themselves but by the accompanying shifts of attention. Indeed, oculomotor control is tightly coupled to functions as fundamental as attention and memory. This tight relationship offers an intriguing perspective on transsaccadic perceptual continuity, which we experience despite the fact that saccades cause rapid shifts of the image across the retina. Here, I elaborate this perspective based on a series of psychophysical findings. First, saccade preparation shapes the visual system's priorities; it enhances visual performance and perceived stimulus intensity at the targets of the eye movement. Second, before saccades, the deployment of visual attention is updated, predictively facilitating perception at those retinal locations that will be relevant once the eyes land. Third, saccadic eye movements strongly affect the contents of visual memory, highlighting their crucial role in determining which parts of a scene we remember or forget. Together, these results provide insights into how attentional processes enable the visual system to cope with the retinal consequences of saccades.
Affiliation(s)
- Martin Rolfs
- Department of Psychology, Humboldt-Universität zu Berlin, Germany; Bernstein Center for Computational Neuroscience, Humboldt-Universität zu Berlin, Germany
|
37
|
Clark K, Squire RF, Merrikhi Y, Noudoost B. Visual attention: Linking prefrontal sources to neuronal and behavioral correlates. Prog Neurobiol 2015; 132:59-80. [PMID: 26159708 DOI: 10.1016/j.pneurobio.2015.06.006] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2014] [Revised: 06/25/2015] [Accepted: 06/28/2015] [Indexed: 11/26/2022]
Abstract
Attention is a means of flexibly selecting and enhancing a subset of sensory input based on the current behavioral goals. Numerous signatures of attention have been identified throughout the brain, and now experimenters are seeking to determine which of these signatures are causally related to the behavioral benefits of attention, and the source of these modulations within the brain. Here, we review the neural signatures of attention throughout the brain, their theoretical benefits for visual processing, and their experimental correlations with behavioral performance. We discuss the importance of measuring cue benefits as a way to distinguish between impairments on an attention task, which may instead be visual or motor impairments, and true attentional deficits. We examine evidence for various areas proposed as sources of attentional modulation within the brain, with a focus on the prefrontal cortex. Lastly, we look at studies that aim to link sources of attention to its neuronal signatures elsewhere in the brain.
Affiliation(s)
- Kelsey Clark
- Montana State University, Bozeman, MT, United States
- Ryan Fox Squire
- Stanford University, Stanford, CA, United States; Lumos Labs, San Francisco, CA, United States
- Yaser Merrikhi
- School of Cognitive Sciences (SCS), Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
|
38
|
Müller S, Rothermund K, Wentura D. Relevance drives attention: Attentional bias for gain- and loss-related stimuli is driven by delayed disengagement. Q J Exp Psychol (Hove) 2015; 69:752-63. [PMID: 25980956 DOI: 10.1080/17470218.2015.1049624] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Attentional bias to gain- and loss-related stimuli was investigated in a dot-probe task. We used coloured stimuli that had acquired their valence during the experiment by signalling the chance to either win or lose points in a game task. Replicating previous findings with the additional singleton paradigm, we found attentional bias effects for both gain- and loss-related colours. The effects were due to delayed disengagement from valent stimuli, especially if they were positive, and could not be explained by nonattentional processes like behavioural freezing. Our findings suggest that stimuli signalling opportunities and dangers hold attention, supporting a general motivational relevance principle of the orienting of attention.
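The disengagement measure behind such dot-probe findings can be sketched numerically. The following is a hypothetical illustration, not the authors' code or data: the reaction times and the function name `bias_score` are invented; only the paradigm's logic (a positive score means attention lingered at the valent cue) is carried over.

```python
# Hypothetical sketch of a dot-probe attentional bias score (invented data).
# RTs are in milliseconds; "valent" trials are congruent (the probe replaces
# the gain/loss-related colour), "neutral" trials are incongruent (the probe
# replaces the neutral colour).

def bias_score(trials):
    congruent = [rt for loc, rt in trials if loc == "valent"]
    incongruent = [rt for loc, rt in trials if loc == "neutral"]
    # Positive score: responses were faster when the probe appeared at the
    # valent location, i.e. attention lingered there (delayed disengagement).
    return sum(incongruent) / len(incongruent) - sum(congruent) / len(congruent)

trials = [("valent", 420), ("valent", 410), ("neutral", 455), ("neutral", 445)]
print(bias_score(trials))  # 35.0
```

A raw bias score alone cannot distinguish faster engagement from slower disengagement, which is why studies like this one add conditions designed to attribute the bias specifically to delayed disengagement.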
Affiliation(s)
- Sascha Müller
- Faculty of Human Sciences, Department of Psychology, General Psychology, Universität der Bundeswehr München, Neubiberg, Germany
- Klaus Rothermund
- Department of Psychology, Friedrich-Schiller-Universität Jena, Jena, Germany
- Dirk Wentura
- Department of Psychology, Saarland University, Saarbrücken, Germany
|
39
|
Buhmann C, Kraft S, Hinkelmann K, Krause S, Gerloff C, Zangemeister WH. Visual Attention and Saccadic Oculomotor Control in Parkinson's Disease. Eur Neurol 2015; 73:283-93. [PMID: 25925289 DOI: 10.1159/000381335] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2014] [Accepted: 02/22/2015] [Indexed: 11/19/2022]
Abstract
BACKGROUND In patients with Parkinson's disease (PD), we aimed to differentiate the relations among selective visual attention, deficits in the programming and dynamics of saccadic eye movements during target search, and hand-reaction and hand-movement times. Visual attention is crucial for concentrating selectively on one aspect of the visual field while ignoring other aspects. Eye movements are anatomically and functionally related to mechanisms of visual attention, so saccadic dysfunction might confound selective visual attention in PD.
METHODS We studied visual selective attention in 22 medicated PD patients (clinical ON state, mild to moderate disease severity) and 22 age-matched controls and looked for possible interference from oculomotor deficits. Two tasks were compared: free viewing of photographs and time-optimal visual search for a hidden target. Visual search times (VST), task-related saccade dynamics, and hand-reaction and hand-movement times were analyzed.
RESULTS In the free viewing task, mildly to moderately affected PD patients did not differ statistically from healthy subjects with respect to saccade dynamics. However, patients differed significantly from healthy subjects in the time-optimal visual search task, with 25% lower rates of successful searches. Hand-movement reaction time did not differ between groups, whereas hand-movement execution time was significantly prolonged in PD patients.
CONCLUSION Saccadic oculomotor control and hand-movement reaction times were intact in our less severely affected, treated PD patients, whereas visual selective attention was not. The markedly reduced successful search rate might be related to disturbed programming and delayed execution of saccades during time-optimal visual search, due to decreased serial-order sequential generation of saccades.
Affiliation(s)
- Carsten Buhmann
- Department of Neurology, University Clinic Hamburg-Eppendorf, Hamburg, Germany
|
40
|
Oculomotor Capture by New and Unannounced Color Singletons during Visual Search. Atten Percept Psychophys 2015; 77:1529-43. [DOI: 10.3758/s13414-015-0888-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
41
|
Burigo M, Knoeferle P. Visual attention during spatial language comprehension. PLoS One 2015; 10:e0115758. [PMID: 25607540 PMCID: PMC4301815 DOI: 10.1371/journal.pone.0115758] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2014] [Accepted: 12/01/2014] [Indexed: 11/18/2022] Open
Abstract
Spatial terms such as “above”, “in front of”, and “on the left of” are all essential for describing the location of one object relative to another object in everyday communication. Apprehending such spatial relations involves relating linguistic to object representations by means of attention. This requires at least one attentional shift, and models such as the Attentional Vector Sum (AVS) predict the direction of that attention shift, from the sausage to the box for spatial utterances such as “The box is above the sausage”. To the extent that this prediction generalizes to overt gaze shifts, a listener’s visual attention should shift from the sausage to the box. However, listeners tend to rapidly look at referents in their order of mention and even anticipate them based on linguistic cues, a behavior that predicts a converse attentional shift from the box to the sausage. Four eye-tracking experiments assessed the role of overt attention in spatial language comprehension by examining to what extent visual attention is guided by words in the utterance and to what extent it also shifts “against the grain” of the unfolding sentence. The outcome suggests that comprehenders’ visual attention is predominantly guided by their interpretation of the spatial description. Visual shifts against the grain occurred only when comprehenders had some extra time, and their absence did not affect comprehension accuracy. However, the timing of this reverse gaze shift on a trial correlated with that trial’s verification time. Thus, while the timing of these gaze shifts is subtly related to the verification time, their presence is not necessary for successful verification of spatial relations.
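The AVS prediction can be made concrete with a minimal sketch. This is a simplified reading of the Attentional Vector Sum idea, not the published implementation: the landmark geometry, the exponential attention weighting, and the decay constant `sigma` are assumptions for illustration.

```python
# Simplified AVS-style sketch (not the published model): attention over the
# landmark is weighted by distance from an attentional focus (here, the
# landmark point nearest the trajector), and weighted direction vectors from
# landmark points to the trajector are summed into one overall direction.

import math

def avs_direction(landmark_pts, trajector, sigma=1.0):
    focus = min(landmark_pts, key=lambda p: math.dist(p, trajector))
    vx = vy = 0.0
    for p in landmark_pts:
        w = math.exp(-math.dist(p, focus) / sigma)  # attention weight
        vx += w * (trajector[0] - p[0])
        vy += w * (trajector[1] - p[1])
    return math.degrees(math.atan2(vy, vx))  # 90 degrees = canonical "above"

landmark = [(0, 0), (1, 0), (2, 0)]               # a horizontal landmark object
print(round(avs_direction(landmark, (1, 2)), 1))  # 90.0: directly "above"
```

For “The box is above the sausage”, the deviation of this summed vector from vertical is what the model relates to acceptability; the attention shift it implies runs from landmark to trajector, i.e. from the sausage to the box.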
Affiliation(s)
- Michele Burigo
- Department of Linguistics, University of Bielefeld, Bielefeld, Germany and Language & Cognition Research Group, Cognitive Interaction Technology—Center of Excellence (CITEC), University of Bielefeld, Bielefeld, Germany
- Pia Knoeferle
- Department of Linguistics, University of Bielefeld, Bielefeld, Germany and Language & Cognition Research Group, Cognitive Interaction Technology—Center of Excellence (CITEC), University of Bielefeld, Bielefeld, Germany
|
43
|
Moore SR, Fu Y, Depue RA. Social traits modulate attention to affiliative cues. Front Psychol 2014; 5:649. [PMID: 25009524 PMCID: PMC4068200 DOI: 10.3389/fpsyg.2014.00649] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2014] [Accepted: 06/06/2014] [Indexed: 11/24/2022] Open
Abstract
Neurobehavioral models of personality suggest that the salience assigned to particular classes of stimuli varies as a function of traits that reflect both the activity of neurobiological encoding and relevant social experience. In turn, this joint influence modulates the extent to which salience influences attentional processes, and hence learning about and responding to those stimuli. Applying this model to the domain of social valuation, we assessed the differential effects on attentional guidance by affiliative cues of (i) a higher-order temperament trait (Social Closeness) and (ii) attachment style in a sample of 57 women. Attention to affiliative pictures paired with either incentive or neutral pictures was assessed using camera-based eye tracking. Trait Social Closeness and attachment avoidance interacted to modulate fixation frequency on affiliative but not on incentive pictures, suggesting that both traits influence the salience assigned to affiliative cues specifically.
Affiliation(s)
- Sarah R. Moore
- Department of Human Development, Cornell University, Ithaca, NY, USA
|
44
|
Aricò P, Aloise F, Schettini F, Salinari S, Mattia D, Cincotti F. Influence of P300 latency jitter on event related potential-based brain–computer interface performance. J Neural Eng 2014; 11:035008. [DOI: 10.1088/1741-2560/11/3/035008] [Citation(s) in RCA: 47] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
|
45
|
Roberts W, Miller MA, Weafer J, Fillmore MT. Heavy drinking and the role of inhibitory control of attention. Exp Clin Psychopharmacol 2014; 22:133-40. [PMID: 24611837 PMCID: PMC4082663 DOI: 10.1037/a0035317] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
Alcohol can disrupt goal-directed behavior by impairing the ability to inhibit attentional shifts toward salient but goal-irrelevant stimuli. Individuals who are highly sensitive to this effect of the drug may be at increased risk for problematic drinking, especially among those whose attention is drawn to alcohol-related cues in the environment (i.e., attentional bias). The current study examined the acute impairing effect of alcohol on inhibitory mechanisms of attentional control in a group of healthy social drinkers. We then examined whether increased sensitivity to this disinhibiting effect of alcohol was associated with heavy drinking, especially among those who have an attentional bias toward alcohol-related stimuli. Eighty nondependent social drinkers performed a delayed ocular response task that measured their inhibitory control of attention by their ability to suppress attentional shifts to irrelevant stimuli. Attentional bias was measured using a visual probe task. Inhibitory control was assessed following a moderate dose of alcohol (0.64 g/kg) and a placebo. Participants made more inhibitory failures (i.e., premature saccades) following 0.64 g/kg alcohol compared with placebo, and the relation of this effect to their drinking habits depended on the level of the drinker's attentional bias to alcohol-related stimuli. Among drinkers with higher attentional bias, greater impairment of inhibitory control was associated with heavier drinking. In contrast, drinkers with little or no attentional bias showed no relation between their sensitivity to the disinhibiting effects of alcohol and drinking habits. These findings have implications for understanding how heightened incentive-salience of alcohol cues and impaired attentional control can interactively contribute to excessive alcohol use.
Affiliation(s)
- Jessica Weafer
- University of Chicago, Department of Psychiatry and Behavioral Neuroscience
|
46
|
Neider MB, Ang CW, Voss MW, Carbonari R, Kramer AF. Training and transfer of training in rapid visual search for camouflaged targets. PLoS One 2013; 8:e83885. [PMID: 24386301 PMCID: PMC3873983 DOI: 10.1371/journal.pone.0083885] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2013] [Accepted: 11/18/2013] [Indexed: 12/02/2022] Open
Abstract
Previous examinations of search under camouflage conditions have reported that performance improves with training and that training can engender near perfect transfer to similar, but novel camouflage-type displays [1]. What remains unclear, however, are the cognitive mechanisms underlying these training improvements and transfer benefits. On the one hand, improvements and transfer benefits might be associated with higher-level overt strategy shifts, such as through the restriction of eye movements to target-likely (background) display regions. On the other hand, improvements and benefits might be related to the tuning of lower-level perceptual processes, such as figure-ground segregation. To decouple these competing possibilities we had one group of participants train on camouflage search displays and a control group train on non-camouflage displays. Critically, search displays were rapidly presented, precluding eye movements. Before and following training, all participants completed transfer sessions in which they searched novel displays. We found that search performance on camouflage displays improved with training. Furthermore, participants who trained on camouflage displays suffered no performance costs when searching novel displays following training. Our findings suggest that training to break camouflage is related to the tuning of perceptual mechanisms and not strategic shifts in overt attention.
Affiliation(s)
- Mark B. Neider
- Department of Psychology, University of Central Florida, Orlando, Florida, United States of America
- Cher Wee Ang
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, Urbana, Illinois, United States of America
- Michelle W. Voss
- Department of Psychology, University of Iowa, Iowa City, Iowa, United States of America
- Ronald Carbonari
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, Urbana, Illinois, United States of America
- Arthur F. Kramer
- Beckman Institute for Advanced Science and Technology, University of Illinois Urbana-Champaign, Urbana, Illinois, United States of America
|
47
|
Yang Z, Jackson T, Chen H. Effects of chronic pain and pain-related fear on orienting and maintenance of attention: an eye movement study. J Pain 2013; 14:1148-57. [DOI: 10.1016/j.jpain.2013.04.017] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/23/2013] [Revised: 04/21/2013] [Accepted: 04/23/2013] [Indexed: 12/28/2022]
|
48
|
Squire RF, Noudoost B, Schafer RJ, Moore T. Prefrontal Contributions to Visual Selective Attention. Annu Rev Neurosci 2013; 36:451-66. [DOI: 10.1146/annurev-neuro-062111-150439] [Citation(s) in RCA: 195] [Impact Index Per Article: 17.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Affiliation(s)
- Tirin Moore
- Department of Neurobiology and Howard Hughes Medical Institute, Stanford University School of Medicine, Stanford, California 94305
|
49
|
Dudschig C, Souman J, Lachmair M, de la Vega I, Kaup B. Reading "sun" and looking up: the influence of language on saccadic eye movements in the vertical dimension. PLoS One 2013; 8:e56872. [PMID: 23460816 PMCID: PMC3584096 DOI: 10.1371/journal.pone.0056872] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2012] [Accepted: 01/15/2013] [Indexed: 11/30/2022] Open
Abstract
Traditionally, language processing has been attributed to a separate system in the brain, which supposedly works in an abstract propositional manner. However, there is increasing evidence suggesting that language processing is strongly interrelated with sensorimotor processing. Evidence for such an interrelation is typically drawn from interactions between language and perception or action. In the current study, the effect of words that refer to entities in the world with a typical location (e.g., sun, worm) on the planning of saccadic eye movements was investigated. Participants had to perform a lexical decision task on visually presented words and non-words. They responded by moving their eyes to a target in an upper (lower) screen position for a word (non-word) or vice versa. Eye movements were faster to locations compatible with the word's referent in the real world. These results provide evidence for the importance of linguistic stimuli in directing eye movements, even if the words do not directly convey directional information.
Affiliation(s)
- Carolin Dudschig
- Department of Psychology, University of Tübingen, Tübingen, Germany.
|
50
|
Borji A, Itti L. State-of-the-art in visual attention modeling. IEEE Trans Pattern Anal Mach Intell 2013; 35:185-207. [PMID: 22487985 DOI: 10.1109/tpami.2012.89] [Citation(s) in RCA: 430] [Impact Index Per Article: 39.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Modeling visual attention, particularly stimulus-driven, saliency-based attention, has been a very active research area over the past 25 years. Many different models of attention are now available which, aside from lending theoretical contributions to other fields, have demonstrated successful applications in computer vision, mobile robotics, and cognitive systems. Here we review, from a computational perspective, the basic concepts of attention implemented in these models. We present a taxonomy of nearly 65 models, which provides a critical comparison of approaches, their capabilities, and shortcomings. In particular, 13 criteria derived from behavioral and computational studies are formulated for qualitative comparison of attention models. Furthermore, we address several challenging issues with models, including biological plausibility of the computations, correlation with eye movement datasets, bottom-up and top-down dissociation, and constructing meaningful performance measures. Finally, we highlight current research trends in attention modeling and provide insights for future research.
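The stimulus-driven core that most of the reviewed models share can be caricatured in a few lines. The following is a toy sketch under stated assumptions (a single intensity channel, one center-surround scale, an invented test image), not any specific model from the taxonomy.

```python
# Toy center-surround saliency sketch in the spirit of this model family
# (not any specific published implementation). Saliency at each pixel is its
# contrast against the average of a surrounding neighbourhood; the saliency
# peak marks the location a winner-take-all stage would attend first.

import numpy as np

def saliency_map(image, surround=3):
    h, w = image.shape
    sal = np.zeros_like(image, dtype=float)
    for y in range(h):
        for x in range(w):
            s = image[max(0, y - surround):y + surround + 1,
                      max(0, x - surround):x + surround + 1]
            sal[y, x] = abs(image[y, x] - s.mean())  # center-surround contrast
    return sal

img = np.zeros((9, 9))
img[4, 4] = 1.0  # a single bright "pop-out" item on a uniform background
winner = tuple(int(i) for i in np.unravel_index(saliency_map(img).argmax(), img.shape))
print(winner)  # (4, 4)
```

Full models of this family compute such maps over several feature channels (intensity, colour, orientation) and spatial scales, then combine them into a single master saliency map.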
Affiliation(s)
- Ali Borji
- Department of Computer Science, University of Southern California, 3641 Watt Way, Los Angeles, CA 90089, USA.
|