1
Gordon SM, Dalangin B, Touryan J. Saccade size predicts onset time of object processing during visual search of an open world virtual environment. Neuroimage 2024; 298:120781. PMID: 39127183. DOI: 10.1016/j.neuroimage.2024.120781.
Abstract
OBJECTIVE To date, the vast majority of research in the visual neurosciences has been forced to adopt a highly constrained perspective of the vision system in which stimuli are processed in an open-loop reactive fashion (i.e., abrupt stimulus presentation followed by an evoked neural response). While such constraints enable high construct validity for neuroscientific investigation, the primary outcome has been a reductionistic approach to isolating the component processes of visual perception. In electrophysiology, of the many neural processes studied under this rubric, the most well-known is, arguably, the P300 evoked response. Relatively little is known, however, about the real-world corollary of this component in free-viewing paradigms where visual stimuli are connected to neural function in a closed loop. While growing evidence suggests that neural activity analogous to the P300 does occur in such paradigms, it remains an open question when this response occurs and what behavioral or environmental factors could be used to isolate it. APPROACH The current work uses convolutional networks to decode neural signals during a free-viewing visual search task in a closed-loop paradigm within an open-world virtual environment. From the decoded activity we construct fixation-locked response profiles that enable estimation of the variable latency of any P300 analogue around the moment of fixation. We then use these estimates to investigate which factors best reduce variable latency and, thus, predict the onset time of the response. We consider measurable, search-related factors encompassing top-down (i.e., goal-driven) and bottom-up (i.e., stimulus-driven) processes, such as fixation duration and salience. We also consider saccade size as an intermediate factor reflecting the integration of these two systems. MAIN RESULTS The results show that, of these factors, only saccade size reliably determines the onset time of P300-analogous activity for this task. Specifically, we find that for large saccades the variability in response onset is small enough to enable analysis using traditional ensemble averaging methods. SIGNIFICANCE The results show that P300-analogous activity does occur during closed-loop, free-viewing visual search while highlighting distinct differences between the open-loop version of this response and its real-world analogue. The results also further establish saccades, and saccade size, as a key factor in real-world visual processing.
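The approach described above — decoding neural signals and building fixation-locked response profiles to estimate onset latency — can be illustrated with a minimal sketch. The snippet below is not the authors' pipeline; it assumes a 1-D decoder output time series, fixation onset times in seconds, and an arbitrary threshold rule, all of which are illustrative.

```python
# Minimal sketch (not the authors' pipeline): build fixation-locked response
# profiles from a decoder output time series and estimate per-fixation onset
# latency as the first post-fixation threshold crossing. The sampling rate,
# window, and threshold are assumptions for the example.
import numpy as np

FS = 256                      # sampling rate in Hz (assumed)
PRE, POST = 0.2, 0.8          # epoch window around fixation onset, in seconds

def fixation_locked_profiles(decoder_trace, fixation_onsets_s):
    """Epoch a 1-D decoder output around each fixation onset."""
    pre, post = int(PRE * FS), int(POST * FS)
    epochs = []
    for t in fixation_onsets_s:
        i = int(t * FS)
        if i - pre >= 0 and i + post < len(decoder_trace):
            epochs.append(decoder_trace[i - pre:i + post])
    return np.asarray(epochs)   # shape: (n_fixations, pre + post)

def onset_latencies(epochs, threshold=0.5):
    """Latency (s) of the first post-fixation sample exceeding threshold."""
    pre = int(PRE * FS)
    latencies = []
    for ep in epochs:
        above = np.flatnonzero(ep[pre:] > threshold)
        latencies.append(above[0] / FS if above.size else np.nan)
    return np.asarray(latencies)

# Example with synthetic data: a noisy trace with injected "responses".
rng = np.random.default_rng(0)
trace = rng.normal(0.2, 0.05, FS * 60)          # one minute of decoder output
fixations = np.arange(1.0, 59.0, 1.5)           # fixation onsets in seconds
for t in fixations:
    i = int((t + 0.3) * FS)                     # response ~300 ms after fixation
    trace[i:i + FS // 10] += 0.6
eps = fixation_locked_profiles(trace, fixations)
print("median onset latency (s):", np.nanmedian(onset_latencies(eps)))
```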
Affiliation(s)
- Jonathan Touryan
- DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, USA
2
Yıldırım Demirdöğen E, Akıncı MA, Bozkurt A, Turan B, Esin İS, Donbaloğlu MA, Bingöl İ, Tümüklü Özyer G, Kılıç U, Dursun OB. Objective Parameters in Attention Deficit Hyperactivity Disorder: Eye and Head Movements. J Atten Disord 2024; 28:982-991. PMID: 38214185. DOI: 10.1177/10870547231221746.
Abstract
OBJECTIVE We aimed to evaluate eye and head movements as objective parameters in ADHD. METHOD While the children watched a course video task that included relevant (teacher and smart board) and irrelevant (any regions outside the relevant area) areas of interest, their eye movements were evaluated through eye tracking, and video recordings were made simultaneously. Head position was estimated from the video recordings. We compared the proportion of total fixation duration on areas of interest (PFDAOI) and the saccade count, amplitude, and velocity for eye movements, and the total number of head movements and the angular change of head movement along the x, y, and z axes for head movements. RESULTS Children with ADHD had a lower PFDAOI on the relevant area and made more saccades and head movements. The angular change of head movement in the x-axis was higher in the ADHD group. CONCLUSION Eye and head movements may be particularly useful in the assessment of ADHD.
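For illustration only (not the study's analysis code), the sketch below shows how two of the reported measures might be computed: the proportion of total fixation duration on an area of interest (PFDAOI) and a per-axis summary of angular head-movement change. The record layouts and field names are assumptions.

```python
# Illustrative sketch of PFDAOI and summed absolute angular head-movement
# change per axis; the data structures are assumed for the example.
import numpy as np

def pfd_aoi(fixations, aoi="relevant"):
    """fixations: list of (aoi_label, duration_ms) tuples."""
    total = sum(d for _, d in fixations)
    on_aoi = sum(d for label, d in fixations if label == aoi)
    return on_aoi / total if total else 0.0

def angular_change(head_angles_deg):
    """head_angles_deg: array of shape (n_frames, 3) with x, y, z rotations.
    Returns the summed absolute frame-to-frame change per axis."""
    return np.abs(np.diff(head_angles_deg, axis=0)).sum(axis=0)

fixations = [("relevant", 420), ("irrelevant", 180), ("relevant", 650)]
head = np.cumsum(np.random.default_rng(1).normal(0, 0.5, (300, 3)), axis=0)
print("PFDAOI (relevant):", round(pfd_aoi(fixations), 3))
print("angular change x/y/z (deg):", angular_change(head).round(1))
```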
Affiliation(s)
- Bahadır Turan
- Karadeniz Technical University Medicine Faculty, Trabzon, Turkey
- İsa Bingöl
- Bayburt University Distance Education Application and Research Center, Turkey
- Uğur Kılıç
- Atatürk University Computer Engineering Department, Erzurum, Turkey
- Onur Burak Dursun
- Turkish Ministry of Health Autism, Mental Special Needs and Rare Diseases Department, Ankara, Turkey
3
Mirror blindness: Our failure to recognize the target in search for mirror-reversed shapes. Atten Percept Psychophys 2023; 85:418-437. PMID: 36653521. DOI: 10.3758/s13414-022-02641-w.
Abstract
It is well known that visual search for a mirror target (i.e., a horizontally flipped item) is more difficult than search for other-oriented items (e.g., vertically flipped items). Previous studies have typically attributed the costs of mirror search to early, attention-guiding processes but could not rule out contributions from later processes. In the present study we used eye tracking to distinguish between early, attention-guiding processes and later target identification processes. The results of four experiments revealed a marked human weakness in identifying mirror targets: Observers appear to frequently fail to classify a mirror target as a target on first fixation and to continue with search even after having directly looked at the target. Awareness measures corroborated that the location of a mirror target could not be reported above chance level after it had been fixated once. This mirror blindness effect explained a large proportion (45-87%) of the overall costs of mirror search, suggesting that part of the difficulty with mirror search is rooted in later, object identification processes (not attentional guidance). Mirror blindness was significantly reduced, but not completely eliminated, when both the target and the non-targets were held constant, which shows that even perfect top-down knowledge does not abolish the effect. The finding that non-target certainty reduced mirror blindness suggests that object identification is not solely achieved by comparing a selected item to a target template. These results demonstrate that the templates that guide search toward targets are not identical to the templates used to conclusively identify those targets.
4
Ernst D, Wolfe JM. How fixation durations are affected by search difficulty manipulations. Visual Cognition 2022. DOI: 10.1080/13506285.2022.2063465.
Affiliation(s)
- Daniel Ernst
- Brigham & Women’s Hospital, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
- Bielefeld University, Bielefeld, Germany
- Jeremy M. Wolfe
- Brigham & Women’s Hospital, Boston, MA, United States
- Harvard Medical School, Boston, MA, United States
5
Enders LR, Smith RJ, Gordon SM, Ries AJ, Touryan J. Gaze Behavior During Navigation and Visual Search of an Open-World Virtual Environment. Front Psychol 2021; 12:681042. PMID: 34434140. PMCID: PMC8380848. DOI: 10.3389/fpsyg.2021.681042.
Abstract
Eye tracking has been an essential tool within the vision science community for many years. However, the majority of studies involving eye-tracking technology employ a relatively passive approach through the use of static imagery, prescribed motion, or video stimuli. This is in contrast to our everyday interaction with the natural world, where we navigate our environment while actively seeking and using task-relevant visual information. For this reason, an increasing number of vision researchers are employing virtual environment platforms, which offer interactive, realistic visual environments while maintaining a substantial level of experimental control. Here, we recorded eye movement behavior while subjects freely navigated through a rich, open-world virtual environment. Within this environment, subjects completed a visual search task in which they were asked to find and count occurrences of specific targets among numerous distractor items. We assigned each participant to one of four target conditions: Humvees, motorcycles, aircraft, or furniture. Our results show a statistically significant relationship between gaze behavior and target objects across target conditions, with increased visual attention toward assigned targets. Specifically, we see an increase in the number of fixations and an increase in dwell time on target relative to distractor objects. In addition, we included a divided attention task to investigate how search changed with the addition of a secondary task. With increased cognitive load, subjects slowed their speed, decreased gaze on objects, and increased the number of objects scanned in the environment. Overall, our results confirm previous findings and support the use of complex virtual environments for active visual search experimentation, maintaining a high level of precision in the quantification of gaze information and visual attention. This study contributes to our understanding of how individuals search for information in a naturalistic (open-world) virtual environment. Likewise, our paradigm provides an intriguing look into the heterogeneity of individual behaviors when completing an untimed visual search task while actively navigating.
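As a minimal sketch (not the authors' pipeline), the snippet below shows how fixations labeled with the object class they land on could be summarized into per-class fixation counts and dwell times, the two gaze measures reported above. The data format is an assumption.

```python
# Illustrative summary of fixation counts and dwell time per object class
# (e.g., target vs. distractor); the record layout is assumed.
from collections import defaultdict

def summarize_gaze(fixations):
    """fixations: iterable of dicts with keys 'object_class' and 'duration_ms'."""
    counts = defaultdict(int)
    dwell = defaultdict(float)
    for fx in fixations:
        counts[fx["object_class"]] += 1
        dwell[fx["object_class"]] += fx["duration_ms"]
    return dict(counts), dict(dwell)

fixations = [
    {"object_class": "target", "duration_ms": 380},
    {"object_class": "distractor", "duration_ms": 210},
    {"object_class": "target", "duration_ms": 460},
]
counts, dwell = summarize_gaze(fixations)
print(counts)   # {'target': 2, 'distractor': 1}
print(dwell)    # {'target': 840.0, 'distractor': 210.0}
```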
Affiliation(s)
- Anthony J Ries
- DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, United States; Warfighter Effectiveness Research Center, U.S. Air Force Academy, Colorado Springs, CO, United States
- Jonathan Touryan
- DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, United States
6
Lee TL, Yeung MK, Sze SL, Chan AS. Computerized Eye-Tracking Training Improves the Saccadic Eye Movements of Children with Attention-Deficit/Hyperactivity Disorder. Brain Sci 2020; 10:E1016. PMID: 33371236. PMCID: PMC7766133. DOI: 10.3390/brainsci10121016.
Abstract
Abnormal saccadic eye movements, such as longer anti-saccade latency and lower pro-saccade accuracy, are common in children with attention-deficit/hyperactivity disorder (ADHD). The present study aimed to investigate the effectiveness of computerized eye-tracking training on improving saccadic eye movements in children with ADHD. Eighteen children with ADHD (mean age = 8.8 years, 10 males) were recruited and assigned to either the experimental (n = 9) or control group (n = 9). The experimental group underwent an accumulated 240 min of eye-tracking training within two weeks, whereas the control group engaged in web game playing for the same amount of time. Saccadic performances were assessed using the anti- and pro-saccade tasks before and after training. Compared to the baseline, only the children who underwent the eye-tracking training showed significant improvements in saccade latency and accuracy in the anti- and pro-saccade tasks, respectively. In contrast, the control group exhibited no significant changes. These preliminary findings support the use of eye-tracking training as a safe non-pharmacological intervention for improving the saccadic eye movements of children with ADHD.
Affiliation(s)
- Tsz Lok Lee
- Department of Psychology, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
- Michael K. Yeung
- Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, China
- Sophia L. Sze
- Department of Psychology, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
- Research Center for Neuropsychological Well-Being, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
- Agnes S. Chan
- Department of Psychology, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
- Research Center for Neuropsychological Well-Being, The Chinese University of Hong Kong, Shatin, New Territories, Hong Kong, China
7
Abstract
Research and theories on visual search often focus on visual guidance to explain differences in search. Guidance is the tuning of attention to target features and facilitates search because distractors that do not show target features can be more effectively ignored (skipping). As a general rule, the better the guidance, the more efficient the search. Correspondingly, behavioral experiments have often interpreted differences in efficiency as reflecting varying degrees of attentional guidance. But other factors, such as the time spent on processing a distractor (dwelling) or multiple visits to the same stimulus in a search display (revisiting), are also involved in determining search efficiency. While there is some research showing that dwelling and revisiting modulate search times in addition to skipping, the corresponding studies used complex naturalistic and category-defined stimuli. The present study tests whether results from prior research can be generalized to simpler stimuli, where target-distractor similarity, a strong factor influencing search performance, can be manipulated in a detailed fashion. Thus, in the present study, simple stimuli with varying degrees of target-distractor similarity were used to deliver conclusive evidence for the contribution of dwelling and revisiting to search performance. The results have theoretical and methodological implications: They imply that visual search models should not treat dwelling and revisiting as constants across varying levels of search efficiency and that behavioral search experiments are equivocal with respect to the processing mechanisms underlying more versus less efficient search. We also suggest that eye-tracking methods may be used to disentangle different search components such as skipping, dwelling, and revisiting.
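To make the three components concrete, here is an illustrative decomposition (not the study's code) of a fixation sequence into skipped items, mean dwell per visit, and revisits; the item identifiers and durations are made-up example data.

```python
# Decompose a fixation sequence into skipping, dwelling, and revisiting.
from itertools import groupby

def decompose(fixation_items, durations_ms, all_items):
    """fixation_items: item ID per fixation, in temporal order."""
    # Collapse consecutive fixations on the same item into single visits.
    visits = [(item, sum(d for _, d in grp))
              for item, grp in groupby(zip(fixation_items, durations_ms),
                                       key=lambda x: x[0])]
    visited = {item for item, _ in visits}
    skipped = set(all_items) - visited
    revisits = len(visits) - len(visited)          # visits beyond the first
    mean_dwell = sum(d for _, d in visits) / len(visits) if visits else 0.0
    return skipped, mean_dwell, revisits

items = ["A", "A", "C", "B", "C"]                  # fixated items, in order
durs = [120, 180, 250, 200, 230]
print(decompose(items, durs, all_items={"A", "B", "C", "D"}))
# ({'D'}, 245.0, 1)  -> D skipped, mean dwell per visit, one revisit (to C)
```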
8
MacInnes WJ, Jóhannesson ÓI, Chetverikov A, Kristjánsson Á. No Advantage for Separating Overt and Covert Attention in Visual Search. Vision (Basel) 2020; 4:E28. PMID: 32443506. PMCID: PMC7356832. DOI: 10.3390/vision4020028.
Abstract
We move our eyes roughly three times every second while searching complex scenes, but covert attention helps to guide where we allocate those overt fixations. Covert attention may be allocated reflexively or voluntarily, and it speeds the rate of information processing at the attended location. Reducing access to covert attention hinders performance, but it is not known to what degree the locus of covert attention is tied to the current gaze position. We compared visual search performance in a traditional gaze-contingent display with a second task in which a similarly sized contingent window was controlled with the mouse, allowing a covert aperture to be controlled independently of overt gaze. Larger apertures improved performance for both the mouse- and gaze-contingent trials, suggesting that covert attention was beneficial regardless of control type. We also found evidence that participants used the mouse-controlled aperture somewhat independently of gaze position, suggesting that participants attempted to untether their covert and overt attention when possible. This untethering manipulation, however, resulted in an overall cost to search performance, a result at odds with previous results in a change blindness paradigm. Untethering covert and overt attention may therefore have costs or benefits depending on the task demands in each case.
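A minimal sketch of the contingent-window idea, under assumed array shapes and not taken from the experiment code: only pixels within a given radius of the controlling position (gaze or mouse) remain visible, and everything else is replaced with a neutral gray.

```python
# Apply a circular contingent aperture around the gaze or mouse position.
import numpy as np

def apply_aperture(image, center_xy, radius_px, fill=128):
    """image: (H, W) grayscale array; center_xy: (x, y) of gaze or mouse."""
    h, w = image.shape
    ys, xs = np.ogrid[:h, :w]
    mask = (xs - center_xy[0]) ** 2 + (ys - center_xy[1]) ** 2 <= radius_px ** 2
    out = np.full_like(image, fill)
    out[mask] = image[mask]
    return out

scene = np.random.default_rng(2).integers(0, 256, (480, 640), dtype=np.uint8)
visible = apply_aperture(scene, center_xy=(320, 240), radius_px=60)
print("mean luminance inside window vs. full scene:",
      round(float(visible.mean()), 1), round(float(scene.mean()), 1))
```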
Affiliation(s)
- W. Joseph MacInnes
- School of Psychology, National Research University Higher School of Economics, Moscow 101000, Russia
- Vision Modelling Lab, Faculty of Social Sciences, National Research University Higher School of Economics, Moscow 101000, Russia
- Ómar I. Jóhannesson
- Icelandic Vision Laboratory, Department of Psychology, University of Iceland, 102 Reykjavik, Iceland
- Andrey Chetverikov
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6525 EN Nijmegen, The Netherlands
- Árni Kristjánsson
- School of Psychology, National Research University Higher School of Economics, Moscow 101000, Russia
- Icelandic Vision Laboratory, Department of Psychology, University of Iceland, 102 Reykjavik, Iceland
9
Ernst D, Becker S, Horstmann G. Novelty competes with saliency for attention. Vision Res 2020; 168:42-52. PMID: 32088400. DOI: 10.1016/j.visres.2020.01.004.
Abstract
A highly debated question in attention research is to what extent attention is biased by bottom-up factors such as saliency versus top-down factors governed by the task. Visual search experiments in which participants are briefly familiarized with the task and then, unannounced and for the first time, see a novel stimulus support yet another factor, showing that novel and surprising features attract attention. In the present study, we tested whether gaze behavior, as an indicator of attentional prioritization, can be predicted accurately within displays containing both salient and novel stimuli by means of a priority map that assumes novelty as an additional source of activation. To that end, we conducted a visual search experiment in which a color singleton was presented for the first time in the surprise trial, and we manipulated the color novelty of the remaining non-singletons between participants. In one group, the singleton was the only novel stimulus ("one-new"), whereas in another group, the non-singleton stimuli were likewise novel ("all-new"). The surprise trial was always target-absent and designed such that top-down prioritization of any color was unlikely. The results show that the singleton in the all-new group captured the gaze less strongly, with more early fixations being directed to the novel non-singletons. Overall, the fixation pattern can accurately be explained by noisy priority maps in which saliency and novelty compete for gaze control.
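A toy priority-map sketch in the spirit described above (not the authors' model): saliency and novelty maps are combined additively, Gaussian noise is added, and the predicted first fixation is the peak of the noisy map. The weights, noise level, and number of stimulus locations are illustrative assumptions.

```python
# Toy priority map: additive combination of saliency and novelty plus noise.
import numpy as np

RNG = np.random.default_rng(3)

def predict_fixation(saliency, novelty, w_sal=1.0, w_nov=1.0, noise_sd=0.5):
    """Return the index of the peak of a noisy, additively combined priority map."""
    priority = w_sal * np.asarray(saliency) + w_nov * np.asarray(novelty)
    noisy = priority + RNG.normal(0.0, noise_sd, priority.shape)
    return int(np.argmax(noisy))

# Six stimulus locations; index 2 is the color singleton (salient). In the
# "all-new" condition every location is novel; in "one-new" only the singleton.
saliency = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0])
one_new  = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0])
all_new  = np.ones(6)
for name, novelty in [("one-new", one_new), ("all-new", all_new)]:
    hits = sum(predict_fixation(saliency, novelty) == 2 for _ in range(1000))
    print(name, "- first fixations on the singleton:", hits, "/ 1000")
```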