1. Tamaru Y, Matsushita F, Matsugi A. Tests of abnormal gaze behavior increase the accuracy of mild cognitive impairment assessments. Sci Rep 2024; 14:19512. [PMID: 39174695] [PMCID: PMC11341758] [DOI: 10.1038/s41598-024-70657-3]
Abstract
Tests to detect mild cognitive impairment (MCI) should have high sensitivity and accuracy. Previously, we reported on a cognitive composition test (CCT) that we developed to detect MCI. In this study, we compared gaze behavior parameters, namely, gaze fixation duration and gaze-shift frequency, during the CCT in participants with MCI and healthy controls (HC) to determine whether these parameters would accurately detect MCI in older adults. Participants performed CCT-A, -B, and -C tasks with varying difficulty levels while wearing eye-tracking devices. Performance time, gaze fixation duration, and gaze-shift frequency were analyzed. Receiver operating characteristic curve analysis was performed to assess the identification accuracy. The MCI group was significantly slower in completing the CCT-C task and had a higher gaze-shift frequency into both the sample object space and workspace than the HC group. Gaze fixation duration in the sample object space increased in the MCI group as the CCT became more difficult. Our findings indicated that combining the CCT with performance time and gaze pattern improved the accuracy of distinguishing between individuals with and without MCI and that patients with MCI have abnormal gaze behavior during cognitive tasks. Therefore, evaluation of gaze parameters may improve the accuracy of identifying patients with MCI.
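The receiver operating characteristic (ROC) analysis used in this study to assess identification accuracy can be illustrated with a toy computation. The scores and labels below are hypothetical, not data from the paper; the area under the curve (AUC) is computed via its rank interpretation, i.e., the probability that a randomly chosen MCI case scores higher than a randomly chosen control.

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores above a random negative one.
    Ties count as half a win."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 1.0 means the gaze parameter separates the two groups perfectly; 0.5 means it carries no discriminative information.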
Affiliation(s)
- Yoshiki Tamaru
- Faculty of Rehabilitation, Morinomiya University of Medical Sciences, 1-26-16 Nankokita, Suminoe-ku, Osaka-shi, Osaka, 559-8611, Japan
- Futoshi Matsushita
- Faculty of Rehabilitation, Morinomiya University of Medical Sciences, 1-26-16 Nankokita, Suminoe-ku, Osaka-shi, Osaka, 559-8611, Japan
- Akiyoshi Matsugi
- Faculty of Rehabilitation, Shijonawate-Gakuen University, Hojo 5-11-10, Daito-shi, Osaka, 574-0011, Japan
2. Laurinavichyute A, Ziubanova A, Lopukhina A. Eye-Movement Suppression in the Visual World Paradigm. Open Mind (Camb) 2024; 8:1012-1036. [PMID: 39170794] [PMCID: PMC11338299] [DOI: 10.1162/opmi_a_00157]
Abstract
Eye movements in the visual world paradigm are known to depend not only on linguistic input but also on factors such as task, pragmatic context, and affordances. However, the degree to which eye movements may depend on task rather than on linguistic input is unclear. The present study for the first time tests how task constraints modulate eye movement behavior in the visual world paradigm by probing whether participants could refrain from looking at the referred image. Across two experiments with and without comprehension questions (total N = 159), we found that when participants were instructed to avoid looking at the referred images, the probability of fixating them was reduced from 58% to 18%, while comprehension scores remained high. Although language-mediated eye movements could not be suppressed fully, the degree of possible decoupling of eye movements from language processing suggests that participants can withdraw at least some looks from the referred images when needed. If they do so to different degrees in different experimental conditions, comparisons between conditions might be compromised. We discuss some cases where participants could adopt different viewing behaviors depending on the experimental condition, and provide some tentative ways to test for such differences.
3. Metzger A, Ennis RJ, Doerschner K, Toscani M. Perceptual task drives later fixations and long latency saccades, while early fixations and short latency saccades are more automatic. Perception 2024; 53:501-511. [PMID: 38863405] [PMCID: PMC11318208] [DOI: 10.1177/03010066241253816]
Abstract
We used a simple stimulus, dissociating perceptually relevant information in space, to differentiate between bottom-up and task-driven fixations. Six participants viewed a dynamic scene showing the reaction of an elastic object fixed to the ceiling being hit. In one condition they had to judge the object's stiffness and in the other condition its lightness. The results show that initial fixations tend to land in the centre of an object, independent of the task. After the initial fixation, participants tended to look at task-diagnostic regions. This fixation behaviour correlates with high perceptual performance. Similarly, low-latency saccades lead to fixations that do not depend on the task, whereas higher-latency saccades do.
Affiliation(s)
- Anna Metzger
- Department of Psychology, Faculty of Science and Technology, Bournemouth University, Poole, UK
- Katja Doerschner
- Department of Psychology, Justus Liebig University Giessen, Giessen, Germany
- Matteo Toscani
- Department of Psychology, Faculty of Science and Technology, Bournemouth University, Poole, UK
4. Lavecchia A. Advancing drug discovery with deep attention neural networks. Drug Discov Today 2024; 29:104067. [PMID: 38925473] [DOI: 10.1016/j.drudis.2024.104067]
Abstract
In the dynamic field of drug discovery, deep attention neural networks are revolutionizing our approach to complex data. This review explores the attention mechanism and its extended architectures, including graph attention networks (GATs), transformers, bidirectional encoder representations from transformers (BERT), generative pre-trained transformers (GPTs) and bidirectional and auto-regressive transformers (BART). Delving into their core principles and multifaceted applications, we uncover their pivotal roles in catalyzing de novo drug design, predicting intricate molecular properties and deciphering elusive drug-target interactions. Despite challenges, these attention-based architectures hold unparalleled promise to drive transformative breakthroughs and accelerate progress in pharmaceutical research.
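The attention mechanism at the core of the architectures reviewed here (GATs, transformers, BERT, GPT, BART) can be sketched in a few lines. This is a minimal single-query version of scaled dot-product attention with toy vectors; real implementations operate on batched matrices with learned query/key/value projections and multiple heads.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector:
    score each key against the query, softmax the scores, and
    return the weighted average of the value vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(dim)]
```

With a query strongly aligned to one key, the output is dominated by that key's value; with an uninformative query, the output is a uniform average of the values.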
Affiliation(s)
- Antonio Lavecchia
- Drug Discovery Laboratory, Department of Pharmacy, University of Napoli Federico II, I-80131 Naples, Italy
5. Ghiani A, Amelink D, Brenner E, Hooge ITC, Hessels RS. When knowing the activity is not enough to predict gaze. J Vis 2024; 24:6. [PMID: 38984899] [PMCID: PMC11238878] [DOI: 10.1167/jov.24.7.6]
Abstract
It is reasonable to assume that where people look in the world is largely determined by what they are doing. The reasoning is that the activity determines where it is useful to look at each moment in time. Assuming that it is vital to accurately judge the positions of the steps when navigating a staircase, it is surprising that people differ a lot in the extent to which they look at the steps. Apparently, some people consider the accuracy of peripheral vision, predictability of the step size, and feeling the edges of the steps with their feet to be good enough. If so, occluding part of the view of the staircase and making it more important to place one's feet gently might make it more beneficial to look directly at the steps before stepping onto them, so that people will more consistently look at many steps. We tested this idea by asking people to walk on staircases, either with or without a tray with two cups of water on it. When carrying the tray, people walked more slowly, but they shifted their gaze across steps in much the same way as they did when walking without the tray. They did not look at more steps. There was a clear positive correlation between the fraction of steps that people looked at when walking with and without the tray. Thus, the variability in the extent to which people look at the steps persists when one makes walking on the staircase more challenging.
Affiliation(s)
- Andrea Ghiani
- Department of Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Daan Amelink
- Department of Experimental Psychology, Utrecht University, Utrecht, The Netherlands
- Eli Brenner
- Department of Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Ignace T C Hooge
- Department of Experimental Psychology, Utrecht University, Utrecht, The Netherlands
- Roy S Hessels
- Department of Experimental Psychology, Utrecht University, Utrecht, The Netherlands
6. Sharafeldin A, Imam N, Choi H. Active sensing with predictive coding and uncertainty minimization. Patterns (N Y) 2024; 5:100983. [PMID: 39005491] [PMCID: PMC11240181] [DOI: 10.1016/j.patter.2024.100983]
Abstract
We present an end-to-end architecture for embodied exploration inspired by two biological computations: predictive coding and uncertainty minimization. The architecture can be applied to any exploration setting in a task-independent and intrinsically driven manner. We first demonstrate our approach in a maze navigation task and show that it can discover the underlying transition distributions and spatial features of the environment. Second, we apply our model to a more complex active vision task, whereby an agent actively samples its visual environment to gather information. We show that our model builds unsupervised representations through exploration that allow it to efficiently categorize visual scenes. We further show that using these representations for downstream classification leads to superior data efficiency and learning speed compared to other baselines while maintaining lower parameter complexity. Finally, the modular structure of our model facilitates interpretability, allowing us to probe its internal mechanisms and representations during exploration.
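The uncertainty-minimization principle behind this kind of exploration can be illustrated with a toy discrete example (our own sketch, not the paper's architecture): an agent holding a belief over possible target locations chooses the probe that minimizes the expected entropy of its posterior, i.e., the action expected to gather the most information.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def expected_posterior_entropy(belief, i):
    """Expected entropy after a noiseless 'is the target at i?' probe."""
    p_hit = belief[i]
    if p_hit >= 1.0:
        return 0.0  # the probe resolves all uncertainty
    # A hit collapses the belief (entropy 0); a miss renormalises the rest.
    miss = [0.0 if j == i else b / (1.0 - p_hit)
            for j, b in enumerate(belief)]
    return (1.0 - p_hit) * entropy(miss)

def best_probe(belief):
    """Choose the probe with the lowest expected remaining uncertainty."""
    return min(range(len(belief)),
               key=lambda i: expected_posterior_entropy(belief, i))
```

With a belief of [0.7, 0.1, 0.1, 0.1], probing the most likely location is best: a hit (probability 0.7) removes all uncertainty, so the expected residual entropy is lowest there.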
Affiliation(s)
- Abdelrahman Sharafeldin
- ML@GT, Georgia Institute of Technology, Atlanta, GA 30332, USA
- School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Nabil Imam
- ML@GT, Georgia Institute of Technology, Atlanta, GA 30332, USA
- School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA
- Hannah Choi
- ML@GT, Georgia Institute of Technology, Atlanta, GA 30332, USA
- School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332, USA
7. Skyberg RJ, Niell CM. Natural visual behavior and active sensing in the mouse. Curr Opin Neurobiol 2024; 86:102882. [PMID: 38704868] [PMCID: PMC11254345] [DOI: 10.1016/j.conb.2024.102882]
Abstract
In the natural world, animals use vision for a wide variety of behaviors not reflected in most laboratory paradigms. Although mice have low-acuity vision, they use their vision for many natural behaviors, including predator avoidance, prey capture, and navigation. They also perform active sensing, moving their head and eyes to achieve behavioral goals and acquire visual information. These aspects of natural vision result in visual inputs and corresponding behavioral outputs that are outside the range of conventional vision studies but are essential aspects of visual function. Here, we review recent studies in mice that have tapped into natural behavior and active sensing to reveal the computational logic of neural circuits for vision.
Affiliation(s)
- Rolf J Skyberg
- Department of Biology and Institute of Neuroscience, University of Oregon, Eugene, OR 97403, USA
- Cristopher M Niell
- Department of Biology and Institute of Neuroscience, University of Oregon, Eugene, OR 97403, USA
8. Ambrad Giovannetti E, Rancz E. Behind mouse eyes: The function and control of eye movements in mice. Neurosci Biobehav Rev 2024; 161:105671. [PMID: 38604571] [DOI: 10.1016/j.neubiorev.2024.105671]
Abstract
The mouse visual system has become the most popular model to study the cellular and circuit mechanisms of sensory processing. However, the importance of eye movements only started to be appreciated recently. Eye movements provide a basis for predictive sensing and deliver insights into various brain functions and dysfunctions. A plethora of knowledge on the central control of eye movements and their role in perception and behaviour arose from work on primates. However, an overview of various eye movements in mice and a comparison to primates is missing. Here, we review the eye movement types described to date in mice and compare them to those observed in primates. We discuss the central neuronal mechanisms for their generation and control. Furthermore, we review the mounting literature on eye movements in mice during head-fixed and freely moving behaviours. Finally, we highlight gaps in our understanding and suggest future directions for research.
Affiliation(s)
- Ede Rancz
- INMED, INSERM, Aix-Marseille University, Marseille, France
9. Kreyenmeier P, Spering M. A unifying framework for studying discrete and continuous human movements. J Neurophysiol 2024; 131:1112-1114. [PMID: 38718413] [DOI: 10.1152/jn.00186.2024]
Affiliation(s)
- Philipp Kreyenmeier
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia, Canada
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Miriam Spering
- Graduate Program in Neuroscience, University of British Columbia, Vancouver, British Columbia, Canada
- Department of Ophthalmology & Visual Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Institute for Computing, Information and Cognitive Systems, University of British Columbia, Vancouver, British Columbia, Canada
- Djavad Mowafaghian Center for Brain Health, University of British Columbia, Vancouver, British Columbia, Canada
- Edwin S.H. Leong Centre for Healthy Aging, University of British Columbia, Vancouver, British Columbia, Canada
10. Rose JM, Ganbold O, Rose AM, Thibodeau JC, Rotaru K. Overcoming resistance to belief revision and correction of misinformation beliefs: psychophysiological and behavioral effects of a counterfactual mindset. Sci Rep 2024; 14:12493. [PMID: 38822014] [PMCID: PMC11143297] [DOI: 10.1038/s41598-024-63230-5]
Abstract
In a series of experiments involving beliefs and misinformation beliefs, we find that individuals who are prompted with a counterfactual mindset are significantly more likely to change their existing beliefs when presented with evidence that contradicts their beliefs. While research finds that beliefs that are considered part of one's identity are highly resistant to change in the face of evidence that challenges these beliefs, four experiments provide evidence that counterfactual generation causes individuals to adjust beliefs and correct misinformation beliefs in response to contradicting evidence. Indeed, we find that a counterfactual mindset was effective in promoting incorporation of accurate facts and causing individuals to revise misinformation beliefs about COVID vaccination safety for a large sample of individuals who have rejected COVID vaccinations. Finally, the results of the psychophysiological experiment reveal that counterfactual generation alters decision makers' search strategies, increases their cognitive arousal in response to evidence that challenges their beliefs, and increases their desire to seek out disconfirming evidence. Overall, the four experiments indicate that counterfactual generation can effectively activate mindsets that increase individuals' willingness to evaluate evidence that contradicts their beliefs and adjust their beliefs in response to evidence.
Affiliation(s)
- Jacob M Rose
- Department of Accounting, University of Northern Colorado, Greeley, CO, 80639, USA
- Odkhishig Ganbold
- Department of Medicine at Royal Melbourne Hospital, Melbourne Medical School, The University of Melbourne, Melbourne, VIC, 3050, Australia
- Anna M Rose
- Department of Accounting, University of Northern Colorado, Greeley, CO, 80639, USA
- Jay C Thibodeau
- Department of Accounting, Bentley University, Waltham, MA, 02452, USA
- Kristian Rotaru
- Department of Accounting, Monash Business School, Monash University, Caulfield East, VIC, 3145, Australia
- The Turner Institute for Brain and Mental Health, School of Psychological Sciences and Monash Biomedical Imaging Facility, Monash University, BrainPark, Clayton, VIC, 3800, Australia
11. Heins F, Lappe M. Oculomotor behavior can be adjusted on the basis of artificial feedback signals indicating externally caused errors. PLoS One 2024; 19:e0302872. [PMID: 38768134] [PMCID: PMC11104623] [DOI: 10.1371/journal.pone.0302872]
Abstract
Whether a saccade is accurate and has reached the target cannot be evaluated during its execution, but relies on post-saccadic feedback. If the eye has missed the target object, a secondary corrective saccade has to be made to align the fovea with the target. If a systematic post-saccadic error occurs, adaptive changes to the oculomotor behavior are made, such as shortening or lengthening the saccade amplitude. Systematic post-saccadic errors are typically attributed internally to erroneous motor commands. The corresponding adaptive changes to the motor command reduce the error and the need for secondary corrective saccades, and, in doing so, restore accuracy and efficiency. However, adaptive changes to the oculomotor behavior also occur if a change in saccade amplitude is beneficial for task performance, or if it is rewarded. Oculomotor learning thus is more complex than reducing a post-saccadic position error. In the current study, we used a novel oculomotor learning paradigm and investigated whether human participants are able to adapt their oculomotor behavior to improve task performance even when they attribute the error externally. The task was to indicate the intended target object among several objects to a simulated human-machine interface by making eye movements. The participants were informed that the system itself could make errors. The decoding process depended on a distorted landing point of the saccade, resulting in decoding errors. Two different types of visual feedback were added to the post-saccadic scene and we compared how participants used the different feedback types to adjust their oculomotor behavior to avoid errors. We found that task performance improved over time, regardless of the type of feedback. Thus, error feedback from the simulated human-machine interface was used for post-saccadic error evaluation. This indicates that (1) artificial visual feedback signals and (2) externally caused errors might drive adaptive changes to oculomotor behavior.
Affiliation(s)
- Frauke Heins
- Institute for Psychology and Otto-Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Markus Lappe
- Institute for Psychology and Otto-Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
12.
Abstract
Working memory enables us to bridge past sensory information to upcoming future behaviour. Accordingly, by its very nature, working memory is concerned with two components: the past and the future. Yet, in conventional laboratory tasks, these two components are often conflated, such as when sensory information in working memory is encoded and tested at the same location. We developed a task in which we dissociated the past (encoded location) and future (to-be-tested location) attributes of visual contents in working memory. This enabled us to independently track the utilisation of past and future memory attributes through gaze, as observed during mnemonic selection. Our results reveal the joint consideration of past and future locations. This was prevalent even at the single-trial level of individual saccades that were jointly biased to the past and future. This uncovers the rich nature of working memory representations, whereby both past and future memory attributes are retained and can be accessed together when memory contents become relevant for behaviour.
Affiliation(s)
- Baiwei Liu
- Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Zampeta-Sofia Alexopoulou
- Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Freek van Ede
- Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
13. Del Campo VL, Morán JFO, Cagigal VM, Martín JM, Pagador JB, Hornero R. The use of the eye-fixation-related potential to investigate visual perception in professional domains with high attentional demand: a literature review. Neurol Sci 2024; 45:1849-1860. [PMID: 38157102] [DOI: 10.1007/s10072-023-07275-w]
Abstract
INTRODUCTION: Visual attention is a cognitive skill related to visual perception and neural activity, and is also moderated by expertise, in time-constrained professional domains (e.g., aviation, driving, sport, surgery). However, the contributions of perceptual and neural processes to performance have been studied separately in the literature. DEVELOPMENT: We defend an integration of visual and neural signals to offer a more complete picture of the visual attention displayed by professionals of different skill levels when performing free-viewing tasks. Specifically, we propose to focus the analysis jointly on data related to the quiet eye and the P300 component, as a novel signal processing approach to evaluate professionals' visual attention. CONCLUSION: This review highlights the advantages of using portable eye trackers and electroencephalogram systems together, as a promising technique for a better understanding of early cognitive components related to attentional processes. Altogether, the eye-fixation-related potentials method may provide a better understanding of the cognitive mechanisms employed by participants in natural settings, revealing what visual information is of interest to participants and distinguishing the neural bases of visual attention between targets and non-targets whenever a stimulus is perceived during free-viewing experiments.
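The core eye-fixation-related potential computation underlying this line of work is an average of EEG epochs time-locked to fixation onsets rather than to stimulus onsets. A minimal sketch follows; the signal, onset indices, and window are illustrative placeholders, not a real co-registration pipeline (which would also handle artifact rejection, baseline correction, and overlap between fixations).

```python
def fixation_locked_average(eeg, fixation_onsets, window):
    """Average EEG samples time-locked to fixation onsets.

    eeg: 1-D list of samples from one channel.
    fixation_onsets: sample indices where fixations begin.
    window: (pre, post) offsets in samples relative to each onset.
    Epochs extending past the recording edges are dropped.
    """
    pre, post = window
    epochs = [eeg[t + pre:t + post] for t in fixation_onsets
              if t + pre >= 0 and t + post <= len(eeg)]
    if not epochs:
        raise ValueError("no complete epochs in the recording")
    n = len(epochs)
    return [sum(e[i] for e in epochs) / n for i in range(post - pre)]
```

Averaging over many fixations cancels activity not phase-locked to fixation onset, leaving components such as the P300 visible in the averaged waveform.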
Affiliation(s)
- Vicente Luis Del Campo
- Laboratorio de Aprendizaje y Control Motor, Facultad de Ciencias del Deporte, Universidad de Extremadura, Avda. de La Universidad, S/N, 10003, Cáceres, Spain
- Víctor Martínez Cagigal
- Grupo de Ingeniería Biomédica, Universidad de Valladolid, E.T.S.I. Telecomunicación, Paseo Belén 15, 47011, Valladolid, Spain
- Centro de Investigación Biomédica en Red - Bioingeniería, Biomateriales y Biomedicina (CIBER-BBN), E.T.S.I. Telecomunicación, Paseo Belén 15, 47011, Valladolid, Spain
- Jesús Morenas Martín
- Laboratorio de Aprendizaje y Control Motor, Facultad de Ciencias del Deporte, Universidad de Extremadura, Avda. de La Universidad, S/N, 10003, Cáceres, Spain
- J Blas Pagador
- Centro de Cirugía de Mínima Invasión Jesús Usón, Ctra. N-521, Km. 41,8, 10071, Cáceres, Spain
- Roberto Hornero
- Grupo de Ingeniería Biomédica, Universidad de Valladolid, E.T.S.I. Telecomunicación, Paseo Belén 15, 47011, Valladolid, Spain
- Centro de Investigación Biomédica en Red - Bioingeniería, Biomateriales y Biomedicina (CIBER-BBN), E.T.S.I. Telecomunicación, Paseo Belén 15, 47011, Valladolid, Spain
14. Arató J, Rothkopf CA, Fiser J. Eye movements reflect active statistical learning. J Vis 2024; 24:17. [PMID: 38819805] [PMCID: PMC11146064] [DOI: 10.1167/jov.24.5.17]
Abstract
What is the link between eye movements and sensory learning? Although some theories have argued for an automatic interaction between what we know and where we look that continuously modulates human information gathering behavior during both implicit and explicit learning, there exists limited experimental evidence supporting such an ongoing interplay. To address this issue, we used a visual statistical learning paradigm combined with a gaze-contingent stimulus presentation and manipulated the explicitness of the task to explore how learning and eye movements interact. During both implicit exploration and explicit visual learning of unknown composite visual scenes, spatial eye movement patterns systematically and gradually changed in accordance with the underlying statistical structure of the scenes. Moreover, the degree of change was directly correlated with the amount and type of knowledge the observers acquired. This suggests that eye movements are potential indicators of active learning, a process where long-term knowledge, current visual stimuli and an inherent tendency to reduce uncertainty about the visual environment jointly determine where we look.
Affiliation(s)
- József Arató
- Department of Cognitive Science, Central European University, Vienna, Austria
- Center for Cognitive Computation, Central European University, Vienna, Austria
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
- Constantin A Rothkopf
- Center for Cognitive Science & Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany
- Frankfurt Institute for Advanced Studies, Goethe University, Frankfurt, Germany
- József Fiser
- Department of Cognitive Science, Central European University, Vienna, Austria
- Center for Cognitive Computation, Central European University, Vienna, Austria
15. Aizenman AM, Gegenfurtner KR, Goettker A. Oculomotor routines for perceptual judgments. J Vis 2024; 24:3. [PMID: 38709511] [PMCID: PMC11078167] [DOI: 10.1167/jov.24.5.3]
Abstract
In everyday life we frequently make simple visual judgments about object properties, for example, how big or wide is a certain object? Our goal is to test whether there are also task-specific oculomotor routines that support perceptual judgments, similar to the well-established exploratory routines for haptic perception. In a first study, observers saw different scenes with two objects presented in a photorealistic virtual reality environment. Observers were asked to judge which of two objects was taller or wider while gaze was tracked. All tasks were performed with the same set of virtual objects in the same scenes, so that we can compare spatial characteristics of exploratory gaze behavior to quantify oculomotor routines for each task. Width judgments showed fixations around the center of the objects with larger horizontal spread. In contrast, for height judgments, gaze was shifted toward the top of the objects with larger vertical spread. These results suggest specific strategies in gaze behavior that presumably are used for perceptual judgments. To test the causal link between oculomotor behavior and perception, in a second study, observers could freely gaze at the object or we introduced a gaze-contingent setup forcing observers to fixate specific positions on the object. Discrimination performance was similar between free-gaze and the gaze-contingent conditions for width and height judgments. These results suggest that although gaze is adapted for different tasks, performance seems to be based on a perceptual strategy, independent of potential cues that can be provided by the oculomotor system.
Affiliation(s)
- Avi M Aizenman
- Psychology Department, Giessen University, Giessen, Germany
- http://aviaizenman.com/
- Karl R Gegenfurtner
- Psychology Department, Giessen University, Giessen, Germany
- https://www.allpsych.uni-giessen.de/karl/
- Alexander Goettker
- Psychology Department, Giessen University, Giessen, Germany
- https://alexgoettker.com/
16. Ghiani A, Mann D, Brenner E. Methods matter: Exploring how expectations influence common actions. iScience 2024; 27:109076. [PMID: 38361615] [PMCID: PMC10867666] [DOI: 10.1016/j.isci.2024.109076]
Abstract
Behavior in controlled laboratory studies is not always representative of what people do in daily life. This has prompted a recent shift toward conducting studies in natural settings. We wondered whether expectations raised by how the task is presented should also be considered. To find out, we studied gaze when walking down and up a staircase. Gaze was often directed at steps before stepping on them, but most participants did not look at every step. Importantly, participants fixated more steps and looked around less when asked to navigate the staircase than when navigating the same staircase but asked to walk outside. Presumably, expecting the staircase to be important made participants direct their gaze at more steps, despite the identical requirements when on the staircase. This illustrates that behavior can be influenced by expectations, such as expectations resulting from task instructions, even when studies are conducted in natural settings.
Affiliation(s)
- Andrea Ghiani
- Department of Human Movement Sciences, Amsterdam Movement Sciences and Institute of Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- David Mann
- Department of Human Movement Sciences, Amsterdam Movement Sciences and Institute of Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
- Eli Brenner
- Department of Human Movement Sciences, Amsterdam Movement Sciences and Institute of Brain and Behaviour Amsterdam, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
17
Valtakari NV, Hessels RS, Niehorster DC, Viktorsson C, Nyström P, Falck-Ytter T, Kemner C, Hooge ITC. A field test of computer-vision-based gaze estimation in psychology. Behav Res Methods 2024; 56:1900-1915. [PMID: 37101100 PMCID: PMC10990994 DOI: 10.3758/s13428-023-02125-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/07/2023] [Indexed: 04/28/2023]
Abstract
Computer-vision-based gaze estimation refers to techniques that estimate gaze direction directly from video recordings of the eyes or face without the need for an eye tracker. Although many such methods exist, their validation is often found in the technical literature (e.g., computer science conference papers). We aimed to (1) identify which computer-vision-based gaze estimation methods are usable by the average researcher in fields such as psychology or education, and (2) evaluate these methods. We searched for methods that do not require calibration and have clear documentation. Two toolkits, OpenFace and OpenGaze, were found to fulfill these criteria. First, we present an experiment where adult participants fixated on nine stimulus points on a computer screen. We filmed their face with a camera and processed the recorded videos with OpenFace and OpenGaze. We conclude that OpenGaze is accurate and precise enough to be used in screen-based experiments with stimuli separated by at least 11 degrees of gaze angle. OpenFace was not sufficiently accurate for such situations but can potentially be used in sparser environments. We then examined whether OpenFace could be used with horizontally separated stimuli in a sparse environment with infant participants. We compared dwell measures based on OpenFace estimates to the same measures based on manual coding. We conclude that OpenFace gaze estimates may potentially be used with measures such as relative total dwell time to sparse, horizontally separated areas of interest, but should not be used to draw conclusions about measures such as dwell duration.
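Measures such as relative total dwell time can be computed directly from per-frame gaze estimates like those the toolkits above produce. A minimal sketch, assuming normalized (x, y) gaze points and rectangular areas of interest; the function and data layout are illustrative assumptions, not the output schema of OpenFace or OpenGaze:

```python
# Relative total dwell time per area of interest (AOI) from per-frame
# gaze estimates. The (x, y) point format, rectangular AOIs, and the
# fixed frame duration are illustrative assumptions.

def relative_dwell_times(samples, aois, frame_duration):
    """samples: list of (x, y) gaze points, one per video frame.
    aois: dict mapping AOI name -> (xmin, ymin, xmax, ymax).
    Returns AOI name -> fraction of total on-AOI looking time."""
    totals = {name: 0.0 for name in aois}
    for x, y in samples:
        for name, (x0, y0, x1, y1) in aois.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                totals[name] += frame_duration
    looked = sum(totals.values())
    return {name: (t / looked if looked else 0.0) for name, t in totals.items()}

aois = {"left": (0.0, 0.0, 0.45, 1.0), "right": (0.55, 0.0, 1.0, 1.0)}
samples = [(0.2, 0.5), (0.3, 0.4), (0.7, 0.5), (0.8, 0.6)]
print(relative_dwell_times(samples, aois, frame_duration=1 / 30))
```

Because the measure is a ratio of dwell times, frame-level noise that falls outside every AOI simply drops out, which is one reason relative (rather than absolute) dwell measures tolerate the lower accuracy discussed above.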
Affiliation(s)
- Niilo V Valtakari
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, the Netherlands.
- Roy S Hessels
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, the Netherlands
- Diederick C Niehorster
- Lund University Humanities Lab, Lund University, Lund, Sweden
- Department of Psychology, Lund University, Lund, Sweden
- Charlotte Viktorsson
- Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden
- Pär Nyström
- Uppsala Child and Baby Lab, Department of Psychology, Uppsala University, Uppsala, Sweden
- Terje Falck-Ytter
- Development and Neurodiversity Lab, Department of Psychology, Uppsala University, Uppsala, Sweden
- Karolinska Institutet Center of Neurodevelopmental Disorders (KIND), Department of Women's and Children's Health, Karolinska Institutet, Stockholm, Sweden
- Chantal Kemner
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, the Netherlands
- Ignace T C Hooge
- Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, the Netherlands
18
Callahan-Flintoft C, Jensen E, Naeem J, Nonte MW, Madison AM, Ries AJ. A Comparison of Head Movement Classification Methods. SENSORS (BASEL, SWITZERLAND) 2024; 24:1260. [PMID: 38400418 PMCID: PMC10893452 DOI: 10.3390/s24041260] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/08/2024] [Revised: 02/06/2024] [Accepted: 02/09/2024] [Indexed: 02/25/2024]
Abstract
To understand human behavior, it is essential to study it in the context of natural movement in immersive, three-dimensional environments. Virtual reality (VR), with head-mounted displays, offers an unprecedented compromise between ecological validity and experimental control. However, such technological advancements mean that new data streams will become more widely available, and therefore, a need arises to standardize methodologies by which these streams are analyzed. One such data stream is that of head position and rotation tracking, now made easily available from head-mounted systems. The current study presents five candidate algorithms of varying complexity for classifying head movements. Each algorithm is compared against human rater classifications and graded based on the overall agreement as well as biases in metrics such as movement onset/offset time and movement amplitude. Finally, we conclude this article by offering recommendations for the best practices and considerations for VR researchers looking to incorporate head movement analysis in their future studies.
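The paper's five candidate algorithms are not reproduced here, but the simplest member of this family can be sketched as a velocity-threshold classifier over a head-rotation trace. The 20 deg/s threshold and the absence of smoothing are simplifying assumptions:

```python
# A velocity-threshold sketch of head-movement classification: label each
# sample of a head-yaw trace as "moving" or "stable". The threshold value
# and lack of filtering are illustrative assumptions; real classifiers
# (and the paper's candidates) are more sophisticated than this baseline.

def classify_head_movement(yaw_deg, dt, threshold_deg_s=20.0):
    """yaw_deg: head yaw per sample (degrees); dt: sample interval (s)."""
    labels = ["stable"]  # the first sample has no velocity estimate
    for prev, curr in zip(yaw_deg, yaw_deg[1:]):
        velocity = abs(curr - prev) / dt
        labels.append("moving" if velocity > threshold_deg_s else "stable")
    return labels

trace = [0.0, 0.1, 0.2, 5.0, 10.0, 10.1]  # a quick head turn mid-trace
print(classify_head_movement(trace, dt=0.01))
```

Per-sample labels like these are what get compared against human rater classifications; onset/offset bias then falls out of where the "moving" runs begin and end relative to the raters' marks.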
Affiliation(s)
- Chloe Callahan-Flintoft
- U.S. Army Combat Capabilities Development Command (DEVCOM) Army Research Laboratory, Aberdeen, MD 21005, USA; (A.M.M.); (A.J.R.)
- Emily Jensen
- Department of Computer Science, University of Colorado Boulder, Boulder, CO 80303, USA;
- Jasim Naeem
- DCS Corporation, Alexandria, VA 22310, USA; (J.N.); (M.W.N.)
- Anna M. Madison
- U.S. Army Combat Capabilities Development Command (DEVCOM) Army Research Laboratory, Aberdeen, MD 21005, USA; (A.M.M.); (A.J.R.)
- Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO 80840, USA
- Anthony J. Ries
- U.S. Army Combat Capabilities Development Command (DEVCOM) Army Research Laboratory, Aberdeen, MD 21005, USA; (A.M.M.); (A.J.R.)
- Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO 80840, USA
19
Guo Y, Pannasch S, Helmert JR, Kaszowska A. Ambient and focal attention during complex problem-solving: preliminary evidence from real-world eye movement data. Front Psychol 2024; 15:1217106. [PMID: 38425554 PMCID: PMC10902451 DOI: 10.3389/fpsyg.2024.1217106] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2023] [Accepted: 01/31/2024] [Indexed: 03/02/2024] Open
Abstract
Time course analysis of eye movements during free exploration of real-world scenes often reveals an increase in fixation durations together with a decrease in saccade amplitudes, which has been explained within the two visual systems approach, i.e., a transition from ambient to focal. Short fixations and long saccades during early viewing periods are classified as ambient mode of vision, which is concerned with spatial orientation and is related to simple visual properties such as motion, contrast, and location. Longer fixations and shorter saccades during later viewing periods are classified as focal mode of vision, which is concentrated in the foveal projection and is capable of object identification and its semantic categorization. While these findings are mainly obtained in the context of image exploration, the present study endeavors to investigate whether the same pattern of interplay between ambient and focal visual attention is deployed when people work on complex real-world tasks, and if so, when? Based on a re-analysis of existing data that integrates concurrent think-aloud and eye-tracking protocols, the present study correlated participants' internal thinking models to the parameters of their eye movements when they planned solutions to an open-ended design problem in a real-world setting. We hypothesize that switching between ambient and focal attentional processing is useful when solvers encounter difficulty compelling them to shift their conceptual direction to adjust the solution path. Individuals may prefer different attentional strategies for information-seeking behavior, such as ambient-to-focal or focal-to-ambient. The observed increase in fixation durations and decrease in saccade amplitudes during the periods around shifts in conceptual direction lends support to the postulation of ambient-to-focal processing; however, focal-to-ambient processing is not evident.
Furthermore, our data demonstrate that the beginning of a shift in conceptual direction is observable in eye movement behavior with a significant prolongation of fixation. Our findings add to the conclusions drawn from laboratory settings by providing preliminary evidence for ambient and focal processing characteristics in real-world problem-solving.
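The ambient/focal distinction described above is commonly operationalized through fixation duration and the amplitude of the following saccade. A hedged sketch of that operationalization; the 180 ms and 5 deg thresholds are illustrative assumptions, not values estimated in this study:

```python
# Common operationalization: short fixations followed by long saccades
# count as "ambient"; long fixations followed by short saccades count as
# "focal". Threshold values here are illustrative assumptions only.

def label_mode(fix_dur_ms, next_saccade_deg, dur_thresh=180.0, amp_thresh=5.0):
    if fix_dur_ms < dur_thresh and next_saccade_deg > amp_thresh:
        return "ambient"
    if fix_dur_ms >= dur_thresh and next_saccade_deg <= amp_thresh:
        return "focal"
    return "mixed"

# (fixation duration in ms, amplitude of the following saccade in deg)
events = [(120, 9.0), (140, 7.5), (260, 2.0), (300, 1.5)]
print([label_mode(d, a) for d, a in events])
```

Tracking the proportion of ambient vs. focal labels in a sliding window around events of interest (here, shifts in conceptual direction) is one way to test the ambient-to-focal transition on real-world data.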
Affiliation(s)
- Yuxuan Guo
- Institute of Psychology III, Engineering Psychology and Applied Cognitive Research, Technische Universität Dresden, Dresden, Germany
- Sebastian Pannasch
- Institute of Psychology III, Engineering Psychology and Applied Cognitive Research, Technische Universität Dresden, Dresden, Germany
- Jens R. Helmert
- Institute of Psychology III, Engineering Psychology and Applied Cognitive Research, Technische Universität Dresden, Dresden, Germany
20
Abstract
Virtual reality (VR) technology has advanced significantly in recent years, with many potential applications. However, it is unclear how well VR simulations mimic real-world experiences, particularly in terms of eye-hand coordination. This study compares eye-hand coordination from a previously validated real-world object interaction task to the same task re-created in controller-mediated VR. We recorded eye and body movements and segmented participants' gaze data using the movement data. In the real-world condition, participants wore a head-mounted eye tracker and motion capture markers and moved a pasta box into and out of a set of shelves. In the VR condition, participants wore a VR headset and moved a virtual box using handheld controllers. Unsurprisingly, VR participants took longer to complete the task. Before picking up or dropping off the box, participants in the real world visually fixated the box about half a second before their hand arrived at the area of action. This 500-ms minimum fixation time before the hand arrived was preserved in VR. Real-world participants disengaged their eyes from the box almost immediately after their hand initiated or terminated the interaction, but VR participants stayed fixated on the box for much longer after it was picked up or dropped off. We speculate that the limited haptic feedback during object interactions in VR forces users to maintain visual fixation on objects longer than in the real world, altering eye-hand coordination. These findings suggest that current VR technology does not replicate real-world experience in terms of eye-hand coordination.
Affiliation(s)
- Ewen Lavoie
- Faculty of Kinesiology, Sport, and Recreation, Neuroscience and Mental Health Institute, University of Alberta, Edmonton, AB, Canada
- Jacqueline S Hebert
- Division of Physical Medicine and Rehabilitation, Department of Biomedical Engineering, University of Alberta, Edmonton, AB, Canada
- Glenrose Rehabilitation Hospital, Alberta Health Services, Edmonton, AB, Canada
- Craig S Chapman
- Faculty of Kinesiology, Sport, and Recreation, Neuroscience and Mental Health Institute, University of Alberta, Edmonton, AB, Canada
21
Samonds JM, Szinte M, Barr C, Montagnini A, Masson GS, Priebe NJ. Mammals Achieve Common Neural Coverage of Visual Scenes Using Distinct Sampling Behaviors. eNeuro 2024; 11:ENEURO.0287-23.2023. [PMID: 38164577 PMCID: PMC10860624 DOI: 10.1523/eneuro.0287-23.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2023] [Revised: 10/24/2023] [Accepted: 10/30/2023] [Indexed: 01/03/2024] Open
Abstract
Most vertebrates use head and eye movements to quickly change gaze orientation and sample different portions of the environment with periods of stable fixation. Visual information must be integrated across fixations to construct a complete perspective of the visual environment. In concert with this sampling strategy, neurons adapt to unchanging input to conserve energy and ensure that only novel information from each fixation is processed. We demonstrate how adaptation recovery times and saccade properties interact and thus shape spatiotemporal tradeoffs observed in the motor and visual systems of mice, cats, marmosets, macaques, and humans. These tradeoffs predict that in order to achieve similar visual coverage over time, animals with smaller receptive field sizes require faster saccade rates. Indeed, we find comparable sampling of the visual environment by neuronal populations across mammals when integrating measurements of saccadic behavior with receptive field sizes and V1 neuronal density. We propose that these mammals share a common statistically driven strategy of maintaining coverage of their visual environment over time calibrated to their respective visual system characteristics.
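The spatiotemporal tradeoff can be illustrated with a back-of-envelope calculation: if each fixation samples a region whose size is set by the receptive-field diameter, then coverage accumulated per second scales with saccade rate times the area sampled per fixation, so a smaller receptive field must be compensated by a faster saccade rate. The numbers below are illustrative assumptions, not measurements from the paper:

```python
import math

# Back-of-envelope sketch of the tradeoff: coverage per second is taken
# to scale with saccade rate times the area sampled per fixation (set
# here by receptive-field diameter). All values are illustrative.

def coverage_per_second(saccade_rate_hz, rf_diameter_deg):
    area_per_fixation = math.pi * (rf_diameter_deg / 2.0) ** 2
    return saccade_rate_hz * area_per_fixation

# Halving the receptive-field diameter demands a fourfold saccade rate
# to keep the same coverage:
small_rf = coverage_per_second(saccade_rate_hz=4.0, rf_diameter_deg=1.0)
large_rf = coverage_per_second(saccade_rate_hz=1.0, rf_diameter_deg=2.0)
print(small_rf, large_rf)
```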
Affiliation(s)
- Jason M Samonds
- Center for Learning and Memory and the Institute for Neuroscience, The University of Texas at Austin, Austin 78712, Texas
- Martin Szinte
- Institut de Neurosciences de la Timone (UMR 7289), Centre National de la Recherche Scientifique and Aix-Marseille Université, 13385 Marseille, France
- Carrie Barr
- Center for Learning and Memory and the Institute for Neuroscience, The University of Texas at Austin, Austin 78712, Texas
- Anna Montagnini
- Institut de Neurosciences de la Timone (UMR 7289), Centre National de la Recherche Scientifique and Aix-Marseille Université, 13385 Marseille, France
- Guillaume S Masson
- Institut de Neurosciences de la Timone (UMR 7289), Centre National de la Recherche Scientifique and Aix-Marseille Université, 13385 Marseille, France
- Nicholas J Priebe
- Center for Learning and Memory and the Institute for Neuroscience, The University of Texas at Austin, Austin 78712, Texas
22
Stone SA, Boser QA, Dawson TR, Vette AH, Hebert JS, Pilarski PM, Chapman CS. Generating accurate 3D gaze vectors using synchronized eye tracking and motion capture. Behav Res Methods 2024; 56:18-31. [PMID: 36085543 DOI: 10.3758/s13428-022-01958-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/15/2022] [Indexed: 11/08/2022]
Abstract
Assessing gaze behavior during real-world tasks is difficult; dynamic bodies moving through dynamic worlds complicate gaze analysis. Current approaches involve laborious coding of pupil positions. In settings where motion capture and mobile eye tracking are used concurrently in naturalistic tasks, it is critical that data collection be simple, efficient, and systematic. One solution is to combine eye tracking with motion capture to generate 3D gaze vectors. When combined with tracked or known object locations, 3D gaze vector generation can be automated. Here we use combined eye and motion capture and explore how linear regression models generate accurate 3D gaze vectors. We compared the spatial accuracy of models derived from four short calibration routines in three settings: the calibration routines themselves, a validation task requiring short fixations on task-relevant locations, and a naturalistic object interaction task that bridges the gap between laboratory and "in the wild" studies. Further, we generated and compared models using spherical and Cartesian coordinate systems and monocular (left or right) or binocular data. All calibration routines performed similarly, with the best performance (i.e., sub-centimeter errors) coming from the naturalistic task trials when the participant is looking at an object in front of them. We found that spherical coordinate systems generate the most accurate gaze vectors with no differences in accuracy when using monocular or binocular data. Overall, we recommend 1-min calibration routines using binocular pupil data combined with a spherical world coordinate system to produce the highest-quality gaze vectors.
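The core idea, fitting a linear map from pupil position to gaze angles in a spherical coordinate system and then converting to a Cartesian 3D gaze vector, can be sketched as follows. This is a minimal one-dimensional illustration with made-up calibration data, not the authors' pipeline:

```python
import math

# One-dimensional illustration: fit a linear map from pupil position to
# a gaze angle in spherical coordinates, then convert to a Cartesian 3D
# gaze vector. Calibration data are invented; a single least-squares fit
# per angle stands in for the full calibration routine.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b * x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def spherical_to_vector(azimuth_deg, elevation_deg):
    """Unit gaze vector (x right, y up, z forward) from spherical angles."""
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    return (math.cos(el) * math.sin(az), math.sin(el), math.cos(el) * math.cos(az))

# Calibration: normalized pupil x-position vs. known target azimuth (deg)
pupil_x = [0.2, 0.4, 0.6, 0.8]
target_az = [-20.0, -6.0, 6.0, 20.0]
a, b = fit_line(pupil_x, target_az)
print(spherical_to_vector(a + b * 0.5, 0.0))  # gaze vector for a centered pupil
```

In the full setup the same regression is fit for elevation, and the resulting vector is anchored at the motion-capture head position so that intersections with tracked object locations can be computed automatically.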
Affiliation(s)
- Scott A Stone
- Department of Psychology, University of Alberta, Edmonton, Alberta, Canada.
- Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada.
- Quinn A Boser
- Division of Physical Medicine and Rehabilitation, Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- T Riley Dawson
- Division of Physical Medicine and Rehabilitation, Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Albert H Vette
- Department of Mechanical Engineering, University of Alberta, Edmonton, Alberta, Canada
- Jacqueline S Hebert
- Division of Physical Medicine and Rehabilitation, Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Patrick M Pilarski
- Division of Physical Medicine and Rehabilitation, Department of Medicine, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
- Craig S Chapman
- Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada
- Faculty of Kinesiology, Sport, and Recreation, University of Alberta, Edmonton, Alberta, Canada
23
Zhang R, Xu Q, Wang S, Parkinson S, Schoeffmann K. Information Difference of Transfer Entropies between Head Motion and Eye Movement Indicates a Proxy of Driving. ENTROPY (BASEL, SWITZERLAND) 2023; 26:3. [PMID: 38275483 PMCID: PMC11154336 DOI: 10.3390/e26010003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/30/2023] [Revised: 12/07/2023] [Accepted: 12/13/2023] [Indexed: 01/27/2024]
Abstract
Visual scanning is achieved via head motion and gaze movement for visual information acquisition and cognitive processing, which plays a critical role in undertaking common sensorimotor tasks such as driving. The coordination of the head and eyes is an important human behavior that makes a key contribution to goal-directed visual scanning and sensorimotor driving. In this paper, we investigate the two most common patterns in eye-head coordination: "head motion earlier than eye movement" and "eye movement earlier than head motion". We utilize bidirectional transfer entropies between head motion and eye movements to determine the existence of these two eye-head coordination patterns. Furthermore, we propose a unidirectional information difference to assess which pattern predominates in head-eye coordination. Additionally, we have discovered a significant correlation between the normalized unidirectional information difference and driving performance. This result not only indicates the influence of eye-head coordination on driving behavior from a computational perspective but also validates the practical significance of our approach of utilizing transfer entropy to quantify eye-head coordination.
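A plug-in estimate of transfer entropy on discretized signals, together with the unidirectional information difference TE(head -> eye) minus TE(eye -> head), might be sketched as below. The binary toy sequences stand in for real head and eye traces; this is a didactic sketch, not the paper's estimator:

```python
import math
from collections import Counter

# Plug-in transfer entropy between two discretized signals, plus the
# unidirectional information difference TE(head -> eye) - TE(eye -> head).
# Toy binary sequences stand in for real head/eye traces.

def transfer_entropy(x, y):
    """TE from x to y in bits: I(y_next ; x_now | y_now), plug-in estimate."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_next, y_now, x_now)
    cond = Counter(zip(y[:-1], x[:-1]))             # (y_now, x_now)
    pairs = Counter(zip(y[1:], y[:-1]))             # (y_next, y_now)
    marg = Counter(y[:-1])                          # y_now
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_cond_both = c / cond[(y0, x0)]
        p_cond_y = pairs[(y1, y0)] / marg[y0]
        te += (c / n) * math.log2(p_cond_both / p_cond_y)
    return te

# Toy data: the eye signal copies the head signal with a one-step lag,
# so information should flow mainly from head to eye.
head = [0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1]
eye = [0] + head[:-1]
diff = transfer_entropy(head, eye) - transfer_entropy(eye, head)
print(diff)  # positive: "head earlier than eye" dominates here
```

The sign of the difference indicates which direction of influence predominates, which is the role the normalized unidirectional information difference plays in the analysis above.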
Affiliation(s)
- Runlin Zhang
- College of Intelligence and Computing, Tianjin University, Tianjin 300072, China; (R.Z.); (S.W.)
- Qing Xu
- College of Intelligence and Computing, Tianjin University, Tianjin 300072, China; (R.Z.); (S.W.)
- Shunbo Wang
- College of Intelligence and Computing, Tianjin University, Tianjin 300072, China; (R.Z.); (S.W.)
- Simon Parkinson
- Department of Computer Science, University of Huddersfield, Huddersfield HD1 3DH, UK;
- Klaus Schoeffmann
- Institute of Information Technology, Klagenfurt University, 9020 Klagenfurt, Austria;
24
Testoni A, Bernardi R, Ruggeri A. The Efficiency of Question-Asking Strategies in a Real-World Visual Search Task. Cogn Sci 2023; 47:e13396. [PMID: 38142430 DOI: 10.1111/cogs.13396] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Revised: 11/14/2023] [Accepted: 12/01/2023] [Indexed: 12/26/2023]
Abstract
In recent years, a multitude of datasets of human-human conversations has been released for the main purpose of training conversational agents based on data-hungry artificial neural networks. In this paper, we argue that datasets of this sort represent a useful and underexplored source to validate, complement, and enhance cognitive studies on human behavior and language use. We present a method that leverages the recent development of powerful computational models to obtain the fine-grained annotation required to apply metrics and techniques from Cognitive Science to large datasets. Previous work in Cognitive Science has investigated the question-asking strategies of human participants by employing different variants of the so-called 20-question-game setting and proposing several evaluation methods. In our work, we focus on GuessWhat, a task proposed within the Computer Vision and Natural Language Processing communities that is similar in structure to the 20-question-game setting. Crucially, the GuessWhat dataset contains tens of thousands of dialogues based on real-world images, making it a suitable setting to investigate the question-asking strategies of human players on a large scale and in a natural setting. Our results demonstrate the effectiveness of computational tools to automatically code how the hypothesis space changes throughout the dialogue in complex visual scenes. On the one hand, we confirm findings from previous work on smaller and more controlled settings. On the other hand, our analyses allow us to highlight the presence of "uninformative" questions (in terms of Expected Information Gain) at specific rounds of the dialogue. We hypothesize that these questions fulfill pragmatic constraints that are exploited by human players to solve visual tasks in complex scenes successfully. 
Our work illustrates a method that brings together efforts and findings from different disciplines to gain a better understanding of human question-asking strategies on large-scale datasets, while at the same time posing new questions about the development of conversational systems.
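For a uniform prior over n remaining candidate objects, the Expected Information Gain of a yes/no question that splits the hypothesis space into a "yes" set of size k and a "no" set of size n - k reduces to log2(n) - [(k/n) * log2(k) + ((n-k)/n) * log2(n-k)]. A minimal sketch of that formula (the annotation pipeline that extracts the hypothesis space from images is not reproduced):

```python
import math

# Expected Information Gain (EIG) of a yes/no question under a uniform
# prior over n candidates, where the "yes" set has size k. This is the
# standard uniform-prior formula; the image-grounded hypothesis-space
# computation used for GuessWhat is not reproduced here.

def expected_information_gain(n, k):
    def h(m):  # entropy of a uniform distribution over m outcomes
        return math.log2(m) if m > 0 else 0.0
    return h(n) - (k / n) * h(k) - ((n - k) / n) * h(n - k)

print(expected_information_gain(8, 4))  # even split: 1 bit per question
print(expected_information_gain(8, 8))  # "uninformative" question: 0 bits
```

Questions whose "yes" set covers all (or none) of the remaining candidates have zero EIG, which is exactly the sense in which the dialogue rounds discussed above contain "uninformative" questions.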
Affiliation(s)
- Alberto Testoni
- Institute for Logic, Language and Computation (ILLC), University of Amsterdam
- Raffaella Bernardi
- Center for Mind/Brain Sciences (CIMeC), University of Trento
- Department of Information Engineering and Computer Science (DISI), University of Trento
- Azzurra Ruggeri
- MPRG iSearch, Max Planck Institute for Human Development, Berlin
- School of Social Sciences and Technology, Technical University Munich
- Department of Cognitive Science, Central European University
25
Malpica S, Martin D, Serrano A, Gutierrez D, Masia B. Task-Dependent Visual Behavior in Immersive Environments: A Comparative Study of Free Exploration, Memory and Visual Search. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS 2023; 29:4417-4425. [PMID: 37788210 DOI: 10.1109/tvcg.2023.3320259] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/05/2023]
Abstract
Visual behavior depends on both bottom-up mechanisms, where gaze is driven by the visual conspicuity of the stimuli, and top-down mechanisms, guiding attention towards relevant areas based on the task or goal of the viewer. While this is well-known, visual attention models often focus on bottom-up mechanisms. Existing works have analyzed the effect of high-level cognitive tasks like memory or visual search on visual behavior; however, they have often done so with different stimuli, methodology, metrics and participants, which makes drawing conclusions and comparisons between tasks particularly difficult. In this work we present a systematic study of how different cognitive tasks affect visual behavior in a novel within-subjects design scheme. Participants performed free exploration, memory and visual search tasks in three different scenes while their eye and head movements were being recorded. We found significant, consistent differences between tasks in the distributions of fixations, saccades and head movements. Our findings can provide insights for practitioners and content creators designing task-oriented immersive applications.
26
Lai B, Liu M, Ryan F, Rehg JM. In the Eye of Transformer: Global-Local Correlation for Egocentric Gaze Estimation and Beyond. Int J Comput Vis 2023; 132:854-871. [PMID: 38371492 PMCID: PMC10873248 DOI: 10.1007/s11263-023-01879-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2023] [Accepted: 08/10/2023] [Indexed: 02/20/2024]
Abstract
Predicting human gaze from egocentric videos plays a critical role in understanding human intention in daily activities. In this paper, we present the first transformer-based model to address the challenging problem of egocentric gaze estimation. We observe that the connection between the global scene context and local visual information is vital for localizing the gaze fixation from egocentric video frames. To this end, we design the transformer encoder to embed the global context as one additional visual token and further propose a novel global-local correlation module to explicitly model the correlation of the global token and each local token. We validate our model on two egocentric video datasets - EGTEA Gaze+ and Ego4D. Our detailed ablation studies demonstrate the benefits of our method. In addition, our approach exceeds the previous state-of-the-art model by a large margin. We also apply our model to a novel gaze saccade/fixation prediction task and the traditional action recognition problem. The consistent gains suggest the strong generalization capability of our model. We also provide additional visualizations to support our claim that global-local correlation serves as a key representation for predicting gaze fixation from egocentric videos. More details can be found on our website (https://bolinlai.github.io/GLC-EgoGazeEst).
Affiliation(s)
- Bolin Lai
- Georgia Institute of Technology, Atlanta, GA 30308 USA
- Miao Liu
- Georgia Institute of Technology, Atlanta, GA 30308 USA
- Meta AI, Menlo Park, CA 94025 USA
- Fiona Ryan
- Georgia Institute of Technology, Atlanta, GA 30308 USA
- James M. Rehg
- Georgia Institute of Technology, Atlanta, GA 30308 USA
27
Kou Y, Guo J, Liu S, Zhang C, Zhao C, Li Y, Li Q, Kim ST, Li X. 3D salient object detection based on light field integral imaging. OPTICS LETTERS 2023; 48:5029-5032. [PMID: 37773377 DOI: 10.1364/ol.500151] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Accepted: 09/05/2023] [Indexed: 10/01/2023]
Abstract
Potent usage of the multi-scale light field information for salient object detection (SOD) is the essential requirement of three-dimensional (3D) SOD. On this basis, a light field 3D-SOD scheme is proposed that employs the pixel mapping algorithm to achieve a more distinct representation of spatial and angular information in the four-dimensional (4D) light field, collaboratively mining the global saliency cues via the co-salient object detection (CoSOD) network. Compared with the previous method, our scheme filters out most of the noise by thoroughly leveraging the global dependence of the 4D light field, offering significant enhancements in saliency extraction performance and efficiency. Additionally, the 3D reconstruction results demonstrate the integral retention of the spatial and angular information of the original light field.
28
Peacock CE, Hall EH, Henderson JM. Objects are selected for attention based upon meaning during passive scene viewing. Psychon Bull Rev 2023; 30:1874-1886. [PMID: 37095319 PMCID: PMC11164276 DOI: 10.3758/s13423-023-02286-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/26/2023] [Indexed: 04/26/2023]
Abstract
While object meaning has been demonstrated to guide attention during active scene viewing and object salience guides attention during passive viewing, it is unknown whether object meaning predicts attention in passive viewing tasks and whether attention during passive viewing is more strongly related to meaning or salience. To answer this question, we used a mixed modeling approach in which we computed the average meaning and physical salience of objects in scenes while statistically controlling for the roles of object size and eccentricity. Using eye-movement data from aesthetic judgment and memorization tasks, we then tested whether fixations are more likely to land on high-meaning objects than on low-meaning objects while controlling for object salience, size, and eccentricity. The results demonstrated that fixations are more likely to be directed to high-meaning objects than to low-meaning objects regardless of these other factors. Further analyses revealed that fixation durations were positively associated with object meaning irrespective of the other object properties. Overall, these findings provide the first evidence that objects are, in part, selected for attention on the basis of their meaning during passive scene viewing.
Affiliation(s)
- Candace E Peacock
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA, 95618, USA.
- Department of Psychology, University of California, Davis, CA, USA.
- Elizabeth H Hall
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA, 95618, USA
- Department of Psychology, University of California, Davis, CA, USA
- John M Henderson
- Center for Mind and Brain, University of California, 267 Cousteau Place, Davis, CA, 95618, USA
- Department of Psychology, University of California, Davis, CA, USA
29
Segraves MA. Using Natural Scenes to Enhance our Understanding of the Cerebral Cortex's Role in Visual Search. Annu Rev Vis Sci 2023; 9:435-454. [PMID: 37164028 DOI: 10.1146/annurev-vision-100720-124033]
Abstract
Using natural scenes is an approach to studying the visual and eye movement systems that approximates how these systems function in everyday life. This review examines results from behavioral and neurophysiological studies using natural scene viewing in humans and monkeys. The use of natural scenes for the study of cerebral cortical activity is relatively new and presents challenges for data analysis. Methods and results from the use of natural scenes for the study of the visual and eye movement cortex are presented, with emphasis on the new insights this method provides beyond what is known about these cortical regions from conventional methods.
Affiliation(s)
- Mark A Segraves
- Department of Neurobiology, Northwestern University, Evanston, Illinois, USA
30
Goettker A, Borgerding N, Leeske L, Gegenfurtner KR. Cues for predictive eye movements in naturalistic scenes. J Vis 2023; 23:12. [PMID: 37728915 PMCID: PMC10516764 DOI: 10.1167/jov.23.10.12]
Abstract
We previously compared following of the same trajectories with eye movements, either as isolated targets or embedded in a naturalistic scene - in this case, the movement of a puck in an ice hockey game. We observed that the oculomotor system was able to leverage the contextual cues available in the naturalistic scene to produce predictive eye movements. In this study, we assessed which factors are critical for achieving this predictive advantage by manipulating four factors: the expertise of the viewers, the amount of available peripheral information, and positional and kinematic cues. The more peripheral information became available (manipulated via the area of the video that was visible), the better the predictions of all observers. However, expert ice hockey fans were consistently better at predicting than novices and used peripheral information more effectively for predictive saccades. Artificial cues about player positions did not lead to a predictive advantage, whereas impairing the causal structure of kinematic cues by playing the video in reverse led to a severe impairment. When videos were flipped vertically to introduce more difficult kinematic cues, predictive behavior was comparable to watching the original videos. Together, these results demonstrate that, when contextual information is available in naturalistic scenes, the oculomotor system successfully integrates it rather than relying only on low-level information about the target trajectory. Critical factors for successful prediction appear to be the amount of available information, experience with the stimuli, and the availability of intact kinematic cues for player movements.
Affiliation(s)
- Alexander Goettker
- Justus Liebig Universität Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior, University of Marburg and Justus Liebig University, Giessen, Germany
- Linus Leeske
- Justus Liebig Universität Giessen, Giessen, Germany
- Karl R Gegenfurtner
- Justus Liebig Universität Giessen, Giessen, Germany
- Center for Mind, Brain and Behavior, University of Marburg and Justus Liebig University, Giessen, Germany
31
Maselli A, Gordon J, Eluchans M, Lancia GL, Thiery T, Moretti R, Cisek P, Pezzulo G. Beyond simple laboratory studies: Developing sophisticated models to study rich behavior. Phys Life Rev 2023; 46:220-244. [PMID: 37499620 DOI: 10.1016/j.plrev.2023.07.006]
Abstract
Psychology and neuroscience are concerned with the study of behavior, of internal cognitive processes, and their neural foundations. However, most laboratory studies use constrained experimental settings that greatly limit the range of behaviors that can be expressed. While focusing on restricted settings ensures methodological control, it risks impoverishing the object of study: by restricting behavior, we might miss key aspects of cognitive and neural functions. In this article, we argue that psychology and neuroscience should increasingly adopt innovative experimental designs, measurement methods, analysis techniques and sophisticated computational models to probe rich, ecologically valid forms of behavior, including social behavior. We discuss the challenges of studying rich forms of behavior as well as the novel opportunities offered by state-of-the-art methodologies and new sensing technologies, and we highlight the importance of developing sophisticated formal models. We exemplify our arguments by reviewing some recent streams of research in psychology, neuroscience and other fields (e.g., sports analytics, ethology and robotics) that have addressed rich forms of behavior in a model-based manner. We hope that these "success cases" will encourage psychologists and neuroscientists to extend their toolbox of techniques with sophisticated behavioral models - and to use them to study rich forms of behavior as well as the cognitive and neural processes that they engage.
Affiliation(s)
- Antonella Maselli
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Jeremy Gordon
- University of California, Berkeley, Berkeley, CA, 94704, United States
- Mattia Eluchans
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy; University of Rome "La Sapienza", Rome, Italy
- Gian Luca Lancia
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy; University of Rome "La Sapienza", Rome, Italy
- Thomas Thiery
- Department of Psychology, University of Montréal, Montréal, Québec, Canada
- Riccardo Moretti
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy; University of Rome "La Sapienza", Rome, Italy
- Paul Cisek
- Department of Neuroscience, University of Montréal, Montréal, Québec, Canada
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy.
32
Han NX, Eckstein MP. Inferential eye movement control while following dynamic gaze. eLife 2023; 12:e83187. [PMID: 37615158 PMCID: PMC10473837 DOI: 10.7554/elife.83187]
Abstract
Attending to other people's gaze is evolutionarily important for making inferences about intentions and actions. Gaze influences covert attention and triggers eye movements. However, we know little about how the brain controls the fine-grain dynamics of eye movements during gaze following. Observers followed people's gaze shifts in videos during search, and we related the observers' eye movement dynamics to the time course of gazer head movements extracted by a deep neural network. We show that the observers' brains use information in the visual periphery to execute predictive saccades that anticipate the information in the gazer's head direction by 190-350 ms. The brain simultaneously monitors moment-to-moment changes in the gazer's head velocity to dynamically alter eye movements and re-fixate the gazer (reverse saccades) when the head accelerates before the initiation of the first forward gaze-following saccade. Using saccade-contingent manipulations of the videos, we experimentally show that the reverse saccades are planned concurrently with the first forward gaze-following saccade and have a functional role in reducing subsequent errors in fixating the gaze goal. Together, our findings characterize the inferential and functional nature of the fine-grain eye movement dynamics of social attention.
Affiliation(s)
- Nicole Xiao Han
- Department of Psychological and Brain Sciences, Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, United States
- Miguel Patricio Eckstein
- Department of Psychological and Brain Sciences, Department of Electrical and Computer Engineering, Department of Computer Science, Institute for Collaborative Biotechnologies, University of California, Santa Barbara, Santa Barbara, United States
33
de la Malla C, Goettker A. The effect of impaired velocity signals on goal-directed eye and hand movements. Sci Rep 2023; 13:13646. [PMID: 37607970 PMCID: PMC10444871 DOI: 10.1038/s41598-023-40394-0]
Abstract
Information about position and velocity is essential to predict where moving targets will be in the future and to move towards them accurately. But how are the two signals combined over time to complete goal-directed movements? We show that when velocity information is impaired by using second-order motion stimuli, saccades directed towards moving targets land at positions the targets occupied ~100 ms before saccade initiation, but hand movements are accurate. Importantly, the longer latencies of hand movements allow additional time to process the available sensory information. When the period of time one sees the moving target before making the saccade is increased, saccades become accurate. In line with that, hand movements with short latencies show higher curvature, indicating corrections based on an update of incoming sensory information. These results suggest that movements are controlled by an independent and evolving combination of sensory information about the target's position and velocity.
Affiliation(s)
- Cristina de la Malla
- Vision and Control of Action Group, Department of Cognition, Development, and Psychology of Education, Institute of Neurosciences, Universitat de Barcelona, Barcelona, Catalonia, Spain.
- Alexander Goettker
- Justus Liebig Universität Giessen, Giessen, Germany.
- Center for Mind, Brain and Behavior, University of Marburg and Justus Liebig University, Giessen, Germany.
34
Stavropoulos A, Lakshminarasimhan KJ, Angelaki DE. Belief embodiment through eye movements facilitates memory-guided navigation. bioRxiv 2023:2023.08.21.554107. [PMID: 37662309 PMCID: PMC10473632 DOI: 10.1101/2023.08.21.554107]
Abstract
Neural network models optimized for task performance often excel at predicting neural activity but do not explain other properties, such as the distributed representation across functionally distinct areas. Distributed representations may arise from animals' strategies for resource utilization; however, fixation-based paradigms deprive animals of a vital resource: eye movements. During a naturalistic task in which humans use a joystick to steer and catch flashing fireflies in a virtual environment lacking position cues, subjects physically track the latent task variable with their gaze. We show that this strategy also holds during an inertial version of the task in the absence of optic flow and demonstrate that these task-relevant eye movements reflect an embodiment of the subjects' dynamically evolving internal beliefs about the goal. A neural network model with tuned recurrent connectivity between oculomotor and evidence-integrating frontoparietal circuits accounted for this behavioral strategy. Critically, this model better explained neural data from monkeys' posterior parietal cortex than task-optimized models unconstrained by such an oculomotor-based cognitive strategy. These results highlight the importance of unconstrained movement in working memory computations and establish a functional significance of oculomotor signals for evidence-integration and navigation computations via embodied cognition.
Affiliation(s)
- Dora E. Angelaki
- Center for Neural Science, New York University, New York, NY, USA
- Tandon School of Engineering, New York University, New York, NY, USA
35
Samonds JM, Szinte M, Barr C, Montagnini A, Masson GS, Priebe NJ. Mammals achieve common neural coverage of visual scenes using distinct sampling behaviors. bioRxiv 2023:2023.03.20.533210. [PMID: 36993477 PMCID: PMC10055212 DOI: 10.1101/2023.03.20.533210]
Abstract
Most vertebrates use head and eye movements to quickly change gaze orientation and sample different portions of the environment with periods of stable fixation. Visual information must be integrated across several fixations to construct a more complete perspective of the visual environment. In concert with this sampling strategy, neurons adapt to unchanging input to conserve energy and ensure that only novel information from each fixation is processed. We demonstrate how adaptation recovery times and saccade properties interact, and thus shape spatiotemporal tradeoffs observed in the motor and visual systems of different species. These tradeoffs predict that in order to achieve similar visual coverage over time, animals with smaller receptive field sizes require faster saccade rates. Indeed, we find comparable sampling of the visual environment by neuronal populations across mammals when integrating measurements of saccadic behavior with receptive field sizes and V1 neuronal density. We propose that these mammals share a common statistically driven strategy of maintaining coverage of their visual environment over time calibrated to their respective visual system characteristics.
36
Brenner E, Smeets JBJ. Continuous use of visual information about the position of the moving hand. Exp Brain Res 2023. [PMID: 37386195 PMCID: PMC10386923 DOI: 10.1007/s00221-023-06658-x]
Abstract
People generally look at a target when they want to reach for it. Doing so presumably helps them continuously update their judgments about the target's position and motion. Not looking at the hand, however, does not prevent people from updating judgments about its position on the basis of visual information, because people do respond to experimental perturbations of visual information about the position of their hand. Here, we study such responses by adding jitter to the movement of a cursor that follows participants' fingers. We analyse the response to the jitter in a way that reveals how the vigour of the response depends on the moment during the movement at which the change in cursor position occurs. We compare this to the response to equivalent jitter in the position of the target. We find that participants respond to jitter in the position of a cursor in much the same way as they respond to jitter in the target's position. The responses are more vigorous late in the movement, when adjustments need to be made within less time, but similarly so for the cursor as for the target. The responses are weaker for the cursor, presumably because of the jitter-free kinaesthetic information about the position of the finger.
Affiliation(s)
- Eli Brenner
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081 BT, Amsterdam, The Netherlands.
- Jeroen B J Smeets
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Van der Boechorststraat 7, 1081 BT, Amsterdam, The Netherlands
37
Nishizono R, Saijo N, Kashino M. Highly reproducible eyeblink timing during formula car driving. iScience 2023; 26:106803. [PMID: 37378324 PMCID: PMC10291330 DOI: 10.1016/j.isci.2023.106803]
Abstract
How do humans blink while driving a vehicle? Although gaze control patterns have previously been reported in relation to successful steering, eyeblinks, which disrupt vision, are believed to be randomly distributed during driving or are simply ignored. Here, we demonstrate that eyeblink timing shows reproducible patterns during real formula car driving and is related to car control. We studied three top-level racing drivers, whose eyeblinks and driving behavior were acquired during practice sessions. The results revealed that the drivers blinked at surprisingly similar positions on the courses. We identified three factors underlying the eyeblink patterns: each driver's individual blink count, the lap pace, associated with how strictly the driver followed their pattern on each lap, and car acceleration, associated with when and where to blink at a given moment. These findings suggest that eyeblink patterns reflect cognitive states during in-the-wild driving and that experts appear to change such cognitive states continuously and dynamically.
Affiliation(s)
- Ryota Nishizono
- NTT Communication Science Laboratories, Morinosato Wakamiya 3-1, Atsugi, Kanagawa 243-0198, Japan
- Naoki Saijo
- NTT Communication Science Laboratories, Morinosato Wakamiya 3-1, Atsugi, Kanagawa 243-0198, Japan
- Makio Kashino
- NTT Communication Science Laboratories, Morinosato Wakamiya 3-1, Atsugi, Kanagawa 243-0198, Japan
38
Ryu JS, Lee S, Chu Y, Ahn MS, Park YJ, Yang S. CoAt-Mixer: Self-attention deep learning framework for left ventricular hypertrophy using electrocardiography. PLoS One 2023; 18:e0286916. [PMID: 37289800 PMCID: PMC10249819 DOI: 10.1371/journal.pone.0286916]
Abstract
Left ventricular hypertrophy is a significant independent risk factor for all-cause mortality and morbidity, and an accurate diagnosis at an early stage of cardiac change is clinically significant. Electrocardiography is the most convenient, economical, and non-invasive screening method in primary care. However, the coincidence rate between actual left ventricular hypertrophy and electrocardiographic diagnostic findings is low, which has increased interest in algorithms using big data and deep learning. We attempted to diagnose left ventricular hypertrophy using big data and deep learning algorithms, and aimed to confirm their diagnostic power according to differences between males and females. This retrospective study used electrocardiographs obtained at Yonsei University Wonju Severance Christian Hospital, Wonju, Korea, from October 2010 to February 2020. Binary classification was performed for primary screening for left ventricular hypertrophy. Three datasets were used for the experiment: the male, female, and entire dataset. Cutoffs for binary classification were defined at values meaningful for a screening test (<132 g/m² vs. ≥132 g/m², and <109 g/m² vs. ≥109 g/m²). Six types of input were used for the classification tasks. We attempted to determine whether electrocardiography has predictive power for the diagnosis of left ventricular hypertrophy. For the entire dataset, the model achieved an area under the receiver operating characteristic curve (AUROC) of 0.836 (95% CI, 0.833-0.838) with a sensitivity of 78.37% (95% CI, 76.79-79.95). For the male dataset, the AUROC was 0.826 (95% CI, 0.822-0.830) with a sensitivity of 76.73% (95% CI, 75.14-78.33). For the female dataset, the AUROC was 0.772 (95% CI, 0.769-0.775) with a sensitivity of 72.90% (95% CI, 70.33-75.46). Our model confirmed that left ventricular hypertrophy can be classified to some extent using electrocardiography, demographics, and electrocardiography features. In particular, a learning environment that considered gender differences was constructed, and the difference in diagnostic power between men and women was confirmed. Our model will help patients with suspected left ventricular hypertrophy undergo screening tests at low cost, and our results suggest that gender-aware approaches can complement various currently proposed diagnostic methods.
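As a hedged illustration of the evaluation metrics reported above (not the authors' pipeline; the scores and labels below are made up), AUROC can be computed directly from predicted scores via the Mann-Whitney formulation, and sensitivity read off at a chosen decision threshold:

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case receives a
    higher score than a randomly chosen negative case (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity(scores, labels, threshold):
    """True-positive rate when scoring at or above the threshold counts as positive."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    return tp / sum(labels)
```

For example, `auroc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])` gives a perfect 1.0, since every positive case outscores every negative one.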
Affiliation(s)
- Ji Seung Ryu
- Department of Precision Medicine, Yonsei University Wonju College of Medicine, Wonju, Korea
- Solam Lee
- Department of Preventive Medicine, Yonsei University Wonju College of Medicine, Wonju, Korea
- Department of Dermatology, Yonsei University Wonju College of Medicine, Wonju, Korea
- Yuseong Chu
- Department of Biomedical Engineering, Yonsei University, Wonju, Korea
- Min-Soo Ahn
- Division of Cardiology, Department of Internal Medicine, Wonju Severance Christian Hospital, Yonsei University Wonju College of Medicine, Wonju, Korea
- Young Jun Park
- Division of Cardiology, Department of Internal Medicine, Wonju Severance Christian Hospital, Yonsei University Wonju College of Medicine, Wonju, Korea
- Sejung Yang
- Department of Precision Medicine, Yonsei University Wonju College of Medicine, Wonju, Korea
39
Wedel M, Pieters R, van der Lans R. Modeling Eye Movements During Decision Making: A Review. Psychometrika 2023; 88:697-729. [PMID: 35852670 PMCID: PMC10188393 DOI: 10.1007/s11336-022-09876-4]
Abstract
This article reviews recent advances in the psychometric and econometric modeling of eye movements during decision making. Eye movements offer a unique window on unobserved perceptual, cognitive, and evaluative processes of people who are engaged in decision making tasks. They provide new insights into these processes, which are not easily available otherwise, allow for explanations of fundamental search and choice phenomena, and enable predictions of future decisions. We propose a theoretical framework of the search and choice tasks that people commonly engage in and of the underlying cognitive processes involved in those tasks. We discuss how these processes drive specific eye-movement patterns. Our framework emphasizes the central role of task and strategy switching for complex goal attainment. We place the extant literature within that framework, highlight recent advances in modeling eye-movement behaviors during search and choice, and discuss limitations, challenges, and open problems. An agenda for further psychometric modeling of eye movements during decision making concludes the review.
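One common building block in this modeling tradition (a hedged sketch, not a specific model from the review) is a first-order Markov model of gaze transitions between areas of interest (AOIs), estimated from an ordered fixation sequence:

```python
from collections import defaultdict

def transition_matrix(fixations):
    """Estimate first-order Markov transition probabilities between
    areas of interest (AOIs) from an ordered sequence of fixated AOIs."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(fixations, fixations[1:]):  # consecutive fixation pairs
        counts[a][b] += 1
    probs = {}
    for a, row in counts.items():
        total = sum(row.values())
        probs[a] = {b: c / total for b, c in row.items()}
    return probs
```

For a hypothetical scanpath such as `["A", "B", "A", "B", "B"]`, the estimated probability of moving from AOI "A" to AOI "B" is 1.0, while from "B" the gaze is equally likely to return to "A" or stay on "B".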
Affiliation(s)
- Michel Wedel
- Robert H. Smith School of Business, University of Maryland, College Park, MD 20742-1815 USA
- Rik Pieters
- Tilburg University, Tilburg, The Netherlands
- Católica Lisbon School of Business and Economics, Universidade Católica Portuguesa, Lisbon, Portugal
- Ralf van der Lans
- Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong
40
Dietze N, Recker L, Poth CH. Warning signals only support the first action in a sequence. Cogn Res Princ Implic 2023; 8:29. [PMID: 37171646 PMCID: PMC10182231 DOI: 10.1186/s41235-023-00484-z]
Abstract
Acting upon target stimuli from the environment becomes faster when the targets are preceded by a warning (alerting) cue. Accordingly, alerting is often used to support action in safety-critical contexts (e.g., honking to alert others of a traffic situation). Crucially, however, the benefits of alerting for action have been established using laboratory tasks assessing only simple choice reactions. Real-world actions are considerably more complex and mainly consist of sensorimotor sequences of several sub-actions. Therefore, it is still unknown if the benefits of alerting for action transfer from simple choice reactions to such sensorimotor sequences. Here, we investigated how alerting affected performance in a sequential action task derived from the Trail-Making-Test, a well-established neuropsychological test of cognitive action control (Experiment 1). In addition to this task, participants performed a classic alerting paradigm including a simple choice reaction task (Experiment 2). Results showed that alerting sped up responding in both tasks, but in the sequential action task, this benefit was restricted to the first action of a sequence. This was the case, even when multiple actions were performed within a short time (Experiment 3), ruling out that the restriction of alerting to the first action was due to its short-lived nature. Taken together, these findings reveal the existence of an interface between phasic alertness and action control that supports the next action.
Affiliation(s)
- Niklas Dietze
- Department of Psychology, Neuro-Cognitive Psychology and Center for Cognitive Interaction Technology, Bielefeld University, P.O. Box 10 01 31, 33501, Bielefeld, Germany.
- Lukas Recker
- Department of Psychology, Neuro-Cognitive Psychology and Center for Cognitive Interaction Technology, Bielefeld University, P.O. Box 10 01 31, 33501, Bielefeld, Germany
- Christian H Poth
- Department of Psychology, Neuro-Cognitive Psychology and Center for Cognitive Interaction Technology, Bielefeld University, P.O. Box 10 01 31, 33501, Bielefeld, Germany
41
Zhu SL, Lakshminarasimhan KJ, Angelaki DE. Computational cross-species views of the hippocampal formation. Hippocampus 2023; 33:586-599. [PMID: 37038890 PMCID: PMC10947336 DOI: 10.1002/hipo.23535]
Abstract
The discovery of place cells and head direction cells in the hippocampal formation of freely foraging rodents has led to an emphasis on its role in encoding allocentric spatial relationships. In contrast, studies in head-fixed primates have additionally found representations of spatial views. We review recent experiments in freely moving monkeys that expand upon these findings and show that postural variables such as eye/head movements strongly influence neural activity in the hippocampal formation, suggesting that the function of the hippocampus depends on where the animal looks. We interpret these results in the light of recent studies in humans performing challenging navigation tasks, which suggest that, depending on the context, eye/head movements serve one of two roles: gathering information about the structure of the environment (active sensing) or externalizing the contents of internal beliefs/deliberation (embodied cognition). These findings prompt future experimental investigations into the information carried by signals flowing between the hippocampal formation and the brain regions controlling postural variables, and constitute a basis for updating computational theories of the hippocampal system to accommodate the influence of eye/head movements.
Affiliation(s)
- Seren L Zhu
- Center for Neural Science, New York University, New York, New York, USA
- Kaushik J Lakshminarasimhan
- Center for Theoretical Neuroscience, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, USA
- Dora E Angelaki
- Center for Neural Science, New York University, New York, New York, USA
- Mechanical and Aerospace Engineering, Tandon School of Engineering, New York University, New York, New York, USA
42
Cheng KY, Rehani M, Hebert JS. A scoping review of eye tracking metrics used to assess visuomotor behaviours of upper limb prosthesis users. J Neuroeng Rehabil 2023; 20:49. [PMID: 37095489 PMCID: PMC10127019 DOI: 10.1186/s12984-023-01180-1]
Abstract
Advanced upper limb prostheses aim to restore coordinated hand and arm function. However, this objective can be difficult to quantify as coordinated movements require an intact visuomotor system. Eye tracking has recently been applied to study the visuomotor behaviours of upper limb prosthesis users by enabling the calculation of eye movement metrics. This scoping review aims to characterize the visuomotor behaviours of upper limb prosthesis users as described by eye tracking metrics, to summarize the eye tracking metrics used to describe prosthetic behaviour, and to identify gaps in the literature and potential areas for future research. A review of the literature was performed to identify articles that reported eye tracking metrics to evaluate the visual behaviours of individuals using an upper limb prosthesis. Data on the level of amputation, type of prosthetic device, type of eye tracker, primary eye metrics, secondary outcome metrics, experimental task, aims, and key findings were extracted. Seventeen studies were included in this scoping review. A consistently reported finding is that prosthesis users have a characteristic visuomotor behaviour that differs from that of individuals with intact arm function. Visual attention has been reported to be directed more towards the hand and less towards the target during object manipulation tasks. A gaze switching strategy and delay to disengage gaze from the current target has also been reported. Differences in the type of prosthetic device and experimental task have revealed some distinct gaze behaviours. Control factors have been shown to be related to gaze behaviour, while sensory feedback and training interventions have been demonstrated to reduce the visual attention associated with prosthesis use. Eye tracking metrics have also been used to assess the cognitive load and sense of agency of prosthesis users. Overall, there is evidence that eye tracking is an effective tool to quantitatively assess the visuomotor behaviour of prosthesis users and the recorded eye metrics are sensitive to change in response to various factors. Additional studies are needed to validate the eye metrics used to assess cognitive load and sense of agency in upper limb prosthesis users.
Affiliation(s)
- Kodi Y Cheng
- Division of Physical Medicine and Rehabilitation, Department of Medicine, Faculty of Medicine and Dentistry, College of Health Science, University of Alberta, Edmonton, AB, Canada
- Department of Biomedical Engineering, Faculty of Medicine and Dentistry, College of Health Science, University of Alberta, Edmonton, AB, Canada
- Mayank Rehani
- Division of Physical Medicine and Rehabilitation, Department of Medicine, Faculty of Medicine and Dentistry, College of Health Science, University of Alberta, Edmonton, AB, Canada
- Jacqueline S Hebert
- Division of Physical Medicine and Rehabilitation, Department of Medicine, Faculty of Medicine and Dentistry, College of Health Science, University of Alberta, Edmonton, AB, Canada
- Department of Biomedical Engineering, Faculty of Medicine and Dentistry, College of Health Science, University of Alberta, Edmonton, AB, Canada
- Glenrose Rehabilitation Hospital, Alberta Health Services, Edmonton, AB, Canada

43
Torricelli F, Tomassini A, Pezzulo G, Pozzo T, Fadiga L, D'Ausilio A. Motor invariants in action execution and perception. Phys Life Rev 2023; 44:13-47. [PMID: 36462345 DOI: 10.1016/j.plrev.2022.11.003] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2022] [Accepted: 11/21/2022] [Indexed: 11/27/2022]
Abstract
The nervous system is sensitive to statistical regularities of the external world and forms internal models of these regularities to predict environmental dynamics. Given the inherently social nature of human behavior, being capable of building reliable predictive models of others' actions may be essential for successful interaction. While social prediction might seem to be a daunting task, the study of human motor control has accumulated ample evidence that our movements follow a series of kinematic invariants, which can be used by observers to reduce their uncertainty during social exchanges. Here, we provide an overview of the most salient regularities that shape biological motion, examine the role of these invariants in recognizing others' actions, and speculate that anchoring socially-relevant perceptual decisions to such kinematic invariants provides a key computational advantage for inferring conspecifics' goals and intentions.
Affiliation(s)
- Francesco Torricelli
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Alice Tomassini
- Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Via San Martino della Battaglia 44, 00185 Rome, Italy
- Thierry Pozzo
- Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; INSERM UMR1093-CAPS, UFR des Sciences du Sport, Université Bourgogne Franche-Comté, F-21000, Dijon, France
- Luciano Fadiga
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy
- Alessandro D'Ausilio
- Department of Neuroscience and Rehabilitation, University of Ferrara, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy; Center for Translational Neurophysiology of Speech and Communication, Italian Institute of Technology, Via Fossato di Mortara, 17-19, 44121 Ferrara, Italy

44
Singh T, Rizzo JR, Bonnet C, Semrau JA, Herter TM. Enhanced cognitive interference during visuomotor tasks may cause eye-hand dyscoordination. Exp Brain Res 2023; 241:547-558. [PMID: 36625969 PMCID: PMC10416313 DOI: 10.1007/s00221-023-06550-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2022] [Accepted: 01/05/2023] [Indexed: 01/11/2023]
Abstract
In complex visuomotor tasks, such as cooking, people make many saccades to continuously search for items before and during reaching movements. These tasks require cognitive resources, such as short-term memory and task-switching. Cognitive load may impact limb motor performance by increasing demands on mental processes, but the mechanisms remain unclear. The Trail-Making Tests, in which participants sequentially search for and make reaching movements to 25 targets, consist of a simple numeric variant (Trails-A) and a cognitively challenging variant that requires alphanumeric switching (Trails-B). We have previously shown that stroke survivors and age-matched controls make many more saccades in Trails-B, and those increases in saccades are associated with decreases in the speed and smoothness of reaching movements. However, it remains unclear how patients with neurological injuries, e.g., stroke, manage progressive increases in cognitive load during visuomotor tasks such as the Trail-Making Tests. As a Trails-B trial progresses, switching between numbers and letters leads to progressive increases in cognitive load. Here, we show that stroke survivors with damage to frontoparietal areas and age-matched controls made more saccades and had longer fixations as they progressed through the 25 alphanumeric targets in Trails-B. Furthermore, when stroke survivors made saccades during reaching movements in Trails-B, their movement speed slowed significantly. Thus, damage to frontoparietal areas serving cognitive motor functions may cause interference between oculomotor, visual, and limb motor functions, which could lead to significant disruptions in activities of daily living. These findings augment our understanding of the mechanisms that underpin cognitive-motor interference during complex visuomotor tasks.
Affiliation(s)
- Tarkeshwar Singh
- Department of Kinesiology, The Pennsylvania State University, 32 Rec Building, University Park, PA, 16802, USA
- John-Ross Rizzo
- Department of Rehabilitation Medicine and Neurology, New York University Langone Medical Center, New York, NY, USA
- Cédrick Bonnet
- Univ. Lille, CNRS, UMR 9193-SCALab-Sciences Cognitives et Sciences Affectives, Lille, France
- Jennifer A Semrau
- Department of Kinesiology and Applied Physiology, University of Delaware, Newark, DE 19716, USA
- Troy M Herter
- Department of Kinesiology, University of South Carolina, Columbia, SC, 29208, USA

45
Hayes TR, Henderson JM. Scene inversion reveals distinct patterns of attention to semantically interpreted and uninterpreted features. Cognition 2022; 229:105231. [DOI: 10.1016/j.cognition.2022.105231] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2021] [Revised: 07/19/2022] [Accepted: 07/20/2022] [Indexed: 11/03/2022]
46
Mao D. Neural Correlates of Spatial Navigation in Primate Hippocampus. Neurosci Bull 2022; 39:315-327. [PMID: 36319893 PMCID: PMC9905402 DOI: 10.1007/s12264-022-00968-w] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2022] [Accepted: 06/16/2022] [Indexed: 11/07/2022] Open
Abstract
The hippocampus has been extensively implicated in spatial navigation in rodents and, more recently, in bats. Numerous studies have revealed that various kinds of spatial information are encoded across hippocampal regions. In contrast, investigations of spatial behavioral correlates in the primate hippocampus are scarce and have been mostly limited to head-restrained subjects during virtual navigation. However, recent advances made in freely-moving primates suggest marked differences in spatial representations from rodents, albeit with some similarities. Here, we review empirical studies examining the neural correlates of spatial navigation in the primate (including human) hippocampus at the levels of local field potentials and single units. Lower-frequency theta oscillations are often intermittent. Single-neuron responses are highly mixed and task-dependent. We also discuss neuronal selectivity in the eye and head coordinates. Finally, we propose that future studies should focus on investigating both intrinsic and extrinsic population activity and examining spatial coding properties in large-scale hippocampal-neocortical networks across tasks.
Affiliation(s)
- Dun Mao
- Center for Excellence in Brain Science and Intelligent Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, 200031, China
- University of Chinese Academy of Sciences, Beijing, 100049, China

47
Subramaniam A, Liu S, Lochhead L, Appelbaum LG. A systematic review of transcranial direct current stimulation on eye movements and associated psychological function. Rev Neurosci 2022; 34:349-364. [PMID: 36310385 DOI: 10.1515/revneuro-2022-0082] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2022] [Accepted: 09/07/2022] [Indexed: 11/05/2022]
Abstract
The last decades have seen a rise in the use of transcranial direct current stimulation (tDCS) to modulate brain activity and associated behavior. Concurrently, eye tracking (ET) technology has improved to allow more precise quantitative measurement of gaze behavior, offering a window into the mechanisms of vision and cognition. When combined, tDCS and ET provide a powerful system to probe brain function and to measure the impact on visual function, and an increasing number of studies utilize the two techniques together. This pre-registered, systematic review describes the literature that integrates these approaches, with the goal of changing brain activity with tDCS and measuring associated changes in eye movements with ET. The literature search identified 26 articles that combined ET and tDCS in this probe-and-measure model; they are systematically reviewed here. All studies implemented controlled interventional designs to address topics related to oculomotor control, cognitive processing, emotion regulation, or cravings in healthy volunteers and patient populations. Across these studies, active stimulation typically led to changes in the number, duration, and timing of fixations compared with control stimulation. Notably, half the studies addressed emotion regulation, each showing hypothesized effects of tDCS on ET metrics, while tDCS targeting the frontal cortex was widely used and also generally produced the expected modulation of ET metrics. This review reveals promising evidence of the impact of tDCS on eye movements and associated psychological function, offering a framework for effective designs with recommendations for future studies.
Affiliation(s)
- Ashwin Subramaniam
- Department of Psychiatry and Behavioral Sciences, Duke University, Durham, NC 27710, USA
- Sicong Liu
- Department of Psychiatry and Behavioral Sciences, Duke University, Durham, NC 27710, USA
- Annenberg School of Communication, University of Pennsylvania, Philadelphia, PA 19104, USA
- Liam Lochhead
- Department of Psychiatry, University of California, San Diego, CA 92093, USA
- Lawrence Gregory Appelbaum
- Department of Psychiatry and Behavioral Sciences, Duke University, Durham, NC 27710, USA
- Department of Psychiatry, University of California, San Diego, CA 92093, USA

48
Eye spy: Gaze communication and deception during hide-and-seek. Cognition 2022; 227:105209. [DOI: 10.1016/j.cognition.2022.105209] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Revised: 05/18/2022] [Accepted: 06/16/2022] [Indexed: 11/24/2022]
49
Helbing J, Draschkow D, L-H Võ M. Auxiliary Scene-Context Information Provided by Anchor Objects Guides Attention and Locomotion in Natural Search Behavior. Psychol Sci 2022; 33:1463-1476. [PMID: 35942922 DOI: 10.1177/09567976221091838] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Successful adaptive behavior requires efficient attentional and locomotive systems. Previous research has thoroughly investigated how we achieve this efficiency during natural behavior by exploiting prior knowledge related to targets of our actions (e.g., attending to metallic targets when looking for a pot) and to the environmental context (e.g., looking for the pot in the kitchen). Less is known about whether and how individual nontarget components of the environment support natural behavior. In our immersive virtual reality task, 24 adult participants searched for objects in naturalistic scenes in which we manipulated the presence and arrangement of large, static objects that anchor predictions about targets (e.g., the sink provides a prediction for the location of the soap). Our results show that gaze and body movements in this naturalistic setting are strongly guided by these anchors. These findings demonstrate that objects auxiliary to the target are incorporated into the representations guiding attention and locomotion.
Affiliation(s)
- Jason Helbing
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
- Dejan Draschkow
- Brain and Cognition Laboratory, Department of Experimental Psychology, University of Oxford
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford
- Melissa L-H Võ
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt

50
Anderson EM, Seemiller ES, Smith LB. Scene saliencies in egocentric vision and their creation by parents and infants. Cognition 2022; 229:105256. [PMID: 35988453 DOI: 10.1016/j.cognition.2022.105256] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Revised: 08/09/2022] [Accepted: 08/11/2022] [Indexed: 11/15/2022]
Abstract
Across the lifespan, humans are biased to look first at what is easy to see, with a handful of well-documented visual saliences shaping our attention (e.g., Itti & Koch, 2001). These attentional biases may emerge from the contexts in which moment-to-moment attention occurs, where perceivers and their social partners actively shape bottom-up saliences, moving their bodies and objects to make targets of interest more salient. The goal of the present study was to determine the bottom-up saliences present in infant egocentric images and to provide evidence on the role that infants and their mature social partners play in highlighting targets of interest via these saliences. We examined 968 unique scenes in which an object had purposefully been placed in the infant's egocentric view, drawn from videos created by one-year-old infants wearing a head camera during toy-play with a parent. To understand which saliences mattered in these scenes, we conducted a visual search task, asking participants (n = 156) to find objects in the egocentric images. To connect this to the behaviors of perceivers, we then characterized the saliences of objects placed by infants or parents compared to objects that were otherwise present in the scenes. Our results show that body-centric properties, such as increases in the centering and visual size of the object, as well as decreases in the number of competing objects immediately surrounding it, both predicted faster search time and distinguished placed and unplaced objects. The present results suggest that the bottom-up saliences that can be readily controlled by perceivers and their social partners may most strongly impact our attention. This finding has implications for the functional role of saliences in human vision, their origin, the social structure of perceptual environments, and how the relation between bottom-up and top-down control of attention in these environments may support infant learning.
Affiliation(s)
- Linda B Smith
- Psychological and Brain Sciences, Indiana University, USA