1. Author Correction: Minimal reporting guideline for research involving eye tracking (2023 edition). Behav Res Methods 2024. PMID: 38691219. DOI: 10.3758/s13428-024-02438-9.

2. A field test of computer-vision-based gaze estimation in psychology. Behav Res Methods 2024; 56:1900-1915. PMID: 37101100. PMCID: PMC10990994. DOI: 10.3758/s13428-023-02125-1.
Abstract
Computer-vision-based gaze estimation refers to techniques that estimate gaze direction directly from video recordings of the eyes or face without the need for an eye tracker. Although many such methods exist, their validation is often found in the technical literature (e.g., computer science conference papers). We aimed to (1) identify which computer-vision-based gaze estimation methods are usable by the average researcher in fields such as psychology or education, and (2) evaluate these methods. We searched for methods that do not require calibration and have clear documentation. Two toolkits, OpenFace and OpenGaze, were found to fulfill these criteria. First, we present an experiment where adult participants fixated on nine stimulus points on a computer screen. We filmed their face with a camera and processed the recorded videos with OpenFace and OpenGaze. We conclude that OpenGaze is accurate and precise enough to be used in screen-based experiments with stimuli separated by at least 11 degrees of gaze angle. OpenFace was not sufficiently accurate for such situations but can potentially be used in sparser environments. We then examined whether OpenFace could be used with horizontally separated stimuli in a sparse environment with infant participants. We compared dwell measures based on OpenFace estimates to the same measures based on manual coding. We conclude that OpenFace gaze estimates may potentially be used with measures such as relative total dwell time to sparse, horizontally separated areas of interest, but should not be used to draw conclusions about measures such as dwell duration.
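Measures such as relative total dwell time can be computed from any per-sample gaze estimate, including computer-vision-based output. A minimal sketch of the idea (the AOI bounds, the gaze-x representation, and the None-for-missing convention are illustrative assumptions, not the paper's actual pipeline):

```python
def relative_dwell(gaze_x, aois):
    """Relative total dwell time: each AOI's share of the total time spent
    in any AOI. gaze_x is a list of horizontal gaze estimates (None = missing);
    aois maps an AOI name to its (xmin, xmax) bounds."""
    counts = {name: 0 for name in aois}
    for x in gaze_x:
        if x is None:  # skip samples without a gaze estimate
            continue
        for name, (lo, hi) in aois.items():
            if lo <= x <= hi:
                counts[name] += 1
    total = sum(counts.values())
    return {name: (c / total if total else 0.0) for name, c in counts.items()}
```

Because the measure is a ratio over valid in-AOI samples only, it is more robust to data loss than absolute dwell duration, which matches the abstract's caution about duration-based measures.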

3. GlassesValidator: A data quality tool for eye tracking glasses. Behav Res Methods 2024; 56:1476-1484. PMID: 37326770. PMCID: PMC10991001. DOI: 10.3758/s13428-023-02105-5.
Abstract
According to the proposal for a minimum reporting guideline for an eye tracking study by Holmqvist et al. (2022), the accuracy (in degrees) of eye tracking data should be reported. Currently, there is no easy way to determine accuracy for wearable eye tracking recordings. To enable determining the accuracy quickly and easily, we have produced a simple validation procedure using a printable poster and accompanying Python software. We tested the poster and procedure with 61 participants using one wearable eye tracker. In addition, the software was tested with six different wearable eye trackers. We found that the validation procedure can be administered within a minute per participant and provides measures of accuracy and precision. Calculating the eye-tracking data quality measures can be done offline on a simple computer and requires no advanced computer skills.
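The accuracy and precision measures such a procedure reports can be illustrated with the standard angular definitions; this is a generic sketch (the (azimuth, elevation) input format is an assumption for illustration), not glassesValidator's actual implementation:

```python
import math

def angular_distance(a, b):
    """Angle in degrees between two (azimuth, elevation) gaze directions,
    both given in degrees."""
    az1, el1 = map(math.radians, a)
    az2, el2 = map(math.radians, b)
    # Convert each direction to a unit vector, then take the arccos of the dot product.
    v1 = (math.cos(el1) * math.cos(az1), math.cos(el1) * math.sin(az1), math.sin(el1))
    v2 = (math.cos(el2) * math.cos(az2), math.cos(el2) * math.sin(az2), math.sin(el2))
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(v1, v2))))
    return math.degrees(math.acos(dot))

def accuracy_and_precision(samples, target):
    """Accuracy: mean angular offset from the validation target (deg).
    Precision: RMS of sample-to-sample angular distances (deg)."""
    offsets = [angular_distance(s, target) for s in samples]
    accuracy = sum(offsets) / len(offsets)
    s2s = [angular_distance(s1, s2) for s1, s2 in zip(samples, samples[1:])]
    precision = math.sqrt(sum(d * d for d in s2s) / len(s2s)) if s2s else 0.0
    return accuracy, precision
```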

4. What is a blink? Classifying and characterizing blinks in eye openness signals. Behav Res Methods 2024. PMID: 38424292. DOI: 10.3758/s13428-023-02333-9.
Abstract
Blinks, the closing and opening of the eyelids, are used in a wide array of fields where human function and behavior are studied. In data from video-based eye trackers, blink rate and duration are often estimated from the pupil-size signal. However, blinks and their parameters can be estimated only indirectly from this signal, since it does not explicitly contain information about the eyelid position. We ask whether blinks detected from an eye openness signal that estimates the distance between the eyelids (EO blinks) are comparable to blinks detected with a traditional algorithm using the pupil-size signal (PS blinks) and how robust blink detection is when data quality is low. In terms of rate, there was an almost-perfect overlap between EO and PS blinks (F1 score: 0.98) when the head was in the center of the eye tracker's tracking range, where data quality was high, and a high overlap (F1 score: 0.94) when the head was at the edge of the tracking range, where data quality was worse. When there was a difference in blink rate between EO and PS blinks, it was mainly due to data loss in the pupil-size signal. Blink durations were about 60 ms longer in EO blinks compared to PS blinks. Moreover, the dynamics of EO blinks were similar to results from previous literature. We conclude that the eye openness signal together with our proposed blink detection algorithm provides an advantageous method to detect and describe blinks in greater detail.
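A minimal threshold-based detector over an eye openness signal can be sketched as follows. The threshold, the units (baseline near 1.0 when the eye is open), and the None-for-missing convention are illustrative assumptions; the paper's algorithm is more elaborate:

```python
def detect_blinks(openness, threshold=0.5, min_samples=2):
    """Detect blinks as runs where the eye openness signal stays below
    `threshold`. Returns (onset_index, offset_index) pairs, offset exclusive;
    runs shorter than `min_samples` are discarded as noise."""
    blinks, start = [], None
    for i, v in enumerate(openness):
        below = v is not None and v < threshold
        if below and start is None:
            start = i  # candidate blink onset
        elif not below and start is not None:
            if i - start >= min_samples:
                blinks.append((start, i))
            start = None
    if start is not None and len(openness) - start >= min_samples:
        blinks.append((start, len(openness)))  # blink still ongoing at end of signal
    return blinks
```

Unlike a pupil-size-based detector, this operates on eyelid distance directly, so partial closures that never fully occlude the pupil can still be registered.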

5. Representative design: A realistic alternative to (systematic) integrative design. Behav Brain Sci 2024; 47:e48. PMID: 38311450. DOI: 10.1017/s0140525x23002200.
Abstract
We disagree with Almaatouq et al. that no realistic alternative exists to the "one-at-a-time" paradigm. Seventy years ago, Egon Brunswik introduced representative design, which offers a clear path to commensurability and generality. Almaatouq et al.'s integrative design cannot guarantee the external validity and generalizability of results, which are sorely needed, whereas representative design tackles the problem head-on.

6. Large eye-head gaze shifts measured with a wearable eye tracker and an industrial camera. Behav Res Methods 2024. PMID: 38200239. DOI: 10.3758/s13428-023-02316-w.
Abstract
We built a novel setup to record large gaze shifts (up to 140°). The setup consists of a wearable eye tracker and a high-speed camera with fiducial marker technology to track the head. We tested our setup by replicating findings from the classic eye-head gaze shift literature. We conclude that our new, inexpensive setup is good enough to investigate the dynamics of large eye-head gaze shifts. This novel setup could be used for future research on large eye-head gaze shifts, but also for research on gaze during, e.g., human interaction. We further discuss reference frames and terminology in head-free eye tracking. Despite a transition from head-fixed eye tracking to head-free gaze tracking, researchers still use head-fixed eye movement terminology when discussing world-fixed gaze phenomena. We propose using more specific terminology for world-fixed phenomena, including gaze fixation, gaze pursuit, and gaze saccade.

7. Retraction Note: Eye tracking: empirical foundations for a minimal reporting guideline. Behav Res Methods 2024; 56:511-512. PMID: 37973712. PMCID: PMC10794474. DOI: 10.3758/s13428-023-02285-0.

8. How robust are wearable eye trackers to slow and fast head and body movements? Behav Res Methods 2023; 55:4128-4142. PMID: 36326998. PMCID: PMC10700439. DOI: 10.3758/s13428-022-02010-3.
Abstract
How well can modern wearable eye trackers cope with head and body movement? To investigate this question, we asked four participants to stand still, walk, skip, and jump while fixating a static physical target in space. We did this for six different eye trackers. All the eye trackers were capable of recording gaze during the most dynamic episodes (skipping and jumping). The accuracy became worse as movement got wilder. During skipping and jumping, the biggest error was 5.8°. However, most errors were smaller than 3°. We discuss the implications of decreased accuracy in the context of different research scenarios.

9. Stable eye versus mouth preference in a live speech-processing task. Sci Rep 2023; 13:12878. PMID: 37553414. PMCID: PMC10409748. DOI: 10.1038/s41598-023-40017-8.
Abstract
Looking at the mouth region is thought to be a useful strategy for speech-perception tasks. The tendency to look at the eyes versus the mouth of another person during speech processing has thus far mainly been studied using screen-based paradigms. In this study, we estimated the eye-mouth-index (EMI) of 38 adult participants in a live setting. Participants were seated across the table from an experimenter, who read sentences out loud for the participant to remember in both a familiar (English) and unfamiliar (Finnish) language. No statistically significant difference in the EMI between the familiar and the unfamiliar languages was observed. Total relative looking time at the mouth also did not predict the number of correctly identified sentences. Instead, we found that the EMI was higher during an instruction phase than during the speech-processing task. Moreover, we observed high intra-individual correlations in the EMI across the languages and different phases of the experiment. We conclude that there are stable individual differences in looking at the eyes versus the mouth of another person. Furthermore, this behavior appears to be flexible and dependent on the requirements of the situation (speech processing or not).
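An eye-mouth index of this kind can be operationalized as the eyes' share of the combined eyes-plus-mouth looking time, so 1.0 means only the eyes were looked at and 0.0 only the mouth. This ratio form is an assumption for illustration; the paper defines its own EMI:

```python
def eye_mouth_index(eye_dwell_ms, mouth_dwell_ms):
    """Eye-mouth index (EMI): proportion of combined eyes+mouth looking time
    spent on the eyes. Returns NaN when neither region was looked at."""
    total = eye_dwell_ms + mouth_dwell_ms
    if total == 0:
        return float("nan")
    return eye_dwell_ms / total
```

Because the index is a within-face ratio, it is comparable across phases of an experiment that differ in overall looking time, which is what makes the reported intra-individual correlations interpretable.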

10. Minimal reporting guideline for research involving eye tracking (2023 edition). Behav Res Methods 2023. PMID: 37507649. DOI: 10.3758/s13428-023-02187-1.
Abstract
A guideline is proposed that comprises the minimum items to be reported in research studies involving an eye tracker and human or non-human primate participant(s). This guideline was developed over a 3-year period using a consensus-based process via an open invitation to the international eye tracking community. This guideline will be reviewed at maximum intervals of 4 years.

11. The amplitude of small eye movements can be accurately estimated with video-based eye trackers. Behav Res Methods 2023; 55:657-669. PMID: 35419703. PMCID: PMC10027793. DOI: 10.3758/s13428-021-01780-6.
Abstract
Estimating gaze direction with a digital video-based pupil and corneal reflection (P-CR) eye tracker is challenging, partly because a video camera is limited in terms of spatial and temporal resolution, and because the captured eye images contain noise. Through computer simulation, we evaluated the localization accuracy of pupil and CR centers in the eye image for small eye rotations (≪ 1 deg). Results highlight how inaccuracies in center localization are related to (1) how many pixels the pupil and CR span in the eye camera image, (2) the method used to compute the centers of the pupil and CRs, and (3) the level of image noise. Our results provide a possible explanation for why the amplitude of small saccades may not be accurately estimated by many currently used video-based eye trackers. We conclude that eye movements with arbitrarily small amplitudes can be accurately estimated using the P-CR eye-tracking principle, given that the level of image noise is low and the pupil and CR span enough pixels in the eye camera image, or if localization of the CR is based on the intensity values in the eye image instead of a binary representation.
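The contrast between a binary and an intensity-based center estimate can be shown on a toy grey-level image. This is a pure illustration of the two centroid definitions; real P-CR pipelines operate on camera frames with additional sub-pixel refinement:

```python
def centroid_binary(image, threshold):
    """Center of mass of pixels above `threshold` (binary representation):
    every suprathreshold pixel counts equally."""
    xs = ys = n = 0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v > threshold:
                xs += x; ys += y; n += 1
    return (xs / n, ys / n) if n else None

def centroid_intensity(image):
    """Intensity-weighted center of mass: grey values weight each pixel,
    so the estimate shifts toward brighter pixels."""
    xs = ys = w = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            xs += x * v; ys += y * v; w += v
    return (xs / w, ys / w) if w else None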

12.
Abstract
In this paper, we present a review of how the various aspects of any study using an eye tracker (such as the instrument, methodology, environment, participant, etc.) affect the quality of the recorded eye-tracking data and the obtained eye-movement and gaze measures. We take this review to represent the empirical foundation for reporting guidelines of any study involving an eye tracker. We compare this empirical foundation to five existing reporting guidelines and to a database of 207 published eye-tracking studies. We find that reporting guidelines vary substantially and do not match with actual reporting practices. We end by deriving a minimal, flexible reporting guideline based on empirical research (Section "An empirically based minimal reporting guideline").

13. Fixation classification: how to merge and select fixation candidates. Behav Res Methods 2022; 54:2765-2776. PMID: 35023066. PMCID: PMC9729319. DOI: 10.3758/s13428-021-01723-1.
Abstract
Eye trackers are applied in many research fields (e.g., cognitive science, medicine, marketing research). To give meaning to the eye-tracking data, researchers have a broad choice of classification methods to extract various behaviors (e.g., saccade, blink, fixation) from the gaze signal. There is extensive literature about the different classification algorithms. Surprisingly, not much is known about the effect of fixation and saccade selection rules that are usually (implicitly) applied. We want to answer the following question: What is the impact of the selection-rule parameters (minimal saccade amplitude and minimal fixation duration) on the distribution of fixation durations? To answer this question, we used eye-tracking data with high and low quality and seven different classification algorithms. We conclude that selection rules play an important role in merging and selecting fixation candidates. For eye-tracking data with good-to-moderate precision (RMSD < 0.5°), the classification algorithm of choice does not matter too much as long as it is sensitive enough and is followed by a rule that selects saccades with amplitudes larger than 1.0° and a rule that selects fixations with duration longer than 60 ms. Because of the importance of selection, researchers should always report whether they performed selection and the values of their parameters.
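The two selection rules can be sketched as a post-processing pass over fixation candidates. The dict representation and the duration-weighted merge are illustrative assumptions, not a specific published algorithm:

```python
import math

def select_fixations(candidates, min_saccade_amp=1.0, min_fix_dur=60):
    """candidates: fixation candidates in temporal order, each a dict with
    'start'/'end' (ms) and 'x'/'y' (deg). First merge neighbors separated by
    a saccade smaller than min_saccade_amp (deg), then discard merged
    fixations shorter than min_fix_dur (ms)."""
    merged = []
    for c in candidates:
        if merged and math.hypot(c["x"] - merged[-1]["x"],
                                 c["y"] - merged[-1]["y"]) < min_saccade_amp:
            prev = merged[-1]
            # Merge: extend in time; position is the duration-weighted average.
            d1, d2 = prev["end"] - prev["start"], c["end"] - c["start"]
            w = d1 + d2
            prev["x"] = (prev["x"] * d1 + c["x"] * d2) / w
            prev["y"] = (prev["y"] * d1 + c["y"] * d2) / w
            prev["end"] = c["end"]
        else:
            merged.append(dict(c))
    return [f for f in merged if f["end"] - f["start"] >= min_fix_dur]
```

Note how the order matters: merging first lets two short candidates survive the duration rule as one fixation, which is exactly why the abstract stresses reporting both parameters.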

14. Gaze and speech behavior in parent–child interactions: The role of conflict and cooperation. Curr Psychol 2021. DOI: 10.1007/s12144-021-02532-7.
Abstract
A primary mode of human social behavior is face-to-face interaction. In this study, we investigated the characteristics of gaze and its relation to speech behavior during video-mediated face-to-face interactions between parents and their preadolescent children. 81 parent–child dyads engaged in conversations about cooperative and conflictive family topics. We used a dual-eye tracking setup that is capable of concurrently recording eye movements, frontal video, and audio from two conversational partners. Our results show that children spoke more in the cooperation-scenario whereas parents spoke more in the conflict-scenario. Parents gazed slightly more at the eyes of their children in the conflict-scenario compared to the cooperation-scenario. Both parents and children looked more at the other's mouth region while listening compared to while speaking. Results are discussed in terms of the role that parents and children take during cooperative and conflictive interactions and how gaze behavior may support and coordinate such interactions.

15.

16. Replacing eye trackers in ongoing studies: A comparison of eye-tracking data quality between the Tobii Pro TX300 and the Tobii Pro Spectrum. Infancy 2021; 27:25-45. PMID: 34687142. DOI: 10.1111/infa.12441.
Abstract
The Tobii Pro TX300 is a popular eye tracker in developmental eye-tracking research, yet it is no longer manufactured. If a TX300 breaks down, it may have to be replaced. The data quality of the replacement eye tracker may differ from that of the TX300, which may affect the experimental outcome measures. This is problematic for longitudinal and multi-site studies, and for researchers replacing eye trackers between studies. We therefore ask how the TX300 and its successor, the Tobii Pro Spectrum, compare in terms of eye-tracking data quality. Data quality, operationalized through precision, accuracy, and data loss, was compared between eye trackers for three age groups (around 5 months, 10 months, and 3 years). Precision was better for all gaze position signals obtained with the Spectrum in comparison to the TX300. Accuracy of the Spectrum was higher for the 5-month-old and 10-month-old children. For the 3-year-old children, accuracy was similar across both eye trackers. Gaze position signals from the Spectrum exhibited lower proportions of data loss, and the duration of the data loss periods tended to be shorter. In conclusion, the Spectrum produces gaze position signals with higher data quality, especially for the younger infants. Implications for data analysis are discussed.
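Of the three data-quality measures, data loss is the simplest to compute: both the overall proportion of lost samples and the lengths of contiguous loss periods can be read off a gaze signal in which missing samples are flagged (here as None; the representation is an assumption for illustration):

```python
def data_loss(samples):
    """Return (proportion of lost samples, lengths of contiguous loss periods
    in samples). A sample counts as lost when the tracker reported no gaze
    position (None)."""
    periods, run = [], 0
    for s in samples:
        if s is None:
            run += 1            # extend the current loss period
        elif run:
            periods.append(run)  # loss period ended
            run = 0
    if run:
        periods.append(run)      # signal ended mid-loss
    lost = sum(periods)
    return (lost / len(samples) if samples else 0.0, periods)
```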

17. Perception of the Potential for Interaction in Social Scenes. Iperception 2021; 12:20416695211040237. PMID: 34589197. PMCID: PMC8474344. DOI: 10.1177/20416695211040237.
Abstract
In urban environments, humans often encounter other people that may engage one in interaction. How do humans perceive such invitations to interact at a glance? We briefly presented participants with pictures of actors carrying out one of 11 behaviors (e.g., waving or looking at a phone) at four camera-actor distances. Participants were asked to describe what they might do in such a situation, how they decided, and what stood out most in the photograph. In addition, participants rated how likely they deemed interaction to take place. Participants formulated clear responses about how they might act. We show convincingly that what participants would do depended on the depicted behavior, but not the camera-actor distance. The likeliness to interact ratings depended both on the depicted behavior and the camera-actor distance. We conclude that humans perceive the "gist" of photographs and that various aspects of the actor, action, and context depicted in photographs are subjectively available at a glance. Our conclusions are discussed in the context of scene perception, social robotics, and intercultural differences.

18. Inhibition of return in the oculomotor decision process: Dissociating visual target discrimination from saccade readiness delays. J Exp Psychol Hum Percept Perform 2020; 47:140-160. PMID: 33180546. DOI: 10.1037/xhp0000870.
Abstract
Saccades toward previously cued or fixated locations typically have longer latencies than those toward novel locations, a phenomenon known as inhibition of return (IOR). Despite extensive debate on its potential function, the role of IOR in the oculomotor decision process remains unclear. Here, we ask whether the effect on eye movement planning is best characterized as a delay in visual target discrimination or as a reduction in readiness to execute the movement (saccade readiness). To evaluate this question, we use target-distractor tasks with clear speed-accuracy trade-offs. Simultaneously cueing both the target and distractor (or neither), we find longer latencies at the cued locations. Despite this delay in latencies, accuracy improves in line with the speed-accuracy trade-off curve (Experiment 1). This suggests that while visual target discrimination can progress unimpeded, saccade readiness is reduced. Based on this reduction in readiness, we predict that the more saccades rely on visual target discrimination, the less their destination will be affected by inducing IOR. Indeed, after cueing either the target or an onset distractor (Experiment 2), short-latency, stimulus-driven saccades are strongly biased away from the cued location, while the destinations of longer-latency, goal-driven saccades are affected only minimally. The fact that primarily stimulus-driven saccades are affected by inducing IOR is interesting, as it can explain why the spatial bias associated with IOR is not consistently found.

19.
Abstract
As humans move through parts of their environment, they meet others that may or may not try to interact with them. Where do people look when they meet others? We had participants wearing an eye tracker walk through a university building. On the way, they encountered nine "walkers." Walkers were instructed to, for example, ignore the participant, greet him or her, or attempt to hand out a flyer. The participants' gaze was mostly directed to the currently relevant body parts of the walker. Thus, the participants' gaze depended on the walker's action. Individual differences in participants' looking behavior were consistent across walkers. Participants who did not respond to the walker seemed to look less at that walker, although this difference was not statistically significant. We suggest that models of gaze allocation should take social motivation into account.

20.
Abstract
Human crowds provide an interesting case for research on the perception of people. In this study, we investigate how visual information is acquired for (1) navigating human crowds and (2) seeking out social affordances in crowds by studying gaze behavior during human crowd navigation under different task instructions. Observers (n = 11) wore head-mounted eye-tracking glasses and walked two rounds through hallways containing walking crowds (n = 38) and static objects. For round one, observers were instructed to avoid collisions. For round two, observers furthermore had to indicate with a button press whether oncoming people made eye contact. Task performance (walking speed, absence of collisions) was similar across rounds. Fixation durations indicated that heads, bodies, objects, and walls held gaze for comparable durations. Only crowds in the distance held gaze relatively longer. We find no compelling evidence that human bodies and heads hold one's gaze more than objects while navigating crowds. When eye contact was assessed, heads were fixated more often and for a total longer duration, which came at the cost of looking at bodies. We conclude that gaze behavior in crowd navigation is task-dependent, and that not every fixation is strictly necessary for navigating crowds. When explicitly tasked with seeking out potential social affordances, gaze is modulated as a result. We discuss our findings in the light of current theories and models of gaze behavior. Furthermore, we show that in a head-mounted eye-tracking study, a large degree of experimental control can be maintained while many degrees of freedom on the side of the observer remain.

21. Reliability and Validity of the Utrecht Tasks for Attention in Toddlers Using Eye Tracking (UTATE). Front Psychol 2020; 11:1179. PMID: 32655439. PMCID: PMC7325908. DOI: 10.3389/fpsyg.2020.01179.
Abstract
Attention problems hinder many children in their cognitive and social-emotional development. Children at risk for developmental problems, such as preterm-born infants, are particularly prone to attention difficulties. Early identification of attention difficulties is important so that appropriate stimulation can be applied to reduce further problems. Specifically designed instruments with good psychometric characteristics are needed to detect attention difficulties and may thus contribute to early identification. The Utrecht Tasks of Attention in Toddlers using Eye tracking (UTATE) is an instrument to measure orienting, alerting, and executive attention capacities in young children. Reliability and validity of the UTATE are addressed in the three studies reported in this paper. A sample of 95 term-born children assessed at 18 months of age provided data for both the second and third study. In addition, three other small samples were used: the first, reported in the first study, consisted of 12 children at 18 months for whom test-retest data were available; the other two, used in the third study, consisted of 14 children measured at 12 months and 15 children examined at 24 months. The UTATE yielded reliable information on eye movements, and some first support for construct and predictive validity was found. Low scores on the UTATE at 18 months were related to slower cognitive development as measured with the Bayley-III-NL at 24 months. Furthermore, a first indication was found that the UTATE is able to detect some age differences in attention. It is concluded that the UTATE can be used to study the attention capacities in toddlers that underlie cognitive functioning and development, but further research is necessary.

22. The 'Real-World Approach' and Its Problems: A Critique of the Term Ecological Validity. Front Psychol 2020; 11:721. PMID: 32425850. PMCID: PMC7204431. DOI: 10.3389/fpsyg.2020.00721.
Abstract
A popular goal in psychological science is to understand human cognition and behavior in the 'real-world.' In contrast, researchers have typically conducted their research in experimental research settings, a.k.a. the 'psychologist's laboratory.' Critics have often questioned whether psychology's laboratory experiments permit generalizable results. This is known as the 'real-world or the lab'-dilemma. To bridge the gap between lab and life, many researchers have called for experiments with more 'ecological validity' to ensure that experiments more closely resemble and generalize to the 'real-world.' However, researchers seldom explain what they mean with this term, nor how more ecological validity should be achieved. In our opinion, the popular concept of ecological validity is ill-formed, lacks specificity, and falls short of addressing the problem of generalizability. To move beyond the 'real-world or the lab'-dilemma, we believe that researchers in psychological science should always specify the particular context of cognitive and behavioral functioning in which they are interested, instead of advocating that experiments should be more 'ecologically valid' in order to generalize to the 'real-world.' We believe this will be a more constructive way to uncover the context-specific and context-generic principles of cognition and behavior.

23. The Reality of "Real-Life" Neuroscience: A Commentary on Shamay-Tsoory and Mendelsohn (2019). Perspect Psychol Sci 2020; 16:461-465. PMID: 32316849. PMCID: PMC7961613. DOI: 10.1177/1745691620917354.
Abstract
The main thrust of Shamay-Tsoory and Mendelsohn’s ecological approach is that “the use of real-life complex, dynamic, naturalistic stimuli provides a solid basis for understanding brain and behavior” (p. 851). Although we support the overall goal and objectives of Shamay-Tsoory and Mendelsohn’s approach to “real-life” neuroscience, their review refers to the terms “ecological validity” and “representative design” in a manner different from that originally introduced by Egon Brunswik. Our aim is to clarify Brunswik’s original definitions and briefly explain how these concepts pertain to the larger problem of generalizability, not just for history’s sake, but because we believe that a proper understanding of these concepts is important for researchers who want to understand human behavior and the brain in the context of everyday experience, and because Brunswik’s original ideas may contribute to Shamay-Tsoory and Mendelsohn’s ecological approach. Finally, we argue that the popular and often misused concept of “ecological validity” is ill-formed, lacks specificity, and may even undermine the development of theoretically sound and tractable research.

24. Implying social interaction and its influence on gaze behavior to the eyes. PLoS One 2020; 15:e0229203. PMID: 32092089. PMCID: PMC7039466. DOI: 10.1371/journal.pone.0229203.
Abstract
Researchers have increasingly focused on how the potential for social interaction modulates basic processes of visual attention and gaze behavior. In this study, we investigated why people may experience a situation as a social interaction and which factors contribute to this subjective experience. We furthermore investigated whether implying social interaction modulated gaze behavior to people's faces, specifically the eyes. To imply the potential for interaction, participants received one of two instructions: (1) they would be presented with a person via a 'live' video feed, or (2) they would be presented with a pre-recorded video clip of a person. Prior to the presentation, a confederate walked into a separate room to suggest to participants that (s)he was being positioned behind a webcam. In fact, all participants were presented with a pre-recorded clip. During the presentation, we measured participants' gaze behavior with an eye tracker, and after the presentation, participants were asked whether they believed that the confederate was 'live' or not, and why they thought so. Participants varied greatly in their judgments about whether the confederate was 'live' or not. Analyses of gaze behavior revealed that a large subset of participants who received the live instruction gazed less at the eyes of confederates compared with participants who received the pre-recorded instruction. However, in both instruction groups, another subset of participants gazed predominantly at the eyes. The current findings may contribute to the development of experimental designs aimed at capturing the interactive aspects of social cognition and visual attention.
|
25
|
An eye-tracking approach to Autonomous sensory meridian response (ASMR): The physiology and nature of tingles in relation to the pupil. PLoS One 2019; 14:e0226692. [PMID: 31877152 PMCID: PMC6932793 DOI: 10.1371/journal.pone.0226692] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2019] [Accepted: 12/03/2019] [Indexed: 01/20/2023] Open
Abstract
Autonomous sensory meridian response (ASMR) is a sensory phenomenon commonly characterized by pleasant tingling sensations arising from the back of the head and accompanied by feelings of relaxation and calmness. Although research has found ASMR to have a distinct physiological pattern with increased skin conductance levels and reduced heart rate, the specific tingles felt in ASMR have not received much investigation. The aim of the present study was to investigate the physiology and characteristics of ASMR further by examining whether experiencing ASMR is visible from the pupil of the eye. A total of 91 participants were recruited and assigned to three different groups based on their experience of ASMR (ASMR vs. non-ASMR vs. unsure). Participants were instructed to watch a control video and an ASMR video and to report any tingling sensations by pressing down a button on the keyboard. Pupil diameter was measured over the duration of both videos using a tower-mounted eye tracker. Data were analyzed on a general level, averaging pupil diameter over each video, as well as on a more specific level, comparing pupil diameter during reported episodes of tingling sensations to pupil diameter outside of those episodes. On the general level, results revealed no significant differences between the groups. On the specific level, however, the tingling sensations experienced in ASMR were found to cause statistically significant increases in pupil diameter, demonstrating that they have a physiological basis. The results of the study further reinforce the credibility of ASMR and suggest that the tingles felt in ASMR are at the very core of the experience itself.
|
26
|
Eye tracking in developmental cognitive neuroscience - The good, the bad and the ugly. Dev Cogn Neurosci 2019; 40:100710. [PMID: 31593909 PMCID: PMC6974897 DOI: 10.1016/j.dcn.2019.100710] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2019] [Revised: 07/31/2019] [Accepted: 09/10/2019] [Indexed: 02/07/2023] Open
Abstract
Eye tracking is a popular research tool in developmental cognitive neuroscience for studying the development of perceptual and cognitive processes. However, eye tracking in the context of development is also challenging. In this paper, we ask how knowledge on eye-tracking data quality can be used to improve eye-tracking recordings and analyses in longitudinal research so that valid conclusions about child development may be drawn. We answer this question by adopting the data-quality perspective and surveying the eye-tracking setup, training protocols, and data analysis of the YOUth study (investigating neurocognitive development of 6000 children). We first show how our eye-tracking setup has been optimized for recording high-quality eye-tracking data. Second, we show that eye-tracking data quality can be operator-dependent even after a thorough training protocol. Finally, we report distributions of eye-tracking data quality measures for four age groups (5 months, 10 months, 3 years, and 9 years), based on 1531 recordings. We end with advice for (prospective) developmental eye-tracking researchers and generalizations to other methodologies.
|
27
|
From lab-based studies to eye-tracking in virtual and real worlds: conceptual and methodological problems and solutions. Symposium 4 at the 20th European Conference on Eye Movement Research (ECEM) in Alicante, 20.8.2019. J Eye Mov Res 2019; 12. [PMID: 33828764 PMCID: PMC7917479 DOI: 10.16910/jemr.12.7.8] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Wearable mobile eye trackers have great potential as they allow the measurement of eye movements during daily activities such as driving, navigating the world and doing groceries. Although mobile eye trackers have been around for some time, developing and operating these eye trackers was generally a highly technical affair. As such, mobile eye-tracking research was not feasible for most labs. Nowadays, many mobile eye trackers are available from eye-tracking manufacturers (e.g. Tobii, Pupil Labs, SMI, Ergoneers) and various implementations in virtual/augmented reality have recently been released. The wide availability has caused the number of publications using a mobile eye tracker to increase quickly. Mobile eye tracking is now applied in vision science, educational science, developmental psychology, marketing research (using virtual and real supermarkets), clinical psychology, usability, architecture, medicine, and more. Yet, transitioning from lab-based studies where eye trackers are fixed to the world to studies where eye trackers are fixed to the head presents researchers with a number of problems. These problems range from the conceptual frameworks used in world-fixed and head-fixed eye tracking and how they relate to each other, to the lack of data-quality comparisons and field tests of the different mobile eye trackers, and how the gaze signal can be classified or mapped to the visual stimulus. Such problems need to be addressed in order to understand how world-fixed and head-fixed eye-tracking research can be compared and to understand the full potential and limits of what mobile eye tracking can deliver. In this symposium, we bring together presenting researchers from five different institutions (Lund University, Utrecht University, Clemson University, Birkbeck University of London and Rochester Institute of Technology) addressing problems and innovative solutions across the entire breadth of mobile eye-tracking research. Hooge, presenting Hessels et al.'s paper, focuses on the definitions of fixations and saccades held by researchers in the eye-movement field and argues how they need to be clarified in order to allow comparisons between world-fixed and head-fixed eye-tracking research. Diaz et al. introduce machine-learning techniques for classifying the gaze signal in mobile eye-tracking contexts where head and body are unrestrained. Niehorster et al. compare data quality of mobile eye trackers during natural behavior and discuss the application range of these eye trackers. Duchowski et al. introduce a method for automatically mapping gaze to faces using computer vision techniques. Pelz et al. employ state-of-the-art techniques to map fixations to objects of interest in the scene video and align grasp and eye-movement data in the same reference frame to investigate the guidance of eye movements during manual interaction. Video stream: https://vimeo.com/357473408
|
28
|
Developmental changes in visual search are determined by changing visuospatial abilities and task repetition: A longitudinal study in adolescents. APPLIED NEUROPSYCHOLOGY-CHILD 2019; 10:133-143. [PMID: 31268363 DOI: 10.1080/21622965.2019.1627211] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Using a longitudinal study design, a group of 94 adolescents participated in a visual search task and a visuospatial ability task yearly for four consecutive years. We analyzed the association between changes in visuospatial ability and changes in visual search performance and behavior and estimated additional effects of age and task repetition. Visuospatial ability was measured with the Design Organization Test (DOT). Search performance was analyzed in terms of reaction time and response accuracy. Search behavior was analyzed in terms of the number of fixations per trial, the saccade amplitude, and the distribution of fixations over different types of elements. We found that both the increase in age and the yearly repetition of the DOT had a positive effect on visuospatial ability. We show that the acceleration of visual search can be explained by the increase in visuospatial ability with age during adolescence. With the yearly task repetition, visual search became faster and more accurate, while fewer fixations were made with larger saccade amplitudes. The combination of increasing visuospatial ability and task repetition makes visual search more effective and might increase the performance of many daily tasks during adolescence.
|
29
|
Do pupil-based binocular video eye trackers reliably measure vergence? Vision Res 2019; 156:1-9. [PMID: 30641092 DOI: 10.1016/j.visres.2019.01.004] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2018] [Revised: 12/19/2018] [Accepted: 01/05/2019] [Indexed: 10/27/2022]
Abstract
A binocular eye tracker needs to be accurate to enable the determination of vergence, distance to the binocular fixation point and fixation disparity. These measures are useful in e.g. the research fields of visual perception, binocular control in reading and attention in 3D. Are binocular pupil-based video eye trackers accurate enough to produce meaningful binocular measures? Recent research revealed potentially large idiosyncratic systematic errors due to pupil-size changes. With a top-of-the-line eye tracker (SR Research EyeLink 1000 Plus), we investigated whether the pupil-size artefact in the separate eyes may cause the eye tracker to report apparent vergence when the eyeballs do not rotate. Participants were asked to fixate a target at a distance of 77 cm for 160 s. We evoked pupil-size changes by varying the light intensity. With increasing pupil size, horizontal vergence reported by the eye tracker decreased in most subjects, by up to two degrees. However, this was not due to a rotation of the eyeballs, as identified from the absence of systematic movement in the corneal reflection (CR) signals. From this, we conclude that binocular pupil-CR or pupil-only video eye trackers using the dark pupil technique are not accurate enough to be used to determine vergence, distance to the binocular fixation point and fixation disparity.
|
30
|
Does effective gaze behavior lead to enhanced performance in a complex error-detection cockpit task? PLoS One 2018; 13:e0207439. [PMID: 30462695 PMCID: PMC6248957 DOI: 10.1371/journal.pone.0207439] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2018] [Accepted: 10/31/2018] [Indexed: 11/29/2022] Open
Abstract
The purpose of the current study was to examine the relationship between expertise, performance, and gaze behavior in a complex error-detection cockpit task. Twenty-four pilots and 26 non-pilots viewed video-clips from a pilot's viewpoint and were asked to detect malfunctions in the cockpit instrument panel. Compared to non-pilots, pilots detected more malfunctioning instruments, had shorter dwell times on the instruments, made more transitions, visited task-relevant areas more often, and dwelled longer on the areas between the instruments. These results provide evidence for three theories that explain underlying processes for expert performance: The long-term working memory theory, the information-reduction hypothesis, and the holistic model of image perception. In addition, the results for generic attentional skills indicated a higher capability to switch between global and local information processing in pilots compared to non-pilots. Taken together, the results suggest that gaze behavior as well as other generic skills may provide important information concerning underlying processes that can explain successful performance during flight in expert pilots.
|
31
|
Abstract
Manual classification is still a common method to evaluate event detection algorithms. The procedure is often as follows: Two or three human coders and the algorithm classify a significant quantity of data. In the gold standard approach, deviations from the human classifications are considered to be due to mistakes of the algorithm. However, little is known about human classification in eye tracking. To what extent do the classifications from a larger group of human coders agree? Twelve experienced but untrained human coders classified fixations in 6 min of adult and infant eye-tracking data. When using the sample-based Cohen's kappa, the classifications of the humans agreed near perfectly. However, we found substantial differences between the classifications when we examined fixation duration and number of fixations. We hypothesized that the human coders applied different (implicit) thresholds and selection rules. Indeed, when spatially close fixations were merged, most of the classification differences disappeared. On the basis of the nature of these intercoder differences, we concluded that fixation classification by experienced untrained human coders is not a gold standard. To bridge the gap between agreement measures (e.g., Cohen's kappa) and eye movement parameters (fixation duration, number of fixations), we suggest the use of the event-based F1 score and two new measures: the relative timing offset (RTO) and the relative timing deviation (RTD).
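The sample-based agreement measure discussed in this abstract (Cohen's kappa computed over per-sample event labels from two coders) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name is ours.

```python
def cohens_kappa(labels_a, labels_b):
    """Sample-based Cohen's kappa for two coders' event labels.

    labels_a and labels_b are equal-length sequences giving each
    coder's label (e.g. 'fix' or 'sac') for every gaze sample.
    """
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of samples with identical labels
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each coder's marginal label frequencies
    expected = 0.0
    for label in set(labels_a) | set(labels_b):
        expected += (labels_a.count(label) / n) * (labels_b.count(label) / n)
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1.0 - expected)
```

Note that, as the abstract stresses, near-perfect sample-based kappa can still coexist with substantial differences in event-level parameters such as fixation duration and number of fixations, because a few disagreeing samples at event boundaries can merge or split whole events.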
|
32
|
A Validation of Automatically-Generated Areas-of-Interest in Videos of a Face for Eye-Tracking Research. Front Psychol 2018; 9:1367. [PMID: 30123168 PMCID: PMC6085555 DOI: 10.3389/fpsyg.2018.01367] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2018] [Accepted: 07/16/2018] [Indexed: 11/13/2022] Open
Abstract
When mapping eye-movement behavior to the visual information presented to an observer, Areas of Interest (AOIs) are commonly employed. For static stimuli (screen without moving elements), this requires that one AOI set is constructed for each stimulus, a possibility in most eye-tracker manufacturers' software. For moving stimuli (screens with moving elements), however, it is often a time-consuming process, as AOIs have to be constructed for each video frame. A popular use-case for such moving AOIs is to study gaze behavior to moving faces. Although it is technically possible to construct AOIs automatically, the standard in this field is still manual AOI construction. This is likely due to the fact that automatic AOI-construction methods are (1) technically complex, or (2) not effective enough for empirical research. To aid researchers in this field, we present and validate a method that automatically achieves AOI construction for videos containing a face. The fully-automatic method uses an open-source toolbox for facial landmark detection, and a Voronoi-based AOI-construction method. We compared the position of AOIs obtained using our new method, and the eye-tracking measures derived from it, to a recently published semi-automatic method. The differences between the two methods were negligible. The presented method is therefore both effective (as effective as previous methods), and efficient; no researcher time is needed for AOI construction. The software is freely available from https://osf.io/zgmch/.
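For point-in-AOI lookups, a Voronoi tessellation over facial landmarks reduces to nearest-landmark assignment, which is one reason the approach described above needs no manual AOI drawing. A minimal sketch of that idea follows; the landmark names and pixel coordinates are hypothetical, not output of the toolbox.

```python
import math

def assign_aoi(gaze_xy, landmarks):
    """Assign a gaze point to the AOI of the nearest landmark.

    A Voronoi cell is exactly the set of points closest to its seed,
    so nearest-seed lookup is equivalent to point-in-cell testing
    without ever constructing the cell polygons.
    """
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    return min(landmarks, key=lambda name: dist(gaze_xy, landmarks[name]))

# Hypothetical landmark coordinates (pixels) for one video frame
landmarks = {"left_eye": (320, 240), "right_eye": (420, 240),
             "nose": (370, 310), "mouth": (370, 380)}
```

In a moving-AOI pipeline the landmark dictionary would simply be refreshed per video frame from the landmark detector, while the assignment step stays unchanged.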
|
33
|
Is the eye-movement field confused about fixations and saccades? A survey among 124 researchers. ROYAL SOCIETY OPEN SCIENCE 2018; 5:180502. [PMID: 30225041 PMCID: PMC6124022 DOI: 10.1098/rsos.180502] [Citation(s) in RCA: 65] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/30/2018] [Accepted: 08/06/2018] [Indexed: 06/08/2023]
Abstract
Eye movements have been extensively studied in a wide range of research fields. While new methods such as mobile eye tracking and eye tracking in virtual/augmented realities are emerging quickly, the eye-movement terminology has scarcely been revised. We assert that this may cause confusion about two of the main concepts: fixations and saccades. In this study, we assessed the definitions of fixations and saccades held in the eye-movement field, by surveying 124 eye-movement researchers. These eye-movement researchers held a variety of definitions of fixations and saccades, of which the breadth seems even wider than what is reported in the literature. Moreover, these definitions did not seem to be related to researcher background or experience. We urge researchers to make their definitions more explicit by specifying all the relevant components of the eye movement under investigation: (i) the oculomotor component: e.g. whether the eye moves slow or fast; (ii) the functional component: what purposes does the eye movement (or lack thereof) serve; (iii) the coordinate system used: relative to what does the eye move; (iv) the computational definition: how is the event represented in the eye-tracker signal. This should enable eye-movement researchers from different fields to have a discussion without misunderstandings.
|
34
|
|
35
|
Abstract
Several models of selection in search predict that saccades are biased toward conspicuous objects (also referred to as salient objects). Indeed, it has been demonstrated that initial saccades are biased toward the most conspicuous candidate. However, in a recent study, no such bias was found for the second saccade, and it was concluded that the attraction of conspicuous elements is limited to only short-latency initial saccades. This conclusion is based on only a single feature manipulation (orientation contrast) and conflicts with the prediction of influential salience models. Here, we investigate whether this result can be generalized beyond the domain of orientation. In displays containing three luminance annuli (Experiment 1), we find a considerable bias toward the most conspicuous candidate for the second saccade. In Experiment 1, the target could not be discriminated peripherally. When we made the target peripherally discriminable, the second saccade was no longer biased toward the more conspicuous candidate (Experiment 2). Thus, conspicuity plays a role in saccadic selection beyond the initial saccade. Whether second saccades are biased toward conspicuous objects appears to depend on the type of feature contrast underlying the conspicuity and the peripheral discriminability of target properties.
|
36
|
Cognitive functioning in toddlerhood: The role of gestational age, attention capacities, and maternal stimulation. Dev Psychol 2017; 54:648-662. [PMID: 29154655 DOI: 10.1037/dev0000446] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Why do many preterm children show delays in development? An integrated model of biological risk, children's capacities, and maternal stimulation was investigated in relation to cognitive functioning at toddler age. Participants were 200 Dutch children (gestational age = 32-41 weeks); 51% boys, 96% Dutch nationality, 71.5% highly educated mothers. At 18 months, attention capacities were measured using eye-tracking, and maternal attention-directing behavior was observed. Cognitive functioning was measured at 24 months using the Bayley-III-NL. Cognitive functioning was directly predicted by children's attention capacities and maternal attention-maintaining behavior. Gestational age was indirectly related to cognitive functioning through children's attention capacities and through maternal attention-redirecting behavior. In this way, a combination of gestational age, children's attention capacities, and maternal stimulation was associated with early cognitive development.
|
37
|
Abstract
Eye-tracking research in infants and older children has gained a lot of momentum over the last decades. Although eye-tracking research in these participant groups has become easier with the advance of the remote eye-tracker, this often comes at the cost of poorer data quality than in research with well-trained adults (Hessels, Andersson, Hooge, Nyström, & Kemner, Infancy, 20, 601-633, 2015; Wass, Forssman, & Leppänen, Infancy, 19, 427-460, 2014). Current fixation detection algorithms are not built for data from infants and young children. As a result, some researchers have even turned to hand correction of fixation detections (Saez de Urabain, Johnson, & Smith, Behavior Research Methods, 47, 53-72, 2015). Here we introduce a fixation detection algorithm, identification by two-means clustering (I2MC), built specifically for data across a wide range of noise levels and for data in which periods of data loss may occur. We evaluated the I2MC algorithm against seven state-of-the-art event detection algorithms, and report that the I2MC algorithm's output is the most robust to high noise and data loss levels. The algorithm is automatic, works offline, and is suitable for eye-tracking data recorded with remote or tower-mounted eye-trackers using static stimuli. In addition to application of the I2MC algorithm in eye-tracking research with infants, school children, and certain patient groups, the I2MC algorithm also may be useful when the noise and data loss levels are markedly different between trials, participants, or time points (e.g., longitudinal research).
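The two-means clustering at the heart of I2MC can be illustrated with a toy one-dimensional version. This is a simplified sketch under our own assumptions, not the published algorithm (which clusters windowed two-dimensional gaze positions and derives a clustering weight over a moving window): clustering the samples in a window into two groups separates the positions before and after a fixation transition, while a window containing only one fixation yields two nearly identical cluster means.

```python
def two_means_1d(samples, iters=20):
    """Cluster 1-D gaze-position samples into two groups (k-means, k=2).

    Returns the two cluster means; a large separation between them
    suggests the window spans a transition between fixations.
    """
    c1, c2 = min(samples), max(samples)  # initialize at the extremes
    for _ in range(iters):
        # Assign each sample to its nearest cluster center
        g1 = [s for s in samples if abs(s - c1) <= abs(s - c2)]
        g2 = [s for s in samples if abs(s - c1) > abs(s - c2)]
        # Update centers as the mean of their assigned samples
        if g1:
            c1 = sum(g1) / len(g1)
        if g2:
            c2 = sum(g2) / len(g2)
    return c1, c2
```

A window of samples jumping from around 0 to around 5 degrees produces well-separated means, whereas a noisy single-fixation window produces means that almost coincide; thresholding that separation is one way such a clustering signal can be turned into candidate fixation boundaries.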
|
38
|
|
39
|
Abstract
Two questions were posed in the present study: (1) Do infants search for discrepant items in the absence of instructions? We outline where previous research has been inconclusive in answering this question. (2) In what manner do infants search, and what are the fixation and saccade characteristics in saccadic search? A thorough characterization of saccadic search in infancy is of great importance as a reference for future eye-movement studies in infancy. We presented 10-month-old infants with 24 visual search displays in two separate sessions within two weeks. We report that infant saccadic search performance at 10 months is above what may be expected by our model of chance, and is dependent on the specific target. Infant fixation and saccade characteristics show similarities to adult fixation and saccade characteristics in saccadic search. All findings were highly consistent across two separate sessions on the group level. An examination of the reliability of saccadic search revealed that test-retest reliability for oculomotor characteristics was high, particularly for fixation duration. We suggest that future research into saccadic search in infancy adopt the presented model of chance as a baseline against which to compare search performance. Researchers investigating both the typical and atypical development of visual search may benefit from the presented results.
|
40
|
Abstract
Human observers are able to successfully infer direction and intensity of light from photographed scenes despite complex interactions between light, shape, and material. We investigate how well they are able to distinguish other low-level aspects of illumination, such as the diffuseness and the number of light sources. We use photographs of a teapot, an orange, and a tennis ball from the ALOI database (Geusebroek, Burghouts, & Smeulders, 2005) to create different illumination conditions, varying either in diffuseness of a single light source or in separation angle between two distinct light sources. Our observers were presented with all three objects; they indicated which object was illuminated differently from the other two. We recorded discrimination performance, reaction times, and eye fixations. We compared the data to a model that uses differences in image structure in same-object comparisons, and the outcomes suggest that participants mostly rely on the information contained in cast shadows and highlights. The pattern of eye fixations confirms this, showing that after the first fixation, observers mostly fixate cast shadow areas. However, information in the highlights is rather salient, so it might be available from the first fixation, making separate fixations unnecessary.
|
41
|
Performance on tasks of visuospatial memory and ability: A cross-sectional study in 330 adolescents aged 11 to 20. APPLIED NEUROPSYCHOLOGY-CHILD 2017; 7:129-142. [PMID: 28075186 DOI: 10.1080/21622965.2016.1268960] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Cognitive functions mature at different points in time between birth and adulthood. Of these functions, visuospatial skills, such as spatial memory and part-to-whole organization, have often been tested in children and adults but have been less frequently evaluated during adolescence. We studied visuospatial memory and ability during this critical developmental period, as well as the correlation between these abilities, in a large group of 330 participants (aged 11 to 20 years, 55% male). To assess visuospatial memory, the participants were asked to memorize and reproduce sequences of random locations within a grid using a computer. Visuospatial ability was tested using a variation of the Design Organization Test (DOT). In this paper-and-pencil test, the participants had one minute to reproduce as many visual patterns as possible using a numerical code. On the memory task, compared with younger participants, older participants correctly reproduced more locations overall and longer sequences of locations, made fewer mistakes and needed less time to reproduce the sequences. In the visuospatial ability task, the number of correctly reproduced patterns increased with age. We show that both visuospatial memory and ability improve significantly throughout adolescence and that performance on both tasks is significantly correlated.
|
42
|
Goal-directed visual attention drives health goal priming: An eye-tracking experiment. Health Psychol 2017; 36:82-90. [DOI: 10.1037/hea0000410] [Citation(s) in RCA: 56] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
43
|
Abstract
A problem in eye-tracking research is choosing areas of interest (AOIs): Researchers in the same field often use widely varying AOIs for similar stimuli, making cross-study comparisons difficult or even impossible. Subjective choices while choosing AOIs cause differences in AOI shape, size, and location. On the other hand, not many guidelines for constructing AOIs, or comparisons between AOI-production methods, are available. In the present study, we addressed this gap by comparing AOI-production methods in face stimuli, using data collected with infants and adults (with autism spectrum disorder [ASD] and matched controls). Specifically, we report that the attention-attracting and attention-maintaining capacities of AOIs differ between AOI-production methods, and that this matters for statistical comparisons in one of the three groups investigated (the ASD group). In addition, we investigated the relation between AOI size and an AOI's attention-attracting and attention-maintaining capacities, as well as the consequences for statistical analyses, and report that adopting large AOIs solves the problem of statistical differences between the AOI methods. Finally, we tested AOI-production methods for their robustness to noise, and report that large AOIs (using the Voronoi tessellation method or the limited-radius Voronoi tessellation method with large radii) are most robust to noise. We conclude that large AOIs are a noise-robust solution in face stimuli and, when implemented using the Voronoi method, are the most objective of the researcher-defined AOIs. Adopting Voronoi AOIs in face-scanning research should allow better between-group and cross-study comparisons.
|
44
|
The Role of Global and Local Visual Information during Gaze-Cued Orienting of Attention. PLoS One 2016; 11:e0160405. [PMID: 27560368 PMCID: PMC4999176 DOI: 10.1371/journal.pone.0160405] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2016] [Accepted: 07/19/2016] [Indexed: 11/29/2022] Open
Abstract
Gaze direction is an important social communication tool. Global and local visual information are known to play specific roles in processing socially relevant information from a face. The current study investigated whether global visual information has a primary role during gaze-cued orienting of attention and, as such, may influence quality of interaction. Adults performed a gaze-cueing task in which a centrally presented face cued (valid or invalid) the location of a peripheral target through a gaze shift. We measured brain activity (electroencephalography) towards the cue and target and behavioral responses (manual and saccadic reaction times) towards the target. The faces contained global (i.e. lower spatial frequencies), local (i.e. higher spatial frequencies), or a selection of both global and local (i.e. mid-band spatial frequencies) visual information. We found a gaze cue-validity effect (i.e. valid versus invalid), but no interaction effects with spatial frequency content. Furthermore, behavioral responses towards the target were slower in all cue conditions when lower spatial frequencies were not present in the gaze cue. These results suggest that whereas gaze-cued orienting of attention can be driven by both global and local visual information, global visual information determines the speed of behavioral responses towards other entities appearing in the surroundings of gaze-cue stimuli.
|
45
|
Abstract
Saccades toward previously cued locations have longer latencies than saccades toward other locations, a phenomenon known as inhibition of return (IOR). Watanabe (Exp Brain Res 138:330–342. doi:10.1007/s002210100709, 2001) combined IOR with the global effect (where saccade landing points fall in between neighboring objects) to investigate whether IOR can also have a spatial component. When one of two neighboring targets was cued, there was a clear bias away from the cued location. In a condition where both targets were cued, it appeared that the global effect magnitude was similar to the condition without any cues. However, as the latencies in the double cue condition were shorter compared to the no cue condition, it is still an open question whether these results are representative for IOR. Considering the double cue condition can provide valuable insight into the interaction of the mechanisms underlying the two phenomena, here, we revisit this condition in an adapted paradigm. Our paradigm does result in longer latencies for the cued locations, and we find that the magnitude of the global effect is reduced significantly. Unexpectedly, this holds even when only including saccades with the same latencies for both conditions. Thus, the increased latencies associated with IOR cannot directly explain the reduction in global effect. The global effect reduction can likely best be seen as either a result of short-term depression of exogenous visual signals or a result of IOR established at the center of gravity of cues.
|
46
|
Patients With Obsessive-Compulsive Disorder Check Excessively in Response to Mild Uncertainty. Behav Ther 2016; 47:550-9. [PMID: 27423170 DOI: 10.1016/j.beth.2016.04.002] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/14/2015] [Revised: 04/08/2016] [Accepted: 04/12/2016] [Indexed: 02/08/2023]
Abstract
Patients with obsessive-compulsive disorder (OCD) not only respond to obsessions with perseverative checking, but also engage in more general checking, irrespective of their obsessive concerns. This study investigated whether general checking is specific to OCD and is exacerbated when only mild uncertainty is induced. Thirty-one patients with OCD, 26 anxiety controls, and 31 healthy controls performed a visual search task with eye-tracking and indicated for each of 50 search displays whether a target was "present" or "absent". Target-present trials were unambiguous, whereas target-absent trials induced mild uncertainty, because participants had to rely on not having overlooked the target. Checking behavior was quantified as search time and the number of fixations. Results showed that in both target-present and target-absent trials, patients with OCD searched longer and made more fixations than healthy and anxiety controls. However, the difference in checking behavior between patients with OCD and the control groups was larger in target-absent trials (where mild uncertainty was induced). Anxiety and healthy controls did not differ in checking behavior. Thus, mild uncertainty appears to specifically promote checking in patients with OCD, which has implications for treatment.
|
47
|
Introduction of the Utrecht Tasks for Attention in Toddlers Using Eye Tracking (UTATE): A Pilot Study. Front Psychol 2016; 7:669. [PMID: 27199880 PMCID: PMC4858515 DOI: 10.3389/fpsyg.2016.00669] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2016] [Accepted: 04/22/2016] [Indexed: 12/05/2022] Open
Abstract
Attention capacities underlie everyday functioning from an early age onwards, yet little is known about attentional processes at toddler age. A feasible assessment of attention capacities at toddler age is needed to allow further study of attention development. In this study, a test battery was piloted that consists of four tasks intended to measure the orienting, alerting, and executive attention systems: the Utrecht Tasks of Attention in Toddlers using Eye tracking (UTATE). The UTATE uses eye-tracking methods to assess looking behavior that may reflect visual attention capacities. The UTATE was studied in 16 Dutch 18-month-old toddlers. Results showed that the instrument is feasible and generates good-quality data. A first indication of sufficient reliability was found for most of the variables. It is concluded that the UTATE can be used in further studies. Further evaluation of the reliability and validity of the instrument in larger samples is worthwhile.
|
48
|
Abstract
OBJECTIVE Attention capacities are critical for adaptive functioning and development. Reliable assessment measures are needed for the study of attention capacities in early childhood. In the current study, we investigated the factor structure of the Utrecht Tasks of Attention in Toddlers Using Eye-tracking (UTATE) test battery that assesses attention capacities in 18-month-old toddlers with eye-tracking techniques. METHOD The factor structure of 13 measures of attention capacities, based on four eye-tracking tasks, was investigated in a sample of 95 healthy toddlers (18 months of age) using confirmatory factor analysis. RESULTS Results showed that a three-factor model best fitted the data. The latent constructs reflected an orienting, alerting, and executive attention system. CONCLUSION This study showed support for a three-factor model of attention capacities in 18-month-old toddlers. Further study is needed to investigate whether the model can also be used with children at risk of attention problems.
|
49
|
Consequences of Eye Color, Positioning, and Head Movement for Eye-Tracking Data Quality in Infant Research. INFANCY 2015. [DOI: 10.1111/infa.12093] [Citation(s) in RCA: 56] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
50
|
UnAdulterated - children and adults' visual attention to healthy and unhealthy food. Eat Behav 2015; 17:90-3. [PMID: 25679367 PMCID: PMC4380137 DOI: 10.1016/j.eatbeh.2015.01.009] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/01/2014] [Revised: 12/23/2014] [Accepted: 01/28/2015] [Indexed: 11/16/2022]
Abstract
OBJECTIVE Visually attending to unhealthy food creates a desire to consume it. To resist this temptation, people must employ self-regulation strategies, such as visual avoidance. Past research has shown that self-regulatory skills develop throughout childhood and adolescence, suggesting that adults have superior self-regulation skills compared to children. METHODS This study employed a novel method to investigate self-regulatory skills. Children's and adults' initial (bottom-up) and maintained (top-down) visual attention to simultaneously presented healthy and unhealthy food was examined in an eye-tracking paradigm. RESULTS Both children and adults initially attended most to the unhealthy food. Subsequently, adults self-regulated their visual attention away from the unhealthy food. Despite children's high self-reported attempts to eat healthily and the importance they attached to eating healthily, children did not self-regulate visual attention away from the unhealthy food. Children remained influenced by the attention-driven desire to consume the unhealthy food, whereas adults attended more strongly to the healthy food, thereby avoiding the desire to consume the unhealthy option. CONCLUSIONS The findings emphasize the necessity of improving children's self-regulatory skills to support their desire to remain healthy and to protect children from the influences of the obesogenic environment.
|