1
Franchak JM, Adolph KE. An update of the development of motor behavior. Wiley Interdiscip Rev Cogn Sci 2024; 15:e1682. PMID: 38831670; PMCID: PMC11534565; DOI: 10.1002/wcs.1682. Received 10/03/2023; revised 03/31/2024; accepted 04/22/2024.
Abstract
This primer describes research on the development of motor behavior. We focus on infancy, when the basic action systems (posture, locomotion, manual actions, and facial actions) are acquired, and we adopt a developmental systems perspective to understand the causes and consequences of developmental change. Experience facilitates improvements in motor behavior, and infants accumulate immense amounts of varied everyday experience with all the basic action systems. At every point in development, perception guides behavior by providing feedback about the results of just-prior movements and information about what to do next. Across development, new motor behaviors provide new inputs for perception. Thus, motor development opens up new opportunities for acquiring knowledge and acting on the world, instigating cascades of developmental changes in perceptual, cognitive, and social domains. This article is categorized under: Cognitive Biology > Cognitive Development; Psychology > Motor Skill and Performance; Neuroscience > Development.
Affiliation(s)
- John M Franchak
- Department of Psychology, University of California, Riverside, California, USA
- Karen E Adolph
- Department of Psychology, Center for Neural Science, New York University, New York, USA

2
Franchak JM, Smith L, Yu C. Developmental Changes in How Head Orientation Structures Infants' Visual Attention. Dev Psychobiol 2024; 66:e22538. PMID: 39192662; PMCID: PMC11481040; DOI: 10.1002/dev.22538. Received 07/24/2023; revised 06/20/2024; accepted 08/01/2024.
Abstract
Most studies of developing visual attention are conducted using screen-based tasks in which infants move their eyes to select where to look. However, real-world visual exploration entails active movements of both eyes and head to bring relevant areas into view. Thus, relatively little is known about how infants coordinate their eyes and heads to structure their visual experiences. Infants were tested every 3 months from 9 to 24 months as they played with their caregiver and three toys while sitting in a highchair at a table. Infants wore a head-mounted eye tracker that measured eye movements toward each of the visual targets (caregiver's face and toys) and how targets were oriented within the head-centered field of view (FOV). With age, infants increasingly aligned novel toys in the center of their head-centered FOV at the expense of their caregiver's face. Both faces and toys were better centered in view during longer looking events, suggesting that infants of all ages aligned their eyes and head to sustain attention. The bias in infants' head-centered FOV could not be accounted for by manual action: held toys were more poorly centered than non-held toys. We discuss developmental factors—attentional, motoric, cognitive, and social—that may explain why infants increasingly adopted biased viewpoints with age.
Affiliation(s)
- Linda Smith
- Department of Psychological and Brain Sciences, Indiana University
- Chen Yu
- Department of Psychology, University of Texas at Austin

3
Sukkar M, Khatirnamani A, Wibble T. Visually induced vertical vergence as a motion processing biomarker associated with postural instability. Neuroscience 2024; 555:106-115. PMID: 39053671; DOI: 10.1016/j.neuroscience.2024.07.029. Received 05/27/2024; revised 07/11/2024; accepted 07/17/2024.
Abstract
The present study explored visually induced vertical vergence (VIVV) as a non-specific motion-processing response. Healthy participants (7 male, mean age 28.57 ± 2.30; 9 female, mean age 27.67 ± 3.65) were exposed to optokinetic stimuli in an HTC VIVE virtual reality headset while VIVV, pupil size, and postural sway were recorded. The methodology was shown to produce VIVV in the roll plane at 30 deg/s. Subsequent trials consisted of 40 s of optokinetic motion in yaw, pitch, and roll directions at 60 deg/s, and radial optic flow; optokinetic directions were inverted after 20 s of motion. Median VIVV amplitude changes were normalized to the clockwise roll rotation, analysed, and correlated with changes in pupil size and body sway. VIVV, pupil size, and body sway were all affected by changes in optokinetic direction. Post-hoc analyses showed significant VIVV responses during optokinetic yaw and pitch rotations, as well as during radial optic flow stimulation. VIVV magnitudes were universally correlated with pupil size and body sway. In conclusion, VIVV was expressed in all tested dimensions and may consequently serve as a visual motion processing biomarker. Failing to support binocularity while responding to optokinetic directionality, VIVV may reflect an eye-movement response associated with increased postural instability and stress, similar to a dorsal light reflex.
Affiliation(s)
- Maiar Sukkar
- Department of Clinical Neuroscience, Division of Eye and Vision, Marianne Bernadotte Centrum, St. Erik's Eye Hospital, Karolinska Institutet, Stockholm, Sweden
- Amirehsan Khatirnamani
- Department of Clinical Neuroscience, Division of Eye and Vision, Marianne Bernadotte Centrum, St. Erik's Eye Hospital, Karolinska Institutet, Stockholm, Sweden
- Tobias Wibble
- Department of Clinical Neuroscience, Division of Eye and Vision, Marianne Bernadotte Centrum, St. Erik's Eye Hospital, Karolinska Institutet, Stockholm, Sweden

4
Aivar MP, Li CL, Tong MH, Kit DM, Hayhoe MM. Knowing where to go: Spatial memory guides eye and body movements in a naturalistic visual search task. J Vis 2024; 24:1. PMID: 39226069; PMCID: PMC11373708; DOI: 10.1167/jov.24.9.1. Open access.
Abstract
Most research on visual search has used simple tasks presented on a computer screen. However, in natural situations visual search almost always involves eye, head, and body movements in a three-dimensional (3D) environment. The different constraints imposed by these two types of search tasks might explain some of the discrepancies in our understanding concerning the use of memory resources and the role of contextual objects during search. To explore this issue, we analyzed a visual search task performed in an immersive virtual reality apartment. Participants searched for a series of geometric 3D objects while eye movements and head coordinates were recorded. Participants explored the apartment to locate target objects whose location and visibility were manipulated. For objects with reliable locations, we found that repeated searches led to a decrease in search time and number of fixations and to a reduction of errors. Searching for those objects that had been visible in previous trials but were only tested at the end of the experiment was also easier than finding objects for the first time, indicating incidental learning of context. More importantly, we found that body movements showed changes that reflected memory for target location: trajectories were shorter and movement velocities were higher, but only for those objects that had been searched for multiple times. We conclude that memory of 3D space and target location is a critical component of visual search and also modifies movement kinematics. In natural search, memory is used to optimize movement control and reduce energetic costs.
Affiliation(s)
- M Pilar Aivar
- Facultad de Psicología, Universidad Autónoma de Madrid, Madrid, Spain
- https://www.psicologiauam.es/aivar/
- Chia-Ling Li
- Institute of Neuroscience, The University of Texas at Austin, Austin, TX, USA
- Present address: Apple Inc., Cupertino, California, USA
- Matthew H Tong
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA
- Present address: IBM Research, Cambridge, Massachusetts, USA
- Dmitry M Kit
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA
- Present address: F5, Boston, Massachusetts, USA
- Mary M Hayhoe
- Center for Perceptual Systems, The University of Texas at Austin, Austin, TX, USA

5
Onkhar V, Dodou D, de Winter JCF. Evaluating the Tobii Pro Glasses 2 and 3 in static and dynamic conditions. Behav Res Methods 2024; 56:4221-4238. PMID: 37550466; PMCID: PMC11289326; DOI: 10.3758/s13428-023-02173-7. Accepted 06/15/2023.
Abstract
Over the past few decades, there have been significant developments in eye-tracking technology, particularly in the domain of mobile, head-mounted devices. Nevertheless, questions remain regarding the accuracy of these eye-trackers during static and dynamic tasks. In light of this, we evaluated the performance of two widely used devices: Tobii Pro Glasses 2 and Tobii Pro Glasses 3. A total of 36 participants engaged in tasks under three dynamicity conditions. In the "seated with a chinrest" trial, only the eyes could be moved; in the "seated without a chinrest" trial, both the head and the eyes were free to move; and during the walking trial, participants walked along a straight path. During the seated trials, participants' gaze was directed towards dots on a wall by means of audio instructions, whereas in the walking trial, participants maintained their gaze on a bullseye while walking towards it. Eye-tracker accuracy was determined using computer vision techniques to identify the target within the scene camera image. The findings showed that Tobii 3 outperformed Tobii 2 in terms of accuracy during the walking trials. Moreover, the results suggest that employing a chinrest in the case of head-mounted eye-trackers is counterproductive, as it necessitates larger eye eccentricities for target fixation, thereby compromising accuracy compared to not using a chinrest, which allows for head movement. Lastly, it was found that participants who reported higher workload demonstrated poorer eye-tracking accuracy. The current findings may be useful in the design of experiments that involve head-mounted eye-trackers.
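Accuracy in head-mounted eye-tracking studies of this kind is typically reported as the angular error between the measured gaze direction and the known target direction in the scene-camera image. A minimal sketch of that computation, assuming a simple pinhole camera model (the focal length and pixel coordinates below are illustrative values, not taken from the study):

```python
import math

def angular_error_deg(gaze_px, target_px, focal_px):
    """Angle in degrees between the rays through a pinhole scene camera
    toward the gaze point and the detected target (both given in pixels,
    relative to the image center)."""
    def unit_ray(p):
        x, y = p
        n = math.sqrt(x * x + y * y + focal_px * focal_px)
        return (x / n, y / n, focal_px / n)
    g, t = unit_ray(gaze_px), unit_ray(target_px)
    # Clamp the dot product to guard against floating-point drift.
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(g, t))))
    return math.degrees(math.acos(dot))

# A gaze point 100 px off a centered target, with a 1000 px focal length,
# corresponds to roughly 5.7 degrees of error.
print(angular_error_deg((100.0, 0.0), (0.0, 0.0), 1000.0))
```

Computing error on angular rays rather than raw pixel distances keeps the metric comparable across target eccentricities and camera resolutions.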
Affiliation(s)
- V Onkhar
- Department of Cognitive Robotics, Delft University of Technology, Delft, The Netherlands
- D Dodou
- Department of Biomechanical Engineering, Delft University of Technology, Delft, The Netherlands
- J C F de Winter
- Department of Cognitive Robotics, Delft University of Technology, Delft, The Netherlands

6
Backhaus D, Engbert R. How body postures affect gaze control in scene viewing under specific task conditions. Exp Brain Res 2024; 242:745-756. PMID: 38300280; PMCID: PMC11297079; DOI: 10.1007/s00221-023-06771-x. Received 07/13/2023; accepted 12/18/2023.
Abstract
Gaze movements during visual exploration of natural scenes are typically investigated with the static picture viewing paradigm in the laboratory. While this paradigm is attractive for its highly controlled conditions, limitations in the generalizability of the resulting findings to more natural viewing behavior have been raised frequently. Here, we address the combined influences of body posture and viewing task on gaze behavior with the static picture viewing paradigm, using free viewing as a baseline condition. We recorded gaze data using mobile eye tracking during postural manipulations in scene viewing. Specifically, in Experiment 1, we compared gaze behavior during head-supported sitting and quiet standing under two task conditions. We found that task affects temporal and spatial gaze parameters, while posture produces no effects on temporal and small effects on spatial parameters. In Experiment 2, we further investigated body posture by introducing four conditions (sitting with chin rest, head-free sitting, quiet standing, standing on an unstable platform). Again, we found no effects on temporal and small effects on spatial gaze parameters. In our experiments, gaze behavior is largely unaffected by body posture, while task conditions readily produce effects. We conclude that results from static picture viewing may allow predictions of gaze statistics under more natural viewing conditions; however, viewing tasks should be chosen carefully because of their potential effects on gaze characteristics.
Affiliation(s)
- Daniel Backhaus
- Department of Psychology, University of Potsdam, Karl-Liebknecht-Str. 24-25, Potsdam, 14476, Germany
- Ralf Engbert
- Department of Psychology, University of Potsdam, Karl-Liebknecht-Str. 24-25, Potsdam, 14476, Germany
- Research Focus Cognitive Sciences, University of Potsdam, Karl-Liebknecht-Str. 24-25, Potsdam, 14476, Germany

7
Servais A, Préa N, Hurter C, Barbeau EJ. Why and when do you look away when trying to remember? Gaze aversion as a marker of the attentional switch to the internal world during memory retrieval. Acta Psychol (Amst) 2023; 240:104041. PMID: 37774488; DOI: 10.1016/j.actpsy.2023.104041. Received 10/05/2022; revised 09/15/2023; accepted 09/21/2023. Open access.
Abstract
It is common to look away while trying to remember specific information, for example during autobiographical memory retrieval, a behavior referred to as gaze aversion. Given the competition between internal and external attention, gaze aversion is assumed to play a role in visual decoupling, i.e., suppressing environmental distractors during internal tasks. This suggests a link between gaze aversion and the attentional switch from the outside world to a temporary internal mental space that takes place during the initial stage of memory retrieval, but this assumption had not previously been verified. We designed a protocol in which 33 participants answered 48 autobiographical questions while their eye movements were recorded with an eye-tracker and a camcorder. Results indicated that gaze aversion occurred early (median 1.09 s) and predominantly during the access phase of memory retrieval, i.e., the moment when the attentional switch is assumed to take place. In addition, gaze aversion lasted a relatively long time (on average 6 s) and was notably decoupled from concurrent head movements. These results support a role of gaze aversion in perceptual decoupling. Gaze aversion was also related to higher retrieval effort and was rare for memories that came spontaneously to mind. This suggests that gaze aversion may be required only when cognitive effort is needed to switch attention toward the internal world to help retrieve hard-to-access memories. Compared to eye vergence, another visual decoupling strategy, the association with the attentional switch seemed specific to gaze aversion. Our results provide for the first time several arguments supporting the hypothesis that gaze aversion is related to the attentional switch from the outside world to memory.
Affiliation(s)
- Anaïs Servais
- Centre de recherche Cerveau et Cognition (CerCo), UMR5549 (CNRS-UPS), Pavillon Baudot, 31052 Toulouse, France; National Civil Aviation School (ENAC), 7 avenue Edouard Belin, 31055 Toulouse, France
- Noémie Préa
- Centre de recherche Cerveau et Cognition (CerCo), UMR5549 (CNRS-UPS), Pavillon Baudot, 31052 Toulouse, France
- Christophe Hurter
- National Civil Aviation School (ENAC), 7 avenue Edouard Belin, 31055 Toulouse, France
- Emmanuel J Barbeau
- Centre de recherche Cerveau et Cognition (CerCo), UMR5549 (CNRS-UPS), Pavillon Baudot, 31052 Toulouse, France

8
Yamagata M, Nagai R, Morihiro K, Nonaka T. Relation between the kinematic synergy controlling swing foot and visual exploration during obstacle crossing. J Biomech 2023; 157:111702. PMID: 37429178; DOI: 10.1016/j.jbiomech.2023.111702. Received 08/17/2022; revised 05/24/2023; accepted 06/26/2023.
Abstract
To step over obstacles of varying heights, two distinct ongoing streams of activity (visual exploration of the environment and gait adjustment) must occur concurrently without interfering with each other. Yet it remains unclear whether and how the embodied behavior of visual exploration relates to the synergistic control of foot trajectory when negotiating irregular terrain. We therefore aimed to explore how the synergistic control of the vertical trajectory of the swing foot (i.e., obstacle clearance) while crossing an obstacle relates to the manner of visual exploration of the environment during the approach. Twenty healthy young adults crossed an obstacle (depth: 1 cm, width: 60 cm, height: 8 cm) while walking at a comfortable speed. Visual exploration was quantified as the amount of time spent fixating the vicinity of the obstacle on the floor during the period from two to four steps prior to crossing it, and the strength of the kinematic synergy controlling obstacle clearance was estimated using the uncontrolled manifold approach. We found that participants with relatively weak synergy spent more time fixating the vicinity of the obstacle from two to four steps prior to crossing it, and those participants exhibited a greater amount of head flexion movement than those with stronger kinematic synergy. Taking advantage of this complex relationship between exploratory activities (e.g., looking movements) and performative activities (e.g., adjustment of ground clearance) may be crucial for adapting walking to a complex environment.
Affiliation(s)
- Momoko Yamagata
- Faculty of Rehabilitation, Kansai Medical University, 18-89 Uyama Higashimachi, Hirakata, Osaka 573-1136, Japan; Department of Human Health Sciences, Graduate School of Medicine, Kyoto University, 53 Kawahara-cho, Shogoin, Sakyo Kyoto 606-8507, Japan
- Rira Nagai
- Department of Human Development, Graduate School of Human Development and Environment, Kobe University, 3-11 Tsurukabuto, Nada-ku, Kobe, Hyogo 657-0011, Japan
- Kaoru Morihiro
- Department of Human Development, Graduate School of Human Development and Environment, Kobe University, 3-11 Tsurukabuto, Nada-ku, Kobe, Hyogo 657-0011, Japan
- Tetsushi Nonaka
- Department of Human Development, Graduate School of Human Development and Environment, Kobe University, 3-11 Tsurukabuto, Nada-ku, Kobe, Hyogo 657-0011, Japan

9
Higgins NC, Pupo DA, Ozmeral EJ, Eddins DA. Head movement and its relation to hearing. Front Psychol 2023; 14:1183303. PMID: 37448716; PMCID: PMC10338176; DOI: 10.3389/fpsyg.2023.1183303. Received 03/09/2023; accepted 06/07/2023. Open access.
Abstract
Head position at any point in time plays a fundamental role in shaping the auditory information that reaches a listener, information that continuously changes as the head moves and reorients to different listening situations. The connection between hearing science and the kinesthetics of head movement has gained interest due to technological advances that have increased the feasibility of providing behavioral and biological feedback to assistive listening devices that can interpret movement patterns that reflect listening intent. Increasing evidence also shows that the negative impact of hearing deficits on mobility, gait, and balance may be mitigated by prosthetic hearing device intervention. Better understanding of the relationships between head movement, full body kinetics, and hearing health, should lead to improved signal processing strategies across a range of assistive and augmented hearing devices. The purpose of this review is to introduce the wider hearing community to the kinesiology of head movement and to place it in the context of hearing and communication with the goal of expanding the field of ecologically-specific listener behavior.
Affiliation(s)
- Nathan C. Higgins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- Daniel A. Pupo
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- School of Aging Studies, University of South Florida, Tampa, FL, United States
- Erol J. Ozmeral
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- David A. Eddins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States

10
Greene HH, Diwadkar VA, Brown JM. Regularities in vertical saccadic metrics: new insights, and future perspectives. Front Psychol 2023; 14:1157686. PMID: 37251031; PMCID: PMC10213562; DOI: 10.3389/fpsyg.2023.1157686. Received 02/02/2023; accepted 04/11/2023. Open access.
Abstract
Introduction Asymmetries in processing by the healthy brain demonstrate regularities that facilitate the modeling of brain operations. The goal of the present study was to determine asymmetries in saccadic metrics during visual exploration, devoid of confounding clutter in the visual field. Methods Twenty healthy adults searched for a small, low-contrast gaze-contingent target on a blank computer screen. The target was visible only if eye fixation was within a 5 deg. by 5 deg. area of the target's location. Results Replicating previously reported asymmetries, repeated measures contrast analyses indicated that up-directed saccades were executed earlier, were smaller in amplitude, and had greater probability than down-directed saccades. Given that saccade velocities are confounded by saccade amplitudes, it was also useful to investigate saccade kinematics of visual exploration as a function of vertical saccade direction. Saccade kinematics were modeled for each participant as a square-root relationship between average saccade velocity (i.e., average velocity between launching and landing of a saccade) and the corresponding saccade amplitude: Velocity = S × (Saccade Amplitude)^0.5. A comparison of the vertical scaling parameter (S) for up- and down-directed saccades showed that up-directed saccades tended to be slower than down-directed ones. Discussion To motivate future research, an ecological theory of asymmetric pre-saccadic inhibition was presented to explain the collection of vertical saccadic regularities. For example, given that the theory proposes strong inhibition for the release of reflexive down-directed prosaccades (cued by an attracting peripheral target below eye fixation) and weak inhibition for the release of up-directed prosaccades (cued by an attracting peripheral target above eye fixation), a prediction for future studies is longer reaction times for vertical anti-saccade cues above eye fixation. Finally, the present study with healthy individuals demonstrates a rationale for further study of vertical saccades in psychiatric disorders, as biomarkers for brain pathology.
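The square-root model described in the abstract has a closed-form least-squares solution for the per-participant scaling parameter S. A minimal sketch, with synthetic illustrative data rather than the authors' fitting code:

```python
import math

def fit_vertical_scaling(amplitudes_deg, avg_velocities):
    """Least-squares estimate of S in Velocity = S * sqrt(Amplitude).
    Minimizing sum((V - S*sqrt(A))^2) over S gives
    S = sum(V * sqrt(A)) / sum(A)."""
    num = sum(v * math.sqrt(a) for a, v in zip(amplitudes_deg, avg_velocities))
    den = sum(amplitudes_deg)  # sum of sqrt(A)^2
    return num / den

# Synthetic saccades generated with S = 30 recover S exactly.
amps = [1.0, 4.0, 9.0]     # saccade amplitudes in degrees
vels = [30.0, 60.0, 90.0]  # average velocities, 30 * sqrt(A)
print(fit_vertical_scaling(amps, vels))
```

Fitting S separately for up- and down-directed saccades, as in the study, then reduces the comparison of vertical kinematics to a comparison of two scalars per participant.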
Affiliation(s)
- Harold H. Greene
- Department of Psychology, University of Detroit Mercy, Detroit, MI, United States
- Vaibhav A. Diwadkar
- Department of Psychiatry and Behavioral Neurosciences, Brain Imaging Research Division, Wayne State University, Detroit, MI, United States
- James M. Brown
- Department of Psychology, University of Georgia, Athens, GA, United States

11
Dilbeck MD, Gentry TN, Economides JR, Horton JC. Quotidian Profile of Vergence Angle in Ambulatory Subjects Monitored With Wearable Eye Tracking Glasses. Transl Vis Sci Technol 2023; 12:17. PMID: 36780142; PMCID: PMC9927788; DOI: 10.1167/tvst.12.2.17. Received 11/04/2022; accepted 01/17/2023. Open access.
Abstract
Purpose Wearable eye trackers record gaze position as ambulatory subjects navigate their environment. Tobii Pro Glasses 3 were tested to assess their accuracy and precision in the measurement of vergence angle. Methods Four subjects wore the eye tracking glasses, with their head stabilized, while fixating at a series of distances corresponding to vergence demands of 0.25, 0.50, 1, 2, 4, 8, 16, and 32°. After these laboratory trials were completed, 10 subjects wore the glasses for a prolonged period while carrying out their customary daily pursuits. A vergence profile was compiled for each subject and compared with interpupillary distance. Results In the laboratory, the eye tracking glasses were comparable in accuracy to remote video eye trackers, outputting a mean vergence value within 1° of demand at all angles except 32°. In ambulatory subjects, the glasses were less accurate, due to tracking interruptions and measurement errors, partly mitigated by the application of data filters. Nonetheless, a useful record of vergence behavior was obtained in every subject. Vergence profiles often had a bimodal distribution, reflecting a preponderance of activities at near (mobile phone and computer) or far (driving and walking). As expected, vergence angle correlated with interpupillary distance. Conclusions Wearable eye tracking glasses make it possible to compile a nearly continuous record of vergence angle over hours, which can be correlated with the corresponding visual scene viewed by ambulatory subjects. Translational Relevance This technology provides new insight into the diversity of human ocular motor behavior and may become useful for the diagnosis of disorders that affect vergence function, such as convergence insufficiency, Parkinson disease, and strabismus.
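The vergence demands used in the laboratory trials follow from simple binocular geometry: for symmetric fixation, the vergence angle depends only on interpupillary distance (IPD) and fixation distance. A sketch of that relationship and its inverse (the 60 mm IPD is an illustrative value, not a subject's measurement):

```python
import math

def vergence_angle_deg(ipd_m, distance_m):
    """Vergence angle (degrees) for symmetric fixation at distance_m,
    given interpupillary distance ipd_m: theta = 2 * atan(IPD / (2 d))."""
    return math.degrees(2.0 * math.atan(ipd_m / (2.0 * distance_m)))

def fixation_distance_m(ipd_m, vergence_deg):
    """Inverse: the fixation distance that produces a given vergence demand."""
    return ipd_m / (2.0 * math.tan(math.radians(vergence_deg) / 2.0))

# With a 60 mm IPD, a 1 degree vergence demand corresponds to a fixation
# distance of about 3.4 m, and 32 degrees to about 10.5 cm.
print(fixation_distance_m(0.060, 1.0), fixation_distance_m(0.060, 32.0))
```

The same geometry explains the reported correlation between vergence angle and interpupillary distance: at any fixed viewing distance, a wider IPD demands a larger vergence angle.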
Affiliation(s)
- Mikayla D. Dilbeck
- Program in Neuroscience, Department of Ophthalmology, University of California, San Francisco, San Francisco, CA, USA
- Thomas N. Gentry
- Program in Neuroscience, Department of Ophthalmology, University of California, San Francisco, San Francisco, CA, USA
- John R. Economides
- Program in Neuroscience, Department of Ophthalmology, University of California, San Francisco, San Francisco, CA, USA
- Jonathan C. Horton
- Program in Neuroscience, Department of Ophthalmology, University of California, San Francisco, San Francisco, CA, USA

12
van Andel S, Schmidt AR, Federolf PA. Distinct coordination patterns integrate exploratory head movements with whole-body movement patterns during walking. Sci Rep 2023; 13:1235. PMID: 36683115; PMCID: PMC9868120; DOI: 10.1038/s41598-022-26848-x. Received 09/20/2022; accepted 12/21/2022. Open access.
Abstract
Visual guidance of gait is an important skill for everyday mobility. While this has often been studied using eye-tracking techniques, recent studies have shown that visual exploration involves more than just the eye; head movement and potentially the whole body are involved in successful visual exploration. This study aimed to assess coordinative patterns associated with head movement, and it was hypothesized that these patterns would span across the body rather than being localized. Twenty-one (after exclusions) healthy young adult volunteers followed a treadmill walking protocol designed to elicit different types of head movements (no stimuli compared with stimuli requiring horizontal, vertical, and mixed gaze shifts). Principal Component Analysis was used to establish whole-body correlated patterns of marker movement (Principal Movements; PMs) related to the activity of the head. In total, 37 higher-order PMs were found to be associated with head movement; two of these showed significant differences between trials associated with strong head rotations in the horizontal and sagittal planes. Both were associated with a whole-body pattern of activity. An analysis of the higher-order components revealed that exploratory head movements are associated with distinct movement patterns that span across the body. This shows that visual exploration can produce whole-body movement patterns with a potentially destabilizing influence. These findings shed new light on established results in visual search research and hold relevance for fall and injury prevention.
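Principal Movements are obtained by running PCA on posture vectors, i.e., all marker coordinates of one frame concatenated into a single vector. A minimal pure-Python sketch of extracting the dominant movement pattern via power iteration (illustrative only; the study computed the full set of PCA components on normalized marker data):

```python
import math

def first_principal_movement(frames, iters=100):
    """Dominant PCA eigenvector of a motion recording, via power iteration.
    `frames` is a list of posture vectors (concatenated marker coordinates);
    the returned unit vector is the correlated pattern with maximal variance."""
    n, d = len(frames), len(frames[0])
    # Center each coordinate around its mean posture.
    mean = [sum(f[j] for f in frames) / n for j in range(d)]
    centered = [[f[j] - mean[j] for j in range(d)] for f in frames]
    v = [1.0] * d
    for _ in range(iters):
        # w = X^T X v: project frames onto v, then re-expand to coordinates.
        scores = [sum(row[j] * v[j] for j in range(d)) for row in centered]
        w = [sum(scores[i] * centered[i][j] for i in range(n)) for j in range(d)]
        norm = math.sqrt(sum(c * c for c in w))
        if norm == 0.0:
            break
        v = [c / norm for c in w]
    return v

# Toy recording: almost all variance lies along the first coordinate,
# so the first principal movement points (up to sign) along it.
frames = [[t, 0.01 * t] for t in (-2.0, -1.0, 0.0, 1.0, 2.0)]
print(first_principal_movement(frames))
```

Higher-order PMs, as analyzed in the study, are the subsequent eigenvectors, each capturing a progressively smaller share of the movement variance.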
Affiliation(s)
- Steven van Andel
- Department of Sport Science, University of Innsbruck, Fürstenweg 185, 6020, Innsbruck, Austria
- IJsselheem Foundation, Kampen, The Netherlands
- Andreas R Schmidt
- Department of Sport Science, University of Innsbruck, Fürstenweg 185, 6020, Innsbruck, Austria
- Peter A Federolf
- Department of Sport Science, University of Innsbruck, Fürstenweg 185, 6020, Innsbruck, Austria

13
Benefits associated with the standing position during visual search tasks. Exp Brain Res 2023; 241:187-199. PMID: 36416923; DOI: 10.1007/s00221-022-06512-6. Received 05/12/2022; accepted 11/14/2022.
Abstract
The literature on postural control holds that task performance should be worse in challenging dual tasks than in a single task, because the brain has limited attentional resources. Instead, in the context of visual tasks, we assumed that (i) performance in a visual search task should be better when standing than when sitting and (ii) when standing, postural control should be better when searching than when performing the control task. Thirty-two and 16 young adults participated in Studies 1 and 2, respectively. They performed three visual tasks (searching to locate targets, free viewing, and fixating a stationary cross) displayed in small images (visual angle: 22°), either when standing or when sitting. Task performance and eye, head, upper back, lower back, and center of pressure displacements were recorded. In both studies, task performance in searching was as good (and clearly not worse) when standing as when sitting. Sway magnitude was smaller during the search task (vs. the other tasks) when standing but not when sitting. Hence, postural control was adapted to perform the challenging search task only when standing. When exploring images, and especially so in the search task, participants rotated their head instead of their eyes, as if they used an eye-centered strategy. Remarkably, in Study 2, head rotation was greater when sitting than when standing. Overall, we consider that variability in postural control was not detrimental but instead useful for facilitating visual task performance. When sitting, this variability may be lacking, thus requiring compensatory movements.
Collapse
|
14
|
Servais A, Hurter C, Barbeau EJ. Gaze direction as a facial cue of memory retrieval state. Front Psychol 2022; 13:1063228. [PMID: 36619020 PMCID: PMC9813397 DOI: 10.3389/fpsyg.2022.1063228] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2022] [Accepted: 12/02/2022] [Indexed: 12/24/2022] Open
Abstract
Gaze direction is a powerful social cue that indicates the direction of attention and can be used to decode others' mental states. When an individual looks at an external object, inferring where their attention is focused from their gaze direction is easy. But when people are immersed in memories, their attention is oriented towards their inner world. Is there any specific gaze direction in this situation, and if so, which one? While trying to remember, a common behavior is gaze aversion, which has mostly been reported as an upward-directed gaze. Our primary aim was to evaluate whether gaze direction plays a role in the inference of the orientation of attention-i.e., external vs. internal-in particular, whether an upward direction is considered as an indicator of attention towards the internal world. Our secondary objective was to explore whether different gaze directions are consistently attributed to different types of internal mental states and, more specifically, memory states (autobiographical or semantic memory retrieval, or working memory). Gaze aversion is assumed to play a role in perceptual decoupling, which is supposed to support internal attention. We therefore also tested whether internal attention was associated with high gaze eccentricity, because the mismatch between head and eye direction alters visual acuity. We conducted two large-sample (160-163 participants) online experiments. Participants were asked to choose which mental state-among different internal and external attentional states-they would attribute to faces with gazes oriented in different directions. Participants significantly associated internal attention with an upward-averted gaze across experiments, while external attention was mostly associated with a gaze remaining on the horizontal axis. This shows that gaze direction is robustly used by observers to infer others' mental states. Unexpectedly, internal attentional states were not more associated with gaze eccentricity at high (30°) than at low (10°) eccentricity, and we found that autobiographical memory retrieval, but not the other memory states, was highly associated with a 10° downward gaze. This reveals the possible existence of different types of gaze aversion for different types of memories and opens new perspectives.
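The eccentricity manipulation described above (10° vs. 30° averted gazes) is simply the angle between the head's facing direction and the gaze direction. A minimal sketch of that geometry follows; the function name and the unit-vector convention are assumptions for illustration, not taken from the study.

```python
import numpy as np

def gaze_eccentricity_deg(head_dir, gaze_dir):
    """Return the angle in degrees between a head-direction vector and a
    gaze-direction vector, i.e., the eye-in-head eccentricity that the
    stimuli manipulated (e.g., 10 vs. 30 degrees). Vectors are 3D and
    need not be normalized; only their directions matter."""
    h = np.asarray(head_dir, dtype=float)
    g = np.asarray(gaze_dir, dtype=float)
    cos_angle = np.dot(h, g) / (np.linalg.norm(h) * np.linalg.norm(g))
    # Clip guards against floating-point values slightly outside [-1, 1].
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# Head facing straight ahead, gaze averted upward by 45 degrees:
print(gaze_eccentricity_deg([0, 0, 1], [0, 1, 1]))  # 45.0 (within float error)
```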
Collapse
Affiliation(s)
- Anaïs Servais
- Centre de Recherche Cerveau et Cognition (CerCo), CNRS-UPS, UMR5549, Toulouse, France; Ecole Nationale d’Aviation Civile (ENAC), Toulouse, France. *Correspondence: Anaïs Servais
| | | | - Emmanuel J. Barbeau
- Centre de Recherche Cerveau et Cognition (CerCo), CNRS-UPS, UMR5549, Toulouse, France
| |
Collapse
|
15
|
Dabrowski O, Courvoisier S, Falcone JL, Klauser A, Songeon J, Kocher M, Chopard B, Lazeyras F. Choreography Controlled (ChoCo) brain MRI artifact generation for labeled motion-corrupted datasets. Phys Med 2022; 102:79-87. [PMID: 36137403 DOI: 10.1016/j.ejmp.2022.09.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Revised: 08/26/2022] [Accepted: 09/12/2022] [Indexed: 11/19/2022] Open
Abstract
MRI is a non-invasive medical imaging modality that is sensitive to patient motion, which constitutes a major limitation in most clinical applications. Solutions may arise from the reduction of acquisition times or from motion-correction techniques, either prospective or retrospective. Benchmarking the latter methods requires labeled motion-corrupted datasets, which are uncommon. To the best of our knowledge, no protocol for generating labeled datasets of MRI images corrupted by controlled motion has yet been proposed. Hence, we present a methodology allowing the acquisition of reproducible motion-corrupted MRI images, as well as validation of the system's performance by motion estimation through rigid-body volume registration of fast 3D echo-planar imaging (EPI) time series. A proof of concept is presented to show how the protocol can be implemented to provide qualitative and quantitative results. An MRI-compatible video system displays a moving target that volunteers equipped with customized plastic glasses must follow to perform predefined head choreographies. Motion estimation using rigid-body EPI time series registration demonstrated that head position can be accurately determined (with an average standard deviation of about 0.39 degrees). A spatio-temporal upsampling and interpolation method to cope with fast motion is also proposed in order to improve motion estimation. The proposed protocol is versatile and straightforward. It is compatible with all MRI systems and may provide insights into the origins of specific motion artifacts. The MRI and artificial intelligence research communities could benefit from this work to build in-vivo labeled datasets of motion-corrupted MRI images suitable for training/testing any retrospective motion correction or machine learning algorithm.
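The accuracy figure quoted above (an average standard deviation of about 0.39 degrees) is a dispersion statistic over per-volume rotation estimates. Assuming the registration itself is delegated to any rigid-body volume-registration tool, a hedged sketch of that summary step might look as follows; the function name and input layout are illustrative, not the authors' code.

```python
import numpy as np

def rotation_dispersion(rigid_params_deg):
    """Summarize head-motion estimates from rigid-body registration of an
    EPI time series.

    rigid_params_deg -- (N, 3) array of per-volume rotation angles in degrees
                        (e.g., pitch, roll, yaw) relative to a reference
                        volume, as produced by a registration tool.

    Returns the sample standard deviation per rotation axis and their mean,
    a quantity analogous to the 'average standard deviation' accuracy
    figure cited in the abstract.
    """
    params = np.asarray(rigid_params_deg, dtype=float)
    per_axis_sd = params.std(axis=0, ddof=1)  # sample SD along the time axis
    return per_axis_sd, float(per_axis_sd.mean())

# Example: three volumes with small fluctuations about the reference pose.
angles = [[0.0, 0.0, 0.0], [0.4, 0.1, 0.0], [0.2, 0.0, 0.1]]
per_axis, overall = rotation_dispersion(angles)
```

A choreography that is tracked accurately should yield a small `overall` value when the volunteer holds a commanded pose.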
Collapse
Affiliation(s)
- Oscar Dabrowski
- Computer Science Department, Faculty of Sciences, University of Geneva, Switzerland.
| | - Sébastien Courvoisier
- Department of Radiology and Medical Informatics, Faculty of Medicine, University of Geneva, Switzerland; CIBM Center for Biomedical Imaging, MRI HUG-UNIGE, Geneva, Switzerland
| | - Jean-Luc Falcone
- Computer Science Department, Faculty of Sciences, University of Geneva, Switzerland
| | - Antoine Klauser
- Department of Radiology and Medical Informatics, Faculty of Medicine, University of Geneva, Switzerland; CIBM Center for Biomedical Imaging, MRI HUG-UNIGE, Geneva, Switzerland
| | - Julien Songeon
- Department of Radiology and Medical Informatics, Faculty of Medicine, University of Geneva, Switzerland; CIBM Center for Biomedical Imaging, MRI HUG-UNIGE, Geneva, Switzerland
| | - Michel Kocher
- Biomedical Imaging Group (BIG), School of Engineering, EPFL, Lausanne, Switzerland
| | - Bastien Chopard
- Computer Science Department, Faculty of Sciences, University of Geneva, Switzerland
| | - François Lazeyras
- Department of Radiology and Medical Informatics, Faculty of Medicine, University of Geneva, Switzerland; CIBM Center for Biomedical Imaging, MRI HUG-UNIGE, Geneva, Switzerland
| |
Collapse
|
16
|
Anderson EM, Seemiller ES, Smith LB. Scene saliencies in egocentric vision and their creation by parents and infants. Cognition 2022; 229:105256. [PMID: 35988453 DOI: 10.1016/j.cognition.2022.105256] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Revised: 08/09/2022] [Accepted: 08/11/2022] [Indexed: 11/15/2022]
Abstract
Across the lifespan, humans are biased to look first at what is easy to see, with a handful of well-documented visual saliences shaping our attention (e.g., Itti & Koch, 2001). These attentional biases may emerge from the contexts in which moment-to-moment attention occurs, where perceivers and their social partners actively shape bottom-up saliences, moving their bodies and objects to make targets of interest more salient. The goal of the present study was to determine the bottom-up saliences present in infant egocentric images and to provide evidence on the role that infants and their mature social partners play in highlighting targets of interest via these saliences. We examined 968 unique scenes in which an object had purposefully been placed in the infant's egocentric view, drawn from videos created by one-year-old infants wearing a head camera during toy-play with a parent. To understand which saliences mattered in these scenes, we conducted a visual search task, asking participants (n = 156) to find objects in the egocentric images. To connect this to the behaviors of perceivers, we then characterized the saliences of objects placed by infants or parents compared to objects that were otherwise present in the scenes. Our results show that body-centric properties, such as increases in the centering and visual size of the object, as well as decreases in the number of competing objects immediately surrounding it, predicted faster search times and distinguished placed from unplaced objects. The present results suggest that the bottom-up saliences that can be readily controlled by perceivers and their social partners may most strongly impact our attention. This finding has implications for the functional role of saliences in human vision, their origin, the social structure of perceptual environments, and how the relation between bottom-up and top-down control of attention in these environments may support infant learning.
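The two body-centric saliences analyzed above, centering and visual size, can be computed from an object's pixel mask in an egocentric image. The definitions below (size as pixel fraction, centering as normalized centroid offset) are plausible illustrations under stated assumptions, not the study's exact measures.

```python
import numpy as np

def object_saliences(mask):
    """Compute two body-centric salience measures for an object in an
    egocentric image, given a boolean mask of the object's pixels:

    size   -- fraction of the image the object occupies (visual size)
    offset -- distance of the object centroid from the image center,
              normalized by the half-diagonal (0 = perfectly centered,
              1 = at a corner); lower values mean better centering.
    """
    mask = np.asarray(mask, dtype=bool)
    if not mask.any():
        raise ValueError("mask contains no object pixels")
    h, w = mask.shape
    size = float(mask.mean())
    ys, xs = np.nonzero(mask)
    centroid = np.array([ys.mean(), xs.mean()])
    center = np.array([(h - 1) / 2, (w - 1) / 2])
    half_diag = np.linalg.norm(center)
    offset = float(np.linalg.norm(centroid - center) / half_diag)
    return size, offset
```

Under the study's findings, placed objects should tend toward larger `size` and smaller `offset` than unplaced objects in the same scenes.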
Collapse
Affiliation(s)
| | | | - Linda B Smith
- Psychological and Brain Sciences, Indiana University, USA
| |
Collapse
|
17
|
Beyond screen time: Using head-mounted eye tracking to study natural behavior. ADVANCES IN CHILD DEVELOPMENT AND BEHAVIOR 2022; 62:61-91. [PMID: 35249686 DOI: 10.1016/bs.acdb.2021.11.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
Head-mounted eye tracking is a new method that allows researchers to catch a glimpse of what infants and children see during naturalistic activities. In this chapter, we review how mobile, wearable eye trackers improve the construct validity of important developmental constructs, such as visual object experiences and social attention, in ways that would be impossible using screen-based eye tracking. Head-mounted eye tracking improves ecological validity by allowing researchers to present more realistic and complex visual scenes, create more interactive experimental situations, and examine how the body influences what infants and children see. As with any new method, there are difficulties to overcome. Accordingly, we identify what aspects of head-mounted eye-tracking study design affect the measurement quality, interpretability of the results, and efficiency of gathering data. Moreover, we provide a summary of best practices aimed at allowing researchers to make well-informed decisions about whether and how to apply head-mounted eye tracking to their own research questions.
Collapse
|