1
Charbonneau E, Begon M, Romeas T. A temporal quantitative analysis of visuomotor behavior during four twisting somersaults in elite and sub-elite trampolinists. Hum Mov Sci 2024; 98:103295. PMID: 39378631. DOI: 10.1016/j.humov.2024.103295.
Abstract
Vision has previously been correlated with performance in acrobatic sports, highlighting visuomotor expertise adaptations. However, we still poorly understand the visuomotor strategies athletes use while executing twisting somersaults, even though this knowledge might be helpful for skill development. Thus, the present study sought to identify the differences in gaze behavior between elite and sub-elite trampolinists during the execution of four acrobatic skills of increasing difficulty. Seventeen inertial measurement units and a wearable eye-tracker were used to record the body and gaze kinematics of 17 trampolinists (8 elites, 9 sub-elites). Six typical metrics were analyzed using a mixed analysis of variance (ANOVA) with Expertise as the between-subject factor and Acrobatics as the within-subject factor. To complement this analysis, advanced temporal eye-tracking metrics are reported, such as the dwell time on areas of interest, the scan path on the trampoline bed, the temporal evolution of the gaze orientation endpoint (SPGO), and the time spent executing specific neck and eye strategies. A significant main effect of Expertise was evidenced in only one of the typical metrics: elite athletes exhibited a higher number of fixations than sub-elites (p = 0.033). Significant main effects of Acrobatics were observed on all metrics (p < 0.05), revealing that gaze strategies are task-dependent in trampolining. The recordings of eye and neck movements performed in this study confirmed the use of "spotting" at the beginning and end of the acrobatics. They also revealed a unique sport-specific visual strategy that we termed self-motion detection. This strategy consists of not moving the eyes during fast head rotations and was mainly used by trampolinists during the twisting phase.
This study proposes a detailed exploration of trampolinists' gaze behavior in highly realistic settings and a temporal description of the visuomotor strategies to enhance understanding of perception-action interactions during the execution of twisting somersaults.
Affiliation(s)
- Eve Charbonneau
- Laboratoire de Simulation et Modélisation du Mouvement, Faculté de Médecine, Université de Montréal, Montréal, Canada; Institut national du sport du Québec, Montréal, Canada.
- Mickaël Begon
- Laboratoire de Simulation et Modélisation du Mouvement, Faculté de Médecine, Université de Montréal, Montréal, Canada; Sainte-Justine Hospital Research Center, Montréal, Canada
- Thomas Romeas
- Institut national du sport du Québec, Montréal, Canada; École d'optométrie, Université de Montréal, Montréal, Canada
2
Fragaszy DM, Kelty-Stephen DG, Mangalam M. How bipedalism shapes humans' actions with hand tools. Philos Trans R Soc Lond B Biol Sci 2024; 379:20230152. PMID: 39155723. PMCID: PMC11391300. DOI: 10.1098/rstb.2023.0152.
Abstract
The task for an embodied cognitive understanding of humans' actions with tools is to elucidate how the human body, as a whole, supports the perception of affordances and dexterous action with objects in relation to other objects. Here, we focus on the relationship between humans' actions with handheld tools and bipedal posture. Posture plays a pivotal role in shaping animals' perception and action dynamics. While humans stand and locomote bipedally, other primates predominantly employ quadrupedal postures and locomotion, relying on both hands and feet to support the body. Drawing upon evidence from evolutionary biology, developmental psychology and performance studies, we elucidate the influence of bipedalism on our actions with objects and on our proficiency in using tools. We use the metaphor of cascades to capture the dynamic, nonlinear transformations in morphology and behaviour associated with posture and the use of tools across evolutionary and developmental timescales. Recent work illustrates the promise of multifractal cascade analysis to reveal nonlinear, cross-scale interactions across the entire body in real time, supporting the perception of affordances for actions with tools. Cascade analysis enriches our comprehension of real-time performance and facilitates exploration of the relationships among whole-body coordination, individual development, and evolutionary processes. This article is part of the theme issue 'Minds in movement: embodied cognition in the age of artificial intelligence'.
Affiliation(s)
- Damian G Kelty-Stephen
- Department of Psychology, State University of New York at New Paltz, New Paltz, NY 12561, USA
- Madhur Mangalam
- Division of Biomechanics and Research Development, Department of Biomechanics, Center for Research in Human Movement Variability, University of Nebraska, Omaha, NE 68182, USA
3
Kojima T, Kokubu M. Role of gaze behaviors, body movements, and bicycle movements during cycling on a straight and narrow path. Hum Mov Sci 2024; 98:103290. PMID: 39293132. DOI: 10.1016/j.humov.2024.103290.
Abstract
Cycling requires the integration of gaze behaviors, body movements, and bicycle movements. However, whether these movements contribute to skilled cycling performance, such as cycling on a straight and narrow path, is uncertain. The present study aimed to differentiate optokinetic nystagmus (OKN) from the vestibulo-ocular reflex (VOR), which characterize the relationship between eye and head movements during cycling on a straight and narrow path, and to identify the gaze behaviors, body movements, and bicycle movements that contribute to cycling performance. Nineteen participants with no prior competitive experience cycled three times on a 12-cm-wide path. The participants were asked to avoid deviating from the path as much as possible. The measured variables were gaze behavior in the sagittal plane, body movement, and bicycle movement. OKN was observed in 16 of the 19 participants. The cross-correlation between eye and head movements did not show negative values, indicating the absence of VOR. These results suggest that the participants moved their eyes while keeping their heads stable during cycling on a straight and narrow path. In the multiple regression analysis, small standard deviations (SD) of the steering angle and of the upward eye position were related to lower deviation from the path. These results suggest that a small SD of the steering angle and a gaze directed in the forward direction may contribute to skilled cycling.
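The OKN/VOR distinction above rests on the sign of the eye-head cross-correlation: the VOR counter-rotates the eyes against the head, so it shows up as a negative correlation, while its absence alongside sawtooth eye movements points to OKN. A minimal numpy sketch of that check, with synthetic signals whose names and values are illustrative, not taken from the study's pipeline:

```python
import numpy as np

def eye_head_xcorr(eye, head, max_lag=10):
    """Normalized cross-correlation between eye and head traces.

    A strong negative peak would suggest VOR-like counter-rotation;
    its absence is more consistent with OKN-style behavior.
    """
    eye = (eye - eye.mean()) / eye.std()
    head = (head - head.mean()) / head.std()
    n = len(eye)
    lags = np.arange(-max_lag, max_lag + 1)
    r = []
    for lag in lags:
        if lag < 0:
            r.append(np.dot(eye[:lag], head[-lag:]) / (n + lag))
        elif lag > 0:
            r.append(np.dot(eye[lag:], head[:-lag]) / (n - lag))
        else:
            r.append(np.dot(eye, head) / n)  # Pearson r at zero lag
    return lags, np.array(r)

# Synthetic example: the head stays roughly stable while the eyes
# oscillate, so no strong negative correlation should appear.
t = np.linspace(0, 2, 400)
head = 0.05 * np.random.default_rng(0).standard_normal(400)
eye = np.sin(2 * np.pi * 3 * t)  # stand-in for slow-phase eye drift
lags, r = eye_head_xcorr(eye, head)
print(bool(r.min() > -0.5))
```

Because both traces are standardized, each lagged value approximates a Pearson correlation, so the "no negative values" criterion in the abstract maps directly onto `r.min()`.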
Affiliation(s)
- Takashi Kojima
- Graduate School of Comprehensive Human Sciences, University of Tsukuba, Tsukuba, Ibaraki, Japan.
- Masahiro Kokubu
- Institute of Health and Sport Sciences, University of Tsukuba, Tsukuba, Ibaraki, Japan
4
Raja V. The motifs of radical embodied neuroscience. Eur J Neurosci 2024; 60:4738-4755. PMID: 38816952. DOI: 10.1111/ejn.16434.
Abstract
In this paper, I analyse how the emerging scientific framework of radical embodied neuroscience differs from contemporary mainstream cognitive neuroscience. To do so, I propose the notion of motif to enrich the philosophical toolkit of cognitive neuroscience. This notion can be used to characterize the guiding ideas of any given scientific framework in psychology and neuroscience. Motifs are highly unconstrained, open-ended concepts that support equally open-ended families of explanations. Different scientific frameworks (e.g., psychophysics or cognitive neuroscience) provide these motifs to answer the overarching themes of these disciplines, such as the relationship between stimuli and sensations or the proper methods of the sciences of the mind. Some motifs of mainstream cognitive neuroscience are the motif of encoding, the motif of input-output systems, and the motif of algorithms. The first two answer the question about the relationship between stimuli, sensations and experience (e.g., stimuli are inputs and are encoded by brain structures). The third answers the question regarding the mechanism of cognition and experience. All three are equally unconstrained and open-ended, and they serve as an umbrella for different kinds of explanation, i.e., different positions regarding what counts as a code or as an input. Along with the articulation of the notion of motif, the main aim of this article is to present three motifs for radical embodied neuroscience: the motif of complex stimulation, the motif of organic behaviour and the motif of resonance.
Affiliation(s)
- Vicente Raja
- Department of Philosophy, Universidad de Murcia, Murcia, Spain
- Rotman Institute of Philosophy, Western University, London, Canada
5
Horrocks EAB, Rodrigues FR, Saleem AB. Flexible neural population dynamics govern the speed and stability of sensory encoding in mouse visual cortex. Nat Commun 2024; 15:6415. PMID: 39080254. PMCID: PMC11289260. DOI: 10.1038/s41467-024-50563-y.
Abstract
Time courses of neural responses underlie real-time sensory processing and perception. How these temporal dynamics change may be fundamental to how sensory systems adapt to different perceptual demands. By simultaneously recording from hundreds of neurons in mouse primary visual cortex, we examined neural population responses to visual stimuli at sub-second timescales, during different behavioural states. We discovered that during active behavioural states characterised by locomotion, single neurons shift from transient to sustained response modes, facilitating rapid emergence of visual stimulus tuning. Differences in single-neuron response dynamics were associated with changes in temporal dynamics of neural correlations, including faster stabilisation of stimulus-evoked changes in the structure of correlations during locomotion. Using Factor Analysis, we examined temporal dynamics of latent population responses and discovered that trajectories of population activity make more direct transitions between baseline and stimulus-encoding neural states during locomotion. This could be partly explained by dampening of oscillatory dynamics present during stationary behavioural states. Functionally, changes in temporal response dynamics collectively enabled faster, more stable and more efficient encoding of new visual information during locomotion. These findings reveal a principle of how sensory systems adapt to perceptual demands, where flexible neural population dynamics govern the speed and stability of sensory encoding.
Affiliation(s)
- Edward A B Horrocks
- Institute of Behavioural Neuroscience, University College London, London, WC1V 0AP, UK.
- Fabio R Rodrigues
- Institute of Behavioural Neuroscience, University College London, London, WC1V 0AP, UK
- Aman B Saleem
- Institute of Behavioural Neuroscience, University College London, London, WC1V 0AP, UK.
6
Park SY, Kang TW, Koo DK. Investigating Eye Movement and Postural Stability Relationships Using Mobile Eye-Tracking and Posturography: A Cross-Sectional Study. Bioengineering (Basel) 2024; 11:742. PMID: 39199700. PMCID: PMC11351117. DOI: 10.3390/bioengineering11080742.
Abstract
Vision and eye movements play a crucial role in maintaining postural stability. This study investigated the relationship between eye movements and postural control in healthy adults using mobile eye-tracking technology and posturography. Forty healthy participants underwent assessments of eye movements using a mobile eye-tracking system and postural stability using Tetrax posturography under various sensory conditions. Pearson correlation coefficients were computed to examine associations between eye movement parameters and postural control indices. Significant correlations were found between eye movement parameters and postural stability indices. Faster and more consistent horizontal eye movements were associated with better postural stability (r = -0.63, p < 0.05). Eye movement speed variability positively correlated with weight distribution indices under normal eyes open (r = 0.65, p < 0.05) and closed (r = 0.59, p < 0.05) conditions. Coordination of horizontal and vertical eye movements positively correlated with postural control (r = 0.69, p < 0.01). Negative correlations were observed between eye movement coordination and Fourier indices in various frequency bands (p < 0.05) and the stability index under different head positions (p < 0.05). The findings provide insights into sensory integration mechanisms underlying balance maintenance and highlight the importance of integrated sensory processing in postural stability. Eye movement assessments have potential applications in balance evaluation and fall risk prediction.
Affiliation(s)
- Seo-Yoon Park
- Department of Physical Therapy, College of Health and Welfare, Woosuk University, 443 Samnye-ro, Samnye-eup, Wanju-gun 55338, Republic of Korea; (S.-Y.P.); (T.-W.K.)
- Tae-Woo Kang
- Department of Physical Therapy, College of Health and Welfare, Woosuk University, 443 Samnye-ro, Samnye-eup, Wanju-gun 55338, Republic of Korea; (S.-Y.P.); (T.-W.K.)
- Dong-Kyun Koo
- HiVE Center, University-Industry Foundation, Wonkwang Health Science University, 514, Iksan-daero, Iksan-si 54538, Republic of Korea
7
Vater C. Viewing angle, skill level and task representativeness affect response times in basketball defence. Sci Rep 2024; 14:3337. PMID: 38336961. PMCID: PMC10858043. DOI: 10.1038/s41598-024-53706-9.
Abstract
In basketball defence, it is impossible to keep track of all players without peripheral vision. This is the first study to investigate peripheral vision usage in an experimentally controlled setup, with sport-specific basketball stimuli from a first-person perspective, large viewing eccentricities (up to 90° to the left and right), and natural action responses. A CAVE and a motion-tracking system were used to project the scenarios and capture the movement responses of high- and low-skilled basketball players, respectively. Four video conditions were created: (1) a simple reaction time task without crowding (only attackers), (2) a simple reaction time task with crowding (with attackers and defenders), (3) a choice-reaction time task where the player cutting to the basket eventually passed the ball to another player and (4) a game simulation. The results indicated eccentricity effects in all tests, a crowding effect in condition 2, and expertise differences in conditions 3 and 4 only. These findings suggest that viewing eccentricity has an impact on response times, that crowding is a limiting factor for peripheral perception in sports games, and that high-skilled but not low-skilled players can compensate for eccentricity effects in real game situations, indicating their superior positioning and perceptual strategies.
Affiliation(s)
- Christian Vater
- Institute of Sport Science, University of Bern, Bremgartenstrasse 145, 3012, Bern, Switzerland.
8
Yang YH, Fukiage T, Sun Z, Nishida S. Psychophysical measurement of perceived motion flow of naturalistic scenes. iScience 2023; 26:108307. PMID: 38025782. PMCID: PMC10679809. DOI: 10.1016/j.isci.2023.108307.
Abstract
The neural and computational mechanisms underlying visual motion perception have been extensively investigated over several decades, but little attempt has been made to measure and analyze, how human observers perceive the map of motion vectors, or optical flow, in complex naturalistic scenes. Here, we developed a psychophysical method to assess human-perceived motion flows using local vector matching and a flash probe. The estimated perceived flow for naturalistic movies agreed with the physically correct flow (ground truth) at many points, but also showed consistent deviations from the ground truth (flow illusions) at other points. Comparisons with the predictions of various computational models, including cutting-edge computer vision algorithms and coordinate transformation models, indicated that some flow illusions are attributable to lower-level factors such as spatiotemporal pooling and signal loss, while others reflect higher-level computations, including vector decomposition. Our study demonstrates a promising data-driven psychophysical paradigm for an advanced understanding of visual motion perception.
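Quantifying where perceived flow departs from the ground truth comes down to comparing two vector fields point by point, for instance via endpoint error and angular difference, the standard optical-flow error measures. A small numpy sketch under that reading; the function and array names are illustrative, not taken from the paper's code:

```python
import numpy as np

def flow_deviation(perceived, ground_truth):
    """Per-point endpoint error and angular difference (degrees)
    between a perceived flow map and the physically correct flow.
    Inputs are (N, 2) arrays of motion vectors (vx, vy)."""
    perceived = np.asarray(perceived, float)
    ground_truth = np.asarray(ground_truth, float)
    epe = np.linalg.norm(perceived - ground_truth, axis=1)
    dot = np.sum(perceived * ground_truth, axis=1)
    norms = (np.linalg.norm(perceived, axis=1)
             * np.linalg.norm(ground_truth, axis=1))
    cos = np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0)
    return epe, np.degrees(np.arccos(cos))

gt = np.array([[1.0, 0.0], [0.0, 1.0]])
seen = np.array([[1.0, 0.0], [1.0, 1.0]])  # second point deviates by 45 deg
epe, ang = flow_deviation(seen, gt)
print(ang.round(1))
```

Points where the angular difference is consistently nonzero across observers would correspond to the "flow illusions" described above.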
Affiliation(s)
- Yung-Hao Yang
- Cognitive Informatics Laboratory, Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto 606-8501, Japan
- Taiki Fukiage
- Human Information Science Laboratory, NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, 3-1, Morinosato-Wakamiya, Atsugi, Kanagawa 243-0198, Japan
- Zitang Sun
- Cognitive Informatics Laboratory, Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto 606-8501, Japan
- Shin'ya Nishida
- Cognitive Informatics Laboratory, Graduate School of Informatics, Kyoto University, Yoshida-Honmachi, Sakyo-ku, Kyoto 606-8501, Japan
- Human Information Science Laboratory, NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, 3-1, Morinosato-Wakamiya, Atsugi, Kanagawa 243-0198, Japan
9
Casartelli L, Maronati C, Cavallo A. From neural noise to co-adaptability: Rethinking the multifaceted architecture of motor variability. Phys Life Rev 2023; 47:245-263. PMID: 37976727. DOI: 10.1016/j.plrev.2023.10.036.
Abstract
In the last decade, the source and the functional meaning of motor variability have attracted considerable attention in behavioral and brain sciences. This construct classically combined different levels of description, variable internal robustness or coherence, and multifaceted operational meanings. We provide here a comprehensive review of the literature with the primary aim of building a precise lexicon that goes beyond the generic and monolithic use of motor variability. In the pars destruens of the work, we model three domains of motor variability related to peculiar computational elements that influence fluctuations in motor outputs. Each domain is in turn characterized by multiple sub-domains. We begin with the domains of noise and differentiation. However, the main contribution of our model concerns the domain of adaptability, which refers to variation within the same exact motor representation. In particular, we use the terms learning and (social)fitting to specify the portions of motor variability that depend on our propensity to learn and on our largely constitutive propensity to be influenced by external factors. A particular focus is on motor variability in the context of the sub-domain named co-adaptability. Further groundbreaking challenges arise in the modeling of motor variability. Therefore, in a separate pars construens, we attempt to characterize these challenges, addressing both theoretical and experimental aspects as well as potential clinical implications for neurorehabilitation. All in all, our work suggests that motor variability is neither simply detrimental nor beneficial, and that studying its fluctuations can provide meaningful insights for future research.
Affiliation(s)
- Luca Casartelli
- Theoretical and Cognitive Neuroscience Unit, Scientific Institute IRCCS E. MEDEA, Italy
- Camilla Maronati
- Move'n'Brains Lab, Department of Psychology, Università degli Studi di Torino, Italy
- Andrea Cavallo
- Move'n'Brains Lab, Department of Psychology, Università degli Studi di Torino, Italy; C'MoN Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy.
10
Solbach MD, Tsotsos JK. The psychophysics of human three-dimensional active visuospatial problem-solving. Sci Rep 2023; 13:19967. PMID: 37968501. PMCID: PMC10651907. DOI: 10.1038/s41598-023-47188-4.
Abstract
Our understanding of how visual systems detect, analyze and interpret visual stimuli has advanced greatly. However, the visual systems of all animals do much more; they enable visual behaviours. How well the visual system performs while interacting with the visual environment and how vision is used in the real world is far from fully understood, especially in humans. It has been suggested that comparison is the most primitive of psychophysical tasks. Thus, as a probe into these active visual behaviours, we use a same-different task: Are two physical 3D objects visually the same? This task is a fundamental cognitive ability. We pose this question to human subjects who are free to move about and examine two real objects in a physical 3D space. The experimental design is such that all behaviours are directed to viewpoint change. Without any training, our participants achieved a mean accuracy of 93.82%. No learning effect was observed on accuracy after many trials, but some effect was seen for response time, number of fixations and extent of head movement. Our probe task, even though easily executed at high-performance levels, uncovered a surprising variety of complex strategies for viewpoint control, suggesting that solutions were developed dynamically and deployed in a seemingly directed hypothesize-and-test manner tailored to the specific task. Subjects need not acquire task-specific knowledge; instead, they formulate effective solutions right from the outset, and as they engage in a series of attempts, those solutions progressively refine, becoming more efficient without compromising accuracy.
Affiliation(s)
- Markus D Solbach
- Department of Electrical Engineering and Computer Science, York University, Toronto, ON, M3J 1P3, Canada.
- John K Tsotsos
- Department of Electrical Engineering and Computer Science, York University, Toronto, ON, M3J 1P3, Canada
11
Fooken J, Baltaretu BR, Barany DA, Diaz G, Semrau JA, Singh T, Crawford JD. Perceptual-Cognitive Integration for Goal-Directed Action in Naturalistic Environments. J Neurosci 2023; 43:7511-7522. PMID: 37940592. PMCID: PMC10634571. DOI: 10.1523/jneurosci.1373-23.2023.
Abstract
Real-world actions require one to simultaneously perceive, think, and act on the surrounding world, requiring the integration of (bottom-up) sensory information and (top-down) cognitive and motor signals. Studying these processes involves the intellectual challenge of cutting across traditional neuroscience silos, and the technical challenge of recording data in uncontrolled natural environments. However, recent advances in techniques, such as neuroimaging, virtual reality, and motion tracking, allow one to address these issues in naturalistic environments for both healthy participants and clinical populations. In this review, we survey six topics in which naturalistic approaches have advanced both our fundamental understanding of brain function and how neurologic deficits influence goal-directed, coordinated action in naturalistic environments. The first part conveys fundamental neuroscience mechanisms related to visuospatial coding for action, adaptive eye-hand coordination, and visuomotor integration for manual interception. The second part discusses applications of such knowledge to neurologic deficits, specifically, steering in the presence of cortical blindness, impact of stroke on visual-proprioceptive integration, and impact of visual search and working memory deficits. This translational approach-extending knowledge from lab to rehab-provides new insights into the complex interplay between perceptual, motor, and cognitive control in naturalistic tasks that are relevant for both basic and clinical research.
Affiliation(s)
- Jolande Fooken
- Centre for Neuroscience, Queen's University, Kingston, Ontario K7L3N6, Canada
- Bianca R Baltaretu
- Department of Psychology, Justus Liebig University, Giessen, 35394, Germany
- Deborah A Barany
- Department of Kinesiology, University of Georgia, and Augusta University/University of Georgia Medical Partnership, Athens, Georgia 30602
- Gabriel Diaz
- Center for Imaging Science, Rochester Institute of Technology, Rochester, New York 14623
- Jennifer A Semrau
- Department of Kinesiology and Applied Physiology, University of Delaware, Newark, Delaware 19713
- Tarkeshwar Singh
- Department of Kinesiology, Pennsylvania State University, University Park, Pennsylvania 16802
- J Douglas Crawford
- Centre for Integrative and Applied Neuroscience, York University, Toronto, Ontario M3J 1P3, Canada
12
Vafaii H, Yates JL, Butts DA. Hierarchical VAEs provide a normative account of motion processing in the primate brain. bioRxiv [Preprint] 2023:2023.09.27.559646. PMID: 37808629. PMCID: PMC10557690. DOI: 10.1101/2023.09.27.559646.
Abstract
The relationship between perception and inference, as postulated by Helmholtz in the 19th century, is paralleled in modern machine learning by generative models like Variational Autoencoders (VAEs) and their hierarchical variants. Here, we evaluate the role of hierarchical inference and its alignment with brain function in the domain of motion perception. We first introduce a novel synthetic data framework, Retinal Optic Flow Learning (ROFL), which enables control over motion statistics and their causes. We then present a new hierarchical VAE and test it against alternative models on two downstream tasks: (i) predicting ground truth causes of retinal optic flow (e.g., self-motion); and (ii) predicting the responses of neurons in the motion processing pathway of primates. We manipulate the model architectures (hierarchical versus non-hierarchical), loss functions, and the causal structure of the motion stimuli. We find that hierarchical latent structure in the model leads to several improvements. First, it improves the linear decodability of ground truth factors and does so in a sparse and disentangled manner. Second, our hierarchical VAE outperforms previous state-of-the-art models in predicting neuronal responses and exhibits sparse latent-to-neuron relationships. These results depend on the causal structure of the world, indicating that alignment between brains and artificial neural networks depends not only on architecture but also on matching ecologically relevant stimulus statistics. Taken together, our results suggest that hierarchical Bayesian inference underlies the brain's understanding of the world, and hierarchical VAEs can effectively model this understanding.
13
Guénot J, Trotter Y, Delaval A, Baurès R, Soler V, Cottereau BR. Processing of translational, radial and rotational optic flow in older adults. Sci Rep 2023; 13:15312. PMID: 37714896. PMCID: PMC10504320. DOI: 10.1038/s41598-023-42479-2.
Abstract
Aging impacts human observers' performance in a wide range of visual tasks and notably in motion discrimination. Despite numerous studies, we still poorly understand how optic flow processing is impacted in healthy older adults. Here, we estimated motion coherence thresholds in two groups of younger (age: 18-30, n = 42) and older (70-90, n = 42) adult participants for the three components of optic flow (translational, radial and rotational patterns). Stimuli were dynamic random-dot kinematograms (RDKs) projected on a large screen. Participants had to report their perceived direction of motion (leftward versus rightward for translational, inward versus outward for radial and clockwise versus anti-clockwise for rotational patterns). Stimuli had an average speed of 7°/s (additional recordings were performed at 14°/s) and were either presented full-field or in peripheral vision. Statistical analyses showed that thresholds in older adults were similar to those measured in younger participants for translational patterns, thresholds for radial patterns were significantly increased in our slowest condition and thresholds for rotational patterns were significantly decreased. Altogether, these findings support the idea that aging does not lead to a general decline in visual perception but rather has specific effects on the processing of each optic flow component.
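A motion coherence threshold like those estimated above is the coherence level (percentage of dots moving in the signal direction) at which direction discrimination reaches a criterion percent-correct. A minimal numpy sketch that reads the threshold off hypothetical psychometric data by interpolation; the study's actual fitting procedure is not specified here, and the data points are invented:

```python
import numpy as np

def coherence_threshold(coherences, pct_correct, criterion=75.0):
    """Coherence level at a criterion percent-correct, by linear
    interpolation of (coherence, percent-correct) psychometric data.
    Assumes pct_correct increases with coherence."""
    c = np.asarray(coherences, float)
    p = np.asarray(pct_correct, float)
    return float(np.interp(criterion, p, c))

# Hypothetical direction-discrimination data for an RDK task
coh = [5, 10, 20, 40, 80]            # % coherent dots
pc = [52, 60, 75, 92, 99]            # % correct responses
print(coherence_threshold(coh, pc))  # prints 20.0
```

Comparing thresholds obtained this way across the two age groups and the three flow components corresponds to the group contrasts reported in the abstract (a higher threshold means worse sensitivity).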
Affiliation(s)
- Jade Guénot
- Brain and Cognition Research Center, Université Toulouse III - Paul Sabatier, Toulouse, France.
- Centre National de la Recherche Scientifique, CNRS UMR5549, Toulouse, France.
- Yves Trotter
- Brain and Cognition Research Center, Université Toulouse III - Paul Sabatier, Toulouse, France
- Centre National de la Recherche Scientifique, CNRS UMR5549, Toulouse, France
- Angélique Delaval
- Brain and Cognition Research Center, Université Toulouse III - Paul Sabatier, Toulouse, France
- Centre National de la Recherche Scientifique, CNRS UMR5549, Toulouse, France
- Robin Baurès
- Brain and Cognition Research Center, Université Toulouse III - Paul Sabatier, Toulouse, France
- Centre National de la Recherche Scientifique, CNRS UMR5549, Toulouse, France
- Vincent Soler
- Brain and Cognition Research Center, Université Toulouse III - Paul Sabatier, Toulouse, France
- Centre National de la Recherche Scientifique, CNRS UMR5549, Toulouse, France
- Hôpital Purpan, Unité de Rétine - CHU Toulouse, Toulouse, France
- Benoit R Cottereau
- Brain and Cognition Research Center, Université Toulouse III - Paul Sabatier, Toulouse, France.
- Centre National de la Recherche Scientifique, CNRS UMR5549, Toulouse, France.
14
|
Liu B, Shan J, Gu Y. Temporal and spatial properties of vestibular signals for perception of self-motion. Front Neurol 2023; 14:1266513. [PMID: 37780704 PMCID: PMC10534010 DOI: 10.3389/fneur.2023.1266513] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2023] [Accepted: 08/29/2023] [Indexed: 10/03/2023] Open
Abstract
It is well recognized that the vestibular system is involved in numerous important cognitive functions, including self-motion perception, spatial orientation, locomotion, and vector-based navigation, in addition to basic reflexes, such as oculomotor or body postural control. Consistent with this rationale, vestibular signals exist broadly in the brain, including several regions of the cerebral cortex, potentially allowing tight coordination with other sensory systems to improve the accuracy and precision of perception or action during self-motion. Recent neurophysiological studies in animal models based on single-cell resolution indicate that vestibular signals exhibit complex spatiotemporal dynamics, producing challenges in identifying their exact functions and how they are integrated with other modality signals. For example, vestibular and optic flow could provide congruent and incongruent signals regarding spatial tuning functions, reference frames, and temporal dynamics. Comprehensive studies, including behavioral tasks, neural recording across sensory and sensory-motor association areas, and causal link manipulations, have provided some insights into the neural mechanisms underlying multisensory self-motion perception.
Affiliation(s)
- Bingyu Liu
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Jiayu Shan
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
- Yong Gu
- Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, International Center for Primate Brain Research, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
15
Kumano H, Uka T. Representation of Motion Direction in Visual Area MT Accounts for High Sensitivity to Centripetal Motion, Aligning with Efficient Coding of Retinal Motion Statistics. J Neurosci 2023; 43:5893-5904. [PMID: 37495384 PMCID: PMC10436761 DOI: 10.1523/jneurosci.0451-23.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2023] [Revised: 07/19/2023] [Accepted: 07/20/2023] [Indexed: 07/28/2023] Open
Abstract
The overrepresentation of centrifugal motion in the middle temporal visual area (area MT) has long been thought to provide an efficient coding strategy for optic flow processing. However, this overrepresentation compromises the detection of approaching objects, which is essential for survival. In the present study, we revisited this long-held notion by reanalyzing motion selectivity in area MT of three macaque monkeys (two males, one female) using random-dot stimuli instead of spot stimuli. We found no differences in the number of neurons tuned to centrifugal versus centripetal motion; however, centrifugally tuned neurons showed stronger tuning than centripetally tuned neurons. This was attributed to the heightened suppression of responses in centrifugal neurons to centripetal motion compared with that of centripetal neurons to centrifugal motion. Our modeling implies that this intensified suppression accounts for superior detection performance for weak centripetal motion stimuli. Moreover, through Fisher information analysis, we establish that the population sensitivity to motion direction in peripheral vision corresponds well with retinal motion statistics during forward locomotion. While these results challenge established concepts, considering the interplay of logarithmic Gaussian receptive fields and spot stimuli can shed light on the previously documented overrepresentation of centrifugal motion. Significantly, our findings reconcile a previously found discrepancy between MT activity and human behavior, highlighting the proficiency of peripheral MT neurons in encoding motion direction efficiently.
SIGNIFICANCE STATEMENT: The efficient coding hypothesis states that sensory neurons are tuned to specific, frequently experienced stimuli. Whereas previous work has found that neurons in the middle temporal (MT) area favor centrifugal motion, which results from forward locomotion, we show here that there is no such bias. Moreover, we found that the response of centrifugal neurons to centripetal motion was more suppressed than that of centripetal neurons to centrifugal motion. Combined with modeling, this provides a solution to a previously known discrepancy between the reported centrifugal bias in MT and the better detection of centripetal motion by human observers. Additionally, we show that population sensitivity in peripheral MT neurons conforms to an efficient code of retinal motion statistics during forward locomotion.
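The Fisher information analysis mentioned above quantifies how precisely a neural population encodes motion direction. As a generic sketch (not the authors' model: the von Mises tuning curves, Poisson noise assumption, and all parameter values here are assumptions for illustration), population Fisher information can be computed as the sum of squared tuning-curve slopes divided by mean firing rates:

```python
import numpy as np

def fisher_information(theta, preferred, r_max=20.0, kappa=2.0):
    """Population Fisher information about motion direction theta (radians),
    assuming von Mises tuning and independent Poisson spiking:
        f_i(t) = r_max * exp(kappa * (cos(t - pref_i) - 1))
        I(t)   = sum_i f_i'(t)**2 / f_i(t)
    """
    f = r_max * np.exp(kappa * (np.cos(theta - preferred) - 1.0))
    df = -r_max * kappa * np.sin(theta - preferred) \
         * np.exp(kappa * (np.cos(theta - preferred) - 1.0))
    return float(np.sum(df ** 2 / f))

# A uniform population of 64 preferred directions yields (nearly) flat
# information across directions; a biased population would not.
prefs = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
I0 = fisher_information(0.0, prefs)
I1 = fisher_information(np.pi / 3.0, prefs)
```

Comparing such an information profile against the distribution of retinal motion directions during forward locomotion is the kind of efficiency test the abstract describes.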
Affiliation(s)
- Hironori Kumano
- Department of Integrative Physiology, Graduate School of Medicine, University of Yamanashi, Chuo-shi, Yamanashi 409-3898, Japan
- Takanori Uka
- Department of Integrative Physiology, Graduate School of Medicine, University of Yamanashi, Chuo-shi, Yamanashi 409-3898, Japan
16
Rodriguez-Lopez V, Dorronsoro C. Case report of the evidence of a spontaneous Reverse Pulfrich effect in monovision after cataract surgery. BMC Ophthalmol 2023; 23:289. [PMID: 37353733 PMCID: PMC10290313 DOI: 10.1186/s12886-023-03041-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2022] [Accepted: 06/13/2023] [Indexed: 06/25/2023] Open
Abstract
BACKGROUND Cataracts affect the optics of the eye in terms of absorption, blur, and scattering. When cataracts are unilateral, they cause differences between the eyes that can produce visual discomfort and harm binocular vision. These interocular differences can also induce differences in the processing speed of the two eyes, which may cause a spontaneous Pulfrich effect, a visual illusion that provokes important depth misperceptions. Interocular differences in light level, like those present in unilateral cataracts, can cause the Classic Pulfrich effect, and interocular differences in blur, like those present in monovision, a common correction for presbyopia, can cause the Reverse Pulfrich effect. The visual system may or may not be able to adapt to the new optical condition, depending on the degree of the cataract and the magnitude of the monovision correction. CASE PRESENTATION Here, we report a unique case of a 45-year-old patient who underwent unilateral cataract surgery resulting in a monovision correction of 2.5 diopters (D): the left eye was emmetropic after surgery, corrected with a monofocal intraocular lens, and the right eye was myopic with a spherical equivalent of -2.50 D. This patient suffered severe symptoms in binocular vision, which can be explained by a spontaneous Pulfrich effect (a measured interocular delay of 4.82 ms, which could be eliminated with a 0.19 optical density filter). After the monovision was removed by clear lens extraction in the second eye, the symptoms disappeared. We demonstrate that, at least in this patient, the Classic and Reverse Pulfrich effects coexist after unilateral cataract surgery and that the visual system can readapt when the interocular differences are reversed. In addition, we report that the adaptation/readaptation to the Reverse Pulfrich effect happens on a timeframe of weeks, as opposed to the Classic Pulfrich effect, which is known to have timeframes of days. Additionally, we used the illusion measured in the laboratory to quantify the relevance of the spontaneous Pulfrich effect in different visual scenarios and tasks, using geometrical models and optic flow algorithms. CONCLUSIONS Measuring the different versions of the Pulfrich effect might help to understand the visual discomfort reported by many patients after cataract surgery or with monovision and could guide compensation or intervention strategies.
Affiliation(s)
- Victor Rodriguez-Lopez
- Institute of Optics, Spanish National Research Council (IO-CSIC), Serrano 121, Madrid, Spain.
- Carlos Dorronsoro
- Institute of Optics, Spanish National Research Council (IO-CSIC), Serrano 121, Madrid, Spain
- 2EyesVision SL, Madrid, Spain
17
Rosenblum L, Kreß A, Arikan BE, Straube B, Bremmer F. Neural correlates of visual and tactile path integration and their task related modulation. Sci Rep 2023; 13:9913. [PMID: 37337037 DOI: 10.1038/s41598-023-36797-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2022] [Accepted: 06/09/2023] [Indexed: 06/21/2023] Open
Abstract
Self-motion induces sensory signals that allow observers to determine travel distance (path integration). For veridical path integration, one must distinguish self-generated from externally induced sensory signals. Predictive coding has been suggested to attenuate self-induced sensory responses, while task relevance can reverse this attenuating effect of prediction. But how is self-motion processing affected by prediction and task demands, and do these effects generalize across the senses? In this fMRI study, we investigated visual and tactile self-motion processing and its modulation by task demands. Visual stimuli simulated forward self-motion across a ground plane. Tactile self-motion stimuli were delivered by airflow across the subjects' forehead. In one task, subjects replicated a previously observed distance (Reproduction/Active; high behavioral demand) of passive self-displacement (Reproduction/Passive). In a second task, subjects travelled a self-chosen distance (Self/Active; low behavioral demand), which was recorded and played back to them (Self/Passive). For both tasks and sensory modalities, Active as compared to Passive trials showed enhancement in early visual areas and suppression in higher-order areas of the inferior parietal lobule (IPL). Contrasting high- and low-demand active trials yielded supramodal enhancement in the anterior insula. The suppression in the IPL suggests that this area acts as a comparator of sensory self-motion signals and predictions thereof.
Affiliation(s)
- Lisa Rosenblum
- Department Neurophysics, Philipps-Universität Marburg, Karl-Von-Frisch-Straße 8a, 35043, Marburg, Germany.
- Center for Mind, Brain and Behavior, Philipps-Universität Marburg and Justus-Liebig-Universität Giessen, Giessen, Germany.
- Alexander Kreß
- Department Neurophysics, Philipps-Universität Marburg, Karl-Von-Frisch-Straße 8a, 35043, Marburg, Germany
- Center for Mind, Brain and Behavior, Philipps-Universität Marburg and Justus-Liebig-Universität Giessen, Giessen, Germany
- B Ezgi Arikan
- Center for Mind, Brain and Behavior, Philipps-Universität Marburg and Justus-Liebig-Universität Giessen, Giessen, Germany
- Department of Psychology, Justus-Liebig-Universität Giessen, Giessen, Germany
- Benjamin Straube
- Center for Mind, Brain and Behavior, Philipps-Universität Marburg and Justus-Liebig-Universität Giessen, Giessen, Germany
- Translational Neuroimaging Marburg, Department of Psychiatry and Psychotherapy, Philipps-Universität Marburg, Marburg, Germany
- Frank Bremmer
- Department Neurophysics, Philipps-Universität Marburg, Karl-Von-Frisch-Straße 8a, 35043, Marburg, Germany
- Center for Mind, Brain and Behavior, Philipps-Universität Marburg and Justus-Liebig-Universität Giessen, Giessen, Germany
18
Ciceri T, Malerba G, Gatti A, Diella E, Peruzzo D, Biffi E, Casartelli L. Context expectation influences the gait pattern biomechanics. Sci Rep 2023; 13:5644. [PMID: 37024572 PMCID: PMC10079826 DOI: 10.1038/s41598-023-32665-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 03/30/2023] [Indexed: 04/08/2023] Open
Abstract
Beyond classical aspects related to locomotion (biomechanics), it has been hypothesized that the walking pattern is influenced by a combination of distinct computations, including online sensory/perceptual sampling and the processing of expectations (neuromechanics). Here, we aimed to explore the potential impact of contrasting scenarios ("risky and potentially dangerous" scenario; "safe and comfortable" scenario) on the walking pattern of a group of healthy young adults. First, and consistently with previous literature, we confirmed that the scenario influences the gait pattern when it is recalled concurrently with participants' walking activity (motor interference). More intriguingly, our main result showed that participants' gait pattern is also influenced by the contextual scenario when it is evoked only before the start of the walking activity (motor expectation). This condition was designed to test the impact of expectations (risky scenario vs. safe scenario) on the gait pattern, and the stimulation that preceded the walking activity served as a prior. Notably, we combined statistical and machine learning (Support-Vector Machine classifier) approaches to stratify distinct levels of analysis that explored the multi-faceted architecture of walking. In a nutshell, our combined statistical and machine learning analyses converge in suggesting that walking before steps is not just a paradox.
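The study's machine-learning component asks whether the contextual scenario can be decoded from gait features. The paper used a Support-Vector Machine; since its features and pipeline are not given here, the toy sketch below swaps in a simpler nearest-centroid rule on invented gait features (stride length, cadence, step width are hypothetical placeholders) purely to illustrate the decoding idea:

```python
import numpy as np

# Toy gait-feature matrix: rows = strides, columns = hypothetical features
# (stride length in m, cadence in steps/min, step width in m).
# Labels: 0 = "safe" scenario, 1 = "risky" scenario. Values are invented.
rng = np.random.default_rng(0)
safe = rng.normal(loc=[1.40, 110.0, 0.10], scale=0.05, size=(30, 3))
risky = rng.normal(loc=[1.20, 95.0, 0.14], scale=0.05, size=(30, 3))
X = np.vstack([safe, risky])
y = np.array([0] * 30 + [1] * 30)

# Nearest-centroid classifier: a linear-boundary stand-in for the SVM
# used in the study (an SVM would additionally maximize the margin).
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
pred = np.argmin(dists, axis=1)
accuracy = (pred == y).mean()
```

In practice one would use a proper train/test split (or cross-validation) and standardized features, since cadence here dominates the Euclidean distance.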
Affiliation(s)
- Tommaso Ciceri
- Department of Information Engineering, University of Padova, Padua, PD, Italy
- Neuroimaging Lab, Scientific Institute IRCCS E. Medea, Bosisio Parini, LC, Italy
- Giorgia Malerba
- Bioengineering Lab, Scientific Institute IRCCS E. Medea, Bosisio Parini, LC, Italy
- Alice Gatti
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, MI, Italy
- Eleonora Diella
- Bioengineering Lab, Scientific Institute IRCCS E. Medea, Bosisio Parini, LC, Italy
- Denis Peruzzo
- Neuroimaging Lab, Scientific Institute IRCCS E. Medea, Bosisio Parini, LC, Italy
- Emilia Biffi
- Bioengineering Lab, Scientific Institute IRCCS E. Medea, Bosisio Parini, LC, Italy
- Luca Casartelli
- Theoretical and Cognitive Neuroscience Unit, Scientific Institute IRCCS E. Medea, Bosisio Parini, LC, Italy
19
Does optic flow provide information about actions? Atten Percept Psychophys 2023; 85:1287-1303. [PMID: 36918506 PMCID: PMC10013980 DOI: 10.3758/s13414-023-02674-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/09/2023] [Indexed: 03/16/2023]
Abstract
Optic flow, the pattern of light generated in the visual field by motion of objects and the observer's body, serves as information that underwrites the perception of events, actions, and affordances. This visual pattern informs observers about their own actions in relation to their surroundings, as well as about the actions of others. This study explored the limits of detecting the actions of others, as well as the role of optic flow. First-person videos were created using camera recordings of an actor's perspective as they performed various movements (jumping jacks, jumping, squatting, sitting, etc.). In three experiments, participants attempted to detect the action from first-person video footage using open-ended responses (Experiment 1), forced-choice responses (Experiment 2), and a match-to-sample paradigm (Experiment 3). Some actions proved more difficult to detect than others. When the task was challenging (Experiment 1), athletes were more accurate, but this was not the case in Experiments 2 and 3. All actions were identified above chance level across viewpoints, suggesting that invariant information was detected and used to perform the task.
20
Ali M, Decker E, Layton OW. Temporal stability of human heading perception. J Vis 2023; 23:8. [PMID: 36786748 PMCID: PMC9932552 DOI: 10.1167/jov.23.2.8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/15/2023] Open
Abstract
Humans are capable of accurately judging their heading from optic flow during straight forward self-motion. Despite the global coherence of the optic flow field, however, visual clutter and other naturalistic conditions create constant flux on the eye. This presents a problem that must be overcome to accurately perceive heading from optic flow: the visual system must maintain sensitivity to optic flow variations that correspond with actual changes in self-motion and disregard those that do not. One solution could involve integrating optic flow over time to stabilize heading signals while suppressing transient fluctuations. Stability, however, may come at the cost of sluggishness. Here, we investigate the stability of human heading perception when subjects judge their heading after the simulated direction of self-motion changes. We found that the initial heading exerted an attractive influence on judgments of the final heading. Consistent with an evolving heading representation, bias toward the initial heading increased with the size of the heading change and as the viewing duration of the optic flow consistent with the final heading decreased. Introducing periods of sensory dropout (blackouts) later in the trial increased bias, whereas an earlier one did not. Simulations of a neural model, the Competitive Dynamics Model, demonstrate that a mechanism producing an evolving heading signal through recurrent competitive interactions largely captures the human data. Our findings characterize how the visual system balances stability in heading perception with sensitivity to change, and they support the hypothesis that heading perception evolves over time.
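The attractive bias the abstract describes falls out of any heading estimate that integrates evidence over time. The sketch below is a drastically simplified leaky integrator, not the Competitive Dynamics Model itself (which uses recurrent competition across a heading map); the time step and time constant are invented. It shows how a late change in the simulated heading leaves a residual bias toward the initial heading:

```python
import numpy as np

def evolve_heading(headings, dt=0.1, tau=0.5):
    """Leaky temporal integration of instantaneous heading evidence (deg).

    The internal estimate relaxes toward the current optic-flow heading
    with time constant tau, so a late heading change leaves the estimate
    biased toward the initial heading -- the qualitative effect reported
    in the study.
    """
    estimate = float(headings[0])
    for h in headings[1:]:
        estimate += (dt / tau) * (h - estimate)
    return estimate

# 1.0 s of self-motion at 0 deg, then 0.5 s after a switch to 20 deg:
trial = np.array([0.0] * 10 + [20.0] * 5)
est = evolve_heading(trial)  # lands between 0 and 20 deg, pulled toward 0
```

Shortening the post-change viewing duration (fewer 20-degree samples) increases the bias, mirroring the psychophysical result.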
Affiliation(s)
- Mufaddal Ali
- Department of Computer Science, Colby College, Waterville, ME, USA
- Eli Decker
- Department of Computer Science, Colby College, Waterville, ME, USA
- Oliver W. Layton
- Department of Computer Science, Colby College, Waterville, ME, USA. https://sites.google.com/colby.edu/owlab
21
Horrocks EAB, Mareschal I, Saleem AB. Walking humans and running mice: perception and neural encoding of optic flow during self-motion. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210450. [PMID: 36511417 PMCID: PMC9745880 DOI: 10.1098/rstb.2021.0450] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2022] [Accepted: 08/30/2022] [Indexed: 12/15/2022] Open
Abstract
Locomotion produces full-field optic flow that often dominates the visual motion inputs to an observer. The perception of optic flow is in turn important for animals to guide their heading and interact with moving objects. Understanding how locomotion influences optic flow processing and perception is therefore essential to understand how animals successfully interact with their environment. Here, we review research investigating how perception and neural encoding of optic flow are altered during self-motion, focusing on locomotion. Self-motion has been found to influence estimation and sensitivity for optic flow speed and direction. Nonvisual self-motion signals also increase compensation for self-driven optic flow when parsing the visual motion of moving objects. The integration of visual and nonvisual self-motion signals largely follows principles of Bayesian inference and can improve the precision and accuracy of self-motion perception. The calibration of visual and nonvisual self-motion signals is dynamic, reflecting the changing visuomotor contingencies across different environmental contexts. Throughout this review, we consider experimental research using humans, non-human primates and mice. We highlight experimental challenges and opportunities afforded by each of these species and draw parallels between experimental findings. These findings reveal a profound influence of locomotion on optic flow processing and perception across species. This article is part of a discussion meeting issue 'New approaches to 3D vision'.
Affiliation(s)
- Edward A. B. Horrocks
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Isabelle Mareschal
- School of Biological and Behavioural Sciences, Queen Mary, University of London, London E1 4NS, UK
- Aman B. Saleem
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, London WC1H 0AP, UK
22
Bonnen K. Motion vision: Fish swimming to see. Curr Biol 2023; 33:R30-R32. [PMID: 36626861 DOI: 10.1016/j.cub.2022.11.027] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
Abstract
Self-motion generates optic flow, a visual motion signal used by many organisms for navigation and self-stabilization. A new study quantitatively demonstrates how environmental structure and current behavioral state explain the spatial biases observed in zebrafish optomotor responses.
Affiliation(s)
- Kathryn Bonnen
- School of Optometry, Indiana University, 800 Atwater Avenue, Bloomington, IN 47405, USA.
23
Alexander E, Cai LT, Fuchs S, Hladnik TC, Zhang Y, Subramanian V, Guilbeault NC, Vijayakumar C, Arunachalam M, Juntti SA, Thiele TR, Arrenberg AB, Cooper EA. Optic flow in the natural habitats of zebrafish supports spatial biases in visual self-motion estimation. Curr Biol 2022; 32:5008-5021.e8. [PMID: 36327979 PMCID: PMC9729457 DOI: 10.1016/j.cub.2022.10.009] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Revised: 08/15/2022] [Accepted: 10/05/2022] [Indexed: 12/12/2022]
Abstract
Animals benefit from knowing if and how they are moving. Across the animal kingdom, sensory information in the form of optic flow over the visual field is used to estimate self-motion. However, different species exhibit strong spatial biases in how they use optic flow. Here, we show computationally that noisy natural environments favor visual systems that extract spatially biased samples of optic flow when estimating self-motion. The performance associated with these biases, however, depends on interactions between the environment and the animal's brain and behavior. Using the larval zebrafish as a model, we recorded natural optic flow associated with swimming trajectories in the animal's habitat with an omnidirectional camera mounted on a mechanical arm. An analysis of these flow fields suggests that lateral regions of the lower visual field are most informative about swimming speed. This pattern is consistent with the recent findings that zebrafish optomotor responses are preferentially driven by optic flow in the lateral lower visual field, which we extend with behavioral results from a high-resolution spherical arena. Spatial biases in optic-flow sampling are likely pervasive because they are an effective strategy for determining self-motion in noisy natural environments.
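The finding that the lateral lower visual field is most informative about swimming speed has a simple geometric intuition: for translation above a substrate, nearer surface points produce larger image motion per unit speed, and points lower in the visual field are nearer. The sketch below is purely illustrative geometry under invented parameters (eye height, speed, viewing angles), not the paper's habitat recordings or analysis:

```python
import numpy as np

# For forward translation at speed v at height h above a ground plane,
# a surface point seen at angle beta below the horizon lies at distance
# d = h / tan(beta), so its angular image motion scales as
# v / d = v * tan(beta) / h. Lower visual-field locations (larger beta)
# therefore carry larger, more speed-informative flow.
h = 0.05                                  # eye height above substrate (m)
v = 0.02                                  # forward speed (m/s)
beta = np.radians([5, 15, 30, 45, 60])    # elevations below the horizon
flow = v * np.tan(beta) / h               # flow magnitude proxy (rad/s)

# flow increases monotonically toward the lower visual field.
```

Noise in natural scenes (moving objects, occlusions) complicates this picture, which is why the paper evaluates biased sampling strategies against recorded natural flow rather than idealized geometry.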
Affiliation(s)
- Emma Alexander
- Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley, Berkeley, CA 94720, USA
- Present address: Department of Computer Science, Northwestern University, Evanston, IL 60208, USA
- Lead contact
- Lanya T. Cai
- Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley, Berkeley, CA 94720, USA
- Present address: Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA 94158, USA
- Sabrina Fuchs
- Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tubingen, 72076 Tubingen, Germany
- Tim C. Hladnik
- Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tubingen, 72076 Tubingen, Germany
- Graduate Training Centre for Neuroscience, University of Tubingen, 72074 Tubingen, Germany
- Yue Zhang
- Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tubingen, 72076 Tubingen, Germany
- Graduate Training Centre for Neuroscience, University of Tubingen, 72074 Tubingen, Germany
- Present address: Department of Cellular and Systems Neurobiology, Max Planck Institute for Biological Intelligence (in foundation), 82152 Martinsried, Germany
- Venkatesh Subramanian
- Department of Biological Sciences, University of Toronto Scarborough, Toronto M1C 1A4, Canada
- Nicholas C. Guilbeault
- Department of Biological Sciences, University of Toronto Scarborough, Toronto M1C 1A4, Canada
- Department of Cell and Systems Biology, University of Toronto, Toronto M5S 3G5, Canada
- Chinnian Vijayakumar
- Department of Zoology, St. Andrew's College, Gorakhpur, Uttar Pradesh 273001, India
- Muthukumarasamy Arunachalam
- Department of Zoology, School of Biological Sciences, Central University of Kerala, Kerala 671316, India
- Present address: Centre for Inland Fishes and Conservation, St. Andrew's College, Gorakhpur, Uttar Pradesh 273001, India
- Scott A. Juntti
- Department of Biology, University of Maryland, College Park, MD 20742, USA
- Tod R. Thiele
- Department of Biological Sciences, University of Toronto Scarborough, Toronto M1C 1A4, Canada
- Department of Cell and Systems Biology, University of Toronto, Toronto M5S 3G5, Canada
- Aristides B. Arrenberg
- Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tubingen, 72076 Tubingen, Germany
- Emily A. Cooper
- Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley, Berkeley, CA 94720, USA
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
Collapse
|
24
|
Alexander E, Cai LT, Fuchs S, Hladnik TC, Zhang Y, Subramanian V, Guilbeault NC, Vijayakumar C, Arunachalam M, Juntti SA, Thiele TR, Arrenberg AB, Cooper EA. Optic flow in the natural habitats of zebrafish supports spatial biases in visual self-motion estimation. Curr Biol 2022. [PMID: 36327979 DOI: 10.5281/zenodo.6604546] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Animals benefit from knowing if and how they are moving. Across the animal kingdom, sensory information in the form of optic flow over the visual field is used to estimate self-motion. However, different species exhibit strong spatial biases in how they use optic flow. Here, we show computationally that noisy natural environments favor visual systems that extract spatially biased samples of optic flow when estimating self-motion. The performance associated with these biases, however, depends on interactions between the environment and the animal's brain and behavior. Using the larval zebrafish as a model, we recorded natural optic flow associated with swimming trajectories in the animal's habitat with an omnidirectional camera mounted on a mechanical arm. An analysis of these flow fields suggests that lateral regions of the lower visual field are most informative about swimming speed. This pattern is consistent with the recent findings that zebrafish optomotor responses are preferentially driven by optic flow in the lateral lower visual field, which we extend with behavioral results from a high-resolution spherical arena. Spatial biases in optic-flow sampling are likely pervasive because they are an effective strategy for determining self-motion in noisy natural environments.
Collapse
Affiliation(s)
- Emma Alexander
- Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley, Berkeley, CA 94720, USA
- Lanya T Cai: Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley, Berkeley, CA 94720, USA
- Sabrina Fuchs: Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tübingen, 72076 Tübingen, Germany
- Tim C Hladnik: Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre for Neuroscience, University of Tübingen, 72074 Tübingen, Germany
- Yue Zhang: Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre for Neuroscience, University of Tübingen, 72074 Tübingen, Germany
- Venkatesh Subramanian: Department of Biological Sciences, University of Toronto Scarborough, Toronto M1C 1A4, Canada
- Nicholas C Guilbeault: Department of Biological Sciences, University of Toronto Scarborough, Toronto M1C 1A4, Canada; Department of Cell and Systems Biology, University of Toronto, Toronto M5S 3G5, Canada
- Chinnian Vijayakumar: Department of Zoology, St. Andrew's College, Gorakhpur, Uttar Pradesh 273001, India
- Scott A Juntti: Department of Biology, University of Maryland, College Park, MD 20742, USA
- Tod R Thiele: Department of Biological Sciences, University of Toronto Scarborough, Toronto M1C 1A4, Canada; Department of Cell and Systems Biology, University of Toronto, Toronto M5S 3G5, Canada
- Aristides B Arrenberg: Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tübingen, 72076 Tübingen, Germany
- Emily A Cooper: Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
25
Alexander E, Cai LT, Fuchs S, Hladnik TC, Zhang Y, Subramanian V, Guilbeault NC, Vijayakumar C, Arunachalam M, Juntti SA, Thiele TR, Arrenberg AB, Cooper EA. Optic flow in the natural habitats of zebrafish supports spatial biases in visual self-motion estimation. Curr Biol 2022. [PMID: 36327979]
Abstract
Animals benefit from knowing if and how they are moving. Across the animal kingdom, sensory information in the form of optic flow over the visual field is used to estimate self-motion. However, different species exhibit strong spatial biases in how they use optic flow. Here, we show computationally that noisy natural environments favor visual systems that extract spatially biased samples of optic flow when estimating self-motion. The performance associated with these biases, however, depends on interactions between the environment and the animal's brain and behavior. Using the larval zebrafish as a model, we recorded natural optic flow associated with swimming trajectories in the animal's habitat with an omnidirectional camera mounted on a mechanical arm. An analysis of these flow fields suggests that lateral regions of the lower visual field are most informative about swimming speed. This pattern is consistent with recent findings that zebrafish optomotor responses are preferentially driven by optic flow in the lateral lower visual field, which we extend with behavioral results from a high-resolution spherical arena. Spatial biases in optic-flow sampling are likely pervasive because they are an effective strategy for determining self-motion in noisy natural environments.
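The geometric intuition behind the lower-field bias can be sketched with the standard translational optic-flow relation: a point at line-of-sight distance d from an observer translating at speed v produces angular flow of roughly (v/d)·sin θ, so for a swimmer at height h above a flat substrate, a ground point seen at elevation angle θ below the horizon (distance h/sin θ) yields flow v·sin²θ/h. The snippet below is only a back-of-the-envelope illustration of that geometry, not the paper's model; the speed, height, and angles are made-up values.

```python
import math

def ground_flow_magnitude(v, h, theta_deg):
    """Angular speed (rad/s) of translational optic flow for a point on a
    flat substrate seen at elevation angle theta below the horizon, for an
    observer translating forward at speed v (m/s) at height h (m).

    Line-of-sight distance is d = h / sin(theta), so the flow magnitude
    (v / d) * sin(theta) reduces to v * sin(theta)**2 / h.
    """
    theta = math.radians(theta_deg)
    return v * math.sin(theta) ** 2 / h

# Illustrative values: a fish moving at 5 cm/s, 2 cm above the substrate.
# Flow grows toward the lower visual field (larger theta): nearby ground
# sweeps past quickly, while points near the horizon carry almost no flow
# and hence little information about swimming speed.
flows = [ground_flow_magnitude(v=0.05, h=0.02, theta_deg=t)
         for t in (10, 30, 60, 90)]
```

Under this toy geometry the flow magnitude increases monotonically from the horizon toward straight down, consistent with the lower visual field carrying the strongest speed signal.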
Affiliation(s)
- Emma Alexander: Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley, Berkeley, CA 94720, USA
- Lanya T Cai: Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley, Berkeley, CA 94720, USA
- Sabrina Fuchs: Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tübingen, 72076 Tübingen, Germany
- Tim C Hladnik: Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre for Neuroscience, University of Tübingen, 72074 Tübingen, Germany
- Yue Zhang: Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tübingen, 72076 Tübingen, Germany; Graduate Training Centre for Neuroscience, University of Tübingen, 72074 Tübingen, Germany
- Venkatesh Subramanian: Department of Biological Sciences, University of Toronto Scarborough, Toronto M1C 1A4, Canada
- Nicholas C Guilbeault: Department of Biological Sciences, University of Toronto Scarborough, Toronto M1C 1A4, Canada; Department of Cell and Systems Biology, University of Toronto, Toronto M5S 3G5, Canada
- Chinnian Vijayakumar: Department of Zoology, St. Andrew's College, Gorakhpur, Uttar Pradesh 273001, India
- Scott A Juntti: Department of Biology, University of Maryland, College Park, MD 20742, USA
- Tod R Thiele: Department of Biological Sciences, University of Toronto Scarborough, Toronto M1C 1A4, Canada; Department of Cell and Systems Biology, University of Toronto, Toronto M5S 3G5, Canada
- Aristides B Arrenberg: Werner Reichardt Centre for Integrative Neuroscience, Institute of Neurobiology, University of Tübingen, 72076 Tübingen, Germany
- Emily A Cooper: Herbert Wertheim School of Optometry & Vision Science, University of California, Berkeley, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, CA 94720, USA
26
Guénot J, Trotter Y, Fricker P, Cherubini M, Soler V, Cottereau BR. Optic Flow Processing in Patients With Macular Degeneration. Invest Ophthalmol Vis Sci 2022; 63:21. [DOI: 10.1167/iovs.63.12.21]
Affiliation(s)
- Jade Guénot: Centre de Recherche Cerveau et Cognition, Université Toulouse III–Paul Sabatier, Toulouse, France; Centre National de la Recherche Scientifique (CNRS UMR5549), Toulouse, France
- Yves Trotter: Centre de Recherche Cerveau et Cognition, Université Toulouse III–Paul Sabatier, Toulouse, France; Centre National de la Recherche Scientifique (CNRS UMR5549), Toulouse, France
- Paul Fricker: Centre de Recherche Cerveau et Cognition, Université Toulouse III–Paul Sabatier, Toulouse, France; Centre National de la Recherche Scientifique (CNRS UMR5549), Toulouse, France
- Marta Cherubini: Centre de Recherche Cerveau et Cognition, Université Toulouse III–Paul Sabatier, Toulouse, France; Centre National de la Recherche Scientifique (CNRS UMR5549), Toulouse, France
- Vincent Soler: Centre de Recherche Cerveau et Cognition, Université Toulouse III–Paul Sabatier, Toulouse, France; Centre National de la Recherche Scientifique (CNRS UMR5549), Toulouse, France; Unité de rétine, consultation d'ophtalmologie, hôpital Pierre-Paul-Riquet, CHU Toulouse, Toulouse, France
- Benoit R. Cottereau: Centre de Recherche Cerveau et Cognition, Université Toulouse III–Paul Sabatier, Toulouse, France; Centre National de la Recherche Scientifique (CNRS UMR5549), Toulouse, France
27
OpenBloodFlow: A User-Friendly OpenCV-Based Software Package for Blood Flow Velocity and Blood Cell Count Measurement for Fish Embryos. Biology 2022; 11:1471. [PMID: 36290375] [PMCID: PMC9598615] [DOI: 10.3390/biology11101471]
Abstract
The transparency of fish embryos makes them an excellent model for observing cardiovascular function in vivo. Previously, vascular function assessment relied on measuring blood-flow velocity with third-party software. In this study, we report OpenBloodFlow, a simple, free software package that requires no programming skills and can, for the first time, both measure blood flow velocity and count blood cells in fish embryos. First, videos captured with a high-speed CCD camera were processed for image stabilization and contrast enhancement. Next, the optical flow of moving objects was extracted from the non-moving background in a frame-by-frame manner. Finally, blood flow velocity was calculated with the Gunnar Farnebäck dense optical flow algorithm in Python. Validation with zebrafish and medaka embryos showed that OpenBloodFlow results were consistent with our previously published ImageJ-based method. Both OpenBloodFlow and ImageJ detected consistent blood flow alterations in the dorsal aorta of zebrafish embryos exposed to either phenylhydrazine or ractopamine. In addition, we validated that OpenBloodFlow performs precise blood cell counting. Overall, OpenBloodFlow provides easy, fully automatic blood flow velocity calculation and blood cell counting that is useful for toxicology and pharmacology studies in fish.
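The velocity-calculation step described above is straightforward to illustrate: a dense optical-flow algorithm (in OpenBloodFlow, the Farnebäck method via OpenCV) yields per-frame pixel displacements, and a spatial calibration plus the frame rate convert these into physical velocity. The sketch below shows only that final conversion; it is not OpenBloodFlow's actual code, and the calibration and frame-rate numbers are hypothetical.

```python
def flow_to_velocity(displacements_px, um_per_px, fps):
    """Convert per-frame optical-flow displacement magnitudes (pixels)
    into instantaneous flow velocities in micrometers per second."""
    # Each frame-to-frame displacement occurs over 1/fps seconds.
    return [d * um_per_px * fps for d in displacements_px]

# Hypothetical calibration: 0.5 um per pixel, recorded at 200 frames/s.
# A mean displacement of 2 px/frame then corresponds to 200 um/s.
velocities = flow_to_velocity([2.0, 2.4, 1.8], um_per_px=0.5, fps=200)
```

In a full pipeline the displacement list would come from averaging the flow-field magnitude inside a vessel region of interest on each frame pair.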
28
Causal contribution of optic flow signal in Macaque extrastriate visual cortex for roll perception. Nat Commun 2022; 13:5479. [PMID: 36123363] [PMCID: PMC9485245] [DOI: 10.1038/s41467-022-33245-5]
Abstract
Optic flow is a powerful cue for inferring self-motion, which is critical for postural control, spatial orientation, locomotion and navigation. In primates, neurons in extrastriate visual cortex (MSTd) are predominantly modulated by high-order optic flow patterns (e.g., spiral), yet a functional link to direct perception has been lacking. Here, we applied electrical microstimulation to selectively manipulate populations of MSTd neurons while macaques discriminated the direction of rotation around the line of sight (roll) or the direction of linear translation (heading), two tasks that were orthogonal in a 3D spiral coordinate frame, using a four-alternative forced-choice paradigm. Microstimulation frequently biased the animals' roll perception toward the labeled lines encoded by the stimulated neurons, in contexts with either spiral or pure-rotation stimuli. Choice frequency was also altered between roll and translation flow patterns. Our results provide direct causal evidence that roll signals in MSTd, although often mixed with translation signals, can be extracted by downstream areas for the perception of rotation relative to the gravitational vertical.
29
Ngo V, Gorman JC, De la Fuente MF, Souto A, Schiel N, Miller CT. Active vision during prey capture in wild marmoset monkeys. Curr Biol 2022; 32:3423-3428.e3. [PMID: 35750054] [PMCID: PMC10203885] [DOI: 10.1016/j.cub.2022.06.028]
Abstract
A foundational pressure in the evolution of all animals is the ability to travel through the world, inherently coupling the sensory and motor systems. While this relationship has been explored in several species,1-4 it has been largely overlooked in primates, where studies have typically relied on paradigms in which head-restrained subjects view stimuli on screens.5 Natural visual behaviors, by contrast, are typified by locomotion through the environment guided by active sensing as animals explore and interact with the world,4,6 a relationship well illustrated by prey capture.7-12 Here, we characterized prey capture in wild marmoset monkeys as they negotiated their dynamic, arboreal habitat to illustrate the inherent role of vision as an active process in natural nonhuman primate behavior. Not only do marmosets share the core properties of vision that typify the primate Order,13-18 but they are also prolific hunters that prey on a diverse set of prey animals.19-22 Marmosets pursued prey using vision in several different contexts, but executed precise visually guided motor control that predominantly involved grasping with the hands for successful capture of prey. Applying markerless tracking for the first time in wild primates yielded novel findings that precisely quantified how marmosets track insects prior to initiating an attack and the rapid visually guided corrections of the hands during capture. These findings offer the first detailed insight into the active nature of vision in guiding multiple facets of a natural goal-directed behavior in wild primates and can inform future laboratory studies of natural primate visual behaviors and the supporting neural processes.
Affiliation(s)
- Victoria Ngo: Cortical Systems and Behavior Laboratory, University of California, San Diego, La Jolla, CA 92039, USA
- Julia C Gorman: Cortical Systems and Behavior Laboratory, University of California, San Diego, La Jolla, CA 92039, USA; Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA 92039, USA
- María Fernanda De la Fuente: Programa de Pós-graduação em Etnobiologia e Conservação da Natureza, Universidade Estadual da Paraíba, Campina Grande, Paraíba 58429-500, Brazil; Laboratório de Etologia Teórica e Aplicada, Departamento de Biologia, Universidade Federal Rural de Pernambuco, Recife, Pernambuco 52171-900, Brazil
- Antonio Souto: Laboratório de Etologia, Departamento de Zoologia, Universidade Federal de Pernambuco, Recife, Pernambuco 50670-901, Brazil
- Nicola Schiel: Laboratório de Etologia Teórica e Aplicada, Departamento de Biologia, Universidade Federal Rural de Pernambuco, Recife, Pernambuco 52171-900, Brazil
- Cory T Miller: Cortical Systems and Behavior Laboratory, University of California, San Diego, La Jolla, CA 92039, USA; Neurosciences Graduate Program, University of California, San Diego, La Jolla, CA 92039, USA
30
Gelperin A, Ambrosini AE. Quantitative Characterization of Output from the Directionally Selective Visual Interneuron H1 in the Grey Flesh Fly Sarcophaga bullata. J Undergrad Neurosci Educ 2021; 20:A88-A99. [PMID: 35540945] [PMCID: PMC9053427]
Abstract
H1, a very well-studied insect visual interneuron, has a panoramic receptive field and responds to optic flow in a directionally selective manner. The synaptic basis for the directional selectivity of the H1 neuron has been studied using both theoretical and cellular approaches. Extracellular single-unit recordings are readily obtained by beginning students using commercially available adults of the grey flesh fly Sarcophaga bullata. We describe an apparatus that allows students to present a series of moving visual stimuli to the eye of the restrained, minimally dissected adult Sarcophaga while recording both the single-unit responses of the H1 neuron and the position and velocity of the moving stimulus. Students obtain quantitative and reproducible responses from H1, probing the response properties of the neuron by modulating stimulus parameters such as direction and speed of movement, visual contrast, spatial wavelength, and the extent of the visual field occupied. Students learn to perform quantitative analysis of their data and to generate graphical representations of their results characterizing the tuning and receptive field of this neuron. This exercise demonstrates the utility of single-unit recording from an identified interneuron in an awake, restrained insect and promotes interpretation of the results in terms of the visual stimuli normally encountered by freely flying flies in their natural environment.
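The theoretical account of H1's directional selectivity mentioned above is classically the Hassenstein-Reichardt correlator, in which the signal from one photoreceptor is delayed and multiplied with the undelayed signal of its neighbor, and the two mirror-symmetric products are subtracted. The sketch below is a minimal discrete-time illustration of that model, not the fly's actual circuitry or the exercise's analysis code; the stimulus sequences are made up.

```python
def reichardt_output(left, right, delay=1):
    """Time-averaged response of an opponent Hassenstein-Reichardt
    correlator to two input time series (e.g., neighboring photoreceptors).

    Each subunit multiplies one delayed input with the other undelayed
    input; subtracting the mirror-image subunit yields a signed,
    direction-selective output.
    """
    n = len(left) - delay
    responses = [left[t] * right[t + delay] - right[t] * left[t + delay]
                 for t in range(n)]
    return sum(responses) / n

# A bright bar moving left-to-right reaches the left receptor one time
# step before the right one; reversing the motion flips the output sign.
left = [0, 1, 0, 0, 0, 0]
right = [0, 0, 1, 0, 0, 0]
rightward = reichardt_output(left, right)  # positive: preferred direction
leftward = reichardt_output(right, left)   # negative: null direction
```

The signed output is what makes an opponent correlator a plausible front end for a neuron like H1, which is excited by motion in one direction and suppressed by the opposite.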
Affiliation(s)
- Alan Gelperin: Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544