1
Jafari S, Park J, Lu Y, Demer JL. Finite element model of ocular adduction with unconstrained globe translation. Biomech Model Mechanobiol 2024; 23:601-614. PMID: 38418799. DOI: 10.1007/s10237-023-01794-3.
Abstract
Details of the anatomy and behavior of the structures responsible for human eye movements have been extensively elaborated since the first modern biomechanical models were introduced. Building on these findings, a finite element model of human ocular adduction was developed from connective tissue anatomy and measured optic nerve (ON) properties, as well as active contractility of bilaminar extraocular muscles (EOMs), incorporating the novel feature that globe translation is not otherwise constrained, so that realistic kinematics can be simulated. Anatomy of the hemisymmetric model is defined by magnetic resonance imaging. The globe is modeled as suspended by anatomically realistic connective tissues, orbital fat, and the contiguous ON. The model incorporates a material subroutine that implements active EOM contraction based on fiber twitch characteristics. Starting from the initial condition of 26° adduction, the medial rectus (MR) muscle was commanded to contract as the lateral rectus (LR) relaxed. We alternatively modeled the absence or presence of orbital fat. During pursuit-like adduction from 26° to 32°, the globe translated 0.52 mm posteriorly and 0.1 mm medially with orbital fat present, but 1.2 mm posteriorly and 0.1 mm medially without fat. Maximum principal strains in the optic disk and peripapillary region reached 0.05-0.06, and von Mises stress reached 96 kPa. Tension in the MR orbital layer was ~24 g-force after 6° adduction, but only ~3 g-force in the whole LR. This physiologically plausible simulation of EOM activation in an anatomically realistic globe suspensory system demonstrates that orbital connective tissues and fat are integral to the biomechanics of adduction, including loading by the ON.
Affiliation(s)
- Somaye Jafari
- Stein Eye Institute, University of California, Los Angeles (UCLA), 100 Stein Plaza, Los Angeles, CA 90095-7002, USA
- Joseph Park
- Stein Eye Institute, University of California, Los Angeles (UCLA), 100 Stein Plaza, Los Angeles, CA 90095-7002, USA
- Yongtao Lu
- Department of Engineering Mechanics, Dalian University of Technology, Dalian, China
- Joseph L Demer
- Stein Eye Institute, University of California, Los Angeles (UCLA), 100 Stein Plaza, Los Angeles, CA 90095-7002, USA
- Bioengineering Department, University of California, Los Angeles, USA
- Neuroscience Interdepartmental Program, University of California, Los Angeles, USA
- Department of Neurology, University of California, Los Angeles, USA
2
Ostendorf F, Dolan RJ. Integration of retinal and extraretinal information across eye movements. PLoS One 2015; 10:e0116810. PMID: 25602956. PMCID: PMC4300226. DOI: 10.1371/journal.pone.0116810.
Abstract
Visual perception is burdened with a highly discontinuous input stream arising from saccadic eye movements. For successful integration into a coherent representation, the visuomotor system needs to deal with these self-induced perceptual changes and distinguish them from external motion. Forward models offer one solution to this problem: the brain uses internal monitoring signals associated with oculomotor commands to predict the visual consequences of the corresponding eye movements during active exploration. Visual scenes typically contain a rich structure of spatial relational information, providing additional cues that may help disambiguate self-induced from external changes of perceptual input. We reasoned that a weighted integration of these two inherently noisy sources of information should lead to better perceptual estimates. Volunteer subjects performed a simple perceptual decision on the apparent displacement of a visual target that jumped unpredictably in sync with a saccadic eye movement. In a critical test condition, the target was presented together with a flanker object, so that perceptual decisions could take into account the spatial distance between target and flanker. Here, precision was better than in control conditions in which target displacements could only be estimated from either extraretinal or visual relational information alone. Our findings suggest that under natural conditions, integration of visual space across eye movements is based on close-to-optimal integration of both retinal and extraretinal pieces of information.
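The reliability-weighted integration scheme this abstract describes can be sketched as a toy simulation. All noise levels below are illustrative assumptions, not values from the study; the point is only that inverse-variance weighting of two noisy cues yields an estimate more precise than either cue alone:

```python
import numpy as np

rng = np.random.default_rng(0)

true_jump = 1.0      # actual target displacement (deg), arbitrary
sigma_extra = 1.5    # assumed noise of the extraretinal (efference copy) cue
sigma_rel = 0.8      # assumed noise of the visual relational (flanker) cue

n = 100_000
extra = true_jump + rng.normal(0, sigma_extra, n)  # extraretinal estimates
rel = true_jump + rng.normal(0, sigma_rel, n)      # relational estimates

# Inverse-variance (reliability) weights for the combined estimate
w_extra = 1 / sigma_extra**2
w_rel = 1 / sigma_rel**2
combined = (w_extra * extra + w_rel * rel) / (w_extra + w_rel)

# Ideal-observer prediction for the combined standard deviation
pred_sd = (1 / (w_extra + w_rel)) ** 0.5
print(round(combined.std(), 2), round(pred_sd, 2))
```

The simulated combined estimate is less variable than the better single cue and matches the ideal-observer prediction, which is the signature of near-optimal integration the authors report.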
Affiliation(s)
- Florian Ostendorf
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Department of Neurology, Charité—Universitätsmedizin Berlin, Berlin, Germany
- Raymond J. Dolan
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Wellcome Trust Centre for Neuroimaging, University College London, London, United Kingdom
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Russell Square House, London, United Kingdom
3
Churan J, Guitton D, Pack CC. Spatiotemporal structure of visual receptive fields in macaque superior colliculus. J Neurophysiol 2012; 108:2653-2667. DOI: 10.1152/jn.00389.2012.
Abstract
Saccades are useful for directing the high-acuity fovea to visual targets that are of behavioral relevance. The selection of visual targets for eye movements involves the superior colliculus (SC), where many neurons respond to visual stimuli. Many of these neurons are also activated before and during saccades of specific directions and amplitudes. Although the role of the SC in controlling eye movements has been thoroughly examined, far less is known about the nature of the visual responses in this area. We have, therefore, recorded from neurons in the intermediate layers of the macaque SC while using a sparse-noise mapping procedure to obtain a detailed characterization of the spatiotemporal structure of visual receptive fields. We find that SC responses to flashed visual stimuli start roughly 50 ms after stimulus onset and last on average ~70 ms. About 50% of these neurons are strongly suppressed by visual stimuli flashed at certain locations flanking the excitatory center, and the spatiotemporal pattern of suppression exerts a predictable influence on the timing of saccades. This suppression may, therefore, contribute to the filtering of distractor stimuli during target selection. We also find that saccades affect the processing of visual stimuli by SC neurons in a manner that is quite similar to the saccadic suppression and postsaccadic enhancement that have been observed in the cortex and in perception. However, in contrast to what has been observed in the cortex, decreased visual sensitivity was generally associated with increased firing rates, while increased sensitivity was associated with decreased firing rates. Overall, these results suggest that the processing of visual stimuli by SC receptive fields can influence oculomotor behavior and that oculomotor signals originating in the SC can shape perisaccadic visual perception.
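Sparse-noise receptive-field mapping of the kind the abstract mentions is commonly analyzed by spike-triggered averaging. The sketch below is a generic toy version of that analysis, not the authors' pipeline: it simulates a neuron with a hypothetical Gaussian excitatory center, probes it with one flash per frame at a random grid location, and recovers the receptive-field map by averaging the spike-weighted stimuli:

```python
import numpy as np

rng = np.random.default_rng(1)
grid = 16            # 16x16 grid of probe locations (illustrative)
n_frames = 20_000

# Hypothetical receptive field: Gaussian excitatory center at (8, 8)
ys, xs = np.mgrid[:grid, :grid]
rf = np.exp(-((xs - 8) ** 2 + (ys - 8) ** 2) / (2 * 2.0 ** 2))

# Sparse noise: a single bright probe per frame at a random location
locs = rng.integers(0, grid, size=(n_frames, 2))
stim = np.zeros((n_frames, grid, grid))
stim[np.arange(n_frames), locs[:, 0], locs[:, 1]] = 1.0

# Poisson spiking driven by the RF value at the probed location
rate = 0.1 + 5.0 * rf[locs[:, 0], locs[:, 1]]
spikes = rng.poisson(rate)

# Spike-triggered average: spike-weighted mean stimulus recovers the RF
sta = (spikes[:, None, None] * stim).sum(0) / spikes.sum()

peak = np.unravel_index(sta.argmax(), sta.shape)
print(peak)  # expect near the true center (8, 8)
```

A suppressive flank like the one reported for SC neurons would show up in the same map as locations whose STA value falls below the baseline response.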
Affiliation(s)
- Jan Churan
- Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
- Daniel Guitton
- Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
- Christopher C. Pack
- Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
4
Van Grootel TJ, Van der Willigen RF, Van Opstal AJ. Experimental test of spatial updating models for monkey eye-head gaze shifts. PLoS One 2012; 7:e47606. PMID: 23118883. PMCID: PMC3485288. DOI: 10.1371/journal.pone.0047606.
Abstract
How the brain maintains an accurate and stable representation of visual target locations despite the occurrence of saccadic gaze shifts is a classical problem in oculomotor research. Here we test and dissociate the predictions of different conceptual models for head-unrestrained gaze-localization behavior of macaque monkeys. We adopted the double-step paradigm with rapid eye-head gaze shifts to measure localization accuracy in response to flashed visual stimuli in darkness. We presented the second target flash either before (static) or during (dynamic) the first gaze displacement. In the dynamic case, the brief visual flash induced a small retinal streak of up to about 20 deg at an unpredictable moment and retinal location during the eye-head gaze shift, which poses serious challenges for the gaze-control system. However, for both stimulus conditions, monkeys localized the flashed targets with accurate gaze shifts, which rules out several models of visuomotor control. First, these findings exclude the possibility that gaze-shift programming relies on retinal inputs only. Instead, they support the notion that accurate eye-head motor feedback updates the gaze-saccade coordinates. Second, in dynamic trials the visuomotor system cannot rely on the coordinates of the planned first eye-head saccade either, which rules out remapping on the basis of a predictive corollary gaze-displacement signal. Finally, because gaze-related head movements were also goal-directed, requiring continuous access to eye-in-head position, we propose that our results best support a dynamic feedback scheme for spatial updating, in which visuomotor control incorporates accurate signals about instantaneous eye and head positions rather than relative eye and head displacements.
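The core distinction the abstract draws, retinal input alone versus motor-feedback updating, reduces to simple vector arithmetic. The following one-dimensional toy (all positions are made-up numbers in degrees, not data from the study) shows why a retinal-only scheme must mislocalize a flash presented mid-gaze-shift, while subtracting the intervening gaze displacement lands on target:

```python
# All positions in degrees along one axis; a toy 1-D illustration only.
target = 20.0          # flashed target, fixed in space
gaze_at_flash = 5.0    # gaze (eye + head) direction when the flash occurs
gaze_at_end = 12.0     # gaze direction when the first gaze shift lands

retinal_loc = target - gaze_at_flash   # where the flash fell on the retina

# Scheme 1: retinal input only (no updating) -> aims from the wrong origin
saccade_retinal_only = retinal_loc
landing_1 = gaze_at_end + saccade_retinal_only

# Scheme 2: motor-feedback updating -> subtract the intervening gaze shift
intervening = gaze_at_end - gaze_at_flash
saccade_updated = retinal_loc - intervening
landing_2 = gaze_at_end + saccade_updated

print(landing_1, landing_2)  # 27.0 vs 20.0: only updating hits the target
```

The dynamic condition in the study is powerful precisely because the intervening displacement term is unpredictable at flash time, so accurate localization implies access to actual motor feedback rather than to the planned gaze shift.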
Affiliation(s)
- Tom J. Van Grootel
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Department of Biophysics, Nijmegen, The Netherlands
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Robert F. Van der Willigen
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Department of Biophysics, Nijmegen, The Netherlands
- A. John Van Opstal
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Department of Biophysics, Nijmegen, The Netherlands
5
Abstract
Humans and other animals are surprisingly adept at estimating the duration of temporal intervals, even without the use of watches and clocks. This ability is typically studied in the lab by asking observers to indicate their estimate of the time between two external sensory events. The results of such studies confirm that humans can accurately estimate durations on a variety of time scales. Although many brain areas are thought to contribute to the representation of elapsed time, recent neurophysiological studies have linked the parietal cortex in particular to the perception of sub-second time intervals. In this Primer, we describe previous work on parietal cortex and time perception, and we highlight the findings of a study published in this issue of PLOS Biology, in which Schneider and Ghose characterize single-neuron responses during performance of a novel "Temporal Production" task. During temporal production, the observer must track the passage of time without anticipating any external sensory event, and it appears that the parietal cortex may use a unique strategy to support this type of measurement.
Affiliation(s)
- Erik P. Cook
- Department of Physiology, McGill University, Montreal, Quebec, Canada
- Christopher C. Pack
- Neurology & Neurosurgery, McGill University, Montreal, Quebec, Canada
6
Richard A, Churan J, Guitton DE, Pack CC. The geometry of perisaccadic visual perception. J Neurosci 2009; 29:10160-10170. PMID: 19675250. PMCID: PMC6664982. DOI: 10.1523/jneurosci.0511-09.2009.
Abstract
Our ability to explore our surroundings requires a combination of high-resolution vision and frequent rotations of the visual axis toward objects of interest. Such gaze shifts are themselves a source of powerful retinal stimulation, and so the visual system appears to have evolved mechanisms to maintain perceptual stability during movements of the eyes in space. The mechanisms underlying this perceptual stability can be probed in the laboratory by briefly presenting a stimulus around the time of a saccadic eye movement and asking subjects to report its position. Under such conditions, there is a systematic misperception of the probes toward the saccade end point. This perisaccadic compression of visual space has been the subject of much research, but few studies have attempted to relate it to specific brain mechanisms. Here, we show that the magnitude of perceptual compression for a wide variety of probe stimuli and saccade amplitudes is quantitatively predicted by a simple heuristic model based on the geometry of retinotopic representations in the primate brain. Specifically, we propose that perisaccadic compression is determined by the distance between the probe and saccade end point on a map that has a logarithmic representation of visual space, similar to those found in numerous cortical and subcortical visual structures. Under this assumption, the psychophysical data on perisaccadic compression can be appreciated intuitively by imagining that, around the time of a saccade, the brain confounds nearby oculomotor and sensory signals while attempting to localize the position of objects in visual space.
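The heuristic the abstract describes can be illustrated with a toy version of a logarithmic map. Everything below is a schematic sketch of the idea, not the authors' fitted model: `k` and `gain` are made-up constants, and the map is a generic log transform. Each probe is shifted toward the saccade end point by compressing its distance measured on the log map, then mapping back to linear visual coordinates:

```python
import numpy as np

def log_map(x, endpoint, k=1.0):
    """Signed distance from the saccade end point on a log-scaled map."""
    d = x - endpoint
    return np.sign(d) * np.log1p(np.abs(d) / k)

def perceived(x, endpoint, gain=0.5, k=1.0):
    """Compress log-map distances toward the end point by a fixed gain,
    then invert the log map to get perceived linear positions."""
    m = log_map(x, endpoint, k) * (1.0 - gain)   # compression on the map
    return endpoint + np.sign(m) * k * np.expm1(np.abs(m))

probes = np.array([-10.0, -5.0, 0.0, 5.0, 10.0])  # probe positions (deg)
endpoint = 10.0                                   # saccade end point (deg)
shift = perceived(probes, endpoint) - probes
print(np.round(shift, 2))  # every nonzero shift points toward the end point
```

In this sketch every probe is mislocalized toward the saccade end point and none overshoots it, which captures the qualitative geometry of perisaccadic compression; the quantitative predictions in the paper come from the logarithmic structure of actual retinotopic maps.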
Affiliation(s)
- Alby Richard
- Montreal Neurological Institute, McGill University School of Medicine, Quebec, Canada.