26
Unlocking crowding by ensemble statistics. Curr Biol 2022; 32:4975-4981.e3. PMID: 36309011. DOI: 10.1016/j.cub.2022.10.003.
Abstract
In crowding,1,2,3,4,5,6,7 objects that can be easily recognized in isolation appear jumbled when surrounded by other elements.8 Traditionally, crowding is explained by local pooling mechanisms,3,6,9,10,11,12,13,14,15 but many findings have shown that the global configuration of the entire stimulus display, rather than local aspects, determines crowding.8,16,17,18,19,20,21,22,23,24,25,26,27,28 However, understanding global configurations is challenging because even slight changes can lead from crowding to uncrowding and vice versa.23,25,28,29 Unfortunately, the number of configurations to explore is virtually infinite. Here, we show that one does not need to know the specific configuration of flankers to determine crowding strength but only their ensemble statistics, which allow for the rapid computation of groups within the stimulus display.30,31,32,33,34,35,36,37 To investigate the role of ensemble statistics in (un)crowding, we used a classic vernier offset discrimination task in which the vernier was flanked by multiple squares. We manipulated the orientation statistics of the squares based on the following rationale: a central square with an orientation different from the mean orientation of the other squares stands out from the rest and groups with the vernier, causing strong crowding. If, on the other hand, all squares group together, the vernier is the only element that stands out, and crowding is weak. These effects should depend exclusively on the perceived ensemble statistics, i.e., on the mean orientation of the squares and not on their individual orientations. In two experiments, we confirmed these predictions.
27
Srikantharajah J, Ellard C. How central and peripheral vision influence focal and ambient processing during scene viewing. J Vis 2022; 22:4. PMID: 36322076. PMCID: PMC9639699. DOI: 10.1167/jov.22.12.4.
Abstract
Central and peripheral vision carry out different functions during scene processing. The ambient mode of visual processing is more likely to involve peripheral visual processes, whereas the focal mode of visual processing is more likely to involve central visual processes. Although the ambient mode is responsible for navigating space and comprehending scene layout, the focal mode gathers detailed information as central vision is oriented to salient areas of the visual field. Previous work suggests that during the time course of scene viewing, there is a transition from ambient processing during the first few seconds to focal processing during later time intervals, characterized by longer fixations and shorter saccades. In this study, we identify the influence of central and peripheral vision on changes in eye movements and the transition from ambient to focal processing during the time course of scene processing. Using a gaze-contingent protocol, we restricted the visual field to central or peripheral vision while participants freely viewed scenes for 20 seconds. Results indicated that fixation durations are shorter when vision is restricted to central vision compared to normal vision. During late visual processing, fixations in peripheral vision were longer than those in central vision. We show that a transition from more ambient to more focal processing during scene viewing will occur even when vision is restricted to only central vision or peripheral vision.
28
Braaten LF, Arntzen E. Peripheral vision in matching-to-sample procedures. J Exp Anal Behav 2022; 118:425-441. PMID: 36053794. DOI: 10.1002/jeab.795.
Abstract
Eye-tracking has been used to investigate observing responses in matching-to-sample procedures. However, in visual search, peripheral vision plays an important role. Therefore, three experiments were conducted to investigate the extent to which adult participants can discriminate stimuli that vary in size and position in the periphery. Experiment 1 used arbitrary matching with abstract stimuli, Experiment 2 used identity matching with abstract stimuli, and Experiment 3 used identity matching with simple (familiar) shapes. In all three experiments, participants were taught eight conditional discriminations establishing four 3-member classes of stimuli. Four different stimulus sizes and three different stimulus positions were manipulated in the 12 peripheral test phases. In these test trials, participants had to fixate their gaze on the sample stimulus in the middle of the screen while selecting a comparison stimulus. Eye movements were measured with a head-mounted eye-tracker during both training and testing. Experiment 1 shows that participants can discriminate small abstract stimuli that are arbitrarily related in the periphery. Experiment 2 shows that matching identical stimuli does not affect discrimination in the periphery compared to arbitrarily related stimuli. However, Experiment 3 shows that discrimination increases when stimuli are well-known simple shapes.
29
Očić M, Bon I, Ružić L, Cigrovski V, Rupčić T. The Influence of Protective Headgear on the Visual Field of Recreational-Level Skiers. Int J Environ Res Public Health 2022; 19:10626. PMID: 36078342. PMCID: PMC9518168. DOI: 10.3390/ijerph191710626.
Abstract
The benefit of protective headgear for recreational skiers is an ongoing debate in the snow sports industry, with many opposing opinions. Due to the dynamic conditions in which winter sports are performed, athletes require rapid and constant processing of visual information. A sufficient level of anticipation helps athletes position themselves to reduce the forces transferred to the head, or even move to avoid a collision. To objectively identify the impact of protective headgear on the visual field when skiing, it is necessary to conduct suitable measurements. The sample consisted of 43 recreational-level skiers (27 M, 16 F; age 31.6 ± 8.23 years). A predefined testing protocol on an ortoreter was used to assess the visual field under three headgear conditions. Differences in perceived visual stimuli between the three conditions were evaluated by repeated-measures analysis of variance (ANOVA). Based on the observed results, it can be concluded that the combination of a ski helmet and ski goggles significantly narrows the visual field, and this holds for both habitual helmet users and non-users. When comparing helmet users and non-users, there were no differences in the amount of visual impairment; therefore, the habit of wearing a helmet does not influence the ability to perceive visual stimuli.
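The repeated-measures ANOVA used above is straightforward to compute by hand. The sketch below is a minimal one-way implementation; the function name and the subjects × conditions scores are illustrative assumptions, not data from the study.

```python
import numpy as np

def rm_anova_oneway(data):
    """One-way repeated-measures ANOVA on a (n_subjects, n_conditions) array.
    Returns the F statistic and its degrees of freedom."""
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()   # condition effect
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()   # subject effect
    ss_total = ((data - grand) ** 2).sum()
    ss_error = ss_total - ss_cond - ss_subj                  # residual after removing subjects
    df_cond, df_error = k - 1, (k - 1) * (n - 1)
    f_stat = (ss_cond / df_cond) / (ss_error / df_error)
    return f_stat, df_cond, df_error

# Hypothetical visual-field scores for 3 skiers under 3 headgear conditions
scores = np.array([[1.0, 2.0, 4.0],
                   [2.0, 3.0, 3.0],
                   [3.0, 4.0, 5.0]])
f_stat, df1, df2 = rm_anova_oneway(scores)
```

Removing the subject sum of squares from the error term is what distinguishes the repeated-measures design from an ordinary one-way ANOVA on the same scores.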
30
Wolfe B, Sawyer BD, Rosenholtz R. Toward a Theory of Visual Information Acquisition in Driving. Hum Factors 2022; 64:694-713. PMID: 32678682. PMCID: PMC9136385. DOI: 10.1177/0018720820939693.
Abstract
Objective: The aim of this study is to describe information acquisition theory, explaining how drivers acquire and represent the information they need. Background: While questions of what drivers are aware of underlie many questions in driver behavior, existing theories do not directly address how drivers in particular, and observers in general, acquire visual information. Understanding the mechanisms of information acquisition is necessary to build predictive models of drivers' representation of the world and can be applied beyond driving to a wide variety of visual tasks. Method: We describe our theory of information acquisition, looking to questions in driver behavior and results from vision science research that speak to its constituent elements. We focus on the intersection of peripheral vision, visual attention, and eye movement planning, and identify how an understanding of these visual mechanisms and processes in the context of information acquisition can inform more complete models of driver knowledge and state. Results: We set forth our theory of information acquisition, describing the gap in understanding that it fills and how existing questions in this space can be better understood using it. Conclusion: Information acquisition theory provides a new and powerful way to study, model, and predict what drivers know about the world, reflecting our current understanding of visual mechanisms and enabling new theories, models, and applications. Application: Using information acquisition theory to understand how drivers acquire, lose, and update their representation of the environment will aid development of driver assistance systems, semiautonomous vehicles, and road safety overall.
31
Camponogara I, Volcic R. Visual uncertainty unveils the distinct role of haptic cues in multisensory grasping. eNeuro 2022; 9:ENEURO.0079-22.2022. PMID: 35641223. PMCID: PMC9215692. DOI: 10.1523/eneuro.0079-22.2022.
Abstract
Human multisensory grasping movements (i.e., seeing and feeling a handheld object while grasping it with the contralateral hand) are superior to movements guided by each separate modality. This multisensory advantage might be driven by the integration of vision with either the haptic position cue only or with both position and size cues. To contrast these two hypotheses, we manipulated visual uncertainty (central vs. peripheral vision) and the availability of haptic cues during multisensory grasping. We found a multisensory benefit irrespective of the degree of visual uncertainty, suggesting that the integration process involved in multisensory grasping can be flexibly modulated by the contribution of each modality. Increasing visual uncertainty revealed the roles of the distinct haptic cues. The haptic position cue was sufficient to promote multisensory benefits, evidenced by faster actions with smaller grip apertures, whereas the haptic size cue was fundamental in fine-tuning grip aperture scaling. These results support the hypothesis that, in multisensory grasping, vision is integrated with all haptic cues, with the haptic position cue playing the key part. Our findings highlight the important role of non-visual sensory inputs in sensorimotor control and hint at the potential contributions of the haptic modality to developing and maintaining visuomotor functions. Significance statement: The longstanding view that vision is the primary sense we rely on to guide grasping movements relegates the equally important haptic inputs, such as touch and proprioception, to a secondary role. Here we show that, when visual uncertainty increases during visuo-haptic grasping, the central nervous system exploits distinct haptic inputs about the object's position and size to optimize grasping performance. Specifically, we demonstrate that haptic inputs about the object's position are fundamental to supporting vision in enhancing grasping performance, whereas haptic size inputs can further refine hand shaping. Our results provide strong evidence that non-visual inputs serve an important, previously under-appreciated, functional role in grasping.
32
Zhang LQ, Cottaris NP, Brainard DH. An image reconstruction framework for characterizing initial visual encoding. eLife 2022; 11:e71132. PMID: 35037622. PMCID: PMC8846596. DOI: 10.7554/elife.71132.
Abstract
We developed an image-computable observer model of the initial visual encoding that operates on natural image input, based on the framework of Bayesian image reconstruction from the excitations of the retinal cone mosaic. Our model extends previous work on ideal observer analysis and evaluation of performance beyond psychophysical discrimination, takes into account the statistical regularities of the visual environment, and provides a unifying framework for answering a wide range of questions regarding the visual front end. Using the error in the reconstructions as a metric, we analyzed variations of the number of different photoreceptor types on human retina as an optimal design problem. In addition, the reconstructions allow both visualization and quantification of information loss due to physiological optics and cone mosaic sampling, and how these vary with eccentricity. Furthermore, in simulations of color deficiencies and interferometric experiments, we found that the reconstructed images provide a reasonable proxy for modeling subjects' percepts. Lastly, we used the reconstruction-based observer for the analysis of psychophysical threshold, and found notable interactions between spatial frequency and chromatic direction in the resulting spatial contrast sensitivity function. Our method is widely applicable to experiments and applications in which the initial visual encoding plays an important role.
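The reconstruction idea can be illustrated with a toy linear-Gaussian version. The sketch below assumes a random sampling matrix and a simple ridge (Gaussian) prior; the paper's actual model instead uses realistic physiological optics, cone mosaics, and natural-image statistics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian reconstruction from "cone excitations":
# excitations r = A @ x + noise, with a Gaussian prior on the image x.
n_cones, n_pixels = 8, 16                  # far smaller than a real mosaic
A = rng.normal(size=(n_cones, n_pixels))   # stand-in cone sampling matrix
x_true = rng.normal(size=n_pixels)         # stand-in "image"
r = A @ x_true + 0.01 * rng.normal(size=n_cones)

lam = 0.1                                  # prior strength (stands in for image statistics)
# MAP estimate: x_hat = argmin ||A x - r||^2 + lam * ||x||^2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_pixels), A.T @ r)
recon_error = np.linalg.norm(x_hat - x_true)   # error metric, as in the framework
```

With fewer cones than pixels the problem is underdetermined, so the prior determines how the lost information is filled in; varying `A` (e.g., the mix of photoreceptor types) and comparing `recon_error` mirrors the paper's optimal-design analyses in miniature.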
33
Yu D. Training peripheral vision to read: Using stimulus exposure and identity priming. Front Neurosci 2022; 16:916447. PMID: 36090292. PMCID: PMC9451508. DOI: 10.3389/fnins.2022.916447.
Abstract
Reading in the periphery can be improved with perceptual learning. A conventional training paradigm involves repeated practice on a character-based task (e.g., recognizing random letters/words). While the training is effective, the hours of strenuous effort required from the trainees make it difficult to implement the training in low-vision patients. Here, we developed a training paradigm utilizing stimulus exposure and identity priming to minimize training effort and improve training accessibility while maintaining the active engagement of observers through a stimulus visibility task. Twenty-one normally sighted young adults were randomly assigned to three groups: a control group, a with-repetition training group, and a without-repetition training group. All observers received a pre-test and a post-test scheduled 1 week apart. Each test consisted of measurements of reading speed, visual-span profile, the spatial extent of crowding, and isolated-letter profiles at 10° eccentricity in the lower visual field. Training consisted of five daily sessions (a total of 7,150 trials) of viewing trigram stimuli (strings of three letters) with identity priming (prior knowledge of target letter identity). The with-repetition group was given the option to replay each stimulus (0.4 replays on average). In comparison to the control group, both training groups showed significant improvements in all four performance measures. Stimulus replay did not yield a measurable benefit for learning. Learning transferred to various untrained tasks and conditions, such as the reading task and untrained letter sizes. Reduction in crowding was the main basis of the training-related improvement in reading. We also found that the learning can be partially retained for a minimum of 3 months and that complete retention is attainable with additional monthly training. Our findings suggest that a conventional training task requiring recognition of random letters or words is dispensable for improving peripheral reading. Utilizing stimulus exposure and identity priming accompanied by a stimulus visibility task, our novel training procedure offers effective intervention, simple implementation, capability for remote self-administration, and easy translation into low-vision reading rehabilitation.
34
Lukanov H, König P, Pipa G. Biologically Inspired Deep Learning Model for Efficient Foveal-Peripheral Vision. Front Comput Neurosci 2021; 15:746204. PMID: 34880741. PMCID: PMC8645638. DOI: 10.3389/fncom.2021.746204.
Abstract
While abundant in biology, foveated vision is nearly absent from computational models and especially deep learning architectures. Despite considerable hardware improvements, training deep neural networks still presents a challenge and constrains the complexity of models. Here we propose an end-to-end neural model for foveal-peripheral vision, inspired by retino-cortical mapping in primates and humans. Our model has an efficient sampling technique for compressing the visual signal such that a small portion of the scene is perceived in high resolution while a large field of view is maintained in low resolution. An attention mechanism for performing "eye movements" assists the agent in collecting detailed information incrementally from the observed scene. Our model achieves comparable results to a similar neural architecture trained on full-resolution data for image classification and outperforms it at video classification tasks. At the same time, because of the smaller size of its input, it reduces computational effort tenfold and uses several times less memory. Moreover, we present an easy-to-implement bottom-up and top-down attention mechanism which relies on task-relevant features and is therefore a convenient byproduct of the main architecture. Apart from its computational efficiency, the presented work provides means for exploring active vision for agent training in simulated environments and anthropomorphic robotics.
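The compression idea can be sketched with a crude two-resolution scheme: full resolution around the fixation point, block-averaged everywhere else. This is only an illustration of the sampling principle; the paper's model uses a learned, log-polar-like retino-cortical mapping rather than this hard split.

```python
import numpy as np

def foveate(img, fix_r, fix_c, fovea=8, factor=4):
    """Keep a full-resolution patch around fixation; block-average elsewhere.
    Assumes image dimensions are divisible by `factor`."""
    h, w = img.shape
    # Low-resolution background: average factor x factor blocks, then upsample
    low = img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    out = np.repeat(np.repeat(low, factor, axis=0), factor, axis=1)
    # Paste the full-resolution "foveal" region back in around fixation
    r0, r1 = max(0, fix_r - fovea), min(h, fix_r + fovea)
    c0, c1 = max(0, fix_c - fovea), min(w, fix_c + fovea)
    out[r0:r1, c0:c1] = img[r0:r1, c0:c1]
    return out

img = np.arange(1024, dtype=float).reshape(32, 32)
out = foveate(img, 16, 16)
```

Moving `(fix_r, fix_c)` plays the role of the model's "eye movements": each fixation reveals a different patch in detail while the cheap low-resolution background preserves the wide field of view.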
35
Li MS, Abbatecola C, Petro LS, Muckli L. Numerosity Perception in Peripheral Vision. Front Hum Neurosci 2021; 15:750417. PMID: 34803635. PMCID: PMC8597708. DOI: 10.3389/fnhum.2021.750417.
Abstract
Peripheral vision has different functional priorities for mammals than foveal vision. One of its roles is to monitor the environment while central vision is focused on the current task. Becoming distracted too easily would be counterproductive in this respect, so the brain should react only to behaviourally relevant changes. Gist processing is well suited to this purpose, and it is therefore not surprising that evidence from both functional brain imaging and behavioural research suggests a tendency to generalize and blend information in the periphery. This may be caused by the balance of perceptual influence in the periphery between bottom-up (i.e., sensory information) and top-down (i.e., prior or contextual information) processing channels. Here, we investigated this interaction behaviourally using a peripheral numerosity discrimination task with top-down and bottom-up manipulations. Participants compared numerosity between the left and right peripheries of a screen. Each periphery was divided into a centre and a surrounding area, only one of which was a task-relevant target region. Our top-down manipulation was the instruction indicating which area to attend to - centre or surround. We varied the signal strength by altering stimulus durations, i.e., the amount of information presented/processed (a combined bottom-up and recurrent top-down feedback factor). We found that numerosity perceived in target regions was affected by contextual information in neighbouring (but irrelevant) areas. This effect appeared as soon as stimulus duration allowed the task to be reliably performed and persisted even at the longest duration (1 s). We compared the pattern of results with an ideal-observer model and found a qualitative difference in the way centre and surround areas interacted perceptually in the periphery. When participants reported on the central area, the irrelevant surround affected the response as a weighted combination - consistent with the idea of a receptive field focused on the target area into which irrelevant surround stimulation leaks. When participants reported on the surround, the response was best described by a model in which attention occasionally switches from the task-relevant surround to the task-irrelevant centre - consistent with a selection model of two competing streams of information. Overall, our results show that the influence of spatial context in the periphery is mandatory but task dependent.
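The two candidate models described above can be sketched schematically. The numerosities, mixing weight, and switching probability below are illustrative assumptions, not the study's fitted values; the point is the qualitative difference between the models' trial-by-trial predictions.

```python
import numpy as np

rng = np.random.default_rng(1)

def weighted_model(target, irrelevant, w=0.8):
    """Centre report: percept is a weighted mix of target and irrelevant areas."""
    return w * target + (1 - w) * irrelevant

def switching_model(target, irrelevant, p_switch=0.2):
    """Surround report: attention occasionally selects the irrelevant area."""
    switch = rng.random(np.shape(target)) < p_switch
    return np.where(switch, irrelevant, target)

target = np.full(10000, 20.0)      # dots in the task-relevant area (made up)
irrelevant = np.full(10000, 40.0)  # dots in the neighbouring, irrelevant area

mean_weighted = weighted_model(target, irrelevant).mean()    # exactly 24
mean_switching = switching_model(target, irrelevant).mean()  # ~24 on average
# Same average bias, but only the switching model produces a bimodal
# trial-by-trial distribution (responses at 20 or 40, never in between).
```

This is why averaged data alone cannot separate the two accounts: the models are distinguished by response distributions across trials, not by mean bias.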
36
Bolarinwa J, Eimontaite I, Mitchell T, Dogramadzi S, Caleb-Solly P. Assessing the Role of Gaze Tracking in Optimizing Humans-In-The-Loop Telerobotic Operation Using Multimodal Feedback. Front Robot AI 2021; 8:578596. PMID: 34671646. PMCID: PMC8521448. DOI: 10.3389/frobt.2021.578596.
Abstract
A key challenge in achieving effective robot teleoperation is minimizing teleoperators’ cognitive workload and fatigue. We set out to investigate the extent to which gaze tracking data can reveal how teleoperators interact with a system. In this study, we present an analysis of gaze tracking, captured as participants completed a multi-stage task: grasping and emptying the contents of a jar into a container. The task was repeated with different combinations of visual, haptic, and verbal feedback. Our aim was to determine if teleoperation workload can be inferred by combining the gaze duration, fixation count, task completion time, and complexity of robot motion (measured as the sum of robot joint steps) at different stages of the task. Visual information of the robot workspace was captured using four cameras, positioned to capture the robot workspace from different angles. These camera views (aerial, right, eye-level, and left) were displayed through four quadrants (top-left, top-right, bottom-left, and bottom-right quadrants) of participants’ video feedback computer screen, respectively. We found that the gaze duration and the fixation count were highly dependent on the stage of the task and the feedback scenario utilized. The results revealed that combining feedback modalities reduced the cognitive workload (inferred by investigating the correlation between gaze duration, fixation count, task completion time, success or failure of task completion, and robot gripper trajectories), particularly in the task stages that require more precision. There was a significant positive correlation between gaze duration and complexity of robot joint movements. Participants’ gaze outside the areas of interest (distractions) was not influenced by feedback scenarios. A learning effect was observed in the use of the controller for all participants as they repeated the task with different feedback combination scenarios. 
To design a teleoperation system applicable in healthcare, we found that analysing teleoperators' gaze can help in understanding how teleoperators interact with the system, making it possible to develop the system from the teleoperators' standpoint.
37
Abstract
In crowding, perception of a target deteriorates in the presence of nearby flankers. Surprisingly, perception can be rescued from crowding if additional flankers are added (uncrowding). Uncrowding is a major challenge for all classic models of crowding and vision in general, because the global configuration of the entire stimulus is crucial. However, it is unclear which characteristics of the configuration impact (un)crowding. Here, we systematically dissected flanker configurations and showed that (un)crowding cannot be easily explained by the effects of the sub-parts or low-level features of the stimulus configuration. Our modeling results suggest that (un)crowding requires global processing. These results are well in line with previous studies showing the importance of global aspects in crowding.
38
Baig A, Buckley D, Codina C. Behavioural Adaptation to Hereditary Macular Dystrophy: A Systematic Review on the Effect of Early Onset Central Field Loss on Peripheral Visual Abilities. Br Ir Orthopt J 2021; 17:104-118. PMID: 34278226. PMCID: PMC8269784. DOI: 10.22599/bioj.177.
Abstract
Purpose: Hereditary macular dystrophies (HMD) result in early onset central field loss. Evidence for cortical plasticity has been found in HMD, which may enhance peripheral visual abilities to meet the increased demands and reliance on the peripheral field, as has been found in congenitally deaf adults and habitual action video-game players. This is a qualitative synthesis of the literature on the effect of early onset central field loss on peripheral visual abilities. The knowledge gained may help in developing rehabilitative strategies that enable optimisation of remaining peripheral vision. Methods: A systematic search performed on the Web of Science and PubMed databases yielded 728 records published between 1809 and 2020, of which seven case-control studies were eligible for qualitative synthesis. Results: The search highlighted an overall paucity of literature, which lacked validity due to small heterogeneous samples and deficiencies in the reporting of methods and population characteristics. A range of peripheral visual abilities at different eccentricities were studied. Superior performance of HMD observers in the peripheral field, or similarities between the preferred retinal loci (PRL) and the normal fovea, were observed in four of seven studies. Findings were often based on studies including a single observer. Further, larger, rigorous studies are required in this area. Conclusions: Spontaneous perceptual learning through reliance on and repeated use of the peripheral field and PRL may result in some specific superior peripheral visual abilities. However, worse performance in some tasks could reflect unexpected rod disease, lack of intensive training, or persistent limitations due to the need for cones for specific tasks. Perceptual learning through training regimes could enable patients to optimise use of the PRL and remaining peripheral vision. However, further studies are needed to design optimal training regimes.
39
Alexander RG, Mintz RJ, Custodio PJ, Macknik SL, Vaziri A, Venkatakrishnan A, Gindina S, Martinez-Conde S. Gaze mechanisms enabling the detection of faint stars in the night sky. Eur J Neurosci 2021; 54:5357-5367. PMID: 34160864. DOI: 10.1111/ejn.15335.
Abstract
For millennia, people have used "averted vision" to improve their detection of faint celestial objects, a technique first documented around 325 BCE. Yet, no studies have assessed gaze location during averted vision to determine what pattern best facilitates perception. Here, we characterized averted vision for the first time, recording the eye positions of dark-adapted human participants. We simulated stars of apparent magnitudes 3.3 and 3.5, matching their brightness to Megrez (the dimmest star in the Big Dipper) and Tau Ceti. Participants indicated whether each star was visible from a series of fixation locations, providing a comprehensive map of detection performance in all directions. Contrary to prior predictions, maximum detection was first achieved at ~8° from the star, much closer to the fovea than expected from rod-cone distributions alone. These findings challenge the assumption of optimal detection at the peak of rod density and provide the first systematic assessment of an age-old facet of human vision.
40
Wei J, Kong D, Yu X, Wei L, Xiong Y, Yang A, Drobe B, Bao J, Zhou J, Gao Y, He Z. Is Peripheral Motion Detection Affected by Myopia? Front Neurosci 2021; 15:683153. PMID: 34163327. PMCID: PMC8215660. DOI: 10.3389/fnins.2021.683153.
Abstract
Purpose: The current study investigated whether myopia affects peripheral motion detection and whether any such effect interacts with spatial frequency, motion speed, or eccentricity. Methods: Seventeen young adults aged 22-26 years participated in the study. They were six low-to-medium myopes [spherical equivalent refractions -1.0 to -5.0 D (diopter)], five high myopes (<-5.5 D), and six emmetropes (+0.5 to -0.5 D). All myopes were corrected with their own habitual soft contact lenses. A four-alternative forced-choice task was employed, in which the subject had to determine the location of a phase-shifting Gabor among the four quadrants (superior, inferior, nasal, and temporal) of the visual field. The experiment was blocked by eccentricity (20° and 27°), spatial frequency [0.6, 1.2, 2.4, and 4.0 cycles per degree (c/d) at 20° eccentricity, and 0.6, 1.2, 2.0, and 3.2 c/d at 27° eccentricity], and motion speed [2 and 6 degrees per second (d/s)]. Results: Mixed-model analyses of variance showed no significant difference in peripheral motion detection thresholds between the three refractive groups at either 20° (F[2,14] = 0.145, p = 0.866) or 27° (F[2,14] = 0.475, p = 0.632). At 20°, lower motion detection thresholds were associated with higher myopia (p < 0.05) in myopic viewers, mostly for low-spatial-frequency, high-speed targets in the nasal and superior quadrants and for high-spatial-frequency, high-speed targets in the temporal quadrant. At 27°, no significant correlation was found between spherical equivalent and peripheral motion detection threshold under any condition (all p > 0.1). Spatial frequency, speed, and quadrant of the visual field all had significant effects on the peripheral motion detection threshold. Conclusion: There was no significant difference between the three refractive groups in peripheral motion detection. However, lower motion detection thresholds were associated with higher myopia, mostly for low-spatial-frequency targets at 20° in myopic viewers.
41
Toscani M, Mamassian P, Valsecchi M. Underconfidence in peripheral vision. J Vis 2021; 21:2. [PMID: 34106222 PMCID: PMC8196405 DOI: 10.1167/jov.21.6.2]
Abstract
Our visual experience appears uniform across the visual field, despite the poor resolution of peripheral vision. This may be because we do not notice that we are missing details in the periphery of our visual field and believe that peripheral vision is just as rich as central vision. In other words, the uniformity of the visual scene could be explained by a metacognitive bias. We deployed a confidence forced-choice method to measure metacognitive performance in peripheral as compared to central vision. Participants judged the orientation of gratings presented in central and peripheral vision, and reported whether they thought they were more likely to be correct in the perceptual decision for the central or for the peripheral stimulus. Observers were underconfident in the periphery: higher sensory evidence in the periphery was needed to equate confidence choices between central and peripheral perceptual decisions. Even when performance on the central and peripheral tasks was matched, observers remained more confident in their ability to report the orientation of the central gratings than that of the peripheral gratings. In a second experiment, we measured metacognitive sensitivity, defined as the difference in perceptual sensitivity between decisions endorsed with high confidence and decisions endorsed with low confidence. Results showed that metacognitive sensitivity is lower when participants compare central to peripheral perceptual decisions than when they compare peripheral to peripheral or central to central decisions. In a third experiment, we showed that peripheral underconfidence does not arise because observers base confidence judgments on stimulus size or contrast range rather than on perceptual performance. Taken together, the results indicate that humans are impaired at comparing central with peripheral perceptual performance, but metacognitive biases cannot explain our impression of uniformity, as that would require peripheral overconfidence.
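The second experiment's measure can be made concrete: split trials by reported confidence and compare perceptual performance between the two groups. A minimal accuracy-based sketch in Python (the function name is illustrative, and raw proportion correct stands in for the paper's perceptual-sensitivity measure, so this is only the idea, not the authors' analysis):

```python
def metacognitive_sensitivity(correct, high_conf):
    """Difference in accuracy between decisions made with high vs. low
    confidence; larger values mean confidence tracks performance better."""
    hi = [c for c, h in zip(correct, high_conf) if h]
    lo = [c for c, h in zip(correct, high_conf) if not h]
    if not hi or not lo:
        raise ValueError("need both high- and low-confidence trials")
    return sum(hi) / len(hi) - sum(lo) / len(lo)
```

A value near zero would indicate that confidence reports carry little information about whether the perceptual decision was correct.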
42
Haun AM. What is visible across the visual field? Neurosci Conscious 2021; 2021:niab006. [PMID: 34084558 PMCID: PMC8167368 DOI: 10.1093/nc/niab006]
Abstract
It is sometimes claimed that because the resolution and sensitivity of visual perception are better in the fovea than in the periphery, peripheral vision cannot support the same kinds of colour and sharpness percepts as foveal vision. The fact that a scene nevertheless seems colourful and sharp throughout the visual field then poses a puzzle. In this study, I use a detailed model of human spatial vision to estimate the visibility of certain properties of natural scenes, including aspects of colourfulness, sharpness, and blurriness, across the visual field. The model is constructed to reproduce basic aspects of human contrast and colour sensitivity over a range of retinal eccentricities. I apply the model to colourful, complex natural scene images, and estimate the degree to which colour and edge information are present in the model's representation of the scenes. I find that, aside from the intrinsic drift in the spatial scale of the representation, there are not large qualitative differences between foveal and peripheral representations of 'colourfulness' and 'sharpness'.
43
Wang FS, Wolf J, Farshad M, Meboldt M, Lohmeyer Q. Object-Gaze Distance: Quantifying Near-Peripheral Gaze Behavior in Real-World Applications. J Eye Mov Res 2021; 14. [PMID: 34122747 PMCID: PMC8189527 DOI: 10.16910/jemr.14.1.5]
Abstract
Eye tracking (ET) has been shown to reveal the wearer's cognitive processes through measurement of the central point of foveal vision. However, traditional ET evaluation methods have not been able to take the wearer's use of the peripheral field of vision into account. We propose an algorithmic enhancement to a state-of-the-art ET analysis method, the Object-Gaze Distance (OGD), which additionally allows the quantification of near-peripheral gaze behavior in complex real-world environments. The algorithm uses machine learning for area of interest (AOI) detection and computes the minimal 2D Euclidean pixel distance to the gaze point, creating a continuous gaze-based time series. Based on an evaluation of two AOIs in a real surgical procedure, the results show a considerable increase in interpretable fixation data (from 23.8% to 78.3% for the AOI "screw" and from 4.5% to 67.2% for the AOI "screwdriver") when the near-peripheral field of vision is incorporated. Additionally, the evaluation of a multi-OGD time-series representation has shown the potential to reveal novel gaze patterns, which may provide a more accurate depiction of human gaze behavior in multi-object environments.
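The core computation described above is simple: for each video frame, take the minimum 2D Euclidean pixel distance between the gaze point and any pixel of a detected AOI. A minimal Python sketch of that idea (the function names and the representation of an AOI as a list of pixel coordinates are illustrative assumptions, not the authors' implementation):

```python
import math

def object_gaze_distance(gaze, aoi_pixels):
    """Minimal 2D Euclidean pixel distance from the gaze point to any
    pixel of the detected AOI (0 if the gaze lands on the object)."""
    gx, gy = gaze
    return min(math.hypot(gx - x, gy - y) for (x, y) in aoi_pixels)

def ogd_series(gaze_samples, aoi_per_frame):
    """One OGD value per frame: a continuous gaze-based time series.
    Frames where the AOI was not detected yield None."""
    return [object_gaze_distance(g, aoi) if aoi else None
            for g, aoi in zip(gaze_samples, aoi_per_frame)]
```

An OGD of 0 corresponds to a direct fixation on the object; small non-zero values capture the near-peripheral gaze behavior that fixation-only analyses discard.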
44
Veríssimo IS, Hölsken S, Olivers CNL. Individual differences in crowding predict visual search performance. J Vis 2021; 21:29. [PMID: 34038508 PMCID: PMC8164367 DOI: 10.1167/jov.21.5.29]
Abstract
Visual search is an integral part of human behavior and has proven important to understanding mechanisms of perception, attention, memory, and oculomotor control. Thus far, the dominant theoretical framework posits that search is mainly limited by covert attentional mechanisms, comprising a central bottleneck in visual processing. A different class of theories seeks the cause in the inherent limitations of peripheral vision, with search being constrained by what is known as the functional viewing field (FVF). One of the major factors limiting peripheral vision, and thus the FVF, is crowding. We adopted an individual differences approach to test the prediction from FVF theories that visual search performance is determined by the efficacy of peripheral vision, in particular crowding. Forty-four participants were assessed with regard to their sensitivity to crowding (as measured by critical spacing) and their search efficiency (as indicated by manual responses and eye movements). This revealed substantial correlations between the two tasks, as stronger susceptibility to crowding was predictive of slower search, more eye movements, and longer fixation durations. Our results support FVF theories in showing that peripheral vision is an important determinant of visual search efficiency.
45
Kujala T, Kircher K, Ahlström C. A Review of Occlusion as a Tool to Assess Attentional Demand in Driving. Hum Factors 2021:187208211010953. [PMID: 33908809 PMCID: PMC10374995 DOI: 10.1177/00187208211010953]
Abstract
OBJECTIVE The aim of this review is to identify how visual occlusion contributes to our understanding of attentional demand and spare visual capacity in driving and the strengths and limitations of the method. BACKGROUND The occlusion technique was developed by John W. Senders to evaluate the attentional demand of driving. Despite its utility, it has been used infrequently in driver attention/inattention research. METHOD Visual occlusion studies in driving published between 1967 and 2020 were reviewed. The focus was on original studies in which the forward visual field was intermittently occluded while the participant was driving. RESULTS Occlusion studies have shown that attentional demand varies across situations and drivers and have indicated environmental, situational, and inter-individual factors behind the variability. The occlusion technique complements eye tracking in being able to indicate the temporal requirements for and redundancy in visual information sampling. The proper selection of occlusion settings depends on the target of the research. CONCLUSION Although there are a number of occlusion studies looking at various aspects of attentional demand, we are still only beginning to understand how these demands vary, interact, and covary in naturalistic driving. APPLICATION The findings of this review have methodological and theoretical implications for human factors research and for the development of distraction monitoring and in-vehicle system testing. Distraction detection algorithms and testing guidelines should consider the variability in drivers' situational and individual spare visual capacity.
46
Protective Football Headgear and Peripheral Visuomotor Ability in NCAA Football Athletes: The Role of Facemasks and Visors. J Funct Morphol Kinesiol 2021; 6:jfmk6020034. [PMID: 33917828 PMCID: PMC8167592 DOI: 10.3390/jfmk6020034]
Abstract
The purpose of this investigation was to determine the effects of varying facemask reinforcement and visor tint on peripheral visuomotor abilities in collegiate football players. Division I NCAA football players (n = 14) completed two peripheral visuomotor experiments: (1) varying facemask reinforcement and (2) varying visor tint. In experiment 1, participants were tested under the following conditions: baseline (no helmet; BL), helmet + light (HL), helmet + medium (HM), helmet + heavy (HH), and helmet + extra-heavy (HXH) reinforced facemasks. In experiment 2, participants were tested under the following conditions: baseline (no helmet; BL), helmet only (HO), helmet + clear (HCV), helmet + smoke-tinted (HSV), and helmet + mirror-tinted (HMV) visors. For each condition, a 60 s peripheral visuomotor test was completed on a Dynavision D2 visuomotor board. In experiment 1, the BL peripheral reaction time (PRT) was faster than in all facemask conditions (p < 0.05). Furthermore, PRT was slower with HXH than with HL (p < 0.001), HM (p < 0.001), and HH (p = 0.001). Both HH and HXH amplified the PRT impairments in the outermost and inferior peripheral visual areas (p < 0.05). In experiment 2, BL PRT was faster than in all helmeted conditions (p < 0.05). Additionally, PRT was slower in the HSV (p = 0.013) and HMV (p < 0.001) conditions than with HO. HMV slowed PRT in all peripheral areas (p < 0.05), whereas HSV impaired PRT only in the outer areas (p < 0.05). Wearing protective football headgear thus impairs peripheral visuomotor ability. Lightly reinforced facemasks and clear visors do not appear to exacerbate this impairment; however, heavily reinforced facemasks and tinted visors further degrade visuomotor performance in the outer and inferior visual areas, indicating a potential need to weigh on-field player performance against safety.
47
Ringer RV, Coy AM, Larson AM, Loschky LC. Investigating Visual Crowding of Objects in Complex Real-World Scenes. Iperception 2021; 12:2041669521994150. [PMID: 35145614 PMCID: PMC8822316 DOI: 10.1177/2041669521994150]
Abstract
Visual crowding, the impairment of object recognition in peripheral vision due to flanking objects, has generally been studied using simple stimuli on blank backgrounds. While crowding is widely assumed to occur in natural scenes, it has not been shown rigorously yet. Given that scene contexts can facilitate object recognition, crowding effects may be dampened in real-world scenes. Therefore, this study investigated crowding using objects in computer-generated real-world scenes. In two experiments, target objects were presented with four flanker objects placed uniformly around the target. Previous research indicates that crowding occurs when the distance between the target and flanker is approximately less than half the retinal eccentricity of the target. In each image, the spacing between the target and flanker objects was varied considerably above or below the standard (0.5) threshold to either suppress or facilitate the crowding effect. Experiment 1 cued the target location and then briefly flashed the scene image before participants could move their eyes. Participants then selected the target object’s category from a 15-alternative forced choice response set (including all objects shown in the scene). Experiment 2 used eye tracking to ensure participants were centrally fixating at the beginning of each trial and showed the image for the duration of the participant’s fixation. Both experiments found object recognition accuracy decreased with smaller spacing between targets and flanker objects. Thus, this study rigorously shows crowding of objects in semantically consistent real-world scenes.
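The 0.5 spacing threshold cited above is the classic Bouma rule of thumb: crowding is expected when target–flanker spacing falls below roughly half the target's eccentricity. A minimal sketch of that check (the function name is illustrative; the 0.5 critical-spacing fraction is the only value taken from the text):

```python
def is_crowded(eccentricity_deg, spacing_deg, bouma_fraction=0.5):
    """Bouma-style rule of thumb: flankers closer than
    bouma_fraction * eccentricity are expected to crowd the target."""
    return spacing_deg < bouma_fraction * eccentricity_deg
```

For a target at 10° eccentricity, flankers closer than about 5° would be expected to crowd it; the study's spacings were set well below or above this threshold to suppress or elicit crowding.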
48
Herrera-Esposito D, Coen-Cagli R, Gomez-Sena L. Flexible contextual modulation of naturalistic texture perception in peripheral vision. J Vis 2021; 21:1. [PMID: 33393962 PMCID: PMC7794279 DOI: 10.1167/jov.21.1.1]
Abstract
Peripheral vision comprises most of our visual field and is essential in guiding visual behavior. The most influential theory of peripheral vision explains its characteristic capabilities and limitations, which distinguish it from foveal vision, as the product of representing the visual input using summary statistics. Despite its success, this account may provide a limited understanding of peripheral vision, because it neglects processes of perceptual grouping and segmentation. To test this hypothesis, we studied how contextual modulation, namely the modulation of the perception of a stimulus by its surrounds, interacts with segmentation in human peripheral vision. We used naturalistic textures, which are directly related to summary-statistics representations. We show that segmentation cues affect contextual modulation, and that this effect is not captured by our implementation of the summary-statistics model. We then characterize the effects of different texture statistics on contextual modulation, providing guidance for extending the model as well as for probing the neural mechanisms of peripheral vision.
49
Kato T. Using "Enzan No Metsuke" (Gazing at the Far Mountain) as a Visual Search Strategy in Kendo. Front Sports Act Living 2020; 2:40. [PMID: 33345032 PMCID: PMC7739574 DOI: 10.3389/fspor.2020.00040]
Abstract
In Kendo (Japanese fencing), “Enzan no Metsuke” is an important Waza (technique) applied by expert Kendo fighters. It involves looking at the opponent's eyes with “a gaze toward the far mountain,” taking in not only the opponent's face but also his or her whole body. Over the last few decades, a considerable number of studies on visual search behavior in sport have been conducted, yet few articles examine visual search behavior in combat sports such as the martial arts. This study aimed to analyze the visual search strategies used by expert Kendo fighters during sparring practice, in order to characterize “Enzan no Metsuke” under experimental but natural (in situ) conditions. Ten experts, 10 novices, and one Shihan (a master of Kendo) participated. The fighters wore a mobile eye tracker and faced a real opponent. They were instructed to do the following across five sessions: prepare themselves, practice their offense and defense techniques, and fight in a real Shiai (match). The results indicated differences in the visual search strategies of the Shihan, experts, and novices. The Shihan and experts fixated on the opponent's eyes or head region most of the time and adopted a visual search strategy involving fewer fixations of longer duration. Conversely, novices set their eyes mainly on the opponent's Shinai (sword). Only the Shihan always looked at the opponent's eyes, even during the preparation, offense, and defense sessions. The Shihan and experts kept their “visual pivot” quietly anchored on the opponent's eyes, even when the opponent attacked with the Shinai, whereas novices moved their eyes up and down under the influence of the opponent's movements. These results indicate that novices searched for detailed information about their opponent and processed visual information through focal vision, whereas the Shihan and experts absorbed information not from the opponent's eyes alone but from the entire body by utilizing peripheral vision; this means that the Shihan and experts could see an opening or opportunity and react instantaneously by using “Enzan no Metsuke.”
50
Ryu D, Cooke A, Bellomo E, Woodman T. Watch out for the hazard! Blurring peripheral vision facilitates hazard perception in driving. Accid Anal Prev 2020; 146:105755. [PMID: 32927281 DOI: 10.1016/j.aap.2020.105755]
Abstract
The objectives of this paper were to directly examine the roles of central and peripheral vision in hazard perception and to test whether perceptual training can enhance hazard perception. We also examined putative cortical mechanisms underpinning any effect of perceptual training on performance. To address these objectives, we used the gaze-contingent display paradigm to selectively present information to central and peripheral parts of the visual field. In Experiment 1, we compared hazard perception abilities of experienced and inexperienced drivers while watching video clips in three different viewing conditions (full vision; clear central and blurred peripheral vision; blurred central and clear peripheral vision). Participants' visual search behaviour and cortical activity were simultaneously recorded. In Experiment 2, we determined whether training with clear central and blurred peripheral vision could improve hazard perception among non-licensed drivers. Results demonstrated that (i) information from central vision is more important than information from peripheral vision in identifying hazard situations, for screen-based hazard perception tests, (ii) clear central and blurred peripheral vision viewing helps the alignment of line-of-gaze and attention, (iii) training with clear central and blurred peripheral vision can improve screen-based hazard perception. The findings have important implications for road safety and provide a new training paradigm to improve hazard perception.