1. Carpio A, Dreher JC, Ferrera D, Galán D, Mercado F, Obeso I. Causal computations of supplementary motor area on spatial impulsivity. Sci Rep 2024;14:17040. PMID: 39048603; PMCID: PMC11269645; DOI: 10.1038/s41598-024-67673-8.
Abstract
Spatial proximity to important stimuli often induces impulsive behaviour. How we overcome impulsive tendencies determines whether behaviour is adaptive. Here, we used virtual reality to investigate whether the spatial proximity of stimuli is causally related to supplementary motor area (SMA) function. In two experiments, we used a virtual environment that recreates close and distant spaces to test the causal contributions of the SMA to spatial impulsivity. In a first, online experiment (N = 93) we validated the paradigm and measured the influence of stimulus distance using a go/no-go task with close (21 cm) or distant (360 cm) stimuli. In experiment 2 (N = 28), we applied transcranial static magnetic stimulation (tSMS) over the SMA (double-blind, crossover, sham-controlled design) to test its role in controlling impulsive tendencies towards close vs distant stimuli. Reaction times and error rates (omission and commission) were analysed, and the EZ model parameters (a, v, Ter and MDT) were computed. Close stimuli elicited faster responses than distant stimuli but also higher error rates, specifically commission errors (experiment 1). Real stimulation over the SMA slowed response latencies (experiment 2), an effect mediated by an increase in decision thresholds (a). These findings suggest that spatial proximity modulates impulsivity, accelerating actions in a way that may increase inaccurate responses to nearby objects. Our study also provides a starting point for understanding the role of the SMA in regulating spatial impulsivity.
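For readers unfamiliar with the EZ model parameters (a, v, Ter and MDT) mentioned in the abstract, the EZ-diffusion method of Wagenmakers and colleagues recovers them in closed form from three summary statistics: proportion correct, RT variance and mean RT. A minimal sketch of those equations (the function name and the worked input values are illustrative and not taken from this study; the inputs follow the standard published example):

```python
import math

def ez_diffusion(pc, vrt, mrt, s=0.1):
    """Closed-form EZ-diffusion estimates from proportion correct (pc),
    RT variance of correct responses (vrt, in s^2) and mean RT (mrt, in s).
    s is the conventional scaling parameter. Edge corrections for
    pc in {0, 0.5, 1} are omitted for brevity."""
    l = math.log(pc / (1.0 - pc))                       # logit of accuracy
    x = l * (l * pc**2 - l * pc + pc - 0.5) / vrt
    v = math.copysign(1.0, pc - 0.5) * s * x**0.25      # drift rate
    a = s**2 * l / v                                    # boundary separation (decision threshold)
    y = -v * a / s**2
    mdt = (a / (2.0 * v)) * (1.0 - math.exp(y)) / (1.0 + math.exp(y))  # mean decision time
    ter = mrt - mdt                                     # non-decision time
    return a, v, ter, mdt

# Worked example from the EZ-diffusion literature (not data from this study):
a, v, ter, mdt = ez_diffusion(pc=0.802, vrt=0.112, mrt=0.723)
# → a ≈ 0.14, v ≈ 0.10, Ter ≈ 0.30
```

An increase in a, as reported for real SMA stimulation, corresponds to more cautious responding: more evidence is required before either response boundary is reached.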
Affiliation(s)
- Alberto Carpio
- Department of Psychology, School of Health Sciences, Universidad Rey Juan Carlos, Av. Atenas S/N, 28922, Alcorcón, Madrid, Spain
- Jean-Claude Dreher
- Neuroeconomics, Reward and Decision-Making Team, Centre National de La Recherche Scientifique, Institut Des Sciences Cognitives Marc Jeannerod, UMR 5229, 69675, Bron, France
- David Ferrera
- Department of Psychology, School of Health Sciences, Universidad Rey Juan Carlos, Av. Atenas S/N, 28922, Alcorcón, Madrid, Spain
- Diego Galán
- Department of Psychology, School of Health Sciences, Universidad Rey Juan Carlos, Av. Atenas S/N, 28922, Alcorcón, Madrid, Spain
- Francisco Mercado
- Department of Psychology, School of Health Sciences, Universidad Rey Juan Carlos, Av. Atenas S/N, 28922, Alcorcón, Madrid, Spain
- Ignacio Obeso
- HM Hospitales - Centro Integral de Neurociencias HM CINAC, HM Hospitales Puerta del Sur, Móstoles, Madrid, Spain
- CINC-CSIC, Avda Leon S/N, 28805, Alcalá de Henares, Madrid, Spain
2. Priorelli M, Pezzulo G, Stoianov IP. Active Vision in Binocular Depth Estimation: A Top-Down Perspective. Biomimetics (Basel) 2023;8:445. PMID: 37754196; PMCID: PMC10526497; DOI: 10.3390/biomimetics8050445.
Abstract
Depth estimation is an ill-posed problem; objects of different shapes or dimensions at different distances may project to the same image on the retina. Our brain uses several cues for depth estimation, including monocular cues such as motion parallax and binocular cues such as diplopia. However, it remains unclear how the computations required for depth estimation are implemented in biologically plausible ways. State-of-the-art approaches to depth estimation based on deep neural networks implicitly describe the brain as a hierarchical feature detector. In this paper we instead propose an alternative approach that casts depth estimation as a problem of active inference. We show that depth can be inferred by inverting a hierarchical generative model that simultaneously predicts the eyes' projections from a 2D belief over an object. Model inversion consists of a series of biologically plausible homogeneous transformations based on Predictive Coding principles. Under the plausible assumption of nonuniform foveal resolution, depth estimation favors an active vision strategy that fixates the object with the eyes, rendering the depth belief more accurate. This strategy does not first fixate a target and then estimate its depth; instead, it combines the two processes through action-perception cycles, with a mechanism similar to that of saccades during object recognition. The proposed approach requires only local (top-down and bottom-up) message passing, which can be implemented in biologically plausible neural circuits.
Affiliation(s)
- Matteo Priorelli
- Institute of Cognitive Sciences and Technologies, National Research Council of Italy, 35137 Padova, Italy
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council of Italy, 00185 Rome, Italy
- Ivilin Peev Stoianov
- Institute of Cognitive Sciences and Technologies, National Research Council of Italy, 35137 Padova, Italy
3. Fulvio JM, Rokers B, Samaha J. Task feedback suggests a post-perceptual component to serial dependence. J Vis 2023;23:6. PMID: 37682557; PMCID: PMC10500366; DOI: 10.1167/jov.23.10.6.
Abstract
Decisions across a range of perceptual tasks are biased toward past stimuli. Such serial dependence is thought to be an adaptive low-level mechanism that promotes perceptual stability across time. However, recent studies suggest post-perceptual mechanisms may also contribute to serially biased responses, calling into question a single locus of serial dependence and the nature of integration of past and present sensory inputs. We measured serial dependence in the context of a three-dimensional (3D) motion perception task where uncertainty in the sensory information varied substantially from trial to trial. We found that serial dependence varied with stimulus properties that impact sensory uncertainty on the current trial. Reduced stimulus contrast was associated with an increased bias toward the stimulus direction of the previous trial. Critically, performance feedback, which reduced sensory uncertainty, abolished serial dependence. These results provide clear evidence for a post-perceptual locus of serial dependence in 3D motion perception and support the role of serial dependence as a response strategy in the face of substantial sensory uncertainty.
Affiliation(s)
- Bas Rokers
- Department of Psychology, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Department of Psychology and Center for Neural Science, New York University, New York, NY, USA
- Jason Samaha
- Department of Psychology, University of California, Santa Cruz, Santa Cruz, CA, USA
4. Stereopsis provides a constant feed to visual shape representation. Vision Res 2023;204:108175. PMID: 36571983; DOI: 10.1016/j.visres.2022.108175.
Abstract
The contribution of stereopsis in human visual shape perception was examined using stimuli with either null, normal, or reversed binocular disparity in an old/new object recognition task. The highest levels of recognition performance were observed with null and normal binocular disparity displays, which did not differ. However, reversed disparity led to significantly worse performance than either of the other display conditions. This indicates that stereopsis provides a continuous input to the mechanisms involved in shape perception.
5. Hibbard PB, Goutcher R, Hornsey RL, Hunter DW, Scarfe P. Luminance contrast provides metric depth information. R Soc Open Sci 2023;10:220567. PMID: 36816842; PMCID: PMC9929495; DOI: 10.1098/rsos.220567.
Abstract
The perception of depth from retinal images depends on information from multiple visual cues. One potential depth cue is the statistical relationship between luminance and distance; darker points in a local region of an image tend to be farther away than brighter points. We establish that this statistical relationship acts as a quantitative cue to depth. We show that luminance variations affect perceived depth in naturalistic scenes containing multiple depth cues. This occurred when the correlation between variations of luminance and depth was manipulated within an object, but not between objects, consistent with the local nature of the statistical relationship in natural scenes. We also show that perceived depth increases as contrast is increased, but only when the depth signalled by luminance and binocular disparity is consistent. Our results show that the negative correlation between luminance and distance, as found under diffuse lighting, provides a depth cue that is combined with depth from binocular disparity in a way that is consistent with the simultaneous estimation of surface depth and reflectance variations. Adopting more complex lighting models, such as ambient occlusion, in computer rendering will thus contribute to the accuracy as well as the aesthetic appearance of three-dimensional graphics.
Affiliation(s)
- Paul B. Hibbard
- Department of Psychology, University of Essex, Colchester, Essex, UK
- Ross Goutcher
- Psychology Division, Faculty of Natural Sciences, University of Stirling, Stirling, UK
- David W. Hunter
- Department of Computer Science, Aberystwyth University, Aberystwyth, UK
- Peter Scarfe
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, Berkshire, UK
6. Johnsdorf M, Kisker J, Gruber T, Schöne B. Comparing encoding mechanisms in realistic virtual reality and conventional 2D laboratory settings: Event-related potentials in a repetition suppression paradigm. Front Psychol 2023;14:1051938. PMID: 36777234; PMCID: PMC9912617; DOI: 10.3389/fpsyg.2023.1051938.
Abstract
Although the human brain is adapted to function within three-dimensional environments, conventional laboratory research commonly investigates cognitive mechanisms in a reductionist approach using two-dimensional stimuli. However, findings regarding mnemonic processes indicate that realistic experiences in Virtual Reality (VR) are stored in richer and more intertwined engrams than those formed in the conventional laboratory. Our study aimed to further investigate the generalizability of laboratory findings and to determine whether the processes underlying memory formation differ between VR and the conventional laboratory as early as the initial encoding stages. We therefore investigated the Repetition Suppression (RS) effect, a correlate of the earliest instance of mnemonic processing, under conventional laboratory conditions and in a realistic virtual environment. Analyses of event-related potentials (ERPs) indicate that ERP deflections at several electrode clusters were lower in VR than in the PC condition. These results indicate an optimized distribution of cognitive resources in realistic contexts. The typical RS effect was replicated under both conditions at most electrode clusters in a late time window. Additionally, a specific RS effect was found in VR at anterior electrodes in a later time window, indicating more extensive encoding processes in VR than in the laboratory. Specifically, electrotomographic results (VARETA) indicate multimodal integration involving a broad cortical network and higher cognitive processes during the encoding of realistic objects. Our data suggest that object perception under realistic conditions, in contrast to the conventional laboratory, requires multisensory integration involving an interconnected functional system, facilitating the formation of intertwined memory traces in realistic environments.
7. Xi S, Zhou Y, Yao J, Ye X, Zhang P, Wen W, Zhao C. Cortical Deficits are Correlated with Impaired Stereopsis in Patients with Strabismus. Neurosci Bull 2022. DOI: 10.1007/s12264-022-00987-7.
Abstract
In this study, we explored the neural mechanism underlying impaired stereopsis and possible functional plasticity after strabismus surgery. We enrolled 18 stereo-deficient patients with intermittent exotropia before and after surgery, along with 18 healthy controls. Functional magnetic resonance imaging data were collected when participants viewed three-dimensional stimuli. Compared with controls, preoperative patients showed hypoactivation in higher-level dorsal (visual and parietal) areas and ventral visual areas. Pre- and postoperative activation did not significantly differ in patients overall; patients with improved stereopsis showed stronger postoperative activation than preoperative activation in the right V3A and left intraparietal sulcus. Worse stereopsis and fusional control were correlated with preoperative hypoactivation, suggesting that cortical deficits along the two streams might reflect impaired stereopsis in intermittent exotropia. The correlation between improved stereopsis and activation in the right V3A after surgery indicates that functional plasticity may underlie the improvement of stereopsis. Thus, additional postoperative strategies are needed to promote functional plasticity and enhance the recovery of stereopsis.
8. Poom L, Matin M. Priming and reversals of the perceived ambiguous orientation of a structure-from-motion shape and relation to personality traits. PLoS One 2022;17:e0273772. PMID: 36018885; PMCID: PMC9417019; DOI: 10.1371/journal.pone.0273772.
Abstract
We demonstrate contributions of top-down and bottom-up influences in perception, explored through priming and through counts of perceived reversals and mixed percepts, probed with an ambiguously slanted structure-from-motion (SFM) test cylinder. We included three different disambiguated primes: an SFM cylinder, a still image of a cylinder, and an imagined cylinder. In Experiment 1, where the prime and test sequentially occupied the same location, we also administered questionnaires on the Big-5 trait openness and on vividness of visual imagery to probe possible relations to top-down driven priming. Since influences of gaze or position in the prime conditions of Experiment 1 could not be ruled out completely, in Experiment 2 the test cylinder appeared at a randomly chosen position after the prime. In Experiment 2 we also measured the number of perceptual reversals and mixed percepts during prolonged viewing of our ambiguous SFM cylinder, and administered questionnaires measuring all Big-5 traits, autism, spatial and object imagery, and rational or experiential cognitive styles, associated with bottom-up and top-down processes. The results revealed contributions of position-invariant and cue-invariant priming. In addition, residual contributions of low-level priming were found when the prime and test were both defined by SFM and presented at the same location, and the correlations between SFM priming and the other two priming conditions were weaker than the correlation between pictorial and imagery priming. As previously found with ambiguous binocular rivalry stimuli, we found positive correlations between mixed percepts and the Big-5 dimension openness to experience, and between reversals, mixed percepts and neuroticism. Surprisingly, no correlations were obtained between vividness-of-imagery scores and the influence of any of the primes. An intriguing finding was the significant difference between the positive correlation of experiential cognitive-style scores and the negative correlation of rational-style scores with cue-invariant priming. Among other results, negative correlations between agreeableness and all priming conditions were obtained. These results not only support the notion of multiple processes involved in the perception of ambiguous SFM, but also link these perceptual processes to specific personality traits.
Affiliation(s)
- Leo Poom
- Department of Psychology, Uppsala University, Uppsala, Sweden
- Melina Matin
- Department of Psychology, Uppsala University, Uppsala, Sweden
9. Chen X, Liao M, Jiang P, Sun H, Liu L, Gong Q. Abnormal effective connectivity in visual cortices underlies stereopsis defects in amblyopia. Neuroimage Clin 2022;34:103005. PMID: 35421811; PMCID: PMC9011166; DOI: 10.1016/j.nicl.2022.103005.
Abstract
Highlights:
- Abnormal effective connectivity underlying stereopsis defects in amblyopia was studied.
- A weakened connection from V2v to LO2 relates to stereopsis defects in amblyopia.
- Higher-order visual cortices may serve as key nodes in the stereopsis defects.
- An independent longitudinal dataset was used to validate the obtained results.
The neural basis underlying stereopsis defects in patients with amblyopia remains unclear, which hinders the development of clinical therapy. This study aimed to investigate visual network abnormalities in patients with amblyopia and their associations with stereopsis function. Spectral dynamic causal modeling methods were employed for resting-state functional magnetic resonance imaging data to investigate the effective connectivity (EC) among 14 predefined regions of interest in the dorsal and ventral visual pathways. We adopted two independent datasets, including a cross-sectional and a longitudinal dataset. In the cross-sectional dataset, we compared group differences in EC between 31 patients with amblyopia (mean age: 26.39 years old) and 31 healthy controls (mean age: 25.71 years old) and investigated the association between EC and stereoacuity. In addition, we explored EC changes after perceptual learning in a novel longitudinal dataset including 9 patients with amblyopia (mean age: 15.78 years old). We found consistent evidence from the two datasets indicating that the aberrant EC from V2v to LO2 is crucial for the stereoscopic deficits in the patients with amblyopia: it was weaker in the patients than in the controls, showed a positive linear relationship with the stereoscopic function, and increased after perceptual learning in the patients. In addition, higher-level dorsal (V3d, V3A, and V3B) and ventral areas (LO1 and LO2) were important nodes in the network of abnormal ECs associated with stereoscopic deficits in the patients with amblyopia. Our research provides insights into the neural mechanism underlying stereopsis deficits in patients with amblyopia and provides candidate targets for focused stimulus interventions to enhance the efficacy of clinical treatment for the improvement of stereopsis deficiency.
Affiliation(s)
- Xia Chen
- Department of Optometry and Visual Science, West China Hospital, Sichuan University, Chengdu, China
- Meng Liao
- Department of Optometry and Visual Science, West China Hospital, Sichuan University, Chengdu, China; Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
- Ping Jiang
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China; Research Unit of Psychoradiology, Chinese Academy of Medical Sciences, Chengdu, China; Functional and Molecular Imaging Key Laboratory of Sichuan Province, Chengdu, China
- Huaiqiang Sun
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China; Imaging Research Core Facilities, West China Hospital of Sichuan University, Chengdu, Sichuan, China
- Longqian Liu
- Department of Optometry and Visual Science, West China Hospital, Sichuan University, Chengdu, China; Department of Ophthalmology, West China Hospital, Sichuan University, Chengdu, China
- Qiyong Gong
- Huaxi MR Research Center (HMRRC), Department of Radiology, West China Hospital of Sichuan University, Chengdu, China; Research Unit of Psychoradiology, Chinese Academy of Medical Sciences, Chengdu, China; Functional and Molecular Imaging Key Laboratory of Sichuan Province, Chengdu, China
10. Xie D, Yin K, Yang ZJ, Huang H, Li X, Shu Z, Duan H, He J, Jiang J. Polarization-perceptual anisotropic two-dimensional ReS2 neuro-transistor with reconfigurable neuromorphic vision. Mater Horiz 2022;9:1448-1459. PMID: 35234765; DOI: 10.1039/d1mh02036f.
Abstract
Polarization is a common and unique phenomenon in nature that can reveal camouflaged features of objects. However, current polarization-perceptual devices based on conventional physical architectures face enormous challenges for high-performance computation due to the von Neumann bottleneck. In this work, a novel polarization-perceptual neuro-transistor with reconfigurable anisotropic vision is proposed based on a two-dimensional ReS2 phototransistor. The device exhibits excellent photodetection ability and superior polarization sensitivity due to its direct-band-gap semiconductor property and strongly anisotropic crystal structure, respectively. Fascinating polarization-sensitive neuromorphic behaviors, such as polarization memory consolidation and reconfigurable visual imaging, are successfully realized. In particular, regulated polarization responsivity and dichroic ratio are successfully emulated through artificial compound eyes. More importantly, two intriguing polarization-perceptual applications, polarized navigation with reconfigurable adaptive learning abilities and three-dimensional visual polarization imaging, are also experimentally demonstrated. The proposed device may provide a promising opportunity for future polarization perception systems in intelligent humanoid robots and autonomous vehicles.
Affiliation(s)
- Dingdong Xie
- Hunan Key Laboratory of Nanophotonics and Devices, School of Physics and Electronics, Central South University, 932 South Lushan Road, Changsha, Hunan 410083, P. R. China
- Kai Yin
- Hunan Key Laboratory of Nanophotonics and Devices, School of Physics and Electronics, Central South University, 932 South Lushan Road, Changsha, Hunan 410083, P. R. China
- Zhong-Jian Yang
- Hunan Key Laboratory of Nanophotonics and Devices, School of Physics and Electronics, Central South University, 932 South Lushan Road, Changsha, Hunan 410083, P. R. China
- Han Huang
- Hunan Key Laboratory of Nanophotonics and Devices, School of Physics and Electronics, Central South University, 932 South Lushan Road, Changsha, Hunan 410083, P. R. China
- Xiaohui Li
- School of Physics and Information Technology, Shanxi Normal University, Xi'an 710119, P. R. China
- Zhiwen Shu
- State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body, College of Mechanical and Vehicle Engineering, Hunan University, Changsha 410082, P. R. China
- Huigao Duan
- State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body, College of Mechanical and Vehicle Engineering, Hunan University, Changsha 410082, P. R. China
- Jun He
- Hunan Key Laboratory of Nanophotonics and Devices, School of Physics and Electronics, Central South University, 932 South Lushan Road, Changsha, Hunan 410083, P. R. China
- Jie Jiang
- Hunan Key Laboratory of Nanophotonics and Devices, School of Physics and Electronics, Central South University, 932 South Lushan Road, Changsha, Hunan 410083, P. R. China
11. Kase SE, Hung CP, Krayzman T, Hare JZ, Rinderspacher BC, Su SM. The Future of Collaborative Human-Artificial Intelligence Decision-Making for Mission Planning. Front Psychol 2022;13:850628. PMID: 35444590; PMCID: PMC9014866; DOI: 10.3389/fpsyg.2022.850628.
Abstract
In an increasingly complex military operating environment, next generation wargaming platforms can reduce risk, decrease operating costs, and improve overall outcomes. Novel Artificial Intelligence (AI) enabled wargaming approaches, based on software platforms with multimodal interaction and visualization capacity, are essential to provide the decision-making flexibility and adaptability required to meet current and emerging realities of warfighting. We highlight three areas of development for future warfighter-machine interfaces: AI-directed decisional guidance, computationally informed decision-making, and realistic representations of decision spaces. Progress in these areas will enable development of effective human-AI collaborative decision-making, to meet the increasing scale and complexity of today's battlespace.
Affiliation(s)
- Sue E. Kase
- U.S. Army Combat Capabilities Development Command – Army Research Laboratory, Aberdeen Proving Ground, MD, United States
- Chou P. Hung
- U.S. Army Combat Capabilities Development Command – Army Research Laboratory, Aberdeen Proving Ground, MD, United States
- Tomer Krayzman
- U.S. Army Combat Capabilities Development Command – Army Research Laboratory, Aberdeen Proving Ground, MD, United States
- Oak Ridge Affiliated Universities, Oak Ridge, TN, United States
- Department of Computer Science, University of Maryland, College Park, MD, United States
- James Z. Hare
- U.S. Army Combat Capabilities Development Command – Army Research Laboratory, Adelphi, MD, United States
- B. Christopher Rinderspacher
- U.S. Army Combat Capabilities Development Command – Army Research Laboratory, Aberdeen Proving Ground, MD, United States
- Simon M. Su
- U.S. Army Combat Capabilities Development Command – Army Research Laboratory, Aberdeen Proving Ground, MD, United States
- National Institute of Standards and Technology, Gaithersburg, MD, United States
12. Duan Y, Thatte J, Yaklovleva A, Norcia AM. Disparity in Context: Understanding how monocular image content interacts with disparity processing in human visual cortex. Neuroimage 2021;237:118139. PMID: 33964460; PMCID: PMC10786599; DOI: 10.1016/j.neuroimage.2021.118139.
Abstract
Horizontal disparities between the two eyes' retinal images are the primary cue for depth. Commonly used random dot stereograms (RDS) intentionally camouflage the disparity cue, breaking the correlations between monocular image structure and the depth map that are present in natural images. Because of the nonlinear nature of visual processing, it is unlikely that simple computational rules derived from RDS will be sufficient to explain binocular vision in natural environments. In order to understand the interplay between natural scene structure and disparity encoding, we used a depth-image-based rendering technique and a library of natural 3D stereo pairs to synthesize two novel stereogram types in which monocular scene content was manipulated independent of scene depth information. The half-images of the novel stereograms comprised either random dots or scrambled natural scenes, each with the same depth maps as the corresponding natural-scene stereograms. Using these stereograms in a simultaneous event-related potential and behavioral discrimination task, we identified multiple disparity-contingent encoding stages between ~100 and 500 msec. The first disparity-sensitive evoked potential was observed at ~100 msec, after an earlier evoked potential (~50-100 msec) that was sensitive to the structure of the monocular half-images but blind to disparity. Starting at ~150 msec, disparity responses were stereogram-specific and predictive of perceptual depth. Complex features associated with natural scene content are thus at least partially coded prior to disparity information, but these features, and possibly others associated with natural scene content, interact with disparity information only after an intermediate, 2D scene-independent disparity processing stage.
Affiliation(s)
- Yiran Duan
- Wu Tsai Neurosciences Institute, 290 Jane Stanford Way, Stanford, CA 94305
- Jayant Thatte
- Department of Electrical Engineering, David Packard Building, Stanford University, 350 Jane Stanford Way, Stanford, CA 94305
- Anthony M Norcia
- Wu Tsai Neurosciences Institute, 290 Jane Stanford Way, Stanford, CA 94305
13. Ishioka T, Hirayama K, Hosokai Y, Takeda A, Suzuki K, Nishio Y, Sawada Y, Abe N, Mori E. Impaired perception of illusory contours and cortical hypometabolism in patients with Parkinson's disease. Neuroimage Clin 2021;32:102779. PMID: 34418792; PMCID: PMC8385116; DOI: 10.1016/j.nicl.2021.102779.
Abstract
Highlights:
- We assessed the perception of illusory contours in patients with PD.
- PD patients showed difficulty in perceiving Kanizsa illusory figures.
- Impaired perception of Kanizsa illusory figures was related to LOC hypometabolism.
Neuroimaging evidence suggests that areas of the higher-order visual cortex, including the lateral occipital complex (LOC), are engaged in the perception of illusory contours; however, these findings remain unsubstantiated by human lesion data. Therefore, we assessed the presentation time necessary to perceive two types of illusory contours formed by Kanizsa figures or aligned line ends in patients with Parkinson's disease (PD). Additionally, we used 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) to measure regional cerebral glucose metabolism in PD patients. Although there were no significant differences in the stimulus durations required for perception of illusory contours formed by aligned line ends between PD patients and controls, PD patients required significantly longer stimulus durations for the perception of Kanizsa illusory figures. Difficulty in perceiving Kanizsa illusory figures was correlated with hypometabolism in the higher-order visual cortical areas, including the posterior inferior temporal gyrus. These findings indicate an association between dysfunction in the posterior inferior temporal gyrus, a region corresponding to a portion of the LOC, and impaired perception of Kanizsa illusory figures in PD patients.
Affiliation(s)
- Toshiyuki Ishioka
- Department of Occupational Therapy, School of Health and Social Services, Saitama Prefectural University, Japan; Department of Behavioral Neurology and Cognitive Neuroscience, Graduate School of Medicine, Tohoku University, Japan.
- Kazumi Hirayama
- Department of Behavioral Neurology and Cognitive Neuroscience, Graduate School of Medicine, Tohoku University, Japan; Department of Occupational Therapy, Yamagata Prefectural University of Health Science, Japan
- Yoshiyuki Hosokai
- Department of Behavioral Neurology and Cognitive Neuroscience, Graduate School of Medicine, Tohoku University, Japan; Department of Radiological Sciences, International University of Health and Welfare, Japan
- Atsushi Takeda
- Department of Neurology, Sendai Nishitaga Hospital, Japan
- Kyoko Suzuki
- Department of Behavioral Neurology and Cognitive Neuroscience, Graduate School of Medicine, Tohoku University, Japan
- Yoshiyuki Nishio
- Department of Behavioral Neurology and Cognitive Neuroscience, Graduate School of Medicine, Tohoku University, Japan; Department of Psychiatry, Tokyo Metropolitan Matsuzawa Hospital, Japan
- Yoichi Sawada
- Department of Behavioral Neurology and Cognitive Neuroscience, Graduate School of Medicine, Tohoku University, Japan; Department of Health and Welfare Science, Okayama Prefectural University, Japan
- Nobuhito Abe
- Department of Behavioral Neurology and Cognitive Neuroscience, Graduate School of Medicine, Tohoku University, Japan; Kokoro Research Center, Kyoto University, Japan
- Etsuro Mori
- Department of Behavioral Neurology and Cognitive Neuroscience, Graduate School of Medicine, Tohoku University, Japan; Department of Behavioral Neurology and Neuropsychiatry, United Graduate School of Child Development, Osaka University, Japan
14
Alvarez I, Hurley SA, Parker AJ, Bridge H. Human primary visual cortex shows larger population receptive fields for binocular disparity-defined stimuli. Brain Struct Funct 2021; 226:2819-2838. [PMID: 34347164 PMCID: PMC8541985 DOI: 10.1007/s00429-021-02351-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2021] [Accepted: 07/22/2021] [Indexed: 11/26/2022]
Abstract
The visual perception of 3D depth is underpinned by the brain's ability to combine signals from the left and right eyes to produce a neural representation of binocular disparity for perception and behaviour. Electrophysiological studies of binocular disparity over the past 2 decades have investigated the computational role of neurons in area V1 for binocular combination, while more recent neuroimaging investigations have focused on identifying specific roles for different extrastriate visual areas in depth perception. Here we investigate the population receptive field properties of neural responses to binocular information in striate and extrastriate cortical visual areas using ultra-high field fMRI. We measured BOLD fMRI responses while participants viewed retinotopic mapping stimuli defined by different visual properties: contrast, luminance, motion, correlated and anti-correlated stereoscopic disparity. By fitting each condition with a population receptive field model, we compared quantitatively the size of the population receptive field for disparity-specific stimulation. We found larger population receptive fields for disparity compared with contrast and luminance in area V1, the first stage of binocular combination, which likely reflects the binocular integration zone, an interpretation supported by modelling of the binocular energy model. A similar pattern was found in region LOC, where it may reflect the role of disparity as a cue for 3D shape. These findings provide insight into the binocular receptive field properties underlying processing for human stereoscopic vision.
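The population receptive field approach used here fits each voxel's response with a model receptive field in visual-field coordinates. As an illustration of the core idea only (not the authors' implementation, and with invented grid sizes and parameters), a minimal sketch of the standard 2D Gaussian pRF model, in which the predicted response is the overlap between a binary stimulus aperture and the Gaussian:

```python
import numpy as np

def gaussian_prf(x0, y0, sigma, xs, ys):
    """2D Gaussian population receptive field, normalized to unit sum."""
    g = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def predicted_response(prf, frames):
    """Pre-HRF predicted response: overlap of each binary stimulus
    aperture with the pRF, one value per stimulus frame."""
    return np.array([np.sum(prf * f) for f in frames])

# Toy example: 16 x 16 deg visual field, a 2-deg-wide bar sweeping left to right.
xs, ys = np.meshgrid(np.linspace(-8, 8, 64), np.linspace(-8, 8, 64))
prf = gaussian_prf(x0=2.0, y0=0.0, sigma=1.5, xs=xs, ys=ys)
centers = np.linspace(-8, 8, 17)
frames = [(np.abs(xs - c) < 1.0).astype(float) for c in centers]
resp = predicted_response(prf, frames)
# resp peaks when the bar is centred on the pRF centre (x = 2 deg).
```

Fitting proceeds by searching over (x0, y0, sigma) for the prediction that best matches each voxel's time course; the comparison in the paper is between the sigma estimates obtained under different stimulus-defining cues.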
Affiliation(s)
- Ivan Alvarez
- Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, OX3 9DU, UK
- Samuel A Hurley
- Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, OX3 9DU, UK
- Department of Radiology, University of Wisconsin, Madison, WI, 53705, USA
- Andrew J Parker
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, OX1 3PT, UK
- Institut für Biologie, Otto-von-Guericke Universität, 39120, Magdeburg, Germany
- Holly Bridge
- Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, OX3 9DU, UK.
15
Li Z. Unique Neural Activity Patterns Among Lower Order Cortices and Shared Patterns Among Higher Order Cortices During Processing of Similar Shapes With Different Stimulus Types. Iperception 2021; 12:20416695211018222. [PMID: 34104383 PMCID: PMC8161881 DOI: 10.1177/20416695211018222] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2020] [Accepted: 04/28/2021] [Indexed: 11/16/2022] Open
Abstract
We investigated the neural mechanism of the processing of three-dimensional (3D) shapes defined by disparity and perspective. We measured blood oxygenation level-dependent signals as participants viewed and classified 3D images of convex-concave shapes. According to the cue (disparity or perspective) and element type (random dots or black and white dotted lines), three types of stimuli were used: random dot stereogram, black and white dotted lines with perspective, and black and white dotted lines with binocular disparity. The blood oxygenation level-dependent images were then classified by multivoxel pattern analysis. To identify areas selective to shape, we assessed convex-concave classification accuracy with classifiers trained and tested using signals evoked by the same stimulus type (same cue and element type). To identify cortical regions with similar neural activity patterns regardless of stimulus type, we assessed the convex-concave classification accuracy of transfer classification in which classifiers were trained and tested using different stimulus types (different cues or element types). Classification accuracy using the same stimulus type was high in the early visual areas and subregions of the intraparietal sulcus (IPS), whereas transfer classification accuracy was high in the dorsal subregions of the IPS. These results indicate that the early visual areas process the specific features of stimuli, whereas the IPS regions perform more generalized processing of 3D shapes, independent of a specific stimulus type.
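The transfer-classification logic described above (train a classifier on patterns evoked by one stimulus type, test it on another) can be illustrated with a toy simulation. This is a sketch with invented voxel patterns and a simple nearest-centroid classifier, not the authors' MVPA pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patterns(n_trials, signal, noise_sd=1.0):
    """Simulated voxel patterns: convex trials add +signal, concave add -signal."""
    labels = np.repeat([1, -1], n_trials // 2)
    data = labels[:, None] * signal[None, :] \
        + noise_sd * rng.standard_normal((n_trials, signal.size))
    return data, labels

def nearest_centroid_accuracy(train_X, train_y, test_X, test_y):
    """Fit class centroids on one dataset, score predictions on another."""
    c_pos = train_X[train_y == 1].mean(axis=0)
    c_neg = train_X[train_y == -1].mean(axis=0)
    pred = np.where(((test_X - c_pos) ** 2).sum(axis=1)
                    < ((test_X - c_neg) ** 2).sum(axis=1), 1, -1)
    return float((pred == test_y).mean())

n_vox = 50
shared = 0.3 * rng.standard_normal(n_vox)              # shape code common to both cues
sig_rds = shared + 0.3 * rng.standard_normal(n_vox)    # plus an RDS-specific component
sig_persp = shared + 0.3 * rng.standard_normal(n_vox)  # plus a perspective-specific one

X_rds, y_rds = make_patterns(200, sig_rds)
X_p, y_p = make_patterns(200, sig_persp)

within = nearest_centroid_accuracy(X_rds, y_rds, X_rds, y_rds)   # same stimulus type
transfer = nearest_centroid_accuracy(X_rds, y_rds, X_p, y_p)     # train RDS, test perspective
```

In this simulation, above-chance transfer accuracy arises only from the shared component, mirroring the paper's inference that regions with high transfer accuracy (dorsal IPS) carry a stimulus-type-independent shape code.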
Affiliation(s)
- Zhen Li
- Department of Psychology, The University of Hong Kong, Hong Kong, China; Graduate School of Engineering, Kochi University of Technology, Kochi, Japan
16
Markov YA, Tiurina NA. Size-distance rescaling in the ensemble representation of range: Study with binocular and monocular cues. Acta Psychol (Amst) 2021; 213:103238. [PMID: 33387867 DOI: 10.1016/j.actpsy.2020.103238] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2020] [Revised: 10/08/2020] [Accepted: 12/09/2020] [Indexed: 11/15/2022] Open
Abstract
Numerous studies suggest that observers can rapidly and precisely evaluate the mean or the range of a set. Recent work has shown that mean size is estimated from object sizes rescaled according to their distances (Tiurina & Utochkin, 2019). In the current study, we directly tested this rescaling mechanism in the perception of range using binocular and monocular cues. In Experiment 1, a sample set of circles with different angular sizes and at different apparent distances was presented stereoscopically. Participants had to adjust the range of a test set to match the range of the sample set. The main manipulation was the size-distance correlation in the sample and test sets: with a negative size-distance correlation the apparent range had to decrease, whereas with a positive correlation it had to increase. We found the strongest underestimation in the condition with a negative sample correlation and a positive test correlation, which can be explained only if ensemble summary statistics were estimated after the items' rescaling. In Experiment 2, we used a Ponzo-like illusion and spatial position as a depth cue. Sets were presented with a positive, a negative, or no size-distance correlation, on a grey background or on a background with the Ponzo-like illusion. We found that the range was underestimated with a negative correlation and overestimated with a positive correlation. Thus, items of an ensemble can be automatically rescaled according to their distances, based on both binocular and monocular cues, and ensemble summary statistics are estimated from perceived sizes.
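The rescaling mechanism tested here (recovering the implied physical size of an object from its angular size and apparent distance) follows from simple trigonometry: S = 2·D·tan(θ/2). A minimal sketch with hypothetical values, the function name being our own:

```python
import math

def physical_size(angular_size_deg, distance_cm):
    """Physical extent implied by an angular size at a given viewing distance."""
    theta = math.radians(angular_size_deg)
    return 2 * distance_cm * math.tan(theta / 2)

# Two circles with the SAME angular size (2 deg) at different apparent distances:
near = physical_size(2.0, 50)    # about 1.75 cm
far = physical_size(2.0, 100)    # exactly twice as large, since size is linear in distance
```

Because implied size grows linearly with distance, a negative size-distance correlation in a set (bigger angular sizes placed nearer) compresses the perceived range after rescaling, and a positive correlation expands it, which is the manipulation used in the experiments above.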
Affiliation(s)
- Yuri A Markov
- National Research University Higher School of Economics, Russia.
17
Abstract
With the increasing popularity of consumer virtual reality (VR) headsets for research and other applications, it is important to understand the accuracy of 3D perception in VR. We investigated the perceptual accuracy of near-field virtual distances using a size and shape constancy task in two commercially available devices. Participants wore either the HTC Vive or the Oculus Rift and adjusted the size of a virtual stimulus to match the geometric qualities (size and depth) of a physical stimulus they could refer to haptically. Participants' judgments provided an indirect measure of their perception of the egocentric virtual distance to the stimuli. The data show under-constancy and are consistent with research using carefully calibrated psychophysical techniques. There was no difference in the degree of constancy found in the two headsets. We conclude that consumer virtual reality headsets provide sufficiently accurate distance perception to be used confidently in future experimental vision science and other research applications in psychology.
18
Zheng H, Yao L, Chen M, Long Z. 3D Contrast Image Reconstruction From Human Brain Activity. IEEE Trans Neural Syst Rehabil Eng 2020; 28:2699-2710. [PMID: 33147146 DOI: 10.1109/tnsre.2020.3035818] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Several studies have demonstrated that functional magnetic resonance imaging (fMRI) signals in early visual cortex can be used to reconstruct 2-dimensional (2D) visual contents. However, it remains unknown how to reconstruct 3-dimensional (3D) visual stimuli from fMRI signals in visual cortex. 3D visual stimuli contain 2D visual features and depth information, and binocular disparity is an important cue for depth perception, so reconstructing 3D visual stimuli from fMRI signals is more challenging than reconstructing 2D stimuli. This study aimed to reconstruct 3D visual images by constructing three decoding models: contrast-decoding, disparity-decoding and contrast-disparity-decoding models, and testing these models with fMRI data from humans viewing 3D contrast images. The results revealed that the 3D contrast stimuli can be reconstructed from the visual cortex. The early visual regions (V1, V2) showed predominant advantages in reconstructing the contrast in 3D images for the contrast-decoding model. The dorsal visual regions (V3A, V7 and MT) showed predominant advantages in decoding the disparity in 3D images for the disparity-decoding model. The combination of the early and dorsal visual regions showed predominant advantages in decoding both the contrast and disparity for the contrast-disparity-decoding model. The results suggest that the contrast and disparity in 3D images are mainly represented in the early and dorsal visual regions, respectively, and that the two visual systems may interact with each other to decode 3D-contrast images.
19
McCaslin AG, Vancleef K, Hubert L, Read JCA, Port N. Stereotest Comparison: Efficacy, Reliability, and Variability of a New Glasses-Free Stereotest. Transl Vis Sci Technol 2020; 9:29. [PMID: 32879785 PMCID: PMC7442860 DOI: 10.1167/tvst.9.9.29] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2019] [Accepted: 07/15/2020] [Indexed: 11/24/2022] Open
Abstract
Purpose To test the validity of the ASTEROID stereotest as a clinical test of depth perception by comparing it to clinical and research standard tests. Methods Thirty-nine subjects completed four stereotests twice: the ASTEROID test on an autostereo 3D tablet, a research standard on a VPixx PROPixx 3D projector, Randot Circles, and Randot Preschool. Within 14 days, subjects completed each test for a third time. Results ASTEROID stereo thresholds correlated well with research standard thresholds (r = 0.87, P < 0.001), although ASTEROID underestimated standard threshold (mean difference = 11 arcsec). ASTEROID results correlated less strongly with Randot Circles (r = 0.54, P < 0.001) and Randot Preschool (r = 0.64, P < 0.001), due to the greater measurement range of ASTEROID (1–1000 arcsec) compared to Randot Circles or Randot Preschool. Stereo threshold variability was low for all three clinical stereotests (Bland–Altman 95% limits of agreement between test and retest: ASTEROID, ±0.37; Randot Circles, ±0.24; Randot Preschool, ±0.23). ASTEROID captured the largest range of stereo in a normal population with test–retest reliability comparable to research standards (immediate r = 0.86 for ASTEROID vs. 0.90 for PROPixx; follow-up r = 0.68 for ASTEROID vs. 0.88 for PROPixx). Conclusions Compared to clinical and research standards for assessing depth perception, ASTEROID is highly accurate, has good test–retest reliability, and measures a wider range of stereo threshold. Translational Relevance The ASTEROID stereotest is a better clinical tool for determining baseline stereopsis and tracking changes during treatment for amblyopia and strabismus compared to current clinical tests.
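The Bland–Altman limits of agreement reported above are straightforward to compute: the bias is the mean test-retest difference, and the 95% limits are bias ± 1.96 SD of those differences. A sketch with invented threshold data (log10 units are assumed here, since stereo thresholds are conventionally analysed on a log scale):

```python
import numpy as np

def bland_altman_loa(test, retest):
    """Bland-Altman agreement: mean difference (bias) and 95% limits of agreement."""
    test, retest = np.asarray(test, float), np.asarray(retest, float)
    diff = test - retest
    bias = diff.mean()
    sd = diff.std(ddof=1)           # sample SD of the test-retest differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented log10 stereo thresholds (arcsec) for five observers, test vs retest:
test = np.log10([40, 80, 25, 160, 60])
retest = np.log10([50, 70, 30, 140, 65])
bias, (lo, hi) = bland_altman_loa(test, retest)
```

Narrow limits of agreement (as reported for all three clinical stereotests) indicate that a retest score rarely strays far from the original measurement.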
Affiliation(s)
- Kathleen Vancleef
- Institute of Neuroscience, Newcastle University, Newcastle Upon Tyne, UK
- Luke Hubert
- School of Optometry, Indiana University, Bloomington, IN, USA
- Jenny C A Read
- Institute of Neuroscience, Newcastle University, Newcastle Upon Tyne, UK
- Nicholas Port
- School of Optometry, Indiana University, Bloomington, IN, USA
20
Wang XM, Lind M, Bingham GP. A stratified process for the perception of objects: From optical transformations to 3D relief structure to 3D similarity structure to slant or aspect ratio. Vision Res 2020; 173:77-89. [PMID: 32480110 DOI: 10.1016/j.visres.2020.04.014] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2019] [Revised: 02/10/2020] [Accepted: 04/11/2020] [Indexed: 11/27/2022]
Abstract
Previously, we developed a stratified process for slant perception. First, optical transformations in structure-from-motion (SFM) and stereo were used to derive 3D relief structure (where depth scaling remains arbitrary). Second, with sufficient continuous perspective change (≥45°), a bootstrap process derived 3D similarity structure. Third, the perceived slant was derived. As predicted by theoretical work on SFM, small visual angle (<5°) viewing requires non-coplanar points. Slanted surfaces with small 3D cuboids or tetrahedrons yielded accurate judgment while planar surfaces did not. Normally, object perception entails non-coplanar points. Now, we apply the stratified process to object perception where, after deriving similarity structure, alternative metric properties of the object can be derived (e.g. slant of the top surface or width-to-depth aspect ratio). First, we tested slant judgments of the smooth planar tops of three different polyhedral objects. We tested rectangular, hexagonal, and asymmetric pentagonal surfaces, finding that symmetry was required to determine the direction of slant (AP&P, 2019, https://doi.org/10.3758/s13414-019-01859-5). Our current results replicated the previous findings. Second, we tested judgments of aspect ratios, finding accurate performance only for symmetric objects. Results from this study suggest that, first, trackable non-coplanar points can be attained in the form of 3D objects. Second, symmetry is necessary to constrain slant and aspect ratio perception. Finally, deriving 3D similarity structure precedes estimating object properties, such as slant or aspect ratio. Together, the evidence presented here supports the stratified bootstrap process for 3D object perception.
STATEMENT OF SIGNIFICANCE: Planning interactions with objects in the surrounding environment entails the perception of 3D shape and slant. Studying ways through which 3D metric shape and slant can be perceived accurately by moving observers not only sheds light on how the visual system works, but also provides understanding that can be applied to other fields, like machine vision or remote sensing. The current study is a logical extension of previous studies by the same authors and explores the roles of large continuous perspective changes, relief structure, and symmetry in a stratified process for object perception.
Affiliation(s)
- Xiaoye Michael Wang
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA; Center for Visual Research, York University, Toronto, ON, Canada.
- Mats Lind
- Department of Information Technology, Uppsala University, Uppsala, Sweden
- Geoffrey P Bingham
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA
21
Bootstrapping a better slant: A stratified process for recovering 3D metric slant. Atten Percept Psychophys 2020; 82:1504-1519. [DOI: 10.3758/s13414-019-01860-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
22
Cue-dependent effects of VR experience on motion-in-depth sensitivity. PLoS One 2020; 15:e0229929. [PMID: 32150569 PMCID: PMC7062262 DOI: 10.1371/journal.pone.0229929] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2019] [Accepted: 02/18/2020] [Indexed: 02/02/2023] Open
Abstract
The visual system exploits multiple signals, including monocular and binocular cues, to determine the motion of objects through depth. In the laboratory, sensitivity to different three-dimensional (3D) motion cues varies across observers and is often weak for binocular cues. However, laboratory assessments may reflect factors beyond inherent perceptual sensitivity. For example, the appearance of weak binocular sensitivity may relate to extensive prior experience with two-dimensional (2D) displays in which binocular cues are not informative. Here we evaluated the impact of experience on motion-in-depth (MID) sensitivity in a virtual reality (VR) environment. We tested a large cohort of observers who reported having no prior VR experience and found that binocular cue sensitivity was substantially weaker than monocular cue sensitivity. As expected, sensitivity was greater when monocular and binocular cues were presented together than in isolation. Surprisingly, the addition of motion parallax signals appeared to cause observers to rely almost exclusively on monocular cues. As observers gained experience in the VR task, sensitivity to monocular and binocular cues increased. Notably, most observers were unable to distinguish the direction of MID based on binocular cues above chance level when tested early in the experiment, whereas most showed statistically significant sensitivity to binocular cues when tested late in the experiment. This result suggests that observers may discount binocular cues when they are first encountered in a VR environment. Laboratory assessments may thus underestimate the sensitivity of inexperienced observers to MID, especially for binocular cues.
23
Optimized but Not Maximized Cue Integration for 3D Visual Perception. eNeuro 2020; 7:ENEURO.0411-19.2019. [PMID: 31836597 PMCID: PMC6948924 DOI: 10.1523/eneuro.0411-19.2019] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2019] [Revised: 12/05/2019] [Accepted: 12/08/2019] [Indexed: 02/02/2023] Open
Abstract
Reconstructing three-dimensional (3D) scenes from two-dimensional (2D) retinal images is an ill-posed problem. Despite this, 3D perception of the world based on 2D retinal images is seemingly accurate and precise. The integration of distinct visual cues is essential for robust 3D perception in humans, but it is unclear whether this is true for non-human primates (NHPs). Here, we assessed 3D perception in macaque monkeys using a planar surface orientation discrimination task. Perception was accurate across a wide range of spatial poses (orientations and distances), but precision was highly dependent on the plane's pose. The monkeys achieved robust 3D perception by dynamically reweighting the integration of stereoscopic and perspective cues according to their pose-dependent reliabilities. Errors in performance could be explained by a prior resembling the 3D orientation statistics of natural scenes. We used neural network simulations based on 3D orientation-selective neurons recorded from the same monkeys to assess how neural computation might constrain perception. The perceptual data were consistent with a model in which the responses of two independent neuronal populations representing stereoscopic cues and perspective cues (with perspective signals from the two eyes combined using nonlinear canonical computations) were optimally integrated through linear summation. Perception of combined-cue stimuli was optimal given this architecture. However, an alternative architecture in which stereoscopic cues, left eye perspective cues, and right eye perspective cues were represented by three independent populations yielded two times greater precision than the monkeys. This result suggests that, due to canonical computations, cue integration for 3D perception is optimized but not maximized.
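The optimal linear summation described above follows the standard maximum-likelihood cue-combination rule: each cue's estimate is weighted by its reliability (inverse variance), and the combined variance is the inverse of the summed reliabilities, so the integrated estimate is always at least as precise as the best single cue. A minimal sketch with invented slant estimates:

```python
def integrate_cues(estimates, sigmas):
    """Reliability-weighted (maximum-likelihood) linear cue combination.

    reliability r_i = 1 / sigma_i**2
    combined estimate = sum(r_i * e_i) / sum(r_i)
    combined variance = 1 / sum(r_i)
    """
    rs = [1.0 / s ** 2 for s in sigmas]
    total = sum(rs)
    combined = sum(r * e for r, e in zip(rs, estimates)) / total
    combined_sigma = (1.0 / total) ** 0.5
    return combined, combined_sigma

# Stereo says the plane is slanted 30 deg (sigma 4 deg);
# perspective says 38 deg (sigma 8 deg):
slant, sigma = integrate_cues([30.0, 38.0], [4.0, 8.0])
# The combined estimate is pulled toward the more reliable (stereo) cue,
# and the combined sigma is smaller than either single-cue sigma.
```

Dynamic reweighting, as reported for the monkeys, amounts to updating the sigma values with the pose-dependent reliability of each cue before applying this same rule.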
24
25
Decramer T, Premereur E, Uytterhoeven M, Van Paesschen W, van Loon J, Janssen P, Theys T. Single-cell selectivity and functional architecture of human lateral occipital complex. PLoS Biol 2019; 17:e3000280. [PMID: 31513563 PMCID: PMC6759181 DOI: 10.1371/journal.pbio.3000280] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2019] [Revised: 09/24/2019] [Accepted: 08/20/2019] [Indexed: 02/06/2023] Open
Abstract
The human lateral occipital complex (LOC) is more strongly activated by images of objects compared to scrambled controls, but detailed information at the neuronal level is currently lacking. We recorded with microelectrode arrays in the LOC of 2 patients and obtained highly selective single-unit, multi-unit, and high-gamma responses to images of objects. Contrary to predictions derived from functional imaging studies, all neuronal properties indicated that the posterior subsector of LOC we recorded from occupies an unexpectedly high position in the hierarchy of visual areas. Notably, the response latencies of LOC neurons were long, the shape selectivity was spatially clustered, LOC receptive fields (RFs) were large and bilateral, and a number of LOC neurons exhibited three-dimensional (3D)-structure selectivity (a preference for convex or concave stimuli), which are all properties typical of end-stage ventral stream areas. Thus, our results challenge prevailing ideas about the position of the more posterior subsector of LOC in the hierarchy of visual areas.
Affiliation(s)
- Thomas Decramer
- Laboratory for Neuro- and Psychophysiology, KU Leuven and the Leuven Brain Institute, Leuven, Belgium
- Department of Neurosurgery, University Hospitals Leuven, Leuven, Belgium
- Research Group Experimental Neurosurgery and Neuroanatomy, KU Leuven and the Leuven Brain Institute, Leuven, Belgium
- Elsie Premereur
- Laboratory for Neuro- and Psychophysiology, KU Leuven and the Leuven Brain Institute, Leuven, Belgium
- Mats Uytterhoeven
- Research Group Experimental Neurosurgery and Neuroanatomy, KU Leuven and the Leuven Brain Institute, Leuven, Belgium
- Wim Van Paesschen
- Department of Neurology, University Hospitals Leuven, Leuven, Belgium
- Laboratory for Epilepsy Research, KU Leuven, Leuven, Belgium
- Johannes van Loon
- Department of Neurosurgery, University Hospitals Leuven, Leuven, Belgium
- Research Group Experimental Neurosurgery and Neuroanatomy, KU Leuven and the Leuven Brain Institute, Leuven, Belgium
- Peter Janssen
- Laboratory for Neuro- and Psychophysiology, KU Leuven and the Leuven Brain Institute, Leuven, Belgium
- Tom Theys
- Department of Neurosurgery, University Hospitals Leuven, Leuven, Belgium
- Research Group Experimental Neurosurgery and Neuroanatomy, KU Leuven and the Leuven Brain Institute, Leuven, Belgium
26
Multivariate Analysis of BOLD Activation Patterns Recovers Graded Depth Representations in Human Visual and Parietal Cortex. eNeuro 2019; 6:ENEURO.0362-18.2019. [PMID: 31285275 PMCID: PMC6709213 DOI: 10.1523/eneuro.0362-18.2019] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2018] [Revised: 06/24/2019] [Accepted: 06/26/2019] [Indexed: 11/21/2022] Open
Abstract
Navigating through natural environments requires localizing objects along three distinct spatial axes. Information about position along the horizontal and vertical axes is available from an object’s position on the retina, while position along the depth axis must be inferred based on second-order cues such as the disparity between the images cast on the two retinae. Past work has revealed that object position in two-dimensional (2D) retinotopic space is robustly represented in visual cortex and can be robustly predicted using a multivariate encoding model, in which an explicit axis is modeled for each spatial dimension. However, no study to date has used an encoding model to estimate a representation of stimulus position in depth. Here, we recorded BOLD fMRI while human subjects viewed a stereoscopic random-dot sphere at various positions along the depth (z) and the horizontal (x) axes, and the stimuli were presented across a wider range of disparities (out to ∼40 arcmin) compared to previous neuroimaging studies. In addition to performing decoding analyses for comparison to previous work, we built encoding models for depth position and for horizontal position, allowing us to directly compare encoding between these dimensions. Our results validate this method of recovering depth representations from retinotopic cortex. Furthermore, we find convergent evidence that depth is encoded most strongly in dorsal area V3A.
27
Li Y, Hou C, Yao L, Zhang C, Zheng H, Zhang J, Long Z. Disparity level identification using the voxel-wise Gabor model of fMRI data. Hum Brain Mapp 2019; 40:2596-2610. [PMID: 30811782 DOI: 10.1002/hbm.24547] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2018] [Revised: 01/18/2019] [Accepted: 02/03/2019] [Indexed: 11/08/2022] Open
Abstract
Perceiving disparities is the intuitive basis for our understanding of the physical world. Although many electrophysiology studies have revealed the disparity-tuning characteristics of neurons in the visual areas of the macaque brain, neuron population responses to disparity processing have seldom been investigated. Many disparity studies using functional magnetic resonance imaging (fMRI) have revealed the disparity-selective visual areas in the human brain. However, it is unclear how to characterize the disparity-tuning responses of neuron populations with fMRI. In the present study, we constructed three voxel-wise Gabor encoding models to predict voxel responses to novel disparity levels and used a decoding method to identify the new disparity levels from population responses in the cortex. Among the three encoding models, the fine-coarse model (FCM), which used fine/coarse disparities to fit the voxel responses to disparities, outperformed the single model and the uncrossed-crossed model. Moreover, the FCM demonstrated high accuracy both in predicting voxel responses in the V3A complex and in identifying novel disparities from responses in the V3A complex. Our results suggest that the FCM characterizes voxel responses to disparities better than the other two models and that the V3A complex is a critical visual area for representing disparity information.
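Voxel-wise Gabor encoding models of this kind build on the Gabor description of disparity tuning established in electrophysiology: a Gaussian envelope centred on a preferred disparity, multiplied by a cosine carrier. A sketch of such a tuning curve, with illustrative parameters not taken from the paper:

```python
import numpy as np

def gabor_tuning(disparity, pref, sigma, freq, phase, amp=1.0, baseline=0.0):
    """Gabor-shaped disparity tuning curve: Gaussian envelope x cosine carrier,
    the standard descriptive model for disparity-selective responses."""
    d = np.asarray(disparity, float) - pref
    envelope = np.exp(-d ** 2 / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * freq * d + phase)
    return baseline + amp * envelope * carrier

disparities = np.linspace(-1.0, 1.0, 201)   # candidate disparities in degrees
resp = gabor_tuning(disparities, pref=0.1, sigma=0.3, freq=1.2, phase=0.0)
# With phase = 0, the response peaks at the preferred disparity (0.1 deg).
```

In an encoding model, a bank of such curves (differing in preferred disparity and envelope width, e.g. fine vs coarse channels) serves as the feature space whose weighted sum is fitted to each voxel's response.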
Affiliation(s)
- Yuan Li
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Chunping Hou
- School of Electrical and Information Engineering, Tianjin University, Tianjin, China
- Li Yao
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China; College of Information Science and Technology, Beijing Normal University, Beijing, China
- Chuncheng Zhang
- Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Hongna Zheng
- College of Information Science and Technology, Beijing Normal University, Beijing, China
- Jiacai Zhang
- College of Information Science and Technology, Beijing Normal University, Beijing, China
- Zhiying Long
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
28
Fouhey DF, Gupta A, Zisserman A. From Images to 3D Shape Attributes. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2019; 41:93-106. [PMID: 29990013 DOI: 10.1109/tpami.2017.2782810] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Our goal in this paper is to investigate properties of 3D shape that can be determined from a single image. We define 3D shape attributes: generic properties of the shape that capture curvature, contact and occupied space. Our first objective is to infer these 3D shape attributes from a single image. A second objective is to infer a 3D shape embedding: a low-dimensional vector representing the 3D shape. We study how the 3D shape attributes and embedding can be obtained from a single image by training a Convolutional Neural Network (CNN) for this task. We start with synthetic images so that the contribution of various cues and nuisance parameters can be controlled. We then turn to real images and introduce a large-scale image dataset of sculptures containing 143K images covering 2197 works from 242 artists. For the CNN trained on the sculpture dataset we show the following: (i) which regions of the imaged sculpture are used by the CNN to infer the 3D shape attributes; (ii) that the shape embedding can be used to match previously unseen sculptures largely independent of viewpoint; and (iii) that the 3D attributes generalize to images of other (non-sculpture) object classes.
29
Akhavein H, Dehmoobadsharifabadi A, Farivar R. Magnetoencephalography adaptation reveals depth-cue-invariant object representations in the visual cortex. J Vis 2018; 18:6. [PMID: 30458514 DOI: 10.1167/18.12.6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Independent of edges and 2-D shape, which can be highly informative of object identity, depth cues alone can also give rise to vivid and effective object percepts. The processing of different depth cues engages segregated cortical areas, and an efficient object representation would be one that is invariant to depth cues. Here, we investigated depth-cue invariance of object representations by measuring the category-specific response to faces: the M170 response measured with magnetoencephalography. The M170 response is strongest to faces and is sensitive to adaptation, such that repeated presentation of a face diminishes subsequent M170 responses. We used this feature of the M170 and measured the degree to which the adaptation effect is affected by variations in depth cue and 3-D object shape. Subjects viewed a rapid presentation of two stimuli: an adaptor and a test stimulus. The adaptor was either a face, a chair, or a face-like oval surface, and rendered with a single depth cue (shading, structure from motion, or texture). The test stimulus was always a shaded face of a random identity, thus completely controlling for low-level influences on the M170 response to the test stimulus. In the left fusiform face area, we found strong M170 adaptation when the adaptor was a face regardless of its depth cue. This adaptation was marginal in the right fusiform and negligible in the occipital regions. Our results support the presence of depth-cue-invariant representations in the human visual system, alongside size, position, and viewpoint invariance.
Affiliation(s)
- Hassan Akhavein
- McGill Vision Research, Department of Ophthalmology, McGill University, Montreal, Canada
- Reza Farivar
- McGill Vision Research, Department of Ophthalmology, McGill University, Montreal, Canada
30
Oliver ZJ, Cristino F, Roberts MV, Pegna AJ, Leek EC. Stereo viewing modulates three-dimensional shape processing during object recognition: A high-density ERP study. J Exp Psychol Hum Percept Perform 2018; 44:518-534. [PMID: 29022728 PMCID: PMC5896504 DOI: 10.1037/xhp0000444] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2016] [Revised: 03/29/2017] [Accepted: 04/10/2017] [Indexed: 11/17/2022]
Abstract
The role of stereo disparity in the recognition of 3-dimensional (3D) object shape remains an unresolved issue for theoretical models of the human visual system. We examined this issue using high-density (128 channel) recordings of event-related potentials (ERPs). A recognition memory task was used in which observers were trained to recognize a subset of complex, multipart, 3D novel objects under conditions of either (bi-) monocular or stereo viewing. In a subsequent test phase they discriminated previously trained targets from untrained distractor objects that shared either local parts, 3D spatial configuration, or neither dimension, across both previously seen and novel viewpoints. The behavioral data showed a stereo advantage for target recognition at untrained viewpoints. ERPs showed early differential amplitude modulations to shape similarity defined by local part structure and global 3D spatial configuration. This occurred initially during an N1 component around 145-190 ms poststimulus onset, and then subsequently during an N2/P3 component around 260-385 ms poststimulus onset. For mono viewing, amplitude modulation during the N1 was greatest between targets and distractors with different local parts for trained views only. For stereo viewing, amplitude modulation during the N2/P3 was greatest between targets and distractors with different global 3D spatial configurations and generalized across trained and untrained views. The results show that image classification is modulated by stereo information about the local part, and global 3D spatial configuration of object shape. The findings challenge current theoretical models that do not attribute functional significance to stereo input during the computation of 3D object shape.
31
Kim S, Burge J. The lawful imprecision of human surface tilt estimation in natural scenes. eLife 2018; 7:31448. [PMID: 29384477 PMCID: PMC5844693 DOI: 10.7554/elife.31448] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2017] [Accepted: 01/29/2018] [Indexed: 01/03/2023] Open
Abstract
Estimating local surface orientation (slant and tilt) is fundamental to recovering the three-dimensional structure of the environment. It is unknown how well humans perform this task in natural scenes. Here, with a database of natural stereo-images having ground-truth surface orientation at each pixel, we find dramatic differences in human tilt estimation with natural and artificial stimuli. Estimates are precise and unbiased with artificial stimuli and imprecise and strongly biased with natural stimuli. An image-computable Bayes optimal model grounded in natural scene statistics predicts human bias, precision, and trial-by-trial errors without fitting parameters to the human data. The similarities between human and model performance suggest that the complex human performance patterns with natural stimuli are lawful, and that human visual systems have internalized local image and scene statistics to optimally infer the three-dimensional structure of the environment. These results generalize our understanding of vision from the lab to the real world.
Affiliation(s)
- Seha Kim
- Department of Psychology, University of Pennsylvania, Philadelphia, United States
- Johannes Burge
- Department of Psychology, University of Pennsylvania, Philadelphia, United States
32
Fedorov LA, Dijkstra TMH, Giese MA. Lighting-from-above prior in biological motion perception. Sci Rep 2018; 8:1507. [PMID: 29367629 PMCID: PMC5784142 DOI: 10.1038/s41598-018-19851-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2017] [Accepted: 01/02/2018] [Indexed: 11/09/2022] Open
Abstract
The visual system is able to recognize body motion from impoverished stimuli. This requires combining stimulus information with visual priors. We present a new visual illusion showing that one of these priors is the assumption that bodies are typically illuminated from above. A change of illumination direction from above to below flips the perceived locomotion direction of a biological motion stimulus. Control experiments show that the underlying mechanism is different from shape-from-shading and directly combines information about body motion with a lighting-from-above prior. We further show that the illusion is critically dependent on the intrinsic luminance gradients of the most mobile parts of the moving body. We present a neural model with physiologically plausible mechanisms that accounts for the illusion and shows how the illumination prior might be encoded within the visual pathway. Our experiments demonstrate, for the first time, a direct influence of illumination priors in high-level motion vision.
Affiliation(s)
- Leonid A Fedorov
- Section for Computational Sensomotorics, Dept. Cognitive Neurology, CIN & HIH, UKT, University of Tübingen, Otfried-Müller Strasse 25, 72076, Tübingen, Germany; International Max Planck Research School for Cognitive and Systems Neuroscience, University of Tübingen, Spemannstrasse 38, 72076, Tübingen, Germany
- Tjeerd M H Dijkstra
- Section for Computational Sensomotorics, Dept. Cognitive Neurology, CIN & HIH, UKT, University of Tübingen, Otfried-Müller Strasse 25, 72076, Tübingen, Germany; Max Planck Institute for Developmental Biology, Spemannstrasse 35, 72076, Tübingen, Germany
- Martin A Giese
- Section for Computational Sensomotorics, Dept. Cognitive Neurology, CIN & HIH, UKT, University of Tübingen, Otfried-Müller Strasse 25, 72076, Tübingen, Germany; International Max Planck Research School for Cognitive and Systems Neuroscience, University of Tübingen, Spemannstrasse 38, 72076, Tübingen, Germany
33
Banaei M, Hatami J, Yazdanfar A, Gramann K. Walking through Architectural Spaces: The Impact of Interior Forms on Human Brain Dynamics. Front Hum Neurosci 2017; 11:477. [PMID: 29033807 PMCID: PMC5627023 DOI: 10.3389/fnhum.2017.00477] [Citation(s) in RCA: 69] [Impact Index Per Article: 9.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2017] [Accepted: 09/12/2017] [Indexed: 11/17/2022] Open
Abstract
Neuroarchitecture uses neuroscientific tools to better understand architectural design and its impact on human perception and subjective experience. The form or shape of the built environment is fundamental to architectural design, but not many studies have shown the impact of different forms on the inhabitants’ emotions. This study investigated the neurophysiological correlates of different interior forms on the perceivers’ affective state and the accompanying brain activity. To understand the impact of naturalistic three-dimensional (3D) architectural forms, it is essential to perceive forms from different perspectives. We computed clusters of form features extracted from pictures of residential interiors and constructed exemplary 3D room models based on and representing different formal clusters. To investigate human brain activity during 3D perception of architectural spaces, we used a mobile brain/body imaging (MoBI) approach recording the electroencephalogram (EEG) of participants while they naturally walk through different interior forms in virtual reality (VR). The results revealed a strong impact of curvature geometries on activity in the anterior cingulate cortex (ACC). Theta band activity in ACC correlated with specific feature types (rs (14) = 0.525, p = 0.037) and geometry (rs (14) = −0.579, p = 0.019), providing evidence for a role of this structure in processing architectural features beyond their emotional impact. The posterior cingulate cortex and the occipital lobe were involved in the perception of different room perspectives during the stroll through the rooms. This study sheds new light on the use of mobile EEG and VR in architectural studies and provides the opportunity to study human brain dynamics in participants that actively explore and realistically experience architectural spaces.
Affiliation(s)
- Maryam Banaei
- School of Architecture and Environmental Design, Iran University of Science and Technology, Tehran, Iran
- Javad Hatami
- Department of Psychology, University of Tehran, Tehran, Iran
- Abbas Yazdanfar
- School of Architecture and Environmental Design, Iran University of Science and Technology, Tehran, Iran
- Klaus Gramann
- Department of Psychology and Ergonomics, Berlin Institute of Technology, Berlin, Germany; Center for Advanced Neurological Engineering, University of California, San Diego, La Jolla, CA, United States; School of Software, University of Technology Sydney, Sydney, NSW, Australia
34
Bridge H. Effects of cortical damage on binocular depth perception. Philos Trans R Soc Lond B Biol Sci 2017; 371:rstb.2015.0254. [PMID: 27269597 PMCID: PMC4901448 DOI: 10.1098/rstb.2015.0254] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/10/2015] [Indexed: 12/20/2022] Open
Abstract
Stereoscopic depth perception requires considerable neural computation, including the initial correspondence of the two retinal images, comparison across the local regions of the visual field and integration with other cues to depth. The most common cause for loss of stereoscopic vision is amblyopia, in which one eye has failed to form an adequate input to the visual cortex, usually due to strabismus (deviating eye) or anisometropia. However, the significant cortical processing required to produce the percept of depth means that, even when the retinal input is intact from both eyes, brain damage or dysfunction can interfere with stereoscopic vision. In this review, I examine the evidence for impairment of binocular vision and depth perception that can result from insults to the brain, including both discrete damage, temporal lobectomy and more systemic diseases such as posterior cortical atrophy. This article is part of the themed issue ‘Vision in our three-dimensional world’.
Affiliation(s)
- Holly Bridge
- FMRIB Centre, Nuffield Department of Clinical Neurosciences, University of Oxford, John Radcliffe Hospital, Oxford OX3 9DU, UK
35
Using Eye Tracking to Explore the Guidance and Constancy of Visual Variables in 3D Visualization. ISPRS INTERNATIONAL JOURNAL OF GEO-INFORMATION 2017. [DOI: 10.3390/ijgi6090274] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
36
Separate Perceptual and Neural Processing of Velocity- and Disparity-Based 3D Motion Signals. J Neurosci 2017; 36:10791-10802. [PMID: 27798134 DOI: 10.1523/jneurosci.1298-16.2016] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2016] [Accepted: 08/26/2016] [Indexed: 11/21/2022] Open
Abstract
Although the visual system uses both velocity- and disparity-based binocular information for computing 3D motion, it is unknown whether (and how) these two signals interact. We found that these two binocular signals are processed distinctly at the levels of both cortical activity in human MT and perception. In human MT, adaptation to both velocity-based and disparity-based 3D motions demonstrated direction-selective neuroimaging responses. However, when adaptation to one cue was probed using the other cue, there was no evidence of interaction between them (i.e., there was no "cross-cue" adaptation). Analogous psychophysical measurements yielded correspondingly weak cross-cue motion aftereffects (MAEs) in the face of very strong within-cue adaptation. In a direct test of perceptual independence, adapting to opposite 3D directions generated by different binocular cues resulted in simultaneous, superimposed, opposite-direction MAEs. These findings suggest that velocity- and disparity-based 3D motion signals may both flow through area MT but constitute distinct signals and pathways. SIGNIFICANCE STATEMENT Recent human neuroimaging and monkey electrophysiology have revealed 3D motion selectivity in area MT, which is driven by both velocity-based and disparity-based 3D motion signals. However, to elucidate the neural mechanisms by which the brain extracts 3D motion given these binocular signals, it is essential to understand how, or indeed if, these two binocular cues interact. We show that velocity-based and disparity-based signals are mostly separate at the levels of both fMRI responses in area MT and perception. Our findings suggest that the two binocular cues for 3D motion might be processed by separate specialized mechanisms.
37
Vedamurthy I, Knill DC, Huang SJ, Yung A, Ding J, Kwon OS, Bavelier D, Levi DM. Recovering stereo vision by squashing virtual bugs in a virtual reality environment. Philos Trans R Soc Lond B Biol Sci 2017; 371:rstb.2015.0264. [PMID: 27269607 DOI: 10.1098/rstb.2015.0264] [Citation(s) in RCA: 56] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/09/2016] [Indexed: 12/11/2022] Open
Abstract
Stereopsis is the rich impression of three-dimensionality, based on binocular disparity: the differences between the two retinal images of the same world. However, a substantial proportion of the population is stereo-deficient, and relies mostly on monocular cues to judge the relative depth or distance of objects in the environment. Here we trained adults who were stereo blind or stereo-deficient owing to strabismus and/or amblyopia in a natural visuomotor task, a 'bug squashing' game, in a virtual reality environment. The subjects' task was to squash a virtual dichoptic bug on a slanted surface, by hitting it with a physical cylinder they held in their hand. The perceived surface slant was determined by monocular texture and stereoscopic cues, with these cues being either consistent or in conflict, allowing us to track the relative weighting of monocular versus stereoscopic cues as training in the task progressed. Following training, most participants showed greater reliance on stereoscopic cues, reduced suppression and improved stereoacuity. Importantly, the training-induced changes in relative stereo weights were significant predictors of the improvements in stereoacuity. We conclude that some adults deprived of normal binocular vision and insensitive to the disparity information can, with appropriate experience, recover access to more reliable stereoscopic information. This article is part of the themed issue 'Vision in our three-dimensional world'.
Affiliation(s)
- Indu Vedamurthy
- Department of Brain and Cognitive Sciences and Center for Visual Science, University of Rochester, Rochester, NY 14627-0268, USA
- David C Knill
- Department of Brain and Cognitive Sciences and Center for Visual Science, University of Rochester, Rochester, NY 14627-0268, USA
- Samuel J Huang
- Department of Brain and Cognitive Sciences and Center for Visual Science, University of Rochester, Rochester, NY 14627-0268, USA
- Amanda Yung
- Department of Brain and Cognitive Sciences and Center for Visual Science, University of Rochester, Rochester, NY 14627-0268, USA
- Jian Ding
- School of Optometry and Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94720, USA
- Oh-Sang Kwon
- Department of Brain and Cognitive Sciences and Center for Visual Science, University of Rochester, Rochester, NY 14627-0268, USA; School of Design and Human Engineering, UNIST, Ulsan 689-798, South Korea
- Daphne Bavelier
- Department of Brain and Cognitive Sciences and Center for Visual Science, University of Rochester, Rochester, NY 14627-0268, USA; Faculty of Psychology and Education Sciences, University of Geneva, CH-1211 Geneva 4, Switzerland
- Dennis M Levi
- School of Optometry and Helen Wills Neuroscience Institute, University of California, Berkeley, CA 94720, USA
38
39
The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017; 37:66-90. [DOI: 10.1016/j.media.2017.01.007] [Citation(s) in RCA: 183] [Impact Index Per Article: 26.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2016] [Revised: 01/16/2017] [Accepted: 01/23/2017] [Indexed: 12/27/2022]
40
Groen IIA, Silson EH, Baker CI. Contributions of low- and high-level properties to neural processing of visual scenes in the human brain. Philos Trans R Soc Lond B Biol Sci 2017; 372:rstb.2016.0102. [PMID: 28044013 DOI: 10.1098/rstb.2016.0102] [Citation(s) in RCA: 90] [Impact Index Per Article: 12.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/20/2016] [Indexed: 11/12/2022] Open
Abstract
Visual scene analysis in humans has been characterized by the presence of regions in extrastriate cortex that are selectively responsive to scenes compared with objects or faces. While these regions have often been interpreted as representing high-level properties of scenes (e.g. category), they also exhibit substantial sensitivity to low-level (e.g. spatial frequency) and mid-level (e.g. spatial layout) properties, and it is unclear how these disparate findings can be united in a single framework. In this opinion piece, we suggest that this problem can be resolved by questioning the utility of the classical low- to high-level framework of visual perception for scene processing, and discuss why low- and mid-level properties may be particularly diagnostic for the behavioural goals specific to scene perception as compared to object recognition. In particular, we highlight the contributions of low-level vision to scene representation by reviewing (i) retinotopic biases and receptive field properties of scene-selective regions and (ii) the temporal dynamics of scene perception that demonstrate overlap of low- and mid-level feature representations with those of scene category. We discuss the relevance of these findings for scene perception and suggest a more expansive framework for visual scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'.
Affiliation(s)
- Iris I A Groen
- Laboratory of Brain and Cognition, National Institutes of Health, 10 Center Drive 10-3N228, Bethesda, MD, USA
- Edward H Silson
- Laboratory of Brain and Cognition, National Institutes of Health, 10 Center Drive 10-3N228, Bethesda, MD, USA
- Chris I Baker
- Laboratory of Brain and Cognition, National Institutes of Health, 10 Center Drive 10-3N228, Bethesda, MD, USA
41
Stegemann S, Riedl R, Sourij H. Identification of different shapes, colors and sizes of standard oral dosage forms in diabetes type 2 patients—A pilot study. Int J Pharm 2017; 517:112-118. [DOI: 10.1016/j.ijpharm.2016.11.066] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2016] [Revised: 11/29/2016] [Accepted: 11/30/2016] [Indexed: 01/14/2023]
42
Finlayson NJ, Zhang X, Golomb JD. Differential patterns of 2D location versus depth decoding along the visual hierarchy. Neuroimage 2016; 147:507-516. [PMID: 28039760 DOI: 10.1016/j.neuroimage.2016.12.039] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2016] [Revised: 11/27/2016] [Accepted: 12/14/2016] [Indexed: 11/25/2022] Open
Abstract
Visual information is initially represented as 2D images on the retina, but our brains are able to transform this input to perceive our rich 3D environment. While many studies have explored 2D spatial representations or depth perception in isolation, it remains unknown if or how these processes interact in human visual cortex. Here we used functional MRI and multi-voxel pattern analysis to investigate the relationship between 2D location and position-in-depth information. We stimulated different 3D locations in a blocked design: each location was defined by horizontal, vertical, and depth position. Participants remained fixated at the center of the screen while passively viewing the peripheral stimuli with red/green anaglyph glasses. Our results revealed a widespread, systematic transition throughout visual cortex. As expected, 2D location information (horizontal and vertical) could be strongly decoded in early visual areas, with reduced decoding higher along the visual hierarchy, consistent with known changes in receptive field sizes. Critically, we found that the decoding of position-in-depth information tracked inversely with the 2D location pattern, with the magnitude of depth decoding gradually increasing from intermediate to higher visual and category regions. Representations of 2D location information became increasingly location-tolerant in later areas, where depth information was also tolerant to changes in 2D location. We propose that spatial representations gradually transition from 2D-dominant to balanced 3D (2D and depth) along the visual hierarchy.
Affiliation(s)
- Nonie J Finlayson
- Department of Psychology, Center for Cognitive & Brain Sciences, The Ohio State University, Columbus, OH 43210, USA
- Xiaoli Zhang
- Department of Psychology, Center for Cognitive & Brain Sciences, The Ohio State University, Columbus, OH 43210, USA
- Julie D Golomb
- Department of Psychology, Center for Cognitive & Brain Sciences, The Ohio State University, Columbus, OH 43210, USA
43
Zylinski S, Osorio D, Johnsen S. Cuttlefish see shape from shading, fine-tuning coloration in response to pictorial depth cues and directional illumination. Proc Biol Sci 2016; 283:20160062. [PMID: 26984626 DOI: 10.1098/rspb.2016.0062] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Humans use shading as a cue to three-dimensional form by combining low-level information about light intensity with high-level knowledge about objects and the environment. Here, we examine how cuttlefish Sepia officinalis respond to light and shadow to shade the white square (WS) feature in their body pattern. Cuttlefish display the WS in the presence of pebble-like objects, and they can shade it to render the appearance of surface curvature to a human observer, which might benefit camouflage. Here we test how they colour the WS on visual backgrounds containing two-dimensional circular stimuli, some of which were shaded to suggest surface curvature, whereas others were uniformly coloured or divided into dark and light semicircles. WS shading, measured by lateral asymmetry, was greatest when the animal rested on a background of shaded circles and three-dimensional hemispheres, and less on plain white circles or black/white semicircles. In addition, shading was enhanced when light fell from the lighter side of the shaded stimulus, as expected for real convex surfaces. Thus, the cuttlefish acts as if it perceives surface curvature from shading, and takes account of the direction of illumination. However, the direction of WS shading is insensitive to the directions of background shading and illumination; instead the cuttlefish tend to turn to face the light source.
Affiliation(s)
- Sarah Zylinski
- Faculty of Biological Sciences, University of Leeds, Leeds LS2 9JT, UK
- D Osorio
- School of Biological Sciences, University of Sussex, Brighton BN1 9QG, UK
- Sonke Johnsen
- Department of Biology, Duke University, Durham, NC 27708, USA
44
Affiliation(s)
- Andrew E. Welchman
- Department of Psychology, University of Cambridge, Cambridge CB2 3EB, United Kingdom
45
Burge J, McCann BC, Geisler WS. Estimating 3D tilt from local image cues in natural scenes. J Vis 2016; 16:2. [PMID: 27738702 PMCID: PMC5066913 DOI: 10.1167/16.13.2] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2015] [Accepted: 08/15/2016] [Indexed: 11/24/2022] Open
Abstract
Estimating three-dimensional (3D) surface orientation (slant and tilt) is an important first step toward estimating 3D shape. Here, we examine how three local image cues from the same location (disparity gradient, luminance gradient, and dominant texture orientation) should be combined to estimate 3D tilt in natural scenes. We collected a database of natural stereoscopic images with precisely co-registered range images that provide the ground-truth distance at each pixel location. We then analyzed the relationship between ground-truth tilt and image cue values. Our analysis is free of assumptions about the joint probability distributions and yields the Bayes optimal estimates of tilt, given the cue values. Rich results emerge: (a) typical tilt estimates are only moderately accurate and strongly influenced by the cardinal bias in the prior probability distribution; (b) when cue values are similar, or when slant is greater than 40°, estimates are substantially more accurate; (c) when luminance and texture cues agree, they often veto the disparity cue, and when they disagree, they have little effect; and (d) simplifying assumptions common in the cue combination literature are often justified for estimating tilt in natural scenes. The fact that tilt estimates are typically not very accurate is consistent with subjective impressions from viewing small patches of natural scene. The fact that estimates are substantially more accurate for a subset of image locations is also consistent with subjective impressions and with the hypothesis that perceived surface orientation, at more global scales, is achieved by interpolation or extrapolation from estimates at key locations.
Affiliation(s)
- Johannes Burge
- Department of Psychology, University of Pennsylvania, Philadelphia, PA
- Brian C McCann
- Texas Advanced Computing Center, University of Texas at Austin, Austin, TX, USA
- Wilson S Geisler
- Center for Perceptual Systems and Department of Psychology, University of Texas at Austin, Austin, TX, USA
Collapse
46
Finlayson NJ, Golomb JD. Feature-location binding in 3D: Feature judgments are biased by 2D location but not position-in-depth. Vision Res 2016; 127:49-56. [PMID: 27468654 PMCID: PMC5035601 DOI: 10.1016/j.visres.2016.07.003] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2016] [Revised: 07/01/2016] [Accepted: 07/05/2016] [Indexed: 11/29/2022]
Abstract
A fundamental aspect of human visual perception is the ability to recognize and locate objects in the environment. Importantly, our environment is predominantly three-dimensional (3D), but while there is considerable research exploring the binding of object features and location, it is unknown how depth information interacts with features in the object binding process. A recent paradigm called the spatial congruency bias demonstrated that 2D location is fundamentally bound to object features, such that irrelevant location information biases judgments of object features, but irrelevant feature information does not bias judgments of location or other features. Here, using the spatial congruency bias paradigm, we asked whether depth is processed as another type of location, or more like other features. We initially found that depth cued by binocular disparity biased judgments of object color. However, this result seemed to be driven more by the disparity differences than the depth percept: Depth cued by occlusion and size did not bias color judgments, whereas vertical disparity information (with no depth percept) did bias color judgments. Our results suggest that despite the 3D nature of our visual environment, only 2D location information - not position-in-depth - seems to be automatically bound to object features, with depth information processed more similarly to other features than to 2D location.
Affiliation(s)
- Nonie J Finlayson
- Department of Psychology, Center for Cognitive & Brain Sciences, The Ohio State University, Columbus, OH 43210, USA
- Julie D Golomb
- Department of Psychology, Center for Cognitive & Brain Sciences, The Ohio State University, Columbus, OH 43210, USA
47
Tian M, Yamins D, Grill-Spector K. Learning the 3-D structure of objects from 2-D views depends on shape, not format. J Vis 2016; 16:7. [PMID: 27153196 PMCID: PMC4898268 DOI: 10.1167/16.7.7] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2015] [Indexed: 11/24/2022] Open
Abstract
Humans can learn to recognize new objects just from observing example views. However, it is unknown what structural information enables this learning. To address this question, we manipulated the amount of structural information given to subjects during unsupervised learning by varying the format of the trained views. We then tested how format affected participants' ability to discriminate similar objects across views that were rotated 90° apart. We found that, after training, participants' performance increased and generalized to new views in the same format. Surprisingly, the improvement was similar across line drawings, shape from shading, and shape from shading + stereo even though the latter two formats provide richer depth information compared to line drawings. In contrast, participants' improvement was significantly lower when training used silhouettes, suggesting that silhouettes do not have enough information to generate a robust 3-D structure. To test whether the learned object representations were format-specific or format-invariant, we examined if learning novel objects from example views transfers across formats. We found that learning objects from example line drawings transferred to shape from shading and vice versa. These results have important implications for theories of object recognition because they suggest that (a) learning the 3-D structure of objects does not require rich structural cues during training as long as shape information of internal and external features is provided and (b) learning generates shape-based object representations independent of the training format.
48
Ward LM, Morison G, Simpson WA, Simmers AJ, Shahani U. Using Functional Near Infrared Spectroscopy (fNIRS) to Study Dynamic Stereoscopic Depth Perception. Brain Topogr 2016; 29:515-23. [PMID: 26900069 PMCID: PMC4899499 DOI: 10.1007/s10548-016-0476-4] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2015] [Accepted: 02/08/2016] [Indexed: 11/28/2022]
Abstract
The parietal cortex has been widely implicated in the processing of depth perception by many neuroimaging studies, yet functional near infrared spectroscopy (fNIRS) has been an under-utilised tool to examine the relationship of oxy- ([HbO]) and de-oxyhaemoglobin ([HbR]) in perception. Here we examine the haemodynamic response (HDR) to the processing of induced depth stimulation using dynamic random-dot-stereograms (RDS). We used fNIRS to measure the HDR associated with depth perception in healthy young adults (n = 13, mean age 24). Using a blocked design, absolute values of [HbO] and [HbR] were recorded across parieto-occipital and occipital cortices, in response to dynamic RDS. Control and test images were identical except for the horizontal shift in pixels in the RDS that resulted in binocular disparity and induced the percept of a 3D sine wave that ‘popped out’ of the test stimulus. The control stimulus had zero disparity and induced a ‘flat’ percept. All participants had stereoacuity within normal clinical limits and successfully perceived the depth in the dynamic RDS. Results showed a significant effect of this complex visual stimulation in the right parieto-occipital cortex (p < 0.01, η2 = 0.54). The test stimulus elicited a significant increase in [HbO] during depth perception compared to the control image (p < 0.001, 99.99 % CI [0.008–0.294]). The similarity between the two stimuli may have resulted in the HDR of the occipital cortex showing no significant increase or decrease of cerebral oxygenation levels during depth stimulation. Cerebral oxygenation measures of [HbO] confirmed the strong association of the right parieto-occipital cortex with processing depth perception. Our study demonstrates the validity of fNIRS to investigate [HbO] and [HbR] during high-level visual processing of complex stimuli.
Affiliation(s)
- Laura M Ward
- Department of Vision Sciences, Glasgow Caledonian University, 70 Cowcaddens Road, Glasgow, G4 0BA, UK
- Gordon Morison
- Department of Engineering, Glasgow Caledonian University, 70 Cowcaddens Road, Glasgow, G4 0BA, UK
- William A Simpson
- School of Psychology, Plymouth University, Drake Circus, Plymouth, Devon, PL4 8AA, UK
- Anita J Simmers
- Department of Vision Sciences, Glasgow Caledonian University, 70 Cowcaddens Road, Glasgow, G4 0BA, UK
- Uma Shahani
- Department of Vision Sciences, Glasgow Caledonian University, 70 Cowcaddens Road, Glasgow, G4 0BA, UK
49
Tsushima Y, Komine K, Sawahata Y, Morita T. Undetectable Changes in Image Resolution of Luminance-Contrast Gradients Affect Depth Perception. Front Psychol 2016; 7:242. [PMID: 26941693 PMCID: PMC4763190 DOI: 10.3389/fpsyg.2016.00242] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2015] [Accepted: 02/05/2016] [Indexed: 11/13/2022] Open
Abstract
A great number of studies have suggested a variety of ways to get depth information from two-dimensional images, such as binocular disparity, shape-from-shading, size gradient/foreshortening, aerial perspective, and so on. Are there any other factors affecting depth perception? A recent psychophysical study investigated the correlation between image resolution and the depth sensation produced by Cylinder images (rectangles containing gradual luminance-contrast changes). It reported that higher-resolution images facilitate depth perception. However, it is still not clear whether the finding generalizes to other kinds of visual stimuli, because there are more appropriate visual stimuli for exploring depth perception from luminance-contrast changes, such as Gabor patches. Here, we further examined the relationship between image resolution and depth perception by conducting a series of psychophysical experiments with not only Cylinders but also Gabor patches, which have smoother luminance-contrast gradients. Higher-resolution images produced stronger depth sensation with both types of image. This finding suggests that image resolution affects depth perception of simple luminance-contrast differences (Gabor patches) as well as shape-from-shading (Cylinders). In addition, this phenomenon was found even when the resolution difference was undetectable, indicating the existence of consciously available and unavailable information in our visual system. These findings further support the view that image resolution is a previously overlooked cue for depth perception, which partially explains the unparalleled viewing experience of novel high-resolution displays.
Affiliation(s)
- Yoshiaki Tsushima
- Three-Dimensional Image Research Division, NHK Science and Technology Research Labs, Tokyo, Japan
- Universal Communication Research Institute, National Institute of Information and Communications Technology, Kyoto, Japan
- Sutokuin Lab, Osaka, Japan
- Kazuteru Komine
- Three-Dimensional Image Research Division, NHK Science and Technology Research Labs, Tokyo, Japan
- Yasuhito Sawahata
- Three-Dimensional Image Research Division, NHK Science and Technology Research Labs, Tokyo, Japan
- Toshiya Morita
- Three-Dimensional Image Research Division, NHK Science and Technology Research Labs, Tokyo, Japan
50
Bedford R, Pellicano E, Mareschal D, Nardini M. Flexible integration of visual cues in adolescents with autism spectrum disorder. Autism Res 2016; 9:272-81. [PMID: 26097109 PMCID: PMC4864758 DOI: 10.1002/aur.1509] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2015] [Accepted: 05/20/2015] [Indexed: 11/18/2022]
Abstract
Although children with autism spectrum disorder (ASD) show atypical sensory processing, evidence for impaired integration of multisensory information has been mixed. In this study, we took a Bayesian model-based approach to assess within-modality integration of congruent and incongruent texture and disparity cues to judge slant in typical and autistic adolescents. Human adults optimally combine multiple sources of sensory information to reduce perceptual variance, but in typical development this ability to integrate cues does not emerge until late childhood. While adults cannot help but integrate cues, even when they are incongruent, young children's ability to keep cues separate gives them an advantage in discriminating incongruent stimuli. Given that mature cue integration emerges in later childhood, we hypothesized that typical adolescents would show adult-like integration, combining both congruent and incongruent cues. For the ASD group there were three possible predictions: (1) "no fusion": no integration of congruent or incongruent cues, like 6-year-old typical children; (2) "mandatory fusion": integration of congruent and incongruent cues, like typical adults; (3) "selective fusion": cues are combined when congruent but not incongruent, consistent with predictions of Enhanced Perceptual Functioning (EPF) theory. As hypothesized, typical adolescents showed significant integration of both congruent and incongruent cues. The ASD group showed results consistent with "selective fusion," integrating congruent but not incongruent cues. This allowed adolescents with ASD to make perceptual judgments which typical adolescents could not. In line with EPF, results suggest that perception in ASD may be more flexible and less governed by mandatory top-down feedback.
Affiliation(s)
- Rachael Bedford
- Biostatistics Department, Institute of Psychiatry, King's College London, United Kingdom
- Elizabeth Pellicano
- Centre for Research in Autism and Education (CRAE), Institute of Education, University of London, United Kingdom
- School of Psychology, University of Western Australia, Perth, Australia
- Denis Mareschal
- Centre for Brain and Cognitive Development, Birkbeck, University of London, United Kingdom
- Marko Nardini
- Department of Psychology, Durham University, Durham, United Kingdom