1
Stein N, Watson T, Lappe M, Westendorf M, Durant S. Eye and head movements in visual search in the extended field of view. Sci Rep 2024; 14:8907. PMID: 38632334; PMCID: PMC11023950; DOI: 10.1038/s41598-024-59657-5.
Abstract
In natural environments, head movements are required to search for objects outside the field of view (FoV). Here we investigate the power of a salient target in an extended visual search array to facilitate faster detection once a head movement brings this item into the FoV. We conducted two virtual reality experiments using spatially clustered sets of stimuli to observe target detection and head and eye movements during visual search. Participants completed search tasks with three conditions: (1) the target was in the initial FoV, (2) a head movement was needed to bring the target into the FoV, and (3) the same as condition 2, but the periphery was initially hidden and appeared only after the head movement had brought the location of the target set into the FoV. We measured search time until participants found a more salient (O) or less salient (T) target among distractors (L). On average, Os were found faster than Ts. Gaze analysis showed that saliency facilitation occurred because the target guided the search, but only if it was within the initial FoV. When targets required a head movement to enter the FoV, participants followed the same search strategy as in trials without a visible target in the periphery. Moreover, faster search times for salient targets were due only to the time required to find the target once the target set was reached. This suggests that the effect of stimulus saliency differs between visual search on fixed displays and active search through an extended visual field.
Affiliation(s)
- Niklas Stein
- Institute for Psychology, University of Münster, 48143, Münster, Germany.
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, 48143, Münster, Germany.
- Tamara Watson
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, 2751, Australia
- Markus Lappe
- Institute for Psychology, University of Münster, 48143, Münster, Germany
- Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, 48143, Münster, Germany
- Maren Westendorf
- Institute for Psychology, University of Münster, 48143, Münster, Germany
- Szonya Durant
- Department of Psychology, Royal Holloway, University of London, Egham, TW20 0EX, UK
2
Haskins AJ, Mentch J, Van Wicklin C, Choi YB, Robertson CE. Brief Report: Differences in Naturalistic Attention to Real-World Scenes in Adolescents with 16p.11.2 Deletion. J Autism Dev Disord 2024; 54:1078-1087. PMID: 36512194; DOI: 10.1007/s10803-022-05850-2.
Abstract
Sensory differences are nearly universal in autism, but their genetic origins are poorly understood. Here, we tested how individuals with an autism-linked genotype, 16p.11.2 deletion ("16p"), attend to visual information in immersive, real-world photospheres. We monitored participants' (N = 44) gaze while they actively explored 360° scenes via head-mounted virtual reality. We modeled the visually salient and semantically meaningful information in scenes and quantified the relative bottom-up vs. top-down influences on attentional deployment. We found that, compared to typically developed control (TD) participants, 16p participants' attention was less dominantly predicted by semantically meaningful scene regions, relative to visually salient regions. These results suggest that a reduction in top-down relative to bottom-up attention characterizes how individuals with 16p.11.2 deletions engage with naturalistic visual environments.
Affiliation(s)
- Amanda J Haskins
- Department of Psychological & Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH, 03755, USA.
- Jeff Mentch
- Program in Speech and Hearing Bioscience and Technology, Harvard University, Boston, MA, 02115, USA
- McGovern Institute for Brain Research, MIT, Cambridge, MA, 02139, USA
- Yeo Bi Choi
- Department of Psychological & Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH, 03755, USA
- Caroline E Robertson
- Department of Psychological & Brain Sciences, Dartmouth College, 3 Maynard Street, Hanover, NH, 03755, USA
3
Steel A, Garcia BD, Goyal K, Mynick A, Robertson CE. Scene Perception and Visuospatial Memory Converge at the Anterior Edge of Visually Responsive Cortex. J Neurosci 2023; 43:5723-5737. PMID: 37474310; PMCID: PMC10401646; DOI: 10.1523/jneurosci.2043-22.2023.
Abstract
To fluidly engage with the world, our brains must simultaneously represent both the scene in front of us and our memory of the immediate surrounding environment (i.e., local visuospatial context). How does the brain's functional architecture enable sensory and mnemonic representations to closely interface while also avoiding sensory-mnemonic interference? Here, we asked this question using first-person, head-mounted virtual reality and fMRI. Using virtual reality, human participants of both sexes learned a set of immersive, real-world visuospatial environments in which we systematically manipulated the extent of visuospatial context associated with a scene image in memory across three learning conditions, spanning from a single FOV to a city street. We used individualized, within-subject fMRI to determine which brain areas support memory of the visuospatial context associated with a scene during recall (Experiment 1) and recognition (Experiment 2). Across the whole brain, activity in three patches of cortex was modulated by the amount of known visuospatial context, each located immediately anterior to one of the three scene perception areas of high-level visual cortex. Individual subject analyses revealed that these anterior patches corresponded to three functionally defined place memory areas, which selectively respond when visually recalling personally familiar places. In addition to showing activity levels that were modulated by the amount of visuospatial context, multivariate analyses showed that these anterior areas represented the identity of the specific environment being recalled. Together, these results suggest a convergence zone for scene perception and memory of the local visuospatial context at the anterior edge of high-level visual cortex.
SIGNIFICANCE STATEMENT: As we move through the world, the visual scene around us is integrated with our memory of the wider visuospatial context. Here, we sought to understand how the functional architecture of the brain enables coexisting representations of the current visual scene and memory of the surrounding environment. Using a combination of immersive virtual reality and fMRI, we show that memory of visuospatial context outside the current FOV is represented in a distinct set of brain areas immediately anterior and adjacent to the perceptually oriented scene-selective areas of high-level visual cortex. This functional architecture would allow efficient interaction between immediately adjacent mnemonic and perceptual areas while also minimizing interference between mnemonic and perceptual representations.
Affiliation(s)
- Adam Steel
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
- Brenda D Garcia
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
- Kala Goyal
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
- Anna Mynick
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
- Caroline E Robertson
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire 03755
4
Yang Y, Zhong L, Li S, Yu A. Research on the Perceived Quality of Virtual Reality Headsets in Human-Computer Interaction. Sensors (Basel) 2023; 23:6824. PMID: 37571607; PMCID: PMC10422407; DOI: 10.3390/s23156824.
Abstract
The progress of commercial VR headsets largely depends on advances in sensor technology, whose iteration often means longer research and development cycles and higher costs. As commercial VR headsets mature and competition increases, designers must balance user needs, technologies, and costs to gain a commercial advantage. To make accurate judgments, consumer feedback and opinions are particularly important. Because the technology of commercial VR headsets has matured in recent years, costs have fallen steadily and the pool of potential consumers has grown. With increasing consumer demand for virtual reality headsets, it is particularly important to establish a perceptual quality evaluation system that links consumer perception to product quality as judged through evaluations of experience. In this work, semi-structured interviews and big-data analysis of VR headset consumption were used to propose the perceptual quality elements of VR headsets, and the order of importance of the perceptual quality attributes was determined and verified through questionnaire surveys and quantitative analysis. The study identified the perceptual quality elements, including technical perceptual quality (TPQ) and value perceptual quality (VPQ), of 14 types of VR headsets and constructed an importance ranking of the headsets' perceptual quality attributes. In theory, this study enriches research on VR headsets. In practice, it provides guidance and suggestions for designing and producing VR headsets, so that producers can better understand which sensor technologies already meet consumer needs and which still have room for improvement.
Affiliation(s)
- Linling Zhong
- Department of Business Administration, Business School, Sichuan University, Chengdu 610207, China
5
Botch TL, Garcia BD, Choi YB, Feffer N, Robertson CE. Active visual search in naturalistic environments reflects individual differences in classic visual search performance. Sci Rep 2023; 13:631. PMID: 36635491; PMCID: PMC9837148; DOI: 10.1038/s41598-023-27896-7.
Abstract
Visual search is a ubiquitous activity in real-world environments. Yet, traditionally, visual search is investigated in tightly controlled paradigms, where head-restricted participants locate a minimalistic target in a cluttered array that is presented on a computer screen. Do traditional visual search tasks predict performance in naturalistic settings, where participants actively explore complex, real-world scenes? Here, we leverage advances in virtual reality technology to test the degree to which classic and naturalistic search are limited by a common factor, set size, and the degree to which individual differences in classic search behavior predict naturalistic search behavior in a large sample of individuals (N = 75). In a naturalistic search task, participants looked for an object within their environment via a combination of head turns and eye movements using a head-mounted display. Then, in a classic search task, participants searched for a target within a simple array of colored letters using only eye movements. In each task, we found that participants' search performance was impacted by increases in set size, the number of items in the visual display. Critically, we observed that participants' efficiency in classic search tasks (the degree to which set size slowed performance) indeed predicted efficiency in real-world scenes. These results demonstrate that classic, computer-based visual search tasks are excellent models of active, real-world search behavior.
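The key quantity in this abstract, search efficiency, is the slope of search time over set size. The sketch below is a rough illustration of that analysis logic only, with invented data, set sizes, and parameter values rather than the authors' code or results: per-participant slopes from each task are estimated by linear regression and then correlated across tasks.

# Illustrative sketch (invented data; not the authors' analysis code).
import numpy as np
from scipy.stats import linregress, pearsonr

rng = np.random.default_rng(0)
n_participants = 75
set_sizes = np.array([4, 8, 16, 32])  # hypothetical set sizes

def search_slope(sizes, rts):
    # Search efficiency: time added per additional display item (slope of RT over set size).
    return linregress(sizes, rts).slope

classic_slopes, naturalistic_slopes = [], []
for efficiency in rng.normal(20.0, 5.0, n_participants):  # invented "true" efficiencies (ms/item)
    classic_rt = 400 + efficiency * set_sizes + rng.normal(0, 50, set_sizes.size)
    natural_rt = 900 + 1.5 * efficiency * set_sizes + rng.normal(0, 150, set_sizes.size)
    classic_slopes.append(search_slope(set_sizes, classic_rt))
    naturalistic_slopes.append(search_slope(set_sizes, natural_rt))

r, p = pearsonr(classic_slopes, naturalistic_slopes)
print(f"classic vs. naturalistic efficiency: r = {r:.2f}, p = {p:.3g}")

A positive correlation between the two sets of slopes would correspond to the reported finding that classic-search efficiency predicts naturalistic-search efficiency.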
Affiliation(s)
- Thomas L Botch
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA.
- Brenda D Garcia
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
- Yeo Bi Choi
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
- Nicholas Feffer
- Department of Computer Science, Dartmouth College, Hanover, NH, 03755, USA
- Department of Computer Science, Stanford University, Stanford, CA, 94305, USA
- Caroline E Robertson
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
6
Haskins AJ, Mentch J, Botch TL, Garcia BD, Burrows AL, Robertson CE. Reduced social attention in autism is magnified by perceptual load in naturalistic environments. Autism Res 2022; 15:2310-2323. PMID: 36207799; PMCID: PMC10092155; DOI: 10.1002/aur.2829.
Abstract
Individuals with autism spectrum conditions (ASC) describe differences in both social cognition and sensory processing, but little is known about the causal relationship between these disparate functional domains. In the present study, we sought to understand how a core characteristic of autism, reduced social attention, is impacted by the complex multisensory signals present in real-world environments. We tested the hypothesis that reductions in social attention associated with autism would be magnified by increasing perceptual load (e.g., motion, multisensory cues). Adult participants (N = 40; 19 ASC) explored a diverse set of 360° real-world scenes in a naturalistic, active viewing paradigm (immersive virtual reality + eye tracking). Across three conditions, we systematically varied perceptual load while holding the social and semantic information present in each scene constant. We demonstrate that reduced social attention is not a static signature of the autistic phenotype. Rather, group differences in social attention emerged with increasing perceptual load in naturalistic environments, and the susceptibility of social attention to perceptual load predicted continuous measures of autistic traits across groups. Crucially, this pattern was specific to the social domain: we did not observe differential impacts of perceptual load on attention directed toward nonsocial semantic (i.e., object, place) information or low-level fixation behavior (i.e., overall fixation frequency or duration). This study provides a direct link between social and sensory processing in autism. Moreover, reduced social attention may be an inaccurate characterization of autism. Instead, our results suggest that social attention in autism is better explained by "social vulnerability," particularly to the perceptual load of real-world environments.
Affiliation(s)
- Amanda J. Haskins
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire, USA
- Jeff Mentch
- Speech and Hearing Bioscience and Technology, Harvard University, Boston, Massachusetts, USA
- McGovern Institute for Brain Research, MIT, Cambridge, Massachusetts, USA
- Thomas L. Botch
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire, USA
- Brenda D. Garcia
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire, USA
- Alexandra L. Burrows
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire, USA
- Caroline E. Robertson
- Department of Psychological & Brain Sciences, Dartmouth College, Hanover, New Hampshire, USA
7
Hayes TR, Henderson JM. Meaning maps detect the removal of local semantic scene content but deep saliency models do not. Atten Percept Psychophys 2022; 84:647-654. PMID: 35138579; PMCID: PMC11128357; DOI: 10.3758/s13414-021-02395-x.
Abstract
Meaning mapping uses human raters to estimate different semantic features in scenes, and has been a useful tool in demonstrating the important role semantics play in guiding attention. However, recent work has argued that meaning maps do not capture semantic content, but, like deep learning models of scene attention, represent only semantically neutral image features. In the present study, we directly tested this hypothesis using a diffeomorphic image transformation that is designed to remove the meaning of an image region while preserving its image features. Specifically, we tested whether meaning maps and three state-of-the-art deep learning models were sensitive to the loss of semantic content in this critical diffeomorphed scene region. The results were clear: meaning maps generated by human raters showed a large decrease in the diffeomorphed scene regions, while all three deep saliency models showed a moderate increase in the diffeomorphed scene regions. These results demonstrate that meaning maps reflect local semantic content in scenes while deep saliency models do something else. We conclude that the meaning mapping approach is an effective tool for estimating semantic content in scenes.
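The logic of the test can be illustrated with a small sketch (illustrative only; the maps, region mask, and numbers below are invented, and this is not the authors' analysis code): compare the average map value inside the critical diffeomorphed region for the original versus the transformed scene, separately for a meaning map and a saliency map.

# Illustrative sketch with invented arrays; a semantics-sensitive map should drop
# inside the diffeomorphed region, while an image-feature-driven map need not.
import numpy as np

rng = np.random.default_rng(1)
h, w = 600, 800
region = np.zeros((h, w), dtype=bool)
region[200:300, 300:450] = True              # hypothetical diffeomorphed patch

meaning_original = rng.random((h, w))        # stand-in for a rater-based meaning map
meaning_diffeo = meaning_original.copy()
meaning_diffeo[region] *= 0.3                # meaning reduced where semantics were removed

saliency_original = rng.random((h, w))       # stand-in for deep saliency model output
saliency_diffeo = saliency_original.copy()
saliency_diffeo[region] *= 1.1               # image features preserved (or even boosted)

def region_mean(scene_map, mask):
    # Average map value inside the critical region.
    return float(scene_map[mask].mean())

for name, before, after in [("meaning", meaning_original, meaning_diffeo),
                            ("saliency", saliency_original, saliency_diffeo)]:
    print(f"{name}: region mean {region_mean(before, region):.3f} -> {region_mean(after, region):.3f}")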
Affiliation(s)
- Taylor R Hayes
- Center for Mind and Brain, University of California, Davis, CA, USA.
- John M Henderson
- Center for Mind and Brain, University of California, Davis, CA, USA
- Department of Psychology, University of California, Davis, CA, USA
8
Callahan-Flintoft C, Barentine C, Touryan J, Ries AJ. A Case for Studying Naturalistic Eye and Head Movements in Virtual Environments. Front Psychol 2022; 12:650693. PMID: 35035362; PMCID: PMC8759101; DOI: 10.3389/fpsyg.2021.650693.
Abstract
Using head-mounted displays (HMDs) in conjunction with virtual reality (VR), vision researchers are able to capture more naturalistic vision in an experimentally controlled setting. Namely, eye movements can be accurately tracked as they occur in concert with head movements as subjects navigate virtual environments. A benefit of this approach is that, unlike other mobile eye tracking (ET) set-ups in unconstrained settings, the experimenter has precise control over the location and timing of stimulus presentation, making it easier to compare findings between HMD studies and those that use monitor displays, which account for the bulk of previous work in eye movement research and vision sciences more generally. Here, a visual discrimination paradigm is presented as a proof of concept to demonstrate the applicability of collecting eye and head tracking data from an HMD in VR for vision research. The current work's contribution is threefold: first, results demonstrate both the strengths and the weaknesses of recording and classifying eye and head tracking data in VR; second, a highly flexible graphical user interface (GUI) used to generate the current experiment is offered to lower the software development start-up cost for future researchers transitioning to VR; and finally, the dataset analyzed here, comprising behavioral, eye, and head tracking data synchronized with environmental variables from a task specifically designed to elicit a variety of eye and head movements, could be an asset for testing future eye movement classification algorithms.
Affiliation(s)
- Chloe Callahan-Flintoft
- Humans in Complex System Directorate, United States Army Research Laboratory, Adelphi, MD, United States
- Christian Barentine
- Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO, United States
- Jonathan Touryan
- Humans in Complex System Directorate, United States Army Research Laboratory, Adelphi, MD, United States
- Anthony J Ries
- Humans in Complex System Directorate, United States Army Research Laboratory, Adelphi, MD, United States
- Warfighter Effectiveness Research Center, United States Air Force Academy, Colorado Springs, CO, United States
9
Ren X, Duan H, Min X, Zhu Y, Shen W, Wang L, Shi F, Fan L, Yang X, Zhai G. Where are the Children with Autism Looking in Reality? Artif Intell 2022. DOI: 10.1007/978-3-031-20500-2_48.
10
Window View Access in Architecture: Spatial Visualization and Probability Evaluations Based on Human Vision Fields and Biophilia. Buildings 2021. DOI: 10.3390/buildings11120627.
Abstract
This paper presents a computational method for spatial visualization and probability evaluations of window view access in architecture based on human eyes' vision fields and biophilic recommendations. Window view access establishes occupants' visual connections to the outdoors, yet it has not been discussed in terms of typical vision fields and the related visual experiences. Occupants' views of the outdoors can range from almost blocked and poor to good, wide, and immersive, in relation to the binocular focus through to the monocular (far-)peripheral sight of the human eyes. The proposed methodological framework includes spatial visualizations and cumulative distribution functions of window view access based on occupants' visual experiences. The framework is integrated with biophilic recommendations and existing rating systems for view evaluations. As a pilot study, the method is used to evaluate occupants' view access in a space designed with 15 different configurations of windows and overhangs. Results characterize the likelihood of experiencing various fields of view (FOVs) in the case studies. In particular, window-to-wall-area ratios between 40% and 70% offer optimal distributions of view access in the space, providing a 75% likelihood of experiencing good to wide views and less than a 25% probability of exposure to poor or almost blocked views. The results show the contribution of the proposed method to informative decision-making processes in architecture.
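As a minimal sketch of the probability-evaluation idea (the scores, thresholds, and category labels below are assumptions for illustration, not the paper's implementation), view-access values sampled over a room grid can be binned into view categories and summarized as probabilities, in the spirit of the paper's cumulative-distribution evaluation of window configurations:

# Illustrative sketch with invented view-access scores and category thresholds.
import numpy as np

rng = np.random.default_rng(2)
view_access = rng.beta(2.5, 1.8, size=2000)   # hypothetical scores in [0, 1] over grid points

# Hypothetical category thresholds from "almost blocked" to "immersive".
bins = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
labels = ["almost blocked", "poor", "good", "wide", "immersive"]

counts, _ = np.histogram(view_access, bins=bins)
probabilities = counts / counts.sum()
for label, p in zip(labels, probabilities):
    print(f"P({label} view) = {p:.2f}")

# Probability of at least a "good" view (score >= 0.4 under these assumed thresholds).
print("P(good or better) =", round(float((view_access >= 0.4).mean()), 2))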
11
Henderson JM, Hayes TR, Peacock CE, Rehrig G. Meaning maps capture the density of local semantic features in scenes: A reply to Pedziwiatr, Kümmerer, Wallis, Bethge & Teufel (2021). Cognition 2021; 214:104742. PMID: 33892912; PMCID: PMC11166323; DOI: 10.1016/j.cognition.2021.104742.
Abstract
Pedziwiatr, Kümmerer, Wallis, Bethge, & Teufel (2021) contend that Meaning Maps do not represent the spatial distribution of semantic features in scenes. We argue that Pedziwiatr et al. provide neither logical nor empirical support for that claim, and we conclude that Meaning Maps do what they were designed to do: represent the spatial distribution of meaning in scenes.
Affiliation(s)
- John M Henderson
- Center for Mind and Brain, University of California, Davis, USA; Department of Psychology, University of California, Davis, USA.
- Taylor R Hayes
- Center for Mind and Brain, University of California, Davis, USA
- Candace E Peacock
- Center for Mind and Brain, University of California, Davis, USA; Department of Psychology, University of California, Davis, USA
12
Baror S, He BJ. Spontaneous perception: a framework for task-free, self-paced perception. Neurosci Conscious 2021; 2021:niab016. PMID: 34377535; PMCID: PMC8333690; DOI: 10.1093/nc/niab016.
Abstract
Flipping through social media feeds, viewing exhibitions in a museum, or walking through the botanical gardens, people consistently choose to engage with and disengage from visual content. Yet, in most laboratory settings, the visual stimuli, their presentation duration, and the task at hand are all controlled by the researcher. Such settings largely overlook the spontaneous nature of human visual experience, in which perception takes place independently from specific task constraints and its time course is determined by the observer as a self-governing agent. Currently, much remains unknown about how spontaneous perceptual experiences unfold in the brain. Are all perceptual categories extracted during spontaneous perception? Does spontaneous perception inherently involve volition? Is spontaneous perception segmented into discrete episodes? How do different neural networks interact over time during spontaneous perception? These questions are imperative for understanding our conscious visual experience in daily life. In this article, we propose a framework for spontaneous perception. We first define spontaneous perception as a task-free and self-paced experience. We propose that spontaneous perception is guided by four organizing principles that grant it temporal and spatial structures. These principles include coarse-to-fine processing, continuity and segmentation, agency and volition, and associative processing. We provide key suggestions illustrating how these principles may interact with one another in guiding the multifaceted experience of spontaneous perception. We point to testable predictions derived from this framework, including (but not limited to) the roles of the default-mode network and slow cortical potentials in supporting spontaneous perception. We conclude by suggesting several outstanding questions for future research, extending the relevance of this framework to consciousness and spontaneous brain activity. In conclusion, the spontaneous perception framework proposed herein integrates components in human perception and cognition, which have been traditionally studied in isolation, and opens the door to understanding how visual perception unfolds in its most natural context.
Affiliation(s)
- Shira Baror
- Neuroscience Institute, New York University School of Medicine, 435 E 30th Street, New York, NY 10016, USA
- Biyu J He
- Neuroscience Institute, New York University School of Medicine, 435 E 30th Street, New York, NY 10016, USA