1. Gerharz L, Brenner E, Billino J, Voudouris D. Age effects on predictive eye movements for action. J Vis 2024; 24:8. PMID: 38856982; PMCID: PMC11166221; DOI: 10.1167/jov.24.6.8
Abstract
When interacting with the environment, humans typically shift their gaze to where useful information for the upcoming action is to be found. With increasing age, people become slower both in processing sensory information and in performing their movements. One way to compensate for this slowing could be to rely more on predictive strategies. To examine whether we could find evidence for this, we asked younger (19-29 years) and older (55-72 years) healthy adults to perform a reaching task wherein they hit a visual target that appeared at one of two possible locations. In separate blocks of trials, the target could appear always at the same location (predictable), mainly at one of the locations (biased), or at either location randomly (unpredictable). As one might expect, saccades toward predictable targets had shorter latencies than those toward less predictable targets, irrespective of age. Older adults took longer than younger adults to initiate saccades toward the target location, even when the likely target location could be deduced, so we found no evidence of them relying more on predictive gaze. Moreover, both younger and older participants performed more saccades when the target location was less predictable, but again no age-related differences were found. Thus, we found no tendency for older adults to rely more on prediction.
Affiliation(s)
- Leonard Gerharz: Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany. https://orcid.org/0009-0006-0487-2609
- Eli Brenner: Department of Human Movement Science, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Jutta Billino: Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
- Dimitris Voudouris: Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany
2. Stein N, Watson T, Lappe M, Westendorf M, Durant S. Eye and head movements in visual search in the extended field of view. Sci Rep 2024; 14:8907. PMID: 38632334; PMCID: PMC11023950; DOI: 10.1038/s41598-024-59657-5
Abstract
In natural environments, head movements are required to search for objects outside the field of view (FoV). Here we investigate whether a salient target in an extended visual search array is detected faster once a head movement brings it into the FoV. We conducted two virtual reality experiments using spatially clustered sets of stimuli to observe target detection and head and eye movements during visual search. Participants completed search tasks with three conditions: (1) the target was in the initial FoV, (2) a head movement was needed to bring the target into the FoV, and (3) as in condition 2, but the periphery was initially hidden and appeared only after the head movement had brought the location of the target set into the FoV. We measured search time until participants found a more salient (O) or less salient (T) target among distractors (L). On average, O's were found faster than T's. Gaze analysis showed that saliency facilitated search, by guiding it toward the target, only if the target was within the initial FoV. When targets required a head movement to enter the FoV, participants followed the same search strategy as in trials without a visible target in the periphery. Moreover, faster search times for salient targets were attributable only to the time required to find the target once the target set was reached. This suggests that the effect of stimulus saliency differs between visual search on fixed displays and active search through an extended visual field.
Affiliation(s)
- Niklas Stein: Institute for Psychology, University of Münster, 48143 Münster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, 48143 Münster, Germany
- Tamara Watson: MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW 2751, Australia
- Markus Lappe: Institute for Psychology, University of Münster, 48143 Münster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, 48143 Münster, Germany
- Maren Westendorf: Institute for Psychology, University of Münster, 48143 Münster, Germany
- Szonya Durant: Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
3. Beitner J, Helbing J, David EJ, Võ MLH. Using a flashlight-contingent window paradigm to investigate visual search and object memory in virtual reality and on computer screens. Sci Rep 2024; 14:8596. PMID: 38615047; DOI: 10.1038/s41598-024-58941-8
Abstract
A popular technique to modulate visual input during search is the use of gaze-contingent windows. However, these are often rather discomforting, as they give the impression of visual impairment. To counteract this, we asked participants in this study to search through illuminated as well as dark three-dimensional scenes using a more naturalistic flashlight with which they could illuminate the rooms. In a surprise incidental memory task, we tested the identities and locations of objects encountered during search. Importantly, we tested this study design both in immersive virtual reality (VR; Experiment 1) and on a desktop computer screen (Experiment 2). As hypothesized, searching with a flashlight increased search difficulty and memory usage during search. We found a memory benefit for identities of distractors in the flashlight condition in VR, but not in the computer screen experiment. Surprisingly, location memory was comparable across search conditions despite the enormous difference in visual input. Subtle differences across experiments appeared in VR only after accounting for previous recognition performance, hinting at a benefit of flashlight search in VR. Our findings highlight that removing visual information does not necessarily impair location memory, and that screen experiments using virtual environments can elicit the same major effects as VR setups.
Affiliation(s)
- Julia Beitner: Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Jason Helbing: Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Erwan Joël David: Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany; LIUM, Le Mans Université, Le Mans, France
- Melissa Lê-Hoa Võ: Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
4. Malpica S, Martin D, Serrano A, Gutierrez D, Masia B. Task-Dependent Visual Behavior in Immersive Environments: A Comparative Study of Free Exploration, Memory and Visual Search. IEEE Transactions on Visualization and Computer Graphics 2023; 29:4417-4425. PMID: 37788210; DOI: 10.1109/tvcg.2023.3320259
Abstract
Visual behavior depends on both bottom-up mechanisms, where gaze is driven by the visual conspicuity of the stimuli, and top-down mechanisms, guiding attention towards relevant areas based on the task or goal of the viewer. While this is well-known, visual attention models often focus on bottom-up mechanisms. Existing works have analyzed the effect of high-level cognitive tasks like memory or visual search on visual behavior; however, they have often done so with different stimuli, methodology, metrics and participants, which makes drawing conclusions and comparisons between tasks particularly difficult. In this work we present a systematic study of how different cognitive tasks affect visual behavior in a novel within-subjects design scheme. Participants performed free exploration, memory and visual search tasks in three different scenes while their eye and head movements were being recorded. We found significant, consistent differences between tasks in the distributions of fixations, saccades and head movements. Our findings can provide insights for practitioners and content creators designing task-oriented immersive applications.
5. Nachtnebel SJ, Cambronero-Delgadillo AJ, Helmers L, Ischebeck A, Höfler M. The impact of different distractions on outdoor visual search and object memory. Sci Rep 2023; 13:16700. PMID: 37794077; PMCID: PMC10551016; DOI: 10.1038/s41598-023-43679-6
Abstract
We investigated whether and how different types of search distractions affect visual search behavior and target memory while participants searched in a real-world environment. They searched either undistracted (control condition), listened to a podcast (auditory distraction), counted down aloud at intervals of three while searching (executive working memory load), or were forced to stop the search on half of the trials (time pressure). In line with findings from laboratory settings, participants searched longer but made fewer errors when the target was absent than when it was present, regardless of distraction condition. Furthermore, compared to the auditory distraction condition, the executive working memory load led to higher error rates (but not longer search times). In a surprise memory test after the end of the search tasks, recognition was better for previously present targets than for absent targets. Again, this held regardless of the previous distraction condition, although participants in the executive working memory load condition remembered significantly fewer targets than those in the control condition. The findings suggest that executive working memory load, but likely not auditory distraction or time pressure, affected visual search performance and target memory in a real-world environment.
Affiliation(s)
- Linda Helmers: Department of Psychology, University of Graz, Universitätsplatz 2/III, 8010 Graz, Austria
- Anja Ischebeck: Department of Psychology, University of Graz, Universitätsplatz 2/III, 8010 Graz, Austria
- Margit Höfler: Department of Psychology, University of Graz, Universitätsplatz 2/III, 8010 Graz, Austria; Department for Dementia Research, University for Continuing Education Krems, Dr.-Karl-Dorrek-Straße 30, 3500 Krems, Austria
6. Draschkow D, Anderson NC, David E, Gauge N, Kingstone A, Kumle L, Laurent X, Nobre AC, Shiels S, Võ MLH. Using XR (Extended Reality) for Behavioral, Clinical, and Learning Sciences Requires Updates in Infrastructure and Funding. Policy Insights from the Behavioral and Brain Sciences 2023; 10:317-323. PMID: 37900910; PMCID: PMC10602770; DOI: 10.1177/23727322231196305
Abstract
Extended reality (XR, including augmented and virtual reality) creates a powerful intersection between information technology and cognitive, clinical, and education sciences. XR technology has long captured the public imagination, and its development is the focus of major technology companies. This article demonstrates the potential of XR to (1) deliver behavioral insights, (2) transform clinical treatments, and (3) improve learning and education. However, without appropriate policy, funding, and infrastructural investment, many research institutions will struggle to keep pace with the advances and opportunities of XR. To realize the full potential of XR for basic and translational research, funding should incentivize (1) appropriate training, (2) open software solutions, and (3) collaborations between complementary academic and industry partners. Bolstering the XR research infrastructure with the right investments and incentives is vital for delivering on the potential for transformative discoveries, innovations, and applications.
Affiliation(s)
- Dejan Draschkow: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK; Department of Experimental Psychology, University of Oxford, Oxford, UK
- Nicola C. Anderson: Department of Psychology, University of British Columbia, Vancouver, Canada
- Erwan David: Department of Psychology, Scene Grammar Lab, Goethe University Frankfurt, Frankfurt am Main, Germany
- Nathan Gauge: OxSTaR Oxford Simulation Teaching and Research, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Alan Kingstone: Department of Psychology, University of British Columbia, Vancouver, Canada
- Levi Kumle: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK; Department of Experimental Psychology, University of Oxford, Oxford, UK
- Xavier Laurent: Centre for Teaching and Learning, University of Oxford, Oxford, UK
- Anna C. Nobre: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK; Department of Experimental Psychology, University of Oxford, Oxford, UK; Wu Tsai Institute, Yale University, New Haven, USA
- Sally Shiels: OxSTaR Oxford Simulation Teaching and Research, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Melissa L.-H. Võ: Department of Psychology, Scene Grammar Lab, Goethe University Frankfurt, Frankfurt am Main, Germany
7. Kondyli V, Bhatt M, Levin D, Suchan J. How do drivers mitigate the effects of naturalistic visual complexity? On attentional strategies and their implications under a change blindness protocol. Cogn Res Princ Implic 2023; 8:54. PMID: 37556047; PMCID: PMC10412523; DOI: 10.1186/s41235-023-00501-1
Abstract
How do the limits of high-level visual processing affect human performance in naturalistic, dynamic settings of (multimodal) interaction, where observers can draw on experience to strategically adapt attention to familiar forms of complexity? Against this backdrop, we investigate change detection in a driving context to study attentional allocation aimed at overcoming environmental complexity and temporal load. Results indicate that visuospatial complexity substantially increases change blindness, but also that participants effectively respond to this load by increasing their focus on safety-relevant events, by adjusting their driving, and by avoiding non-productive forms of attentional elaboration, thereby also controlling "looked-but-failed-to-see" errors. Furthermore, analyses of gaze patterns reveal that drivers occasionally, but effectively, limit attentional monitoring and lingering for irrelevant changes. Overall, the experimental outcomes reveal how drivers exhibit effective attentional compensation in highly complex situations. Our findings uncover implications for driving education and the development of driving skill-testing methods, as well as for human-factors-guided development of AI-based driving assistance systems.
Affiliation(s)
- Vasiliki Kondyli: CoDesign Lab EU (codesign-lab.org), Örebro University, Örebro, Sweden
- Mehul Bhatt: CoDesign Lab EU (codesign-lab.org), Örebro University, Örebro, Sweden
- Jakob Suchan: German Aerospace Center (DLR), Institute of Systems Engineering for Future Mobility, Oldenburg, Germany
8. Chawoush B, Draschkow D, van Ede F. Capacity and selection in immersive visual working memory following naturalistic object disappearance. J Vis 2023; 23:9. PMID: 37548958; PMCID: PMC10411649; DOI: 10.1167/jov.23.8.9
Abstract
Visual working memory (holding past visual information in mind for upcoming behavior) is commonly studied following the abrupt removal of visual objects from static two-dimensional (2D) displays. In everyday life, visual objects do not typically vanish from the environment in front of us. Rather, visual objects tend to enter working memory following self or object motion: disappearing from view gradually and changing the spatial relation between memoranda and observer. Here, we used virtual reality (VR) to investigate whether two classic findings from visual working memory research (a capacity of around three objects and the reliance on space for object selection) generalize to more naturalistic modes of object disappearance. Our static reference condition mimicked traditional laboratory tasks whereby visual objects were held static in front of the participant and removed from view abruptly. In our critical flow condition, the same visual objects flowed past participants, disappearing from view gradually and behind the observer. We considered visual working memory performance and capacity, as well as space-based mnemonic selection, indexed by directional biases in gaze. Despite vastly distinct modes of object disappearance and altered spatial relations between memoranda and observer, we found comparable capacity and comparable gaze signatures of space-based mnemonic selection. This finding reveals how classic findings from visual working memory research generalize to immersive situations with more naturalistic modes of object disappearance and with dynamic spatial relations between memoranda and observer.
Affiliation(s)
- Babak Chawoush: Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Dejan Draschkow: Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Freek van Ede: Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
9. Martin D, Sun X, Gutierrez D, Masia B. A Study of Change Blindness in Immersive Environments. IEEE Transactions on Visualization and Computer Graphics 2023; PP:2446-2455. PMID: 37027712; DOI: 10.1109/tvcg.2023.3247102
Abstract
Human performance is poor at detecting certain changes in a scene, a phenomenon known as change blindness. Although the exact reasons for this effect are not yet completely understood, there is a consensus that it is due to our constrained attention and memory capacity: we create our own mental, structured representation of what surrounds us, but that representation is limited and imprecise. Previous efforts investigating this effect have focused on 2D images; however, there are significant differences regarding attention and memory between 2D images and the viewing conditions of daily life. In this work, we present a systematic study of change blindness using immersive 3D environments, which offer more natural viewing conditions closer to our daily visual experience. We devise two experiments: first, we focus on analyzing how different change properties (namely type, distance, complexity, and field of view) may affect change blindness. We then further explore its relation to the capacity of our visual working memory and conduct a second experiment analyzing the influence of the number of changes. Besides deepening our understanding of the change blindness effect, our results may be leveraged in several VR applications such as redirected walking, games, or studies on saliency or attention prediction.
10. Bischof WF, Anderson NC, Kingstone A. Eye and head movements while encoding and recognizing panoramic scenes in virtual reality. PLoS One 2023; 18:e0282030. PMID: 36800398; PMCID: PMC9937482; DOI: 10.1371/journal.pone.0282030
Abstract
One approach to studying the recognition of scenes and objects relies on the comparison of eye movement patterns during encoding and recognition. Past studies typically analyzed the perception of flat stimuli of limited extent presented on a computer monitor that did not require head movements. In contrast, participants in the present study saw omnidirectional panoramic scenes through an immersive 3D virtual reality viewer, and they could move their head freely to inspect different parts of the visual scenes. This allowed us to examine how unconstrained observers use their head and eyes to encode and recognize visual scenes. By studying head and eye movement within a fully immersive environment, and applying cross-recurrence analysis, we found that eye movements are strongly influenced by the content of the visual environment, as are head movements, though to a much lesser degree. Moreover, we found that the head and eyes are linked, with the head supporting, and by and large mirroring, the movements of the eyes, consistent with the notion that the head operates to support the acquisition of visual information by the eyes.
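The cross-recurrence analysis mentioned here can be illustrated with a minimal sketch: given two direction time series (eye and head), mark the time pairs at which the two signals point in similar directions. This is an illustrative implementation under assumed inputs (unit direction vectors, a 5-degree angular threshold), not the authors' code:

```python
import numpy as np

def cross_recurrence(eye_dirs, head_dirs, threshold_deg=5.0):
    """Binary cross-recurrence matrix for two unit-vector direction
    series of shape [T, 3]. Entry (i, j) is 1 when the eye direction at
    time i and the head direction at time j are within threshold_deg."""
    # Cosine of the angle between every pair of direction vectors
    cos_angles = np.clip(eye_dirs @ head_dirs.T, -1.0, 1.0)
    angles = np.degrees(np.arccos(cos_angles))
    return (angles <= threshold_deg).astype(int)

# Synthetic example: the head roughly follows the eyes with a lag
rng = np.random.default_rng(0)
eye = rng.normal(size=(200, 3))
eye /= np.linalg.norm(eye, axis=1, keepdims=True)
head = np.roll(eye, 10, axis=0)  # head lags the eyes by 10 samples
crm = cross_recurrence(eye, head)
print("recurrence rate:", crm.mean())  # proportion of recurrent points
```

Lag structure in such a matrix (e.g., a bright off-center diagonal) is what reveals the head lagging behind, and mirroring, the eyes.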
Affiliation(s)
- Walter F. Bischof: Department of Psychology, University of British Columbia, Vancouver, BC, Canada
- Nicola C. Anderson: Department of Psychology, University of British Columbia, Vancouver, BC, Canada
- Alan Kingstone: Department of Psychology, University of British Columbia, Vancouver, BC, Canada
11. Active visual search in naturalistic environments reflects individual differences in classic visual search performance. Sci Rep 2023; 13:631. PMID: 36635491; PMCID: PMC9837148; DOI: 10.1038/s41598-023-27896-7
Abstract
Visual search is a ubiquitous activity in real-world environments. Yet, traditionally, visual search is investigated in tightly controlled paradigms, where head-restricted participants locate a minimalistic target in a cluttered array presented on a computer screen. Do traditional visual search tasks predict performance in naturalistic settings, where participants actively explore complex, real-world scenes? Here, we leverage advances in virtual reality technology to test the degree to which classic and naturalistic search are limited by a common factor, set size, and the degree to which individual differences in classic search behavior predict naturalistic search behavior in a large sample of individuals (N = 75). In a naturalistic search task, participants looked for an object within their environment via a combination of head turns and eye movements using a head-mounted display. Then, in a classic search task, participants searched for a target within a simple array of colored letters using only eye movements. In each task, we found that participants' search performance was impacted by increases in set size, the number of items in the visual display. Critically, we observed that participants' efficiency in classic search tasks (the degree to which set size slowed performance) indeed predicted efficiency in real-world scenes. These results demonstrate that classic, computer-based visual search tasks are excellent models of active, real-world search behavior.
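Search efficiency in this sense is conventionally quantified as the slope of response time against set size. A minimal sketch of that computation, with hypothetical numbers, not data from the study:

```python
import numpy as np

def search_slope(set_sizes, response_times):
    """Search efficiency as the least-squares slope (ms per item) of
    response time regressed on set size."""
    slope, _intercept = np.polyfit(set_sizes, response_times, deg=1)
    return slope

# Hypothetical participant: mean RTs (ms) at three set sizes
set_sizes = np.array([4, 8, 16])
rts_classic = np.array([620.0, 710.0, 905.0])
print(f"classic search slope: {search_slope(set_sizes, rts_classic):.1f} ms/item")
# Per-participant slopes from the classic task could then be correlated
# with slopes from the naturalistic task, e.g. with
# scipy.stats.pearsonr(classic_slopes, naturalistic_slopes).
```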
12. Moskowitz JB, Berger SA, Fooken J, Castelhano MS, Gallivan JP, Flanagan JR. The influence of movement-related costs when searching to act and acting to search. J Neurophysiol 2023; 129:115-130. PMID: 36475897; DOI: 10.1152/jn.00305.2022
Abstract
Real-world search behavior often involves limb movements, either during search or after search. Here we investigated whether movement-related costs influence search behavior in two kinds of search tasks. In our visual search tasks, participants made saccades to find a target object among distractors and then moved a cursor, controlled by the handle of a robotic manipulandum, to the target. In our manual search tasks, participants moved the cursor to perform the search, placing it onto objects to reveal their identity as either a target or a distractor. In all tasks, there were multiple targets. Across experiments, we manipulated either the effort or time costs associated with movement such that these costs varied across the search space. We varied effort by applying different resistive forces to the handle, and we varied time costs by altering the speed of the cursor. Our analysis of cursor and eye movements during manual and visual search, respectively, showed that effort influenced manual search but did not influence visual search. In contrast, time costs influenced both visual and manual search. Our results demonstrate that, in addition to perceptual and cognitive factors, movement-related costs can also influence search behavior.

New & Noteworthy: Numerous studies have investigated the perceptual and cognitive factors that influence decision making about where to look, or move, in search tasks. However, little is known about how search is influenced by movement-related costs associated with acting on an object once it has been visually located, or by costs incurred while acting during manual search. In this article, we show that movement time costs can bias visual and manual search and that movement effort costs bias manual search.
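One way to make the cost manipulation concrete is a toy decision rule that selects among equally valid targets by minimizing a weighted combination of movement time and effort. The weights and values below are hypothetical illustrations, not parameters or a model from the study:

```python
def movement_cost(time_s, effort, w_time=1.0, w_effort=0.5):
    """Combined movement-related cost as a weighted sum of time and
    effort; the weights are free parameters chosen for illustration."""
    return w_time * time_s + w_effort * effort

# Hypothetical candidate targets: (reach time in s, resistive-force effort)
targets = {"near, low force": (0.4, 1.0),
           "near, high force": (0.4, 3.0),
           "far, low force": (0.9, 1.0)}
best = min(targets, key=lambda t: movement_cost(*targets[t]))
print("lowest-cost target:", best)
```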
Affiliation(s)
- Joshua B Moskowitz: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Sarah A Berger: Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Jolande Fooken: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Monica S Castelhano: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Psychology, Queen's University, Kingston, Ontario, Canada
- Jason P Gallivan: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Psychology, Queen's University, Kingston, Ontario, Canada; Department of Biomedical and Molecular Sciences, Queen's University, Kingston, Ontario, Canada
- J Randall Flanagan: Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Department of Psychology, Queen's University, Kingston, Ontario, Canada
13. Lin TC, Krishnan AU, Li Z. Perception-Motion Coupling in Active Telepresence: Human Behavior and Teleoperation Interface Design. ACM Transactions on Human-Robot Interaction 2022. DOI: 10.1145/3571599
Abstract
Teleoperation enables complex robot platforms to perform tasks beyond the scope of current state-of-the-art robot autonomy by imparting human intelligence and critical thinking to these operations. For seamless control of robot platforms, it is essential to give the operator optimal situational awareness of the workspace through active telepresence cameras. However, controlling these active telepresence cameras adds an additional degree of complexity to the task of teleoperation. In this paper, we present results from a user study that investigates: (1) how the teleoperator learns or adapts to performing the tasks via active cameras modeled after camera placements on the TRINA humanoid robot; (2) the perception-action coupling operators implement to control active telepresence cameras; and (3) the camera preferences for performing the tasks. These findings from the human motion analysis and post-study survey will help us determine desired design features for robot teleoperation interfaces and assistive autonomy.
Affiliation(s)
- Tsung-Chi Lin: Worcester Polytechnic Institute, Robotics Engineering
- Zhi Li: Worcester Polytechnic Institute, Robotics Engineering
14. Rubin M, Muller K, Hayhoe MM, Telch MJ. Attention guidance augmentation of virtual reality exposure therapy for social anxiety disorder: a pilot randomized controlled trial. Cogn Behav Ther 2022; 51:371-387. PMID: 35383544; PMCID: PMC9458616; DOI: 10.1080/16506073.2022.2053882
Abstract
Biased attention to social threats has been implicated in social anxiety disorder. Modifying visual attention during exposure therapy offers a direct test of this mechanism. We developed and tested a brief virtual reality exposure therapy (VRET) protocol using 360° video and eye tracking. Participants (N = 21) were randomized to either standard VRET or VRET + attention guidance training (AGT). Multilevel Bayesian models were used to test (1) whether there was an effect of condition over time and (2) whether post-treatment changes in gaze patterns mediated the effect of condition at follow-up. There was a large overall effect of the intervention on symptoms of social anxiety, as well as an effect of the AGT augmentation on changes in visual attention to audience members. There was weak evidence against an effect of condition on fear of public speaking and weak evidence supporting a mediation effect; however, these estimates were strongly influenced by model priors. Taken together, our findings suggest that attention can be modified within and during VRET and that modification of visual gaze avoidance may be causally linked to reductions in social anxiety. Replication with a larger sample size is needed.
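A multilevel Bayesian model of the kind described (a condition-by-time effect with participant-level intercepts) could be sketched as below. This is an illustrative sketch with hypothetical data, variable names, and priors, not the authors' analysis code:

```python
import numpy as np
import pymc as pm  # PyMC v5 API assumed

n_subj = 21
rng = np.random.default_rng(1)
subj = np.repeat(np.arange(n_subj), 3)            # participant index
time = np.tile(np.arange(3), n_subj)              # 0=pre, 1=post, 2=follow-up
cond = np.repeat(rng.integers(0, 2, size=n_subj), 3)  # 0=VRET, 1=VRET+AGT
anxiety = rng.normal(50 - 5 * time - 3 * cond * time, 5)  # fake outcome

with pm.Model():
    # Participant-level random intercepts
    subj_sd = pm.HalfNormal("subj_sd", 10)
    subj_int = pm.Normal("subj_int", 0, subj_sd, shape=n_subj)
    # Fixed effects: time, condition, and their interaction
    intercept = pm.Normal("intercept", 50, 20)
    b_time = pm.Normal("b_time", 0, 10)
    b_cond = pm.Normal("b_cond", 0, 10)
    b_interaction = pm.Normal("b_interaction", 0, 10)
    mu = (intercept + subj_int[subj]
          + b_time * time + b_cond * cond + b_interaction * cond * time)
    sigma = pm.HalfNormal("sigma", 10)
    pm.Normal("y", mu, sigma, observed=anxiety)
    idata = pm.sample(1000, tune=1000)  # posterior for the condition-by-time effect
```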
Affiliation(s)
- Mikael Rubin: Department of Psychology, The University of Texas at Austin, TX, USA
- Karl Muller: Center for Perceptual Systems, The University of Texas at Austin, TX, USA
- Mary M. Hayhoe: Center for Perceptual Systems, The University of Texas at Austin, TX, USA
- Michael J. Telch: Department of Psychology, The University of Texas at Austin, TX, USA
15. Helbing J, Draschkow D, Võ MLH. Auxiliary Scene-Context Information Provided by Anchor Objects Guides Attention and Locomotion in Natural Search Behavior. Psychol Sci 2022; 33:1463-1476. PMID: 35942922; DOI: 10.1177/09567976221091838
Abstract
Successful adaptive behavior requires efficient attentional and locomotive systems. Previous research has thoroughly investigated how we achieve this efficiency during natural behavior by exploiting prior knowledge related to targets of our actions (e.g., attending to metallic targets when looking for a pot) and to the environmental context (e.g., looking for the pot in the kitchen). Less is known about whether and how individual nontarget components of the environment support natural behavior. In our immersive virtual reality task, 24 adult participants searched for objects in naturalistic scenes in which we manipulated the presence and arrangement of large, static objects that anchor predictions about targets (e.g., the sink provides a prediction for the location of the soap). Our results show that gaze and body movements in this naturalistic setting are strongly guided by these anchors. These findings demonstrate that objects auxiliary to the target are incorporated into the representations guiding attention and locomotion.
Affiliation(s)
- Jason Helbing: Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
- Dejan Draschkow: Brain and Cognition Laboratory, Department of Experimental Psychology, University of Oxford; Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford
- Melissa L-H Võ: Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
16. Draschkow D, Nobre AC, van Ede F. Multiple spatial frames for immersive working memory. Nat Hum Behav 2022; 6:536-544. PMID: 35058640; PMCID: PMC7612679; DOI: 10.1038/s41562-021-01245-y
Abstract
As we move around, relevant information that disappears from sight can still be held in working memory to serve upcoming behaviour. How we maintain and select visual information as we move through the environment remains poorly understood because most laboratory tasks of working memory rely on removing visual material while participants remain still. We used virtual reality to study visual working memory following self-movement in immersive environments. Directional biases in gaze revealed the recruitment of more than one spatial frame for maintaining and selecting memoranda following self-movement. The findings bring the important realization that multiple spatial frames support working memory in natural behaviour. The results also illustrate how virtual reality can be a critical experimental tool to characterize this core memory system.
Affiliation(s)
- Dejan Draschkow: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK; Department of Experimental Psychology, University of Oxford, Oxford, UK
- Anna C Nobre: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK; Department of Experimental Psychology, University of Oxford, Oxford, UK
- Freek van Ede: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK; Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
17. Selective visual attention during public speaking in an immersive context. Atten Percept Psychophys 2022; 84:396-407. PMID: 35064557; PMCID: PMC8993214; DOI: 10.3758/s13414-021-02430-x
Abstract
It has recently become feasible to study selective visual attention to social cues in increasingly ecologically valid ways. In this secondary analysis, we examined gaze behavior in response to the actions of others in a social context. Participants (N = 84) were asked to give a 5-minute speech to a five-member audience that had been filmed in 360° video, displayed in a virtual reality headset containing a built-in eye tracker. Audience members were coached to make movements that would indicate interest or lack of interest (e.g., nodding vs. looking away). The goal of this paper was to analyze whether these actions influenced the speaker's gaze. We found that participants showed reliable evidence of gaze towards audience member actions in general, and towards audience member actions involving their phone specifically (compared with other actions like looking away or leaning back). However, there were no differences in gaze towards actions reflecting interest (like nodding) compared with actions reflecting lack of interest (like looking away). Participants were more likely to look away from audience member actions as well, but there were no specific actions that elicited looking away more or less. Taken together, these findings suggest that the actions of audience members are broadly influential in motivating gaze behaviors in a realistic, contextually embedded (public speaking) setting. Further research is needed to examine the ways in which these findings can be elucidated in more controlled laboratory environments as well as in the real world.
18. Enders LR, Smith RJ, Gordon SM, Ries AJ, Touryan J. Gaze Behavior During Navigation and Visual Search of an Open-World Virtual Environment. Front Psychol 2021; 12:681042. PMID: 34434140; PMCID: PMC8380848; DOI: 10.3389/fpsyg.2021.681042
Abstract
Eye tracking has been an essential tool within the vision science community for many years. However, the majority of studies involving eye-tracking technology employ a relatively passive approach through the use of static imagery, prescribed motion, or video stimuli. This is in contrast to our everyday interaction with the natural world, where we navigate our environment while actively seeking and using task-relevant visual information. For this reason, an increasing number of vision researchers are employing virtual environment platforms, which offer interactive, realistic visual environments while maintaining a substantial level of experimental control. Here, we recorded eye movement behavior while subjects freely navigated through a rich, open-world virtual environment. Within this environment, subjects completed a visual search task in which they were asked to find and count occurrences of specific targets among numerous distractor items. We assigned each participant to one of four target conditions: Humvees, motorcycles, aircraft, or furniture. Our results show a statistically significant relationship between gaze behavior and target objects across target conditions, with increased visual attention toward assigned targets. Specifically, we see an increase in the number of fixations and an increase in dwell time on target relative to distractor objects. In addition, we included a divided attention task to investigate how search changed with the addition of a secondary task. With increased cognitive load, subjects slowed their speed, decreased gaze on objects, and increased the number of objects scanned in the environment. Overall, our results confirm previous findings and support the use of complex virtual environments for active visual search experimentation, maintaining a high level of precision in the quantification of gaze information and visual attention. This study contributes to our understanding of how individuals search for information in a naturalistic (open-world) virtual environment. Likewise, our paradigm provides an intriguing look into the heterogeneity of individual behaviors when completing an untimed visual search task while actively navigating.
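Fixation counts and dwell times of the kind reported here can be aggregated from object-labeled fixation data. A minimal sketch with hypothetical record fields, not the study's pipeline:

```python
from collections import defaultdict

# Hypothetical fixation records: (object_id, is_target, duration_ms)
fixations = [("humvee_1", True, 350), ("tree_4", False, 180),
             ("humvee_1", True, 420), ("bench_2", False, 150)]

stats = defaultdict(lambda: {"n_fix": 0, "dwell_ms": 0})
for obj, is_target, dur in fixations:
    key = "target" if is_target else "distractor"
    stats[key]["n_fix"] += 1      # number of fixations per object class
    stats[key]["dwell_ms"] += dur  # total dwell time per object class

for key, s in stats.items():
    print(key, "fixations:", s["n_fix"], "total dwell (ms):", s["dwell_ms"])
```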
Affiliation(s)
- Anthony J Ries: DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, United States; Warfighter Effectiveness Research Center, U.S. Air Force Academy, Colorado Springs, CO, United States
- Jonathan Touryan: DEVCOM Army Research Laboratory, Aberdeen Proving Ground, MD, United States
19. David EJ, Beitner J, Võ MLH. The importance of peripheral vision when searching 3D real-world scenes: A gaze-contingent study in virtual reality. J Vis 2021; 21:3. PMID: 34251433; PMCID: PMC8287039; DOI: 10.1167/jov.21.7.3
Abstract
Visual search in natural scenes is a complex task relying on peripheral vision to detect potential targets and central vision to verify them. This segregation of roles between the visual fields has been established largely by on-screen experiments. We conducted a gaze-contingent experiment in virtual reality in order to test how the perceived roles of central and peripheral vision translate to more natural settings. The use of everyday scenes in virtual reality allowed us to study visual attention by implementing a fairly ecological protocol that cannot be implemented in the real world. Central or peripheral vision was masked during visual search, with target objects selected according to scene semantic rules. Analyzing the resulting search behavior, we found that target objects that were not spatially constrained to a probable location within the scene impacted search measures negatively. Our results diverge from on-screen studies in that search performance was only slightly affected by central vision loss. In particular, a central mask did not impact verification times when the target was grammatically constrained to an anchor object. Our findings demonstrate that the role of central vision (up to 6 degrees of eccentricity) in identifying objects in natural scenes seems to be minor, while the role of peripheral preprocessing of targets in immersive real-world searches may have been underestimated by on-screen experiments.
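The gaze-contingent masking logic amounts to an eccentricity test run every frame: hide content inside (central mask) or outside (peripheral mask) a gaze-centered window. A minimal sketch under assumed unit-vector inputs; the 6-degree radius follows the eccentricity mentioned above, but the function names and interface are illustrative:

```python
import numpy as np

def angular_eccentricity_deg(gaze_dir, point_dir):
    """Angle (degrees) between the gaze direction and the direction to a
    scene point; both are unit vectors in the same coordinate frame."""
    cos_a = np.clip(np.dot(gaze_dir, point_dir), -1.0, 1.0)
    return np.degrees(np.arccos(cos_a))

def is_masked(gaze_dir, point_dir, mode, radius_deg=6.0):
    """True if a scene point should be hidden this frame.
    mode='central' hides everything within radius_deg of gaze;
    mode='peripheral' hides everything outside it."""
    ecc = angular_eccentricity_deg(gaze_dir, point_dir)
    return ecc <= radius_deg if mode == "central" else ecc > radius_deg

gaze = np.array([0.0, 0.0, 1.0])  # looking straight ahead
point = np.array([np.sin(np.radians(10)), 0.0, np.cos(np.radians(10))])
print(is_masked(gaze, point, "central"))     # False: 10 deg lies outside the mask
print(is_masked(gaze, point, "peripheral"))  # True: the periphery is hidden
```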
Affiliation(s)
- Erwan Joël David: Department of Psychology, Goethe-Universität, Frankfurt, Germany
- Julia Beitner: Department of Psychology, Goethe-Universität, Frankfurt, Germany
20. Sullivan B, Ludwig CJH, Damen D, Mayol-Cuevas W, Gilchrist ID. Look-ahead fixations during visuomotor behavior: Evidence from assembling a camping tent. J Vis 2021; 21:13. PMID: 33688920; PMCID: PMC7961111; DOI: 10.1167/jov.21.3.13
Abstract
Eye movements can support ongoing manipulative actions, but a class of so-called look-ahead fixations (LAFs) is related to future tasks. We examined LAFs in a complex natural task: assembling a camping tent. Tent assembly is a relatively uncommon task and requires the completion of multiple subtasks in sequence over a 5- to 20-minute duration. Participants wore a head-mounted camera and eye tracker, and subtasks and LAFs were annotated. We document four novel aspects of LAFs. First, LAFs were not random: their frequency was biased toward certain objects and subtasks. Second, latencies were larger than previously noted, with 35% of LAFs occurring within 10 seconds before motor manipulation and 75% within 100 seconds. Third, LAF behavior extends far into future subtasks: only 47% of LAFs were made to objects relevant to the current subtask, while 75% were to objects used within the next five steps. Last, LAFs were often directed repeatedly to the target before manipulation, suggesting memory volatility. LAFs with short fixation-action latencies have been hypothesized to benefit future visual search and/or motor manipulation. However, the diversity of LAFs suggests that they may also reflect scene exploration and task relevance, as well as longer-term problem solving and task planning.
Affiliation(s)
- Brian Sullivan: School of Psychological Sciences, University of Bristol, Bristol, UK
- Dima Damen: Department of Computer Science, University of Bristol, Bristol, UK
- Iain D Gilchrist: School of Psychological Sciences, University of Bristol, Bristol, UK
21. Kristjánsson Á, Draschkow D. Keeping it real: Looking beyond capacity limits in visual cognition. Atten Percept Psychophys 2021; 83:1375-1390. PMID: 33791942; PMCID: PMC8084831; DOI: 10.3758/s13414-021-02256-7
Abstract
Research within visual cognition has made tremendous strides in uncovering the basic operating characteristics of the visual system by reducing the complexity of natural vision to artificial but well-controlled experimental tasks and stimuli. This reductionist approach has for example been used to assess the basic limitations of visual attention, visual working memory (VWM) capacity, and the fidelity of visual long-term memory (VLTM). The assessment of these limits is usually made in a pure sense, irrespective of goals, actions, and priors. While it is important to map out the bottlenecks our visual system faces, we focus here on selected examples of how such limitations can be overcome. Recent findings suggest that during more natural tasks, capacity may be higher than reductionist research suggests and that separable systems subserve different actions, such as reaching and looking, which might provide important insights about how pure attentional or memory limitations could be circumvented. We also review evidence suggesting that the closer we get to naturalistic behavior, the more we encounter implicit learning mechanisms that operate "for free" and "on the fly." These mechanisms provide a surprisingly rich visual experience, which can support capacity-limited systems. We speculate whether natural tasks may yield different estimates of the limitations of VWM, VLTM, and attention, and propose that capacity measurements should also pass the real-world test within naturalistic frameworks. Our review highlights various approaches for this and suggests that our understanding of visual cognition will benefit from incorporating the complexities of real-world cognition in experimental approaches.
Affiliation(s)
- Árni Kristjánsson: School of Health Sciences, University of Iceland, Reykjavík, Iceland; School of Psychology, National Research University Higher School of Economics, Moscow, Russia
- Dejan Draschkow: Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
22. Zhang JY, Shen ZH, Wang BP, Liu F, Li J. Influence of 3D laparoscopic surgery on surgeon's visual pattern and mental workload. J Med Eng Technol 2021; 45:375-379. PMID: 33843431; DOI: 10.1080/03091902.2021.1907466
Abstract
Previous studies have found that surgeons perform better in three-dimensional (3D) surgery than in two-dimensional (2D) surgery. However, no comparative studies have revealed the impact of 3D laparoscopic surgery on the surgeon's vision. To explore laparoscopic surgeons' depth perception during 3D laparoscopic surgery, 10 participants were recruited to perform four sets of comparative simulated laparoscopic procedures in a virtual simulator while their eye movement signals were acquired to characterise the visual differences. Fixation rate and saccade speed were used to characterise the influence of the moderating variables on visual behaviour. The data showed significant differences in eye movement behaviour: compared with 2D laparoscopic surgery, surgeons showed a higher average fixation rate (p-values = 0.001, 0.000, 0.003 and 0.015, respectively) and faster saccade speeds (p-values = 0.037, 0.003, 0.073 and 0.105, respectively) in 3D laparoscopic surgery. These results indicate that surgeons searched more efficiently, and that their mental workload was lower, in 3D laparoscopic surgery. The conclusions of this paper reveal the advantages of 3D laparoscopic surgery in terms of visual efficiency.
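Within-surgeon comparisons like these (the same operators in 2D and 3D conditions) are commonly run as paired tests on per-participant metrics. A sketch of such an analysis under hypothetical data; this is not the authors' pipeline:

```python
import numpy as np
from scipy import stats

# Hypothetical per-surgeon fixation rates (fixations/s), n = 10
fix_rate_2d = np.array([2.1, 2.4, 2.0, 2.3, 2.2, 2.5, 2.1, 2.0, 2.4, 2.2])
fix_rate_3d = np.array([2.6, 2.8, 2.5, 2.7, 2.6, 3.0, 2.4, 2.5, 2.9, 2.7])

# Paired t-test: same surgeons measured in both display conditions
t, p = stats.ttest_rel(fix_rate_3d, fix_rate_2d)
print(f"fixation rate, 3D vs 2D: t = {t:.2f}, p = {p:.4f}")
```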
Affiliation(s)
- Jian-Yang Zhang: School of Computer and Information Technology, Nanyang Normal University, Nanyang, P. R. China; Department of Medical Engineering, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, P. R. China
- Zhi-Hao Shen: School of Computer and Information Technology, Nanyang Normal University, Nanyang, P. R. China
- Bao-Ping Wang: School of Computer and Information Technology, Nanyang Normal University, Nanyang, P. R. China
- Feng Liu: School of Computer and Information Technology, Nanyang Normal University, Nanyang, P. R. China
- Juan Li: School of Civil Engineering and Architecture, Nanyang Normal University, Nanyang, P. R. China
23. Greene CM, Broughan J, Hanlon A, Keane S, Hanrahan S, Kerr S, Rooney B. Visual Search in 3D: Effects of Monoscopic and Stereoscopic Cues to Depth on the Validity of Feature Integration Theory and Perceptual Load Theory. Front Psychol 2021; 12:596511. PMID: 33815197; PMCID: PMC8009999; DOI: 10.3389/fpsyg.2021.596511
Abstract
Previous research has successfully used feature integration theory to operationalise the predictions of perceptual load theory, while simultaneously testing the predictions of both models. Building on this work, we test the extent to which these models hold up in a 3D world. In two experiments, participants responded to a target stimulus within an array of shapes whose apparent depth was manipulated using a combination of monoscopic and stereoscopic cues. The search task was designed to test the predictions of (a) feature integration theory, as the target was identified by a single feature or a conjunction of features and embedded in search arrays of varying size, and (b) perceptual load theory, as the task included congruent and incongruent distractors presented alongside search tasks imposing high or low perceptual load. Findings from both experiments upheld the predictions of feature integration theory, regardless of 2D/3D condition. Longer search times in conditions with a combination of monoscopic and stereoscopic depth cues suggest that binding features into three-dimensional objects requires greater attentional effort. This additional effort should have implications for perceptual load theory, yet our findings did not uphold its predictions; the effect of incongruent distractors did not differ between conjunction search trials (conceptualised as high perceptual load) and feature search trials (low perceptual load). Individual differences in susceptibility to the effects of perceptual load were evident and likely explain the absence of load effects. Overall, our findings suggest that feature integration theory may be useful for predicting attentional performance in a 3D world.
Affiliation(s)
- Ciara M Greene: School of Psychology, University College Dublin, Dublin, Ireland
- John Broughan: School of Psychology, University College Dublin, Dublin, Ireland
- Anthony Hanlon: School of Psychology, University College Dublin, Dublin, Ireland
- Seán Keane: School of Psychology, University College Dublin, Dublin, Ireland
- Sophia Hanrahan: School of Psychology, University College Dublin, Dublin, Ireland
- Stephen Kerr: School of Psychology, University College Dublin, Dublin, Ireland
- Brendan Rooney: School of Psychology, University College Dublin, Dublin, Ireland
24. Lukashova-Sanz O, Wahl S. Saliency-Aware Subtle Augmentation Improves Human Visual Search Performance in VR. Brain Sci 2021; 11:283. PMID: 33669081; PMCID: PMC7996609; DOI: 10.3390/brainsci11030283
Abstract
Visual search becomes challenging when the time to find the target is limited. Here we focus on how performance in visual search can be improved via a subtle saliency-aware modulation of the scene. Specifically, we investigate whether blurring salient regions of the scene can improve participants' ability to find the target faster when the target is located in non-salient areas. A set of real-world omnidirectional images were displayed in virtual reality with a search target overlaid on the visual scene at a pseudorandom location. Participants performed a visual search task in three conditions defined by blur strength, where the task was to find the target as fast as possible. The mean search time, and the proportion of trials in which participants failed to find the target, were compared across conditions. Furthermore, the number and duration of fixations were evaluated. A significant effect of blur on behavioral and fixation metrics was found using linear mixed models. This study shows that it is possible to improve performance through a subtle saliency-aware scene modulation in a challenging, realistic visual search scenario. The current work provides insight into potential visual augmentation designs aiming to improve users' performance in everyday visual search tasks.
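A linear mixed model of the kind mentioned (fixed effect of blur strength, participants as random intercepts) could be sketched with statsmodels as below; the column names and data are hypothetical, and this is a sketch rather than the authors' code:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level data: search time, blur condition, participant
rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "search_time": rng.lognormal(0.8, 0.3, n),
    "blur": rng.choice(["none", "weak", "strong"], n),
    "participant": rng.integers(0, 20, n),
})

# Mixed model: fixed effect of blur, random intercept per participant
model = smf.mixedlm("search_time ~ C(blur)", df, groups=df["participant"])
result = model.fit()
print(result.summary())
```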
Affiliation(s)
- Olga Lukashova-Sanz: Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany
- Siegfried Wahl: Institute for Ophthalmic Research, University of Tübingen, 72076 Tübingen, Germany; Carl Zeiss Vision International GmbH, 73430 Aalen, Germany
25. Võ MLH. The meaning and structure of scenes. Vision Res 2021; 181:10-20. PMID: 33429218; DOI: 10.1016/j.visres.2020.11.003
Abstract
We live in a rich, three-dimensional world with complex arrangements of meaningful objects. For decades, however, theories of visual attention and perception have been based on findings generated from lines and color patches. While these theories have been indispensable for our field, the time has come to move on from this rather impoverished view of the world and (at least try to) get closer to the real thing. After all, our visual environment consists of objects that we not only look at, but constantly interact with. Having incorporated the meaning and structure of scenes, i.e., their "grammar", we can easily understand objects and scenes we have never encountered before. Studying this grammar provides us with the fascinating opportunity to gain new insights into the complex workings of attention, perception, and cognition. In this review, I discuss how the meaning and the complex, yet predictive structure of real-world scenes influence attention allocation, search, and object identification.
Affiliation(s)
- Melissa Le-Hoa Võ: Department of Psychology, Johann Wolfgang-Goethe-Universität, Frankfurt, Germany. https://www.scenegrammarlab.com/
26
Bennett CR, Bex PJ, Merabet LB. Assessing visual search performance using a novel dynamic naturalistic scene. J Vis 2021; 21:5. [PMID: 33427871 PMCID: PMC7804579 DOI: 10.1167/jov.21.1.5] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2020] [Accepted: 12/01/2020] [Indexed: 11/24/2022] Open
Abstract
Daily activities require the constant searching and tracking of visual targets in dynamic and complex scenes. Classic work assessing visual search performance has been dominated by the use of simple geometric shapes, patterns, and static backgrounds. Recently, there has been a shift toward investigating visual search in more naturalistic dynamic scenes using virtual reality (VR)-based paradigms. In this direction, we have developed a first-person perspective VR environment combined with eye tracking for the capture of a variety of objective measures. Participants were instructed to search for a preselected human target walking in a crowded hallway setting. Performance was quantified based on saccade and smooth pursuit ocular motor behavior. To assess the effect of task difficulty, we manipulated factors of the visual scene, including crowd density (i.e., number of surrounding distractors) and the presence of environmental clutter. In general, results showed a pattern of worsening performance with increasing crowd density. In contrast, the presence of visual clutter had no effect. These results demonstrate how visual search performance can be investigated using VR-based naturalistic dynamic scenes and with high behavioral relevance. This engaging platform may also have utility in assessing visual search in a variety of clinical populations of interest.
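As an illustration of how saccade and pursuit behavior can be quantified from gaze recordings, here is a minimal velocity-threshold sketch. The thresholds and data layout are assumptions for illustration; the study's actual ocular motor classification may differ.

```python
import numpy as np

def classify_gaze(x_deg, y_deg, t_s, saccade_thresh=80.0, pursuit_min=1.0):
    """Label each inter-sample interval by gaze speed (deg/s).
    Thresholds are illustrative, not calibrated values."""
    speed = np.hypot(np.diff(x_deg), np.diff(y_deg)) / np.diff(t_s)
    return np.where(speed >= saccade_thresh, "saccade",
                    np.where(speed >= pursuit_min, "pursuit", "fixation"))

t = np.arange(0, 1, 0.004)                 # 250 Hz, synthetic demo data
x = np.where(t < 0.5, 5 * t, 5 * t + 8)    # slow pursuit with one abrupt jump
y = np.zeros_like(t)
print(np.unique(classify_gaze(x, y, t), return_counts=True))
```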
Affiliation(s)
- Christopher R Bennett: The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Peter J Bex: Translational Vision Lab, Department of Psychology, Northeastern University, Boston, MA, USA
- Lotfi B Merabet: The Laboratory for Visual Neuroplasticity, Department of Ophthalmology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
27
27
|
Beitner J, Helbing J, Draschkow D, Võ MLH. Get Your Guidance Going: Investigating the Activation of Spatial Priors for Efficient Search in Virtual Reality. Brain Sci 2021; 11:44. [PMID: 33406655 PMCID: PMC7823740 DOI: 10.3390/brainsci11010044] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2020] [Revised: 12/21/2020] [Accepted: 12/22/2020] [Indexed: 11/21/2022] Open
Abstract
Repeated search studies are a hallmark in the investigation of the interplay between memory and attention. Because results are usually averaged, the substantial decrease in response times between the first and second search through the same search environment is rarely discussed. This search initiation effect is often the most dramatic decrease in search times in a series of sequential searches. The nature of this initial lack of search efficiency has thus far remained unexplored. We tested the hypothesis that the activation of spatial priors leads to this search efficiency profile. Before searching repeatedly through scenes in VR, participants either (1) previewed the scene, (2) saw an interrupted preview, or (3) started searching immediately. The search initiation effect was present in the last condition but in neither of the preview conditions. Eye movement metrics revealed that the locus of this effect lies in search guidance rather than search initiation or decision time, and that it extends beyond effects of object learning or incidental memory. Our study suggests that upon visual processing of an environment, a process of activating spatial priors to enable orientation is initiated, which takes a toll on search time at first; once activated, however, these priors can be used to guide subsequent searches.
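A minimal sketch of how the search initiation effect could be quantified from trial-level data: the drop in response time from the first to the second search through the same scene, per condition. The file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical trial-level data with columns: participant,
# condition (preview / interrupted / immediate),
# repetition (1 = first search through the scene), rt_s
df = pd.read_csv("repeated_search.csv")

mean_rt = (df.groupby(["condition", "repetition"])["rt_s"]
             .mean().unstack("repetition"))
initiation_effect = mean_rt[1] - mean_rt[2]  # first minus second search
print(initiation_effect)
```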
Affiliation(s)
- Julia Beitner: Scene Grammar Lab, Institute of Psychology, Goethe University, 60323 Frankfurt am Main, Germany
- Jason Helbing: Scene Grammar Lab, Institute of Psychology, Goethe University, 60323 Frankfurt am Main, Germany
- Dejan Draschkow: Brain and Cognition Laboratory, Department of Psychiatry, University of Oxford, Oxford OX3 7JX, UK
- Melissa L.-H. Võ: Scene Grammar Lab, Institute of Psychology, Goethe University, 60323 Frankfurt am Main, Germany
28
28
|
Perry CM, Singh T, Springer KG, Harrison AT, McLain AC, Herter TM. Multiple processes independently predict motor learning. J Neuroeng Rehabil 2020; 17:151. [PMID: 33203416 PMCID: PMC7670816 DOI: 10.1186/s12984-020-00766-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2019] [Accepted: 10/02/2020] [Indexed: 11/19/2022] Open
Abstract
Background: Our ability to acquire, refine and adapt skilled limb movements is a hallmark of human motor learning that allows us to successfully perform many daily activities. The capacity to acquire, refine and adapt other features of motor performance, such as visual search, eye-hand coordination and visuomotor decisions, may also contribute to motor learning. However, the extent to which refinements of multiple behavioral features and their underlying neural processes independently contribute to motor learning remains unknown. In the current study, we used an ethological approach to test the hypothesis that practice-related refinements of multiple behavioral features would be independently predictive of motor learning.
Methods: Eighteen healthy, young adults used an upper-limb robot with eye-tracking to practice six trials of a continuous, visuomotor task once a week for six consecutive weeks. Participants used virtual paddles to hit away 200 "Targets" and avoid hitting 100 "Distractors" that continuously moved towards them from the back of the workspace. Motor learning was inferred from trial-by-trial acquisition and week-by-week retention of improvements on two measures of task performance related to motor execution and motor inhibition. Adaptations involving underlying neural processes were inferred from trial-by-trial acquisition and week-by-week retention of refinements on measures of skilled limb movement, visual search, eye-hand coordination and visuomotor decisions. We tested our hypothesis by quantifying the extent to which refinements on measures of multiple behavioral features (predictors) were independently predictive of improvements on our two measures of task performance (outcomes) after removing all shared variance between predictors.
Results: We found that refinements on measures of skilled limb movement, visual search and eye-hand coordination were independently predictive of improvements on our measure of task performance related to motor execution. In contrast, only refinements of eye-hand coordination were independently predictive of improvements on our measure of task performance related to motor inhibition.
Conclusion: Our results provide indirect evidence that refinements involving multiple neural processes may independently contribute to motor learning, and that distinct neural processes may underlie improvements in task performance related to motor execution and motor inhibition. This also suggests that refinements involving multiple neural processes may contribute to motor recovery after stroke, and that rehabilitation interventions should be designed to produce refinements of all behavioral features that may contribute to motor recovery.
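Removing all shared variance between predictors, as described in the Methods, resembles a semipartial (part) correlation analysis. The sketch below residualizes each predictor against the others before relating it to the outcome; the variable names and simulated data are illustrative, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 108  # e.g., 18 participants x 6 sessions, purely illustrative
data = pd.DataFrame({
    "limb_skill": rng.normal(size=n),
    "visual_search": rng.normal(size=n),
    "eye_hand": rng.normal(size=n),
})
data["task_performance"] = (0.5 * data["limb_skill"]
                            + 0.3 * data["eye_hand"]
                            + rng.normal(scale=0.5, size=n))

predictors = ["limb_skill", "visual_search", "eye_hand"]

def unique_contribution(df, predictor, outcome):
    """Residualize `predictor` on the other predictors, then relate the
    residual (the predictor's unique part) to the outcome."""
    others = [p for p in predictors if p != predictor]
    resid = sm.OLS(df[predictor], sm.add_constant(df[others])).fit().resid
    return sm.OLS(df[outcome], sm.add_constant(resid)).fit().rsquared

for p in predictors:
    print(p, round(unique_contribution(data, p, "task_performance"), 3))
```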
Affiliation(s)
- Christopher M Perry: Department of Exercise Science, University of South Carolina, Columbia, SC, 29208, USA
- Tarkeshwar Singh: Department of Kinesiology, The Pennsylvania State University, University Park, PA, 16802, USA
- Kayla G Springer: Department of Exercise Science, University of South Carolina, Columbia, SC, 29208, USA
- Adam T Harrison: Department of Exercise Science, University of South Carolina, Columbia, SC, 29208, USA
- Alexander C McLain: Department of Epidemiology and Biostatistics, University of South Carolina, Columbia, SC, 29208, USA
- Troy M Herter: Department of Exercise Science, University of South Carolina, Columbia, SC, 29208, USA
29
29
|
Baxter R, Smith AD. Searching for individual determinants of probabilistic cueing in large-scale immersive virtual environments. Q J Exp Psychol (Hove) 2020; 75:328-347. [PMID: 33089735 DOI: 10.1177/1747021820969148] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Large-scale search behaviour is an everyday occurrence, yet its underlying mechanisms are not commonly examined within experimental psychology. Key to efficient search behaviour is the sensitivity to environmental cues that might guide exploration, such as a target appearing with greater regularity in one region than another. Spatial cueing by probability has been examined in visual search paradigms, but the few studies that have addressed its contribution to large-scale search and foraging present contrasting accounts of the conditions under which a cueing effect can be reliably observed. In the present study, participants physically searched a virtual arena by inspecting identical locations until they found the target. The target was always present, although its location was probabilistically defined so that it appeared in the cued hemispace on 80% of trials. In Experiment 1, when participants' starting positions were stable, a probabilistic cueing effect was observed, with a strong bias towards searching the cued side. In Experiment 2, the starting position changed across the experiment, such that the cued region was defined in allocentric co-ordinates only. In this case, a probabilistic cueing effect was not observed across the sample. Analysis of individual differences in Experiment 2 suggests, however, that some participants may have learned the contingency underpinning the target's location, although these differences were unrelated to other tests of visuospatial ability. These results suggest that the ability to learn the likelihood of an item's fixed location when starting from different perspectives is driven by individual differences in other cognitive or perceptual factors.
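A minimal sketch of how a probabilistic cueing bias of this kind could be tested: compare the proportion of first inspections on the cued side against chance. The counts are made up for illustration.

```python
from scipy.stats import binomtest

# Hypothetical counts: first inspections that landed on the cued
# hemispace (where the target appeared on 80% of trials).
first_choices_cued = 34
total_trials = 48

result = binomtest(first_choices_cued, total_trials, p=0.5,
                   alternative="greater")
print(f"P(cued-side first) = {first_choices_cued / total_trials:.2f}, "
      f"p = {result.pvalue:.4f}")
```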
Affiliation(s)
- Rory Baxter: School of Psychology, University of Plymouth, Plymouth, UK
30
30
|
Rubin M, Minns S, Muller K, Tong MH, Hayhoe MM, Telch MJ. Avoidance of social threat: Evidence from eye movements during a public speaking challenge using 360°-video. Behav Res Ther 2020; 134:103706. [PMID: 32920165 PMCID: PMC7530106 DOI: 10.1016/j.brat.2020.103706] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2020] [Revised: 06/02/2020] [Accepted: 07/29/2020] [Indexed: 11/19/2022]
Abstract
Social anxiety (SA) is thought to be maintained in part by avoidance of social threat, which exacerbates fear of negative evaluation. Yet relatively little research has been conducted to evaluate the connection between social anxiety and attentional processes in realistic contexts. The current pilot study examined patterns of attention (eye movements) in a commonly feared social context - public speaking. Participants (N = 84) with a range of social anxiety symptoms gave an impromptu five-minute speech in an immersive 360°-video environment, while wearing a virtual reality headset equipped with eye-tracking hardware. We found evidence for the expected interaction between fear of public speaking and social threat (uninterested vs. interested audience members). Consistent with prediction, participants with greater fear of public speaking looked fewer times at uninterested members of the audience (high social threat) compared to interested members of the audience (low social threat), b = 0.418, p = 0.046, 95% CI [0.008, 0.829]. Analyses of attentional indices over the course of the speech revealed that the interaction between fear of public speaking and gaze on audience members was only significant in the first three minutes. Our results provide support for theoretical models implicating avoidance of social threat as a maintaining factor in social anxiety. Future research is needed to test whether guided attentional training targeting in vivo attentional avoidance may improve clinical outcomes for those presenting with social anxiety.
Affiliation(s)
- Mikael Rubin: Department of Psychology, The University of Texas at Austin, TX, USA
- Sean Minns: Department of Psychology, The University of Texas at Austin, TX, USA
- Karl Muller: Center for Perceptual Systems, The University of Texas at Austin, TX, USA
- Matthew H Tong: Center for Perceptual Systems, The University of Texas at Austin, TX, USA
- Mary M Hayhoe: Center for Perceptual Systems, The University of Texas at Austin, TX, USA
- Michael J Telch: Department of Psychology, The University of Texas at Austin, TX, USA
31
31
|
Kristjánsson T, Draschkow D, Pálsson Á, Haraldsson D, Jónsson PÖ, Kristjánsson Á. Moving foraging into three dimensions: Feature- versus conjunction-based foraging in virtual reality. Q J Exp Psychol (Hove) 2020; 75:313-327. [PMID: 32519926 DOI: 10.1177/1747021820937020] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Visual attention evolved in a three-dimensional (3D) world, yet studies on human attention in three dimensions are sparse. Here we present findings from a human foraging study in immersive 3D virtual reality. We used a foraging task introduced in Kristjánsson et al. to examine how well their findings generalise to more naturalistic settings. The second goal was to examine what effect the motion of targets and distractors has on inter-target times (ITTs), run patterns, and foraging organisation. Observers foraged for 50 targets among 50 distractors in four different conditions. Targets were distinguished from distractors by either a single feature (feature foraging) or a conjunction of features (conjunction foraging). Furthermore, those conditions were performed both with static and moving targets and distractors. Our results replicate previous foraging studies in many aspects, with constant ITTs during a "cruise-phase" within foraging trials and response time peaks at the end of foraging trials. Some key differences emerged, however, such as more frequent switches between target types during conjunction foraging than previously seen and a lack of clear mid-peaks during conjunction foraging, possibly reflecting that differences between feature and conjunction processing are smaller within 3D environments. Observers initiated their foraging in the bottom part of the visual field and motion did not have much of an effect on selection times between different targets (ITTs) or run behaviour patterns except for the end-peaks. Our results cast new light upon visual attention in 3D environments and highlight how 3D virtual reality studies can provide important extensions to two-dimensional studies of visual attention.
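For concreteness, a small sketch of how inter-target times and run lengths can be derived from an ordered sequence of target selections; the input format is an assumption.

```python
import numpy as np

def itts_and_runs(selection_times, target_types):
    """selection_times: time of each target selection (s), in order.
    target_types: category of each selected target (e.g., 'A'/'B').
    Returns inter-target times and lengths of same-type runs."""
    itts = np.diff(selection_times)
    runs, length = [], 1
    for prev, cur in zip(target_types, target_types[1:]):
        if cur == prev:
            length += 1
        else:
            runs.append(length)
            length = 1
    runs.append(length)
    return itts, runs

times = [1.1, 1.9, 2.6, 4.0, 4.7, 6.2]
types = ["A", "A", "B", "B", "B", "A"]
itts, runs = itts_and_runs(times, types)
print(itts, runs)  # ITTs in seconds; run lengths [2, 3, 1]
```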
Affiliation(s)
- Tómas Kristjánsson: Icelandic Vision Laboratory, School of Health Sciences, University of Iceland, Reykjavík, Iceland
- Dejan Draschkow: Department of Psychiatry, Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK
- Ágúst Pálsson: Icelandic Vision Laboratory, School of Health Sciences, University of Iceland, Reykjavík, Iceland
- Davíð Haraldsson: Icelandic Vision Laboratory, School of Health Sciences, University of Iceland, Reykjavík, Iceland
- Pétur Örn Jónsson: Icelandic Vision Laboratory, School of Health Sciences, University of Iceland, Reykjavík, Iceland
- Árni Kristjánsson: Icelandic Vision Laboratory, School of Health Sciences, University of Iceland, Reykjavík, Iceland; School of Psychology, National Research University Higher School of Economics, Moscow, Russian Federation
32
32
|
Does active learning benefit spatial memory during navigation with restricted peripheral field? Atten Percept Psychophys 2020; 82:3033-3047. [PMID: 32346822 DOI: 10.3758/s13414-020-02038-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Spatial learning of real-world environments is impaired with severely restricted peripheral field of view (FOV). In prior research, the effects of restricted FOV on spatial learning have been studied using passive learning paradigms - learners walk along pre-defined paths and are told the location of targets to be remembered. Our research has shown that mobility demands and environmental complexity may contribute to impaired spatial learning with restricted FOV through attentional mechanisms. Here, we examine the role of active navigation, both in locomotion and in target search. First, we compared effects of active versus passive locomotion (walking with a physical guide versus being pushed in a wheelchair) on a task of pointing to remembered targets in participants with simulated 10° FOV. We found similar performance between active and passive locomotion conditions in both simpler (Experiment 1) and more complex (Experiment 2) spatial learning tasks. Experiment 3 required active search for named targets to remember while navigating, using both a mild and a severe FOV restriction. We observed no difference in pointing accuracy between the two FOV restrictions but an increase in attentional demands with severely restricted FOV. Experiment 4 compared active and passive search with severe FOV restriction, within subjects. We found no difference in pointing accuracy, but observed an increase in cognitive load in active versus passive search. Taken together, in the context of navigating with restricted FOV, neither locomotion method nor level of active search affected spatial learning. However, the greater cognitive demands could have counteracted the potential advantage of the active learning conditions.
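A brief sketch of a pointing-error measure of the kind such tasks use: the angular difference between the pointed direction and the true bearing of the remembered target. Coordinate conventions are assumed.

```python
import numpy as np

def pointing_error_deg(observer_xy, pointed_dir_deg, target_xy):
    """Absolute angle between pointed direction and true target bearing."""
    dx, dy = np.subtract(target_xy, observer_xy)
    true_bearing = np.degrees(np.arctan2(dy, dx))
    err = (pointed_dir_deg - true_bearing + 180) % 360 - 180  # wrap to +/-180
    return abs(err)

# Target at 45 deg from the observer; pointing at 50 deg gives 5 deg error.
print(pointing_error_deg((0, 0), 50.0, (1, 1)))
```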
33
Williams LH, Drew T. What do we know about volumetric medical image interpretation?: a review of the basic science and medical image perception literatures. Cogn Res Princ Implic 2019; 4:21. [PMID: 31286283 PMCID: PMC6614227 DOI: 10.1186/s41235-019-0171-6] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2019] [Accepted: 05/19/2019] [Indexed: 11/26/2022] Open
Abstract
Interpretation of volumetric medical images represents a rapidly growing proportion of the workload in radiology. However, relatively little is known about the strategies that best guide search behavior when looking for abnormalities in volumetric images. Although there is extensive literature on two-dimensional medical image perception, it is an open question whether the conclusions drawn from these images can be generalized to volumetric images. Importantly, volumetric images have distinct characteristics (e.g., scrolling through depth, smooth-pursuit eye-movements, motion onset cues, etc.) that should be considered in future research. In this manuscript, we will review the literature on medical image perception and discuss relevant findings from basic science that can be used to generate predictions about expertise in volumetric image interpretation. By better understanding search through volumetric images, we may be able to identify common sources of error, characterize the optimal strategies for searching through depth, or develop new training and assessment techniques for radiology residents.
34
Domínguez-Zamora FJ, Marigold DS. Motor cost affects the decision of when to shift gaze for guiding movement. J Neurophysiol 2019; 122:378-388. [PMID: 31141440 DOI: 10.1152/jn.00027.2019] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Frequent gait modifications are required to navigate our world. These can involve long or wide steps or changes in direction. People generally prefer to minimize the motor cost (or effort) of a movement, although with changes in gait this is not always possible. The decision of when and where to shift gaze is critical for controlling motor actions, since vision informs the brain about the available choices for movement - in this case, where to step. Here we asked how motor cost influences the allocation of gaze. To address this, we had participants walk and step to the center of sequential targets on the ground. We manipulated the motor cost associated with controlling foot placement by varying the location of one target in the lateral direction on a trial-to-trial basis within environments with different numbers of targets. Costlier steps caused a switch from a gaze strategy of planning future steps to one favoring visual feedback of the current foot placement when participants had to negotiate another target immediately afterward. Specifically, costlier steps delayed gaze shifts away from the manipulated target. We show that this relates to the cost of moving the leg and redirecting the body's center of mass from target to target. Overall, our results suggest that temporal gaze decisions are affected by the motor costs associated with the step-to-step demands of the environment. Moreover, they provide insight into what affects the coordination between the eyes and feet for the control of stable and accurate foot placement while walking. NEW & NOTEWORTHY Changes in gait allow us to navigate our world. For instance, one may step long or wide to avoid a spilled drink. The brain can direct gaze to gather relevant information for making these types of motor decisions; however, the factors affecting gaze allocation in natural behaviors are poorly understood. We show how the motor cost associated with a step influences the decision of when to redirect gaze to ensure accurate foot placement while walking.
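One way to operationalize "delayed gaze shifts" is the latency between gaze leaving a stepping target and foot contact on that target. The sketch below, with made-up event times, illustrates the comparison across cost conditions.

```python
import numpy as np

def gaze_departure_latency(gaze_off_s, foot_contact_s):
    """Positive values: gaze stayed on the target past foot contact."""
    return np.asarray(gaze_off_s) - np.asarray(foot_contact_s)

# Hypothetical per-trial event times (seconds)
low_cost = gaze_departure_latency([1.95, 2.10, 2.02], [2.00, 2.20, 2.05])
high_cost = gaze_departure_latency([2.45, 2.60, 2.38], [2.20, 2.30, 2.15])
print(low_cost.mean(), high_cost.mean())  # later departures for costly steps
```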
Affiliation(s)
- F Javier Domínguez-Zamora: Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, British Columbia, Canada
- Daniel S Marigold: Department of Biomedical Physiology and Kinesiology, Simon Fraser University, Burnaby, British Columbia, Canada; Behavioural and Cognitive Neuroscience Institute, Simon Fraser University, Burnaby, British Columbia, Canada
35
Kahana-Levy N, Shavitzky-Golkin S, Borowsky A, Vakil E. The effects of repetitive presentation of specific hazards on eye movements in hazard perception training of experienced and young-inexperienced drivers. Accid Anal Prev 2019; 122:255-267. [PMID: 30391702 DOI: 10.1016/j.aap.2018.09.033] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/24/2017] [Revised: 09/16/2018] [Accepted: 09/29/2018] [Indexed: 06/08/2023]
Abstract
Recent evidence shows that, compared with experienced drivers, young-inexperienced drivers are more likely to be involved in a crash, mainly due to their poor hazard perception (HP) abilities. This skill develops with experience and may be improved through training. We assumed that, like any other skill, HP develops through implicit learning. Nevertheless, current training methods rely on deliberate learning, in which young-inexperienced drivers are instructed which hazards they should seek and where those hazards might be located. In this exploratory study, we investigated the effectiveness of a novel training procedure in which learners were repeatedly exposed to target video clips of driving scenarios embedded within filler scenarios. Each of the target videos included a scenario with either a visible hazard, a hidden materialized hazard, or a hidden unmaterialized hazard. Twenty-three young-inexperienced drivers and 35 experienced drivers participated in a training session followed by a learning-transference testing session, and 24 additional young-inexperienced drivers participated only in the transference testing session, with no training, during which participants were shown novel hazard video clips. Participants responded by pressing a button when they identified a hazard. Eye movements were also tracked, with fixation patterns used as a proxy for HP performance. During training, young-inexperienced drivers gradually increased their focus on visible materialized hazards but exhibited no learning curve with respect to hidden hazards. During the learning-transference session, both trained groups focused on hazards earlier than untrained drivers. These results imply that repetitive training may facilitate HP acquisition among young-inexperienced drivers. Patterns concerning experienced drivers are also discussed.
Affiliation(s)
- Avinoam Borowsky: Ben-Gurion University of the Negev, Department of Industrial Engineering and Management, Beer-Sheva, Israel
- Eli Vakil: Psychology Department, Bar-Ilan University, Ramat-Gan, Israel
36
Olk B, Dinu A, Zielinski DJ, Kopper R. Measuring visual search and distraction in immersive virtual reality. R Soc Open Sci 2018; 5:172331. [PMID: 29892418 PMCID: PMC5990815 DOI: 10.1098/rsos.172331] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/22/2017] [Accepted: 03/27/2018] [Indexed: 05/27/2023]
Abstract
An important issue for psychological research is how experiments conducted in the laboratory, or theories based on such experiments, relate to human performance in daily life. Immersive virtual reality (VR) allows control over stimuli and conditions at increased ecological validity. The goal of the present study was to transfer traditional paradigms that assess attention and distraction to immersive VR. To further increase ecological validity, we explored attentional effects with everyday objects as stimuli instead of simple letters. Participants searched for a target among distractors on the countertop of a virtual kitchen. Target-distractor discriminability was varied, and the displays were accompanied by a peripheral flanker that was congruent or incongruent with the target. Reaction time was slower when target-distractor discriminability was low and when flankers were incongruent. The results were replicated in a second experiment in which stimuli were presented on a computer screen in two dimensions. The study demonstrates the successful translation of traditional paradigms and manipulations into immersive VR and lays a foundation for future research on attention and distraction in VR. Further, we outline directions for future studies that exploit features of VR unavailable in traditional laboratory research.
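A minimal sketch of the cell means and congruency effect such a design yields from trial-level data; the file and column names are hypothetical.

```python
import pandas as pd

# Hypothetical trial-level export with columns: participant,
# discriminability (high/low), congruency (congruent/incongruent), rt_ms
df = pd.read_csv("vr_flanker.csv")

cells = (df.groupby(["discriminability", "congruency"])["rt_ms"]
           .mean().unstack("congruency"))
cells["congruency_effect"] = cells["incongruent"] - cells["congruent"]
print(cells)
```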
Affiliation(s)
- Bettina Olk: Jacobs University Bremen, Campus Ring 1, 28759 Bremen, Germany; HSD University of Applied Sciences, Waidmarkt 3 and 9, 50676 Cologne, Germany
- Alina Dinu: Jacobs University Bremen, Campus Ring 1, 28759 Bremen, Germany; Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Martinistrasse 52, 20251 Hamburg, Germany
- David J. Zielinski: Duke University, Pratt School of Engineering, FCiemas Building, 101 Science Dr., Durham, NC 27708-0271, USA
- Regis Kopper: Duke University, Pratt School of Engineering, FCiemas Building, 101 Science Dr., Durham, NC 27708-0271, USA
37
Abstract
Search is a central visual function. Most of what is known about search derives from experiments where subjects view 2D displays on computer monitors. In the natural world, however, search involves movement of the body in large-scale spatial contexts, and it is unclear how this might affect search strategies. In this experiment, we explore the nature of memory representations developed when searching in an immersive virtual environment. By manipulating target location, we demonstrate that search depends on episodic spatial memory as well as learnt spatial priors. Subjects rapidly learned the large-scale structure of the space, with shorter paths and less head rotation to find targets. These results suggest that spatial memory of the global structure allows a search strategy that involves efficient attention allocation based on the relevance of scene regions. Thus spatial memory may allow less energetically costly search strategies.
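Path length and head rotation, the two efficiency measures mentioned, can be computed from position and heading samples as sketched below; the data layout is assumed.

```python
import numpy as np

def path_length(xy):
    """Total distance walked, from an (n, 2) array of positions (m)."""
    return np.sum(np.linalg.norm(np.diff(xy, axis=0), axis=1))

def total_head_rotation_deg(yaw_deg):
    """Cumulative absolute head rotation, using shortest-arc differences."""
    d = np.diff(yaw_deg)
    d = (d + 180) % 360 - 180
    return np.sum(np.abs(d))

xy = np.array([[0, 0], [1, 0], [1, 2]])    # meters
yaw = np.array([0.0, 45.0, 30.0, 350.0])   # degrees
print(path_length(xy), total_head_rotation_deg(yaw))
```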
Affiliation(s)
- Chia-Ling Li: Center for Perceptual Systems, The University of Texas at Austin, Austin, Texas, USA
- M Pilar Aivar: Facultad de Psicología, Universidad Autónoma de Madrid, Madrid, Spain
- Mary M Hayhoe: Center for Perceptual Systems, The University of Texas at Austin, Austin, Texas, USA
38
Draschkow D, Võ MLH. Scene grammar shapes the way we interact with objects, strengthens memories, and speeds search. Sci Rep 2017; 7:16471. [PMID: 29184115 PMCID: PMC5705766 DOI: 10.1038/s41598-017-16739-x] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2017] [Accepted: 11/16/2017] [Indexed: 11/09/2022] Open
Abstract
Predictions of environmental rules (here referred to as "scene grammar") can come in different forms: seeing a toilet in a living room would violate semantic predictions, while finding a toilet brush next to the toothpaste would violate syntactic predictions. The existence of such predictions has usually been investigated by showing observers images containing such grammatical violations. Conversely, the generative process of creating an environment according to one's scene grammar, and its effects on behavior and memory, has received little attention. In a virtual reality paradigm, we instructed participants to arrange objects either according to their scene grammar or against it. Subsequently, participants' memory for the arrangements was probed using a surprise recall (Exp. 1) or repeated search (Exp. 2) task. Participants' construction behavior showed strategic use of larger, static objects to anchor the locations of smaller objects, which are generally the goals of everyday actions. Further analysis of this scene construction data revealed possible commonalities between the rules governing word usage in language and object usage in naturalistic environments. Taken together, we revealed some of the building blocks of scene grammar necessary for efficient behavior, which differentially influence how we interact with objects and what we remember about scenes.
Affiliation(s)
- Dejan Draschkow: Scene Grammar Lab, Johann Wolfgang Goethe-Universität, Frankfurt, Germany
- Melissa L-H Võ: Scene Grammar Lab, Johann Wolfgang Goethe-Universität, Frankfurt, Germany
39
Abstract
Investigation of natural behavior has contributed a number of insights to our understanding of visual guidance of actions by highlighting the importance of behavioral goals and focusing attention on how vision and action play out in time. In this context, humans make continuous sequences of sensory-motor decisions to satisfy current behavioral goals, and the role of vision is to provide the relevant information for making good decisions in order to achieve those goals. This conceptualization of visually guided actions as a sequence of sensory-motor decisions has been formalized within the framework of statistical decision theory, which structures the problem and provides the context for much recent progress in vision and action. Components of a good decision include the task, which defines the behavioral goals, the rewards and costs associated with those goals, uncertainty about the state of the world, and prior knowledge.
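The core computation in this decision-theoretic framing is the expected value of an action under uncertainty about the world state. A toy sketch, with invented states, beliefs, and rewards:

```python
def expected_value(p_state, reward):
    """p_state: belief over world states; reward: action -> per-state payoffs."""
    return {action: sum(p * r for p, r in zip(p_state, payoffs))
            for action, payoffs in reward.items()}

# Illustrative example: two world states and two candidate actions.
belief = [0.7, 0.3]  # e.g., P(path clear), P(obstacle ahead)
rewards = {"keep_walking": [1.0, -5.0], "slow_down": [0.2, 0.5]}
print(expected_value(belief, rewards))
# keep_walking: 0.7*1.0 + 0.3*(-5.0) = -0.80; slow_down: 0.29
```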
Affiliation(s)
- Mary M Hayhoe: Center for Perceptual Systems, University of Texas at Austin, Texas 78712
40
Tong MH, Zohar O, Hayhoe MM. Control of gaze while walking: Task structure, reward, and uncertainty. J Vis 2017; 17:28. [PMID: 28114501 PMCID: PMC5256682 DOI: 10.1167/17.1.28] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2016] [Accepted: 11/20/2016] [Indexed: 11/24/2022] Open
Abstract
While it is universally acknowledged that both bottom-up and top-down factors contribute to the allocation of gaze, we currently have a limited understanding of how top-down factors determine gaze choices in the context of ongoing natural behavior. One purely top-down model by Sprague, Ballard, and Robinson (2007) suggests that natural behaviors can be understood in terms of simple component behaviors, or modules, that are executed according to their reward value, with gaze targets chosen in order to reduce uncertainty about the particular world state needed to execute those behaviors. We explore the plausibility of the central claims of this approach in the context of a task where subjects walk through a virtual environment performing interceptions, avoidance, and path following. Many aspects of both walking direction choices and gaze allocation are consistent with this approach. Subjects use gaze to reduce uncertainty for task-relevant information that is used to inform action choices. Notably, the addition of motion to peripheral objects did not affect fixations when the objects were irrelevant to the task, suggesting that stimulus saliency was not a major factor in gaze allocation. The modular approach of independent component behaviors is consistent with the main aspects of performance, but there were a number of deviations suggesting that modules interact. Thus the model forms a useful, but incomplete, starting point for understanding top-down factors in active behavior.
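In the spirit of the Sprague, Ballard, and Robinson account, gaze can be scheduled to the behavior whose reward-weighted state uncertainty is largest. The following toy sketch illustrates that idea only; the module names, numbers, and update rules are illustrative assumptions, not the model's actual equations.

```python
# Each module tracks a state estimate whose variance grows while
# unattended; gaze goes to the module with the largest reward-weighted
# uncertainty, and fixating it shrinks that uncertainty.
modules = {
    "avoid_obstacle": {"reward": 3.0, "variance": 0.1},
    "follow_path":    {"reward": 1.0, "variance": 0.1},
    "intercept":      {"reward": 2.0, "variance": 0.1},
}

for step in range(5):
    target = max(modules,
                 key=lambda m: modules[m]["reward"] * modules[m]["variance"])
    for name, m in modules.items():
        if name == target:
            m["variance"] = 0.05   # fixation reduces uncertainty
        else:
            m["variance"] += 0.1   # unattended state estimates drift
    print(step, target)
```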
Affiliation(s)
- Matthew H Tong: Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA
- Oran Zohar: Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA
- Mary M Hayhoe: Center for Perceptual Systems, University of Texas at Austin, Austin, TX, USA