1. Sekine S, Sakurai Y, Omori Y, Morio Y, Yamamoto J. Quantitative analysis of the gaze and the kinetic/kinematic evaluation of expert and novice physical therapists during standing/sitting assistance: a pilot study. Front Rehabil Sci 2024; 5:1426699. [PMID: 39640874; PMCID: PMC11617524; DOI: 10.3389/fresc.2024.1426699]
Abstract
Introduction: In rehabilitation practice, expert therapists are believed to proficiently observe and assist patients. However, limited research has quantified the gaze behavior of physical therapists during patient support. This study investigated the gaze patterns of expert and novice physical therapists from a first-person perspective while they assisted collaborators to stand. The aim was to determine which body parts received prolonged attention and to explore the characteristics of the support provided.
Methods: Seven experienced physical therapists were recruited as expert participants, and 17 physical therapy students served as novice participants. We also recruited additional students as collaborators and asked them to behave as if they were patients. Both expert and novice participants wore a wearable eye tracker while assisting the collaborators to stand. We analyzed gaze focus on specific body parts and the sway of the collaborators' center of mass.
Results: Experts spent 10.75% of the total time gazing at the head area, compared with 4.06% for novices, a significantly longer gaze duration (p < .05). Experts also made significantly more gaze fixations, averaging 25.71 versus 8.65 for novices (p < .05). In addition, experts produced slower sway of the collaborator's center of mass (0.44 m/s for experts vs. 0.49 m/s for novices; p < .01) and positioned the collaborator with more pronounced trunk flexion during sit-to-stand transitions (41.0 degrees for experts vs. 37.8 degrees for novices; p < .01).
Discussion: The findings suggest that experts may monitor the position of the collaborator's center of mass by focusing on the head area. Properly positioning the head forward may allow optimal forward movement of the center of mass, potentially reducing the effort the collaborator needs to stand. This study is the first to explore differences in support strategies by measuring physical therapists' gaze during assistance.
Affiliation(s)
- Satoru Sekine
- Medical Informatics, Tottori University Hospital, Yonago, Tottori, Japan
- Faculty of Systems Design, Tokyo Metropolitan University, Hachioji, Tokyo, Japan
- Yoshimi Sakurai
- Faculty of Systems Design, Tokyo Metropolitan University, Hachioji, Tokyo, Japan
- Faculty of Medical Sciences, Shonan University of Medical Sciences, Yokohama, Kanagawa, Japan
- Yoshitsugu Omori
- Faculty of Systems Design, Tokyo Metropolitan University, Hachioji, Tokyo, Japan
- Faculty of Medical Sciences, Shonan University of Medical Sciences, Yokohama, Kanagawa, Japan
- Yuji Morio
- Faculty of Systems Design, Tokyo Metropolitan University, Hachioji, Tokyo, Japan
- Faculty of Medical Sciences, Shonan University of Medical Sciences, Yokohama, Kanagawa, Japan
- Junichi Yamamoto
- Faculty of Systems Design, Tokyo Metropolitan University, Hachioji, Tokyo, Japan
2. Fernandes EG, Tatler BW, Slessor G, Phillips LH. Age Differences in Gaze Following: Older Adults Follow Gaze More than Younger Adults When Free-Viewing Scenes. Exp Aging Res 2024; 50:84-101. [PMID: 36572660; DOI: 10.1080/0361073X.2022.2156760]
Abstract
Previous research has investigated age differences in gaze following using an attentional cueing paradigm in which participants view a face with averted gaze and then respond to a target appearing in a location congruent or incongruent with the gaze cue. However, this paradigm is far removed from the way we use gaze cues in everyday settings. Here we recorded the eye movements of younger and older adults while they freely viewed naturalistic scenes in which a person looked at an object or location. Older adults were more likely to fixate the gazed-at location, and made more fixations to it, than younger adults. Our findings suggest that, contrary to what is observed in the traditional gaze-cueing paradigm, in an unconstrained task with contextualized stimuli older adults follow gaze as much as, or even more than, younger adults.
Affiliation(s)
- Eunice G Fernandes
- Department of Foreign Languages and Translation, Universitet i Agder, Kristiansand, Norway
- School of Psychology, University of Aberdeen, Aberdeen, UK
3. Chawoush B, Draschkow D, van Ede F. Capacity and selection in immersive visual working memory following naturalistic object disappearance. J Vis 2023; 23:9. [PMID: 37548958; PMCID: PMC10411649; DOI: 10.1167/jov.23.8.9]
Abstract
Visual working memory (holding past visual information in mind for upcoming behavior) is commonly studied following the abrupt removal of visual objects from static two-dimensional (2D) displays. In everyday life, however, visual objects do not typically vanish from the environment in front of us. Rather, visual objects tend to enter working memory following self or object motion: disappearing from view gradually and behind the observer. Here, we used virtual reality (VR) to investigate whether two classic findings from visual working memory research (a capacity of around three objects and the reliance on space for object selection) generalize to more naturalistic modes of object disappearance. Our static reference condition mimicked traditional laboratory tasks: visual objects were held static in front of the participant and removed from view abruptly. In our critical flow condition, the same visual objects flowed past participants, disappearing from view gradually and behind the observer. We measured visual working memory performance and capacity, as well as space-based mnemonic selection, indexed by directional biases in gaze. Despite vastly distinct modes of object disappearance and altered spatial relations between memoranda and observer, we found comparable capacity and comparable gaze signatures of space-based mnemonic selection. These results show how classic findings from visual working memory research generalize to immersive situations with more naturalistic modes of object disappearance and dynamic spatial relations between memoranda and observer.
Affiliation(s)
- Babak Chawoush
- Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Dejan Draschkow
- Department of Experimental Psychology, University of Oxford, United Kingdom
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, United Kingdom
- Freek van Ede
- Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
4. Helbing J, Draschkow D, Võ MLH. Auxiliary Scene-Context Information Provided by Anchor Objects Guides Attention and Locomotion in Natural Search Behavior. Psychol Sci 2022; 33:1463-1476. [PMID: 35942922; DOI: 10.1177/09567976221091838]
Abstract
Successful adaptive behavior requires efficient attentional and locomotive systems. Previous research has thoroughly investigated how we achieve this efficiency during natural behavior by exploiting prior knowledge related to targets of our actions (e.g., attending to metallic targets when looking for a pot) and to the environmental context (e.g., looking for the pot in the kitchen). Less is known about whether and how individual nontarget components of the environment support natural behavior. In our immersive virtual reality task, 24 adult participants searched for objects in naturalistic scenes in which we manipulated the presence and arrangement of large, static objects that anchor predictions about targets (e.g., the sink provides a prediction for the location of the soap). Our results show that gaze and body movements in this naturalistic setting are strongly guided by these anchors. These findings demonstrate that objects auxiliary to the target are incorporated into the representations guiding attention and locomotion.
Affiliation(s)
- Jason Helbing
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
- Dejan Draschkow
- Brain and Cognition Laboratory, Department of Experimental Psychology, University of Oxford
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford
- Melissa L-H Võ
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt
5. Draschkow D, Nobre AC, van Ede F. Multiple spatial frames for immersive working memory. Nat Hum Behav 2022; 6:536-544. [PMID: 35058640; PMCID: PMC7612679; DOI: 10.1038/s41562-021-01245-y]
Abstract
As we move around, relevant information that disappears from sight can still be held in working memory to serve upcoming behaviour. How we maintain and select visual information as we move through the environment remains poorly understood because most laboratory tasks of working memory rely on removing visual material while participants remain still. We used virtual reality to study visual working memory following self-movement in immersive environments. Directional biases in gaze revealed the recruitment of more than one spatial frame for maintaining and selecting memoranda following self-movement. The findings bring the important realization that multiple spatial frames support working memory in natural behaviour. The results also illustrate how virtual reality can be a critical experimental tool to characterize this core memory system.
Affiliation(s)
- Dejan Draschkow
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK.
- Department of Experimental Psychology, University of Oxford, Oxford, UK.
- Anna C Nobre
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Freek van Ede
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK
- Institute for Brain and Behavior Amsterdam, Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands
6. Beyond screen time: Using head-mounted eye tracking to study natural behavior. Adv Child Dev Behav 2022; 62:61-91. [PMID: 35249686; DOI: 10.1016/bs.acdb.2021.11.001]
Abstract
Head-mounted eye tracking is a new method that allows researchers to catch a glimpse of what infants and children see during naturalistic activities. In this chapter, we review how mobile, wearable eye trackers improve the construct validity of important developmental constructs, such as visual object experiences and social attention, in ways that would be impossible using screen-based eye tracking. Head-mounted eye tracking improves ecological validity by allowing researchers to present more realistic and complex visual scenes, create more interactive experimental situations, and examine how the body influences what infants and children see. As with any new method, there are difficulties to overcome. Accordingly, we identify what aspects of head-mounted eye-tracking study design affect the measurement quality, interpretability of the results, and efficiency of gathering data. Moreover, we provide a summary of best practices aimed at allowing researchers to make well-informed decisions about whether and how to apply head-mounted eye tracking to their own research questions.
7. Franchak JM, McGee B, Blanch G. Adapting the coordination of eyes and head to differences in task and environment during fully-mobile visual exploration. PLoS One 2021; 16:e0256463. [PMID: 34415981; PMCID: PMC8378697; DOI: 10.1371/journal.pone.0256463]
Abstract
How are eyes and head adapted to meet the demands of visual exploration in different tasks and environments? In two studies, we measured the horizontal movements of the eyes (using mobile eye tracking in Studies 1 and 2) and the head (using inertial sensors in Study 2) while participants completed a walking task and a search and retrieval task in a large, outdoor environment. We found that the spread of visual exploration was greater while searching compared with walking, and this was primarily driven by increased movement of the head as opposed to the eyes. The contributions of the head to gaze shifts of different eccentricities were greater when searching compared to when walking. Findings are discussed with respect to understanding visual exploration as a motor action with multiple degrees of freedom.
Affiliation(s)
- John M. Franchak
- Department of Psychology, University of California, Riverside, Riverside, California, United States of America
- Brianna McGee
- Department of Psychology, University of California, Riverside, Riverside, California, United States of America
- Gabrielle Blanch
- Department of Psychology, University of California, Riverside, Riverside, California, United States of America
8. Smith ME, Loschky LC, Bailey HR. Knowledge guides attention to goal-relevant information in older adults. Cogn Res Princ Implic 2021; 6:56. [PMID: 34406505; PMCID: PMC8374018; DOI: 10.1186/s41235-021-00321-1]
Abstract
How does viewers' knowledge guide their attention while they watch everyday events, how does it affect their memory, and does it change with age? Older adults have diminished episodic memory for everyday events but intact semantic knowledge. Indeed, research suggests that older adults may rely on their semantic memory to offset impairments in episodic memory, and when relevant knowledge is lacking, older adults' memory can suffer. Yet the mechanism by which prior knowledge guides attentional selection when watching dynamic activity is unclear. To address this, we studied the influence of knowledge on attention and memory for everyday events in young and older adults by tracking their eyes while they watched videos. The videos depicted activities that older adults perform more frequently than young adults (balancing a checkbook, planting flowers) or activities that young adults perform more frequently than older adults (installing a printer, setting up a video game). Participants completed free recall, recognition, and order memory tests after each video. We found age-related memory deficits when older adults had little knowledge of the activities, but memory did not differ between age groups when older adults had relevant knowledge and experience with the activities. Critically, knowledge influenced where viewers fixated when watching the videos. Older adults fixated goal-relevant information less than young adults when watching young-adult activities, but fixated it similarly to young adults when watching older-adult activities. Finally, fixating goal-relevant information predicted free recall of the everyday activities for both age groups. Thus, older adults may use relevant knowledge to more effectively infer the goals of actors, which guides their attention to goal-relevant actions and thereby improves their episodic memory for everyday activities.
Affiliation(s)
- Maverick E Smith
- Department of Psychological Sciences, Kansas State University, 471 Bluemont Hall, 1100 Mid-campus Dr., Manhattan, KS, 66506, USA.
- Lester C Loschky
- Department of Psychological Sciences, Kansas State University, 471 Bluemont Hall, 1100 Mid-campus Dr., Manhattan, KS, 66506, USA
- Heather R Bailey
- Department of Psychological Sciences, Kansas State University, 471 Bluemont Hall, 1100 Mid-campus Dr., Manhattan, KS, 66506, USA
9. Kristjánsson Á, Draschkow D. Keeping it real: Looking beyond capacity limits in visual cognition. Atten Percept Psychophys 2021; 83:1375-1390. [PMID: 33791942; PMCID: PMC8084831; DOI: 10.3758/s13414-021-02256-7]
Abstract
Research within visual cognition has made tremendous strides in uncovering the basic operating characteristics of the visual system by reducing the complexity of natural vision to artificial but well-controlled experimental tasks and stimuli. This reductionist approach has for example been used to assess the basic limitations of visual attention, visual working memory (VWM) capacity, and the fidelity of visual long-term memory (VLTM). The assessment of these limits is usually made in a pure sense, irrespective of goals, actions, and priors. While it is important to map out the bottlenecks our visual system faces, we focus here on selected examples of how such limitations can be overcome. Recent findings suggest that during more natural tasks, capacity may be higher than reductionist research suggests and that separable systems subserve different actions, such as reaching and looking, which might provide important insights about how pure attentional or memory limitations could be circumvented. We also review evidence suggesting that the closer we get to naturalistic behavior, the more we encounter implicit learning mechanisms that operate "for free" and "on the fly." These mechanisms provide a surprisingly rich visual experience, which can support capacity-limited systems. We speculate whether natural tasks may yield different estimates of the limitations of VWM, VLTM, and attention, and propose that capacity measurements should also pass the real-world test within naturalistic frameworks. Our review highlights various approaches for this and suggests that our understanding of visual cognition will benefit from incorporating the complexities of real-world cognition in experimental approaches.
Affiliation(s)
- Árni Kristjánsson
- School of Health Sciences, University of Iceland, Reykjavík, Iceland.
- School of Psychology, National Research University Higher School of Economics, Moscow, Russia.
- Dejan Draschkow
- Oxford Centre for Human Brain Activity, Wellcome Centre for Integrative Neuroimaging, Department of Psychiatry, University of Oxford, Oxford, UK.
10. Wang Q, Chang J, Chawarska K. Atypical Value-Driven Selective Attention in Young Children With Autism Spectrum Disorder. JAMA Netw Open 2020; 3:e204928. [PMID: 32374399; PMCID: PMC7203607; DOI: 10.1001/jamanetworkopen.2020.4928]
Abstract
Importance: Enhanced selective attention toward nonsocial objects and impaired attention to social stimuli constitute key clinical features of autism spectrum disorder (ASD). Yet the mechanisms associated with atypical selective attention in ASD are poorly understood, which limits the development of more effective interventions. In typically developing individuals, selective attention to social and nonsocial stimuli is associated with the informational value of the stimuli, which is typically learned over the course of repeated interactions with the stimuli.
Objective: To examine value learning (VL) of social and nonsocial stimuli and its association with selective attention in preschoolers with and without ASD.
Design, Setting, and Participants: This case-control study compared children with ASD vs children with developmental delay (DD) and children with typical development (TD) recruited between March 3, 2017, and June 13, 2018, at a university-based research laboratory. Participants were preschoolers with ASD, DD, or TD.
Main Outcomes and Measures: The procedure consisted of an eye-tracking, gaze-contingent VL task involving social (faces) and nonsocial (fractals) stimuli, with baseline, training, and choice test phases. Outcome measures were preferential attention to stimuli reinforced (high value) vs not reinforced (low value) during training. The hypotheses were stated before data collection.
Results: Included were 115 preschoolers with ASD (n = 48; mean [SD] age, 38.30 [15.55] months; 37 [77%] boys), DD (n = 31; mean [SD] age, 45.73 [19.49] months; 19 [61%] boys), or TD (n = 36; mean [SD] age, 36.53 [12.39] months; 22 [61%] boys). The groups did not differ in sex distribution; participants with ASD or TD had similar chronological age; and participants with ASD or DD had similar verbal and nonverbal IQ. After training, the ASD group showed preference for the high-value nonsocial stimuli (mean proportion, 0.61 [95% CI, 0.56-0.65]; P < .001) but not for the high-value social stimuli (mean proportion, 0.51 [95% CI, 0.46-0.56]; P = .58). In contrast, the DD and TD groups demonstrated preference for the high-value social stimuli (DD mean proportion, 0.59 [95% CI, 0.54-0.64]; P = .001; TD mean proportion, 0.57 [95% CI, 0.53-0.61]; P = .002) but not for the high-value nonsocial stimuli (DD mean proportion, 0.52 [95% CI, 0.44-0.59]; P = .64; TD mean proportion, 0.50 [95% CI, 0.44-0.57]; P = .91). Controlling for age and nonverbal IQ, autism severity was positively correlated with enhanced learning in the nonsocial domain (r = 0.22; P = .03) and with poorer learning in the social domain (r = -0.26; P = .01).
Conclusions and Relevance: Increased attention to objects in preschoolers with ASD may be associated with enhanced VL in the nonsocial domain. When paired with poor VL in the social domain, enhanced value-driven attention to objects may play a formative role in the emergence of autism symptoms by altering attentional priorities, and thus learning opportunities, in affected children.
Affiliation(s)
- Quan Wang
- Child Study Center, Yale School of Medicine, New Haven, Connecticut
- Key Laboratory of Spectral Imaging Technology, Key Laboratory of Biomedical Spectroscopy of Xi’an, Xi’an Institute of Optics and Precision Mechanics of Chinese Academy of Sciences
- Joseph Chang
- Child Study Center, Yale School of Medicine, New Haven, Connecticut
- Department of Statistics and Data Science, Yale University, New Haven, Connecticut
- Katarzyna Chawarska
- Child Study Center, Yale School of Medicine, New Haven, Connecticut
- Department of Statistics and Data Science, Yale University, New Haven, Connecticut
11. Helbing J, Draschkow D, Võ MLH. Search superiority: Goal-directed attentional allocation creates more reliable incidental identity and location memory than explicit encoding in naturalistic virtual environments. Cognition 2020; 196:104147. [PMID: 32004760; DOI: 10.1016/j.cognition.2019.104147]
Abstract
We use representations and expectations formed during life-long learning to support attentional allocation and perception. In comparison to traditional laboratory investigations, real-world memory formation is usually achieved without explicit instruction and on-the-fly as a by-product of natural interactions with our environment. Understanding this process and the quality of naturally formed representations is critical to understanding how memory is used to guide attention and perception. Utilizing immersive, navigable, and realistic virtual environments, we investigated incidentally generated memory representations by comparing them to memories for items which were explicitly memorized. Participants either searched for objects embedded in realistic indoor environments or explicitly memorized them for follow-up identity and location memory tests. We show for the first time that memory for the identity of naturalistic objects and their location in 3D space is higher after incidental encoding compared to explicit memorization, even though the subsequent memory tests came as a surprise to participants. Relating gaze behavior to memory performance revealed that encoding time was more predictive of subsequent memory when participants explicitly memorized an item, compared to incidentally encoding it. Our results suggest that the active nature of guiding attentional allocation during proactive behavior allows for behaviorally optimal formation and utilization of representations. This highlights the importance of investigating cognition under ecologically valid conditions and shows that understanding the most natural processes for encoding and maintaining information is critical for understanding adaptive behavior.
Affiliation(s)
- Jason Helbing
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Dejan Draschkow
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
- Department of Psychiatry, University of Oxford, Oxford, UK
- Melissa L-H Võ
- Scene Grammar Lab, Department of Psychology, Goethe University Frankfurt, Frankfurt am Main, Germany
12. Franchak JM. Visual exploratory behavior and its development. Psychol Learn Motiv 2020. [DOI: 10.1016/bs.plm.2020.07.001]
13. Draschkow D, Võ MLH. Scene grammar shapes the way we interact with objects, strengthens memories, and speeds search. Sci Rep 2017; 7:16471. [PMID: 29184115; PMCID: PMC5705766; DOI: 10.1038/s41598-017-16739-x]
Abstract
Predictions of environmental rules (here referred to as "scene grammar") can come in different forms: seeing a toilet in a living room would violate semantic predictions, while finding a toilet brush next to the toothpaste would violate syntactic predictions. The existence of such predictions has usually been investigated by showing observers images containing grammatical violations. Conversely, the generative process of creating an environment according to one's scene grammar, and its effects on behavior and memory, has received little attention. In a virtual reality paradigm, we instructed participants to arrange objects either according to their scene grammar or against it. Subsequently, participants' memory for the arrangements was probed using a surprise recall (Exp1) or repeated search (Exp2) task. Participants' construction behavior showed strategic use of larger, static objects to anchor the locations of smaller objects, which are generally the goals of everyday actions. Further analysis of the scene construction data revealed possible commonalities between the rules governing word usage in language and object usage in naturalistic environments. Taken together, we revealed some of the building blocks of scene grammar necessary for efficient behavior, which differentially influence how we interact with objects and what we remember about scenes.
Affiliation(s)
- Dejan Draschkow
- Scene Grammar Lab, Johann Wolfgang Goethe-Universität, Frankfurt, Germany.
- Melissa L-H Võ
- Scene Grammar Lab, Johann Wolfgang Goethe-Universität, Frankfurt, Germany
14. Of "what" and "where" in a natural search task: Active object handling supports object location memory beyond the object's identity. Atten Percept Psychophys 2017; 78:1574-1584. [PMID: 27165170; DOI: 10.3758/s13414-016-1111-x]
Abstract
Looking for, as well as actively manipulating, objects that are relevant to ongoing behavioral goals are intricate parts of natural behavior. It is, however, not clear to what degree these two forms of interaction with our visual environment differ with regard to their memory representations. In a real-world paradigm, we investigated whether physically engaging with objects as part of a search task influences identity and position memory differently for task-relevant versus irrelevant objects. Participants equipped with a mobile eye tracker either searched for cued objects without object interaction (Find condition) or actively collected the objects they found (Handle condition). In the following free-recall task, identity memory was assessed, demonstrating superior memory for relevant compared to irrelevant objects, but no difference between the Handle and Find conditions. Subsequently, location memory was inferred via times to first fixation in a final object search task. Active object manipulation and task relevance interacted: location memory for relevant objects was superior to that for irrelevant ones only in the Handle condition. Including previous object recall performance as a covariate in the linear mixed-model analysis of times to first fixation allowed us to explore the interaction between remembered/forgotten object identities and the execution of location memory. Identity memory performance predicted location memory in the Find but not the Handle condition, suggesting that active object handling leads to strong spatial representations independent of object identity memory. We argue that object handling facilitates the prioritization of relevant location information, but this might come at the cost of deprioritizing irrelevant information.
15
Donnarumma F, Costantini M, Ambrosini E, Friston K, Pezzulo G. Action perception as hypothesis testing. Cortex 2017; 89:45-60. [PMID: 28226255 PMCID: PMC5383736 DOI: 10.1016/j.cortex.2017.01.016]
Abstract
We present a novel computational model that describes action perception as an active inferential process that combines motor prediction (the reuse of our own motor system to predict perceived movements) and hypothesis testing (the use of eye movements to disambiguate amongst hypotheses). The system uses a generative model of how (arm and hand) actions are performed to generate hypothesis-specific visual predictions, and directs saccades to the most informative places of the visual scene to test these predictions - and underlying hypotheses. We test the model using eye movement data from a human action observation study. In both the human study and our model, saccades are proactive whenever context affords accurate action prediction; but uncertainty induces a more reactive gaze strategy, via tracking the observed movements. Our model offers a novel perspective on action observation that highlights its active nature based on prediction dynamics and hypothesis testing.
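The "most informative places" idea can be made concrete as expected information gain: maintain a posterior over competing action hypotheses and fixate the location whose observation is expected to reduce posterior entropy the most. A minimal sketch under that reading; the candidate locations, observations, and likelihood values below are invented for illustration and are not taken from the authors' generative model:

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(q * math.log2(q) for q in p if q > 0)

def normalize(p):
    s = sum(p)
    return [q / s for q in p]

# Toy setup: three competing action hypotheses and two candidate
# fixation locations. likelihood[loc][obs][h] = P(obs at loc | h);
# all values are invented for the sketch.
likelihood = {
    "hand": {
        "grip_open":   [0.8, 0.1, 0.5],
        "grip_closed": [0.2, 0.9, 0.5],
    },
    "target": {
        "near": [0.7, 0.7, 0.1],
        "far":  [0.3, 0.3, 0.9],
    },
}

def expected_information_gain(prior, loc):
    """Expected entropy reduction over hypotheses from fixating `loc`."""
    h_prior = entropy(prior)
    eig = 0.0
    for obs_lik in likelihood[loc].values():
        # P(obs) = sum_h P(obs | h) P(h)
        p_obs = sum(l * p for l, p in zip(obs_lik, prior))
        if p_obs == 0:
            continue
        posterior = normalize([l * p for l, p in zip(obs_lik, prior)])
        eig += p_obs * (h_prior - entropy(posterior))
    return eig

prior = [1 / 3] * 3
best = max(likelihood, key=lambda loc: expected_information_gain(prior, loc))
```

With these toy numbers the hand is the more informative fixation target, because its observations discriminate the first two hypotheses sharply; once the prior is certain, every location's expected gain drops to zero, which mirrors the proactive-to-reactive shift described above.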
Affiliation(s)
- Francesco Donnarumma
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Marcello Costantini
- Centre for Brain Science, Department of Psychology, University of Essex, Colchester, UK; Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University G. d'Annunzio, Chieti, Italy; Institute for Advanced Biomedical Technologies - ITAB, Foundation University G. d'Annunzio, Chieti, Italy
- Ettore Ambrosini
- Department of Neuroscience, University of Padua, Padua, Italy; Laboratory of Neuropsychology and Cognitive Neuroscience, Department of Neuroscience and Imaging, University G. d'Annunzio, Chieti, Italy; Institute for Advanced Biomedical Technologies - ITAB, Foundation University G. d'Annunzio, Chieti, Italy
- Karl Friston
- The Wellcome Trust Centre for Neuroimaging, UCL, London, UK
- Giovanni Pezzulo
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy.
16
Chawarska K, Ye S, Shic F, Chen L. Multilevel Differences in Spontaneous Social Attention in Toddlers With Autism Spectrum Disorder. Child Dev 2016; 87:543-57. [PMID: 26682668 PMCID: PMC5460372 DOI: 10.1111/cdev.12473]
Abstract
This study examined the latent structure of spontaneous social attention in 11- to 26-month-olds with autism spectrum disorder (ASD, n = 90) and typically developing (n = 79) controls. Application of the joint and individual variance explained decomposition technique revealed that attention was driven by a condition-independent tuning into the dynamic social scenes construct and context-specific constructs capturing selection of the most relevant social features for processing. Gaze behavior in ASD is characterized by a limited tuning into the social scenes and by a selection of atypical targets for processing. While the former may be due to early disruption of the reward circuitry leading to limited appreciation of the behavioral relevance of social information, the latter may represent secondary deficits reflecting limited knowledge about social partners.
17
Priorities for representation: Task settings and object interaction both influence object memory. Mem Cognit 2015; 44:114-23. [DOI: 10.3758/s13421-015-0550-2]
18
Chisholm JD, Chapman CS, Amm M, Bischof WF, Smilek D, Kingstone A. A cognitive ethology study of first- and third-person perspectives. PLoS One 2014; 9:e92696. [PMID: 24671136 PMCID: PMC3966828 DOI: 10.1371/journal.pone.0092696]
Abstract
The aim of the present study was to test the cognitive ethology approach, which seeks to link cognitions and behaviours as they operate in everyday life with those studied in controlled lab-based investigations. Our test bed was the understanding of first-person and third-person perspectives, which lab-based investigations have defined in a diverse and multi-faceted manner. We hypothesized that because these lab-based investigations seek to connect with how first- and third-person perspective operates in everyday life, either some of the divergent lab-based definitions are missing their mark or the everyday conceptualization of first- and third-person perspective is multi-faceted. Our investigation revealed the latter. By applying a cognitive ethology approach we were able to determine that (a) people's everyday understanding of perspective is diverse yet reliable, and (b) a lab-based investigation that applies these diverse understandings in a controlled setting can accurately predict how people will perform. These findings provide a 'proof of concept' for the cognitive ethology approach. Moreover, the present data demonstrate that previous lab-based studies, which often had very different understandings of first- and third-person perspective, were each in and of themselves valid. That is, each captures part of a broader understanding of perspective in everyday life. Our results also revealed a novel social factor not included in traditional conceptualizations of first- and third-person perspective: eye gaze. Eye contact is equated strongly with first-person perspective and the lack of eye contact with third-person perspective.
Affiliation(s)
- Joseph D. Chisholm
- Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
- Craig S. Chapman
- Faculty of Physical Education and Recreation, University of Alberta, Edmonton, Alberta, Canada
- Marvin Amm
- School of Psychology, University of Aberdeen, Aberdeen, United Kingdom
- Walter F. Bischof
- Department of Computing Science, University of Alberta, Edmonton, Alberta, Canada
- Dan Smilek
- Department of Psychology, University of Waterloo, Waterloo, Ontario, Canada
- Alan Kingstone
- Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
19
Schneider WX, Einhäuser W, Horstmann G. Attentional selection in visual perception, memory and action: a quest for cross-domain integration. Philos Trans R Soc Lond B Biol Sci 2013; 368:20130053. [PMID: 24018715 DOI: 10.1098/rstb.2013.0053]
Abstract
For decades, the cognitive and neural sciences have benefitted greatly from a separation of mind and brain into distinct functional domains. The tremendous success of this approach notwithstanding, it is self-evident that such a view is incomplete. Goal-directed behaviour of an organism requires the joint functioning of perception, memory and sensorimotor control. Prime candidates for achieving integration across these functional domains are attentional processes. Consequently, this Theme Issue brings together studies of attentional selection from many fields, both experimental and theoretical, that are united in their quest to find overarching integrative principles of attention between perception, memory and action. In all domains, attention is understood as a combination of competition and priority control ('bias'), with the task as a decisive driving factor to ensure coherent goal-directed behaviour and cognition. Using vision as the predominant model system for attentional selection, many studies of this Theme Issue place special emphasis on eye movements as a selection process that is both a fundamental action and serves a key function in perception. The Theme Issue spans a wide range of methods, from measuring human behaviour in the real world to recordings of single neurons in the non-human primate brain. We firmly believe that combining such a breadth of approaches is necessary not only for attentional selection, but also to take the next decisive step in all of the cognitive and neural sciences: to understand cognition and behaviour beyond isolated domains.
Affiliation(s)
- Werner X Schneider
- Center for Interdisciplinary Research (ZiF), Center of Excellence (CITEC), Bielefeld University, Bielefeld, Germany