1. Recker L, Foerster RM, Schneider WX, Poth CH. Emphasizing speed or accuracy in an eye-tracking version of the Trail-Making-Test: Towards experimental diagnostics for decomposing executive functions. PLoS One 2022; 17:e0274579. PMID: 36094948; PMCID: PMC9467318; DOI: 10.1371/journal.pone.0274579.
Abstract
The Trail-Making-Test (TMT) is one of the most widely used neuropsychological tests for assessing executive functions, the brain functions underlying cognitively controlled thought and action. Because it yields a number of test scores at once, the TMT can characterize an assortment of executive functions efficiently. Critically, however, as most test scores are derived from test completion times, they provide only a summary measure of various cognitive control processes. To address this problem, we extended the TMT in two ways. First, using a computerized eye-tracking version of the TMT, we added specific eye movement measures that deliver a richer set of data with a higher degree of cognitive process specificity. Second, we included an experimental manipulation of a fundamental executive function, namely participants' ability to emphasize speed or accuracy in task performance. Our study of healthy participants showed that eye movement measures differed between TMT conditions that are usually compared to assess the cognitive control process of alternating between task sets for action control. This demonstrates that eye movement measures are indeed sensitive to executive functions implicated in the TMT. Crucially, comparing performance under cognitive control sets of speed vs. accuracy emphasis revealed which test scores primarily varied due to this manipulation (e.g., trial duration, number of fixations), and which were still more sensitive to other differences between individuals (e.g., fixation duration, saccade amplitude). This provided an experimental construct validation of the test scores by distinguishing scores that primarily reflect the executive function of emphasizing speed vs. accuracy from those independent of it. In sum, both the inclusion of eye movement measures and the experimental manipulation of executive functions in the TMT enabled a more specific interpretation of the TMT in terms of cognitive functions and mechanisms, which offers more precise diagnoses in clinical applications and basic research.
Affiliation(s)
- Lukas Recker: Neuro-cognitive Psychology and Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Rebecca M. Foerster: Neuro-cognitive Psychology and Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany; Medical School EWL, Bielefeld University, Bielefeld, Germany
- Werner X. Schneider: Neuro-cognitive Psychology and Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Christian H. Poth: Neuro-cognitive Psychology and Center for Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
2. Earlier detection facilitates skilled responses to deceptive actions. Hum Mov Sci 2021; 80:102885. PMID: 34678581; DOI: 10.1016/j.humov.2021.102885.
Abstract
High-skilled and recreational rugby players were placed in a semi-immersive CAREN Lab environment to examine susceptibility to, and detection of, deception. To achieve this, a broad window of seven occlusion times was used in which participants responded to life-size video clips of an opposing player 'cutting' left or right, with or without a deceptive sidestep. Participants made full-body responses to 'intercept' the player and gave a verbal judgement of the opponent's final running direction. Response kinematic and kinetic data were recorded using three-dimensional motion capture cameras and force plates, respectively. Based on response accuracy, the results were separated into deception susceptibility and deception detection windows, and signal detection analysis was then used to calculate indices of discriminability between genuine and deceptive actions (d') and judgement bias (c). Analysis revealed that high-skilled and low-skilled players were similarly susceptible to deception; however, high-skilled players detected deception earlier in the action sequence, which enabled them to make more effective behavioural responses to deceptive actions.
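The discriminability (d') and bias (c) indices mentioned above follow the standard equal-variance Gaussian signal detection model. As a rough illustrative sketch (the function name and example rates are ours, not taken from the study):

```python
from statistics import NormalDist

def sdt_indices(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Equal-variance Gaussian signal detection indices.

    d' (discriminability) separates genuine from deceptive actions;
    c (criterion) indexes judgement bias: positive values indicate a
    conservative bias, negative values a liberal one.
    """
    z = NormalDist().inv_cdf  # probit (inverse normal CDF) transform
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Illustrative rates: 80% hits on deceptive trials, 30% false alarms
d, c = sdt_indices(0.80, 0.30)  # d ≈ 1.37, c ≈ -0.16
```

In a design like the one above, these indices would be computed separately for each occlusion window, so that d' traces how discriminability grows as more of the action sequence becomes visible.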
3. Langridge RW, Marotta JJ. Manipulation of physical 3-D and virtual 2-D stimuli: comparing digit placement and fixation position. Exp Brain Res 2021; 239:1863-1875. PMID: 33860822; DOI: 10.1007/s00221-021-06101-z.
Abstract
The visuomotor processes involved in grasping a 2-D target are known to be fundamentally different from those involved in grasping a 3-D object, and this has led to concerns regarding the generalizability of 2-D grasping research. This study directly compared participants' fixation positions and digit placement during interaction with either physical square objects or 2-D virtual versions of these objects. Participants were instructed to either simply grasp the stimulus or grasp and slide it to another location. Participants' digit placement and fixation positions did not significantly differ as a function of stimulus type when grasping in the center of the display. However, gaze and grasp positions shifted toward the near side of non-central virtual stimuli, while consistently remaining close to the horizontal midline of the physical stimulus. Participants placed their digits at less stable locations when grasping the virtual stimulus in comparison to the physical stimulus on the right side of the display, but this difference disappeared when grasping in the center and on the left. Similar outward shifts in digit placement and lowered fixations were observed when sliding both stimulus types, suggesting participants incorporated similar adjustments in grasp selection in anticipation of manipulation in both Physical and Virtual stimulus conditions. These results suggest that while fixation position and grasp point selection differed between stimulus types as a function of stimulus position, certain eye-hand coordinated behaviours were maintained when grasping both physical and virtual stimuli.
Affiliation(s)
- Ryan W Langridge: Perception and Action Lab, Department of Psychology, University of Manitoba, 190 Dysart Rd, Winnipeg, MB, R3T-2N2, Canada
- Jonathan J Marotta: Perception and Action Lab, Department of Psychology, University of Manitoba, 190 Dysart Rd, Winnipeg, MB, R3T-2N2, Canada
4. Tamaki Y, Nobusako S, Takamura Y, Miyawaki Y, Terada M, Morioka S. Effects of Tool Novelty and Action Demands on Gaze Searching During Tool Observation. Front Psychol 2020; 11:587270. PMID: 33329245; PMCID: PMC7719837; DOI: 10.3389/fpsyg.2020.587270.
Abstract
Technical reasoning refers to making inferences about how to use tools. The degree of technical reasoning is indicated by the bias of gaze (fixation) toward the functional part of a tool in use. Few studies have examined whether technical reasoning differs between familiar and unfamiliar novel tools, and the effect that the intention to use a tool has on technical reasoning has not been determined. This study examined gaze shifts in relation to familiar or unfamiliar tools under three conditions (free viewing, lift, and use) among 14 healthy adults (mean age ± standard deviation, 29.4 ± 3.9 years). The cumulative fixation time on the functional part of the tool served as a quantitative indicator of the degree of technical reasoning. A two-way analysis of variance for tools (familiar and unfamiliar) and conditions (free viewing, lift, and use) revealed that cumulative fixation time significantly increased under the free viewing and use conditions compared to the lift condition. Relative to the free viewing condition, cumulative fixation time for unfamiliar tools significantly decreased in the lift condition and significantly increased in the use condition. Importantly, the results showed that technical reasoning occurred in both the use and the free viewing conditions, although it was not as strong in the free viewing condition as in the use condition. The difference between technical reasoning in the free viewing and use conditions may indicate the difference between automatic and intentional technical reasoning.
Affiliation(s)
- Satoshi Nobusako: Graduate School of Health Sciences, Kio University, Nara, Japan; Neurorehabilitation Research Center, Kio University, Nara, Japan
- Yusaku Takamura: Graduate School of Health Sciences, Kio University, Nara, Japan; Department of Rehabilitation for the Movement Functions, Research Institute, National Rehabilitation Center for Persons With Disabilities, Saitama, Japan
- Yu Miyawaki: Graduate School of Health Sciences, Kio University, Nara, Japan; Research Fellow of Japan Society for the Promotion of Science, Tokyo, Japan; Department of Rehabilitation Medicine, Keio University School of Medicine, Tokyo, Japan
- Moe Terada: Department of Rehabilitation, Murata Hospital, Osaka, Japan
- Shu Morioka: Graduate School of Health Sciences, Kio University, Nara, Japan; Neurorehabilitation Research Center, Kio University, Nara, Japan
5. Wang X, Haji Fathaliyan A, Santos VJ. Toward Shared Autonomy Control Schemes for Human-Robot Systems: Action Primitive Recognition Using Eye Gaze Features. Front Neurorobot 2020; 14:567571. PMID: 33178006; PMCID: PMC7593660; DOI: 10.3389/fnbot.2020.567571.
Abstract
The functional independence of individuals with upper limb impairment could be enhanced by teleoperated robots that can assist with activities of daily living. However, robot control is not always intuitive for the operator. In this work, eye gaze was leveraged as a natural way to infer human intent and advance action recognition for shared autonomy control schemes. We introduced a classifier structure for recognizing low-level action primitives that incorporates novel three-dimensional gaze-related features. We defined an action primitive as a triplet comprising a verb, a target object, and a hand object. A recurrent neural network was trained to recognize a verb and target object, and was tested on three different activities. For a representative activity (making a powdered drink), the average recognition accuracy was 77% for the verb and 83% for the target object. Using a non-specific approach to classifying and indexing objects in the workspace, we observed a modest level of generalizability of the action primitive classifier across activities, including those for which the classifier was not trained. The novel input features of gaze object angle and its rate of change were especially useful for accurately recognizing action primitives and reducing the observational latency of the classifier.
Affiliation(s)
- Veronica J. Santos: Biomechatronics Laboratory, Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, CA, United States
6. Foulsham T. Beyond the picture frame: The function of fixations in interactive tasks. Psychology of Learning and Motivation 2020. DOI: 10.1016/bs.plm.2020.06.001.
7. Williot A, Blanchette I. The influence of an emotional processing strategy on visual threat detection by police trainees and officers. Applied Cognitive Psychology 2019. DOI: 10.1002/acp.3616.
Affiliation(s)
- Alexandre Williot: Groupe de recherche CogNAC (Cognition, Neurosciences, Affect et Comportement), Department of Psychology, Université du Québec à Trois-Rivières, Québec, Canada
- Isabelle Blanchette: Groupe de recherche CogNAC (Cognition, Neurosciences, Affect et Comportement), Department of Psychology, Université du Québec à Trois-Rivières, Québec, Canada
8. Toscani M, Yücel EI, Doerschner K. Gloss and Speed Judgments Yield Different Fine Tuning of Saccadic Sampling in Dynamic Scenes. Iperception 2019; 10:2041669519889070. PMID: 31897284; PMCID: PMC6918497; DOI: 10.1177/2041669519889070.
Abstract
Image motion contains potential cues about the material properties of objects. In earlier work, we proposed motion cues that could predict whether a moving object would be perceived as shiny or matte. However, whether the visual system uses these cues is still uncertain. Herein, we use the tracking of eye movements as a tool to understand what visual information observers use when engaged in material perception. Observers judged either the gloss or the speed of moving blobby shapes in an eye tracking experiment. Results indicate that during glossiness judgments, participants tend to look at gloss-diagnostic dynamic features more than during speed judgments. This suggests a fine tuning of the visual system to properties of moving stimuli: Task relevant information is actively singled out and processed in a dynamically changing environment.
Affiliation(s)
- Ezgi I. Yücel: Department of Psychology, University of Washington, Seattle, WA, USA
- Katja Doerschner: Department of Psychology, Giessen University, Germany; Department of Psychology & National Magnetic Resonance Research Center, Bilkent University, Turkey
9. Thomas NA, Manning R, Saccone EJ. Left-handers know what's left is right: Handedness and object affordance. PLoS One 2019; 14:e0218988. PMID: 31339898; PMCID: PMC6655602; DOI: 10.1371/journal.pone.0218988.
Abstract
We live in a right-hander's world. Although left-handers become accustomed to using right-handed devices, an underlying preference for objects that afford the dominant hand could remain. We employed eye tracking while left- and right-handed participants viewed advertisements for everyday products. Participants then rated aesthetic appeal, purchase intention, and perceived value. Left-handed participants found advertisements for products that more easily afforded them action to be more aesthetically appealing. They also indicated greater future purchase intention for products that were oriented towards the left hand, and gave these products a higher perceived value. Eye tracking data showed that object handles attracted attention, and were also able to retain participants' attention. Further, across multiple eye movement measures, our data show that participant eye movements were altered by the orientation of the handle, such that this side of the image was examined earlier and for longer, regardless of handedness. Left-handers' preferences might be stronger because they are more aware of object orientation, whereas right-handers do not experience the same difficulties. These findings highlight intrinsic differences in the way in which we perceive objects and our underlying judgments about those products, based on handedness.
Affiliation(s)
- Nicole A. Thomas: School of Psychological Sciences, Monash University, Melbourne, Australia; College of Education, Psychology and Social Work, Flinders University, Adelaide, Australia
- Rebekah Manning: College of Education, Psychology and Social Work, Flinders University, Adelaide, Australia
- Elizabeth J. Saccone: School of Psychology and Public Health, La Trobe University, Bendigo, Australia
10. Lancioni GE, Olivetti Belardinelli M, Singh NN, O'Reilly MF, Sigafoos J, Alberti G. Recent Technology-Aided Programs to Support Adaptive Responses, Functional Activities, and Leisure and Communication in People With Significant Disabilities. Front Neurol 2019; 10:643. PMID: 31312169; PMCID: PMC6614206; DOI: 10.3389/fneur.2019.00643.
Abstract
This paper presents an overview of recent technology-aided programs (i.e., technology-aided support tools) designed to help people with significant disabilities (a) engage in adaptive responses, functional activities, and leisure and communication, and thus (b) interact with their physical and social environment and improve their performance/achievement. In order to illustrate the support tools, the paper provides an overview of recent studies aimed at developing and assessing those tools. The paper also examines the tools' accessibility and usability, and comments on possible ways of modifying and advancing them to improve their impact. The tools taken into consideration concern, among others, (a) microswitches linked to computer systems, and aimed at promoting (i.e., through positive stimulation) minimal responses or functional body movements in individuals with intellectual disabilities and motor impairments; (b) computer systems, tablets, or smartphones aimed at supporting functional activity engagement of individuals with intellectual disabilities or Alzheimer's disease; and (c) microswitches with computer-aided systems, elaborate communication devices, and specifically arranged smartphones or tablets, directed at promoting leisure, communication, or both.
Affiliation(s)
- Giulio E. Lancioni: Department of Neuroscience and Sense Organs, University of Bari, Bari, Italy
- Marta Olivetti Belardinelli: Interuniversity Center for Research on Cognitive Processing in Natural and Artificial Systems (ECONA), Sapienza University of Rome, Rome, Italy
- Nirbhay N. Singh: Medical College of Georgia, Augusta University, Augusta, GA, United States
- Mark F. O'Reilly: Department of Special Education, University of Texas at Austin, Austin, TX, United States
- Jeff Sigafoos: School of Education, Victoria University of Wellington, Wellington, New Zealand
11. Damjanovic L, Williot A, Blanchette I. Is it dangerous? The role of an emotional visual search strategy and threat-relevant training in the detection of guns and knives. Br J Psychol 2019; 111:275-296. PMID: 31190378; DOI: 10.1111/bjop.12404.
Abstract
Counter-terrorism strategies rely on the assumption that it is possible to increase threat detection by providing explicit verbal instructions to orient people's attention to dangerous objects and hostile behaviours in their environment. Nevertheless, whether verbal cues can be used to enhance threat detection performance under laboratory conditions is currently unclear. In Experiment 1, student participants were required to detect a picture of a dangerous or neutral object embedded within a visual search display on the basis of an emotional strategy ('is it dangerous?') or a semantic strategy ('is it an object?'). The results showed a threat superiority effect that was enhanced by the emotional visual search strategy. In Experiment 2, whilst trainee police officers displayed a greater threat superiority effect than student controls, both groups performed better under the emotional than under the semantic visual search strategy. Manipulating situational threat levels (high vs. low) in the experimental instructions had no effect on visual search performance. The current findings provide new support for the language-as-context hypothesis. They are also consistent with a dual-processing account of threat detection involving a verbally mediated route in working memory and the deployment of a visual template developed as a function of training.
Affiliation(s)
- Ljubica Damjanovic: School of Natural Sciences and Psychology, Liverpool John Moores University, UK
- Alexandre Williot: Department of Psychology, Université du Québec à Trois-Rivières, Québec, Canada
- Isabelle Blanchette: Department of Psychology, Université du Québec à Trois-Rivières, Québec, Canada
12. Scrafton S, Stainer MJ, Tatler BW. Object Properties Influence Visual Guidance of Motor Actions. Vision (Basel) 2019; 3:vision3020028. PMID: 31735829; PMCID: PMC6802787; DOI: 10.3390/vision3020028.
Abstract
The dynamic nature of the real world poses challenges for predicting where best to allocate gaze during object interactions. The same object may require different visual guidance depending on its current or upcoming state. Here, we explore how object properties (the material and shape of objects) and object state (whether it is full of liquid, or to be set down in a crowded location) influence visual supervision while setting objects down, an element of object interaction that has been relatively neglected in the literature. In a liquid pouring task, we asked participants to move empty glasses to a filling station; to leave them empty, half fill, or completely fill them with water; and then move them again to a tray. During the first putdown (when the glasses were all empty), visual guidance was determined only by the type of glass being set down, with the more unwieldy champagne flutes being more likely to be guided than other types of glasses. However, when the glasses were then filled, glass type no longer mattered, with the material and fill level predicting whether the glasses were set down with visual supervision: full, glass material containers were more likely to be guided than empty, plastic ones. The key finding from this research is that the visual system responds flexibly to dynamic changes in object properties, likely based on predictions of risk associated with setting down the object unsupervised by vision. The factors that govern these mechanisms can vary within the same object as it changes state.
Affiliation(s)
- Sharon Scrafton: School of Applied Psychology, Griffith University, Gold Coast 4222, Australia
- Matthew J. Stainer: School of Applied Psychology, Griffith University, Gold Coast 4222, Australia
13. Palmiero M, Piccardi L, Giancola M, Nori R, D'Amico S, Olivetti Belardinelli M. The format of mental imagery: from a critical review to an integrated embodied representation approach. Cogn Process 2019; 20:277-289. PMID: 30798484; DOI: 10.1007/s10339-019-00908-z.
Abstract
The issue of the format of mental imagery is still an open debate. The classical analogue (depictive)-propositional (descriptive) debate has not provided definitive conclusions. Over the years, the debate has shifted within the frame of the embodied cognition approach, which focuses on the interdependence of perception, cognition and action. Although the simulation approach still retains the concept of representation, the more radical line of the embodied cognition approach emphasizes the importance of action and clearly disregards the concept of representation. In particular, the enactive approach focuses on motor procedures that allow the body to interact with the environment, whereas the sensorimotor approach focuses on the possession and exercise of sensorimotor knowledge about how the sensory input changes as a function of movement. In this review, the embodied approaches are presented and critically discussed. Then, in an attempt to show that the format of mental imagery varies according to the ability and the strategy used to represent information, the role of individual differences in imagery ability (e.g., vividness and expertise) and imagery strategy (e.g., object vs. spatial imagers) is reviewed. Since vividness is mainly associated with perceptual information, reflecting the activation level of specific imagery systems, whereas the preferred strategy used is mainly associated with perceptual (e.g., object imagery) or amodal and motor information (e.g., spatial imagery), the format of mental imagery appears to be based on dynamic embodied representations, depending on imagery abilities and imagery strategies.
Affiliation(s)
- Massimiliano Palmiero: Cognitive and Motor Rehabilitation and Neuroimaging Unit, I.R.C.C.S. Fondazione Santa Lucia, Via Ardeatina 306, 00179, Rome, Italy; Department of Biotechnological and Applied Clinical Sciences, University of L'Aquila, L'Aquila, Italy
- Laura Piccardi: Cognitive and Motor Rehabilitation and Neuroimaging Unit, I.R.C.C.S. Fondazione Santa Lucia, Via Ardeatina 306, 00179, Rome, Italy; Department of Life, Health and Environmental Sciences, University of L'Aquila, L'Aquila, Italy
- Marco Giancola: Department of Biotechnological and Applied Clinical Sciences, University of L'Aquila, L'Aquila, Italy
- Raffaella Nori: Department of Psychology, University of Bologna, Bologna, Italy
- Simonetta D'Amico: Department of Biotechnological and Applied Clinical Sciences, University of L'Aquila, L'Aquila, Italy
- Marta Olivetti Belardinelli: ECONA, Interuniversity Centre for Research on Cognitive Processing in Natural and Artificial Systems, Rome, Italy
14. Voudouris D, Smeets JBJ, Fiehler K, Brenner E. Gaze when reaching to grasp a glass. J Vis 2018; 18:16. PMID: 30167674; DOI: 10.1167/18.8.16.
Abstract
People have often been reported to look near their index finger's contact point when grasping. They have only been reported to look near the thumb's contact point when grasping an opaque object at eye height with a horizontal grip, that is, when the region near the index finger's contact point is occluded. To examine to what extent being able to see the digits' final trajectories influences where people look, we compared gaze when reaching to grasp a glass of water or milk that was placed at eye or hip height. Participants grasped the glass and poured its contents into another glass on their left. Surprisingly, most participants looked nearer to their thumb's contact point. To examine whether this was because gaze was biased toward the position of the subsequent action, which was to the left, we asked participants in a second experiment to grasp a glass and either place it or pour its contents into another glass either to their left or right. Most participants' gaze was biased to some extent toward the position of the next action, but gaze was not influenced consistently across participants. Gaze was also not influenced consistently across the experiments for individual participants, even for those who participated in both experiments. We conclude that gaze is not simply determined by the identity of the digit or by details of the contact points, such as their visibility, but that gaze is just as sensitive to other factors, such as where one will manipulate the object after grasping.
Affiliation(s)
- Jeroen B J Smeets: Department of Human Movement Sciences, Vrije Universiteit, Amsterdam, The Netherlands
- Katja Fiehler: Experimental Psychology, Justus-Liebig University, Giessen, Germany
- Eli Brenner: Department of Human Movement Sciences, Vrije Universiteit, Amsterdam, The Netherlands
15. Williot A, Blanchette I. Can threat detection be enhanced using processing strategies by police trainees and officers? Acta Psychol (Amst) 2018; 187:9-18. PMID: 29729440; DOI: 10.1016/j.actpsy.2018.04.010.
Abstract
The ability to detect threatening stimuli is an important skill for police officers. No research has yet examined whether implementing different information processing strategies can improve threat detection in police officers and police trainees. The first aim of our study was to compare the effect of strategies accentuating the processing of the emotional or the semantic dimension of stimuli on attention towards threatening and neutral information. The second aim was to consider the impact of PTSD symptoms on threat detection, as a function of processing strategies, in police officers and trainees. In a cueing paradigm, participants had to respond to a target that was presented following a threatening or neutral cue. Participants then answered a question, known beforehand, concerning the cue. The question was used to induce a more emotional or semantic processing strategy. Results showed that when the processing strategy was emotional, police trainees and officers were faster to detect the target when it followed a threatening cue, compared to a neutral cue, independently of its spatial location. This was not the case when the processing strategy was semantic. This study shows that induced processing strategies can influence attentional mechanisms related to threat detection in police trainees and police officers.
16. Haji Fathaliyan A, Wang X, Santos VJ. Exploiting Three-Dimensional Gaze Tracking for Action Recognition During Bimanual Manipulation to Enhance Human-Robot Collaboration. Front Robot AI 2018; 5:25. PMID: 33500912; PMCID: PMC7805858; DOI: 10.3389/frobt.2018.00025.
Abstract
Human-robot collaboration could be advanced by facilitating the intuitive, gaze-based control of robots, and enabling robots to recognize human actions, infer human intent, and plan actions that support human goals. Traditionally, gaze tracking approaches to action recognition have relied upon computer vision-based analyses of two-dimensional egocentric camera videos. The objective of this study was to identify useful features that can be extracted from three-dimensional (3D) gaze behavior and used as inputs to machine learning algorithms for human action recognition. We investigated human gaze behavior and gaze-object interactions in 3D during the performance of a bimanual, instrumental activity of daily living: the preparation of a powdered drink. A marker-based motion capture system and binocular eye tracker were used to reconstruct 3D gaze vectors and their intersection with 3D point clouds of objects being manipulated. Statistical analyses of gaze fixation duration and saccade size suggested that some actions (pouring and stirring) may require more visual attention than other actions (reach, pick up, set down, and move). 3D gaze saliency maps, generated with high spatial resolution for six subtasks, appeared to encode action-relevant information. The "gaze object sequence" was used to capture information about the identity of objects in concert with the temporal sequence in which the objects were visually regarded. Dynamic time warping barycentric averaging was used to create a population-based set of characteristic gaze object sequences that accounted for intra- and inter-subject variability. The gaze object sequence was used to demonstrate the feasibility of a simple action recognition algorithm that utilized a dynamic time warping Euclidean distance metric. Averaged over the six subtasks, the action recognition algorithm yielded an accuracy of 96.4%, precision of 89.5%, and recall of 89.2%. 
This level of performance suggests that the gaze object sequence is a promising feature for action recognition whose impact could be enhanced through the use of sophisticated machine learning classifiers and algorithmic improvements for real-time implementation. Robots capable of robust, real-time recognition of human actions during manipulation tasks could be used to improve quality of life in the home and quality of work in industrial environments.
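The abstract's recognition pipeline builds characteristic "gaze object sequences" per action and classifies new sequences by a dynamic time warping (DTW) distance. As a rough illustrative sketch of that core idea (not the authors' implementation — the object codes, templates, and labels below are invented), a nearest-template classifier under DTW might look like this:

```python
# Illustrative sketch only: classifying a "gaze object sequence" by its DTW
# distance to one characteristic template sequence per action. Objects are
# integer-encoded; all data here are hypothetical toy values.

def dtw_distance(a, b):
    """Classic dynamic time warping with an absolute-difference local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def classify(sequence, templates):
    """Return the action label whose template is nearest under DTW."""
    return min(templates, key=lambda label: dtw_distance(sequence, templates[label]))

# Hypothetical templates: object IDs 0 = pitcher, 1 = cup, 2 = spoon.
templates = {
    "pour": [0, 0, 1, 1],
    "stir": [2, 2, 1, 2],
}
print(classify([0, 1, 1], templates))  # → pour
```

In the paper the templates are population-level averages obtained via DTW barycentric averaging, which the toy dictionary above merely stands in for.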
Affiliation(s)
- Veronica J. Santos
- Biomechatronics Laboratory, Mechanical and Aerospace Engineering, University of California, Los Angeles, Los Angeles, CA, United States
|
17
|
Li S, Zhang X, Webb JD. 3-D-Gaze-Based Robotic Grasping Through Mimicking Human Visuomotor Function for People With Motion Impairments. IEEE Trans Biomed Eng 2017; 64:2824-2835. [PMID: 28278455 DOI: 10.1109/tbme.2017.2677902] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
OBJECTIVE: The goal of this paper is to achieve a novel 3-D-gaze-based human-robot-interaction modality, with which a user with motion impairment can intuitively express what tasks he/she wants the robot to do by directly looking at the object of interest in the real world. Toward this goal, we investigate 1) the technology to accurately sense where a person is looking in real environments and 2) the method to interpret the human gaze and convert it into an effective interaction modality. Looking at a specific object reflects what a person is thinking related to that object, and the gaze location contains essential information for object manipulation. METHODS: A novel gaze vector method is developed to accurately estimate the 3-D coordinates of the object being looked at in real environments, and a novel interpretation framework that mimics human visuomotor functions is designed to increase the control capability of gaze in object grasping tasks. RESULTS: High tracking accuracy was achieved using the gaze vector method. Participants successfully controlled a robotic arm for object grasping by directly looking at the target object. CONCLUSION: Human 3-D gaze can be effectively employed as an intuitive interaction modality for robotic object manipulation. SIGNIFICANCE: This is the first time that 3-D gaze has been utilized in a real environment to command a robot for a practical application. Three-dimensional gaze tracking is promising as an intuitive alternative for human-robot interaction, especially for disabled and elderly people who cannot handle conventional interaction modalities.
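The abstract does not spell out the gaze vector method itself. As a generic, hypothetical illustration of how a 3-D gaze point can be recovered from two binocular gaze rays — taking the midpoint of their segment of closest approach, a standard computation that may differ from the paper's actual method — with invented eye positions and directions:

```python
# Illustrative sketch only (not the paper's "gaze vector method"): estimate a
# 3-D gaze point as the midpoint of the closest approach between the two
# eyes' gaze rays. All coordinates below are hypothetical.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def gaze_point_3d(p1, d1, p2, d2):
    """Midpoint of closest approach between rays p1 + s*d1 and p2 + t*d2."""
    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b  # approaches 0 when the rays are nearly parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    q1 = [p + s * di for p, di in zip(p1, d1)]  # closest point on ray 1
    q2 = [p + t * di for p, di in zip(p2, d2)]  # closest point on ray 2
    return [(x + y) / 2 for x, y in zip(q1, q2)]

# Two eyes 6 cm apart, both verging on a point 50 cm straight ahead:
point = gaze_point_3d([-0.03, 0, 0], [0.03, 0, 0.5],
                      [0.03, 0, 0], [-0.03, 0, 0.5])
print(point)  # ≈ [0.0, 0.0, 0.5]
```

In practice the eye positions and directions would come from a calibrated head-mounted eye tracker, and the parallel-ray case (denom near zero) needs explicit handling.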
|
18
|
Jiang T, Long Z, Ran X, Zhao X, Xu F, Qiu F, Kanwal JS, Feng J. Using sounds for making decisions: greater tube-nosed bats prefer antagonistic calls over non-communicative sounds when feeding. Biol Open 2016; 5:1864-1868. [PMID: 27815241 PMCID: PMC5200914 DOI: 10.1242/bio.021865] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Abstract
Bats vocalize extensively within different social contexts. The type and extent of information conveyed via their vocalizations and their perceptual significance, however, remains controversial and difficult to assess. Greater tube-nosed bats, Murina leucogaster, emit calls consisting of long rectangular broadband noise burst (rBNBl) syllables during aggression between males. To experimentally test the behavioral impact of these sounds for feeding, we deployed an approach and place-preference paradigm. Two food trays were placed on opposite sides and within different acoustic microenvironments, created by sound playback, within a specially constructed tent. Specifically, we tested whether the presence of rBNBl sounds at a food source effectively deters the approach of male bats in comparison to echolocation sounds and white noise. In each case, contrary to our expectation, males preferred to feed at a location where rBNBl sounds were present. We propose that the species-specific rBNBl provides contextual information, not present within non-communicative sounds, to facilitate approach towards a food source.
Affiliation(s)
- Tinglei Jiang
- Jilin Key Laboratory of Animal Resource Conservation and Utilization, Northeast Normal University, Jingyue St 2555, Changchun 130117, People's Republic of China; Key Laboratory for Wetland Ecology and Vegetation Restoration of National Environmental Protection, Northeast Normal University, Jingyue St 2555, Changchun 130117, People's Republic of China
- Zhenyu Long
- Jilin Key Laboratory of Animal Resource Conservation and Utilization, Northeast Normal University, Jingyue St 2555, Changchun 130117, People's Republic of China
- Xin Ran
- Jilin Key Laboratory of Animal Resource Conservation and Utilization, Northeast Normal University, Jingyue St 2555, Changchun 130117, People's Republic of China
- Xue Zhao
- Jilin Key Laboratory of Animal Resource Conservation and Utilization, Northeast Normal University, Jingyue St 2555, Changchun 130117, People's Republic of China
- Fei Xu
- Jilin Key Laboratory of Animal Resource Conservation and Utilization, Northeast Normal University, Jingyue St 2555, Changchun 130117, People's Republic of China
- Fuyuan Qiu
- Jilin Key Laboratory of Animal Resource Conservation and Utilization, Northeast Normal University, Jingyue St 2555, Changchun 130117, People's Republic of China
- Jagmeet S Kanwal
- Department of Neurology, Georgetown University, Washington, DC 20057, USA
- Jiang Feng
- Jilin Key Laboratory of Animal Resource Conservation and Utilization, Northeast Normal University, Jingyue St 2555, Changchun 130117, People's Republic of China; Key Laboratory for Wetland Ecology and Vegetation Restoration of National Environmental Protection, Northeast Normal University, Jingyue St 2555, Changchun 130117, People's Republic of China
|
19
|
Foerster RM. Task-Irrelevant Expectation Violations in Sequential Manual Actions: Evidence for a "Check-after-Surprise" Mode of Visual Attention and Eye-Hand Decoupling. Front Psychol 2016; 7:1845. [PMID: 27933016 PMCID: PMC5120088 DOI: 10.3389/fpsyg.2016.01845] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2016] [Accepted: 11/07/2016] [Indexed: 11/13/2022] Open
Abstract
When performing sequential manual actions (e.g., cooking), visual information is prioritized according to the task, determining where and when to attend, look, and act. In well-practiced sequential actions, long-term memory (LTM)-based expectations specify which action targets might be found where and when. We have previously demonstrated (Foerster and Schneider, 2015b) that violations of such expectations that are task-relevant (e.g., target location change) cause a regression from a memory-based mode of attentional selection to visual search. How might task-irrelevant expectation violations in such well-practiced sequential manual actions modify attentional selection? This question was investigated with a computerized version of the number-connection test. Participants clicked on nine spatially distributed numbered target circles in ascending order while eye movements were recorded as a proxy for covert attention. Targets' visual features and locations stayed constant for 65 prechange-trials, allowing participants to practice the manual action sequence. Subsequently, a task-irrelevant expectation violation occurred and stayed for 20 change-trials. Specifically, action target number 4 appeared in a different font. In 15 reversion-trials, number 4 returned to the original font. During the first task-irrelevant change trial, manual clicking was slower and eye scanpaths were larger and contained more fixations. The additional fixations were mainly checking fixations on the changed target while acting on later targets. Whereas the eyes repeatedly revisited the task-irrelevant change, cursor-paths remained completely unaffected. Effects lasted for 2–3 change trials and did not reappear during reversion. In conclusion, an unexpected task-irrelevant change on a task-defining feature of a well-practiced manual sequence leads to eye-hand decoupling and a “check-after-surprise” mode of attentional selection.
Affiliation(s)
- Rebecca M Foerster
- Neuro-cognitive Psychology, Department of Psychology & Cluster of Excellence Cognitive Interaction Technology 'CITEC', Bielefeld University, Bielefeld, Germany
|
20
|
Butz MV. Toward a Unified Sub-symbolic Computational Theory of Cognition. Front Psychol 2016; 7:925. [PMID: 27445895 PMCID: PMC4915327 DOI: 10.3389/fpsyg.2016.00925] [Citation(s) in RCA: 35] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2015] [Accepted: 06/03/2016] [Indexed: 11/13/2022] Open
Abstract
This paper proposes how various disciplinary theories of cognition may be combined into a unifying, sub-symbolic, computational theory of cognition. The following theories are considered for integration: psychological theories, including the theory of event coding, event segmentation theory, the theory of anticipatory behavioral control, and concept development; artificial intelligence and machine learning theories, including reinforcement learning and generative artificial neural networks; and theories from theoretical and computational neuroscience, including predictive coding and free energy-based inference. In the light of such a potential unification, it is discussed how abstract cognitive, conceptualized knowledge and understanding may be learned from actively gathered sensorimotor experiences. The unification rests on the free energy-based inference principle, which essentially implies that the brain builds a predictive, generative model of its environment. Neural activity-oriented inference causes the continuous adaptation of the currently active predictive encodings. Neural structure-oriented inference causes the longer term adaptation of the developing generative model as a whole. Finally, active inference strives for maintaining internal homeostasis, causing goal-directed motor behavior. To learn abstract, hierarchical encodings, however, it is proposed that free energy-based inference needs to be enhanced with structural priors, which bias cognitive development toward the formation of particular, behaviorally suitable encoding structures. As a result, it is hypothesized how abstract concepts can develop from, and thus how they are structured by and grounded in, sensorimotor experiences. Moreover, it is sketched out how symbol-like thought can be generated by a temporarily active set of predictive encodings, which constitute a distributed neural attractor in the form of an interactive free-energy minimum. The activated, interactive network attractor essentially characterizes the semantics of a concept or a concept composition, such as an actual or imagined situation in our environment. Temporal successions of attractors then encode unfolding semantics, which may be generated by a behavioral or mental interaction with an actual or imagined situation in our environment. Implications, further predictions, possible verifications and falsifications, as well as potential enhancements into a fully spelled-out unified theory of cognition are discussed at the end of the paper.
Affiliation(s)
- Martin V Butz
- Cognitive Modeling, Department of Computer Science and Department of Psychology, Eberhard Karls University of Tübingen, Tübingen, Germany
|
21
|
Anticipatory eye fixations reveal tool knowledge for tool interaction. Exp Brain Res 2016; 234:2415-31. [DOI: 10.1007/s00221-016-4646-0] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2015] [Accepted: 04/02/2016] [Indexed: 10/22/2022]
|