1
Orientation control strategies and adaptation to a visuomotor perturbation in rotational hand movements. PLoS Comput Biol 2022; 18:e1010248. [DOI: 10.1371/journal.pcbi.1010248]
Abstract
Computational approaches to biological motor control are used to discover the building blocks of human motor behaviour. Models explaining features of human hand movements have been studied thoroughly, yet only a few studies have attempted to explain how the orientation of the hand is controlled; most focus instead on the control of hand translation, predominantly in a single plane. In this study, we present a new methodology for studying the way humans control the orientation of their hands in three dimensions and demonstrate it in two sequential experiments. We developed a quaternion-based score that quantifies the geodicity of rotational hand movements and evaluated it experimentally. In the first experiment, participants performed a simple orientation-matching task with a robotic manipulator. We found that rotations are generally performed by following a geodesic in the quaternion hypersphere, which suggests that, similarly to translation, the orientation of the hand is centrally controlled, possibly by optimizing geometrical properties of the hand's rotation. This result established a baseline for the study of human responses to perturbed visual feedback of hand orientation. In the second experiment, we developed a novel visuomotor rotation task in which the perturbing rotation is applied to the hand's rotation, and studied participants' adaptation to this rotation and the transfer of that adaptation to a different initial orientation. We observed partial adaptation to the rotation. The patterns of transfer to a different initial orientation were consistent with representation of the orientation in extrinsic coordinates. The methodology that we developed allows the control of a rigid body to be studied without reducing the dimensionality of the task. The results of the two experiments open questions for future studies regarding the mechanisms underlying the central control of hand orientation, and can benefit applications that involve fine manipulation of rigid bodies, such as teleoperation and neurorehabilitation.
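As a rough illustration of the kind of quaternion-based geodicity score described above (a sketch only, not the authors' exact metric; the ratio definition and function names are assumptions), one could compare the shortest-arc distance between the initial and final orientations on the unit-quaternion hypersphere with the length of the path actually traced:

```python
import numpy as np

def quat_angle(q1, q2):
    """Angular distance between two unit quaternions (w, x, y, z).
    abs() accounts for the fact that q and -q encode the same rotation."""
    d = np.clip(abs(np.dot(q1, q2)), 0.0, 1.0)
    return 2.0 * np.arccos(d)

def geodicity(quats):
    """Ratio of the shortest-arc (geodesic) distance between the first and
    last orientation to the length of the path actually traced.
    1.0 means the movement followed a geodesic; smaller values indicate a
    longer, curved rotation path. `quats` is an (N, 4) array of unit quaternions."""
    quats = np.asarray(quats, dtype=float)
    path = sum(quat_angle(quats[i], quats[i + 1]) for i in range(len(quats) - 1))
    shortest = quat_angle(quats[0], quats[-1])
    return shortest / path if path > 0 else 1.0
```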
2
Murdison TS, Standage DI, Lefèvre P, Blohm G. Effector-dependent stochastic reference frame transformations alter decision-making. J Vis 2022; 22:1. [PMID: 35816048] [PMCID: PMC9284468] [DOI: 10.1167/jov.22.8.1]
Abstract
Psychophysical, motor control, and modeling studies have revealed that sensorimotor reference frame transformations (RFTs) add variability to transformed signals. For perceptual decision-making, this phenomenon could decrease the fidelity of a decision signal's representation or alternatively improve its processing through stochastic facilitation. We investigated these two hypotheses under various sensorimotor RFT constraints. Participants performed a time-limited, forced-choice motion discrimination task under eight combinations of head roll and/or stimulus rotation while responding either with a saccade or button press. This paradigm, together with the use of a decision model, allowed us to parameterize and correlate perceptual decision behavior with eye-, head-, and shoulder-centered sensory and motor reference frames. Misalignments between sensory and motor reference frames produced systematic changes in reaction time and response accuracy. For some conditions, these changes were consistent with a degradation of motion evidence commensurate with a decrease in stimulus strength in our model framework. Differences in participant performance were explained by a continuum of eye–head–shoulder representations of accumulated motion evidence, with an eye-centered bias during saccades and a shoulder-centered bias during button presses. In addition, we observed evidence for stochastic facilitation during head-rolled conditions (i.e., head roll resulted in faster, more accurate decisions in oblique motion for a given stimulus–response misalignment). Together, these results show that perceptual decision-making and stochastic RFTs are inseparable within the present context: by simply rolling one's head, perceptual decision-making is altered in a way that is predicted by stochastic RFTs.
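A toy sketch of how a stochastic reference frame transformation could degrade decision evidence (purely illustrative; the noise scaling, the parameters, and the simple drift-diffusion accumulator below are assumptions, not the authors' model):

```python
import numpy as np

rng = np.random.default_rng(0)

def transform_evidence(motion_xy, head_roll_rad, noise_per_rad=0.2):
    """Rotate a retinal motion-evidence vector toward a shoulder-like frame,
    adding noise that grows with the size of the required rotation
    (a 'stochastic' reference frame transformation)."""
    c, s = np.cos(head_roll_rad), np.sin(head_roll_rad)
    R = np.array([[c, -s], [s, c]])
    noise = rng.normal(0.0, noise_per_rad * abs(head_roll_rad), size=2)
    return R @ np.asarray(motion_xy, dtype=float) + noise

def drift_diffusion(drift, threshold=1.0, dt=0.01, noise_sd=0.1, max_t=5.0):
    """Accumulate noisy evidence until a bound is reached; return (choice, RT)."""
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + rng.normal(0.0, noise_sd * np.sqrt(dt))
        t += dt
    return np.sign(x), t

# Example: rightward motion evidence judged with and without a 30-degree head roll.
evidence = np.array([0.5, 0.0])
for roll in (0.0, np.deg2rad(30)):
    drift = transform_evidence(evidence, roll)[0]  # horizontal evidence component
    print(np.rad2deg(roll), drift_diffusion(drift))
```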
Affiliation(s)
- T Scott Murdison
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
- Dominic I Standage
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada; School of Psychology, University of Birmingham, UK
- Philippe Lefèvre
- ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
3
Rajendran SK, Wei Q, Zhang F. Two degree-of-freedom robotic eye: design, modeling, and learning-based control in foveation and smooth pursuit. Bioinspir Biomim 2021; 16. [PMID: 33951619] [PMCID: PMC10644786] [DOI: 10.1088/1748-3190/abfe40]
Abstract
As ocular motility disorders affecting human eye movement become increasingly prevalent, the need to understand the biomechanics of the human eye continues to grow. A robotic eye system that physically mimics the human eye can serve as a useful tool for biomedical researchers to gain an intuitive understanding of the functions and defects of the extraocular muscles and the eye. This paper presents the design, modeling, and control of a two degree-of-freedom (2-DOF) robotic eye driven by artificial muscles made of super-coiled polymers (SCPs). Given the highly nonlinear dynamics of the robotic eye system, this paper applies deep deterministic policy gradient (DDPG), a reinforcement learning algorithm, to solve the control design problem in foveation and smooth pursuit of the robotic eye. To the best of our knowledge, this paper presents the first modeling effort to establish the dynamics of a robotic eye driven by SCP actuators, as well as the first control design effort for robotic eyes using a DDPG-based control strategy. A linear quadratic regulator-type reward function is proposed to achieve a balance between system performance (convergence speed and tracking accuracy) and control effort. Simulation results are presented to demonstrate the effectiveness of the proposed control strategy for the 2-DOF robotic eye.
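A linear quadratic regulator-type reward presumably combines quadratic penalties on tracking error and control effort; a minimal sketch of such a reward follows (the weights Q and R are illustrative placeholders, not values from the paper):

```python
import numpy as np

# Quadratic penalty weights (illustrative values only):
# Q penalizes gaze error and its rate; R penalizes actuator effort.
Q = np.diag([10.0, 1.0])
R = np.diag([0.1, 0.1])

def lqr_type_reward(state_error, control):
    """Reward = -(e^T Q e + u^T R u): largest when the robotic eye is on
    target and using little actuation, trading accuracy against effort."""
    e = np.asarray(state_error, dtype=float)
    u = np.asarray(control, dtype=float)
    return -(e @ Q @ e + u @ R @ u)
```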
Affiliation(s)
- Sunil Kumar Rajendran
- Department of Electrical and Computer Engineering, George Mason University, VA, United States of America
- Qi Wei
- Department of Bioengineering, George Mason University, VA, United States of America
- Feitian Zhang
- Department of Electrical and Computer Engineering, George Mason University, VA, United States of America
4
Murdison TS, Blohm G, Bremmer F. Saccade-induced changes in ocular torsion reveal predictive orientation perception. J Vis 2019; 19:10. [PMID: 31533148] [DOI: 10.1167/19.11.10]
Abstract
Natural orienting of gaze often results in a retinal image that is rotated relative to space due to ocular torsion. However, we perceive neither this rotation nor a moving world, despite visual rotational motion on the retina. This perceptual stability is often attributed to the phenomenon known as predictive remapping, but the current remapping literature ignores this torsional component. In addition, studies often simply measure remapping across either space or features (e.g., orientation), but in natural circumstances both components are bound together for stable perception. One natural circumstance in which the perceptual system must account for the current and future eye orientation to correctly interpret the orientation of external stimuli occurs during movements to or from oblique eye orientations (i.e., eye orientations with both a horizontal and vertical angular component relative to the primary position). Here we took advantage of oblique eye orientation-induced ocular torsion to examine perisaccadic orientation perception. First, we found that orientation perception was largely predicted by the rotated retinal image. Second, we observed a presaccadic remapping of orientation perception consistent with maintaining a stable (but spatially inaccurate) retinocentric perception throughout the saccade. These findings strongly suggest that our seamless perceptual stability relies on retinocentric signals that are predictively remapped in all three ocular dimensions with each saccade.
Affiliation(s)
- T Scott Murdison
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
- Frank Bremmer
- Department of Neurophysics, Philipps-Universität Marburg, Germany
5
Murdison TS, Leclercq G, Lefèvre P, Blohm G. Misperception of motion in depth originates from an incomplete transformation of retinal signals. J Vis 2019; 19:21. [PMID: 31647515] [DOI: 10.1167/19.12.21]
Abstract
Depth perception requires the use of an internal model of the eye-head geometry to infer distance from binocular retinal images and extraretinal 3D eye-head information, particularly ocular vergence. Similarly, for the perception of motion in depth, gaze angle is required to correctly interpret the spatial direction of motion from retinal images; however, it is unknown whether the brain can make adequate use of extraretinal version and vergence information to correctly transform binocular retinal motion into 3D spatial coordinates. Here we tested this hypothesis by asking participants to reconstruct the spatial trajectory of an isolated disparity stimulus moving in depth either perifoveally or peripherally while their gaze was oriented at different vergence and version angles. We found large systematic errors in the perceived motion trajectory that reflected an intermediate reference frame between a purely retinal interpretation of binocular retinal motion (not accounting for veridical vergence and version) and the spatially correct motion. We quantify these errors with a 3D reference frame model accounting for target, eye, and head position at the time the motion percept is encoded. This model captured the behavior well, revealing that participants tended to underestimate their version by up to 17%, overestimate their vergence by up to 22%, and underestimate the overall change in retinal disparity by up to 64%, and that the use of extraretinal information depended on retinal eccentricity. Since such large perceptual errors are not observed in everyday viewing, we suggest that both monocular retinal cues and binocular extraretinal signals are required for accurate real-world perception of motion in depth.
Affiliation(s)
- T Scott Murdison
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
- Guillaume Leclercq
- ICTEAM and Institute for Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Philippe Lefèvre
- ICTEAM and Institute for Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN), Kingston, Ontario, Canada
6
A Tangible Solution for Hand Motion Tracking in Clinical Applications. Sensors 2019; 19:208. [PMID: 30626130] [PMCID: PMC6339214] [DOI: 10.3390/s19010208]
Abstract
Objective real-time assessment of hand motion is crucial in many clinical applications including technically-assisted physical rehabilitation of the upper extremity. We propose an inertial-sensor-based hand motion tracking system and a set of dual-quaternion-based methods for estimation of finger segment orientations and fingertip positions. The proposed system addresses the specific requirements of clinical applications in two ways: (1) In contrast to glove-based approaches, the proposed solution maintains the sense of touch. (2) In contrast to previous work, the proposed methods avoid the use of complex calibration procedures, which means that they are suitable for patients with severe motor impairment of the hand. To overcome the limited significance of validation in lab environments with homogeneous magnetic fields, we validate the proposed system using functional hand motions in the presence of severe magnetic disturbances as they appear in realistic clinical settings. We show that standard sensor fusion methods that rely on magnetometer readings may perform well in perfect laboratory environments but can lead to more than 15 cm root-mean-square error for the fingertip distances in realistic environments, while our advanced method yields root-mean-square errors below 2 cm for all performed motions.
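The abstract does not spell out the estimator itself, but the underlying idea of recovering a fingertip position from per-segment orientation estimates can be sketched by chaining rotated segment-length vectors from the knuckle outward (the segment lengths, quaternion convention, and function names below are assumptions for illustration, not the authors' dual-quaternion method):

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate 3D vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def fingertip_position(segment_quats, segment_lengths, base=np.zeros(3)):
    """Chain finger segments: each segment's orientation (in a common frame,
    e.g. from an inertial sensor fusion filter) rotates its own length vector,
    and the offsets are summed from the knuckle to the fingertip."""
    p = np.array(base, dtype=float)
    for q, length in zip(segment_quats, segment_lengths):
        p = p + quat_rotate(q, np.array([length, 0.0, 0.0]))
    return p

# Example: three index-finger segments, all aligned with +x (identity quaternions).
identity = np.array([1.0, 0.0, 0.0, 0.0])
print(fingertip_position([identity] * 3, [0.05, 0.03, 0.02]))  # -> [0.10, 0, 0]
```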
7
Balbinot G, Schuch CP, Jeffers MS, McDonald MW, Livingston-Thomas JM, Corbett D. Post-stroke kinematic analysis in rats reveals similar reaching abnormalities as humans. Sci Rep 2018; 8:8738. [PMID: 29880827] [PMCID: PMC5992226] [DOI: 10.1038/s41598-018-27101-0]
Abstract
A coordinated pattern of multi-muscle activation is essential to produce efficient reaching trajectories. Disruption of these coordinated activation patterns, termed synergies, is evident following stroke and results in reaching deficits; however, this phenomenon has been largely neglected in preclinical investigation. Furthermore, traditional outcome measures of post-stroke performance seldom distinguish between impairment restitution and compensatory movement strategies. We sought to address this by using kinematic analysis to characterize reaching movements and kinematic synergies of rats performing the Montoya staircase task before and after ischemic stroke. Synergy was defined as the simultaneous movement of the wrist and other proximal forelimb joints (i.e., shoulder, elbow) during reaching. Following stroke, rats exhibited less individuation between joints, moving the affected limb more as a unit. Moreover, an abnormal flexor synergy characterized by concurrent elbow flexion, shoulder adduction, and external rotation was evident. These abnormalities ultimately led to inefficient and unstable reaching trajectories and decreased reaching performance (pellets retrieved). The reaching abnormalities observed in this preclinical stroke model are similar to those classically observed in humans. This highlights the potential of kinematic analysis to better align preclinical and clinical outcome measures, which is essential for developing future rehabilitation strategies following stroke.
Affiliation(s)
- Gustavo Balbinot
- Department of Cellular and Molecular Medicine, Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada
- Brain Institute, Federal University of Rio Grande do Norte, Natal, RN, Brazil
- Clarissa Pedrini Schuch
- Department of Cellular and Molecular Medicine, Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada
- Matthew S Jeffers
- Department of Cellular and Molecular Medicine, Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada
- Canadian Partnership for Stroke Recovery, University of Ottawa, Ottawa, ON, Canada
- Matthew W McDonald
- Department of Cellular and Molecular Medicine, Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada
- Canadian Partnership for Stroke Recovery, University of Ottawa, Ottawa, ON, Canada
- Jessica M Livingston-Thomas
- Department of Cellular and Molecular Medicine, Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada
- Canadian Partnership for Stroke Recovery, University of Ottawa, Ottawa, ON, Canada
- Dale Corbett
- Department of Cellular and Molecular Medicine, Faculty of Medicine, University of Ottawa, Ottawa, ON, Canada
- Canadian Partnership for Stroke Recovery, University of Ottawa, Ottawa, ON, Canada
8
Abstract
Differential kinematics is a traditional approach to linearize the mapping between the workspace and joint space. However, a Jacobian matrix cannot be inverted directly in redundant systems or in configurations where kinematic singularities occur. This work presents a novel approach to the solution of differential kinematics through the use of dual quaternions. The main advantage of this approach is to reduce "drift" error in differential kinematics and to ignore kinematic singularities. An analytical dual-quaternionic Jacobian is defined, which allows for the application of this approach in any robotic system.
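For readers unfamiliar with the representation, a rigid transform can be packed into a dual quaternion q = q_r + ε q_d with q_d = ½ t ⊗ q_r (t being the translation written as a pure quaternion). The minimal sketch below shows only this algebra and its composition rule, not the paper's analytical dual-quaternionic Jacobian:

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    ])

def dq_from_pose(rot_q, t):
    """Dual quaternion (real part, dual part) for rotation rot_q followed by
    translation t: real = rot_q, dual = 0.5 * t_quat * rot_q."""
    t_quat = np.array([0.0, *t])
    return rot_q, 0.5 * qmul(t_quat, rot_q)

def dq_mul(a, b):
    """Compose two rigid transforms: (ar + eps*ad)(br + eps*bd)
    = ar*br + eps*(ar*bd + ad*br)."""
    ar, ad = a
    br, bd = b
    return qmul(ar, br), qmul(ar, bd) + qmul(ad, br)
```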
9
Murdison TS, Leclercq G, Lefèvre P, Blohm G. Computations underlying the visuomotor transformation for smooth pursuit eye movements. J Neurophysiol 2015; 113:1377-99. [PMID: 25475344] [DOI: 10.1152/jn.00273.2014]
Abstract
Smooth pursuit eye movements are driven by retinal motion and enable us to view moving targets with high acuity. Complicating the generation of these movements is the fact that different eye and head rotations can produce different retinal stimuli that nonetheless give rise to identical smooth pursuit trajectories. However, because our eyes accurately pursue targets regardless of eye and head orientation (Blohm G, Lefèvre P. J Neurophysiol 104: 2103-2115, 2010), the brain must somehow take these signals into account. To learn about the neural mechanisms potentially underlying this visual-to-motor transformation, we trained a physiologically inspired neural network model to combine two-dimensional (2D) retinal motion signals with three-dimensional (3D) eye and head orientation and velocity signals to generate a spatially correct 3D pursuit command. We then simulated conditions of 1) head roll-induced ocular counterroll, 2) oblique gaze-induced retinal rotations, 3) eccentric gazes (invoking the half-angle rule), and 4) optokinetic nystagmus to investigate how units in the intermediate layers of the network accounted for different 3D constraints. Simultaneously, we simulated electrophysiological recordings (visual and motor tunings) and microstimulation experiments to quantify the reference frames of signals at each processing stage. We found a gradual retinal-to-intermediate-to-spatial feedforward transformation through the hidden layers. Our model is the first to describe the general 3D transformation for smooth pursuit mediated by eye- and head-dependent gain modulation. Based on several testable experimental predictions, our model provides a mechanism by which the brain could perform the 3D visuomotor transformation for smooth pursuit.
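The geometric core of this visuomotor transformation is that retinal motion must be rotated by the current 3D eye-in-head and head-in-space orientations to become spatially correct. A bare-bones sketch of that step (ignoring the velocity-dependent terms and half-angle rule handled in the paper; quaternion ordering follows SciPy's (x, y, z, w) convention, and the axis choice in the example is an assumption):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def retinal_to_spatial(retinal_motion, eye_in_head_q, head_in_space_q):
    """Express a retinal motion vector in space-fixed coordinates by applying
    the eye-in-head and then the head-in-space orientation."""
    eye = R.from_quat(eye_in_head_q)
    head = R.from_quat(head_in_space_q)
    return (head * eye).apply(retinal_motion)

# Example: horizontal retinal motion seen with the head rolled 30 degrees
# (assuming z is the line of sight); the spatial direction is tilted accordingly.
motion = np.array([1.0, 0.0, 0.0])
head_roll = R.from_euler('z', 30, degrees=True).as_quat()
no_rotation = R.identity().as_quat()
print(retinal_to_spatial(motion, no_rotation, head_roll))
```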
Affiliation(s)
- T Scott Murdison
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN)
- Guillaume Leclercq
- ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Philippe Lefèvre
- ICTEAM Institute and Institute of Neuroscience (IoNS), Université catholique de Louvain, Louvain-La-Neuve, Belgium
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Association for Canadian Neuroinformatics and Computational Neuroscience (CNCN)
10
Fiehler K, Wolf C, Klinghammer M, Blohm G. Integration of egocentric and allocentric information during memory-guided reaching to images of a natural environment. Front Hum Neurosci 2014; 8:636. [PMID: 25202252] [PMCID: PMC4141549] [DOI: 10.3389/fnhum.2014.00636]
Abstract
When interacting with our environment we generally make use of egocentric and allocentric object information by coding object positions relative to the observer or relative to the environment, respectively. Bayesian theories suggest that the brain integrates both sources of information optimally for perception and action. However, experimental evidence for egocentric and allocentric integration is sparse, and such integration has only been studied using abstract stimuli lacking ecological relevance. Here, we investigated the use of egocentric and allocentric information during memory-guided reaching to images of naturalistic scenes. Participants encoded a breakfast scene containing six objects on a table (local objects) and three objects in the environment (global objects). After a 2 s delay, a visual test scene reappeared for 1 s in which one local object was missing (the reach target) and either 1, 3, or 5 of the remaining local objects, or one of the global objects, were shifted to the left or to the right. The offset of the test scene prompted participants to reach to the target as precisely as possible. Only local objects served as potential reach targets and thus were task-relevant. When shifting objects, we predicted accurate reaching if participants used only egocentric coding of object position, and systematic shifts of reach endpoints if allocentric information was used for movement planning. We found that reaching movements were largely affected by allocentric shifts, showing an increase in endpoint errors in the direction of object shifts that grew with the number of local objects shifted. No effect occurred when only one local or one global object was shifted. Our findings suggest that allocentric cues are indeed used by the brain for memory-guided reaching towards targets in naturalistic visual scenes. Moreover, the integration of egocentric and allocentric object information seems to depend on the extent of changes in the scene.
Affiliation(s)
- Katja Fiehler
- Department of Experimental Psychology, Justus-Liebig-University Giessen, Germany
- Christian Wolf
- Department of Experimental Psychology, Justus-Liebig-University Giessen, Germany
- Mathias Klinghammer
- Department of Experimental Psychology, Justus-Liebig-University Giessen, Germany
- Gunnar Blohm
- Canadian Action and Perception Network (CAPnet), Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
11
Leclercq G, Blohm G, Lefèvre P. Accounting for direction and speed of eye motion in planning visually guided manual tracking. J Neurophysiol 2013; 110:1945-57. [DOI: 10.1152/jn.00130.2013]
Abstract
Accurate motor planning in a dynamic environment is a critical skill for humans because we are often required to react quickly and adequately to the visual motion of objects. Moreover, we are often in motion ourselves, and this complicates motor planning. Indeed, the retinal and spatial motions of an object are different because of the retinal motion component induced by self-motion. Many studies have investigated motion perception during smooth pursuit and concluded that eye velocity is partially taken into account by the brain. Here we investigate whether the eye velocity during ongoing smooth pursuit is taken into account for the planning of visually guided manual tracking. We had 10 human participants manually track a target while in steady-state smooth pursuit toward another target such that the difference between the retinal and spatial target motion directions could be large, depending on both the direction and the speed of the eye. We used a measure of initial arm movement direction to quantify whether motor planning occurred in retinal coordinates (not accounting for eye motion) or was spatially correct (incorporating eye velocity). Results showed that the eye velocity was nearly fully taken into account by the neuronal areas involved in the visuomotor velocity transformation (between 75% and 102%). In particular, these neuronal pathways accounted for the nonlinear effects due to the relative velocity between the target and the eye. In conclusion, the brain network transforming visual motion into a motor plan for manual tracking adequately uses extraretinal signals about eye velocity.
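The relation the planning problem hinges on, namely that retinal target motion is approximately the spatial target motion minus the motion induced by the pursuing eye, can be sketched in a planar simplification (the vector form and example values below are illustrative assumptions, not the study's full 3D geometry):

```python
import numpy as np

def retinal_motion(target_velocity_spatial, eye_velocity):
    """During smooth pursuit, the retinal slip of a target is approximately the
    spatial target velocity minus the eye velocity (small-angle, planar case)."""
    return np.asarray(target_velocity_spatial, dtype=float) - np.asarray(eye_velocity, dtype=float)

def spatially_correct_plan(retinal_velocity, eye_velocity):
    """A motor plan that fully accounts for eye motion adds the eye velocity back;
    a purely retinal plan would instead aim along retinal_velocity."""
    return np.asarray(retinal_velocity, dtype=float) + np.asarray(eye_velocity, dtype=float)

# Example: target moving upward at (0, 5) deg/s while the eye pursues rightward at (10, 0) deg/s.
eye = np.array([10.0, 0.0])
target = np.array([0.0, 5.0])
slip = retinal_motion(target, eye)                  # (-10, 5): oblique on the retina
print(slip, spatially_correct_plan(slip, eye))      # the corrected plan recovers (0, 5)
```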
Affiliation(s)
- Guillaume Leclercq
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Institute of Neuroscience (IoNS), Université catholique de Louvain, Brussels, Belgium
- Gunnar Blohm
- Centre for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada
- Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
- Philippe Lefèvre
- Institute of Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), Université catholique de Louvain, Louvain-la-Neuve, Belgium
- Institute of Neuroscience (IoNS), Université catholique de Louvain, Brussels, Belgium