1. Action Observation Facilitates Anticipatory Control of Grasp for Object Mass but not Weight Distribution. Neurosci Lett 2022; 775:136549. DOI: 10.1016/j.neulet.2022.136549
2. Does the brain encode the gaze of others as beams emitted by their eyes? Proc Natl Acad Sci U S A 2020; 117:20375-20376. PMID: 32843563. DOI: 10.1073/pnas.2012462117
3. What first drives visual attention during the recognition of object-directed actions? The role of kinematics and goal information. Atten Percept Psychophys 2020; 81:2400-2409. PMID: 31292941. DOI: 10.3758/s13414-019-01784-7
Abstract
The recognition of others' object-directed actions is known to involve decoding both the visual kinematics of the action and the action goal. Yet whether action recognition is first guided by the processing of visual kinematics or by a prediction about the actor's goal remains debated. To provide experimental evidence on this issue, the present study investigated whether visual attention is preferentially captured by visual kinematics or by action-goal information when processing others' actions. In a visual search task, participants were asked to find correct actions (e.g., drinking from a glass) among distractor actions. Distractor actions contained grip and/or goal violations and could therefore share the correct goal and/or the correct grip with the target. The time course of the fixation proportion on each distractor action was taken as an indicator of visual attention allocation. Results show that visual attention is first captured by the distractor action with a similar goal. The subsequent withdrawal of visual attention from this distractor suggests a later attentional capture by the distractor action with a similar grip. Overall, the results are in line with predictive approaches to action understanding, which assume that observers first make a prediction about the actor's goal before verifying this prediction against the visual kinematics of the action.
4. McMahon EG, Zheng CY, Pereira F, Gonzalez R, Ungerleider LG, Vaziri-Pashkam M. Subtle predictive movements reveal actions regardless of social context. J Vis 2020; 19:16. PMID: 31355865. PMCID: PMC6662941. DOI: 10.1167/19.7.16
Abstract
Humans have a remarkable ability to predict the actions of others. To address what information enables this prediction and how that information is modulated by social context, we used videos collected during an interactive reaching game. Two participants (an “initiator” and a “responder”) sat on either side of a plexiglass screen on which two targets were affixed. The initiator was directed to tap one of the two targets, and the responder had to either beat the initiator to the target (competition) or arrive at the same time (cooperation). In a psychophysics experiment, new observers predicted the direction of the initiators' reach from brief clips cut relative to the moment the initiator began reaching. A machine learning classifier performed the same task. Both the humans and the classifier were able to determine the direction of movement before finger lift-off in both social conditions. Further, using an information mapping technique, the relevant information was found to be distributed throughout the initiator's body in both social conditions. Our results indicate that we reveal our intentions during cooperation, in which communicating the future course of action is beneficial, and also during competition, despite the social motivation to reveal less information.
Affiliation(s)
- Emalie G McMahon, Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Charles Y Zheng, Machine Learning Team, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Francisco Pereira, Machine Learning Team, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Ray Gonzalez, Vision Laboratory, Department of Psychology, Harvard University, Cambridge, MA, USA
- Leslie G Ungerleider, Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
- Maryam Vaziri-Pashkam, Section on Neurocircuitry, Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, USA
5. Schneider TR, Buckingham G, Hermsdörfer J. Visual cues, expectations, and sensorimotor memories in the prediction and perception of object dynamics during manipulation. Exp Brain Res 2020; 238:395-409. PMID: 31932867. PMCID: PMC7007906. DOI: 10.1007/s00221-019-05711-y
Abstract
When we grasp and lift novel objects, we rely on visual cues and sensorimotor memories to predictively scale our finger forces and exert compensatory torques according to object properties. Recently, it was shown that object appearance, previous force-scaling errors, and previous torque-compensation errors strongly affect our percept. However, the influence of visual geometric cues on the perception of object torques and weights in a grasp-to-lift task is poorly understood. Moreover, little is known about how visual cues, prior expectations, sensory feedback, and sensorimotor memories are integrated for anticipatory torque control and object perception. Here, 12 young and 12 elderly participants repeatedly grasped and lifted an object while trying to prevent it from tilting. Before each trial, we randomly repositioned both the object handle, providing a geometric cue to the upcoming torque, and a hidden weight, adding an unforeseeable torque variation. Before lifting, subjects indicated their torque expectations, and after each lift they reported their experience of torque and weight. Mixed-effects multiple regression models showed that visual shape cues governed anticipatory torque compensation, whereas sensorimotor memories played less of a role. In contrast, the external torque and the compensation errors committed at lift-off mainly determined how object torques and weights were perceived. The modest effect of handle position differed between torque and weight perception. Explicit torque expectations were also correlated with anticipatory torque compensation and torque perception. Our main findings generalized across both age groups and suggest distinct weighting of inputs for action and perception according to their reliability.
Affiliation(s)
- Thomas Rudolf Schneider, Chair of Human Movement Science, Department of Sport and Health Sciences, Technical University of Munich, Georg-Brauchle-Ring 60/62, 80992 Munich, Germany
- Gavin Buckingham, Sport and Health Sciences, College of Life and Environmental Sciences, University of Exeter, Heavitree Road, Exeter EX1 2LU, UK
- Joachim Hermsdörfer, Chair of Human Movement Science, Department of Sport and Health Sciences, Technical University of Munich, Georg-Brauchle-Ring 60/62, 80992 Munich, Germany
6. Dynamic task observation: A gaze-mediated complement to traditional action observation treatment? Behav Brain Res 2019; 379:112351. PMID: 31726070. DOI: 10.1016/j.bbr.2019.112351
Abstract
Action observation elicits changes in primary motor cortex known as motor resonance, a phenomenon thought to underpin several functions, including our ability to understand and imitate others' actions. Motor resonance is modulated not only by the observer's motor expertise but also by their gaze behaviour. The aim of the present study was to investigate motor resonance and eye movements during observation of a dynamic goal-directed action, relative to an everyday one: a reach-grasp-lift (RGL) action, commonly used in action-observation-based neurorehabilitation protocols. Skilled and novice golfers watched videos of a golf swing and an RGL action while we recorded motor-evoked potentials (MEPs) from three forearm muscles; gaze behaviour was concurrently monitored. Corticospinal excitability increased during golf swing observation relative to baseline, but it was not modulated by expertise; no such changes were observed for the RGL task. MEP amplitudes were related to participants' gaze behaviour: in the RGL condition, target viewing was associated with lower MEP amplitudes; in the golf condition, MEP amplitudes were positively correlated with time spent looking at the effector or neighbouring regions. Viewing a dynamic action such as the golf swing may enhance action observation treatment, especially when concurrent physical practice is not possible.
7. Quesque F, Behrens F, Kret ME. Pupils say more than a thousand words: Pupil size reflects how observed actions are interpreted. Cognition 2019; 190:93-98. PMID: 31034971. DOI: 10.1016/j.cognition.2019.04.016
Abstract
Humans attend to others' facial expressions and body language to better understand their emotions and predict their goals and intentions. The eyes and their pupils reveal important social information. Because pupil size is beyond voluntary control yet reflects a range of cognitive and affective processes, pupils in principle have the potential to convey whether others' actions are interpreted correctly or not. Here, we measured pupil size while participants observed video clips showing reach-to-grasp arm movements. Expressors in the video clips were playing a board game and moved a dowel to a new position. Participants' task was to decide whether the dowel was repositioned with the intention of being followed by another move of the same expressor (personal intention) or whether the arm movement carried the implicit message that the expressor's turn was over (social intention). Replicating earlier findings, results showed that participants recognized expressors' intentions on the basis of their arm kinematics. Results further showed that participants' pupil size was larger when observing actions reflecting personal rather than social intentions. Most interestingly, before participants indicated how they interpreted the observed actions by pressing one of two keys (corresponding to the personal or social intention), their pupils had, within a split second, already given away how they interpreted the expressor's movement. In sum, this study underscores the importance of nonverbal behavior in helping social messages get across quickly. By revealing how actions are interpreted, pupils may provide additional feedback for effective social interactions.
Affiliation(s)
- François Quesque, University of Lille, CNRS, UMR 9193, SCALab - Sciences Cognitives et Sciences Affectives, F-59000 Lille, France
- Friederike Behrens, Leiden University, Cognitive Psychology Unit, Leiden, the Netherlands; Leiden Institute for Brain and Cognition (LIBC), the Netherlands
- Mariska E Kret, Leiden University, Cognitive Psychology Unit, Leiden, the Netherlands; Leiden Institute for Brain and Cognition (LIBC), the Netherlands
8. Bayani KY, Lawson RR, Levinson L, Mitchell S, Atawala N, Otwell M, Rickerson B, Wheaton LA. Implicit development of gaze strategies support motor improvements during action encoding training of prosthesis use. Neuropsychologia 2019; 127:75-83. PMID: 30807755. DOI: 10.1016/j.neuropsychologia.2019.02.015
Abstract
Background: Action observation training has been suggested to facilitate motor improvements in persons with neural injury. Previous studies have shown that, for persons with upper limb amputation, matched limb training, in which prosthesis users emulate each other, shows promise over mismatched training, in which a prosthesis user emulates the actions of a person with sound limbs (most commonly a therapist).
Objective: The mechanism underlying the benefit of matched limb training is unclear. Gaze strategies may reveal distinct patterns between matched and mismatched training that could explain the improvements in motor function seen with matched limb training.
Methods: Twenty persons with sound limbs were trained to use a prosthesis simulator through matched or mismatched limb training in a single session. Eye movements were recorded during the training phase, and kinematics were recorded as participants performed the task.
Results: Gaze patterns differed between the training groups. The mismatched group showed a higher probability of gaze on the path between the start and end of the action, while the matched group showed a significantly higher probability of focusing on elements of the action path and a trend toward focusing on the shoulders. Kinematics also revealed overall improvements in motor control for the matched group.
Conclusions: This study proposes a putative mechanism that may explain the improvements seen in matched limb training through shifting gaze strategies. Further work is needed to understand whether the implicit visual strategies observed during matched limb training encourage motor learning during functional training with prostheses.
Affiliation(s)
- Kristel Y Bayani, Regan R Lawson, Lauren Levinson, Sarah Mitchell, Neel Atawala, Malone Otwell, Beth Rickerson, Lewis A Wheaton: School of Biological Sciences, Georgia Institute of Technology, United States
9. Li Y, Wang Y, Cui H. Eye-hand coordination during flexible manual interception of an abruptly appearing, moving target. J Neurophysiol 2018; 119:221-234. DOI: 10.1152/jn.00476.2017
Abstract
As a vital skill in an evolving world, interception of moving objects relies on accurate prediction of target motion. In natural circumstances, active gaze shifts often accompany hand movements when exploring targets of interest, but how eye and hand movements are coordinated during manual interception, and how they depend on visual prediction, remain unclear. Here, we trained gaze-unrestrained monkeys to manually intercept targets appearing at random locations and moving circularly at random speeds. We found that well-trained animals were able to intercept the targets with adequate compensation for both sensory transmission and motor delays. Before interception, the animals' gaze followed the targets with adequate compensation for the sensory delay, but not for the extra target displacement occurring during the eye movements. Both hand and eye movements were modulated by target kinematics, and their reaction times were correlated. Moreover, retinal errors and reaching errors were correlated across different stages of reach execution. Our results reveal eye-hand coordination during manual interception, yet the eye and hand movements may show different levels of prediction depending on the task context.
New & Noteworthy: Here we studied the eye-hand coordination of monkeys during flexible manual interception of a moving target. Eye movements were untrained and not explicitly associated with reward. We found that the initial saccades toward the moving target adequately compensated for sensory transmission delays, but not for extra target displacement, whereas the reaching arm movements fully compensated for sensorimotor delays, suggesting that the mode of eye-hand coordination strongly depends on behavioral context.
Affiliation(s)
- Yuhui Li, Brain and Behavior Discovery Institute, Medical College of Georgia, Augusta University, Augusta, Georgia
- Yong Wang, Brain and Behavior Discovery Institute, Medical College of Georgia, Augusta University, Augusta, Georgia; State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- He Cui, Brain and Behavior Discovery Institute, Medical College of Georgia, Augusta University, Augusta, Georgia; CAS Key Laboratory of Primate Neurobiology, Shanghai, China; CAS Center for Excellence in Brain Science and Intelligent Technology, Shanghai, China; Institute of Neuroscience, Chinese Academy of Sciences, Shanghai, China
10.
11. Craighero L, Mele S, Zorzi V. An object-identity probability cueing paradigm during grasping observation: the facilitating effect is present only when the observed kinematics is suitable for the cued object. Front Psychol 2015; 6:1479. PMID: 26483732. PMCID: PMC4586326. DOI: 10.3389/fpsyg.2015.01479
Abstract
Electrophysiological and psychophysical data indicate that observing a grasp automatically orients attention toward the upcoming interaction between the actor's hand and the object. The aim of the present study was to clarify whether this effect facilitates the detection of an object graspable with the observed action, as compared to an ungraspable one. We ran an object-identity probability cueing experiment in which the two possible targets were of the same dimensions, but one of them presented sharp tips at one end while the other presented flat faces. At the beginning of each trial, the most probable target was briefly shown. After a variable interval, at the same position, the same (75%) or a different target (25%) was presented. Participants had to press a key in response to target appearance. Superimposed on the video showing cue and target, an agent was shown reaching for and grasping the target. The kinematics of the action was or was not suitable for grasping the cued target, according to the absence or presence of the sharp tips. Results showed that the response was modulated by the probability of target identity, but only when the observed kinematics was suitable for grasping the attended target. A further experiment clarified that the response modulation was never present when the superimposed video always showed the agent at rest. These findings are discussed in the light of the neurophysiological and psychophysical literature on the relationship between the motor system and the perception of objects and of others' actions. We conclude that predicting the mechanical events that arise from the interaction between the hand and the attended object underlies the capability to select a graspable object in space.
Affiliation(s)
- Laila Craighero, Section of Human Physiology, Department of Biomedical and Specialty Surgical Sciences, University of Ferrara, Ferrara, Italy
- Sonia Mele, Section of Human Physiology, Department of Biomedical and Specialty Surgical Sciences, University of Ferrara, Ferrara, Italy
- Valentina Zorzi, Section of Human Physiology, Department of Biomedical and Specialty Surgical Sciences, University of Ferrara, Ferrara, Italy
12. Ivaldi S, Anzalone SM, Rousseau W, Sigaud O, Chetouani M. Robot initiative in a team learning task increases the rhythm of interaction but not the perceived engagement. Front Neurorobot 2014; 8:5. PMID: 24596554. PMCID: PMC3925832. DOI: 10.3389/fnbot.2014.00005
Abstract
We hypothesize that a robot's initiative during a collaborative task with a human can influence the pace of interaction, the human's response to attention cues, and the perceived engagement. We propose an object learning experiment in which the human interacts in a natural way with the humanoid iCub. In a two-phase scenario, the human teaches the robot about the properties of some objects. We compare the effect of which partner initiates the task in the teaching phase (human or robot) on the rhythm of the interaction in the verification phase, and we measure the reaction time of the human gaze when responding to attention utterances of the robot. Our experiments show that when the robot initiates the learning task, the pace of interaction is higher and the reaction to attention cues is faster. Subjective evaluations suggest, however, that the robot's initiating role does not affect perceived engagement. Moreover, subjective and third-person evaluations of the interaction task suggest that the attentive mechanism we implemented in the humanoid robot iCub is able to arouse engagement and make the robot's behavior readable.
Affiliation(s)
- Serena Ivaldi, Salvatore M Anzalone, Woody Rousseau, Olivier Sigaud, Mohamed Chetouani: Sorbonne Université, UPMC Univ Paris 06, UMR 7222, Institut des Systèmes Intelligents et de Robotique, Paris, France; CNRS, UMR 7222, Institut des Systèmes Intelligents et de Robotique, Paris, France
13. Schneider WX. Selective visual processing across competition episodes: a theory of task-driven visual attention and working memory. Philos Trans R Soc Lond B Biol Sci 2013; 368:20130060. PMID: 24018722. PMCID: PMC3758203. DOI: 10.1098/rstb.2013.0060
Abstract
The goal of this review is to introduce a theory of task-driven visual attention and working memory (TRAM). Building on a specific biased competition model, the ‘theory of visual attention’ (TVA), and its neural interpretation (NTVA), TRAM introduces the following assumptions. First, selective visual processing over time is structured in competition episodes. Within an episode, that is, during its first two phases, a limited number of proto-objects are competitively encoded, modulated by the current task, in activation-based visual working memory (VWM). In processing phase 3, relevant VWM objects are transferred via short-term consolidation into passive VWM. Second, each time attentional priorities change (e.g., after an eye movement), a new competition episode is initiated. Third, if a phase 3 VWM process (e.g., short-term consolidation) is not finished when a new episode is called, a protective maintenance process allows its completion. After a VWM object change, the protective maintenance process is followed by an encapsulation of the VWM object, causing attentional resource costs in trailing competition episodes. Viewed from this perspective, a new explanation of key findings on the attentional blink is offered. Finally, a new suggestion is made as to how VWM items might interact with visual search processes.
Affiliation(s)
- Werner X Schneider, Department of Psychology, Neuro-Cognitive Psychology, Bielefeld University, PO Box 10 01 31, 33501 Bielefeld, Germany
14. Schneider WX, Einhäuser W, Horstmann G. Attentional selection in visual perception, memory and action: a quest for cross-domain integration. Philos Trans R Soc Lond B Biol Sci 2013; 368:20130053. PMID: 24018715. DOI: 10.1098/rstb.2013.0053
Abstract
For decades, the cognitive and neural sciences have benefitted greatly from a separation of mind and brain into distinct functional domains. The tremendous success of this approach notwithstanding, it is self-evident that such a view is incomplete. Goal-directed behaviour of an organism requires the joint functioning of perception, memory and sensorimotor control. A prime candidate for achieving integration across these functional domains is attentional processing. Consequently, this Theme Issue brings together studies of attentional selection from many fields, both experimental and theoretical, that are united in their quest to find overarching integrative principles of attention across perception, memory and action. In all domains, attention is understood as a combination of competition and priority control ('bias'), with the task as a decisive driving factor ensuring coherent goal-directed behaviour and cognition. Using vision as the predominant model system for attentional selection, many studies in this Theme Issue place special emphasis on eye movements as a selection process that is both a fundamental action and serves a key function in perception. The Theme Issue spans a wide range of methods, from measuring human behaviour in the real world to recordings of single neurons in the non-human primate brain. We firmly believe that combining such a breadth of approaches is necessary not only for attentional selection, but also to take the next decisive step in all of the cognitive and neural sciences: to understand cognition and behaviour beyond isolated domains.
Affiliation(s)
- Werner X Schneider, Center for Interdisciplinary Research (ZiF), Center of Excellence (CITEC), Bielefeld University, Bielefeld, Germany