1. Türker B, Musat EM, Chabani E, Fonteix-Galet A, Maranci JB, Wattiez N, Pouget P, Sitt J, Naccache L, Arnulf I, Oudiette D. Behavioral and brain responses to verbal stimuli reveal transient periods of cognitive integration of the external world during sleep. Nat Neurosci 2023; 26:1981-1993. [PMID: 37828228; PMCID: PMC10620087; DOI: 10.1038/s41593-023-01449-7]
Abstract
Sleep has long been considered as a state of behavioral disconnection from the environment, without reactivity to external stimuli. Here we questioned this 'sleep disconnection' dogma by directly investigating behavioral responsiveness in 49 napping participants (27 with narcolepsy and 22 healthy volunteers) engaged in a lexical decision task. Participants were instructed to frown or smile depending on the stimulus type. We found accurate behavioral responses, visible via contractions of the corrugator or zygomatic muscles, in most sleep stages in both groups (except slow-wave sleep in healthy volunteers). Across sleep stages, responses occurred more frequently when stimuli were presented during high cognitive states than during low cognitive states, as indexed by prestimulus electroencephalography. Our findings suggest that transient windows of reactivity to external stimuli exist during bona fide sleep, even in healthy individuals. Such windows of reactivity could pave the way for real-time communication with sleepers to probe sleep-related mental and cognitive processes.
Affiliation(s)
- Başak Türker: Sorbonne Université, Institut du Cerveau-Paris Brain Institute-ICM, INSERM, CNRS, Paris, France
- Esteban Munoz Musat: Sorbonne Université, Institut du Cerveau-Paris Brain Institute-ICM, INSERM, CNRS, Paris, France
- Emma Chabani: Sorbonne Université, Institut du Cerveau-Paris Brain Institute-ICM, INSERM, CNRS, Paris, France
- Jean-Baptiste Maranci: Sorbonne Université, Institut du Cerveau-Paris Brain Institute-ICM, INSERM, CNRS, Paris, France; AP-HP, Hôpital Pitié-Salpêtrière, Service des Pathologies du Sommeil, National Reference Centre for Narcolepsy, Paris, France
- Nicolas Wattiez: Sorbonne Université, INSERM, Neurophysiologie Respiratoire Expérimentale et Clinique, Paris, France
- Pierre Pouget: Sorbonne Université, Institut du Cerveau-Paris Brain Institute-ICM, INSERM, CNRS, Paris, France
- Jacobo Sitt: Sorbonne Université, Institut du Cerveau-Paris Brain Institute-ICM, INSERM, CNRS, Paris, France
- Lionel Naccache: Sorbonne Université, Institut du Cerveau-Paris Brain Institute-ICM, INSERM, CNRS, Paris, France; AP-HP, Hôpital Pitié-Salpêtrière, Service de Neurophysiologie Clinique, Paris, France
- Isabelle Arnulf: Sorbonne Université, Institut du Cerveau-Paris Brain Institute-ICM, INSERM, CNRS, Paris, France; AP-HP, Hôpital Pitié-Salpêtrière, Service des Pathologies du Sommeil, National Reference Centre for Narcolepsy, Paris, France
- Delphine Oudiette: Sorbonne Université, Institut du Cerveau-Paris Brain Institute-ICM, INSERM, CNRS, Paris, France; AP-HP, Hôpital Pitié-Salpêtrière, Service des Pathologies du Sommeil, National Reference Centre for Narcolepsy, Paris, France
2. Spatial shifts in swiping actions: the impact of “left” and “right” verbalizations. Exp Brain Res 2022; 240:1547-1556. [PMID: 35348839; PMCID: PMC9038887; DOI: 10.1007/s00221-022-06348-0]
Abstract
Movements are often modulated by the meaning of cue words. We explore the interaction between verbal and visual constraints during a movement by investigating whether spoken words during movement execution bias late movement control of swiping actions on a tablet when vision of the target is removed during the movement. Verbalization trials required participants to vocalize the spatial direction ‘LEFT’, ‘MIDDLE’, or ‘RIGHT’ of the active target, relative to the other presented targets. A late influence of semantics emerged on movement execution in verbalized trials, with action endpoints landing more in the direction of the spoken word than without verbalization. The emergence of the semantic effect as the movement progresses reflects the temporal unfolding of the visual and verbal constraints during the swiping action. Comparing our current results with a similar task using a variant verbalization, we also conclude that larger semantic content effects are found with spatial-direction than with numerical-magnitude verbalizations.
3. Jovanovic L, López-Moliner J, Mamassian P. Contrasting contributions of movement onset and duration to self-evaluation of sensorimotor timing performance. Eur J Neurosci 2021; 54:5092-5111. [PMID: 34196067; PMCID: PMC9291449; DOI: 10.1111/ejn.15378]
Abstract
Movement execution is not always optimal. Understanding how humans evaluate their own motor decisions can give us insights into their suboptimality. Here, we investigated how humans time the action of synchronizing an arm movement with a predictable visual event and how well they can evaluate the outcome of this action. On each trial, participants had to decide when to start (reaction time) and for how long to move (movement duration) to reach a target on time. After each trial, participants judged the confidence they had that their performance on that trial was better than average. We found that participants mostly varied their reaction time, keeping the average movement duration short and relatively constant across conditions. Interestingly, confidence judgements reflected deviations from the planned reaction time and were not related to planned movement duration. In two other experiments, we replicated these results in conditions where the contribution of sensory uncertainty was reduced. In contrast to confidence judgements, when asked to make an explicit estimation of their temporal error, participants' estimates were related in a similar manner to both reaction time and movement duration. In summary, humans control the timing of their actions primarily by adjusting the delay to initiate the action, and they estimate their confidence in their action from the difference between the planned and executed movement onset. Our results highlight the critical role of the internal model for the self‐evaluation of one's motor performance.
Affiliation(s)
- Ljubica Jovanovic: Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, PSL University, CNRS, Paris, France; School of Psychology, University of Nottingham, Nottingham, UK
- Joan López-Moliner: Vision and Control of Action (VISCA) Group, Department of Cognition, Development and Psychology of Education, Institut de Neurociències, Universitat de Barcelona, Barcelona, Catalonia, Spain
- Pascal Mamassian: Laboratoire des systèmes perceptifs, Département d'études cognitives, École normale supérieure, PSL University, CNRS, Paris, France
4.
Abstract
This chapter starts by reviewing the various interpretations of Bálint syndrome over time. We then develop a novel integrative view in which we propose that the various symptoms, historically reported and labeled by various authors, result from a core mislocalization deficit. This idea is in accordance with our previous proposal that the core deficit of Bálint syndrome is attentional (Pisella et al., 2009, 2013, 2017) since covert attention improves spatial resolution in visual periphery (Yeshurun and Carrasco, 1998); a deficit of covert attention would thus increase spatial uncertainty and thereby impair both visual object identification and visuomotor accuracy. In peripheral vision, we perceive the intrinsic characteristics of the perceptual elements surrounding us, but not their precise localization (Rosenholtz et al., 2012a,b), such that without covert attention we cannot organize them to their respective and recognizable objects; this explains why perceptual symptoms (simultanagnosia, neglect) could result from visual mislocalization. The visuomotor symptoms (optic ataxia) can be accounted for by both visual and proprioceptive mislocalizations in an oculocentric reference frame, leading to field and hand effects, respectively. This new pathophysiological account is presented along with a model of posterior parietal cortex organization in which the superior part is devoted to covert attention, while the right inferior part is involved in visual remapping. When the right inferior parietal cortex is damaged, additional representational mislocalizations across saccades worsen the clinical picture of peripheral mislocalizations due to an impairment of covert attention.
5. Ro T, Koenig L. Unconscious Touch Perception After Disruption of the Primary Somatosensory Cortex. Psychol Sci 2021; 32:549-557. [PMID: 33635728; DOI: 10.1177/0956797620970551]
Abstract
Brain damage or disruption to the primary visual cortex sometimes produces blindsight, a striking condition in which patients lose the ability to consciously detect visual information yet retain the ability to discriminate some attributes without awareness. Although there have been few demonstrations of somatosensory equivalents of blindsight, the lesions that produce "numbsense," in which patients can make accurate guesses about tactile information without awareness, have been rare and localized to different regions of the brain. Despite transient loss of tactile awareness in the contralateral hand after transcranial magnetic stimulation (TMS) of the primary somatosensory cortex but not TMS of a control site, 12 participants (six female) reliably performed at above-chance levels on a localization task. These results demonstrating TMS-induced numbsense implicate a parallel somatosensory pathway that processes the location of touch in the absence of awareness and highlight the importance of primary sensory cortices for conscious perception.
Affiliation(s)
- Tony Ro: Program in Cognitive Neuroscience, The Graduate Center, City University of New York; Program in Psychology, The Graduate Center, City University of New York; Program in Biology, The Graduate Center, City University of New York
- Lua Koenig: Program in Psychology, The Graduate Center, City University of New York
6. Danckert J, Striemer C, Rossetti Y. Blindsight. Handbook of Clinical Neurology 2021; 178:297-310. [PMID: 33832682; DOI: 10.1016/b978-0-12-821377-3.00016-7]
Abstract
For over a century, research has demonstrated that damage to primary visual cortex does not eliminate all capacity for visual processing in the brain. From Riddoch's (1917) early demonstration of intact motion processing for blind field stimuli, to the iconic work of Weiskrantz et al. (1974) showing reliable spatial localization, it is clear that secondary visual pathways that bypass V1 carry information to the visual brain that in turn influences behavior. In this chapter, we briefly outline the history and phenomena associated with blindsight, before discussing the nature of the secondary visual pathways that support residual visual processing in the absence of V1. We finish with some speculation as to the functional characteristics of these secondary pathways.
Affiliation(s)
- James Danckert: Department of Psychology, University of Waterloo, Waterloo, ON, Canada
- Yves Rossetti: Trajectoires, Centre de Recherche en Neurosciences de Lyon, Inserm, CNRS, Université Lyon 1, Bron, France; Plateforme "Mouvement et Handicap", Hôpital Henry-Gabrielle, Hospices Civils de Lyon, Saint-Genis-Laval, France
7. Olthuis R, van der Kamp J, Lemmink K, Caljouw S. The influence of locative expressions on context-dependency of endpoint control in aiming. Conscious Cogn 2020; 87:103056. [PMID: 33310651; DOI: 10.1016/j.concog.2020.103056]
Abstract
It has been claimed that increased reliance on context, or allocentric information, develops when aiming movements are more consciously monitored and/or controlled. Since verbalizing target features requires strong conscious monitoring, we expected an increased reliance on allocentric information when verbalizing a target label (i.e. target number) during movement execution. We examined swiping actions towards a global array of targets embedded in different local array configurations on a tablet under no-verbalization and verbalization conditions. The global and local array configurations allowed separation of contextual effects from any possible numerical magnitude biases triggered by calling out specific target numbers. The patterns of constant errors in the target direction were used to assess differences between conditions. Variation in the target context configuration systematically biased movement endpoints in both the no-verbalization and verbalization conditions. Ultimately, our results do not support the assertion that calling out target numbers during movement execution increases the context-dependency of targeted actions.
Affiliation(s)
- Raimey Olthuis: Center for Human Movement Sciences, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- John van der Kamp: Department of Human Movement Sciences, Faculty of Behavioral and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Koen Lemmink: Center for Human Movement Sciences, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Simone Caljouw: Center for Human Movement Sciences, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
8. Ilardi CR, Iavarone A, Villano I, Rapuano M, Ruggiero G, Iachini T, Chieffi S. Egocentric and allocentric spatial representations in a patient with Bálint-like syndrome: A single-case study. Cortex 2020; 135:10-16. [PMID: 33341593; DOI: 10.1016/j.cortex.2020.11.010]
Abstract
Previous studies suggested that egocentric and allocentric spatial representations are supported by neural networks in the occipito-parietal (dorsal) and occipito-temporal (ventral) streams, respectively. The present study aimed to explore the integrity of ego- and allo-centric spatial representations in a patient (GP) who presented bilateral occipito-parietal damage consistent with the picture of a Bálint-like syndrome. GP and healthy controls were asked to provide memory-based spatial judgments on triads of objects after a short (1.5 s) or long (5 s) delay. The results showed that GP's performance was selectively impaired in the Ego/1.5 s delay condition. As a whole, our findings suggest that GP's spared ventral stream could generate short- and long-term allocentric representations. Furthermore, the stored perceptual representation processed within the ventral stream might have been used to generate a long-term egocentric representation. Conversely, the generation of a short-term egocentric representation appeared to be selectively undermined by damage to the dorsal stream.
Affiliation(s)
- Ciro Rosario Ilardi: Department of Psychology, University of Campania "Luigi Vanvitelli", Caserta, Italy; Department of Experimental Medicine, University of Campania "Luigi Vanvitelli", Naples, Italy
- Ines Villano: Department of Experimental Medicine, University of Campania "Luigi Vanvitelli", Naples, Italy
- Mariachiara Rapuano: Laboratory of Cognitive Science and Immersive Virtual Reality, Department of Psychology, University of Campania "Luigi Vanvitelli", Caserta, Italy
- Gennaro Ruggiero: Laboratory of Cognitive Science and Immersive Virtual Reality, Department of Psychology, University of Campania "Luigi Vanvitelli", Caserta, Italy
- Tina Iachini: Laboratory of Cognitive Science and Immersive Virtual Reality, Department of Psychology, University of Campania "Luigi Vanvitelli", Caserta, Italy
- Sergio Chieffi: Department of Experimental Medicine, University of Campania "Luigi Vanvitelli", Naples, Italy
9. Horváth Á, Ragó A, Ferentzi E, Körmendi J, Köteles F. Short-term retention of proprioceptive information. Q J Exp Psychol (Hove) 2020; 73:2148-2157. [PMID: 32972307; PMCID: PMC7672777; DOI: 10.1177/1747021820957147]
Abstract
The Joint Position Reproduction (JPR) test, one of the most widely used measurements to estimate proprioceptive accuracy, requires the short-term storage of proprioceptive information. It has been suggested that the visuospatial sketchpad plays a fundamental role in the memorization of proprioceptive information. The current study aimed to investigate this assumption. To do so, we developed and used a novel JPR protocol to measure retention capacity with respect to sequences of different positions. Our goal was to develop the original task further to make it comparable with other widely used short-term memory measurements, in which memory capacity is determined by the number of items participants retain (memory span). We compared participants' (N=39) performance in this task to their results on the Corsi block-tapping task (capacity of the visuospatial sketchpad) and the Digit span task (capacity of the phonological loop). Proprioceptive memory capacity did not correlate with either spatial or verbal memory capacity. The exploratory analysis revealed that proprioceptive span correlated positively with performance when 5 joint positions had to be retained. Further associations were found with verbal span for 6 or 7 positions and with spatial span for 5 positions. Our findings do not support the idea that the visuospatial sketchpad plays a fundamental role in the storage of proprioceptive information. The independence of span measures indicates that proprioceptive information might be stored in a subsystem independent of the visuospatial sketchpad or phonological loop.
Affiliation(s)
- Áron Horváth: Doctoral School of Psychology, Eötvös Loránd University (ELTE), Budapest, Hungary; Institute of Health Promotion and Sport Sciences, Eötvös Loránd University (ELTE), Budapest, Hungary
- Anett Ragó: Institute of Psychology, Eötvös Loránd University (ELTE), Budapest, Hungary
- Eszter Ferentzi: Institute of Health Promotion and Sport Sciences, Eötvös Loránd University (ELTE), Budapest, Hungary
- János Körmendi: Institute of Health Promotion and Sport Sciences, Eötvös Loránd University (ELTE), Budapest, Hungary
- Ferenc Köteles: Institute of Health Promotion and Sport Sciences, Eötvös Loránd University (ELTE), Budapest, Hungary
10. Touchscreen Pointing and Swiping: The Effect of Background Cues and Target Visibility. Motor Control 2020; 24:422-434. [PMID: 32502971; DOI: 10.1123/mc.2019-0096]
Abstract
By assessing the precision of gestural interactions with touchscreen targets, the authors investigate how the type of gesture, target location, and scene visibility impact movement endpoints. Participants made visually and memory-guided pointing and swiping gestures with a stylus to targets located in a semicircle. Specific differences in aiming errors were identified between swiping and pointing. In particular, participants overshot the target more when swiping than when pointing and swiping endpoints showed a stronger bias toward the oblique than pointing gestures. As expected, the authors also found specific differences between conditions with and without delays. Overall, the authors observed an influence on movement execution from each of the three parameters studied and uncovered that the information used to guide movement appears to be gesture specific.
11. Karimpur H, Kurz J, Fiehler K. The role of perception and action on the use of allocentric information in a large-scale virtual environment. Exp Brain Res 2020; 238:1813-1826. [PMID: 32500297; PMCID: PMC7438369; DOI: 10.1007/s00221-020-05839-2]
Abstract
In everyday life, our brain constantly builds spatial representations of the objects surrounding us. Many studies have investigated the nature of these spatial representations. It is well established that we use allocentric information in real-time and memory-guided movements. Most studies relied on small-scale and static experiments, leaving it unclear whether similar paradigms yield the same results on a larger scale using dynamic objects. We created a virtual reality task that required participants to encode the landing position of a virtual ball thrown by an avatar. Encoding differed in the nature of the task in that it was either purely perceptual (“view where the ball landed while standing still”—Experiment 1) or involved an action (“intercept the ball with the foot just before it lands”—Experiment 2). After encoding, participants were asked to place a real ball at the remembered landing position in the virtual scene. In some trials, we subtly shifted either the thrower or the midfield line on a soccer field to manipulate allocentric coding of the ball’s landing position. In both experiments, we were able to replicate classic findings from small-scale experiments and to generalize these results to different encoding tasks (perception vs. action) and response modes (reaching vs. walking-and-placing). Moreover, we found that participants preferably encoded the ball relative to the thrower when they had to intercept the ball, suggesting that the use of allocentric information is determined by the encoding task by enhancing task-relevant allocentric information. Our findings indicate that results previously obtained from memory-guided reaching are not restricted to small-scale movements, but generalize to whole-body movements in large-scale dynamic scenes.
Affiliation(s)
- Harun Karimpur: Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
- Johannes Kurz: NemoLab-Neuromotor Behavior Laboratory, Justus Liebig University Giessen, Giessen, Germany
- Katja Fiehler: Experimental Psychology, Justus Liebig University Giessen, Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University Giessen, Giessen, Germany
12. Herman WX, Smith RE, Kronemer SI, Watsky RE, Chen WC, Gober LM, Touloumes GJ, Khosla M, Raja A, Horien CL, Morse EC, Botta KL, Hirsch LJ, Alkawadri R, Gerrard JL, Spencer DD, Blumenfeld H. A Switch and Wave of Neuronal Activity in the Cerebral Cortex During the First Second of Conscious Perception. Cereb Cortex 2020; 29:461-474. [PMID: 29194517; DOI: 10.1093/cercor/bhx327]
Abstract
Conscious perception occurs within less than 1 s. To study events on this time scale we used direct electrical recordings from the human cerebral cortex during a conscious visual perception task. Faces were presented at individually titrated visual threshold for 9 subjects while measuring broadband 40-115 Hz gamma power in a total of 1621 intracranial electrodes widely distributed in both hemispheres. Surface maps and k-means clustering analysis showed initial activation of visual cortex for both perceived and non-perceived stimuli. However, only stimuli reported as perceived then elicited a forward-sweeping wave of activity throughout the cerebral cortex accompanied by large-scale network switching. Specifically, a monophasic wave of broadband gamma activation moves through bilateral association cortex at a rate of approximately 150 mm/s and eventually reenters visual cortex for perceived but not for non-perceived stimuli. Meanwhile, the default mode network and the initial visual cortex and higher association cortex networks are switched off for the duration of conscious stimulus processing. Based on these findings, we propose a new "switch-and-wave" model for the processing of consciously perceived stimuli. These findings are important for understanding normal conscious perception and may also shed light on its vulnerability to disruption by brain disorders.
Affiliation(s)
- Wendy X Herman: Department of Neurology, Yale University School of Medicine, 333 Cedar Street, New Haven, CT, USA
- Rachel E Smith: Department of Neurology, Yale University School of Medicine, 333 Cedar Street, New Haven, CT, USA
- Sharif I Kronemer: Department of Neurology, Yale University School of Medicine, 333 Cedar Street, New Haven, CT, USA
- Rebecca E Watsky: Department of Neurology, Yale University School of Medicine, 333 Cedar Street, New Haven, CT, USA
- William C Chen: Department of Neurology, Yale University School of Medicine, 333 Cedar Street, New Haven, CT, USA
- Leah M Gober: Department of Neurology, Yale University School of Medicine, 333 Cedar Street, New Haven, CT, USA
- George J Touloumes: Department of Neurology, Yale University School of Medicine, 333 Cedar Street, New Haven, CT, USA
- Meenakshi Khosla: Department of Neurology, Yale University School of Medicine, 333 Cedar Street, New Haven, CT, USA
- Anusha Raja: Department of Neurology, Yale University School of Medicine, 333 Cedar Street, New Haven, CT, USA
- Corey L Horien: Department of Neurology, Yale University School of Medicine, 333 Cedar Street, New Haven, CT, USA
- Elliot C Morse: Department of Neurology, Yale University School of Medicine, 333 Cedar Street, New Haven, CT, USA
- Katherine L Botta: Department of Neurology, Yale University School of Medicine, 333 Cedar Street, New Haven, CT, USA
- Lawrence J Hirsch: Department of Neurology, Yale University School of Medicine, 333 Cedar Street, New Haven, CT, USA
- Rafeed Alkawadri: Department of Neurology, Yale University School of Medicine, 333 Cedar Street, New Haven, CT, USA
- Jason L Gerrard: Department of Neurosurgery, Yale University School of Medicine, 333 Cedar Street, New Haven, CT, USA
- Dennis D Spencer: Department of Neurosurgery, Yale University School of Medicine, 333 Cedar Street, New Haven, CT, USA
- Hal Blumenfeld: Departments of Neurology, Neurosurgery, and Neuroscience, Yale University School of Medicine, 333 Cedar Street, New Haven, CT, USA
13. Blouin J, Saradjian AH, Pialasse JP, Manson GA, Mouchnino L, Simoneau M. Two Neural Circuits to Point Towards Home Position After Passive Body Displacements. Front Neural Circuits 2019; 13:70. [PMID: 31736717; PMCID: PMC6831616; DOI: 10.3389/fncir.2019.00070]
Abstract
A challenge in motor control research is to understand the mechanisms underlying the transformation of sensory information into arm motor commands. Here, we investigated these transformation mechanisms for movements whose targets were defined by information arising from body rotations in the dark (i.e., idiothetic information). Immediately after being rotated, participants reproduced the amplitude of their perceived rotation using their arm (Experiment 1). The cortical activation during movement planning was analyzed using electroencephalography and source analyses. Task-related activities were found in regions of interest (ROIs) located in the prefrontal cortex (PFC), dorsal premotor cortex, dorsal region of the anterior cingulate cortex (ACC) and the sensorimotor cortex. Importantly, critical regions for the cognitive encoding of space did not show significant task-related activities. These results suggest that arm movements were planned using a sensorimotor type of spatial representation. However, when an 8 s delay was introduced between body rotation and the arm movement (Experiment 2), we found that areas involved in the cognitive encoding of space [e.g., ventral premotor cortex (vPM), rostral ACC, inferior and superior posterior parietal cortex (PPC)] showed task-related activities. Overall, our results suggest that the use of a cognitive type of representation for planning arm movement after body motion is necessary when relevant spatial information must be stored before triggering the movement.
Affiliation(s)
- Jean Blouin: Aix-Marseille Univ, CNRS, Laboratoire de Neurosciences Cognitives, Marseille, France
- Anahid H Saradjian: Aix-Marseille Univ, CNRS, Laboratoire de Neurosciences Cognitives, Marseille, France
- Gerome A Manson: Aix-Marseille Univ, CNRS, Laboratoire de Neurosciences Cognitives, Marseille, France; Centre for Motor Control, University of Toronto, Toronto, ON, Canada
- Laurence Mouchnino: Aix-Marseille Univ, CNRS, Laboratoire de Neurosciences Cognitives, Marseille, France
- Martin Simoneau: Faculté de Médecine, Département de Kinésiologie, Université Laval, Québec, QC, Canada; Centre Interdisciplinaire de Recherche en Réadaptation et Intégration Sociale (CIRRIS), Québec, QC, Canada
14. Chen Y, Crawford JD. Allocentric representations for target memory and reaching in human cortex. Ann N Y Acad Sci 2019; 1464:142-155. [PMID: 31621922; DOI: 10.1111/nyas.14261]
Abstract
The use of allocentric cues for movement guidance is complex because it involves the integration of visual targets and independent landmarks and the conversion of this information into egocentric commands for action. Here, we focus on the mechanisms for encoding reach targets relative to visual landmarks in humans. First, we consider the behavioral results suggesting that both of these cues influence target memory, but are then transformed, at the first opportunity, into egocentric commands for action. We then consider the cortical mechanisms for these behaviors. We discuss different allocentric versus egocentric mechanisms for coding of target directional selectivity in memory (inferior temporal gyrus versus superior occipital gyrus) and distinguish these mechanisms from parieto-frontal activation for planning egocentric direction of actual reach movements. Then, we consider where and how the former allocentric representations of remembered reach targets are converted into the latter egocentric plans. In particular, our recent neuroimaging study suggests that four areas in the parietal and frontal cortex (right precuneus, bilateral dorsal premotor cortex, and right presupplementary area) participate in this allo-to-ego conversion. Finally, we provide a functional overview describing how and why egocentric and landmark-centered representations are segregated early in the visual system, but then reintegrated in the parieto-frontal cortex for action.
Affiliation(s)
- Ying Chen: Center for Neuroscience Studies, Queen's University, Kingston, Ontario, Canada; Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada
- J Douglas Crawford: Canadian Action and Perception Network (CAPnet), Toronto, Ontario, Canada; Center for Vision Research, Vision: Science to Applications (VISTA) Program, and Departments of Psychology, Biology, and Kinesiology & Health Science, York University, Toronto, Ontario, Canada
15. Munion A, Butner J, Stefanucci J, Geuss M, Story TN. An Ecological Approach to Modeling Vision: Quantifying Form Perception Using the Circle Map Equation. Ecological Psychology 2019. [DOI: 10.1080/10407413.2019.1663704]
Affiliation(s)
- Jonathan Butner: Department of Psychology, University of Utah; US Army Research Laboratory, CAST
- T. N. Story: Department of Psychology, University of Utah
16. Guo LL, Patel N, Niemeier M. Emergent Synergistic Grasp-Like Behavior in a Visuomotor Joint Action Task: Evidence for Internal Forward Models as Building Blocks of Human Interactions. Front Hum Neurosci 2019; 13:37. [PMID: 30787873; PMCID: PMC6372946; DOI: 10.3389/fnhum.2019.00037]
Abstract
Central to the mechanistic understanding of the human mind is to clarify how cognitive functions arise from simpler sensory and motor functions. A longstanding assumption is that forward models used by sensorimotor control to anticipate actions also serve to incorporate other people's actions and intentions, and give rise to sensorimotor interactions between people, and even abstract forms of interactions. That is, forward models could aid core aspects of human social cognition. To test whether forward models can be used to coordinate interactions, here we measured the movements of pairs of participants in a novel joint action task. For the task they collaborated to lift an object, each of them using fingers of one hand to push against the object from opposite sides, just like a single person would use two hands to grasp the object bimanually. Perturbations of the object were applied randomly as they are known to impact grasp-specific movement components in common grasping tasks. We found that co-actors quickly learned to make grasp-like movements with grasp components that showed coordination on average based on action observation of peak deviation and velocity of their partner's trajectories. Our data suggest that co-actors adopted pre-existing bimanual grasp programs for their own body to use forward models of their partner's effectors. This is consistent with the long-held assumption that human higher-order cognitive functions may take advantage of sensorimotor forward models to plan social behavior. New and Noteworthy: Taking an approach of sensorimotor neuroscience, our work provides evidence for a long-held belief that the coordination of physical as well as abstract interactions between people originates from certain sensorimotor control processes that form mental representations of people's bodies and actions, called forward models. With a new joint action paradigm and several new analysis approaches we show that, indeed, people coordinate each other's interactions based on forward models and mutual action observation.
Affiliation(s)
- Lin Lawrence Guo: Department of Psychology, University of Toronto Scarborough, Scarborough, ON, Canada
- Namita Patel: Department of Psychology, University of Toronto Scarborough, Scarborough, ON, Canada
- Matthias Niemeier: Department of Psychology, University of Toronto Scarborough, Scarborough, ON, Canada; Centre for Vision Research, York University, Toronto, ON, Canada
17. de'Sperati C, Thornton IM. Motion prediction at low contrast. Vision Res 2018; 154:85-96. [PMID: 30471309; DOI: 10.1016/j.visres.2018.11.004]
Abstract
Accurate motion prediction is fundamental for survival. How does this reconcile with the well-known speed underestimation of low-contrast stimuli? Here we asked whether this contrast-dependent perceptual bias is retained in motion prediction under two different saccadic planning conditions: making a saccade to an occluded moving target, and real-time gaze interaction with multiple moving targets. In a first experiment, observers made a saccade to the mentally extrapolated position of a moving target (imagery condition). In a second experiment, observers had to prevent collisions among multiple moving targets by glancing at them through a gaze-contingent display or by hitting them with the touchpad cursor (interaction condition). In both experiments, target contrast was manipulated. We found that, whereas saccades to the imagined moving target were systematically biased by contrast, the gaze interaction performance, as measured by missed collisions, was generally unaffected - even though low-contrast targets looked slower. Interceptive actions increased at low contrast, but only when the gaze was used for interaction. Thus, perceptual speed underestimation transfers to saccades made to imagined low-contrast targets, without however necessarily being detrimental to effective performance when real-time interaction with multiple targets is required. This differential effect of stimulus contrast suggests that in complex dynamic conditions saccades are rather tolerant to visual speed biases.
Affiliation(s)
- Claudio de'Sperati: Faculty of Psychology, Laboratory of Action, Perception and Cognition, Vita-Salute San Raffaele University, via Olgettina 58, 20132 Milano, Italy; Experimental Psychology Unit, Division of Neuroscience, San Raffaele Scientific Institute, via Olgettina 60, 20132 Milano, Italy
- Ian M Thornton: Department of Cognitive Science, Faculty of Media and Knowledge Sciences, University of Malta, Msida MSD 2080, Malta
18. The grasping side of post-error slowing. Cognition 2018; 179:1-13. [DOI: 10.1016/j.cognition.2018.05.026]
19. De Freitas J, Alvarez GA. Your visual system provides all the information you need to make moral judgments about generic visual events. Cognition 2018; 178:133-146. [DOI: 10.1016/j.cognition.2018.05.017]
20. Chabanat E, Jacquin-Courtois S, Havé L, Kihoulou C, Tilikete C, Mauguière F, Rheims S, Rossetti Y. Can you guess the colour of this moving object? A dissociation between colour and motion in blindsight. Neuropsychologia 2018; 128:204-208. [PMID: 30102905; DOI: 10.1016/j.neuropsychologia.2018.08.006]
Abstract
Blindsight has been primarily and extensively studied by Lawrence Weiskrantz. Residual visual abilities following a hemispheric lesion leading to homonymous hemianopia encompass a variety of visual-perceptual and visuo-motor functions. Attention blindsight produces the most salient subjective experiences, especially for motion (Riddoch phenomenon). Action blindsight illustrates visuo-motor abilities despite the patients' feeling that they produce random movements. Perception blindsight seems to be the weakest residual function observed in blindsight, e.g. for wavelength sensitivity. Discriminating motion produced by isoluminant colours does not give rise to blindsight for motion, but the outcome of the reciprocal test is not known. Here we tested whether moving stimuli could give rise to colour discrimination in a patient with homonymous hemianopia. It was found that even though the patient exhibited nearly perfect performance on motion direction discrimination, his colour discrimination for the same moving stimulus remained at chance level. It is concluded that easily discriminated moving stimuli do not give rise to colour discrimination, and implications for the three levels of blindsight taxonomy are discussed.
Affiliation(s)
- E Chabanat: Inserm UMR-S 1028, CNRS UMR 5292, ImpAct, Centre de Recherche en Neurosciences de Lyon, France; Université de Lyon, Université Claude Bernard Lyon 1, France
- S Jacquin-Courtois: Inserm UMR-S 1028, CNRS UMR 5292, ImpAct, Centre de Recherche en Neurosciences de Lyon, France; Université de Lyon, Université Claude Bernard Lyon 1, France; Service de rééducation neurologique, Pavillon Bourret, Hôpital Henry-Gabrielle, Hospices Civils de Lyon, 20, route de Vourles, Saint-Genis-Laval, France; Plate-forme 'Mouvement et Handicap', Hôpital Henry-Gabrielle et Hôpital Neurologique Pierre Wertheimer, Hospices Civils de Lyon, 20, route de Vourles, Saint-Genis-Laval, France
- L Havé: Inserm UMR-S 1028, CNRS UMR 5292, ImpAct, Centre de Recherche en Neurosciences de Lyon, France; Université de Lyon, Université Claude Bernard Lyon 1, France
- C Kihoulou: Inserm UMR-S 1028, CNRS UMR 5292, ImpAct, Centre de Recherche en Neurosciences de Lyon, France
- C Tilikete: Inserm UMR-S 1028, CNRS UMR 5292, ImpAct, Centre de Recherche en Neurosciences de Lyon, France; Université de Lyon, Université Claude Bernard Lyon 1, France; Service de Neuro-Cognition et Neuro-Ophtalmologie, Hôpital Neurologique Pierre Wertheimer, Hospices Civils de Lyon, 59 boulevard Pinel, 69677 Bron Cedex, France
- F Mauguière: Université de Lyon, Université Claude Bernard Lyon 1, France; Département de Neurologie Fonctionnelle et Epileptologie, Hôpital Neurologique Pierre Wertheimer, Hospices Civils de Lyon, France; Inserm UMR-S 1028, CNRS UMR 5292, NeuroPain, Centre de Recherche en Neurosciences de Lyon, France
- S Rheims: Université de Lyon, Université Claude Bernard Lyon 1, France; Département de Neurologie Fonctionnelle et Epileptologie, Hôpital Neurologique Pierre Wertheimer, Hospices Civils de Lyon, France; Inserm UMR-S 1028, CNRS UMR 5292, TIGER, Centre de Recherche en Neurosciences de Lyon, France
- Y Rossetti: Inserm UMR-S 1028, CNRS UMR 5292, ImpAct, Centre de Recherche en Neurosciences de Lyon, France; Université de Lyon, Université Claude Bernard Lyon 1, France; Service de rééducation neurologique, Pavillon Bourret, Hôpital Henry-Gabrielle, Hospices Civils de Lyon, 20, route de Vourles, Saint-Genis-Laval, France; Plate-forme 'Mouvement et Handicap', Hôpital Henry-Gabrielle et Hôpital Neurologique Pierre Wertheimer, Hospices Civils de Lyon, 20, route de Vourles, Saint-Genis-Laval, France
21. Naccache L. Minimally conscious state or cortically mediated state? Brain 2018; 141:949-960. [PMID: 29206895; PMCID: PMC5888986; DOI: 10.1093/brain/awx324]
Abstract
Durable impairments of consciousness are currently classified into three main neurological categories: comatose state, vegetative state (also recently coined unresponsive wakefulness syndrome) and minimally conscious state. While the introduction of minimally conscious state, in 2002, was major progress in helping clinicians recognize complex non-reflexive behaviours in the absence of functional communication, it raises several problems. The most important issue related to minimally conscious state lies in its criteria: while the behavioural definition of minimally conscious state lacks any direct evidence of the patient's conscious content or conscious state, it includes the adjective 'conscious'. I discuss this major problem in this review and propose a novel interpretation of minimally conscious state: its criteria do not inform us about the potential residual consciousness of patients, but they do inform us with certainty about the presence of a cortically mediated state. Based on this constructively critical review, I suggest three proposals aimed at improving the way we describe the subjective and cognitive state of non-communicating patients. In particular, I present a tentative new classification of impairments of consciousness that combines behavioural evidence with functional brain imaging data, in order to probe residual conscious processes directly and univocally.
Affiliation(s)
- Lionel Naccache: AP-HP, Groupe hospitalier Pitié-Salpêtrière, Department of Neurology, 75013, Paris, France; AP-HP, Groupe hospitalier Pitié-Salpêtrière, Department of Neurophysiology, 75013, Paris, France; INSERM, U 1127, F-75013, Paris, France; Institut du Cerveau et de la Moelle épinière, ICM, PICNIC Lab, F-75013, Paris, France
22. Rossit S, Harvey M, Butler SH, Szymanek L, Morand S, Monaco S, McIntosh RD. Impaired peripheral reaching and on-line corrections in patient DF: Optic ataxia with visual form agnosia. Cortex 2018; 98:84-101. [DOI: 10.1016/j.cortex.2017.04.004]
23. Rise and fall of the two visual systems theory. Ann Phys Rehabil Med 2017; 60:130-140. [DOI: 10.1016/j.rehab.2017.02.002]
24. What is an affordance? 40 years later. Neurosci Biobehav Rev 2017; 77:403-417. [DOI: 10.1016/j.neubiorev.2017.04.014]
25. Rinsma T, van der Kamp J, Dicks M, Cañal-Bruland R. Nothing magical: pantomimed grasping is controlled by the ventral system. Exp Brain Res 2017; 235:1823-1833. [PMID: 28299409; PMCID: PMC5435791; DOI: 10.1007/s00221-016-4868-1]
Abstract
In a recent amendment to the two-visual-system model, it has been proposed that actions must result in tactile contact with the goal object for the dorsal system to become engaged (Whitwell et al., Neuropsychologia 55:41-50, 2014). The present study tested this addition by assessing the use of allocentric information in normal and pantomime actions. To this end, magicians, and participants who were inexperienced in performing pantomime actions made normal and pantomime grasps toward objects embedded in the Müller-Lyer illusion. During pantomime grasping, a grasp was made next to an object that was in full view (i.e., a displaced pantomime grasping task). The results showed that pantomime grasps took longer, were slower, and had smaller hand apertures than normal grasping. Most importantly, hand apertures were affected by the illusion during pantomime grasping but not in normal grasping, indicating that displaced pantomime grasping is based on allocentric information. This was true for participants without experience in performing pantomime grasps as well as for magicians with experience in pantomiming. The finding that the illusory bias is limited to pantomime grasping and persists with experience supports the conjecture that the normal engagement of the dorsal system's contribution requires tactile contact with a goal object. If no tactile contact is made, then movement control shifts toward the ventral system.
Affiliation(s)
- Thijs Rinsma: Research Institute MOVE Amsterdam, Faculty of Behavioural and Movement Sciences, VU University Amsterdam, Van der Boechorststraat 9, 1081 BT, Amsterdam, The Netherlands
- John van der Kamp: Research Institute MOVE Amsterdam, Faculty of Behavioural and Movement Sciences, VU University Amsterdam, Van der Boechorststraat 9, 1081 BT, Amsterdam, The Netherlands; Institute of Human Performance, University of Hong Kong, Hong Kong SAR, China
- Matt Dicks: Department of Sport and Exercise Science, University of Portsmouth, Portsmouth, UK
26. Olthuis R, Van Der Kamp J, Caljouw S. Verbalizations Affect Visuomotor Control in Hitting Objects to Distant Targets. Front Psychol 2017; 8:661. [PMID: 28496425; PMCID: PMC5406461; DOI: 10.3389/fpsyg.2017.00661]
Abstract
There is a long-standing proposal for the existence of two neuroanatomically and functionally separate visual systems: one supported by the dorsal pathway to control action and the second supported by the ventral pathway to handle explicit perceptual judgments. The dorsal pathway requires fast access to egocentric information, while the ventral pathway primarily requires allocentric information. Despite the evidence for functionally distinct systems, researchers have posited important interactions. This paper examines to what degree the interaction becomes more important when target identity, the perception of which is supported by the ventral stream, is verbalized during the execution of a target-directed far-aiming movement. In the experiment reported here, participants hit balls toward distant targets while concurrently making explicit perceptual judgments of target properties. The endpoint of a shaft served as the target, with conditions including illusory arrow fins at the endpoint. Participants verbalized the location of the target by comparing it to a reference line and calling out "closer" or "further" while propelling the ball to the target. The impact velocity at ball contact was compared for hits toward three shafts of lengths 94, 100, and 106 cm, with and without verbalizations and delays. It was observed that the meaning of the expressed words modulated movement execution when the verbalizations were consistent with the action characteristics. This effect of semantic content was evident regardless of target visibility during movement execution, demonstrating it was not restricted to movements that rely on visual memory. In addition to a direct effect of semantic content, we anticipated an indirect effect of verbalization to result in action shifting toward the use of context-dependent allocentric information. This would result in an illusion bias on the impact velocity when the target is embedded in a Müller-Lyer configuration. We observed a ubiquitous effect of illusory context on movement execution, and not only when verbalizations were made. We suggest that the current experimental design, with a far-aiming task in which most conditions required reporting or retaining spatial characteristics of targets for action over time, may have elicited a strong reliance on allocentric information to guide action.
Affiliation(s)
- Raimey Olthuis: Center for Human Movement Sciences, University Medical Centre Groningen, University of Groningen, Groningen, Netherlands
- John Van Der Kamp: MOVE Research Institute Amsterdam, Faculty of Behavioural and Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Simone Caljouw: Center for Human Movement Sciences, University Medical Centre Groningen, University of Groningen, Groningen, Netherlands
27. Frederick JA, Heim AS, Dunn KN, Powers CD, Klein CJ. Generalization of skills between operant control and discrimination of EEG alpha. Conscious Cogn 2016; 45:226-234. [PMID: 27662584; DOI: 10.1016/j.concog.2016.09.009]
Affiliation(s)
- Jon A Frederick: Department of Psychology, St. Cloud State University, 720 4th Avenue South, St. Cloud, MN 56301-4498, USA
- Andrew S Heim: Department of Psychology, Middle Tennessee State University, Box 87, 1301 E. Main St., Murfreesboro, TN 37132, USA
- Kelli N Dunn: Department of Psychology, Middle Tennessee State University, Box 87, 1301 E. Main St., Murfreesboro, TN 37132, USA
- Cynthia D Powers: Elite Behavior Analysis, 6116 Shallowford Road, STE 201, Chattanooga, TN 37421, USA
- Casey J Klein: Department of Psychology, Middle Tennessee State University, Box 87, 1301 E. Main St., Murfreesboro, TN 37132, USA
28. The Contribution of the Cerebellum in the Hierarchical Development of the Self. The Cerebellum 2016; 14:711-721. [PMID: 25940545; DOI: 10.1007/s12311-015-0675-7]
Abstract
What distinguishes human beings from other living organisms is that a human perceives himself as a "self". The self develops hierarchically in a multi-layered process, which is based on the evolutionary maturation of the nervous system and is patterned according to the rules and demands of the external world. Many researchers have attempted to explain the different aspects of the self, as well as the related neural substrates. In this paper, we first review the previously proposed ideas regarding the neurobiology of the self. We then suggest a new hypothesis regarding the hierarchical self, which proposes that the self develops in three stages: subjective, objective, and reflective selves. In the second part, we attempt to answer the question "Why do we need a self?" We therefore explain that different parts of the self developed in an effort to identify stability in space, stability against constantly changing objects, and stability against changing cognitions. Finally, we discuss the role of the cerebellum as the neural substrate for the self.
Collapse
|
29
|
Rohaut B, Alario FX, Meadow J, Cohen L, Naccache L. Unconscious semantic processing of polysemous words is not automatic. Neurosci Conscious 2016; 2016:niw010. [PMID: 30109129 PMCID: PMC6084553 DOI: 10.1093/nc/niw010] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2015] [Revised: 06/08/2016] [Accepted: 06/13/2016] [Indexed: 11/15/2022] Open
Abstract
Semantic processing of visually presented words can be identified from both behavioral and neurophysiological evidence. One of the major discoveries of recent decades is the demonstration that these signatures of semantic processing, initially observed for consciously perceived words, can also be detected for masked words inaccessible to conscious report. In this context, the distinction between conscious and unconscious verbal semantic processing constitutes a challenging scientific issue. A prominent view held that while conscious representations are subject to executive control, unconscious ones operate automatically in a modular way, independent of control and top-down influences. Recent findings challenged this view by revealing that endogenous attention and task-setting can have a strong influence on unconscious processing. However, one of the major arguments supporting the automaticity of unconscious semantic processing still stands, stemming from a seminal observation about polysemous words reported by Marcel in 1980. In the present study, we reexamined this evidence. We present a combination of behavioral and event-related potential (ERP) results that refute this view by showing that the current conscious semantic context has a major and similar influence on the semantic processing of both visible and masked polysemous words. In a classical lexical decision task, a polysemous word was preceded by a word that defined the current semantic context and was followed by a word/pseudoword target. Crucially, the context was associated with only one of the two meanings of the polysemous word. Behavioral and electrophysiological evidence of semantic priming of target words by masked polysemous words was strongly dependent on the conscious context. Moreover, we describe a new type of influence related to the response code used to answer for target words in the lexical decision task: unconscious semantic priming constrained by the conscious context was present in both behavior and ERPs exclusively when right-handed subjects were instructed to respond to words with their right hand. The strong and respective influences of conscious context and response code on the semantic processing of masked polysemous words demonstrate that unconscious verbal semantic representations are not automatic.
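As a hedged illustration of how such a masked priming effect is commonly quantified (this is not the study's analysis pipeline, and all numbers are made up), one can compare per-subject lexical decision response times for targets preceded by context-congruent versus context-incongruent masked primes:

```python
# Simulated per-subject mean RTs (ms); the priming effect is the RT cost of an
# incongruent prime, tested with a paired t-test across subjects.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 20
rt_congruent = rng.normal(620, 40, n_subjects)
rt_incongruent = rt_congruent + rng.normal(15, 10, n_subjects)

priming = rt_incongruent - rt_congruent
t, p = stats.ttest_rel(rt_incongruent, rt_congruent)
print(f"mean priming effect = {priming.mean():.1f} ms, "
      f"t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")
```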
Collapse
Affiliation(s)
- Benjamin Rohaut
- Department of Neurology, AP-HP, Groupe Hospitalier Pitié-Salpêtrière, Paris, France.,INSERM, U 1127, Paris F-75013, France.,Institut Du Cerveau Et De La Moelle Épinière, ICM, PICNIC Lab, Paris F-75013, France.,Faculté De Médecine Pitié-Salpêtrière, Sorbonne Universités, UPMC Univ Paris 06, Paris, France
| | | | - Jacqueline Meadow
- INSERM, U 1127, Paris F-75013, France.,Institut Du Cerveau Et De La Moelle Épinière, ICM, PICNIC Lab, Paris F-75013, France
| | - Laurent Cohen
- Department of Neurology, AP-HP, Groupe Hospitalier Pitié-Salpêtrière, Paris, France.,INSERM, U 1127, Paris F-75013, France.,Institut Du Cerveau Et De La Moelle Épinière, ICM, PICNIC Lab, Paris F-75013, France.,Faculté De Médecine Pitié-Salpêtrière, Sorbonne Universités, UPMC Univ Paris 06, Paris, France
| | - Lionel Naccache
- Department of Neurology, AP-HP, Groupe Hospitalier Pitié-Salpêtrière, Paris, France.,INSERM, U 1127, Paris F-75013, France.,Institut Du Cerveau Et De La Moelle Épinière, ICM, PICNIC Lab, Paris F-75013, France.,Faculté De Médecine Pitié-Salpêtrière, Sorbonne Universités, UPMC Univ Paris 06, Paris, France.,Department of Neurophysiology, AP-HP, Groupe Hospitalier Pitié-Salpêtrière, Paris, France
| |
Collapse
|
30
|
Okamoto S, Wiertlewski M, Hayward V. Anticipatory Vibrotactile Cueing Facilitates Grip Force Adjustment during Perturbative Loading. IEEE TRANSACTIONS ON HAPTICS 2016; 9:233-242. [PMID: 26887013 DOI: 10.1109/toh.2016.2526613] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
Grip force applied to an object held between the thumb and index finger is automatically and unconsciously adjusted upon perception of an external disturbance to the object. Typically, this adjustment occurs within approximately 100 ms. Here, we investigated the effect of anticipatory vibrotactile cues delivered prior to a perturbative force, which the central nervous system may use for rapid grip re-stabilization. We asked participants to grip and hold an instrumented, actuated handle between the thumb and index finger. Under computer control, the handle could suddenly be pulled away from a static grip and could independently provide vibration to the gripping fingers. The mean latency of corrective motor action was 139 ms. When vibrotactile stimulation was applied 50 ms before application of the tractive force, the latency was reduced to 117 ms, whereas the mean latency of the conscious response to vibrotactile stimuli alone was 229 ms. This suggests that vibrotactile stimulation can influence reflex-like actions. We also examined the effects of anticipatory cues using a set of perturbative loads with different rise rates. As expected, facilitation of grip force adjustment was observed for moderate loads. In contrast, anticipatory cues had no significant effect on rapid loads that evoked an adjustment within 60-80 ms, which approaches the minimum latency of human grip adjustment. Understanding the facilitative effects of anticipatory cues on human reactive grip can aid the development of human-machine interfaces that enhance human behavior.
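One common way to estimate such an adjustment latency is to find the first post-perturbation sample at which the rate of change of grip force exceeds a threshold. The sketch below uses simulated data and an arbitrary threshold; it is an assumption-laden illustration, not the authors' method:

```python
# Simulate a grip-force trace with a corrective ramp ~130 ms after perturbation
# onset, then estimate the latency as the first threshold crossing of dF/dt.
import numpy as np

fs = 1000.0                              # sampling rate (Hz), assumed
t = np.arange(0.0, 0.5, 1.0 / fs)        # 500 ms of data
onset = 0.1                              # perturbation onset (s)

force = 2.0 + np.where(t > onset + 0.13, (t - onset - 0.13) * 20.0, 0.0)
force += np.random.default_rng(2).normal(0.0, 0.01, t.size)  # sensor noise

rate = np.gradient(force, 1.0 / fs)      # grip-force rate (N/s)
crossed = (rate > 2.0) & (t > onset)     # 2 N/s threshold, illustrative
latency_ms = (t[np.argmax(crossed)] - onset) * 1000.0
print(f"estimated adjustment latency: {latency_ms:.0f} ms")
```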
Collapse
|
31
|
Granek JA, Sergio LE. Evidence for distinct brain networks in the control of rule-based motor behavior. J Neurophysiol 2015; 114:1298-309. [PMID: 26133796 DOI: 10.1152/jn.00233.2014] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2014] [Accepted: 06/30/2015] [Indexed: 11/22/2022] Open
Abstract
Reach guidance when the spatial location of the viewed target and the hand movement are incongruent (i.e., decoupled) necessitates the use of explicit cognitive rules (strategic control) or implicit recalibration of gaze and limb position (sensorimotor recalibration). In a patient with optic ataxia (OA) and bilateral superior parietal lobule damage, we recently demonstrated an increased reliance on strategic control when the patient performed a decoupled reach (Granek JA, Pisella L, Stemberger J, Vighetto A, Rossetti Y, Sergio LE. PLoS One 8: e86138, 2013). To understand the fundamental mechanisms of decoupled visuomotor control more generally, and to test more specifically whether we could distinguish these two modes of movement control, we tested healthy participants in a cognitively demanding dual task. Participants continuously counted backward while simultaneously reaching toward horizontal (left or right) or diagonal (equivalent to top-left or top-right) targets with either veridical or rotated (90°) cursor feedback. By increasing the overall neural load and selectively compromising potentially overlapping neural circuits responsible for strategic control, the complex dual task served as a noninvasive means of disrupting the integration of a cognitive rule into a motor action. Complementing our previous results in patients with optic ataxia, here the dual task led to greater performance deficits during movements that required an explicit rule, implying a selective disruption of strategic control in decoupled reaching. Our results suggest that distinct neural processing is required to control these different types of reaching because, when the current results and the previous patient results are considered together, the two classes of movement can be differentiated according to the type of interference.
Collapse
Affiliation(s)
- Joshua A Granek
- School of Kinesiology and Health Science, Centre for Vision Research, York University, Toronto, Ontario, Canada
| | - Lauren E Sergio
- School of Kinesiology and Health Science, Centre for Vision Research, York University, Toronto, Ontario, Canada
| |
Collapse
|
32
|
Salti M, Monto S, Charles L, King JR, Parkkonen L, Dehaene S. Distinct cortical codes and temporal dynamics for conscious and unconscious percepts. eLife 2015; 4. [PMID: 25997100 PMCID: PMC4467230 DOI: 10.7554/elife.05652] [Citation(s) in RCA: 72] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2014] [Accepted: 05/20/2015] [Indexed: 12/24/2022] Open
Abstract
The neural correlates of consciousness are typically sought by comparing the overall brain responses to perceived and unperceived stimuli. However, this comparison may be contaminated by non-specific attention, alerting, performance, and reporting confounds. Here, we pursue a novel approach, tracking the neuronal coding of consciously and unconsciously perceived contents while keeping behavior identical (blindsight). EEG and MEG were recorded while participants reported the spatial location and visibility of a briefly presented target. Multivariate pattern analysis demonstrated that considerable information about spatial location traverses the cortex on blindsight trials, but that starting ≈270 ms post-onset, information unique to consciously perceived stimuli emerges in superior parietal and superior frontal regions. Conscious access appears characterized by the entry of the perceived stimulus into a series of additional brain processes, each restricted in time, while the failure of conscious access results in the breaking of this chain and a subsequent slow decay of the lingering unconscious activity. DOI:http://dx.doi.org/10.7554/eLife.05652.001 Our senses constantly receive information from the world around us, but we consciously perceive only a small portion of it. Nonetheless, even stimuli that are not consciously perceived are registered in our brain and influence our behavior. This is known as unconscious perception. Researchers disagree about how brain activity differs during conscious and unconscious perception. Some think that both consciously and unconsciously perceived objects are processed in the same way in the brain, but that the brain is more active during conscious perception. Others think that different neurons process the information in different types of perception. Salti et al. have now investigated this issue. While participants' brain activity was recorded, a line was briefly presented in one of eight possible locations on a screen. The line was masked so that it would be consciously perceived in roughly half of the presentations. Participants had to report the location of the line and then say whether they had seen it or had merely guessed its location. Even when they reported that they were guessing, participants identified the location of the line better than chance, indicating unconscious perception on 'guess' trials. This enabled Salti et al. to compare how the brain encodes consciously and unconsciously perceived stimuli. Unlike previous studies in which the brain activity associated with 'seen' and 'unseen' stimuli was compared, Salti et al. used a different approach to extract the neural activity underlying consciousness. A classifying algorithm was trained on a subset of the data to recognize, from the recorded brain activity, where on the screen a line had appeared. Applying this algorithm to the remaining data revealed the dynamics of stimulus encoding. Consciously and unconsciously perceived stimuli are encoded by the same neural responses for about a quarter of a second. From this point on, consciously perceived stimuli benefit from a series of additional brain processes, each restricted in time. For unconsciously perceived stimuli, this chain of processing breaks and a slow decay of encoding is observed. Salti et al. therefore conclude that conscious perception is represented differently from unconscious perception in the brain and produces more extensive and structured brain activity. Future work will focus on understanding these differences in neural coding and their contribution to the interplay between conscious and unconscious perception. DOI:http://dx.doi.org/10.7554/eLife.05652.002
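For readers unfamiliar with multivariate pattern analysis, the following is a rough sketch of time-resolved decoding in the spirit described above. It is not the authors' code: the data are random stand-ins with a plausible shape, and the classifier choice is only one common option:

```python
# Train a cross-validated classifier at each time sample to predict stimulus
# location (8 classes) from the pattern across sensors; above-chance accuracy
# at a given latency indicates that location information is present then.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_trials, n_sensors, n_times = 160, 64, 50
X = rng.normal(size=(n_trials, n_sensors, n_times))  # trials x sensors x time
y = rng.integers(0, 8, n_trials)                      # eight possible locations

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = np.array([
    cross_val_score(clf, X[:, :, ti], y, cv=5).mean() for ti in range(n_times)
])
print("peak decoding accuracy:", accuracy.max())      # chance level is 1/8 here
```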
Collapse
Affiliation(s)
- Moti Salti
- Cognitive Neuroimaging Unit, Institut National de la Santé et de la Recherche Médicale, Gif sur Yvette, France
| | - Simo Monto
- Cognitive Neuroimaging Unit, Institut National de la Santé et de la Recherche Médicale, Gif sur Yvette, France
| | - Lucie Charles
- Cognitive Neuroimaging Unit, Institut National de la Santé et de la Recherche Médicale, Gif sur Yvette, France
| | - Jean-Remi King
- Cognitive Neuroimaging Unit, Institut National de la Santé et de la Recherche Médicale, Gif sur Yvette, France
| | - Lauri Parkkonen
- Cognitive Neuroimaging Unit, Institut National de la Santé et de la Recherche Médicale, Gif sur Yvette, France
| | - Stanislas Dehaene
- Cognitive Neuroimaging Unit, Institut National de la Santé et de la Recherche Médicale, Gif sur Yvette, France
| |
Collapse
|
33
|
Color perception is impaired in baseball batters while performing an interceptive action. Atten Percept Psychophys 2015; 77:2074-81. [DOI: 10.3758/s13414-015-0906-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
34
|
Camors D, Jouffrais C, Cottereau BR, Durand JB. Allocentric coding: spatial range and combination rules. Vision Res 2015; 109:87-98. [PMID: 25749676 DOI: 10.1016/j.visres.2015.02.018] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2014] [Revised: 02/23/2015] [Accepted: 02/24/2015] [Indexed: 11/18/2022]
Abstract
When a visual target is presented with neighboring landmarks, its location can be determined both relative to the self (egocentric coding) and relative to these landmarks (allocentric coding). In the present study, we investigated (1) how allocentric coding depends on the distance between the targets and their surrounding landmarks (i.e., the spatial range) and (2) how allocentric and egocentric coding interact with each other across target-landmark distances (i.e., the combination rules). Subjects performed a memory-based pointing task toward previously gazed targets briefly superimposed (200 ms) on background images of cluttered city landscapes. A variable portion of the images was occluded in order to control the distance between the targets and the closest potential landmarks within those images. The pointing responses were performed after large saccades and the reappearance of the images at their initial location. However, in some trials, the images' elements were slightly shifted (±3°) in order to introduce a subliminal conflict between the allocentric and egocentric reference frames. The influence of allocentric coding on the pointing responses was found to decrease with increasing target-landmark distance, although it remained significant even at the largest distances (≥10°). Interestingly, both the decreasing influence of allocentric coding and the concomitant increase in pointing response variability were well captured by a Bayesian model in which the weighted combination of allocentric and egocentric cues is governed by a coupling prior.
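A minimal numerical sketch of reliability-weighted cue combination may help fix ideas. Note that the study's Bayesian model uses a coupling prior rather than mandatory fusion, so the computation below is only the fully fused limiting case, with illustrative numbers:

```python
# Precision-weighted (maximum-likelihood) fusion of an egocentric and an
# allocentric estimate of target position; the noisier cue gets less weight.
import numpy as np

ego_estimate, ego_sigma = 10.0, 1.5    # degrees; egocentric cue and its noise
allo_estimate, allo_sigma = 12.0, 3.0  # allocentric cue, noisier here

w_ego = (1 / ego_sigma**2) / (1 / ego_sigma**2 + 1 / allo_sigma**2)
w_allo = 1.0 - w_ego
combined = w_ego * ego_estimate + w_allo * allo_estimate
combined_sigma = np.sqrt(1.0 / (1 / ego_sigma**2 + 1 / allo_sigma**2))

print(f"weights: ego = {w_ego:.2f}, allo = {w_allo:.2f}")
print(f"combined estimate = {combined:.2f} deg, sd = {combined_sigma:.2f} deg")
```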
Collapse
Affiliation(s)
- D Camors
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; CNRS, CerCo, Toulouse, France; Université de Toulouse, IRIT, Toulouse, France; CNRS, IRIT, Toulouse, France
| | - C Jouffrais
- Université de Toulouse, IRIT, Toulouse, France; CNRS, IRIT, Toulouse, France
| | - B R Cottereau
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; CNRS, CerCo, Toulouse, France
| | - J B Durand
- Université de Toulouse, Centre de Recherche Cerveau et Cognition, Toulouse, France; CNRS, CerCo, Toulouse, France.
| |
Collapse
|
35
|
Zimmermann E, Morrone MC, Burr DC. Buildup of spatial information over time and across eye-movements. Behav Brain Res 2014; 275:281-7. [PMID: 25224817 DOI: 10.1016/j.bbr.2014.09.013] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2014] [Revised: 09/04/2014] [Accepted: 09/07/2014] [Indexed: 11/27/2022]
Abstract
To interact rapidly and effectively with our environment, our brain needs access to a neural representation of the spatial layout of the external world. However, the construction of such a map poses major challenges, as the images on our retinae depend on where the eyes are looking, and shift each time we move our eyes, head, and body to explore the world. Research from many laboratories, including our own, suggests that the visual system does compute spatial maps that are anchored to real-world coordinates. However, the construction of these maps takes time (up to 500 ms) and attentional resources. We discuss research investigating how retinotopic reference frames are transformed into spatiotopic reference frames, and how this transformation takes time to complete. These results have implications for theories about visual space coordinates and particularly for the current debate about the existence of spatiotopic representations.
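As a toy illustration of the reference-frame transformation discussed above (an assumption-laden sketch, not the paper's model), a spatiotopic target location can be obtained by adding the current gaze position to the retinotopic position, and the retinotopic position must be updated whenever the eyes move:

```python
# Spatiotopic = gaze position + retinotopic position (all in screen degrees).
import numpy as np

gaze_before = np.array([-5.0, 0.0])    # gaze direction before the saccade
retinal_before = np.array([8.0, 2.0])  # target location relative to gaze
spatiotopic = gaze_before + retinal_before   # world-anchored target location

gaze_after = np.array([3.0, 0.0])      # gaze after an 8 deg rightward saccade
retinal_after = spatiotopic - gaze_after     # predicted post-saccadic retinal position
print("spatiotopic:", spatiotopic, "retinotopic after saccade:", retinal_after)
```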
Collapse
Affiliation(s)
- Eckart Zimmermann
- Psychology Department, University of Florence, Italy, Neuroscience Institute, National Research Council, Pisa, Italy.
| | - M Concetta Morrone
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, via San Zeno 31, 56123 Pisa, Italy; Scientific Institute Stella Maris (IRCSS), viale del Tirreno 331, 56018 Calambrone, Pisa, Italy
| | - David C Burr
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, via San Salvi 12, 50135 Florence, Italy; Institute of Neuroscience CNR, via Moruzzi 1, 56124 Pisa, Italy
| |
Collapse
|
36
|
Khanafer S, Cressman EK. Sensory integration during reaching: the effects of manipulating visual target availability. Exp Brain Res 2014; 232:3833-46. [DOI: 10.1007/s00221-014-4064-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2014] [Accepted: 08/01/2014] [Indexed: 11/24/2022]
|
37
|
Abstract
This research is an investigation of whether consciousness, one's ongoing experience, influences one's behavior and, if so, how. Analysis of the components, structure, properties, and temporal sequences of consciousness has established that (1) contrary to one's intuitive understanding, consciousness does not have an active, executive role in determining behavior; (2) consciousness does have a biological function; and (3) consciousness is solely information in various forms. Consciousness is associated with a flexible response mechanism (FRM) for decision-making, planning, and generally responding in nonautomatic ways. The FRM generates responses by manipulating information and, to function effectively, its data input must be restricted to task-relevant information. The properties of consciousness correspond to the various input requirements of the FRM, and when important information is missing from consciousness, functions of the FRM are adversely affected; both points indicate that consciousness is the input data to the FRM. Qualitative and quantitative information (shape, size, location, etc.) is incorporated into the input data by a qualia array of colors, sounds, and so on, which makes the input conscious. This view of the biological function of consciousness explains why we have experiences; why we have emotional and other feelings, and why their loss is associated with poor decision-making; why blindsight patients do not spontaneously initiate responses to events in their blind field; why counter-habitual actions are only possible when the intended action is in mind; and the reason for inattentional blindness.
Collapse
Affiliation(s)
- Brian Earl
- Independent Researcher, Formerly Affiliated with the School of Psychological Sciences, Monash University, Melbourne, Australia
| |
Collapse
|
38
|
Lévy-Bencheton D, Pélisson D, Panouillères M, Urquizar C, Tilikete C, Pisella L. Adaptation of scanning saccades co-occurs in different coordinate systems. J Neurophysiol 2014; 111:2505-15. [PMID: 24647436 DOI: 10.1152/jn.00733.2013] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Plastic changes of saccades (i.e., following saccadic adaptation) do not transfer between oppositely directed saccades, except when multiple directions are trained simultaneously, suggesting that saccades are planned in retinotopic coordinates. Interestingly, a recent study in healthy human subjects revealed that after an adaptive increase of rightward scanning saccades, both leftward and rightward double-step, memory-guided saccades triggered toward the adapted endpoint were modified, revealing that target location was coded in spatial coordinates (Zimmermann et al. 2011). However, as the computer screen provided a visual frame, one alternative hypothesis could be a coding in allocentric coordinates. Here, we questioned whether adaptive modifications of saccadic planning occur in multiple coordinate systems. We reproduced the paradigm of Zimmermann et al. (2011) using target light-emitting diodes in the dark, with and without a visual frame, and tested different saccades before and after adaptation. With double-step, memory-guided saccades, we reproduced the transfer of adaptation to leftward saccades with the visual frame but not without it, suggesting that the coordinate system used for saccade planning, when the frame is visible, is allocentric rather than spatiotopic. With single-step, memory-guided saccades, adaptation transferred to leftward saccades both with and without the visual frame, revealing target localization in a coordinate system that is neither retinotopic nor allocentric. Finally, with single-step, visually guided saccades, the classical, unidirectional pattern of amplitude change was reproduced, revealing retinotopic coding. These experiments indicate that the same adaptation procedure modifies saccadic planning in multiple coordinate systems in parallel, each of them revealed by the use of different saccade tasks after adaptation.
Collapse
Affiliation(s)
- Delphine Lévy-Bencheton
- Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, Team ImpAct, Bron, France; Lyon I University, Lyon, France; and
| | - Denis Pélisson
- Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, Team ImpAct, Bron, France; Lyon I University, Lyon, France; and
| | - Muriel Panouillères
- Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, Team ImpAct, Bron, France; Lyon I University, Lyon, France; and
| | - Christian Urquizar
- Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, Team ImpAct, Bron, France
| | - Caroline Tilikete
- Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, Team ImpAct, Bron, France; Lyon I University, Lyon, France; and Hospices Civils de Lyon, Neuro-Ophthalmology Unit, Hôpital Neurologique Pierre Wertheimer, Bron, France
| | - Laure Pisella
- Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, Team ImpAct, Bron, France; Lyon I University, Lyon, France; and
| |
Collapse
|
39
|
Hesse C, Schenk T. Delayed action does not always require the ventral stream: A study on a patient with visual form agnosia. Cortex 2014; 54:77-91. [DOI: 10.1016/j.cortex.2014.02.011] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2013] [Revised: 10/14/2013] [Accepted: 02/12/2014] [Indexed: 10/25/2022]
|
40
|
Fayel A, Chokron S, Cavézian C, Vergilino-Perez D, Lemoine C, Doré-Mazars K. Characteristics of contralesional and ipsilesional saccades in hemianopic patients. Exp Brain Res 2013; 232:903-17. [PMID: 24366440 DOI: 10.1007/s00221-013-3803-y] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2012] [Accepted: 11/29/2013] [Indexed: 11/25/2022]
Abstract
In order to further our understanding of action-blindsight, four hemianopic patients suffering from visual field loss contralateral to a unilateral occipital lesion were compared to six healthy controls during a double task of verbally reported target detection and saccadic responses toward the target. Three oculomotor tasks were used: a fixation task (i.e., without saccade) and two saccade tasks (eliciting reflexive and voluntary saccades, using step and overlap 600 ms paradigms, respectively), in separate sessions. The visual target was briefly presented at two different eccentricities (5° and 8°), in the right or left visual hemifield. Blank trials were interleaved with target trials, and signal detection theory was applied. Despite their hemifield defect, hemianopic patients retained the ability to direct a saccade toward their contralesional hemifield, whereas verbal detection reports were at chance level. However, saccade parameters (latency and amplitude) were altered by the defect. Saccades to the contralesional hemifield exhibited longer latencies and shorter amplitudes compared to those of the healthy group, whereas only the latencies of reflexive saccades to the ipsilesional hemifield were altered. Furthermore, healthy participants showed the expected latency difference between reflexive and voluntary saccades, with the latter longer than the former. This difference was not found in three out of four patients in either hemifield. Our results show action-blindsight for saccades, but also show that unilateral occipital lesions have effects on saccade generation in both visual hemifields.
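The abstract states that signal detection theory was applied to the detection reports. As a hedged illustration with invented trial counts (not the patients' data), sensitivity (d') and criterion are typically derived from hit and false-alarm rates as follows:

```python
# d' and criterion from hits on target trials and false alarms on blank trials,
# with a log-linear correction so that rates of 0 or 1 stay finite.
from scipy.stats import norm

hits, misses = 9, 21             # detection reports on target trials (invented)
false_alarms, corr_rej = 4, 26   # reports on blank trials (invented)

hit_rate = (hits + 0.5) / (hits + misses + 1)
fa_rate = (false_alarms + 0.5) / (false_alarms + corr_rej + 1)

d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
print(f"d' = {d_prime:.2f}, criterion = {criterion:.2f}")
```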
Collapse
Affiliation(s)
- Alexandra Fayel
- Laboratoire Vision Action Cognition, EAU 01, INC, IUPDP, Institut de Psychologie, Université Paris Descartes, Sorbonne Paris Cité, 71 Avenue Edouard Vaillant, 92774, Boulogne-Billancourt Cedex, France
| | | | | | | | | | | |
Collapse
|
41
|
Short-lived effects of a visual inducer during egocentric space perception and manual behavior. Atten Percept Psychophys 2013; 75:1012-26. [PMID: 23653410 DOI: 10.3758/s13414-013-0455-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
A pitched visual inducer has a strong effect on the visually perceived elevation of a target in extrapersonal space, and also on the elevation of the arm when a subject points with an unseen arm to the target's elevation. The manual effect is a systematic function of hand-to-body distance (Li and Matin Vision Research 45:533-550, 2005): When the arm is fully extended, manual responses to perceptually mislocalized luminous targets are veridical; when the arm is close to the body, gross matching errors occur. In the present experiments, we measured this hand-to-body distance effect during the presence of a pitched visual inducer and after inducer offset, using three values of hand-to-body distance (0, 40, and 70 cm) and two open-loop tasks (pointing to the perceived elevation of a target at true eye level and setting the height of the arm to match the elevation). We also measured manual behavior when subjects were instructed to point horizontally under induction and after inducer offset (no visual target at any time). In all cases, the hand-to-body distance effect disappeared shortly after inducer offset. We suggest that the rapid disappearance of the distance effect is a manifestation of processes in the dorsal visual stream that are involved in updating short-lived representations of the arm in egocentric visual perception and manual behavior.
Collapse
|
42
|
Buetti S, Tamietto M, Hervais-Adelman A, Kerzel D, de Gelder B, Pegna AJ. Dissociation between goal-directed and discrete response localization in a patient with bilateral cortical blindness. J Cogn Neurosci 2013; 25:1769-75. [PMID: 23944840 DOI: 10.1162/jocn_a_00404] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
Abstract
We investigated localization performance for simple targets in patient TN, who suffered bilateral damage to his primary visual cortex and shows complete cortical blindness. Using a two-alternative forced-choice paradigm, TN was asked to guess the position of left-right targets with goal-directed and discrete manual responses. The results indicate a clear dissociation between goal-directed and discrete responses. TN pointed toward the correct target location in approximately 75% of the trials but was at chance level with discrete responses. This indicates that the residual ability to localize an unseen stimulus depends critically on the possibility of translating a visual signal into a goal-directed motor output, at least in certain forms of blindsight.
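As a quick illustration of the kind of chance-level comparison implied above (the trial counts are assumptions, not the patient's data), a binomial test can ask whether roughly 75% correct in a two-alternative task exceeds the 50% guessing rate. This uses scipy.stats.binomtest, available in SciPy 1.7 and later:

```python
# One-sided binomial test of above-chance localization in a 2AFC task.
from scipy.stats import binomtest

n_trials = 80
n_correct = 60  # roughly 75% correct goal-directed pointing (hypothetical)
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"{n_correct}/{n_trials} correct, one-sided p = {result.pvalue:.4f}")
```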
Collapse
|
43
|
Riemer M, Kleinböhl D, Hölzl R, Trojan J. Action and perception in the rubber hand illusion. Exp Brain Res 2013; 229:383-93. [PMID: 23307154 DOI: 10.1007/s00221-012-3374-3] [Citation(s) in RCA: 46] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2012] [Accepted: 12/05/2012] [Indexed: 10/27/2022]
Abstract
Voluntary motor control over artificial hands has been shown to provoke a subjective incorporation of the artificial limb into body representations. However, in most studies projected or mirrored images of participants' own hands were presented as 'artificial' body parts. Using the paradigm of the rubber hand illusion (RHI), we assessed the impact of tactile sensations and voluntary movements with respect to an unambiguously body-extraneous, artificial hand. In addition to phenomenal self-reports and pointing movements toward the own hand, we introduced a new procedure for perceptual judgements that enables the assessment of proprioceptive drift and judgement reliability regarding perceived hand location. RHI effects were comparable for tactile sensations and voluntary movements, but characteristic discrepancies were found for pointing movements, which were differently affected by the two induction methods; RHI effects were also uncorrelated between the methods. These observations shed new light on inconsistent results concerning RHI effects on motor responses.
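A simplified sketch of how proprioceptive drift is typically computed (made-up numbers, not the study's data), contrasting the two induction methods mentioned above:

```python
# Proprioceptive drift = judged hand position after induction minus before,
# here compared between a tactile and a voluntary-movement induction.
import numpy as np

pre = np.array([0.4, -0.2, 0.1, 0.0, 0.3])           # judged position (cm) before
post_tactile = np.array([2.1, 1.5, 1.8, 1.2, 2.4])   # after tactile induction
post_movement = np.array([1.9, 1.3, 2.0, 1.1, 2.2])  # after voluntary movement

drift_tactile = (post_tactile - pre).mean()
drift_movement = (post_movement - pre).mean()
print(f"mean drift: tactile = {drift_tactile:.2f} cm, "
      f"movement = {drift_movement:.2f} cm")
```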
Collapse
Affiliation(s)
- Martin Riemer
- Otto Selz Institute for Applied Psychology, Mannheim Centre for Work and Health, University of Mannheim, 68131, Mannheim, Germany.
| | | | | | | |
Collapse
|
44
|
Carey D, Trevethan C, Weiskrantz L, Sahraie A. Does delay impair localisation in blindsight? Neuropsychologia 2012; 50:3673-80. [DOI: 10.1016/j.neuropsychologia.2012.08.018] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2012] [Revised: 08/09/2012] [Accepted: 08/23/2012] [Indexed: 10/27/2022]
|
45
|
Kirsch W, Hennighausen E. Electrophysiological indicators of visuomotor planning: delay-dependent changes. Percept Mot Skills 2012; 115:69-89. [PMID: 23033746 DOI: 10.2466/22.24.27.pms.115.4.69-89] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
A visuomotor task was used to investigate the influence of a varying response delay on the evoked activity measured during motor planning. Participants performed one-dimensional hand movements to visual targets after 200-, 1,000-, and 5,000-msec. delays with respect to target offset. In response to an imperative go signal, similar deflections were observed over motor areas in all delay conditions. In contrast, activity at posterior electrodes was strongly delay-dependent. In the shortest delay condition, evoked alpha oscillations were pronounced at occipitoparietal recording sites and were accompanied by P300-like positive waves. In contrast, when the delay was either 1,000 or 5,000 msec., lateral occipitotemporal deflections (N1) were observed. Also, in the longest delay condition another P300-like component was measured, which was entirely absent when the delay was 1,000 msec. These results suggest that the neurophysiological processes underlying motor planning change depending on the timing of the response.
Collapse
|
46
|
Infants and adults reaching in the dark. Exp Brain Res 2011; 217:237-49. [DOI: 10.1007/s00221-011-2984-5] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2011] [Accepted: 12/10/2011] [Indexed: 11/25/2022]
|
47
|
Hach S, Ishihara M, Keller PE, Schütz-Bosbach S. Hard and fast rules about the body: contributions of the action stream to judging body space. Exp Brain Res 2011; 212:563-74. [DOI: 10.1007/s00221-011-2765-1] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2011] [Accepted: 06/05/2011] [Indexed: 11/24/2022]
|
48
|
Are there unconscious perceptual processes? Conscious Cogn 2011; 20:449-63. [DOI: 10.1016/j.concog.2010.10.002] [Citation(s) in RCA: 54] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2009] [Revised: 07/11/2010] [Accepted: 10/03/2010] [Indexed: 11/21/2022]
|
50
|
Abstract
The concept of unconscious knowledge is fundamental for an understanding of human thought processes and mentation in general; however, the psychological community at large is not familiar with it. This paper offers a survey of the main psychological research currently being carried out into cognitive processes and examines the pathways that can be integrated into a discipline of unconscious knowledge. It shows that the field already has a defined history and discusses some of the features that all kinds of unconscious knowledge seem to share at a deeper level. With the aim of promoting further research, we discuss the main challenges that the postulation of unconscious cognition faces within the psychological community.
Collapse
Affiliation(s)
- Luís M. Augusto
- Institute of Philosophy, Faculty of Letters, University of Porto, Portugal
| |
Collapse
|