1
Montalti M, Mirabella G. Investigating the impact of surgical masks on behavioral reactions to facial emotions in the COVID-19 era. Front Psychol 2024; 15:1359075. [PMID: 38638526 PMCID: PMC11025472 DOI: 10.3389/fpsyg.2024.1359075]
Abstract
Introduction: The widespread use of surgical masks during the COVID-19 pandemic has posed challenges in interpreting facial emotions. As the mouth is known to play a crucial role in decoding emotional expressions, covering it is likely to affect this process. Recent evidence suggests that facial expressions impact behavioral responses only when their emotional content is relevant to subjects' goals. Thus, this study investigates whether and how masked emotional faces alter this phenomenon.
Methods: Forty participants completed two reaching versions of the Go/No-go task in counterbalanced order. In the Emotional Discrimination Task (EDT), participants were required to respond to angry, fearful, or happy expressions by performing a reaching movement and to withhold it when a neutral face was presented. In the Gender Discrimination Task (GDT), the same images were shown, but participants had to respond according to the poser's gender. The face stimuli were presented in two conditions: covered by a surgical mask (masked) or without any covering (unmasked).
Results: Consistent with previous studies, valence influenced behavioral control in the EDT but not in the GDT. Nevertheless, responses to facial emotions in the EDT differed significantly between the unmasked and masked conditions. In the former, angry expressions slowed participants' responses. Conversely, in the masked condition, behavioral reactions were affected by fearful and, to a greater extent, happy expressions: responses to fearful faces were slower, and those to happy faces more variable, than in the unmasked condition. Furthermore, response accuracy to masked happy faces declined dramatically compared with the unmasked condition and with other masked emotions.
Discussion: In sum, our findings indicate that surgical masks disrupt reactions to emotional expressions, leading people to react less accurately and with greater variability to happy expressions, provided that the emotional dimension is relevant to their goals.
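The two task rules can be sketched as follows; this is a minimal illustration of the go/no-go contingencies described in the abstract, with the stimulus labels and the `go_gender` parameter chosen for illustration rather than taken from the authors' materials:

```python
# Sketch of the two go/no-go rules (labels are illustrative).

def edt_response(emotion: str) -> str:
    """Emotional Discrimination Task: reach for any emotional
    expression, withhold the movement for a neutral face."""
    return "withhold" if emotion == "neutral" else "reach"

def gdt_response(gender: str, go_gender: str) -> str:
    """Gender Discrimination Task: the same images, but the decision
    depends on the poser's gender, making emotion task-irrelevant."""
    return "reach" if gender == go_gender else "withhold"
```

Under this framing, the identical stimulus set yields two different response rules, which is what lets the design isolate the role of task-relevance.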
Affiliation(s)
- Martina Montalti
- Department of Clinical and Experimental Sciences, University of Brescia, Brescia, Italy
- Giovanni Mirabella
- Department of Clinical and Experimental Sciences, University of Brescia, Brescia, Italy
- IRCCS Neuromed, Pozzilli, Italy
2
Coizet V, Al Tannir R, Pautrat A, Overton PG. Separation of Channels Subserving Approach and Avoidance/Escape at the Level of the Basal Ganglia and Related Brainstem Structures. Curr Neuropharmacol 2024; 22:1473-1490. [PMID: 37594168 PMCID: PMC11097992 DOI: 10.2174/1570159x21666230818154903]
Abstract
The basal ganglia have the key function of directing our behavior in the context of events from our environment and/or our internal state. This function relies on afferents targeting the main input structures of the basal ganglia, entering bids for action selection at the level of the striatum or signals for behavioral interruption at the level of the subthalamic nucleus, with behavioral reselection facilitated by dopamine signaling. Numerous experiments have studied action selection in relation to inputs from the cerebral cortex. However, less is known about the anatomical and functional link between the basal ganglia and the brainstem. In this review, we describe how brainstem structures also project to the main input structures of the basal ganglia, namely the striatum, the subthalamic nucleus and midbrain dopaminergic neurons, in the context of approach and avoidance (including escape from threat), two fundamental, mutually exclusive behavioral choices in an animal's repertoire in which the brainstem is strongly involved. We focus on three particularly well-described loci involved in approach and avoidance, namely the superior colliculus, the parabrachial nucleus and the periaqueductal grey nucleus. We consider what is known about how these structures are related to the basal ganglia, focusing on their projections toward the striatum, dopaminergic neurons and subthalamic nucleus, and explore the functional consequences of those interactions.
Affiliation(s)
- Véronique Coizet
- Grenoble Institute of Neuroscience, University Grenoble Alpes, Bâtiment E.J. Safra, Chemin Fortuné Ferrini, 38700 La Tronche, France
- Racha Al Tannir
- Grenoble Institute of Neuroscience, University Grenoble Alpes, Bâtiment E.J. Safra, Chemin Fortuné Ferrini, 38700 La Tronche, France
- Arnaud Pautrat
- Grenoble Institute of Neuroscience, University Grenoble Alpes, Bâtiment E.J. Safra, Chemin Fortuné Ferrini, 38700 La Tronche, France
- Paul G. Overton
- Department of Psychology, University of Sheffield, Sheffield, United Kingdom
3
Montalti M, Mirabella G. Unveiling the influence of task-relevance of emotional faces on behavioral reactions in a multi-face context using a novel Flanker-Go/No-go task. Sci Rep 2023; 13:20183. [PMID: 37978229 PMCID: PMC10656465 DOI: 10.1038/s41598-023-47385-1]
Abstract
Recent research indicates that emotional faces affect motor control only when they are task-relevant. However, these studies used single-face presentations, which do not accurately mirror real-life situations, in which we frequently engage with multiple individuals simultaneously. To overcome this limitation, we gave 40 participants two versions of a novel Flanker-Go/No-go task, presenting three-face stimuli with a central target and two task-irrelevant flankers that could be congruent or incongruent with the target in valence and gender. In the Emotional Discrimination Task (EDT), participants had to respond to fearful or happy targets and refrain from moving with neutral ones. In the Gender Discrimination Task (GDT), the same images were shown, but participants had to respond according to the target's gender. In line with previous studies, we found an effect of valence only in the EDT, where fearful targets increased reaction times and omission error rates compared to happy faces. Notably, the flanker effect, i.e., slower and less accurate responses in incongruent than in congruent conditions, was not found. This likely stems from the higher perceptual complexity of faces compared with the stimuli traditionally used in the Eriksen Flanker task (letters or signs), leading to a capacity limit in face feature processing.
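The congruency manipulation described above can be sketched as follows; the feature names and trial structure are illustrative assumptions, not the authors' stimulus code:

```python
# Illustrative coding of a three-face trial: one central target plus two
# identical task-irrelevant flankers, compared feature by feature.

def congruency(target: dict, flankers: dict, feature: str) -> str:
    """A trial is congruent on a feature when target and flankers match."""
    return "congruent" if target[feature] == flankers[feature] else "incongruent"

trial_target = {"valence": "fearful", "gender": "female"}
trial_flankers = {"valence": "happy", "gender": "female"}
# The same trial can be incongruent in valence yet congruent in gender,
# which is what allows the two tasks (EDT vs. GDT) to dissociate.
```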
Affiliation(s)
- Martina Montalti
- Department of Clinical and Experimental Sciences, Brescia University, Viale Europa, 11, 25123, Brescia, Italy
- Giovanni Mirabella
- Department of Clinical and Experimental Sciences, Brescia University, Viale Europa, 11, 25123, Brescia, Italy
- IRCCS Neuromed, Pozzilli, Italy
4
Esser S, Haider H, Lustig C, Tanaka T, Tanaka K. Action-effect knowledge transfers to similar effect stimuli. Psychol Res 2023; 87:2249-2258. [PMID: 36821009 PMCID: PMC10457235 DOI: 10.1007/s00426-023-01800-4]
Abstract
The ability to anticipate the sensory consequences of our actions (i.e., action-effects) is known to be important for intentional action initiation and control. Learned action-effects can select the responses that have previously been associated with them. What has remained largely unexplored is how learned action-effect associations can aid action selection for effects that have not previously been associated with an action but are similar to learned effects. In two studies, we aimed to show that, when presented with new, unknown action-effects, participants select the responses previously associated with similar action-effects. In the first study (n = 27), action-effect similarity was operationalized via stimuli belonging to the same or different categories as the previously learned action-effects. In the second study (n = 31), action-effect similarity was realized via stimuli that require comparable motor responses in real life. Participants first learned that specific responses are followed by specific visual effect stimuli. In the test phase, learned effect stimuli, new but similar effect stimuli, and new but dissimilar effect stimuli were presented ahead of the response. The findings revealed that both learned effect stimuli and new similar effect stimuli affected response times, whereas new dissimilar effects did not: responses were faster when a learned or a new similar effect was followed by a learned rather than an unlearned response. We interpret these findings in terms of action-effect learning: an action-effect once bound to an action is used to select that action when a similar effect, for which no action has yet been learned, is presented. However, due to our design, other explanations for the observed transfer are conceivable; we address these limitations in the General Discussion.
Affiliation(s)
- Sarah Esser
- Department of Cognitive Psychology, University of Cologne, Cologne, Germany
- Hilde Haider
- Department of Cognitive Psychology, University of Cologne, Cologne, Germany
- Clarissa Lustig
- Department of Cognitive Psychology, University of Cologne, Cologne, Germany
- Takumi Tanaka
- Graduate School of Humanities and Sociology and Faculty of Letters, The University of Tokyo, Tokyo, Japan
- Kanji Tanaka
- Faculty of Arts and Science, Kyushu University, Fukuoka, Japan
5
Wilkinson KM, Elko LR, Elko E, McCarty TV, Sowers DJ, Blackstone S, Roman-Lantzy C. An Evidence-Based Approach to Augmentative and Alternative Communication Design for Individuals With Cortical Visual Impairment. Am J Speech Lang Pathol 2023; 32:1939-1960. [PMID: 37594735 DOI: 10.1044/2023_ajslp-22-00397]
Abstract
Purpose: This article highlights the contributions of three pillars of an evidence-based practice approach (service providers, researchers, and families/clients) to the development of a framework that offers a way forward for professionals, families, and technology companies to support optimal visual and communication outcomes for individuals with cortical visual impairment (CVI) who use augmentative and alternative communication (AAC). By providing available research findings alongside practical information and lived experiences, the article offers clinical considerations and design features that can address the unique needs of these individuals.
Method: This article reviews the literature on CVI and describes in detail, and from multiple viewpoints, the features required in AAC systems to support individuals with CVI and enable them to communicate effectively.
Results: Components necessary for teams, communication partners, and AAC designers to optimize AAC system design for CVI are presented, with external research evidence and internal evidence from lived experience supporting their importance.
Conclusions: An AAC system design tailored to the unique visual processing characteristics of CVI is likely to promote positive communication outcomes. The lived experience of an individual who themselves has CVI illustrates the need for individualized assessments and interventions that incorporate and reflect the research presented here.
Supplemental Material: https://doi.org/10.23641/asha.23902239
Affiliation(s)
- Krista M Wilkinson
- Department of Communication Sciences and Disorders, The Pennsylvania State University, University Park
- Tara V McCarty
- Department of Communication Sciences and Disorders, The Pennsylvania State University, University Park
- Dawn J Sowers
- Department of Communication Sciences and Disorders, The Pennsylvania State University, University Park
6
Zhou M, Gong Z, Dai Y, Wen Y, Liu Y, Zhen Z. A large-scale fMRI dataset for human action recognition. Sci Data 2023; 10:415. [PMID: 37369643 DOI: 10.1038/s41597-023-02325-6]
Abstract
Human action recognition is a critical capability for our survival, allowing us to interact easily with the environment and others in everyday life. Although the neural basis of action recognition has been widely studied using a few action categories from simple contexts as stimuli, how the human brain recognizes diverse human actions in real-world environments still needs to be explored. Here, we present the Human Action Dataset (HAD), a large-scale functional magnetic resonance imaging (fMRI) dataset for human action recognition. HAD contains fMRI responses to 21,600 video clips from 30 participants. The video clips encompass 180 human action categories and offer a comprehensive coverage of complex activities in daily life. We demonstrate that the data are reliable within and across participants and, notably, capture rich representation information of the observed human actions. This extensive dataset, with its vast number of action categories and exemplars, has the potential to deepen our understanding of human action recognition in natural environments.
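The dataset's stated dimensions imply the following back-of-the-envelope structure; the per-participant and per-category breakdown is an inference from the abstract's totals, not a documented feature of the release:

```python
# Implied structure of HAD from the totals given in the abstract.
total_clips = 21_600
participants = 30
categories = 180

clips_per_participant = total_clips // participants        # clips seen by each subject
clips_per_category = total_clips // categories             # exemplars per action overall
exemplars_per_subject_per_category = clips_per_participant // categories
```

The even division across participants and categories is what makes the dataset suited to within- and across-subject reliability analyses of category-level responses.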
Affiliation(s)
- Ming Zhou
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Zhengxin Gong
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Yuxuan Dai
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Yushan Wen
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
- Youyi Liu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, 100875, China
- Zonglei Zhen
- Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing, 100875, China
7
Derzsi Z, Volcic R. Not only perception but also grasping actions can obey Weber's law. Cognition 2023; 237:105465. [PMID: 37150154 DOI: 10.1016/j.cognition.2023.105465]
Abstract
Weber's law, the principle that the uncertainty of perceptual estimates increases proportionally with object size, is regularly violated when considering the uncertainty of the grip aperture during grasping movements. The origins of this perception-action dissociation are debated and have been attributed to various factors, including different coding of visual size information for perception and action, biomechanical factors, the use of positional information to guide grasping, or sensorimotor calibration. Here, we contrasted these accounts and compared perceptual and grasping uncertainties by asking people to indicate the visually perceived center of differently sized objects (Perception condition) or to grasp and lift the same objects with the requirement to achieve a balanced lift (Action condition). We found that the variability (uncertainty) of contact positions increased as a function of object size in both perception and action. The adherence of the Action condition to Weber's law, and the consequent absence of a perception-action dissociation, contradicts the predictions based on different coding of visual size information and sensorimotor calibration. These findings provide clear evidence that human perceptual and visuomotor systems rely on the same visual information and suggest that the previously reported violations of Weber's law in grasping movements should be attributed to other factors.
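The proportionality at the heart of Weber's law can be written out directly; the Weber fraction value below is illustrative, not taken from the paper:

```python
# Weber's law: estimate uncertainty (SD) scales linearly with size,
# so the size-normalized uncertainty (the Weber fraction) is constant.

def weber_sd(size_mm: float, weber_fraction: float = 0.05) -> float:
    """Predicted standard deviation of a size estimate under Weber's law."""
    return weber_fraction * size_mm

# Doubling object size doubles the predicted variability, while the
# relative uncertainty (SD / size) stays fixed at the Weber fraction.
```

A grasping violation of Weber's law would show up here as an SD that stays flat (or grows sublinearly) as `size_mm` increases, which is exactly the pattern the study fails to find for contact positions.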
Affiliation(s)
- Zoltan Derzsi
- Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Robert Volcic
- Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates; Center for Artificial Intelligence and Robotics, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates; Center for Brain and Health, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
8
Neuroplasticity enables bio-cultural feedback in Paleolithic stone-tool making. Sci Rep 2023; 13:2877. [PMID: 36807588 PMCID: PMC9938911 DOI: 10.1038/s41598-023-29994-y]
Abstract
Stone-tool making is an ancient human skill thought to have played a key role in the bio-cultural co-evolutionary feedback that produced modern brains, culture, and cognition. To test the proposed evolutionary mechanisms underpinning this hypothesis, we studied stone-tool making skill learning in modern participants and examined interactions between individual neurostructural differences, plastic accommodation, and culturally transmitted behavior. We found that prior experience with other culturally transmitted craft skills increased both initial stone tool-making performance and subsequent neuroplastic training effects in a frontoparietal white matter pathway associated with action control. These effects were mediated by the effect of experience on pre-training variation in a frontotemporal pathway supporting action semantic representation. Our results show that the acquisition of one technical skill can produce structural brain changes conducive to the discovery and acquisition of additional skills, providing empirical evidence for bio-cultural feedback loops long hypothesized to link learning and adaptive change.
9
Bono D, Belyk M, Longo MR, Dick F. Beyond language: The unspoken sensory-motor representation of the tongue in non-primates, non-human and human primates. Neurosci Biobehav Rev 2022; 139:104730. [PMID: 35691470 DOI: 10.1016/j.neubiorev.2022.104730]
Abstract
The English idiom "on the tip of my tongue" commonly acknowledges that something is known, but it cannot be immediately brought to mind. This phrase accurately describes sensorimotor functions of the tongue, which are fundamental for many tongue-related behaviors (e.g., speech), but often neglected by scientific research. Here, we review a wide range of studies conducted on non-primates, non-human and human primates with the aim of providing a comprehensive description of the cortical representation of the tongue's somatosensory inputs and motor outputs across different phylogenetic domains. First, we summarize how the properties of passive non-noxious mechanical stimuli are encoded in the putative somatosensory tongue area, which has a conserved location in the ventral portion of the somatosensory cortex across mammals. Second, we review how complex self-generated actions involving the tongue are represented in more anterior regions of the putative somato-motor tongue area. Finally, we describe multisensory response properties of the primate and non-primate tongue area by also defining how the cytoarchitecture of this area is affected by experience and deafferentation.
Affiliation(s)
- Davide Bono
- Birkbeck/UCL Centre for Neuroimaging, 26 Bedford Way, London WC1H 0AP, UK; Department of Experimental Psychology, UCL Division of Psychology and Language Sciences, 26 Bedford Way, London WC1H 0AP, UK
- Michel Belyk
- Department of Speech, Hearing, and Phonetic Sciences, UCL Division of Psychology and Language Sciences, 2 Wakefield Street, London WC1N 1PJ, UK
- Matthew R Longo
- Department of Psychological Sciences, Birkbeck College, University of London, Malet St, London WC1E 7HX, UK
- Frederic Dick
- Birkbeck/UCL Centre for Neuroimaging, 26 Bedford Way, London WC1H 0AP, UK; Department of Experimental Psychology, UCL Division of Psychology and Language Sciences, 26 Bedford Way, London WC1H 0AP, UK; Department of Psychological Sciences, Birkbeck College, University of London, Malet St, London WC1E 7HX, UK
10
Winterbottom T, Xiao S, McLean A, Al Moubayed N. Bilinear pooling in video-QA: empirical challenges and motivational drift from neurological parallels. PeerJ Comput Sci 2022; 8:e974. [PMID: 35721409 PMCID: PMC9202627 DOI: 10.7717/peerj-cs.974]
Abstract
Bilinear pooling (BLP) refers to a family of operations recently developed for fusing features from different modalities, predominantly in visual question answering (VQA) models. Successive BLP techniques have yielded higher performance with lower computational expense, yet at the same time they have drifted further from the original motivational justification of bilinear models, instead becoming empirically motivated by task performance. Furthermore, despite significant success in text-image fusion in VQA, BLP has not yet gained comparable prominence in video question answering (video-QA). Though BLP methods have continued to perform well on video tasks when fusing vision and non-textual features, BLP has recently been overshadowed by other vision and textual feature fusion techniques in video-QA. We aim to add a new perspective to the empirical and motivational drift in BLP. We take a step back and discuss the motivational origins of BLP, highlighting the often-overlooked parallels to neurological theories (Dual Coding Theory and the Two-Stream Model of Vision). We seek to carefully and experimentally ascertain the empirical strengths and limitations of BLP as a multimodal text-vision fusion technique in video-QA using two models (the TVQA baseline and the heterogeneous-memory-enhanced 'HME' model) and four datasets (TVQA, TGIF-QA, MSVD-QA, and EgoVQA). We examine the impact of both simply replacing feature concatenation in the existing models with BLP, and a modified version of the TVQA baseline to accommodate BLP that we name the 'dual-stream' model. We find that our relatively simple integration of BLP does not increase, and mostly harms, performance on these video-QA benchmarks. Using our insights on recent BLP work for video-QA and recently proposed theoretical multimodal fusion taxonomies, we offer insight into why BLP-driven performance gains for video-QA benchmarks may be more difficult to achieve than in earlier VQA models.
We share our perspective on, and suggest solutions for, the key issues we identify with BLP techniques for multimodal fusion in video-QA. We look beyond the empirical justification of BLP techniques and propose both alternatives and improvements to multimodal fusion by drawing neurological inspiration from Dual Coding Theory and the Two-Stream Model of Vision. We qualitatively highlight the potential for neurological inspirations in video-QA by identifying the relative abundance of psycholinguistically 'concrete' words in the vocabularies of each of the text components (e.g., questions and answers) of the four video-QA datasets we experiment with.
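Full bilinear pooling, the operation that the low-rank variants discussed above approximate, is simply the flattened outer product of two modality feature vectors; the dimensionalities below are illustrative:

```python
import numpy as np

# Full bilinear pooling: every pairwise interaction between a text
# feature and a visual feature, flattened into one joint vector.

def bilinear_pool(text_feat: np.ndarray, vis_feat: np.ndarray) -> np.ndarray:
    return np.outer(text_feat, vis_feat).ravel()

x = np.ones(300)    # e.g., a question embedding
y = np.ones(2048)   # e.g., a CNN frame feature
z = bilinear_pool(x, y)
# z has 300 * 2048 = 614,400 dimensions, which is exactly why compact
# and low-rank approximations of this product were developed.
```

The quadratic blow-up of the output dimension is the computational expense that successive BLP variants trade against, at the cost of the motivational drift the authors describe.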
Affiliation(s)
- Sarah Xiao
- Durham University Business School, Durham University, Durham, United Kingdom
- Noura Al Moubayed
- Department of Computer Science, Durham University, Durham, United Kingdom
11
Relative, not absolute, stimulus size is responsible for a correspondence effect between physical stimulus size and left/right responses. Atten Percept Psychophys 2022; 84:1342-1358. [PMID: 35460026 PMCID: PMC9032296 DOI: 10.3758/s13414-022-02490-7]
Abstract
Recent studies have demonstrated a novel compatibility (or correspondence) effect between physical stimulus size and horizontally aligned responses: Left-hand responses are faster and more accurate to a small stimulus, compared to a large stimulus, whereas the opposite is true for right-hand responses. The present study investigated whether relative or absolute size is responsible for the effect. If relative size were important, a particular stimulus would elicit faster left-hand responses if the other stimuli in the set were larger, but the same stimulus would elicit a faster right-hand response if the other stimuli in the set were smaller. In terms of two-visual-systems theory, our study explores whether “vision for perception” (i.e., the ventral system) or “vision for action” (i.e., the dorsal system) dominates the processing of stimulus size in our task. In two experiments, participants performed a discrimination task in which they responded to stimulus color (Experiment 1) or to stimulus shape (Experiment 2) with their left/right hand. Stimulus size varied as an irrelevant stimulus feature, thus leading to corresponding (small-left; large-right) and non-corresponding (small-right; large-left) conditions. Moreover, a set of smaller stimuli and a set of larger stimuli, with both sets sharing an intermediately sized stimulus, were used in different conditions. The consistently significant two-way interaction between stimulus size and response location demonstrated the presence of the correspondence effect. The three-way interaction between stimulus size, response location, and stimulus set, however, was never significant. The results suggest that participants inadvertently classify stimuli according to relative size in a context-specific manner.
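The relative-size account tested here can be sketched as follows; the stimulus values and the mean-of-set classification rule are illustrative assumptions, not the authors' exact stimuli:

```python
# The same physical size is classified relative to the set it appears in
# (values and the mean-split rule are illustrative).

def relative_size(stimulus, stimulus_set):
    """Label a stimulus "small" or "large" relative to its own set."""
    return "small" if stimulus < sum(stimulus_set) / len(stimulus_set) else "large"

small_set = [10, 20, 30]   # the shared intermediately sized stimulus is 30
large_set = [30, 40, 50]
# 30 counts as "large" within small_set but "small" within large_set:
# the context-specific relative coding the study concludes drives the effect.
```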
12
Yang P, Saunders JA, Chen Z. The experience of stereoblindness does not improve use of texture for slant perception. J Vis 2022; 22:3. [PMID: 35412556 PMCID: PMC9012895 DOI: 10.1167/jov.22.5.3]
Abstract
Stereopsis is an important depth cue for normal people, but a subset of people suffer from stereoblindness and cannot use binocular disparity as a cue to depth. Does this experience of stereoblindness modulate the use of other depth cues? We investigated this question by comparing perception of 3D slant from texture in stereoblind people and stereo-normal people. Subjects performed slant discrimination and slant estimation tasks using both monocular and binocular stimuli. We found that the two groups had comparable ability to discriminate slant from texture information and showed similar mappings between texture information and slant perception (perception biased toward the frontal surface when texture information indicated low slants). The results suggest that the experience of stereoblindness does not change the use of texture information for slant perception. In addition, we found that stereoblind people benefitted from binocular viewing in the slant estimation task, despite their inability to use binocular disparity information. These findings are generally consistent with the optimal cue combination model of slant perception.
Affiliation(s)
- Pin Yang
- Shanghai Key Laboratory of Brain Functional Genomics, Affiliated Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Zhongting Chen
- Shanghai Key Laboratory of Brain Functional Genomics, Affiliated Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; Shanghai Changning Mental Health Center, Shanghai, China
13
MacIver MA, Finlay BL. The neuroecology of the water-to-land transition and the evolution of the vertebrate brain. Philos Trans R Soc Lond B Biol Sci 2022; 377:20200523. [PMID: 34957852 PMCID: PMC8710882 DOI: 10.1098/rstb.2020.0523]
Abstract
The water-to-land transition in vertebrate evolution offers an unusual opportunity to consider computational affordances of a new ecology for the brain. All sensory modalities are changed, particularly a greatly enlarged visual sensorium owing to air versus water as a medium, and expanded by mobile eyes and neck. The multiplication of limbs, as evolved to exploit aspects of life on land, is a comparable computational challenge. As the total mass of living organisms on land is a hundredfold larger than the mass underwater, computational improvements promise great rewards. In water, the midbrain tectum coordinates approach/avoid decisions, contextualized by water flow and by the animal's body state and learning. On land, the relative motions of sensory surfaces and effectors must be resolved, adding on computational architectures from the dorsal pallium, such as the parietal cortex. For the large-brained and long-living denizens of land, making the right decision when the wrong one means death may be the basis of planning, which allows animals to learn from hypothetical experience before enactment. Integration of value-weighted, memorized panoramas in basal ganglia/frontal cortex circuitry, with allocentric cognitive maps of the hippocampus and its associated cortices becomes a cognitive habit-to-plan transition as substantial as the change in ecology. This article is part of the theme issue 'Systems neuroscience through the lens of evolutionary theory'.
Affiliation(s)
- Malcolm A. MacIver
- Center for Robotics and Biosystems, Northwestern University, Evanston, IL 60208, USA
- Barbara L. Finlay
- Department of Psychology, Behavioral and Evolutionary Neuroscience Group, Cornell University, Ithaca, NY 14850, USA
|
14
|
Nilsson DE. The Evolution of Visual Roles – Ancient Vision Versus Object Vision. Front Neuroanat 2022; 16:789375. [PMID: 35221931 PMCID: PMC8863595 DOI: 10.3389/fnana.2022.789375] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2021] [Accepted: 01/20/2022] [Indexed: 12/05/2022] Open
Abstract
Just like other complex biological features, image vision (multi-pixel light sensing) did not evolve suddenly. Animal visual systems have a long prehistory of non-imaging light sensitivity. The first spatial vision was likely very crude, with only a few pixels, and evolved to improve orientation behaviors previously supported by single-channel directional photoreception. The origin of image vision was simply a switch from single to multiple spatial channels, which improved the behaviors for finding a suitable habitat and positioning oneself within it. Orientation based on spatial vision obviously involves active guidance of behaviors but, by necessity, also assessment of habitat suitability and environmental conditions. These conditions are crucial for deciding when to forage, reproduce, seek shelter, rest, etc. When spatial resolution became good enough to see other animals and interact with them, a whole range of new visual roles emerged: pursuit, escape, communication and other interactions. All these new visual roles require entirely new types of visual processing. Objects need to be separated from the background, identified and classified to make the correct choice of interaction. Object detection and identification can be used actively to guide behaviors, but of course also to assess the overall situation. Visual roles can thus be classified as either ancient non-object-based tasks or object vision. Each of these two categories can be further divided into active visual tasks and visual assessment tasks. This generates four major categories into which, I propose, all visual roles can be placed.
|
15
|
Shoshina I, Zelenskaya I, Karpinskaia V, Shilov Y, Tomilovskaya E. Sensitivity of Visual System in 5-Day "Dry" Immersion With High-Frequency Electromyostimulation. Front Neural Circuits 2021; 15:702792. [PMID: 35002633 PMCID: PMC8740068 DOI: 10.3389/fncir.2021.702792] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2021] [Accepted: 11/26/2021] [Indexed: 11/13/2022] Open
Abstract
The aim of this work was to study the sensitivity of the visual system in 5-day "dry" immersion with and without a course of high-frequency electromyostimulation (HFEMS). "Dry" immersion (DI) is one of the most effective models of microgravity. DI reproduces three basic effects of weightlessness: physical inactivity, support withdrawal and elimination of the vertical vascular gradient. In DI, the participant, separated from the water by a special waterproof and highly elastic fabric, is immersed in a liquid similar in density to the tissues of the human body. The sensitivity of the visual system was assessed by measuring contrast sensitivity and the magnitude of the Müller-Lyer illusion. Visual contrast sensitivity was measured in the spatial frequency range from 0.4 to 10.0 cycles/degree. The strength of the visual illusion was assessed by means of a motor "tracking" response. Measurements were carried out before the start of immersion, on the 1st, 3rd and 5th days of DI, and after its completion. Under "dry" immersion without HFEMS, upon the transition from gravity to microgravity conditions (BG and DI1), we observed significant differences in contrast sensitivity in the low spatial frequency range, whereas in the experiment with HFEMS the differences appeared in the medium spatial frequency range. In the experiment without HFEMS, the Müller-Lyer illusion was absent under microgravity conditions, while in the experiment using HFEMS it was significantly above zero at all stages. Thus, we obtained only limited evidence for the hypothesis of a possible compensating effect of HFEMS on changes in visual sensitivity upon the transition from gravity to microgravity conditions and vice versa. This is a pilot study, and the effect of HFEMS on visual sensitivity requires further research.
Affiliation(s)
- Irina Shoshina
- Laboratory of Physiology of Vision, Pavlov Institute of Physiology, Russian Academy of Sciences, Saint-Petersburg, Russia
- Inna Zelenskaya
- Institute of Biomedical Problems, Russian Academy of Sciences, Moscow, Russia
- Yuri Shilov
- Department of Psychology, Samara University, Samara, Russia
- Elena Tomilovskaya
- Institute of Biomedical Problems, Russian Academy of Sciences, Moscow, Russia
|
16
|
Dissociating the Influence of Perceptual Biases and Contextual Artifacts Within Target Configurations During the Planning and Control of Visually Guided Action. Motor Control 2021; 25:349-368. [PMID: 33811190 DOI: 10.1123/mc.2020-0054] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2020] [Revised: 01/13/2021] [Accepted: 01/18/2021] [Indexed: 11/18/2022]
Abstract
The failure of perceptual illusions to elicit corresponding biases within movement supports the view of two visual pathways separately contributing to perception and action. However, several alternative findings may contest this overarching framework. The present study aimed to examine the influence of perceptual illusions within the planning and control of aiming. To achieve this, we manipulated and measured the planning/control phases by respectively perturbing the target illusion (relative size-contrast illusion; Ebbinghaus/Titchener circles) following movement onset and detecting the spatiotemporal characteristics of the movement trajectory. The perceptual bias that was indicated by the perceived target size estimates failed to correspondingly manifest within the effective target size. While movement time (specifically, time after peak velocity) was affected by the target configuration, this outcome was not consistent with the direction of the perceptual illusions. These findings advocate an influence of the surrounding contextual information (e.g., annuli) on movement control that is independent of the direction predicted by the illusion.
|
17
|
Fournier LR, Richardson BP. Partial repetition between action plans delays responses to ideomotor compatible stimuli. PSYCHOLOGICAL RESEARCH 2021; 86:627-641. [PMID: 33740105 DOI: 10.1007/s00426-021-01491-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2020] [Accepted: 02/08/2021] [Indexed: 11/25/2022]
Abstract
Often one must depart from an intended course of events to react to sudden situational demands before resuming his or her original action retained in working memory. Retaining an action plan in working memory (WM) can delay or facilitate the execution of an intervening action when the action features of the two action plans partly overlap (partial repetition) compared to when they do not overlap. We investigated whether partial repetition costs (PRCs) or benefits (PRBs) occur when the intervening event is an ideomotor-compatible stimulus that is a biological representation of the response required by the participant. Participants viewed two visual events and retained an action plan to the first event (A) while executing a speeded response to the second, intervening event (B). In Experiment 1A, the two visual events were ideomotor compatible, non-ideomotor compatible (abstract), or one was ideomotor compatible, and the other abstract. Results showed PRCs for all event A-B stimulus combinations with reduced PRCs for intervening, ideomotor compatible events. In contrast to previous research, there was no evidence that ideomotor-compatible actions were automatic and bypassed the selection bottleneck. Experiment 1B confirmed PRCs for ideomotor compatible stimuli that more accurately mimicked the required response. Findings suggest that mechanisms for activating, selecting, and retaining action plans are similar between ideomotor compatible and abstract visual events. We conclude that PRCs occur in response to intervening events when action plans are generated offline and rely on WM, including those for ideomotor-compatible stimuli; but PRBs may be restricted to actions generated online. This conclusion is consistent with the perceptual-motor framework by Goodale and Milner (Trends in Neuroscience 15:22-25, 1992).
Affiliation(s)
- Lisa R Fournier
- Department of Psychology, Washington State University, Pullman, WA, 99164-4820, USA.
|
18
|
A double dissociation between action and perception in bimanual grasping: evidence from the Ponzo and the Wundt-Jastrow illusions. Sci Rep 2020; 10:14665. [PMID: 32887921 PMCID: PMC7473850 DOI: 10.1038/s41598-020-71734-z] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2020] [Accepted: 07/24/2020] [Indexed: 11/11/2022] Open
Abstract
Research on visuomotor control suggests that visually guided actions toward objects rely on computations that are functionally distinct from those underlying perception. For example, a double dissociation between grasping and perceptual estimates was reported in previous experiments that pitted real against illusory object size differences in the context of the Ponzo illusion. While most previous research on the relation between action and perception focused on one-handed grasping, everyday visuomotor interactions also entail the simultaneous use of both hands to grasp objects that are larger in size. Here, we examined whether this double dissociation extends to bimanual movement control. In Experiment 1, participants were presented with different-sized objects embedded in the Ponzo illusion. In Experiment 2, we tested whether the dissociation between perception and action extends to a different illusion, the Wundt–Jastrow illusion, which had not previously been used in grasping experiments. In both experiments, bimanual grasping trajectories reflected the differences in physical size between the objects; at the same time, perceptual estimates reflected the differences in illusory size between the objects. These results suggest that the double dissociation between action and perception generalizes to bimanual movement control. Unlike conscious perception, bimanual grasping movements are tuned to real-world metrics and can potentially resist irrelevant information about relative size and depth.
|
19
|
Abstract
It is proposed that the perceived present is not a moment in time, but an information structure comprising an integrated set of products of perceptual processing. All information in the perceived present carries an informational time marker identifying it as "present". This marker is exclusive to information in the perceived present. There are other kinds of time markers, such as ordinality ("this stimulus occurred before that one") and duration ("this stimulus lasted for 50 ms"). These are different from the "present" time marker and may be attached to information regardless of whether it is in the perceived present or not. It is proposed that the perceived present is a very short-term and very high-capacity holding area for perceptual information. The maximum holding time for any given piece of information is ~100 ms: This is affected by the need to balance the value of informational persistence for further processing against the problem of obsolescence of the information. The main function of the perceived present is to facilitate access by other specialized, automatic processes.
Affiliation(s)
- Peter A White
- School of Psychology, Cardiff University, Tower Building, Park Place, Cardiff, Wales, CF10 3YG, UK.
|
20
|
Henry CA, Jazayeri M, Shapley RM, Hawken MJ. Distinct spatiotemporal mechanisms underlie extra-classical receptive field modulation in macaque V1 microcircuits. eLife 2020; 9:54264. [PMID: 32458798 PMCID: PMC7253173 DOI: 10.7554/elife.54264] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2019] [Accepted: 05/11/2020] [Indexed: 01/23/2023] Open
Abstract
Complex scene perception depends upon the interaction between signals from the classical receptive field (CRF) and the extra-classical receptive field (eCRF) in primary visual cortex (V1) neurons. Although much is known about V1 eCRF properties, we do not yet know how the underlying mechanisms map onto the cortical microcircuit. We probed the spatio-temporal dynamics of eCRF modulation using a reverse correlation paradigm, and found three principal eCRF mechanisms: tuned-facilitation, untuned-suppression, and tuned-suppression. Each mechanism had a distinct timing and spatial profile. Laminar analysis showed that the timing, orientation-tuning, and strength of eCRF mechanisms had distinct signatures within magnocellular and parvocellular processing streams in the V1 microcircuit. The existence of multiple eCRF mechanisms provides new insights into how V1 responds to spatial context. Modeling revealed that the differences in timing and scale of these mechanisms predicted distinct patterns of net modulation, reconciling many previous disparate physiological and psychophysical findings.
Affiliation(s)
- Christopher A Henry
- Center for Neural Science, New York University, New York, United States; Dominick Purpura Department of Neuroscience, Albert Einstein College of Medicine, Bronx, United States
- Mehrdad Jazayeri
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, United States
- Robert M Shapley
- Center for Neural Science, New York University, New York, United States
- Michael J Hawken
- Center for Neural Science, New York University, New York, United States
|
21
|
Rolls ET. Neural Computations Underlying Phenomenal Consciousness: A Higher Order Syntactic Thought Theory. Front Psychol 2020; 11:655. [PMID: 32318008 PMCID: PMC7154119 DOI: 10.3389/fpsyg.2020.00655] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2020] [Accepted: 03/18/2020] [Indexed: 11/13/2022] Open
Abstract
Problems are raised with the global workspace hypothesis of consciousness, for example about exactly how global the workspace needs to be for consciousness to suddenly be present. Problems are also raised with Carruthers's (2019) version, which excludes conceptual (categorical or discrete) representations and in which phenomenal consciousness can be reduced to physical processes; a levels-of-explanation approach to the relation between the brain and the mind is advocated instead. A different theory of phenomenal consciousness is described, involving a particular computational system in which Higher Order Syntactic Thoughts are used to perform credit assignment on first-order thoughts of multiple-step plans, correcting them by manipulating symbols in a syntactic type of working memory. This provides a good evolutionary reason for the evolution of this kind of computational module, with which, it is proposed, phenomenal consciousness is associated. Some advantages of this HOST approach to phenomenal consciousness are then described with reference not only to the global workspace approach but also to Higher Order Thought (HOT) theories. It is hypothesized that the HOST system, which requires the ability to manipulate first-order symbols in working memory, might utilize parts of the prefrontal cortex implicated in working memory, especially the left inferior frontal gyrus, which is involved in language and probably syntactic processing. Overall, the approach advocated is to identify the computations that are linked to consciousness, and to analyze the neural bases of those computations.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, United Kingdom; Department of Computer Science, University of Warwick, Coventry, United Kingdom; Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai, China
|
22
|
Yan S, Hondzinski JM. Gaze Direction Changes the Vertical-Horizontal Illusory Effects on Manual Length Estimations. J Mot Behav 2020; 53:92-104. [PMID: 32107981 DOI: 10.1080/00222895.2020.1732286] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
We examined potentially deceptive influences of the vertical-horizontal (V-H) illusion on manual length estimations. When viewing V-H illusory configurations, people perceive that the bisecting segment length exceeds the bisected segment length when segments are actually equal. Participants used downward or rightward pointing movements to manually estimate the length of a short bisecting segment of the V-H illusion in upright or rotated configurations. Participants directed their gaze freely, on the configuration, or on the movement space. Manual length estimations for upright and rotated configurations depended on gaze direction, revealing bisection influences only for restricted viewing. People produced illusory influences on perceptuomotor control only when gaze was directed toward V-H configurations or their movement. Exploitation of deceptive visual cues can direct upper limb control for sensorimotor coordination.
Affiliation(s)
- Shijun Yan
- School of Kinesiology, Louisiana State University, Baton Rouge, Louisiana, USA
- Jan M Hondzinski
- School of Kinesiology, Louisiana State University, Baton Rouge, Louisiana, USA
|
23
|
Free-choice and forced-choice actions: Shared representations and conservation of cognitive effort. Atten Percept Psychophys 2020; 82:2516-2530. [PMID: 32080805 DOI: 10.3758/s13414-020-01986-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
We examined two questions regarding the interplay of planned and ongoing actions. First: Do endogenous (free-choice) and exogenous (forced-choice) triggers of action plans activate similar cognitive representations? And, second: Are free-choice decisions biased by future action goals retained in working memory? Participants planned and retained a forced-choice action to one visual event (A) while executing an immediate forced-choice or free-choice action (action B) to a second visual event (B); then the retained action (A) was executed. We found performance costs for action B if the two action plans partly overlapped versus did not overlap (partial repetition costs). This held true even when action B required a free-choice response indicating that forced-choice and free-choice actions are represented similarly. Partial repetition costs for free-choice actions were evident regardless of whether participants did or did not show free-choice response biases. Also, a subset of participants showed a bias to freely choose actions that did not overlap (vs. did overlap) with the action plan retained in memory, which led to improved performance in executing action B and recalling action A. Because cognitive effort is likely required to resolve feature code competition and confusion assumed to underlie partial repetition costs, this free-choice decision bias may serve to conserve cognitive effort and preserve the future action goal retained in working memory.
|
24
|
Predictive coding of action intentions in dorsal and ventral visual stream is based on visual anticipations, memory-based information and motor preparation. Brain Struct Funct 2019; 224:3291-3308. [PMID: 31673774 DOI: 10.1007/s00429-019-01970-1] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2019] [Accepted: 10/16/2019] [Indexed: 10/25/2022]
Abstract
Predictions of upcoming movements are based on several types of neural signals that span the visual, somatosensory, motor and cognitive system. Thus far, pre-movement signals have been investigated while participants viewed the object to be acted upon. Here, we studied the contribution of information other than vision to the classification of preparatory signals for action, even in the absence of online visual information. We used functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA) to test whether the neural signals evoked by visual, memory-based and somato-motor information can be reliably used to predict upcoming actions in areas of the dorsal and ventral visual stream during the preparatory phase preceding the action, while participants were lying still. Nineteen human participants (nine women) performed one of two actions towards an object with their eyes open or closed. Despite the well-known role of ventral stream areas in visual recognition tasks and the specialization of dorsal stream areas in somato-motor processes, we decoded action intention in areas of both streams based on visual, memory-based and somato-motor signals. Interestingly, we could reliably decode action intention in absence of visual information based on neural activity evoked when visual information was available and vice versa. Our results show a similar visual, memory and somato-motor representation of action planning in dorsal and ventral visual stream areas that allows predicting action intention across domains, regardless of the availability of visual information.
|
25
|
Smeets JBJ, van der Kooij K, Brenner E. A review of grasping as the movements of digits in space. J Neurophysiol 2019; 122:1578-1597. [DOI: 10.1152/jn.00123.2019] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
It is tempting to describe human reach-to-grasp movements in terms of two, more or less independent visuomotor channels, one relating hand transport to the object’s location and the other relating grip aperture to the object’s size. Our review of experimental work questions this framework for reasons that go beyond noting the dependence between the two channels. Both the lack of effect of size illusions on grip aperture and the finding that the variability in grip aperture does not depend on the object’s size indicate that size information is not used to control grip aperture. An alternative is to describe grip formation as emerging from controlling the movements of the digits in space. Each digit’s trajectory when grasping an object is remarkably similar to its trajectory when moving to tap the same position on its own. The similarity is also evident in the fast responses when the object is displaced. This review develops a new description of the speed-accuracy trade-off for multiple effectors that is applied to grasping. The most direct support for the digit-in-space framework is that prism-induced adaptation of each digit’s tapping movements transfers to that digit’s movements when grasping, leading to changes in grip aperture for adaptation in opposite directions for the two digits. We conclude that although grip aperture and hand transport are convenient variables to describe grasping, treating grasping as movements of the digits in space is a more suitable basis for understanding the neural control of grasping.
Affiliation(s)
- Jeroen B. J. Smeets
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Katinka van der Kooij
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Eli Brenner
- Department of Human Movement Sciences, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
|
26
|
Finlay BL. The neuroscience of vision and pain: evolution of two disciplines. Philos Trans R Soc Lond B Biol Sci 2019; 374:20190292. [PMID: 31544620 DOI: 10.1098/rstb.2019.0292] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022] Open
Abstract
Research in the neuroscience of pain perception and visual perception has taken contrasting paths. The contextual and the social aspects of pain judgements predisposed pain researchers to develop computational and functional accounts early, while vision researchers tended to simple localizationist or descriptive approaches first. Evolutionary thought was applied to distinct domains, such as game-theoretic approaches to cheater detection in pain research, versus vision scientists' studies of comparative visual ecologies. Both fields now contemplate current motor or decision-based accounts of perception, particularly predictive coding. Vision researchers do so without the benefit of earlier attention to social and motivational aspects of vision, while pain researchers lack a comparative behavioural ecology of pain, the normal incidence and utility of responses to tissue damage. Hybrid hypotheses arising from predictive coding as used in both domains are applied to some perplexing phenomena in pain perception to suggest future directions. The contingent and predictive interpretation of complex sensations, in such domains as 'runner's high', multiple cosmetic procedures, self-harm and circadian rhythms in pain sensitivity is one example. The second, in an evolutionary time frame, considers enhancement of primary perception and expression of pain in social species, when expressions of pain might reliably elicit useful help. This article is part of the Theo Murphy meeting issue 'Evolution of mechanisms and behaviour important for pain'.
Affiliation(s)
- Barbara L Finlay
- Department of Psychology, Behavioral and Evolutionary Neuroscience Group, Cornell University, Ithaca, NY 14853, USA
|
27
|
Ganel T, Ozana A, Goodale MA. When perception intrudes on 2D grasping: evidence from Garner interference. PSYCHOLOGICAL RESEARCH 2019; 84:2138-2143. [PMID: 31201534 DOI: 10.1007/s00426-019-01216-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2019] [Accepted: 06/08/2019] [Indexed: 11/28/2022]
Abstract
When participants reach out to pick up a real 3D object, their grip aperture reflects the size of the object well before contact is made. At the same time, the classical psychophysical laws and principles of relative size and shape that govern visual perception do not appear to intrude into the control of such movements, which are instead tuned only to the dimension relevant for grasping. In contrast, accumulating evidence suggests that grasps directed at flat 2D objects are not immune to perceptual effects. Thus, in 2D but not 3D grasping, the aperture of the fingers has been shown to be affected by relative and contextual information about the size and shape of the target object. A notable example of this dissociation comes from studies of Garner interference, which signals holistic processing of shape. Previous research has shown that 3D grasping shows no evidence of Garner interference but 2D grasping does (Freud & Ganel, 2015). In a recent study published in this journal (Löhr-Limpens et al., 2019), participants were presented with 2D objects in a Garner paradigm. The pattern of results closely replicated the previously published results with 2D grasping. Unfortunately, the authors, who appear to be unaware of the potential differences between 2D and 3D grasping, used their findings to draw an overgeneralized and unwarranted conclusion about the relation between 3D grasping and perception. In this short methodological commentary, we discuss the current literature on aperture shaping during 2D grasping and suggest that researchers should pay close attention to the nature of the target stimuli they use before drawing conclusions about visual processing for perception and action.
Affiliation(s)
- Tzvi Ganel
- Psychology Department, Ben-Gurion University of the Negev, 8410501, Beer-Sheva, Israel
- Aviad Ozana
- Psychology Department, Ben-Gurion University of the Negev, 8410501, Beer-Sheva, Israel
- Melvyn A Goodale
- The Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 5B7, Canada
|
28
|
Ganel T, Goodale MA. Still holding after all these years: An action-perception dissociation in patient DF. Neuropsychologia 2019; 128:249-254. [DOI: 10.1016/j.neuropsychologia.2017.09.016] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2017] [Accepted: 09/17/2017] [Indexed: 10/18/2022]
|
29
|
Fields C, Glazebrook JF. A mosaic of Chu spaces and Channel Theory II: applications to object identification and mereological complexity. J EXP THEOR ARTIF IN 2018. [DOI: 10.1080/0952813x.2018.1544285] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Affiliation(s)
- James F. Glazebrook
- Department of Mathematics and Computer Science, Eastern Illinois University, Charleston, IL, USA
- Adjunct Faculty, Department of Mathematics, University of Illinois at Urbana–Champaign, Urbana, IL, USA
|
30
|
Rolls ET, Wirth S. Spatial representations in the primate hippocampus, and their functions in memory and navigation. Prog Neurobiol 2018; 171:90-113. [DOI: 10.1016/j.pneurobio.2018.09.004] [Citation(s) in RCA: 59] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2018] [Revised: 09/10/2018] [Accepted: 09/10/2018] [Indexed: 01/01/2023]
|
31
|
Pepperell R. Consciousness as a Physical Process Caused by the Organization of Energy in the Brain. Front Psychol 2018; 9:2091. [PMID: 30450064 PMCID: PMC6225786 DOI: 10.3389/fpsyg.2018.02091] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2018] [Accepted: 10/10/2018] [Indexed: 12/15/2022] Open
Abstract
To explain consciousness as a physical process we must acknowledge the role of energy in the brain. Energetic activity is fundamental to all physical processes and causally drives biological behavior. Recent neuroscientific evidence can be interpreted in a way that suggests consciousness is a product of the organization of energetic activity in the brain. The nature of energy itself, though, remains largely mysterious, and we do not fully understand how it contributes to brain function or consciousness. According to the principle outlined here, energy, along with forces and work, can be described as actualized differences of motion and tension. By observing physical systems, we can infer there is something it is like to undergo actualized difference from the intrinsic perspective of the system. Consciousness occurs because there is something it is like, intrinsically, to undergo a certain organization of actualized differences in the brain.
Affiliation(s)
- Robert Pepperell
- FOVOLAB, Cardiff Metropolitan University, Cardiff, United Kingdom

32. Di Rosa G, Pironti E, Cucinotta F, Alibrandi A, Gagliano A. Gender affects early psychomotor milestones and long-term neurodevelopment of preterm infants. Infant Child Dev 2018. DOI: 10.1002/icd.2110
Affiliation(s)
- Gabriella Di Rosa
- Department of Human Pathology of the Adult and Developmental Age “Gaetano Barresi,” Unit of Child Neurology and Psychiatry, University of Messina, Messina, Italy
- Erica Pironti
- Department of Human Pathology of the Adult and Developmental Age “Gaetano Barresi,” Unit of Child Neurology and Psychiatry, University of Messina, Messina, Italy
- Francesca Cucinotta
- Department of Human Pathology of the Adult and Developmental Age “Gaetano Barresi,” Unit of Child Neurology and Psychiatry, University of Messina, Messina, Italy
- Angela Alibrandi
- Department of Economical, Business and Environmental Sciences and Quantitative Methods, University of Messina, Messina, Italy
- Antonella Gagliano
- Department of Human Pathology of the Adult and Developmental Age “Gaetano Barresi,” Unit of Child Neurology and Psychiatry, University of Messina, Messina, Italy

33. The endless visuomotor calibration of reach-to-grasp actions. Sci Rep 2018; 8:14803. PMID: 30287832. PMCID: PMC6172279. DOI: 10.1038/s41598-018-33009-6
Abstract
It is reasonable to assume that when we grasp an object we carry out the movement based only on the currently available sensory information. Unfortunately, our senses are often prone to err. Here, we show that the visuomotor system exploits the mismatch between the predicted and sensory outcomes of the immediately preceding action (sensory prediction error) to attain a degree of robustness against the fallibility of our perceptual processes. Participants performed reach-to-grasp movements toward objects presented at eye level at various distances. Grip aperture was affected by the object distance, even though both visual feedback of the hand and haptic feedback were provided. Crucially, grip aperture as well as the trajectory of the hand were systematically influenced also by the immediately preceding action. These results are well predicted by a model that modifies an internal state of the visuomotor system by adjusting the visuomotor mapping based on the sensory prediction errors. In sum, the visuomotor system appears to be in a constant fine-tuning process which makes the generation and control of grasping movements more resistant to interferences caused by our perceptual errors.

34. De Nunzio AM, Schweisfurth MA, Ge N, Falla D, Hahne J, Gödecke K, Petzke F, Siebertz M, Dechent P, Weiss T, Flor H, Graimann B, Aszmann OC, Farina D. Relieving phantom limb pain with multimodal sensory-motor training. J Neural Eng 2018; 15:066022. PMID: 30229747. DOI: 10.1088/1741-2552/aae271
Abstract
OBJECTIVE The causes of phantom limb pain (PLP), a disabling condition affecting 85% of amputees, remain unknown, and few effective treatments are available. Because the correlation between phantom limb motor control and sensory feedback from the motor intention has been identified as a possible mechanism for PLP development, sensory-feedback-based strategies that normalize the motor commands controlling the phantom limb offer important targets for new treatments. APPROACH Ten upper-limb amputees suffering from chronic PLP underwent 16 days of intensive training on phantom-limb movement control. Visual and tactile feedback, driven by muscular activity at the stump, was provided with the aim of reducing PLP intensity. MAIN RESULTS PLP intensity fell by 21.6% immediately after training and by 32.1% at follow-up (6 weeks after the end of training), reaching clinical effectiveness for chronic pain reduction. SIGNIFICANCE Multimodal sensory-motor training on phantom-limb movements, with visual and tactile feedback driven by stump muscle activity, is a new protocol that substantially reduces phantom limb pain intensity by improving phantom limb motor output.
Affiliation(s)
- A M De Nunzio
- Centre of Precision Rehabilitation for Spinal Pain (CPR Spine), School of Sport, Exercise and Rehabilitation Sciences, College of Life and Environmental Sciences, University of Birmingham, Edgbaston B15 2TT, Birmingham, United Kingdom; Applied Surgical and Rehabilitation Technology Lab, Department of Trauma Surgery, Orthopedic Surgery and Hand Surgery, University Medical Center Göttingen, Göttingen, Germany; Department of Translational Research and Knowledge Management, Otto Bock HealthCare GmbH, Duderstadt, Germany

35. Xu Y. The Posterior Parietal Cortex in Adaptive Visual Processing. Trends Neurosci 2018; 41:806-822. PMID: 30115412. DOI: 10.1016/j.tins.2018.07.012
Abstract
Although the primate posterior parietal cortex (PPC) has largely been associated with space, attention, and action-related processing, a growing number of studies have reported the direct representation of a diverse array of action-independent, nonspatial visual information in the PPC during both perception and visual working memory. By describing the distinctions and close interactions between visual representation and space-, attention-, and action-related processing in the PPC, I propose that these diverse PPC functions can be understood together through the unique contribution of the PPC to adaptive visual processing, yielding a more integrated and structured view of the role of the PPC in vision, cognition, and action.
Affiliation(s)
- Yaoda Xu
- Psychology Department, Harvard University, Cambridge, MA 02138, USA.

36. Carruthers P. Comparative psychology without consciousness. Conscious Cogn 2018; 63:47-60. DOI: 10.1016/j.concog.2018.06.012

37. Xu Y. A Tale of Two Visual Systems: Invariant and Adaptive Visual Information Representations in the Primate Brain. Annu Rev Vis Sci 2018; 4:311-336. PMID: 29949722. DOI: 10.1146/annurev-vision-091517-033954
Abstract
Visual information processing must satisfy two opposing needs: to comprehend the richness of the visual world, and to extract only the visual information pertinent to guiding thought and behavior at a given moment. I argue that these two aspects of visual processing are mediated by two complementary visual systems in the primate brain: the occipitotemporal cortex (OTC) and the posterior parietal cortex (PPC). The role of OTC in visual processing has been documented extensively by decades of neuroscience research. Here I review recent evidence from human imaging and monkey neurophysiology studies to highlight the role of PPC in adaptive visual processing. I first document the diverse array of visual representations found in PPC. I then describe the adaptive nature of visual representation in PPC by contrasting visual processing in OTC and PPC and by showing that visual representations in PPC largely originate from OTC.
Affiliation(s)
- Yaoda Xu
- Visual Sciences Laboratory, Psychology Department, Harvard University, Cambridge, Massachusetts 02138, USA

38. Cross-talk connections underlying dorsal and ventral stream integration during hand actions. Cortex 2018; 103:224-239. DOI: 10.1016/j.cortex.2018.02.016

39. Erlikhman G, Caplovitz GP, Gurariy G, Medina J, Snow JC. Towards a unified perspective of object shape and motion processing in human dorsal cortex. Conscious Cogn 2018; 64:106-120. PMID: 29779844. DOI: 10.1016/j.concog.2018.04.016
Abstract
Although object-related areas were discovered in human parietal cortex a decade ago, surprisingly little is known about the nature and purpose of these representations, and how they differ from those in the ventral processing stream. In this article, we review evidence for the unique contribution of object areas of dorsal cortex to three-dimensional (3-D) shape representation, the localization of objects in space, and in guiding reaching and grasping actions. We also highlight the role of dorsal cortex in form-motion interaction and spatiotemporal integration, possible functional relationships between 3-D shape and motion processing, and how these processes operate together in the service of supporting goal-directed actions with objects. Fundamental differences between the nature of object representations in the dorsal versus ventral processing streams are considered, with an emphasis on how and why dorsal cortex supports veridical (rather than invariant) representations of objects to guide goal-directed hand actions in dynamic visual environments.
Affiliation(s)
- Gennadiy Gurariy
- Department of Psychology, University of Nevada, Reno, USA; Department of Psychology, University of Wisconsin, Milwaukee, USA
- Jared Medina
- Department of Psychological and Brain Sciences, University of Delaware, USA

40. Castaldi E, Tinelli F, Cicchini GM, Morrone MC. Supramodal agnosia for oblique mirror orientation in patients with periventricular leukomalacia. Cortex 2018; 103:179-198. PMID: 29655042. PMCID: PMC6004039. DOI: 10.1016/j.cortex.2018.03.010
Abstract
Periventricular leukomalacia (PVL) is characterized by focal necrosis of the periventricular white matter and is often observed in preterm infants. PVL is frequently associated with motor impairment and with visual deficits affecting both primary stages of visual processing and higher visual cognitive abilities. Here we describe six PVL subjects, with normal verbal IQ, showing orientation perception deficits in both the haptic and visual domains. Subjects were asked to compare the orientation of two stimuli presented simultaneously or sequentially, using both a two-alternative forced-choice (2AFC) orientation discrimination and a matching procedure. Visual stimuli were oriented gratings or bars, or collinear short lines embedded within a random pattern. Haptic stimuli comprised two rotatable wooden sticks. PVL patients performed at chance in discriminating oblique orientations, for both visual and haptic stimuli. Moreover, when asked to reproduce an oblique orientation, they often oriented the stimulus along the mirror-symmetric orientation. The deficit generalized to stimuli varying in many low-level features, was invariant for spatiotopic object orientation, and also occurred for sequential presentations. The deficit was specific to oblique orientations and did not occur for horizontal or vertical stimuli. These findings show that PVL can affect a specific network involved in the supramodal perception of mirror-symmetry orientation.
Affiliation(s)
- Elisa Castaldi
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Francesca Tinelli
- Department of Developmental Neuroscience, Stella Maris Scientific Institute, Pisa, Italy
- M Concetta Morrone
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy; Department of Developmental Neuroscience, Stella Maris Scientific Institute, Pisa, Italy

41. Fattori P, Breveglieri R, Bosco A, Gamberini M, Galletti C. Vision for Prehension in the Medial Parietal Cortex. Cereb Cortex 2018; 27:1149-1163. PMID: 26656999. DOI: 10.1093/cercor/bhv302
Abstract
In the last two decades, the medial posterior parietal area V6A has been extensively studied in awake macaque monkeys for its visual and somatosensory properties and for its involvement in encoding spatial parameters for reaching, including arm movement direction and amplitude. This area also contains populations of neurons sensitive to grasping movements, such as wrist orientation and grip formation. Recent work has shown that V6A neurons also encode the shape of graspable objects and their affordance. In other words, V6A seems to encode object visual properties specifically for the purpose of action, in a dynamic sequence of visuomotor transformations that evolve over the course of a reach-to-grasp action. We propose a model of the cortical circuitry controlling reach-to-grasp actions in which V6A acts as a comparator that monitors differences between current and desired hand positions and configurations. This error signal could be used to continuously update the motor output and to correct reach direction, hand orientation, and/or grip aperture as required during the act of prehension. In contrast to the generally accepted view that the dorsomedial component of the dorsal visual stream encodes reaching but not grasping, the functional properties of V6A neurons strongly suggest that this area is involved in encoding all phases of prehension, including grasping.
Affiliation(s)
- Patrizia Fattori, Rossella Breveglieri, Annalisa Bosco, Michela Gamberini, Claudio Galletti
- Department of Pharmacy and Biotechnology (FaBiT), University of Bologna, 40126 Bologna, Italy

42. Goal-directed reaching: the allocentric coding of target location renders an offline mode of control. Exp Brain Res 2018; 236:1149-1159. PMID: 29453490. DOI: 10.1007/s00221-018-5205-7
Abstract
Reaching to a veridical target permits an egocentric spatial code (i.e., absolute limb and target position) to effect fast and effective online trajectory corrections supported via the visuomotor networks of the dorsal visual pathway. In contrast, a response entailing decoupled spatial relations between stimulus and response is thought to be primarily mediated via an allocentric code (i.e., the position of a target relative to another external cue) laid down by the visuoperceptual networks of the ventral visual pathway. Because the ventral stream renders a temporally durable percept, it is thought that an allocentric code does not support a primarily online mode of control, but instead supports a mode wherein a response is evoked largely in advance of movement onset via central planning mechanisms (i.e., offline control). Here, we examined whether reaches defined via ego- and allocentric visual coordinates are supported via distinct control modes (i.e., online versus offline). Participants performed target-directed and allocentric reaches in limb-visible and limb-occluded conditions. Notably, in the allocentric task, participants reached to a location that matched the position of a target stimulus relative to a reference stimulus, and to examine online trajectory amendments, we computed the proportion of variance explained (i.e., R² values) by the spatial position of the limb at 75% of movement time relative to a response's ultimate movement endpoint. Target-directed trials performed with limb vision showed more online corrections and greater endpoint precision than their limb-occluded counterparts, which in turn were associated with performance metrics comparable to allocentric trials performed with and without limb vision. Accordingly, we propose that the absence of ego-motion cues (i.e., limb vision) and/or the specification of a response via an allocentric code renders motor output served via the 'slow' visuoperceptual networks of the ventral visual pathway.
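The R² metric used in this abstract can be sketched numerically. This is a hypothetical simulation, not the study's data or code; all trial counts and variability values below are assumed for illustration. The logic: when few corrections occur after the 75% mark of the movement (offline control), limb position at 75% of movement time explains most of the endpoint variance (high R²); large late online corrections decouple the two (low R²).

```python
import numpy as np

# Hypothetical simulation of the R^2 online-control metric: regress
# movement endpoints on limb position at 75% of movement time (MT).
rng = np.random.default_rng(0)
n_trials = 500
pos_75 = rng.normal(100.0, 5.0, n_trials)  # limb position at 75% MT (mm)

def endpoint_r2(late_sd: float) -> float:
    """R^2 between position at 75% MT and endpoint, given the SD (mm)
    of trajectory amendments made after the 75% mark."""
    endpoints = pos_75 + rng.normal(0.0, late_sd, n_trials)
    r = np.corrcoef(pos_75, endpoints)[0, 1]
    return r ** 2

r2_offline = endpoint_r2(late_sd=1.0)   # few late corrections -> high R^2
r2_online = endpoint_r2(late_sd=10.0)   # many late corrections -> low R^2
print(round(r2_offline, 2), round(r2_online, 2))
```

A higher R² thus indicates that the endpoint was largely specified before the final quarter of the movement, which is how the study contrasts offline with online control.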

43.
Abstract
The feeling of control is a fundamental aspect of human experience and accompanies our voluntary actions all the time. However, how the sense of control interacts with wider perception, cognition, and behavior remains poorly understood. This study focused on how controlling an external object influences the allocation of attention. Experiment 1 examined attention to an object that is under a different level of control from the others. Participants searched for a target among multiple distractors on screen. All the distractors were partially under the participant's control (50% control level), and the search target was either under more or less control than the distractors. The results showed that, against this background of partial control, visual attention was attracted to an object only if it was more controlled than other available objects and not if it was less controlled. Experiment 2 examined attention allocation in contexts of either perfect control or no control over most of the objects. Specifically, the distractors were under either perfect (100%) control or no (0%) control, and the search target had one of six levels of control varying from 0% to 100%. When differences in control between the distractors and the target were small, visual attention was now more strongly drawn to search targets that were less controlled than distractors, rather than more controlled, suggesting attention to objects over which one might be losing control. Experiment 3 studied the events of losing or gaining control as opposed to the states of having or not having control. ERP measures showed that P300 amplitude proportionally encoded the magnitude of both increases and decreases in degree of control. However, losing control had more marked effects on P170 and P300 than gaining an equivalent degree of control, indicating high priority for efficiently detecting failures of control. 
Overall, our results suggest that controlled objects preferentially attract attention in uncontrolled environments. However, once control has been registered, the brain becomes highly sensitive to subsequent loss of control. Our findings point toward careful perceptual monitoring of degree of one's own agentic control over external objects. We suggest that control has intrinsic cognitive value because perceptual systems are organized to detect it and, once it has been acquired, to maintain it.
Affiliation(s)
- Wen Wen
- University College London; University of Tokyo

44. Shi X, Shen X, Qian X. Grasping and Pointing — Visual Conflict and Interference. Multisens Res 2018; 31:439-454. DOI: 10.1163/22134808-00002576
Abstract
There have been many debates over the two-visual-systems hypothesis (what vs. how, or perception vs. action) proposed by Goodale and his colleagues, and researchers have provided a variety of evidence for and against it. For instance, a study by Aglioti et al. offered good evidence for the two-visual-systems theory using the Ebbinghaus illusion, but researchers using other visual illusions have failed to find consistent results. We therefore used perceptual conflict and interference tasks to test the hypothesis: if conflict or interference in perception influenced perceptual processing alone and did not affect action processing, we could infer that the two visual systems are separate, and vice versa. In the current study, we carried out two experiments that employed the Stroop, Garner, and SNARC paradigms and used graspable 3-D Arabic numerals, aiming to determine whether effects arising from perceptual conflict or interference would affect participants' grasping and pointing. The results showed that the interaction between Stroop and numeral order (ascending or descending, i.e., SNARC) was significant and that the SNARC effect significantly affected action, whereas the main effects of Stroop and Garner interference were not significant. The results indicate that, to some degree, perceptual conflict affects action processing; they do not provide evidence for two separate visual systems.
Affiliation(s)
- Xia Shi
- Department of Psychology, Jiangxi University of Traditional Chinese Medicine, No. 1688 Meiling Avenue, Wanli District, Nanchang 330004, China
- Tianjin University of Technology and Education, Tianjin, China
- Zhejiang University, Hangzhou, China
- Xunbing Shen
- Department of Psychology, Jiangxi University of Traditional Chinese Medicine, No. 1688 Meiling Avenue, Wanli District, Nanchang 330004, China
- Zhejiang University, Hangzhou, China

45. Rossit S, Harvey M, Butler SH, Szymanek L, Morand S, Monaco S, McIntosh RD. Impaired peripheral reaching and on-line corrections in patient DF: Optic ataxia with visual form agnosia. Cortex 2018; 98:84-101. DOI: 10.1016/j.cortex.2017.04.004

46. Freud E, Macdonald SN, Chen J, Quinlan DJ, Goodale MA, Culham JC. Getting a grip on reality: Grasping movements directed to real objects and images rely on dissociable neural representations. Cortex 2018; 98:34-48. DOI: 10.1016/j.cortex.2017.02.020

47. Liang L, Zhou Y, Zhang M, Pan Y. Revealing the Radial Effect on Orientation Discrimination by Manual Reaction Time. Front Neurosci 2017; 11:638. PMID: 29225564. PMCID: PMC5705562. DOI: 10.3389/fnins.2017.00638
Abstract
It has been shown that the sensitivity and accuracy of orientation perception in the periphery is significantly better when the orientations are radial with respect to the fixation point than when they are tangential. However, since perception and action may be dissociated, it is unclear whether the perceptual radial effect has a counterpart in reaction time (RT) of motor responses. Furthermore, it is unknown whether or how stimulus-response-compatibility (SRC) effect interacts with the radial effect to determine RT. To address these questions, we measured subjects' manual RT to grating stimuli that appeared across upper visual field (VF). We found that (1) RTs were significantly shorter when a grating was oriented closer to the radial direction than when it was oriented closer to the tangential direction even though the perceptual accuracies for the more radial and more tangential orientations were not significantly different under our experimental condition; (2) This RT version of the radial effect was larger in the left VF than in the right VF; (3) The radial effect and SRC effect interacted with each other to determine the overall RT. These results suggest that the RT radial effect reported here is not a passive reflection of the radial effect in perceptual accuracy, but instead, represents different processing time of radial and tangential orientations along the sensorimotor pathway.
Affiliation(s)
- Lixin Liang
- Department of Neurology, The First Clinical College of Harbin Medical University, Harbin, China
- Yang Zhou
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China; Department of Neurobiology, University of Chicago, Chicago, IL, United States
- Mingsha Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Yujun Pan
- Department of Neurology, The First Clinical College of Harbin Medical University, Harbin, China

48. Attention in action and perception: Unitary or separate mechanisms of selectivity? Prog Brain Res 2017. PMID: 29157415. DOI: 10.1016/bs.pbr.2017.08.004
Abstract
What is the relation between the two-visual-streams hypothesis and selective visual attention? In this chapter, we first consider this question at a theoretical level before presenting an example of work from our lab that examines the question: under what conditions does the emotional content of a visual object influence visually guided action? Previous research has demonstrated that fear can influence perception, both consciously and unconsciously, but it is unclear when fear influences visually guided action. The study tested participants with varying degrees of spider phobia on two visually guided pointing tasks, while manipulating the emotional valence of the target (positive and negative) and the cognitive load of the participant (single vs. dual task). Participants rapidly moved their finger from a home position to a suddenly appearing target image on a touch screen. The images were emotionally negative (e.g., spiders and scorpions) or positive (e.g., flowers and food). To test the effect of emotional valence on the online control of the reach, the target either remained static or jumped to a new location. In both the single and dual tasks, a stream of digits was presented on the screen near the finger's starting location, but only in the dual task were participants asked to identify a letter somewhere in the stream. In the single task, increased fear of spiders reduced the speed and accuracy of the movement. In the dual task, increased fear impaired letter identification, but pointing actions were now equally efficient for low- and high-fear participants. These results imply that the finger's autopilot is influenced by emotional content only when attention can be fully devoted to identifying the emotion-evoking images. As such, the results support the view that the mechanisms of selection are not the same in the two visual streams.

49. Attentional capture for tool images is driven by the head end of the tool, not the handle. Atten Percept Psychophys 2017; 78:2500-2514. PMID: 27473377. DOI: 10.3758/s13414-016-1179-3
Abstract
Tools afford specialized actions that are tied closely to object identity. Although there is mounting evidence that functional objects, such as tools, capture visuospatial attention relative to non-tool competitors, this leaves open the question of which part of a tool drives attentional capture. We used a modified version of the Posner cueing task to determine whether attention is oriented towards the head versus the handle of realistic images of common elongated tools. We compared cueing effects for tools with control stimuli that consisted of images of fruit and vegetables of comparable elongation to the tools. Critically, our displays controlled for lower-level influences on attention that can arise from global shape asymmetries in the image cues. Observers were faster to detect low-contrast targets positioned near the head end versus the handle of tools. As expected, no lateralized performance bias was observed for the control stimuli. In a follow-up experiment, we confirmed that the bias towards tool heads was not due to inhibition of return as a result of early attentional orienting towards tool handles. Finally, we confirmed that real-world exemplars of the tools in the cueing studies were associated more strongly with specific grasping patterns than the elongated fruits and vegetables. Together, our results demonstrate that affordance effects on attentional capture are driven by the head end of a tool. Prioritizing the head end of a tool is adaptive because it ensures that the most relevant region of the object takes priority in selecting an effective motor plan.

50.
Abstract
According to Weber’s law, a fundamental principle of perception, visual resolution decreases linearly as object size increases. Previous studies have shown, however, that unlike perception, grasping does not adhere to Weber’s law. Yet this research was limited by the fact that perception and grasping were examined over a restricted range of stimulus sizes, bounded by the maximum finger span. The purpose of the current study was to test the generality of the dissociation between perception and action in a different type of visuomotor task, bimanual grasping, which also allows visual resolution during perception and action to be measured effectively over a much wider range of stimulus sizes than unimanual grasping. Participants grasped or estimated the sizes of large objects using both hands. The results showed that bimanual grasps violated Weber’s law throughout the entire movement trajectory. In contrast, Just Noticeable Differences (JNDs) for perceptual estimations of the objects increased linearly with size, in agreement with Weber’s law. The findings suggest that visuomotor control, across different types of actions and over a large range of sizes, is based on an absolute rather than a relative representation of object size.
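The linear scaling that Weber’s law predicts for perceptual JNDs can be sketched in a few lines. This is a toy illustration only; the Weber fraction k = 0.05 and the object sizes are assumed values, not figures reported in the study.

```python
# Toy sketch of Weber's law for perceptual size estimates: the Just
# Noticeable Difference (JND) is a constant fraction k of stimulus
# size S, i.e., JND = k * S (illustrative k, not the study's data).

def jnd_weber(size_mm: float, k: float = 0.05) -> float:
    """JND under Weber's law for an object of the given size (mm)."""
    return k * size_mm

sizes = [40.0, 80.0, 160.0]           # object sizes in mm
jnds = [jnd_weber(s) for s in sizes]  # JND doubles whenever size doubles
print(jnds)
```

The study's contrast is that perceptual estimates follow this proportional pattern, while the variability of bimanual grip aperture does not scale with object size in this way.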