1
Leo F, Gori M, Sciutti A. Early blindness modulates haptic object recognition. Front Hum Neurosci 2022; 16:941593. PMID: 36158621; PMCID: PMC9498977; DOI: 10.3389/fnhum.2022.941593.
Abstract
Haptic object recognition is usually an efficient process, although slower and less accurate than its visual counterpart. The early loss of vision imposes a greater reliance on haptic perception for recognition compared to the sighted. We may therefore expect congenitally blind people to recognize objects through touch more quickly and accurately than late blind or sighted people. However, the literature has provided mixed results. Furthermore, most studies of haptic object recognition have focused on performance, devoting little attention to the exploration procedures that led to that performance. In this study, we used iCube, an instrumented cube that records its orientation in space as well as the location of the points of contact on its faces. Three groups of congenitally blind, late blind, and age- and gender-matched blindfolded sighted participants were asked to explore the cube faces, on which small pins were positioned in varying numbers. Participants explored the cube twice and reported whether the cube was the same or differed in pin disposition. Results showed that recognition accuracy was not modulated by the level of visual ability. However, congenitally blind participants touched more cells simultaneously while exploring the faces, and changed the pattern of touched cells from one recording sample to the next more often, than late blind and sighted participants. Furthermore, the number of simultaneously touched cells correlated negatively with exploration duration. These findings indicate that early blindness shapes the haptic exploration of objects that can be held in the hands.
Affiliation(s)
- Fabrizio Leo
- Cognitive Architecture for Collaborative Technologies Unit, Istituto Italiano di Tecnologia, Genova, Italy
- *Correspondence: Fabrizio Leo
- Monica Gori
- Unit for Visually Impaired People, Istituto Italiano di Tecnologia, Genova, Italy
- Alessandra Sciutti
- Cognitive Architecture for Collaborative Technologies Unit, Istituto Italiano di Tecnologia, Genova, Italy
2
Rybář M, Daly I. Neural decoding of semantic concepts: A systematic literature review. J Neural Eng 2022; 19. PMID: 35344941; DOI: 10.1088/1741-2552/ac619a.
Abstract
Objective. Semantic concepts are coherent entities within our minds. They underpin our thought processes and are part of the basis for our understanding of the world. Modern neuroscience research is increasingly exploring how individual semantic concepts are encoded within our brains, and a number of studies are beginning to reveal key patterns of neural activity that underpin specific concepts. Building on this basic understanding of semantic neural encoding, neural engineers are beginning to explore tools and methods for semantic decoding: identifying which semantic concepts an individual is focused on at a given moment from recordings of their neural activity. In this paper we review the current literature on semantic neural decoding. Approach. We conducted this review according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Specifically, we assessed the eligibility of published peer-reviewed reports via a search of PubMed and Google Scholar, identifying a total of 74 studies in which semantic neural decoding is used to attempt to identify individual semantic concepts from neural activity. Results. Our review reveals how modern neuroscientific tools have been developed to allow decoding of individual concepts from a range of neuroimaging modalities. We discuss the specific neuroimaging methods, experimental designs, and machine learning pipelines employed to aid the decoding of semantic concepts. We quantify the efficacy of semantic decoders by measuring information transfer rates. We also discuss current challenges presented by this research area and present some possible solutions. Finally, we discuss some possible emerging and speculative future directions for this research area. Significance. Semantic decoding is a rapidly growing area of research. However, despite its increasingly widespread popularity and use in neuroscientific research, this is the first literature review focusing on the topic across neuroimaging modalities, with a focus on quantifying the efficacy of semantic decoders.
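The review quantifies decoder efficacy via information transfer rates. As a hedged illustration (not code from the review itself), the standard Wolpaw ITR formula used throughout the BCI literature can be computed as follows; the class count, accuracy, and trial duration below are purely illustrative numbers:

```python
import math

def itr_bits_per_trial(n_classes: int, accuracy: float) -> float:
    """Wolpaw information transfer rate in bits per trial.

    Assumes n_classes equiprobable targets and errors spread
    uniformly over the remaining n_classes - 1 targets.
    """
    if n_classes < 2:
        raise ValueError("need at least two classes")
    if accuracy >= 1.0:
        return math.log2(n_classes)  # perfect decoding
    if accuracy <= 1.0 / n_classes:
        return 0.0  # at or below chance, report zero bits
    p, n = accuracy, n_classes
    return (math.log2(n)
            + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_minute(n_classes: int, accuracy: float,
                        trial_seconds: float) -> float:
    """Scale the per-trial rate by the number of trials per minute."""
    return itr_bits_per_trial(n_classes, accuracy) * (60.0 / trial_seconds)

# Illustrative example: a binary semantic decoder at 76% accuracy,
# one decision every 10 s.
print(round(itr_bits_per_trial(2, 0.76), 3))        # ≈ 0.205 bits/trial
print(round(itr_bits_per_minute(2, 0.76, 10.0), 2)) # ≈ 1.23 bits/min
```

Note the chance-level clamp: below-chance accuracies would otherwise yield spurious positive bit rates, which is one of the pitfalls when comparing decoders across studies.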
Affiliation(s)
- Milan Rybář
- School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester, Essex, CO4 3SQ, United Kingdom
- Ian Daly
- School of Computer Science and Electronic Engineering, University of Essex, Wivenhoe Park, Colchester, Essex, CO4 3SQ, United Kingdom
3
Perquin MN, Taylor M, Lorusso J, Kolasinski J. Directional biases in whole hand motion perception revealed by mid-air tactile stimulation. Cortex 2021; 142:221-236. PMID: 34280867; PMCID: PMC8422163; DOI: 10.1016/j.cortex.2021.03.033.
Abstract
Many emerging technologies are attempting to leverage the tactile domain to convey complex spatiotemporal information translated directly from the visual domain, such as shape and motion. Despite the intuitive appeal of touch for communication, we do not know to what extent the hand can substitute for the retina in this way. Here we ask whether the tactile system can be used to perceive complex whole hand motion stimuli, and whether it exhibits the same kind of established perceptual biases as reported in the visual domain. Using ultrasound stimulation, we were able to project complex moving dot percepts onto the palm in mid-air, over 30 cm above an emitter device. We generated dot kinetogram stimuli involving motion in three different directional axes ('Horizontal', 'Vertical', and 'Oblique') on the ventral surface of the hand. Using Bayesian statistics, we found clear evidence that participants were able to discriminate tactile motion direction. Furthermore, there was a marked directional bias in motion perception: participants were both better and more confident at discriminating motion in the vertical and horizontal axes of the hand than motion presented obliquely. This pattern directly mirrors the perceptual biases robustly reported in the visual domain, termed the 'Oblique Effect'. These data demonstrate the existence of biases in motion perception that transcend sensory modality. Furthermore, we extend the Oblique Effect to a whole hand scale, using motion stimuli presented on the broad and relatively low-acuity surface of the palm, away from the densely innervated and much-studied fingertips. These findings highlight targeted ultrasound stimulation as a versatile method to convey potentially complex spatial and temporal information without the need for a user to wear or touch a device.
Affiliation(s)
- Marlou N Perquin
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, UK; Biopsychology & Cognitive Neuroscience, Faculty of Psychology and Sports Science, Bielefeld University, Germany; Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Germany.
- Mason Taylor
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, UK
- Jarred Lorusso
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, UK; School of Biological Sciences, University of Manchester, Manchester, UK
- James Kolasinski
- Cardiff University Brain Research Imaging Centre, School of Psychology, Cardiff University, UK
4
Rybář M, Poli R, Daly I. Decoding of semantic categories of imagined concepts of animals and tools in fNIRS. J Neural Eng 2021; 18:046035. PMID: 33780916; DOI: 10.1088/1741-2552/abf2e5.
Abstract
Objective. Semantic decoding refers to the identification of semantic concepts from recordings of an individual's brain activity. It has previously been reported in functional magnetic resonance imaging and electroencephalography. We investigate whether semantic decoding is possible with functional near-infrared spectroscopy (fNIRS). Specifically, we attempt to differentiate between the semantic categories of animals and tools. We also identify suitable mental tasks for potential brain-computer interface (BCI) applications. Approach. We explore the feasibility of a silent naming task, for the first time in fNIRS, and propose three novel intuitive mental tasks based on imagining concepts using three sensory modalities: visual, auditory, and tactile. Participants are asked to visualize an object in their minds, imagine the sounds made by the object, and imagine the feeling of touching the object. A general linear model is used to extract hemodynamic responses, which are then classified via logistic regression in a univariate and multivariate manner. Main results. We successfully classify all tasks with mean accuracies of 76.2% for the silent naming task, 80.9% for the visual imagery task, 72.8% for the auditory imagery task, and 70.4% for the tactile imagery task. Furthermore, we show that consistent neural representations of semantic categories exist by applying classifiers across tasks. Significance. These findings show that semantic decoding is possible in fNIRS. This study is a first step toward the use of semantic decoding for intuitive BCI applications for communication.
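The pipeline described here (GLM-extracted hemodynamic amplitudes fed to logistic regression) can be sketched in miniature. This is a hypothetical toy, not the authors' code: the GLM stage is reduced to a least-squares slope on one simulated channel, and the "animal"/"tool" trial amplitudes, boxcar regressor, and noise level are all invented for illustration:

```python
import math
import random

def glm_beta(signal, regressor):
    """Least-squares slope of signal on one regressor (with intercept):
    a toy stand-in for extracting a hemodynamic response amplitude
    from a single fNIRS channel."""
    n = len(signal)
    mx = sum(regressor) / n
    my = sum(signal) / n
    sxx = sum((x - mx) ** 2 for x in regressor)
    sxy = sum((x - mx) * (y - my) for x, y in zip(regressor, signal))
    return sxy / sxx

def train_logreg(X, y, lr=0.5, epochs=500):
    """Plain stochastic-gradient-descent logistic regression."""
    d = len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi  # gradient of the log-loss w.r.t. z
            b -= lr * g
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, xi):
    return 1 if b + sum(wj * xj for wj, xj in zip(w, xi)) > 0 else 0

# Hypothetical demo: one channel, a boxcar task regressor, and two trial
# types whose response amplitudes differ (stand-ins for animal/tool trials).
rng = random.Random(0)
boxcar = [0.0] * 5 + [1.0] * 10 + [0.0] * 5

def simulate_trial(amplitude):
    return [amplitude * r + rng.gauss(0.0, 0.1) for r in boxcar]

X = ([[glm_beta(simulate_trial(1.0), boxcar)] for _ in range(20)]
     + [[glm_beta(simulate_trial(0.5), boxcar)] for _ in range(20)])
y = [1] * 20 + [0] * 20
w, b = train_logreg(X, y)
accuracy = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(y)
print(f"training accuracy: {accuracy:.2f}")
```

In a real analysis the GLM would regress each channel's time series onto a canonical hemodynamic response, classification would be cross-validated rather than evaluated on training data, and a multivariate variant would stack betas from many channels into each feature vector.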
Affiliation(s)
- Milan Rybář
- Brain-Computer Interfacing and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
- Riccardo Poli
- Brain-Computer Interfacing and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
- Ian Daly
- Brain-Computer Interfacing and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Colchester, United Kingdom
5
Abstract
The spatial context in which we view a visual stimulus strongly determines how we perceive the stimulus. In the visual tilt illusion, the perceived orientation of a visual grating is affected by the orientation signals in its surrounding context. Conceivably, the spatial context in which a visual grating is perceived can be defined by interactive multisensory information rather than visual signals alone. Here, we tested the hypothesis that tactile signals engage the neural mechanisms supporting visual contextual modulation. Because tactile signals also convey orientation information and touch can selectively interact with visual orientation perception, we predicted that tactile signals would modulate the visual tilt illusion. We applied a bias-free method to measure the tilt illusion while testing visual-only, tactile-only, or visuo-tactile contextual surrounds. We found that a tactile context can influence visual tilt perception. Moreover, combining visual and tactile orientation information in the surround results in a larger tilt illusion relative to the illusion achieved with the visual-only surround. These results demonstrate that the visual tilt illusion is subject to multisensory influences and imply that non-visual signals access the neural circuits whose computations underlie the contextual modulation of vision.
6
Delis I, Dmochowski JP, Sajda P, Wang Q. Correlation of neural activity with behavioral kinematics reveals distinct sensory encoding and evidence accumulation processes during active tactile sensing. Neuroimage 2018; 175:12-21. PMID: 29580968; PMCID: PMC5960621; DOI: 10.1016/j.neuroimage.2018.03.035.
Abstract
Many real-world decisions rely on active sensing, a dynamic process for directing our sensors (e.g. eyes or fingers) across a stimulus to maximize information gain. Though ecologically pervasive, limited work has focused on identifying neural correlates of the active sensing process. In tactile perception, we often make decisions about an object or surface by actively exploring its shape and texture. Here we investigate the neural correlates of active tactile decision-making by simultaneously measuring electroencephalography (EEG) and finger kinematics while subjects interrogated a haptic surface to make perceptual judgments. Since sensorimotor behavior underlies decision formation in active sensing tasks, we hypothesized that the neural correlates of decision-related processes would be detectable by relating active sensing to neural activity. A novel brain-behavior correlation analysis revealed three distinct EEG components, localizing to the right-lateralized occipital cortex (LOC), middle frontal gyrus (MFG), and supplementary motor area (SMA), that were coupled with active sensing, in that their activity significantly correlated with finger kinematics. To probe the functional role of these components, we fit their single-trial couplings to decision-making performance using a hierarchical drift-diffusion model (HDDM), revealing that the LOC modulated the encoding of the tactile stimulus whereas the MFG predicted the rate of information integration towards a choice. Interestingly, the MFG component was absent in control subjects who performed active sensing but were not required to make perceptual decisions. By uncovering the neural correlates of distinct stimulus encoding and evidence accumulation processes, this study delineates, for the first time, the functional role of these cortical areas in active tactile decision-making.
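The core quantity behind this kind of brain-behavior coupling is a correlation between a neural time series and a kinematic one. As a rough, hypothetical sketch (the study's actual pipeline extracts multivariate EEG components first, which is not reproduced here), a Pearson correlation between a component activation trace and finger speed could be computed as:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    if n != len(y) or n < 2:
        raise ValueError("need two equal-length series of length >= 2")
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return sxy / math.sqrt(sxx * syy)

# Invented example values: a component trace that rises and falls
# together with finger speed during one exploratory movement.
component = [0.1, 0.4, 0.9, 1.2, 0.8, 0.3]
finger_speed = [0.0, 0.5, 1.0, 1.4, 0.9, 0.2]
print(f"r = {pearson_r(component, finger_speed):.3f}")  # strong positive r
```

In practice such correlations are computed per trial and their single-trial strengths are then related to behavioral performance, e.g. as regressors on drift-diffusion parameters, rather than interpreted in isolation.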
Affiliation(s)
- Ioannis Delis
- Department of Biomedical Engineering, Columbia University, New York, NY, 10027, USA
- Jacek P Dmochowski
- Department of Biomedical Engineering, City College of New York, New York, NY, 10031, USA
- Paul Sajda
- Department of Biomedical Engineering, Columbia University, New York, NY, 10027, USA; Data Science Institute, Columbia University, New York, NY, 10027, USA
- Qi Wang
- Department of Biomedical Engineering, Columbia University, New York, NY, 10027, USA
7
Hegdé J. Neural Mechanisms of High-Level Vision. Compr Physiol 2018; 8:903-953. PMID: 29978891; DOI: 10.1002/cphy.c160035.
Abstract
The last three decades have seen major strides in our understanding of neural mechanisms of high-level vision, or visual cognition of the world around us. Vision has also served as a model system for the study of brain function. Several broad insights, as yet incomplete, have recently emerged. First, visual perception is best understood not as an end unto itself, but as a sensory process that subserves the animal's behavioral goal at hand. Visual perception is likely to be simply a side effect that reflects the readout of visual information processing that leads to behavior. Second, the brain is essentially a probabilistic computational system that produces behaviors by collectively evaluating, not necessarily consciously or always optimally, the available information about the outside world received from the senses, the behavioral goals, prior knowledge about the world, and possible risks and benefits of a given behavior. Vision plays a prominent role in the overall functioning of the brain, providing the lion's share of information about the outside world. Third, the visual system does not function in isolation, but rather interacts actively and reciprocally with other brain systems, including other sensory faculties. Finally, various regions of the visual system process information not in a strict hierarchical manner, but as parts of various dynamic brain-wide networks, collectively referred to as the "connectome." Thus, a full understanding of vision will ultimately entail understanding, in granular, quantitative detail, various aspects of dynamic brain networks that use visual sensory information to produce behavior under real-world conditions. © 2017 American Physiological Society. Compr Physiol 8:903-953, 2018.
Affiliation(s)
- Jay Hegdé
- Brain and Behavior Discovery Institute, Augusta University, Augusta, Georgia, USA; James and Jean Culver Vision Discovery Institute, Augusta University, Augusta, Georgia, USA; Department of Ophthalmology, Medical College of Georgia, Augusta University, Augusta, Georgia, USA; The Graduate School, Augusta University, Augusta, Georgia, USA
8
Xue Z, Zeng X, Koehl L, Shen L. Interpretation of Fabric Tactile Perceptions through Visual Features for Textile Products. J Sens Stud 2016. DOI: 10.1111/joss.12201.
Affiliation(s)
- Z. Xue
- Department of Clothing Design and Engineering, School of Textiles and Clothing, Jiangnan University, Wuxi, Jiangsu Province 214122, P.R. China
- Research Group of Human Centered Design (HCD), Laboratoire de Génie et Matériaux Textiles (GEMTEX), Ecole Nationale Supérieure des Arts et Industries Textiles (ENSAIT), 2 allée Louise et Victor Champier, BP30329, F-59056 Roubaix Cedex 1, France
- X. Zeng
- Research Group of Human Centered Design (HCD), Laboratoire de Génie et Matériaux Textiles (GEMTEX), Ecole Nationale Supérieure des Arts et Industries Textiles (ENSAIT), 2 allée Louise et Victor Champier, BP30329, F-59056 Roubaix Cedex 1, France
- L. Koehl
- Research Group of Human Centered Design (HCD), Laboratoire de Génie et Matériaux Textiles (GEMTEX), Ecole Nationale Supérieure des Arts et Industries Textiles (ENSAIT), 2 allée Louise et Victor Champier, BP30329, F-59056 Roubaix Cedex 1, France
- L. Shen
- Department of Clothing Design and Engineering, School of Textiles and Clothing, Jiangnan University, Wuxi, Jiangsu Province 214122, P.R. China
9
Abstract
The visual and haptic perceptual systems are understood to share a common neural representation of object shape. A region thought to be critical for recognizing visual and haptic shape information is the lateral occipital complex (LOC). We investigated whether LOC is essential for haptic shape recognition in humans by studying behavioral responses and brain activation for haptically explored objects in a patient (M.C.) with bilateral lesions of the occipitotemporal cortex, including LOC. Despite severe deficits in recognizing objects using vision, M.C. was able to accurately recognize objects via touch. M.C.'s psychophysical response profile to haptically explored shapes was also indistinguishable from controls. Using fMRI, M.C. showed no object-selective visual or haptic responses in LOC, but her pattern of haptic activation in other brain regions was remarkably similar to healthy controls. Although LOC is routinely active during visual and haptic shape recognition tasks, it is not essential for haptic recognition of object shape. SIGNIFICANCE STATEMENT: The lateral occipital complex (LOC) is a brain region regarded as critical for recognizing object shape, both in vision and in touch. However, causal evidence linking LOC with haptic shape processing is lacking. We studied recognition performance, psychophysical sensitivity, and brain response to touched objects in a patient (M.C.) with extensive lesions involving LOC bilaterally. Despite being severely impaired in visual shape recognition, M.C. was able to identify objects via touch, and she showed normal sensitivity to a haptic shape illusion. M.C.'s brain response to touched objects in areas of undamaged cortex was also very similar to that observed in neurologically healthy controls. These results demonstrate that LOC is not necessary for recognizing objects via touch.
10
Stone KD, Gonzalez CLR. The contributions of vision and haptics to reaching and grasping. Front Psychol 2015; 6:1403. PMID: 26441777; PMCID: PMC4584943; DOI: 10.3389/fpsyg.2015.01403.
Abstract
This review aims to provide a comprehensive outlook on the sensory (visual and haptic) contributions to reaching and grasping. The focus is on studies in developing children, in normal and neuropsychological populations, and in sensory-deprived individuals. Studies have suggested a right-hand/left-hemisphere specialization for visually guided grasping and a left-hand/right-hemisphere specialization for haptically guided object recognition. This poses the interesting possibility that when vision is not available and grasping relies heavily on the haptic system, there is an advantage to using the left hand. We review the evidence for this possibility and dissect the unique contributions of the visual and haptic systems to grasping. We ultimately discuss how the integration of these two sensory modalities shapes hand preference.
Affiliation(s)
- Kayla D Stone
- The Brain in Action Laboratory, Department of Kinesiology, University of Lethbridge, Lethbridge, AB, Canada
- Claudia L R Gonzalez
- The Brain in Action Laboratory, Department of Kinesiology, University of Lethbridge, Lethbridge, AB, Canada
11
Jao RJ, James TW, James KH. Crossmodal enhancement in the LOC for visuohaptic object recognition over development. Neuropsychologia 2015; 77:76-89. PMID: 26272239; DOI: 10.1016/j.neuropsychologia.2015.08.008.
Abstract
Research has provided strong evidence of multisensory convergence of visual and haptic information within the visual cortex. These studies implement crossmodal matching paradigms to examine how systems use information from different sensory modalities for object recognition. Developmentally, behavioral evidence of visuohaptic crossmodal processing has suggested that communication within sensory systems develops earlier than across systems; nonetheless, it is unknown how the neural mechanisms driving these behavioral effects develop. To address this gap in knowledge, BOLD functional magnetic resonance imaging (fMRI) was measured during delayed match-to-sample tasks that examined intramodal (visual-to-visual, haptic-to-haptic) and crossmodal (visual-to-haptic, haptic-to-visual) novel object recognition in children aged 7-8.5 years and in adults. Tasks were further divided into sample encoding and test matching phases to dissociate the relative contributions of each. Results of crossmodal and intramodal object recognition revealed the network of known visuohaptic multisensory substrates, including the lateral occipital complex (LOC) and the intraparietal sulcus (IPS). Critically, both adults and children showed crossmodal enhancement within the LOC, suggesting a sensitivity to changes in sensory modality during recognition. These groups showed similar regions of activation, although children generally exhibited more widespread activity during sample encoding and weaker BOLD signal change during test matching than adults. Results further provided evidence of a bilateral region in the occipitotemporal cortex that was haptic-preferring in both age groups. This region abutted the bimodal LOtv and was consistent with a medial-to-lateral organization that transitioned from a visual to a haptic bias within the LOC. These findings converge with existing evidence of visuohaptic processing in the LOC in adults, and extend our knowledge of crossmodal processing in adults and children.
Affiliation(s)
- R Joanne Jao
- Cognitive Science Program, Indiana University, Bloomington, USA; Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA.
- Thomas W James
- Cognitive Science Program, Indiana University, Bloomington, USA; Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA; Program in Neuroscience, Indiana University, Bloomington, USA
- Karin Harman James
- Cognitive Science Program, Indiana University, Bloomington, USA; Department of Psychological and Brain Sciences, Indiana University, Bloomington, USA; Program in Neuroscience, Indiana University, Bloomington, USA
12
Cattaneo Z, Lega C, Ferrari C, Vecchi T, Cela-Conde CJ, Silvanto J, Nadal M. The role of the lateral occipital cortex in aesthetic appreciation of representational and abstract paintings: a TMS study. Brain Cogn 2015; 95:44-53. PMID: 25682351; DOI: 10.1016/j.bandc.2015.01.008.
Abstract
Neuroimaging studies of aesthetic appreciation have shown that activity in the lateral occipital area (LO), a key node in the object recognition pathway, is modulated by the extent to which visual artworks are liked or found beautiful. However, the available evidence is only correlational. Here we used transcranial magnetic stimulation (TMS) to investigate the putative causal role of LO in the aesthetic appreciation of paintings. In our first experiment, we found that interfering with LO activity during aesthetic appreciation selectively reduced evaluation of representational paintings, leaving appreciation of abstract paintings unaffected. A second experiment demonstrated that, although the perceived clearness of the images overall correlated positively with liking, the detrimental effect of LO TMS on aesthetic appreciation is not due to TMS reducing perceived clearness. Taken together, our findings suggest that object-recognition mechanisms mediated by LO play a causal role in the aesthetic appreciation of representational art.
Affiliation(s)
- Zaira Cattaneo
- Department of Psychology, University of Milano-Bicocca, Milano, Italy; Brain Connectivity Center, National Neurological Institute C. Mondino, Pavia, Italy.
- Carlotta Lega
- Department of Psychology, University of Milano-Bicocca, Milano, Italy
- Chiara Ferrari
- Brain Connectivity Center, National Neurological Institute C. Mondino, Pavia, Italy; Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Tomaso Vecchi
- Brain Connectivity Center, National Neurological Institute C. Mondino, Pavia, Italy; Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Juha Silvanto
- Department of Psychology, Faculty of Science and Technology, University of Westminster, UK
- Marcos Nadal
- Department of Basic Psychological Research and Research Methods, University of Vienna, Austria
13
Lacey S, Sathian K. Crossmodal and multisensory interactions between vision and touch. Scholarpedia 2015; 10:7957. PMID: 26783412; DOI: 10.4249/scholarpedia.7957.
Affiliation(s)
- Simon Lacey
- Department of Neurology, Emory University, Atlanta, GA, USA
- K Sathian
- Department of Neurology, Emory University, Atlanta, GA, USA; Department of Rehabilitation Medicine, Emory University, Atlanta, GA, USA; Department of Psychology, Emory University, Atlanta, GA, USA; Rehabilitation R&D Center of Excellence, Atlanta VAMC, Decatur, GA, USA
14
Abstract
The manipulation of objects commonly involves motion between object and skin. In this review, we discuss the neural basis of tactile motion perception and its similarities with its visual counterpart. First, much like in vision, the perception of tactile motion relies on the processing of spatiotemporal patterns of activation across populations of sensory receptors. Second, many neurons in primary somatosensory cortex are highly sensitive to motion direction, and the response properties of these neurons draw strong analogies to those of direction-selective neurons in visual cortex. Third, tactile speed may be encoded in the strength of the response of cutaneous mechanoreceptive afferents and of a subpopulation of speed-sensitive neurons in cortex. However, both afferent and cortical responses are strongly dependent on texture as well, so it is unclear how texture and speed signals are disambiguated. Fourth, motion signals from multiple fingers must often be integrated during the exploration of objects, but the way these signals are combined is complex and remains to be elucidated. Finally, visual and tactile motion perception interact powerfully, an integration process that is likely mediated by visual association cortex.
Affiliation(s)
- Yu-Cheng Pei
- Department of Physical Medicine and Rehabilitation, Chang Gung Memorial Hospital, Taoyuan, Taiwan, Republic of China; Healthy Aging Research Center, Chang Gung University, Taoyuan, Taiwan, Republic of China
- Sliman J Bensmaia
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, Illinois, USA; Committee on Computational Neuroscience, University of Chicago, Chicago, Illinois, USA
15
Lacey S, Sathian K. Visuo-haptic multisensory object recognition, categorization, and representation. Front Psychol 2014; 5:730. PMID: 25101014; PMCID: PMC4102085; DOI: 10.3389/fpsyg.2014.00730.
Abstract
Visual and haptic unisensory object processing show many similarities in terms of categorization, recognition, and representation. In this review, we discuss how these similarities contribute to multisensory object processing. In particular, we show that similar unisensory visual and haptic representations lead to a shared multisensory representation underlying both cross-modal object recognition and view-independence. This shared representation suggests a common neural substrate and we review several candidate brain regions, previously thought to be specialized for aspects of visual processing, that are now known also to be involved in analogous haptic tasks. Finally, we lay out the evidence for a model of multisensory object recognition in which top-down and bottom-up pathways to the object-selective lateral occipital complex are modulated by object familiarity and individual differences in object and spatial imagery.
Affiliation(s)
- Simon Lacey
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA
- K Sathian
- Department of Neurology, Emory University School of Medicine, Atlanta, GA, USA; Department of Rehabilitation Medicine, Emory University School of Medicine, Atlanta, GA, USA; Department of Psychology, Emory University School of Medicine, Atlanta, GA, USA; Rehabilitation Research and Development Center of Excellence, Atlanta Veterans Affairs Medical Center, Decatur, GA, USA
|
16
|
Lacey S, Stilla R, Sreenivasan K, Deshpande G, Sathian K. Spatial imagery in haptic shape perception. Neuropsychologia 2014; 60:144-58. [PMID: 25017050] [DOI: 10.1016/j.neuropsychologia.2014.05.008]
Abstract
We have proposed that haptic activation of the shape-selective lateral occipital complex (LOC) reflects a model of multisensory object representation in which the role of visual imagery is modulated by object familiarity. Supporting this, a previous functional magnetic resonance imaging (fMRI) study from our laboratory used inter-task correlations of blood oxygenation level-dependent (BOLD) signal magnitude and effective connectivity (EC) patterns based on the BOLD signals to show that the neural processes underlying visual object imagery (objIMG) are more similar to those mediating haptic perception of familiar (fHS) than unfamiliar (uHS) shapes. Here we employed fMRI to test a further hypothesis derived from our model, that spatial imagery (spIMG) would evoke activation and effective connectivity patterns more related to uHS than fHS. We found that few of the regions conjointly activated by spIMG and either fHS or uHS showed inter-task correlations of BOLD signal magnitudes, with parietal foci featuring in both sets of correlations. This may indicate some involvement of spIMG in HS regardless of object familiarity, contrary to our hypothesis, although we cannot rule out alternative explanations for the commonalities between the networks, such as generic imagery or spatial processes. EC analyses, based on inferred neuronal time series obtained by deconvolution of the hemodynamic response function from the measured BOLD time series, showed that spIMG shared more common paths with uHS than fHS. Re-analysis of our previous data, using the same EC methods as those used here, showed that, by contrast, objIMG shared more common paths with fHS than uHS. Thus, although our model requires some refinement, its basic architecture is supported: a stronger relationship between spIMG and uHS compared to fHS, and a stronger relationship between objIMG and fHS compared to uHS.
Affiliation(s)
- Simon Lacey
- Department of Neurology, Emory University, Atlanta, GA, USA
- Randall Stilla
- Department of Neurology, Emory University, Atlanta, GA, USA
- Karthik Sreenivasan
- AU MRI Research Center, Department of Electrical & Computer Engineering, Auburn University, Auburn, AL, USA
- Gopikrishna Deshpande
- AU MRI Research Center, Department of Electrical & Computer Engineering, Auburn University, Auburn, AL, USA; Department of Psychology, Auburn University, Auburn, AL, USA
- K Sathian
- Department of Neurology, Emory University, Atlanta, GA, USA; Department of Rehabilitation Medicine, Emory University, Atlanta, GA, USA; Department of Psychology, Emory University, Atlanta, GA, USA; Rehabilitation R&D Center of Excellence, Atlanta VAMC, Decatur, GA, USA
|
17
|
Kassuba T, Klinge C, Hölig C, Röder B, Siebner HR. Short-term plasticity of visuo-haptic object recognition. Front Psychol 2014; 5:274. [PMID: 24765082] [PMCID: PMC3980106] [DOI: 10.3389/fpsyg.2014.00274]
Abstract
Functional magnetic resonance imaging (fMRI) studies have provided ample evidence for the involvement of the lateral occipital cortex (LO), fusiform gyrus (FG), and intraparietal sulcus (IPS) in visuo-haptic object integration. Here we applied 30 min of sham (non-effective) or real offline 1 Hz repetitive transcranial magnetic stimulation (rTMS) to perturb neural processing in left LO immediately before subjects performed a visuo-haptic delayed-match-to-sample task during fMRI. In this task, subjects had to match sample (S1) and target (S2) objects presented sequentially within or across vision and/or haptics in both directions (visual-haptic or haptic-visual) and decide whether or not S1 and S2 were the same objects. Real rTMS transiently decreased activity at the site of stimulation and remote regions such as the right LO and bilateral FG during haptic S1 processing. Without affecting behavior, the same stimulation gave rise to relative increases in activation during S2 processing in the right LO, left FG, bilateral IPS, and other regions previously associated with object recognition. Critically, the modality of S2 determined which regions were recruited after rTMS. Relative to sham rTMS, real rTMS induced increased activations during crossmodal congruent matching in the left FG for haptic S2 and the temporal pole for visual S2. In addition, we found stronger activations for incongruent than congruent matching in the right anterior parahippocampus and middle frontal gyrus for crossmodal matching of haptic S2 and in the left FG and bilateral IPS for unimodal matching of visual S2, only after real but not sham rTMS. The results imply that a focal perturbation of the left LO triggers modality-specific interactions between the stimulated left LO and other key regions of object processing possibly to maintain unimpaired object recognition. This suggests that visual and haptic processing engage partially distinct brain networks during visuo-haptic object matching.
Affiliation(s)
- Tanja Kassuba
- Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Hvidovre, Hvidovre, Denmark; NeuroImageNord/Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Department of Neurology, Christian-Albrechts-University, Kiel, Germany
- Corinna Klinge
- NeuroImageNord/Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Department of Psychiatry, Warneford Hospital, Oxford, UK
- Cordula Hölig
- NeuroImageNord/Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Hartwig R Siebner
- Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Hvidovre, Hvidovre, Denmark; NeuroImageNord/Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Department of Neurology, Christian-Albrechts-University, Kiel, Germany
|
18
|
Hölig C, Föcker J, Best A, Röder B, Büchel C. Brain systems mediating voice identity processing in blind humans. Hum Brain Mapp 2014; 35:4607-19. [PMID: 24639401] [DOI: 10.1002/hbm.22498]
Abstract
Blind people rely more on vocal cues when they recognize a person's identity than sighted people. Indeed, a number of studies have reported better voice recognition skills in blind than in sighted adults. The present functional magnetic resonance imaging study investigated changes in the functional organization of neural systems involved in voice identity processing following congenital blindness. A group of congenitally blind individuals and matched sighted control participants were tested in a priming paradigm, in which two voice stimuli (S1, S2) were subsequently presented. The prime (S1) and the target (S2) were either from the same speaker (person-congruent voices) or from two different speakers (person-incongruent voices). Participants had to classify the S2 as either an old or a young person. Person-incongruent voices (S2) compared with person-congruent voices elicited an increased activation in the right anterior fusiform gyrus in congenitally blind individuals but not in matched sighted control participants. In contrast, only matched sighted controls showed a higher activation in response to person-incongruent compared with person-congruent voices (S2) in the right posterior superior temporal sulcus. These results provide evidence for crossmodal plastic changes of the person identification system in the brain after visual deprivation.
Affiliation(s)
- Cordula Hölig
- Department of Biological Psychology and Neuropsychology, University of Hamburg, Germany; Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Germany
|
19
|
Gandhi TK, Ganesh S, Sinha P. Improvement in spatial imagery following sight onset late in childhood. Psychol Sci 2014; 25:693-701. [PMID: 24406396] [DOI: 10.1177/0956797613513906]
Abstract
The factors contributing to the development of spatial imagery skills are not well understood. Here, we consider whether visual experience shapes these skills. Although differences in spatial imagery between sighted and blind individuals have been reported, it is unclear whether these differences are truly due to visual deprivation or instead are due to extraneous factors, such as reduced opportunities for the blind to interact with their environment. A direct way of assessing vision's contribution to the development of spatial imagery is to determine whether spatial imagery skills change soon after the onset of sight in congenitally blind individuals. We tested 10 children who gained sight after several years of congenital blindness and found significant improvements in their spatial imagery skills following sight-restoring surgeries. These results provide evidence of vision's contribution to spatial imagery and also have implications for the nature of internal spatial representations.
Affiliation(s)
- Tapan K Gandhi
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
|
20
|
Yau JM, Celnik P, Hsiao SS, Desmond JE. Feeling better: separate pathways for targeted enhancement of spatial and temporal touch. Psychol Sci 2014; 25:555-65. [PMID: 24390826] [DOI: 10.1177/0956797613511467]
Abstract
People perceive spatial form and temporal frequency through touch. Although distinct somatosensory neurons represent spatial and temporal information, these neural populations are intermixed throughout the somatosensory system. Here, we show that spatial and temporal touch can be dissociated and separately enhanced via cortical pathways that are normally associated with vision and audition. In Experiments 1 and 2, we found that anodal transcranial direct current stimulation (tDCS) applied over visual cortex, but not auditory cortex, enhances tactile perception of spatial orientation. In Experiments 3 and 4, we found that anodal tDCS over auditory cortex, but not visual cortex, enhances tactile perception of temporal frequency. This double dissociation reveals separate cortical pathways that selectively support spatial and temporal channels. These results bolster the emerging view that sensory areas process multiple modalities and suggest that supramodal domains may be more fundamental to cortical organization.
|
21
|
Abstract
Humans typically rely upon vision to identify object shape, but we can also recognize shape via touch (haptics). Our haptic shape recognition ability raises an intriguing question: To what extent do visual cortical shape recognition mechanisms support haptic object recognition? We addressed this question using a haptic fMRI repetition design, which allowed us to identify neuronal populations sensitive to the shape of objects that were touched but not seen. In addition to the expected shape-selective fMRI responses in dorsal frontoparietal areas, we observed widespread shape-selective responses in the ventral visual cortical pathway, including primary visual cortex. Our results indicate that shape processing via touch engages many of the same neural mechanisms as visual object recognition. The shape-specific repetition effects we observed in primary visual cortex show that visual sensory areas are engaged during the haptic exploration of object shape, even in the absence of concurrent shape-related visual input. Our results complement related findings in visually deprived individuals and highlight the fundamental role of the visual system in the processing of object shape.
|
22
|
Kassuba T, Klinge C, Hölig C, Röder B, Siebner HR. Vision holds a greater share in visuo-haptic object recognition than touch. Neuroimage 2013; 65:59-68. [DOI: 10.1016/j.neuroimage.2012.09.054]
|
23
|
Galvanic vestibular stimulation modulates the electrophysiological response during face processing. Vis Neurosci 2012; 29:255-62. [PMID: 22697300] [DOI: 10.1017/s0952523812000235]
Abstract
Although galvanic vestibular stimulation (GVS) is known to affect the speed and accuracy of visual judgments, the underlying electrophysiological response has not been explored. In the present study, we therefore investigated the effect of GVS on the N170 event-related potential, a marker commonly associated with early visual structural encoding. To elicit the waveform, participants distinguished famous from nonfamous faces that were presented in either upright or inverted orientation. Relative to a sham, stimulation increased the amplitude of the N170 and also elevated power spectra within the delta and theta frequency bands, components that have likewise been associated with face processing. This study constitutes the first attempt to model the effects of GVS on the electrophysiological response and, more specifically, indicates that unisensory visual processes linked to object construction are influenced by vestibular information. Given that reductions in the magnitude of both the N170 event-related potential and delta/theta activity accompany certain disease states, GVS may provide hitherto unreported therapeutic benefit.
|
24
|
Hidaka S, Teramoto W, Nagai M. Sound can enhance the suppression of visual target detection in apparent motion trajectory. Vision Res 2012; 59:25-33. [PMID: 22406661] [DOI: 10.1016/j.visres.2012.02.008]
Abstract
Detection performance is impaired for a visual target presented in an apparent motion (AM) trajectory, and this AM interference weakens when orientation information is inconsistent between the target and AM stimuli. These findings indicate that the target is perceptually suppressed by internal object representations of AM stimuli established along the AM trajectory. Here, we showed that transient sounds presented together with AM stimuli could enhance the magnitude of AM interference. Furthermore, this auditory effect attenuated when frequencies of the sounds were inconsistent during AM. We also confirmed that the sounds wholly elevated the magnitude of AM interference irrespective of the inconsistency in orientation information between the target and AM stimuli when the saliency of the sounds was maintained. These results suggest that sounds can contribute to the robust establishment and spatiotemporal maintenance of the internal object representation of an AM stimulus.
Affiliation(s)
- Souta Hidaka
- Department of Psychology, Rikkyo University, 1-2-26 Kitano, Niiza-shi, Saitama 352-8558, Japan.
|
25
|
Wacker E, Spitzer B, Lützkendorf R, Bernarding J, Blankenburg F. Tactile motion and pattern processing assessed with high-field FMRI. PLoS One 2011; 6:e24860. [PMID: 21949769] [PMCID: PMC3174219] [DOI: 10.1371/journal.pone.0024860]
Abstract
Processing of motion and pattern has been extensively studied in the visual domain, but much less in the somatosensory system. Here, we used ultra-high-field functional magnetic resonance imaging (fMRI) at 7 Tesla to investigate the neuronal correlates of tactile motion and pattern processing in humans under tightly controlled stimulation conditions. Different types of dynamic stimuli created the sensation of moving or stationary bar patterns during passive touch. Activity in somatosensory cortex was increased during both motion and pattern processing and modulated by motion directionality in primary and secondary somatosensory cortices (SI and SII) as well as by pattern orientation in the anterior intraparietal sulcus. Furthermore, tactile motion and pattern processing induced activity in the middle temporal cortex (hMT+/V5) and in the inferior parietal cortex (IPC), involving parts of the supramarginal and angular gyri. These responses covaried with subjects' individual perceptual performance, suggesting that hMT+/V5 and IPC contribute to conscious perception of specific tactile stimulus features. In addition, an analysis of effective connectivity using psychophysiological interactions (PPI) revealed increased functional coupling between SI and hMT+/V5 during motion processing, as well as between SI and IPC during pattern processing. This connectivity pattern provides evidence for the direct engagement of these specialized cortical areas in tactile processing during somesthesis.
Affiliation(s)
- Evelin Wacker
- Department of Neurology and Bernstein Center for Computational Neuroscience, Charité, Berlin, Germany.
|
26
|
Lacey S, Lin JB, Sathian K. Object and spatial imagery dimensions in visuo-haptic representations. Exp Brain Res 2011; 213:267-73. [PMID: 21424255] [DOI: 10.1007/s00221-011-2623-1]
Abstract
Visual imagery comprises object and spatial dimensions. Both types of imagery encode shape but a key difference is that object imagers are more likely to encode surface properties than spatial imagers. Since visual and haptic object representations share many characteristics, we investigated whether haptic and multisensory representations also share an object-spatial continuum. Experiment 1 involved two tasks in both visual and haptic within-modal conditions, one requiring discrimination of shape across changes in texture, the other discrimination of texture across changes in shape. In both modalities, spatial imagers could ignore changes in texture but not shape, whereas object imagers could ignore changes in shape but not texture. Experiment 2 re-analyzed a cross-modal version of the shape discrimination task from an earlier study. We found that spatial imagers could discriminate shape across changes in texture but object imagers could not and that the more one preferred object imagery, the more texture changes impaired discrimination. These findings are the first evidence that object and spatial dimensions of imagery can be observed in haptic and multisensory representations.
Affiliation(s)
- Simon Lacey
- Department of Neurology, Emory University School of Medicine, WMB-6000, 101 Woodruff Circle, Atlanta, GA 30322, USA.
|