1. Chow JK, Palmeri TJ, Gauthier I. Distinct but related abilities for visual and haptic object recognition. Psychon Bull Rev 2024. PMID: 38381302. DOI: 10.3758/s13423-024-02471-x.
Abstract
People vary in their ability to recognize objects visually. Individual differences in matching and recognizing objects visually are supported by a domain-general ability capturing common variance across different tasks (e.g., Richler et al., Psychological Review, 126, 226-251, 2019). Behavioral (e.g., Cooke et al., Neuropsychologia, 45, 484-495, 2007) and neural evidence (e.g., Amedi, Cerebral Cortex, 12, 1202-1212, 2002) suggest overlapping mechanisms in the processing of visual and haptic information in the service of object recognition, but it is unclear whether such group-average results generalize to individual differences. Psychometrically validated measures are required, which have been lacking in the haptic modality. We investigate whether object recognition ability is specific to vision or extends to haptics using psychometric measures we have developed. We use multiple visual and haptic tests with different objects and different formats to measure domain-general visual and haptic abilities and to test for relations across them. We measured object recognition abilities using two visual tests and four haptic tests (two each for two kinds of haptic exploration) in 97 participants. Partial correlation and confirmatory factor analyses converge to support the existence of a domain-general haptic object recognition ability that is moderately correlated with domain-general visual object recognition ability. Visual and haptic abilities share about 25% of their variance, supporting the existence of a multisensory domain-general ability while leaving a substantial amount of residual variance for modality-specific abilities. These results extend our understanding of the structure of object recognition abilities: while some mechanisms may generalize across categories, tasks, and modalities, other mechanisms remain distinct between modalities.
Affiliation(s)
- Jason K Chow
- Department of Psychology, Vanderbilt University, 111 21st Avenue South, Nashville, TN, 37240, USA.
- Thomas J Palmeri
- Department of Psychology, Vanderbilt University, 111 21st Avenue South, Nashville, TN, 37240, USA
- Isabel Gauthier
- Department of Psychology, Vanderbilt University, 111 21st Avenue South, Nashville, TN, 37240, USA
2. Landelle C, Caron-Guyon J, Nazarian B, Anton J, Sein J, Pruvost L, Amberg M, Giraud F, Félician O, Danna J, Kavounoudias A. Beyond sense-specific processing: decoding texture in the brain from touch and sonified movement. iScience 2023; 26:107965. PMID: 37810223. PMCID: PMC10551894. DOI: 10.1016/j.isci.2023.107965.
Abstract
Texture, a fundamental object attribute, is perceived through multisensory information, including touch and auditory cues. Coherent perceptions may rely on shared texture representations across different senses in the brain. To test this hypothesis, we delivered haptic textures coupled with a sound synthesizer that generated textural sounds in real time. Participants completed roughness estimation tasks with haptic, auditory, or bimodal cues in an MRI scanner. Somatosensory, auditory, and visual cortices were all activated during haptic and auditory exploration, challenging the traditional view that primary sensory cortices are sense-specific. Furthermore, audio-tactile integration was found in the secondary somatosensory (S2) and primary auditory cortices. Multivariate analyses revealed shared spatial activity patterns in primary motor and somatosensory cortices for discriminating texture across both modalities. This study indicates that primary areas and S2 have a versatile representation of multisensory textures, which has significant implications for how the brain processes multisensory cues to interact more efficiently with our environment.
Affiliation(s)
- C. Landelle
- McGill University, McConnell Brain Imaging Centre, Department of Neurology and Neurosurgery, Montreal Neurological Institute, Montreal, QC, Canada
- Aix-Marseille Université, CNRS, Laboratoire de Neurosciences Cognitives, LNC UMR 7291, Marseille, France
- J. Caron-Guyon
- Aix-Marseille Université, CNRS, Laboratoire de Neurosciences Cognitives, LNC UMR 7291, Marseille, France
- University of Louvain, Institute for Research in Psychology (IPSY) & Institute of Neuroscience (IoNS), Louvain Bionics Center, Crossmodal Perception and Plasticity Laboratory, Louvain-la-Neuve, Belgium
- B. Nazarian
- Aix-Marseille Université, CNRS, Centre IRM-INT@CERIMED, Institut de Neurosciences de la Timone, INT UMR 7289, Marseille, France
- J.L. Anton
- Aix-Marseille Université, CNRS, Centre IRM-INT@CERIMED, Institut de Neurosciences de la Timone, INT UMR 7289, Marseille, France
- J. Sein
- Aix-Marseille Université, CNRS, Centre IRM-INT@CERIMED, Institut de Neurosciences de la Timone, INT UMR 7289, Marseille, France
- L. Pruvost
- Aix-Marseille Université, CNRS, Perception, Représentations, Image, Son, Musique, PRISM UMR 7061, Marseille, France
- M. Amberg
- Université Lille, Laboratoire d'Electrotechnique et d'Electronique de Puissance, EA 2697-L2EP, Lille, France
- F. Giraud
- Université Lille, Laboratoire d'Electrotechnique et d'Electronique de Puissance, EA 2697-L2EP, Lille, France
- O. Félician
- Aix-Marseille Université, INSERM, Institut des Neurosciences des Systèmes, INS UMR 1106, Marseille, France
- J. Danna
- Aix-Marseille Université, CNRS, Laboratoire de Neurosciences Cognitives, LNC UMR 7291, Marseille, France
- Université de Toulouse, CNRS, Laboratoire Cognition, Langues, Langage, Ergonomie, CLLE UMR 5263, Toulouse, France
- A. Kavounoudias
- Aix-Marseille Université, CNRS, Laboratoire de Neurosciences Cognitives, LNC UMR 7291, Marseille, France
3. Chow JK, Palmeri TJ, Pluck G, Gauthier I. Evidence for an amodal domain-general object recognition ability. Cognition 2023; 238:105542. PMID: 37419065. DOI: 10.1016/j.cognition.2023.105542.
Abstract
A general object recognition ability predicts performance across a variety of high-level visual tests and categories, as well as performance in haptic recognition. Does this ability extend to auditory recognition? Vision and haptics tap into similar representations of shape and texture. In contrast, features of auditory perception like pitch, timbre, or loudness do not readily translate into shape percepts related to edges, surfaces, or spatial arrangement of parts. We find that an auditory object recognition ability correlates highly with a visual object recognition ability after controlling for general intelligence, perceptual speed, low-level visual ability, and memory ability. Auditory object recognition was a stronger predictor of visual object recognition than all control measures across two experiments, even though those control variables were also tested visually. These results point towards a single high-level ability used in both vision and audition. Much work highlights how the integration of visual and auditory information is important in specific domains (e.g., speech, music), with evidence for some overlap of visual and auditory neural representations. Our results are the first to reveal a domain-general ability, o, that predicts object recognition performance in both visual and auditory tests. Because o is domain-general, it reveals mechanisms that apply across a wide range of situations, independent of experience and knowledge. As o is distinct from general intelligence, it is well positioned to potentially add predictive validity when explaining individual differences in a variety of tasks, above and beyond measures of common cognitive abilities like general intelligence and working memory.
Affiliation(s)
- Jason K Chow
- Department of Psychology, Vanderbilt University, USA
- Graham Pluck
- Faculty of Psychology, Chulalongkorn University, Thailand
4. Scheliga S, Kellermann T, Lampert A, Rolke R, Spehr M, Habel U. Neural correlates of multisensory integration in the human brain: an ALE meta-analysis. Rev Neurosci 2023; 34:223-245. PMID: 36084305. DOI: 10.1515/revneuro-2022-0065.
Abstract
Previous fMRI research identified the superior temporal sulcus as a central integration area for audiovisual stimuli. However, less is known about a general multisensory integration network across the senses. We therefore conducted an activation likelihood estimation (ALE) meta-analysis across multiple sensory modalities to identify a common brain network. We included 49 studies covering all Aristotelian senses, i.e., auditory, visual, tactile, gustatory, and olfactory stimuli. The analysis revealed significant activation in the bilateral superior temporal gyrus, middle temporal gyrus, thalamus, right insula, and left inferior frontal gyrus. We assume these regions to be part of a general multisensory integration network comprising different functional roles. Here, the thalamus operates as a first subcortical relay projecting sensory information to higher cortical integration centers in the superior temporal gyrus/sulcus, while conflict-processing brain regions such as the insula and inferior frontal gyrus facilitate the integration of incongruent information. We additionally performed meta-analytic connectivity modelling and found that each brain region showed co-activations within the identified multisensory integration network. By including multiple sensory modalities in our meta-analysis, the results may therefore provide evidence for a common brain network that supports different functional roles for multisensory integration.
Affiliation(s)
- Sebastian Scheliga
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Thilo Kellermann
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany
- Angelika Lampert
- Institute of Physiology, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Roman Rolke
- Department of Palliative Medicine, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Marc Spehr
- Department of Chemosensation, RWTH Aachen University, Institute for Biology, Worringerweg 3, 52074 Aachen, Germany
- Ute Habel
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany
5. Bailey KM, Giordano BL, Kaas AL, Smith FW. Decoding sounds depicting hand-object interactions in primary somatosensory cortex. Cereb Cortex 2022; 33:3621-3635. PMID: 36045002. DOI: 10.1093/cercor/bhac296.
Abstract
Neurons, even in the earliest sensory regions of cortex, are subject to a great deal of contextual influence from both within- and across-modality connections. Recent work has shown that primary sensory areas can respond to and, in some cases, discriminate stimuli that are not of their target modality: for example, primary somatosensory cortex (SI) discriminates visual images of graspable objects. In the present work, we investigated whether SI would discriminate sounds depicting hand-object interactions (e.g. bouncing a ball). In a rapid event-related functional magnetic resonance imaging experiment, participants listened attentively to sounds from 3 categories: hand-object interactions, and control categories of pure tones and animal vocalizations, while performing a one-back repetition detection task. Multivoxel pattern analysis revealed significant decoding of hand-object interaction sounds within SI, but not for either control category. Crucially, in the hand-sensitive voxels defined from an independent tactile localizer, decoding accuracies were significantly higher for hand-object interactions compared to pure tones in left SI. Our findings indicate that simply hearing sounds depicting familiar hand-object interactions elicits different patterns of activity in SI, despite the complete absence of tactile stimulation. These results highlight the rich contextual information that can be transmitted across sensory modalities even to primary sensory areas.
Affiliation(s)
- Kerri M Bailey
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
- Bruno L Giordano
- Institut de Neurosciences de la Timone, CNRS UMR 7289, Université Aix-Marseille, Marseille, France
- Amanda L Kaas
- Department of Cognitive Neuroscience, Maastricht University, Maastricht 6229 EV, The Netherlands
- Fraser W Smith
- School of Psychology, University of East Anglia, Norwich NR4 7TJ, United Kingdom
6. Wang L, Ma L, Yang J, Wu J. Human Somatosensory Processing and Artificial Somatosensation. Cyborg Bionic Syst 2021; 2021:9843259. PMID: 36285142. PMCID: PMC9494715. DOI: 10.34133/2021/9843259.
Abstract
In the past few years, we have gained a better understanding of the information processing mechanisms in the human brain, which has led to advances in artificial intelligence and humanoid robots. However, among the various sensory systems, studying the somatosensory system presents the greatest challenge. Here, we provide a comprehensive review of the human somatosensory system and its corresponding applications in artificial systems. Because the human hand is unique in integrating receptor and actuator functions, we focus on the role of the somatosensory system in object recognition and action guidance. First, we summarize the low-threshold mechanoreceptors in the human skin and the somatotopic organization principles along the ascending pathway, which are fundamental to artificial skin. Second, we discuss high-level brain areas that interact with each other during haptic object recognition. Based on this closed-loop route, we use prosthetic upper limbs as an example to highlight the importance of somatosensory information. Finally, we present prospective research directions for human haptic perception, which could guide the development of artificial somatosensory systems.
Affiliation(s)
- Luyao Wang
- Beijing Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing, China
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China
- Lihua Ma
- Beijing Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing, China
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China
- Jiajia Yang
- Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Jinglong Wu
- Beijing Advanced Innovation Center for Intelligent Robots and Systems, Beijing Institute of Technology, Beijing, China
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China
- Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
7. Graven T, Desebrock C. Touching and hearing the shapes: How auditory angular and curved sounds influence proficiency in recognising tactile angle and curve shapes when experienced and inexperienced in using haptic touch. Br J Vis Impair 2021. DOI: 10.1177/02646196211003114.
Abstract
This study investigated whether adding auditory angular and curved sounds to tactile angle and curve shapes (one unspecified sound to one unspecified shape) positively influences accuracy and exploration time in recognising tactile angles and curves, in people experienced and inexperienced in using haptic touch. A within-participants experiment was conducted with two groups (experienced and inexperienced in using haptic touch) and two conditions: congruous (e.g., angle shape and angular sound) and incongruous (e.g., angle shape and curved sound) tactile and auditory shape information. Adding congruous auditory angular and curved sounds to tactile angle and curve shapes improved recognition accuracy for both the experienced and the inexperienced group, and shortened exploration time on correct recognitions for the experienced group. People integrate tactile and auditory (angle; curve) shape information, and this integration improves their proficiency in recognising tactile angles and curves.
8. García AM, Hesse E, Birba A, Adolfi F, Mikulan E, Caro MM, Petroni A, Bekinschtein TA, del Carmen García M, Silva W, Ciraolo C, Vaucheret E, Sedeño L, Ibáñez A. Time to Face Language: Embodied Mechanisms Underpin the Inception of Face-Related Meanings in the Human Brain. Cereb Cortex 2020; 30:6051-6068. PMID: 32577713. PMCID: PMC7673477. DOI: 10.1093/cercor/bhaa178.
Abstract
In construing meaning, the brain recruits multimodal (conceptual) systems and embodied (modality-specific) mechanisms. Yet, no consensus exists on how crucial the latter are for the inception of semantic distinctions. To address this issue, we combined scalp electroencephalography (EEG) and intracranial EEG (iEEG) recordings to examine when nouns denoting facial body parts (FBPs) and non-FBPs are discriminated in face-processing and multimodal networks. First, FBP words increased N170 amplitude (a hallmark of early facial processing). Second, they triggered fast (~100 ms) activity boosts within the face-processing network, alongside later (~275 ms) effects in multimodal circuits. Third, iEEG recordings from face-processing hubs allowed decoding ~80% of items before 200 ms, while classification based on multimodal-network activity only surpassed ~70% after 250 ms. Finally, EEG and iEEG connectivity between both networks proved greater in early (0-200 ms) than later (200-400 ms) windows. Collectively, our findings indicate that, at least for some lexico-semantic categories, meaning is construed through fast reenactments of modality-specific experience.
Affiliation(s)
- Adolfo M García
- Universidad de San Andrés, B1644BID Buenos Aires, Argentina
- National Scientific and Technical Research Council (CONICET), C1425FQB Buenos Aires, Argentina
- Faculty of Education, National University of Cuyo (UNCuyo), M5502GKA Mendoza, Argentina
- Departamento de Lingüística y Literatura, Facultad de Humanidades, Universidad de Santiago de Chile, 9170020 Santiago, Chile
- Global Brain Health Institute, University of California, San Francisco, CA 94158, USA
- Eugenia Hesse
- Universidad de San Andrés, B1644BID Buenos Aires, Argentina
- National Scientific and Technical Research Council (CONICET), C1425FQB Buenos Aires, Argentina
- Agustina Birba
- Universidad de San Andrés, B1644BID Buenos Aires, Argentina
- National Scientific and Technical Research Council (CONICET), C1425FQB Buenos Aires, Argentina
- Federico Adolfi
- National Scientific and Technical Research Council (CONICET), C1425FQB Buenos Aires, Argentina
- Ezequiel Mikulan
- Department of Biomedical and Clinical Sciences “L. Sacco”, University of Milan, 20122 Milan, Italy
- Miguel Martorell Caro
- National Scientific and Technical Research Council (CONICET), C1425FQB Buenos Aires, Argentina
- Agustín Petroni
- Instituto de Ingeniería Biomédica, Facultad de Ingeniería, Universidad de Buenos Aires, C1063ACV Buenos Aires, Argentina
- Laboratorio de Inteligencia Artificial Aplicada, Departamento de Computación, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, ICC-CONICET, C1063ACV Buenos Aires, Argentina
- María del Carmen García
- Programa de Cirugía de Epilepsia, Hospital Italiano de Buenos Aires, C1181ACH Buenos Aires, Argentina
- Walter Silva
- Programa de Cirugía de Epilepsia, Hospital Italiano de Buenos Aires, C1181ACH Buenos Aires, Argentina
- Carlos Ciraolo
- Programa de Cirugía de Epilepsia, Hospital Italiano de Buenos Aires, C1181ACH Buenos Aires, Argentina
- Esteban Vaucheret
- Programa de Cirugía de Epilepsia, Hospital Italiano de Buenos Aires, C1181ACH Buenos Aires, Argentina
- Lucas Sedeño
- National Scientific and Technical Research Council (CONICET), C1425FQB Buenos Aires, Argentina
- Agustín Ibáñez
- Universidad de San Andrés, B1644BID Buenos Aires, Argentina
- National Scientific and Technical Research Council (CONICET), C1425FQB Buenos Aires, Argentina
- Global Brain Health Institute, University of California, San Francisco, CA 94158, USA
- Center for Social and Cognitive Neuroscience (CSCN), School of Psychology, Universidad Adolfo Ibáñez, 8320000 Santiago, Chile
- Universidad Autónoma del Caribe, 080003 Barranquilla, Colombia
9. LaMarca V, LaMarca J. Designing Receptive Language Programs: Pushing the Boundaries of Research and Practice. Behav Anal Pract 2018; 11:479-495. PMID: 30538924. DOI: 10.1007/s40617-018-0208-1.
Abstract
Initial difficulty with receptive language is a stumbling block for some children with autism. Numerous strategies have been attempted over the years, and general guidelines for teaching receptive language have been published. But what to do when all else fails? This article reviews 21 strategies that have been effective for some children with autism. Although many of the strategies require further research, behavioral practitioners should consider implementation after careful review. The purpose of this article is to help behavior analysts in practice to categorize different teaching procedures for systematic review, recognize the conceptually systematic rationale behind each strategy, identify different client profiles that may make one strategy more effective than another, and create modifications to receptive language programming that remain grounded in research.
Affiliation(s)
- Vincent LaMarca
- LittleStar ABA Therapy, 12650 Hamilton Crossing Boulevard, Carmel, IN 46032 USA
- Jennifer LaMarca
- Applied Behavior Center for Autism, 7901 E. 88th St., Indianapolis, IN 46256 USA
10. Multimodal Interaction of Contextual and Non-Contextual Sound and Haptics in Virtual Simulations. Informatics (Basel) 2018. DOI: 10.3390/informatics5040043.
Abstract
Touch plays a fundamental role in our daily interactions, allowing us to interact with and perceive objects and their spatial properties. Despite its importance in the real world, touch is often ignored in virtual environments. However, accurately simulating the sense of touch is difficult, requiring the use of high-fidelity haptic devices that are cost-prohibitive. Lower-fidelity consumer-level haptic devices are becoming more widespread, yet they are generally limited in perceived fidelity and in the range of motion (degrees of freedom) required to realistically simulate many tasks. Studies into sound and vision suggest that the presence or absence of sound can influence task performance. Here, we explore whether the presence or absence of contextually relevant sound cues influences the performance of a simple haptic drilling task. Although the results of this study do not show any statistically significant difference in task performance with general (task-irrelevant) sound, we discuss how this is a necessary step in understanding the role of sound in haptic perception.
11. Gurtubay-Antolin A, León-Cabrera P, Rodríguez-Fornells A. Neural Evidence of Hierarchical Cognitive Control during Haptic Processing: An fMRI Study. eNeuro 2018; 5:ENEURO.0295-18.2018. PMID: 30627631. PMCID: PMC6325533. DOI: 10.1523/eneuro.0295-18.2018.
Abstract
Interacting with our immediate surroundings requires constant manipulation of objects. Dexterous manipulation depends on comparison between actual and predicted sensory input, with these predictions calculated by means of lower- and higher-order corollary discharge signals. However, there is still scarce knowledge about the hierarchy in the neural architecture supporting haptic monitoring during manipulation. The present study aimed to assess this issue focusing on the cross talk between lower-order sensory and higher-order associative regions. We used functional magnetic resonance imaging in humans during a haptic discrimination task in which participants had to judge whether a touched shape or texture corresponded to an expected stimulus whose name was previously presented. Specialized haptic regions identified with an independent localizer task did not differ between expected and unexpected conditions, suggesting their lack of involvement in tactile monitoring. When presented stimuli did not match previous expectations, the left supramarginal gyrus (SMG), middle temporal, and medial prefrontal cortices were activated regardless of the nature of the haptic mismatch (shape/texture). The left primary somatosensory area (SI) responded differently to unexpected shapes and textures in line with a specialized detection of haptic mismatch. Importantly, connectivity analyses revealed that the left SMG and SI were more functionally coupled during unexpected trials, emphasizing their interaction. The results point for the first time to a hierarchical organization in the neural substrates underlying haptic monitoring during manipulation with the SMG as a higher-order hub comparing actual and predicted somatosensory input, and SI as a lower-order site involved in the detection of more specialized haptic mismatch.
Affiliation(s)
- Ane Gurtubay-Antolin
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), Barcelona 08097, Spain
- Department of Cognition, Development and Education Psychology, Campus Bellvitge, University of Barcelona, Barcelona 08907, Spain
- Institute of Research in Psychology (IPSY) and in Neuroscience (IoNS), Université catholique de Louvain, 1348 Louvain-la-Neuve, Belgium
- Patricia León-Cabrera
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), Barcelona 08097, Spain
- Department of Cognition, Development and Education Psychology, Campus Bellvitge, University of Barcelona, Barcelona 08907, Spain
- Antoni Rodríguez-Fornells
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), Barcelona 08097, Spain
- Department of Cognition, Development and Education Psychology, Campus Bellvitge, University of Barcelona, Barcelona 08907, Spain
- Catalan Institution for Research and Advanced Studies (ICREA), Barcelona 08010, Spain
12. Gui P, Li J, Ku Y, Li L, Li X, Zhou X, Bodner M, Lenz FA, Dong XW, Wang L, Zhou YD. Neural Correlates of Feedback Processing in Visuo-Tactile Crossmodal Paired-Associate Learning. Front Hum Neurosci 2018; 12:266. PMID: 30018542. PMCID: PMC6037861. DOI: 10.3389/fnhum.2018.00266.
Abstract
Previous studies have examined the neural correlates of crossmodal paired-associate (PA) memory and the temporal dynamics of its formation. However, the neural dynamics of feedback processing in crossmodal PA learning remain unclear. To examine this process, we recorded event-related scalp electrical potentials for PA learning of unimodal visual-visual pairs and crossmodal visual-tactile pairs while participants performed unimodal and crossmodal tasks. We examined event-related potentials (ERPs) after the onset of feedback in the tasks for three effects: feedback type (positive feedback vs. negative feedback), learning (as the learning progressed), and task modality (crossmodal vs. unimodal). The results were as follows: (1) feedback type: the amplitude of P300 decreased with incorrect trials, and the P400/N400 complex was only present in incorrect trials; (2) learning: progressive positive voltage shifts in frontal recording sites and negative voltage shifts in central and posterior recording sites were identified as learning proceeded; and (3) task modality: compared with the unimodal PA learning task, positive voltage shifts in frontal sites and negative voltage shifts in posterior sites were found in the crossmodal PA learning task. To sum up, these results shed light on cortical excitability related to feedback processing in crossmodal PA learning.
Affiliation(s)
- Peng Gui
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Shanghai Changning-ECNU Mental Health Center, Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Jun Li
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Shanghai Changning-ECNU Mental Health Center, Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Yixuan Ku
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Shanghai Changning-ECNU Mental Health Center, Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- NYU-ECNU Institute of Brain and Cognitive Science, NYU Shanghai, Shanghai, China
- Lei Li
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Shanghai Changning-ECNU Mental Health Center, Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Xiaojin Li
- Department of Electronic Engineering, East China Normal University, Shanghai, China
- Xianzhen Zhou
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Shanghai Changning-ECNU Mental Health Center, Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Mark Bodner
- MIND Research Institute, Irvine, CA, United States
- Fred A Lenz
- Department of Neurosurgery, School of Medicine, Johns Hopkins University, Baltimore, MD, United States
- Xiao-Wei Dong
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Shanghai Changning-ECNU Mental Health Center, Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- NYU-ECNU Institute of Brain and Cognitive Science, NYU Shanghai, Shanghai, China
- Liping Wang
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Shanghai Changning-ECNU Mental Health Center, Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- NYU-ECNU Institute of Brain and Cognitive Science, NYU Shanghai, Shanghai, China
- Yong-Di Zhou
- NYU-ECNU Institute of Brain and Cognitive Science, NYU Shanghai, Shanghai, China
- Department of Neurosurgery, School of Medicine, Johns Hopkins University, Baltimore, MD, United States
- Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD, United States
| |
Collapse
13
Rizza A, Terekhov AV, Montone G, Olivetti-Belardinelli M, O'Regan JK. Why Early Tactile Speech Aids May Have Failed: No Perceptual Integration of Tactile and Auditory Signals. Front Psychol 2018;9:767. [PMID: 29875719] [PMCID: PMC5974558] [DOI: 10.3389/fpsyg.2018.00767]
Abstract
Tactile speech aids, though extensively studied in the 1980s and 1990s, never became a commercial success. One hypothesis to explain this failure is that it is difficult to obtain true perceptual integration of a tactile signal with information from auditory speech: exploiting tactile cues from a tactile aid may require cognitive effort and so prevent speech understanding at the high rates typical of everyday speech. To test this hypothesis, we attempted to create true perceptual integration of tactile with auditory information in what might be considered the simplest situation encountered by a hearing-impaired listener. We created an auditory continuum between the syllables /BA/ and /VA/, and trained participants to associate /BA/ with one tactile stimulus and /VA/ with another. After training, we tested whether auditory discrimination along the continuum between the two syllables could be biased by incongruent tactile stimulation. We found that such a bias occurred only when the tactile stimulus was above, but not below, its previously measured tactile discrimination threshold. This pattern is compatible with the idea that the effect is due to a cognitive or decisional strategy rather than to truly perceptual integration. We therefore ran a further study (Experiment 2), in which we created a tactile version of the McGurk effect. We extensively trained two subjects over 6 days to associate four recorded auditory syllables with four corresponding apparent-motion tactile patterns. In a subsequent test, we presented stimulation that was either congruent or incongruent with the learnt association and asked subjects to report the syllable they perceived. We found no analog of the McGurk effect, suggesting that the tactile stimulation was not being perceptually integrated with the auditory syllable. These findings strengthen our hypothesis that tactile aids failed because integration of tactile cues with auditory speech occurred at a cognitive or decisional level rather than at a truly perceptual level.
Affiliation(s)
- Aurora Rizza
- Department of Psychology, Faculty of Medicine and Psychology, Sapienza University of Rome, Rome, Italy
- Alexander V Terekhov
- Laboratoire Psychologie de la Perception, Université Paris Descartes, Paris, France
- Guglielmo Montone
- Laboratoire Psychologie de la Perception, Université Paris Descartes, Paris, France
- Marta Olivetti-Belardinelli
- Department of Psychology, Faculty of Medicine and Psychology, Sapienza University of Rome, Rome, Italy; ECONA Interuniversity Centre for Research on Cognitive Processing in Natural and Artificial Systems, Rome, Italy
- J Kevin O'Regan
- Laboratoire Psychologie de la Perception, Université Paris Descartes, Paris, France
14
Gui P, Ku Y, Li L, Li X, Bodner M, Lenz FA, Wang L, Zhou YD. Neural correlates of visuo-tactile crossmodal paired-associate learning and memory in humans. Neuroscience 2017;362:181-195. [PMID: 28843996] [DOI: 10.1016/j.neuroscience.2017.08.035]
Abstract
Studies have indicated that a cortical sensory system is capable of processing information from different sensory modalities. However, it remains unclear when and how a cortical system integrates and retains information across sensory modalities during learning. Here we investigated the neural dynamics underlying crossmodal association and memory by recording event-related potentials (ERPs) while human participants performed visuo-tactile (crossmodal) and visuo-visual (unimodal) paired-associate (PA) learning tasks. On each trial, participants were required to explore and learn the relationship (paired or non-paired) between two successive stimuli. EEG recordings revealed dynamic ERP changes as participants learned the paired associations. Specifically, (1) the frontal N400 component showed learning-related changes in both the unimodal and crossmodal tasks but no significant difference between the two tasks, whereas the central P400 displayed both learning-related changes and task differences; (2) a late posterior negative slow wave (LPN) showed a learning effect only in the crossmodal task; and (3) alpha-band oscillations appeared to be involved in crossmodal working memory. Additional behavioral experiments suggested that these ERP components were not related to participants' familiarity with the stimuli per se. Further, shortening the delay between the first and second stimulus in the crossmodal task (from 1300 ms to 400 ms or 200 ms) produced corresponding declines in task performance. Taken together, these results provide insight into the cortical plasticity (induced by PA learning) of neural networks involved in crossmodal association in working memory.
Affiliation(s)
- Peng Gui
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Yixuan Ku
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China; NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai and Collaborative Innovation Center for Brain Science, Shanghai 200062, China
- Lei Li
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Xiaojin Li
- Department of Electronic Engineering, East China Normal University, Shanghai 200062, China
- Mark Bodner
- MIND Research Institute, Irvine, CA 92617, USA
- Fred A Lenz
- Department of Neurosurgery, School of Medicine, Johns Hopkins University, Baltimore, MD 21287, USA
- Liping Wang
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Institute of Cognitive Neuroscience, School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China; NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai and Collaborative Innovation Center for Brain Science, Shanghai 200062, China
- Yong-Di Zhou
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai and Collaborative Innovation Center for Brain Science, Shanghai 200062, China; Department of Neurosurgery, School of Medicine, Johns Hopkins University, Baltimore, MD 21287, USA; Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD 21218, USA
15
A dynamical framework to relate perceptual variability with multisensory information processing. Sci Rep 2016;6:31280. [PMID: 27502974] [PMCID: PMC4977493] [DOI: 10.1038/srep31280]
Abstract
Multisensory processing involves the participation of individual sensory streams, e.g., vision and audition, to facilitate perception of environmental stimuli. An experimental realization of the underlying complexity is captured by the “McGurk effect”: incongruent auditory and visual vocalization stimuli eliciting the perception of illusory speech sounds. Further studies have established that the time delay between the onset of auditory and visual signals (AV lag) and perturbations in the unisensory streams are key variables that modulate perception. However, only a few quantitative theoretical frameworks have been proposed to understand the interplay among these psychophysical variables or the neural systems-level interactions that govern perceptual variability. Here, we propose a dynamic systems model consisting of the basic ingredients of any multisensory processing: two unisensory and one multisensory sub-system (nodes), as reported by several researchers. The nodes are connected such that biophysically inspired coupling parameters and time delays become key parameters of this network. We observed that zero AV lag results in maximum synchronization of the constituent nodes, and that the degree of synchronization decreases for non-zero lags. The attractor states of this network can thus be interpreted as facilitators for stabilizing specific perceptual experiences. The dynamic model thereby presents a quantitative framework for understanding multisensory information processing.
16
Sathian K. Analysis of haptic information in the cerebral cortex. J Neurophysiol 2016;116:1795-1806. [PMID: 27440247] [DOI: 10.1152/jn.00546.2015]
Abstract
Haptic sensing of objects acquires information about a number of properties. This review summarizes current understanding about how these properties are processed in the cerebral cortex of macaques and humans. Nonnoxious somatosensory inputs, after initial processing in primary somatosensory cortex, are partially segregated into different pathways. A ventrally directed pathway carries information about surface texture into parietal opercular cortex and thence to medial occipital cortex. A dorsally directed pathway transmits information regarding the location of features on objects to the intraparietal sulcus and frontal eye fields. Shape processing occurs mainly in the intraparietal sulcus and lateral occipital complex, while orientation processing is distributed across primary somatosensory cortex, the parietal operculum, the anterior intraparietal sulcus, and a parieto-occipital region. For each of these properties, the respective areas outside primary somatosensory cortex also process corresponding visual information and are thus multisensory. Consistent with the distributed neural processing of haptic object properties, tactile spatial acuity depends on interaction between bottom-up tactile inputs and top-down attentional signals in a distributed neural network. Future work should clarify the roles of the various brain regions and how they interact at the network level.
Affiliation(s)
- K Sathian
- Departments of Neurology, Rehabilitation Medicine and Psychology, Emory University, Atlanta, Georgia; Center for Visual and Neurocognitive Rehabilitation, Atlanta Department of Veterans Affairs Medical Center, Decatur, Georgia
17
Haptic, Virtual Interaction and Motor Imagery: Entertainment Tools and Psychophysiological Testing. Sensors 2016;16:394. [PMID: 26999151] [PMCID: PMC4813969] [DOI: 10.3390/s16030394]
Abstract
In this work, the perception of affordances was analysed in cognitive-neuroscience terms during an interactive experience in a virtual reality environment. In particular, we chose a virtual reality scenario based on the Leap Motion controller: this sensor device captures the movements of the user’s hand and fingers, which are reproduced on a computer screen by the appropriate software applications. For our experiment, we employed a sample of 10 university students matched for age and sex. The subjects took part in motor imagery training and an immersive affordance condition (virtual training with the Leap Motion and haptic training with real objects). After each training session, the subjects performed a recognition task in order to investigate event-related potential (ERP) components. The results revealed significant differences in the attentional components during the Leap Motion training: latencies increased in the occipital lobes, which subserve visual processing, whereas latencies decreased in the frontal lobe, which is mainly engaged in attention and action planning.
18
Pishnamazi M, Nojaba Y, Ganjgahi H, Amousoltani A, Oghabian MA. Neural correlates of audiotactile phonetic processing in early-blind readers: an fMRI study. Exp Brain Res 2015;234:1263-77. [DOI: 10.1007/s00221-015-4515-2]
19
Leonardelli E, Braun C, Weisz N, Lithari C, Occelli V, Zampini M. Prestimulus oscillatory alpha power and connectivity patterns predispose perceptual integration of an audio and a tactile stimulus. Hum Brain Mapp 2015;36:3486-98. [PMID: 26109518] [DOI: 10.1002/hbm.22857]
Abstract
To efficiently perceive and respond to the external environment, our brain has to perceptually integrate or segregate stimuli of different modalities. The temporal relationship between the different sensory modalities is therefore essential for the formation of different multisensory percepts. In this magnetoencephalography study, we created a paradigm in which an audio and a tactile stimulus were presented with an ambiguous temporal relationship, so that the perception of physically identical audiotactile stimuli could vary between integrated (emanating from the same source) and segregated. This bistable paradigm allowed us to compare identical bimodal stimuli that elicited different percepts, providing a possibility to directly infer multisensory interaction effects. Local differences in alpha power over the bilateral inferior parietal lobules (IPLs) and superior parietal lobules (SPLs) preceded integrated versus segregated percepts of the two stimuli (audio and tactile). Furthermore, differences in long-range cortical functional connectivity seeded in the right IPL (the region of maximum difference) revealed differential patterns, encompassing secondary areas of all modalities and prefrontal cortex, that predisposed integrated or segregated percepts. We showed that prestimulus brain states predispose the perception of the audiotactile stimulus in both a global and a local manner. Our findings are in line with a recent consistent body of findings on the importance of prestimulus brain states for the perception of an upcoming stimulus. This new perspective on how stimuli originating from different modalities are integrated suggests a non-modality-specific network predisposing multisensory perception.
Affiliation(s)
- Christoph Braun
- Center for Mind/Brain Sciences, University of Trento, Trento, Italy; MEG Center, University of Tübingen, Tübingen, Germany; Werner Reichardt Centre for Integrative Neuroscience (CIN), University of Tübingen, Tübingen, Germany
- Nathan Weisz
- Center for Mind/Brain Sciences, University of Trento, Trento, Italy
- Chrysa Lithari
- Center for Mind/Brain Sciences, University of Trento, Trento, Italy
20
Man K, Damasio A, Meyer K, Kaplan JT. Convergent and invariant object representations for sight, sound, and touch. Hum Brain Mapp 2015;36:3629-40. [PMID: 26047030] [DOI: 10.1002/hbm.22867]
Abstract
We continuously perceive objects in the world through multiple sensory channels. In this study, we investigated the convergence of information from different sensory streams within the cerebral cortex. We presented volunteers with three common objects via three different modalities (sight, sound, and touch) and used multivariate pattern analysis of functional magnetic resonance imaging data to map the cortical regions containing information about the identity of the objects. We could reliably predict which of the three stimuli a subject had seen, heard, or touched from the pattern of neural activity in the corresponding early sensory cortices. Intramodal classification was also successful in large portions of the cerebral cortex beyond the primary areas, with multiple regions showing convergence of information from two or all three modalities. Using crossmodal classification, we also searched for brain regions that would represent objects in a similar fashion across different modalities of presentation. We trained a classifier to distinguish objects presented in one modality and then tested it on the same objects presented in a different modality. We detected audiovisual invariance in the right temporo-occipital junction, audiotactile invariance in the left postcentral gyrus and parietal operculum, and visuotactile invariance in the right postcentral and supramarginal gyri. Our maps of multisensory convergence and crossmodal generalization reveal the underlying organization of the association cortices, and may be related to the neural basis for mental concepts.
Affiliation(s)
- Kingson Man
- Brain and Creativity Institute, University of Southern California, Los Angeles, California 90089
- Antonio Damasio
- Brain and Creativity Institute, University of Southern California, Los Angeles, California 90089
- Kaspar Meyer
- Brain and Creativity Institute, University of Southern California, Los Angeles, California 90089; Institute of Anesthesiology, University Hospital, University of Zurich, Zurich, Switzerland
- Jonas T Kaplan
- Brain and Creativity Institute, University of Southern California, Los Angeles, California 90089
21
Kaplan JT, Man K, Greening SG. Multivariate cross-classification: applying machine learning techniques to characterize abstraction in neural representations. Front Hum Neurosci 2015;9:151. [PMID: 25859202] [PMCID: PMC4373279] [DOI: 10.3389/fnhum.2015.00151]
Abstract
Here we highlight an emerging trend in the use of machine learning classifiers to test for abstraction across patterns of neural activity. When a classifier algorithm is trained on data from one cognitive context, and tested on data from another, conclusions can be drawn about the role of a given brain region in representing information that abstracts across those cognitive contexts. We call this kind of analysis Multivariate Cross-Classification (MVCC), and review several domains where it has recently made an impact. MVCC has been important in establishing correspondences among neural patterns across cognitive domains, including motor-perception matching and cross-sensory matching. It has been used to test for similarity between neural patterns evoked by perception and those generated from memory. Other work has used MVCC to investigate the similarity of representations for semantic categories across different kinds of stimulus presentation, and in the presence of different cognitive demands. We use these examples to demonstrate the power of MVCC as a tool for investigating neural abstraction and discuss some important methodological issues related to its application.
Affiliation(s)
- Jonas T Kaplan
- Brain and Creativity Institute, University of Southern California, Los Angeles, CA, USA; Department of Psychology, University of Southern California, Los Angeles, CA, USA
- Kingson Man
- Brain and Creativity Institute, University of Southern California, Los Angeles, CA, USA
- Steven G Greening
- Department of Psychology, University of Southern California, Los Angeles, CA, USA; Department of Gerontology, University of Southern California, Los Angeles, CA, USA
22
Man K, Kaplan J, Damasio H, Damasio A. Neural convergence and divergence in the mammalian cerebral cortex: from experimental neuroanatomy to functional neuroimaging. J Comp Neurol 2014;521:4097-111. [PMID: 23840023] [DOI: 10.1002/cne.23408]
Abstract
A development essential for understanding the neural basis of complex behavior and cognition is the description, during the last quarter of the twentieth century, of detailed patterns of neuronal circuitry in the mammalian cerebral cortex. This effort established that sensory pathways exhibit successive levels of convergence, from the early sensory cortices to sensory-specific and multisensory association cortices, culminating in maximally integrative regions. It was also established that this convergence is reciprocated by successive levels of divergence, from the maximally integrative areas all the way back to the early sensory cortices. This article first provides a brief historical review of these neuroanatomical findings, which were relevant to the study of brain and mind-behavior relationships and to the proposal of heuristic anatomofunctional frameworks. In a second part, the article reviews new evidence that has accumulated from studies of functional neuroimaging, employing both univariate and multivariate analyses, as well as electrophysiology, in humans and other mammals, that the integration of information across the auditory, visual, and somatosensory-motor modalities proceeds in a content-rich manner. Behaviorally and cognitively relevant information is extracted from and conserved across the different modalities, both in higher order association cortices and in early sensory cortices. Such stimulus-specific information is plausibly relayed along the neuroanatomical pathways alluded to above. The evidence reviewed here suggests the need for further in-depth exploration of the intricate connectivity of the mammalian cerebral cortex in experimental neuroanatomical studies.
Affiliation(s)
- Kingson Man
- Brain and Creativity Institute, University of Southern California, Los Angeles, California 90089
23
Kassuba T, Klinge C, Hölig C, Röder B, Siebner HR. Short-term plasticity of visuo-haptic object recognition. Front Psychol 2014;5:274. [PMID: 24765082] [PMCID: PMC3980106] [DOI: 10.3389/fpsyg.2014.00274]
Abstract
Functional magnetic resonance imaging (fMRI) studies have provided ample evidence for the involvement of the lateral occipital cortex (LO), fusiform gyrus (FG), and intraparietal sulcus (IPS) in visuo-haptic object integration. Here we applied 30 min of sham (non-effective) or real offline 1 Hz repetitive transcranial magnetic stimulation (rTMS) to perturb neural processing in left LO immediately before subjects performed a visuo-haptic delayed-match-to-sample task during fMRI. In this task, subjects had to match sample (S1) and target (S2) objects presented sequentially within or across vision and/or haptics in both directions (visual-haptic or haptic-visual) and decide whether or not S1 and S2 were the same objects. Real rTMS transiently decreased activity at the site of stimulation and remote regions such as the right LO and bilateral FG during haptic S1 processing. Without affecting behavior, the same stimulation gave rise to relative increases in activation during S2 processing in the right LO, left FG, bilateral IPS, and other regions previously associated with object recognition. Critically, the modality of S2 determined which regions were recruited after rTMS. Relative to sham rTMS, real rTMS induced increased activations during crossmodal congruent matching in the left FG for haptic S2 and the temporal pole for visual S2. In addition, we found stronger activations for incongruent than congruent matching in the right anterior parahippocampus and middle frontal gyrus for crossmodal matching of haptic S2 and in the left FG and bilateral IPS for unimodal matching of visual S2, only after real but not sham rTMS. The results imply that a focal perturbation of the left LO triggers modality-specific interactions between the stimulated left LO and other key regions of object processing possibly to maintain unimpaired object recognition. This suggests that visual and haptic processing engage partially distinct brain networks during visuo-haptic object matching.
Affiliation(s)
- Tanja Kassuba
- Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Hvidovre, Hvidovre, Denmark; NeuroImageNord/Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Department of Neurology, Christian-Albrechts-University, Kiel, Germany
- Corinna Klinge
- NeuroImageNord/Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Department of Psychiatry, Warneford Hospital, Oxford, UK
- Cordula Hölig
- NeuroImageNord/Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Hartwig R Siebner
- Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Hvidovre, Hvidovre, Denmark; NeuroImageNord/Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf, Hamburg, Germany; Department of Neurology, Christian-Albrechts-University, Kiel, Germany
24
Petrini K, Remark A, Smith L, Nardini M. When vision is not an option: children's integration of auditory and haptic information is suboptimal. Dev Sci 2014;17:376-87. [PMID: 24612244] [PMCID: PMC4240463] [DOI: 10.1111/desc.12127]
Abstract
When visual information is available, human adults, but not children, have been shown to reduce sensory uncertainty by taking a weighted average of sensory cues. In the absence of reliable visual information (e.g. extremely dark environment, visual disorders), the use of other information is vital. Here we ask how humans combine haptic and auditory information from childhood. In the first experiment, adults and children aged 5 to 11 years judged the relative sizes of two objects in auditory, haptic, and non-conflicting bimodal conditions. In Experiment 2, different groups of adults and children were tested in non-conflicting and conflicting bimodal conditions. In Experiment 1, adults reduced sensory uncertainty by integrating the cues optimally, while children did not. In Experiment 2, adults and children used similar weighting strategies to solve audio–haptic conflict. These results suggest that, in the absence of visual information, optimal integration of cues for discrimination of object size develops late in childhood.
Affiliation(s)
- Karin Petrini
- Institute of Ophthalmology, University College London, UK
25
Kassuba T, Klinge C, Hölig C, Röder B, Siebner HR. Vision holds a greater share in visuo-haptic object recognition than touch. Neuroimage 2013;65:59-68. [DOI: 10.1016/j.neuroimage.2012.09.054]