1. Csonka M, Mardmomen N, Webster PJ, Brefczynski-Lewis JA, Frum C, Lewis JW. Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain. Cereb Cortex Commun 2021; 2:tgab002. PMID: 33718874; PMCID: PMC7941256; DOI: 10.1093/texcom/tgab002.
Abstract
Our ability to perceive meaningful action events involving objects, people, and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical "hubs") preferentially involved in multisensory processing along different stimulus category dimensions, including 1) living versus nonliving audio-visual events, 2) audio-visual events involving vocalizations versus actions by living sources, 3) emotionally valent events, and 4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.
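The activation likelihood estimation (ALE) procedure this abstract relies on can be sketched numerically: each reported focus is blurred into a Gaussian spatial-uncertainty map, foci within one experiment are combined as a probabilistic union into a modeled activation (MA) map, and MA maps are combined across experiments into the ALE map. A minimal NumPy sketch under stated assumptions (grid spacing, the 10-mm FWHM default, and all function names are illustrative, not taken from the paper):

```python
import numpy as np

def focus_prob_map(grid, focus, fwhm=10.0):
    """Gaussian spatial-uncertainty map for one reported coordinate."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # convert FWHM to sigma
    d2 = ((grid - focus) ** 2).sum(axis=-1)
    p = np.exp(-d2 / (2.0 * sigma ** 2))
    return p / p.sum()  # normalize so the map is a probability distribution

def ale_map(grid, experiments, fwhm=10.0):
    """Activation likelihood estimate over grid voxels.

    experiments: list of experiments, each a list of (x, y, z) foci.
    """
    not_active = np.ones(len(grid))
    for foci in experiments:
        ma = np.zeros(len(grid))  # modeled activation for this experiment
        for focus in foci:
            p = focus_prob_map(grid, np.asarray(focus, float), fwhm)
            ma = 1.0 - (1.0 - ma) * (1.0 - p)  # probabilistic union over foci
        not_active *= 1.0 - ma
    return 1.0 - not_active  # probabilistic union over experiments
```

Voxels where independent experiments report nearby foci accumulate high ALE values; thresholding against a null distribution of randomly placed foci (not shown) then yields candidate multisensory "hubs".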
Affiliations
- Matt Csonka, Nadia Mardmomen, Paula J Webster, Julie A Brefczynski-Lewis, Chris Frum, James W Lewis: Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
2. Stanton TR, Spence C. The Influence of Auditory Cues on Bodily and Movement Perception. Front Psychol 2020; 10:3001. PMID: 32010030; PMCID: PMC6978806; DOI: 10.3389/fpsyg.2019.03001.
Abstract
The sounds that result from our movement and that mark the outcomes of our actions typically convey useful information about the state of our body and its movement, as well as about the stimuli with which we are interacting. Here we review the rapidly growing literature investigating the influence of non-veridical auditory cues (i.e., cues that are inaccurate in their context, timing, and/or spectral distribution) on multisensory body and action perception, and on motor behavior. Inaccurate auditory cues provide a unique opportunity to study cross-modal processes: when the senses deliver slightly different messages, the contribution of each sense is easier to detect. Additionally, given that similar cross-modal processes likely occur regardless of the accuracy of the sensory input, studying incongruent interactions is likely to also help us predict interactions between congruent inputs. The available research convincingly demonstrates that perceptions of the body, of movement, and of surface contact features (e.g., roughness) are influenced by the addition of non-veridical auditory cues. Moreover, auditory cues impact both motor behavior and emotional valence; in the latter case, sounds that are highly incongruent with the performed movement induce feelings of unpleasantness (perhaps associated with lower processing fluency). Such findings are relevant to the design of auditory cues for product interaction and, given their impact on motor behavior, to the use of auditory cues in sport performance and therapeutic settings.
Affiliations
- Tasha R. Stanton: Pain and Perception Lab, IIMPACT in Health, The University of South Australia, Adelaide, SA, Australia; Neuroscience Research Australia, Randwick, NSW, Australia
- Charles Spence: Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
3. Immersive Virtual Reality as an Effective Tool for Second Language Vocabulary Learning. Languages 2019. DOI: 10.3390/languages4010013.
Abstract
Learning a second language (L2) presents a significant challenge to many people in adulthood. Platforms for effective L2 instruction have been developed in both academia and industry. While real-life (RL) immersion is often lauded as a particularly effective L2 learning platform, little is known about the features of immersive contexts that contribute to the L2 learning process. Immersive virtual reality (iVR) offers a flexible platform to simulate an RL immersive learning situation while allowing the researcher tight experimental control over stimulus delivery and learner interaction with the environment. Using a mixed counterbalanced design, the current study examines individual differences in L2 performance during learning of 60 Mandarin Chinese words across two learning sessions, with each participant learning 30 words in iVR and 30 words via word-word (WW) paired association. Behavioral performance was collected immediately after L2 learning via an alternative forced-choice recognition task. Our results indicate a main effect of L2 learning context, such that accuracy on trials learned via iVR was significantly higher than on trials learned in the WW condition. This effect reflects differential benefits of learning context across learners: less successful learners showed a significant advantage of iVR instruction over WW, whereas successful learners showed no significant advantage of either condition. Our findings have broad implications for L2 education, particularly for those who struggle to learn an L2.
4. Brefczynski-Lewis JA, Lewis JW. Auditory object perception: A neurobiological model and prospective review. Neuropsychologia 2017; 105:223-242. PMID: 28467888; PMCID: PMC5662485; DOI: 10.1016/j.neuropsychologia.2017.04.034.
Abstract
Interaction with the world is a multisensory experience, but most of what is known about the neural correlates of perception comes from studying vision. Auditory input enters the cortex with its own set of unique qualities and supports oral communication, speech, music, and the understanding of the emotional and intentional states of others, all of which are central to the human experience. To better understand how the auditory system develops, recovers after injury, and may have transitioned in its functions over the course of hominin evolution, advances are needed in models of how the human brain is organized to process real-world natural sounds and "auditory objects". This review presents a simple, fundamental neurobiological model of hearing perception at a category level that incorporates principles of bottom-up signal processing together with top-down constraints from grounded-cognition theories of knowledge representation. Though derived mostly from the human neuroimaging literature, this theoretical framework highlights rudimentary principles of real-world sound processing that may apply to most if not all mammalian species with hearing and acoustic communication abilities. The model encompasses three basic categories of sound source: (1) action sounds (non-vocalizations) produced by 'living things', with human (conspecific) and non-human animal sources representing two subcategories; (2) action sounds produced by 'non-living things', including environmental sources and human-made machinery; and (3) vocalizations ('living things'), with human versus non-human animals as two subcategories therein. The model is presented in the context of cognitive architectures relating to multisensory, sensory-motor, and spoken language organizations. Its predictive value is further discussed in the context of anthropological theories of the evolution of oral communication and the neurodevelopment of spoken-language proto-networks in infants and toddlers. These phylogenetic and ontogenetic frameworks both entail cortical network maturations that are proposed to be organized, at least in part, around a number of universal acoustic-semantic signal attributes of natural sounds, which are addressed herein.
Affiliations
- Julie A Brefczynski-Lewis, James W Lewis: Blanchette Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA; Department of Physiology, Pharmacology, & Neuroscience, West Virginia University, PO Box 9229, Morgantown, WV 26506, USA
5. Webster PJ, Skipper-Kallal LM, Frum CA, Still HN, Ward BD, Lewis JW. Divergent Human Cortical Regions for Processing Distinct Acoustic-Semantic Categories of Natural Sounds: Animal Action Sounds vs. Vocalizations. Front Neurosci 2017; 10:579. PMID: 28111538; PMCID: PMC5216875; DOI: 10.3389/fnins.2016.00579.
Abstract
A major gap in our understanding of natural sound processing is knowledge of where or how in a cortical hierarchy differential processing leads to categorical perception at a semantic level. Here, using functional magnetic resonance imaging (fMRI) we sought to determine if and where cortical pathways in humans might diverge for processing action sounds vs. vocalizations as distinct acoustic-semantic categories of real-world sound when matched for duration and intensity. This was tested by using relatively less semantically complex natural sounds produced by non-conspecific animals rather than humans. Our results revealed a striking double-dissociation of activated networks bilaterally. This included a previously well described pathway preferential for processing vocalization signals directed laterally from functionally defined primary auditory cortices to the anterior superior temporal gyri, and a less well-described pathway preferential for processing animal action sounds directed medially to the posterior insulae. We additionally found that some of these regions and associated cortical networks showed parametric sensitivity to high-order quantifiable acoustic signal attributes and/or to perceptual features of the natural stimuli, such as the degree of perceived recognition or intentional understanding. Overall, these results supported a neurobiological theoretical framework for how the mammalian brain may be fundamentally organized to process acoustically and acoustic-semantically distinct categories of ethologically valid, real-world sounds.
Affiliations
- Paula J. Webster, Hayley N. Still, James W. Lewis: Blanchette Rockefeller Neurosciences Institute, Department of Neurobiology & Anatomy, West Virginia University, Morgantown, WV, USA
- Laura M. Skipper-Kallal: Blanchette Rockefeller Neurosciences Institute, Department of Neurobiology & Anatomy, West Virginia University, Morgantown, WV, USA; Department of Neurology, Georgetown University Medical Campus, Washington, DC, USA
- Chris A. Frum: Department of Physiology and Pharmacology, West Virginia University, Morgantown, WV, USA
- B. Douglas Ward: Department of Biophysics, Medical College of Wisconsin, Milwaukee, WI, USA
6. Fargier R, Ploux S, Cheylus A, Reboul A, Paulignan Y, Nazir TA. Differentiating Semantic Categories during the Acquisition of Novel Words: Correspondence Analysis Applied to Event-related Potentials. J Cogn Neurosci 2014; 26:2552-63. DOI: 10.1162/jocn_a_00669.
Abstract
Growing evidence suggests that semantic knowledge is represented in distributed neural networks that include modality-specific structures. Here, we examined the processes underlying the acquisition of words from different semantic categories to determine whether the emergence of visual- and action-based categories could be traced back to their acquisition. For this, we applied correspondence analysis (CA) to ERPs recorded at various moments during acquisition. CA is a multivariate statistical technique typically used to reveal distance relationships between the words of a corpus. Applied to ERPs, it isolates the factors that best explain variations in the data across time and electrodes. Participants were asked to learn new action and visual words by associating novel pseudowords with the execution of hand movements or the observation of visual images. Words were probed before and after training on two consecutive days. To capture processes that unfold during lexical access, CA was applied to the 100-400 msec post-word-onset interval. CA isolated two factors that organized the data as a function of test session and word category. Conventional ERP analyses further revealed a category-specific increase in the negativity of the ERPs to action and visual words at frontal and occipital electrodes, respectively. The distinct neural processes underlying action and visual words can thus be traced back to the acquisition of word-referent relationships and may have their origin in association learning. Given current evidence for the flexibility of language-induced sensory-motor activity, we argue that these associative links may serve functions beyond word understanding, namely the elaboration of situation models.
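Correspondence analysis of the kind described above reduces to a singular value decomposition of the standardized residuals of a two-way nonnegative table (e.g., test-session/word-category conditions by electrode-time bins of rectified ERP measures). A minimal NumPy sketch; the variable names and the toy table are illustrative assumptions, and the paper's exact ERP preprocessing is not reproduced:

```python
import numpy as np

def correspondence_analysis(X):
    """Row coordinates and per-factor inertia for a nonnegative table X."""
    P = X / X.sum()                          # correspondence matrix
    r = P.sum(axis=1)                        # row masses
    c = P.sum(axis=0)                        # column masses
    expected = np.outer(r, c)                # independence model
    S = (P - expected) / np.sqrt(expected)   # standardized residuals
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    rows = (U * sv) / np.sqrt(r)[:, None]    # principal row coordinates
    return rows, sv ** 2                     # inertia = squared singular values
```

Plotting the row coordinates (conditions) on the first factors then shows which sessions and word categories the ERP variance separates, as in the study's session-by-category factor maps.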
7. Knoblauch A, Körner E, Körner U, Sommer FT. Structural synaptic plasticity has high memory capacity and can explain graded amnesia, catastrophic forgetting, and the spacing effect. PLoS One 2014; 9:e96485. PMID: 24858841; PMCID: PMC4032253; DOI: 10.1371/journal.pone.0096485.
Abstract
Although William James and, more explicitly, Donald Hebb's theory of cell assemblies long ago suggested that activity-dependent rewiring of neuronal networks is the substrate of learning and memory, over the last six decades most theoretical work on memory has focused on the plasticity of existing synapses in prewired networks. Research in the last decade has emphasized that structural modification of synaptic connectivity is common in the adult brain and tightly correlated with learning and memory. Here we present a parsimonious computational model for learning by structural plasticity. The basic modeling units are "potential synapses", defined as locations in the network where synapses can potentially grow to connect two neurons. This model generalizes well-known previous models of associative learning based on weight plasticity, so existing theory can be applied to analyze how many memories, and how much information, structural plasticity can store per synapse. Surprisingly, we find that structural plasticity largely outperforms weight plasticity, achieving a much higher storage capacity per synapse. The effect of structural plasticity on the structure of sparsely connected networks is quite intuitive: structural plasticity increases the "effectual network connectivity", that is, the network wiring that specifically supports storage and recall of the memories. Further, this model of structural plasticity produces gradients of effectual connectivity in the course of learning, thereby explaining various cognitive phenomena including graded amnesia, catastrophic forgetting, and the spacing effect.
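The "potential synapses" idea can be illustrated as a Willshaw-style binary associative memory in which a synapse may grow only at a randomly chosen candidate site whose pre- and postsynaptic units are coactive during storage. This is a toy sketch under stated assumptions (the network sizes, pattern sparseness, and winner-take-all readout are illustrative, not the paper's full model or capacity analysis):

```python
import numpy as np

def store(pre_patterns, post_patterns, potential):
    """Grow binary synapses at coactive potential sites (structural plasticity)."""
    W = np.zeros_like(potential, dtype=bool)
    for x, y in zip(pre_patterns, post_patterns):
        W |= np.outer(y, x) & potential  # a synapse grows only where allowed
    return W

def recall(W, x, k):
    """k-winner-take-all retrieval from a binary weight matrix."""
    h = W.astype(np.int32) @ x.astype(np.int32)  # dendritic potentials
    y = np.zeros(W.shape[0], dtype=bool)
    y[np.argsort(h)[-k:]] = True                 # k most-excited units fire
    return y

def random_patterns(rng, m, n, k):
    """m sparse binary patterns of length n with k active units each."""
    pats = np.zeros((m, n), dtype=bool)
    for p in pats:
        p[rng.choice(n, size=k, replace=False)] = True
    return pats
```

Setting the potential-site density to 1 recovers the classical fully prewired Willshaw network; the paper's "effectual connectivity" then corresponds to the subset of potential sites that actually grow and support recall.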
Affiliations
- Andreas Knoblauch: Engineering Faculty, Albstadt-Sigmaringen University, Albstadt, Germany; Honda Research Institute Europe, Offenbach am Main, Germany
- Edgar Körner, Ursula Körner: Honda Research Institute Europe, Offenbach am Main, Germany
- Friedrich T. Sommer: Redwood Center for Theoretical Neuroscience, University of California, Berkeley, California, USA
8. Jola C, McAleer P, Grosbras MH, Love SA, Morison G, Pollick FE. Uni- and multisensory brain areas are synchronised across spectators when watching unedited dance recordings. Iperception 2013; 4:265-84. PMID: 24349687; PMCID: PMC3859570; DOI: 10.1068/i0536.
Abstract
The superior temporal sulcus (STS) and gyrus (STG) are commonly identified as functionally relevant for multisensory integration of audiovisual (AV) stimuli. However, most neuroimaging studies of AV integration have used stimuli of short duration in explicit evaluative tasks. Importantly, though, many of our AV experiences are long and ambiguous. It is unclear whether the enhanced activity in auditory, visual, and AV brain areas would also be synchronised over time across subjects exposed to such multisensory stimuli. We used intersubject correlation to investigate which brain areas are synchronised across novices for uni- and multisensory versions of a 6-min 26-s recording of an unfamiliar, unedited Indian dance performance (Bharatanatyam). In Bharatanatyam, music and dance are choreographed together in a highly intermodally dependent manner. Activity in the middle and posterior STG was significantly correlated between subjects and also showed significant enhancement for AV integration when the functional magnetic resonance signals were contrasted against each other using a general linear model conjunction analysis. These results extend previous studies by showing an intermediate step of synchronisation for novices: while there was a consensus across subjects' brain activity in areas relevant for unisensory processing and for AV integration of related audio and visual stimuli, we found no evidence for synchronisation of higher-level cognitive processes, suggesting these were idiosyncratic.
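Intersubject correlation of the kind used here is typically computed leave-one-out: each subject's regional time course is correlated with the average time course of all the other subjects, and consistently high values indicate stimulus-driven synchronisation. A minimal NumPy sketch; the array shapes, subject counts, and names are illustrative assumptions:

```python
import numpy as np

def isc_leave_one_out(data):
    """Leave-one-out intersubject correlation.

    data: (n_subjects, n_timepoints) time courses for one voxel/region.
    Returns one Pearson r per subject against the mean of the others.
    """
    n = data.shape[0]
    out = np.empty(n)
    for s in range(n):
        others = data[np.arange(n) != s].mean(axis=0)  # leave-one-out average
        out[s] = np.corrcoef(data[s], others)[0, 1]
    return out
```

Regions whose leave-one-out values are reliably above a permutation-based null (not shown) are the ones "synchronised across spectators" in the sense of this study.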
Affiliations
- Corinne Jola: INSERM-CEA Cognitive Neuroimaging Unit, NeuroSpin Center, F-91191 Gif-sur-Yvette, France; School of Psychology, University of Glasgow, Glasgow G12 8QB, UK
- Phil McAleer, Marie-Hélène Grosbras: Institute of Neuroscience and Psychology, University of Glasgow, Glasgow G12 8QB, UK
- Scott A Love: Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana, USA
- Gordon Morison: Computer, Communication and Interactive Systems, Glasgow Caledonian University, Glasgow G4 0BA, UK
- Frank E Pollick: School of Psychology, University of Glasgow, Glasgow G12 8QB, UK
9. Howell P, Jiang J, Peng D, Lu C. Neural control of fundamental frequency rise and fall in Mandarin tones. Brain Lang 2012; 121:35-46. PMID: 22341758; DOI: 10.1016/j.bandl.2012.01.004.
Abstract
The neural mechanisms underlying tone rises and falls in Mandarin were investigated. Nine participants were scanned while they named one-character pictures that required rising- or falling-tone responses in Mandarin: activation differed between rising and falling tones, being stronger in the left insula and right putamen and weaker in the left brainstem. Connectivity analysis showed that the significant projection from the laryngeal motor cortex to the brainstem that was present for rising tones was absent for falling tones. Additionally, the connection from the insula to the laryngeal motor cortex differed significantly between tones, being negative for rising tones but positive for falling tones. These results suggest that the projection from the laryngeal motor cortex to the brainstem used in rising tones is not active in falling tones, and that the connection from the left insula to the laryngeal motor cortex, which differs between rising and falling tones, may control whether the rise mechanism is active.
Affiliations
- Peter Howell: Division of Psychology and Language Sciences, University College London, London, UK
10. Jola C, Abedian-Amiri A, Kuppuswamy A, Pollick FE, Grosbras MH. Motor simulation without motor expertise: enhanced corticospinal excitability in visually experienced dance spectators. PLoS One 2012; 7:e33343. PMID: 22457754; PMCID: PMC3310063; DOI: 10.1371/journal.pone.0033343.
Abstract
The human “mirror-system” is suggested to play a crucial role in action observation and execution, and is characterized by activity in the premotor and parietal cortices during the passive observation of movements. The previous motor experience of the observer has been shown to enhance activity in this network. Yet visual experience could also have a determining influence when watching more complex actions, as in dance performances. Here we tested the impact of visual experience on motor simulation when watching dance, by measuring changes in corticospinal excitability. We also tested the effects of empathic abilities. To fully match the participants' long-term visual experience with the present experimental setting, we used three live solo performances: ballet, Indian dance, and non-dance. Participants were either frequent dance spectators of ballet or Indian dance, or “novices” who had never watched dance. None of the spectators had been physically trained in these dance styles. Transcranial magnetic stimulation was used to measure corticospinal excitability by means of motor-evoked potentials (MEPs) in both the hand and the arm, because the hand is specifically used in Indian dance and the arm is frequently engaged in ballet dance movements. We observed that frequent ballet spectators showed larger MEP amplitudes in the arm muscles when watching ballet compared with the other performances. We also found that the higher Indian dance spectators scored on the fantasy subscale of the Interpersonal Reactivity Index, the larger their arm MEPs were when watching Indian dance. Our results show that, even without physical training, corticospinal excitability can be enhanced as a function of either visual experience or the tendency to imaginatively transpose oneself into fictional characters. We suggest that spectators covertly simulate the movements for which they have acquired visual experience, and that empathic abilities heighten motor resonance during dance observation.
Affiliations
- Corinne Jola: School of Psychology, University of Surrey, Guildford, United Kingdom
11. Żygierewicz J, Sielużycki C, Zacharias N, Suffczyński P, Kordowski P, Scheich H, Durka P, König R. Estimation of the spatiotemporal structure of event-related desynchronization and synchronization in magnetoencephalography. J Neurosci Methods 2012; 205:148-58. DOI: 10.1016/j.jneumeth.2011.12.013.
12. Higuchi S, Holle H, Roberts N, Eickhoff S, Vogt S. Imitation and observational learning of hand actions: Prefrontal involvement and connectivity. Neuroimage 2012; 59:1668-83. DOI: 10.1016/j.neuroimage.2011.09.021.
13. Fargier R, Paulignan Y, Boulenger V, Monaghan P, Reboul A, Nazir TA. Learning to associate novel words with motor actions: language-induced motor activity following short training. Cortex 2011; 48:888-99. PMID: 21864836; DOI: 10.1016/j.cortex.2011.07.003.
Abstract
Action words referring to face, arm, or leg actions activate areas along the motor strip that also control the planning and execution of the actions the words specify. This electroencephalogram (EEG) study aimed to characterize the learning profile of this language-induced motor activity. Participants were trained on two consecutive days to associate novel verbal stimuli with videos of object-oriented hand and arm movements or with animated visual images. Each training session was preceded and followed by a test session with isolated videos and verbal stimuli. We measured motor-related brain activity (reflected by a desynchronization in the μ frequency band, 8-12 Hz) at centro-parietal and fronto-central electrodes, and compared activity elicited by viewing the videos with activity elicited by the language stimuli alone. At centro-parietal electrodes, stable action-related μ suppression was observed during viewing of videos in each test session on both days. For verbal stimuli associated with motor actions, a similar pattern of activity was evident only in the second test session of Day 1. Over fronto-central regions, μ suppression was observed in the second test session of Day 2 for the videos and in the second test session of Day 1 for the verbal stimuli. Whereas the centro-parietal μ suppression can be attributed to motor events actually experienced during training, the fronto-central μ suppression seems to serve as a convergence zone that mediates underspecified motor information. Consequently, the sensory-motor reactivations through which concepts are comprehended seem to differ in neural dynamics from those implicated in their acquisition.
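μ suppression of the kind measured here is an event-related desynchronization (ERD): band power in the 8-12 Hz range during the event, expressed relative to a pre-stimulus baseline, with negative values indicating suppression. A minimal FFT-based NumPy sketch; the sampling rate, epoch length, and function names are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean FFT power of signal x in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

def erd_percent(baseline, event, fs, lo=8.0, hi=12.0):
    """Event-related (de)synchronization relative to baseline, in percent.

    Negative values mean desynchronization (mu suppression).
    """
    p_ref = band_power(baseline, fs, lo, hi)
    return 100.0 * (band_power(event, fs, lo, hi) - p_ref) / p_ref
```

Averaging the ERD over trials separately per electrode yields the centro-parietal versus fronto-central μ-suppression topographies the study compares across test sessions.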
Affiliations
- Raphaël Fargier: L2C2-Institut des Sciences Cognitives, CNRS/UCBL, Université Claude Bernard Lyon1, Bron, France
14. McNamara A. Can we measure memes? Front Evol Neurosci 2011; 3:1. PMID: 21720531; PMCID: PMC3118481; DOI: 10.3389/fnevo.2011.00001.
Abstract
Memes are the fundamental unit of cultural evolution but have been left on the periphery of cognitive neuroscience because of their inexact definition and the consequent presumption that they are impossible to measure. Here it is argued that, although a precise definition of memes is rather difficult, this does not preclude highly controlled experiments studying the neural substrates of their initiation and replication. In this paper, memes are termed either internally or externally represented (i-memes/e-memes) according to whether they are represented as a neural substrate within the central nervous system or in some other form within our environment. It is argued that neuroimaging technology is now sufficiently advanced to image the connectivity profiles of i-memes and, critically, to measure changes to i-memes over time, i.e., as they evolve. It is wrong to simply pass off memes as an alternative term for "stimulus" or "learnt association", as that does not accurately account for the way in which natural stimuli may dynamically "evolve", as clearly observed in our cultural lives.
Affiliations
- Adam McNamara: Department of Psychology, University of Surrey, Surrey, UK
15. Hoffman RE, Fernandez T, Pittman B, Hampson M. Elevated functional connectivity along a corticostriatal loop and the mechanism of auditory/verbal hallucinations in patients with schizophrenia. Biol Psychiatry 2011; 69:407-14. PMID: 21145042; PMCID: PMC3039042; DOI: 10.1016/j.biopsych.2010.09.050.
Abstract
Background: Higher levels of inter-region functional coordination can facilitate the emergence of neural activity as conscious percepts. We consequently tested the hypothesis that auditory/verbal hallucinations (AVHs) arise from elevated functional coordination within a speech processing network.
Methods: Functional coordination was indexed with functional connectivity (FC) computed from functional magnetic resonance imaging data. Thirty-two patients with schizophrenia reporting AVHs, 24 similarly diagnosed patients without hallucinations, and 23 healthy control subjects were studied. FC was seeded from a bilateral Wernicke's region delineated according to activation detected during AVHs in a prior study.
Results: Wernicke's-seeded FC with Brodmann area 45/46 of the left inferior frontal gyrus (IFG) was significantly greater for hallucinating patients compared with nonhallucinating patients but not compared with healthy control subjects. In contrast, Wernicke's-seeded FC with a large subcortical region that included the thalamus, midbrain, and putamen was significantly greater for the combined patient group compared with healthy control subjects after false discovery rate correction, but not when comparing the two patient groups. Within that subcortical domain, the putamen demonstrated significantly greater FC relative to a secondary left IFG seed region when hallucinators were compared with nonhallucinating patients. A follow-up analysis found that FC summed along a loop linking the Wernicke's and IFG seed regions and the putamen was robustly greater for hallucinating patients compared with nonhallucinating patients and healthy control subjects.
Conclusions: These findings suggest that higher levels of functional coordination intrinsic to a corticostriatal loop comprise a causal factor leading to AVHs in schizophrenia.
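Seed-based functional connectivity of the kind used in this study is, at its core, a correlation between the seed region's average BOLD time course and every other voxel's time course, usually Fisher z-transformed before group statistics. A minimal NumPy sketch; the shapes and names are illustrative, and the study's confound regression and false-discovery-rate steps are omitted:

```python
import numpy as np

def seed_connectivity(seed_ts, voxel_ts):
    """Fisher-z connectivity of each voxel with the seed time course.

    seed_ts: (n_timepoints,); voxel_ts: (n_voxels, n_timepoints).
    """
    s = (seed_ts - seed_ts.mean()) / seed_ts.std()
    v = voxel_ts - voxel_ts.mean(axis=1, keepdims=True)
    v /= v.std(axis=1, keepdims=True)
    r = v @ s / len(s)                                   # Pearson r per voxel
    return np.arctanh(np.clip(r, -0.999999, 0.999999))   # Fisher z transform
```

Group contrasts (e.g., hallucinating versus nonhallucinating patients) are then computed voxelwise on the z maps, and loop-level measures can be formed by summing z values along the links of interest.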
Affiliation(s)
- Ralph E Hoffman
- Department of Psychiatry, Yale University School of Medicine, Yale-New Haven Psychiatric Hospital, 184 Liberty Street LV108, New Haven, CT 06519, USA.
16
Kühn S, Keizer A, Rombouts SARB, Hommel B. The functional and neural mechanism of action preparation: roles of EBA and FFA in voluntary action control. J Cogn Neurosci 2011; 23:214-20. [PMID: 20044885; DOI: 10.1162/jocn.2010.21418]
Abstract
Ideomotor theory claims that actions are cognitively represented and accessed via representations of the sensory effects they evoke. Previous studies provide support for this claim by showing that the presentation of action effects primes activation in corresponding motor structures. However, whether people actually use action-effect representations to control their motor behavior is not yet clear. In our fMRI study, we had participants prepare for manual or facial actions on a trial-by-trial basis, and hypothesized that preparation would be mediated by the cortical areas that code for the perceptual effects of these actions. Preparing for manual action induced higher activation of hand-related areas of motor cortex (demonstrating actual preparation) and of the extrastriate body area, which is known to mediate the perception of body parts. In contrast, preparing for facial action induced higher activation of face-related motor areas and of the fusiform face area, known to mediate face perception. These observations provide further support for the ideomotor theory and suggest that visual imagery might play a role in voluntary action control.
Affiliation(s)
- Simone Kühn
- Department of Experimental Psychology, University of Ghent, Ghent, Belgium.
17
Joassin F, Maurage P, Campanella S. The neural network sustaining the crossmodal processing of human gender from faces and voices: an fMRI study. Neuroimage 2011; 54:1654-61. [DOI: 10.1016/j.neuroimage.2010.08.073]
18
Lewis JW, Talkington WJ, Puce A, Engel LR, Frum C. Cortical networks representing object categories and high-level attributes of familiar real-world action sounds. J Cogn Neurosci 2010; 23:2079-101. [PMID: 20812786; DOI: 10.1162/jocn.2010.21570]
Abstract
In contrast to visual object processing, relatively little is known about how the human brain processes everyday real-world sounds, transforming highly complex acoustic signals into representations of meaningful events or auditory objects. We recently reported a fourfold cortical dissociation for representing action (nonvocalization) sounds correctly categorized as having been produced by human, animal, mechanical, or environmental sources. However, it was unclear how consistent those network representations were across individuals, given potential differences between each participant's degree of familiarity with the studied sounds. Moreover, it was unclear what, if any, auditory perceptual attributes might further distinguish the four conceptual sound-source categories, potentially revealing what might drive the cortical network organization for representing acoustic knowledge. Here, we used functional magnetic resonance imaging to test participants before and after extensive listening experience with action sounds, and tested for cortices that might be sensitive to each of three different high-level perceptual attributes relating to how a listener associates or interacts with the sound source. These included the sound's perceived concreteness, effectuality (ability to be affected by the listener), and spatial scale. Despite some variation of networks for environmental sounds, our results verified the stability of a fourfold dissociation of category-specific networks for real-world action sounds both before and after familiarity training. Additionally, we identified cortical regions parametrically modulated by each of the three high-level perceptual sound attributes. We propose that these attributes contribute to the network-level encoding of category-specific acoustic knowledge representations.
Affiliation(s)
- James W Lewis
- Department of Physiology and Pharmacology, PO Box 9229, West Virginia University, Morgantown, WV 26506, USA.
19
Jirak D, Menz MM, Buccino G, Borghi AM, Binkofski F. Grasping language--a short story on embodiment. Conscious Cogn 2010; 19:711-20. [PMID: 20739194; DOI: 10.1016/j.concog.2010.06.020]
Abstract
Embodied cognition theories have been enthusiastically studied in the cognitive sciences, as well as in such disparate disciplines as philosophy, anthropology, neuroscience, and robotics. Embodiment theory provides the framework for ongoing discussions on the linkage between "low" cognitive processes, such as perception, and "high" cognition, such as language processing and comprehension. This review gives an overview of the lines of argumentation in the ongoing debate on the embodiment of language and employs an ALE meta-analysis to illustrate and weigh previous findings. The collected evidence on the somatotopic activation of motor areas, on abstract and concrete word processing, and from reported patient and timing studies emphasizes the important role of sensorimotor areas in language processing and supports the hypothesis that the motor system is activated during language comprehension.
Affiliation(s)
- Doreen Jirak
- Department of Systems Neuroscience and Neuroimage Nord, University Medical Center Hamburg Eppendorf, Hamburg, Germany
20
Functional but not structural networks of the human laryngeal motor cortex show left hemispheric lateralization during syllable but not breathing production. J Neurosci 2010; 29:14912-23. [PMID: 19940187; DOI: 10.1523/jneurosci.4897-09.2009]
Abstract
The laryngeal motor cortex (LMC) is indispensable for the vocal motor control of speech and song production. Patients with bilateral lesions in this region are unable to speak and sing, although their nonverbal vocalizations, such as laughter and cry, are preserved. Despite the importance of the LMC in the control of voluntary voice production in humans, the literature describing its connections remains sparse. We used diffusion tensor probabilistic tractography and functional magnetic resonance imaging-based functional connectivity analysis to identify LMC networks controlling two tasks necessary for speech production: voluntary voice as repetition of two different syllables and voluntary breathing as controlled inspiration and expiration. Peaks of activation during all tasks were found in the bilateral ventral primary motor cortex in close proximity to each other. Functional networks of the LMC during voice production but not during controlled breathing showed significant left-hemispheric lateralization (p < 0.0005). However, structural networks of the LMC associated with both voluntary voice production and controlled breathing had bilateral hemispheric organization. Our findings indicate the presence of a common bilateral structural network of the LMC, upon which different functional networks are built to control various voluntary laryngeal tasks. Bilateral organization of functional LMC networks during controlled breathing supports its indispensable role in all types of laryngeal behaviors. Significant left-hemispheric lateralization of functional networks during simple but highly learned voice production suggests the readiness of the LMC network for production of a complex voluntary behavior, such as human speech.
21
Chouinard PA, Goodale MA. fMRI adaptation during performance of learned arbitrary visuomotor conditional associations. Neuroimage 2009; 48:696-706. [PMID: 19619662; DOI: 10.1016/j.neuroimage.2009.07.020]
Abstract
In everyday life, people select motor responses according to arbitrary rules. For example, our movements while driving a car can be instructed by color cues that we see on traffic lights. These stimuli do not spatially relate to the actions that they specify. Associations between these stimuli and actions are called arbitrary visuomotor conditional associations. Earlier fMRI studies have tried to dissociate the sensory and motor components of these associations by introducing delays between the presentation of arbitrary cues and go-signals that instructed participants to perform actions. This approach, however, also introduces neural processes that are not necessarily related to the normal real-time production of arbitrary visuomotor responses, such as working memory and the suppression of motor responses. We used fMRI adaptation as an alternative approach to dissociate sensory and motor components. We found that visual areas in the occipital-temporal cortex adapted only to the presentation of arbitrary visual cues, whereas a number of sensorimotor areas adapted only to the production of responses. Visual areas in the occipital-temporal cortex do not have any known connections with parts of the brain that can control hand musculature. Therefore, it is conceivable that the brain areas that we report as having adapted to both stimulus presentation and response production (namely, the dorsal premotor area, the supplementary motor area, the cingulate, the anterior intra-parietal sulcus area, and the thalamus) are involved in the multiple steps between processing visual stimuli and activating the motor commands that these cues specify.
Affiliation(s)
- Philippe A Chouinard
- CIHR Group on Action and Perception, Department of Psychology, University of Western Ontario, Ontario, Canada.
22
Engel LR, Frum C, Puce A, Walker NA, Lewis JW. Different categories of living and non-living sound-sources activate distinct cortical networks. Neuroimage 2009; 47:1778-91. [PMID: 19465134; DOI: 10.1016/j.neuroimage.2009.05.041]
Abstract
With regard to hearing perception, it remains unclear as to whether, or the extent to which, different conceptual categories of real-world sounds and related categorical knowledge are differentially represented in the brain. Semantic knowledge representations are reported to include the major divisions of living versus non-living things, plus more specific categories including animals, tools, biological motion, faces, and places (categories typically defined by their characteristic visual features). Here, we used functional magnetic resonance imaging (fMRI) to identify brain regions showing preferential activity to four categories of action sounds, which included non-vocal human and animal actions (living), plus mechanical and environmental sound-producing actions (non-living). The results showed a striking antero-posterior division in cortical representations for sounds produced by living versus non-living sources. Additionally, there were several significant differences by category, depending on whether the task was category-specific (e.g. human or not) versus non-specific (detect end-of-sound). In general, (1) human-produced sounds yielded robust activation in the bilateral posterior superior temporal sulci independent of task. Task demands modulated activation of left lateralized fronto-parietal regions, bilateral insular cortices, and sub-cortical regions previously implicated in observation-execution matching, consistent with "embodied" and mirror-neuron network representations subserving recognition. (2) Animal action sounds preferentially activated the bilateral posterior insulae. (3) Mechanical sounds activated the anterior superior temporal gyri and parahippocampal cortices. (4) Environmental sounds preferentially activated dorsal occipital and medial parietal cortices. Overall, this multi-level dissociation of networks for preferentially representing distinct sound-source categories provides novel support for grounded cognition models that may underlie organizational principles for hearing perception.
Affiliation(s)
- Lauren R Engel
- Sensory Neuroscience Research Center, West Virginia University, Morgantown, WV 26506, USA