1.
Ross LA, Molholm S, Butler JS, Del Bene VA, Foxe JJ. Neural correlates of multisensory enhancement in audiovisual narrative speech perception: a fMRI investigation. Neuroimage 2022; 263:119598. [PMID: 36049699] [DOI: 10.1016/j.neuroimage.2022.119598]
Abstract
This fMRI study investigated the effect of seeing a speaker's articulatory movements while listening to a naturalistic narrative stimulus. Its goal was to identify regions of the language network showing multisensory enhancement under synchronous audiovisual conditions. We expected this enhancement to emerge in regions known to underlie the integration of auditory and visual information, such as the posterior superior temporal gyrus, as well as in parts of the broader language network, including the semantic system. To this end, we presented 53 participants with a continuous narration of a story in auditory-alone, visual-alone, and both synchronous and asynchronous audiovisual speech conditions while recording brain activity using BOLD fMRI. We found multisensory enhancement in an extensive network of regions underlying multisensory integration and in parts of the semantic network, as well as in extralinguistic regions not usually associated with multisensory integration, namely the primary visual cortex and the bilateral amygdala. Analysis also revealed involvement of thalamic regions along the visual and auditory pathways more commonly associated with early sensory processing. We conclude that under natural listening conditions, multisensory enhancement not only involves sites of multisensory integration but also extends to many regions of the wider semantic network and to regions associated with extralinguistic sensory, perceptual, and cognitive processing.
Affiliation(s)
- Lars A Ross
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; Department of Imaging Sciences, University of Rochester Medical Center, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA.
- Sophie Molholm
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA
- John S Butler
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA; School of Mathematical Sciences, Technological University Dublin, Kevin Street Campus, Dublin, Ireland
- Victor A Del Bene
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA; University of Alabama at Birmingham, Heersink School of Medicine, Department of Neurology, Birmingham, Alabama, 35233, USA
- John J Foxe
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA.
2.
Embodied cognition in neurodegenerative disorders: What do we know so far? A narrative review focusing on the mirror neuron system and clinical applications. J Clin Neurosci 2022; 98:66-72. [DOI: 10.1016/j.jocn.2022.01.028]
3.
Abstract
Ten years ago, Perspectives in Psychological Science published the Mirror Neuron Forum, in which authors debated the role of mirror neurons in action understanding, speech, imitation, and autism and asked whether mirror neurons are acquired through visual-motor learning. Subsequent research on these themes has made significant advances, which should encourage further, more systematic research. For action understanding, multivoxel pattern analysis, patient studies, and brain stimulation suggest that mirror-neuron brain areas contribute to low-level processing of observed actions (e.g., distinguishing types of grip) but not to high-level action interpretation (e.g., inferring actors' intentions). In the area of speech perception, although it remains unclear whether mirror neurons play a specific, causal role in speech perception, there is compelling evidence for the involvement of the motor system in the discrimination of speech in perceptually noisy conditions. For imitation, there is strong evidence from patient, brain-stimulation, and brain-imaging studies that mirror-neuron brain areas play a causal role in copying of body movement topography. In the area of autism, studies using behavioral and neurological measures have tried and failed to find evidence supporting the "broken-mirror theory" of autism. Furthermore, research on the origin of mirror neurons has confirmed the importance of domain-general visual-motor associative learning rather than canalized visual-motor learning, or motor learning alone.
Affiliation(s)
- Cecilia Heyes
- All Souls College, University of Oxford
- Department of Experimental Psychology, University of Oxford
- Caroline Catmur
- Department of Psychology, Institute of Psychiatry, Psychology, and Neuroscience, King’s College London
4.
Fischer J, Mahon BZ. What tool representation, intuitive physics, and action have in common: The brain's first-person physics engine. Cogn Neuropsychol 2021; 38:455-467. [PMID: 35994054] [PMCID: PMC11498101] [DOI: 10.1080/02643294.2022.2106126]
Abstract
An overlapping set of brain regions in parietal and frontal cortex is engaged by different types of tasks and stimuli: (i) making inferences about the physical structure and dynamics of the world, (ii) passively viewing, or actively interacting with, manipulable objects, and (iii) planning and executing reaching and grasping actions. We suggest that this neural overlap arises because each of these tasks engages a common superordinate computation: a forward model of physical reasoning about how first-person actions will affect the world and be affected by unfolding physical events. This perspective offers an account of why some physical predictions are systematically incorrect: there can be a mismatch between how physical scenarios are experimentally framed and the native format of the inferences generated by the brain's first-person physics engine. It also generates new empirical expectations about the conditions under which physical reasoning may exhibit systematic biases.
Affiliation(s)
- Jason Fischer
- Johns Hopkins University, Department of Psychological and Brain Sciences, Baltimore, MD 21218, USA
- Bradford Z. Mahon
- Carnegie Mellon University, Department of Psychology, 5000 Forbes Ave, Pittsburgh, PA 15213, USA
- Carnegie Mellon Neuroscience Institute, 5000 Forbes Ave, Pittsburgh, PA 15213, USA
5.
Liu L, Zhang Y, Zhou Q, Garrett DD, Lu C, Chen A, Qiu J, Ding G. Auditory-Articulatory Neural Alignment between Listener and Speaker during Verbal Communication. Cereb Cortex 2021; 30:942-951. [PMID: 31318013] [DOI: 10.1093/cercor/bhz138]
Abstract
Whether auditory processing of speech relies on reference to the speaker's articulatory motor information remains elusive. Here, we addressed this issue within a two-brain framework. Functional magnetic resonance imaging was used to record the brain activity of speakers telling real-life stories and, later, of listeners hearing the audio recordings of those stories. Between-brain seed-to-voxel correlation analyses revealed that neural dynamics in listeners' auditory temporal cortex are temporally coupled with dynamics in the speaker's larynx/phonation area. Moreover, the coupling response in the listener's left auditory temporal cortex follows the hierarchical organization of speech processing, with response lags in A1+, STG/STS, and MTG increasing linearly. Further, listeners showing greater coupling responses understood the speech better; when comprehension failed, this interbrain auditory-articulatory coupling largely vanished. These findings suggest that a listener's auditory system and a speaker's articulatory system are inherently aligned during naturalistic verbal interaction, and that this alignment is associated with high-level information transfer from speaker to listener. Our study provides reliable evidence that reference to the speaker's articulatory motor information facilitates speech comprehension in naturalistic settings.
Affiliation(s)
- Lanfang Liu
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, People's Republic of China; Department of Psychology, Sun Yat-sen University, Guangzhou 510006, People's Republic of China
- Yuxuan Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, People's Republic of China
- Qi Zhou
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, People's Republic of China
- Douglas D Garrett
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Max Planck Institute for Human Development, Lentzeallee 94, Berlin 14195, Germany
- Chunming Lu
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, People's Republic of China
- Antao Chen
- Key Laboratory of Cognition and Personality (SWU), Ministry of Education & Department of Psychology, Southwest University, Chongqing 400715, People's Republic of China
- Jiang Qiu
- Key Laboratory of Cognition and Personality (SWU), Ministry of Education & Department of Psychology, Southwest University, Chongqing 400715, People's Republic of China
- Guosheng Ding
- State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, People's Republic of China
6.
Karas PJ, Magnotti JF, Metzger BA, Zhu LL, Smith KB, Yoshor D, Beauchamp MS. The visual speech head start improves perception and reduces superior temporal cortex responses to auditory speech. eLife 2019; 8:e48116. [PMID: 31393261] [PMCID: PMC6687434] [DOI: 10.7554/elife.48116]
Abstract
Visual information about speech content from the talker's mouth is often available before auditory information from the talker's voice. Here we examined perceptual and neural responses to words with and without this visual head start. For both types of words, perception was enhanced by viewing the talker's face, but the enhancement was significantly greater for words with a head start. Neural responses were measured from electrodes implanted over auditory association cortex in the posterior superior temporal gyrus (pSTG) of epileptic patients. The presence of visual speech suppressed responses to auditory speech, more so for words with a visual head start. We suggest that the head start inhibits representations of incompatible auditory phonemes, increasing perceptual accuracy and decreasing total neural responses. Together with previous work showing visual cortex modulation (Ozker et al., 2018b), these results from pSTG demonstrate that multisensory interactions are a powerful modulator of activity throughout the speech perception network.
Affiliation(s)
- Patrick J Karas
- Department of Neurosurgery, Baylor College of Medicine, Houston, United States
- John F Magnotti
- Department of Neurosurgery, Baylor College of Medicine, Houston, United States
- Brian A Metzger
- Department of Neurosurgery, Baylor College of Medicine, Houston, United States
- Lin L Zhu
- Department of Neurosurgery, Baylor College of Medicine, Houston, United States
- Kristen B Smith
- Department of Neurosurgery, Baylor College of Medicine, Houston, United States
- Daniel Yoshor
- Department of Neurosurgery, Baylor College of Medicine, Houston, United States
7.
Almeida J, Amaral L, Garcea FE, Aguiar de Sousa D, Xu S, Mahon BZ, Martins IP. Visual and visuomotor processing of hands and tools as a case study of cross talk between the dorsal and ventral streams. Cogn Neuropsychol 2018; 35:288-303. [PMID: 29792367] [DOI: 10.1080/02643294.2018.1463980]
Abstract
A major organizing principle of the visual system is the division between a dorsal stream that processes visuomotor information and a ventral stream that supports object recognition. Most research has focused on dissociating processing across these two streams; here we focus on how the two streams interact. We tested neurologically intact and impaired participants in an object categorization task over two classes of objects whose processing depends on both streams: hands and tools. We measured how unconscious processing of images from one of these categories (e.g., tools) affects the recognition of images from the other category (i.e., hands). In neurologically intact participants, processing an image of a hand hampered the subsequent processing of an image of a tool, and vice versa. These effects were absent in apraxic patients (N = 3). The findings suggest local and global inhibitory processes working in tandem to co-register information across the two streams.
Affiliation(s)
- Jorge Almeida
- Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal; Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Lénia Amaral
- Proaction Laboratory, Faculty of Psychology and Educational Sciences, University of Coimbra, Coimbra, Portugal
- Frank E Garcea
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA; Center for Visual Science, University of Rochester, Rochester, NY, USA
- Diana Aguiar de Sousa
- Laboratório de Estudos da Linguagem, Centro de Estudos Egas Moniz, Faculty of Medicine, University of Lisbon, Hospital Santa Maria, Lisbon, Portugal
- Shan Xu
- School of Psychology, Beijing Normal University, Beijing, People's Republic of China
- Bradford Z Mahon
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA; Center for Visual Science, University of Rochester, Rochester, NY, USA; Department of Neurosurgery, University of Rochester, Rochester, NY, USA
- Isabel Pavão Martins
- Laboratório de Estudos da Linguagem, Centro de Estudos Egas Moniz, Faculty of Medicine, University of Lisbon, Hospital Santa Maria, Lisbon, Portugal
8.
Echoes on the motor network: how internal motor control structures afford sensory experience. Brain Struct Funct 2017; 222:3865-3888. [DOI: 10.1007/s00429-017-1484-1]
9.
Skipper JI, Devlin JT, Lametti DR. The hearing ear is always found close to the speaking tongue: Review of the role of the motor system in speech perception. Brain Lang 2017; 164:77-105. [PMID: 27821280] [DOI: 10.1016/j.bandl.2016.10.004]
Abstract
Does "the motor system" play "a role" in speech perception? If so, where, how, and when? We conducted a systematic review addressing these questions with both qualitative and quantitative methods. The qualitative review of behavioural, computational modelling, non-human animal, brain damage/disorder, electrical stimulation/recording, and neuroimaging research suggests that distributed brain regions involved in producing speech play specific, dynamic, and contextually determined roles in speech perception. The quantitative review employed region- and network-based neuroimaging meta-analyses and a novel text-mining method to describe the relative contributions of nodes in distributed brain networks. Supporting the qualitative review, results show a specific functional correspondence between regions involved in non-linguistic movement of the articulators, in covertly and overtly producing speech, and in the perception of both nonword and word sounds. This distributed set of cortical and subcortical speech production regions is ubiquitously active and forms multiple networks whose topologies dynamically change with listening context. Results are inconsistent with motor-only and acoustic-only models of speech perception and with classical and contemporary dual-stream models of the organization of language and the brain. Instead, they are more consistent with complex network models in which multiple speech-production-related networks and subnetworks dynamically self-organize to constrain the interpretation of indeterminate acoustic patterns as listening context requires.
Affiliation(s)
- Jeremy I Skipper
- Experimental Psychology, University College London, United Kingdom.
- Joseph T Devlin
- Experimental Psychology, University College London, United Kingdom
- Daniel R Lametti
- Experimental Psychology, University College London, United Kingdom; Department of Experimental Psychology, University of Oxford, United Kingdom
10.
Abstract
There is growing interest in whether the motor system plays an essential role in rhythm perception. The motor system is active during the perception of rhythms, but is such motor activity merely a sign of unexecuted motor planning, or does it play a causal role in shaping the perception of rhythm? We present evidence for a causal role of motor planning and simulation, and review theories of internal simulation for beat-based timing prediction. Brain stimulation studies have the potential to conclusively test whether the motor system plays a causal role in beat perception and to ground theories in their neural underpinnings.
Affiliation(s)
- Jessica M Ross
- Cognitive and Information Sciences, University of California, Merced, CA, USA
- John R Iversen
- Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California, San Diego, CA, USA
11.
Abstract
How are the meanings of words, events, and objects represented and organized in the brain? This question, perhaps more than any other in the field, probes some of the deepest and most foundational puzzles regarding the structure of the mind and brain. Accordingly, it has spawned a field of inquiry that is diverse and multidisciplinary, has led to the discovery of numerous empirical phenomena, and has spurred the development of a wide range of theoretical positions. This special issue brings together the most recent theoretical developments from the leaders in the field, representing a range of viewpoints on issues of fundamental significance to a theory of meaning representation. Here we introduce the special issue by way of pulling out some key themes that cut across the contributions that form this issue and situating those themes in the broader literature. The core issues around which research on conceptual representation can be organized are representational format, representational content, the organization of concepts in the brain, and the processing dynamics that govern interactions between the conceptual system and sensorimotor representations. We highlight areas in which consensus has formed; for those areas in which opinion is divided, we seek to clarify the relation of theory and evidence and to set in relief the bridging assumptions that undergird current discussions.
Affiliation(s)
- Bradford Z Mahon
- Department of Brain and Cognitive Sciences, University of Rochester, Meliora Hall, Rochester, NY, 14627-0268, USA.
- Department of Neurosurgery, University of Rochester, Rochester, NY, USA.
- Center for Visual Science, University of Rochester, Rochester, NY, USA.
- Center for Language Sciences, University of Rochester, Rochester, NY, USA.
- Gregory Hickok
- Department of Cognitive Sciences, University of California, Irvine, CA, USA
12.
No evidence of somatotopic place of articulation feature mapping in motor cortex during passive speech perception. Psychon Bull Rev 2015; 23:1231-1240. [DOI: 10.3758/s13423-015-0988-z]