1. Brain activity during shadowing of audiovisual cocktail party speech, contributions of auditory-motor integration and selective attention. Sci Rep 2022; 12:18789. [PMID: 36335137 PMCID: PMC9637225 DOI: 10.1038/s41598-022-22041-2]
Abstract
Selective listening to cocktail-party speech involves a network of auditory and inferior frontal cortical regions. However, cognitive and motor cortical regions are differentially activated depending on whether the task emphasizes semantic or phonological aspects of speech. Here we tested whether processing of cocktail-party speech differs when participants perform a shadowing (immediate speech repetition) task compared to an attentive listening task in the presence of irrelevant speech. Participants viewed audiovisual dialogues with concurrent distracting speech during functional imaging. Participants either attentively listened to the dialogue, overtly repeated (i.e., shadowed) the attended speech, or performed visual or speech motor control tasks in which they did not attend to speech and responses were unrelated to the speech input. Dialogues were presented with good or poor auditory and visual quality. As a novel result, we show that attentive processing of speech activated the same network of sensory and frontal regions during listening and shadowing. However, in the superior temporal gyrus (STG), peak activations during shadowing were posterior to those during listening, suggesting that an anterior-posterior distinction between motor and perceptual processing of speech is present already at the level of the auditory cortex. We also found that activations along the dorsal auditory processing stream were specifically associated with the shadowing task. These activations are likely due to complex interactions between perceptual, attention-dependent speech processing and motor speech generation that matches the heard speech. Our results suggest that interaction between perceptual and motor processing of speech relies on a distributed network of temporal and motor regions rather than on any specific anatomical landmark, as some previous studies have suggested.
2. Valeriani D, Simonyan K. The dynamic connectome of speech control. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200256. [PMID: 34482717 DOI: 10.1098/rstb.2020.0256]
Abstract
Speech production relies on the orchestrated control of multiple brain regions. However, the specific, directional influences within these networks remain poorly understood. We used regression dynamic causal modelling to infer whole-brain directed (effective) connectivity from functional magnetic resonance imaging data of 36 healthy individuals during the production of meaningful English sentences and meaningless syllables. We found that the two dynamic connectomes have distinct architectures that depend on the complexity of the production task. Sentence production was regulated by a dynamic neural network, the most influential nodes of which were centred around superior and inferior parietal areas and influenced whole-brain network activity via long-ranging coupling with primary sensorimotor, prefrontal, temporal and insular regions. By contrast, syllable production was controlled by a more compressed, cost-efficient network structure, involving sensorimotor cortico-subcortical integration via superior parietal and cerebellar network hubs. These data demonstrate the mechanisms by which the neural network reorganizes the connectivity of its influential regions, from supporting the fundamental aspects of simple syllabic vocal motor output to the multimodal information processing required for speech motor output. This article is part of the theme issue 'Vocal learning in animals and humans'.
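A minimal sketch of the intuition behind regression-based effective connectivity: each region's rate of signal change is regressed on the states of all regions, yielding an asymmetric, directed coupling matrix. This is only the core idea; the published rDCM method works in the frequency domain with Bayesian priors, and the data and region count below are hypothetical.

```python
import numpy as np

def directed_connectivity(ts, dt=1.0):
    """Estimate a directed coupling matrix A from regional time series.

    ts : (T, R) array of R regional fMRI time series.
    Solves dx/dt ~= A @ x by least squares, one target region at a time,
    which is the core intuition behind regression DCM (the published
    method additionally works in the frequency domain with priors).
    """
    x = ts[:-1]                       # states at time t
    dx = np.diff(ts, axis=0) / dt     # finite-difference derivative
    # Regress each region's derivative on all regional states
    coeffs, *_ = np.linalg.lstsq(x, dx, rcond=None)
    return coeffs.T                   # A[i, j]: influence of region j on region i

# Hypothetical example: 3 regions, 200 volumes
rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 3)).cumsum(axis=0)
A = directed_connectivity(ts)
print(np.round(A, 2))
```

Because the resulting matrix is not constrained to be symmetric, it can express directional influences of the kind the study reports, unlike an ordinary correlation matrix.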
Affiliation(s)
- Davide Valeriani
- Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear, 243 Charles Street, Boston, MA 02114, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, 243 Charles Street, Boston, MA 02114, USA
- Kristina Simonyan
- Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear, 243 Charles Street, Boston, MA 02114, USA; Department of Otolaryngology-Head and Neck Surgery, Harvard Medical School, 243 Charles Street, Boston, MA 02114, USA; Department of Neurology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA 02114, USA
3. Al Dahhan NZ, Kirby JR, Chen Y, Brien DC, Munoz DP. Examining the neural and cognitive processes that underlie reading through naming speed tasks. Eur J Neurosci 2020; 51:2277-2298. [PMID: 31912932 DOI: 10.1111/ejn.14673]
Abstract
We combined fMRI with eye tracking and speech recording to examine the neural and cognitive mechanisms that underlie reading. To simplify the study of the complex processes involved during reading, we used naming speed (NS) tasks (also known as rapid automatized naming or RAN) as a focus for this study, in which right-handed adults with average reading ability named sets of stimuli (letters or objects) as quickly and accurately as possible. Because spoken output during fMRI can create motion artifacts, we employed both an overt session and a covert session. When comparing the two sessions, there were no significant differences in behavioral performance, sensorimotor activation (except for regions involved in the motor aspects of speech production) or activation in regions within the left-hemisphere-dominant neural reading network. This established that differences found between the tasks within the reading network were not attributable to speech production motion artifacts or sensorimotor processes. Both behavioral and neuroimaging measures showed that letter naming was a more automatic and efficient task than object naming. Furthermore, specific manipulations of the NS tasks that made the stimuli more visually and/or phonologically similar differentially activated the left-hemisphere reading network associated with phonological, orthographic and orthographic-to-phonological processing, but not articulatory/motor processing related to speech production. These findings further our understanding of the neural processes that support reading by examining how activation within the reading network differs with both task performance and task characteristics.
Affiliation(s)
- Noor Z Al Dahhan
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- John R Kirby
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada; Faculty of Education, Queen's University, Kingston, ON, Canada
- Ying Chen
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- Donald C Brien
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- Douglas P Munoz
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada; Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, Canada
4. That does not sound right: Sounds affect visual ERPs during a piano sight-reading task. Behav Brain Res 2019; 367:1-9. [DOI: 10.1016/j.bbr.2019.03.037]
5. Interaction of the effects associated with auditory-motor integration and attention-engaging listening tasks. Neuropsychologia 2019; 124:322-336. [PMID: 30444980 DOI: 10.1016/j.neuropsychologia.2018.11.006]
Abstract
A number of previous studies have implicated regions in posterior auditory cortex (AC) in auditory-motor integration during speech production. Other studies, in turn, have shown that activation in AC and adjacent regions in the inferior parietal lobule (IPL) is strongly modulated during active listening and depends on task requirements. The present fMRI study investigated whether auditory-motor effects interact with those related to active listening tasks in AC and IPL. In separate task blocks, our subjects performed either auditory discrimination or 2-back memory tasks on phonemic or nonphonemic vowels. They responded to targets by either overtly repeating the last vowel of a target pair, overtly producing a given response vowel, or by pressing a response button. We hypothesized that the requirements for auditory-motor integration, and the associated activation, would be stronger during repetition than production responses and during repetition of nonphonemic than phonemic vowels. We also hypothesized that if auditory-motor effects are independent of task-dependent modulations, then the auditory-motor effects should not differ during discrimination and 2-back tasks. We found that activation in AC and IPL was significantly modulated by task (discrimination vs. 2-back), vocal-response type (repetition vs. production), and motor-response type (vocal vs. button). Motor-response and task effects interacted in IPL but not in AC. Overall, the results support the view that regions in posterior AC are important in auditory-motor integration. However, the present study shows that activation in wide AC and IPL regions is modulated by the motor requirements of active listening tasks in a more general manner. Further, the results suggest that activation modulations in AC associated with attention-engaging listening tasks and those associated with auditory-motor performance are mediated by independent mechanisms.
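The task-by-response interaction analysis reported above can be illustrated with a repeated-measures ANOVA on ROI activations. The sketch below uses simulated betas and hypothetical factor levels; it shows the style of test, not the study's actual pipeline.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical ROI betas: 20 subjects x 2 tasks x 2 motor-response types
rng = np.random.default_rng(1)
rows = []
for subj in range(20):
    for task in ("discrimination", "2-back"):
        for resp in ("vocal", "button"):
            beta = rng.normal(loc=1.0 if resp == "vocal" else 0.5)
            rows.append({"subject": subj, "task": task,
                         "response": resp, "beta": beta})
df = pd.DataFrame(rows)

# Within-subject ANOVA: a significant task:response term would indicate
# the kind of interaction reported in IPL (but not in AC)
res = AnovaRM(df, depvar="beta", subject="subject",
              within=["task", "response"]).fit()
print(res.anova_table)
```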
6. Tanaka S, Kirino E. Dynamic Reconfiguration of the Supplementary Motor Area Network during Imagined Music Performance. Front Hum Neurosci 2017; 11:606. [PMID: 29311870 PMCID: PMC5732967 DOI: 10.3389/fnhum.2017.00606]
Abstract
The supplementary motor area (SMA) has been shown to be the center for motor planning and is active during music listening and performance. However, limited data exist on the role of the SMA in music. Music performance requires complex information processing in auditory, visual, spatial, emotional, and motor domains, and this information is integrated for the performance. We hypothesized that the SMA is engaged in multimodal integration of information distributed across several regions of the brain to prepare for ongoing music performance. To test this hypothesis, functional networks involving the SMA were extracted from functional magnetic resonance imaging (fMRI) data acquired from musicians during imagined music performance and during the resting state. Compared with the resting condition, imagined music performance increased connectivity of the SMA with widespread regions in the brain, including the sensorimotor cortices, parietal cortex, posterior temporal cortex, occipital cortex, and inferior and dorsolateral prefrontal cortex. Increased connectivity of the SMA with the dorsolateral prefrontal cortex suggests that the SMA is under cognitive control, while increased connectivity with the inferior prefrontal cortex suggests the involvement of syntax processing. Increased connectivity with the parietal cortex, posterior temporal cortex, and occipital cortex likely reflects the integration of spatial, emotional, and visual information. Finally, increased connectivity with the sensorimotor cortices potentially reflects the translation of planning into motor programs. The reconfiguration of the SMA network observed in this study is therefore considered to reflect the multimodal integration required for imagined and actual music performance. We propose that the SMA network constructs “the internal representation of music performance” by integrating the multimodal information required for the performance.
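A sketch of the kind of seed-based connectivity contrast described above, using nilearn. The file names, seed coordinate, and sphere radius are assumptions for illustration, not values from the study.

```python
import numpy as np
from nilearn.maskers import NiftiMasker, NiftiSpheresMasker

# Hypothetical preprocessed 4D images for the two conditions
rest_img = "sub01_rest.nii.gz"
task_img = "sub01_imagined_performance.nii.gz"
sma_seed = [(0, -4, 58)]  # approximate MNI coordinate for the SMA (assumption)

seed_masker = NiftiSpheresMasker(seeds=sma_seed, radius=8, standardize=True)
brain_masker = NiftiMasker(standardize=True)  # a common brain mask is assumed

def sma_connectivity(img):
    """Voxelwise correlation of every voxel with the SMA seed time course."""
    seed_ts = seed_masker.fit_transform(img)    # (n_scans, 1)
    brain_ts = brain_masker.fit_transform(img)  # (n_scans, n_voxels)
    # Both series are z-scored, so the scaled dot product is a correlation
    return np.dot(brain_ts.T, seed_ts).ravel() / seed_ts.shape[0]

# Contrast of interest: where does SMA coupling increase during
# imagined performance relative to rest?
delta = sma_connectivity(task_img) - sma_connectivity(rest_img)
```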
Affiliation(s)
- Shoji Tanaka
- Department of Information and Communication Sciences, Sophia University, Tokyo, Japan
- Eiji Kirino
- Department of Psychiatry, School of Medicine, Juntendo University, Tokyo, Japan; Department of Psychiatry, Juntendo Shizuoka Hospital, Shizuoka, Japan
7. Venezia JH, Fillmore P, Matchin W, Isenberg AL, Hickok G, Fridriksson J. Perception drives production across sensory modalities: A network for sensorimotor integration of visual speech. Neuroimage 2016; 126:196-207. [PMID: 26608242 PMCID: PMC4733636 DOI: 10.1016/j.neuroimage.2015.11.038]
Abstract
Sensory information is critical for movement control, both for defining the targets of actions and providing feedback during planning or ongoing movements. This holds for speech motor control as well, where both auditory and somatosensory information have been shown to play a key role. Recent clinical research demonstrates that individuals with severe speech production deficits can show a dramatic improvement in fluency during online mimicking of an audiovisual speech signal, suggesting the existence of a visuomotor pathway for speech motor control. Here we used fMRI in healthy individuals to identify this new visuomotor circuit for speech production. Participants were asked to perceive and covertly rehearse nonsense syllable sequences presented auditorily, visually, or audiovisually. The motor act of rehearsal, which is prima facie the same whether or not it is cued with a visible talker, produced different patterns of sensorimotor activation when cued by visual or audiovisual speech (relative to auditory speech). In particular, a network of brain regions including the left posterior middle temporal gyrus and several frontoparietal sensorimotor areas activated more strongly during rehearsal cued by a visible talker versus rehearsal cued by auditory speech alone. Some of these brain regions responded exclusively to rehearsal cued by visual or audiovisual speech. This result has significant implications for models of speech motor control, for the treatment of speech output disorders, and for models of the role of speech gesture imitation in development.
Affiliation(s)
- Jonathan H Venezia
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA 92697, United States
- Paul Fillmore
- Department of Communication Sciences and Disorders, Baylor University, Waco, TX 76798, United States
- William Matchin
- Department of Linguistics, University of Maryland, College Park, MD 20742, United States
- A Lisette Isenberg
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA 92697, United States
- Gregory Hickok
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA 92697, United States
- Julius Fridriksson
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, United States
8. Kent RD. Nonspeech Oral Movements and Oral Motor Disorders: A Narrative Review. Am J Speech Lang Pathol 2015; 24:763-789. [PMID: 26126128 PMCID: PMC4698470 DOI: 10.1044/2015_ajslp-14-0179]
Abstract
PURPOSE: Speech and other oral functions such as swallowing have been compared and contrasted with oral behaviors variously labeled quasispeech, paraspeech, speechlike, and nonspeech, all of which overlap to some degree in neural control, muscles deployed, and movements performed. Efforts to understand the relationships among these behaviors are hindered by the lack of explicit and widely accepted definitions. This review article offers definitions and taxonomies for nonspeech oral movements and for diverse speaking tasks, both overt and covert. METHOD: Review of the literature included searches of Medline, Google Scholar, HighWire Press, and various online sources. Search terms pertained to speech, quasispeech, paraspeech, speechlike, and nonspeech oral movements. Searches also were carried out for associated terms in oral biology, craniofacial physiology, and motor control. RESULTS AND CONCLUSIONS: Nonspeech movements have a broad spectrum of clinical applications, including developmental speech and language disorders, motor speech disorders, feeding and swallowing difficulties, obstructive sleep apnea syndrome, trismus, and tardive stereotypies. The role and benefit of nonspeech oral movements are controversial in many oral motor disorders. It is argued that the clinical value of these movements can be elucidated through careful definitions and task descriptions such as those proposed in this review article.
Affiliation(s)
- Ray D. Kent
- Waisman Center, University of Wisconsin–Madison
9. Braga RM, Leech R. Echoes of the Brain: Local-Scale Representation of Whole-Brain Functional Networks within Transmodal Cortex. Neuroscientist 2015; 21:540-551. [PMID: 25948648 PMCID: PMC4586496 DOI: 10.1177/1073858415585730]
Abstract
Transmodal (nonsensory-specific) regions sit at the confluence of different information streams and play an important role in cognition. These regions are thought to receive and integrate information from multiple functional networks. However, little is known about (1) how transmodal cortices are functionally organized and (2) how this organization might facilitate information processing. In this article, we discuss recent findings that transmodal cortices contain a detailed local functional architecture (LFA) of adjacent and partially overlapping subregions. These subregions show relative specializations, and contain traces or "echoes" of the activity of different large-scale intrinsic connectivity networks. We propose that this finer-grained organization can (1) explain how the same transmodal region can play a role in multiple tasks and cognitive disorders, (2) provide a mechanism by which different types of signals can be simultaneously segregated and integrated within transmodal regions, and (3) enhance current network- and node-level models of brain function, by showing that non-stationary functional connectivity patterns may be a result of dynamic shifts in subnodal signals. Finally, we propose that the LFA may have an important role in regulating neural dynamics and facilitating balanced activity across the cortex to enable efficient and flexible high-level cognition.
Affiliation(s)
- Rodrigo M Braga
- The Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Hammersmith Hospital Campus, Imperial College London, London, UK; Center for Brain Science, Harvard University, Cambridge, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital & Harvard Medical School, Charlestown, MA, USA
- Robert Leech
- The Computational, Cognitive and Clinical Neuroimaging Laboratory, Division of Brain Sciences, Hammersmith Hospital Campus, Imperial College London, London, UK
10. Fuertinger S, Horwitz B, Simonyan K. The Functional Connectome of Speech Control. PLoS Biol 2015; 13:e1002209. [DOI: 10.1371/journal.pbio.1002209]
Abstract
In the past few years, several studies have been directed to understanding the complexity of functional interactions between different brain regions during various human behaviors. Among these, neuroimaging research established the notion that speech and language require an orchestration of brain regions for comprehension, planning, and integration of a heard sound with a spoken word. However, these studies have been largely limited to mapping the neural correlates of separate speech elements and examining distinct cortical or subcortical circuits involved in different aspects of speech control. As a result, the complexity of the brain network machinery controlling speech and language remained largely unknown. Using graph theoretical analysis of functional MRI (fMRI) data in healthy subjects, we quantified the large-scale speech network topology by constructing functional brain networks of increasing hierarchy from the resting state to motor output of meaningless syllables to complex production of real-life speech, and compared these to non-speech-related sequential finger tapping and pure tone discrimination networks. We identified a segregated network of highly connected local neural communities (hubs) in the primary sensorimotor and parietal regions, which formed a commonly shared core hub network across the examined conditions, with the left area 4p playing an important role in speech network organization. These sensorimotor core hubs exhibited features of flexible hubs, based on their participation in several functional domains across different networks and their ability to adaptively switch long-range functional connectivity depending on task content, resulting in a distinct community structure for each examined network. Specifically, compared to other tasks, speech production was characterized by the formation of six distinct neural communities with specialized recruitment of the prefrontal cortex, insula, putamen, and thalamus, which collectively forged the functional speech connectome. In addition, the observed capacity of the primary sensorimotor cortex to exhibit operational heterogeneity challenges the established concept of unimodality of this region.
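A toy example of the graph-theoretical summary the study describes: build a graph from a thresholded correlation matrix, detect communities, and score connector ("flexible") hubs with the participation coefficient. The threshold and simulated data are assumptions, not the study's pipeline.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities

rng = np.random.default_rng(2)
ts = rng.standard_normal((240, 90))  # hypothetical: 240 volumes, 90 regions
corr = np.corrcoef(ts.T)

# Keep the strongest 10% of connections (arbitrary illustrative threshold)
thresh = np.percentile(corr[np.triu_indices_from(corr, k=1)], 90)
adj = (corr > thresh).astype(int)
np.fill_diagonal(adj, 0)
G = nx.from_numpy_array(adj)

communities = louvain_communities(G, seed=0)
membership = {n: ci for ci, com in enumerate(communities) for n in com}

def participation_coefficient(G, membership):
    """P_i = 1 - sum_c (k_ic / k_i)^2; high values mark connector hubs
    whose edges are spread across many communities."""
    pc = {}
    for n in G:
        k = G.degree(n)
        if k == 0:
            pc[n] = 0.0
            continue
        counts = {}
        for nb in G[n]:
            counts[membership[nb]] = counts.get(membership[nb], 0) + 1
        pc[n] = 1.0 - sum((c / k) ** 2 for c in counts.values())
    return pc

pc = participation_coefficient(G, membership)
hubs = sorted(pc, key=pc.get, reverse=True)[:5]
print(f"{len(communities)} communities; top connector nodes: {hubs}")
```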
Affiliation(s)
- Stefan Fuertinger
- Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America
- Barry Horwitz
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, Maryland, United States of America
- Kristina Simonyan
- Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America; Department of Otolaryngology, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America
11. A trade-off between somatosensory and auditory related brain activity during object naming but not reading. J Neurosci 2015; 35:4751-4759. [PMID: 25788691 DOI: 10.1523/jneurosci.2292-14.2015]
Abstract
The parietal operculum, particularly the cytoarchitectonic area OP1 of the secondary somatosensory area (SII), is involved in somatosensory feedback. Using fMRI with 58 human subjects, we investigated task-dependent differences in SII/OP1 activity during three familiar speech production tasks: object naming, reading and repeatedly saying "1-2-3." Bilateral SII/OP1 was significantly suppressed (relative to rest) during object naming, to a lesser extent when repeatedly saying "1-2-3," and not at all during reading. These results cannot be explained by task difficulty, but the contrast between naming and reading illustrates how the demands on somatosensory activity change with task, even when motor output (i.e., production of object names) is matched. To investigate what determined SII/OP1 deactivation during object naming, we searched the whole brain for areas where activity increased as that in SII/OP1 decreased. This across-subject covariance analysis revealed a region in the right superior temporal sulcus (STS) that lies within the auditory cortex and is activated by auditory feedback during speech production. The trade-off between activity in SII/OP1 and STS was not observed during reading, which showed significantly more activation than naming in both SII/OP1 and STS bilaterally. These findings suggest that, although object naming is more error prone than reading, subjects can afford to rely more or less on somatosensory or auditory feedback during naming. In contrast, fast and efficient error-free reading places more consistent demands on both types of feedback, perhaps because of the potential for increased competition between lexical and sublexical codes at the articulatory level.
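The across-subject covariance analysis can be sketched as a correlation between a per-subject ROI summary and every voxel's contrast value; strongly negative voxels are candidates for the reported SII/OP1-STS trade-off. Shapes and names below are hypothetical.

```python
import numpy as np

n_subj, n_vox = 58, 50000
rng = np.random.default_rng(3)
voxel_betas = rng.standard_normal((n_subj, n_vox))  # stand-in naming-contrast maps
sii_op1 = voxel_betas[:, :100].mean(axis=1)         # stand-in SII/OP1 ROI summary

# Across-subject covariance: correlate the ROI summary with every voxel
z = (voxel_betas - voxel_betas.mean(0)) / voxel_betas.std(0)
r = z.T @ ((sii_op1 - sii_op1.mean()) / sii_op1.std()) / n_subj

# Candidate "trade-off" voxels: activity rises as SII/OP1 activity falls
tradeoff_voxels = np.argsort(r)[:10]
print(tradeoff_voxels)
```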
12. Vaden KI, Kuchinsky SE, Ahlstrom JB, Dubno JR, Eckert MA. Cortical activity predicts which older adults recognize speech in noise and when. J Neurosci 2015; 35:3929-3937. [PMID: 25740521 PMCID: PMC4348188 DOI: 10.1523/jneurosci.2908-14.2015]
Abstract
Speech recognition in noise can be challenging for older adults and elicits elevated activity throughout a cingulo-opercular network that is hypothesized to monitor and modify behaviors to optimize performance. A word recognition in noise experiment was used to test the hypothesis that cingulo-opercular engagement provides performance benefit for older adults. Healthy older adults (N = 31; 50-81 years of age; mean pure tone thresholds <32 dB HL from 0.25 to 8 kHz, best ear; species: human) performed word recognition in multitalker babble at 2 signal-to-noise ratios (SNR = +3 or +10 dB) during a sparse sampling fMRI experiment. Elevated cingulo-opercular activity was associated with an increased likelihood of correct recognition on the following trial independently of SNR and performance on the preceding trial. The cingulo-opercular effect increased for participants with the best overall performance. These effects were lower for older adults compared with a younger, normal-hearing adult sample (N = 18). Visual cortex activity also predicted trial-level recognition for the older adults, which resulted from discrete decreases in activity before errors and occurred for the oldest adults with the poorest recognition. Participants demonstrating larger visual cortex effects also had reduced fractional anisotropy in an anterior portion of the left inferior frontal-occipital fasciculus, which projects between frontal and occipital regions where activity predicted word recognition. Together, the results indicate that older adults experience performance benefit from elevated cingulo-opercular activity, but not to the same extent as younger adults, and that declines in attentional control can limit word recognition.
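The trial-level prediction logic can be sketched as a logistic regression in which pre-trial cingulo-opercular activity predicts recognition of the next word, with SNR and previous-trial accuracy as covariates. The data are simulated and the model is a simplification of the authors' trial-level analysis.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_trials = 1200
cingulo_opercular = rng.standard_normal(n_trials)  # activity preceding each trial
snr = rng.choice([3.0, 10.0], size=n_trials)       # +3 or +10 dB, as in the study
prev_correct = rng.integers(0, 2, size=n_trials)

# Simulate the reported effect: higher pre-trial activity -> more correct trials
logit = 0.6 * cingulo_opercular + 0.15 * (snr - 6.5)
correct = rng.random(n_trials) < 1 / (1 + np.exp(-logit))

X = np.column_stack([cingulo_opercular, snr, prev_correct])
model = LogisticRegression().fit(X, correct)
print("coefficients (CO activity, SNR, previous trial):", model.coef_.round(2))
```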
Affiliation(s)
- Kenneth I Vaden
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina 29425
- Stefanie E Kuchinsky
- Center for Advanced Study of Language, University of Maryland, College Park, Maryland 20742
- Jayne B Ahlstrom
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina 29425
- Judy R Dubno
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina 29425
- Mark A Eckert
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina 29425
13. Simonyan K, Fuertinger S. Speech networks at rest and in action: interactions between functional brain networks controlling speech production. J Neurophysiol 2015; 113:2967-2978. [PMID: 25673742 DOI: 10.1152/jn.00964.2014]
Abstract
Speech production is one of the most complex human behaviors. Although brain activation during speaking has been well investigated, our understanding of interactions between the brain regions and neural networks remains limited. We combined seed-based interregional correlation analysis with graph theoretical analysis of functional MRI data during the resting state and sentence production in healthy subjects to investigate the interface and topology of functional networks originating from the key brain regions controlling speech, i.e., the laryngeal/orofacial motor cortex, inferior frontal and superior temporal gyri, supplementary motor area, cingulate cortex, putamen, and thalamus. During both resting and speaking, the interactions between these networks were bilaterally distributed and centered on the sensorimotor brain regions. However, speech production preferentially recruited the inferior parietal lobule (IPL) and cerebellum into the large-scale network, suggesting the importance of these regions in facilitating the transition from the resting state to speaking. Furthermore, the cerebellum (lobule VI) was the most prominent region showing functional influences on speech-network integration and segregation. Although networks were bilaterally distributed, interregional connectivity during speaking was stronger in the left vs. right hemisphere, which may underlie the more homogeneous overlap between the examined networks in the left hemisphere. Among these, the laryngeal motor cortex (LMC) established a core network that fully overlapped with all other speech-related networks, determining the extent of network interactions. Our data demonstrate complex interactions of large-scale brain networks controlling speech production and point to the critical role of the LMC, IPL, and cerebellum in the formation of the speech production network.
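One simple way to quantify the reported overlap between seed-based networks is a Dice coefficient on thresholded, binarized maps; a core network such as the LMC's should show high overlap with every other network. The maps below are simulated, and this style of overlap measure is an assumption rather than the authors' exact method.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binarized network maps (1 = identical support)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

rng = np.random.default_rng(5)
n_vox = 20000
# Hypothetical thresholded seed networks (True = voxel belongs to the network)
networks = {name: rng.random(n_vox) < p
            for name, p in [("LMC", 0.30), ("IFG", 0.15),
                            ("STG", 0.15), ("SMA", 0.10)]}

for name, net in networks.items():
    if name != "LMC":
        print(f"LMC vs {name}: Dice = {dice(networks['LMC'], net):.2f}")
```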
Affiliation(s)
- Kristina Simonyan
- Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, New York; Department of Otolaryngology, Icahn School of Medicine at Mount Sinai, New York, New York
- Stefan Fuertinger
- Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, New York
14. Geranmayeh F, Leech R, Wise RJS. Semantic retrieval during overt picture description: Left anterior temporal or the parietal lobe? Neuropsychologia 2014; 76:125-135. [PMID: 25497693 PMCID: PMC4582804 DOI: 10.1016/j.neuropsychologia.2014.12.012]
Abstract
Retrieval of semantic representations is a central process during overt speech production. There is an increasing consensus that an amodal semantic 'hub' must exist that draws together modality-specific representations of concepts. Based on the distribution of atrophy and the behavioral deficits of patients with the semantic variant of fronto-temporal lobar degeneration, it has been proposed that this hub is localized within both anterior temporal lobes (ATL) and is functionally connected with verbal 'output' systems via the left ATL. An alternative view, dating from Geschwind's proposal in 1965, is that the angular gyrus (AG) is central to object-based semantic representations. In this fMRI study we examined the connectivity of the left ATL and parietal lobe (PL) with whole-brain networks known to be activated during overt picture description. We decomposed each of these two brain volumes into 15 regions of interest (ROIs), using independent component analysis. A dual regression analysis was used to establish the connectivity of each ROI with whole-brain networks. An ROI within the left anterior superior temporal sulcus (antSTS) was functionally connected to other parts of the left ATL, including anterior ventromedial left temporal cortex (partially attenuated by signal loss due to susceptibility artifact), a large left dorsolateral prefrontal region (including 'classic' Broca's area), extensive bilateral sensory-motor cortices, and the length of both superior temporal gyri. The time-course of this functionally connected network was associated with picture description but not with non-semantic baseline tasks. This system has the distribution expected for the production of overt speech with appropriate semantic content and the auditory monitoring of the overt speech output. In contrast, the only left PL ROI that showed connectivity with the brain systems most strongly activated by the picture-description task was in the superior parietal lobe (supPL). This region showed connectivity with predominantly posterior cortical regions required for the visual processing of the pictorial stimuli, with additional connectivity to the dorsal left AG and a small component of the left inferior frontal gyrus. None of the other PL ROIs that included part of the left AG were activated by Speech alone. The best interpretation of these results is that the left antSTS connects the proposed semantic hub (specifically localized to ventral anterior temporal cortex based on clinical neuropsychological studies) to the posterior frontal regions and sensory-motor cortices responsible for the overt production of speech.
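Dual regression, as used above, amounts to two least-squares stages: group ICA spatial maps are regressed against a subject's data to obtain subject-specific time courses, which are then regressed back onto the data to obtain subject-specific spatial maps. A minimal sketch with hypothetical array sizes (15 components, as in the study):

```python
import numpy as np

def dual_regression(data, group_maps):
    """Two-stage dual regression.

    data       : (T, V) one subject's time x voxel matrix
    group_maps : (C, V) group-ICA spatial components
    Returns subject-specific time courses (T, C) and spatial maps (C, V).
    """
    # Stage 1: spatial regression -> one time course per component
    tcs, *_ = np.linalg.lstsq(group_maps.T, data.T, rcond=None)  # (C, T)
    tcs = tcs.T
    # Stage 2: temporal regression -> subject-level spatial maps
    maps, *_ = np.linalg.lstsq(tcs, data, rcond=None)            # (C, V)
    return tcs, maps

rng = np.random.default_rng(6)
data = rng.standard_normal((200, 10000))       # hypothetical subject data
group_maps = rng.standard_normal((15, 10000))  # 15 ICA components
tcs, subj_maps = dual_regression(data, group_maps)
```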
Affiliation(s)
- Fatemeh Geranmayeh
- Computational Cognitive and Clinical Neuroimaging Laboratory, Imperial College, Hammersmith Hospital, London W12 0NN, UK
- Robert Leech
- Computational Cognitive and Clinical Neuroimaging Laboratory, Imperial College, Hammersmith Hospital, London W12 0NN, UK
- Richard J S Wise
- Computational Cognitive and Clinical Neuroimaging Laboratory, Imperial College, Hammersmith Hospital, London W12 0NN, UK
15. Sensory-motor integration during speech production localizes to both left and right plana temporale. J Neurosci 2014; 34:12963-12972. [PMID: 25253845 DOI: 10.1523/jneurosci.0336-14.2014]
Abstract
Speech production relies on fine voluntary motor control of respiration, phonation, and articulation. The cortical initiation of complex sequences of coordinated movements is thought to result in parallel outputs, one directed toward motor neurons while the "efference copy" projects to auditory and somatosensory fields. It is proposed that the latter encodes the expected sensory consequences of speech and compares expected with actual postarticulatory sensory feedback. Previous functional neuroimaging evidence has indicated that the cortical target for the merging of feedforward motor and feedback sensory signals is left-lateralized and lies at the junction of the supratemporal plane with the parietal operculum, located mainly in the posterior half of the planum temporale (PT). The design of these studies required participants to imagine speaking or generating nonverbal vocalizations in response to external stimuli. The resulting assumption is that verbal and nonverbal vocal motor imagery activates neural systems that integrate the sensory-motor consequences of speech, even in the absence of primary motor cortical activity or sensory feedback. The present human functional magnetic resonance imaging study used univariate and multivariate analyses to investigate both overt and covert (internally generated) propositional and nonpropositional speech (noun definition and counting, respectively). Activity in response to overt, but not covert, speech was present in bilateral anterior PT, with no increased activity observed in posterior PT or parietal opercula for either speech type. On this evidence, the response of the left and right anterior PTs better fulfills the criteria for sensory target and state maps during overt speech production.
16. Geranmayeh F, Wise RJS, Mehta A, Leech R. Overlapping networks engaged during spoken language production and its cognitive control. J Neurosci 2014; 34:8728-8740.
Abstract
Spoken language production is a complex brain function that relies on large-scale networks. These include domain-specific networks that mediate language-specific processes, as well as domain-general networks mediating top-down and bottom-up attentional control. Language control is thought to involve a left-lateralized fronto-temporal-parietal (FTP) system. However, these regions do not always activate for language tasks, and similar regions have been implicated in nonlinguistic cognitive processes. These inconsistent findings suggest either that the left FTP is involved in multidomain cognitive control or that there are multiple spatially overlapping FTP systems. We present evidence from an fMRI study using multivariate analysis to identify spatiotemporal networks involved in spoken language production in humans. We compared spoken language production (Speech) with multiple baselines, counting (Count), nonverbal decision (Decision), and "rest," to pull apart the multiple partially overlapping networks that are involved in speech production. A left-lateralized FTP network was activated during Speech and deactivated during Count and nonverbal Decision trials, implicating it in cognitive control specific to sentential spoken language production. A mirror right-lateralized FTP network was activated during the Count and Decision trials, but not during Speech. Importantly, a second overlapping left FTP network showed relative deactivation during Speech. These three networks, with distinct time courses, overlapped in the left parietal lobe. Contrary to the standard model of the left FTP as being dominant for speech, we revealed a more complex pattern within the left FTP, including at least two left FTP networks with competing functional roles, only one of which was activated in speech production.
17. Jenson D, Bowers AL, Harkrider AW, Thornton D, Cuellar M, Saltuklaroglu T. Temporal dynamics of sensorimotor integration in speech perception and production: independent component analysis of EEG data. Front Psychol 2014; 5:656. [PMID: 25071633 PMCID: PMC4091311 DOI: 10.3389/fpsyg.2014.00656]
Abstract
Activity in anterior sensorimotor regions is found in speech production and some perception tasks. Yet, how sensorimotor integration supports these functions is unclear due to a lack of data examining the timing of activity from these regions. Beta (~20 Hz) and alpha (~10 Hz) spectral power within the EEG μ rhythm are considered indices of motor and somatosensory activity, respectively. In the current study, perception conditions required discrimination (same/different) of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required covert and overt syllable productions and overt word production. Independent component analysis was performed on EEG data obtained during these conditions to (1) identify clusters of μ components common to all conditions and (2) examine real-time event-related spectral perturbations (ERSP) within alpha and beta bands. Seventeen and 15 of the 20 participants produced left and right μ-components, respectively, localized to precentral gyri. Discrimination conditions were characterized by significant (pFDR < 0.05) early alpha event-related synchronization (ERS) prior to and during stimulus presentation and later alpha event-related desynchronization (ERD) following stimulus offset. Beta ERD began early and gained strength across time. Differences were found between quiet and noisy discrimination conditions. Both overt syllable and word productions yielded similar alpha/beta ERD that began prior to production and was strongest during muscle activity. Findings during covert production were weaker than during overt production. One explanation for these findings is that μ-beta ERD indexes early predictive coding (e.g., internal modeling) and/or overt and covert attentional/motor processes. μ-alpha ERS may index inhibitory input to the premotor cortex from sensory regions prior to and during discrimination, while μ-alpha ERD may index sensory feedback during speech rehearsal and production.
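A sketch of the ICA-plus-ERSP pipeline described above, using MNE-Python. The file name, component count, ICA variant, and baseline window are assumptions; the study's EEG processing details may differ.

```python
import numpy as np
import mne
from mne.preprocessing import ICA
from mne.time_frequency import tfr_array_morlet

# Hypothetical epoched EEG around syllable onset
epochs = mne.read_epochs("speech_task-epo.fif")  # assumed file name

# Decompose into independent components (extended Infomax is an assumption)
ica = ICA(n_components=20, method="infomax",
          fit_params=dict(extended=True), random_state=0)
ica.fit(epochs)
sources = ica.get_sources(epochs).get_data()  # (n_epochs, n_comp, n_times)

# Spectral power at mu-band alpha (~10 Hz) and beta (~20 Hz)
freqs = np.array([10.0, 20.0])
power = tfr_array_morlet(sources, sfreq=epochs.info["sfreq"],
                         freqs=freqs, n_cycles=freqs / 2.0, output="power")

# ERSP as decibel change from a pre-stimulus baseline
baseline = power[..., :100].mean(axis=-1, keepdims=True)  # first 100 samples (assumption)
ersp = 10 * np.log10(power / baseline).mean(axis=0)       # (n_comp, n_freqs, n_times)
```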
Affiliation(s)
- David Jenson
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
- Andrew L. Bowers
- Department of Communication Disorders, University of Arkansas, Fayetteville, AR, USA
- Ashley W. Harkrider
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
- David Thornton
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
- Megan Cuellar
- Speech-Language Pathology Program, College of Health Sciences, Midwestern University, Chicago, IL, USA
- Tim Saltuklaroglu
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
18. Simmonds AJ, Leech R, Iverson P, Wise RJS. The response of the anterior striatum during adult human vocal learning. J Neurophysiol 2014; 112:792-801. [PMID: 24805076 DOI: 10.1152/jn.00901.2013]
Abstract
Research on mammals predicts that the anterior striatum is a central component of human motor learning. However, because vocalizations in most mammals are innate, much of the neurobiology of human vocal learning has been inferred from studies on songbirds. Essential for song learning is a pathway, the homolog of mammalian cortical-basal ganglia "loops," which includes the avian striatum. The present functional magnetic resonance imaging (fMRI) study investigated adult human vocal learning, a skill that persists throughout life, albeit imperfectly, given that late-acquired languages are spoken with an accent. Monolingual adult participants were scanned while repeating novel non-native words. After training on the pronunciation of half the words for 1 wk, participants underwent a second scan. During scanning there was no external feedback on performance. Activity declined sharply in the left and right anterior striatum, both within and between scanning sessions, and this change was independent of training and performance. This indicates that adult speakers rapidly adapt to the novel articulatory movements, possibly by using motor sequences from their native speech to approximate those required for the novel speech sounds. Improved accuracy correlated only with activity in motor-sensory perisylvian cortex. We propose that future studies on vocal learning, using different behavioral and pharmacological manipulations, will provide insights into adult striatal plasticity and its potential for modification in both educational and clinical contexts.
Affiliation(s)
- Anna J Simmonds
- Computational, Cognitive and Clinical Neuroimaging Laboratory (C3NL), Division of Brain Sciences, Imperial College London, United Kingdom
- Robert Leech
- Computational, Cognitive and Clinical Neuroimaging Laboratory (C3NL), Division of Brain Sciences, Imperial College London, United Kingdom
- Paul Iverson
- Department of Speech, Hearing and Phonetic Sciences, Division of Psychology and Language Sciences, University College London, United Kingdom
- Richard J S Wise
- Computational, Cognitive and Clinical Neuroimaging Laboratory (C3NL), Division of Brain Sciences, Imperial College London, United Kingdom