1
Zhang YS, Ghazanfar AA. A Hierarchy of Autonomous Systems for Vocal Production. Trends Neurosci 2020; 43:115-126. [PMID: 31955902] [PMCID: PMC7213988] [DOI: 10.1016/j.tins.2019.12.006]
Abstract
Vocal production is hierarchical in the time domain. These hierarchies build upon biomechanical and neural dynamics across various timescales. We review studies in marmoset monkeys, songbirds, and other vertebrates. To organize these data in an accessible and across-species framework, we interpret the different timescales of vocal production as belonging to different levels of an autonomous systems hierarchy. The first level accounts for vocal acoustics produced on short timescales; subsequent levels account for longer timescales of vocal output. The hierarchy of autonomous systems that we put forth accounts for vocal patterning, sequence generation, dyadic interactions, and context dependence by sequentially incorporating central pattern generators, intrinsic drives, and sensory signals from the environment. We then show the framework's utility by providing an integrative explanation of infant vocal production learning in which social feedback modulates infant vocal acoustics through the tuning of a drive signal.
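The idea of a slow drive signal, tuned by social feedback, gating fast vocal patterning can be illustrated with a toy simulation. This is only a conceptual sketch, not the authors' model: all parameter values, time constants, and the form of the gating are assumptions chosen to show the separation of timescales.

```python
import numpy as np

# Toy sketch (not the authors' model): a fast central pattern generator (CPG)
# whose vocal output is gated by a slower intrinsic drive, which in turn is
# nudged by intermittent "social feedback" events. All parameters are
# illustrative assumptions.

dt = 0.001                           # s
t = np.arange(0.0, 30.0, dt)         # 30 s of simulated time

cpg_freq = 8.0                       # Hz, fast syllable-like rhythm (assumed)
drive_tau = 5.0                      # s, slow relaxation of the drive (assumed)
feedback_times = [5.0, 12.0, 20.0]   # s, moments of contingent social feedback
feedback_gain = 0.4                  # boost to the drive per feedback event

drive = np.zeros_like(t)
d = 0.2                              # initial drive level
for i, ti in enumerate(t):
    # the drive decays toward a low baseline and jumps when feedback arrives
    d += dt * (0.2 - d) / drive_tau
    if any(abs(ti - f) < dt / 2 for f in feedback_times):
        d += feedback_gain
    drive[i] = min(d, 1.0)

cpg = 0.5 * (1.0 + np.sin(2.0 * np.pi * cpg_freq * t))   # fast CPG rhythm
vocal_output = drive * cpg                               # slow drive gates fast patterning

print("mean output before first feedback:", round(float(vocal_output[t < 5.0].mean()), 3))
print("mean output after last feedback:  ", round(float(vocal_output[t > 20.0].mean()), 3))
```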
Affiliation(s)
- Yisi S Zhang
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA.
- Asif A Ghazanfar
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544, USA; Department of Psychology, Princeton University, Princeton, NJ 08544, USA; Department of Ecology & Evolutionary Biology, Princeton University, Princeton, NJ 08544, USA.
2
Wesselmeier H, Jansen S, Müller HM. Influences of semantic and syntactic incongruence on readiness potential in turn-end anticipation. Front Hum Neurosci 2014; 8:296. [PMID: 24904349] [PMCID: PMC4034500] [DOI: 10.3389/fnhum.2014.00296]
Abstract
Knowing when it is convenient to take a turn in a conversation is an important task for dialog partners. As it appears that this decision is made before the transition point has been reached, it seems to involve anticipation. There are a variety of studies in the literature that provide possible explanations for turn-end anticipation. This study particularly focuses on how turn-end anticipation relies on syntactic and/or semantic information during utterance processing, as tested with syntactically and semantically violated sentences. With a combined reaction time and EEG experiment, we used the onset latencies of the readiness potential (RP) to uncover possible differences in response preparation. Although the mean anticipation timing accuracy (ATA) values of the behavioral test were all within a similar time range (control sentences: 108 ms, syntactically violated sentences: 93 ms, and semantically violated sentences: 116 ms), we found evidence that response preparation is indeed different for syntactically and semantically violated sentences in comparison with control sentences. Our preconscious EEG data, in the form of RP results, indicated a response-preparation-onset to sentence-end interval of 1452 ms in normal sentences, 937 ms in sentences with syntactic violations and 944 ms in sentences with semantic violations. Compared with control sentences, these intervals resulted in a significant RP interruption for both sentence types and indicate an interruption of preconscious response preparation. However, the behavioral response to sentence types occurred at comparable time points.
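On one plausible reading of the abstract, anticipation timing accuracy (ATA) is the mean deviation between the anticipated and the actual turn end. The sketch below shows that computation under this assumption; the exact definition used in the study may differ, and all values and variable names are hypothetical.

```python
import numpy as np

# Hedged sketch: one plausible way to compute anticipation timing accuracy (ATA)
# as the mean absolute deviation between the listener's button press and the
# actual sentence (turn) end. The study's exact definition may differ; the
# arrays below are made-up example trials.

actual_turn_end = np.array([2.310, 1.870, 2.955])   # s, true sentence offsets
button_press    = np.array([2.410, 1.760, 3.080])   # s, anticipatory responses

deviation = button_press - actual_turn_end          # signed error per trial
ata_ms = np.mean(np.abs(deviation)) * 1000.0        # mean absolute error in ms

print(f"mean ATA: {ata_ms:.0f} ms")
```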
Affiliation(s)
- Hendrik Wesselmeier
- Experimental Neurolinguistics Group, Collaborative Research Center “Alignment in Communication” (SFB 673), Bielefeld University, Bielefeld, Germany
3
MEG correlates of learning novel objects properties in children. PLoS One 2013; 8:e69696. [PMID: 23936082] [PMCID: PMC3729701] [DOI: 10.1371/journal.pone.0069696]
Abstract
Learning the functional properties of objects is a core mechanism in the development of conceptual, cognitive and linguistic knowledge in children. The cerebral processes underlying these learning mechanisms remain unclear in adults and unexplored in children. Here, we investigated the neurophysiological patterns underpinning the learning of functions for novel objects in 10-year-old healthy children. Event-related fields (ERFs) were recorded using magnetoencephalography (MEG) during a picture-definition task. Two MEG sessions were administered, separated by a behavioral verbal learning session during which children learned short definitions about the “magical” function of 50 unknown non-objects. Additionally, 50 familiar real objects and 50 other unknown non-objects for which no functions were taught were presented at both MEG sessions. Children learned at least 75% of the 50 proposed definitions in less than one hour, illustrating children's powerful ability to rapidly map new functional meanings to novel objects. Pre- and post-learning ERFs differences were analyzed first in sensor then in source space. Results in sensor space disclosed a learning-dependent modulation of ERFs for newly learned non-objects, developing 500–800 msec after stimulus onset. Analyses in the source space windowed over this late temporal component of interest disclosed underlying activity in right parietal, bilateral orbito-frontal and right temporal regions. Altogether, our results suggest that learning-related evolution in late ERF components over those regions may support the challenging task of rapidly creating new semantic representations supporting the processing of the meaning and functions of novel objects in children.
4
Park H, Iverson GK, Park HJ. Neural correlates in the processing of phoneme-level complexity in vowel production. Brain Lang 2011; 119:158-166. [PMID: 21802717] [DOI: 10.1016/j.bandl.2011.05.010]
Abstract
We investigated how articulatory complexity at the phoneme level is manifested neurobiologically in an overt production task. fMRI images were acquired from young Korean-speaking adults as they pronounced bisyllabic pseudowords in which we manipulated phonological complexity defined in terms of vowel duration and instability (viz., COMPLEX: /tiɯi/ >> MID-COMPLEX: /tiye/ >> SIMPLE: /tii/). Increased activity in the left inferior frontal gyrus (Brodmann Areas (BA) 44 and 47), supplementary motor area and anterior insula was observed for the articulation of COMPLEX sequences relative to MID-COMPLEX; the same was the case for the articulation of MID-COMPLEX relative to SIMPLE, except that the pars orbitalis (BA 47) was dominantly identified within Broca's area. This differentiation indicates that phonological complexity is reflected in the neural processing of distinct phonemic representations, both by recruiting brain regions associated with retrieval of phonological information from memory and via articulatory rehearsal for the production of COMPLEX vowels. In addition, the finding that increased complexity engages greater areas of the brain suggests that brain activation can be a neurobiological measure of articulo-phonological complexity, complementing, if not substituting for, biomechanical measurements of speech motor activity.
Affiliation(s)
- Haeil Park
- Department of English Language and Literature, Myongji University, Seoul, Republic of Korea
5
Keller SS, Roberts N, García-Fiñana M, Mohammadi S, Ringelstein EB, Knecht S, Deppe M. Can the language-dominant hemisphere be predicted by brain anatomy? J Cogn Neurosci 2010; 23:2013-29. [PMID: 20807056] [DOI: 10.1162/jocn.2010.21563]
Abstract
It has long been suspected that cortical interhemispheric asymmetries may underlie hemispheric language dominance (HLD). To test this hypothesis, we determined interhemispheric asymmetries using stereology and MRI of three cortical regions hypothesized to be related to HLD (Broca's area, planum temporale, and insula) in healthy adults in whom HLD was determined using functional transcranial Doppler sonography and functional MRI (15 left HLD, 10 right HLD). We observed no relationship between volume asymmetry of the gyral correlates of Broca's area or planum temporale and HLD. However, we observed a robust relationship between volume asymmetry of the insula and HLD (p = .008), which predicted unilateral HLD in 88% of individuals (86.7% left HLD and 90% right HLD). There was also a subtle but significant positive correlation between the extent of HLD and insula volume asymmetry (p = .02), indicating that a larger insula predicted functional lateralization to the same hemispheric side for the majority of subjects. We found no visual evidence of basic anatomical markers of HLD other than that the termination of the right posterior sylvian fissure was more likely to be vertical than horizontal in right HLD subjects (p = .02). Predicting HLD by virtue of gross brain anatomy is complicated by interindividual variability in sulcal contours, and the possibility remains that morphological and cytoarchitectural organization of the classical language regions may underlie HLD when analyses are not constrained by the natural limits imposed by measurement of gyral volume. Although the anatomical correlates of HLD will most likely be found to include complex intra- and interhemispheric connections, there is the possibility that such connectivity may correlate with gray matter morphology. We suggest that the potential significance of insular morphology should be considered in future studies addressing the anatomical correlates of human language lateralization.
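The abstract does not spell out the asymmetry formula behind this prediction. As a rough illustration only, the sketch below uses a commonly used asymmetry index, AI = (L - R) / (L + R), and predicts HLD from its sign; the volumes, function names, and the sign-based rule are assumptions, not the authors' procedure.

```python
# Hedged sketch of a volume-asymmetry-based prediction of hemispheric language
# dominance (HLD). AI = (L - R) / (L + R), with AI > 0 read as leftward
# asymmetry; the insular volumes below are made up.

def asymmetry_index(left_vol: float, right_vol: float) -> float:
    """Simple bounded asymmetry index in [-1, 1]."""
    return (left_vol - right_vol) / (left_vol + right_vol)

def predict_hld(insula_left: float, insula_right: float) -> str:
    """Predict HLD from the sign of the insular volume asymmetry (illustrative)."""
    ai = asymmetry_index(insula_left, insula_right)
    return "left HLD" if ai > 0 else "right HLD"

# Hypothetical insular volumes in cm^3
print(predict_hld(insula_left=7.9, insula_right=7.2))   # -> left HLD
print(predict_hld(insula_left=6.8, insula_right=7.5))   # -> right HLD
```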
Affiliation(s)
- Simon S Keller
- The Department of Neurology, University of Münster, Albert-Schweitzer-Str. 33, D-48129 Münster, Germany.
6
Moser D, Fridriksson J, Bonilha L, Healy EW, Baylis G, Baker JM, Rorden C. Neural recruitment for the production of native and novel speech sounds. Neuroimage 2009; 46:549-57. [DOI: 10.1016/j.neuroimage.2009.01.015]
7
Hultén A, Vihla M, Laine M, Salmelin R. Accessing newly learned names and meanings in the native language. Hum Brain Mapp 2009; 30:976-89. [PMID: 18412130] [DOI: 10.1002/hbm.20561]
Abstract
Ten healthy adults encountered pictures of unfamiliar archaic tools and successfully learned either their name, verbal definition of their usage, or both. Neural representation of the newly acquired information was probed with magnetoencephalography in an overt picture-naming task before and after learning, and in two categorization tasks after learning. Within 400 ms, activation proceeded from occipital through parietal to left temporal cortex, inferior frontal cortex (naming) and right temporal cortex (categorization). Comparison of naming of newly learned versus familiar pictures indicated that acquisition and maintenance of word forms are supported by the same neural network. Explicit access to newly learned phonology when such information was known strongly enhanced left temporal activation. By contrast, access to newly learned semantics had no comparable, direct neural effects. Both the behavioral learning pattern and neurophysiological results point to fundamentally different implementation of and access to phonological versus semantic features in processing pictured objects.
Affiliation(s)
- Annika Hultén
- Brain Research Unit, Low Temperature Laboratory, Helsinki University of Technology, Espoo, Finland
8
Anderson K, Bones B, Robinson B, Hass C, Lee H, Ford K, Roberts TA, Jacobs B. The morphology of supragranular pyramidal neurons in the human insular cortex: a quantitative Golgi study. Cereb Cortex 2009; 19:2131-44. [PMID: 19126800] [DOI: 10.1093/cercor/bhn234]
Abstract
Although the primate insular cortex has been studied extensively, a comprehensive investigation of its neuronal morphology has yet to be completed. To that end, neurons from 20 human subjects (10 males and 10 females; N = 600) were selected from the secondary gyrus brevis, precentral gyrus, and postcentral gyrus of the left insula. The secondary gyrus brevis was generally more complex in terms of dendritic/spine extent than either the precentral or postcentral insular gyri, which is consistent with the posterior-anterior gradient of dendritic complexity observed in other cortical regions. The male insula had longer, spinier dendrites than the female insula, potentially reflecting sex differences in interoception. In comparing the current insular data with regional dendritic data quantified from other Brodmann's areas (BAs), insular total dendritic length (TDL) was less than the TDL of high integration cortices (BA6beta, 10, 11, 39), but greater than the TDL of low integration cortices (BA3-1-2, 4, 22, 44). Insular dendritic spine number was significantly greater than both low and high integration regions. Overall, the insula had spinier, but shorter neurons than did high integration cortices, and thus may represent a specialized type of heteromodal cortex, one that integrates crude multisensory information crucial to interoceptive processes.
Affiliation(s)
- Kaeley Anderson
- Laboratory of Quantitative Neuromorphology, Psychology, Colorado College, 14 E. Cache La Poudre, Colorado Springs, CO 80903, USA
10
Sassa Y, Sugiura M, Jeong H, Horie K, Sato S, Kawashima R. Cortical mechanism of communicative speech production. Neuroimage 2007; 37:985-92. [PMID: 17627852] [DOI: 10.1016/j.neuroimage.2007.05.059]
Abstract
Communicative speech requires conformity not only to linguistic rules but also to behavior that is appropriate for social interaction. The existence of a special brain mechanism for such behavioral aspects of communicative speech has been suggested by studies of social impairment in autism, and it may be related to communicative vocalization in animals. We used functional magnetic resonance imaging (fMRI) to measure cortical activation while normal subjects casually talked to an actor (communication task) or verbally described a situation (description task) while observing video clips of an action performed by a familiar or an unfamiliar actor in a typical daily situation. We assumed that the communication task differed from the description task in the involvement of behavioral aspects of communicative speech production, which may involve the processing of interaction-relevant biographical information. Significantly higher activation was observed during the communication task than during the description task in the medial prefrontal cortex (polar and dorsal parts), the bilateral anterior superior temporal sulci, and the left temporoparietal junction. The results suggest that these regions play a role in the behavioral aspects of communicative speech production, presumably in understanding the context of the social interaction. The activation of the polar part of the medial prefrontal cortex during the communication task was greater when the actor was familiar than when the actor was unfamiliar, suggesting that this region is involved in communicative speech production with reference to biographical information. The precuneus was activated during the communication task only with the familiar actor, suggesting that this region is related to access to biographical information per se.
Affiliation(s)
- Yuko Sassa
- RISTEX, JST, Hon-cho 4-1-8, Kawaguchi 332-0012, Japan.
11
Toyomura A, Koyama S, Miyamaoto T, Terao A, Omori T, Murohashi H, Kuriki S. Neural correlates of auditory feedback control in human. Neuroscience 2007; 146:499-503. [PMID: 17395381] [DOI: 10.1016/j.neuroscience.2007.02.023]
Abstract
Auditory feedback plays an important role in natural speech production. We conducted a functional magnetic resonance imaging (fMRI) experiment using a transformed auditory feedback (TAF) method to delineate the neural mechanism for auditory feedback control of pitch. Twelve right-handed subjects were required to vocalize /a/ for 5 s while hearing their own voice through headphones. In the TAF condition, the pitch of the feedback voice was randomly shifted either up or down from the original pitch two or three times in each trial. The subjects were required to hold the pitch of the feedback voice constant by changing the pitch of their original voice. In the non-TAF condition, the pitch of the feedback voice was not modulated and the subjects simply vocalized /a/ continuously. The contrast between the TAF and non-TAF conditions revealed significant activations in the supramarginal gyrus, the prefrontal area, the anterior insula, the superior temporal area and the intraparietal sulcus in the right hemisphere, but only in the premotor area in the left hemisphere. This result suggests that auditory feedback control of pitch is mainly supported by a right-hemispheric network.
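A transformed-auditory-feedback trial of the kind described above can be sketched as a perturbation schedule: two or three pitch shifts, up or down, scattered within a 5-second vocalization. The shift magnitude (+/- 100 cents) and minimum spacing below are illustrative assumptions, not the study's parameters.

```python
import random

# Hedged sketch of a TAF perturbation schedule: 2 or 3 up/down pitch shifts
# per 5-s trial. Shift size and spacing are assumptions for illustration only.

def taf_schedule(trial_dur_s: float = 5.0,
                 shift_cents: float = 100.0,
                 min_gap_s: float = 1.0) -> list[tuple[float, float]]:
    """Return (onset_time_s, pitch_shift_cents) pairs for one trial."""
    n_shifts = random.choice([2, 3])
    onsets: list[float] = []
    while len(onsets) < n_shifts:
        candidate = random.uniform(0.5, trial_dur_s - 0.5)
        if all(abs(candidate - o) >= min_gap_s for o in onsets):
            onsets.append(candidate)
    return [(round(o, 2), random.choice([-shift_cents, +shift_cents]))
            for o in sorted(onsets)]

random.seed(0)
for trial in range(3):
    print(f"trial {trial + 1}: {taf_schedule()}")
```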
Affiliation(s)
- A Toyomura
- Research Institute of Science and Technology for Society, Japan Science and Technology Agency, Japan.
12
Kato Y, Muramatsu T, Kato M, Shintani M, Kashima H. Activation of right insular cortex during imaginary speech articulation. Neuroreport 2007; 18:505-9. [PMID: 17496812] [DOI: 10.1097/wnr.0b013e3280586862]
Abstract
Human speech articulation is a complex process controlled by a form of 'programming' implemented in the brain. Analysis of speech articulation using neuroimaging techniques is difficult, however, because motor noise is time-locked to the articulatory events. The current magnetoencephalography study, in which 12 participants were required to imagine vocalizing a phonogram after a visual cue, was designed to visualize the prearticulatory 'automatic' processes corresponding to the motor initiation. Magnetic activity correlating with the preparation for articulation occurred in the insular cortices at about 160 ms after the visual cue, and had a relative dominance in the right hemisphere. This suggests that motor control of speech proceeds from the insular regions, although the 'automatic' nature of our task might have led to the observed right-sided dominance.
Affiliation(s)
- Yutaka Kato
- Department of Neuropsychiatry, Keio University School of Medicine, Tokyo, Japan
13
Gunji A, Ishii R, Chau W, Kakigi R, Pantev C. Rhythmic brain activities related to singing in humans. Neuroimage 2007; 34:426-34. [PMID: 17049276] [DOI: 10.1016/j.neuroimage.2006.07.018]
Abstract
To investigate the motor control related to sound production, we studied cortical rhythmic changes during continuous vocalization such as singing. Magnetoencephalographic (MEG) responses were recorded while subjects spoke in the usual way (speaking), sang (singing), hummed (humming) and imagined (imagining) a popular song. The power of the alpha (8-15 Hz), beta (15-30 Hz) and low-gamma (30-60 Hz) frequency bands changed during and after vocalization (singing, speaking and humming). In the alpha band, the oscillatory changes for singing were most pronounced in the right premotor, bilateral sensorimotor, right secondary somatosensory and bilateral superior parietal areas. Beta oscillations for singing were also confirmed in the premotor, primary and secondary sensorimotor and superior parietal areas of the left and right hemispheres, which were partly activated even when a song was merely imagined (imagining). These regions have traditionally been described as vocalization-related sites. The cortical rhythmic changes were distinct in the singing condition compared with the other vocalizing conditions (speaking and humming), which we attribute to the more concentrated control of the vocal tract, diaphragm and abdominal muscles that singing requires. Furthermore, a characteristic oscillation in the high-gamma (60-200 Hz) frequency band was found in Broca's area only in the imagining condition and might reflect singing rehearsal and storage processes in Broca's area.
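The frequency bands named above can be summarized, in a generic way, by integrating a power spectral density over each band. The study itself used a beamformer-based MEG analysis, so the sketch below is only a generic band-power illustration on a synthetic "channel"; the sampling rate and signal are assumptions.

```python
import numpy as np
from scipy.signal import welch

# Generic band-power sketch for the bands named above: alpha 8-15 Hz,
# beta 15-30 Hz, low-gamma 30-60 Hz. Synthetic data, assumed 600 Hz sampling.

fs = 600.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
channel = rng.standard_normal(t.size) + 0.5 * np.sin(2 * np.pi * 20 * t)

freqs, psd = welch(channel, fs=fs, nperseg=1024)

bands = {"alpha": (8, 15), "beta": (15, 30), "low-gamma": (30, 60)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    band_power = np.trapz(psd[mask], freqs[mask])   # integrate PSD over the band
    print(f"{name:9s} power: {band_power:.3f}")
```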
Affiliation(s)
- Atsuko Gunji
- The Rotman Research Institute for Neuroscience, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1.
14
Abstract
When humans talk without conventionalized arrangements, they engage in conversation--that is, a continuous and largely nonsimultaneous exchange in which speakers take turns. Turn-taking is ubiquitous in conversation and is the normal case against which alternatives, such as interruptions, are treated as violations that warrant repair. Furthermore, turn-taking involves highly coordinated timing, including a cyclic rise and fall in the probability of initiating speech during brief silences, and involves the notable rarity, especially in two-party conversations, of two speakers' breaking a silence at once. These phenomena, reported by conversation analysts, have been neglected by cognitive psychologists, and to date there has been no adequate cognitive explanation. Here, we propose that, during conversation, endogenous oscillators in the brains of the speaker and the listeners become mutually entrained, on the basis of the speaker's rate of syllable production. This entrained cyclic pattern governs the potential for initiating speech at any given instant for the speaker and also for the listeners (as potential next speakers). Furthermore, the readiness functions of the listeners are counterphased with that of the speaker, minimizing the likelihood of simultaneous starts by a listener and the previous speaker. This mutual entrainment continues for a brief period when the speech stream ceases, accounting for the cyclic property of silences. This model not only captures the timing phenomena observed in the literature on conversation analysis, but also converges with findings from the literatures on phoneme timing, syllable organization, and interpersonal coordination.
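The counterphasing idea in this proposal can be caricatured with two oscillations at the speaker's syllable rate, one for the speaker and one, shifted by half a cycle, for the listener. This is only a toy illustration of the entrainment account, not the authors' formal model, and the syllable rate is an assumed value.

```python
import numpy as np

# Toy illustration: speaker and listener "readiness to initiate speech" as
# oscillations at the speaker's syllable rate, with the listener counterphased.
# Assumed rate of ~5 syllables/s; all numbers are illustrative.

syllable_rate_hz = 5.0
t = np.linspace(0.0, 1.0, 1001)      # one second of time

speaker_readiness  = 0.5 * (1 + np.cos(2 * np.pi * syllable_rate_hz * t))
listener_readiness = 0.5 * (1 - np.cos(2 * np.pi * syllable_rate_hz * t))

# Counterphasing keeps joint readiness low: when the speaker is most likely to
# (re)start, the listener is least likely to, minimizing simultaneous starts.
joint = speaker_readiness * listener_readiness
print("max speaker readiness:", round(float(speaker_readiness.max()), 3))
print("max joint readiness:  ", round(float(joint.max()), 3))   # stays well below 1
```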
Affiliation(s)
- Margaret Wilson
- Department of Psychology, University of California, Santa Cruz 95064, USA.
15
Sörös P, Sokoloff LG, Bose A, McIntosh AR, Graham SJ, Stuss DT. Clustered functional MRI of overt speech production. Neuroimage 2006; 32:376-87. [PMID: 16631384] [DOI: 10.1016/j.neuroimage.2006.02.046]
Abstract
To investigate the neural network of overt speech production, event-related fMRI was performed in 9 young healthy adult volunteers. A clustered image acquisition technique was chosen to minimize speech-related movement artifacts. Functional images were acquired during the production of oral movements and of speech of increasing complexity (isolated vowel as well as monosyllabic and trisyllabic utterances). This imaging technique and behavioral task enabled depiction of the articulo-phonologic network of speech production from the supplementary motor area at the cranial end to the red nucleus at the caudal end. Speaking a single vowel and performing simple oral movements involved very similar activation of the cortical and subcortical motor systems. More complex, polysyllabic utterances were associated with additional activation in the bilateral cerebellum, reflecting increased demand on speech motor control, and additional activation in the bilateral temporal cortex, reflecting the stronger involvement of phonologic processing.
Affiliation(s)
- Peter Sörös
- Imaging Research, Sunnybrook and Women's College Health Sciences Centre, 2075 Bayview Avenue, Toronto, Ontario, Canada.
16
Abstract
Apraxia of speech (AOS) is a motor speech disorder that can occur in the absence of aphasia or dysarthria. AOS has been the subject of some controversy since the disorder was first named and described by Darley and his Mayo Clinic colleagues in the 1960s. A recent revival of interest in AOS is due in part to the fact that it is often the first symptom of neurodegenerative diseases, such as primary progressive aphasia and corticobasal degeneration. This article will provide a brief review of terminology associated with AOS, its clinical hallmarks and neuroanatomical correlates. Current models of motor programming will also be addressed as they relate to AOS and finally, typical treatment strategies used in rehabilitating the articulation and prosody deficits associated with AOS will be summarized.
Affiliation(s)
- Jennifer Ogar
- UCSF Memory and Aging Center, San Francisco, CA 94143-1207, USA
17
Abstract
The present study reports on the first case of crossed apraxia of speech (CAS) in a 69-year-old right-handed female (SE). The possibility of occurrence of apraxia of speech (AOS) following right hemisphere lesion is discussed in the context of known occurrences of ideomotor apraxias and acquired neurogenic stuttering in several cases with right hemisphere lesion. A current hypothesis on AOS-the dual route speech encoding (DRSE) hypothesis-and predictions based on DRSE were utilized to explore the nature of CAS in SE. One prediction based on the DRSE hypothesis is that there should be no difference in the frequency of occurrence of apraxic errors on words and non-words. This prediction was tested using a repetition task. The experimental stimuli included a list of minimal pairs that signaled voice-voiceless contrasts in words and non-words. Minimal-pair stimuli were presented orally, one at a time. SE's responses were recorded using audio and videotapes. Results indicate that SE's responses were characterized by numerous voicing errors. Most importantly, production of real word minimal pairs was superior to that of non-word minimal pairs. Implications of these results for the DRSE hypothesis are discussed with regard to currently developing perspectives on AOS.
Affiliation(s)
- Venu Balasubramanian
- Speech-Language Pathology and Audiology, Seton Hall University, South Orange, NJ 07079, USA.
18
Gunji A, Kakigi R, Hoshiyama M. Cortical activities relating to modulation of sound frequency: how to vocalize? Brain Res Cogn Brain Res 2003; 17:495-506. [PMID: 12880919] [DOI: 10.1016/s0926-6410(03)00165-4]
Abstract
This is the first report to clarify the underlying mechanisms of processing in the modulation of frequencies (tones) in humans using magnetoencephalography (MEG). Volunteers were instructed to vocalize a simple vowel sound (/u/) after receiving a cue (S2) indicating one of three fundamental frequencies (F0s): low, middle, or high. Three tasks were used: (1) the modulated vocalization task, in which the subjects were asked to modulate vocalization tones according to S2; (2) the non-modulated vocalization task, in which the subjects were asked to vocalize the same sound (/u/) with a fixed F0; and (3) the image task, in which the subjects had to modulate according to S2 and imagine the vowel (/u/) sound, but not vocalize it. In all tasks, two clear components, 1M and 2M, were recorded at approximately 190 and 290 ms after the S2. Since both were identified even in the image task, they appear to be specifically related to activity for modulation. The equivalent current dipoles of both 1M and 2M were estimated to lie mainly in the inferior frontal lobe or insula in both hemispheres. Therefore, the activity relating to modulation mainly took place in the inferior frontal lobe or insula of both hemispheres, starting about 200 ms after the viewing of the cue.
Affiliation(s)
- Atsuko Gunji
- Department of Integrative Physiology, National Institute for Physiological Sciences, Myodaiji, Okazaki, 444-8585, Japan.
19
Ingham RJ, Ingham JC, Finn P, Fox PT. Towards a functional neural systems model of developmental stuttering. J Fluency Disord 2003; 28:297-318. [PMID: 14643067] [DOI: 10.1016/j.jfludis.2003.07.004]
Abstract
This paper reviews recent developments in an ongoing program of brain imaging research on developmental stuttering that is being conducted at the University of Texas Health Science Center, San Antonio. This program has primarily used H(2)15O PET imaging of different speaking tasks by right-handed adult male and female persistent stutterers, recovered stutterers and controls in order to isolate the neural regions that are functionally associated with stuttered speech. The principal findings have emerged from studies using condition contrasts and performance correlation techniques. The emerging findings from these studies are reviewed and referenced to a neural model of normal speech production recently proposed by Jürgens [Neurosci. Biobehav. Rev. 26 (2002) 235]. This paper reports (1) the reconfiguration of previous findings within the Jürgens model; (2) preliminary findings of an investigation with late recovered stutterers; (3) an investigation of neural activations during a treatment procedure designed to produce a sustained improvement in fluency; and (4) an across-studies comparison that seeks to isolate neural regions within the Jürgens model that are consistently associated with stuttering. Two regions appear to meet this criterion: the right anterior insula (activated) and the anterior middle and superior temporal gyri (deactivated), mainly in the right hemisphere. The implications of these findings and the direction of future imaging investigations are discussed. EDUCATIONAL OBJECTIVES: The reader will learn about (1) recent uses of H(2)15O PET imaging in stuttering research; (2) the use of a new neurological model of speech production in imaging research on stuttering; and (3) initial findings from PET imaging investigations of treated and recovered stutterers.
Affiliation(s)
- Roger J Ingham
- The Department of Speech and Hearing Sciences, University of California, Santa Barbara, CA, USA.
20
Houde JF, Nagarajan SS, Sekihara K, Merzenich MM. Modulation of the auditory cortex during speech: an MEG study. J Cogn Neurosci 2002; 14:1125-38. [PMID: 12495520] [DOI: 10.1162/089892902760807140]
Abstract
Several behavioral and brain imaging studies have demonstrated a significant interaction between speech perception and speech production. In this study, auditory cortical responses to speech were examined during self-production and feedback alteration. Magnetic field recordings were obtained from both hemispheres in subjects who spoke while hearing controlled acoustic versions of their speech feedback via earphones. These responses were compared to recordings made while subjects listened to a tape playback of their production. The amplitude of tape playback was adjusted to match the amplitude of self-produced speech. Recordings of evoked responses to both self-produced and tape-recorded speech were obtained free of movement-related artifacts. Responses to self-produced speech were weaker than were responses to tape-recorded speech. Responses to tones were also weaker during speech production, when compared with responses to tones recorded in the presence of speech from tape playback. However, responses evoked by gated noise stimuli did not differ for recordings made during self-produced speech versus recordings made during tape-recorded speech playback. These data suggest that during speech production, the auditory cortex (1) attenuates its sensitivity and (2) modulates its activity as a function of the expected acoustic feedback.
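The weaker auditory response to self-produced speech reported here is often summarized elsewhere as a speaking-induced suppression index. The abstract gives no formula, so the sketch below uses one common convention, (tape - self) / (tape + self), with hypothetical response amplitudes; the function name and values are not from the study.

```python
# Hedged sketch: a speaking-induced suppression index of the general kind used
# to summarize weaker auditory responses to self-produced speech.
# SIS = (tape - self) / (tape + self); amplitudes below are hypothetical.

def suppression_index(amp_tape: float, amp_self: float) -> float:
    """Positive values mean the response to self-produced speech is suppressed."""
    return (amp_tape - amp_self) / (amp_tape + amp_self)

m100_tape_playback = 120.0   # hypothetical amplitude while listening to playback
m100_self_produced = 80.0    # hypothetical amplitude while speaking

sis = suppression_index(m100_tape_playback, m100_self_produced)
print(f"suppression index: {sis:.2f}")
```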
Affiliation(s)
- John F Houde
- Center for Integrative Neuroscience, University of California, San Francisco 94143, USA.
21
Kober H, Möller M, Nimsky C, Vieth J, Fahlbusch R, Ganslandt O. New approach to localize speech relevant brain areas and hemispheric dominance using spatially filtered magnetoencephalography. Hum Brain Mapp 2001; 14:236-50. [PMID: 11668655] [PMCID: PMC6871960] [DOI: 10.1002/hbm.1056]
Abstract
We used a current-localization-by-spatial-filtering technique to determine primary language areas with magnetoencephalography (MEG) using a silent reading and a silent naming task. In all cases we could localize the sensory speech area (Wernicke) in the posterior part of the left superior temporal gyrus (Brodmann area 22) and the motor speech area (Broca) in the left inferior frontal gyrus (Brodmann area 44). Left hemispheric speech dominance was determined in all cases by a laterality index comparing the current source strength of the activated left-side speech areas to their right-side homologues. In 12 cases we found early Wernicke and later Broca activation, corresponding to the Wernicke-Geschwind model. In three cases, however, we also found early Broca activation, indicating that speech-related brain areas need not necessarily be activated sequentially but can also be activated simultaneously. Magnetoencephalography can be a potent tool for functional mapping of speech-related brain areas in individuals, investigating the time-course of brain activation, and identifying the speech-dominant hemisphere. This may have implications for presurgical planning in epilepsy and brain tumor patients.
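The laterality index mentioned above is not spelled out in the abstract. As a rough illustration, the sketch below uses the conventional form LI = (L - R) / (L + R) on summed source strengths; the values and variable names are made up, and the paper's exact formula may differ.

```python
# Sketch of a laterality index comparing current source strength in left-side
# speech areas with their right-side homologues. Conventional form assumed:
# LI = (L - R) / (L + R); source strengths below are hypothetical.

def laterality_index(left_strength: float, right_strength: float) -> float:
    """LI in [-1, 1]; positive values indicate left-hemisphere dominance."""
    return (left_strength - right_strength) / (left_strength + right_strength)

# Hypothetical summed source strengths (arbitrary units) for one speech area
li = laterality_index(left_strength=42.0, right_strength=18.0)
print(f"LI = {li:.2f} -> {'left' if li > 0 else 'right'}-hemisphere dominant")
```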
Affiliation(s)
- H Kober
- Department of Neurosurgery, University of Erlangen-Nürnberg, Erlangen, Germany.
22
Abstract
We visualized the brain activity for retrieval imagery of a sound using dual 37-channel magnetometers in seven right-handed healthy subjects. A soundless video image of a hammer striking an anvil was presented on a screen. Significantly larger evoked magnetic fields were recorded, dominantly in the right hemisphere, in six subjects when they imagined the sound than when they did not. The initial peak of the response was 151.0 +/- 26.5 ms (mean +/- s.d.) after the blow. Equivalent current dipoles (ECDs) for the responses recorded from the right hemisphere were located around the inferior frontal sulcus in three subjects and in the insular region in three subjects, but reliable ECDs were not estimated from the left hemisphere. The results suggested that the initial activity for sound retrieval imagery appeared around the inferior frontal and insular areas, dominantly in the right hemisphere.
Affiliation(s)
- M Hoshiyama
- Department of Integrative Physiology, National Institute for Physiological Sciences, Myodaiji, Okazaki, 444-8585 Japan
23
Abstract
Physiological studies of speech production have demonstrated that even simple articulation involves a range of specialized motor and cognitive processes and the neural mechanisms responsible for speech reflect this complexity. Recently, a number of functional imaging techniques have contributed to our knowledge of the neuroanatomical and neurophysiological correlates of speech production. These new imaging approaches have the advantage of permitting study of large numbers of normal and disordered subjects but they bring with them a host of new methodological concerns. One of the challenges for understanding language production is the recording of articulation itself. The problems associated with measuring the vocal tract and measuring the neural activity during overt speech are reviewed. It is argued that advances in understanding fundamental questions such as what are the planning units of speech, what is the role of feedback during speech and what is the influence of learning, await the development of better methods for assessing task performance.
Affiliation(s)
- K G Munhall
- Departments of Psychology and Otolaryngology, Queen's University, Kingston, Ont., Canada.
24
Gunji A, Hoshiyama M, Kakigi R. Auditory response following vocalization: a magnetoencephalographic study. Clin Neurophysiol 2001; 112:514-20. [PMID: 11222973] [DOI: 10.1016/s1388-2457(01)00462-x]
Abstract
OBJECTIVE: We recorded vocalization-related cortical fields (VRCF) under complete masking of a subject's own voice to identify the auditory component evoked by the subject's own voice in the VRCF complex. METHODS: We recorded VRCF during simple vowel (/u/) vocalization in 10 right-handed healthy volunteers under two conditions: (1) no masking (control) and (2) masking of the subject's own voice by weighted white noise during vocalization. In a second experiment, we recorded auditory evoked magnetic fields (AEF) following a speech sound presented from a voice recorder. RESULTS: The onset of VRCF appeared gradually before the vocalization onset, and a clear phase-reversed deflection was identified after the onset of vocalization. The difference waveform obtained by subtracting the VRCF of the masking condition from that of the control showed a deflection (1M) at 81.3+/-20.5 (mean+/-SD) ms after the onset of vocalization, but there was no consistent deflection before the vocalization onset. The AEF following the voice sound in the second experiment showed the M100 component at 94.3+/-18.4 ms. In each hemisphere, the equivalent current dipole of the 1M component of the difference waveforms was located in the auditory cortex, close to that of the M100 of the AEF waveforms. CONCLUSION: We successfully separated the auditory feedback response from the VRCF complex using an adequate masking condition during vocalization of the subject's own voice. The masking effect was crucial to isolating the auditory feedback process after the onset of vocalization. The present results suggest that the 1M component was mainly generated by the auditory feedback of the subject's own voice. The auditory area activated by the subject's own voice might be similar to that activated by a simple external sound.
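The subtraction logic described above (control VRCF minus masked VRCF, then locating the post-onset peak) can be sketched generically. The data below are synthetic, and the sampling rate, epoch window, and alignment to vocalization onset at t = 0 are assumptions rather than the study's recording parameters.

```python
import numpy as np

# Sketch of the described subtraction: masked-condition VRCF is subtracted from
# control VRCF, and the peak of the difference waveform after vocalization
# onset is taken as the 1M-like component. Synthetic data, assumed 1 kHz rate.

fs = 1000.0
t = np.arange(-0.2, 0.4, 1 / fs)             # epoch from -200 to +400 ms

rng = np.random.default_rng(1)
# Synthetic auditory deflection ~80 ms after onset, present only with feedback
auditory = 40.0 * np.exp(-((t - 0.08) ** 2) / (2 * 0.02 ** 2))
vrcf_control = auditory + rng.normal(0, 2, t.size)   # own voice audible
vrcf_masked  = rng.normal(0, 2, t.size)              # own voice masked

difference = vrcf_control - vrcf_masked              # isolates the feedback response
post_onset = t >= 0
peak_idx = np.argmax(np.abs(difference[post_onset]))
peak_latency_ms = t[post_onset][peak_idx] * 1000.0
print(f"1M-like peak at ~{peak_latency_ms:.0f} ms after vocalization onset")
```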
Affiliation(s)
- A Gunji
- Department of Integrative Physiology, National Institute for Physiological Sciences, Myodaiji, 444-8585, Okazaki, Japan.
25
Grèzes J, Decety J. Functional anatomy of execution, mental simulation, observation, and verb generation of actions: a meta-analysis. Hum Brain Mapp 2001; 12:1-19. [PMID: 11198101] [PMCID: PMC6872039] [DOI: 10.1002/1097-0193(200101)12:1<1::aid-hbm10>3.0.co;2-v]
Abstract
There is a large body of psychological and neuroimaging experiments that have interpreted their findings in favor of a functional equivalence between action generation, action simulation, action verbalization, and perception of action. On the basis of these data, the concept of shared motor representations has been proposed. Indeed, several authors have argued that our capacity to understand other people's behavior and to attribute intention or beliefs to others is rooted in a neural, most likely distributed, execution/observation mechanism. Recent neuroimaging studies have explored the neural network engaged during motor execution, simulation, verbalization, and observation. The focus of this meta-analysis is to evaluate in specific detail to what extent the activated foci elicited by these studies overlap.
Affiliation(s)
- Julie Grèzes
- INSERM Unit 280, 151 Cours Albert Thomas, Lyon, France
- Jean Decety
- INSERM Unit 280, 151 Cours Albert Thomas, Lyon, France
26
Ingham RJ, Fox PT, Costello Ingham J, Zamarripa F. Is overt stuttered speech a prerequisite for the neural activations associated with chronic developmental stuttering? Brain Lang 2000; 75:163-194. [PMID: 11049665] [DOI: 10.1006/brln.2000.2351]
Abstract
Four adult right-handed chronic stutterers and four age-matched controls completed H(2)(15)O PET scans involving overt and imagined oral reading tasks. During overt stuttered speech prominent activations occurred in SMA (medial), BA 46 (right), anterior insula (bilateral), and cerebellum (bilateral) plus deactivations in right A2 (BA 21/22). These activations and deactivations also occurred when the same stutterers imagined they were stuttering. Some parietal regions were significantly activated during imagined stuttering, but not during overt stuttering. Most regional activations changed in the same direction when overt stuttering ceased (during chorus reading) and when subjects imagined that they were not stuttering (also during chorus reading). Controls displayed fewer similarities between regional activations and deactivations during actual and imagined oral reading. Thus overt stuttering appears not to be a prerequisite for the prominent regional activations and deactivations associated with stuttering.
Affiliation(s)
- R J Ingham
- University of California, Santa Barbara 93106, USA.
27
Kent RD. Research on speech motor control and its disorders: a review and prospective. J Commun Disord 2000; 33:391-428. [PMID: 11081787] [DOI: 10.1016/s0021-9924(00)00023-x]
Abstract
This paper reviews issues in speech motor control and a class of communication disorders known as motor speech disorders. Speech motor control refers to the systems and strategies that regulate the production of speech, including the planning and preparation of movements (sometimes called motor programming) and the execution of movement plans to result in muscle contractions and structural displacements. Traditionally, speech motor control is distinguished from phonologic operations, but in some recent phonologic theories, there is a deliberate blurring of the boundaries between phonologic representation and motor functions. Moreover, there is continuing discussion in the literature as to whether a given motor speech disorder (especially apraxia of speech and stuttering) should be understood at the phonologic level, the motoric level, or both of these. The motor speech disorders considered here include: the dysarthrias, apraxia of speech, developmental apraxia of speech, developmental stuttering, acquired (neurogenic and psychogenic) stuttering, and cluttering.
Affiliation(s)
- R D Kent
- Waisman Center, University of Wisconsin-Madison, 53705-2280, USA.
28
Salmelin R, Schnitzler A, Schmitz F, Freund HJ. Single word reading in developmental stutterers and fluent speakers. Brain 2000; 123(Pt 6):1184-202. [PMID: 10825357] [DOI: 10.1093/brain/123.6.1184]
Abstract
Ten fluent speakers and nine developmental stutterers read isolated nouns aloud in a delayed reading paradigm. Cortical activation sequences were mapped with a whole-head magnetoencephalography system. The stutterers were mostly fluent in this task. Although the overt performance was essentially identical in the two groups, the cortical activation patterns showed clear differences, both in the evoked responses, time-locked to word presentation and mouth movement onset, and in task-related suppression of 20-Hz oscillations. Within the first 400 ms after seeing the word, processing in fluent speakers advanced from the left inferior frontal cortex (articulatory programming) to the left lateral central sulcus and dorsal premotor cortex (motor preparation). This sequence was reversed in the stutterers, who showed an early left motor cortex activation followed by a delayed left inferior frontal signal. Stutterers thus appeared to initiate motor programmes before preparation of the articulatory code. During speech production, the right motor/premotor cortex generated consistent evoked activation in fluent speakers but was silent in stutterers. On the other hand, suppression of motor cortical 20-Hz rhythm, reflecting task-related neuronal processing, occurred bilaterally in both groups. Moreover, the suppression was right-hemisphere dominant in stutterers, as opposed to left-hemisphere dominant in fluent speakers. Accordingly, the right frontal cortex of stutterers was highly active during speech production but did not generate synchronous time-locked responses. The speech-related 20-Hz suppression concentrated in the mouth area in fluent speakers, but was evident in both the hand and mouth areas in stutterers. These findings may reflect imprecise functional connectivity within the right frontal cortex and incomplete segregation between the adjacent hand and mouth motor representations in stutterers during speech production. A network including the left inferior frontal cortex and the right motor/premotor cortex, likely to be relevant in merging linguistic and affective prosody with articulation during fluent speech, thus appears to be partly dysfunctional in developmental stutterers.
Affiliation(s)
- R Salmelin
- Brain Research Unit, Helsinki University of Technology, Espoo, Finland.