1
Hamada N, Kunimura H, Matsuoka M, Oda H, Hiraoka K. Advanced cueing of auditory stimulus to the head induces body sway in the direction opposite to the stimulus site during quiet stance in male participants. Front Hum Neurosci 2022; 16:1028700. PMID: 36569476; PMCID: PMC9775284; DOI: 10.3389/fnhum.2022.1028700.
Abstract
Under certain conditions, a tactile stimulus to the head induces movement of the head away from the stimulus, which is thought to reflect a defense mechanism. In this study, we tested the hypothesis that predicting the stimulus site on the head during quiet stance activates this defense mechanism, causing the body to sway so as to keep the head away from the stimulus. Fourteen healthy male participants aged 31.2 ± 6.8 years took part. A visual cue predicting the forthcoming stimulus site (forehead, left side of the head, right side of the head, or back of the head) was given. Four seconds after this cue, an auditory or electrical tactile stimulus was delivered at the site predicted by the cue. The cue predicting the tactile stimulus site did not induce body sway. The cue predicting an auditory stimulus to the back of the head induced forward body sway, and the cue predicting a stimulus to the forehead induced backward body sway. Likewise, the cue predicting an auditory stimulus to the left side of the head induced rightward body sway, and the cue predicting a stimulus to the right side induced leftward body sway. These findings support the hypothesis that predicting the auditory stimulus site on the head induces body sway during quiet stance that keeps the head away from the stimulus. The right gastrocnemius muscle contributes to the control of body sway along the anterior-posterior axis related to this defense mechanism.
Affiliation(s)
- Naoki Hamada
- Department of Rehabilitation Science, School of Medicine, Osaka Metropolitan University, Habikino, Japan
- Hiroshi Kunimura
- Department of Rehabilitation Science, School of Medicine, Osaka Metropolitan University, Habikino, Japan
- Masakazu Matsuoka
- Department of Rehabilitation Science, School of Medicine, Osaka Metropolitan University, Habikino, Japan
- Hitoshi Oda
- Graduate School of Comprehensive Rehabilitation, Osaka Prefecture University, Habikino, Japan
- Koichi Hiraoka
- Department of Rehabilitation Science, School of Medicine, Osaka Metropolitan University, Habikino, Japan. *Correspondence: Koichi Hiraoka
2
Shader MJ, Luke R, McKay CM. Contralateral dominance to speech in the adult auditory cortex immediately after cochlear implantation. iScience 2022; 25:104737. PMID: 35938045; PMCID: PMC9352526; DOI: 10.1016/j.isci.2022.104737.
Abstract
Sensory deprivation causes structural and functional changes in the human brain. Cochlear implantation delivers immediate reintroduction of auditory sensory information. Previous reports have indicated that over a year is required for the brain to reestablish canonical cortical processing patterns after the reintroduction of auditory stimulation. We utilized functional near-infrared spectroscopy (fNIRS) to investigate brain activity to natural speech stimuli directly after cochlear implantation. We presented 12 cochlear implant recipients, who each had a minimum of 12 months of auditory deprivation, with unilateral auditory- and visual-speech stimuli. Regardless of the side of implantation, canonical responses were elicited primarily on the contralateral side of stimulation as early as 1 h after device activation. These data indicate that auditory pathway connections are sustained during periods of sensory deprivation in adults, and that typical cortical lateralization is observed immediately following the reintroduction of auditory sensory input.
Highlights:
- Auditory activity was present on the contralateral side directly after implantation
- Visual-evoked cross-modal activity was also present on the contralateral side
- Monaural auditory stimulation elicited bilateral activity in listeners with two CIs
3
Parrell B, Houde J. Modeling the Role of Sensory Feedback in Speech Motor Control and Learning. J Speech Lang Hear Res 2019; 62:2963-2985. PMID: 31465712; PMCID: PMC6813034; DOI: 10.1044/2019_jslhr-s-csmc7-18-0127.
Abstract
Purpose: While the speech motor system is sensitive to feedback perturbations, sensory feedback does not seem to be critical to speech motor production. How the speech motor system can be so flexible in its use of sensory feedback remains an open question. Method: We draw on evidence from a variety of disciplines to summarize current understanding of the sensory systems' role in speech motor control, including both online control and motor learning. We focus particularly on computational models of speech motor control that incorporate sensory feedback, as these models provide clear encapsulations of different theories of the sensory systems' function in speech production. These models include the well-established directions into velocities of articulators (DIVA) model and computational models we have been developing in our labs based on the domain-general theory of state feedback control (the feedback aware control of tasks in speech, FACTS, model). Results: After establishing the architecture of the models, we show that both the DIVA and state feedback control/FACTS models can replicate key behaviors related to sensory feedback in the speech motor system. Although the models agree on many points, their underlying architectures differ in a few key ways, leading to different predictions in certain areas. We cover key disagreements between the models to show the limits of our current understanding and to point toward areas where future experimental studies can resolve these questions. Conclusions: Understanding the role of sensory information in the speech motor system is critical to understanding speech motor production and sensorimotor learning in healthy speakers as well as in disordered populations. Computational models, with their concrete implementations and testable predictions, are an important tool in understanding this process. Comparison of different models can highlight areas of agreement and disagreement in the field and point toward future experiments to resolve important outstanding questions about the speech motor control system.
Affiliation(s)
- Benjamin Parrell
- Department of Communication Sciences and Disorders, University of Wisconsin–Madison
- John Houde
- Department of Otolaryngology—Head and Neck Surgery, University of California, San Francisco
4
Johnson LA, Della Santina CC, Wang X. Selective Neuronal Activation by Cochlear Implant Stimulation in Auditory Cortex of Awake Primate. J Neurosci 2016; 36:12468-12484. PMID: 27927962; PMCID: PMC5148231; DOI: 10.1523/jneurosci.1699-16.2016.
Abstract
Despite the success of cochlear implants (CIs) in human populations, most users perform poorly in noisy environments and in music and tonal-language perception. How CI devices engage the brain at the single neuron level has remained largely unknown, in particular in the primate brain. By comparing neuronal responses with acoustic and CI stimulation in marmoset monkeys unilaterally implanted with a CI electrode array, we discovered that CI stimulation was surprisingly ineffective at activating many neurons in auditory cortex, particularly in the hemisphere ipsilateral to the CI. Further analyses revealed that the CI-nonresponsive neurons were narrowly tuned to frequency and sound level when probed with acoustic stimuli; such neurons likely play a role in perceptual behaviors requiring fine frequency and level discrimination, tasks that CI users find especially challenging. These findings suggest potential deficits in central auditory processing of CI stimulation and provide important insights into factors responsible for poor CI user performance in a wide range of perceptual tasks. SIGNIFICANCE STATEMENT The cochlear implant (CI) is the most successful neural prosthetic device to date and has restored hearing in hundreds of thousands of deaf individuals worldwide. However, despite its huge successes, CI users still face many perceptual limitations, and the brain mechanisms involved in hearing through CI devices remain poorly understood. By directly comparing single-neuron responses to acoustic and CI stimulation in auditory cortex of awake marmoset monkeys, we discovered that neurons unresponsive to CI stimulation were sharply tuned to frequency and sound level. Our results point out a major deficit in central auditory processing of CI stimulation and provide important insights into mechanisms underlying the poor CI user performance in a wide range of perceptual tasks.
Affiliation(s)
- Charles C Della Santina
- Departments of Biomedical Engineering and Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205
5
Benazet M, Thénault F, Whittingstall K, Bernier PM. Attenuation of visual reafferent signals in the parietal cortex during voluntary movement. J Neurophysiol 2016; 116:1831-1839. PMID: 27466131; PMCID: PMC5144698; DOI: 10.1152/jn.00231.2016.
Abstract
It is well established that the cortical processing of somatosensory and auditory signals is attenuated when they result from self-generated actions compared with external events. This phenomenon is thought to result from an efference copy of motor commands used to predict the sensory consequences of an action through a forward model. The present work examined whether attenuation also takes place for visual reafferent signals from the moving limb during voluntary reaching movements. To address this issue, EEG activity was recorded in a condition in which visual feedback of the hand was provided in real time and compared with a condition in which it was presented with a 150-ms delay, thus creating a mismatch between the predicted and actual visual consequences of the movement. Results revealed that the amplitude of the N1 component of the visual event-related potential evoked by hand visual feedback over the parietal cortex was significantly smaller when presented in real time compared with when it was delayed. These data suggest that the cortical processing of visual reafferent signals is attenuated when they are correctly predicted, likely as a result of a forward model.
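The forward-model account in this abstract lends itself to a toy computation: if an efference copy predicts the visual reafference of the moving hand, the residual (actual minus predicted) stays near zero for real-time feedback and grows when feedback is delayed. The sketch below is an illustration under assumed signals, not the authors' EEG analysis; only the 150 ms delay value comes from the abstract.

```python
import numpy as np

# Toy forward-model illustration (assumed signals, not the authors' EEG
# analysis): the cortical response is modeled as the mean absolute
# prediction error between actual visual feedback of the hand and the
# efference-copy prediction. Only the 150 ms delay is from the abstract.

def hand_feedback(t, delay=0.0):
    """Simulated visual feedback of hand position, optionally delayed (s)."""
    return np.sin(2 * np.pi * np.maximum(t - delay, 0.0))

def residual_response(delay):
    """Mean |actual - predicted| over a 1 s movement."""
    t = np.linspace(0.0, 1.0, 1000)
    predicted = hand_feedback(t)            # forward-model prediction (no delay)
    actual = hand_feedback(t, delay=delay)  # what is actually displayed
    return float(np.mean(np.abs(actual - predicted)))

real_time = residual_response(0.0)   # prediction matches -> attenuated response
delayed = residual_response(0.150)   # 150 ms mismatch -> larger residual
print(real_time, delayed)
```

In this caricature the predicted-feedback condition produces zero residual while the delayed condition leaves a large one, mirroring the smaller N1 for real-time feedback.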
Affiliation(s)
- Marc Benazet
- Département de Kinanthropologie, Université de Sherbrooke, Sherbrooke, Quebec, Canada
- François Thénault
- Département de Kinanthropologie, Université de Sherbrooke, Sherbrooke, Quebec, Canada
- Kevin Whittingstall
- Département de Médecine Nucléaire et de Radiobiologie and Département de Radiologie Diagnostique, Université de Sherbrooke, Sherbrooke, Quebec, Canada
- Pierre-Michel Bernier
- Département de Kinanthropologie, Université de Sherbrooke, Sherbrooke, Quebec, Canada
6
Zhang GY, Yang M, Liu B, Huang ZC, Li J, Chen JY, Chen H, Zhang PP, Liu LJ, Wang J, Teng GJ. Changes of the directional brain networks related with brain plasticity in patients with long-term unilateral sensorineural hearing loss. Neuroscience 2015; 313:149-61. PMID: 26621123; DOI: 10.1016/j.neuroscience.2015.11.042.
Abstract
Previous studies often report that early auditory deprivation or congenital deafness leads to cross-modal reorganization in the auditory-deprived cortex, and this cross-modal reorganization limits the clinical benefit of cochlear prosthetics. However, study results on cortical reorganization in subjects with long-term unilateral sensorineural hearing loss (USNHL) are inconsistent. It is also unclear whether acquired monaural deafness produces cross-modal plasticity of the auditory cortex similar to that seen in early or congenital deafness. To address this issue, we constructed directional brain functional networks based on entropy connectivity of resting-state functional MRI and examined changes in these networks. Thirty-four individuals with long-term USNHL and seventeen normally hearing individuals participated; all USNHL patients had acquired deafness. We found that certain brain regions of the sensorimotor and visual networks showed enhanced synchronous output entropy connectivity with the left primary auditory cortex in individuals with left long-term USNHL compared with normally hearing individuals. In particular, the left USNHL group showed more significant changes in entropy connectivity than the right USNHL group, in whom no significant plastic changes were observed. Our results indicate that the left primary auditory cortex (the non-auditory-deprived cortex) in patients with left USNHL has been reorganized by visual and sensorimotor modalities through cross-modal plasticity. Furthermore, this cross-modal reorganization also alters the directional brain functional networks. Auditory deprivation on the left or right side thus influences the human brain differently.
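The "directional networks based on entropy connectivity" described here are a form of directed (effective) connectivity. Transfer entropy is one common entropy-based estimator of such directed coupling; the sketch below computes a plug-in transfer entropy between two simulated binary series as a rough illustration of the idea. The simulation and estimator details are assumptions for demonstration, not the paper's resting-state fMRI pipeline.

```python
import numpy as np
from collections import Counter

# Illustrative sketch only: transfer entropy TE(X -> Y), an entropy-based
# measure of directed coupling between time series. Binary signals and the
# plug-in estimator are assumed here, not taken from the paper.

def transfer_entropy(x, y):
    """TE(X -> Y) in bits for two equal-length discrete sequences."""
    triples = list(zip(y[1:], y[:-1], x[:-1]))  # (y_{t+1}, y_t, x_t)
    n = len(triples)
    c_full = Counter(triples)
    c_yx = Counter((yt, xt) for _, yt, xt in triples)    # (y_t, x_t)
    c_pair = Counter((y1, yt) for y1, yt, _ in triples)  # (y_{t+1}, y_t)
    c_y = Counter(yt for _, yt, _ in triples)
    te = 0.0
    for (y1, yt, xt), c in c_full.items():
        p_joint = c / n
        p_cond_full = c / c_yx[(yt, xt)]          # p(y_{t+1} | y_t, x_t)
        p_cond_past = c_pair[(y1, yt)] / c_y[yt]  # p(y_{t+1} | y_t)
        te += p_joint * np.log2(p_cond_full / p_cond_past)
    return te

# X drives Y with a one-step lag; the reverse direction carries no information.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000).tolist()
y = [0] + x[:-1]  # y_{t+1} = x_t

te_xy = transfer_entropy(x, y)  # near 1 bit: X predicts Y's future
te_yx = transfer_entropy(y, x)  # near 0: Y adds nothing about X's future
print(te_xy, te_yx)
```

The asymmetry between the two estimates is what makes the resulting network directional.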
Affiliation(s)
- G-Y Zhang
- Department of Radiology, Jiangsu Key Laboratory of Molecule Imaging and Functional Imaging, Zhong-Da Hospital, Medical School of Southeast University, 87 Dingjiaqiao Road, Nanjing 210009, China; Department of Radiology, Taishan Medical University, Chang Cheng Road, Hi-Tech Development Zone, Taian 271016, Shandong Province, China
- M Yang, B Liu, J Li, J-Y Chen, G-J Teng
- Department of Radiology, Jiangsu Key Laboratory of Molecule Imaging and Functional Imaging, Zhong-Da Hospital, Medical School of Southeast University, 87 Dingjiaqiao Road, Nanjing 210009, China
- Z-C Huang, H Chen, P-P Zhang
- Department of Otorhinolaryngology and Head-Neck Surgery, Zhong-Da Hospital, Medical School of Southeast University, 87 Dingjiaqiao Road, Nanjing 210009, China
- L-J Liu
- Department of Physiology and Pharmacology, Medical School of Southeast University, 87 Dingjiaqiao Road, Nanjing 210009, China
- J Wang
- Department of Physiology and Pharmacology, Medical School of Southeast University, 87 Dingjiaqiao Road, Nanjing 210009, China; School of Human Communication Disorder, Dalhousie University, 1256 Barrington Street, Halifax B3J1Y6, Canada
7
Abstract
Mindfulness, an attentive non-judgmental focus on “here and now” experiences, has been incorporated into various cognitive behavioral therapy approaches and beneficial effects have been demonstrated. Recently, mindfulness has also been identified as a potentially effective emotion regulation strategy. On the other hand, emotion suppression, which refers to trying to avoid or escape from experiencing and being aware of one’s own emotions, has been identified as a potentially maladaptive strategy. Previous studies suggest that both strategies can decrease affective responses to emotional stimuli. They would, however, be expected to provide regulation through different top-down modulation systems. The present study was aimed at elucidating the different neural systems underlying emotion regulation via mindfulness and emotion suppression approaches. Twenty-one healthy participants used the two types of strategy in response to emotional visual stimuli while functional magnetic resonance imaging was conducted. Both strategies attenuated amygdala responses to emotional triggers, but the pathways to regulation differed across the two. A mindful approach appears to regulate amygdala functioning via functional connectivity from the medial prefrontal cortex, while suppression uses connectivity with other regions, including the dorsolateral prefrontal cortex. Thus, the two types of emotion regulation recruit different top-down modulation processes localized at prefrontal areas. These different pathways are discussed.
8
Houde JF, Chang EF. The cortical computations underlying feedback control in vocal production. Curr Opin Neurobiol 2015; 33:174-81. PMID: 25989242; DOI: 10.1016/j.conb.2015.04.006.
Abstract
Recent neurophysiological studies of speaking are beginning to elucidate the neural mechanisms underlying auditory feedback processing during vocalizations. Here we review how research findings impact our state feedback control (SFC) model of speech motor control. We will discuss the evidence for cortical computations that compare incoming feedback with predictions derived from motor efference copy. We will also review observations from auditory feedback perturbation studies that demonstrate clear evidence for a state estimate correction process, which drives compensatory motor behavioral responses. While there is compelling support for cortical computations in the SFC model, there are still several outstanding questions that await resolution by future neural investigations.
Affiliation(s)
- John F Houde
- Department of Otolaryngology - Head and Neck Surgery, University of California, San Francisco, United States
- Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, United States
9
Hiraoka K, Ae M, Ogura N, Sano C, Shiomi K, Morita Y, Yokoyama H, Iwata Y, Jono Y, Nomura Y, Tani K, Chujo Y. Monaural Auditory Cue Affects the Process of Choosing the Initial Swing Leg in Gait Initiation. J Mot Behav 2015; 47:522-6. PMID: 25849897; DOI: 10.1080/00222895.2015.1020356.
Abstract
The authors investigated the effect of an auditory cue on the choice of the initial swing leg in gait initiation. Healthy humans initiated gait in response to a monaural or binaural auditory cue. When the cue was given in the ear ipsilateral to the preferred leg, participants consistently initiated gait with the preferred leg. When the side of the monaural cue was varied randomly from trial to trial, the probability of initiating gait with the nonpreferred leg increased when the cue was given in the ear contralateral to the preferred leg; this increase was not significant when the cue was constantly given in the contralateral ear. A binaural cue shortened the reaction time of the anticipatory postural adjustment but did not significantly increase the probability of choosing the nonpreferred leg. Thus, an auditory cue in the ear contralateral to the preferred leg weakens the preference for the preferred leg as the initial swing leg in gait initiation when the side of the cue is unpredictable.
Affiliation(s)
- Koichi Hiraoka
- College of Health and Human Sciences, Osaka Prefecture University, Habikino, Japan
10
Cortical plasticity after cochlear implantation. Neural Plast 2013; 2013:318521. PMID: 24377050; PMCID: PMC3860139; DOI: 10.1155/2013/318521.
Abstract
The most dramatic progress in the restoration of hearing takes place in the first months after cochlear implantation. To map the brain activity underlying this process, we used positron emission tomography at three time points: within 14 days, three months, and six months after switch-on. Fifteen recently implanted adult implant recipients listened to running speech or speech-like noise in four sequential PET sessions at each milestone. CI listeners with postlingual hearing loss showed differential activation of left superior temporal gyrus during speech and speech-like stimuli, unlike CI listeners with prelingual hearing loss. Furthermore, Broca's area was activated as an effect of time, but only in CI listeners with postlingual hearing loss. The study demonstrates that adaptation to the cochlear implant is highly related to the history of hearing loss. Speech processing in patients whose hearing loss occurred after the acquisition of language involves brain areas associated with speech comprehension, which is not the case for patients whose hearing loss occurred before the acquisition of language. Finally, the findings confirm the key role of Broca's area in restoration of speech perception, but only in individuals in whom Broca's area has been active prior to the loss of hearing.
11
Green KMJ, Ramsden RT, Julyan PJ, Hastings DEL. Cortical plasticity in the first year after cochlear implantation. Cochlear Implants Int 2008; 9:103-17. DOI: 10.1179/cim.2008.9.2.103.
12
Chan AM, Dykstra AR, Jayaram V, Leonard MK, Travis KE, Gygi B, Baker JM, Eskandar E, Hochberg LR, Halgren E, Cash SS. Speech-specific tuning of neurons in human superior temporal gyrus. Cereb Cortex 2014; 24:2679-93. PMID: 23680841; DOI: 10.1093/cercor/bht127.
Abstract
How the brain extracts words from auditory signals is an unanswered question. We recorded approximately 150 single and multi-units from the left anterior superior temporal gyrus of a patient during multiple auditory experiments. Against low background activity, 45% of units robustly fired to particular spoken words with little or no response to pure tones, noise-vocoded speech, or environmental sounds. Many units were tuned to complex but specific sets of phonemes, which were influenced by local context but invariant to speaker, and suppressed during self-produced speech. The firing of several units to specific visual letters was correlated with their response to the corresponding auditory phonemes, providing the first direct neural evidence for phonological recoding during reading. Maximal decoding of individual phoneme and word identities was attained using firing rates from approximately 5 neurons within 200 ms after word onset. Thus, neurons in human superior temporal gyrus use sparse spatially organized population encoding of complex acoustic-phonetic features to help recognize auditory and visual words.
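The claim that word identity can be decoded from the firing rates of roughly five neurons in a 200 ms window can be illustrated with a toy decoder. The sketch below is an assumed setup, not the authors' analysis: a hypothetical word set, Poisson spike counts, and a nearest-centroid rule stand in for their actual stimuli and decoding method.

```python
import numpy as np

# Toy sketch of sparse population decoding (assumed setup, not the authors'
# analysis): each word drives a distinct firing pattern across 5
# word-selective units, and a nearest-centroid rule decodes word identity
# from spike counts in a 200 ms window after word onset.

rng = np.random.default_rng(42)
words = ["doctor", "window", "table"]  # hypothetical word set
n_units = 5
window_s = 0.2  # 200 ms counting window

# Assumed tuning: each word strongly drives one unit (30 Hz) over a 2 Hz baseline.
mean_rates = {}
for i, w in enumerate(words):
    rates = np.full(n_units, 2.0)
    rates[i] = 30.0
    mean_rates[w] = rates

def simulate_trial(word):
    """Poisson spike counts across units for one presentation of `word`."""
    return rng.poisson(mean_rates[word] * window_s)

# Per-word templates from training trials, then decode held-out trials.
templates = {w: np.mean([simulate_trial(w) for _ in range(40)], axis=0)
             for w in words}

def decode(counts):
    return min(words, key=lambda w: float(np.linalg.norm(counts - templates[w])))

n_test = 20
correct = sum(decode(simulate_trial(w)) == w for w in words for _ in range(n_test))
accuracy = correct / (len(words) * n_test)
print(accuracy)
```

With strongly selective units, even this crude rule decodes well above the 1/3 chance level, which is the intuition behind "sparse population encoding."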
Affiliation(s)
- Alexander M Chan
- Medical Engineering and Medical Physics, Department of Neurology
- Andrew R Dykstra
- Program in Speech and Hearing Bioscience and Technology, Harvard-MIT Division of Health Sciences and Technology, Cambridge, MA, USA; Department of Neurology
- Vinay Jayaram
- Department of Neuroscience, Harvard University, Cambridge, MA, USA
- Brian Gygi
- National Institute for Health Research, Nottingham Hearing Biomedical Research Unit, Nottingham, UK
- Janet M Baker
- Department of Otology and Laryngology, Harvard Medical School, Boston, MA, USA
- Emad Eskandar
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, USA
- Eric Halgren
- Multimodal Imaging Laboratory, Department of Radiology and Neurosciences, University of California, San Diego, La Jolla, CA, USA
13
Stoppelman N, Harpaz T, Ben-Shachar M. Do not throw out the baby with the bath water: choosing an effective baseline for a functional localizer of speech processing. Brain Behav 2013; 3:211-22. PMID: 23785653; PMCID: PMC3683281; DOI: 10.1002/brb3.129.
Abstract
Speech processing engages multiple cortical regions in the temporal, parietal, and frontal lobes. Isolating speech-sensitive cortex in individual participants is of major clinical and scientific importance. This task is complicated by the fact that responses to sensory and linguistic aspects of speech are tightly packed within the posterior superior temporal cortex. In functional magnetic resonance imaging (fMRI), various baseline conditions are typically used in order to isolate speech-specific from basic auditory responses. Using a short, continuous sampling paradigm, we show that reversed ("backward") speech, a commonly used auditory baseline for speech processing, removes much of the speech responses in frontal and temporal language regions of adult individuals. On the other hand, signal correlated noise (SCN) serves as an effective baseline for removing primary auditory responses while maintaining strong signals in the same language regions. We show that the response to reversed speech in left inferior frontal gyrus decays significantly faster than the response to speech, suggesting that this response reflects bottom-up activation of speech analysis followed by top-down attenuation once the signal is classified as nonspeech. The results overall favor SCN as an auditory baseline for speech processing.
Affiliation(s)
- Nadav Stoppelman
- The Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan, Israel
14
Houde JF, Nagarajan SS. Speech production as state feedback control. Front Hum Neurosci 2011; 5:82. PMID: 22046152; PMCID: PMC3200525; DOI: 10.3389/fnhum.2011.00082.
Abstract
Spoken language exists because of a remarkable neural process. Inside a speaker's brain, an intended message gives rise to neural signals activating the muscles of the vocal tract. The process is remarkable because these muscles are activated in just the right way that the vocal tract produces sounds a listener understands as the intended message. What is the best approach to understanding the neural substrate of this crucial motor control process? One of the key recent modeling developments in neuroscience has been the use of state feedback control (SFC) theory to explain the role of the CNS in motor control. SFC postulates that the CNS controls motor output by (1) estimating the current dynamic state of the thing (e.g., arm) being controlled, and (2) generating controls based on this estimated state. SFC has successfully predicted a great range of non-speech motor phenomena, but as yet has not received attention in the speech motor control community. Here, we review some of the key characteristics of speech motor control and what they say about the role of the CNS in the process. We then discuss prior efforts to model the role of CNS in speech motor control, and argue that these models have inherent limitations – limitations that are overcome by an SFC model of speech motor control which we describe. We conclude by discussing a plausible neural substrate of our model.
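The two SFC steps named in the abstract, (1) estimate the current state of the controlled plant and (2) generate controls from that estimate, can be sketched as a minimal numerical loop. The scalar plant, gains, and noise level below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Minimal sketch of the SFC loop from the abstract: (1) estimate the state
# of the controlled plant by combining an efference-copy prediction with
# noisy sensory feedback, then (2) generate the control from that estimate.
# Plant dynamics, gains, and noise level are illustrative assumptions.

rng = np.random.default_rng(0)
a, b = 0.9, 1.0   # assumed scalar plant: x_next = a*x + b*u
K = 0.5           # observer (feedback-correction) gain
target = 1.0      # desired state (e.g., an articulatory or acoustic goal)

x = 0.0       # true plant state
x_hat = 0.0   # internal state estimate

for _ in range(200):
    u = (target - a * x_hat) / b   # (2) control computed from the estimate
    y = x + rng.normal(0.0, 0.05)  # noisy sensory feedback of the true state
    # (1) estimate update: efference-copy prediction plus feedback correction
    x_hat = a * x_hat + b * u + K * (y - x_hat)
    x = a * x + b * u              # true plant evolves

error = abs(x - target)
print(error)  # small: the controller holds the plant near the target
```

Note that the controller never acts on raw feedback directly; it acts on the estimate, which is exactly the property that lets SFC cope with noisy or delayed sensory signals.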
Affiliation(s)
- John F Houde
- Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco, San Francisco, CA, USA
15
Functional reorganization of the auditory pathways (or lack thereof) in callosal agenesis is predicted by monaural sound localization performance. Neuropsychologia 2009; 48:601-6. PMID: 19883670; DOI: 10.1016/j.neuropsychologia.2009.10.023.
Abstract
Neuroimaging studies show that permanent peripheral lesions such as unilateral deafness cause functional reorganization in the auditory pathways. However, functional reorganization of the auditory pathways resulting from higher-level damage or abnormalities remains poorly investigated. A relatively recent behavioural study points to functional changes in the auditory pathways in some, but interestingly not all, of the acallosal individuals that were tested. The present study used fMRI to investigate auditory activity in both cerebral hemispheres in those same acallosal subjects, in order to directly assess the contributions of ipsilateral and contralateral functional pathway reorganization. We predicted that functional reorganization could be inferred from behavioural performance. As reported previously in a number of neuroimaging studies, binaural stimulation induced balanced activity across the two hemispheres in neurologically intact subjects, while monaural stimulation induced strong contralateral and weak ipsilateral activity. In accordance with the behavioural predictions, some acallosal subjects showed patterns of auditory cortical activity similar to those of neurologically intact subjects, while others showed functional reorganization of the auditory pathways: a significant increase of neural activity in the contralateral pathway and/or a significant decrease in the ipsilateral pathway. These findings indicate that, at least in some acallosal subjects, functional reorganization within the auditory pathways contributes to compensating for the absence of the corpus callosum.
16
Paiement P, Champoux F, Bacon B, Lassonde M, Gagné JP, Mensour B, Leroux JM, Lepore F. Functional reorganization of the human auditory pathways following hemispherectomy: An fMRI demonstration. Neuropsychologia 2008; 46:2936-42. [DOI: 10.1016/j.neuropsychologia.2008.06.009] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2008] [Revised: 05/28/2008] [Accepted: 06/03/2008] [Indexed: 10/21/2022]
17
Menéndez-Colino LM, Falcón C, Traserra J, Berenguer J, Pujol T, Doménech J, Bernal-Sprekelsen M. Activation patterns of the primary auditory cortex in normal-hearing subjects: a functional magnetic resonance imaging study. Acta Otolaryngol 2007; 127:1283-91. [PMID: 17851933 DOI: 10.1080/00016480701258705] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
CONCLUSIONS These results demonstrate that functional magnetic resonance imaging (fMRI) is an optimal tool to investigate the auditory cortex. The study suggests that there is a medio-lateral gradient of responsiveness, with high frequencies represented medially and low frequencies laterally. The contralateral auditory cortex is more responsive than the ipsilateral cortex to tones presented monaurally. OBJECTIVES To demonstrate the activation of the primary auditory cortex in normal-hearing subjects using fMRI and to examine the response and topographic location of activation in the human auditory brain to stimulation with two different frequencies in a large group of volunteers. SUBJECTS AND METHODS Scanning was performed on a 1.5 Tesla MR scanner with head gradient coils and a birdcage radiofrequency coil. Multiplanar echo-planar images were acquired in 32 subjects aged between 18 and 49 years. Two groups were defined according to age (group A, 18 to <35 years old; group B, 35 to <50 years old). We studied normal-hearing subjects scanned while listening to auditory stimuli: narrative text in one volunteer and non-speech noise (pure tones at 750 Hz and 2 kHz) in all subjects. RESULTS For both tone frequencies, auditory activation was observed bilaterally across the supratemporal plane in 29 of the 32 subjects (90.62%) with a probability level of p<0.001. In Heschl's gyrus (HG) contralateral to the stimulated ear, the extent of activation was generally greater than in homolateral HG. There were no statistical differences in HG activation according to age or sex. The 750 Hz tone activated more voxels in the medial area of the transverse temporal gyrus (TTG) whereas the 2000 Hz tone activated more voxels in the lateral TTG.
18
Green KMJ, Julyan PJ, Hastings DL, Ramsden RT. Auditory cortical activation and speech perception in cochlear implant users. J Laryngol Otol 2007; 122:238-45. [PMID: 17517160 DOI: 10.1017/s0022215107008043] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Cochlear implantation is generally accepted as a successful means of restoring auditory sensation to profoundly deaf individuals. Although most patients can expect a satisfactory outcome following implantation, some have poor speech perception outcomes. This investigation used [18F]-fluorodeoxyglucose positron emission tomography to measure cortical activity resulting from auditory stimulation in seven ‘good’ and four ‘poor’ cochlear implant recipients. Activations were significantly greater in both the primary and association cortices in the good compared with the poor implant users. We suggest that the ability to access the more specialised speech processing abilities of the auditory association cortices helps determine outcome following cochlear implantation.
Affiliation(s)
- K M J Green
- Department of Otolaryngology, Manchester Royal Infirmary Manchester, UK.
19
Weinstein S, Werker JF, Vouloumanos A, Woodward TS, Ngan ETC. Do you hear what I hear? Neural correlates of thought disorder during listening to speech in schizophrenia. Schizophr Res 2006; 86:130-7. [PMID: 16806838 DOI: 10.1016/j.schres.2006.05.011] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/09/2005] [Revised: 05/09/2006] [Accepted: 05/11/2006] [Indexed: 11/22/2022]
Abstract
Thought disorder is a fundamental symptom of schizophrenia, observable as irregularities in speech. It has been associated with functional and structural abnormalities in brain regions involved in language processing, including left temporal regions, during language production tasks. We were interested in the neural correlates of thought disorder during receptive language processing, as this function is relatively preserved despite relying on the same brain regions as expressive language. Twelve patients with schizophrenia and 11 controls listened to 30-s speech samples while undergoing fMRI scanning. Thought disorder and global symptom ratings were obtained for each patient. Thought disorder but not global symptomatology correlated positively with the BOLD response in the left posterior superior temporal lobe while listening to comprehensible speech (cluster-level corrected p=.023). The pattern of brain activity associated with thought disorder during listening to comprehensible speech differs from that seen during language generation tasks, where a reduction of the leftward laterality of language has often been observed. As receptive language is spared in thought disorder, we propose that the increase in activation reflects compensatory processing allowing for normal performance.
Affiliation(s)
- Sara Weinstein
- Department of Psychiatry, University of British Columbia, Vancouver, BC, Canada.
20
Devous MD, Altuna D, Furl N, Cooper W, Gabbert G, Ngai WT, Chiu S, Scott JM, Harris TS, Payne JK, Tobey EA. Maturation of speech and language functional neuroanatomy in pediatric normal controls. J Speech Lang Hear Res 2006; 49:856-66. [PMID: 16908880 DOI: 10.1044/1092-4388(2006/061)] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
PURPOSE This study explores the relationship between age and resting-state regional cerebral blood flow (rCBF) in regions associated with higher order language skills using a population of normal children, adolescents, and young adults. METHOD rCBF was measured in 33 normal participants between the ages of 7 and 19 years using single photon emission computed tomography. Participants' ages were regressed on rCBF values (normalized to whole-brain CBF) in 2 ways: (a) within anatomically defined, language-related regions of interest (ROIs) including Wernicke's area, Broca's area, angular gyrus, planum temporale, and Heschl's gyrus and (b) within clusters of voxels found to be significantly related to age in voxel-wise analyses. RESULTS rCBF in all anatomically defined ROIs except Heschl's gyrus declined as a function of age. Additionally, voxel-wise analyses revealed clusters where rCBF declined with age in left inferior parietal, left superior temporal, and right middle temporal regions, areas often implicated in higher order language functions. CONCLUSIONS These data suggest that ongoing maturation (e.g., dendritic pruning) in higher order cognitive areas (e.g., angular gyrus) continues into adolescence, as reflected by declining rCBF, while the primary auditory area (Heschl's gyrus) has become a stable neuronal population by age 7 years.
Affiliation(s)
- Michael D Devous
- University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd., Dallas, TX 75390-9061, USA.
21
Behne N, Wendt B, Scheich H, Brechmann A. Contralateral White Noise Selectively Changes Left Human Auditory Cortex Activity in a Lexical Decision Task. J Neurophysiol 2006; 95:2630-7. [PMID: 16436478 DOI: 10.1152/jn.01201.2005] [Citation(s) in RCA: 17] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
In a previous study, we hypothesized that the approach of presenting information-bearing stimuli to one ear and noise to the other ear may be a general strategy to determine hemispheric specialization in auditory cortex (AC). In that study, we confirmed the dominant role of the right AC in directional categorization of frequency modulations by showing that fMRI activation of right but not left AC was sharply emphasized when masking noise was presented to the contralateral ear. Here, we tested this hypothesis using a lexical decision task supposed to be mainly processed in the left hemisphere. Subjects had to distinguish between pseudowords and natural words presented monaurally to the left or right ear either with or without white noise to the other ear. According to our hypothesis, we expected a strong effect of contralateral noise on fMRI activity in left AC. For the control conditions without noise, we found that activation in both auditory cortices was stronger on contralateral than on ipsilateral word stimulation consistent with a more influential contralateral than ipsilateral auditory pathway. Additional presentation of contralateral noise did not significantly change activation in right AC, whereas it led to a significant increase of activation in left AC compared with the condition without noise. This is consistent with a left hemispheric specialization for lexical decisions. Thus our results support the hypothesis that activation by ipsilateral information-bearing stimuli is upregulated mainly in the hemisphere specialized for a given task when noise is presented to the more influential contralateral ear.
Affiliation(s)
- Nicole Behne
- Leibniz Institute for Neurobiology, Magdeburg, Germany.
22
Green KMJ, Julyan PJ, Hastings DL, Ramsden RT. Auditory cortical activation and speech perception in cochlear implant users: Effects of implant experience and duration of deafness. Hear Res 2005; 205:184-92. [PMID: 15953527 DOI: 10.1016/j.heares.2005.03.016] [Citation(s) in RCA: 71] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/23/2004] [Accepted: 03/17/2005] [Indexed: 11/22/2022]
Abstract
This study aimed to investigate the relationship between outcome following cochlear implantation and auditory cortical activation. It also studied the effects of length of implant use and duration of deafness on the auditory cortical activations. Cortical activity resulting from auditory stimulation was measured using [(18)F]FDG positron emission tomography. In a group of 18 experienced adult cochlear implant users, we found a positive correlation between speech perception and activations in both the primary and association auditory cortices. This correlation was present in a subgroup of experienced implant users but absent in a group of new implant users with similar speech perception abilities. There was a significant negative correlation between duration of deafness and auditory cortical activation. This study gives insights into the relationship between implant speech perception and auditory cortical activation and the influence of duration of preceding deafness and implant experience.
Affiliation(s)
- Kevin M J Green
- Department of Otolaryngology, Manchester Royal Infirmary, Manchester, UK.
23
Behne N, Scheich H, Brechmann A. Contralateral White Noise Selectively Changes Right Human Auditory Cortex Activity Caused by a FM-Direction Task. J Neurophysiol 2005; 93:414-23. [PMID: 15356179 DOI: 10.1152/jn.00568.2004] [Citation(s) in RCA: 37] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Animal and human studies suggest that directional categorization of frequency-modulated (FM) tones (rising vs. falling) is a function of the right auditory cortex (AC). To investigate this hemispheric specialization in more detail, we analyzed both the binaural and monaural representation of FM tones and the influence of contralateral white noise on the processing of FM tone direction. In two fMRI-experiments, FM tones with varied direction, center-frequencies, and duration were presented binaurally or monaurally without contralateral white noise (experiment 1) and with contralateral white noise (experiment 2) while the subjects had to perform the same directional categorization task. In experiment 1, contralateral FM tones led to strongest activation, binaural FM tones to intermediate, and ipsilateral FM tones to weakest activation in each AC. This is in accordance with binaural response properties of neurons in animal AC. In experiment 2, contralateral white noise had no significant effect on the activation of left AC by FM tones, whereas in right AC, it led to a significant increase in activation for ipsilateral FM tones. This result provides further support for the critical role of right AC for directional categorization of FM tones, which for ipsilateral input has to be processed in competition to the excitatory input of white noise via the direct contralateral pathway.
Affiliation(s)
- Nicole Behne
- Leibniz Institute for Neurobiology, Brenneckestr. 6, 39118 Magdeburg, Germany.
24
Mosier K, Gilbert R. New imaging techniques: integrating structural and functional imaging in the head and neck. Neuroimaging Clin N Am 2004; 14:827-52. [PMID: 15489154 DOI: 10.1016/j.nic.2004.07.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
Traditionally, the mainstay of head and neck MR imaging has been the identification of structural alterations resulting from pathology. Now, the advent of fast MR imaging techniques provides the opportunity for radiologists to integrate structural and functional imaging in the head and neck. This article highlights functional imaging techniques that provide a means toward a complete evaluation of structural integrity and function in various systems of the head and neck.
Affiliation(s)
- Kristine Mosier
- Departments of Radiology and Surgery, Memorial Sloan-Kettering Cancer Center, Box 506, 1275 York Avenue, New York, NY 10021, USA; Department of Radiology, University of Medicine and Dentistry of New Jersey, Newark, NJ, USA
25
Hadlington L, Bridges AM, Darby RJ. Auditory location in the irrelevant sound effect: The effects of presenting auditory stimuli to either the left ear, right ear or both ears. Brain Cogn 2004; 55:545-57. [PMID: 15223201 DOI: 10.1016/j.bandc.2004.04.001] [Citation(s) in RCA: 18] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/07/2004] [Indexed: 11/16/2022]
Abstract
Two experiments used both irrelevant speech and tones in order to assess the effect of manipulating the spatial location of irrelevant sound. Previous research in this area had produced inconclusive results (e.g., Colle, 1980). The current study demonstrated a novel finding, that sound presented to the left ear produces the greatest level of disruption. These results were explained in terms of hemispheric specialisation for processing of some supra-linguistic components in the unattended sound. Results also supported previous research by demonstrating that both forms of irrelevant sound disrupted performance on serial memory tasks (Bridges & Jones, 1996; Colle & Welsh, 1976; Jones, Alford, Bridges, Tremblay, & Macken, 1999; Jones, Miles, & Page, 1990).
Affiliation(s)
- Lee Hadlington
- Psychology Division, University of Wolverhampton, MC Building, Wolverhampton WV1 1SB, UK.
26
Allen A, Barnes A, Singh RS, Patterson J, Hadley DM, Wyper D. Perfusion SPECT in cochlear implantation and promontory stimulation. Nucl Med Commun 2004; 25:521-5. [PMID: 15100513 DOI: 10.1097/00006231-200405000-00015] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
BACKGROUND Recent studies of profoundly deaf patients with cochlear implants have demonstrated that these patients are able to process sound in the auditory cortex in a similar way to normal subjects. However, there are large variations in outcome. Various clinical criteria are used for subject selection, and the decision as to which ear is to be implanted involves electrical stimulation of the promontory, which is used to confirm the persistence of auditory neurones and fibres that can be utilized by the cochlear implant. In this study we have used SPECT with Tc-HMPAO to investigate activation of the auditory cortex in cochlear implantees post-surgery. In addition, we investigated whether electrical stimulation of the promontory produces a change in blood flow in the auditory cortex in pre-surgery candidates, which would indicate viable auditory networks that can be utilized by a cochlear implant device. METHODS AND RESULTS Image analysis was performed with SPM99. Results of a simple subtraction paradigm indicated bilateral activation of auditory cortex and Wernicke's area in the post-implant group during auditory stimulus (speech), and bilateral activation of the ventral lateral posterior thalamus and bilateral auditory association cortex BA21/22/42 in the pre-implant group during electrical stimulus, but no activation of the primary auditory cortex. A conjunction analysis used to investigate the common areas of activation across both groups during the stimulus condition showed that there was a common bilateral activation of the primary auditory cortex in both groups (BA22/41/42). In addition, analysis of a subset of the seven post-implant subjects who did not comprehend the speech in our study showed an activation (Pu<0.05, where Pu is the peak voxel threshold, uncorrected for multiple comparisons) in the left auditory cortex that extended into area BA22, synonymous with Wernicke's area. This supports the theory that this region has a sensory role.
27
Rissman J, Eliassen JC, Blumstein SE. An Event-Related fMRI Investigation of Implicit Semantic Priming. J Cogn Neurosci 2003; 15:1160-75. [PMID: 14709234 DOI: 10.1162/089892903322598120] [Citation(s) in RCA: 183] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The neural basis underlying implicit semantic priming was investigated using event-related fMRI. Prime-target pairs were presented auditorily for lexical decision (LD) on the target stimulus, which was either semantically related or unrelated to the prime, or was a nonword. A tone task was also administered as a control. Behaviorally, all participants demonstrated semantic priming in the LD task. fMRI results showed that for all three conditions of the LD task, activation was seen in the superior temporal gyrus (STG), the middle temporal gyrus (MTG), and the inferior parietal lobe, with greater activation in the unrelated and nonword conditions than in the related condition. Direct comparisons of the related and unrelated conditions revealed foci in the left STG, left precentral gyrus, left and right MTGs, and right caudate, exhibiting significantly lower activation levels in the related condition. The reduced activity in the temporal lobe suggests that the perception of the prime word activates a lexical-semantic network that shares common elements with the target word, and, thus, the target can be recognized with enhanced neural efficiency. The frontal lobe reductions most likely reflect the increased efficiency in monitoring the activation of lexical representations in the temporal lobe, making a decision, and planning the appropriate motor response.
Affiliation(s)
- Jesse Rissman
- Department of Cognitive and Linguistic Sciences, Brown University, Providence, RI 02912, USA
28
Tateya I, Naito Y, Hirano S, Kojima H, Inoue M, Kaneko KI, Toyoda H, Ueno M, Ishizu K, Ito J. Inner ear hearing loss modulates ipsilateral temporal lobe activation by monaural speech stimuli. Neuroreport 2003; 14:763-7. [PMID: 12692479 DOI: 10.1097/00001756-200304150-00021] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
We examined cortical activation by speech in patients with moderate inner ear hearing loss using PET to investigate the response of the language network to insufficient speech input. We made two word lists, well-perceived words and poorly-perceived words, and measured rCBF during monaural presentation of these words. Well-perceived words activated bilateral temporal lobes, bilateral inferior frontal gyri (IFG) and left angular gyrus (AG) regardless of the ear stimulated. Poorly-perceived words activated the contralateral temporal lobe and bilateral IFG, while little or no activation was observed in the ipsilateral temporal lobe and left AG. Insufficient activation of the temporal lobe ipsilateral to the ear stimulated may correlate with less accurate word comprehension in patients with inner ear hearing loss.
Affiliation(s)
- Ichiro Tateya
- Department of Otolaryngology-Head and Neck Surgery, Graduate School of Medicine, Kyoto University, Sakyo-ku, Kyoto 606-8507, Japan.
29
Busatto GF, Garrido GEJ, Almeida OP, Castro CC, Camargo CHP, Cid CG, Buchpiguel CA, Furuie S, Bottino CM. A voxel-based morphometry study of temporal lobe gray matter reductions in Alzheimer's disease. Neurobiol Aging 2003; 24:221-31. [PMID: 12498956 DOI: 10.1016/s0197-4580(02)00084-2] [Citation(s) in RCA: 138] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
Several MRI studies have reported reductions in temporal lobe volumes in Alzheimer's disease (AD). Measures have been usually obtained with regions-of-interest (ROI) drawn manually on selected medial and lateral portions of the temporal lobes, with variable choices of anatomical borders across different studies. We used the fully automated voxel-based morphometry (VBM) approach to investigate gray matter abnormalities over the entire extension of the temporal lobe in 14 AD patients (MMSE 14-25) and 14 healthy controls. Foci of significantly reduced gray matter volume in AD patients were detected in both medial and lateral temporal regions, most significantly in the right and left posterior parahippocampal gyri and the left posterior inferior temporal gyrus/fusiform gyrus (P<0.05, corrected for multiple comparisons). At a more flexible statistical threshold (P<0.001, uncorrected for multiple comparisons), circumscribed foci of significant gray matter reduction were also detected in the right amygdala/entorhinal cortex, the anterior and posterior borders of the superior temporal gyrus bilaterally, and the anterior portion of the left middle temporal gyrus. These VBM results confirm previous findings of temporal lobe atrophic changes in AD, and suggest that these abnormalities may be confined to specific sites within that lobe, rather than showing a widespread distribution.
Affiliation(s)
- Geraldo F Busatto
- Department of Psychiatry, Faculty of Medicine, University of São Paulo, São Paulo, Brazil.
30
Abstract
We used functional brain imaging with H2(15)O positron emission tomography (PET) to study a remarkable neurophysiological finding in the normal brain. Auditory stimulation at various frequencies in the gamma range elicits a steady-state scalp electroencephalographic (EEG) response that peaks in amplitude at 40 Hz, with smaller amplitudes at lower and higher stimulation frequencies. We confirmed this finding in 28 healthy subjects, each studied with monaural trains of stimuli at 12 different stimulation rates (12, 20, 30, 32, 35, 37.5, 40, 42.5, 45, 47.5, 50, and 60 Hz). There is disagreement as to whether the peak in the amplitude of the EEG response at 40 Hz corresponds simply to a superimposition of middle latency auditory evoked potentials, neuronal synchronization, or increased cortical synaptic activity at this stimulation frequency. To clarify this issue, we measured regional cerebral blood flow (rCBF) with H2(15)O PET in nine normal subjects at rest and during auditory stimulation at four different frequencies (12, 32, 40, and 47 Hz) and analyzed the results with statistical parametric mapping. The behavior of the rCBF response was similar to the steady-state EEG response, reaching a peak at 40 Hz. This finding suggests that the steady-state amplitude peak is related to increased cortical synaptic activity. Additionally, we found that, compared with other stimulation frequencies, 40 Hz selectively activated the auditory region of the pontocerebellum, a brain structure with important roles in cortical inhibition and timing.
31
Houde JF, Nagarajan SS, Sekihara K, Merzenich MM. Modulation of the auditory cortex during speech: an MEG study. J Cogn Neurosci 2002; 14:1125-38. [PMID: 12495520 DOI: 10.1162/089892902760807140] [Citation(s) in RCA: 317] [Impact Index Per Article: 14.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Several behavioral and brain imaging studies have demonstrated a significant interaction between speech perception and speech production. In this study, auditory cortical responses to speech were examined during self-production and feedback alteration. Magnetic field recordings were obtained from both hemispheres in subjects who spoke while hearing controlled acoustic versions of their speech feedback via earphones. These responses were compared to recordings made while subjects listened to a tape playback of their production. The amplitude of tape playback was adjusted to match the amplitude of self-produced speech. Recordings of evoked responses to both self-produced and tape-recorded speech were obtained free of movement-related artifacts. Responses to self-produced speech were weaker than were responses to tape-recorded speech. Responses to tones were also weaker during speech production, when compared with responses to tones recorded in the presence of speech from tape playback. However, responses evoked by gated noise stimuli did not differ for recordings made during self-produced speech versus recordings made during tape-recorded speech playback. These data suggest that during speech production, the auditory cortex (1) attenuates its sensitivity and (2) modulates its activity as a function of the expected acoustic feedback.
Affiliation(s)
- John F Houde
- Center for Integrative Neuroscience, University of California, San Francisco 94143, USA.
32
Mosier K, Gilbert R. New imaging techniques: integrating structural and functional imaging in the head and neck. Magn Reson Imaging Clin N Am 2002; 10:679-705. [PMID: 12685500 DOI: 10.1016/s1064-9689(02)00017-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
The application of fast MRI techniques provides the opportunity to image function in various systems of the head and neck. Incorporating fMRI techniques into head and neck imaging protocols provides the potential for the head and neck radiologist to investigate structural integrity and function and thus play a central role in the diagnostic and prognostic work-up of the patient.
Affiliation(s)
- Kristine Mosier
- Department of Radiology, Memorial Sloan-Kettering Cancer Center, Box 506, 1275 York Avenue, New York, NY 10021, USA.
33
Mathiak K, Hertrich I, Grodd W, Ackermann H. Cerebellum and speech perception: a functional magnetic resonance imaging study. J Cogn Neurosci 2002; 14:902-12. [PMID: 12191457 DOI: 10.1162/089892902760191126] [Citation(s) in RCA: 88] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
A variety of data indicate that the cerebellum participates in perceptual tasks requiring the precise representation of temporal information. Access to the word form of a lexical item requires, among other functions, the processing of durational parameters of verbal utterances. Therefore, cerebellar dysfunctions must be expected to impair word recognition. In order to specify the topography of the assumed cerebellar speech perception mechanism, a functional magnetic resonance imaging study was performed using the German lexical items "Boden" ([bodn], Engl. "floor") and "Boten" ([botn], "messengers") as test materials. The contrast in sound structure of these two lexical items can be signaled either by the length of the word-medial pause (closure time, CLT; an exclusively temporal measure) or by the aspiration noise of word-medial "d" or "t" (voice onset time, VOT; an intrasegmental cue). A previous study found bilateral cerebellar disorders to compromise word recognition based on CLT whereas the encoding of VOT remained unimpaired. In the present study, two series of "Boden - Boten" utterances were resynthesized, systematically varying either in CLT or VOT. Subjects had to identify both words "Boden" and "Boten" by analysis of either the durational parameter CLT or the VOT aspiration segment. In a subtraction design, CLT categorization as compared to VOT identification (CLT - VOT) yielded a significant hemodynamic response of the right cerebellar hemisphere (neocerebellum Crus I) and the frontal lobe (anterior to Broca's area). The reversed contrast (VOT - CLT) resulted in a single activation cluster located at the level of the supratemporal plane of the dominant hemisphere. These findings provide the first evidence for a distinct contribution of the right cerebellar hemisphere to speech perception in terms of encoding of durational parameters of verbal utterances.
Verbal working memory tasks, lexical response selection, and auditory imagery of word strings have been reported to elicit activation clusters of a similar location. Conceivably, representation of the temporal structure of speech sound sequences represents the common denominator of cerebellar participation in cognitive tasks acting on a phonetic code.
Affiliation(s)
- Klaus Mathiak
- MEG-Zentrum, University of Tübingen, Otfried-Müller-Strasse 47, 72076 Tübingen, Germany.
34
Jäncke L, Wüstenberg T, Schulze K, Heinze HJ. Asymmetric hemodynamic responses of the human auditory cortex to monaural and binaural stimulation. Hear Res 2002; 170:166-78. [PMID: 12208550 DOI: 10.1016/s0378-5955(02)00488-4] [Citation(s) in RCA: 103] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Applying whole-head functional magnetic resonance imaging (fMRI) in 11 neurologically intact subjects, hemodynamic responses to monaurally or binaurally presented auditory stimuli were measured. To expand on previous studies in this research area, we used tones and consonant-vowel (CV) syllables. In one group of subjects (n=6), the perceived loudness of the monaurally presented stimuli was adjusted to match the loudness of the binaurally presented stimuli. In a second group (n=5), no loudness adjustment was performed, so the monaural stimuli were perceived as less loud (approximately 10 dB) than the binaural stimuli. These extensions allowed us to test whether CV syllables and tones produce different contralaterality effects (stronger hemodynamic responses in the auditory cortex contralateral to the stimulated ear) and whether binaural stimulation results in stronger activations in the auditory areas than either monaural stimulation condition (binaural summation), independent of loudness influences.
In summary, we obtained the following findings: (1) strong contralaterality effects during monaural acoustic stimulation in the posterior superior temporal gyrus (STG), comprising the planum temporale and the dorsal bank of the superior temporal sulcus, for both CV syllables and tones; (2) the hemodynamic responses to contralaterally presented stimuli (during the monaural conditions) were mostly stronger than those to binaurally presented CV syllables; (3) there was no interaction between stimulus type and the size of the contralaterality effect; (4) there was no indication of binaural summation; rather, we found stronger hemodynamic responses to the sum of both monaural stimulations (right and left ear) than to binaural stimulation in all auditory areas; (5) there were generally stronger hemodynamic responses to CV syllables than to tones in the posterior STG, while the hemodynamic responses to tones were stronger in the anterior part of the STG (temporal pole); and finally (6) there was no general difference in the hemodynamic response of the auditory cortex between the two groups receiving loudness-matched or non-loudness-matched monaural stimulation. These findings are discussed in the context of the underlying neurophysiological mechanisms, the peculiarities of fMRI, and the direct access and callosal relay models of hemispheric lateralization.
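The binaural-summation comparison in finding (4) is simple arithmetic: classic summation would predict the binaural response to exceed each monaural one, whereas the study instead found the sum of the two monaural responses to exceed the binaural response. A toy sketch with made-up response amplitudes (arbitrary units, not values from the study):

```python
# Illustrative (invented) peak hemodynamic responses, arbitrary units.
left_ear = 1.8    # response to left-monaural stimulation
right_ear = 2.1   # response to right-monaural stimulation
binaural = 2.6    # response to binaural stimulation

monaural_sum = left_ear + right_ear
# The reported pattern: the summed monaural responses exceed the binaural
# response, i.e. no binaural summation at the cortical level.
print(monaural_sum > binaural)  # True for these illustrative values
```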
Affiliation(s)
- L Jäncke
- Institute of Psychology, Neuropsychology, University Zürich Zürichbergstr. 43, CH-8044, Zurich, Switzerland.
35
Majerus S, Collette F, Van der Linden M, Peigneux P, Laureys S, Delfiore G, Degueldre C, Luxen A, Salmon E. A PET investigation of lexicality and phonotactic frequency in oral language processing. Cogn Neuropsychol 2002; 19:343-61. [DOI: 10.1080/02643290143000213] [Citation(s) in RCA: 17] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/16/2022]
36
Wong D, Pisoni DB, Learn J, Gandour JT, Miyamoto RT, Hutchins GD. PET imaging of differential cortical activation by monaural speech and nonspeech stimuli. Hear Res 2002; 166:9-23. [PMID: 12062754 PMCID: PMC3429125 DOI: 10.1016/s0378-5955(02)00311-8] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
Positron emission tomography imaging was used to investigate the brain activation patterns of listeners presented monaurally (right ear) with speech and nonspeech stimuli. The major objectives were to identify regions involved in speech and nonspeech processing, and to develop a stimulus paradigm suitable for studies of cochlear-implant subjects. Scans were acquired under a silent condition and under stimulus conditions that required listeners to press a response button to repeated words, sentences, time-reversed (TR) words, or TR sentences. Group-averaged data showed activated foci in the posterior superior temporal gyrus (STG) bilaterally and in or near the anterior insula/frontal operculum across all stimulus conditions compared to silence. The anterior STG was activated bilaterally for speech signals, but only on the right side for TR sentences. Only the nonspeech conditions showed frontal-lobe activation, in both the left inferior frontal gyrus [Brodmann's area (BA) 47] and ventromedial prefrontal areas (BA 10/11). An STG focus near the superior temporal sulcus was observed for sentences compared to words. The present findings show that both speech and nonspeech stimuli engaged a distributed network in temporal cortex for early acoustic and prelexical phonological analysis. Yet backward speech, though lacking semantic content, is perceived as speechlike, engaging prefrontal regions implicated in lexico-semantic processing.
Affiliation(s)
- Donald Wong
- Department of Anatomy and Cell Biology, Medical Science 5022, Indiana University School of Medicine, 635 Barnhill Drive, Indianapolis, IN 46202, USA.
37
Suzuki M, Kitano H, Kitanishi T, Itou R, Shiino A, Nishida Y, Yazawa Y, Ogawa F, Kitajima K. Cortical and subcortical activation with monaural monosyllabic stimulation by functional MRI. Hear Res 2002; 163:37-45. [PMID: 11788197 DOI: 10.1016/s0378-5955(01)00367-7] [Citation(s) in RCA: 42] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
Abstract
Few reports have characterized auditory processing under monaural stimulation, which is important to the understanding of auditory brain activity in subjects with hearing loss. We therefore measured regional brain activity in response to monaural stimulation with 95 dB SPL monosyllables using functional magnetic resonance imaging (fMRI) in subjects with normal hearing and in five subjects with unilateral deafness as controls for 'cross-hearing'. Images were analyzed with statistical parametric mapping software. In subjects without hearing loss, the stimuli elicited cortical activation in the primary auditory (BA 41) and auditory association regions (BA 42, 22), particularly contralaterally, where the extent of activation was approximately 2.5 times the ipsilateral extent. All patients with profound unilateral deafness showed no statistically significant response in the primary auditory and auditory association regions, ruling out an important influence from cross-hearing. We found fMRI to be a useful technique for the analysis of auditory processing that should be applicable to patients with various hearing abnormalities.
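Contralateral-to-ipsilateral extent ratios like the roughly 2.5-fold one reported above are often summarized as a laterality index, LI = (C - I) / (C + I). A minimal sketch; the voxel counts below are hypothetical, chosen only to reproduce a 2.5:1 ratio, and are not taken from the study:

```python
def laterality_index(contra: float, ipsi: float) -> float:
    """(C - I) / (C + I): +1 means fully contralateral, -1 fully ipsilateral."""
    return (contra - ipsi) / (contra + ipsi)

# Hypothetical suprathreshold voxel counts (illustrative only): the
# contralateral extent is 2.5 times the ipsilateral extent.
contra, ipsi = 250, 100
print(laterality_index(contra, ipsi))
```

A 2.5:1 extent ratio corresponds to LI = 150/350, i.e. a moderately contralateral index rather than a fully lateralized one.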
Affiliation(s)
- Mikio Suzuki
- Department of Otolaryngology, Shiga University of Medical Science, Seta, Otsu, Japan.
38
Ackermann H, Hertrich I, Mathiak K, Lutzenberger W. Contralaterality of cortical auditory processing at the level of the M50/M100 complex and the mismatch field: a whole-head magnetoencephalography study. Neuroreport 2001; 12:1683-7. [PMID: 11409739 DOI: 10.1097/00001756-200106130-00033] [Citation(s) in RCA: 32] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
In humans, auditory input is represented more strongly in the hemisphere contralateral to the stimulated ear. To specify the temporal aspects of this contralaterality effect within the domain of speech stimuli, the present study recorded a series of evoked magnetic fields (M50, M100, mismatch field) subsequent to monaural application of stop consonant-vowel syllables using whole-head magnetoencephalography (MEG). The M50 components exhibited a skewed, cross-symmetrical distribution: an initial maximum peak succeeded by a knot over the contralateral temporal lobe, and the reversed pattern over the ipsilateral temporal lobe. Most probably, this pattern of evoked fields reflects two distinct stages of central-auditory processing: (a) initial excitation of the larger contralateral and the smaller ipsilateral projection area of the stimulated ear; (b) subsequent transcallosal activation of the residual neurons, i.e. the targets of the non-stimulated ear, at either side. Previous studies using non-speech stimuli found contralaterality of central-auditory processing to extend to the M100 field. In contrast, in the present study a larger amplitude of the ipsilateral M100, as compared to the respective opposite deflection, emerged after stimulation of either ear. Finally, the computed magnetic analogues of mismatch negativity failed to show any significant laterality effects. These data provide the first evidence for a distinct pattern of hemispheric differences at the level of the M50/M100 complex subsequent to monaural application of speech stimuli.
Affiliation(s)
- H Ackermann
- Department of Neurology, University of Tuebingen, Germany
39
Engelien A, Stern E, Isenberg N, Engelien W, Frith C, Silbersweig D. The parahippocampal region and auditory-mnemonic processing. Ann N Y Acad Sci 2000; 911:477-85. [PMID: 10911898 DOI: 10.1111/j.1749-6632.2000.tb06750.x] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Affiliation(s)
- A Engelien
- Functional Neuroimaging Laboratory, Weill Medical College of Cornell University, New York, New York 10021, USA.
40
Naito Y, Tateya I, Fujiki N, Hirano S, Ishizu K, Nagahama Y, Fukuyama H, Kojima H. Increased cortical activation during hearing of speech in cochlear implant users. Hear Res 2000; 143:139-46. [PMID: 10771191 DOI: 10.1016/s0378-5955(00)00035-6] [Citation(s) in RCA: 41] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
To investigate cortical activity during listening to noise and speech in cochlear implant (CI) users, we compared cerebral blood flow in postlingually deafened CI users with that in normal-hearing subjects using positron emission tomography. While noise activation in CI users did not significantly differ from that in normal subjects, hearing speech activated more cortical areas in CI users than in normal subjects. A comparison of speech activation in these two groups revealed higher activation in CI users not only in the temporal cortices but also in Broca's area and its right-hemisphere homologue, the supplementary motor area, and the anterior cingulate gyrus. In postlingually deafened subjects, the hearing of speech coded by a CI may be accompanied by increased activation of both the temporal and frontal cortices.
Affiliation(s)
- Y Naito
- Department of Hearing and Speech Sciences, Kyoto University Graduate School of Medicine, Sakyo-ku, Kyoto, Japan.
42
Abstract
Positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) have been extensively used to explore the functional neuroanatomy of cognitive functions. Here we review 275 PET and fMRI studies of attention (sustained, selective, Stroop, orientation, divided), perception (object, face, space/motion, smell), imagery (object, space/motion), language (written/spoken word recognition, spoken/no spoken response), working memory (verbal/numeric, object, spatial, problem solving), semantic memory retrieval (categorization, generation), episodic memory encoding (verbal, object, spatial), episodic memory retrieval (verbal, nonverbal, success, effort, mode, context), priming (perceptual, conceptual), and procedural memory (conditioning, motor, and nonmotor skill learning). To identify consistent activation patterns associated with these cognitive operations, data from 412 contrasts were summarized at the level of cortical Brodmann's areas, insula, thalamus, medial-temporal lobe (including hippocampus), basal ganglia, and cerebellum. For perception and imagery, activation patterns included primary and secondary regions in the dorsal and ventral pathways. For attention and working memory, activations were usually found in prefrontal and parietal regions. For language and semantic memory retrieval, typical regions included left prefrontal and temporal regions. For episodic memory encoding, consistently activated regions included left prefrontal and medial temporal regions. For episodic memory retrieval, activation patterns included prefrontal, medial temporal, and posterior midline regions. For priming, deactivations in prefrontal (conceptual) or extrastriate (perceptual) regions were consistently seen. For procedural memory, activations were found in motor as well as in non-motor brain areas. 
Analysis of regional activations across cognitive domains suggested that several brain regions, including the cerebellum, are engaged by a variety of cognitive challenges. These observations are discussed in relation to functional specialization as well as functional integration.
Affiliation(s)
- R Cabeza
- Department of Psychology, University of Alberta, Edmonton, Canada
43
Fujiki N, Naito Y, Hirano S, Kojima H, Shiomi Y, Nishizawa S, Konishi J, Honjo I. Correlation between rCBF and speech perception in cochlear implant users. Auris Nasus Larynx 1999; 26:229-36. [PMID: 10419029 DOI: 10.1016/s0385-8146(99)00009-7] [Citation(s) in RCA: 23] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
OBJECTIVE Although cochlear implants (CIs) have provided the opportunity for bilaterally deaf individuals to recover their hearing abilities, the speech perception performances of CI users vary considerably. To elucidate the cortical mechanisms of processing speech signals coded by CIs, we evaluated the correlation between brain activity during speech activation and speech perception in CI users using PET. METHODS Fourteen postlingually deaf CI users were examined. The CI used by the patients was a 22-channel system, and its speech-coding strategy was the Nucleus spectral peak (SPEAK) strategy. To evaluate speech perception performance, we examined vowel perception, consonant perception, and speech-tracking performance in the Japanese language. Regional cerebral blood flow (rCBF) was measured during no sound stimulation and during speech sound stimulation. PET data from the silent condition were subtracted from those of speech stimulation to determine changes in rCBF. In the search for changes in rCBF in areas for auditory processing, three regions of interest (ROIs) were selected: the primary auditory area, the auditory association area, and Broca's area. The correlation between the rCBF changes in the ROIs and the speech perception performances was analyzed using Pearson's correlation coefficient. RESULTS The patients' speech perception performances ranged widely. Although there were no significant correlations between speech perception and the rCBF increases in the primary auditory area and Broca's area, there were positive correlations in the auditory association area. In the left auditory association area, the correlation coefficient for vowel perception performance was 0.546 (P < 0.05) and that for the speech-tracking test was 0.657 (P < 0.05). For consonant perception performance, the correlation coefficient was 0.743 (P < 0.01).
In the right auditory association area, there was a positive correlation only with consonant perception performance (R = 0.576, P < 0.05). These correlations were stronger in the left hemisphere than in the right hemisphere. CONCLUSIONS It is suggested that the improvement of auditory processing of speech in CI users with the SPEAK strategy is accompanied by the recruitment of more neurons in the auditory association areas. The adult auditory cortices may still have plasticity.
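The ROI analysis above uses Pearson's product-moment correlation between per-subject rCBF increases and perception scores. A minimal sketch of that computation; the function is the standard formula, but the 14 data points are invented for illustration and are not the study's values:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data for 14 CI users (illustrative only): rCBF increase in the
# left auditory association area vs. consonant-perception score.
rcbf = [1.2, 0.8, 2.1, 1.5, 0.4, 1.9, 2.4, 1.1, 0.6, 1.7, 2.0, 0.9, 1.3, 2.2]
score = [55, 40, 80, 62, 30, 75, 88, 50, 35, 70, 78, 45, 58, 82]
print(round(pearson_r(rcbf, score), 3))
```

In practice one would also compute the p-value (e.g. via a t-test on r with n - 2 degrees of freedom) before comparing against the thresholds reported above.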
Affiliation(s)
- N Fujiki
- Department of Hearing and Speech Science, Kyoto University Graduate School of Medicine, Japan.
44
Fujiki N, Naito Y, Hirano S, Kojima H, Kamoto Y, Nishizawa S, Konishi J, Honjo I. Influence of speech-coding strategy on cortical activity in cochlear implant users: a positron emission tomographic study. Acta Otolaryngol 1998; 118:797-802. [PMID: 9870622 DOI: 10.1080/00016489850182468] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/16/2022]
Abstract
The effects of the speech-coding strategy of a cochlear implant (CI) on cortical activity were evaluated using positron emission tomography. The CIs used in the present study were 22-channel systems using either the Multipeak (MPEAK) or the spectral peak (SPEAK) speech-coding strategy. On comparing the two groups, it was found that speech-tracking performance was significantly higher in the SPEAK group than in the MPEAK group. Regional cerebral blood flow (rCBF) was measured during silent resting, noise stimulus, and speech stimulus periods. The increase in rCBF was localized mainly in the primary auditory area during the noise stimulus period. The increase in rCBF in the auditory association area during the speech stimulus period was stronger in the SPEAK group than in the MPEAK group. This finding suggests that the SPEAK strategy activates more speech-processing neuronal networks in the auditory association area than the MPEAK strategy.
Affiliation(s)
- N Fujiki
- Department of Hearing and Speech Science, Kyoto University Graduate School of Medicine, Japan.