1. Unmasking the relevance of hemispheric asymmetries—Break on through (to the other side). Prog Neurobiol 2020; 192:101823. [DOI: 10.1016/j.pneurobio.2020.101823]
2. Early tone categorization in absolute pitch musicians is subserved by the right-sided perisylvian brain. Sci Rep 2019; 9:1419. [PMID: 30723232] [PMCID: PMC6363806] [DOI: 10.1038/s41598-018-38273-0]
Abstract
Absolute pitch (AP) is defined as the ability to identify and label tones without a reference. In this context, the main question is whether early or late processing stages are responsible for this ability. We investigated the electrophysiological responses to tones in AP and relative pitch (RP) possessors while participants listened attentively to sine tones. Since event-related potentials are particularly suited for tracking tone encoding (N100 and P200), categorization (N200), and mnemonic functions (N400), we hypothesized that differences in early pitch-processing stages would be reflected by increased N100 and P200 areas in AP musicians, whereas differences in later cognitive stages of tone processing should be mirrored by increased N200 and/or N400 areas in AP musicians. AP possessors exhibited larger N100 areas and a tendency towards enhanced P200 areas. Furthermore, the sources of these components were estimated and statistically compared between the two groups for a set of a priori defined regions of interest. AP musicians demonstrated increased N100-related current densities in the right superior temporal sulcus, middle temporal gyrus, and Heschl's gyrus. These results are interpreted as indicating that early between-group differences in right-sided perisylvian brain regions might reflect auditory tone categorization rather than labelling mechanisms.
3. Pratt H, Bleich N, Mittelman N. Spatio-temporal distribution of brain activity associated with audio-visually congruent and incongruent speech and the McGurk Effect. Brain Behav 2015; 5:e00407. [PMID: 26664791] [PMCID: PMC4667754] [DOI: 10.1002/brb3.407]
Abstract
INTRODUCTION: Spatio-temporal distributions of cortical activity to audio-visual presentations of meaningless vowel-consonant-vowels were studied, with emphasis on the effects of audio-visual congruence/incongruence and on the McGurk effect. The McGurk effect occurs when a clearly audible syllable with one consonant is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant, and the resulting percept is a syllable with a consonant other than the auditorily presented one. METHODS: Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated, and effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. RESULTS: Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (<200 msec) processing of the consonant was prominent in the left hemisphere overall, except for right-hemisphere prominence in superior parietal cortex and secondary visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, followed by increased activity around 100 msec that decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and increased only to congruent audio-visual stimuli 30-70 msec following consonant onset. CONCLUSIONS: The results suggest left-hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration. Initially (30-70 msec), subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join the processing. Resolution of incongruent audio-visual inputs is then attempted and, if successful, McGurk perception occurs and cortical activity in the left hemisphere further increases between 170 and 260 msec.
Affiliation(s)
- Hillel Pratt, Naomi Bleich, Nomi Mittelman: Evoked Potentials Laboratory, Technion - Israel Institute of Technology, Haifa 32000, Israel
4. Santosa H, Hong MJ, Hong KS. Lateralization of music processing with noises in the auditory cortex: an fNIRS study. Front Behav Neurosci 2014; 8:418. [PMID: 25538583] [PMCID: PMC4260509] [DOI: 10.3389/fnbeh.2014.00418]
Abstract
The present study determined the effects of background noise on hemispheric lateralization in music processing by exposing 14 subjects to four different auditory environments: music segments only, noise segments only, music + noise segments, and the entire music interfered by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distinguish stimulus-evoked hemodynamics, the difference between the mean and the minimum value of the hemodynamic response for a given stimulus was used. When only music segments (rather than continuous music) were heard, right-hemispheric lateralization in music processing was about 75%; with noise segments alone, it was about 65%. When the music was mixed with noise, however, right-hemispheric lateralization increased. In particular, when the noise was slightly lower than the music (i.e., music level 10~15%, noise level 10%), all subjects showed right-hemispheric lateralization, presumably reflecting their effort to hear the music in the presence of noise. Too much noise, however, reduced these discerning efforts.
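The mean-minus-minimum feature described in this abstract is simple enough to sketch. The snippet below is an illustrative reconstruction, not the authors' code; the toy time courses and the crude left/right comparison are assumptions for demonstration only.

```python
# Illustrative sketch (not the authors' pipeline): for each stimulus window,
# the feature is mean(response) - min(response) of the hemodynamic signal.
def mean_minus_min(response):
    """Difference between the mean and the minimum of a hemodynamic response."""
    return sum(response) / len(response) - min(response)

def right_lateralized(left_feature, right_feature):
    """Crude lateralization check: right-hemisphere feature exceeds left."""
    return right_feature > left_feature

# Toy HbO time courses (arbitrary units) for one stimulus:
left = [0.0, 0.1, 0.3, 0.2, 0.0, -0.1]
right = [0.0, 0.2, 0.5, 0.4, 0.1, -0.1]
print(right_lateralized(mean_minus_min(left), mean_minus_min(right)))  # True
```

Because the minimum anchors the baseline, the feature is robust to slow offsets in the optical signal, which is presumably why a mean-to-minimum distance rather than a raw mean was used.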
Affiliation(s)
- Hendrik Santosa: Department of Cogno-Mechatronics Engineering, Pusan National University, Busan, South Korea
- Melissa Jiyoun Hong: Department of Education Policy and Social Analysis, Columbia University, New York, NY, USA
- Keum-Shik Hong: Department of Cogno-Mechatronics Engineering and School of Mechanical Engineering, Pusan National University, Busan, South Korea
5. Elmer S, Hänggi J, Jäncke L. Interhemispheric transcallosal connectivity between the left and right planum temporale predicts musicianship, performance in temporal speech processing, and functional specialization. Brain Struct Funct 2014; 221:331-44. [DOI: 10.1007/s00429-014-0910-x]
6. Kühnis J, Elmer S, Jäncke L. Auditory evoked responses in musicians during passive vowel listening are modulated by functional connectivity between bilateral auditory-related brain regions. J Cogn Neurosci 2014; 26:2750-61. [PMID: 24893742] [DOI: 10.1162/jocn_a_00674]
Abstract
Currently, there is striking evidence showing that professional musical training can substantially alter the response properties of auditory-related cortical fields. Such plastic changes have previously been shown to abet not only the processing of musical sounds but also of spectral and temporal aspects of speech. Here, we used EEG to measure a sample of musicians and nonmusicians while the participants were passively exposed to artificial vowels in the context of an oddball paradigm. We thereby evaluated whether increased intracerebral functional connectivity between bilateral auditory-related brain regions may promote sensory specialization in musicians, as reflected by altered cortical N1 and P2 responses. This assumption builds on the reasoning that sensory specialization depends, at least in part, on the amount of synchronization between the two auditory-related cortices. Results clearly revealed that auditory-evoked N1 responses were shaped by musical expertise. In addition, in line with our reasoning, musicians showed overall increased intracerebral functional connectivity (as indexed by lagged phase synchronization) in the theta, alpha, and beta bands. Finally, within-group correlative analyses indicated a relationship between intracerebral beta-band connectivity and cortical N1 responses, although only within the musicians' group. Taken together, we provide first electrophysiological evidence for a relationship between musical expertise, auditory-evoked brain responses, and intracerebral functional connectivity among auditory-related brain regions.
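Lagged phase synchronization is a specific measure designed to suppress zero-lag (volume-conduction) contributions; as a simpler illustration of the same family of connectivity indices, a plain phase-locking value (PLV) between two phase time series can be sketched. This is a hypothetical simplification, not the study's exact measure:

```python
import cmath

def plv(phases_a, phases_b):
    """Phase-locking value between two phase time series (radians).
    1.0 = perfectly consistent phase difference; ~0 = no consistent relation."""
    n = len(phases_a)
    mean_vec = sum(cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b)) / n
    return abs(mean_vec)

# A constant phase lag yields maximal locking:
print(round(plv([0.0, 0.5, 1.0, 1.5], [0.3, 0.8, 1.3, 1.8]), 3))  # 1.0
```

Unlike the lagged measure used in the study, plain PLV is inflated by zero-lag coupling from volume conduction, which is why source-space analyses typically prefer the lagged variant.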
7. Sella I, Reiner M, Pratt H. Natural stimuli from three coherent modalities enhance behavioral responses and electrophysiological cortical activity in humans. Int J Psychophysiol 2013; 93:45-55. [PMID: 24315926] [DOI: 10.1016/j.ijpsycho.2013.11.003]
Abstract
Cues that involve a number of sensory modalities are processed in the brain in an interactive multimodal manner rather than independently for each modality. We studied multimodal integration in a natural, yet fully controlled scene, implemented as an interactive game in an auditory-haptic-visual virtual environment. In this imitation of a natural scene, the targets of perception were ecologically valid uni-, bi-, and tri-modal manifestations of a simple event: a ball hitting a wall. Subjects were engaged in the game while their behavioral and early cortical electrophysiological responses were measured. Behavioral results confirmed that tri-modal cues were detected faster and more accurately than bi-modal cues, which likewise showed advantages over unimodal cues. Event-related potentials (ERPs) were recorded, and the first 200 ms following stimulus onset were analyzed to reveal the latencies of cortical multimodal interactions as estimated by sLORETA. These electrophysiological findings indicated bi-modal as well as tri-modal interactions beginning very early (~30 ms), uniquely for each multimodal combination. The results suggest that early cortical multimodal integration accelerates cortical activity and, in turn, enhances performance measures. This acceleration registers on the scalp as sub-additive cortical activation.
Affiliation(s)
- Irit Sella: The Virtual Reality and NeuroCognition Laboratory and Evoked Potentials Laboratory, Technion - Israel Institute of Technology, Israel
- Miriam Reiner: The Virtual Reality and NeuroCognition Laboratory, Technion - Israel Institute of Technology, Israel
- Hillel Pratt: Evoked Potentials Laboratory, Technion - Israel Institute of Technology, Israel
8. Langer N, von Bastian CC, Wirz H, Oberauer K, Jäncke L. The effects of working memory training on functional brain network efficiency. Cortex 2013; 49:2424-38. [DOI: 10.1016/j.cortex.2013.01.008]
9. Pratt H, Abbasi DAA, Bleich N, Mittelman N, Starr A. Spatiotemporal distribution of cortical processing of first and second languages in bilinguals. II. Effects of phonologic and semantic priming. Hum Brain Mapp 2012; 34:2882-98. [PMID: 22696304] [DOI: 10.1002/hbm.22109]
Abstract
This study determined the effects of phonology and semantics on the distribution of cortical activity to the second of a pair of words in first and second language (mixed pairs). The effects of relative proficiency in the two languages and linguistic setting (monolinguistic or mixed) are reported in a companion paper. Ten early bilinguals and 14 late bilinguals listened to mixed pairs of words in Arabic (L1) and Hebrew (L2) and indicated whether both words in the pair had the same or different meanings. The spatio-temporal distribution of current densities of event-related potentials was estimated for each language and according to the semantic and phonologic relationship (same or different) with the first word in the pair. During early processing (<300 ms), brain activity in temporal and temporoparietal auditory areas was enhanced by phonologic incongruence between the words in the pair, and in Wernicke's area by both phonologic and semantic priming. In contrast, brain activity during late processing (>300 ms) was enhanced by semantic incongruence between the two words, particularly in temporal areas and in left-hemisphere Broca's and Wernicke's areas. The latter differences were greater when words were in L2. Surprisingly, no significant effects of relative proficiency on processing the second word in the pair were found. These results indicate that the distribution of brain activity to the second of two words presented bilingually is affected differently during early and late processing by both semantic and phonologic priming by, and incongruence with, the immediately preceding word.
Affiliation(s)
- Hillel Pratt: Evoked Potentials Laboratory, Technion - Israel Institute of Technology, Haifa, Israel
10. Pratt H, Abbasi DAA, Bleich N, Mittelman N, Starr A. Spatiotemporal distribution of cortical processing of first and second languages in bilinguals. I. Effects of proficiency and linguistic setting. Hum Brain Mapp 2012; 34:2863-81. [PMID: 22696391] [DOI: 10.1002/hbm.22111]
Abstract
The study determined how the spatiotemporal distribution of cortical activity to words in first and second language is affected by language, proficiency, and linguistic setting. Ten early bilinguals and 14 late bilinguals listened to pairs of words presented in Arabic (L1), Hebrew (L2), or in mixed pairs and indicated whether both words had the same meaning or not. Source current densities of event-related potentials were estimated. Activity to first words in the pair lateralized to the right hemisphere and was higher to L1 than to L2 during early processing (<300 ms) in both groups, but only among late bilinguals during late processing (>300 ms). During early and late processing, activity was larger in mixed than in monolinguistic settings among early bilinguals but lower in mixed than in monolinguistic settings among late bilinguals. Late processing in auditory regions was of larger magnitude in the left than in the right hemisphere in both groups. Activity to second words in the pair was larger in mixed than in monolinguistic settings during both early and late processing in both groups. Early processing of second words in auditory regions lateralized to the right among early bilinguals and to the left among late bilinguals, whereas late processing did not differ between groups. Wernicke's area activity during late processing of L2 was larger on the right, while on the left no significant differences between languages were found. The results show that cortical language processing in bilinguals differs between early and late processing, and that these differences are modulated by linguistic proficiency and setting.
Affiliation(s)
- Hillel Pratt: Evoked Potentials Laboratory, Technion - Israel Institute of Technology, Haifa, Israel
11. Brain activity while reading words and pseudo-words: A comparison between dyslexic and fluent readers. Int J Psychophysiol 2012; 84:270-6. [PMID: 22465207] [DOI: 10.1016/j.ijpsycho.2012.03.005]
12. Elmer S, Meyer M, Jäncke L. The spatiotemporal characteristics of elementary audiovisual speech and music processing in musically untrained subjects. Int J Psychophysiol 2012; 83:259-68. [DOI: 10.1016/j.ijpsycho.2011.09.011]
13. Jäncke L, Langer N. A strong parietal hub in the small-world network of coloured-hearing synaesthetes during resting state EEG. J Neuropsychol 2012; 5:178-202. [PMID: 21923785] [DOI: 10.1111/j.1748-6653.2011.02004.x]
Abstract
We investigated whether functional brain networks differ between coloured-hearing synaesthetes and non-synaesthetes. Based on resting-state electroencephalographic (EEG) activity, graph-theoretical analysis was applied to functional connectivity data obtained from different frequency bands (theta, alpha1, alpha2, and beta) of 12 coloured-hearing synaesthetes and 13 non-synaesthetes. The analysis of functional connectivity was based on estimated intracerebral sources of brain activation using standardized low-resolution electromagnetic tomography. These intracerebral sources were subjected to graph-theoretical analysis yielding measures representing small-world network characteristics (cluster coefficients and path length). In addition, brain regions with strong interconnections were identified (so-called hubs), and the interconnectedness of these hubs was quantified using degree as a measure of connectedness. Our analysis was guided by the two-stage model proposed by Hubbard and Ramachandran (2005), in which the parietal lobe is thought to play a pivotal role in binding together the synaesthetic perceptions (hyperbinding). In addition, we hypothesized that the auditory cortex and the fusiform gyrus would qualify as strong hubs in synaesthetes. Although synaesthetes and non-synaesthetes demonstrated a similar small-world network topology, the parietal lobe turned out to be a stronger hub in synaesthetes than in non-synaesthetes, supporting the two-stage model. The auditory cortex was also identified as a strong hub in these coloured-hearing synaesthetes (for the alpha2 band). Thus, our a priori hypotheses received strong support. Several additional hubs (for which no a priori hypothesis had been formulated) differed in terms of the degree measure, with synaesthetes demonstrating stronger degree measures, indicating stronger interconnectedness. These hubs were found in brain areas known to be involved in controlling memory processes (alpha1: hippocampus and retrosplenial area), executive functions (alpha1 and alpha2: ventrolateral prefrontal cortex; theta: inferior frontal cortex), and the generation of perceptions (theta: extrastriate cortex; beta: subcentral area). Taken together, this graph-theoretical analysis of the resting-state EEG supports the two-stage model by demonstrating that the left-sided parietal lobe is a strong hub region, more strongly functionally interconnected in synaesthetes than in non-synaesthetes. The right-sided auditory cortex is also a strong hub, supporting the idea that coloured-hearing synaesthetes have a specifically organized auditory cortex. A further important point is that these hub regions operate differently even at rest, supporting the idea that these hub characteristics are predetermining factors of coloured-hearing synaesthesia.
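The graph measures this abstract relies on (clustering coefficient, characteristic path length, and degree-based hub identification) can be sketched for a toy undirected graph. This is a minimal pure-Python illustration of the metrics themselves, not the study's EEG pipeline; the example graph and node names are invented.

```python
from collections import deque

def clustering(adj, node):
    """Local clustering coefficient: fraction of neighbour pairs that are linked."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
    return 2.0 * links / (k * (k - 1))

def bfs_dists(adj, src):
    """Hop distances from src via breadth-first search."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def small_world_summary(adj):
    """Mean clustering coefficient, characteristic path length, and top hub by degree."""
    nodes = list(adj)
    c = sum(clustering(adj, n) for n in nodes) / len(nodes)
    total = pairs = 0
    for n in nodes:
        d = bfs_dists(adj, n)
        total += sum(d.values())
        pairs += len(d) - 1
    hub = max(nodes, key=lambda n: len(adj[n]))
    return c, total / pairs, hub

# Toy undirected graph standing in for four connected regions:
adj = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b", "d"}, "d": {"a", "c"}}
c, l, hub = small_world_summary(adj)
```

A small-world network combines a high mean clustering coefficient with a short characteristic path length; a hub is simply a node whose degree (here, neighbour count) stands out, which is the sense in which the parietal lobe is called a "stronger hub" above.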
Affiliation(s)
- Lutz Jäncke: Division of Neuropsychology, Psychological Institute, University of Zurich, Switzerland
14. Hiscock M, Kinsbourne M. Attention and the right-ear advantage: What is the connection? Brain Cogn 2011; 76:263-75. [DOI: 10.1016/j.bandc.2011.03.016]
15. Langer N, Pedroni A, Gianotti LRR, Hänggi J, Knoch D, Jäncke L. Functional brain network efficiency predicts intelligence. Hum Brain Mapp 2011; 33:1393-406. [PMID: 21557387] [DOI: 10.1002/hbm.21297]
Abstract
The neuronal causes of individual differences in mental abilities such as intelligence are complex and profoundly important. Understanding these abilities has the potential to facilitate their enhancement. The purpose of this study was to identify functional brain network characteristics and their relation to psychometric intelligence. In particular, we examined whether the functional network exhibits efficient small-world network attributes (high clustering and short path length) and whether these small-world network parameters are associated with intellectual performance. High-density resting-state electroencephalography (EEG) was recorded in 74 healthy subjects to analyze graph-theoretical functional network characteristics at an intracortical level. Raven's Advanced Progressive Matrices were used to assess intelligence. We found that the clustering coefficient and path length of the functional network are strongly related to intelligence: the more intelligent the subjects, the more the functional brain network resembles a small-world network. We further identified the parietal cortex as a main hub of this resting-state network, as indicated by increased degree centrality associated with higher intelligence. Taken together, this is the first study to substantiate the neural efficiency hypothesis as well as the Parieto-Frontal Integration Theory (P-FIT) of intelligence in the context of functional brain network characteristics; these are currently the most established intelligence theories in neuroscience. Our findings reveal robust evidence of an efficiently organized resting-state functional brain network supporting highly productive cognition.
Affiliation(s)
- Nicolas Langer: Division of Neuropsychology, Institute of Psychology, University of Zurich, Zurich 8050, Switzerland
16. Hugdahl K. Hemispheric asymmetry: contributions from brain imaging. Wiley Interdiscip Rev Cogn Sci 2010; 2:461-478. [PMID: 26302300] [DOI: 10.1002/wcs.122]
Abstract
A series of studies using functional and structural magnetic resonance imaging, including diffusion tensor imaging, to elucidate aspects of hemispheric asymmetry is reviewed. It is suggested that laterality evolved in response to the demands of language and the need for air-based communication, which may have necessitated a division of labor between the hemispheres in order to avoid duplicate copies in both hemispheres that would increase processing redundancy. This would have put pressure on brain structures related to the evolution of language and speech, such as the left peri-Sylvian region. MRI data are provided showing structural and functional asymmetry in this region of the brain and how fibers connecting the right and left peri-Sylvian regions pass through the corpus callosum. It is further suggested that the so-called Yakovlevian torque, i.e., the twisting of the brain along the longitudinal axis, with the right frontal and left occipital poles protruding beyond the corresponding left and right sides, was necessary for the expansion of the left peri-Sylvian region and the right occipito-parietal regions subserving the processing of spatial relations. Functional magnetic resonance imaging data on sex differences in visuo-spatial processing are presented, showing enhanced right-sided activation in posterior parts of the brain in both sexes, and frontal activation including Broca's area in the female group only, suggesting that males and females use different strategies when solving a cognitive task. The paper ends with a discussion of the role of the corpus callosum in laterality and the role played by structural asymmetry in understanding corresponding functional asymmetry.
Affiliation(s)
- Kenneth Hugdahl: Department of Biological and Medical Psychology, University of Bergen, N-5020 Bergen, Norway; Division of Psychiatry, Haukeland University Hospital, 5053 Bergen, Norway
17. Bayer U, Hausmann M. Hormone therapy in postmenopausal women affects hemispheric asymmetries in fine motor coordination. Horm Behav 2010; 58:450-6. [PMID: 20580722] [DOI: 10.1016/j.yhbeh.2010.05.008]
Abstract
Evidence exists that the functional differences between the left and right cerebral hemispheres are affected by age. One prominent hypothesis proposes that frontal activity during cognitive task performance tends to be less lateralized in older than in younger adults, a pattern that has also been reported for motor functioning. Moreover, functional cerebral asymmetries (FCAs) have been shown to be affected by sex-hormonal manipulations via hormone therapy (HT) in older women. Here, we investigated whether FCAs in fine motor coordination, as reflected by manual asymmetries (MAs), are susceptible to HT in older women. Sixty-two postmenopausal women were tested: they received either estrogen (E) alone (n=15), an E-gestagen combination (n=21), or no HT (control group, n=26). Saliva levels of free estradiol and progesterone (P) were analyzed using chemiluminescence assays. MAs were measured with a finger-tapping paradigm consisting of two different tapping conditions. As expected, postmenopausal controls without HT showed reduced MAs in simple (repetitive) finger tapping. In a more demanding sequential condition involving four fingers, however, they revealed enhanced MAs in favour of the dominant hand. This finding suggests an insufficient recruitment of critical motor brain areas (especially when the nondominant hand is used), probably as a result of age-related changes in corticocortical connectivity between motor areas. In contrast, both HT groups revealed reduced MAs in sequential finger tapping but an asymmetrical tapping performance related to estradiol levels in simple finger tapping. A similar pattern has previously been found in younger participants. The results suggest that HT, and E exposure in particular, exerts positive effects on the motor system, thereby counteracting an age-related reorganization.
Affiliation(s)
- Ulrike Bayer: Department of Psychology, Durham University, Durham, UK
18. Pratt H, Starr A, Michalewski HJ, Dimitrijevic A, Bleich N, Mittelman N. A comparison of auditory evoked potentials to acoustic beats and to binaural beats. Hear Res 2010; 262:34-44. [PMID: 20123120] [DOI: 10.1016/j.heares.2010.01.013]
Abstract
The purpose of this study was to compare cortical brain responses evoked by amplitude-modulated acoustic beats of 3 and 6 Hz in tones of 250 and 1000 Hz with those evoked by their binaural-beat counterparts in unmodulated tones, to indicate whether the cortical processes involved differ. Event-related potentials (ERPs) were recorded to 3- and 6-Hz acoustic and binaural beats in 2000-ms 250 and 1000 Hz tones presented at approximately 1-s intervals. Latency, amplitude, and source current density estimates of ERP components to beats-evoked oscillations were determined and compared across beat types, beat frequencies, and base (carrier) frequencies. All stimuli evoked tone-onset components followed by oscillations corresponding to the beat frequency, and a subsequent tone-offset complex. Beats-evoked oscillations were higher in amplitude in response to acoustic than to binaural beats, to the 250 Hz than the 1000 Hz base frequency, and to the 3-Hz than the 6-Hz beat frequency. Sources of the beats-evoked oscillations across all stimulus conditions localized mostly to left temporal lobe areas. Differences between estimated sources of potentials to acoustic and binaural beats were not significant. The perception of binaural beats thus involves cortical activity that does not differ from that to acoustic beats in distribution or in the effects of beat and base frequency, indicating similar cortical processing.
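The distinction between the two stimulus classes can be illustrated with a minimal synthesis sketch. The sample rate, amplitudes, and mixing are illustrative assumptions, not the study's actual stimulus construction: an acoustic beat mixes two nearby frequencies in one channel, producing physical amplitude modulation, whereas a binaural beat delivers one unmodulated tone to each ear so the beat percept arises centrally.

```python
import math

FS = 8000  # sample rate in Hz (illustrative)

def tone(freq_hz, dur_s, fs=FS):
    """Pure sine tone as a list of samples."""
    return [math.sin(2 * math.pi * freq_hz * n / fs) for n in range(int(dur_s * fs))]

def acoustic_beat(base_hz, beat_hz, dur_s):
    """Two nearby tones mixed in one channel -> physically amplitude-modulated beat."""
    a, b = tone(base_hz, dur_s), tone(base_hz + beat_hz, dur_s)
    return [(x + y) / 2.0 for x, y in zip(a, b)]

def binaural_beat(base_hz, beat_hz, dur_s):
    """One unmodulated tone per ear; the beat percept arises centrally."""
    return tone(base_hz, dur_s), tone(base_hz + beat_hz, dur_s)

mono = acoustic_beat(250, 3, 1.0)        # 3-Hz acoustic beat on a 250-Hz base
left, right = binaural_beat(250, 3, 1.0)  # 250 Hz left ear, 253 Hz right ear
```

In the acoustic case the 3-Hz envelope is physically present in the waveform; in the binaural case each ear's waveform is a constant-amplitude sine, so any beat-rate response must be generated by binaural interaction in the auditory pathway.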
Affiliation(s)
- Hillel Pratt: Evoked Potentials Laboratory, Behavioral Biology, Technion - Israel Institute of Technology, Haifa 32000, Israel
|
19
|
Ventouras EM, Ktonas PY, Tsekou H, Paparrigopoulos T, Kalatzis I, Soldatos CR. Independent component analysis for source localization of EEG sleep spindle components. Comput Intell Neurosci 2010; 2010:329436. [PMID: 20369057] [PMCID: PMC2847376] [DOI: 10.1155/2010/329436]
Abstract
Sleep spindles are bursts of quasi-rhythmic sleep electroencephalogram (EEG) activity within the 11-16 Hz frequency band, characterized by progressively increasing, then gradually decreasing amplitude. The purpose of the present study was to process sleep spindles with Independent Component Analysis (ICA) in order to investigate whether spindle "components" (SCs) corresponding to separate EEG activity patterns during a spindle can be extracted through visual analysis of the spindle EEG and visual selection of Independent Components (ICs), and to investigate the intracranial current sources underlying these SCs. Current source analysis using Low-Resolution Brain Electromagnetic Tomography (LORETA) was applied to the original and the ICA-reconstructed EEGs. Results indicated that SCs can be extracted by reconstructing the EEG through back-projection of separate groups of ICs, based on a temporal and spectral analysis of ICs. The intracranial current sources related to the SCs were found to be spatially stable during the time evolution of the sleep spindles.
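The reconstruction step this abstract describes — unmixing the EEG into ICs, keeping a visually selected subset, and back-projecting it to the channels — can be sketched with a generic ICA implementation. This is a minimal illustration on synthetic data; the channel count, source waveforms, and selected-IC index are invented for the example and are not the study's montage or selection criteria:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 500)

# Two synthetic sources: a 13 Hz spindle-like burst and slow background activity.
s1 = np.sin(2 * np.pi * 13 * t) * np.exp(-((t - 1) ** 2) / 0.05)
s2 = np.sin(2 * np.pi * 1.5 * t)
S = np.c_[s1, s2]

# Mix the sources into 4 "EEG channels" with a random forward model.
A = rng.normal(size=(4, 2))
X = S @ A.T

# Unmix with ICA, then back-project only the ICs of interest.
ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(X)              # estimated ICs, shape (n_samples, 2)

keep = [0]                                  # visually selected IC indices (illustrative)
sources_kept = np.zeros_like(sources)
sources_kept[:, keep] = sources[:, keep]
X_sc = ica.inverse_transform(sources_kept)  # EEG reconstructed from the kept ICs only
```

The reconstructed `X_sc` is the spindle "component" signal on which source analysis (LORETA in the study) would then be run.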
Affiliation(s)
- Erricos M. Ventouras, Department of Medical Instrumentation Technology, Technological Educational Institution of Athens, Ag. Spyridonos Street, Egaleo, 12210 Athens, Greece
- Periklis Y. Ktonas, Sleep Research Unit, Eginition Hospital, Department of Psychiatry, University of Athens, 74 Vas. Sophias Avenue, 11528 Athens, Greece
- Hara Tsekou, Sleep Research Unit, Eginition Hospital, Department of Psychiatry, University of Athens, 74 Vas. Sophias Avenue, 11528 Athens, Greece
- Thomas Paparrigopoulos, Sleep Research Unit, Eginition Hospital, Department of Psychiatry, University of Athens, 74 Vas. Sophias Avenue, 11528 Athens, Greece
- Ioannis Kalatzis, Department of Medical Instrumentation Technology, Technological Educational Institution of Athens, Ag. Spyridonos Street, Egaleo, 12210 Athens, Greece
- Constantin R. Soldatos, Sleep Research Unit, Eginition Hospital, Department of Psychiatry, University of Athens, 74 Vas. Sophias Avenue, 11528 Athens, Greece

20
Pratt H, Starr A, Michalewski HJ, Dimitrijevic A, Bleich N, Mittelman N. Cortical evoked potentials to an auditory illusion: binaural beats. Clin Neurophysiol 2009; 120:1514-24. [PMID: 19616993] [DOI: 10.1016/j.clinph.2009.06.014]
Abstract
OBJECTIVE To define brain activity corresponding to an auditory illusion of 3 and 6 Hz binaural beats in 250 Hz or 1000 Hz base frequencies, and to compare it to the sound-onset response. METHODS Event-related potentials (ERPs) were recorded in response to an unmodulated tone of 250 or 1000 Hz to one ear and a tone 3 or 6 Hz higher to the other, creating an illusion of amplitude modulations (beats) of 3 Hz and 6 Hz in base frequencies of 250 Hz and 1000 Hz. Tones were 2000 ms in duration and presented at approximately 1-s intervals. Latency, amplitude and source current density estimates of ERP components to tone onset and to the subsequent beats-evoked oscillations were determined and compared across beat frequencies with both base frequencies. RESULTS All stimuli evoked tone-onset P50, N100 and P200 components followed by oscillations corresponding to the beat frequency, and a subsequent tone-offset complex. Beats-evoked oscillations were higher in amplitude with the low base frequency and with the low beat frequency. Sources of the beats-evoked oscillations located mostly to left lateral and inferior temporal lobe areas in all stimulus conditions. Onset-evoked components did not differ across stimulus conditions; P50 had significantly different sources than the beats-evoked oscillations; and N100 and P200 sources located to the same temporal lobe regions as the beats-evoked oscillations, but were bilateral and also included frontal and parietal contributions. CONCLUSIONS Neural activity with slightly different volley frequencies from the left and right ear converges and interacts in the central auditory brainstem pathways to generate beats of neural activity that modulate activity in the left temporal lobe, giving rise to the illusion of binaural beats. Cortical potentials recorded to binaural beats are distinct from onset responses. SIGNIFICANCE Brain activity corresponding to an auditory illusion of low-frequency beats can be recorded from the scalp.
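The dichotic stimulus construction in METHODS — an unmodulated tone to one ear and a tone offset by the beat frequency to the other — takes only a few lines to reproduce. A minimal sketch (the sample rate is chosen for illustration; durations and frequencies follow the study), with the physical acoustic-beat counterpart from the companion comparison for contrast:

```python
import numpy as np

fs = 44100          # audio sample rate in Hz (illustrative)
dur = 2.0           # tone duration in seconds, as in the study
base, beat = 250.0, 3.0

t = np.arange(int(fs * dur)) / fs

# Binaural beat: one pure tone per ear, offset by the beat frequency.
left = np.sin(2 * np.pi * base * t)
right = np.sin(2 * np.pi * (base + beat) * t)
stereo = np.stack([left, right], axis=1)   # shape (n_samples, 2): [left, right]

# Acoustic-beat counterpart: the same two tones mixed into BOTH ears,
# producing a physical amplitude modulation at the beat frequency.
mono = left + right
```

Played diotically, `mono` beats physically at 3 Hz; played dichotically, `stereo` contains no physical modulation in either channel, so the perceived 3 Hz beat arises only from binaural interaction in the central auditory pathway.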
Affiliation(s)
- Hillel Pratt, Evoked Potentials Laboratory, Behavioral Biology, Technion - Israel Institute of Technology, Haifa, Israel.

21
Pratt H, Starr A, Michalewski HJ, Dimitrijevic A, Bleich N, Mittelman N. Auditory-evoked potentials to frequency increase and decrease of high- and low-frequency tones. Clin Neurophysiol 2009; 120:360-73. [DOI: 10.1016/j.clinph.2008.10.158]
22
Perceptual visual grouping under inattention: Electrophysiological functional imaging. Brain Cogn 2008; 67:183-96. [DOI: 10.1016/j.bandc.2008.01.005]
23
Giraud K, Trébuchon-DaFonseca A, Démonet J, Habib M, Liégeois-Chauvel C. Asymmetry of voice onset time-processing in adult developmental dyslexics. Clin Neurophysiol 2008; 119:1652-63. [DOI: 10.1016/j.clinph.2008.02.017]
24
Zaehle T, Jancke L, Meyer M. Electrical brain imaging evidences left auditory cortex involvement in speech and non-speech discrimination based on temporal features. Behav Brain Funct 2007; 3:63. [PMID: 18070338] [PMCID: PMC2231369] [DOI: 10.1186/1744-9081-3-63]
Abstract
BACKGROUND Speech perception is based on a variety of spectral and temporal acoustic features available in the acoustic signal. Voice-onset time (VOT) is considered an important cue that is cardinal for phonetic perception. METHODS In the present study, we recorded and compared scalp auditory evoked potentials (AEP) in response to consonant-vowel syllables (CV) with varying voice-onset times (VOT) and non-speech analogues with varying noise-onset times (NOT). In particular, we aimed to investigate the spatio-temporal pattern of acoustic feature processing underlying elemental speech perception and to relate this temporal processing mechanism to specific activations of the auditory cortex. RESULTS Results show that the characteristic AEP waveform in response to consonant-vowel syllables is on a par with that of non-speech sounds with analogous temporal characteristics. The amplitudes of the N1a and N1b components of the auditory evoked potentials correlated significantly with the duration of the VOT in CV syllables and, likewise, with the duration of the NOT in non-speech sounds. Furthermore, current density maps indicate overlapping supratemporal networks involved in the perception of both speech and non-speech sounds, with a bilateral activation pattern during the N1a time window and a leftward asymmetry during the N1b time window. Elaborate regional statistical analysis of the activation over the middle and posterior portions of the supratemporal plane (STP) revealed strongly left-lateralized responses over the middle STP for both the N1a and N1b components, and a functional leftward asymmetry over the posterior STP for the N1b component. CONCLUSION The present data demonstrate overlapping spatio-temporal brain responses during the perception of temporal acoustic cues in both speech and non-speech sounds. Source estimation evidences a preponderant role of the left middle and posterior auditory cortex in speech and non-speech discrimination based on temporal features. Therefore, in congruency with recent fMRI studies, we suggest that similar mechanisms underlie the perception of linguistically different but acoustically equivalent auditory events at the level of basic auditory analysis.
Affiliation(s)
- Tino Zaehle, Department of Neuropsychology, University of Zurich, 8050 Zurich, Switzerland.

25
Pratt H, Starr A, Michalewski HJ, Bleich N, Mittelman N. The auditory P50 component to onset and offset of sound. Clin Neurophysiol 2007; 119:376-87. [PMID: 18055255] [DOI: 10.1016/j.clinph.2007.10.016]
Abstract
OBJECTIVE The auditory event-related potential (ERP) component P50 to sound onset and offset has been reported to be similar, but its magnetic homologue has been reported absent to sound offset. We compared the spatio-temporal distribution of cortical activity during P50 to sound onset and offset, without confounds of spectral change. METHODS ERPs were recorded in response to onsets and offsets of 0.5-s silent intervals (gaps) appearing randomly in otherwise continuous white noise, and compared to ERPs to randomly distributed click pairs with half-second separation presented in silence. Subjects were awake and distracted from the stimuli by reading a complicated text. Measures of P50 included peak latency and amplitude, as well as source current density estimates to the clicks and to sound onsets and offsets. RESULTS P50 occurred in response to noise onsets and to clicks, whereas to noise offsets it was absent. Latency of P50 was similar to noise onset (56 ms) and to clicks (53 ms). Sources of P50 to noise onsets and clicks included bilateral superior parietal areas. In contrast, noise offsets activated left inferior temporal and occipital areas at the time of P50. Source current density was significantly higher to noise onset than offset in the vicinity of the temporo-parietal junction. CONCLUSIONS In contrast to the distinct P50 to sound onset and to clicks, P50 to sound offset is absent, and onsets and offsets engage different intracranial sources. P50 to stimulus onset and to clicks appears to reflect preattentive arousal by a new sound in the scene. Sound offset does not introduce a new sound, hence the absent P50. SIGNIFICANCE Stimulus onset activates distinct early cortical processes that are absent at offset.
Affiliation(s)
- Hillel Pratt, Evoked Potentials Laboratory, Behavioral Biology, Gutwirth Building, Technion-Israel Institute of Technology, Haifa 32000, Israel.

26
Hunter MD, Lee KH, Tandon P, Parks RW, Wilkinson ID, Woodruff PWR. Lateral response dynamics and hemispheric dominance for speech perception. Neuroreport 2007; 18:1295-9. [PMID: 17632286] [DOI: 10.1097/wnr.0b013e32827420e4]
Abstract
In this study, we investigated the mechanism for the left cerebral hemisphere's dominance for speech perception. We utilized the crossover of auditory pathways in the central nervous system to present speech stimuli more directly to the left hemisphere (via the right ear) and right hemisphere (via the left ear). Using functional MRI, we found that estimated duration of neural response in the left auditory cortex increased as more speech information was directly received from the right ear. Conversely, response duration in the right auditory cortex was not modulated when more speech information was directly received from the left ear. These data suggest that selective temporal responding distinguishes the dominant from nondominant hemisphere of the human brain during speech perception.
Affiliation(s)
- Michael D Hunter, Sheffield Cognition and Neuroimaging Laboratory, Academic Clinical Psychiatry, University of Sheffield, UK.

27
Horev N, Most T, Pratt H. Categorical Perception of Speech (VOT) and Analogous Non-Speech (FOT) signals: Behavioral and electrophysiological correlates. Ear Hear 2007; 28:111-28. [PMID: 17204903] [DOI: 10.1097/01.aud.0000250021.69163.96]
Abstract
OBJECTIVE To determine whether voicing perception is shaped primarily by linguistic experience or by innate temporal sensitivity to voicing boundaries, by examining behavioral and electrophysiological correlates of categorical perception of speech voice-onset time (VOT) and non-speech formant-onset time (FOT). DESIGN Behavioral measures and auditory event-related potentials (ERPs) were obtained from 14 normal-hearing Hebrew speakers, whose voicing distinction differs from that of English, during identification and discrimination of two sets of stimuli: a VOT continuum, created by editing natural productions of /ba/ and /pa/, and an analogous non-speech continuum, composed of two synthesized formants varying in their onset time (FOT). RESULTS The VOT and FOT continua yielded similar behavioral identification curves. Differences between the two stimulus types were found in discrimination of within-category differences and in reaction-time effects. During identification and discrimination tasks, ERPs were differently affected by the VOT or FOT value of the stimulus: VOT had a significant effect on N1 latency and on N1 and P2 amplitudes, whereas FOT had a significant effect on P2 amplitude. Additionally, during identification tasks, all speech signals evoked a P3 regardless of overt categorization, whereas only the perceptually "rare" non-speech stimulus (+15 msec FOT) evoked a P3. CONCLUSIONS Voicing boundaries corresponded to Hebrew VOT values of production, suggesting that voicing perception in Hebrew is mediated mainly by linguistic experience rather than by innate temporal sensitivity. ERP responses to VOT versus FOT stimuli differed as early as N1, indicating that brain processing of the temporal aspects of speech and non-speech signals differs from the early stages onward. Further studies to establish the neural response patterns to voicing in speakers of languages that use different voicing categories than English are warranted.
Affiliation(s)
- Nitza Horev, Evoked Potentials Laboratory, Technion-Israel Institute of Technology, Haifa, Israel

28
Arzouan Y, Goldstein A, Faust M. Dynamics of hemispheric activity during metaphor comprehension: Electrophysiological measures. Neuroimage 2007; 36:222-31. [PMID: 17428685] [DOI: 10.1016/j.neuroimage.2007.02.015]
Abstract
Brain imaging studies have led to conflicting findings regarding the involvement of the right hemisphere (RH) in metaphor comprehension. Some report more relative RH activation when processing figurative expressions, but others have shown just the opposite. The inconsistencies might result from the low temporal resolution of current brain imaging techniques, which is insufficient to uncover patterns of hemispheric interaction that change over time. Event-related potentials and a source estimation technique (LORETA) were used to investigate such temporal interactions when processing two-word expressions denoting literal, conventional metaphoric, and novel metaphoric meaning, as well as unrelated word pairs. Participants performed a semantic judgment task in which they decided whether each word pair conveyed a meaningful expression. Our findings indicate that during comprehension of novel metaphors there are stages of considerable RH involvement, mainly of the temporal and superior frontal areas. Although the processing mechanisms used for all types of expressions were similar and required both hemispheres, the relative contribution of each hemisphere at specific processing stages depended on stimulus type. Those stages correspond roughly to the N400 and LPC components, which reflect semantic and contextual integration, respectively. The present study demonstrates that RH mechanisms are necessary, but not sufficient, for understanding metaphoric expressions. Both hemispheres work in concert in a complex dynamical pattern during literal and figurative language comprehension. Electrophysiological recordings together with source localization algorithms such as LORETA are a viable tool for measuring this type of activity pattern.
Affiliation(s)
- Yossi Arzouan, Gonda Brain Research Center, Bar-Ilan University, Israel

29
Praeg E, Esslen M, Lutz K, Jancke L. Neuronal Modifications During Visuomotor Association Learning Assessed by Electric Brain Tomography. Brain Topogr 2006; 19:61-75. [PMID: 17136595] [DOI: 10.1007/s10548-006-0013-y]
Abstract
In everyday life, specific situations call for specific reactions. Through repetitive practice, such stimulus-response associations can be learned and performed automatically. The aim of the present EEG study was to characterize learning-dependent modifications in neuronal pathways during short-term practice of visuomotor associations. Participants performed a visuomotor association task comprising four visual stimuli that had to be associated with four keys, learned by trial and error. We assumed that distinct cognitive processes, e.g., visual perception and decision making, might dominate during early learning, whereas advanced learning might be indicated by increased neuronal activation in integration- and memory-related regions. To assess learning progress, visual- and movement-related brain potentials were measured and compared between three learning stages (early, intermediate, and late). The results revealed significant differences between the learning stages during distinct time intervals. Related to visual stimulus presentation, Low-Resolution Electromagnetic Brain Tomography (LORETA) revealed strong neuronal activation in a parieto-prefrontal network in time intervals between 100-400 ms post event during early learning. In relation to the motor response, neuronal activation was significantly increased during intermediate compared to early learning. Prior to the motor response (120-360 ms pre event), neuronal activation was detected in the cingulate motor area and the right dorsal premotor cortex. Subsequent to the motor response (68-430 ms post event), there was an increase in neuronal activation in visuomotor- and memory-related areas including parietal cortex, SMA, premotor, dorsolateral prefrontal, and parahippocampal cortex. The present study has shown specific temporal elements of a visuomotor-memory-related network that might support learning progress during visuomotor association learning.
Affiliation(s)
- Elke Praeg, Department of Neuropsychology, Institute of Psychology, University of Zurich, Treichlerstrasse 10, CH-8032 Zurich, Switzerland.

30
Meyer M, Baumann S, Jancke L. Electrical brain imaging reveals spatio-temporal dynamics of timbre perception in humans. Neuroimage 2006; 32:1510-23. [PMID: 16798014] [DOI: 10.1016/j.neuroimage.2006.04.193]
Abstract
Timbre is a major attribute of sound perception and a key feature for the identification of sound quality. Here, we present event-related brain potentials (ERPs) obtained from sixteen healthy individuals while they discriminated complex instrumental tones (piano, trumpet, and violin) or simple sine wave tones that lack the principal features of timbre. Data analysis yielded enhanced N1 and P2 responses to instrumental tones relative to sine wave tones. Furthermore, we applied an electrical brain imaging approach using low-resolution electromagnetic tomography (LORETA) to estimate the neural sources of N1/P2 responses. Separate significance tests of instrumental vs. sine wave tones for N1 and P2 revealed distinct regions as principally governing timbre perception. In an initial stage (N1), timbre perception recruits left and right (peri-)auditory fields with an activity maximum over the right posterior Sylvian fissure (SF) and the posterior cingulate (PCC) territory. In the subsequent stage (P2), we uncovered enhanced activity in the vicinity of the entire cingulate gyrus. The involvement of extra-auditory areas in timbre perception may imply the presence of a highly associative processing level which might be generally related to musical sensations and integrates widespread medial areas of the human cortex. In summary, our results demonstrate spatio-temporally distinct stages in timbre perception which not only involve bilateral parts of the peri-auditory cortex but also medially situated regions of the human brain associated with emotional and auditory imagery functions.
Affiliation(s)
- Martin Meyer, Department of Neuropsychology, University of Zurich, Treichlerstrasse 10, CH-8032 Zurich, Switzerland.

31
Behne N, Wendt B, Scheich H, Brechmann A. Contralateral White Noise Selectively Changes Left Human Auditory Cortex Activity in a Lexical Decision Task. J Neurophysiol 2006; 95:2630-7. [PMID: 16436478] [DOI: 10.1152/jn.01201.2005]
Abstract
In a previous study, we hypothesized that the approach of presenting information-bearing stimuli to one ear and noise to the other ear may be a general strategy to determine hemispheric specialization in auditory cortex (AC). In that study, we confirmed the dominant role of the right AC in directional categorization of frequency modulations by showing that fMRI activation of right but not left AC was sharply emphasized when masking noise was presented to the contralateral ear. Here, we tested this hypothesis using a lexical decision task supposed to be mainly processed in the left hemisphere. Subjects had to distinguish between pseudowords and natural words presented monaurally to the left or right ear either with or without white noise to the other ear. According to our hypothesis, we expected a strong effect of contralateral noise on fMRI activity in left AC. For the control conditions without noise, we found that activation in both auditory cortices was stronger on contralateral than on ipsilateral word stimulation consistent with a more influential contralateral than ipsilateral auditory pathway. Additional presentation of contralateral noise did not significantly change activation in right AC, whereas it led to a significant increase of activation in left AC compared with the condition without noise. This is consistent with a left hemispheric specialization for lexical decisions. Thus our results support the hypothesis that activation by ipsilateral information-bearing stimuli is upregulated mainly in the hemisphere specialized for a given task when noise is presented to the more influential contralateral ear.
Affiliation(s)
- Nicole Behne, Leibniz Institute for Neurobiology, Magdeburg, Germany.

32
Babiloni C, Binetti G, Cassarino A, Dal Forno G, Del Percio C, Ferreri F, Ferri R, Frisoni G, Galderisi S, Hirata K, Lanuzza B, Miniussi C, Mucci A, Nobili F, Rodriguez G, Luca Romani G, Rossini PM. Sources of cortical rhythms in adults during physiological aging: a multicentric EEG study. Hum Brain Mapp 2006; 27:162-72. [PMID: 16108018] [PMCID: PMC6871339] [DOI: 10.1002/hbm.20175]
Abstract
This electroencephalographic (EEG) study tested whether cortical EEG rhythms (especially delta and alpha) show a progressively increasing or decreasing trend across physiological aging. To this aim, we analyzed the type of correlation (linear and nonlinear) between cortical EEG rhythms and age. Resting eyes-closed EEG data were recorded in 108 young (Nyoung; age range: 18-50 years, mean age 27.3 ± 7.3 SD) and 107 elderly (Nold; age range: 51-85 years, mean age 67.3 ± 9.2 SD) subjects. The EEG rhythms of interest were delta (2-4 Hz), theta (4-8 Hz), alpha 1 (8-10.5 Hz), alpha 2 (10.5-13 Hz), beta 1 (13-20 Hz), and beta 2 (20-30 Hz). EEG cortical sources were estimated by low-resolution brain electromagnetic tomography (LORETA). Statistical results showed that delta sources in the occipital area had significantly lower magnitude in Nold than in Nyoung subjects. Similarly, alpha 1 and alpha 2 sources in the parietal, occipital, temporal, and limbic areas had significantly lower magnitude in Nold than in Nyoung subjects. These nine EEG sources were given as input for evaluating the type (linear, exponential, logarithmic, and power) of correlation with age. When subjects were considered as a single group, there was a significant linear correlation of age with the magnitude of delta sources in the occipital area and of alpha 1 sources in occipital and limbic areas. The same was true for alpha 2 sources in the parietal, occipital, temporal, and limbic areas. In general, the EEG sources showing significant linear correlation with age also supported a nonlinear correlation with age. These results suggest that the occipital delta and posterior cortical alpha rhythms decrease in magnitude during physiological aging with both linear and nonlinear trends. In conclusion, this new methodological approach holds promise for the prediction of dementia in mild cognitive impairment by regional source rather than surface EEG data and by both linear and nonlinear predictors.
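The band decomposition this study uses (delta 2-4 Hz through beta 2 20-30 Hz) can be applied to any raw trace with a standard PSD estimate. The sketch below uses synthetic data with a dominant 10 Hz rhythm, so the alpha 1 band carries the most power; the sampling rate and signal are illustrative, and this is plain Welch band power on a single channel, not the study's LORETA source magnitudes:

```python
import numpy as np
from scipy.signal import welch

fs = 256                       # sampling rate in Hz (illustrative)
rng = np.random.default_rng(1)
t = np.arange(fs * 60) / fs    # one minute of synthetic "EEG"

# Eyes-closed-like trace: dominant 10 Hz alpha rhythm plus broadband noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

# Frequency bands as defined in the study.
bands = {"delta": (2, 4), "theta": (4, 8), "alpha1": (8, 10.5),
         "alpha2": (10.5, 13), "beta1": (13, 20), "beta2": (20, 30)}

f, psd = welch(eeg, fs=fs, nperseg=2 * fs)   # 0.5 Hz frequency resolution

# Sum the PSD over each band (relative comparison, so a plain sum suffices).
power = {name: psd[(f >= lo) & (f < hi)].sum()
         for name, (lo, hi) in bands.items()}

dominant = max(power, key=power.get)
print(dominant)  # the 10 Hz component falls in alpha1
```

With real multi-channel data the same per-band powers would be computed per source or per electrode before correlating them with age.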
Affiliation(s)
- Claudio Babiloni, Dipartimento di Fisiologia Umana e Farmacologia, Università La Sapienza, Rome, Italy.

33
Sittiprapaporn W, Tervaniemi M, Chindaduangratn C, Kotchabhakdi N. Preattentive discrimination of across-category and within-category change in consonant–vowel syllable. Neuroreport 2005; 16:1513-8. [PMID: 16110281] [DOI: 10.1097/01.wnr.0000175618.46677.07]
Abstract
Event-related potentials to infrequently presented spoken deviant syllables /pi/ and /po/ among repetitive standard [see text] syllables were recorded in Thai participants who ignored these stimuli while reading books of their choice. The vowel across-category and within-category changes elicited a change-specific mismatch negativity response. Discrimination of across-category and within-category vowel changes in consonant-vowel syllables was also assessed using low-resolution electromagnetic tomography (LORETA). The results of the LORETA mismatch negativity generator analysis suggest that within-category vowel changes are analyzed as changes in the physical features of the stimuli, predominantly activating the right temporal cortex. In contrast, the left temporal cortex is predominantly activated by across-category vowel changes, emphasizing the role of the left hemisphere in speech processing at a preattentive processing level even for consonant-vowel syllables. The results support the hypothesis that a part of the superior temporal gyrus contains neurons specialized for speech perception.
Affiliation(s)
- Wichian Sittiprapaporn, Neuro-Behavioural Biology Center, Institute of Science and Technology for Research and Development, Mahidol University, Salaya, Nakhonpathom, Thailand.

34
Bunzeck N, Wuestenberg T, Lutz K, Heinze HJ, Jancke L. Scanning silence: mental imagery of complex sounds. Neuroimage 2005; 26:1119-27. [PMID: 15893474] [DOI: 10.1016/j.neuroimage.2005.03.013]
Abstract
In this functional magnetic resonance imaging (fMRI) study, we investigated the neural basis of mental auditory imagery of familiar complex sounds that did not contain language or music. In the first condition (perception), the subjects watched familiar scenes and listened to the corresponding sounds that were presented simultaneously. In the second condition (imagery), the same scenes were presented silently and the subjects had to mentally imagine the appropriate sounds. During the third condition (control), the participants watched a scrambled version of the scenes without sound. To overcome the disadvantages of stray acoustic scanner noise in auditory fMRI experiments, we applied a sparse temporal sampling technique with five functional clusters acquired at the end of each movie presentation. Compared to the control condition, we found bilateral activations in the primary and secondary auditory cortices (including Heschl's gyrus and planum temporale) during perception of complex sounds. In contrast, the imagery condition elicited bilateral hemodynamic responses only in the secondary auditory cortex (including the planum temporale). No significant activity was observed in the primary auditory cortex. The results show that imagery and perception of complex sounds that do not contain language or music rely on overlapping neural correlates of the secondary but not primary auditory cortex.
Collapse
Affiliation(s)
- Nico Bunzeck
- Department of Neurology II, Otto von Guericke University, Leipziger Street 44, Magdeburg 39120, Germany.
35
Eichele T, Nordby H, Rimol LM, Hugdahl K. Asymmetry of evoked potential latency to speech sounds predicts the ear advantage in dichotic listening. Cogn Brain Res 2005; 24:405-12. [PMID: 16099353] [DOI: 10.1016/j.cogbrainres.2005.02.017]
Abstract
The functional organization of the human auditory cortex is still not well understood with respect to speech perception and language lateralization. In particular, comparatively little data is available in the brain imaging literature on the timing of phonetic processing. We recorded auditory-evoked potentials (AEP) from 27 scalp and additional EOG channels in 12 healthy volunteers performing a free-report dichotic listening task with simple speech sounds (CV syllables: [ba], [da], [ga], [pa], [ta], [ka]). ERP analysis employed independent component analysis (ICA) with wavelet denoising for artifact reduction and improvement of the SNR. The main finding was a 15-ms shorter average latency of the N1-AEP recorded from the scalp approximately overlying the left supratemporal cortical plane compared to the N1-AEP over the homologous right side. Corresponding N1 amplitudes did not differ between these sites. The individual AEP latency differences significantly correlated with the ear advantage as an index of speech/language lateralization. This behaviorally relevant difference in N1 latency between the hemispheres indicates that an important key to understanding speech perception is to consider the functional implications of neuronal event timing.
Collapse
Affiliation(s)
- Tom Eichele
- Department of Biological and Medical Psychology, University of Bergen, Jonas Lies Vei 91, N-5011 Bergen, Norway.
36
Lin YY, Chen WT, Liao KK, Yeh TC, Wu ZA, Ho LT. Hemispheric balance in coding speech and non-speech sounds in Chinese participants. Neuroreport 2005; 16:469-73. [PMID: 15770153] [DOI: 10.1097/00001756-200504040-00010]
Abstract
To study the role of the neuromagnetic auditory response at approximately 100 ms (N100m) in phonetic processing, we recorded N100m in 24 right-handed Chinese participants using a whole-head neuromagnetometer. The stimuli included the vowel /a/ and the consonant-vowels /ba/ and /da/, spoken by one Chinese speaker, and a 1-kHz tone. N100m to tones was larger in the right hemisphere, whereas that to speech sounds was bilaterally similar. The amplitude ratio of speech to non-speech N100m was larger in the left hemisphere. N100m dipoles in the left hemisphere were approximately 2 mm more anterior for speech than for tone stimuli. The results suggest that N100m reflects both acoustic and phonetic processing. Moreover, the ratio of speech to non-speech activation in individual hemispheres may be useful for language lateralization.
Collapse
Affiliation(s)
- Yung-Yang Lin
- Department of Medical Research and Education, Taipei Veterans General Hospital, Taipei, Taiwan.
37
Laufer I, Pratt H. The ‘F-complex’ and MMN tap different aspects of deviance. Clin Neurophysiol 2005; 116:336-52. [PMID: 15661112] [DOI: 10.1016/j.clinph.2004.08.007]
Abstract
OBJECTIVE: To compare the 'F(fusion)-complex' with the mismatch negativity (MMN), both components associated with automatic detection of changes in the acoustic stimulus flow.
METHODS: Ten right-handed adult native Hebrew speakers discriminated vowel-consonant-vowel (V-C-V) sequences /ada/ (deviant) and /aga/ (standard) in an active auditory 'oddball' task, and the brain potentials associated with performance of the task were recorded from 21 electrodes. Stimuli were generated by fusing the acoustic elements of the V-C-V sequences as follows: the base was always presented in front of the subject, and formant transitions were presented to the front, left, or right in a virtual reality room. An illusion of a lateralized echo (duplex sensation) accompanied fusion of the base with the lateralized formant locations. Source current density estimates were derived for the net response to the fusion of the speech elements (F-complex) and for the MMN, using low-resolution electromagnetic tomography (LORETA). Statistical non-parametric mapping was used to estimate the current density differences between the brain sources of the F-complex and the MMN.
RESULTS: Occipito-parietal and prefrontal regions were associated with the F-complex in all formant locations, whereas the vicinity of the supratemporal plane was bilaterally associated with the MMN, but only in the case of front fusion (no duplex effect).
CONCLUSIONS: MMN is sensitive to the novelty of the auditory object in relation to other stimuli in a sequence, whereas the F-complex is sensitive to the acoustic features of the auditory object and reflects a process of matching them with target categories.
SIGNIFICANCE: The F-complex and MMN reflect different aspects of auditory processing in a stimulus-rich and changing environment: content analysis of the stimulus and novelty detection, respectively.
Collapse
Affiliation(s)
- Ilan Laufer
- Evoked Potentials Laboratory, Technion-Israel Institute of Technology, Gutwirth Building, 3200 Haifa, Israel
38
Jerger J, Martin J. Hemispheric asymmetry of the right ear advantage in dichotic listening. Hear Res 2004; 198:125-36. [PMID: 15567609] [DOI: 10.1016/j.heares.2004.07.019]
Abstract
ERP waveforms evoked by target-right and target-left stimuli in a directed-attention, dichotic-listening paradigm were examined using cross-correlation analysis. We analyzed data from two experiments involving linguistic processing: listening for (1) a phonemic feature and (2) a series of morpho-syntactic anomalies. The maximum correlation between target-right and target-left waveforms was achieved when the target-right waveform was delayed relative to the target-left waveform (the tau shift), reflecting the shorter latency of the target-right waveform. We interpret the direction of displacement as equivalent to a "right-ear advantage" in the dichotic listening paradigm. In both tasks, tau shifts were not uniformly distributed across the parietal electrode array: they were greatest on the extreme left side of the head and systematically declined as the electrode site moved rightward, indicating a temporal gradient in the relative latencies of the two waveforms. Results are interpreted in relation to both structural and attentional aspects of dichotic listening.
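The tau-shift measure described in this abstract is the lag at which the cross-correlation of the two waveforms peaks. A minimal sketch of that computation, not the authors' actual code (function and variable names are hypothetical):

```python
import numpy as np

def tau_shift(target_right, target_left, fs):
    """Delay (in ms) applied to the target-right waveform at which its
    cross-correlation with the target-left waveform is maximal.
    A positive value means the target-right waveform leads the
    target-left waveform (i.e., has the shorter latency)."""
    tr = target_right - target_right.mean()
    tl = target_left - target_left.mean()
    xcorr = np.correlate(tr, tl, mode="full")
    # lag of target_right relative to target_left, in samples
    lags = np.arange(-(len(tl) - 1), len(tr))
    return -1000.0 * lags[np.argmax(xcorr)] / fs
```

For example, two identical waveforms sampled at 1 kHz, with the target-right copy peaking 15 ms earlier, yield a tau shift of +15 ms.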
Collapse
Affiliation(s)
- James Jerger
- Texas Auditory Processing Disorder Laboratory, School of Behavioral and Brain Sciences, The University of Texas at Dallas, 2612 Prairie Creek Drive East, Richardson, TX 75080-2679, USA.
39
Takegata R, Nakagawa S, Tonoike M, Näätänen R. Hemispheric processing of duration changes in speech and non-speech sounds. Neuroreport 2004; 15:1683-6. [PMID: 15232307] [DOI: 10.1097/01.wnr.0000134929.04561.64]
Abstract
Sound duration conveys phonemic information in some languages. The present study, using magnetoencephalography (MEG), examined whether the hemispheric activation associated with the processing of duration is different between speech and non-speech sounds in subjects whose native language uses duration as a phonemic cue. The magnetic mismatch negativity (MMNm) response was recorded for equal-duration decrements in vowel, sinusoidal, and spectrally rich complex sounds. Although the MMNm responses to duration changes were predominant in the right hemisphere, the distribution of this response for the vowel stimuli was significantly displaced leftward compared with that for the other two types of stimuli. The results suggest that the hemispheric distribution of the MMNm response to duration change depends on the linguistic relevance of the change.
Collapse
Affiliation(s)
- Rika Takegata
- Cognitive Brain Research Unit, Department of Psychology, P.O. Box 9, University of Helsinki, FI-00014 Helsinki, Finland.
40
Tzourio-Mazoyer N, Josse G, Crivello F, Mazoyer B. Interindividual variability in the hemispheric organization for speech. Neuroimage 2004; 21:422-35. [PMID: 14741679] [DOI: 10.1016/j.neuroimage.2003.08.032]
Abstract
A PET activation study was designed to investigate hemispheric specialization during speech comprehension and production in right- and left-handed subjects. Normalized regional cerebral blood flow (NrCBF) was repeatedly monitored while subjects either listened to factual stories (Story) or covertly generated verbs semantically related to heard nouns (Gener), using silent resting (Rest) as a common control condition. NrCBF variations in each task, as compared to Rest, as well as functional asymmetry indices (FAI = right minus left NrCBF variations), were computed in anatomical regions of interest (AROIs) defined on the single-subject MNI template. FAIs were predominantly leftward in all regions during both tasks, although larger FAIs were observed during Gener. Subjects were declared "typical" for language hemispheric specialization based on the presence of significant leftward asymmetries (FAI < 0) in the pars triangularis and opercularis of the inferior frontal gyrus during Gener, and in the middle and inferior temporal AROIs during Story. Six subjects (including five LH) showed an atypical language representation. Among them, one presented a right hemisphere specialization during both tasks, and another a shift in hemispheric specialization from production to comprehension (left during Gener, right during Story). The group of 14 typical subjects showed significant positive correlations between NrCBF variations in homologous left and right AROIs in temporal areas during Story, and in temporal and inferior frontal areas during Gener, with almost all regions presenting a leftward FAI. Such correlations were also present in deactivated areas with strong leftward asymmetry (supramarginal gyrus, inferior parietal region). These results suggest that entry into a language task translates into a hemispheric reconfiguration of lateral cortical areas, with a global NrCBF increase in the dominant hemisphere and a decrease in the minor hemisphere. This can be regarded as the setting up of a "language mode", under the control of a mechanism that operates at a perisylvian level. On top of this global organization, regional variations carry out the cognitive operations specific to the language task to be performed. Hemispheric relationships could differ in atypical subjects, with either between-task differences in hemispheric regulation or differences in regional specialization.
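The functional asymmetry index used in the study above reduces to a signed right-minus-left difference per homologous ROI pair. A minimal sketch of the classification logic, with hypothetical NrCBF values (ROI names from the abstract; the numbers and the function name are invented for illustration):

```python
def functional_asymmetry_index(nrcbf_right, nrcbf_left):
    """FAI = right minus left NrCBF variation for a homologous AROI pair.
    Negative FAI indicates leftward (left-dominant) asymmetry."""
    return nrcbf_right - nrcbf_left

# Hypothetical task-minus-rest NrCBF variations (right, left) per AROI:
frontal_rois = {
    "pars triangularis": (0.6, 1.4),
    "pars opercularis": (0.5, 1.1),
}
# "Typical" language lateralization: leftward asymmetry (FAI < 0)
# in the inferior frontal AROIs during the generation task.
typical = all(functional_asymmetry_index(r, l) < 0
              for r, l in frontal_rois.values())
```
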
Collapse
Affiliation(s)
- N Tzourio-Mazoyer
- Groupe d'Imagerie Neurofonctionnelle (GIN), UMR 6095 CNRS, CEA, Universités de Caen et Paris 5, 14074 Caen Cedex, France.