4351
Yagi R, Nishina E, Honda M, Oohashi T. Modulatory effect of inaudible high-frequency sounds on human acoustic perception. Neurosci Lett 2003; 351:191-5. [PMID: 14623138 DOI: 10.1016/j.neulet.2003.07.020]
Abstract
We evaluated the effects of the intensity of an inaudible high-frequency component (HFC) of sound on human responses by employing a multi-parametric approach consisting of behavioral measurements of the comfortable listening level (CLL), psychological measurements of the subjective impression of sounds, and physiological measurements using electroencephalogram (EEG). Increasing the intensity of the inaudible HFC resulted in a significant increase in the CLL, the subjective impression of sounds, and the occipital alpha frequency component of the spontaneous EEG. These effects peaked with an increase of 6 dB in HFC intensity. The results of the present study suggest that the intensity of inaudible HFC non-linearly modulates human sound perception.
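The 6 dB figure can be put in linear terms: for amplitude or sound-pressure levels, a change of x dB corresponds to a ratio of 10^(x/20), so +6 dB is close to a doubling of amplitude. A minimal sketch (the function name is illustrative, not from the paper):

```python
def db_to_amplitude_ratio(db):
    """Convert a level change in decibels to a linear amplitude ratio,
    using the 20*log10 convention for amplitude/pressure levels."""
    return 10 ** (db / 20)

# +6 dB is close to a doubling of sound-pressure amplitude
print(round(db_to_amplitude_ratio(6), 3))  # → 1.995
```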
4352
Abstract
A performer can convey different expressive intentions when playing a piece of music. Perceptual and acoustic analyses of expressive music performances attempt to understand the strategies musicians use. Models for rendering different expressive intentions have also been developed, both for analysis and to enrich the experience of multimedia products.
4353
Escabí MA, Read HL. Representation of spectrotemporal sound information in the ascending auditory pathway. Biol Cybern 2003; 89:350-62. [PMID: 14669015 DOI: 10.1007/s00422-003-0440-8]
Abstract
The representation of sound information in the central nervous system relies on the analysis of time-varying features in communication and other environmental sounds. How are auditory physiologists and theoreticians to choose an appropriate method for characterizing spectral and temporal acoustic feature representations in single neurons and neural populations? A brief survey of currently available scientific methods and their potential usefulness is given, with a focus on the strengths and weaknesses of using noise analysis techniques for approximating spectrotemporal response fields (STRFs). Noise analysis has been used to foster several conceptual advances in describing neural acoustic feature representation in a variety of species and auditory nuclei. STRFs have been used to quantitatively assess spectral and temporal transformations across mutually connected auditory nuclei, to identify neuronal interactions between spectral and temporal sound dimensions, and to compare linear vs. nonlinear response properties through state-dependent comparisons. We propose that noise analysis techniques used in combination with novel stimulus paradigms and parametric experiment designs will provide powerful means of exploring acoustic feature representations in the central nervous system.
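The core of STRF estimation by noise analysis is spike-triggered averaging: the stimulus spectrogram is averaged over a window preceding each spike. A minimal sketch, with discretization and names as assumptions rather than anything specified in the paper:

```python
import numpy as np

def estimate_strf(spectrogram, spike_times, n_lags):
    """Estimate a spectrotemporal receptive field (STRF) by
    spike-triggered averaging: average the stimulus spectrogram over
    the `n_lags` time bins preceding each spike.

    spectrogram : (n_freq, n_time) array of stimulus energy
    spike_times : iterable of spike bin indices on the time axis
    n_lags      : number of time bins of stimulus history to average
    """
    n_freq, n_time = spectrogram.shape
    strf = np.zeros((n_freq, n_lags))
    n_spikes = 0
    for t in spike_times:
        if t >= n_lags:                       # need a full history window
            strf += spectrogram[:, t - n_lags:t]
            n_spikes += 1
    return strf / max(n_spikes, 1)
```

For white-noise-like stimuli this average approximates the linear filter of the neuron; correlated stimuli require an additional whitening step not shown here.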
4354
Billingsley RL, Jackson EF, Slopis JM, Swank PR, Mahankali S, Moore BD. Functional magnetic resonance imaging of phonologic processing in neurofibromatosis 1. J Child Neurol 2003; 18:731-40. [PMID: 14696899 DOI: 10.1177/08830738030180110701]
Abstract
Neurofibromatosis 1 is associated with reading disabilities, but few associations between neuroanatomic abnormalities and reading problems have been found. We examined the neuronal bases for phonologic processing, a core component of learning to read, in 15 individuals with neurofibromatosis 1 and 15 controls using functional magnetic resonance imaging (MRI). Our results revealed differential use of inferior and dorsolateral prefrontal cortical areas relative to posterior (temporal, parietal, and occipital) cortices for participants with neurofibromatosis 1 compared with controls during phonologic (rhyme) decisions. In addition, similar to previous brain imaging studies of reading deficits in the general population, poorer performance on one of the phonologic decision tasks was associated with increased signal change in the right superior temporal gyrus for the neurofibromatosis 1 group. Behavioral performance on the functional MRI tasks was related to academic reading measures for the neurofibromatosis 1 group. The differential patterns of functional connectivity observed here lend support to previous morphologic studies that suggested inferior frontal and superior temporal areas to be important mediators of reading and language development in neurofibromatosis 1.
4355
Nelken I, Fishbach A, Las L, Ulanovsky N, Farkas D. Primary auditory cortex of cats: feature detection or something else? Biol Cybern 2003; 89:397-406. [PMID: 14669020 DOI: 10.1007/s00422-003-0445-3]
Abstract
Neurons in sensory cortices are often assumed to be "feature detectors", computing simple and then successively more complex features out of the incoming sensory stream. These features are somehow integrated into percepts. Despite many years of research, a convincing candidate for such a feature in primary auditory cortex has not been found. We argue that feature detection is actually a secondary issue in understanding the role of primary auditory cortex. Instead, the major contribution of primary auditory cortex to auditory perception is in processing previously derived features on a number of different timescales. We hypothesize that, as a result, neurons in primary auditory cortex represent sounds in terms of auditory objects rather than in terms of feature maps. According to this hypothesis, primary auditory cortex has a pivotal role in the auditory system in that it generates the representation of auditory objects to which higher auditory centers assign properties such as spatial location, source identity, and meaning.
4356
Burkitt AN, van Hemmen JL. How synapses in the auditory system wax and wane: theoretical perspectives. Biol Cybern 2003; 89:318-32. [PMID: 14669012 DOI: 10.1007/s00422-003-0437-3]
Abstract
Spike-timing-dependent synaptic plasticity has recently provided an account of both the acuity of sound localization and the development of temporal-feature maps in the avian auditory system. The dynamics of the resulting learning equation, which describes the evolution of the synaptic weights, is governed by an unstable fixed point. We outline the derivation of the learning equation for both the Poisson neuron model and the leaky integrate-and-fire neuron with conductance synapses. The asymptotic solutions of the learning equation can be described by a spectral representation based on a biorthogonal expansion.
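Learning equations of this kind are typically built from a pairwise spike-timing window. A minimal sketch of the standard exponential form (parameter values are illustrative, not taken from the paper):

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=0.02, tau_minus=0.02):
    """Weight change for a single pre/post spike pair under an
    exponential STDP window.

    dt = t_post - t_pre (seconds).  Non-negative dt (pre before post)
    potentiates; negative dt depresses.  A slight excess of depression
    over potentiation (a_minus > a_plus), as assumed here, is one common
    way such learning rules are kept from running away.
    """
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)
```

Averaging this window over pre/post spike statistics yields the learning equation for the mean weight drift; its fixed-point structure (stable or unstable) is what the abstract's spectral analysis characterizes.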
4357
Obleser J, Lahiri A, Eulitz C. Auditory-evoked magnetic field codes place of articulation in timing and topography around 100 milliseconds post syllable onset. Neuroimage 2003; 20:1839-47. [PMID: 14642493 DOI: 10.1016/j.neuroimage.2003.07.019]
Abstract
This study demonstrates by means of magnetic source imaging how consonants and vowels that constitute a syllable differently affect the neural processing within the auditory cortex. We recently identified a topographically separate processing for mutually exclusive place features in isolated vowels (Obleser et al., in press). Does this mapping principle also hold for stop consonants with differing places of articulation? How is the N100m response to consonant-vowel (CV) syllables affected by the congruency of place information in the consonant and the vowel? Moreover, how is the N100m affected by coarticulation, i.e., the spreading of place features to adjacent phonemes? By systematically varying phonological information in the consonant as well as in the vowel of CV syllables, we were able to reveal a difference in N100m syllable source location along the anterior-posterior axis due to mutually exclusive places of articulation in the vowel of the syllable. We also found a change in source orientation rather than source location due to the same mutually exclusive features in the onset of the syllable. Furthermore, the N100m time course of the brain response delivered important complementary information to identify the phonological features present in the speech signal. Responses to all syllable categories originated in the perisylvian region anterior to the source of a band-passed noise stimulus. The systematic variation of both consonantal and vocalic place features and the study of their interaction on auditory processing proves to be a valuable method to gain more insight into the elusive phenomenon of human speech recognition.
4358
Talcott JB, Gram A, Van Ingelghem M, Witton C, Stein JF, Toennessen FE. Impaired sensitivity to dynamic stimuli in poor readers of a regular orthography. Brain Lang 2003; 87:259-66. [PMID: 14585295 DOI: 10.1016/s0093-934x(03)00105-6]
Abstract
The mappings from grapheme to phoneme are much less consistent in English than they are in most other languages. Therefore, the differences found between English-speaking dyslexics and controls on sensory measures of temporal processing might be related more to the irregularities of English orthography than to a general deficit affecting reading ability in all languages. However, here we show that poor readers of Norwegian, a language with a relatively regular orthography, are less sensitive than controls to dynamic visual and auditory stimuli. Consistent with results from previous studies of English readers, detection thresholds for visual motion and auditory frequency modulation (FM) were significantly higher in 19 poor readers of Norwegian than in 22 control readers of the same age. Over two-thirds (68.4%) of the children identified as poor readers were less sensitive than controls to the visual coherent-motion stimulus, the auditory 2 Hz FM stimulus, or both.
4359
Cardin JA, Schmidt MF. Song system auditory responses are stable and highly tuned during sedation, rapidly modulated and unselective during wakefulness, and suppressed by arousal. J Neurophysiol 2003; 90:2884-99. [PMID: 12878713 DOI: 10.1152/jn.00391.2003]
Abstract
We used auditory responsiveness in the avian song system to investigate the complex relationship between behavioral state and sensory processing in a high-order sensorimotor brain area. We present evidence from recordings in awake, anesthetized, and sleeping male zebra finches (Taeniopygia guttata) that auditory responsiveness in nucleus HVc is profoundly affected by changes in behavioral state. In anesthetized and sleeping birds, auditory responses were characterized by an increase in firing rate that was selective for the bird's own song (BOS) and highly stable over time. In contrast, HVc responses during wakefulness were extremely variable and transitioned between undetectable and robust levels over short intervals. Surprisingly, auditory responses in awake birds were not selective for the BOS stimulus. The variability of HVc auditory responses in awake birds suggests that, as in mammals, wakefulness is not a uniform behavioral state. Rather, auditory responsiveness likely is continually influenced by variables such as arousal state. We therefore developed several experimental paradigms in which we could manipulate arousal levels during auditory stimulus presentation. In all cases, arousal suppressed HVc auditory responses. This effect was specific to the song system, as auditory responses in Field L, a primary auditory area that is a source of auditory input to HVc, were unaffected. While arousal acts as a negative regulator of HVc auditory responsiveness, the presence and variability of the responses observed in awake, alert birds suggests that other mechanisms, such as attention, may enhance auditory responsiveness. The interplay between behavioral state and sensory processing may regulate song system responsiveness according to the bird's behavioral and social context.
4361
Winkler I, Teder-Sälejärvi WA, Horváth J, Näätänen R, Sussman E. Human auditory cortex tracks task-irrelevant sound sources. Neuroreport 2003; 14:2053-6. [PMID: 14600496 DOI: 10.1097/00001756-200311140-00009]
Abstract
The brain organizes sound into coherent sequences, termed auditory streams. We asked whether task-irrelevant sounds would be detected as separate auditory streams in a natural listening environment that included three simultaneously active sound sources. Participants watched a movie with sound while street-noise and sequences of naturally varying footstep sounds were presented in the background. Occasional deviations in the footstep sequences elicited the mismatch negativity (MMN) event-related potential. The elicitation of MMN showed that the regular features of the footstep sequences had been registered and their violations detected, which could only occur if the footstep sequence had been detected as a separate auditory stream. Our results demonstrate that sounds are organized into auditory streams irrespective of their relevance to ongoing behavior.
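The MMN is conventionally quantified as a deviant-minus-standard difference wave computed from averaged epochs. A minimal sketch, with the array shapes as assumptions:

```python
import numpy as np

def difference_wave(deviant_epochs, standard_epochs):
    """Average the single-trial epochs of each condition and subtract.
    The deviant-minus-standard difference isolates the mismatch
    negativity (MMN), which appears as a frontocentral negativity
    roughly 100-250 ms after deviance onset.

    Both inputs: (n_trials, n_samples) arrays of baseline-corrected EEG.
    """
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
```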
4363
Svirskis G, Dodla R, Rinzel J. Subthreshold outward currents enhance temporal integration in auditory neurons. Biol Cybern 2003; 89:333-40. [PMID: 14669013 PMCID: PMC3677199 DOI: 10.1007/s00422-003-0438-2]
Abstract
Many auditory neurons possess low-threshold potassium currents (I(KLT)) that enhance their responsiveness to rapid and coincident inputs. We present recordings from gerbil medial superior olivary (MSO) neurons in vitro and modeling results that illustrate how I(KLT) improves the detection of brief signals, of weak signals in noise, and of the coincidence of signals (as needed for sound localization). We quantify the enhancing effect of I(KLT) on temporal processing with several measures: signal-to-noise ratio (SNR), reverse correlation or spike-triggered averaging of input currents, and interaural time difference (ITD) tuning curves. To characterize how I(KLT), which activates below spike threshold, influences a neuron's voltage rise toward threshold, i.e., how it filters the inputs, we focus first on the response to weak and noisy signals. Cells and models were stimulated with a computer-generated steady barrage of random inputs, mimicking weak synaptic conductance transients (the "noise"), together with a larger but still subthreshold postsynaptic conductance, EPSG (the "signal"). Reduction of I(KLT) decreased the SNR, mainly due to an increase in spontaneous firing (more "false positives"). The spike-triggered reverse correlation indicated that I(KLT) shortened the integration time for spike generation. I(KLT) also heightened the model's timing selectivity for coincidence detection of simulated binaural inputs. Further, ITD tuning is shifted in favor of a slope code rather than a place code by precise and rapid inhibition onto MSO cells (Brand et al. 2002). In several ways, low-threshold outward currents are seen to shape integration of weak and strong signals in auditory neurons.
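The suppressive side of a subthreshold outward conductance can be caricatured in a leaky integrate-and-fire model. This sketch deliberately simplifies: it uses a static (non-voltage-gated) conductance in dimensionless units, not the voltage-gated I(KLT) modeled in the paper, and all parameter values are illustrative:

```python
def lif_spike_count(i_input, g_klt=0.0, dt=1e-4, tau_m=0.01,
                    v_thresh=1.0, e_k=-0.2, t_end=0.5):
    """Euler-integrated leaky integrate-and-fire neuron in
    dimensionless units (resting potential 0, reset to rest).
    g_klt adds a static outward conductance pulling V toward e_k,
    so the same input current drives fewer spikes as g_klt grows."""
    v, spikes = 0.0, 0
    for _ in range(int(t_end / dt)):
        # leak + outward conductance + input, forward Euler step
        v += (-v - g_klt * (v - e_k) + i_input) * dt / tau_m
        if v >= v_thresh:
            spikes += 1
            v = 0.0          # reset after spike
    return spikes
```

Even this caricature reproduces one qualitative point: the outward conductance raises the effective input needed to fire, which is the flip side of the sharpened temporal selectivity the abstract describes.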
4364
Kakigi R, Naka D, Okusa T, Wang X, Inui K, Qiu Y, Tran TD, Miki K, Tamura Y, Nguyen TB, Watanabe S, Hoshiyama M. Sensory perception during sleep in humans: a magnetoencephalographic study. Sleep Med 2003; 4:493-507. [PMID: 14607343 DOI: 10.1016/s1389-9457(03)00169-2]
Abstract
We report the changes in brain responses during sleep following auditory, visual, somatosensory and painful somatosensory stimulation, measured using magnetoencephalography (MEG). Surprisingly, very large changes were found under all conditions, although the changes differed across conditions. There are, however, some common findings. Short-latency components, reflecting the primary cortical activities generated in the primary sensory cortex for each stimulus modality, show no significant change, or are slightly prolonged in latency and decreased in amplitude. These findings indicate that neuronal activities in the primary sensory cortex are unaffected or only slightly inhibited during sleep. By contrast, middle- and long-latency components, probably reflecting secondary activities, are strongly affected during sleep. Since the dipole location between wakefulness and sleep is changed (auditory stimulation), unchanged (somatosensory stimulation) or indeterminate (visual stimulation), involvement of different regions may be one explanation for these changes in activity, although the activated regions are very close to each other. The enhancement of activities suggests two possibilities: an increase in the activity of excitatory systems during sleep, or a decrease in the activity of inhibitory systems that are active in the awake state. We have no evidence for either, but we favor the latter, since it is difficult to see why neuronal activities would increase during sleep.
4365
Abstract
In major-minor tonal music, chord functions are arranged according to certain regularities. The dominant-tonic progression, known as an authentic cadence, is often used as a marker of the end of a harmonic progression and has been considered a basic syntactic structure of major-minor tonal music by several music theorists and music psychologists. We review data from studies in which brain responses to an authentic cadence were compared to those elicited by music-syntactically inappropriate endings. In event-related electric brain potentials (recorded with EEG), the inappropriate endings elicit early right anterior negativity (ERAN), which is maximal around 200 ms after the presentation of an inappropriate chord. The ERAN is reminiscent of early anterior negativities elicited by syntactic incongruities during the perception of language. Magnetoencephalographic (MEG) data suggest that the ERAN is generated in the inferior frontolateral cortex, an area known to be crucially involved in the processing of (linguistic) syntax. Interestingly, the ERAN can be recorded in nonmusicians and in children, indicating that the ability to acquire (implicit) knowledge about musical regularities and to process musical information according to this knowledge is a general ability of the human brain. This ability is probably of great importance for the acquisition of language in infants and children.
4366
Abstract
Musical timbre is a multidimensional property of sound that allows one to distinguish musical instruments. In this paper, studies that explore the cerebral substrate underlying the processing of musical timbre are discussed. Perceptual asymmetries measured in normal participants, deficits of musical timbre perception obtained in brain-damaged patients, as well as results obtained with various neuroimaging methods are reviewed. The findings obtained in all of these studies generally support the predominant involvement of right temporal lobe areas, and more specifically of its anterior part, in processing spectral and temporal envelopes of musical timbre. However, controversies still exist about the contribution of the left temporal lobe in timbre perception. The necessity of comparing data obtained with different perceptual paradigms (same-different discrimination and similarity judgment) and various types of stimuli (single tones and melodies) was emphasized by reporting lesion studies carried out in patients with unilateral temporal lobe lesions. The few neuroimaging studies published in this domain provided additional and complementary findings. Unlike lesion studies that allow us to infer the cerebral structures that are essential for timbre perception, the latter investigations implicate a more distributed neural network in timbre processing that extends along the superior temporal gyrus to include not only anterior but also posterior temporal regions and possibly frontal areas as well.
4367
Tillmann B, Janata P, Bharucha JJ. Activation of the inferior frontal cortex in musical priming. Ann N Y Acad Sci 2003; 999:209-11. [PMID: 14681143 DOI: 10.1196/annals.1284.031]
Abstract
Musical contexts influence the processing of target events. Our study investigated the neural correlates of processing related and unrelated musical events presented as the last chord of eight-chord sequences.
4368
Krumhansl CL. Experimental strategies for understanding the role of experience in music cognition. Ann N Y Acad Sci 2003; 999:414-28. [PMID: 14681166 DOI: 10.1196/annals.1284.052]
Abstract
Research in music cognition has assessed the role of experience by investigating the effects of development, training, and cross-cultural differences. A fourth approach is to apply statistical techniques to identify patterns that are relatively frequent in listeners' musical experience. As an example of this approach, a functional magnetic resonance imaging (fMRI) study was conducted on melodic expectancy. Two patterns that frequently began musical themes in classical and folk music were identified. A final continuation tone was added to these opening patterns. Some of the continuation tones were frequent in the musical styles; others were infrequent. Listeners judged how well the continuation tones fit with their expectations. A sparse-sampling (event-related) design was used in the fMRI study, with image acquisition at varying delays after the sequence. Early acquisitions showed activation in the auditory cortex, and late acquisitions showed effects of the motor response. These results suggest that the timing of the image acquisitions was appropriate. Contrasting melodies and monotonic controls showed right inferior frontal activation, similar to that found in other studies. However, no differences were found as a function of whether the continuation tone was frequent or infrequent in the statistical style analysis. Methodological differences between this study and other recent fMRI studies on harmonic expectations are discussed.
4369
Haarala C, Aalto S, Hautzel H, Julkunen L, Rinne JO, Laine M, Krause B, Hämäläinen H. Effects of a 902 MHz mobile phone on cerebral blood flow in humans: a PET study. Neuroreport 2003; 14:2019-23. [PMID: 14600490 DOI: 10.1097/00001756-200311140-00003]
Abstract
Fourteen healthy right-handed subjects were scanned using PET with a [15O]water tracer during exposure to the electromagnetic field (EMF) emitted by a mobile phone and during sham exposure, under double-blind conditions. During scanning, the subjects performed a visual working memory task. Exposure to an active mobile phone produced a relative decrease in regional cerebral blood flow (rCBF) bilaterally in the auditory cortex, but no rCBF changes were observed in the area of maximum EMF. These remote findings could in principle have been caused by the EMF emitted by the active phone, but they are more likely the result of an auditory signal from it; it is therefore not reasonable to attribute them to the EMF. Further study of human rCBF during exposure to the EMF of a mobile phone is needed.
4370
Ruusuvirta T, Huotilainen M, Fellman V, Näätänen R. The newborn human brain binds sound features together. Neuroreport 2003; 14:2117-9. [PMID: 14600508 DOI: 10.1097/00001756-200311140-00021]
Abstract
To process a stimulus as a holistic entity, the human brain must be able to conjoin its different features. Previous evidence suggests that this ability emerges during the first months of life, implying its considerable dependence on postnatal development. We recorded human newborn (1-3 days of age) electrical brain responses to frequently occurring (standard) sounds and to rarely occurring (deviant) sounds in a series. Responses to deviants differed from those to standards despite the fact that only the combination of sound frequency and intensity could be used as a cue for discriminating between these sound types. Our finding suggests that the human brain is ready for auditory feature binding very soon after birth.
4371
Abstract
Rhythm is widely acknowledged to be an important feature of both speech and music, yet there is little empirical work comparing rhythmic organization in the two domains. One approach to the empirical comparison of rhythm in language and music is to break rhythm down into subcomponents and compare each component across domains. This approach reveals empirical evidence that rhythmic grouping is an area of overlap between language and music, but no empirical support for the long-held notion that language has periodic structure comparable to that of music. Focusing on the statistical patterning of event duration, new evidence suggests that the linguistic rhythm of a culture leaves an imprint on its musical rhythm. The latter finding suggests that one effective strategy for comparing rhythm in language and music is to determine if differences in linguistic rhythm between cultures are reflected in differences in musical rhythm.
4372
Dahl S, Granqvist S. Estimating internal drift and just noticeable difference in the perception of continuous tempo drift. Ann N Y Acad Sci 2003; 999:161-5. [PMID: 14681132 DOI: 10.1196/annals.1284.020]
Abstract
Is there such a thing as an internal representation of a "steady tempo" and is this representation itself free from tempo drift? To investigate this question, we propose a new method for studying detection of continuous tempo drift.
4373
De Baene W, Vandierendonck A, Leman M, Widmann A, Tervaniemi M. Exploration of roughness by means of the mismatch negativity paradigm. Ann N Y Acad Sci 2003; 999:170-2. [PMID: 14681134 DOI: 10.1196/annals.1284.022]
Abstract
A mismatch negativity study was set up to find the neural correlates of roughness perception. The results suggest that when the sounds are not attended to, roughness is reflected by the mismatch positivity as evidenced at the mastoid electrodes.
4375
Demirtas S, Goksoy C. Dynamics of audio-visual interactions in the guinea pig brain: an electrophysiological study. Neuroreport 2003; 14:2061-5. [PMID: 14600498 DOI: 10.1097/00001756-200311140-00011]
Abstract
This study characterizes audio-visual interactions in the guinea pig brain as reflected in bioelectrical activity. The difference potential, taken as evidence of an interaction, was calculated by subtracting the sum of the averaged potentials recorded during separate visual and auditory stimulation from the averaged potential recorded when the two stimuli were combined in the same sweep. Timing experiments showed an interaction when the auditory stimulus was applied from 24 ms before to 201 ms after visual stimulation. The latency between the difference potential and the auditory stimulus was stable. Laterality experiments showed that no interaction is observed when auditory and/or visual stimulation is delivered ipsilateral to the recording side.
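The subtraction described here, the response to combined stimulation minus the sum of the unimodal responses, is the standard additive-model test for multisensory interaction. A minimal sketch with hypothetical arrays:

```python
import numpy as np

def interaction_residual(av_avg, a_avg, v_avg):
    """Difference potential used as evidence of audio-visual
    interaction: AV - (A + V), computed sample by sample over
    averaged evoked potentials.  Under a purely additive (independent)
    model the residual is zero; systematic deviations indicate a
    non-additive, i.e. interactive, response."""
    return av_avg - (a_avg + v_avg)
```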