1. Jing Y, Xu Z, Pang Y, Liu X, Zhao J, Liu Y. The Neural Correlates of Food Preference among Music Kinds. Foods 2024; 13:1127. [PMID: 38611431] [PMCID: PMC11011844] [DOI: 10.3390/foods13071127]
Abstract
The calorie and taste characteristics of food choices have been shown to relate to the external environment, including music. Previous studies have mostly manipulated basic auditory parameters, and few have explored the impact of complex musical parameters on food selection. This study explored the effects of different kinds of music (classical, rock, jazz, and hip-hop) on liking for foods varying in calories (high and low) and taste (sweet and salty), using event-related potentials (ERPs). Twenty-four participants (8 males, 16 females) were recruited from Southwest University, China, to complete a food-liking task rated on a seven-point Likert scale while EEG signals were recorded (N2, P2, N3, and LPC components). Repeated-measures analyses of covariance showed that ratings for high-calorie foods were greater than those for low-calorie foods. Ratings during classical music were highest for sweet foods, whereas ratings for salty foods did not differ among music kinds. The ERP results showed that P2 amplitudes were greater for sweet foods than for salty foods. N2 amplitudes were greater for salty foods than for sweet foods during rock music; across music kinds, N2 amplitudes during hip-hop music were greatest for sweet foods, whereas N2 amplitudes during rock music were greatest for salty foods. N2 amplitudes during hip-hop music were also greater than those during jazz music. The findings offer practical operational insights for businesses.
Affiliation(s)
- Yuanluo Jing: Faculty of Psychology, Southwest University, Chongqing 400715, China
- Ziyuan Xu: Division of Psychology and Language Sciences, University College London, London WC1H 0AP, UK
- Yazhi Pang: Faculty of Psychology, Southwest University, Chongqing 400715, China
- Xiaolin Liu: School of Music, Southwest University, Chongqing 400715, China
- Jia Zhao: Faculty of Psychology, Southwest University, Chongqing 400715, China; Key Laboratory of Cognition and Personality (Ministry of Education), Southwest University, Chongqing 400715, China
- Yong Liu: Faculty of Psychology, Southwest University, Chongqing 400715, China; Key Laboratory of Cognition and Personality (Ministry of Education), Southwest University, Chongqing 400715, China
2. Zäske R, Kaufmann JM, Schweinberger SR. Neural Correlates of Voice Learning with Distinctive and Non-Distinctive Faces. Brain Sci 2023; 13:637. [PMID: 37190602] [PMCID: PMC10136676] [DOI: 10.3390/brainsci13040637]
Abstract
Recognizing people from their voices may be facilitated by a voice's distinctiveness, in a manner similar to that which has been reported for faces. However, little is known about the neural time-course of voice learning and the role of facial information in voice learning. Based on evidence for audiovisual integration in the recognition of familiar people, we studied the behavioral and electrophysiological correlates of voice learning associated with distinctive or non-distinctive faces. We repeated twelve unfamiliar voices uttering short sentences, together with either distinctive or non-distinctive faces (depicted before and during voice presentation) in six learning-test cycles. During learning, distinctive faces increased early visually-evoked (N170, P200, N250) potentials relative to non-distinctive faces, and face distinctiveness modulated voice-elicited slow EEG activity at the occipito-temporal and fronto-central electrodes. At the test, unimodally-presented voices previously learned with distinctive faces were classified more quickly than were voices learned with non-distinctive faces, and also more quickly than novel voices. Moreover, voices previously learned with faces elicited an N250-like component that was similar in topography to that typically observed for facial stimuli. The preliminary source localization of this voice-induced N250 was compatible with a source in the fusiform gyrus. Taken together, our findings provide support for a theory of early interaction between voice and face processing areas during both learning and voice recognition.
Affiliation(s)
- Romi Zäske: Department of Experimental Otorhinolaryngology, Jena University Hospital, Stoystraße 3, 07743 Jena, Germany; Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University of Jena, Am Steiger 3/1, 07743 Jena, Germany; Voice Research Unit, Friedrich Schiller University of Jena, Leutragraben 1, 07743 Jena, Germany
- Jürgen M. Kaufmann: Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University of Jena, Am Steiger 3/1, 07743 Jena, Germany
- Stefan R. Schweinberger: Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University of Jena, Am Steiger 3/1, 07743 Jena, Germany; Voice Research Unit, Friedrich Schiller University of Jena, Leutragraben 1, 07743 Jena, Germany
3. Morse K, Vander Werff KR. Onset-offset cortical auditory evoked potential amplitude differences indicate auditory cortical hyperactivity and reduced inhibition in people with tinnitus. Clin Neurophysiol 2023; 149:223-233. [PMID: 36963993] [DOI: 10.1016/j.clinph.2023.02.164]
Abstract
OBJECTIVE The current study investigates evidence of hypothesized reduced central inhibition and/or increased excitation in individuals with tinnitus by evaluating cortical auditory onset versus offset responses. METHODS Cortical auditory evoked potentials (CAEPs) were recorded to the onset and offset of 3-second white noise stimuli in tinnitus and control groups matched in pairs by age, hearing, and sex (n = 26 total). Independent t-tests and 2-way mixed model ANOVA were used to evaluate onset-offset differences in amplitude, area, and latency of CAEP components by group. The predictive influence of tinnitus presence and associated participant characteristics on CAEP outcomes was assessed by multiple regression proportional reduction in error. RESULTS The tinnitus group had significantly larger onset minus offset P2 amplitudes (ΔP2 amplitudes) than control group participants. No other component variables differed significantly. ΔP2 amplitude was best predicted by tinnitus status and not significantly influenced by other variables such as hearing loss or age. CONCLUSIONS Hypothesized reduced central inhibition and/or increased excitation in tinnitus participants was partially supported by a group difference in ΔP2 amplitude. SIGNIFICANCE This was the first study to evaluate CAEP onset minus offset differences to investigate changes in central excitation/inhibition in individuals with tinnitus versus controls in matched groups.
Affiliation(s)
- Kenneth Morse: Division of Communication Sciences and Disorders, West Virginia University, USA
4. Dauer T, Nerness B, Fujioka T. Predictability of higher-order temporal structure of musical stimuli is associated with auditory evoked response. Int J Psychophysiol 2020; 153:53-64. [PMID: 32325078] [DOI: 10.1016/j.ijpsycho.2020.04.002]
Abstract
Sound predictability resulting from repetitive patterns can be implicitly learned and often neither requires nor captures our conscious attention. Recently, predictive coding theory has been used as a framework to explain how predictable or expected stimuli evoke and gradually attenuate obligatory neural responses over time compared to those elicited by unpredictable events. However, these results were obtained using the repetition of simple auditory objects such as pairs of tones or phonemes. Here we examined whether the same principle holds for more abstract temporal structures of sounds. If so, a regular repetition schedule for a set of musical patterns should reduce neural processing over the course of listening compared to stimuli with an irregular repetition schedule (and the same set of musical patterns). Electroencephalography (EEG) was recorded while participants passively listened to 6-8 min stimulus sequences in which five different four-tone patterns with temporally regular or irregular repetition were presented successively in a randomized order. N1 amplitudes in response to the first tone of each musical pattern were significantly less negative at the end of the regular sequence compared to the beginning, whereas this reduction was absent in the irregular sequence. These results extend previous findings by showing that N1 reflects automatic learning of the predictable higher-order structure of sound sequences, while continuous engagement of preattentive auditory processing is necessary for unpredictable structure.
Affiliation(s)
- Tysen Dauer: Department of Music, Stanford University, United States
- Barbara Nerness: Department of Music, Stanford University, United States; Center for Computer Research in Music and Acoustics, Department of Music, Stanford University, United States
- Takako Fujioka: Department of Music, Stanford University, United States; Center for Computer Research in Music and Acoustics, Department of Music, Stanford University, United States; Wu Tsai Neurosciences Institute, Stanford University, United States
5. Rayes H, Al-Malky G, Vickers D. Systematic Review of Auditory Training in Pediatric Cochlear Implant Recipients. J Speech Lang Hear Res 2019; 62:1574-1593. [PMID: 31039327] [DOI: 10.1044/2019_jslhr-h-18-0252]
Abstract
Objective The purpose of this systematic review is to evaluate the published research in auditory training (AT) for pediatric cochlear implant (CI) recipients. This review investigates whether AT in children with CIs leads to improvements in speech and language development, cognition, and/or quality of life and whether improvements, if any, remain over time post AT intervention. Method A systematic search of 7 databases identified 96 review articles published up until January 2017, 9 of which met the inclusion criteria. Data were extracted and independently assessed for risk of bias and quality of study against a PICOS (participants, intervention, control, outcomes, and study) framework. Results All studies reported improvements in trained AT tasks, including speech discrimination/identification and working memory. Retention of improvements over time was found whenever it was assessed. Transfer of learning was measured in 4 of 6 studies, which assessed generalization. Quality of life was not assessed. Overall, evidence for the included studies was deemed to be of low quality. Conclusion Benefits of AT were illustrated through the improvement in trained tasks, and this was observed in all reviewed studies. Transfer of improvement to other domains and also retention of benefits post AT were evident when assessed, although rarely done. However, higher quality evidence to further examine outcomes of AT in pediatric CI recipients is needed.
Affiliation(s)
- Hanin Rayes: Department of Speech Hearing and Phonetic Sciences, Faculty of Brain Sciences, University College London, United Kingdom
- Ghada Al-Malky: Ear Institute, Faculty of Brain Sciences, University College London, United Kingdom
- Deborah Vickers: Department of Speech Hearing and Phonetic Sciences, Faculty of Brain Sciences, University College London, United Kingdom; Department of Clinical Neurosciences, Clinical School, University of Cambridge, United Kingdom
6. Rigoulot S, Armony JL. Early selectivity for vocal and musical sounds: electrophysiological evidence from an adaptation paradigm. Eur J Neurosci 2016; 44:2786-2794. [PMID: 27600697] [DOI: 10.1111/ejn.13391]
Abstract
There is growing interest in characterizing the neural basis of music perception and, in particular, in assessing how similar, or not, it is to that of speech. To further explore this question, we employed an EEG adaptation paradigm in which we compared responses to short sounds belonging to the same category, either speech (pseudo-sentences) or music (piano or violin), depending on whether they were immediately preceded by a same- or different-category sound. We observed a larger reduction in the N100 component magnitude in response to musical sounds when they were preceded by music (either the same or a different instrument) than by speech. In contrast, the N100 amplitude was not affected by the preceding stimulus category in the case of speech. For the P200 component, we observed a reduction in amplitude when speech sounds were preceded by speech, compared to music; no such decrease was found for responses to musical sounds. These differences in the processing of speech and music are consistent with the proposal that some degree of category selectivity for these two classes of complex stimuli occurs already at early stages of auditory processing, possibly subserved by partly separate neuronal populations.
Affiliation(s)
- Simon Rigoulot: Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada; Department of Psychiatry, Faculty of Medicine, Douglas Mental Health University Institute, 6875 LaSalle Boulevard, Montreal, QC, H4H 1R3, Canada
- Jorge L Armony: Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada; Department of Psychiatry, Faculty of Medicine, Douglas Mental Health University Institute, 6875 LaSalle Boulevard, Montreal, QC, H4H 1R3, Canada
7. Ross B, Fujioka T. 40-Hz oscillations underlying perceptual binding in young and older adults. Psychophysiology 2016; 53:974-990. [PMID: 27080577] [DOI: 10.1111/psyp.12654]
Abstract
Auditory object perception requires binding of elementary features of complex stimuli. Synchronization of high-frequency oscillation in neural networks has been proposed as an effective alternative to binding via hard-wired connections because binding in an oscillatory network can be dynamically adjusted to the ever-changing sensory environment. Previously, we demonstrated in young adults that gamma oscillations are critical for sensory integration and found that they were affected by concurrent noise. Here, we aimed to support the hypothesis that stimulus evoked auditory 40-Hz responses are a component of thalamocortical gamma oscillations and examined whether this oscillatory system may become less effective in aging. In young and older adults, we recorded neuromagnetic 40-Hz oscillations, elicited by monaural amplitude-modulated sound. Comparing responses in quiet and under contralateral masking with multitalker babble noise revealed two functionally distinct components of auditory 40-Hz responses. The first component followed changes in the auditory input with high fidelity and was of similar amplitude in young and older adults. The second, significantly smaller in older adults, showed a 200-ms interval of amplitude and phase rebound and was strongly attenuated by contralateral noise. The amplitude of the second component was correlated with behavioral speech-in-noise performance. Concurrent noise also reduced the P2 wave of auditory evoked responses at 200-ms latency, but not the earlier N1 wave. P2 modulation was reduced in older adults. The results support the model of sensory binding through thalamocortical gamma oscillations. Limitation of neural resources for this process in older adults may contribute to their speech-in-noise understanding deficits.
Affiliation(s)
- Bernhard Ross: Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Takako Fujioka: Center for Computer Research in Music and Acoustics, Department of Music, Stanford University, Stanford, California, USA; Neurosciences Institute, Stanford University, Stanford, California, USA
8. Daikoku T, Yatomi Y, Yumoto M. Statistical learning of music- and language-like sequences and tolerance for spectral shifts. Neurobiol Learn Mem 2014; 118:8-19. [PMID: 25451311] [DOI: 10.1016/j.nlm.2014.11.001]
Abstract
In our previous study (Daikoku, Yatomi, & Yumoto, 2014), we demonstrated that the N1m response can serve as a marker of the statistical learning process for pitch sequences in which each tone was ordered by a Markov stochastic model. The aim of the present study was to investigate how the statistical learning of music- and language-like auditory sequences is reflected in N1m responses, based on the assumption that language and music share domain generality. Using vowel sounds generated by a formant synthesizer, we devised music- and language-like auditory sequences in which higher-order transitional rules were embedded according to a Markov stochastic model by controlling fundamental (F0) and/or formant frequencies (F1-F2). In each sequence, F0 and/or F1-F2 were spectrally shifted in the last one-third of the tone sequence. Neuromagnetic responses to the tone sequences were recorded from 14 right-handed normal volunteers. In the music- and language-like sequences with pitch change, N1m responses to tones that appeared with higher transitional probability were significantly decreased compared with responses to tones that appeared with lower transitional probability within the first two-thirds of each sequence. Moreover, this amplitude difference was retained within the last one-third of the sequence, after the spectral shifts. However, in the language-like sequence without pitch change, no significant difference could be detected. Pitch change may facilitate statistical learning in language and music. Statistically acquired knowledge may be applied to process altered auditory sequences with spectral shifts. The relative processing of spectral sequences may be a domain-general auditory mechanism that is innate to humans.
Affiliation(s)
- Tatsuya Daikoku: Department of Clinical Laboratory, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Yutaka Yatomi: Department of Clinical Laboratory, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Masato Yumoto: Department of Clinical Laboratory, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
9. Altmann CF, Uesaki M, Ono K, Matsuhashi M, Mima T, Fukuyama H. Categorical speech perception during active discrimination of consonants and vowels. Neuropsychologia 2014; 64:13-23. [DOI: 10.1016/j.neuropsychologia.2014.09.006]
10. Implicit and explicit statistical learning of tone sequences across spectral shifts. Neuropsychologia 2014; 63:194-204. [PMID: 25192632] [DOI: 10.1016/j.neuropsychologia.2014.08.028]
Abstract
We investigated how the statistical learning of auditory sequences is reflected in neuromagnetic responses under implicit and explicit learning conditions. Complex tones with fundamental frequencies (F0s) in a five-tone equal temperament were generated by a formant synthesizer. The tones were then ordered with the constraint that the probability of the forthcoming tone was statistically defined (80% for one tone; 5% for each of the other four) by the latest two successive tones (second-order Markov chains). The tone sequence consisted of 500 tones followed by 250 further tones whose F0s were relatively shifted, generated from the same Markov transition matrix. In explicit and implicit learning conditions, neuromagnetic responses to the tone sequence were recorded from fourteen right-handed participants, and the temporal profiles of the N1m responses to tones with higher and lower transitional probabilities were compared. In the explicit learning condition, N1m responses to tones with higher transitional probability were significantly decreased compared with responses to tones with lower transitional probability in the latter half of the 500-tone sequence, and this difference was retained even after the F0s were relatively shifted. In the implicit learning condition, N1m responses to tones with higher transitional probability were significantly decreased only for the 250 tones following the relative shift of F0s. The delayed detection of learning effects across the spectral shift in the implicit condition may imply that learning progresses earlier under explicit than under implicit learning conditions. The finding that learning effects were retained across spectral shifts regardless of learning modality indicates that relative pitch processing may be an essential human ability.
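The second-order Markov construction described in this abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' stimulus code: the mapping from each two-tone context to its high-probability successor (`expected`) is chosen arbitrarily here, since the paper's actual transition matrix is not given.

```python
import random


def make_markov_sequence(n_tones=500, alphabet=(0, 1, 2, 3, 4),
                         p_high=0.80, seed=0):
    """Generate a tone sequence whose transitions follow a second-order
    Markov chain: given the latest two tones, one 'expected' tone has
    probability p_high and the remaining four share the rest equally."""
    rng = random.Random(seed)
    # Arbitrary (hypothetical) assignment of the high-probability tone
    # for every two-tone context.
    expected = {(a, b): (a + b) % len(alphabet)
                for a in alphabet for b in alphabet}
    p_low = (1.0 - p_high) / (len(alphabet) - 1)
    seq = [rng.choice(alphabet), rng.choice(alphabet)]  # seed context
    for _ in range(n_tones - 2):
        ctx = (seq[-2], seq[-1])
        weights = [p_high if t == expected[ctx] else p_low for t in alphabet]
        seq.append(rng.choices(alphabet, weights=weights)[0])
    return seq, expected
```

The spectrally shifted continuation in the study would reuse the same transition matrix over a shifted set of F0s; in this sketch that amounts to generating 250 more tones with the same `expected` mapping and mapping tone indices to shifted frequencies afterwards.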
11. Repetition suppression comprises both attention-independent and attention-dependent processes. Neuroimage 2014; 98:168-175. [DOI: 10.1016/j.neuroimage.2014.04.084]
12. Harris KC, Vaden KI, Dubno JR. Auditory-evoked cortical activity: contribution of brain noise, phase locking, and spectral power. J Basic Clin Physiol Pharmacol 2014; 25:277-284. [PMID: 25046314] [PMCID: PMC5585860] [DOI: 10.1515/jbcpp-2014-0047]
Abstract
BACKGROUND The N1-P2 is an obligatory cortical response that can reflect the representation of spectral and temporal characteristics of an auditory stimulus. Traditionally, mean amplitudes and latencies of the prominent peaks in the averaged response are compared across experimental conditions. Analyses of the peaks in the averaged response reflect only a subset of the data contained within the electroencephalogram (EEG) signal. We used single-trial analysis techniques to identify the contribution of brain noise, neural synchrony, and spectral power to the generation of P2 amplitude, and how these variables may change with age group. This information is important for appropriate interpretation of event-related potential (ERP) results and for understanding age-related neural pathologies. METHODS EEG was measured from 25 younger and 25 older normal-hearing adults. Age-related and individual differences in P2 response amplitudes, and variability in brain noise, phase-locking value (PLV), and spectral power (4-8 Hz), were assessed from electrode FCz. Model testing and linear regression were used to determine the extent to which brain noise, PLV, and spectral power uniquely predicted P2 amplitudes and varied by age group. RESULTS Younger adults had significantly larger P2 amplitudes, PLV, and power compared to older adults. Brain noise did not differ between age groups. Regression testing revealed that brain noise and PLV, but not spectral power, were unique predictors of P2 amplitudes. Model fit was significantly better in younger than in older adults. CONCLUSIONS ERP analyses are intended to provide a better understanding of the underlying neural mechanisms that contribute to individual and group differences in behavior. The current results support the view that age-related declines in neural synchrony contribute to smaller P2 amplitudes in older normal-hearing adults. Based on our results, we discuss potential models in which differences in neural synchrony and brain noise can account for associations between P2 amplitudes and behavior, and potentially provide a better explanation of the neural mechanisms that underlie declines in auditory processing and training benefits.
Affiliation(s)
- Kelly C. Harris: Department of Otolaryngology–Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave MSC 550, Charleston, SC 29425, USA
- Kenneth I. Vaden: Department of Otolaryngology–Head and Neck Surgery, Medical University of South Carolina, SC, USA
- Judy R. Dubno: Department of Otolaryngology–Head and Neck Surgery, Medical University of South Carolina, SC, USA
13. Tremblay KL, Ross B, Inoue K, McClannahan K, Collet G. Is the auditory evoked P2 response a biomarker of learning? Front Syst Neurosci 2014; 8:28. [PMID: 24600358] [PMCID: PMC3929834] [DOI: 10.3389/fnsys.2014.00028]
Abstract
Even though auditory training exercises for humans have been shown to improve certain perceptual skills of individuals with and without hearing loss, there is a lack of knowledge pertaining to which aspects of training are responsible for the perceptual gains, and which aspects of perception are changed. To better define how auditory training impacts brain and behavior, electroencephalography (EEG) and magnetoencephalography (MEG) have been used to determine the time course and coincidence of cortical modulations associated with different types of training. Here we focus on P1-N1-P2 auditory evoked responses (AEP), as there are consistent reports of gains in P2 amplitude following various types of auditory training experiences; including music and speech-sound training. The purpose of this experiment was to determine if the auditory evoked P2 response is a biomarker of learning. To do this, we taught native English speakers to identify a new pre-voiced temporal cue that is not used phonemically in the English language so that coinciding changes in evoked neural activity could be characterized. To differentiate possible effects of repeated stimulus exposure and a button-pushing task from learning itself, we examined modulations in brain activity in a group of participants who learned to identify the pre-voicing contrast and compared it to participants, matched in time, and stimulus exposure, that did not. The main finding was that the amplitude of the P2 auditory evoked response increased across repeated EEG sessions for all groups, regardless of any change in perceptual performance. What’s more, these effects are retained for months. Changes in P2 amplitude were attributed to changes in neural activity associated with the acquisition process and not the learned outcome itself. A further finding was the expression of a late negativity (LN) wave 600–900 ms post-stimulus onset, post-training exclusively for the group that learned to identify the pre-voiced contrast.
Affiliation(s)
- Kelly L Tremblay: Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Bernhard Ross: Rotman Research Institute, Baycrest Centre, Toronto, ON, Canada; Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Kayo Inoue: Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA; Integrated Brain Imaging Center, Department of Radiology, University of Washington, Seattle, WA, USA
- Katrina McClannahan: Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Gregory Collet: Life Sciences Department, Royal Military Academy, Brussels, Belgium; Unité de Recherche en Neurosciences Cognitives, Centre de Recherches en Cognition et Neurosciences, Université Libre de Bruxelles, Brussels, Belgium
15. Seol J, Oh M, Kim JS, Jin SH, Kim SI, Chung CK. Discrimination of timbre in early auditory responses of the human brain. PLoS One 2011; 6:e24959. [PMID: 21949807] [PMCID: PMC3174256] [DOI: 10.1371/journal.pone.0024959]
Abstract
Background The question of how differences in timbre are represented in the neural response has not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employed phasing and clipping of tones to produce auditory stimuli differing in timbre, reflecting its multidimensional nature, and investigated the auditory response and sensory gating using magnetoencephalography (MEG). Methodology/Principal Findings Thirty-five healthy subjects without hearing deficits participated in the experiments. Pairs of tones with the same or different timbre were presented in a conditioning (S1)–testing (S2) paradigm with an interval of 500 ms. The magnitudes of the auditory M50 and M100 responses differed with timbre in both hemispheres, which might support the idea that timbre, at least as manipulated by phasing and clipping, is discriminated in early auditory processing. An effect of S1 on the second response of a pair occurred in the M100 of the left hemisphere, whereas only in the right hemisphere did both M50 and M100 responses to S2 reflect whether the two stimuli in a pair were the same or not. Both M50 and M100 magnitudes differed with presentation order (S1 vs. S2) for both same and different conditions in both hemispheres. Conclusions/Significance Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, auditory sensory gating is determined not by the stimulus that directly evokes the response, but rather by whether or not the two stimuli are identical in timbre.
Affiliation(s)
- Jaeho Seol
- Interdisciplinary Program in Cognitive Science, Seoul National University College of Humanities, Seoul, Korea
- MEG Center, Department of Neurosurgery, Seoul National University Hospital, Seoul, Korea
- MiAe Oh
- Department of Statistics, Seoul National University College of Natural Sciences, Seoul, Korea
- June Sic Kim
- MEG Center, Department of Neurosurgery, Seoul National University Hospital, Seoul, Korea
- Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Korea
- Seung-Hyun Jin
- MEG Center, Department of Neurosurgery, Seoul National University Hospital, Seoul, Korea
- Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Korea
- Sun Il Kim
- Department of Biomedical Engineering, Hanyang University, Seoul, Korea
- Chun Kee Chung
- Interdisciplinary Program in Cognitive Science, Seoul National University College of Humanities, Seoul, Korea
- MEG Center, Department of Neurosurgery, Seoul National University Hospital, Seoul, Korea
- Department of Neurosurgery, Seoul National University College of Medicine, Seoul, Korea
|
16
|
Schweinberger SR, Walther C, Zäske R, Kovács G. Neural correlates of adaptation to voice identity. Br J Psychol 2011; 102:748-64. [DOI: 10.1111/j.2044-8295.2011.02048.x] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
17
|
Zacharias N, Sielużycki C, Kordecki W, König R, Heil P. The M100 component of evoked magnetic fields differs by scaling factors: implications for signal averaging. Psychophysiology 2011; 48:1069-82. [PMID: 21342204 DOI: 10.1111/j.1469-8986.2011.01183.x] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
MEG and EEG studies of event-related responses often involve comparisons of grand averages, which requires homogeneity of the variances. Here, we examine the possibility, implied by the nature of neural sources and the measuring principles involved, that the M100 component of auditory-evoked magnetic fields differs only by scaling factors across subjects, hemispheres, stimuli, and sensors. Such a multiplicative model predicts a linear increase in the standard deviation with the mean, and thus would have important implications for averaging and comparing such data. Our analyses, at both the sensor and the source level, clearly show that the multiplicative model applies. We therefore propose geometric, rather than arithmetic, averaging of the M100 component across subjects and suggest a novel and superior normalization procedure. Our results question the justification of the common practice of subtracting arithmetic grand averages.
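The abstract's central recommendation, geometric rather than arithmetic averaging, can be sketched in a few lines. The M100 amplitude values below are hypothetical, invented only to illustrate why a multiplicative (scaling-factor) model favors averaging in log space:

```python
import numpy as np

# Hypothetical per-subject M100 peak amplitudes (fT). Under the
# multiplicative model, subjects differ by scaling factors, so the
# standard deviation grows linearly with the mean.
m100 = np.array([45.0, 60.0, 80.0, 110.0, 150.0])

arithmetic_mean = m100.mean()

# Geometric averaging, as the authors propose: average in log space,
# then exponentiate back.
geometric_mean = np.exp(np.log(m100).mean())

# For right-skewed, multiplicatively scaled data the geometric mean
# is smaller than the arithmetic mean and less dominated by the
# largest-amplitude subjects.
print(arithmetic_mean, geometric_mean)
```

Because the log transform makes multiplicative scaling additive, log-domain amplitudes have homogeneous variance, which is why the geometric mean (here about 81 fT versus an arithmetic mean of 89 fT) is the better-behaved group statistic under this model.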
Affiliation(s)
- Norman Zacharias
- Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany
|
18
|
Tremblay KL, Inoue K, McClannahan K, Ross B. Repeated stimulus exposure alters the way sound is encoded in the human brain. PLoS One 2010; 5:e10283. [PMID: 20421969 PMCID: PMC2858650 DOI: 10.1371/journal.pone.0010283] [Citation(s) in RCA: 51] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2009] [Accepted: 03/12/2010] [Indexed: 11/18/2022] Open
Abstract
Auditory training programs are being developed to remediate various types of communication disorders. Biological changes have been shown to coincide with improved perception following auditory training, so there is interest in determining whether these changes represent biological markers of auditory learning. Here we examine the role of stimulus exposure and listening tasks, in the absence of training, on the modulation of evoked brain activity. Twenty adults were divided into two groups and exposed to two similar-sounding speech syllables during four electrophysiological recording sessions (24 hours, one week, and up to one year later). In between each session, members of one group were asked to identify each stimulus. Both groups showed enhanced neural activity from session to session, in the same P2 latency range previously identified as being responsive to auditory training. The enhancement effect was most pronounced over temporal-occipital scalp regions and largest for the group that participated in the identification task. The effects were rapid and long-lasting, with enhanced synchronous activity persisting months after the last auditory experience. Physiological changes did not coincide with perceptual changes, so the results are interpreted to mean that stimulus exposure, with or without being paired with an identification task, alters the way sound is processed in the brain. The cumulative effect likely involves auditory memory; however, in the absence of training, the observed physiological changes are insufficient to result in changes in learned behavior.
Affiliation(s)
- Kelly L Tremblay
- Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington, United States of America.
|
19
|
Zäske R, Schweinberger SR, Kaufmann JM, Kawahara H. In the ear of the beholder: neural correlates of adaptation to voice gender. Eur J Neurosci 2009; 30:527-34. [DOI: 10.1111/j.1460-9568.2009.06839.x] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
20
|
Neural processing of vocal emotion and identity. Brain Cogn 2008; 69:121-6. [PMID: 18644670 DOI: 10.1016/j.bandc.2008.06.003] [Citation(s) in RCA: 43] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2007] [Revised: 06/06/2008] [Accepted: 06/08/2008] [Indexed: 11/24/2022]
Abstract
The voice is a marker of a person's identity that allows individual recognition even when the person is not in sight. Listening to a voice also affords inferences about the speaker's emotional state. Both types of personal information are encoded in characteristic acoustic feature patterns analyzed within the auditory cortex. In the present study, 16 volunteers listened to pairs of non-verbal voice stimuli with happy or sad valence in two different task conditions while event-related brain potentials (ERPs) were recorded. In an emotion matching task, participants indicated whether the expressed emotion of a target voice was congruent or incongruent with that of a preceding prime voice. In an identity matching task, participants indicated whether or not the prime and target voices belonged to the same person. Effects based on the expressed emotion occurred earlier than those based on voice identity. Specifically, P2 amplitudes (approximately 200 ms) were reduced for happy voices when primed by happy voices. Identity match effects, by contrast, did not start until around 300 ms. These results show an early, task-specific, emotion-based influence on the early stages of auditory sensory processing.
|
21
|
Fujioka T, Ross B. Auditory processing indexed by stimulus-induced alpha desynchronization in children. Int J Psychophysiol 2008; 68:130-40. [PMID: 18331761 DOI: 10.1016/j.ijpsycho.2007.12.004] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2007] [Revised: 10/22/2007] [Accepted: 12/03/2007] [Indexed: 12/31/2022]
Abstract
By means of magnetoencephalography (MEG), we investigated event-related synchronization and desynchronization (ERS/ERD) in auditory cortex activity recorded from twelve children aged four to six years while they passively listened to a violin tone and a noise-burst stimulus. Time-frequency analysis using the wavelet transform was applied to single trials of source waveforms observed from the left and right auditory cortices. Stimulus-induced changes in non-phase-locked activity were evident. ERS in the beta range (13-30 Hz) lasted only 100 ms after stimulus onset. This was followed by prominent alpha ERD, which showed a clear dissociation between the upper (12 Hz) and lower (8 Hz) alpha range in both the left and right auditory cortices for both stimuli. The time courses of the alpha ERD (onset around 300 ms, peak at 500 ms, offset after 1500 ms) were similar to those previously found for older children and adults in auditory memory tasks. For the violin tone only, the ERD lasted longer in the upper than in the lower alpha band. The findings suggest that induced alpha ERD indexes auditory stimulus processing in children without a specific cognitive task requirement. The left auditory cortex showed a larger and longer-lasting upper-alpha ERD than the right auditory cortex, likely reflecting hemispheric differences in the maturational stages of neural oscillatory mechanisms.
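As a rough illustration of the ERD measure itself (not the authors' MEG pipeline), alpha ERD is conventionally quantified as the percent change in band power relative to a pre-stimulus baseline, with negative values indicating desynchronization. The synthetic signal below is an assumption for demonstration only:

```python
import numpy as np

# Minimal ERD sketch on synthetic data: simulate 10 Hz alpha whose
# amplitude drops after stimulus onset, then express post-stimulus
# power as percent change from the pre-stimulus baseline.
rng = np.random.default_rng(0)
n_trials, fs = 50, 250                      # trials, sampling rate (Hz)
t = np.arange(-0.5, 2.0, 1 / fs)            # -500 ms to +2000 ms

# Alpha amplitude is suppressed between 0.3 s and 1.5 s (the ERD window).
amp = np.where((t > 0.3) & (t < 1.5), 0.5, 1.0)
trials = amp * np.sin(2 * np.pi * 10 * t) \
         + 0.2 * rng.standard_normal((n_trials, t.size))

power = (trials ** 2).mean(axis=0)          # power per time point across trials
baseline = power[t < 0].mean()              # pre-stimulus reference power
erd = 100 * (power - baseline) / baseline   # percent change; negative = ERD

# Mean ERD inside the suppression window is clearly negative.
print(erd[(t > 0.3) & (t < 1.5)].mean())
```

Real pipelines (as in this study) compute band power from a time-frequency decomposition such as a wavelet transform rather than from a raw squared signal, but the baseline-relative percent-change step is the same.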
Affiliation(s)
- Takako Fujioka
- Rotman Research Institute, Baycrest Centre, University of Toronto, Canada.
|
22
|
Chartrand JP, Peretz I, Belin P. Auditory recognition expertise and domain specificity. Brain Res 2008; 1220:191-8. [PMID: 18299121 DOI: 10.1016/j.brainres.2008.01.014] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2007] [Revised: 12/22/2007] [Accepted: 01/03/2008] [Indexed: 10/22/2022]
Abstract
Auditory recognition expertise refers to one's ability to accurately and rapidly identify individual sound sources within a homogeneous class of stimuli. Compared to the study of visual expertise, the field of expertise in sound source recognition has been neglected. Different types of visual experts have been studied extensively in both behavioral and neuroimaging studies, leading to a vigorous debate about the domain specificity of face perception. In the present paper, we briefly review what is known about visual expertise and propose that the same framework can be used in the auditory domain to ask the question of domain specificity for the processing and neural correlates of the human voice. We suggest that questions like "are voices special?" can be partially answered with neuroimaging studies of "auditory experts", such as musicians and bird experts, who rely on subtle acoustic parameters to identify auditory exemplars at a subordinate level. Future studies of auditory experts can not only serve to answer questions related to the neural correlates of voice perception, but also broaden our understanding of the auditory system.
Affiliation(s)
- Jean-Pierre Chartrand
- International Laboratory for Brain, Music and Sound (BRAMS), Université de Montréal, Montréal, Canada.
|
23
|
Otsuka A, Kuriki S, Murata N, Hasegawa T. Neuromagnetic responses to chords are modified by preceding musical scale. Neurosci Res 2008; 60:50-5. [DOI: 10.1016/j.neures.2007.09.006] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2007] [Revised: 09/18/2007] [Accepted: 09/20/2007] [Indexed: 11/25/2022]
|