1. Jaquerod ME, Knight RS, Lintas A, Villa AEP. A Dual Role for the Dorsolateral Prefrontal Cortex (DLPFC) in Auditory Deviance Detection. Brain Sci 2024; 14:994. PMID: 39452008; PMCID: PMC11505713; DOI: 10.3390/brainsci14100994.
Abstract
BACKGROUND: In the oddball paradigm, the dorsolateral prefrontal cortex (DLPFC) is often associated with active cognitive responses, such as maintaining information in working memory or adapting response strategies. While some evidence points to a role for the DLPFC in passive auditory deviance perception, the spatiotemporal neurodynamics involved remain poorly understood. METHODS: Event-related optical signals (EROS) and event-related potentials (ERPs) were recorded simultaneously over the prefrontal cortex, for the first time, using a 64-channel electroencephalography (EEG) system during passive auditory deviance perception in 12 right-handed young adults (7 women, 5 men). In this oddball paradigm, deviant stimuli (a 1500 Hz pure tone) elicited a negative shift in the N1 ERP component, related to the mismatch negativity (MMN), and a significant positive deflection associated with the P300, compared to standard stimuli (a 1000 Hz tone). RESULTS: We detected enhanced neural activity in the left middle frontal gyrus (MFG) at the latency of the MMN component, followed by later activation in the right MFG at the latency of the P3a ERP component. These findings support the hypothesis that the DLPFC not only participates in active tasks but also plays a critical role in processing deviant stimuli under passive conditions, marking a shift from pre-attentive to attentive processing. CONCLUSIONS: These dynamics provide deeper insight into the DLPFC's role in evaluating the novelty or unexpectedness of a deviant stimulus, updating its cognitive value, and adjusting future predictions accordingly. However, the small sample could limit the generalizability of the observations, particularly with respect to the effect of handedness, and studies with larger and more diverse samples are needed to validate these conclusions.
Affiliation(s)
- Manon E. Jaquerod: NeuroHeuristic Research Group, University of Lausanne, Quartier UNIL-Chamberonne, 1015 Lausanne, Switzerland
- Ramisha S. Knight: Beckman Institute, University of Illinois at Urbana-Champaign, 405 N Mathews Ave., Urbana, IL 61801, USA; Aptima, Inc., 2555 University Blvd, Fairborn, OH 45324, USA
- Alessandra Lintas: NeuroHeuristic Research Group, University of Lausanne, Quartier UNIL-Chamberonne, 1015 Lausanne, Switzerland; LABEX, HEC Lausanne, University of Lausanne, Quartier UNIL-Chamberonne, 1015 Lausanne, Switzerland
- Alessandro E. P. Villa: NeuroHeuristic Research Group, University of Lausanne, Quartier UNIL-Chamberonne, 1015 Lausanne, Switzerland
2. MacLean J, Stirn J, Sisson A, Bidelman GM. Short- and long-term neuroplasticity interact during the perceptual learning of concurrent speech. Cereb Cortex 2024; 34:bhad543. PMID: 38212291; PMCID: PMC10839853; DOI: 10.1093/cercor/bhad543.
Abstract
Plasticity from auditory experience shapes the brain's encoding and perception of sound. However, whether such long-term plasticity alters the trajectory of short-term plasticity during speech processing has yet to be investigated. Here, we explored the neural mechanisms and interplay between short- and long-term neuroplasticity for rapid auditory perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians. Participants learned to identify double-vowel mixtures during ~45 min training sessions recorded simultaneously with high-density electroencephalography (EEG). We analyzed frequency-following responses (FFRs) and event-related potentials (ERPs) to investigate neural correlates of learning at the subcortical and cortical levels, respectively. Although both groups showed rapid perceptual learning, musicians made faster behavioral decisions than nonmusicians overall. Learning-related changes were not apparent in brainstem FFRs. However, plasticity was highly evident in the cortex, where ERPs revealed distinct hemispheric asymmetries between groups suggestive of different neural strategies (musicians: right-hemisphere bias; nonmusicians: left hemisphere). Source reconstruction and the early (150-200 ms) time course of these effects localized learning-induced cortical plasticity to auditory-sensory brain areas. Our findings reinforce the domain-general benefits of musicianship but reveal that successful speech-sound learning is driven by a critical interplay between long- and short-term mechanisms of auditory plasticity, which first emerge at a cortical level.
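The ERP analysis this abstract describes reduces, at its core, to epoching the continuous EEG around stimulus onsets, baseline-correcting each epoch, and averaging. A minimal NumPy sketch of that core step, with an illustrative function name and synthetic toy data (not the study's actual pipeline or parameters):

```python
import numpy as np

def erp_average(eeg, onsets, sfreq, tmin=-0.1, tmax=0.5):
    """Average EEG epochs time-locked to stimulus onsets (the basic ERP estimate).

    eeg: array of shape (n_channels, n_samples); onsets: event times in seconds.
    """
    start = int(tmin * sfreq)  # samples before onset (negative)
    stop = int(tmax * sfreq)   # samples after onset
    epochs = []
    for onset in onsets:
        i = int(onset * sfreq)
        if i + start >= 0 and i + stop <= eeg.shape[-1]:
            seg = eeg[..., i + start:i + stop]
            # Baseline-correct using the pre-stimulus interval
            baseline = seg[..., :-start].mean(axis=-1, keepdims=True)
            epochs.append(seg - baseline)
    return np.mean(epochs, axis=0)

# Toy data: 4 channels, 10 s of noise at 250 Hz, one event per second
rng = np.random.default_rng(0)
sfreq = 250
eeg = rng.standard_normal((4, 10 * sfreq))
onsets = np.arange(0.5, 9.5, 1.0)
erp = erp_average(eeg, onsets, sfreq)
print(erp.shape)  # (4, 150): 4 channels x 0.6 s window
```

Dedicated toolboxes (e.g., MNE-Python) add filtering, artifact rejection, and channel metadata on top of this same epoch-and-average operation.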
Affiliation(s)
- Jessica MacLean: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA; Program in Neuroscience, Indiana University, Bloomington, IN, USA
- Jack Stirn: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Alexandria Sisson: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Gavin M Bidelman: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA; Program in Neuroscience, Indiana University, Bloomington, IN, USA; Cognitive Science Program, Indiana University, Bloomington, IN, USA
3. MacLean J, Stirn J, Sisson A, Bidelman GM. Short- and long-term experience-dependent neuroplasticity interact during the perceptual learning of concurrent speech. bioRxiv [preprint] 2023:2023.09.26.559640. PMID: 37808665; PMCID: PMC10557636; DOI: 10.1101/2023.09.26.559640.
Abstract
Plasticity from auditory experience shapes brain encoding and perception of sound. However, whether such long-term plasticity alters the trajectory of short-term plasticity during speech processing has yet to be investigated. Here, we explored the neural mechanisms and interplay between short- and long-term neuroplasticity for rapid auditory perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians. Participants learned to identify double-vowel mixtures during ∼45 min training sessions recorded simultaneously with high-density EEG. We analyzed frequency-following responses (FFRs) and event-related potentials (ERPs) to investigate neural correlates of learning at the subcortical and cortical levels, respectively. While both groups showed rapid perceptual learning, musicians made faster behavioral decisions than nonmusicians overall. Learning-related changes were not apparent in brainstem FFRs. However, plasticity was highly evident in the cortex, where ERPs revealed distinct hemispheric asymmetries between groups suggestive of different neural strategies (musicians: right-hemisphere bias; nonmusicians: left hemisphere). Source reconstruction and the early (150-200 ms) time course of these effects localized learning-induced cortical plasticity to auditory-sensory brain areas. Our findings confirm domain-general benefits of musicianship but reveal that successful speech-sound learning is driven by a critical interplay between long- and short-term mechanisms of auditory plasticity that first emerge at a cortical level.
4. Fallahnezhad T, Pourbakht A, Toufan R, Jalaei S. The Effect of Combined Auditory Training on Concurrent Sound Segregation in the Young-Old: A Single-Blinded Randomized Clinical Trial. Indian J Otolaryngol Head Neck Surg 2023:1-7. PMID: 37362117; PMCID: PMC10236386; DOI: 10.1007/s12070-023-03923-x.
Abstract
This study aimed to investigate the behavioral outcomes of perceptual learning in young-old adults using double-vowel discrimination tasks within combined auditory training programs. In a single-blind randomized clinical trial, 35 participants were randomly divided into three groups and received different software-based auditory training programs over six sessions. An analysis of variance was conducted to compare the double-vowel discrimination score, the consonant-vowel (CV) in noise test, and the reaction times to the first and second vowels (RT1 and RT2) pre- and post-intervention. The discrimination score in the double-vowel task and the CV in noise test improved after training, with no significant difference between the groups. After auditory training, the lowest RT1 was observed in the first intervention group, whereas RT2 decreased only in the second intervention group. The present study showed that combined auditory training programs are as effective as conventional auditory training programs in improving speech perception in the elderly. Modifications in the sensory cortex could be investigated using electrophysiological recordings, but this was not done because of the pandemic. Supplementary information is available in the online version at 10.1007/s12070-023-03923-x.
Affiliation(s)
- Tayyebe Fallahnezhad: Rehabilitation Research Center, Department of Audiology, School of Rehabilitation Sciences, University of Medical Sciences, Tehran, Iran
- Akram Pourbakht: Rehabilitation Research Center, Department of Audiology, School of Rehabilitation Sciences, University of Medical Sciences, Tehran, Iran
- Reyhane Toufan: Rehabilitation Research Center, Department of Audiology, School of Rehabilitation Sciences, University of Medical Sciences, Tehran, Iran
- Shohre Jalaei: Department of Physiotherapy, School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran
5. Zhang Z, Zhang H, Sommer W, Yang X, Wei Z, Li W. Musical training alters neural processing of tones and vowels in classic Chinese poems. Brain Cogn 2023; 166:105952. PMID: 36641937; DOI: 10.1016/j.bandc.2023.105952.
Abstract
Long-term rigorous musical training promotes various aspects of spoken language processing. However, it is unclear whether musical training provides an advantage in recognizing segmental and suprasegmental information in spoken language. We used vowel and tone violations in spoken, unfamiliar seven-character quatrains and a rhyming judgment task to investigate the effects of musical training on tone and vowel processing while recording event-related potentials (ERPs). Compared with non-musicians, musicians were more accurate and responded faster to incorrect than to correct tones. Musicians showed larger P2 components in their ERPs than non-musicians during both tone and vowel processing, revealing increased focused attention on sounds. Both groups showed enhanced N400 and LPC for incorrect vowels (vs. correct vowels), but non-musicians showed an additional P2 effect for vowel violations. Moreover, both groups showed enhanced LPC for incorrect tones (vs. correct tones), but only non-musicians showed an additional N400 effect for tone violations. These results indicate that vowel and tone processing is less effortful for musicians than for non-musicians. Our study suggests that long-term musical training facilitates speech tone and vowel processing in a tonal language environment by increasing the attentional focus on speech, reducing the demands of detecting incorrect vowels, and lowering integration costs for tone changes.
Affiliation(s)
- Zhenghua Zhang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China; Department of Psychology, Renmin University of China, Beijing 100872, China
- Hang Zhang: Department of Psychology, Renmin University of China, Beijing 100872, China
- Werner Sommer: Institut für Psychologie, Humboldt-Universität zu Berlin, Berlin 10117, Germany; Department of Psychology, Zhejiang Normal University, Jinhua 321004, China
- Xiaohong Yang: Department of Psychology, Renmin University of China, Beijing 100872, China
- Zhen Wei: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Weijun Li: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
6. Chen YP, Schmidt F, Keitel A, Rösch S, Hauswald A, Weisz N. Speech intelligibility changes the temporal evolution of neural speech tracking. Neuroimage 2023; 268:119894. PMID: 36693596; DOI: 10.1016/j.neuroimage.2023.119894.
Abstract
Listening to speech with poor signal quality is challenging. Neural tracking of degraded speech has been used to advance our understanding of how brain processes and speech intelligibility are interrelated. However, the temporal dynamics of neural speech tracking and their relation to speech intelligibility remain unclear. In the present MEG study, we exploited temporal response functions (TRFs), which have been used to describe the time course of speech tracking, on a gradient from intelligible to unintelligible degraded speech. In addition, we used interrelated facets of neural speech tracking (e.g., speech-envelope reconstruction, speech-brain coherence, and components of broadband coherence spectra) to corroborate our TRF findings. Our TRF analysis yielded markedly different temporal effects of vocoding: ∼50-110 ms (M50TRF), ∼175-230 ms (M200TRF), and ∼315-380 ms (M350TRF). Reduced intelligibility was accompanied by large increases in the early peak response M50TRF but strongly reduced responses in M200TRF. In the late response M350TRF, the maximum response occurred for degraded speech that was still comprehensible and then declined with reduced intelligibility. Furthermore, we related the TRF components to our other neural tracking measures and found that M50TRF and M200TRF play differential roles in the shifting center frequency of the broadband coherence spectra. Overall, our study highlights the importance of time-resolved computation of neural speech tracking and decomposition of coherence spectra, and provides a better understanding of degraded speech processing.
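A TRF of the kind used above is essentially a regularized linear filter mapping a stimulus feature (e.g., the speech envelope) to the neural response across a range of time lags; peaks in the estimated filter at particular latencies correspond to components such as M50TRF or M200TRF. A minimal ridge-regression sketch of the idea, with synthetic data (illustrative only, not the authors' pipeline; mTRF-style toolboxes implement the same estimator with boosting or cross-validated regularization):

```python
import numpy as np

def estimate_trf(stim, resp, sfreq, tmin=0.0, tmax=0.4, alpha=1.0):
    """Estimate a temporal response function by time-lagged ridge regression.

    stim: 1-D stimulus feature (e.g., speech envelope); resp: 1-D neural signal,
    both sampled at sfreq. Returns (lag times in s, TRF weights).
    """
    lags = np.arange(int(tmin * sfreq), int(tmax * sfreq))
    n = len(stim)
    # Lagged design matrix: X[t, k] = stim[t - lag_k]
    X = np.zeros((n, len(lags)))
    for k, lag in enumerate(lags):
        X[lag:, k] = stim[: n - lag]
    # Ridge solution: w = (X'X + alpha*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(len(lags)), X.T @ resp)
    return lags / sfreq, w

# Toy check: the response is the stimulus delayed by 100 ms,
# so the estimated TRF should peak at the 0.1 s lag.
sfreq = 100
rng = np.random.default_rng(1)
stim = rng.standard_normal(2000)
delay = int(0.1 * sfreq)
resp = np.roll(stim, delay)
resp[:delay] = 0.0
times, w = estimate_trf(stim, resp, sfreq)
print(times[np.argmax(w)])  # 0.1
```

Reconstruction-based measures such as envelope decoding invert the same model, regressing the stimulus on lagged neural responses instead.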
Affiliation(s)
- Ya-Ping Chen: Centre for Cognitive Neuroscience, University of Salzburg, 5020 Salzburg, Austria; Department of Psychology, University of Salzburg, 5020 Salzburg, Austria
- Fabian Schmidt: Centre for Cognitive Neuroscience, University of Salzburg, 5020 Salzburg, Austria; Department of Psychology, University of Salzburg, 5020 Salzburg, Austria
- Anne Keitel: Psychology, School of Social Sciences, University of Dundee, DD1 4HN Dundee, UK
- Sebastian Rösch: Department of Otorhinolaryngology, Paracelsus Medical University, 5020 Salzburg, Austria
- Anne Hauswald: Centre for Cognitive Neuroscience, University of Salzburg, 5020 Salzburg, Austria; Department of Psychology, University of Salzburg, 5020 Salzburg, Austria
- Nathan Weisz: Centre for Cognitive Neuroscience, University of Salzburg, 5020 Salzburg, Austria; Department of Psychology, University of Salzburg, 5020 Salzburg, Austria; Neuroscience Institute, Christian Doppler University Hospital, Paracelsus Medical University, 5020 Salzburg, Austria
7. Railo H, Varjonen A, Lehtonen M, Sikka P. Event-Related Potential Correlates of Learning to Produce Novel Foreign Phonemes. Neurobiol Lang 2022; 3:599-614. PMID: 37215343; PMCID: PMC10158638; DOI: 10.1162/nol_a_00080.
Abstract
Learning to pronounce a foreign phoneme requires an individual to acquire a motor program that enables the reproduction of the new acoustic target sound. This process is largely based on the use of auditory feedback to detect pronunciation errors to adjust vocalization. While early auditory evoked neural activity underlies automatic detection and adaptation to vocalization errors, little is known about the neural correlates of acquiring novel speech targets. To investigate the neural processes that mediate the learning of foreign phoneme pronunciation, we recorded event-related potentials when participants (N = 19) pronounced native or foreign phonemes. Behavioral results indicated that the participants' pronunciation of the foreign phoneme improved during the experiment. Early auditory responses (N1 and P2 waves, approximately 85-290 ms after the sound onset) revealed no differences between foreign and native phonemes. In contrast, the amplitude of the frontocentrally distributed late slow wave (LSW, 320-440 ms) was modulated by the pronunciation of the foreign phonemes, and the effect changed during the experiment, paralleling the improvement in pronunciation. These results suggest that the LSW may reflect higher-order monitoring processes that signal successful pronunciation and help learn novel phonemes.
Affiliation(s)
- Henry Railo: Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland; Turku Brain and Mind Centre, University of Turku, Turku, Finland
- Anni Varjonen: Turku Brain and Mind Centre, University of Turku, Turku, Finland
- Minna Lehtonen: Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland; Turku Brain and Mind Centre, University of Turku, Turku, Finland; Center for Multilingualism in Society across the Lifespan, Department of Linguistics and Scandinavian Studies, University of Oslo, Oslo, Norway
- Pilleriin Sikka: Department of Psychology and Speech-Language Pathology, University of Turku, Turku, Finland; Turku Brain and Mind Centre, University of Turku, Turku, Finland; Department of Cognitive Neuroscience and Philosophy, School of Bioscience, University of Skövde, Skövde, Sweden; Department of Psychology, Stanford University, Stanford, California, USA
8. Gohari N, Hosseini Dastgerdi Z, Bernstein LJ, Alain C. Neural correlates of concurrent sound perception: A review and guidelines for future research. Brain Cogn 2022; 163:105914. PMID: 36155348; DOI: 10.1016/j.bandc.2022.105914.
Abstract
The perception of concurrent sound sources depends on auditory scene analysis processes that fuse and segregate acoustic features according to harmonic relations, temporal coherence, and binaural cues (encompassing dichotic pitch, location differences, and simulated echoes). The object-related negativity (ORN) and P400 are electrophysiological indices of concurrent sound perception. Here, we review the different paradigms used to study concurrent sound perception and the brain responses they elicit. Recommendations regarding the design and recording parameters for the ORN and P400 are made, and their clinical applications in assessing central auditory processing ability in different populations are discussed.
Affiliation(s)
- Nasrin Gohari: Department of Audiology, School of Rehabilitation, Hamadan University of Medical Sciences, Hamadan, Iran
- Zahra Hosseini Dastgerdi: Department of Audiology, School of Rehabilitation, Isfahan University of Medical Sciences, Isfahan, Iran
- Lori J Bernstein: Department of Supportive Care, University Health Network, and Department of Psychiatry, University of Toronto, Toronto, Canada
- Claude Alain: Rotman Research Institute, Baycrest Centre for Geriatric Care, and Department of Psychology, University of Toronto, Canada
9. Thomas T, Martin C, Caffarra S. An ERP investigation of accented isolated single word processing. Neuropsychologia 2022; 175:108349. PMID: 35987342; DOI: 10.1016/j.neuropsychologia.2022.108349.
Abstract
Previous studies show that native and non-native speech are processed differently (Lev-Ari, 2018). However, less is known about the differences between processing native and dialectal accents. Is dialectal processing more similar to foreign or to native speech? Two theories have been proposed to address this. The Perceptual Distance Hypothesis states that the mechanisms underlying dialectal accent processing are attenuated versions of those underlying foreign accent processing (Clarke & Garrett, 2004). Conversely, the Different Processes Hypothesis argues that the mechanisms of foreign and dialectal accent processing are qualitatively different (Floccia et al., 2009). The present study tests these hypotheses. Electroencephalographic data were recorded from 25 participants who listened to 40 isolated words in different accents. Event-related potential mean amplitudes were extracted for the P2 [150-250 ms], PMN [250-400 ms], and N400 [400-600 ms]. Support for the Different Processes Hypothesis was found in several time windows. Early processing mechanisms distinguished only between native and non-native speech, with a reduced P2 amplitude for foreign accent processing. Later processing mechanisms showed a similar binary distinction, with a larger PMN negativity elicited by the foreign accent than by the others, further supporting the Different Processes Hypothesis. The results contribute to our understanding of single-word processing, in which extracting acoustic characteristics from foreign-accented speech is uniquely difficult, and in which foreign-accented speech carries the largest cost of phonological matching between stored representations and acoustic input, compared with native and dialectal speech.
Affiliation(s)
- Trisha Thomas: Basque Center on Cognition, Brain and Language, San Sebastian, Spain; University of the Basque Country (UPV/EHU), Bilbao, Spain
- Clara Martin: Basque Center on Cognition, Brain and Language, San Sebastian, Spain; Basque Foundation for Science (Ikerbasque), Spain
- Sendy Caffarra: Basque Center on Cognition, Brain and Language, San Sebastian, Spain; University School of Medicine, 291 Campus Drive, Li Ka Shing Building, Stanford, CA 94305-5101, USA; Stanford University Graduate School of Education, 485 Lasuen Mall, Stanford, CA 94305, USA; University of Modena and Reggio Emilia, Via Campi 287, 41125 Modena, Italy
10. Gosselin L, Martin CD, González Martín A, Caffarra S. When a Nonnative Accent Lets You Spot All the Errors: Examining the Syntactic Interlanguage Benefit. J Cogn Neurosci 2022; 34:1650-1669. PMID: 35802598; DOI: 10.1162/jocn_a_01886.
Abstract
In our continuously globalizing world, cross-cultural and cross-linguistic communications are far from exceptional. A wealth of research has indicated that the processing of nonnative-accented speech can be challenging for native listeners, both at the level of phonology (e.g., Munro & Derwing, 1995) and syntax (Caffarra & Martin, 2019). However, few online studies have examined the underpinnings of accented speech recognition from the perspective of the "nonnative listener," even though behavioral studies indicate that accented input may be easier to process for such individuals (i.e., the interlanguage speech intelligibility benefit; Bent & Bradlow, 2003). The current EEG study first examined the phonological and syntactic analysis of nonnative-accented speech among nonnative listeners. To that end, 30 English learners of Spanish listened to syntactically correct and incorrect Spanish sentences produced in native and nonnative-accented Spanish. The violation in the incorrect sentences was caused by errors that are typical (i.e., gender errors; *la color) or atypical for English learners of Spanish (i.e., number errors; *los color). Results indicated that nonnative listeners exhibited a phonological mismatch negativity (PMN) when attending to speech produced by a native Spanish speaker. Furthermore, the nonnative listeners showed a P600 for all grammatical violations, indicating that they repair all errors regardless of their typicality or the accent in which they are produced. Follow-up analyses compared our novel data to the data of native listeners from the methodologically identical precursor study (Caffarra & Martin, 2019). These analyses showed that native and nonnative listeners exhibit directionally opposite PMN effects; whereas natives exhibited a larger PMN for English-accented Spanish, nonnatives displayed a larger PMN in response to native Spanish utterances (a classic interlanguage speech intelligibility benefit). An additional difference was observed at the syntactic level: whereas natives repaired only atypical number errors when they were English-accented, nonnative participants exhibited a P600 in response to all English-accented syntactic errors, regardless of their typicality (a syntactic interlanguage speech intelligibility benefit). Altogether, these results suggest that accented speech is not inherently difficult to process; in fact, nonnatives may benefit from the presence of a nonnative accent. Thus, our data provide some of the first electrophysiological evidence supporting the existence of the classic interlanguage speech intelligibility benefit and its novel syntactic counterpart.
Affiliation(s)
- Clara D Martin: Basque Center on Cognition, Brain and Language, Donostia, Spain; Ikerbasque, The Basque Foundation for Science, Bilbao, Spain
- Sendy Caffarra: Basque Center on Cognition, Brain and Language, Donostia, Spain; Stanford University School of Medicine, Palo Alto, CA; University of Modena and Reggio Emilia
11. Bieber RE, Brodbeck C, Anderson S. Examining the context benefit in older adults: A combined behavioral-electrophysiologic word identification study. Neuropsychologia 2022; 170:108224. PMID: 35346650; DOI: 10.1016/j.neuropsychologia.2022.108224.
Abstract
When listening to degraded speech, listeners can use high-level semantic information to support recognition. The literature contains conflicting findings regarding older listeners' ability to benefit from semantic cues in recognizing speech relative to younger listeners. Electrophysiologic (EEG) measures of lexical access (N400) often show that semantic context does not facilitate lexical access in older listeners; in contrast, auditory behavioral studies indicate that semantic context improves speech recognition in older listeners as much as, or more than, in younger listeners. Many behavioral studies of aging and the context benefit have employed signal degradation or alteration, whereas this stimulus manipulation has been absent from the EEG literature, a possible reason for the inconsistencies between studies. Here we compared the context benefit as a function of age and signal type, using EEG combined with behavioral measures. Non-native accent, a common form of signal alteration that many older adults report as a challenge in daily speech recognition, was used for testing. The stimuli comprised English sentences produced by native speakers of English and Spanish, containing target words differing in cloze probability. Listeners performed a word identification task while 32-channel cortical responses were recorded. Results show that older adults' word identification performance was poorer than younger adults' in the low-predictability and non-native talker conditions, replicating earlier behavioral findings. However, older adults did not show reductions or delays in the average N400 response compared to younger listeners, suggesting no age-related reduction in predictive processing capability. Potential sources of the discrepancies in the prior literature are discussed.
Affiliation(s)
- Rebecca E Bieber: Department of Hearing and Speech Sciences, 0100 Lefrak Hall, University of Maryland College Park, College Park, MD 20740, USA
- Christian Brodbeck: Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
- Samira Anderson: Department of Hearing and Speech Sciences, 0100 Lefrak Hall, University of Maryland College Park, College Park, MD 20740, USA
12. Heald SLM, Van Hedger SC, Veillette J, Reis K, Snyder JS, Nusbaum HC. Going Beyond Rote Auditory Learning: Neural Patterns of Generalized Auditory Learning. J Cogn Neurosci 2022; 34:425-444. PMID: 34942645; PMCID: PMC8832160; DOI: 10.1162/jocn_a_01805.
Abstract
The ability to generalize across specific experiences is vital for recognizing new patterns, especially in speech perception, given the variability of acoustic-phonetic patterns. Indeed, behavioral research has demonstrated that, via a process of generalized learning, listeners can leverage their experience of past words said by a difficult-to-understand talker to improve their understanding of new words said by that talker. Here, we examine differences in neural responses to generalized versus rote learning in auditory cortical processing by training listeners to understand a novel synthetic talker. Using a pretest-posttest design with EEG, participants were trained with either (1) a large inventory of words in which no word was repeated across the experiment (generalized learning) or (2) a small inventory of words in which words were repeated (rote learning). Analysis of long-latency auditory evoked potentials at pretest and posttest revealed that rote and generalized learning both produced rapid changes in auditory processing, yet the nature of these changes differed. Generalized learning was marked by an amplitude reduction in the N1-P2 complex and by the presence of a late negative wave in the auditory evoked potential following training; rote learning was marked only by temporally later scalp topography differences. The early N1-P2 change, found only for generalized learning, is consistent with an active processing account of speech perception, which proposes that the ability to rapidly adjust to the specific vocal characteristics of a new talker (for which rote learning is rare) relies on attentional mechanisms that selectively modify early auditory processing sensitivity.
|
13
|
Kleeva DF, Rebreikina AB, Soghoyan GA, Kostanian DG, Neklyudova AN, Sysoeva OV. Generalization of sustained neurophysiological effects of short-term auditory 13-Hz stimulation to neighboring frequency representation in humans. Eur J Neurosci 2021; 55:175-188. [PMID: 34736295 PMCID: PMC9299826 DOI: 10.1111/ejn.15513] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2021] [Revised: 10/23/2021] [Accepted: 10/29/2021] [Indexed: 11/30/2022]
Abstract
A fuller understanding of the effects of auditory tetanization in humans would inform better language and sensory learning paradigms; however, unanswered questions remain. Here, we probe sustained changes in the event-related potentials (ERPs) to 1020 Hz and 980 Hz tones following rapid presentation of a 1020 Hz tone (every 75 ms, i.e., at 13.3 Hz; tetanization). Consistent with previous studies (Rygvold et al., 2021; Mears & Spencer, 2012), we found an increase in the P2 ERP component after tetanization. Contrary to other studies (Clapp et al., 2005; Lei et al., 2017), we did not observe the expected N1 increase after tetanization, even with an experimental sequence identical to that of Clapp et al. (2005); instead, we detected a significant N1 decrease after tetanization. Expanding on previous research, we showed that the P2 increase and N1 decrease are not specific to the stimulus type (tetanized 1020 Hz versus non-tetanized 980 Hz), suggesting that the tetanization effect generalizes to non-stimulated auditory tones, at least those of a neighboring frequency. The ERP tetanization effects were observed for at least 30 min, the longest interval examined, consistent with the duration of long-term potentiation (LTP). In addition, the tetanization effects were detectable in blocks in which the participants watched muted videos, an experimental setting that can easily be used with children and other challenging groups. Thus, auditory 13-Hz stimulation affects brain processing of tones, including those of neighboring frequencies.
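The tetanization protocol above is fully specified by its timing: one 1020 Hz tone every 75 ms corresponds to a presentation rate of 1/0.075 ≈ 13.3 Hz. A minimal sketch of generating such a tone train follows; the sample rate, tone duration, and train length are illustrative assumptions, not values taken from the study.

```python
import numpy as np

FS = 44_100      # audio sample rate in Hz (an assumption, not from the abstract)
TONE_HZ = 1020   # tetanized tone frequency, from the abstract
SOA = 0.075      # stimulus onset asynchrony: one tone every 75 ms -> 13.3 Hz
TONE_DUR = 0.05  # tone duration in seconds (hypothetical)
N_TONES = 20     # number of tones in this toy train (hypothetical)

def tone(freq, dur, fs=FS):
    """A plain sine tone; real studies typically add onset/offset ramps."""
    t = np.arange(int(dur * fs)) / fs
    return np.sin(2 * np.pi * freq * t)

# Place each tone burst at successive multiples of the SOA in one buffer.
train = np.zeros(int(N_TONES * SOA * FS))
burst = tone(TONE_HZ, TONE_DUR)
for k in range(N_TONES):
    start = int(k * SOA * FS)
    train[start:start + burst.size] += burst

print(round(1 / SOA, 1))  # presentation rate: 13.3 Hz
```

The only fact carried over from the abstract is the 75 ms SOA and the 1020 Hz carrier; everything else is placeholder scaffolding.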
Affiliation(s)
- D F Kleeva
- Center for Cognitive Research, Sirius University of Science and Technology, Sochi, Russia; Center for Bioelectric Interfaces, National Research University "Higher School of Economics", Moscow, Russia
| | - A B Rebreikina
- Center for Cognitive Research, Sirius University of Science and Technology, Sochi, Russia; Laboratory of Human Higher Nervous Activity, Institute of Higher Nervous Activity and Neurophysiology of RAS, Moscow, Russia
| | - G A Soghoyan
- Center for Cognitive Research, Sirius University of Science and Technology, Sochi, Russia; Center for Bioelectric Interfaces, National Research University "Higher School of Economics", Moscow, Russia; V. Zelman Center for Neurobiology and Brain Restoration, Skolkovo Institute of Science and Technology, 121205 Moscow, Russia
| | - D G Kostanian
- Center for Cognitive Research, Sirius University of Science and Technology, Sochi, Russia; Laboratory of Human Higher Nervous Activity, Institute of Higher Nervous Activity and Neurophysiology of RAS, Moscow, Russia
| | - A N Neklyudova
- Center for Cognitive Research, Sirius University of Science and Technology, Sochi, Russia; Laboratory of Human Higher Nervous Activity, Institute of Higher Nervous Activity and Neurophysiology of RAS, Moscow, Russia
| | - O V Sysoeva
- Center for Cognitive Research, Sirius University of Science and Technology, Sochi, Russia; Laboratory of Human Higher Nervous Activity, Institute of Higher Nervous Activity and Neurophysiology of RAS, Moscow, Russia
|
14
|
Neural correlates of implicit agency during the transition from adolescence to adulthood: An ERP study. Neuropsychologia 2021; 158:107908. [PMID: 34062152 DOI: 10.1016/j.neuropsychologia.2021.107908] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2020] [Revised: 05/03/2021] [Accepted: 05/26/2021] [Indexed: 11/20/2022]
Abstract
Sense of agency (SoA), the experience of being in control of our voluntary actions and their outcomes, is a key feature of normal human experience. Frontoparietal brain circuits associated with SoA undergo a major maturational process during adolescence. To examine whether this translates into neurodevelopmental changes in agency experience, we investigated two key neural processes associated with SoA: the activity leading to voluntary action (readiness potential) and the activity associated with action-outcome processing (attenuation of the auditory N1 and P2 event-related potentials, ERPs) in mid-adolescents (13-14 years), late-adolescents (18-20 years), and adults (25-28 years) while they performed an intentional binding task. In this task, participants pressed a button (action) that delivered a tone (outcome) after a small delay and reported the time of the tone using the Libet clock. This action-outcome condition alternated with a no-action condition in which an identical tone was triggered by a computer. Mid-adolescents showed greater outcome binding, perceiving self-triggered tones as temporally closer to their actions than adults did, suggesting a greater agency experience over the outcomes of voluntary actions during mid-adolescence. Consistent with this, greater attenuation of the neural response to self-triggered auditory tones (specifically P2 attenuation) was found during mid-adolescence compared with older age groups. This enhanced attenuation decreased with age, as observed for outcome binding. However, there were no age-related differences in the readiness potential leading to the voluntary action (button press) or in the N1 attenuation to self-triggered tones. Notably, in mid-adolescents, greater outcome binding scores were positively associated with greater P2 attenuation and a smaller negativity in the late readiness potential.
These findings suggest that the greater implicit agency experience observed during mid-adolescence may be mediated by neural over-suppression of action outcomes (auditory P2 attenuation) and over-reliance on motor preparation (late readiness potential), both of which we found to become adult-like during late adolescence. Implications for adolescent development and SoA-related neurodevelopmental disorders are discussed.
|
15
|
Matsushita R, Puschmann S, Baillet S, Zatorre RJ. Inhibitory effect of tDCS on auditory evoked response: Simultaneous MEG-tDCS reveals causal role of right auditory cortex in pitch learning. Neuroimage 2021; 233:117915. [PMID: 33652144 DOI: 10.1016/j.neuroimage.2021.117915] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Revised: 02/02/2021] [Accepted: 02/23/2021] [Indexed: 12/29/2022] Open
Abstract
A body of literature has demonstrated that the right auditory cortex (AC) plays a dominant role in fine pitch processing. However, our understanding is relatively limited as to whether this asymmetry extends to perceptual learning of pitch, and causal evidence regarding the role of the right AC in pitch learning is lacking. We addressed these points with anodal transcranial direct current stimulation (tDCS), adapting a previous behavioral study in which anodal tDCS over the right AC was shown to block improvement on a microtonal pitch-pattern learning task over three days. To address the physiological changes associated with tDCS, we recorded MEG data simultaneously with tDCS on the first day and measured behavioral thresholds on the following two consecutive days. We tested three groups of participants who received anodal tDCS over their right or left AC, or sham tDCS, and measured the N1m auditory evoked response before, during, and after tDCS. Our data show that anodal tDCS of the right AC disrupted pitch discrimination learning up to two days after its application, whereas learning was unaffected by left-AC or sham tDCS. Although tDCS reduced the N1m amplitude ipsilateral to the stimulated hemisphere for both left and right stimulation, only right-AC N1m amplitude reductions were associated with the degree to which pitch learning was disrupted. This brain-behavior relationship confirms a causal link between right-AC physiological responses and fine pitch processing, and provides neurophysiological insight concerning the mechanisms of action of tDCS on the auditory system.
Affiliation(s)
- Reiko Matsushita
- Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada; Centre for Research on Brain, Language and Music, Montreal, QC H3G 2A8, Canada; International Laboratory for Brain, Music and Sound Research, Montreal, QC H2V 2S9, Canada.
| | - Sebastian Puschmann
- Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada; Centre for Research on Brain, Language and Music, Montreal, QC H3G 2A8, Canada; Institute of Psychology, Carl von Ossietzky University, Oldenburg 26111, Germany
| | - Sylvain Baillet
- Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada; Centre for Research on Brain, Language and Music, Montreal, QC H3G 2A8, Canada
| | - Robert J Zatorre
- Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada; Centre for Research on Brain, Language and Music, Montreal, QC H3G 2A8, Canada; International Laboratory for Brain, Music and Sound Research, Montreal, QC H2V 2S9, Canada.
|
16
|
Hajizadeh A, Matysiak A, Brechmann A, König R, May PJC. Why do humans have unique auditory event-related fields? Evidence from computational modeling and MEG experiments. Psychophysiology 2021; 58:e13769. [PMID: 33475173 DOI: 10.1111/psyp.13769] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2020] [Revised: 12/04/2020] [Accepted: 12/20/2020] [Indexed: 11/28/2022]
Abstract
Auditory event-related fields (ERFs) measured with magnetoencephalography (MEG) are useful for studying the neuronal underpinnings of auditory cognition in human cortex. They have a highly subject-specific morphology, although certain characteristic deflections (e.g., P1m, N1m, and P2m) can be identified in most subjects. Here, we explore the reason for this subject-specificity through a combination of MEG measurements and computational modeling of auditory cortex. We test whether ERF subject-specificity can predominantly be explained in terms of each subject having an individual cortical gross anatomy, which modulates the MEG signal, or whether individual cortical dynamics are also at play. To our knowledge, this is the first time that tools to address this question are being presented. The effects of anatomical and dynamical variation on the MEG signal are simulated in a model describing the core-belt-parabelt structure of the auditory cortex, with the dynamics based on the leaky-integrator neuron model. The experimental and simulated ERFs are characterized in terms of the N1m amplitude, latency, and width. We also examine the waveform grand-averaged across subjects, and the standard deviation of this grand average. The results show that the intersubject variability of the ERF arises from both the anatomy and the dynamics of auditory cortex being specific to each subject. Moreover, our results suggest that the latency variation of the N1m is largely related to subject-specific dynamics. The findings are discussed in terms of how learning, plasticity, and sound detection are reflected in the auditory ERFs. The notion of the grand-averaged ERF is critically evaluated.
Affiliation(s)
- Aida Hajizadeh
- Leibniz Institute for Neurobiology, Research Group Comparative Neuroscience, Magdeburg, Germany
| | - Artur Matysiak
- Leibniz Institute for Neurobiology, Research Group Comparative Neuroscience, Magdeburg, Germany
| | - André Brechmann
- Leibniz Institute for Neurobiology, Combinatorial NeuroImaging Core Facility, Magdeburg, Germany
| | - Reinhard König
- Leibniz Institute for Neurobiology, Research Group Comparative Neuroscience, Magdeburg, Germany
| | - Patrick J C May
- Leibniz Institute for Neurobiology, Research Group Comparative Neuroscience, Magdeburg, Germany; Department of Psychology, Lancaster University, Lancaster, UK
|
17
|
Zioga I, Harrison PMC, Pearce MT, Bhattacharya J, Luft CDB. Auditory but Not Audiovisual Cues Lead to Higher Neural Sensitivity to the Statistical Regularities of an Unfamiliar Musical Style. J Cogn Neurosci 2020; 32:2241-2259. [PMID: 32762519 DOI: 10.1162/jocn_a_01614] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
It is still a matter of debate whether visual aids improve the learning of music. In a multisession study, we investigated the neural signatures of novel music sequence learning with or without visual aids (auditory-only: AO; audiovisual: AV). During three training sessions on three separate days, participants (nonmusicians) reproduced (note by note on a keyboard) melodic sequences generated by an artificial musical grammar. The AV group (n = 20) had each note color-coded on screen, whereas the AO group (n = 20) had no color indication. We evaluated learning of the statistical regularities of the novel music grammar before and after training by presenting melodies ending on correct or incorrect notes and asking participants to judge the correctness and surprisal of the final note while EEG was recorded. We found that participants successfully learned the new grammar. Although the AV group, as compared to the AO group, reproduced longer sequences during training, there was no significant difference in learning between groups. At the neural level, after training, the AO group showed a larger N100 response to low-probability compared with high-probability notes, suggesting an increased neural sensitivity to statistical properties of the grammar; this effect was not observed in the AV group. Our findings indicate that visual aids might improve sequence reproduction while not necessarily promoting better learning, indicating a potential dissociation between sequence reproduction and learning. We suggest that the difficulty induced by auditory-only input during music training might enhance cognitive engagement, thereby improving neural sensitivity to the underlying statistical properties of the learned material.
|
18
|
Musical training improves rhythm integrative processing of classical Chinese poem. ACTA PSYCHOLOGICA SINICA 2020. [DOI: 10.3724/sp.j.1041.2020.00847] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
19
|
Sysoeva OV, Molholm S, Djukic A, Frey HP, Foxe JJ. Atypical processing of tones and phonemes in Rett Syndrome as biomarkers of disease progression. Transl Psychiatry 2020; 10:188. [PMID: 32522978 PMCID: PMC7287060 DOI: 10.1038/s41398-020-00877-4] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/14/2019] [Revised: 05/19/2020] [Accepted: 05/26/2020] [Indexed: 12/27/2022] Open
Abstract
Due to severe motor impairments and the lack of expressive language abilities seen in most patients with Rett Syndrome (RTT), it has proven extremely difficult to obtain accurate measures of auditory processing capabilities in this population. Here, we examined early auditory cortical processing of pure tones and more complex phonemes in females with Rett Syndrome (RTT), by recording high-density auditory evoked potentials (AEP), which allow for objective evaluation of the timing and severity of processing deficits along the auditory processing hierarchy. We compared AEPs of 12 females with RTT to those of 21 typically developing (TD) peers aged 4-21 years, interrogating the first four major components of the AEP (P1: 60-90 ms; N1: 100-130 ms; P2: 135-165 ms; and N2: 245-275 ms). Atypicalities were evident in RTT at the initial stage of processing. Whereas the P1 showed increased amplitude to phonemic inputs relative to tones in TD participants, this modulation by stimulus complexity was absent in RTT. Interestingly, the subsequent N1 did not differ between groups, whereas the following P2 was markedly diminished in RTT, regardless of stimulus complexity. The N2 was similarly smaller in RTT and did not differ as a function of stimulus type. The P2 effect was remarkably robust in differentiating between the groups, with near-perfect separation despite the wide age range of our samples. Given this robustness, along with the observation that P2 amplitude was significantly associated with RTT symptom severity, the P2 has the potential to serve as a monitoring, treatment response, or even surrogate endpoint biomarker. Compellingly, the reduction of P2 in patients with RTT mimics findings in animal models of RTT, providing a translational bridge between pre-clinical and human research.
Affiliation(s)
- Olga V. Sysoeva
- The Cognitive Neurophysiology Laboratory, Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY, USA; The Laboratory of Human Higher Nervous Activity, Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Sciences, Moscow, Russia
| | - Sophie Molholm
- The Cognitive Neurophysiology Laboratory, Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY, USA
| | - Aleksandra Djukic
- The Rett Syndrome Center, Department of Neurology, Montefiore Medical Center & Albert Einstein College of Medicine, Bronx, NY, USA
| | - Hans-Peter Frey
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY, USA
| | - John J. Foxe
- The Cognitive Neurophysiology Laboratory, Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY, USA
|
20
|
Le Dantec CC, Seitz AR. Dissociating electrophysiological correlates of contextual and perceptual learning in a visual search task. J Vis 2020; 20:7. [PMID: 32525986 PMCID: PMC7416887 DOI: 10.1167/jov.20.6.7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Perceptual learning and contextual learning are two types of implicit visual learning that can co-occur in the same tasks. For example, to find an animal in the woods, you need to know where to look in the environment (contextual learning) and you must be able to discriminate its features (perceptual learning). However, contextual and perceptual learning are typically studied using distinct experimental paradigms, and little is known regarding their comparative neural mechanisms. In this study, we investigated contextual and perceptual learning in 12 healthy adult humans as they performed the same visual search task, and we examined psychophysical and electrophysiological (event-related potentials) measures of learning. Participants were trained to look for a visual stimulus, a small line with a specific orientation, presented among distractors. We found better performance for the trained target orientation as compared to an untrained control orientation, reflecting specificity of perceptual learning for the orientation of trained elements. This orientation specificity effect was associated with changes in the C1 component. We also found better performance for repeated spatial configurations as compared to novel ones, reflecting contextual learning. This context-specific effect was associated with the N2pc component. Taken together, these results suggest that contextual and perceptual learning are distinct visual learning phenomena that have different behavioral and electrophysiological characteristics.
|
21
|
Scanlon JE, Redman EX, Kuziek JW, Mathewson KE. A ride in the park: Cycling in different outdoor environments modulates the auditory evoked potentials. Int J Psychophysiol 2020; 151:59-69. [DOI: 10.1016/j.ijpsycho.2020.02.016] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2019] [Revised: 01/29/2020] [Accepted: 02/26/2020] [Indexed: 10/24/2022]
|
22
|
Shen D, Fang K, Fan Y, Shen J, Yang J, Cui J, Tang Y, Fang G. Sex differences in vocalization are reflected by event-related potential components in the music frog. Anim Cogn 2020; 23:477-490. [PMID: 32016618 DOI: 10.1007/s10071-020-01350-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2018] [Revised: 01/02/2020] [Accepted: 01/17/2020] [Indexed: 11/28/2022]
Abstract
Sex differences in vocalization have been commonly found in vocal animals. It remains unclear, however, how animals perceive and discriminate these differences. The amplitudes and latencies of event-related potential (ERP) components can reflect auditory processing efficiency and its time course. We investigated the neural mechanisms of auditory processing in the Emei music frog (Nidirana daunchina) using an oddball paradigm with ERPs. We recorded and analyzed electroencephalogram (EEG) signals from the forebrain and midbrain while the subjects listened to white noise (WN) and conspecific sex-specific vocalizations. We found that (1) both amplitudes and latencies of some ERP components evoked by conspecific calls were significantly greater than those evoked by WN, suggesting that music frogs can discriminate conspecific vocalizations from background noise; (2) both amplitudes and latencies of most ERP components evoked by female calls were significantly greater or longer than those evoked by male calls, implying that ERP components can reflect sex differences in vocalization; and (3) there were significant differences in ERP amplitudes between male and female subjects, suggesting sexual dimorphism in auditory perception. Together, the present results indicate that the music frog can discriminate conspecific calls from noise and males' calls from females', and that sexual dimorphism of auditory perception exists in this species.
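Component measures like those reported here (amplitudes and latencies of ERP deflections) are commonly extracted by locating an extremum within a predefined post-stimulus window. A generic, hypothetical sketch follows; the window bounds, sampling step, and toy waveform are illustrative, not values from this study.

```python
import numpy as np

def peak_in_window(erp, times, t_min, t_max, polarity=+1):
    """Return (amplitude, latency) of the largest polarity-signed
    deflection of `erp` between t_min and t_max (same units as `times`)."""
    mask = (times >= t_min) & (times <= t_max)
    idx = np.flatnonzero(mask)
    seg = polarity * erp[idx]        # flip sign so argmax finds the peak
    best = idx[np.argmax(seg)]
    return erp[best], times[best]

# Toy ERP: a negative deflection peaking at 100 ms on a 1 kHz time base.
times = np.arange(-0.1, 0.4, 0.001)
erp = -3e-6 * np.exp(-((times - 0.100) / 0.02) ** 2)

# Search a hypothetical 80-120 ms window for a negative-going peak.
amp, lat = peak_in_window(erp, times, 0.08, 0.12, polarity=-1)
```

The same function with `polarity=+1` would measure positive components; real pipelines additionally baseline-correct and average across trials before measurement.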
Affiliation(s)
- Di Shen
- Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin Nan Road, Chengdu, 610041, Sichuan, People's Republic of China; University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing, People's Republic of China
| | - Ke Fang
- Institute of Bio-Inspired Structure and Surface Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, People's Republic of China
| | - Yanzhu Fan
- Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin Nan Road, Chengdu, 610041, Sichuan, People's Republic of China; University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing, People's Republic of China
| | - Jiangyan Shen
- Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin Nan Road, Chengdu, 610041, Sichuan, People's Republic of China; University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing, People's Republic of China
| | - Jing Yang
- Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin Nan Road, Chengdu, 610041, Sichuan, People's Republic of China; University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing, People's Republic of China
| | - Jianguo Cui
- Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin Nan Road, Chengdu, 610041, Sichuan, People's Republic of China
| | - Yezhong Tang
- Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin Nan Road, Chengdu, 610041, Sichuan, People's Republic of China
| | - Guangzhan Fang
- Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin Nan Road, Chengdu, 610041, Sichuan, People's Republic of China.
|
23
|
Wisniewski MG, Ball NJ, Zakrzewski AC, Iyer N, Thompson ER, Spencer N. Auditory detection learning is accompanied by plasticity in the auditory evoked potential. Neurosci Lett 2020; 721:134781. [PMID: 32004657 DOI: 10.1016/j.neulet.2020.134781] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2019] [Revised: 12/23/2019] [Accepted: 01/20/2020] [Indexed: 12/01/2022]
Abstract
Auditory detection can improve with practice. These improvements are often assumed to arise from selective attention processes, but longer-term plasticity as a result of training may also play a role. Here, listeners were trained to detect either an 861-Hz or a 1058-Hz tone (counterbalanced across participants) presented in noise at SNRs varying from -10 to -24 dB. On the following day, they were tasked with detecting 861-Hz and 1058-Hz tones at an SNR of -21 dB. In between blocks of this active task, EEG was recorded during passive presentation of trained- and untrained-frequency tones in quiet. Detection accuracy and confidence ratings were higher for trials at listeners' trained than untrained frequency (i.e., learning occurred). During passive exposure to sounds, the P2 component of the auditory evoked potential (∼150-200 ms post tone onset) was larger in amplitude for the trained than for the untrained frequency. An analysis of global field power similarly yielded a stronger response for trained tones in the P2 time window. Because these effects were obtained during passive exposure, training-induced improvements in detection are not solely related to changes in selective attention; rather, there may be an important role for changes in the long-term neural representations of sounds.
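The global field power (GFP) analysis mentioned above has a standard definition (Lehmann and Skrandies): at each time point, the spatial standard deviation of the voltage across all channels. A minimal sketch, assuming a `(channels, samples)` array of average-referenced EEG; the toy data are illustrative only.

```python
import numpy as np

def global_field_power(eeg):
    """GFP(t): spatial standard deviation across channels at each sample.
    `eeg` is an (n_channels, n_samples) array of voltages."""
    return eeg.std(axis=0)

# Toy data: one fixed scalp topography whose source strength waxes
# and wanes over the epoch (a rank-1 "recording").
rng = np.random.default_rng(0)
topo = rng.standard_normal(32)    # one map over 32 channels
envelope = np.hanning(200)        # source strength over 200 samples
eeg = np.outer(topo, envelope)

gfp = global_field_power(eeg)     # peaks where the source is strongest
```

For rank-1 data like this, GFP is simply the envelope scaled by the spatial spread of the map; on real multi-source EEG it summarizes momentary response strength independently of which electrodes carry it.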
Affiliation(s)
- Nandini Iyer
- U.S. Air Force Research Laboratory, United States
|
24
|
Sharma VV, Thaut M, Russo F, Alain C. Absolute Pitch and Musical Expertise Modulate Neuro-Electric and Behavioral Responses in an Auditory Stroop Paradigm. Front Neurosci 2019; 13:932. [PMID: 31551690 PMCID: PMC6743413 DOI: 10.3389/fnins.2019.00932] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2019] [Accepted: 08/20/2019] [Indexed: 11/29/2022] Open
Abstract
Musicians have considerable experience naming pitch classes with verbal (e.g., Doh, Ré, and Mi) and semiotic tags (e.g., musical notation). At one end of the spectrum, musicians can identify the pitch of a piano tone or the quality of a chord without a reference tone [i.e., absolute pitch (AP) or relative pitch (RP)], which suggests strong associations between the perceived pitch information and verbal labels. Here, we examined the strength of this association using auditory versions of the Stroop task while neuro-electric brain activity was measured using high-density electroencephalography. In separate blocks of trials, participants were presented with congruent or incongruent auditory words from English-language (standard auditory Stroop), Romance solmization, or German key lexicons (the latter two versions require some knowledge of music notation). We hypothesized that musically trained groups would show greater Stroop interference effects than non-musicians when presented with incongruent musical notations. Analyses of behavioral data revealed small or even non-existent congruency effects in musicians for the solfège and key-code versions of the Stroop task. This finding was unexpected and appears inconsistent with the hypothesis that musical training and AP are associated with strong response-level associations between a perceived pitch and a verbal label. The analyses of event-related potentials revealed three temporally distinct modulations associated with conflict processing. All three modulations were larger in the auditory word Stroop than in the other two versions of the Stroop task. Only AP musicians showed significant congruity effects around 450 and 750 ms post-stimulus when stimuli were presented as Germanic key codes (i.e., C or G). This finding suggests that AP possessors may process alpha-numeric encodings as word forms with a semantic value, unlike their RP-possessing counterparts and non-musically trained individuals. However, the strength of musical conditional associations may not exceed that of standard language in speech.
Affiliation(s)
- Vivek V. Sharma
- Music and Health Research Collaboratory, University of Toronto, Toronto, ON, Canada
- Rotman Research Institute, Baycrest Centre, Toronto, ON, Canada
| | - Michael Thaut
- Music and Health Research Collaboratory, University of Toronto, Toronto, ON, Canada
| | - Frank Russo
- Music and Health Research Collaboratory, University of Toronto, Toronto, ON, Canada
- Department of Psychology, Ryerson University, Toronto, ON, Canada
| | - Claude Alain
- Music and Health Research Collaboratory, University of Toronto, Toronto, ON, Canada
- Rotman Research Institute, Baycrest Centre, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
|
25
|
Scanlon JE, Townsend KA, Cormier DL, Kuziek JW, Mathewson KE. Taking off the training wheels: Measuring auditory P3 during outdoor cycling using an active wet EEG system. Brain Res 2019; 1716:50-61. [DOI: 10.1016/j.brainres.2017.12.010] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2017] [Revised: 12/06/2017] [Accepted: 12/11/2017] [Indexed: 10/18/2022]
|
26
|
Burgess JD, Major BP, McNeel C, Clark GM, Lum JAG, Enticott PG. Learning to Expect: Predicting Sounds During Movement Is Related to Sensorimotor Association During Listening. Front Hum Neurosci 2019; 13:215. [PMID: 31333431 PMCID: PMC6624421 DOI: 10.3389/fnhum.2019.00215] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2018] [Accepted: 06/11/2019] [Indexed: 11/13/2022] Open
Abstract
Sensory experiences, such as sound, often result from our motor actions. Over time, repeated sound-producing performance can generate sensorimotor associations. However, it is not clear how sensory and motor information become associated. Here, we explore whether sensory prediction is associated with the formation of sensorimotor associations during a learning task. We recorded event-related potentials (ERPs) while participants produced index- and little-finger swipes on a bespoke device, generating novel sounds. ERPs were also obtained as participants heard those sounds played back. Peak suppression was compared to assess sensory prediction. Additionally, transcranial magnetic stimulation (TMS) was used during listening to elicit finger motor-evoked potentials (MEPs). MEPs were recorded before and after training upon hearing these sounds, and then compared to reveal sensorimotor associations. Finally, we explored the relationship between these components. Results demonstrated that an increased positive-going peak (e.g., P2) and a suppressed negative-going peak (e.g., N2) were recorded during action, revealing some sensory prediction outcomes (P2: p = 0.050, ηp2 = 0.208; N2: p = 0.001, ηp2 = 0.474). Increased MEPs were also observed upon hearing congruent sounds (i.e., those associated with a given finger) compared with incongruent sounds, demonstrating precise sensorimotor associations that were not present before learning (Index finger: p < 0.001, ηp2 = 0.614; Little finger: p < 0.001, ηp2 = 0.529). Consistent with our broad hypotheses, a negative association was observed between the MEPs in one finger during listening and the ERPs during performance with the other (Index finger MEPs and Fz N1 action ERPs; r = −0.655, p = 0.003). Overall, the data suggest that predictive mechanisms are associated with the fine-tuning of sensorimotor associations.
Affiliation(s)
- Jed D Burgess
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, VIC, Australia
- Brendan P Major
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, VIC, Australia
- Claire McNeel
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, VIC, Australia
- Gillian M Clark
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, VIC, Australia
- Jarrad A G Lum
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, VIC, Australia
- Peter G Enticott
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, VIC, Australia
27
Scanlon JEM, Cormier DL, Townsend KA, Kuziek JWP, Mathewson KE. The ecological cocktail party: Measuring brain activity during an auditory oddball task with background noise. Psychophysiology 2019; 56:e13435. [DOI: 10.1111/psyp.13435]
Affiliation(s)
- Joanna E. M. Scanlon
- Department of Psychology, Faculty of Science, University of Alberta, Edmonton, Alberta, Canada
- Neuropsychology Lab, Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Danielle L. Cormier
- Faculty of Rehabilitation Medicine, Department of Physical Therapy, University of Alberta, Edmonton, Alberta, Canada
- Jonathan W. P. Kuziek
- Department of Psychology, Faculty of Science, University of Alberta, Edmonton, Alberta, Canada
- Kyle E. Mathewson
- Department of Psychology, Faculty of Science, University of Alberta, Edmonton, Alberta, Canada
- Neuroscience and Mental Health Institute, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, Canada
28
Bidelman GM, Walker B. Plasticity in auditory categorization is supported by differential engagement of the auditory-linguistic network. Neuroimage 2019; 201:116022. [PMID: 31310863] [DOI: 10.1016/j.neuroimage.2019.116022]
Abstract
To construct our perceptual world, the brain categorizes variable sensory cues into behaviorally relevant groupings. Categorical representations are apparent within a distributed fronto-temporo-parietal brain network, but how this neural circuitry is shaped by experience remains undefined. Here, we asked whether speech and music categories might be formed within different auditory-linguistic brain regions depending on listeners' auditory expertise. We recorded EEG in highly skilled (musicians) vs. less experienced (nonmusicians) perceivers as they rapidly categorized speech and musical sounds. Musicians showed perceptual enhancements across domains, yet source EEG data revealed a double dissociation between groups in the neurobiological mechanisms supporting categorization. Whereas musicians coded categories in primary auditory cortex (PAC), nonmusicians recruited non-auditory regions (e.g., inferior frontal gyrus, IFG) to generate category-level information. Functional connectivity confirmed that nonmusicians' increased left IFG involvement reflects stronger routing of signal from PAC to IFG, presumably because sensory coding is insufficient to construct categories in less experienced listeners. Our findings establish that auditory experience modulates specific engagement and inter-regional communication in the auditory-linguistic network supporting categorical perception. Whereas early canonical PAC representations are sufficient to generate categories in highly trained ears, less experienced perceivers broadcast information downstream to higher-order linguistic brain areas (IFG) to construct abstract sound labels.
Affiliation(s)
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
- Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, USA
- Breya Walker
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA
- Department of Psychology, University of Memphis, Memphis, TN, USA
- Department of Mathematical Sciences, University of Memphis, Memphis, TN, USA
29
Fan Y, Yue X, Yang J, Shen J, Shen D, Tang Y, Fang G. Preference of spectral features in auditory processing for advertisement calls in the music frogs. Front Zool 2019; 16:13. [PMID: 31168310] [PMCID: PMC6509768] [DOI: 10.1186/s12983-019-0314-0]
Abstract
BACKGROUND Animal vocal signals encode very important information for communication, and the relative importance of the temporal and spectral characteristics of vocalizations is typically asymmetrical and species-specific. However, it is still unknown how the auditory system represents these asymmetrical and species-specific patterns. In this study, auditory event-related potential (ERP) changes were evaluated in the Emei music frog (Babina daunchina) to assess differences in the neural responses elicited by temporal and spectral features in the telencephalon, diencephalon, and mesencephalon, respectively. To do this, an acoustic playback experiment using an oddball paradigm was conducted, in which an original advertisement call (OC), a version preserving its spectral features (SC), and a version preserving its temporal features (TC) were used as deviant stimuli, with synthesized white noise as the standard stimulus. RESULTS The present results show that (1) compared with TC, OC and SC evoked more similar ERP components; and (2) the P3a amplitudes in the forebrain evoked by OC were significantly higher in males than in females. CONCLUSIONS Together, the results suggest that neural processing of conspecific vocalizations may favor spectral features in the music frog, prompting speculation that spectral features may play a more important role in auditory object perception and vocal communication in this species. In addition, neural processing for auditory perception is sexually dimorphic.
Affiliation(s)
- Yanzhu Fan
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin Nan Road, Chengdu, Sichuan 610041, People’s Republic of China
- University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing, People’s Republic of China
- Xizi Yue
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin Nan Road, Chengdu, Sichuan 610041, People’s Republic of China
- Jing Yang
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin Nan Road, Chengdu, Sichuan 610041, People’s Republic of China
- University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing, People’s Republic of China
- Jiangyan Shen
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin Nan Road, Chengdu, Sichuan 610041, People’s Republic of China
- University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing, People’s Republic of China
- Di Shen
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin Nan Road, Chengdu, Sichuan 610041, People’s Republic of China
- University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing, People’s Republic of China
- Yezhong Tang
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin Nan Road, Chengdu, Sichuan 610041, People’s Republic of China
- Guangzhan Fang
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin Nan Road, Chengdu, Sichuan 610041, People’s Republic of China
30
Hämäläinen JA, Parviainen T, Hsu YF, Salmelin R. Dynamics of brain activation during learning of syllable-symbol paired associations. Neuropsychologia 2019; 129:93-103. [PMID: 30930303] [DOI: 10.1016/j.neuropsychologia.2019.03.016]
Abstract
Initial stages of reading acquisition require the learning of letter and speech sound combinations. While the long-term effects of audio-visual learning are rather well studied, relatively little is known about the short-term learning effects at the brain level. Here we examined the cortical dynamics of short-term learning using magnetoencephalography (MEG) and electroencephalography (EEG) in two experiments that respectively addressed active and passive learning of the association between shown symbols and heard syllables. In experiment 1, learning was based on feedback provided after each trial. The learning of the audio-visual associations was contrasted with items for which the feedback was meaningless. In experiment 2, learning was based on statistical learning through passive exposure to audio-visual stimuli that were consistently presented with each other and contrasted with audio-visual stimuli that were randomly paired with each other. After 5-10 min of training and exposure, learning-related changes emerged in neural activation around 200 and 350 ms in the two experiments. The MEG results showed activity changes at 350 ms in caudal middle frontal cortex and posterior superior temporal sulcus, and at 500 ms in temporo-occipital cortex. Changes in brain activity coincided with a decrease in reaction times and an increase in accuracy scores. Changes in EEG activity were observed starting at the auditory P2 response followed by later changes after 300 ms. The results show that the short-term learning effects emerge rapidly (manifesting in later stages of audio-visual integration processes) and that these effects are modulated by selective attention processes.
Affiliation(s)
- Jarmo A Hämäläinen
- Centre for Interdisciplinary Brain Research, Department of Psychology, P.O. Box 35, 40014, University of Jyväskylä, Finland
- Tiina Parviainen
- Centre for Interdisciplinary Brain Research, Department of Psychology, P.O. Box 35, 40014, University of Jyväskylä, Finland
- Yi-Fang Hsu
- Department of Educational Psychology and Counseling, National Taiwan Normal University, 10610, Taipei, Taiwan
- Institute for Research Excellence in Learning Sciences, National Taiwan Normal University, 10610, Taipei, Taiwan
- Riitta Salmelin
- Department of Neuroscience and Biomedical Engineering, 00076, Aalto University, Finland
- Aalto NeuroImaging, 00076, Aalto University, Finland
31
Alain C, Moussard A, Singer J, Lee Y, Bidelman GM, Moreno S. Music and Visual Art Training Modulate Brain Activity in Older Adults. Front Neurosci 2019; 13:182. [PMID: 30906245] [PMCID: PMC6418041] [DOI: 10.3389/fnins.2019.00182]
Abstract
Cognitive decline is an unavoidable aspect of aging that impacts important behavioral and cognitive skills. Training programs can improve cognition, yet precise characterization of the psychological and neural underpinnings supporting different training programs is lacking. Here, we assessed the effect and maintenance (3-month follow-up) of 3-month music and visual art training programs on neuroelectric brain activity in older adults, using a partially randomized intervention design. During the pre-, post-, and follow-up test sessions, participants completed a brief neuropsychological assessment. High-density EEG was measured while participants were presented with auditory oddball paradigms (piano tones, vowels) and during a visual GoNoGo task. Neither training program significantly impacted psychometric measures, compared to a non-active control group. However, participants enrolled in the music and visual art training programs showed enhancement of auditory evoked responses to piano tones that persisted for up to 3 months after training ended, suggesting robust and long-lasting neuroplastic effects. Both music and visual art training also modulated visual processing during the GoNoGo task, although these training effects were relatively short-lived and disappeared by the 3-month follow-up. Notably, participants enrolled in the visual art training showed greater changes in visual evoked response (i.e., N1 wave) amplitude distribution than those from the music or control group. Conversely, those enrolled in music training showed a greater response associated with inhibitory control over the right frontal scalp areas than those in the visual art group. Our findings reveal a causal relationship between art training (music and visual art) and neuroplastic changes in sensory systems, with some of the neuroplastic changes being specific to the training regimen.
Affiliation(s)
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON, Canada
- Aline Moussard
- Centre de Recherche de l'Institut Universitaire de Gériatrie de Montréal, Université de Montréal, Montréal, QC, Canada
- Julia Singer
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON, Canada
- Yunjo Lee
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON, Canada
- Gavin M Bidelman
- Institute for Intelligent Systems - School of Communication Sciences and Disorders, The University of Memphis, Memphis, TN, United States
- Sylvain Moreno
- Digital Health Hub, School of Engineering Science, Simon Fraser University, Surrey, BC, Canada
32
Kurkela JL, Hämäläinen JA, Leppänen PH, Shu H, Astikainen P. Passive exposure to speech sounds modifies change detection brain responses in adults. Neuroimage 2019; 188:208-216. [DOI: 10.1016/j.neuroimage.2018.12.010]
33
Yellamsetty A, Bidelman GM. Brainstem correlates of concurrent speech identification in adverse listening conditions. Brain Res 2019; 1714:182-192. [PMID: 30796895] [DOI: 10.1016/j.brainres.2019.02.025]
Abstract
When two voices compete, listeners can segregate and identify concurrent speech sounds using pitch (fundamental frequency, F0) and timbre (harmonic) cues. Speech perception is also hindered by the signal-to-noise ratio (SNR). How clear and degraded concurrent speech sounds are represented at early, pre-attentive stages of the auditory system is not well understood. To this end, we measured scalp-recorded frequency-following responses (FFR) from the EEG while human listeners heard two concurrently presented, steady-state (time-invariant) vowels whose F0 differed by zero or four semitones (ST), presented diotically in either clean (no noise) or noise-degraded (+5 dB SNR) conditions. Listeners also performed a speeded double-vowel identification task in which they were required to identify both vowels correctly. Behavioral results showed that speech identification accuracy increased with F0 differences between vowels, and this perceptual F0 benefit was larger for clean compared to noise-degraded (+5 dB SNR) stimuli. Neurophysiological data demonstrated more robust FFR F0 amplitudes for single compared to double vowels and considerably weaker responses in noise. F0 amplitudes showed speech-on-speech masking effects, along with a non-linear constructive interference at 0 ST and suppression effects at 4 ST. Correlations showed that FFR F0 amplitudes failed to predict listeners' identification accuracy. In contrast, FFR F1 amplitudes were associated with faster reaction times, although this correlation was limited to noise conditions. The limited number of brain-behavior associations suggests subcortical activity mainly reflects exogenous processing rather than perceptual correlates of concurrent speech perception. Collectively, our results demonstrate that FFRs reflect pre-attentive coding of concurrent auditory stimuli that only weakly predicts the success of identifying concurrent speech.
Affiliation(s)
- Anusha Yellamsetty
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
- Department of Communication Sciences & Disorders, University of South Florida, USA
- Gavin M Bidelman
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA
- Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, USA
34
Zhang C, Tao R, Zhao H. Auditory spatial attention modulates the unmasking effect of perceptual separation in a "cocktail party" environment. Neuropsychologia 2019; 124:108-116. [PMID: 30659864] [DOI: 10.1016/j.neuropsychologia.2019.01.009]
Abstract
The perceptual separation between a signal speech and a competing speech (masker), induced by the precedence effect, plays an important role in releasing the signal speech from the masker, especially in a reverberant environment. The perceptual-separation-induced unmasking effect has been suggested to involve multiple cognitive processes, such as selective attention. However, whether listeners' spatial attention modulates the perceptual-separation-induced unmasking effect is not clear. The present study investigated how perceptual separation and auditory spatial attention interact with each other to facilitate speech perception in a simulated noisy and reverberant environment by analyzing the cortical auditory evoked potentials to the signal speech. The results showed that the N1 wave was significantly enhanced by perceptual separation between the signal and masker regardless of whether the participants' spatial attention was directed to the signal or not. However, the P2 wave was significantly enhanced by perceptual separation only when the participants attended to the signal speech. These results indicate that the perceptual-separation-induced facilitation of P2 requires more attentional resources than that of N1. The results also showed that the signal speech elicited an enhanced N1 in the contralateral hemisphere regardless of whether the participants' attention was directed to the signal or not. In contrast, the signal speech elicited an enhanced P2 in the contralateral hemisphere only when the participants attended to the signal. These findings indicate that the hemispheric distribution of N1 is mainly affected by the perceptual features of the acoustic stimuli, while that of P2 is affected by the listeners' attentional status.
Affiliation(s)
- Changxin Zhang
- Faculty of Education, East China Normal University, Shanghai, China
- Key Laboratory of Speech and Hearing Science, East China Normal University, Shanghai, China
- Renxia Tao
- Faculty of Education, East China Normal University, Shanghai, China
- Key Laboratory of Speech and Hearing Science, East China Normal University, Shanghai, China
- Hang Zhao
- Faculty of Education, East China Normal University, Shanghai, China
- Key Laboratory of Speech and Hearing Science, East China Normal University, Shanghai, China
35
Fan Y, Yue X, Xue F, Cui J, Brauth SE, Tang Y, Fang G. Auditory perception exhibits sexual dimorphism and left telencephalic dominance in Xenopus laevis. Biol Open 2018; 7(12):bio035956. [PMID: 30509903] [PMCID: PMC6310876] [DOI: 10.1242/bio.035956]
Abstract
Sex differences in both vocalization and auditory processing have been commonly found in vocal animals, although the underlying neural mechanisms associated with sexual dimorphism of auditory processing are not well understood. In this study we investigated whether auditory perception exhibits sexual dimorphism in Xenopus laevis. To do this we measured event-related potentials (ERPs) evoked by white noise (WN) and conspecific calls in the telencephalon, diencephalon and mesencephalon respectively. Results showed that (1) the N1 amplitudes evoked in the right telencephalon and right diencephalon of males by WN are significantly different from those evoked in females; (2) in males the N1 amplitudes evoked by conspecific calls are significantly different from those evoked by WN; (3) in females the N1 amplitude for the left mesencephalon was significantly lower than for other brain areas, while the P2 and P3 amplitudes for the right mesencephalon were the smallest; in contrast these amplitudes for the left mesencephalon were the smallest in males. These results suggest auditory perception is sexually dimorphic. Moreover, the amplitude of each ERP component (N1, P2 and P3) for the left telencephalon was the largest in females and/or males, suggesting that left telencephalic dominance exists for auditory perception in Xenopus. Summary: Investigation of auditory neural mechanisms in the South African clawed frog (Xenopus laevis) indicates that auditory perception exhibits sexual dimorphism and left telencephalic advantage.
Affiliation(s)
- Yanzhu Fan
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin South Road, Chengdu, Sichuan, People's Republic of China
- University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing, People's Republic of China
- Xizi Yue
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin South Road, Chengdu, Sichuan, People's Republic of China
- Fei Xue
- Sichuan Key Laboratory of Conservation Biology for Endangered Wildlife, Chengdu Research Base of Giant Panda Breeding, 26 Panda Road, Northern Suburb, Chengdu, Sichuan 610081, People's Republic of China
- Jianguo Cui
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin South Road, Chengdu, Sichuan, People's Republic of China
- Steven E Brauth
- Department of Psychology, University of Maryland, College Park, MD 20742, USA
- Yezhong Tang
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin South Road, Chengdu, Sichuan, People's Republic of China
- Guangzhan Fang
- Department of Herpetology, Chengdu Institute of Biology, Chinese Academy of Sciences, No. 9 Section 4, Renmin South Road, Chengdu, Sichuan, People's Republic of China
36
Irvine DRF. Auditory perceptual learning and changes in the conceptualization of auditory cortex. Hear Res 2018; 366:3-16. [PMID: 29551308] [DOI: 10.1016/j.heares.2018.03.011]
Abstract
Perceptual learning, improvement in discriminative ability as a consequence of training, is one of the forms of sensory system plasticity that has driven profound changes in our conceptualization of sensory cortical function. Psychophysical and neurophysiological studies of auditory perceptual learning have indicated that the characteristics of the learning, and by implication the nature of the underlying neural changes, are highly task specific. Some studies in animals have indicated that recruitment of neurons to the population responding to the training stimuli, and hence an increase in the so-called cortical "area of representation" of those stimuli, is the substrate of improved performance, but such changes have not been observed in other studies. A possible reconciliation of these conflicting results is provided by evidence that changes in area of representation constitute a transient stage in the processes underlying perceptual learning. This expansion-renormalization hypothesis is supported by evidence from studies of the learning of motor skills, another form of procedural learning, but leaves open the nature of the permanent neural substrate of improved performance. Other studies have suggested that the substrate might be reduced response variability - a decrease in internal noise. Neuroimaging studies in humans have also provided compelling evidence that training results in long-term changes in auditory cortical function and in the auditory brainstem frequency-following response. Musical training provides a valuable model, but the evidence it provides is qualified by the fact that most such training is multimodal and sensorimotor, and that few of the studies are experimental and allow control over confounding variables.
More generally, the overwhelming majority of experimental studies of the various forms of auditory perceptual learning have established the co-occurrence of neural and perceptual changes, but have not established that the former are causally related to the latter. Important forms of perceptual learning in humans are those involved in language acquisition and in the improvement in speech perception performance of post-lingually deaf cochlear implantees over the months following implantation. The development of a range of auditory training programs has focused interest on the factors determining the extent to which perceptual learning is specific or generalises to tasks other than those used in training. The context specificity demonstrated in a number of studies of perceptual learning suggests a multiplexing model, in which learning relating to a particular stimulus attribute depends on a subset of the diverse inputs to a given cortical neuron being strengthened, and different subsets being gated by top-down influences. This hypothesis avoids the difficulty of balancing system stability with plasticity, which is a problem for recruitment hypotheses. The characteristics of auditory perceptual learning reflect the fact that auditory cortex forms part of distributed networks that integrate the representation of auditory stimuli with attention, decision, and reward processes.
Affiliation(s)
- Dexter R F Irvine
- Bionics Institute, East Melbourne, Victoria 3002, Australia
- School of Psychological Sciences, Monash University, Victoria 3800, Australia
37
Yellamsetty A, Bidelman GM. Low- and high-frequency cortical brain oscillations reflect dissociable mechanisms of concurrent speech segregation in noise. Hear Res 2018; 361:92-102. [PMID: 29398142] [DOI: 10.1016/j.heares.2018.01.006]
Abstract
Parsing simultaneous speech requires listeners to use pitch-guided segregation, which can be affected by the signal-to-noise ratio (SNR) in the auditory scene. The interaction of these two cues may occur at multiple levels within the cortex. The aims of the current study were to assess the correspondence between oscillatory brain rhythms and behavior, and to determine how listeners exploit pitch and SNR cues to successfully segregate concurrent speech. We recorded electrical brain activity while participants heard double-vowel stimuli whose fundamental frequencies (F0s) differed by zero or four semitones (ST), presented in either clean or noise-degraded (+5 dB SNR) conditions. We found that behavioral identification was more accurate for vowel mixtures with larger pitch separations, but the F0 benefit interacted with noise. Time-frequency analysis decomposed the EEG into different spectrotemporal frequency bands. Low-frequency (θ, β) responses were elevated when speech did not contain pitch cues (0 ST > 4 ST) or was noisy, suggesting a correlate of increased listening effort and/or memory demands. In contrast, γ power increments were observed for changes in both pitch (0 ST > 4 ST) and SNR (clean > noise), suggesting high-frequency bands carry information related to acoustic features and the quality of speech representations. Brain-behavior associations corroborated these effects; modulations in low-frequency rhythms predicted the speed of listeners' perceptual decisions, with higher bands predicting identification accuracy. The results are consistent with the notion that neural oscillations reflect both automatic (pre-perceptual) and controlled (post-perceptual) mechanisms of speech processing that are largely divisible into high- and low-frequency bands of human brain rhythms.
Affiliation(s)
- Anusha Yellamsetty
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
- Gavin M Bidelman
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA.
38
Park JM, Chung CK, Kim JS, Lee KM, Seol J, Yi SW. Musical Expectations Enhance Auditory Cortical Processing in Musicians: A Magnetoencephalography Study. Neuroscience 2018; 369:325-335. [DOI: 10.1016/j.neuroscience.2017.11.036] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2017] [Revised: 11/14/2017] [Accepted: 11/20/2017] [Indexed: 11/28/2022]
39
Giroud N, Lemke U, Reich P, Bauer J, Widmer S, Meyer M. Are you surprised to hear this? Longitudinal spectral speech exposure in older compared to middle-aged normal hearing adults. Eur J Neurosci 2017; 47:58-68. [DOI: 10.1111/ejn.13772] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2017] [Revised: 10/10/2017] [Accepted: 10/31/2017] [Indexed: 01/07/2023]
Affiliation(s)
- Nathalie Giroud
- Department of Psychology; Research Unit for Neuroplasticity and Learning in the Healthy Aging Brain; University of Zurich; Zurich, Switzerland
- Department of Psychology; University Research Priority Program “Dynamics of Healthy Aging”; University of Zurich; Zurich, Switzerland
- Ulrike Lemke
- Science & Technology; Phonak AG; Stäfa, Switzerland
- Philip Reich
- Department of Psychology; Research Unit for Neuroplasticity and Learning in the Healthy Aging Brain; University of Zurich; Zurich, Switzerland
- Julia Bauer
- Department of Psychology; Research Unit for Neuroplasticity and Learning in the Healthy Aging Brain; University of Zurich; Zurich, Switzerland
- Susann Widmer
- Department of Psychology; Research Unit for Neuroplasticity and Learning in the Healthy Aging Brain; University of Zurich; Zurich, Switzerland
- Martin Meyer
- Department of Psychology; Research Unit for Neuroplasticity and Learning in the Healthy Aging Brain; University of Zurich; Zurich, Switzerland
- Department of Psychology; University Research Priority Program “Dynamics of Healthy Aging”; University of Zurich; Zurich, Switzerland
- Department of Psychology; Cognitive Neuroscience; University of Klagenfurt; Klagenfurt, Austria
40
Faster native vowel discrimination learning in musicians is mediated by an optimization of mnemonic functions. Neuropsychologia 2017; 104:64-75. [DOI: 10.1016/j.neuropsychologia.2017.08.001] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2017] [Revised: 07/11/2017] [Accepted: 08/02/2017] [Indexed: 11/22/2022]
41
Elmer S, Hausheer M, Albrecht J, Kühnis J. Human Brainstem Exhibits higher Sensitivity and Specificity than Auditory-Related Cortex to Short-Term Phonetic Discrimination Learning. Sci Rep 2017; 7:7455. [PMID: 28785043 PMCID: PMC5547112 DOI: 10.1038/s41598-017-07426-y] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2017] [Accepted: 06/28/2017] [Indexed: 01/09/2023] Open
Abstract
Phonetic discrimination learning is an active perceptual process that operates under the influence of cognitive control mechanisms by increasing the sensitivity of the auditory system to the trained stimulus attributes. It is assumed that the auditory cortex and the brainstem interact in order to refine how sounds are transcribed into neural codes. Here, we evaluated whether these two computational entities are prone to short-term functional changes, whether there is a chronological difference in malleability, and whether short-term training suffices to alter their reciprocal interactions. We performed repeated cortical (i.e., mismatch negativity responses, MMN) and subcortical (i.e., frequency-following response, FFR) EEG measurements in two groups of participants who underwent one hour of phonetic discrimination training or were passively exposed to the same stimulus material. The training group showed a distinctive brainstem energy reduction in the trained frequency range (i.e., first formant), whereas the passive group did not show any response modulation. Notably, brainstem signal change correlated with behavioral improvement during training, indicating a close relationship between behavior and underlying brainstem physiology. Since we did not find group differences in MMN responses, the results point to specific short-term brainstem changes that precede functional alterations in the auditory cortex.
Collapse
Affiliation(s)
- Stefan Elmer
- Auditory Research Group Zurich (ARGZ), Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland.
- Marcela Hausheer
- Auditory Research Group Zurich (ARGZ), Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
- Joëlle Albrecht
- Auditory Research Group Zurich (ARGZ), Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
- Jürg Kühnis
- Auditory Research Group Zurich (ARGZ), Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
42
Bidelman GM, Yellamsetty A. Noise and pitch interact during the cortical segregation of concurrent speech. Hear Res 2017; 351:34-44. [PMID: 28578876 DOI: 10.1016/j.heares.2017.05.008] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/24/2017] [Revised: 05/09/2017] [Accepted: 05/23/2017] [Indexed: 10/19/2022]
Abstract
Behavioral studies reveal that listeners exploit intrinsic differences in voice fundamental frequency (F0) to segregate concurrent speech sounds, the so-called "F0-benefit." A more favorable signal-to-noise ratio (SNR) in the environment, an extrinsic acoustic factor, similarly benefits the parsing of simultaneous speech. Here, we examined the neurobiological substrates of these two cues in the perceptual segregation of concurrent speech mixtures. We recorded event-related brain potentials (ERPs) while listeners performed a speeded double-vowel identification task. Listeners heard two concurrent vowels whose F0 differed by zero or four semitones, presented in either clean (no noise) or noise-degraded (+5 dB SNR) conditions. Behaviorally, listeners were more accurate in correctly identifying both vowels for larger F0 separations, but the F0-benefit was more pronounced at more favorable SNRs (i.e., a pitch × SNR interaction). Analysis of the ERPs revealed that only the P2 wave (∼200 ms) showed a similar F0 × SNR interaction as behavior and was correlated with listeners' perceptual F0-benefit. Neural classifiers applied to the ERPs further suggested that speech sounds are segregated neurally within 200 ms based on SNR, whereas segregation based on pitch occurs later in time (400-700 ms). The earlier timing of extrinsic SNR-based compared to intrinsic F0-based segregation implies that the cortical extraction of speech from noise is more efficient than differentiating speech based on pitch cues alone, which may recruit additional cortical processes. The findings indicate that noise and pitch differences interact relatively early in the cerebral cortex and that the brain arrives at the identities of concurrent speech mixtures as early as ∼200 ms.
Affiliation(s)
- Gavin M Bidelman
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, 38152, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, 38152, USA; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, 38163, USA.
- Anusha Yellamsetty
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, 38152, USA
43
Sound-Making Actions Lead to Immediate Plastic Changes of Neuromagnetic Evoked Responses and Induced β-Band Oscillations during Perception. J Neurosci 2017; 37:5948-5959. [PMID: 28539421 DOI: 10.1523/jneurosci.3613-16.2017] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2016] [Revised: 04/18/2017] [Accepted: 05/13/2017] [Indexed: 11/21/2022] Open
Abstract
Auditory and sensorimotor brain areas interact during the action-perception cycle of sound making. Neurophysiological evidence of a feedforward model of the action and its outcome has been associated with attenuation of the N1 wave of auditory evoked responses elicited by self-generated sounds, such as talking and singing or playing a musical instrument. Moreover, neural oscillations at β-band frequencies have been related to predicting the sound outcome after action initiation. We hypothesized that a newly learned action-perception association would immediately modify interpretation of the sound during subsequent listening. Nineteen healthy young adults (7 female, 12 male) participated in three magnetoencephalographic recordings while first passively listening to recorded sounds of a bell ringing, then actively striking the bell with a mallet, and then again listening to recorded sounds. Auditory cortex activity showed characteristic P1-N1-P2 waves. The N1 was attenuated during sound making, while P2 responses were unchanged. In contrast, P2 became larger when listening after sound making compared with the initial naive listening. The P2 increase occurred immediately, while in previous learning-by-listening studies P2 increases occurred on a later day. Also, reactivity of β-band oscillations, as well as θ coherence between auditory and sensorimotor cortices, was stronger in the second listening block. These changes were significantly larger than those observed in control participants (eight female, five male), who triggered recorded sounds by a key press. 
We propose that P2 characterizes familiarity with sound objects, whereas β-band oscillation signifies involvement of the action-perception cycle, and both measures objectively indicate functional neuroplasticity in auditory perceptual learning.
SIGNIFICANCE STATEMENT: While suppression of auditory responses to self-generated sounds is well known, it is not clear whether the learned action-sound association modifies subsequent perception. Our study demonstrated the immediate effects of sound-making experience on perception using magnetoencephalographic recordings, as reflected in the increased auditory evoked P2 wave, increased responsiveness of β oscillations, and enhanced connectivity between auditory and sensorimotor cortices. The importance of motor learning was underscored, as the changes were much smaller in a control group using a key press to generate the sounds instead of learning to play the musical instrument. The results support the rapid integration of a feedforward model during perception and provide a neurophysiological basis for the application of music making in motor rehabilitation training.
44
Heald SLM, Van Hedger SC, Nusbaum HC. Perceptual Plasticity for Auditory Object Recognition. Front Psychol 2017; 8:781. [PMID: 28588524 PMCID: PMC5440584 DOI: 10.3389/fpsyg.2017.00781] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2016] [Accepted: 04/26/2017] [Indexed: 01/25/2023] Open
Abstract
In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as "noise" in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. 
To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples of perceptual categories that are thought to be highly stable. This framework suggests that the process of auditory recognition cannot be divorced from the short-term context in which an auditory object is presented. Implications for auditory category acquisition and extant models of auditory perception, both cognitive and neural, are discussed.
45
Pinheiro AP, Barros C, Dias M, Niznikiewicz M. Does emotion change auditory prediction and deviance detection? Biol Psychol 2017; 127:123-133. [PMID: 28499839 DOI: 10.1016/j.biopsycho.2017.05.007] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2016] [Revised: 03/15/2017] [Accepted: 05/06/2017] [Indexed: 01/23/2023]
Abstract
In the last decades, a growing number of studies provided compelling evidence supporting the interplay of cognitive and affective processes. However, it remains to be clarified whether and how an emotional context affects the prediction and detection of change in unattended sensory events. In an event-related potential (ERP) study, we probed the modulatory role of pleasant, unpleasant and neutral visual contexts on the brain response to automatic detection of change in spectral (intensity) vs. temporal (duration) sound features. Twenty participants performed a passive auditory oddball task. Additionally, we tested the relationship between ERPs and self-reported mood. Participants reported more negative mood after the negative block. The P2 amplitude elicited by standards was increased in a positive context. Mismatch Negativity (MMN) amplitude was decreased in the negative relative to the neutral and positive contexts, and was associated with self-reported mood. These findings suggest that the detection of regularities in the auditory stream was facilitated in a positive context, whereas a negative visual context interfered with prediction error elicitation, through associated mood changes. Both ERP and behavioral effects highlight the intricate links between emotion, perception and cognitive processes.
Affiliation(s)
- Ana P Pinheiro
- Neuropsychophysiology Lab, School of Psychology, University of Minho, Braga, Portugal; Faculty of Psychology, University of Lisbon, Lisbon, Portugal.
- Carla Barros
- Neuropsychophysiology Lab, School of Psychology, University of Minho, Braga, Portugal
- Marcelo Dias
- Neuropsychophysiology Lab, School of Psychology, University of Minho, Braga, Portugal
- Margaret Niznikiewicz
- VA Boston Healthcare System, Department of Psychiatry, Harvard Medical School, Boston, MA, USA
46
Bidelman GM, Walker BS. Attentional modulation and domain-specificity underlying the neural organization of auditory categorical perception. Eur J Neurosci 2017; 45:690-699. [DOI: 10.1111/ejn.13526] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2016] [Revised: 01/13/2017] [Accepted: 01/13/2017] [Indexed: 11/29/2022]
Affiliation(s)
- Gavin M. Bidelman
- Institute for Intelligent Systems; University of Memphis; Memphis, TN, USA
- School of Communication Sciences & Disorders; University of Memphis; 4055 North Park Loop, Memphis, TN 38152, USA
- Department of Anatomy and Neurobiology; University of Tennessee Health Sciences Center; Memphis, TN, USA
- Breya S. Walker
- Institute for Intelligent Systems; University of Memphis; Memphis, TN, USA
- Department of Psychology; University of Memphis; Memphis, TN, USA
47
Longitudinal auditory learning facilitates auditory cognition as revealed by microstate analysis. Biol Psychol 2016; 123:25-36. [PMID: 27866990 DOI: 10.1016/j.biopsycho.2016.11.007] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2016] [Revised: 09/19/2016] [Accepted: 11/15/2016] [Indexed: 11/20/2022]
Abstract
The current study investigates cognitive processes, as reflected in late auditory-evoked potentials, as a function of longitudinal auditory learning. A normal-hearing adult sample (n=15) performed an active oddball task at three consecutive time points (TPs) arranged at two-week intervals, during which EEG was recorded. The stimuli comprised syllables consisting of a natural fricative (/sh/, /s/, /f/) embedded between two /a/ sounds, as well as morphed transitions of the two syllables that served as deviants. Perceptual and cognitive modulations, as reflected in the onset and the mean global field power (GFP) of N2b- and P3b-related microstates, were investigated across four weeks. We found that the onset of P3b-like microstates, but not N2b-like microstates, decreased across TPs, more strongly for difficult deviants, leading to similar onsets for difficult and easy stimuli after repeated exposure. The mean GFP of all N2b-like and P3b-like microstates increased more for spectrally strong deviants compared to weak deviants, leading to a distinctive activation for each stimulus after learning. Our results indicate that longitudinal training of auditory-related cognitive mechanisms such as stimulus categorization, attention, and memory updating is an indispensable part of successful auditory learning. This suggests that future studies should focus on the potential benefits of cognitive processes in auditory training.
48
Bosseler AN, Teinonen T, Tervaniemi M, Huotilainen M. Infant Directed Speech Enhances Statistical Learning in Newborn Infants: An ERP Study. PLoS One 2016; 11:e0162177. [PMID: 27617967 PMCID: PMC5019490 DOI: 10.1371/journal.pone.0162177] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2016] [Accepted: 08/18/2016] [Indexed: 11/19/2022] Open
Abstract
Statistical learning and the social contexts of language addressed to infants are hypothesized to play important roles in early language development. Previous behavioral work has found that the exaggerated prosodic contours of infant-directed speech (IDS) facilitate statistical learning in 8-month-old infants. Here we examined the neural processes involved in on-line statistical learning and investigated whether the use of IDS facilitates statistical learning in sleeping newborns. Event-related potentials (ERPs) were recorded while newborns were exposed to 12 pseudo-words, six spoken with the exaggerated pitch contours of IDS and six spoken without exaggerated pitch contours (adult-directed speech, ADS), in ten alternating blocks. We examined whether ERP amplitudes for syllable position within a pseudo-word (word-initial vs. word-medial vs. word-final, indicating statistical word learning) and speech register (ADS vs. IDS) would interact. The ADS and IDS registers elicited similar ERP patterns for syllable position in an early 0-100 ms component but elicited ERP effects differing in both polarity and topographical distribution at 200-400 ms and 450-650 ms. These results provide the first evidence that the exaggerated pitch contours of IDS result in differences in brain activity linked to on-line statistical learning in sleeping newborns.
Collapse
Affiliation(s)
- Alexis N. Bosseler
- Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Institute for Learning and Brain Sciences, University of Washington, Seattle, Washington, United States of America
- Tuomas Teinonen
- Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Mari Tervaniemi
- Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Cicero Learning, University of Helsinki, Helsinki, Finland
- Minna Huotilainen
- Cognitive Brain Research Unit, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Cicero Learning, University of Helsinki, Helsinki, Finland
49
Kärgel C, Sartory G, Kariofillis D, Wiltfang J, Müller BW. The effect of auditory and visual training on the mismatch negativity in schizophrenia. Int J Psychophysiol 2016; 102:47-54. [DOI: 10.1016/j.ijpsycho.2016.03.003] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2015] [Revised: 03/04/2016] [Accepted: 03/07/2016] [Indexed: 10/22/2022]
50
Toida K, Ueno K, Shimada S. Neural Basis of the Time Window for Subjective Motor-Auditory Integration. Front Hum Neurosci 2016; 9:688. [PMID: 26779000 PMCID: PMC4704610 DOI: 10.3389/fnhum.2015.00688] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2015] [Accepted: 12/04/2015] [Indexed: 12/01/2022] Open
Abstract
Temporal contiguity between an action and corresponding auditory feedback is crucial to the perception of self-generated sound. However, the neural mechanisms underlying motor–auditory temporal integration are unclear. Here, we conducted four experiments with an oddball paradigm to examine the specific event-related potentials (ERPs) elicited by delayed auditory feedback for a self-generated action. The first experiment confirmed that a pitch-deviant auditory stimulus elicits mismatch negativity (MMN) and P300, both when it is generated passively and by the participant’s action. In our second and third experiments, we investigated the ERP components elicited by delayed auditory feedback for a self-generated action. We found that delayed auditory feedback elicited an enhancement of P2 (enhanced-P2) and a N300 component, which were apparently different from the MMN and P300 components observed in the first experiment. We further investigated the sensitivity of the enhanced-P2 and N300 to delay length in our fourth experiment. Strikingly, the amplitude of the N300 increased as a function of the delay length. Additionally, the N300 amplitude was significantly correlated with the conscious detection of the delay (the 50% detection point was around 200 ms), and hence reduction in the feeling of authorship of the sound (the sense of agency). In contrast, the enhanced-P2 was most prominent in short-delay (≤200 ms) conditions and diminished in long-delay conditions. Our results suggest that different neural mechanisms are employed for the processing of temporally deviant and pitch-deviant auditory feedback. Additionally, the temporal window for subjective motor–auditory integration is likely about 200 ms, as indicated by these auditory ERP components.
Affiliation(s)
- Koichi Toida
- Department of Architecture, School of Science and Technology, Meiji University, Kawasaki, Japan; Japan Science and Technology Agency, Core Research for Evolutionary Science and Technology (CREST), Saitama, Japan
- Kanako Ueno
- Department of Architecture, School of Science and Technology, Meiji University, Kawasaki, Japan; Japan Science and Technology Agency, Core Research for Evolutionary Science and Technology (CREST), Saitama, Japan
- Sotaro Shimada
- Japan Science and Technology Agency, Core Research for Evolutionary Science and Technology (CREST), Saitama, Japan; Department of Electronics and Bioinformatics, School of Science and Technology, Meiji University, Kawasaki, Japan