1. Jagt M, Ganis F, Serafin S. Enhanced neural phase locking through audio-tactile stimulation. Front Neurosci 2024; 18:1425398. PMID: 39416951; PMCID: PMC11480033; DOI: 10.3389/fnins.2024.1425398.
Abstract
Numerous studies have underscored the close relationship between the auditory and vibrotactile modalities. For instance, in the peripheral structures of both modalities, afferent nerve fibers synchronize their activity to the external sensory stimulus, thereby providing a temporal code linked to pitch processing. The frequency-following response is a neurophysiological measure that captures this phase-locking activity in response to auditory stimuli. In our study, we investigated whether this neural signal is influenced by the simultaneous presentation of a vibrotactile stimulus. Our findings revealed a significant increase in phase locking to the fundamental frequency of a speech stimulus, while no such effects were observed at harmonic frequencies. Since phase locking to the fundamental frequency has been associated with pitch perception abilities, our results suggest that audio-tactile stimulation might improve pitch perception in human subjects.
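To make the phase-locking measure concrete, the sketch below computes an inter-trial phase-locking value at the stimulus F0 from single-trial FFR sweeps. This is a generic numpy illustration: the function name, parameter choices, and the use of this particular metric are assumptions, not the analysis pipeline reported in the cited paper.

```python
import numpy as np

def phase_locking_value(epochs, fs, f0):
    """Inter-trial phase-locking value at the stimulus fundamental frequency.

    epochs : array of shape (n_trials, n_samples), single-trial FFR sweeps
    fs     : sampling rate in Hz
    f0     : stimulus fundamental frequency in Hz
    Returns a value between 0 (no phase locking) and 1 (perfect phase locking).
    """
    epochs = np.asarray(epochs, dtype=float)
    freqs = np.fft.rfftfreq(epochs.shape[1], d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f0))                 # FFT bin nearest to F0
    phases = np.angle(np.fft.rfft(epochs, axis=1)[:, k])
    return np.abs(np.mean(np.exp(1j * phases)))       # length of the mean unit phasor

# Toy check with synthetic 100 Hz "responses" in noise (values are arbitrary).
fs, f0 = 4096, 100.0
t = np.arange(0, 0.5, 1.0 / fs)
trials = np.sin(2 * np.pi * f0 * t) + np.random.randn(50, t.size)
print(phase_locking_value(trials, fs, f0))
```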
Affiliation(s)
- Mels Jagt
- Multisensory Experience Lab, Department of Architecture, Design and Media Technology, Aalborg University Copenhagen, Copenhagen, Denmark
- Life Sciences Engineering (Neuroscience and Neuroengineering), École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Francesco Ganis
- Multisensory Experience Lab, Department of Architecture, Design and Media Technology, Aalborg University Copenhagen, Copenhagen, Denmark
- Stefania Serafin
- Multisensory Experience Lab, Department of Architecture, Design and Media Technology, Aalborg University Copenhagen, Copenhagen, Denmark
2. Gorina-Careta N, Arenillas-Alcón S, Puertollano M, Mondéjar-Segovia A, Ijjou-Kadiri S, Costa-Faidella J, Gómez-Roig MD, Escera C. Exposure to bilingual or monolingual maternal speech during pregnancy affects the neurophysiological encoding of speech sounds in neonates differently. Front Hum Neurosci 2024; 18:1379660. PMID: 38841122; PMCID: PMC11150635; DOI: 10.3389/fnhum.2024.1379660.
Abstract
Introduction: Exposure to maternal speech during the prenatal period shapes speech perception and linguistic preferences, allowing neonates to recognize stories heard frequently in utero and demonstrating an enhanced preference for their mother's voice and native language. Yet, given the high prevalence of bilingualism worldwide, it remains an open question whether monolingual or bilingual maternal speech during pregnancy influences the fetus's neural mechanisms underlying speech sound encoding differently.
Methods: In the present study, the frequency-following response (FFR), an auditory evoked potential that reflects the complex spectrotemporal dynamics of speech sounds, was recorded to a two-vowel /oa/ stimulus in a sample of 129 healthy term neonates within 1 to 3 days after birth. Newborns were divided into two groups according to maternal language usage during the last trimester of gestation (monolingual; bilingual). Spectral amplitudes and spectral signal-to-noise ratios (SNR) at the stimulus fundamental (F0) and first formant (F1) frequencies of each vowel were taken, respectively, as measures of the neural encoding of pitch and formant structure.
Results: Our results reveal that while spectral amplitudes at F0 did not differ between groups, neonates from bilingual mothers exhibited a lower spectral SNR. Additionally, monolingually exposed neonates exhibited a higher spectral amplitude and SNR at F1 frequencies.
Discussion: We interpret our results under the consideration that bilingual maternal speech, as compared to monolingual speech, is characterized by greater complexity in the speech sound signal, rendering newborns from bilingual mothers more sensitive to a wider range of speech frequencies without generating a particularly strong response at any of them. Our results contribute to an expanding body of research indicating the influence of prenatal experiences on language acquisition and underscore the necessity of including prenatal language exposure in developmental studies on language acquisition, a variable often overlooked yet capable of influencing research outcomes.
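As a rough illustration of the spectral measures described above, the following numpy sketch returns the spectral amplitude at a target frequency (e.g., F0 or F1) and its SNR relative to flanking frequencies. The bandwidths, windowing, and function name are illustrative assumptions rather than the exact bins used in the study.

```python
import numpy as np

def spectral_amplitude_and_snr(ffr, fs, target_hz, signal_bw=5.0, noise_bw=(10.0, 60.0)):
    """Amplitude at a target frequency and its SNR against flanking frequencies.

    ffr       : 1-D trial-averaged FFR waveform
    fs        : sampling rate in Hz
    target_hz : frequency of interest in Hz (e.g., the stimulus F0)
    The signal and noise bandwidths are illustrative placeholders.
    """
    n = len(ffr)
    amp = np.abs(np.fft.rfft(ffr * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    dist = np.abs(freqs - target_hz)
    signal = amp[dist <= signal_bw].max()
    noise = amp[(dist > noise_bw[0]) & (dist <= noise_bw[1])].mean()
    return signal, signal / noise
```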
Affiliation(s)
- Natàlia Gorina-Careta
- Brainlab – Cognitive Neuroscience Research Group, Departament de Psicologia Clinica i Psicobiologia, Universitat de Barcelona, Barcelona, Spain
- Institut de Neurociènces, Universitat de Barcelona, Barcelona, Spain
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain
- Sonia Arenillas-Alcón
- Brainlab – Cognitive Neuroscience Research Group, Departament de Psicologia Clinica i Psicobiologia, Universitat de Barcelona, Barcelona, Spain
- Institut de Neurociènces, Universitat de Barcelona, Barcelona, Spain
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain
- Marta Puertollano
- Brainlab – Cognitive Neuroscience Research Group, Departament de Psicologia Clinica i Psicobiologia, Universitat de Barcelona, Barcelona, Spain
- Institut de Neurociènces, Universitat de Barcelona, Barcelona, Spain
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain
- Alejandro Mondéjar-Segovia
- Brainlab – Cognitive Neuroscience Research Group, Departament de Psicologia Clinica i Psicobiologia, Universitat de Barcelona, Barcelona, Spain
- Institut de Neurociènces, Universitat de Barcelona, Barcelona, Spain
- Siham Ijjou-Kadiri
- Brainlab – Cognitive Neuroscience Research Group, Departament de Psicologia Clinica i Psicobiologia, Universitat de Barcelona, Barcelona, Spain
- Institut de Neurociènces, Universitat de Barcelona, Barcelona, Spain
- Jordi Costa-Faidella
- Brainlab – Cognitive Neuroscience Research Group, Departament de Psicologia Clinica i Psicobiologia, Universitat de Barcelona, Barcelona, Spain
- Institut de Neurociènces, Universitat de Barcelona, Barcelona, Spain
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain
- María Dolores Gómez-Roig
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain
- BCNatal – Barcelona Center for Maternal Fetal and Neonatal Medicine (Hospital Sant Joan de Déu and Hospital Clínic), University of Barcelona, Barcelona, Spain
- Carles Escera
- Brainlab – Cognitive Neuroscience Research Group, Departament de Psicologia Clinica i Psicobiologia, Universitat de Barcelona, Barcelona, Spain
- Institut de Neurociènces, Universitat de Barcelona, Barcelona, Spain
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Spain
3. Gransier R, Carlyon RP, Richardson ML, Middlebrooks JC, Wouters J. Artifact removal by template subtraction enables recordings of the frequency following response in cochlear-implant users. Sci Rep 2024; 14:6158. PMID: 38486005; PMCID: PMC10940306; DOI: 10.1038/s41598-024-56047-9.
Abstract
Electrically evoked frequency-following responses (eFFRs) provide insight into the phase-locking ability of the brainstem in cochlear-implant (CI) users. eFFRs can potentially be used to gain insight into individual differences in the biological limitations on temporal encoding of the electrically stimulated auditory pathway, which can be inherent to the electrical stimulation itself and/or to the degenerative processes associated with hearing loss. One of the major challenges of measuring eFFRs in CI users is isolating the stimulation artifact from the neural response, as the response and the artifact overlap in time and have similar frequency characteristics. Here we introduce a new artifact removal method based on template subtraction that successfully removes the stimulation artifacts from the recordings when CI users are stimulated with pulse trains from 128 to 300 pulses per second in a monopolar configuration. Our results show that, although artifact removal was successful in all CI users, the phase-locking ability of the brainstem to the different pulse rates, as assessed with the eFFR, differed substantially across participants. These results show that the eFFR can be measured, free from artifacts, in CI users and that it can be used to gain insight into individual differences in temporal processing of the electrically stimulated auditory pathway.
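The core idea of template subtraction can be sketched as follows: estimate a pulse-locked artifact template and subtract it at every pulse onset. This is a deliberately generic toy version; the published method necessarily separates the artifact from the phase-locked neural response more carefully than a plain average, and all names and window lengths here are assumptions.

```python
import numpy as np

def subtract_pulse_template(eeg, fs, pulse_times, win_s=0.004):
    """Generic pulse-locked template subtraction.

    eeg         : 1-D recording
    fs          : sampling rate in Hz
    pulse_times : stimulation pulse onsets in seconds
    win_s       : template length in seconds
    A single artifact template is estimated by averaging short windows
    time-locked to every pulse and is then subtracted at each pulse onset.
    """
    cleaned = np.asarray(eeg, dtype=float).copy()
    n_win = int(round(win_s * fs))
    starts = [int(round(t * fs)) for t in pulse_times
              if int(round(t * fs)) + n_win <= cleaned.size]
    template = np.mean([cleaned[s:s + n_win] for s in starts], axis=0)
    for s in starts:
        cleaned[s:s + n_win] -= template
    return cleaned
```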
Affiliation(s)
- Robin Gransier
- ExpORL, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Robert P Carlyon
- Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Matthew L Richardson
- Department of Otolaryngology, University of California at Irvine, Irvine, CA, USA
- Center for Hearing Research, University of California at Irvine, Irvine, CA, USA
- John C Middlebrooks
- Department of Otolaryngology, University of California at Irvine, Irvine, CA, USA
- Center for Hearing Research, University of California at Irvine, Irvine, CA, USA
- Departments of Neurobiology and Behavior, Biomedical Engineering, Cognitive Sciences, University of California at Irvine, Irvine, CA, USA
- Jan Wouters
- ExpORL, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
4. Arenillas-Alcón S, Ribas-Prats T, Puertollano M, Mondéjar-Segovia A, Gómez-Roig MD, Costa-Faidella J, Escera C. Prenatal daily musical exposure is associated with enhanced neural representation of speech fundamental frequency: Evidence from neonatal frequency-following responses. Dev Sci 2023; 26:e13362. PMID: 36550689; DOI: 10.1111/desc.13362.
Abstract
Fetal hearing experiences shape the linguistic and musical preferences of neonates. From the very first moments after birth, newborns prefer their native language, recognize their mother's voice, and show a greater responsiveness to lullabies presented during pregnancy. Yet, the neural underpinnings of this experience-induced plasticity have remained elusive. Here we recorded the frequency-following response (FFR), an auditory evoked potential elicited by periodic complex sounds, to show that prenatal music exposure is associated with enhanced neural encoding of speech stimulus periodicity, which relates to the perceptual experience of pitch. FFRs were recorded in a sample of 60 healthy neonates born at term and aged 12-72 hours. The sample was divided into two groups according to their prenatal musical exposure (29 daily musically exposed; 31 not daily musically exposed). Prenatal exposure was assessed retrospectively by a questionnaire in which mothers reported how often they sang or listened to music through loudspeakers during the last trimester of pregnancy. The FFR was recorded to either a /da/ or an /oa/ speech-syllable stimulus. Analyses were centered on stimulus sections of identical duration (113 ms) and fundamental frequency (F0 = 113 Hz). Neural encoding of stimulus periodicity was quantified as the FFR spectral amplitude at the stimulus F0. Data revealed that newborns exposed daily to music exhibit larger spectral amplitudes at F0 than not daily musically exposed newborns, regardless of the eliciting stimulus. Our results suggest that prenatal music exposure facilitates tuning to the human speech fundamental frequency, which may support early language processing and acquisition.
Research highlights:
- Frequency-following responses to speech were collected from a sample of neonates prenatally exposed to music daily and compared to those of neonates not exposed to music daily.
- Neonates who experienced daily prenatal music exposure exhibit enhanced frequency-following responses to the periodicity of speech sounds.
- Prenatal music exposure is associated with a fine-tuned encoding of the human speech fundamental frequency, which may facilitate early language processing and acquisition.
Affiliation(s)
- Sonia Arenillas-Alcón
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain
- Institute of Neurosciences, University of Barcelona, Catalonia, Spain
- Institut de Recerca Sant Joan de Déu, Catalonia, Spain
- Teresa Ribas-Prats
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain
- Institute of Neurosciences, University of Barcelona, Catalonia, Spain
- Institut de Recerca Sant Joan de Déu, Catalonia, Spain
- Marta Puertollano
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain
- Institute of Neurosciences, University of Barcelona, Catalonia, Spain
- Institut de Recerca Sant Joan de Déu, Catalonia, Spain
- Alejandro Mondéjar-Segovia
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain
- Institute of Neurosciences, University of Barcelona, Catalonia, Spain
- María Dolores Gómez-Roig
- Institut de Recerca Sant Joan de Déu, Catalonia, Spain
- BCNatal - Barcelona Center for Maternal Fetal and Neonatal Medicine (Hospital Sant Joan de Déu and Hospital Clínic), University of Barcelona, Catalonia, Spain
- Jordi Costa-Faidella
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain
- Institute of Neurosciences, University of Barcelona, Catalonia, Spain
- Institut de Recerca Sant Joan de Déu, Catalonia, Spain
- Carles Escera
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain
- Institute of Neurosciences, University of Barcelona, Catalonia, Spain
- Institut de Recerca Sant Joan de Déu, Catalonia, Spain
5. Tang H, Zhang S, Tian Y, Kang T, Zhou C, Yang S, Liu Y, Liu X, Chen Q, Xiao H, Chen W, Zang J. Bioinspired Soft Elastic Metamaterials for Reconstruction of Natural Hearing. Adv Sci (Weinh) 2023; 10:e2207273. PMID: 37114826; PMCID: PMC10369269; DOI: 10.1002/advs.202207273.
Abstract
Natural hearing, that is, hearing naturally as people with normal hearing do, is critical for patients with hearing loss to participate in life. Cochlear implants have enabled numerous patients with severe hearing loss to hear voices functionally, yet cochlear implant users can hardly distinguish different tones or appreciate music, owing to the absence of rate coding and an insufficient number of frequency channels. Here a bioinspired soft elastic metamaterial that reproduces the shape and key functions of the human cochlea is reported. Inspired by the human cochlea, the metamaterials are designed to possess graded microstructures with a high effective refractive index, distributed along a spiral shape, to implement position-related frequency demultiplexing, passive sound enhancement of 10 times, and high-speed parallel processing of 168-channel sound/piezoelectric signals. It is further demonstrated that this natural-hearing artificial cochlea has a fine frequency resolution of up to 30 Hz, a wide audible range from 150 to 12,000 Hz, and a considerable output voltage that can activate the auditory pathway in mice. This work blazes a promising trail for the reconstruction of natural hearing in patients with severe hearing loss.
Affiliation(s)
- Hanchuan Tang
- School of Integrated Circuits and Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
- Shujie Zhang
- College of Life Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
- Ye Tian
- School of Integrated Circuits and Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
- Tianyu Kang
- School of Integrated Circuits and Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
- Cheng Zhou
- School of Integrated Circuits and Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
- Shuaikang Yang
- School of Integrated Circuits and Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
- Ying Liu
- School of Integrated Circuits and Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
- Xurui Liu
- School of Integrated Circuits and Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
- Qicai Chen
- School of Life Sciences, Central China Normal University, Wuhan 430074, China
- Hongjun Xiao
- Department of Otorhinolaryngology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Wei Chen
- College of Life Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
- Jianfeng Zang
- School of Integrated Circuits and Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074, China
- The State Key Laboratory of Digital Manufacturing Equipment and Technology, Huazhong University of Science and Technology, Wuhan 430074, China
6. Prinz R. Nothing in evolution makes sense except in the light of code biology. Biosystems 2023; 229:104907. PMID: 37207840; DOI: 10.1016/j.biosystems.2023.104907.
Abstract
This article highlights the potential contribution of biological codes to the course and dynamics of evolution. The concept of organic codes, developed by Marcello Barbieri, has fundamentally changed our view of how living systems function. The notion that molecular interactions are built on adaptors that arbitrarily link molecules from different "worlds" in a conventional, i.e., rule-based, way departs significantly from the law-based constraints imposed on living things by physical and chemical mechanisms. In other words, living and non-living things follow rules and laws, respectively, but this important distinction is rarely considered in current evolutionary theory. The many known codes allow quantification of the codes that relate to a cell, or comparisons between different biological systems, and may pave the way to a quantitative and empirical research agenda in code biology. A starting point for such an endeavour is the introduction of a simple dichotomous classification of structural and regulatory codes. This classification can be used as a tool to analyse and quantify key organising principles of the living world, such as modularity, hierarchy, and robustness, based on organic codes. The implications for evolutionary research are related to the unique dynamics of codes, or 'Eigendynamics' (self-momentum), and how they determine the behaviour of biological systems from within, whereas physical constraints are imposed mainly from without. A speculation on the drivers of macroevolution in light of codes is followed by the conclusion that a meaningful and comprehensive understanding of evolution depends on including codes in the equation of life.
7. Gnanateja GN, Rupp K, Llanos F, Remick M, Pernia M, Sadagopan S, Teichert T, Abel TJ, Chandrasekaran B. Frequency-Following Responses to Speech Sounds Are Highly Conserved across Species and Contain Cortical Contributions. eNeuro 2021; 8:ENEURO.0451-21.2021. PMID: 34799409; PMCID: PMC8704423; DOI: 10.1523/eneuro.0451-21.2021.
Abstract
Time-varying pitch is a vital cue for human speech perception. Neural processing of time-varying pitch has been extensively assayed using scalp-recorded frequency-following responses (FFRs), an electrophysiological signal thought to reflect integrated phase-locked neural ensemble activity from subcortical auditory areas. Emerging evidence increasingly points to a putative contribution of auditory cortical ensembles to the scalp-recorded FFRs. However, the properties of cortical FFRs and precise characterization of laminar sources are still unclear. Here we used direct human intracortical recordings as well as extracranial and intracranial recordings from macaques and guinea pigs to characterize the properties of cortical sources of FFRs to time-varying pitch patterns. We found robust FFRs in the auditory cortex across all species. We leveraged representational similarity analysis as a translational bridge to characterize similarities between the human and animal models. Laminar recordings in animal models showed FFRs emerging primarily from the thalamorecipient layers of the auditory cortex. FFRs arising from these cortical sources significantly contributed to the scalp-recorded FFRs via volume conduction. Our research paves the way for a wide array of studies to investigate the role of cortical FFRs in auditory perception and plasticity.
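Representational similarity analysis, used above as the translational bridge, reduces to comparing dissimilarity structures across recording types or species. A minimal sketch with made-up array shapes and condition counts is given below; it is illustrative only, not the cited analysis itself.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """Condensed representational dissimilarity matrix: 1 - Pearson r between
    the responses to every pair of conditions (rows = conditions)."""
    return pdist(np.asarray(responses, dtype=float), metric="correlation")

# Illustrative cross-species comparison: FFR-like responses to 4 pitch patterns.
human = np.random.randn(4, 500)     # e.g., scalp FFRs, conditions x samples
animal = np.random.randn(4, 500)    # e.g., intracranial responses
rho, p = spearmanr(rdm(human), rdm(animal))
print(f"RDM correlation: rho={rho:.2f}, p={p:.3f}")
```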
Affiliation(s)
- G Nike Gnanateja
- Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Kyle Rupp
- Department of Neurological Surgery, UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Fernando Llanos
- Department of Linguistics, The University of Texas at Austin, Austin, Texas 78712
- Madison Remick
- Department of Neurological Surgery, UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Marianny Pernia
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Srivatsun Sadagopan
- Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Tobias Teichert
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Department of Psychiatry, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Taylor J Abel
- Department of Neurological Surgery, UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Bharath Chandrasekaran
- Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
8. Gransier R, Guérit F, Carlyon RP, Wouters J. Frequency following responses and rate change complexes in cochlear implant users. Hear Res 2021; 404:108200. PMID: 33647574; PMCID: PMC8052190; DOI: 10.1016/j.heares.2021.108200.
Abstract
The upper limit of rate-based pitch perception and rate discrimination can differ substantially across cochlear implant (CI) users. One potential reason for this difference is the presence of a biological limitation on temporal encoding in the electrically-stimulated auditory pathway, which can be inherent to the electrical stimulation itself and/or to the degenerative processes associated with hearing loss. Electrophysiological measures, like the electrically-evoked frequency following response (eFFR) and auditory change complex (eACC), could potentially provide valuable insights into the temporal processing limitations at the level of the brainstem and cortex in the electrically-stimulated auditory pathway. Obtaining these neural responses, free from stimulation artifacts, is challenging, especially when the neural response is phase-locked to the stimulation rate, as is the case for the eFFR. In this study we investigated the feasibility of measuring eFFRs, free from stimulation artifacts, to stimulation rates ranging from 94 to 196 pulses per second (pps) and eACCs to pulse rate changes ranging from 36 to 108%, when stimulating in a monopolar configuration. A high-sampling-rate EEG system was used to measure the electrophysiological responses in five CI users, and linear interpolation was applied to remove the stimulation artifacts from the EEG. With this approach, we were able to measure eFFRs for pulse rates up to 162 pps and eACCs to the different rate changes. Our results show that it is feasible to measure electrophysiological responses, free from stimulation artifacts, that could potentially be used as neural correlates of rate and pitch processing in CI users.
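A minimal sketch of the artifact-interpolation step is shown below: samples around each pulse are blanked and bridged by a straight line. The window length, sampling convention, and function name are illustrative assumptions, not the study's exact parameters.

```python
import numpy as np

def blank_and_interpolate(eeg, fs, pulse_times, blank_s=0.001):
    """Remove pulse artifacts by blanking a short window around each pulse
    and bridging it with a straight line between the surrounding samples.

    eeg         : 1-D recording
    fs          : sampling rate in Hz
    pulse_times : pulse onsets in seconds
    blank_s     : total duration to blank around each pulse (seconds)
    """
    cleaned = np.asarray(eeg, dtype=float).copy()
    half = max(int(round(blank_s * fs / 2)), 1)
    for t in pulse_times:
        c = int(round(t * fs))
        a, b = max(c - half, 0), min(c + half, cleaned.size - 1)
        cleaned[a:b + 1] = np.linspace(cleaned[a], cleaned[b], b - a + 1)
    return cleaned
```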
Affiliation(s)
- Robin Gransier
- KU Leuven, Department of Neurosciences, ExpORL, Herestraat 49, Box 721, Leuven 3000, Belgium
- François Guérit
- Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom
- Robert P Carlyon
- Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom
- Jan Wouters
- KU Leuven, Department of Neurosciences, ExpORL, Herestraat 49, Box 721, Leuven 3000, Belgium
9. Burton H, Reeder RM, Holden T, Agato A, Firszt JB. Cortical Regions Activated by Spectrally Degraded Speech in Adults With Single Sided Deafness or Bilateral Normal Hearing. Front Neurosci 2021; 15:618326. PMID: 33897343; PMCID: PMC8058229; DOI: 10.3389/fnins.2021.618326.
Abstract
Those with profound sensorineural hearing loss from single sided deafness (SSD) generally experience greater cognitive effort and fatigue in adverse sound environments. We studied cases with right-ear SSD and compared them to normal-hearing (NH) individuals. SSD cases were significantly less accurate in naming the last words of spectrally degraded 8- and 16-band vocoded sentences, despite high semantic predictability. Group differences were not significant for less intelligible 4-band sentences, irrespective of predictability. SSD cases also had diminished BOLD percent signal changes to these same sentences in left hemisphere (LH) cortical regions of early auditory, association auditory, inferior frontal, premotor, inferior parietal, dorsolateral prefrontal, posterior cingulate, temporal-parietal-occipital junction, and posterior opercular cortex. Cortical regions with lower amplitude responses in SSD than NH were mostly components of a LH language network previously noted as concerned with speech recognition. Recorded BOLD signal magnitudes were averages from all vertices within predefined parcels from these cortical regions. Parcels from different regions in SSD showed significantly larger signal magnitudes to sentences of greater intelligibility (e.g., 8- or 16- vs. 4-band) in all except early auditory and posterior cingulate cortex. Significantly lower response magnitudes occurred in SSD than NH in regions that prior studies found responsible for the phonetics and phonology of speech, cognitive extraction of meaning, controlled retrieval of word meaning, and semantics. The findings suggested that reduced activation of a LH fronto-temporo-parietal network in SSD contributed to difficulty processing speech for word meaning and sentence semantics. The effortful listening experienced with SSD might reflect diminished activation to degraded speech in the affected LH language network parcels. SSD showed no compensatory activity in matched right hemisphere parcels.
Affiliation(s)
- Harold Burton
- Department of Neuroscience, Washington University School of Medicine, Saint Louis, MO, United States
- Ruth M Reeder
- Department of Otolaryngology-Head and Neck Surgery, Washington University School of Medicine, Saint Louis, MO, United States
- Tim Holden
- Department of Otolaryngology-Head and Neck Surgery, Washington University School of Medicine, Saint Louis, MO, United States
- Alvin Agato
- Department of Neuroscience, Washington University School of Medicine, Saint Louis, MO, United States
- Jill B Firszt
- Department of Otolaryngology-Head and Neck Surgery, Washington University School of Medicine, Saint Louis, MO, United States
10. Flagge AG, Davis T, Henbest VS. The Contribution of Frequency Discrimination Ability to Auditory Temporal Patterning Tests in Children. J Speech Lang Hear Res 2020; 63:4314-4324. PMID: 33270483; DOI: 10.1044/2020_jslhr-20-00093.
Abstract
Purpose: The Pitch Patterns Test (PPT) and the Duration Patterns Test (DPT) are clinical auditory processing tests that evaluate temporal patterning skills based on pitch (PPT) or duration (DPT) aspects of sound. Although temporal patterning tests are categorized under the temporal processing domain, successful performance on the PPT also relies on accurate pitch discrimination. However, the relationship between pitch discrimination ability and temporal patterning skills has not been thoroughly evaluated. This study examined the contribution of pitch discrimination ability to performance on temporal patterning in children through the use of a pitch discrimination task and the PPT. The DPT was also given as a control measure to assess temporal patterning with no pitch component.
Method: Thirty-two typically developing elementary school-age children (6;11-11;3 [years;months]) with normal hearing were given a series of three counterbalanced tasks: an adaptive psychophysical pitch discrimination task (difference limen for frequency [DLF]), the PPT, and the DPT.
Results: Correlational analysis revealed moderate correlations between DLF and PPT scores. After accounting for age, results of a linear regression analysis suggested that pitch discrimination accounts for a significant amount of variance in performance on the PPT. No significant correlation was found between DLF and DPT scores, supporting the hypothesis that the pitch task had no significant temporal patterning component contributing to the overall score.
Discussion: These findings indicate that pitch discrimination contributes significantly to performance on the PPT, but not the DPT, in a typically developing pediatric population. This is an important clinical consideration in both assessment and utilization of targeted therapy techniques for different clinical populations.
Affiliation(s)
- Ashley G Flagge
- Department of Speech Pathology and Audiology, University of South Alabama, Mobile
- Tara Davis
- Department of Speech Pathology and Audiology, University of South Alabama, Mobile
- Victoria S Henbest
- Department of Speech Pathology and Audiology, University of South Alabama, Mobile
11. Gabrieli D, Schumm SN, Vigilante NF, Meaney DF. NMDA Receptor Alterations After Mild Traumatic Brain Injury Induce Deficits in Memory Acquisition and Recall. Neural Comput 2020; 33:67-95. PMID: 33253030; DOI: 10.1162/neco_a_01343.
Abstract
Mild traumatic brain injury (mTBI) presents a significant health concern with potential persisting deficits that can last decades. Although a growing body of literature improves our understanding of the brain network response and corresponding underlying cellular alterations after injury, the effects of cellular disruptions on local circuitry after mTBI are poorly understood. Our group recently reported how mTBI in neuronal networks affects the functional wiring of neural circuits and how neuronal inactivation influences the synchrony of coupled microcircuits. Here, we utilized a computational neural network model to investigate the circuit-level effects of N-methyl D-aspartate receptor dysfunction. The initial increase in activity in injured neurons spreads to downstream neurons, but this increase was partially reduced by restructuring the network with spike-timing-dependent plasticity. As a model of network-based learning, we also investigated how injury alters pattern acquisition, recall, and maintenance of a conditioned response to stimulus. Although pattern acquisition and maintenance were impaired in injured networks, the greatest deficits arose in recall of previously trained patterns. These results demonstrate how one specific mechanism of cellular-level damage in mTBI affects the overall function of a neural network and point to the importance of reversing cellular-level changes to recover important properties of learning and memory in a microcircuit.
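The network restructuring mentioned above relies on spike-timing-dependent plasticity (STDP). For readers unfamiliar with it, a generic pair-based STDP rule is sketched below with textbook-style constants; these are illustrative assumptions, not the parameters of the cited model.

```python
import numpy as np

def stdp_weight_change(delta_t, a_plus=0.01, a_minus=0.012,
                       tau_plus=0.020, tau_minus=0.020):
    """Pair-based STDP: synaptic weight change as a function of the spike-time
    difference delta_t = t_post - t_pre (seconds). Positive differences
    (pre before post) potentiate, negative differences depress."""
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t >= 0,
                    a_plus * np.exp(-delta_t / tau_plus),
                    -a_minus * np.exp(delta_t / tau_minus))

print(stdp_weight_change([0.005, -0.005]))  # small potentiation vs. depression
```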
Affiliation(s)
- David Gabrieli
- Department of Bioengineering, School of Engineering and Applied Sciences, University of Pennsylvania, Philadelphia, PA 19104, U.S.A.
- Samantha N Schumm
- Department of Bioengineering, School of Engineering and Applied Sciences, University of Pennsylvania, Philadelphia, PA 19104, U.S.A.
- Nicholas F Vigilante
- Department of Bioengineering, School of Engineering and Applied Sciences, University of Pennsylvania, Philadelphia, PA 19104, U.S.A.
- David F Meaney
- Department of Bioengineering, School of Engineering and Applied Sciences, and Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, U.S.A.
12. Anderson SR, Glickman B, Oh Y, Reiss LAJ. Binaural pitch fusion: Effects of sound level in listeners with normal hearing. Hear Res 2020; 396:108067. PMID: 32961518; DOI: 10.1016/j.heares.2020.108067.
Abstract
Pitch is an important cue that allows the auditory system to distinguish between sound sources. Pitch cues are less useful when listeners are not able to discriminate different pitches between the two ears, a problem encountered by listeners with hearing impairment (HI). Many listeners with HI will fuse the pitch of two dichotically presented tones over a larger range of interaural frequency disparities, i.e., have a broader fusion range, than listeners with normal hearing (NH). One potential explanation for broader fusion in listeners with HI is that hearing aids stimulate at high sound levels. The present study investigated effects of overall sound levels on pitch fusion in listeners with NH. It was hypothesized that if sound level increased, then fusion range would increase. Fusion ranges were measured by presenting a fixed frequency tone to a reference ear simultaneously with a variable frequency tone to the opposite ear and finding the range of frequencies that were fused with the reference frequency. No significant effects of sound level (comfortable level ± 15 dB) on fusion range were found, even when tested within the range of levels where some listeners with HI show large fusion ranges. Results suggest that increased sound level does not explain increased fusion range in listeners with HI and imply that other factors associated with hearing loss might play a larger role.
Affiliation(s)
- Sean R Anderson
- Oregon Health and Science University, Portland, OR 97239, United States
- Bess Glickman
- Oregon Health and Science University, Portland, OR 97239, United States
- Yonghee Oh
- Oregon Health and Science University, Portland, OR 97239, United States
- Lina A J Reiss
- Oregon Health and Science University, Portland, OR 97239, United States
13. Hardy CJD, Yong KXX, Goll JC, Crutch SJ, Warren JD. Impairments of auditory scene analysis in posterior cortical atrophy. Brain 2020; 143:2689-2695. PMID: 32875326; PMCID: PMC7523698; DOI: 10.1093/brain/awaa221.
Abstract
Although posterior cortical atrophy is often regarded as the canonical 'visual dementia', auditory symptoms may also be salient in this disorder. Patients often report particular difficulty hearing in busy environments; however, the core cognitive process-parsing of the auditory environment ('auditory scene analysis')-has been poorly characterized. In this cross-sectional study, we used customized perceptual tasks to assess two generic cognitive operations underpinning auditory scene analysis-sound source segregation and sound event grouping-in a cohort of 21 patients with posterior cortical atrophy, referenced to 15 healthy age-matched individuals and 21 patients with typical Alzheimer's disease. After adjusting for peripheral hearing function and performance on control tasks assessing perceptual and executive response demands, patients with posterior cortical atrophy performed significantly worse on both auditory scene analysis tasks relative to healthy controls and patients with typical Alzheimer's disease (all P < 0.05). Our findings provide further evidence of central auditory dysfunction in posterior cortical atrophy, with implications for our pathophysiological understanding of Alzheimer syndromes as well as clinical diagnosis and management.
Affiliation(s)
- Chris J D Hardy
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London, UK
- Keir X X Yong
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London, UK
- Johanna C Goll
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London, UK
- Sebastian J Crutch
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London, UK
- Jason D Warren
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London, UK
14. Hechavarría JC, Jerome Beetz M, García-Rosales F, Kössl M. Bats distress vocalizations carry fast amplitude modulations that could represent an acoustic correlate of roughness. Sci Rep 2020; 10:7332. PMID: 32355293; PMCID: PMC7192923; DOI: 10.1038/s41598-020-64323-7.
Abstract
Communication sounds are ubiquitous in the animal kingdom, where they play a role in advertising physiological states and/or socio-contextual scenarios. Human screams, for example, are typically uttered in fearful contexts and have a distinctive feature termed "roughness", which refers to amplitude fluctuations at rates of 30-150 Hz. In this article, we report that the occurrence of fast acoustic periodicities in harsh-sounding vocalizations is not unique to humans. A roughness-like structure is also present in vocalizations emitted by bats (species Carollia perspicillata) in distressful contexts. We report that 47.7% of distress calls produced by bats carry amplitude fluctuations at rates of ~1.7 kHz (>10 times faster than the temporal modulations found in human screams). In bats, rough-like vocalizations entrain brain potentials and are more effective in accelerating the bats' heart rate than slow amplitude-modulated sounds. Our results are consistent with a putative role of fast amplitude modulations (roughness in humans) in grabbing the listener's attention in situations in which the emitter is in distressful, potentially dangerous contexts.
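Amplitude-fluctuation rates of the kind reported above are commonly estimated from the spectrum of a sound's amplitude envelope. The sketch below shows one generic way to do this with scipy's Hilbert transform; the function name and parameter choices are illustrative, not the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import hilbert

def dominant_modulation_rate(call, fs, max_rate=5000.0):
    """Dominant amplitude-modulation rate of a vocalization, estimated as the
    peak of the spectrum of its Hilbert amplitude envelope.

    call     : 1-D waveform
    fs       : sampling rate in Hz
    max_rate : highest modulation rate (Hz) considered
    """
    envelope = np.abs(hilbert(np.asarray(call, dtype=float)))
    envelope -= envelope.mean()                   # discard the DC component
    spectrum = np.abs(np.fft.rfft(envelope))
    freqs = np.fft.rfftfreq(envelope.size, d=1.0 / fs)
    keep = (freqs > 0) & (freqs <= max_rate)
    return freqs[keep][np.argmax(spectrum[keep])]
```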
Affiliation(s)
- Julio C Hechavarría
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Frankfurt/M., Germany
- M Jerome Beetz
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Frankfurt/M., Germany
- Zoology II Emmy-Noether Animal Navigation Group, Biocenter, University of Würzburg, Würzburg, Germany
- Manfred Kössl
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, Frankfurt/M., Germany
15. Zulfiqar I, Moerel M, Formisano E. Spectro-Temporal Processing in a Two-Stream Computational Model of Auditory Cortex. Front Comput Neurosci 2020; 13:95. PMID: 32038212; PMCID: PMC6987265; DOI: 10.3389/fncom.2019.00095.
Abstract
Neural processing of sounds in the dorsal and ventral streams of the (human) auditory cortex is optimized for analyzing fine-grained temporal and spectral information, respectively. Here we use a Wilson and Cowan firing-rate modeling framework to simulate spectro-temporal processing of sounds in these auditory streams and to investigate the link between neural population activity and behavioral results of psychoacoustic experiments. The proposed model consisted of two core (A1 and R, representing primary areas) and two belt (Slow and Fast, representing rostral and caudal processing respectively) areas, differing in terms of their spectral and temporal response properties. First, we simulated the responses to amplitude modulated (AM) noise and tones. In agreement with electrophysiological results, we observed an area-dependent transition from a temporal (synchronization) to a rate code when moving from low to high modulation rates. Simulated neural responses in a task of amplitude modulation detection suggested that thresholds derived from population responses in core areas closely resembled those of psychoacoustic experiments in human listeners. For tones, simulated modulation threshold functions were found to be dependent on the carrier frequency. Second, we simulated the responses to complex tones with missing fundamental stimuli and found that synchronization of responses in the Fast area accurately encoded pitch, with the strength of synchronization depending on number and order of harmonic components. Finally, using speech stimuli, we showed that the spectral and temporal structure of the speech was reflected in parallel by the modeled areas. The analyses highlighted that the Slow stream coded with high spectral precision the aspects of the speech signal characterized by slow temporal changes (e.g., prosody), while the Fast stream encoded primarily the faster changes (e.g., phonemes, consonants, temporal pitch). Interestingly, the pitch of a speaker was encoded both spatially (i.e., tonotopically) in Slow area and temporally in Fast area. Overall, performed simulations showed that the model is valuable for generating hypotheses on how the different cortical areas/streams may contribute toward behaviorally relevant aspects of auditory processing. The model can be used in combination with physiological models of neurovascular coupling to generate predictions for human functional MRI experiments.
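The model described above is built from Wilson and Cowan firing-rate units. A minimal single excitatory-inhibitory pair, integrated with Euler steps and generic textbook-style parameters (not the paper's values or its multi-area architecture), is sketched below.

```python
import numpy as np

def sigmoid(x, slope=1.0, threshold=4.0):
    """Saturating firing-rate nonlinearity."""
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

def wilson_cowan(p_input=1.5, dt=1e-4, steps=20000, tau_e=0.010, tau_i=0.020,
                 w_ee=16.0, w_ei=12.0, w_ie=15.0, w_ii=3.0):
    """Euler integration of one excitatory-inhibitory Wilson-Cowan pair driven
    by a constant input p_input. All coupling and time constants are generic
    illustrative values, not those of the cited two-stream model."""
    e, i = 0.1, 0.05
    e_trace = np.empty(steps)
    for n in range(steps):
        de = (-e + sigmoid(w_ee * e - w_ei * i + p_input)) / tau_e
        di = (-i + sigmoid(w_ie * e - w_ii * i)) / tau_i
        e, i = e + dt * de, i + dt * di
        e_trace[n] = e
    return e_trace

print(wilson_cowan()[-3:])   # late samples of the excitatory firing rate
```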
Affiliation(s)
- Isma Zulfiqar
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, Netherlands
- Michelle Moerel
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, Netherlands
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Maastricht Brain Imaging Center, Maastricht, Netherlands
- Elia Formisano
- Maastricht Centre for Systems Biology, Maastricht University, Maastricht, Netherlands
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Maastricht Brain Imaging Center, Maastricht, Netherlands
16. Knott V, Wright N, Shah D, Baddeley A, Bowers H, de la Salle S, Labelle A. Change in the Neural Response to Auditory Deviance Following Cognitive Therapy for Hallucinations in Patients With Schizophrenia. Front Psychiatry 2020; 11:555. PMID: 32595542; PMCID: PMC7304235; DOI: 10.3389/fpsyt.2020.00555.
Abstract
Adjunctive psychotherapeutic approaches recommended for patients with schizophrenia (SZ) who are fully or partially resistant to pharmacotherapy have rarely utilized biomarkers to enhance the understanding of treatment-effective mechanisms. As SZ patients with persistent auditory verbal hallucinations (AVH) frequently evidence reduced neural responsiveness to external auditory stimulation, which may impact cognitive and functional outcomes, this study examined the effects of cognitive behavioral therapy for voices (CBTv) on clinical and AVH symptoms and the sensory processing of auditory deviants as measured with the electroencephalographically derived mismatch negativity (MMN) response. Twenty-four patients with SZ and AVH were randomly assigned to group CBTv treatment or a treatment as usual (TAU) condition. Patients in the group CBTv condition received treatment for 5 months while the matched control patients received TAU for the same period, followed by 5 months of group CBTv. Assessments were conducted at baseline and at the end of treatment. Although not showing consistent changes in the frequency of AVHs, CBTv (vs. TAU) improved patients' appraisal (p = 0.001) of and behavioral/emotional responses to AVHs, and increased both MMN generation (p = 0.001) and auditory cortex current density (p = 0.002) in response to tone pitch deviants. Improvements in AVH symptoms were correlated with change in pitch deviant MMN and current density in left primary auditory cortex. These findings of improved auditory information processing and symptom-response attributable to CBTv suggest potential clinical and functional benefits of psychotherapeutical approaches for patients with persistent AVHs.
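The MMN measures referred to above are derived from the deviant-minus-standard difference wave. A minimal sketch of that computation follows; the latency window and argument names are typical illustrative choices, not necessarily those used in the study.

```python
import numpy as np

def mmn_amplitude(standard_erp, deviant_erp, fs, onset_s, window=(0.100, 0.200)):
    """Mean amplitude of the deviant-minus-standard difference wave within a
    typical MMN latency window (seconds after stimulus onset).

    standard_erp, deviant_erp : 1-D trial-averaged ERPs from the same channel
    fs      : sampling rate in Hz
    onset_s : stimulus onset relative to the start of the epoch (seconds)
    """
    diff = np.asarray(deviant_erp, dtype=float) - np.asarray(standard_erp, dtype=float)
    times = np.arange(diff.size) / fs - onset_s
    mask = (times >= window[0]) & (times <= window[1])
    return diff[mask].mean()
```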
Affiliation(s)
- Verner Knott
- School of Psychology, University of Ottawa, Ottawa, ON, Canada
- Clinical Neuroelectrophysiology and Cognitive Research Laboratory, University of Ottawa Institute of Mental Health Research, Ottawa, ON, Canada
- Department of Psychiatry, University of Ottawa, Ottawa, ON, Canada
- Nicola Wright
- Schizophrenia Program, The Royal Ottawa Mental Health Centre, Ottawa, ON, Canada
- Dhrasti Shah
- School of Psychology, University of Ottawa, Ottawa, ON, Canada
- Ashley Baddeley
- Clinical Neuroelectrophysiology and Cognitive Research Laboratory, University of Ottawa Institute of Mental Health Research, Ottawa, ON, Canada
- Hayley Bowers
- Schizophrenia Program, The Royal Ottawa Mental Health Centre, Ottawa, ON, Canada
- Sara de la Salle
- School of Psychology, University of Ottawa, Ottawa, ON, Canada
- Clinical Neuroelectrophysiology and Cognitive Research Laboratory, University of Ottawa Institute of Mental Health Research, Ottawa, ON, Canada
- Alain Labelle
- Department of Psychiatry, University of Ottawa, Ottawa, ON, Canada
- Schizophrenia Program, The Royal Ottawa Mental Health Centre, Ottawa, ON, Canada
17. Mollaei F, Shiller DM, Baum SR, Gracco VL. The Relationship Between Speech Perceptual Discrimination and Speech Production in Parkinson's Disease. J Speech Lang Hear Res 2019; 62:4256-4268. PMID: 31738857; DOI: 10.1044/2019_jslhr-s-18-0425.
Abstract
Purpose: We recently demonstrated that individuals with Parkinson's disease (PD) respond differentially to specific altered auditory feedback parameters during speech production. Participants with PD respond more robustly to pitch and less robustly to formant manipulations compared to control participants. In this study, we investigated whether differences in perceptual processing may in part underlie these compensatory differences in speech production.
Methods: Pitch and formant feedback manipulations were presented under 2 conditions: production and listening. In the production condition, 15 participants with PD and 15 age- and gender-matched healthy control participants judged whether their own speech output was manipulated in real time. During the listening task, participants judged whether paired tokens of their previously recorded speech samples were the same or different.
Results: Under listening, 1st formant manipulation discrimination was significantly reduced for the PD group compared to the control group. There was a trend toward better discrimination of pitch in the PD group, but the group difference was not significant. Under the production condition, the ability of participants with PD to identify pitch manipulations was greater than that of the controls.
Conclusion: The findings suggest perceptual processing differences associated with acoustic parameters of fundamental frequency and 1st formant perturbations in PD. These findings extend our previous results, indicating that different patterns of compensation to pitch and 1st formant shifts may reflect a combination of sensory and motor mechanisms that are differentially influenced by basal ganglia dysfunction.
Affiliation(s)
- Fatemeh Mollaei
- Centre for Research on Brain, Language and Music, Montréal, Quebec, Canada
- School of Communication Sciences and Disorders, McGill University, Montréal, Quebec, Canada
- Douglas M Shiller
- Centre for Research on Brain, Language and Music, Montréal, Quebec, Canada
- École d'orthophonie et d'audiologie, Université de Montréal, Quebec, Canada
- Shari R Baum
- Centre for Research on Brain, Language and Music, Montréal, Quebec, Canada
- School of Communication Sciences and Disorders, McGill University, Montréal, Quebec, Canada
- Vincent L Gracco
- Centre for Research on Brain, Language and Music, Montréal, Quebec, Canada
- School of Communication Sciences and Disorders, McGill University, Montréal, Quebec, Canada
- Haskins Laboratories, New Haven, CT
18. Gu F, Wong L, Hu A, Zhang X, Tong X. A lateral inhibition mechanism explains the dissociation between mismatch negativity and behavioral pitch discrimination. Brain Res 2019; 1720:146308. PMID: 31247205; DOI: 10.1016/j.brainres.2019.146308.
Abstract
Although mismatch negativity (MMN), a change-specific component of auditory event-related potential, is considered to be an index of sound discrimination accuracy, the amplitude of the MMN responses elicited by pitch height deviations in musicians and tone language speakers with superior pitch discrimination is usually not enhanced compared to that elicited in individuals with inferior pitch discrimination. We hypothesized that superior pitch discrimination is accompanied by enhanced lateral inhibition, a critical neural mechanism that sharpens the tuning curves of the auditory neurons in the tonotopy. Forty Mandarin-speaking healthy adults completed an auditory EEG experiment in which MMN was elicited by pitch height deviations in both pure and harmonic tones. Their behavioral pitch discrimination was indexed by the difference limens measured using pure and harmonic tones. Behavioral pitch discrimination correlated significantly with the MMN elicited by pure tones, but not by harmonic tones; this could be due to lateral inhibition strongly influencing the MMN elicited by harmonic tones but having less effect on the MMN elicited by pure tones. As lateral inhibition is a neural mechanism for attenuating the amplitude of MMN, our results support the notion that an enhanced lateral inhibition mechanism underlies superior pitch discrimination.
Affiliation(s)
- Feng Gu
- Division of Speech and Hearing Sciences, The University of Hong Kong, Hong Kong
- Lena Wong
- Division of Speech and Hearing Sciences, The University of Hong Kong, Hong Kong
- Axu Hu
- Key Lab of China's National Linguistic Information Technology, Northwest Minzu University, Lanzhou, China
- Xiaochu Zhang
- CAS Key Laboratory of Brain Function and Diseases, School of Life Sciences, University of Science and Technology of China, Hefei, China
- Xiuli Tong
- Division of Speech and Hearing Sciences, The University of Hong Kong, Hong Kong
19. Direct electrophysiological mapping of human pitch-related processing in auditory cortex. Neuroimage 2019; 202:116076. PMID: 31401239; DOI: 10.1016/j.neuroimage.2019.116076.
Abstract
This work sought correlates of pitch perception, defined by neural activity above the lower limit of pitch (LLP), in auditory cortical neural ensembles, and examined their topographical distribution. Local field potentials (LFPs) were recorded in eight patients undergoing invasive recordings for pharmaco-resistant epilepsy. Stimuli consisted of bursts of broadband noise followed by regular interval noise (RIN). RIN was presented at rates below and above the LLP to distinguish responses related to the regularity of the stimulus and the presence of pitch itself. LFPs were recorded from human cortical homologues of auditory core, belt, and parabelt regions using multicontact depth electrodes implanted in Heschl's gyrus (HG) and Planum Temporale (PT), and subdural grid electrodes implanted over lateral superior temporal gyrus (STG). Evoked responses corresponding to the temporal regularity of the stimulus were assessed using autocorrelation of the evoked responses, and occurred for stimuli below and above the LLP. Induced responses throughout the high gamma range (60-200 Hz) were present for pitch values above the LLP, with onset latencies of approximately 70 ms. Mapping of the induced responses onto a common brain space demonstrated variability in the topographical distribution of high gamma responses across subjects. Induced responses were present throughout the length of HG and on PT, which is consistent with previous functional neuroimaging studies. Moreover, in each subject, a region within lateral STG showed robust induced responses at pitch-evoking stimulus rates. This work suggests a distributed representation of pitch processing in neural ensembles in human homologues of core and non-core auditory cortex.
Collapse
|
20
|
Peng F, Innes-Brown H, McKay CM, Fallon JB, Zhou Y, Wang X, Hu N, Hou W. Temporal Coding of Voice Pitch Contours in Mandarin Tones. Front Neural Circuits 2018; 12:55. [PMID: 30087597 PMCID: PMC6066958 DOI: 10.3389/fncir.2018.00055] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2017] [Accepted: 06/27/2018] [Indexed: 11/13/2022] Open
Abstract
Accurate perception of time-variant pitch is important for speech recognition, particularly for tonal languages such as Mandarin, in which different lexical tones convey different semantic information. Previous studies reported that the auditory nerve and cochlear nucleus can encode different pitches through phase-locked neural activities. However, little is known about how the inferior colliculus (IC) encodes the time-variant periodicity pitch of natural speech. In this study, the Mandarin syllable /ba/ pronounced with four lexical tones (flat, rising, falling then rising, and falling) was used as the stimulus set. Local field potentials (LFPs) and single-neuron activity were simultaneously recorded from 90 sites within the contralateral IC of six urethane-anesthetized and decerebrate guinea pigs in response to the four stimuli. Analysis of the temporal information of the LFPs showed that 93% of the LFPs exhibited robust encoding of periodicity pitch. Pitch strength of the LFPs, derived from the autocorrelogram, was significantly (p < 0.001) stronger for rising tones than for flat and falling tones. Pitch strength also increased significantly (p < 0.05) with characteristic frequency (CF). On the other hand, only 47% (42 of 90) of single-neuron activities were significantly synchronized to the fundamental frequency of the stimulus, suggesting that the temporal spiking patterns of single IC neurons do not robustly encode the time-variant periodicity pitch of speech. The difference between the number of LFPs and the number of single neurons that encode the time-variant F0 voice pitch supports the notion of a transition at the level of the IC from direct temporal coding in the spike trains of individual neurons to other forms of neural representation.
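A standard way to quantify the spike synchronization mentioned above is vector strength; the sketch below is a generic illustration with placeholder spike times and an assumed F0, not the study's recordings or its exact metric.

```python
# Minimal sketch (assumed data): how strongly a single neuron's spikes
# synchronize to the stimulus fundamental frequency, via vector strength,
# with a Rayleigh statistic for significance.
import numpy as np

f0 = 120.0                                   # Hz, assumed voice fundamental
rng = np.random.default_rng(2)
# Placeholder spike train over a 400 ms stimulus
spike_times = np.sort(rng.uniform(0, 0.4, 200))

phases = 2 * np.pi * f0 * spike_times        # phase of each spike within the F0 cycle
vector_strength = np.abs(np.mean(np.exp(1j * phases)))
rayleigh_stat = 2 * spike_times.size * vector_strength ** 2  # ~chi2(2) under H0

print(f"Vector strength = {vector_strength:.2f}, Rayleigh 2nR^2 = {rayleigh_stat:.1f}")
```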
Collapse
Affiliation(s)
- Fei Peng
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China
- Collaborative Innovation Center for Brain Science, Chongqing University, Chongqing, China
| | - Hamish Innes-Brown
- Bionics Institute, East Melbourne, VIC, Australia
- Department of Medical Bionics, University of Melbourne, Melbourne, VIC, Australia
| | - Colette M. McKay
- Bionics Institute, East Melbourne, VIC, Australia
- Department of Medical Bionics, University of Melbourne, Melbourne, VIC, Australia
| | - James B. Fallon
- Bionics Institute, East Melbourne, VIC, Australia
- Department of Medical Bionics, University of Melbourne, Melbourne, VIC, Australia
- Department of Otolaryngology, University of Melbourne, Melbourne, VIC, Australia
| | - Yi Zhou
- Chongqing Key Laboratory of Neurobiology, Department of Neurobiology, Third Military Medical University, Chongqing, China
| | - Xing Wang
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China
- Chongqing Medical Electronics Engineering Technology Research Center, Chongqing University, Chongqing, China
| | - Ning Hu
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China
- Collaborative Innovation Center for Brain Science, Chongqing University, Chongqing, China
| | - Wensheng Hou
- Key Laboratory of Biorheological Science and Technology of Ministry of Education, Bioengineering College of Chongqing University, Chongqing, China
- Collaborative Innovation Center for Brain Science, Chongqing University, Chongqing, China
- Chongqing Medical Electronics Engineering Technology Research Center, Chongqing University, Chongqing, China
| |
Collapse
|
21
|
Shoemaker JK, Klassen SA, Badrov MB, Fadel PJ. Fifty years of microneurography: learning the language of the peripheral sympathetic nervous system in humans. J Neurophysiol 2018; 119:1731-1744. [PMID: 29412776 DOI: 10.1152/jn.00841.2017] [Citation(s) in RCA: 45] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
As a primary component of homeostasis, the sympathetic nervous system enables rapid adjustments to stress through its ability to communicate messages among organs and cause targeted and graded end organ responses. Key in this communication model is the pattern of neural signals emanating from the central to peripheral components of the sympathetic nervous system. But what is the communication strategy employed in peripheral sympathetic nerve activity (SNA)? Can we develop and interpret the system of coding in SNA that improves our understanding of the neural control of the circulation? In 1968, Hagbarth and Vallbo (Hagbarth KE, Vallbo AB. Acta Physiol Scand 74: 96-108, 1968) reported the first use of microneurographic methods to record sympathetic discharges in peripheral nerves of conscious humans, allowing quantification of SNA at rest and sympathetic responsiveness to physiological stressors in health and disease. This technique also has enabled a growing investigation into the coding patterns within, and cardiovascular outcomes associated with, postganglionic SNA. This review outlines how results obtained by microneurographic means have improved our understanding of SNA outflow patterns at the action potential level, focusing on SNA directed toward skeletal muscle in conscious humans.
Collapse
Affiliation(s)
- J Kevin Shoemaker
- School of Kinesiology, University of Western Ontario, London, Ontario, Canada
| | - Stephen A Klassen
- School of Kinesiology, University of Western Ontario, London, Ontario, Canada
| | - Mark B Badrov
- School of Kinesiology, University of Western Ontario, London, Ontario, Canada
| | - Paul J Fadel
- Department of Kinesiology, University of Texas at Arlington, Arlington, Texas
| |
Collapse
|
22
|
Nozaradan S, Keller PE, Rossion B, Mouraux A. EEG Frequency-Tagging and Input-Output Comparison in Rhythm Perception. Brain Topogr 2017; 31:153-160. [PMID: 29127530 DOI: 10.1007/s10548-017-0605-8] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2017] [Accepted: 10/27/2017] [Indexed: 01/23/2023]
Abstract
The combination of frequency-tagging with electroencephalography (EEG) has recently proved fruitful for understanding the perception of beat and meter in musical rhythm, a common behavior shared by humans of all cultures. EEG frequency-tagging allows the objective measurement of input-output transforms to investigate beat perception, its modulation by exogenous and endogenous factors, development, and neural basis. Recent doubt has been raised about the validity of comparing frequency-domain representations of auditory rhythmic stimuli with the corresponding EEG responses, on the grounds that such a comparison assumes a one-to-one mapping between the envelope of the rhythmic input and the neural output and neglects the sensitivity of frequency-domain representations to the acoustic features making up the rhythms. Here we argue that these elements actually reinforce the strengths of the approach. The obvious fact that acoustic features influence the frequency spectrum of the sound envelope precisely justifies taking into consideration the sounds used to generate a beat percept for interpreting neural responses to auditory rhythms. Most importantly, the many-to-one relationship between rhythmic input and perceived beat actually validates an approach that objectively measures the input-output transforms underlying the perceptual categorization of rhythmic inputs. Hence, provided that a number of potential pitfalls and fallacies are avoided, EEG frequency-tagging to study input-output relationships appears valuable for understanding rhythm perception.
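For readers unfamiliar with the frequency-tagging readout itself, a minimal sketch follows: the FFT amplitude of an EEG epoch is read at a beat-related frequency and corrected by the mean amplitude of neighboring bins as a noise estimate. The sampling rate, epoch length, beat frequency, and the choice of neighboring bins are assumptions for illustration only.

```python
# Minimal sketch (assumed parameters): frequency-tagging readout from a
# synthetic EEG epoch with a noise correction from neighboring FFT bins.
import numpy as np

fs = 512.0                          # Hz, assumed EEG sampling rate
dur = 32.0                          # s, assumed epoch length
beat_hz = 2.4                       # assumed beat-related frequency
t = np.arange(0, dur, 1 / fs)

rng = np.random.default_rng(3)
eeg = 0.5e-6 * np.sin(2 * np.pi * beat_hz * t) + rng.normal(0, 2e-6, t.size)

amp = np.abs(np.fft.rfft(eeg)) / t.size * 2       # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

target = np.argmin(np.abs(freqs - beat_hz))
# Noise estimate from the 2nd-8th neighboring bins on each side (assumed choice)
neighbors = np.r_[amp[target - 8:target - 1], amp[target + 2:target + 9]]
noise_corrected = amp[target] - neighbors.mean()
print(f"Noise-corrected amplitude at {beat_hz} Hz: {noise_corrected * 1e6:.3f} uV")
```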
Collapse
Affiliation(s)
- Sylvie Nozaradan
- The MARCS Institute for Brain, Behaviour and Development (WSU), Sydney, NSW, Australia; Institute of Neuroscience (Ions), Université catholique de Louvain (UCL), Brussels, Belgium; International Laboratory for Brain, Music and Sound Research (Brams), Montreal, QC, Canada; MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Locked Bag 1797, Penrith, NSW, 2751, Australia.
| | - Peter E Keller
- The MARCS Institute for Brain, Behaviour and Development (WSU), Sydney, NSW, Australia
| | - Bruno Rossion
- Institute of Neuroscience (Ions), Université catholique de Louvain (UCL), Brussels, Belgium; Neurology Unit, Centre Hospitalier Régional Universitaire (CHRU) de Nancy, Nancy, France
| | - André Mouraux
- Institute of Neuroscience (Ions), Université catholique de Louvain (UCL), Brussels, Belgium
| |
Collapse
|
23
|
Shoemaker JK. Recruitment strategies in efferent sympathetic nerve activity. Clin Auton Res 2017; 27:369-378. [DOI: 10.1007/s10286-017-0459-x] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2017] [Accepted: 08/09/2017] [Indexed: 12/13/2022]
|
24
|
Stock AK, Dajkic D, Köhling HL, von Heinegg EH, Fiedler M, Beste C. Humans with latent toxoplasmosis display altered reward modulation of cognitive control. Sci Rep 2017; 7:10170. [PMID: 28860577 PMCID: PMC5579228 DOI: 10.1038/s41598-017-10926-6] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2017] [Accepted: 08/17/2017] [Indexed: 12/20/2022] Open
Abstract
Latent infection with Toxoplasma gondii has repeatedly been shown to be associated with behavioral changes that are commonly attributed to a presumed increase in dopaminergic signaling. Yet, virtually nothing is known about its effects on dopamine-driven reward processing. We therefore assessed behavior and event-related potentials in individuals with vs. without latent toxoplasmosis performing a rewarded control task. The data show that otherwise healthy young adults with latent toxoplasmosis show a greatly diminished response to monetary rewards as compared to their non-infected counterparts. While this selective effect eliminated a toxoplasmosis-induced speed advantage previously observed for non-rewarded behavior, Toxo-positive subjects could still be demonstrated to be superior to Toxo-negative subjects with respect to response accuracy. Event-related potential (ERP) and source localization analyses revealed that this advantage during rewarded behavior was based on increased allocation of processing resources reflected by larger visual late positive component (LPC) amplitudes and associated activity changes in the right temporo-parietal junction (BA40) and left auditory cortex (BA41). Taken together, individuals with latent toxoplasmosis show superior behavioral performance in challenging cognitive control situations but may at the same time have a reduced sensitivity towards motivational effects of rewards, which might be explained by the presumed increase in dopamine.
Collapse
Affiliation(s)
- Ann-Kathrin Stock
- Cognitive Neurophysiology, Department of Child and Adolescent Psychiatry, Faculty of Medicine of the TU Dresden, Schubertstr. 42, 01307, Dresden, Germany.
| | - Danica Dajkic
- Cognitive Neurophysiology, Department of Child and Adolescent Psychiatry, Faculty of Medicine of the TU Dresden, Schubertstr. 42, 01307, Dresden, Germany
| | - Hedda Luise Köhling
- Institute of Medical Microbiology, University Hospital Essen, University of Duisburg-Essen, Virchowstr. 179, 45147, Essen, Germany
| | - Evelyn Heintschel von Heinegg
- Institute of Medical Microbiology, University Hospital Essen, University of Duisburg-Essen, Virchowstr. 179, 45147, Essen, Germany
| | - Melanie Fiedler
- Institute of Virology, University Hospital, University of Duisburg-Essen, Virchowstr. 179, 45147, Essen, Germany
| | - Christian Beste
- Cognitive Neurophysiology, Department of Child and Adolescent Psychiatry, Faculty of Medicine of the TU Dresden, Schubertstr. 42, 01307, Dresden, Germany; Experimental Neurobiology, National Institute of Mental Health, Klecany, Czech Republic
| |
Collapse
|
25
|
Rufener KS, Ruhnau P, Heinze HJ, Zaehle T. Transcranial Random Noise Stimulation (tRNS) Shapes the Processing of Rapidly Changing Auditory Information. Front Cell Neurosci 2017. [PMID: 28642686 PMCID: PMC5463504 DOI: 10.3389/fncel.2017.00162] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022] Open
Abstract
Neural oscillations in the gamma range are the dominant rhythmic activation pattern in the human auditory cortex. These gamma oscillations are functionally relevant for the processing of rapidly changing acoustic information in both speech and non-speech sounds. Accordingly, there is a tight link between the temporal resolution ability of the auditory system and inherent neural gamma oscillations. Transcranial random noise stimulation (tRNS) has been demonstrated to specifically increase gamma oscillation in the human auditory cortex. However, neither the physiological mechanisms of tRNS nor the behavioral consequences of this intervention are completely understood. In the present study we stimulated the human auditory cortex bilaterally with tRNS while EEG was continuously measured. Modulations in the participants’ temporal and spectral resolution ability were investigated by means of a gap detection task and a pitch discrimination task. Compared to sham, auditory tRNS increased the detection rate for near-threshold stimuli in the temporal domain only, while no such effect was present for the discrimination of spectral features. Behavioral findings were paralleled by reduced peak latencies of the P50 and N1 components of the auditory event-related potentials (ERPs), indicating an impact on early sensory processing. The facilitating effect of tRNS was limited to the processing of near-threshold stimuli, while stimuli clearly below and above the individual perception threshold were not affected by tRNS. This non-linear relationship between the signal-to-noise level of the presented stimuli and the effect of stimulation further qualifies stochastic resonance (SR) as the underlying mechanism of tRNS on auditory processing. Our results demonstrate a tRNS-related improvement in the perception of time-critical auditory information and thus provide further indication that auditory tRNS can amplify activity at the resonance frequency of the auditory system.
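The temporal-resolution measure referred to above (gap detection near threshold) is often estimated psychophysically with an adaptive procedure; the sketch below simulates a generic 2-down/1-up staircase with a toy observer. It is not the study's paradigm; step sizes, the starting gap, and the observer model are assumptions.

```python
# Minimal sketch (simulated observer): a generic 2-down/1-up staircase for a
# gap-detection threshold. All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(9)
true_threshold_ms = 4.0                 # simulated listener's gap threshold

def observer_detects(gap_ms):
    """Detection probability rises smoothly around the true threshold."""
    p = 1 / (1 + np.exp(-(gap_ms - true_threshold_ms)))
    return rng.random() < p

gap = 20.0                              # ms, starting gap duration (assumed)
step = 2.0                              # ms, step size (assumed)
reversals, correct_in_row, direction = [], 0, -1

while len(reversals) < 8:
    if observer_detects(gap):
        correct_in_row += 1
        if correct_in_row == 2:         # 2-down: make the task harder
            correct_in_row = 0
            if direction == +1:
                reversals.append(gap)
            direction = -1
            gap = max(gap - step, 0.5)
    else:
        correct_in_row = 0              # 1-up: make the task easier
        if direction == -1:
            reversals.append(gap)
        direction = +1
        gap += step

threshold = np.mean(reversals[-6:])     # average the last reversals
print(f"Estimated gap-detection threshold: {threshold:.1f} ms")
```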
Collapse
Affiliation(s)
| | - Philipp Ruhnau
- Department of Neurology, Otto-von-Guericke University, Magdeburg, Germany
| | | | - Tino Zaehle
- Department of Neurology, Otto-von-Guericke University, Magdeburg, Germany
| |
Collapse
|
26
|
Dykstra AR, Cariani PA, Gutschalk A. A roadmap for the study of conscious audition and its neural basis. Philos Trans R Soc Lond B Biol Sci 2017; 372:20160103. [PMID: 28044014 PMCID: PMC5206271 DOI: 10.1098/rstb.2016.0103] [Citation(s) in RCA: 37] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/03/2016] [Indexed: 12/16/2022] Open
Abstract
How and which aspects of neural activity give rise to subjective perceptual experience, i.e. conscious perception, is a fundamental question of neuroscience. To date, the vast majority of work concerning this question has come from vision, raising the issue of generalizability of prominent resulting theories. However, recent work has begun to shed light on the neural processes subserving conscious perception in other modalities, particularly audition. Here, we outline a roadmap for the future study of conscious auditory perception and its neural basis, paying particular attention to how conscious perception emerges (and of which elements or groups of elements) in complex auditory scenes. We begin by discussing the functional role of the auditory system, particularly as it pertains to conscious perception. Next, we ask: what are the phenomena that need to be explained by a theory of conscious auditory perception? After surveying the available literature for candidate neural correlates, we end by considering the implications that such results have for a general theory of conscious perception as well as prominent outstanding questions and what approaches/techniques can best be used to address them. This article is part of the themed issue 'Auditory and visual scene analysis'.
Collapse
Affiliation(s)
- Andrew R Dykstra
- Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
| | | | - Alexander Gutschalk
- Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
| |
Collapse
|
27
|
Xie Z, Reetzke R, Chandrasekaran B. Stability and plasticity in neural encoding of linguistically relevant pitch patterns. J Neurophysiol 2017; 117:1407-1422. [PMID: 28077662 DOI: 10.1152/jn.00445.2016] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2016] [Revised: 01/09/2017] [Accepted: 01/09/2017] [Indexed: 12/15/2022] Open
Abstract
While lifelong language experience modulates subcortical encoding of pitch patterns, there is emerging evidence that short-term training introduced in adulthood also shapes subcortical pitch encoding. Here we use a cross-language design to examine the stability of language experience-dependent subcortical plasticity over multiple days. We then examine the extent to which behavioral relevance induced by sound-to-category training leads to plastic changes in subcortical pitch encoding in adulthood relative to adolescence, a period of ongoing maturation of subcortical and cortical auditory processing. Frequency-following responses (FFRs), which reflect phase-locked activity from subcortical neural ensembles, were elicited while participants passively listened to pitch patterns reflective of Mandarin tones. In experiment 1, FFRs were recorded across three consecutive days from native Chinese-speaking (n = 10) and English-speaking (n = 10) adults. In experiment 2, FFRs were recorded from native English-speaking adolescents (n = 20) and adults (n = 15) before, during, and immediately after a session of sound-to-category training, as well as a day after training ceased. Experiment 1 demonstrated the stability of language experience-dependent subcortical plasticity in pitch encoding across multiple days of passive exposure to linguistic pitch patterns. In contrast, experiment 2 revealed an enhancement in subcortical pitch encoding that emerged a day after the sound-to-category training, with some developmental differences observed. Taken together, these findings suggest that behavioral relevance is a critical component for the observation of plasticity in the subcortical encoding of pitch.NEW & NOTEWORTHY We examine the timescale of experience-dependent auditory plasticity to linguistically relevant pitch patterns. We find extreme stability in lifelong experience-dependent plasticity. We further demonstrate that subcortical function in adolescents and adults is modulated by a single session of sound-to-category training. Our results suggest that behavioral relevance is a necessary ingredient for neural changes in pitch encoding to be observed throughout human development. These findings contribute to the neurophysiological understanding of long- and short-term experience-dependent modulation of pitch.
Collapse
Affiliation(s)
- Zilong Xie
- Department of Communication Sciences and Disorders, The University of Texas at Austin, Austin, Texas
| | - Rachel Reetzke
- Department of Communication Sciences and Disorders, The University of Texas at Austin, Austin, Texas
| | - Bharath Chandrasekaran
- Department of Communication Sciences and Disorders, The University of Texas at Austin, Austin, Texas; Department of Psychology, The University of Texas at Austin, Austin, Texas; Department of Linguistics, The University of Texas at Austin, Austin, Texas; Institute for Neuroscience, The University of Texas at Austin, Austin, Texas; and Institute for Mental Health Research, The University of Texas at Austin, Austin, Texas
| |
Collapse
|
28
|
Zhang X, Gong Q. Correlation between the frequency difference limen and an index based on principal component analysis of the frequency-following response of normal hearing listeners. Hear Res 2016; 344:255-264. [PMID: 27956352 DOI: 10.1016/j.heares.2016.12.004] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/24/2016] [Revised: 12/01/2016] [Accepted: 12/08/2016] [Indexed: 10/20/2022]
Abstract
Subcortical phase locking tends to reflect performance differences in tasks related to pitch perception across different types of populations. Enhancement or attenuation in its strength may correspond to population excellence or deficiency in pitch perception. However, it is still unclear whether differences in perceptual capability among individuals with normal hearing can be predicted by subcortical phase locking. In this study, we examined the brain-behavior relationship between frequency-following responses (FFRs) evoked by pure/sweeping tones and frequency difference limens (FDLs). FFRs are considered to reflect subcortical phase locking, and FDLs are a psychophysical measure of behavioral performance in pitch discrimination. Traditional measures of FFR strength were found to be poorly correlated with FDL. Here, we introduced principal component analysis into FFR analysis and extracted an FFR component that was correlated with individual pitch discrimination. The absolute value of the score of this FFR principal component (but not the original score) was negatively correlated with FDL, regardless of stimulus type. The topographic distribution of this component was relatively constant across individuals and across stimulus types, and the inferior colliculus was identified as its origin. The findings suggest that subcortical phase locking at certain but not all FFR generators carries the neural information required for the prediction of individual pitch perception among humans with normal hearing.
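A minimal sketch of the PCA-based index described above, using synthetic data: FFR waveforms (one per subject) are decomposed with PCA, and the absolute score on one component is correlated with the FDL. Which component is taken, the sign convention, and all numbers are assumptions rather than the authors' exact procedure.

```python
# Minimal sketch (synthetic data): PCA across subjects' averaged FFR waveforms,
# then correlate the absolute component score with the behavioral FDL.
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
n_subjects, n_samples = 30, 1024
ffr = rng.normal(0, 1e-7, (n_subjects, n_samples))   # subjects x time points
fdl = rng.uniform(0.5, 10.0, n_subjects)              # Hz, assumed FDLs

pca = PCA(n_components=5)
scores = pca.fit_transform(ffr)                       # subjects x components

# Absolute value of the first component's score as the FFR index (assumed choice)
index = np.abs(scores[:, 0])
r, p = pearsonr(index, fdl)
print(f"|PC1 score| vs. FDL: r = {r:.2f}, p = {p:.3f}")
```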
Collapse
Affiliation(s)
- Xiaochen Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
| | - Qin Gong
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China; Research Center for Biomedical Engineering, Graduate School at Shenzhen, Tsinghua University, Shenzhen, Guangdong Province, China.
| |
Collapse
|
29
|
Zhang X, Gong Q, Zhang T. Cortical auditory evoked potentials (CAEPs) represent neural cues relevant to pitch perception. Annu Int Conf IEEE Eng Med Biol Soc 2016; 2016:1628-1631. [PMID: 28268641 DOI: 10.1109/embc.2016.7591025] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Components of auditory event-related potentials (ERPs) may represent various aspects of the cortical processing of pitch. However, evidence hints at an earlier representation of pitch perception in auditory ERPs of cortical origin. In this study, we examined whether earlier waves in cortical auditory evoked potentials (CAEPs) might reflect pitch-relevant features of both listeners and stimuli. CAEPs were elicited by pure tones and sweeping tones, and individual behavioral performance in pitch discrimination, reflected by the frequency difference limen (FDL), was also measured. Results show that CAEPs evoked by sweeping tones correlate significantly with FDL at around 50 ms, but CAEPs evoked by pure tones do not. Also, CAEPs are significantly affected by pitch-shift direction at around 130 ms. CAEPs evoked by ascending sweeping tones are larger in magnitude than those evoked by descending ones. Therefore, listeners' personal attributes relevant to pitch perception are already reflected at a very early stage of cortical auditory processing, whilst certain pitch-related features of stimuli are recognized and represented at a later stage.
Collapse
|
30
|
Abstract
Congenital amusia is a lifelong deficit in music perception thought to reflect an underlying impairment in the perception and memory of pitch. The neural basis of amusic impairments is actively debated. Some prior studies have suggested that amusia stems from impaired connectivity between auditory and frontal cortex. However, it remains possible that impairments in pitch coding within auditory cortex also contribute to the disorder, in part because prior studies have not measured responses from the cortical regions most implicated in pitch perception in normal individuals. We addressed this question by measuring fMRI responses in 11 subjects with amusia and 11 age- and education-matched controls to a stimulus contrast that reliably identifies pitch-responsive regions in normal individuals: harmonic tones versus frequency-matched noise. Our findings demonstrate that amusic individuals with a substantial pitch perception deficit exhibit clusters of pitch-responsive voxels that are comparable in extent, selectivity, and anatomical location to those of control participants. We discuss possible explanations for why amusics might be impaired at perceiving pitch relations despite exhibiting normal fMRI responses to pitch in their auditory cortex: (1) individual neurons within the pitch-responsive region might exhibit abnormal tuning or temporal coding not detectable with fMRI, (2) anatomical tracts that link pitch-responsive regions to other brain areas (e.g., frontal cortex) might be altered, and (3) cortical regions outside of pitch-responsive cortex might be abnormal. The ability to identify pitch-responsive regions in individual amusic subjects will make it possible to ask more precise questions about their role in amusia in future work.
Collapse
|
31
|
Neural Mechanisms Underlying Musical Pitch Perception and Clinical Applications Including Developmental Dyslexia. Curr Neurol Neurosci Rep 2016; 15:51. [PMID: 26092314 DOI: 10.1007/s11910-015-0574-9] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Music production and perception invoke a complex set of cognitive functions that rely on the integration of sensorimotor, cognitive, and emotional pathways. Pitch is a fundamental perceptual attribute of sound and a building block for both music and speech. Although the cerebral processing of pitch is not completely understood, recent advances in imaging and electrophysiology have provided insight into the functional and anatomical pathways of pitch processing. This review examines the current understanding of pitch processing and behavioral and neural variations that give rise to difficulties in pitch processing, and potential applications of music education for language processing disorders such as dyslexia.
Collapse
|
32
|
Coffey EBJ, Colagrosso EMG, Lehmann A, Schönwiesner M, Zatorre RJ. Individual Differences in the Frequency-Following Response: Relation to Pitch Perception. PLoS One 2016; 11:e0152374. [PMID: 27015271 PMCID: PMC4807774 DOI: 10.1371/journal.pone.0152374] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2015] [Accepted: 03/14/2016] [Indexed: 11/30/2022] Open
Abstract
The scalp-recorded frequency-following response (FFR) is a measure of the auditory nervous system’s representation of periodic sound, and may serve as a marker of training-related enhancements, behavioural deficits, and clinical conditions. However, FFRs of healthy normal subjects show considerable variability that remains unexplained. We investigated whether the FFR representation of the frequency content of a complex tone is related to the perception of the pitch of the fundamental frequency. The strength of the fundamental frequency in the FFR of 39 people with normal hearing was assessed when they listened to complex tones that either included or lacked energy at the fundamental frequency. We found that the strength of the fundamental representation of the missing fundamental tone complex correlated significantly with people's general tendency to perceive the pitch of the tone as either matching the frequency of the spectral components that were present, or that of the missing fundamental. Although at a group level the fundamental representation in the FFR did not appear to be affected by the presence or absence of energy at the same frequency in the stimulus, the two conditions were statistically distinguishable for some subjects individually, indicating that the neural representation is not linearly dependent on the stimulus content. In a second experiment using a within-subjects paradigm, we showed that subjects can learn to reversibly select between either fundamental or spectral perception, and that this is accompanied both by changes to the fundamental representation in the FFR and to cortical-based gamma activity. These results suggest that both fundamental and spectral representations coexist, and are available for later auditory processing stages, the requirements of which may also influence their relative strength and thus modulate FFR variability. The data also highlight voluntary mode perception as a new paradigm with which to study top-down vs bottom-up mechanisms that support the emerging view of the FFR as the outcome of integrated processing in the entire auditory system.
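To illustrate the "fundamental representation" measure in a self-contained way, the sketch below builds a missing-fundamental complex (harmonics 2-6 of an assumed 200 Hz F0), confirms that the stimulus itself has essentially no energy at F0, and reads the spectral amplitude at F0 from a toy response signal. The signals are synthetic and the metric is a simplified stand-in for the authors' FFR measure.

```python
# Minimal sketch (illustrative signals only): spectral amplitude at F0 for a
# missing-fundamental stimulus and a toy response that reintroduces F0 energy.
import numpy as np

fs = 16384.0
f0 = 200.0
t = np.arange(0, 0.2, 1 / fs)

# Missing-fundamental complex: harmonics 2-6 of f0, no energy at f0 itself
stimulus = sum(np.sin(2 * np.pi * h * f0 * t) for h in range(2, 7))
rng = np.random.default_rng(5)
# Toy response: weak periodicity at F0 (e.g., from envelope following) plus noise
response = 0.2 * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 0.5, t.size)

def amp_at(x, freq):
    """Single-sided FFT amplitude of x at the bin closest to freq."""
    spec = np.abs(np.fft.rfft(x)) / x.size * 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

print(f"Stimulus amplitude at F0: {amp_at(stimulus, f0):.3f}")
print(f"Response amplitude at F0: {amp_at(response, f0):.3f}")
```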
Collapse
Affiliation(s)
- Emily B. J. Coffey
- Montreal Neurological Institute, McGill University, Montreal, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada
| | | | - Alexandre Lehmann
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada
- Department of Psychology, University of Montreal, Montreal, Canada
- Department of Otolaryngology Head & Neck Surgery, McGill University, Montreal, Canada
| | - Marc Schönwiesner
- Montreal Neurological Institute, McGill University, Montreal, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada
- Department of Psychology, University of Montreal, Montreal, Canada
| | - Robert J. Zatorre
- Montreal Neurological Institute, McGill University, Montreal, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada
| |
Collapse
|
33
|
Bach JP, Lüpke M, Dziallas P, Wefstaedt P, Uppenkamp S, Seifert H, Nolte I. Auditory functional magnetic resonance imaging in dogs--normalization and group analysis and the processing of pitch in the canine auditory pathways. BMC Vet Res 2016; 12:32. [PMID: 26897016 PMCID: PMC4761139 DOI: 10.1186/s12917-016-0660-5] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2015] [Accepted: 12/04/2015] [Indexed: 11/10/2022] Open
Abstract
Background: Functional magnetic resonance imaging (fMRI) is an advanced and frequently used technique for studying brain functions in humans and increasingly so in animals. A key element of analyzing fMRI data is group analysis, for which valid spatial normalization is a prerequisite. In the current study we applied normalization and group analysis to a dataset from an auditory functional MRI experiment in anesthetized beagles. The stimulation paradigm used in the experiment was composed of simple Gaussian noise and regular interval sounds (RIS), which included a periodicity pitch as an additional sound feature. The results from the performed group analysis were compared with those from single animal analysis. In addition to this, the data were examined for brain regions showing an increased activation associated with the perception of pitch. Results: With the group analysis, significant activations matching the position of the right superior olivary nucleus, lateral lemniscus and internal capsule were identified, which could not be detected in the single animal analysis. In addition, a large cluster of activated voxels in the auditory cortex was found. The contrast of the RIS condition (including pitch) with Gaussian noise (no pitch) showed a significant effect in a region matching the location of the left medial geniculate nucleus. Conclusion: By using group analysis additional activated areas along the canine auditory pathways could be identified in comparison to single animal analysis. It was possible to demonstrate a pitch-specific effect, indicating that group analysis is a suitable method for improving the results of auditory fMRI studies in dogs and extending our knowledge of canine neuroanatomy.
Collapse
Affiliation(s)
- Jan-Peter Bach
- Klinik für Kleintiere, Stiftung Tierärztliche Hochschule Hannover, Bünteweg 9, 30559, Hannover, Germany.
| | - Matthias Lüpke
- Fachgebiet für Allgemeine Radiologie und Medizinische Physik, Stiftung Tierärztliche Hochschule Hannover, Bischofsholer Damm 15, 30173, Hannover, Germany.
| | - Peter Dziallas
- Klinik für Kleintiere, Stiftung Tierärztliche Hochschule Hannover, Bünteweg 9, 30559, Hannover, Germany.
| | - Patrick Wefstaedt
- Klinik für Kleintiere, Stiftung Tierärztliche Hochschule Hannover, Bünteweg 9, 30559, Hannover, Germany.
| | - Stefan Uppenkamp
- Medizinische Physik, Universität Oldenburg, 26111, Oldenburg, Germany.
| | - Hermann Seifert
- Fachgebiet für Allgemeine Radiologie und Medizinische Physik, Stiftung Tierärztliche Hochschule Hannover, Bischofsholer Damm 15, 30173, Hannover, Germany.
| | - Ingo Nolte
- Klinik für Kleintiere, Stiftung Tierärztliche Hochschule Hannover, Bünteweg 9, 30559, Hannover, Germany.
| |
Collapse
|
34
|
Nikolsky A. Evolution of tonal organization in music mirrors symbolic representation of perceptual reality. Part-1: Prehistoric. Front Psychol 2015; 6:1405. [PMID: 26528193 PMCID: PMC4607869 DOI: 10.3389/fpsyg.2015.01405] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2015] [Accepted: 09/03/2015] [Indexed: 11/21/2022] Open
Abstract
This paper reveals the way in which musical pitch works as a peculiar form of cognition that reflects upon the organization of the surrounding world as perceived by the majority of music users within a socio-cultural formation. The evidence from music theory, ethnography, archeology, organology, anthropology, psychoacoustics, and evolutionary biology is plotted against experimental evidence. Much of the methodology for this investigation comes from studies conducted within the territory of the former USSR. To date, this methodology has remained solely confined to Russian-speaking scholars. A brief overview of pitch-set theory demonstrates the need to distinguish between vertical and horizontal harmony, laying out the framework for a virtual music space that operates according to the perceptual laws of tonal gravity. Brought to life by the bifurcation of music and speech, tonal gravity passed through eleven discrete stages of development until the onset of tonality in the seventeenth century. Each stage presents its own method of integration of separate musical tones into an auditory-cognitive unity. The theory of “melodic intonation” is set forth as a counterpart to the harmonic theory of chords. Notions of tonality, modality, key, diatonicity, chromaticism, alteration, and modulation are defined in terms of their perception, and categorized according to the way in which they have developed historically. Tonal organization in music and perspective organization in fine arts are explained as products of the same underlying mental process. Music seems to act as a unique medium of symbolic representation of reality through the concept of pitch. Tonal organization of pitch reflects the culture of thinking, adopted as a standard within a community of music users. Tonal organization might be a naturally formed system of optimizing individual perception of reality within a social group and its immediate environment, setting conventional standards of intellectual and emotional intelligence.
Collapse
|
35
|
Yang W, Yang J, Gao Y, Tang X, Ren Y, Takahashi S, Wu J. Effects of Sound Frequency on Audiovisual Integration: An Event-Related Potential Study. PLoS One 2015; 10:e0138296. [PMID: 26384256 PMCID: PMC4575110 DOI: 10.1371/journal.pone.0138296] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2015] [Accepted: 08/29/2015] [Indexed: 11/24/2022] Open
Abstract
A combination of signals across modalities can facilitate sensory perception. The audiovisual facilitative effect strongly depends on the features of the stimulus. Here, we investigated how sound frequency, one of the basic features of an auditory signal, modulates audiovisual integration. In this study, the task of the participant was to respond to a visual target stimulus by pressing a key while ignoring auditory stimuli comprising tones of different frequencies (0.5, 1, 2.5 and 5 kHz). A significant facilitation of reaction times was obtained following audiovisual stimulation, irrespective of whether the task-irrelevant sounds were low or high frequency. Using event-related potentials (ERPs), audiovisual integration was found over the occipital area for 0.5 kHz auditory stimuli from 190–210 ms, for 1 kHz stimuli from 170–200 ms, for 2.5 kHz stimuli from 140–200 ms, and for 5 kHz stimuli from 100–200 ms. These findings suggest that a higher frequency sound signal paired with visual stimuli might be processed or integrated earlier, despite the auditory stimuli being task-irrelevant information. Furthermore, audiovisual integration in late-latency (300–340 ms) ERPs with fronto-central topography was found for auditory stimuli of lower frequencies (0.5, 1 and 2.5 kHz). Our results confirm that audiovisual integration is affected by the frequency of an auditory stimulus. Taken together, the neurophysiological results provide unique insight into how the brain processes a multisensory visual signal and auditory stimuli of different frequencies.
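Audiovisual integration in ERP studies of this kind is commonly assessed against the additive model, i.e. comparing the AV response with the sum of the unisensory responses; the sketch below shows that comparison on placeholder data. The time window and all arrays are assumptions, not the study's recordings.

```python
# Minimal sketch (synthetic ERPs): additive-model test of audiovisual
# integration, AV versus A + V, in an assumed time window.
import numpy as np
from scipy.stats import ttest_1samp

fs = 500.0                                   # Hz, assumed sampling rate
t = np.arange(-0.1, 0.5, 1 / fs)
n_subjects = 20
rng = np.random.default_rng(6)

# Placeholder subject-averaged ERPs at an occipital electrode (subjects x samples)
erp_a = rng.normal(0, 1e-6, (n_subjects, t.size))    # auditory alone
erp_v = rng.normal(0, 1e-6, (n_subjects, t.size))    # visual alone
erp_av = rng.normal(0, 1e-6, (n_subjects, t.size))   # audiovisual

# Integration effect: AV - (A + V), averaged over an assumed 140-200 ms window
win = (t >= 0.14) & (t <= 0.20)
effect = (erp_av - (erp_a + erp_v))[:, win].mean(axis=1)
t_stat, p = ttest_1samp(effect, 0.0)
print(f"AV vs. A+V integration effect: t = {t_stat:.2f}, p = {p:.3f}")
```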
Collapse
Affiliation(s)
- Weiping Yang
- Department of Psychology, Faculty of Education, Hubei University, Hubei, China
- Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
| | - Jingjing Yang
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China
| | - Yulin Gao
- Department of Psychology, School of Philosophy and Sociology, Jilin University, Changchun, China
| | - Xiaoyu Tang
- Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
| | - Yanna Ren
- Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
| | - Satoshi Takahashi
- Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
| | - Jinglong Wu
- Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
- Bio-robotics and System Laboratory, Beijing Institute of Technology, Beijing, China
| |
Collapse
|
36
|
Liu F, Maggu AR, Lau JCY, Wong PCM. Brainstem encoding of speech and musical stimuli in congenital amusia: evidence from Cantonese speakers. Front Hum Neurosci 2015; 8:1029. [PMID: 25646077 PMCID: PMC4297920 DOI: 10.3389/fnhum.2014.01029] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2014] [Accepted: 12/06/2014] [Indexed: 12/01/2022] Open
Abstract
Congenital amusia is a neurodevelopmental disorder of musical processing that also impacts subtle aspects of speech processing. It remains debated at what stage(s) of auditory processing deficits in amusia arise. In this study, we investigated whether amusia originates from impaired subcortical encoding of speech (in quiet and noise) and musical sounds in the brainstem. Fourteen Cantonese-speaking amusics and 14 matched controls passively listened to six Cantonese lexical tones in quiet, two Cantonese tones in noise (signal-to-noise ratios at 0 and 20 dB), and two cello tones in quiet while their frequency-following responses (FFRs) to these tones were recorded. All participants also completed a behavioral lexical tone identification task. The results indicated normal brainstem encoding of pitch in speech (in quiet and noise) and musical stimuli in amusics relative to controls, as measured by FFR pitch strength, pitch error, and stimulus-to-response correlation. There was also no group difference in neural conduction time or FFR amplitudes. Both groups demonstrated better FFRs to speech (in quiet and noise) than to musical stimuli. However, a significant group difference was observed for tone identification, with amusics showing significantly lower accuracy than controls. Analysis of the tone confusion matrices suggested that amusics were more likely than controls to confuse between tones that shared similar acoustic features. Interestingly, this deficit in lexical tone identification was not coupled with brainstem abnormality for either speech or musical stimuli. Together, our results suggest that the amusic brainstem is not functioning abnormally, although higher-order linguistic pitch processing is impaired in amusia. This finding has significant implications for theories of central auditory processing, requiring further investigations into how different stages of auditory processing interact in the human brain.
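The FFR metrics named above (pitch error and stimulus-to-response correlation) can be illustrated with a toy short-time autocorrelation pitch tracker applied to a synthetic rising-tone stimulus and a noisy response; the frame length, search range, and signals are assumptions rather than the authors' exact algorithm.

```python
# Minimal sketch (toy signals): track F0 over time in a stimulus and a response
# with short-time autocorrelation, then compute pitch error and the
# stimulus-to-response F0 correlation.
import numpy as np
from scipy.stats import pearsonr

fs = 8000.0
t = np.arange(0, 0.25, 1 / fs)
f0_track = np.linspace(100, 140, t.size)            # rising-tone F0 contour
phase = 2 * np.pi * np.cumsum(f0_track) / fs
stimulus = np.sin(phase)
rng = np.random.default_rng(7)
response = 0.3 * np.sin(phase) + rng.normal(0, 0.3, t.size)   # toy FFR

def f0_by_autocorr(x, fs, fmin=80.0, fmax=200.0):
    """Return the F0 (Hz) whose lag maximizes the frame's autocorrelation."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + np.argmax(ac[lo:hi + 1])
    return fs / lag

frame = int(0.04 * fs)                               # 40 ms frames, assumed
hop = frame // 2
starts = range(0, t.size - frame, hop)
f0_stim = np.array([f0_by_autocorr(stimulus[s:s + frame], fs) for s in starts])
f0_resp = np.array([f0_by_autocorr(response[s:s + frame], fs) for s in starts])

pitch_error = np.mean(np.abs(f0_resp - f0_stim))
r, _ = pearsonr(f0_stim, f0_resp)
print(f"Pitch error = {pitch_error:.1f} Hz, stimulus-to-response r = {r:.2f}")
```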
Collapse
Affiliation(s)
- Fang Liu
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong, China
| | - Akshay R Maggu
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong, China
| | - Joseph C Y Lau
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong, China
| | - Patrick C M Wong
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong, China; The Chinese University of Hong Kong - Utrecht University Joint Center for Language, Mind and Brain, Hong Kong, China; Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA; Department of Otolaryngology, Head and Neck Surgery, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
| |
Collapse
|
37
|
Sturm I, Blankertz B, Potes C, Schalk G, Curio G. ECoG high gamma activity reveals distinct cortical representations of lyrics passages, harmonic and timbre-related changes in a rock song. Front Hum Neurosci 2014; 8:798. [PMID: 25352799 PMCID: PMC4195312 DOI: 10.3389/fnhum.2014.00798] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2014] [Accepted: 09/19/2014] [Indexed: 11/13/2022] Open
Abstract
Listening to music moves our minds and moods, stirring interest in its neural underpinnings. A multitude of compositional features drives the appeal of natural music. How such original music, where a composer's opus is not manipulated for experimental purposes, engages a listener's brain has not been studied until recently. Here, we report an in-depth analysis of two electrocorticographic (ECoG) data sets obtained over the left hemisphere in ten patients during presentation of either a rock song or a read-out narrative. First, the time courses of five acoustic features (intensity, presence/absence of vocals with lyrics, spectral centroid, harmonic change, and pulse clarity) were extracted from the audio tracks and found to be correlated with each other to varying degrees. In a second step, we uncovered the specific impact of each musical feature on ECoG high-gamma power (70-170 Hz) by calculating partial correlations to remove the influence of the other four features. In the music condition, the onset and offset of vocal lyrics in ongoing instrumental music was consistently identified within the group as the dominant driver for ECoG high-gamma power changes over temporal auditory areas, while concurrently subject-individual activation spots were identified for sound intensity, timbral, and harmonic features. The distinct cortical activations to vocal speech-related content embedded in instrumental music directly demonstrate that song integrated in instrumental music represents a distinct dimension in complex music. In contrast, in the speech condition, the full sound envelope was reflected in the high gamma response rather than the onset or offset of the vocal lyrics. This demonstrates how the contributions of stimulus features that modulate the brain response differ across the two examples of a full-length natural stimulus, which suggests a context-dependent feature selection in the processing of complex auditory stimuli.
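A minimal sketch of the partial-correlation step described above, on random placeholder data: the influence of the other acoustic features is regressed out of both the target feature and the high-gamma time course, and the residuals are correlated. Feature names and dimensions are assumptions.

```python
# Minimal sketch (random placeholders): partial correlation between one acoustic
# feature and high-gamma power, controlling for the remaining features by
# correlating the residuals of two ordinary least-squares fits.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(8)
n = 600                                    # number of time bins, assumed
features = rng.normal(size=(n, 5))         # e.g., intensity, vocals, centroid, ...
high_gamma = rng.normal(size=n)            # high-gamma power time course

def residualize(y, X):
    """Residuals of y after regressing out X (with an intercept term)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return y - X1 @ beta

target = features[:, 0]                    # e.g., sound intensity (assumed)
controls = features[:, 1:]
r, p = pearsonr(residualize(target, controls), residualize(high_gamma, controls))
print(f"Partial correlation (feature vs. high-gamma): r = {r:.2f}, p = {p:.3g}")
```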
Collapse
Affiliation(s)
- Irene Sturm
- Berlin School of Mind and Brain, Humboldt Universität zu Berlin, Berlin, Germany; Neurotechnology Group, Department of Electrical Engineering and Computer Science, Berlin Institute of Technology, Berlin, Germany; Neurophysics Group, Department of Neurology and Clinical Neurophysiology, Charité - University Medicine Berlin, Berlin, Germany
| | - Benjamin Blankertz
- Neurotechnology Group, Department of Electrical Engineering and Computer Science, Berlin Institute of Technology, Berlin, Germany; Bernstein Focus: Neurotechnology, Berlin, Germany
| | - Cristhian Potes
- National Resource Center for Adaptive Neurotechnologies, Wadsworth Center, New York State Department of Health, Albany, NY, USA; Department of Electrical and Computer Engineering, University of Texas at El Paso, El Paso, TX, USA
| | - Gerwin Schalk
- National Resource Center for Adaptive Neurotechnologies, Wadsworth Center, New York State Department of Health, Albany, NY, USA; Department of Electrical and Computer Engineering, University of Texas at El Paso, El Paso, TX, USA; Department of Neurosurgery, Washington University in St. Louis, St. Louis, MO, USA; Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, USA; Department of Neurology, Albany Medical College, Albany, NY, USA
| | - Gabriel Curio
- Berlin School of Mind and Brain, Humboldt Universität zu Berlin, Berlin, Germany; Neurophysics Group, Department of Neurology and Clinical Neurophysiology, Charité - University Medicine Berlin, Berlin, Germany; Bernstein Focus: Neurotechnology, Berlin, Germany
| |
Collapse
|
38
|
The influence of tone inventory on ERP without focal attention: a cross-language study. Comput Math Methods Med 2014; 2014:961563. [PMID: 25254067 PMCID: PMC4164512 DOI: 10.1155/2014/961563] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/17/2014] [Accepted: 07/15/2014] [Indexed: 11/18/2022]
Abstract
This study investigates the effect of tone inventories on brain activities underlying pitch without focal attention. We find that the electrophysiological responses to across-category stimuli are larger than those to within-category stimuli when the pitch contours are superimposed on nonspeech stimuli; however, there is no electrophysiological response difference associated with category status in speech stimuli. Moreover, this category effect in nonspeech stimuli is stronger for Cantonese speakers. Results of previous and present studies lead us to conclude that brain activities to the same native lexical tone contrasts are modulated by speakers' language experiences not only in active phonological processing but also in automatic feature detection without focal attention. In contrast to the condition with focal attention, where phonological processing is stronger for speech stimuli, feature detection without focal attention (here, the detection of pitch contours), as shaped by language background, is superior for relatively regular stimuli, that is, the nonspeech stimuli. The results suggest that Cantonese listeners outperform Mandarin listeners in the automatic detection of pitch features because of the denser Cantonese tone system.
Collapse
|
39
|
Affiliation(s)
- Deborah A Hall
- National Institute of Health Research (NIHR) Nottingham Hearing Biomedical Research Unit, University of Nottingham, Nottingham NG7 2RD, UK.
| | | |
Collapse
|