1. Duggirala SX, Schwartze M, Goller LK, Linden DEJ, Pinheiro AP, Kotz SA. Hallucination Proneness Alters Sensory Feedback Processing in Self-voice Production. Schizophr Bull 2024;50:1147-1158. PMID: 38824450; PMCID: PMC11349023; DOI: 10.1093/schbul/sbae095.
Abstract
BACKGROUND Sensory suppression occurs when hearing one's self-generated voice, as opposed to passively listening to one's own voice. Quality changes in sensory feedback to the self-generated voice can increase attentional control. These changes affect the self-other voice distinction and might lead to hearing voices in the absence of an external source (i.e., auditory verbal hallucinations). However, it is unclear how changes in sensory feedback processing and attention allocation interact, and how this interaction might relate to hallucination proneness (HP). STUDY DESIGN During electroencephalography (EEG) recordings, participants varying in HP self-generated (via a button press) and passively listened to their own voice, which varied in emotional quality and certainty of recognition (100% neutral, 60%-40% neutral-angry, 50%-50% neutral-angry, 40%-60% neutral-angry, 100% angry). STUDY RESULTS The N1 auditory evoked potential was more suppressed for self-generated than externally generated voices. Increased HP was associated with (1) an increased N1 response to self- compared with externally generated voices, (2) a reduced N1 response to angry compared with neutral voices, and (3) a reduced N2 response to unexpected voice quality in sensory feedback (60%-40% neutral-angry) compared with neutral voices. CONCLUSIONS The current study highlights an association between increased HP and systematic changes in the processing of emotional quality and certainty in sensory feedback (N1) and in attentional control (N2) during self-voice production in a nonclinical population. Considering that voice hearers also display these changes, these findings support the continuum hypothesis.
Affiliation(s)
- Suvarnalata Xanthate Duggirala
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Department of Psychology, Faculty of Psychology, University of Lisbon, Lisbon, Portugal
- Department of Psychiatry and Neuropsychology, School for Mental Health and Neuroscience, Faculty of Health and Medical Sciences, Maastricht University, Maastricht, Netherlands
- Michael Schwartze
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Lisa K Goller
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- David E J Linden
- Department of Psychiatry and Neuropsychology, School for Mental Health and Neuroscience, Faculty of Health and Medical Sciences, Maastricht University, Maastricht, Netherlands
- Maastricht University Medical Center, Maastricht, Netherlands
- Ana P Pinheiro
- Department of Psychology, Faculty of Psychology, University of Lisbon, Lisbon, Portugal
- Sonja A Kotz
- Department of Neuropsychology and Psychopharmacology, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
2. Castejón J, Chen F, Yasoda-Mohan A, Ó Sé C, Vanneste S. Chronic pain - A maladaptive compensation to unbalanced hierarchical predictive processing. Neuroimage 2024;297:120711. PMID: 38942099; DOI: 10.1016/j.neuroimage.2024.120711.
Abstract
The ability to perceive pain presents an interesting evolutionary advantage for adapting to an ever-changing environment. In the case of chronic pain (CP), however, pain perception hinders the capacity of the system to adapt to changing sensory environments. Like other chronic perceptual disorders, CP has been proposed to be a maladaptive compensation to aberrant sensory predictive processing. The local-global oddball paradigm relies on learning hierarchical rules and processing environmental irregularities at a local and a global level. Prediction errors (PEs) between actual and predicted input typically trigger an update of the forward model to limit the probability of encountering future PEs. It has been hypothesised that CP hinders forward model updating, reflected in increased local deviance and decreased global deviance. In the present study, we used the local-global paradigm to examine how CP influences hierarchical learning relative to healthy controls. As hypothesised, we observed that deviance in the stimulus characteristics evoked heightened local deviance and decreased global deviance of the stimulus-driven PE. This was accompanied by corresponding changes in theta phase locking that correlated with subjective pain perception. Changes in the global deviant of the stimulus-driven PE could also be explained by dampened attention-related responses. Changing the context of the auditory stimulus did not, however, reveal a difference in the context-driven PE. These findings suggest that CP is accompanied by maladaptive forward-model updating, in which the constant presence of pain perception disrupts local deviance processing in non-nociceptive domains. Furthermore, we hypothesise that the auditory-processing-based biomarker identified here could be a marker of domain-general dysfunction, which future research could confirm.
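As an aside for readers unfamiliar with the paradigm named above: the local-global design nests a within-trial ("local") rule inside an across-trial ("global") rule. The sketch below is a generic illustration of how one such block can be constructed; the tone labels, trial length, and deviant proportion are assumptions for the sketch, not the authors' exact design.

```python
import random

def make_block(n_trials=100, p_global_deviant=0.2, seed=1):
    """Build one local-global block of five-tone trials.

    In this block the frequent (globally standard) trial is five
    identical tones (AAAAA, locally standard); the rare (globally
    deviant) trial ends in a different tone (AAAAB, locally deviant).
    In the complementary block the roles are swapped, so that a locally
    deviant trial (AAAAB) can be the global standard, dissociating the
    two levels of regularity.
    """
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        if rng.random() < p_global_deviant:
            trials.append(("A", "A", "A", "A", "B"))  # global deviant
        else:
            trials.append(("A", "A", "A", "A", "A"))  # global standard
    return trials

block = make_block()
n_deviant = sum(trial[-1] == "B" for trial in block)
print(n_deviant, len(block))  # roughly 20% of trials are deviant
```

In an EEG analysis, local deviance is assessed by contrasting the fifth tone of AAAAB vs. AAAAA trials, and global deviance by contrasting rare vs. frequent trial types.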
Affiliation(s)
- Jorge Castejón
- Lab for Clinical and Integrative Neuroscience, Trinity College Institute for Neuroscience, School of Psychology, Trinity College Dublin, Ireland; Senior MSK Physiotherapist, CompassPhysio LTD, Ireland
- Feifan Chen
- Lab for Clinical and Integrative Neuroscience, Trinity College Institute for Neuroscience, School of Psychology, Trinity College Dublin, Ireland
- Anusha Yasoda-Mohan
- Lab for Clinical and Integrative Neuroscience, Trinity College Institute for Neuroscience, School of Psychology, Trinity College Dublin, Ireland; Global Brain Health Institute, Trinity College Dublin, Ireland
- Colum Ó Sé
- Lab for Clinical and Integrative Neuroscience, Trinity College Institute for Neuroscience, School of Psychology, Trinity College Dublin, Ireland
- Sven Vanneste
- Lab for Clinical and Integrative Neuroscience, Trinity College Institute for Neuroscience, School of Psychology, Trinity College Dublin, Ireland; Global Brain Health Institute, Trinity College Dublin, Ireland
3. Quique YM, Gnanateja GN, Dickey MW, Evans WS, Chandrasekaran B. Examining cortical tracking of the speech envelope in post-stroke aphasia. Front Hum Neurosci 2023;17:1122480. PMID: 37780966; PMCID: PMC10538638; DOI: 10.3389/fnhum.2023.1122480.
Abstract
Introduction People with aphasia have been shown to benefit from rhythmic elements in language production during aphasia rehabilitation. However, it is unknown whether rhythmic processing is associated with such benefits. Cortical tracking of the speech envelope (CTenv) may provide a measure of the encoding of speech rhythmic properties and serve as a predictor of candidacy for rhythm-based aphasia interventions. Methods Electroencephalography was used to capture electrophysiological responses while Spanish speakers with aphasia (n = 9) listened to a continuous speech narrative (audiobook). The Temporal Response Function was used to estimate CTenv in the delta (associated with word- and phrase-level properties), theta (syllable-level properties), and alpha (attention-related properties) bands. CTenv estimates were used to predict aphasia severity, performance in rhythmic perception and production tasks, and treatment response in a sentence-level rhythm-based intervention. Results CTenv in the delta and theta bands, but not alpha, predicted aphasia severity. CTenv in none of the three bands predicted performance in rhythmic perception or production tasks. Some evidence supported that CTenv in theta could predict sentence-level learning in aphasia, whereas alpha and delta did not. Conclusion CTenv of syllable-level properties was relatively preserved in individuals with less language impairment, whereas encoding of word- and phrase-level properties was predictive of more severe language impairments. The relationship between CTenv and treatment response to sentence-level rhythm-based interventions needs further investigation.
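For orientation, the Temporal Response Function named above is commonly estimated as a time-lagged ridge regression from the stimulus envelope to the EEG. The following is a minimal self-contained sketch of that idea, not the authors' pipeline; the lag range, regularization, and simulated data are all assumptions for illustration.

```python
import numpy as np

def estimate_trf(envelope, eeg, fs, tmin=-0.1, tmax=0.4, alpha=1.0):
    """Estimate a Temporal Response Function by ridge regression.

    envelope : 1-D stimulus envelope (n_samples,)
    eeg      : 1-D recorded response at one channel (n_samples,)
    Returns (lag times in seconds, TRF weights over those lags).
    """
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    # Lagged design matrix: one column per time lag of the envelope.
    X = np.column_stack([np.roll(envelope, lag) for lag in lags])
    # Ridge solution: w = (X'X + alpha*I)^-1 X'y
    w = np.linalg.solve(X.T @ X + alpha * np.eye(len(lags)), X.T @ eeg)
    return lags / fs, w

# Toy check: "EEG" simulated as a delayed, scaled copy of the envelope.
fs = 100
rng = np.random.default_rng(0)
env = rng.standard_normal(fs * 60)                  # 60 s of envelope
eeg = 0.8 * np.roll(env, int(0.1 * fs)) + 0.1 * rng.standard_normal(env.size)
times, trf = estimate_trf(env, eeg, fs)
print(times[np.argmax(trf)])                        # peak near 0.1 s
```

Tracking strength in a given band (delta, theta, alpha) is then typically quantified by band-pass filtering first and correlating the TRF's predicted response with the held-out EEG.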
Affiliation(s)
- Yina M. Quique
- Center for Education in Health Sciences, Northwestern University Feinberg School of Medicine, Chicago, IL, United States
- G. Nike Gnanateja
- Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, United States
- Michael Walsh Dickey
- VA Pittsburgh Healthcare System, Pittsburgh, PA, United States
- Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
- Bharath Chandrasekaran
- Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, PA, United States
- Roxelyn and Richard Pepper Department of Communication Science and Disorders, School of Communication, Northwestern University, Evanston, IL, United States
4. Maggu AR. Auditory Evoked Potentials in Communication Disorders: An Overview of Past, Present, and Future. Semin Hear 2022;43:137-148. PMID: 36313051; PMCID: PMC9605805; DOI: 10.1055/s-0042-1756160.
Abstract
This article provides a brief overview of auditory evoked potentials (AEPs) and their applications in research and clinical practice within the field of communication disorders. It begins with a historical perspective on the key scientific developments that led to the emergence of the numerous types of AEPs. It then discusses the different AEP techniques in light of their clinical feasibility. Because AEPs, owing to their versatility, find use across disciplines, the article also discusses some of the research questions currently being addressed with AEP techniques in communication disorders and beyond. Finally, it summarizes the shortcomings of existing AEP techniques and offers a perspective on future directions. The article is aimed at a broad readership including (but not limited to) students, clinicians, and researchers. Overall, it may serve as a brief primer for new AEP users and, for those who already use AEPs routinely, as an overview of progress in the field along with future directions.
Affiliation(s)
- Akshay R. Maggu
- Department of Speech-Language-Hearing Sciences, Hofstra University, Hempstead, New York
5. Macambira YKDS, Menezes PDL, Frizzo ACF, Griz SMS, Menezes DC, Advíncula KP. Cortical auditory evoked potentials using the speech stimulus /ma/. Revista CEFAC 2022. DOI: 10.1590/1982-0216/20222439021.
6. Song H, Jeon S, Shin Y, Han W, Kim S, Kwak C, Lee E, Kim J. Effects of Natural Versus Synthetic Consonant and Vowel Stimuli on Cortical Auditory-Evoked Potential. J Audiol Otol 2021;26:68-75. PMID: 34963276; PMCID: PMC8996083; DOI: 10.7874/jao.2021.00479.
Abstract
Background and Objectives Natural and synthetic speech signals effectively stimulate the cortical auditory evoked potential (CAEP). This study aimed to select speech materials for CAEP measurement and to identify CAEP waveforms according to the gender of the speaker (GS) and the gender of the listener (GL). Subjects and Methods Two experiments, a comparison of natural and synthetic stimuli and a CAEP measurement, were performed with 21 young announcers and 40 young adults. The plosives /g/ and /b/ and the aspirated plosives /k/ and /p/ were combined with /a/. Six bisyllables (/ga/-/ka/, /ga/-/ba/, /ga/-/pa/, /ka/-/ba/, /ka/-/pa/, and /ba/-/pa/) were formulated in tentative forward and backward orders. Based on the first experiment, /ka/ and /pa/ in the natural and synthetic stimulation modes (SM) according to GS were selected for CAEP measurement. Results The correct-response rate differences were largest (74%) for /ka/-/pa/ and /pa/-/ka/; thus, these were selected as stimulation materials for CAEP measurement. For SM, latencies were shorter for P2 and N1-P2 with natural stimulation and for N2 with synthetic stimulation; the P2 amplitude was larger with natural stimulation. The SD showed significantly larger amplitudes for P2 and N1-P2 with /pa/. For GS, latencies were shorter for P2, N2, and N1-P2, and the N2 amplitude was larger, with female speakers. For GL, latencies were shorter for N2 and N1-P2, and the N2 amplitude was larger, with female listeners. Conclusions Although several variables showed significant effects on N2, P2, and N1-P2, P1 and N1 showed no significant effects for any variable. N2 and P2 of the CAEP appear to be affected by endogenous factors.
7. Lunardelo PP, Hebihara Fukuda MT, Zuanetti PA, Pontes-Fernandes ÂC, Ferretti MI, Zanchetta S. Cortical auditory evoked potentials with different acoustic stimuli: Evidence of differences and similarities in coding in auditory processing disorders. Int J Pediatr Otorhinolaryngol 2021;151:110944. PMID: 34773882; DOI: 10.1016/j.ijporl.2021.110944.
Abstract
OBJECTIVES Cortical auditory evoked potentials allow the study of acoustic signal processing at the cortical level, an important step in the diagnostic evaluation process and in monitoring the therapeutic process in auditory processing disorders (APD). The differences and similarities in acoustic coding between different types of stimuli in APD remain unknown. METHODS A total of 37 children aged 7 to 11 years, with and without APD (identified based on verbal and non-verbal tests), all with an intelligence quotient appropriate for their chronological age, were assessed. The P1 and N1 components were studied using verbal and non-verbal stimuli. RESULTS Comparison between stimuli within each group revealed that the control group had higher latency and amplitude values for speech stimuli, except for the P1 amplitude, while the APD group differed only in the amplitudes of P1 and N1, which were higher for speech sounds. The differences between the groups varied with stimulus type: amplitude differed for the verbal stimulus and latency for the non-verbal stimulus. CONCLUSION The P1 and N1 recordings revealed that children with APD performed the coding underlying the detection and identification of acoustic signals, whether verbal or non-verbal, according to a different pattern than the children in the control group.
Affiliation(s)
- Pamela Papile Lunardelo
- Department of Psychology, School of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, Brazil
- Marisa Tomoe Hebihara Fukuda
- Department of Psychology, School of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, Brazil; Department of Health Sciences, Ribeirão Preto Medical School, University of São Paulo, 3900 Bandeirantes Av., Postal Code 14.040-901, Ribeirão Preto, Brazil
- Patricia Aparecida Zuanetti
- Clinical Hospital, Ribeirão Preto Medical School, University of São Paulo, 3900 Bandeirantes Av., Postal Code 14.040-901, Ribeirão Preto, Brazil
- Ângela Cristina Pontes-Fernandes
- Clinical Hospital, Ribeirão Preto Medical School, University of São Paulo, 3900 Bandeirantes Av., Postal Code 14.040-901, Ribeirão Preto, Brazil; University Paulista - UNIP, Ribeirão Preto, Brazil
- Sthella Zanchetta
- Department of Health Sciences, Ribeirão Preto Medical School, University of São Paulo, 3900 Bandeirantes Av., Postal Code 14.040-901, Ribeirão Preto, Brazil; Clinical Hospital, Ribeirão Preto Medical School, University of São Paulo, Ribeirão Preto, Brazil
8. Kamita MK, Silva LAF, Magliaro FCL, Fernandes FD, Matas CG. Auditory Event Related Potentials in children with autism spectrum disorder. Int J Pediatr Otorhinolaryngol 2021;148:110826. PMID: 34246067; DOI: 10.1016/j.ijporl.2021.110826.
Abstract
OBJECTIVE To analyze auditory cortical processing in high-functioning individuals with autism spectrum disorder (ASD). METHODS Thirty individuals were included in the study (15 with ASD and 15 with typical development), and their Auditory Event Related Potentials, elicited with tone-burst and speech stimuli, were analyzed. RESULTS There were no significant differences between individuals with high-functioning ASD without intellectual disability and those with typical development in the Auditory Event Related Potentials elicited with tone bursts or speech stimuli. CONCLUSIONS The Auditory Event Related Potentials did not reveal any changes at the cortical level in individuals with ASD.
Affiliation(s)
- Mariana K Kamita
- Department of Physical Therapy, Speech-Language-Hearing Therapy and Occupational Therapy, School of Medicine, University of São Paulo (USP), Str. Cipotânea, 51, Cidade Universitária, São Paulo, SP, ZIP Code: 05360-160, Brazil
- Liliane A F Silva
- Department of Physical Therapy, Speech-Language-Hearing Therapy and Occupational Therapy, School of Medicine, University of São Paulo (USP), Str. Cipotânea, 51, Cidade Universitária, São Paulo, SP, ZIP Code: 05360-160, Brazil
- Fernanda C L Magliaro
- Department of Physical Therapy, Speech-Language-Hearing Therapy and Occupational Therapy, School of Medicine, University of São Paulo (USP), Str. Cipotânea, 51, Cidade Universitária, São Paulo, SP, ZIP Code: 05360-160, Brazil
- Fernanda D Fernandes
- Department of Physical Therapy, Speech-Language-Hearing Therapy and Occupational Therapy, School of Medicine, University of São Paulo (USP), Str. Cipotânea, 51, Cidade Universitária, São Paulo, SP, ZIP Code: 05360-160, Brazil
- Carla G Matas
- Department of Physical Therapy, Speech-Language-Hearing Therapy and Occupational Therapy, School of Medicine, University of São Paulo (USP), Str. Cipotânea, 51, Cidade Universitária, São Paulo, SP, ZIP Code: 05360-160, Brazil
9. Speech Perception with Noise Vocoding and Background Noise: An EEG and Behavioral Study. J Assoc Res Otolaryngol 2021;22:349-363. PMID: 33851289; DOI: 10.1007/s10162-021-00787-2.
Abstract
This study explored the physiological response of the human brain to degraded speech syllables, with degradation introduced by noise vocoding and/or background noise. The goal was to identify features of auditory evoked potentials (AEPs) that may explain speech intelligibility. Ten human subjects with normal hearing performed syllable-detection tasks while their AEPs were recorded with 32-channel electroencephalography. Subjects were presented with six syllables in consonant-vowel-consonant or vowel-consonant-vowel form, noise-vocoded with 22 or 4 frequency channels. Examination of the AEP peak heights (P1, N1, and P2) showed no consistent effect of vocoding alone; background noise did not consistently reduce P1, sometimes reduced N1, and almost always strongly reduced P2. Two further physiological metrics were examined: (1) classification accuracy of the syllables based on AEPs, which indicated whether AEPs were distinguishable for different syllables, and (2) the cross-condition correlation of AEPs (rcc) between clean and degraded speech, which indicated the brain's ability to extract speech-related features and suppress the response to noise. Both metrics decreased with degraded speech quality. We further tested whether the two metrics can explain cross-subject variation in behavioral performance: a significant correlation existed for rcc, as well as for classification based on early AEPs, in fronto-central areas. Because rcc indicates similarity between responses to clean and degraded speech, this finding suggests that high speech intelligibility may result from the brain's ability to ignore noise in the sound carrier and/or background.
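The cross-condition correlation metric described above can be pictured as a Pearson correlation between the evoked response to clean speech and the response to a degraded version of the same syllable. The sketch below illustrates the idea on simulated waveforms; the waveform shapes and noise levels are assumptions, not the study's data.

```python
import numpy as np

def cross_condition_correlation(aep_clean, aep_degraded):
    """Pearson correlation between clean- and degraded-speech AEPs.

    A high rcc means the response to degraded speech preserves the
    shape of the clean-speech response, i.e. speech-related features
    survive the degradation.
    """
    return np.corrcoef(aep_clean, aep_degraded)[0, 1]

# Toy AEPs: a clean evoked waveform plus two degraded versions that
# retain the underlying waveform to different degrees.
t = np.linspace(0, 0.5, 250)
clean = np.sin(2 * np.pi * 4 * t) * np.exp(-t * 5)
rng = np.random.default_rng(0)
mild = clean + 0.3 * rng.standard_normal(t.size)        # mild degradation
severe = 0.2 * clean + 1.0 * rng.standard_normal(t.size)

r_mild = cross_condition_correlation(clean, mild)
r_severe = cross_condition_correlation(clean, severe)
print(r_mild, r_severe)  # rcc is higher for the mildly degraded response
```

In practice the AEPs would be trial-averaged per condition before correlating, so that single-trial noise does not dominate the estimate.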
10. Miller SE, Graham J, Schafer E. Auditory Sensory Gating of Speech and Nonspeech Stimuli. J Speech Lang Hear Res 2021;64:1404-1412. PMID: 33755510; DOI: 10.1044/2020_jslhr-20-00535.
Abstract
Purpose Auditory sensory gating is a neural measure of inhibition and is typically measured with a click or tonal stimulus. This electrophysiological study examined whether stimulus characteristics, including the use of speech stimuli, affect auditory sensory gating indices. Method Auditory event-related potentials were elicited using natural speech, synthetic speech, and nonspeech stimuli in a traditional auditory gating paradigm in 15 adult listeners with normal hearing. Cortical responses were recorded at 64 electrode sites, and peak amplitudes and latencies to the different stimuli were extracted. Individual data were analyzed using repeated-measures analysis of variance. Results Significant gating of the P1-N1-P2 peaks was observed for all stimulus types. N1-P2 cortical responses were affected by stimulus type, with significantly less neural inhibition of the P2 response for natural speech than for nonspeech and synthetic speech. Conclusions Auditory sensory gating responses can be measured using speech and nonspeech stimuli in listeners with normal hearing. The amount of gating and neural inhibition observed is affected by the spectrotemporal characteristics of the stimuli used to evoke the neural responses.
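Sensory gating in paradigms like the one above is commonly quantified as the amplitude ratio of the response to the second stimulus of a pair over the response to the first. A minimal sketch of that computation follows; the latency window, sampling rate, and simulated waveforms are illustrative assumptions, not the study's parameters.

```python
import numpy as np

def peak_amplitude(erp, fs, win=(0.04, 0.08)):
    """Largest positive deflection inside a latency window (in seconds)."""
    i0, i1 = int(win[0] * fs), int(win[1] * fs)
    return erp[i0:i1].max()

def gating_ratio(erp_s1, erp_s2, fs):
    """S2/S1 amplitude ratio; values well below 1 indicate gating."""
    return peak_amplitude(erp_s2, fs) / peak_amplitude(erp_s1, fs)

# Toy ERPs: the response to the second stimulus is a suppressed copy.
fs = 1000
t = np.arange(0, 0.2, 1 / fs)
erp_s1 = 2.0 * np.exp(-((t - 0.06) ** 2) / (2 * 0.01 ** 2))  # P1-like peak
erp_s2 = 0.5 * erp_s1                                        # gated response
print(gating_ratio(erp_s1, erp_s2, fs))  # 0.5
```

The same ratio can be computed per component (P1, N1, P2) by choosing the appropriate latency window and polarity.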
Affiliation(s)
- Sharon E Miller
- Department of Audiology and Speech-Language Pathology, University of North Texas, Denton
- Jessica Graham
- Division of Audiology, St. Louis Children's Hospital, MO
- Erin Schafer
- Department of Audiology and Speech-Language Pathology, University of North Texas, Denton
11. Reetzke R, Gnanateja GN, Chandrasekaran B. Neural tracking of the speech envelope is differentially modulated by attention and language experience. Brain Lang 2021;213:104891. PMID: 33290877; PMCID: PMC7856208; DOI: 10.1016/j.bandl.2020.104891.
Abstract
The ability to selectively attend to a speech signal amid competing sounds is a significant challenge, especially for listeners trying to comprehend non-native speech. Attention is critical to direct neural processing resources to the most essential information. Here, neural tracking of the speech envelope of an English story narrative and cortical auditory evoked potentials (CAEPs) to non-speech stimuli were simultaneously assayed in native and non-native listeners of English. Although native listeners exhibited higher narrative comprehension accuracy, non-native listeners exhibited enhanced neural tracking of the speech envelope and heightened CAEP magnitudes. These results support an emerging view that although attention to a target speech signal enhances neural tracking of the speech envelope, this mechanism itself may not confer speech comprehension advantages. Our findings suggest that non-native listeners may engage neural attentional processes that enhance low-level acoustic features, regardless of whether the target signal contains speech or non-speech information.
Affiliation(s)
- Rachel Reetzke
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, United States; Center for Autism and Related Disorders, Kennedy Krieger Institute, United States
- G Nike Gnanateja
- Department of Communication Science and Disorders, University of Pittsburgh, United States
- Bharath Chandrasekaran
- Department of Communication Science and Disorders, University of Pittsburgh, United States
12. Bell KL, Lister JJ, Conter R, Harrison Bush AL, O'Brien J. Cognitive Event-Related Potential Responses Differentiate Older Adults with and without Probable Mild Cognitive Impairment. Exp Aging Res 2020;47:145-164. PMID: 33342371; DOI: 10.1080/0361073x.2020.1861838.
Abstract
Background: Older adults rarely seek cognitive assessment but often visit other healthcare professionals (e.g., audiologists). Noninvasive clinical measures sensitive to cognitive impairment and within the scopes of practice of those professions are needed. Purpose: This study examined the effects of probable mild cognitive impairment (MCI) on the latency and mean amplitude of the P3b auditory event-related potential. Method: Fifty-four participants comprised two groups according to cognitive status (cognitively normal older adults [CNOA], n = 25; probable MCI, n = 29). The P3b was recorded using an oddball paradigm with speech (/ba/, /da/) and non-speech (1000, 2000 Hz) stimuli. Amplitudes and latencies were compared between groups at six electrodes (FPz, Fz, FCz, Cz, CPz, Pz), across stimulus probability and type. Results: CNOA participants had larger P3b mean amplitudes for deviant stimuli than those with probable MCI. Group effects on latency were isolated to deviant stimuli at FCz, and only when participants with unclear P3bs were included. Findings did not covary with age or education. Overall, CNOAs showed a large P3b oddball effect while those with probable MCI did not. Conclusions: The P3b can reveal electrophysiological differences between older adults with and without probable MCI. These results support the development of educational materials targeting professionals who use auditory evoked potentials.
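The "mean amplitude" measure reported above is typically the average voltage of the trial-averaged ERP inside a fixed latency window; the oddball effect is then the deviant-minus-standard difference of that measure. A generic sketch on simulated data (window, sampling rate, and waveforms are assumptions, not the study's values):

```python
import numpy as np

def mean_amplitude(epochs, fs, win=(0.30, 0.50)):
    """Mean voltage in a latency window of the trial-averaged ERP.

    epochs : array (n_trials, n_samples), one row per trial
    win    : latency window in seconds (here, a typical P3b window)
    """
    erp = epochs.mean(axis=0)                    # average across trials
    i0, i1 = int(win[0] * fs), int(win[1] * fs)
    return erp[i0:i1].mean()

# Toy oddball data: deviant trials carry an extra late positivity.
fs = 250
t = np.arange(0, 0.8, 1 / fs)
rng = np.random.default_rng(0)
p3b = 5.0 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
standards = rng.standard_normal((80, t.size))          # frequent trials
deviants = rng.standard_normal((20, t.size)) + p3b     # rare trials

oddball_effect = mean_amplitude(deviants, fs) - mean_amplitude(standards, fs)
print(oddball_effect)  # clearly positive: the deviants carry the P3b
```

A reduced or absent oddball effect, as described for the probable-MCI group, would show up here as a difference close to zero.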
Affiliation(s)
- Karen L Bell
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida, USA
- Jennifer Jones Lister
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida, USA
- Rachel Conter
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida, USA
- Aryn L Harrison Bush
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida, USA; Department of Brain Health and Cognition, Reliance Medical Centers, Lakeland, Florida, USA
- Jennifer O'Brien
- Department of Psychology, University of South Florida, Tampa, Florida, USA
|
13. Megha, Maruthy S. Effect of Hearing Aid Acclimatization on Speech-in-Noise Perception and Its Relationship With Changes in Auditory Long Latency Responses. Am J Audiol 2020;29:774-784. PMID: 32970453; DOI: 10.1044/2020_aja-19-00124.
Abstract
Objective The study tracked speech-in-noise perception and auditory long latency responses (ALLRs) over a period of hearing aid use in naïve hearing aid users. The primary aim was to investigate the relationship between the change in speech-in-noise perception and the change in ALLRs. Method Thirty adults with mild-to-moderate sensorineural hearing loss (clinical group) and 17 adults with normal hearing (control group), aged 23-60 years, participated in the study. Syllable identification in noise (SIN) and ALLRs in noise were measured three times (three sessions) over 2 months of hearing aid use. Results There was a significant increase in SIN scores and a decrease in ALLR latencies in the later sessions compared to the baseline session in the clinical group, whereas the changes across the three sessions in the control group were not statistically significant. The magnitude of change in ALLRs in the clinical group did not significantly correlate with the change in SIN scores. Conclusions The study provides evidence for improvements in speech perception in noise and in the processing time of auditory cortical areas with hearing aid acclimatization. However, improvement in ALLRs does not guarantee improvement in speech perception in noise.
Affiliation(s)
- Megha
- Department of Audiology, All India Institute of Speech and Hearing, Manasagangothri, Mysuru, Karnataka
- Sandeep Maruthy
- Department of Audiology, All India Institute of Speech and Hearing, Manasagangothri, Mysuru, Karnataka
14. Faucette SP, Stuart A. An examination of electrophysiological release from masking in young and older adults. J Acoust Soc Am 2020;148:1786. PMID: 33138490; DOI: 10.1121/10.0002010.
Abstract
The effect of age on release from masking (RFM) was examined using cortical auditory evoked potentials (CAEPs). Two speech-in-noise paradigms (i.e., fixed speech with varying signal-to-noise ratios [SNRs] and fixed noise with varying speech levels), similar to those used in behavioral measures of RFM, were employed with competing continuous and interrupted noises. Young and older normal-hearing adults participated (N = 36). Cortical responses were evoked in the fixed-speech paradigm at SNRs of -10, 0, and 10 dB. In the fixed-noise paradigm, the CAEP SNR threshold was determined in both noises as the lowest SNR that yielded a measurable response. RFM was demonstrated in the fixed-speech paradigm by a significant number of missing responses, longer P1 and N1 latencies, and smaller N1 amplitudes in continuous noise at the poorest SNR (-10 dB). In the fixed-noise paradigm, RFM was demonstrated by significantly lower CAEP SNR thresholds in interrupted noise. Older participants demonstrated significantly longer P2 latencies and reduced P1 and N1 amplitudes. There was no evidence of a group difference in RFM in either paradigm.
Affiliation(s)
- Sarah P Faucette, Department of Otolaryngology and Communicative Sciences, University of Mississippi Medical Center, 2500 North State Street, Jackson, Mississippi 39216-4505, USA
- Andrew Stuart, Department of Communication Sciences and Disorders, East Carolina University, Greenville, North Carolina 27858-4353, USA

15
Wang NYH, Chiang CH, Wang HLS, Tsao Y. Atypical Frequency Sweep Processing in Chinese Children With Reading Difficulties: Evidence From Magnetoencephalography. Front Psychol 2020; 11:1649. PMID: 32849009. PMCID: PMC7431696. DOI: 10.3389/fpsyg.2020.01649.
Abstract
Chinese lexical tones determine word meaning and are crucial in reading development. Reduced tone awareness is widely reported in children with reading difficulties (RD). Lexical-tone processing requires sensitivity to frequency-modulated sound changes. The present study investigates whether reduced tone awareness in children with RD is reflected in basic auditory processing and the level at which the breakdown occurs. Magnetoencephalographic techniques and an oddball paradigm were used to elicit auditory-related neural responses. Five frequency sweep conditions were established to mirror the frequency fluctuation in Chinese lexical tones, including one standard (level) sweep and four deviant sweeps (fast-up, fast-down, slow-up, and slow-down). A total of 14 Chinese-speaking children aged 9–12 years with RD and 13 age-matched typically developing children were recruited. The participants completed a magnetoencephalographic data acquisition session, during which they watched a silent cartoon and the auditory stimuli were presented in a pseudorandomized order. The results revealed that the significant between-group difference was caused by differences in the level of auditory sensory processing, reflected by the P1m component elicited by the slow-up frequency sweep. This finding indicated that auditory sensory processing was affected by both the duration and the direction of a frequency sweep. Sensitivity to changes in duration and frequency is crucial for the processing of suprasegmental features. Therefore, this sensory deficit might be associated with difficulties discriminating two tones with an upward frequency contour in Chinese.
Affiliation(s)
- Natalie Yu-Hsien Wang, Department of Audiology and Speech-Language Pathology, Asia University, Taichung, Taiwan
- Chun-Han Chiang, Department of Special Education, National Pingtung University, Pingtung, Taiwan
- Yu Tsao, Research Center for Information Technology Innovation, Academia Sinica, Taipei, Taiwan

16
Whitten A, Key AP, Mefferd AS, Bodfish JW. Auditory event-related potentials index faster processing of natural speech but not synthetic speech over nonspeech analogs in children. Brain Lang 2020; 207:104825. PMID: 32563764. DOI: 10.1016/j.bandl.2020.104825.
Abstract
Given the crucial role of speech sounds in human language, it may be beneficial for speech to be supported by more efficient auditory and attentional neural processing mechanisms compared to nonspeech sounds. However, previous event-related potential (ERP) studies have found either no differences or slower auditory processing of speech than nonspeech, as well as inconsistent attentional processing. We hypothesized that this may be due to the use of synthetic stimuli in past experiments. The present study measured ERP responses during passive listening to both synthetic and natural speech and complexity-matched nonspeech analog sounds in 22 8-11-year-old children. We found that although children were more likely to show immature auditory ERP responses to the more complex natural stimuli, ERP latencies were significantly faster to natural speech compared to cow vocalizations, but were significantly slower to synthetic speech compared to tones. The attentional results indicated a P3a orienting response only to the cow sound, and we discuss potential methodological reasons for this. We conclude that our results support more efficient auditory processing of natural speech sounds in children, though more research with a wider array of stimuli will be necessary to confirm these results. Our results also highlight the importance of using natural stimuli in research investigating the neurobiology of language.
Affiliation(s)
- Allison Whitten, Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S., Nashville, TN, USA
- Alexandra P Key, Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S., Nashville, TN, USA; Department of Psychiatry and Behavioral Sciences, Vanderbilt Psychiatric Hospital, 1601 23rd Ave. S, Nashville, TN, USA; Vanderbilt Kennedy Center, 110 Magnolia Cir, Nashville, TN, USA
- Antje S Mefferd, Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S., Nashville, TN, USA; Vanderbilt Kennedy Center, 110 Magnolia Cir, Nashville, TN, USA
- James W Bodfish, Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S., Nashville, TN, USA; Department of Psychiatry and Behavioral Sciences, Vanderbilt Psychiatric Hospital, 1601 23rd Ave. S, Nashville, TN, USA; Vanderbilt Kennedy Center, 110 Magnolia Cir, Nashville, TN, USA; Vanderbilt Brain Institute, 6133 Medical Research Building III, 465 21st Avenue S., Nashville, TN, USA

17
Miller SE, Zhang Y. Neural Coding of Syllable-Final Fricatives with and without Hearing Aid Amplification. J Am Acad Audiol 2020; 31:566-577. PMID: 32340057. DOI: 10.1055/s-0040-1709448.
Abstract
BACKGROUND Cortical auditory event-related potentials are a potentially useful clinical tool for objectively assessing speech outcomes with rehabilitative devices. Whether hearing aids reliably encode the spectrotemporal characteristics of fricative stimuli in different phonological contexts, and whether these differences result in distinct neural responses with and without hearing aid amplification, remain unclear. PURPOSE To determine whether the neural coding of the voiceless fricatives /s/ and /ʃ/ in the syllable-final context reliably differed without hearing aid amplification and whether hearing aid amplification altered neural coding of the fricative contrast. RESEARCH DESIGN A repeated-measures, within-subject design was used to compare the neural coding of a fricative contrast with and without hearing aid amplification. STUDY SAMPLE Ten adult listeners with normal hearing participated in the study. DATA COLLECTION AND ANALYSIS Cortical auditory event-related potentials were elicited to an /ɑs/-/ɑʃ/ vowel-fricative contrast in unaided and aided listening conditions. Neural responses to the speech contrast were recorded at 64 electrode sites. Peak latencies and amplitudes of the cortical response waveforms to the fricatives were analyzed using repeated-measures analysis of variance. RESULTS The P2' component of the acoustic change complex significantly differed for the syllable-final fricative contrast both with and without hearing aid amplification. Hearing aid amplification differentially altered the neural coding of the contrast across frontal, temporal, and parietal electrode regions. CONCLUSIONS Hearing aid amplification altered the neural coding of syllable-final fricatives. However, the contrast remained acoustically distinct in the aided and unaided conditions, and cortical responses to the fricatives significantly differed with and without the hearing aid.
Affiliation(s)
- Sharon E Miller, Department of Audiology and Speech-Language Pathology, University of North Texas, Denton, Texas
- Yang Zhang, Department of Speech-Language Hearing Science, University of Minnesota, Minneapolis, Minnesota; Center for Neurobehavioral Development, University of Minnesota, Minneapolis, Minnesota; Center for Applied and Translational Sensory Science, University of Minnesota, Minneapolis, Minnesota

18
Lunardelo PP, Simões HDO, Zanchetta S. Differences and similarities in the long-latency auditory evoked potential recording of P1-N1 for different sound stimuli. Rev CEFAC 2019. DOI: 10.1590/1982-0216/201921218618.
Abstract
ABSTRACT Purpose: this study aimed to illustrate the similarities and differences in the recording of the P1 and N1 components for verbal and non-verbal stimuli in an adult sample population, for reference purposes. Methods: twenty-one healthy adult individuals of both sexes were recruited for this study. The long-latency auditory evoked potential was elicited by bilateral stimulation of both ears, using simultaneous recording, with non-verbal stimuli and the syllable /da/. Results: for non-verbal and speech stimuli, N1 was identified in 100.0% of the participants, whereas P1 was observed in 85.7% and 95.2% of individuals for non-verbal and speech stimuli, respectively. Significant differences were observed in the P1 and N1 amplitudes between the ears (p < 0.05); the P1 component was larger in the left ear than in the right ear, whereas the N1 component was larger in the right ear. Regarding the stimuli, the amplitude and latency values of N1 were higher for speech, whereas for P1, different results were obtained only in latency. Conclusion: the N1 component was the most frequently detected one. Differences in latency and amplitude between stimuli occurred only for N1, which can be justified by its role in the process of speech discrimination.
19
Leite RA, Magliaro FCL, Raimundo JC, Bento RF, Matas CG. Monitoring auditory cortical plasticity in hearing aid users with long latency auditory evoked potentials: a longitudinal study. Clinics (Sao Paulo) 2018; 73:e51. PMID: 29466495. PMCID: PMC5808112. DOI: 10.6061/clinics/2018/e51.
Abstract
OBJECTIVE The objective of this study was to compare long-latency auditory evoked potentials before and after hearing aid fittings in children with sensorineural hearing loss compared with age-matched children with normal hearing. METHODS Thirty-two subjects of both genders aged 7 to 12 years participated in this study and were divided into two groups as follows: 14 children with normal hearing were assigned to the control group (mean age 9 years and 8 months), and 18 children with mild to moderate symmetrical bilateral sensorineural hearing loss were assigned to the study group (mean age 9 years and 2 months). The children underwent tympanometry, pure tone and speech audiometry and long-latency auditory evoked potential testing with speech and tone burst stimuli. The groups were assessed at three time points. RESULTS The study group had a lower percentage of positive responses, lower P1-N1 and P2-N2 amplitudes (speech and tone burst), and increased latencies for the P1 and P300 components following the tone burst stimuli. They also showed improvements in long-latency auditory evoked potentials (with regard to both the amplitude and presence of responses) after hearing aid use. CONCLUSIONS Alterations in the central auditory pathways can be identified using P1-N1 and P2-N2 amplitude components, and the presence of these components increases after a short period of auditory stimulation (hearing aid use). These findings emphasize the importance of using these amplitude components to monitor the neuroplasticity of the central auditory nervous system in hearing aid users.
Affiliation(s)
- Renata Aparecida Leite (corresponding author), Departamento de Fisioterapia, Fonoaudiologia e Terapia Ocupacional, Faculdade de Medicina (FMUSP), Universidade de Sao Paulo, Sao Paulo, SP, BR
- Fernanda Cristina Leite Magliaro, Departamento de Fisioterapia, Fonoaudiologia e Terapia Ocupacional, Faculdade de Medicina (FMUSP), Universidade de Sao Paulo, Sao Paulo, SP, BR
- Jeziela Cristina Raimundo, Departamento de Fisioterapia, Fonoaudiologia e Terapia Ocupacional, Faculdade de Medicina (FMUSP), Universidade de Sao Paulo, Sao Paulo, SP, BR
- Ricardo Ferreira Bento, Departamento de Oftalmologia e Otorrinolaringologia, Faculdade de Medicina (FMUSP), Universidade de Sao Paulo, Sao Paulo, SP, BR
- Carla Gentile Matas, Departamento de Fisioterapia, Fonoaudiologia e Terapia Ocupacional, Faculdade de Medicina (FMUSP), Universidade de Sao Paulo, Sao Paulo, SP, BR

20
Faucette SP, Stuart A. Evidence of a speech evoked electrophysiological release from masking in noise. J Acoust Soc Am 2017; 142:EL218. PMID: 28863590. DOI: 10.1121/1.4998151.
Abstract
In this study, a release from masking (RFM) was sought with cortical auditory evoked potentials (CAEPs) elicited by speech (/da/) in competing continuous and interrupted noises. Two paradigms (i.e., fixed speech with varying signal-to-noise ratios and fixed noise with varying speech levels) were employed. Shorter latencies and larger amplitudes were observed in interrupted versus continuous noise at equivalent signal-to-noise ratios. With fixed speech presentation, P1-N1-P2 latencies were prolonged and peak N1 and P2 amplitudes decreased and more so with continuous noise. CAEP thresholds were lower in interrupted noise. This is the first demonstration of RFM with CAEPs to speech.
Affiliation(s)
- Sarah P Faucette, Department of Communication Sciences and Disorders, East Carolina University, Greenville, North Carolina 27858-4353, USA
- Andrew Stuart, Department of Communication Sciences and Disorders, East Carolina University, Greenville, North Carolina 27858-4353, USA

21
Manca AD, Grimaldi M. Vowels and Consonants in the Brain: Evidence from Magnetoencephalographic Studies on the N1m in Normal-Hearing Listeners. Front Psychol 2016; 7:1413. PMID: 27713712. PMCID: PMC5031792. DOI: 10.3389/fpsyg.2016.01413.
Abstract
Speech sound perception is one of the most fascinating tasks performed by the human brain. It involves a mapping from continuous acoustic waveforms onto the discrete phonological units computed to store words in the mental lexicon. In this article, we review the magnetoencephalographic studies that have explored the timing and morphology of the N1m component to investigate how vowels and consonants are computed and represented within the auditory cortex. The neurons involved in the N1m act to construct a sensory memory of the stimulus through spatially and temporally distributed activation patterns within the auditory cortex. Indeed, localization of auditory field maps in animals and humans has suggested two levels of sound coding: a tonotopy dimension for spectral properties and a tonochrony dimension for temporal properties of sounds. When the stimulus is a complex speech sound, tonotopy and tonochrony data may give important information for assessing whether speech sound parsing and decoding are generated by pure bottom-up reflection of acoustic differences or whether they are additionally affected by top-down processes related to phonological categories. Hints supporting pure bottom-up processing coexist with hints supporting top-down abstract phoneme representation. At present, N1m data (amplitude, latency, source generators, and hemispheric distribution) are limited and do not help to disentangle the issue. The nature of these limitations is discussed. Moreover, neurophysiological studies on animals and neuroimaging studies on humans are taken into consideration. We also compare the N1m findings with investigations of the magnetic mismatch negativity (MMNm) component and with the analogous electrical components, the N1 and the MMN. We conclude that the N1 seems more sensitive than the N1m in capturing lateralization and hierarchical processes, although the data are very preliminary. Finally, we suggest that MEG data should be integrated with EEG data in light of the neural oscillations framework, and we propose some concerns that should be addressed by future investigations if we want to closely align language research with issues at the core of functional brain mechanisms.
Affiliation(s)
- Anna Dora Manca, Dipartimento di Studi Umanistici, Centro di Ricerca Interdisciplinare sul Linguaggio, University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca Interdisciplinare Applicata alla Medicina, Lecce, Italy
- Mirko Grimaldi, Dipartimento di Studi Umanistici, Centro di Ricerca Interdisciplinare sul Linguaggio, University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca Interdisciplinare Applicata alla Medicina, Lecce, Italy

22
Billings CJ, Grush LD. Signal type and signal-to-noise ratio interact to affect cortical auditory evoked potentials. J Acoust Soc Am 2016; 140:EL221. PMID: 27586784. PMCID: PMC5848827. DOI: 10.1121/1.4959600.
Abstract
Use of speech signals and background noise is emerging in cortical auditory evoked potential (CAEP) studies; however, the interaction between signal type and noise level remains unclear. Two experiments determined the interaction between signal type and signal-to-noise ratio (SNR) on CAEPs. Three signals (syllable /ba/, 1000-Hz tone, and the /ba/ envelope with 1000-Hz fine structure) with varying SNRs were used in two experiments, demonstrating signal-by-SNR interactions due to both envelope and spectral characteristics. When using real-world stimuli such as speech to evoke CAEPs, temporal and spectral complexity leads to differences with traditional tonal stimuli, especially when presented in background noise.
Affiliation(s)
- Curtis J Billings, National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, Portland, Oregon 97239, USA
- Leslie D Grush, National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, Portland, Oregon 97239, USA

23
Morris DJ, Steinmetzger K, Tøndering J. Auditory event-related responses to diphthongs in different attention conditions. Neurosci Lett 2016; 626:158-63. DOI: 10.1016/j.neulet.2016.05.002.
24
Didoné DD, Oppitz SJ, Folgearini J, Biaggio EPV, Garcia MV. Auditory Evoked Potentials with Different Speech Stimuli: a Comparison and Standardization of Values. Int Arch Otorhinolaryngol 2016; 20:99-104. PMID: 27096012. PMCID: PMC4835323. DOI: 10.1055/s-0035-1566133.
Abstract
Introduction Long-latency auditory evoked potentials (LLAEP) elicited with speech sounds have been the subject of research, as these stimuli are well suited to assessing an individual's detection and discrimination abilities. Objective The objective of this study is to compare and describe the latency and amplitude values of cortical potentials elicited by speech stimuli in adults with normal hearing. Methods The sample included 30 normal-hearing individuals aged between 18 and 32 years without otological or auditory processing disorders. All participants underwent LLAEP testing using pairs of speech stimuli (/ba/ x /ga/, /ba/ x /da/, and /ba/ x /di/). The LLAEP was recorded with binaural stimulation at an intensity of 75 dB SPL. In total, 300 stimuli were used (~60 rare and 240 frequent) to obtain the LLAEP. Individuals were instructed to count the rare stimuli. The authors analyzed the latencies of P1, N1, P2, N2, and P300, as well as the amplitude of P300. Results The mean age of the group was approximately 23 years. The mean values of the cortical potentials varied across the different speech stimuli. The N2 latency was greater for /ba/ x /di/ and the P300 latency was greater for /ba/ x /ga/. The overall mean amplitude ranged from 5.35 to 7.35 µV across the different speech stimuli. Conclusion It was possible to obtain latency and amplitude values for the different speech stimuli. Furthermore, the N2 component showed the longest latency with the /ba/ x /di/ stimulus and the P300 with /ba/ x /ga/.
Affiliation(s)
- Sheila Jacques Oppitz, Department of Phonoaudiology, Universidade Federal de Santa Maria, Santa Maria, Brazil
- Jordana Folgearini, Department of Phonoaudiology, Universidade Federal de Santa Maria, Santa Maria, Brazil
- Michele Vargas Garcia, Department of Phonoaudiology, Universidade Federal de Santa Maria, Santa Maria, Brazil

25
P1 amplitude across replicates: does measurement method make a difference? J Clin Neurophysiol 2013; 30:287-90. PMID: 23733094. DOI: 10.1097/wnp.0b013e31828736a0.
Abstract
PURPOSE Most cortical auditory evoked potential instruments provide a "default" peak-to-baseline (P-B) amplitude and a means of obtaining a peak-to-trough (P-T) measure. This study investigated the sensitivity of these two measures in assessing the effects of repeated runs on the P1 component of the electrophysiological response. METHODS Cortical auditory evoked potentials were recorded from 30 normal-hearing young adults. Three stimuli were used: an 80-millisecond synthetic /da/ and 1 kHz tone bursts of 40- and 80-millisecond durations. Stimuli were presented at 60 dB normal hearing level in a counterbalanced order. Three serial replicates were obtained for each stimulus. P1 amplitude and latency were measured. RESULTS The P-T amplitudes diminished significantly (P < 0.01) from replicate 1 to replicate 3 for each of the three stimulus types, but P-B amplitudes did not. P1 latency findings were consistent with the diminished P-T amplitude data, in that latency increased significantly (P = 0.024) from replicate 1 to replicate 3 for one stimulus (the 40-millisecond tone). CONCLUSIONS The P-T amplitude measurement method identified significant decrements in amplitude as repeated runs were obtained, whereas the P-B method did not. These findings suggest that a P-T method is more sensitive to some P1 electrophysiological activity than a P-B measure.
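The peak-to-baseline versus peak-to-trough distinction above can be made concrete with a short sketch. This is illustrative only: the waveform, baseline window, and search windows below are invented for the example and are not the study's instrumentation or data.

```python
import numpy as np

# Synthetic averaged waveform: a P1-like positive peak (~50 ms) followed by an
# N1-like trough (~100 ms), riding on a small baseline offset.
t = np.arange(0, 0.3, 0.001)  # 300 ms epoch, 1 kHz sampling (illustrative)
wave = (2.0 * np.exp(-((t - 0.05) ** 2) / (2 * 0.01 ** 2))
        - 3.0 * np.exp(-((t - 0.10) ** 2) / (2 * 0.015 ** 2))
        + 0.2)

baseline = wave[t < 0.02].mean()            # pre-response baseline estimate
p1_win = (t >= 0.03) & (t <= 0.08)          # search window for the P1 peak
trough_win = (t >= 0.08) & (t <= 0.15)      # search window for the following trough

p1_amp = wave[p1_win].max()
trough_amp = wave[trough_win].min()

p_b = p1_amp - baseline     # peak-to-baseline (P-B) amplitude
p_t = p1_amp - trough_amp   # peak-to-trough (P-T) amplitude
print(f"P-B = {p_b:.2f} uV, P-T = {p_t:.2f} uV")
```

Because P-T also tracks the depth of the following trough, the two measures can move differently across replicates even when the P1 peak itself changes little, which is one way the study's divergent P-B and P-T findings can arise.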
26
Swink S, Stuart A. The effect of gender on the N1-P2 auditory complex while listening and speaking with altered auditory feedback. Brain Lang 2012; 122:25-33. PMID: 22564750. DOI: 10.1016/j.bandl.2012.04.007.
Abstract
The effect of gender on the N1-P2 auditory complex was examined while listening and speaking with altered auditory feedback. Fifteen normal hearing adult males and 15 females participated. N1-P2 components were evoked while listening to self-produced nonaltered and frequency shifted /a/ tokens and during production of /a/ tokens during nonaltered auditory feedback (NAF), frequency altered feedback (FAF), and delayed auditory feedback (DAF; 50 and 200 ms). During speech production, females exhibited earlier N1 latencies during 50 ms DAF and earlier P2 latencies during 50 ms DAF and FAF. There were no significant differences in N1-P2 amplitudes across all conditions. Comparing listening to active speaking, N1 and P2 latencies were earlier among females, with speaking, and under NAF. N1-P2 amplitudes were significantly reduced during speech production. These findings are consistent with the notions that speech production suppresses auditory cortex responsiveness and males and females process altered auditory feedback differently while speaking.
Affiliation(s)
- Shannon Swink, Department of Communication Sciences and Disorders, East Carolina University, Greenville, NC, USA