1
Krishnan A, Suresh CH, Gandour JT. Cortical hemisphere preference and brainstem ear asymmetry reflect experience-dependent functional modulation of pitch. Brain Lang 2021;221:104995. [PMID: 34303110; PMCID: PMC8559596; DOI: 10.1016/j.bandl.2021.104995]
Abstract
Temporal attributes of pitch processing at cortical and subcortical levels are differentially weighted and well coordinated. The question is whether language experience induces functional modulation of hemispheric preference complemented by brainstem ear asymmetry for pitch processing. Brainstem frequency-following and cortical pitch responses were recorded concurrently from Mandarin and English participants. A Mandarin syllable with a rising pitch contour was presented monaurally to each ear. At the cortical level, left-ear stimulation in the Chinese group revealed an experience-dependent response for pitch processing in the right hemisphere, consistent with a functional account. The English group revealed a contralateral hemisphere preference consistent with a structural account. At the brainstem level, Chinese participants showed a functional leftward ear asymmetry, whereas English participants were consistent with a structural account. Overall, language experience modulates both cortical hemispheric preference and brainstem ear asymmetry in a complementary manner to optimize processing of temporal attributes of pitch.
Affiliation(s)
- Ananthanarayan Krishnan
- Department of Speech Language Hearing Sciences, Purdue University, Lyles Porter Hall, 715 Clinic Drive, West Lafayette, IN 47907, USA.
- Chandan H Suresh
- Department of Speech Language Hearing Sciences, Purdue University, Lyles Porter Hall, 715 Clinic Drive, West Lafayette, IN 47907, USA; Department of Communication Disorders, California State University, Los Angeles, 5151 State University Drive, Los Angeles, CA 90032, USA.
- Jackson T Gandour
- Department of Speech Language Hearing Sciences, Purdue University, Lyles Porter Hall, 715 Clinic Drive, West Lafayette, IN 47907, USA.
2
The Use of Static and Dynamic Cues for Vowel Identification by Children Wearing Hearing Aids or Cochlear Implants. Ear Hear 2019;41:72-81. [PMID: 30998549; DOI: 10.1097/aud.0000000000000735]
Abstract
OBJECTIVE To examine vowel perception based on dynamic formant transition and/or static formant pattern cues in children with hearing loss while using their hearing aids or cochlear implants. We predicted that the sensorineural hearing loss would degrade formant transitions more than static formant patterns, and that shortening the duration of cues would cause more difficulty for vowel identification for these children than for their normal-hearing peers. DESIGN A repeated-measures, between-group design was used. Children 4 to 9 years of age from a university hearing services clinic who were fit for hearing aids (13 children) or who wore cochlear implants (10 children) participated. Chronologically age-matched children with normal hearing served as controls (23 children). Stimuli included three naturally produced syllables (/ba/, /bi/, and /bu/), which were presented either in their entirety or segmented to isolate the formant transition or the vowel static formant center. The stimuli were presented to listeners via loudspeaker in the sound field. Aided participants wore their own devices and listened with their everyday settings. Participants chose the vowel presented by selecting from corresponding pictures on a computer screen. RESULTS Children with hearing loss were less able to use shortened transition or shortened vowel centers to identify vowels as compared to their normal-hearing peers. Whole syllable and initial transition yielded better identification performance than the vowel center for /ɑ/, but not for /i/ or /u/. CONCLUSIONS The children with hearing loss may require a longer time window than children with normal hearing to integrate vowel cues over time because of altered peripheral encoding in spectrotemporal domains. Clinical implications include cognizance of the importance of vowel perception when developing habilitative programs for children with hearing loss.
3
Okayasu T, Nishimura T, Uratani Y, Yamashita A, Nakagawa S, Yamanaka T, Hosoi H, Kitahara T. Temporal window of integration estimated by omission in bone-conducted ultrasound. Neurosci Lett 2018;696:1-6. [PMID: 30476566; DOI: 10.1016/j.neulet.2018.11.035]
Abstract
Bone-conducted ultrasound (BCU) can be heard by normal-hearing individuals and by some profoundly deaf individuals. Moreover, amplitude-modulated BCU can transmit the speech signal. These characteristics raise the possibility of developing a bone-conducted ultrasonic hearing aid. Previous studies on the perception mechanism of speech-modulated BCU have pointed to the importance of temporal rather than frequency information. To elucidate the perception of speech-modulated BCU, further investigation of temporal-information processing is needed. The temporal processing of air-conducted audible sounds (ACASs) involves the integration of closely presented sounds into a single information unit. This long temporal window of integration has been estimated at approximately 150-200 ms and contributes to the discrimination of speech sounds. The present study investigated the long temporal integration of BCU, evaluated by stimulus omission using magnetoencephalography. Eight participants with normal hearing took part. An ultrasonic tone burst with a duration of 50 ms and a frequency of 30 kHz served as the standard stimulus and was presented with fixed onset-to-onset times, or stimulus-onset asynchronies (SOAs). In each sequence, the SOA was set to 100, 125, 150, 175, 200, or 350 ms. Deviants were created by randomly omitting tones from the stimulus train. Definite mismatch fields were elicited by sound omission in stimulus trains with SOAs of 100-150 ms, but not with SOAs of 200 or 350 ms, for all participants. We found that a BCU stimulus train is integrated within a temporal window of integration at SOAs of 100-150 ms, but the tones are treated as separate events when the SOA is 200 or 350 ms. Thus, the long temporal window of integration for BCU estimated by omission was 150-200 ms, similar to that for ACASs (Yabe et al., NeuroReport 8 (1997) 1971-1974; Psychophysiology 35 (1998) 615-619). These findings contribute to elucidating and improving the perception of speech-modulated BCU.
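For readers wanting a concrete picture of the omission paradigm described in this abstract, the following is a minimal sketch in Python. The function names, the omission probability, and the fusion rule are illustrative assumptions, not details taken from the paper; only the tone duration, the tested SOAs, and the 150-200 ms window estimate come from the abstract.

```python
import random

TONE_DURATION_MS = 50               # standard BCU tone-burst duration (from the abstract)
INTEGRATION_WINDOW_MS = (150, 200)  # long temporal window estimated for BCU

def build_stimulus_train(n_stimuli, soa_ms, omission_prob=0.1, seed=0):
    """Return (onset_ms, presented) pairs for one stimulus sequence.
    Standards occur at a fixed SOA; deviants are random omissions."""
    rng = random.Random(seed)
    train = []
    for i in range(n_stimuli):
        onset = i * soa_ms
        presented = rng.random() >= omission_prob  # False -> omitted deviant
        train.append((onset, presented))
    return train

def within_integration_window(soa_ms, window=INTEGRATION_WINDOW_MS):
    """Successive tones are expected to fuse into one perceptual unit
    when the SOA does not exceed the lower edge of the window."""
    return soa_ms <= window[0]
```

Under this sketch, trains with SOAs of 100-150 ms fall inside the integration window (omissions elicit mismatch fields), while 200 and 350 ms trains do not, matching the reported pattern.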
Affiliation(s)
- Tadao Okayasu
- Department of Otolaryngology-Head and Neck Surgery, Nara Medical University, 840 Shijo-cho, Kashihara, Nara 634-8522, Japan.
- Tadashi Nishimura
- Department of Otolaryngology-Head and Neck Surgery, Nara Medical University, 840 Shijo-cho, Kashihara, Nara 634-8522, Japan.
- Yuka Uratani
- Department of Otolaryngology-Head and Neck Surgery, Nara Medical University, 840 Shijo-cho, Kashihara, Nara 634-8522, Japan.
- Akinori Yamashita
- Department of Otolaryngology-Head and Neck Surgery, Nara Medical University, 840 Shijo-cho, Kashihara, Nara 634-8522, Japan.
- Seiji Nakagawa
- Center for Frontier Medical Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522, Japan; Department of Medical Engineering, Graduate School of Engineering, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522, Japan; University Hospital Med-Tech Link Center, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522, Japan; Biomedical Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), 1-8-31 Midorigaoka, Ikeda, Osaka 563-8577, Japan.
- Toshiaki Yamanaka
- Department of Otolaryngology-Head and Neck Surgery, Nara Medical University, 840 Shijo-cho, Kashihara, Nara 634-8522, Japan.
- Hiroshi Hosoi
- President's Office, Nara Medical University, 840 Shijo-cho, Kashihara, Nara 634-8522, Japan.
- Tadashi Kitahara
- Department of Otolaryngology-Head and Neck Surgery, Nara Medical University, 840 Shijo-cho, Kashihara, Nara 634-8522, Japan.
4
Krishnan A, Gandour JT, Ananthakrishnan S, Vijayaraghavan V. Language experience enhances early cortical pitch-dependent responses. J Neurolinguistics 2015;33:128-148. [PMID: 25506127; PMCID: PMC4261237; DOI: 10.1016/j.jneuroling.2014.08.002]
Abstract
Pitch processing at cortical and subcortical stages of processing is shaped by language experience. We recently demonstrated that specific components of the cortical pitch response (CPR) index the more rapidly-changing portions of the high rising Tone 2 of Mandarin Chinese, in addition to marking pitch onset and sound offset. In this study, we examine how language experience (Mandarin vs. English) shapes the processing of different temporal attributes of pitch reflected in the CPR components using stimuli representative of within-category variants of Tone 2. Results showed that the magnitude of CPR components (Na-Pb and Pb-Nb) and the correlation between these two components and pitch acceleration were stronger for the Chinese listeners compared to English listeners for stimuli that fell within the range of Tone 2 citation forms. Discriminant function analysis revealed that the Na-Pb component was more than twice as important as Pb-Nb in grouping listeners by language affiliation. In addition, a stronger stimulus-dependent, rightward asymmetry was observed for the Chinese group at the temporal, but not frontal, electrode sites. This finding may reflect selective recruitment of experience-dependent, pitch-specific mechanisms in right auditory cortex to extract more complex, time-varying pitch patterns. Taken together, these findings suggest that long-term language experience shapes early sensory level processing of pitch in the auditory cortex, and that the sensitivity of the CPR may vary depending on the relative linguistic importance of specific temporal attributes of dynamic pitch.
5
ten Oever S, Schroeder CE, Poeppel D, van Atteveldt N, Zion-Golumbic E. Rhythmicity and cross-modal temporal cues facilitate detection. Neuropsychologia 2014;63:43-50. [PMID: 25128589; DOI: 10.1016/j.neuropsychologia.2014.08.008]
Abstract
Temporal structure in the environment often has predictive value for anticipating the occurrence of forthcoming events. In this study we investigated the influence of two types of predictive temporal information on the perception of near-threshold auditory stimuli: 1) intrinsic temporal rhythmicity within an auditory stimulus stream and 2) temporally-predictive visual cues. We hypothesized that combining predictive temporal information within- and across-modality should decrease the threshold at which sounds are detected, beyond the advantage provided by each information source alone. Two experiments were conducted in which participants had to detect tones in noise. Tones were presented in either rhythmic or random sequences and were preceded by a temporally predictive visual signal in half of the trials. We show that detection intensities are lower for rhythmic (vs. random) and audiovisual (vs. auditory-only) presentation, independent from response bias, and that this effect is even greater for rhythmic audiovisual presentation. These results suggest that both types of temporal information are used to optimally process sounds that occur at expected points in time (resulting in enhanced detection), and that multiple temporal cues are combined to improve temporal estimates. Our findings underscore the flexibility and proactivity of the perceptual system which uses within- and across-modality temporal cues to anticipate upcoming events and process them optimally.
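The abstract's claim that the detection advantage is "independent from response bias" is the kind of comparison conventionally made with signal detection theory, separating sensitivity (d') from criterion (c). The paper's exact analysis is not specified here; this is a minimal standard-formula sketch, not the authors' code.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Sensitivity index d' = z(H) - z(FA): bias-free detectability."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

def criterion(hit_rate, false_alarm_rate):
    """Response bias c = -(z(H) + z(FA)) / 2; zero means unbiased responding."""
    z = NormalDist().inv_cdf
    return -(z(hit_rate) + z(false_alarm_rate)) / 2
```

With such measures, a genuine perceptual benefit of rhythmic or audiovisual presentation shows up as higher d' at the same intensity (or a lower threshold), while a mere shift in willingness to say "yes" shows up only in c.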
Affiliation(s)
- Sanne ten Oever
- Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD, Maastricht, The Netherlands
- Charles E Schroeder
- Departments of Psychiatry and Neurology, Columbia University Medical Center, New York, NY 10032, USA; The Nathan Kline Institute for Psychiatric Research, Orangeburg, NY 10962, USA
- David Poeppel
- Department of Psychology, New York University, New York, NY 10003, USA
- Nienke van Atteveldt
- Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD, Maastricht, The Netherlands; Department of Educational Neuroscience, Faculty of Psychology and Education and Institute Learn, VU University Amsterdam, The Netherlands
- Elana Zion-Golumbic
- Departments of Psychiatry and Neurology, Columbia University Medical Center, New York, NY 10032, USA; The Nathan Kline Institute for Psychiatric Research, Orangeburg, NY 10962, USA; Gonda Brain Research Center, Bar Ilan University, Ramat Gan, Israel.
6
Krishnan A, Gandour JT, Ananthakrishnan S, Vijayaraghavan V. Cortical pitch response components index stimulus onset/offset and dynamic features of pitch contours. Neuropsychologia 2014;59:1-12. [PMID: 24751993; DOI: 10.1016/j.neuropsychologia.2014.04.006]
Abstract
Voice pitch is an important information-bearing component of language that is subject to experience-dependent plasticity at both early cortical and subcortical stages of processing. We have already demonstrated that the pitch onset component (Na) of the cortical pitch response (CPR) is sensitive to flat pitch and its salience … CPR responses from Chinese listeners were elicited by three citation forms varying in pitch acceleration and duration. Results showed that the pitch onset component (Na) was invariant to changes in acceleration. In contrast, Na–Pb and Pb–Nb showed a systematic decrease in interpeak latency and amplitude with increasing pitch acceleration that followed the time course of pitch change across the three stimuli. A strong correlation with pitch acceleration was observed for these two components only – a putative index of pitch-relevant neural activity associated with the more rapidly-changing portions of the pitch contour. Pc–Nc unambiguously marks the stimulus offset … and their functional roles as related to sensory and cognitive properties of the stimulus. [Corrected]
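To make concrete what an interpeak measure such as Na–Pb or Pb–Nb is, here is a minimal sketch of how such components can be read off a sampled evoked waveform. The time windows, function names, and toy waveform are illustrative assumptions only; they are not the authors' analysis pipeline.

```python
def peak_in_window(samples, t_ms, start_ms, end_ms, polarity):
    """Return (latency_ms, amplitude) of the extremum within a time window.
    polarity=-1 finds a negative-going peak (e.g. Na), +1 a positive one (e.g. Pb)."""
    window = [(t, v) for t, v in zip(t_ms, samples) if start_ms <= t <= end_ms]
    t, v = max(window, key=lambda pair: polarity * pair[1])
    return t, v

def interpeak(samples, t_ms, neg_win, pos_win):
    """Interpeak latency and peak-to-peak amplitude between a negative component
    and a following positive component (e.g. an Na-Pb style measure)."""
    t_neg, v_neg = peak_in_window(samples, t_ms, *neg_win, polarity=-1)
    t_pos, v_pos = peak_in_window(samples, t_ms, *pos_win, polarity=+1)
    return t_pos - t_neg, v_pos - v_neg
```

In these terms, the reported result is that both the latency difference and the peak-to-peak amplitude returned by such a measure shrink systematically as the stimulus pitch acceleration increases.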
Affiliation(s)
- Jackson T Gandour
- Department of Speech Language Hearing Sciences, Purdue University, West Lafayette, IN, USA.
7
Yrttiaho S, May PJC, Tiitinen H, Alku P. Cortical encoding of aperiodic and periodic speech sounds: evidence for distinct neural populations. Neuroimage 2011;55:1252-9. [PMID: 21215807; DOI: 10.1016/j.neuroimage.2010.12.076]
Abstract
Most speech sounds are periodic due to the vibration of the vocal folds. Non-invasive studies of the human brain have revealed a periodicity-sensitive population in the auditory cortex which might contribute to the encoding of speech periodicity. Since the periodicity of natural speech varies from (almost) periodic to aperiodic, one may argue that speech aperiodicity could similarly be represented by a dedicated neuron population. In the current magnetoencephalography study, cortical sensitivity to periodicity was probed with natural periodic vowels and their aperiodic counterparts in a stimulus-specific adaptation paradigm. The effects of intervening adaptor stimuli on the N1m elicited by the probe stimuli (the actual effective stimuli) were studied under interstimulus intervals (ISIs) of 800 and 200 ms. The results indicated a periodicity-dependent release from adaptation which was observed for aperiodic probes alternating with periodic adaptors under both ISIs. Such release from adaptation can be attributed to the activation of a distinct neural population responsive to aperiodic (probe) but not to periodic (adaptor) stimuli. Thus, the current results suggest that the aperiodicity of speech sounds may be represented not only by decreased activation of the periodicity-sensitive population but, additionally, by the activation of a distinct cortical population responsive to speech aperiodicity.
Affiliation(s)
- Santeri Yrttiaho
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, FI-00076 AALTO, Finland.