1. Developmental changes in brain activation during novel grammar learning in 8-25-year-olds. Dev Cogn Neurosci 2024; 66:101347. PMID: 38277712; PMCID: PMC10839867; DOI: 10.1016/j.dcn.2024.101347
Abstract
While it is well established that grammar learning success varies with age, the cause of this developmental change is largely unknown. This study examined functional MRI activation across a broad developmental sample of 165 Dutch-speaking individuals (8-25 years) as they were implicitly learning a new grammatical system. This approach allowed us to assess the direct effects of age on grammar learning ability while exploring its neural correlates. In contrast to the alleged advantage of child language learners over adults, we found that adults outperformed children. Moreover, our behavioral data showed a sharp discontinuity in the relationship between age and grammar learning performance: there was a strong positive linear correlation between 8 and 15.4 years of age, after which age had no further effect. Neurally, our data indicate two important findings: (i) during grammar learning, adults and children activate similar brain regions, suggesting continuity in the neural networks that support initial grammar learning; and (ii) activation level is age-dependent, with children showing less activation than older participants. We suggest that these age-dependent processes may constrain developmental effects in grammar learning. The present study provides new insights into the neural basis of age-related differences in grammar learning in second language acquisition.
2. New in, old out: Does learning a new language make you forget previously learned foreign languages? Q J Exp Psychol (Hove) 2024; 77:530-550. PMID: 37246912; PMCID: PMC10880412; DOI: 10.1177/17470218231181380
Abstract
Anecdotal evidence suggests that learning a new foreign language (FL) makes you forget previously learned FLs. To seek empirical evidence for this claim, we tested whether learning words in a previously unknown L3 hampers subsequent retrieval of their L2 translation equivalents. In two experiments, Dutch native speakers with knowledge of English (L2), but not Spanish (L3), first completed an English vocabulary test, based on which 46 participant-specific, known English words were chosen. Half of those were then learned in Spanish. Finally, participants' memory for all 46 English words was probed again in a picture naming task. In Experiment 1, all tests took place within one session. In Experiment 2, we separated the English pre-test from Spanish learning by a day and manipulated the timing of the English post-test (immediately after learning vs. 1 day later). By separating the post-test from Spanish learning, we asked whether consolidation of the new Spanish words would increase their interference strength. We found significant main effects of interference in naming latencies and accuracy: participants sped up less and were less accurate when recalling English words for which they had learned Spanish translations, compared with words for which they had not. Consolidation time did not significantly affect these interference effects. Thus, learning a new language indeed comes at the cost of subsequent retrieval ability in other FLs. Such interference effects set in immediately after learning and do not need time to emerge, even when the other FL has been known for a long time.
3. IDLaS-NL - A platform for running customized studies on individual differences in Dutch language skills via the Internet. Behav Res Methods 2024; 56:2422-2436. PMID: 37749421; PMCID: PMC10991024; DOI: 10.3758/s13428-023-02156-8
Abstract
We introduce the Individual Differences in Language Skills (IDLaS-NL) web platform, which enables users to run studies on individual differences in Dutch language skills via the Internet. IDLaS-NL consists of 35 behavioral tests, previously validated in participants aged between 18 and 30 years. The platform provides an intuitive graphical interface for users to select the tests they wish to include in their research, to divide these tests into different sessions, and to determine their order. Moreover, for standardized administration, the platform provides an application (an emulated browser) in which the tests are run. Results can be retrieved by mouse click in the graphical interface and are provided as CSV file output via e-mail. Similarly, the graphical interface enables researchers to modify and delete their study configurations. IDLaS-NL is intended for researchers, clinicians, educators and in general anyone conducting fundamental research into language and general cognitive skills; it is not intended for diagnostic purposes. All platform services are free of charge. Here, we provide a description of its workings as well as instructions for using the platform. The IDLaS-NL platform can be accessed at www.mpi.nl/idlas-nl.
4. Using Psychometric Network Analysis to Examine the Components of Spoken Word Recognition. J Cogn 2024; 7:10. PMID: 38223231; PMCID: PMC10786093; DOI: 10.5334/joc.340
Abstract
Using language requires access to domain-specific linguistic representations, but also draws on domain-general cognitive skills. A key issue in current psycholinguistics is to situate linguistic processing in the network of human cognitive abilities. Here, we focused on spoken word recognition and used an individual differences approach to examine the links of scores in word recognition tasks with scores on tasks capturing effects of linguistic experience, general processing speed, working memory, and non-verbal reasoning. A total of 281 young native speakers of Dutch completed an extensive test battery assessing these cognitive skills. We used psychometric network analysis to map out the direct links between the scores, that is, the unique variance between pairs of scores, controlling for variance shared with the other scores. The analysis revealed direct links between word recognition skills and processing speed. We discuss the implications of these results and the potential of psychometric network analysis for studying language processing and its embedding in the broader cognitive system.
5. Lexically Mediated Compensation for Coarticulation Still as Elusive as a White Christmash. Cogn Sci 2023; 47:e13342. PMID: 37715483; DOI: 10.1111/cogs.13342
Abstract
Luthra, Peraza-Santiago, Beeson, Saltzman, Crinnion, and Magnuson (2021) present data from the lexically mediated compensation for coarticulation paradigm that they claim provides conclusive evidence in favor of top-down processing in speech perception. We argue here that this evidence does not support that conclusion. The findings are open to alternative explanations, and we give data in support of one of them (that there is an acoustic confound in the materials). Lexically mediated compensation for coarticulation thus remains elusive, while prior data from the paradigm instead challenge the idea that there is top-down processing in online speech recognition.
6. Tracking talker-specific cues to lexical stress: Evidence from perceptual learning. J Exp Psychol Hum Percept Perform 2023; 49:549-565. PMID: 37184938; DOI: 10.1037/xhp0001105
Abstract
When recognizing spoken words, listeners are confronted by variability in the speech signal caused by talker differences. Previous research has focused on segmental talker variability; less is known about how suprasegmental variability is handled. Here we investigated the use of perceptual learning to deal with between-talker differences in lexical stress. Two groups of participants heard Dutch minimal stress pairs (e.g., VOORnaam vs. voorNAAM, "first name" vs. "respectable") spoken by two male talkers. Group 1 heard Talker 1 use only F0 to signal stress (intensity and duration values were ambiguous), while Talker 2 used only intensity (F0 and duration were ambiguous). Group 2 heard the reverse talker-cue mappings. After training, participants were tested on words from both talkers containing conflicting stress cues ("mixed items"; e.g., one spoken by Talker 1 with F0 signaling initial stress and intensity signaling final stress). We found that listeners used previously learned information about which talker used which cue to interpret the mixed items. For example, the mixed item described above tended to be interpreted as having initial stress by Group 1 but as having final stress by Group 2. This demonstrates that listeners learn how individual talkers signal stress and use that knowledge in spoken-word recognition. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
7. Neural tracking of speech envelope does not unequivocally reflect intelligibility. Neuroimage 2023; 272:120040. PMID: 36935084; DOI: 10.1016/j.neuroimage.2023.120040
Abstract
During listening, brain activity tracks the rhythmic structures of speech signals. Here, we directly dissociated the contribution of neural envelope tracking in the processing of speech acoustic cues from that related to linguistic processing. We examined the neural changes associated with the comprehension of Noise-Vocoded (NV) speech using magnetoencephalography (MEG). Participants listened to NV sentences in a 3-phase training paradigm: (1) pre-training, where NV stimuli were barely comprehended, (2) training with exposure to the original clear version of each speech stimulus, and (3) post-training, where the same stimuli gained intelligibility from the training phase. Using this paradigm, we tested whether the neural response to a speech signal was modulated by its intelligibility without any change in its acoustic structure. To test the influence of spectral degradation on neural envelope tracking independently of training, participants listened to two types of NV sentences (4-band and 2-band NV speech), but were only trained to understand 4-band NV speech. Significant changes in neural tracking were observed in the delta range in relation to the acoustic degradation of speech. However, we failed to find a direct effect of intelligibility on the neural tracking of the speech envelope in both theta and delta ranges, in both auditory regions-of-interest and whole-brain sensor-space analyses. This suggests that acoustics greatly influence the neural tracking response to the speech envelope, and that caution needs to be taken when choosing the control signals for speech-brain tracking analyses, considering that a slight change in acoustic parameters can have strong effects on the neural tracking response.
8. Study protocol: a comprehensive multi-method neuroimaging approach to disentangle developmental effects and individual differences in second language learning. BMC Psychol 2022; 10:169. PMID: 35804430; PMCID: PMC9270835; DOI: 10.1186/s40359-022-00873-x
Abstract
Background: While it is well established that second language (L2) learning success changes with age and across individuals, the underlying neural mechanisms responsible for this developmental shift and these individual differences are largely unknown. We will study the behavioral and neural factors that subserve new grammar and word learning in a large cross-sectional developmental sample. This study falls under the NWO (Nederlandse Organisatie voor Wetenschappelijk Onderzoek [Dutch Research Council]) Language in Interaction consortium (website: https://www.languageininteraction.nl/).

Methods: We will sample 360 healthy individuals across a broad age range between 8 and 25 years. In this paper, we describe the study design and protocol, which involves multiple study visits covering a comprehensive behavioral battery and extensive magnetic resonance imaging (MRI) protocols. On the basis of these measures, we will create behavioral and neural fingerprints that capture age-based and individual variability in new language learning. The behavioral fingerprint will be based on first and second language proficiency, memory systems, and executive functioning. We will map the neural fingerprint for each participant using the following MRI modalities: T1-weighted, diffusion-weighted, resting-state functional MRI, and multiple functional-MRI paradigms. With respect to the functional MRI measures, half of the sample will learn grammatical features and half will learn words of a new language. Combining all individual fingerprints allows us to explore the effects of neural maturation on grammar and word learning.

Discussion: This will be one of the largest neuroimaging studies to date to investigate the developmental shift in L2 learning from preadolescence to adulthood. Our comprehensive approach of combining behavioral and neuroimaging data will contribute to the understanding of the mechanisms influencing this developmental shift and individual differences in new language learning. We aim to answer: (i) do these fingerprints differ according to age, and can they explain the age-related differences observed in new language learning? And (ii) which aspects of the behavioral and neural fingerprints explain individual differences (across and within ages) in grammar and word learning? The results of this study provide a unique opportunity to understand how the development of brain structure and function influences new language learning success.

Supplementary Information: The online version contains supplementary material available at 10.1186/s40359-022-00873-x.
9. Correction: Protocol of the Healthy Brain Study: An accessible resource for understanding the human brain and how it dynamically and individually operates in its bio-social context. PLoS One 2022; 17:e0267071. PMID: 35404975; PMCID: PMC9000123; DOI: 10.1371/journal.pone.0267071
10. The differential roles of lexical and sublexical processing during spoken-word recognition in clear and in noise. Cortex 2022; 151:70-88. DOI: 10.1016/j.cortex.2022.02.011
11. Protocol of the Healthy Brain Study: An accessible resource for understanding the human brain and how it dynamically and individually operates in its bio-social context. PLoS One 2021; 16:e0260952. PMID: 34965252; PMCID: PMC8716054; DOI: 10.1371/journal.pone.0260952
Abstract
The endeavor to understand the human brain has seen more progress in the last few decades than in the previous two millennia. Still, our understanding of how the human brain relates to behavior in the real world and how this link is modulated by biological, social, and environmental factors is limited. To address this, we designed the Healthy Brain Study (HBS), an interdisciplinary, longitudinal, cohort study based on multidimensional, dynamic assessments in both the laboratory and the real world. Here, we describe the rationale and design of the currently ongoing HBS. The HBS is examining a population-based sample of 1,000 healthy participants (age 30–39) who are thoroughly studied across an entire year. Data are collected through cognitive, affective, behavioral, and physiological testing, neuroimaging, bio-sampling, questionnaires, ecological momentary assessment, and real-world assessments using wearable devices. These data will become an accessible resource for the scientific community, enabling the next step in understanding the human brain and how it dynamically and individually operates in its bio-social context. An access procedure to the collected data and bio-samples is in place and published on https://www.healthybrainstudy.nl/en/data-and-methods/access. Trial registration: https://www.trialregister.nl/trial/7955.
12.
Abstract
This resource contains data from 112 Dutch adults (18-29 years of age) who completed the Individual Differences in Language Skills test battery, which included 33 behavioural tests assessing language skills and domain-general cognitive skills likely involved in language tasks. The battery included tests measuring linguistic experience (e.g. vocabulary size, prescriptive grammar knowledge), general cognitive skills (e.g. working memory, non-verbal intelligence) and linguistic processing skills (word production/comprehension, sentence production/comprehension). Testing was done in a lab-based setting, resulting in high-quality data due to tight monitoring of the experimental protocol and the use of software and hardware optimized for behavioural testing. Each participant completed the battery twice (i.e., two test days of four hours each). We provide the raw data from all tests on both days as well as pre-processed data that were used to calculate various reliability measures (including internal consistency and test-retest reliability). We encourage other researchers to use this resource for conducting exploratory and/or targeted analyses of individual differences in language and general cognitive skills.
13. Perception of English phonetic contrasts by Dutch children: How bilingual are early-English learners? PLoS One 2020; 15:e0229902. PMID: 32160213; PMCID: PMC7065789; DOI: 10.1371/journal.pone.0229902
Abstract
The aim of this study was to investigate whether early-English education benefits the perception of English phonetic contrasts that are known to be perceptually confusable for Dutch native speakers, comparing Dutch pupils who were enrolled in an early-English programme at school from the age of four with pupils in a mainstream programme with English instruction from the age of 11, and English-Dutch early bilingual children. Children were 4-5-year-olds (start of primary school), 8-9-year-olds, or 11-12-year-olds (end of primary school). Children were tested on four contrasts that varied in difficulty: /b/-/s/ (easy), /k/-/ɡ/ (intermediate), /f/-/θ/ (difficult), /ɛ/-/æ/ (very difficult). Bilingual children outperformed the two other groups on all contrasts except /b/-/s/. Early-English pupils did not outperform mainstream pupils on any of the contrasts. This shows that early-English education as it is currently implemented is not beneficial for pupils' perception of non-native contrasts.
14. Bridging the Gap Between Second Language Acquisition Research and Memory Science: The Case of Foreign Language Attrition. Front Hum Neurosci 2019; 13:397. PMID: 31798433; PMCID: PMC6866255; DOI: 10.3389/fnhum.2019.00397
Abstract
The field of second language acquisition (SLA) is by nature of its subject a highly interdisciplinary area of research. Learning a (foreign) language, for example, involves encoding new words, consolidating and committing them to long-term memory, and later retrieving them. All of these processes have direct parallels in the domain of human memory and have been thoroughly studied by researchers in that field. Yet, despite these clear links, the two fields have largely developed in parallel and in isolation from one another. The present article aims to promote more cross-talk between SLA and memory science. We focus on foreign language (FL) attrition as an example of a research topic in SLA where the parallels with memory science are especially apparent. We discuss evidence that suggests that competition between languages is one of the mechanisms of FL attrition, paralleling the interference process thought to underlie forgetting in other domains of human memory. Backed up by concrete suggestions, we advocate the use of paradigms from the memory literature to study these interference effects in the language domain. In doing so, we hope to facilitate future cross-talk between the two fields and to further our understanding of FL attrition as a memory phenomenon.
15. Shared lexical access processes in speaking and listening? An individual differences study. J Exp Psychol Learn Mem Cogn 2019; 46:1048-1063. PMID: 31599623; DOI: 10.1037/xlm0000768
Abstract
Lexical access is a core component of word processing. In order to produce or comprehend a word, language users must access word forms in their mental lexicon. However, despite its involvement in both tasks, previous research has often studied lexical access in either production or comprehension alone. Therefore, it is unknown to what extent lexical access processes are shared across both tasks. Picture naming and auditory lexical decision are considered good tools for studying lexical access, and both are speeded tasks. Given these commonalities, another open question concerns the involvement of general cognitive abilities (e.g., processing speed) in both linguistic tasks. In the present study, we addressed these questions. We tested a large group of young adults enrolled in academic and vocational courses. Participants completed picture-naming and auditory lexical-decision tasks as well as a battery of tests assessing nonverbal processing speed, vocabulary, and nonverbal intelligence. Our results suggest that the lexical access processes involved in picture naming and lexical decision are related, but less closely than one might have thought. Moreover, reaction times in picture naming and lexical decision depended at least as much on general processing speed as on domain-specific linguistic processes (i.e., lexical access processes). (PsycInfo Database Record (c) 2020 APA, all rights reserved).
16. Combined predictive effects of sentential and visual constraints in early audiovisual speech processing. Sci Rep 2019; 9:7870. PMID: 31133646; PMCID: PMC6536519; DOI: 10.1038/s41598-019-44311-2
Abstract
In language comprehension, a variety of contextual cues act in unison to render upcoming words more or less predictable. As a sentence unfolds, we use prior context (sentential constraints) to predict what the next words might be. Additionally, in a conversation, we can predict upcoming sounds through observing the mouth movements of a speaker (visual constraints). In electrophysiological studies, effects of visual constraints have typically been observed early in language processing, while effects of sentential constraints have typically been observed later. We hypothesized that the visual and the sentential constraints might feed into the same predictive process such that effects of sentential constraints might also be detectable early in language processing through modulations of the early effects of visual salience. We presented participants with audiovisual speech while recording their brain activity with magnetoencephalography. Participants saw videos of a person saying sentences where the last word was either sententially constrained or not, and began with a salient or non-salient mouth movement. We found that sentential constraints indeed exerted an early (N1) influence on language processing. Sentential modulations of the N1 visual predictability effect were visible in brain areas associated with semantic processing, and were differently expressed in the two hemispheres. In the left hemisphere, visual and sentential constraints jointly suppressed the auditory evoked field, while the right hemisphere was sensitive to visual constraints only in the absence of strong sentential constraints. These results suggest that sentential and visual constraints can jointly influence even very early stages of audiovisual speech comprehension.
17.
Abstract
Previous research on the effect of perturbed auditory feedback in speech production has focused on two types of responses. In the short term, speakers generate compensatory motor commands in response to unexpected perturbations. In the longer term, speakers adapt feedforward motor programmes in response to feedback perturbations, to avoid future errors. The current study investigated the relation between these two types of responses to altered auditory feedback. Specifically, it was hypothesised that consistency in previous feedback perturbations would influence whether speakers adapt their feedforward motor programmes. In an altered auditory feedback paradigm, formant perturbations were applied either across all trials (the consistent condition) or only to some trials, whereas the others remained unperturbed (the inconsistent condition). The results showed that speakers' responses were affected by feedback consistency, with stronger speech changes in the consistent condition compared with the inconsistent condition. Current models of speech-motor control can explain this consistency effect. However, the data also suggest that compensation and adaptation are distinct processes, a distinction that is not captured by all current models.
18.
Abstract
Learning new words entails, inter alia, encoding of novel sound patterns and transferring those patterns from short-term to long-term memory. We report a series of 5 experiments that investigated whether the memory systems engaged in word learning are specialized for speech and whether utilization of these systems results in a benefit for word learning. Sine-wave synthesis (SWS) was applied to spoken nonwords, and listeners were or were not informed (through instruction and familiarization) that the SWS stimuli were derived from actual utterances. This allowed us to manipulate whether listeners would process sound sequences as speech or as nonspeech. In a sound-picture association learning task, listeners who processed the SWS stimuli as speech consistently learned faster and remembered more associations than listeners who processed the same stimuli as nonspeech. The advantage of listening in "speech mode" was stable over the course of 7 days. These results provide causal evidence that access to a specialized, phonological short-term memory system is important for word learning. More generally, this study supports the notion that subsystems of auditory short-term memory are specialized for processing different types of acoustic information. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
19. Success in learning similar-sounding words predicts vocabulary depth above and beyond vocabulary breadth. J Child Lang 2019; 46:184-197. PMID: 30236168; DOI: 10.1017/s0305000918000338
Abstract
In lexical development, the specificity of phonological representations is important. The ability to build phonologically specific lexical representations predicts the number of words a child knows (vocabulary breadth), but it is not clear if it also fosters how well words are known (vocabulary depth). Sixty-six children were studied in kindergarten (age 5;7) and first grade (age 6;8). The predictive value of the ability to learn phonologically similar new words, of phoneme discrimination ability, and of phonological awareness for vocabulary breadth and depth was assessed using hierarchical regression. Word learning explained unique variance in kindergarten and first-grade vocabulary depth, over and above the other phonological factors. It did not explain unique variance in vocabulary breadth. Furthermore, even after controlling for kindergarten vocabulary breadth, kindergarten word learning still explained unique variance in first-grade vocabulary depth. Skill in learning phonologically similar words appears to predict knowledge children have about what words mean.
20. How much does orthography influence the processing of reduced word forms? Evidence from novel-word learning about French schwa deletion. Q J Exp Psychol (Hove) 2018; 71:2378-2394. DOI: 10.1177/1747021817741859
Abstract
This study examines the influence of orthography on the processing of reduced word forms. For this purpose, we compared the impact of phonological variation with the impact of spelling-sound consistency on the processing of words that may be produced with or without the vowel schwa. Participants learnt novel French words in which the vowel schwa was present or absent in the first syllable. In Experiment 1, the words were consistently produced without schwa or produced in a variable manner (i.e., sometimes produced with and sometimes produced without schwa). In Experiment 2, words were always produced in a consistent manner, but an orthographic exposure phase was included in which words that were produced without schwa were either spelled with or without the letter <e>. Results from naming and eye-tracking tasks suggest that both phonological variation and spelling-sound consistency influence the processing of spoken novel words. However, the influence of phonological variation outweighs the effect of spelling-sound consistency. Our findings therefore suggest that the influence of orthography on the processing of reduced word forms is relatively small.
|
21
|
Self-monitoring in the cerebral cortex: Neural responses to small pitch shifts in auditory feedback during speech production. Neuroimage 2018; 179:326-336. [DOI: 10.1016/j.neuroimage.2018.06.061] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2018] [Revised: 06/18/2018] [Accepted: 06/20/2018] [Indexed: 11/30/2022] Open
|
22
|
Commentary on "Interaction in Spoken Word Recognition Models". Front Psychol 2018; 9:1568. [PMID: 30233453 PMCID: PMC6129619 DOI: 10.3389/fpsyg.2018.01568] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2018] [Accepted: 08/07/2018] [Indexed: 11/26/2022] Open
|
23
|
Perception and production in interaction during non-native speech category learning. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2018; 144:92. [PMID: 30075662 DOI: 10.1121/1.5044415] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Establishing non-native phoneme categories can be a notoriously difficult endeavour, in both speech perception and speech production. This study asks how these two domains interact in the course of this learning process. It investigates the effect of perceptual learning and related production practice of a challenging non-native category on the perception and/or production of that category. A four-day perceptual training protocol on the British English /æ/-/ɛ/ vowel contrast was combined with either related or unrelated production practice. After feedback on perceptual categorisation of the contrast, native Dutch participants in the related production group (N = 19) pronounced the trial's correct answer, while participants in the unrelated production group (N = 19) pronounced similar but phonologically unrelated words. Comparison of pre- and post-tests showed significant improvement over the course of training in both perception and production, but no differences between the groups were found. The lack of an effect of production practice is discussed in the light of previous, competing results and models of second-language speech perception and production. This study confirms that, even in the context of related production practice, perceptual training boosts production learning.
|
24
|
Language balance and switching ability in children acquiring English as a second language. J Exp Child Psychol 2018; 173:168-186. [PMID: 29730573 DOI: 10.1016/j.jecp.2018.03.019] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2017] [Revised: 02/09/2018] [Accepted: 03/31/2018] [Indexed: 11/26/2022]
Abstract
This study investigated whether relative lexical proficiency in Dutch and English in child second language (L2) learners is related to executive functioning. Participants were Dutch primary school pupils of three different age groups (4-5, 8-9, and 11-12 years) who either were enrolled in an early-English schooling program or were age-matched controls not on that early-English program. Participants performed tasks that measured switching, inhibition, and working memory. Early-English program pupils had greater knowledge of English vocabulary and more balanced Dutch-English lexicons. In both groups, lexical balance, a ratio measure obtained by dividing vocabulary scores in English by those in Dutch, was related to switching but not to inhibition or working memory performance. These results show that for children who are learning an L2 in an instructional setting, and for whom managing two languages is not yet an automatized process, language balance may be more important than L2 proficiency in influencing the relation between childhood bilingualism and switching abilities.
|
25
|
Theta-band Oscillations in the Middle Temporal Gyrus Reflect Novel Word Consolidation. J Cogn Neurosci 2018; 30:621-633. [PMID: 29393716 DOI: 10.1162/jocn_a_01240] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Like many other types of memory formation, novel word learning benefits from an offline consolidation period after the initial encoding phase. A previous EEG study has shown that retrieval of novel words elicited more word-like induced electrophysiological brain activity in the theta band after consolidation [Bakker, I., Takashima, A., van Hell, J. G., Janzen, G., & McQueen, J. M. Changes in theta and beta oscillations as signatures of novel word consolidation. Journal of Cognitive Neuroscience, 27, 1286-1297, 2015]. This suggests that theta-band oscillations play a role in lexicalization, but it has not been demonstrated that this effect is directly caused by the formation of lexical representations. This study used magnetoencephalography to localize the theta consolidation effect to the left posterior middle temporal gyrus (pMTG), a region known to be involved in lexical storage. Both untrained novel words and words learned immediately before test elicited lower theta power during retrieval than existing words in this region. After a 24-hr consolidation period, the difference between novel and existing words decreased significantly, most strongly in the left pMTG. The magnitude of the decrease after consolidation correlated with an increase in behavioral competition effects between novel words and existing words with similar spelling, reflecting functional integration into the mental lexicon. These results thus provide new evidence that consolidation aids the development of lexical representations mediated by the left pMTG. Theta synchronization may enable lexical access by facilitating the simultaneous activation of distributed semantic, phonological, and orthographic representations that are bound together in the pMTG.
|
26
|
Abstract
Participants made visual lexical decisions to upper-case words and nonwords, and then categorized an ambiguous N–H letter continuum. The lexical decision phase included different exposure conditions: Some participants saw an ambiguous letter “?”, midway between N and H, in N-biased lexical contexts (e.g., REIG?), plus words with unambiguous H (e.g., WEIGH); others saw the reverse (e.g., WEIG?, REIGN). The first group categorized more of the test continuum as N than did the second group. Control groups, who saw “?” in nonword contexts (e.g., SMIG?), plus either of the unambiguous word sets (e.g., WEIGH or REIGN), showed no such subsequent effects. Perceptual learning about ambiguous letters therefore appears to be based on lexical knowledge, just as in an analogous speech experiment (Norris, McQueen, & Cutler, 2003) which showed similar lexical influence in learning about ambiguous phonemes. We argue that lexically guided learning is an efficient general strategy available for exploitation by different specific perceptual tasks.
|
27
|
Abstract
Eye movements of Dutch participants were tracked as they looked at arrays of four words on a computer screen and followed spoken instructions (e.g., “Klik op het woord buffel”: Click on the word buffalo). The arrays included the target (e.g., buffel), a phonological competitor (e.g., buffer), and two unrelated distractors. Targets were monosyllabic or bisyllabic, and competitors mismatched targets only on either their onset or offset phoneme and only by one distinctive feature. Participants looked at competitors more than at distractors, but this effect was much stronger for offset-mismatch than onset-mismatch competitors. Fixations to competitors started to decrease as soon as phonetic evidence disfavouring those competitors could influence behaviour. These results confirm that listeners continuously update their interpretation of words as the evidence in the speech signal unfolds and hence establish the viability of the methodology of using eye movements to arrays of printed words to track spoken-word recognition.
|
28
|
Individual variability as a window on production-perception interactions in speech motor control. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2017; 142:2007. [PMID: 29092613 DOI: 10.1121/1.5006899] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
An important part of understanding speech motor control consists of capturing the interaction between speech production and speech perception. This study tests a prediction of theoretical frameworks that have tried to account for these interactions: If speech production targets are specified in auditory terms, individuals with better auditory acuity should have more precise speech targets, evidenced by decreased within-phoneme variability and increased between-phoneme distance. A study was carried out consisting of perception and production tasks in counterbalanced order. Auditory acuity was assessed using an adaptive speech discrimination task, while production variability was determined using a pseudo-word reading task. Analyses of the production data were carried out to quantify average within-phoneme variability, as well as average between-phoneme contrasts. Results show that individuals not only vary in their production and perceptual abilities, but that better discriminators have more distinctive vowel production targets (that is, targets with less within-phoneme variability and greater between-phoneme distances), confirming the initial hypothesis. This association between speech production and perception did not depend on local phoneme density in vowel space. This study suggests that better auditory acuity leads to more precise speech production targets, which may be a consequence of auditory feedback affecting speech production over time.
|
29
|
Speaking Style Influences the Brain's Electrophysiological Response to Grammatical Errors in Speech Comprehension. J Cogn Neurosci 2017; 29:1132-1146. [DOI: 10.1162/jocn_a_01095] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
This electrophysiological study asked whether the brain processes grammatical gender violations in casual speech differently than in careful speech. Native speakers of Dutch were presented with utterances that contained adjective–noun pairs in which the adjective was either correctly inflected with a word-final schwa (e.g., een spannende roman, “a suspenseful novel”) or incorrectly uninflected without that schwa (een spannend roman). Consistent with previous findings, the uninflected adjectives elicited an electrical brain response sensitive to syntactic violations when the talker was speaking in a careful manner. When the talker was speaking in a casual manner, this response was absent. A control condition showed electrophysiological responses for carefully as well as casually produced utterances with semantic anomalies, showing that listeners were able to understand the content of both types of utterance. The results suggest that listeners take information about the speaking style of a talker into account when processing the acoustic–phonetic information provided by the speech signal. Absent schwas in casual speech are effectively not grammatical gender violations. These changes in syntactic processing are evidence of contextually driven neural flexibility.
|
30
|
Mapping the Speech Code: Cortical Responses Linking the Perception and Production of Vowels. Front Hum Neurosci 2017; 11:161. [PMID: 28439232 PMCID: PMC5383703 DOI: 10.3389/fnhum.2017.00161] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2016] [Accepted: 03/17/2017] [Indexed: 11/13/2022] Open
Abstract
The acoustic realization of speech is constrained by the physical mechanisms by which it is produced. Yet for speech perception, the degree to which listeners utilize experience derived from speech production has long been debated. In the present study, we examined how sensorimotor adaptation during production may affect perception, and how this relationship may be reflected in early vs. late electrophysiological responses. Participants first performed a baseline speech production task, followed by a vowel categorization task during which EEG responses were recorded. In a subsequent speech production task, half the participants received shifted auditory feedback, leading most to alter their articulations. This was followed by a second, post-training vowel categorization task. We compared changes in vowel production to both behavioral and electrophysiological changes in vowel perception. No differences in phonetic categorization were observed between groups receiving altered or unaltered feedback. However, exploratory analyses revealed correlations between vocal motor behavior and phonetic categorization. EEG analyses revealed correlations between vocal motor behavior and cortical responses in both early and late time windows. These results suggest that participants' recent production behavior influenced subsequent vowel perception. We suggest that the change in perception can be best characterized as a mapping of acoustics onto articulation.
|
31
|
Interaction between episodic and semantic memory networks in the acquisition and consolidation of novel spoken words. BRAIN AND LANGUAGE 2017; 167:44-60. [PMID: 27291335 DOI: 10.1016/j.bandl.2016.05.009] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/28/2015] [Revised: 04/08/2016] [Accepted: 05/23/2016] [Indexed: 06/06/2023]
Abstract
When a novel word is learned, its memory representation is thought to undergo a process of consolidation and integration. In this study, we tested whether the neural representations of novel words change as a function of consolidation by observing brain activation patterns just after learning and again after a delay of one week. Words learned with meanings were remembered better than those learned without meanings. Both episodic (hippocampus-dependent) and semantic (dependent on distributed neocortical areas) memory systems were utilised during recognition of the novel words. The extent to which the two systems were involved changed as a function of time and the amount of associated information, with more involvement of both systems for the meaningful words than for the form-only words after the one-week delay. These results suggest that the reason the meaningful words were remembered better is that their retrieval can benefit more from these two complementary memory systems.
|
32
|
Sensorimotor adaptation affects perceptual compensation for coarticulation. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2017; 141:2693. [PMID: 28464681 PMCID: PMC5848838 DOI: 10.1121/1.4979791] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/26/2016] [Revised: 03/22/2017] [Accepted: 03/23/2017] [Indexed: 05/21/2023]
Abstract
A given speech sound will be realized differently depending on the context in which it is produced. Listeners have been found to compensate perceptually for these coarticulatory effects, yet it is unclear to what extent this compensation depends on actual production experience. This study investigates whether changes in motor-to-sound mappings induced by adaptation to altered auditory feedback can affect perceptual compensation for coarticulation. Specifically, it tests whether altering how the vowel [i] is produced affects the categorization of a stimulus continuum between an alveolar and a palatal fricative whose interpretation is dependent on vocalic context. Participants could be sorted into three groups based on whether they tended to oppose the direction of the shifted auditory feedback, to follow it, or a mixture of the two; these articulatory responses, not the shifted feedback the participants heard, correlated with changes in perception. These results indicate that sensorimotor adaptation to altered feedback can affect the perception of unaltered yet coarticulatorily dependent speech sounds, suggesting a modulatory role of sensorimotor experience on speech perception.
|
33
|
Pure linguistic interference during comprehension of competing speech signals. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2017; 141:EL249. [PMID: 28372048 DOI: 10.1121/1.4977590] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/04/2016] [Revised: 01/12/2017] [Accepted: 02/14/2017] [Indexed: 06/07/2023]
Abstract
Speech-in-speech perception can be challenging because the processing of competing acoustic and linguistic information leads to informational masking. Here, a method is proposed to isolate the linguistic component of informational masking while keeping the distractor's acoustic information unchanged. Participants performed a dichotic listening cocktail-party task before and after training on 4-band noise-vocoded sentences that became intelligible through the training. Distracting noise-vocoded speech interfered more with target speech comprehension after training (i.e., when intelligible) than before training (i.e., when unintelligible) at -3 dB SNR. These findings confirm that linguistic and acoustic information have distinct masking effects during speech-in-speech comprehension.
|
34
|
Beyond the usual cognitive suspects: The importance of speechreading and audiovisual temporal sensitivity in reading ability. LEARNING AND INDIVIDUAL DIFFERENCES 2017. [DOI: 10.1016/j.lindif.2017.01.003] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|
35
|
A General Audiovisual Temporal Processing Deficit in Adult Readers With Dyslexia. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2017; 60:144-158. [PMID: 28056152 DOI: 10.1044/2016_jslhr-h-15-0375] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/29/2015] [Accepted: 05/26/2016] [Indexed: 05/14/2023]
Abstract
PURPOSE Because reading is an audiovisual process, reading impairment may reflect an audiovisual processing deficit. The aim of the present study was to test the existence and scope of such a deficit in adult readers with dyslexia. METHOD We tested 39 typical readers and 51 adult readers with dyslexia on their sensitivity to the simultaneity of audiovisual speech and nonspeech stimuli, their time window of audiovisual integration for speech (using incongruent /aCa/ syllables), and their audiovisual perception of phonetic categories. RESULTS Adult readers with dyslexia showed less sensitivity to audiovisual simultaneity than typical readers for both speech and nonspeech events. We found no differences between readers with dyslexia and typical readers in the temporal window of integration for audiovisual speech or in the audiovisual perception of phonetic categories. CONCLUSIONS The results suggest an audiovisual temporal deficit in dyslexia that is not specific to speech-related events. But the differences found for audiovisual temporal sensitivity did not translate into a deficit in audiovisual speech perception. Hence, there seems to be a hiatus between simultaneity judgment and perception, suggesting a multisensory system that uses different mechanisms across tasks. Alternatively, it is possible that the audiovisual deficit in dyslexia is only observable when explicit judgments about audiovisual simultaneity are required.
|
36
|
Abstract
An eye-tracking study examined the involvement of prosodic knowledge—specifically, the knowledge that monosyllabic words tend to have longer durations than the first syllables of polysyllabic words—in the recognition of newly learned words. Participants learned new spoken words (by associating them to novel shapes): bisyllables and onset-embedded monosyllabic competitors (e.g., baptoe and bap). In the learning phase, the duration of the ambiguous sequence (e.g., bap) was held constant. In the test phase, its duration was longer than, shorter than, or equal to its learning-phase duration. Listeners' fixations indicated that short syllables tended to be interpreted as the first syllables of the bisyllables, whereas long syllables generated more monosyllabic-word interpretations. Recognition of newly acquired words is influenced by prior prosodic knowledge and is therefore not determined solely on the basis of stored episodes of those words.
|
37
|
Prediction, Bayesian inference and feedback in speech recognition. LANGUAGE, COGNITION AND NEUROSCIENCE 2016; 31:4-18. [PMID: 26740960 PMCID: PMC4685608 DOI: 10.1080/23273798.2015.1081703] [Citation(s) in RCA: 60] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/18/2015] [Accepted: 08/05/2015] [Indexed: 05/19/2023]
Abstract
Speech perception involves prediction, but how is that prediction implemented? In cognitive models prediction has often been taken to imply that there is feedback of activation from lexical to pre-lexical processes as implemented in interactive-activation models (IAMs). We show that simple activation feedback does not actually improve speech recognition. However, other forms of feedback can be beneficial. In particular, feedback can enable the listener to adapt to changing input, and can potentially help the listener to recognise unusual input, or recognise speech in the presence of competing sounds. The common feature of these helpful forms of feedback is that they are all ways of optimising the performance of speech recognition using Bayesian inference. That is, listeners make predictions about speech because speech recognition is optimal in the sense captured in Bayesian models.
|
38
|
Effects of Early Bilingual Experience with a Tone and a Non-Tone Language on Speech-Music Integration. PLoS One 2015; 10:e0144225. [PMID: 26659377 PMCID: PMC4684237 DOI: 10.1371/journal.pone.0144225] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2015] [Accepted: 11/16/2015] [Indexed: 11/18/2022] Open
Abstract
We investigated music and language processing in a group of early bilinguals who spoke a tone language and a non-tone language (Cantonese and Dutch). We assessed online speech-music processing interactions, that is, interactions that occur when speech and music are processed simultaneously in songs, with a speeded classification task. In this task, participants judged sung pseudowords either musically (based on the direction of the musical interval) or phonologically (based on the identity of the sung vowel). We also assessed longer-term effects of linguistic experience on musical ability, that is, the influence of extensive prior experience with language when processing music. These effects were assessed with a task in which participants had to learn to identify musical intervals and with four pitch-perception tasks. Our hypothesis was that due to their experience in two different languages using lexical versus intonational tone, the early Cantonese-Dutch bilinguals would outperform the Dutch control participants. In online processing, the Cantonese-Dutch bilinguals processed speech and music more holistically than controls. This effect seems to be driven by experience with a tone language, in which integration of segmental and pitch information is fundamental. Regarding longer-term effects of linguistic experience, we found no evidence for a bilingual advantage in either the music-interval learning task or the pitch-perception tasks. Together, these results suggest that being a Cantonese-Dutch bilingual does not have any measurable longer-term effects on pitch and music processing, but does have consequences for how speech and music are processed jointly.
|
39
|
Tracking lexical consolidation with ERPs: Lexical and semantic-priming effects on N400 and LPC responses to newly-learned words. Neuropsychologia 2015; 79:33-41. [DOI: 10.1016/j.neuropsychologia.2015.10.020] [Citation(s) in RCA: 55] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2014] [Revised: 07/29/2015] [Accepted: 10/12/2015] [Indexed: 11/30/2022]
|
40
|
Do We Perceive Others Better than Ourselves? A Perceptual Benefit for Noise-Vocoded Speech Produced by an Average Speaker. PLoS One 2015; 10:e0129731. [PMID: 26134279 PMCID: PMC4489924 DOI: 10.1371/journal.pone.0129731] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2014] [Accepted: 05/12/2015] [Indexed: 11/19/2022] Open
Abstract
In different tasks involving action perception, performance has been found to be facilitated when the presented stimuli were produced by the participants themselves rather than by another participant. These results suggest that the same mental representations are accessed during both production and perception. However, with regard to spoken word perception, evidence also suggests that listeners’ representations for speech reflect the input from their surrounding linguistic community rather than their own idiosyncratic productions. Furthermore, speech perception is heavily influenced by indexical cues that may lead listeners to frame their interpretations of incoming speech signals with regard to speaker identity. In order to determine whether word recognition evinces similar self-advantages as found in action perception, it was necessary to eliminate indexical cues from the speech signal. We therefore asked participants to identify noise-vocoded versions of Dutch words that were based on either their own recordings or those of a statistically average speaker. The majority of participants were more accurate for the average speaker than for themselves, even after taking into account differences in intelligibility. These results suggest that the speech representations accessed during perception of noise-vocoded speech are more reflective of the input of the speech community, and hence that speech perception is not necessarily based on representations of one’s own speech.
|
41
|
Changes in Theta and Beta Oscillations as Signatures of Novel Word Consolidation. J Cogn Neurosci 2015; 27:1286-97. [DOI: 10.1162/jocn_a_00801] [Citation(s) in RCA: 47] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The complementary learning systems account of word learning states that novel words, like other types of memories, undergo an offline consolidation process during which they are gradually integrated into the neocortical memory network. A fundamental change in the neural representation of a novel word should therefore occur in the hours after learning. The present EEG study tested this hypothesis by investigating whether novel words learned before a 24-hr consolidation period elicited more word-like oscillatory responses than novel words learned immediately before testing. In line with previous studies indicating that theta synchronization reflects lexical access, unfamiliar novel words elicited lower power in the theta band (4–8 Hz) than existing words. Recently learned words still showed a marginally lower theta increase than existing words, but theta responses to novel words that had been acquired 24 hr earlier were indistinguishable from responses to existing words. Consistent with evidence that beta desynchronization (16–21 Hz) is related to lexical-semantic processing, we found that both unfamiliar and recently learned novel words elicited less beta desynchronization than existing words. In contrast, no difference was found between novel words learned 24 hr earlier and existing words. These data therefore suggest that an offline consolidation period enables novel words to acquire lexically integrated, word-like neural representations.
|
42
|
Repetition Suppression in the Left Inferior Frontal Gyrus Predicts Tone Learning Performance. Cereb Cortex 2015; 26:2728-42. [PMID: 26113631 DOI: 10.1093/cercor/bhv126] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
Do individuals differ in how efficiently they process non-native sounds? To what extent do these differences relate to individual variability in sound-learning aptitude? We addressed these questions by assessing the sound-learning abilities of Dutch native speakers as they were trained on non-native tone contrasts. We used fMRI repetition suppression to the non-native tones to measure participants' neuronal processing efficiency before and after training. Although all participants improved in tone identification with training, there was large individual variability in learning performance. A repetition suppression effect to tone was found in the bilateral inferior frontal gyri (IFGs) before training. No whole-brain effect was found after training; a region-of-interest analysis, however, showed that, after training, repetition suppression to tone in the left IFG correlated positively with learning. That is, individuals who were better at learning the non-native tones showed larger repetition suppression in this area. Crucially, this was true even before training. These findings add to existing evidence that the left IFG plays an important role in sound learning and indicate that individual differences in learning aptitude stem from differences in the neuronal efficiency with which non-native sounds are processed.
|
43
|
Syntactic predictability in the recognition of carefully and casually produced speech. J Exp Psychol Learn Mem Cogn 2015; 41:1684-702. [PMID: 26052788 DOI: 10.1037/a0039326] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The present study investigated whether the recognition of spoken words is influenced by how predictable they are given their syntactic context and whether listeners assign more weight to syntactic predictability when acoustic-phonetic information is less reliable. Syntactic predictability was manipulated by varying the word order of past participles and auxiliary verbs in Dutch subordinate clauses. Acoustic-phonetic reliability was manipulated by presenting sentences either in a careful or a casual speaking style. In 3 eye-tracking experiments, participants recognized past participles more quickly when they occurred after their associated auxiliary verbs than when they preceded them. Response measures tapping into later stages of processing suggested that this effect was stronger for casually than for carefully produced sentences. These findings provide further evidence that syntactic predictability can influence word recognition and that this type of information is particularly useful for coping with acoustic-phonetic reductions in conversational speech. We conclude that listeners dynamically adapt to the different sources of linguistic information available to them.
|
44
|
Abstract
In three cross-modal priming experiments, we asked whether adaptation to a foreign-accented speaker is automatic and whether adaptation can be seen after a long delay between initial exposure and test. Dutch listeners were exposed to a Hebrew-accented Dutch speaker with two types of Dutch words: those that contained [I] (globally accented words) and those in which the Dutch [i] was shortened to [I] (specific accent marker words). Experiment 1, which served as a baseline, showed that native Dutch participants exhibited facilitatory priming for globally accented, but not specific accent marker, words. In Experiment 2, participants performed a 3.5-minute phoneme monitoring task, in which they were asked to detect a consonant that was not strongly accented, and were tested on their comprehension of the accented speaker 24 hours later using the same cross-modal priming task as in Experiment 1. In Experiment 3, the delay between exposure and test was extended to 1 week. Listeners in Experiments 2 and 3 showed facilitatory priming for both globally accented and specific accent marker words. Together, these results show that adaptation to a foreign-accented speaker can be rapid and automatic and can be observed after a prolonged delay in testing.
|
45
|
Abstract
This study asks how users of British Sign Language (BSL) recognize individual signs in connected sign sequences. We examined whether this is achieved through modality-specific or modality-general segmentation procedures. A modality-specific feature of signed languages is that, during continuous signing, there are salient transitions between sign locations. We used the sign-spotting task to ask if and how BSL signers use these transitions in segmentation. A total of 96 real BSL signs were preceded by nonsense signs which were produced in either the target location or another location (with a small or large transition). Half of the transitions were within the same major body area (e.g., head) and half were across body areas (e.g., chest to hand). Deaf adult BSL users (a group of natives and early learners, and a group of late learners) spotted target signs best when there was a minimal transition and worst when there was a large transition. When location changes were present, both groups performed better when transitions were to a different body area than when they were within the same area. These findings suggest that transitions do not provide explicit sign-boundary cues in a modality-specific fashion. Instead, we argue that smaller transitions help recognition in a modality-general way by limiting lexical search to signs within location neighbourhoods, and that transitions across body areas also aid segmentation in a modality-general way, by providing a phonotactic cue to a sign boundary. We propose that sign segmentation is based on modality-general procedures which are core language-processing mechanisms.
|
46
|
Individual aptitude in Mandarin lexical tone perception predicts effectiveness of high-variability training. Front Psychol 2014; 5:1318. [PMID: 25505434 PMCID: PMC4243698 DOI: 10.3389/fpsyg.2014.01318] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2014] [Accepted: 10/30/2014] [Indexed: 11/19/2022] Open
Abstract
Although the high-variability training method can enhance learning of non-native speech categories, this can depend on individuals’ aptitude. The current study asked how general the effects of perceptual aptitude are by testing whether they occur with training materials spoken by native speakers and whether they depend on the nature of the to-be-learned material. Forty-five native Dutch listeners took part in a 5-day training procedure in which they identified bisyllabic Mandarin pseudowords (e.g., asa) pronounced with different lexical tone combinations. The training materials were presented to different groups of listeners at three levels of variability: low (many repetitions of a limited set of words recorded by a single speaker), medium (fewer repetitions of a more variable set of words recorded by three speakers), and high (similar to medium but with five speakers). Overall, variability did not influence learning performance, but this was due to an interaction with individuals’ perceptual aptitude: increasing variability hindered improvements in performance for low-aptitude perceivers while it helped improvements in performance for high-aptitude perceivers. These results show that the previously observed interaction between individuals’ aptitude and effects of degree of variability extends to natural tokens of Mandarin speech. This interaction was not found, however, in a closely matched study in which native Dutch listeners were trained on the Japanese geminate/singleton consonant contrast. This may indicate that the effectiveness of high-variability training depends not only on individuals’ aptitude in speech perception but also on the nature of the categories being acquired.
|
47
|
Use what you can: storage, abstraction processes, and perceptual adjustments help listeners recognize reduced forms. Front Psychol 2014; 5:437. [PMID: 24910622 PMCID: PMC4038950 DOI: 10.3389/fpsyg.2014.00437] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2013] [Accepted: 04/24/2014] [Indexed: 11/13/2022] Open
Abstract
Three eye-tracking experiments tested whether native listeners recognized reduced Dutch words better after having heard the same reduced words, or different reduced words of the same reduction type and whether familiarization with one reduction type helps listeners to deal with another reduction type. In the exposure phase, a segmental reduction group was exposed to /b/-reductions (e.g., minderij instead of binderij, “book binder”) and a syllabic reduction group was exposed to full-vowel deletions (e.g., p'raat instead of paraat, “ready”), while a control group did not hear any reductions. In the test phase, all three groups heard the same speaker producing reduced-/b/ and deleted-vowel words that were either repeated (Experiments 1 and 2) or new (Experiment 3), but that now appeared as targets in semantically neutral sentences. Word-specific learning effects were found for vowel-deletions but not for /b/-reductions. Generalization of learning to new words of the same reduction type occurred only if the exposure words showed a phonologically consistent reduction pattern (/b/-reductions). In contrast, generalization of learning to words of another reduction type occurred only if the exposure words showed a phonologically inconsistent reduction pattern (the vowel deletions; learning about them generalized to recognition of the /b/-reductions). In order to deal with reductions, listeners thus use various means. They store reduced variants (e.g., for the inconsistent vowel-deleted words) and they abstract over incoming information to build up and apply mapping rules (e.g., for the consistent /b/-reductions). Experience with inconsistent pronunciations leads to greater perceptual flexibility in dealing with other forms of reduction uttered by the same speaker than experience with consistent pronunciations.
|
48
|
Tracking perception of the sounds of English. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2014; 135:2995-3006. [PMID: 24815279 DOI: 10.1121/1.4870486] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Twenty American English listeners identified gated fragments of all 2288 possible English within-word and cross-word diphones, providing a total of 538,560 phoneme categorizations. The results show orderly uptake of acoustic information in the signal and provide a view of where information about segments occurs in time. Information locus depends on each speech sound's identity and phonological features. Affricates and diphthongs have highly localized information so that listeners' perceptual accuracy rises during a confined time range. Stops and sonorants have more distributed and gradually appearing information. The identity and phonological features (e.g., vowel vs consonant) of the neighboring segment also influence when acoustic information about a segment is available. Stressed vowels are perceived significantly more accurately than unstressed vowels, but this effect is greater for lax vowels than for tense vowels or diphthongs. The dataset charts the availability of perceptual cues to segment identity across time for the full phoneme repertoire of English in all attested phonetic contexts.
|
49
|
Suprasegmental Lexical Stress Cues in Visual Speech can Guide Spoken-Word Recognition. Q J Exp Psychol (Hove) 2014; 67:793-808. [DOI: 10.1080/17470218.2013.834371] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
Visual cues to the individual segments of speech and to sentence prosody guide speech recognition. The present study tested whether visual suprasegmental cues to the stress patterns of words can also constrain recognition. Dutch listeners use acoustic suprasegmental cues to lexical stress (changes in duration, amplitude, and pitch) in spoken-word recognition. We asked here whether they can also use visual suprasegmental cues. In two categorization experiments, Dutch participants saw a speaker say fragments of word pairs that were segmentally identical but differed in their stress realization (e.g., ˈka-vi from cavia “guinea pig” vs. ˌka-vi from kaviaar “caviar”). Participants were able to distinguish between these pairs from seeing a speaker alone. Only the presence of primary stress in the fragment, not its absence, was informative. Participants were able to distinguish visually primary from secondary stress on first syllables, but only when the fragment-bearing target word carried phrase-level emphasis. Furthermore, participants distinguished fragments with primary stress on their second syllable from those with secondary stress on their first syllable (e.g., pro-ˈjec from projector “projector” vs. ˌpro-jec from projectiel “projectile”), independently of phrase-level emphasis. Seeing a speaker thus contributes to spoken-word recognition by providing suprasegmental information about the presence of primary lexical stress.
|
50
|
Decoding of single-trial auditory mismatch responses for online perceptual monitoring and neurofeedback. Front Neurosci 2014; 7:265. [PMID: 24415996 PMCID: PMC3874475 DOI: 10.3389/fnins.2013.00265] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2013] [Accepted: 12/16/2013] [Indexed: 11/17/2022] Open
Abstract
Multivariate pattern classification methods are increasingly applied to neuroimaging data in the context of both fundamental research and in brain-computer interfacing approaches. Such methods provide a framework for interpreting measurements made at the single-trial level with respect to a set of two or more distinct mental states. Here, we define an approach in which the output of a binary classifier trained on data from an auditory mismatch paradigm can be used for online tracking of perception and as a neurofeedback signal. The auditory mismatch paradigm is known to induce distinct perceptual states related to the presentation of high- and low-probability stimuli, which are reflected in event-related potential (ERP) components such as the mismatch negativity (MMN). The first part of this paper illustrates how pattern classification methods can be applied to data collected in an MMN paradigm, including discussion of the optimization of preprocessing steps, the interpretation of features and how the performance of these methods generalizes across individual participants and measurement sessions. We then go on to show that the output of these decoding methods can be used in online settings as a continuous index of single-trial brain activation underlying perceptual discrimination. We conclude by discussing several potential domains of application, including neurofeedback, cognitive monitoring and passive brain-computer interfaces.
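The decoding approach this abstract describes (train a binary classifier on labelled standard/deviant epochs, then use its continuous output per trial as an online index of discrimination) can be illustrated with a minimal sketch. This is not the authors' pipeline: the synthetic epochs, the hand-rolled logistic regression, and all parameter values below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_epochs(n_trials, deviant, n_samples=100, noise=1.0):
    # Toy single-trial epochs (n_trials x n_samples). Deviant trials
    # carry an extra negative deflection around 200 ms, a stand-in for
    # the mismatch negativity; standards are noise only.
    t = np.linspace(0.0, 0.4, n_samples)  # 0-400 ms post-stimulus
    mmn = -2.0 * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))
    base = rng.normal(0.0, noise, size=(n_trials, n_samples))
    return base + (mmn if deviant else 0.0)

def train_classifier(X, y, lr=0.1, steps=500):
    # Logistic regression fitted by gradient descent; a placeholder
    # for whatever classifier an MVPA pipeline would actually use.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def decode(X, w, b):
    # Continuous per-trial output in (0, 1): usable online as a graded
    # index of perceptual discrimination (e.g., a neurofeedback signal).
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

# Train on labelled epochs, then score unseen single trials.
X_train = np.vstack([simulate_epochs(100, False), simulate_epochs(100, True)])
y_train = np.r_[np.zeros(100), np.ones(100)]
w, b = train_classifier(X_train, y_train)

X_test = np.vstack([simulate_epochs(50, False), simulate_epochs(50, True)])
y_test = np.r_[np.zeros(50), np.ones(50)]
accuracy = np.mean((decode(X_test, w, b) > 0.5) == y_test)
```

Thresholding `decode` at 0.5 gives a binary label per trial, but the graded output itself is what an online monitoring or neurofeedback loop would consume.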
|