1
Mueller JL, Weyers I, Friederici AD, Männel C. Individual differences in auditory perception predict learning of non-adjacent tone sequences in 3-year-olds. Front Hum Neurosci 2024; 18:1358380. PMID: 38638804; PMCID: PMC11024384; DOI: 10.3389/fnhum.2024.1358380.
Abstract
Auditory processing of speech and non-speech stimuli often involves the analysis and acquisition of non-adjacent sound patterns. Previous studies using speech material have demonstrated (i) children's early emerging ability to extract non-adjacent dependencies (NADs) and (ii) a relation between basic auditory perception and this ability. Yet, it is currently unclear whether children show similar sensitivities and similar perceptual influences for NADs in the non-linguistic domain. We conducted an event-related potential study with 3-year-old children using a sine-tone-based oddball task, which simultaneously tested for NAD learning and auditory perception by means of varying sound intensity. Standard stimuli were A X B sine-tone sequences, in which specific A elements predicted specific B elements after variable X elements. NAD deviants violated the dependency between A and B, and intensity deviants were reduced in amplitude. Both elicited similar frontally distributed positivities, suggesting successful deviant detection. Crucially, there was a predictive relationship between the amplitude of the sound-intensity discrimination effect and the amplitude of the NAD learning effect. These results are taken as evidence that NAD learning in the non-linguistic domain is functional in 3-year-olds and that basic auditory processes are related to the learning of higher-order auditory regularities outside the linguistic domain as well.
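As a concrete illustration of the oddball design described above, the sketch below builds toy A X B trials in which specific A tones predict specific B tones across a variable middle element, plus the two deviant types; the labels, pool sizes, deviant proportion and intensity reduction are assumptions for illustration, not the authors' stimulus parameters.

import random

# Hypothetical non-adjacent dependencies: each A tone predicts a specific B tone.
A_TO_B = {"A1": "B1", "A2": "B2"}
X_POOL = ["X1", "X2", "X3", "X4"]          # variable middle (X) elements

def make_trial(kind="standard"):
    a = random.choice(list(A_TO_B))
    x = random.choice(X_POOL)
    b = A_TO_B[a]
    if kind == "nad_deviant":              # violate the A-B dependency
        b = next(v for k, v in A_TO_B.items() if k != a)
    gain = 0.5 if kind == "intensity_deviant" else 1.0   # reduced-amplitude deviant
    return [(a, 1.0), (x, 1.0), (b, gain)]  # (tone label, relative intensity)

# Oddball stream: mostly standards with occasional deviants of both types.
kinds = ["standard"] * 8 + ["nad_deviant", "intensity_deviant"]
random.shuffle(kinds)
stream = [make_trial(k) for k in kinds]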
Affiliation(s)
- Jutta L. Mueller: Department of Linguistics, University of Vienna, Vienna, Austria; Vienna Cognitive Science Research HUB, Vienna, Austria
- Ivonne Weyers: Department of Linguistics, University of Vienna, Vienna, Austria
- Angela D. Friederici: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Claudia Männel: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Department of Audiology and Phoniatrics, Charité – Universitätsmedizin Berlin, Berlin, Germany
2
Endress AD. Hebbian learning can explain rhythmic neural entrainment to statistical regularities. Dev Sci 2024:e13487. PMID: 38372153; DOI: 10.1111/desc.13487.
Abstract
In many domains, learners extract recurring units from continuous sequences. For example, in unknown languages, fluent speech is perceived as a continuous signal, and learners need to extract the underlying words from this signal and then memorize them. One prominent candidate mechanism is statistical learning, whereby learners track how predictive syllables (or other items) are of one another: syllables within the same word predict each other better than syllables straddling word boundaries. But does statistical learning lead to memories of the underlying words, or just to pairwise associations among syllables? Electrophysiological results provide the strongest evidence for the memory view: electrophysiological responses can be time-locked to statistical word boundaries (e.g., N400s) and show rhythmic activity with a periodicity of word durations. Here, I reproduce such results with a simple Hebbian network. When exposed to statistically structured syllable sequences (and when the underlying words are not excessively long), the network activation is rhythmic with the periodicity of a word duration, with activation maxima on word-final syllables. This is because word-final syllables receive more excitation from the earlier syllables with which they are associated than do less predictable syllables occurring earlier in words. The network is also sensitive to the kind of information whose electrophysiological correlates have been used to support the encoding of ordinal positions within words. Hebbian learning can thus explain rhythmic neural activity in statistical learning tasks without any memory representations of words. Learners might therefore need to rely on cues beyond statistical associations to learn the words of their native language.
Research highlights:
- Statistical learning may be used to identify recurring units in continuous sequences (e.g., words in fluent speech) but may not generate explicit memories for words.
- Exposure to statistically structured sequences leads to rhythmic activity with a period of the duration of the underlying units (e.g., words).
- A memory-less Hebbian network model can reproduce this rhythmic neural activity as well as putative encodings of ordinal positions observed in earlier research.
- Direct tests are needed to establish whether statistical learning leads to declarative memories for words.
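The following minimal sketch, assuming a toy lexicon and simple pairwise Hebbian updates rather than Endress's actual model, shows how such a network's activation can become rhythmic at the word rate: word-final syllables receive the most learned excitation from their reliable within-word predecessors, so activation peaks at word offsets without any stored word representations.

import random
from collections import defaultdict

words = [("tu", "pi", "ro"), ("go", "la", "bu"), ("da", "ko", "me")]   # toy lexicon
stream = [s for w in random.choices(words, k=300) for s in w]          # continuous stream

weights = defaultdict(float)    # (earlier syllable, current syllable) -> association
activation = []                 # network activation at each syllable position
ETA, WINDOW = 0.1, 2            # learning rate and association window (assumptions)

for t, syllable in enumerate(stream):
    context = stream[max(0, t - WINDOW):t]
    act = 1.0 + sum(weights[(c, syllable)] for c in context)   # input + learned excitation
    activation.append(act)
    for c in context:                                          # Hebbian update
        weights[(c, syllable)] += ETA

# Within each word, activation rises from the first to the final syllable, because
# only word-internal transitions are consistent; over exposure this yields rhythmic
# activity with a period of one word (three syllables here).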
Affiliation(s)
- Ansgar D Endress: Department of Psychology, City, University of London, London, UK
3
Weyers I, Mueller J. A Special Role of Syllables, But Not Vowels or Consonants, for Nonadjacent Dependency Learning. J Cogn Neurosci 2022; 34:1467-1487. PMID: 35604359; DOI: 10.1162/jocn_a_01874.
Abstract
Successful language processing entails tracking (morpho)syntactic relationships between distant units of speech, so-called nonadjacent dependencies (NADs). Many cues to such dependency relations have been identified, yet the linguistic elements encoding them have received little attention. In the present investigation, we tested whether and how these elements, here syllables, consonants, and vowels, affect behavioral learning success as well as learning-related changes in neural activity in relation to item-specific NAD learning. In a set of two EEG studies with adults, we compared learning under conditions where either all segment types (Experiment 1) or only one segment type (Experiment 2) was informative. The collected behavioral and ERP data indicate that, when all three segment types are available, participants mainly rely on the syllable for NAD learning. With only one segment type available for learning, adults also perform most successfully with syllable-based dependencies. Although we find no evidence for successful learning across vowels in Experiment 2, dependencies between consonants seem to be identified at least passively at the phonetic-feature level. Together, these results suggest that successful item-specific NAD learning may depend on the availability of syllabic information. Furthermore, they highlight consonants' distinctive power to support lexical processes. Although syllables show a clear facilitatory function for NAD learning, the underlying mechanisms of this advantage require further research.
4
Roembke TC, McMurray B. Multiple components of statistical word learning are resource dependent: Evidence from a dual-task learning paradigm. Mem Cognit 2021; 49:984-997. PMID: 33733433; PMCID: PMC8238696; DOI: 10.3758/s13421-021-01141-w.
Abstract
It is increasingly understood that people may learn new word/object mappings in part via a form of statistical learning in which they track co-occurrences between words and objects across situations (cross-situational learning). Multiple learning processes are thought to contribute to this, reflecting the simultaneous influence of real-time hypothesis testing and gradual learning. It is unclear how these processes interact, and whether any of them require explicit cognitive resources. To manipulate the availability of working-memory resources for explicit processing, participants completed a dual-task paradigm in which a cross-situational word-learning task was interleaved with a short-term memory task. We then used trial-by-trial analyses to estimate how the different learning processes that play out simultaneously are impacted by resource availability. Critically, we found that both hypothesis-testing and gradual-learning effects showed a small reduction under limited resources, and that the effect of memory load was not fully mediated by these processes. This suggests that neither process is purely explicit and that there may be additional resource-dependent processes at play. Consistent with a hybrid account, these findings suggest that these two aspects of learning may reflect different facets of a single system gated by attention, rather than competing learning systems.
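As a hedged sketch of the two processes the trial-by-trial analyses target (illustrative names and a toy vocabulary, not the authors' model or stimuli), the code below contrasts a gradual, associative learner that accumulates word-object co-occurrences with a propose-but-verify hypothesis tester that keeps a single guess per word until it is disconfirmed.

import random
from collections import defaultdict

def gradual_learner(trials):
    counts = defaultdict(lambda: defaultdict(float))
    for word, objects in trials:
        for obj in objects:
            counts[word][obj] += 1.0            # co-occurrence evidence accumulates
    return {w: max(objs, key=objs.get) for w, objs in counts.items()}

def hypothesis_tester(trials):
    guess = {}
    for word, objects in trials:
        if word not in guess or guess[word] not in objects:
            guess[word] = random.choice(objects)   # (re)propose a single hypothesis
    return guess

# Toy cross-situational exposure: each trial pairs a word with its referent and one foil.
vocab = {"blick": "ball", "dax": "dog", "wug": "cup"}
def make_trial(word):
    foil = random.choice([o for o in vocab.values() if o != vocab[word]])
    return word, [vocab[word], foil]

trials = [make_trial(w) for w in random.choices(list(vocab), k=60)]
print(gradual_learner(trials))
print(hypothesis_tester(trials))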
Affiliation(s)
- Tanja C Roembke: Institute of Psychology, RWTH Aachen University, Jaegerstrasse 17-19, 62062 Aachen, Germany
- Bob McMurray: Departments of Psychological and Brain Sciences, Communication Sciences and Disorders, and Linguistics, University of Iowa, Iowa City, IA, USA
5
Bettoni R, Riva V, Cantiani C, Molteni M, Macchi Cassia V, Bulf H. Infants' Learning of Rule-Based Visual Sequences Predicts Language Outcome at 2 Years. Front Psychol 2020; 11:281. PMID: 32158415; PMCID: PMC7052175; DOI: 10.3389/fpsyg.2020.00281.
Abstract
The ability to learn and generalize abstract rules from sensory input, i.e., rule learning (RL), is seen as pivotal to language development, and specifically to the acquisition of the grammatical structure of language. Although many studies have shown that RL in infancy operates across different perceptual domains, including vision, no studies have directly investigated the link between infants' visual RL and later language acquisition. Here, we conducted a longitudinal study to investigate whether 7-month-olds' ability to detect visual structural regularities predicts linguistic outcome at 2 years of age. At 7 months, infants were tested for their ability to extract and generalize ABB and ABA structures from sequences of visual shapes, and at 24 months their lexical and grammatical skills were assessed using the MacArthur-Bates CDI. Regression analyses showed that infants' visual RL abilities selectively predicted early grammatical abilities, but not lexical abilities. These results may provide the first evidence that RL mechanisms are involved in language acquisition, and suggest that RL abilities may act as an early neurocognitive marker for language impairments.
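For reference, the repetition rules tested here have a simple form; the toy check below (an illustration, not the study's stimulus code) classifies a three-item sequence of shapes as ABB, ABA, or neither, independently of the specific shapes used.

def rule_of(triplet):
    a, b, c = triplet
    if a != b and b == c:
        return "ABB"    # last two items repeat
    if a == c and a != b:
        return "ABA"    # first and last items match
    return "other"

print(rule_of(["circle", "square", "square"]))   # ABB
print(rule_of(["star", "triangle", "star"]))     # ABA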
Affiliation(s)
- Roberta Bettoni: Department of Psychology, University of Milano-Bicocca, Milan, Italy; NeuroMI, Milan Center for Neuroscience, Milan, Italy
- Valentina Riva: Child Psychopathology Unit, Scientific Institute, IRCCS E. Medea, Bosisio Parini, Lecco, Italy
- Chiara Cantiani: Child Psychopathology Unit, Scientific Institute, IRCCS E. Medea, Bosisio Parini, Lecco, Italy
- Massimo Molteni: Child Psychopathology Unit, Scientific Institute, IRCCS E. Medea, Bosisio Parini, Lecco, Italy
- Viola Macchi Cassia: Department of Psychology, University of Milano-Bicocca, Milan, Italy; NeuroMI, Milan Center for Neuroscience, Milan, Italy
- Hermann Bulf: Department of Psychology, University of Milano-Bicocca, Milan, Italy; NeuroMI, Milan Center for Neuroscience, Milan, Italy
6
Rabagliati H, Ferguson B, Lew-Williams C. The profile of abstract rule learning in infancy: Meta-analytic and experimental evidence. Dev Sci 2019; 22:e12704. PMID: 30014590; PMCID: PMC6294696; DOI: 10.1111/desc.12704.
Abstract
Everyone agrees that infants possess general mechanisms for learning about the world, but the existence and operation of more specialized mechanisms is controversial. One such mechanism, rule learning, has been proposed as potentially specific to speech, based on findings that 7-month-olds can learn abstract repetition rules from spoken syllables (e.g., ABB patterns: wo-fe-fe, ga-tu-tu…) but not from closely matched stimuli, such as tones. Subsequent work has shown that learning of abstract patterns is not simply specific to speech. However, we still lack a parsimonious explanation that ties together the diverse, messy, and occasionally contradictory findings in that literature. We took two routes to creating a new profile of rule learning: a meta-analysis of 20 prior reports on infants' learning of abstract repetition rules (comprising 1,318 infants across 63 experiments), and an experiment on learning of such rules from a natural, non-speech communicative signal. These complementary approaches revealed that infants were most likely to learn abstract patterns from meaningful stimuli. We argue that the ability to detect and generalize simple patterns supports learning across domains in infancy, but chiefly when the signal is meaningfully relevant to infants' experience with sounds, objects, language, and people.
Affiliation(s)
- Hugh Rabagliati: School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Edinburgh, UK
- Brock Ferguson: Department of Psychology, Northwestern University, Evanston, Illinois, USA
7
Kazemi Esfeh T, Hatami J, Lavasani MG. Influence of metrical structure on learning of positional regularities in movement sequences. Psychol Res 2018; 84:611-624. PMID: 30229296; DOI: 10.1007/s00426-018-1096-2.
Abstract
Sequential stimuli are usually perceived as having hierarchical temporal structures. However, some of these structures have been investigated in only one type of sequence, despite existing evidence for the domain-generality of their representation. Here, we assess whether the hierarchical representation of regularly segmented action sequences resembles the perceived metrical patterns that hierarchically organize the representation of events in temporally regular sequences. In all our experiments, we presented participants with sequences of human movements and tested the perception of metrical patterns by segmenting the movement streams into temporally equal groups of four movements. In Experiment 1, we found that a movement sequence with temporally equal groupings improves the learning of positional regularities inherent within each group of movements. To further clarify the degree to which this learning mechanism is affected by the perceived metrical patterns, we conducted Experiments 2a and 2b, which examined the relative saliencies of the first and last positions in the movement groups, respectively. The results showed that rule-conforming first positions alone support the learning of positional regularities as effectively as when both first and last positions are rule-conforming, whereas last positions alone are not as influential. Based on these findings, we conclude that, in grouped sequences, learning of positional regularities may be modulated by the metrical saliency patterns imposed by the temporal regularity of the sequential grouping pattern.
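To make the grouping manipulation concrete, the sketch below (movement labels, group construction, and the specific positional rule are assumptions for illustration, not the authors' materials) builds a stream segmented into temporally equal groups of four movements in which only certain movements are legal in the salient first position, and then checks that positional regularity.

import random

FIRST_LEGAL = {"clap", "wave"}                 # assumed rule-conforming first-position items
FILLERS = ["nod", "turn", "step", "bend"]

def make_group(conforming=True):
    first = random.choice(sorted(FIRST_LEGAL)) if conforming else random.choice(FILLERS)
    return [first] + random.choices(FILLERS, k=3)   # one group of four movements

stream = [m for _ in range(10) for m in make_group()]
groups = [stream[i:i + 4] for i in range(0, len(stream), 4)]    # metrical grouping
violations = [g for g in groups if g[0] not in FIRST_LEGAL]     # positional regularity check
print(len(groups), "groups,", len(violations), "violations")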
Affiliation(s)
- Talieh Kazemi Esfeh: Faculty of Psychology and Education, University of Tehran, Jalal Al-e-Ahmad Avenue, Tehran, 1445983861, Iran
- Javad Hatami: Faculty of Psychology and Education, University of Tehran, Jalal Al-e-Ahmad Avenue, Tehran, 1445983861, Iran
- Masoud Gholamali Lavasani: Faculty of Psychology and Education, University of Tehran, Jalal Al-e-Ahmad Avenue, Tehran, 1445983861, Iran
8
Thiessen ED. What's statistical about learning? Insights from modelling statistical learning as a set of memory processes. Philos Trans R Soc Lond B Biol Sci 2017; 372:20160056. PMID: 27872374; PMCID: PMC5124081; DOI: 10.1098/rstb.2016.0056.
Abstract
Statistical learning has been studied in a variety of different tasks, including word segmentation, object identification, category learning, artificial grammar learning and serial reaction time tasks (e.g. Saffran et al. 1996 Science 274, 1926-1928; Orban et al. 2008 Proceedings of the National Academy of Sciences 105, 2745-2750; Thiessen & Yee 2010 Child Development 81, 1287-1303; Saffran 2002 Journal of Memory and Language 47, 172-196; Misyak & Christiansen 2012 Language Learning 62, 302-331). The differences among these tasks raise questions about whether they all depend on the same kinds of underlying processes and computations, or whether they tap into different underlying mechanisms. Prior theoretical approaches to statistical learning have often tried to explain or model learning in a single task. However, in many cases these approaches appear inadequate to explain performance across multiple tasks. For example, explaining word segmentation via the computation of sequential statistics (such as transitional probability) provides little insight into the nature of sensitivity to regularities among simultaneously presented features. In this article, we present a formal computational approach that we believe is a good candidate to provide a unifying framework to explore and explain learning in a wide variety of statistical learning tasks. This framework suggests that statistical learning arises from a set of processes that are inherent in memory systems, including activation, interference, integration of information and forgetting (e.g. Perruchet & Vinter 1998 Journal of Memory and Language 39, 246-263; Thiessen et al. 2013 Psychological Bulletin 139, 792-814). From this perspective, statistical learning does not involve explicit computation of statistics, but rather the extraction of elements of the input into memory traces and subsequent integration across those memory traces that emphasizes consistent information (Thiessen and Pavlik 2013 Cognitive Science 37, 310-343). This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'.
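As a worked example of the sequential statistic mentioned above (the toy stream and letter-for-syllable coding are illustrative, not the memory-based model the article develops), the snippet below computes forward transitional probabilities over a continuous stream and shows that within-word transitions are more predictive than transitions across word boundaries.

from collections import Counter

# Letters stand in for syllables; the underlying "words" are ABC, DEF and GHI.
stream = list("ABCDEFGHIABCGHIDEFABCDEFGHI")
pairs = Counter(zip(stream, stream[1:]))
firsts = Counter(stream[:-1])

def tp(x, y):
    return pairs[(x, y)] / firsts[x]     # forward transitional probability P(y | x)

print(tp("A", "B"), tp("B", "C"))        # within-word transitions: 1.0
print(tp("C", "D"), tp("C", "G"))        # across-word transitions: lower and split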
Affiliation(s)
- Erik D Thiessen: Department of Psychology, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA