1
Háden GP, Bouwer FL, Honing H, Winkler I. Beat processing in newborn infants cannot be explained by statistical learning based on transition probabilities. Cognition 2024; 243:105670. [PMID: 38016227] [DOI: 10.1016/j.cognition.2023.105670]
Abstract
Newborn infants have been shown to extract temporal regularities from sound sequences, both in the form of learning regular sequential properties, and extracting periodicity in the input, commonly referred to as a regular pulse or the 'beat'. However, these two types of regularities are often indistinguishable in isochronous sequences, as both statistical learning and beat perception can be elicited by the regular alternation of accented and unaccented sounds. Here, we manipulated the isochrony of sound sequences in order to disentangle statistical learning from beat perception in sleeping newborn infants in an EEG experiment, as previously done in adults and macaque monkeys. We used a binary accented sequence that induces a beat when presented with isochronous timing, but not when presented with randomly jittered timing. We compared mismatch responses to infrequent deviants falling on either accented or unaccented (i.e., odd and even) positions. Results showed a clear difference between metrical positions in the isochronous sequence, but not in the equivalent jittered sequence. This suggests that beat processing is present in newborns. Despite previous evidence for statistical learning in newborns, the effects of this ability were not detected in the jittered condition. These results show that statistical learning by itself does not fully explain beat processing in newborn infants.
Affiliation(s)
- Gábor P Háden
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Magyar tudósok körútja 2, H-1117 Budapest, Hungary; Department of Telecommunications and Media Informatics, Faculty of Electrical Engineering and Informatics, Budapest University of Technology and Economics, Magyar tudósok körútja 2, 1117 Budapest, Hungary.
- Fleur L Bouwer
- Music Cognition Group, Institute for Logic, Language, and Computation, University of Amsterdam, P.O. Box 94242, 1090 GE Amsterdam, the Netherlands; Amsterdam Brain and Cognition, University of Amsterdam, P.O. Box 15900, 1001 NK Amsterdam, the Netherlands; Department of Psychology, Brain & Cognition, University of Amsterdam, P.O. Box 15900, 1001 NK Amsterdam, the Netherlands; Cognitive Psychology Unit, Institute of Psychology & Leiden Institute for Brain and Cognition, Leiden University, 2333 AK Leiden, the Netherlands.
- Henkjan Honing
- Music Cognition Group, Institute for Logic, Language, and Computation, University of Amsterdam, P.O. Box 94242, 1090 GE Amsterdam, the Netherlands; Amsterdam Brain and Cognition, University of Amsterdam, P.O. Box 15900, 1001 NK Amsterdam, the Netherlands.
- István Winkler
- Institute of Cognitive Neuroscience and Psychology, HUN-REN Research Centre for Natural Sciences, Magyar tudósok körútja 2, H-1117 Budapest, Hungary.

2
Tan SHJ, Kalashnikova M, Di Liberto GM, Crosse MJ, Burnham D. Seeing a Talking Face Matters: Gaze Behavior and the Auditory-Visual Speech Benefit in Adults' Cortical Tracking of Infant-directed Speech. J Cogn Neurosci 2023; 35:1741-1759. [PMID: 37677057] [DOI: 10.1162/jocn_a_02044]
Abstract
In face-to-face conversations, listeners gather visual speech information from a speaker's talking face that enhances their perception of the incoming auditory speech signal. This auditory-visual (AV) speech benefit is evident even in quiet environments but is stronger in situations that require greater listening effort, such as when the speech signal itself deviates from listeners' expectations. One example is infant-directed speech (IDS) presented to adults. IDS has exaggerated acoustic properties that are easily discriminable from adult-directed speech (ADS). Although IDS is a speech register that adults typically use with infants, no previous neurophysiological study has directly examined whether adult listeners process IDS differently from ADS. To address this, the current study simultaneously recorded EEG and eye-tracking data from adult participants as they were presented with auditory-only (AO), visual-only, and AV recordings of IDS and ADS. Eye-tracking data were recorded because looking behavior to the speaker's eyes and mouth modulates the extent of the AV speech benefit experienced. Analyses of cortical tracking accuracy revealed that cortical tracking of the speech envelope was significant in AO and AV modalities for IDS and ADS. However, the AV speech benefit [i.e., AV > (A + V)] was only present for IDS trials. Gaze behavior analyses indicated differences in looking behavior during IDS and ADS trials. Surprisingly, looking behavior to the speaker's eyes and mouth was not correlated with cortical tracking accuracy. Additional exploratory analyses indicated that attention to the whole display was negatively correlated with cortical tracking accuracy of AO and visual-only trials in IDS. Our results underscore the nuances involved in the relationship between neurophysiological AV speech benefit and looking behavior.
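Cortical tracking in this line of work is typically quantified with a temporal response function (TRF): a ridge regression from time-lagged samples of the speech envelope to the EEG, with tracking accuracy taken as the correlation between predicted and measured responses. The sketch below illustrates that idea on synthetic single-channel data; the lag range, ridge parameter, and signals are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def lagged_design(stim, lags):
    """Design matrix of time-lagged copies of the stimulus envelope."""
    X = np.zeros((len(stim), len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stim[:len(stim) - lag]
        else:
            X[:lag, j] = stim[-lag:]
    return X

def trf_fit(stim, eeg, lags, ridge=1.0):
    """Ridge-regression TRF mapping envelope -> EEG (forward model)."""
    X = lagged_design(stim, lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(len(lags)), X.T @ eeg)

def tracking_accuracy(stim, eeg, w, lags):
    """Correlation between TRF-predicted and measured EEG."""
    pred = lagged_design(stim, lags) @ w
    return np.corrcoef(pred, eeg)[0, 1]

# Synthetic data: the "EEG" is a lagged, noisy transformation of the envelope.
rng = np.random.default_rng(0)
env = rng.random(1000)                                  # toy speech envelope
true_kernel = np.array([0.0, 0.5, 1.0, 0.3])            # toy neural response
eeg = np.convolve(env, true_kernel)[:1000] + 0.1 * rng.standard_normal(1000)
lags = list(range(0, 8))                                # 0-70 ms at 100 Hz
w = trf_fit(env, eeg, lags)
r = tracking_accuracy(env, eeg, w, lags)                # high for this toy data
```

In practice this is done per participant and condition (AO, V, AV), and the AV benefit is computed by comparing AV accuracy against the sum of the unimodal accuracies.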
Affiliation(s)
- Sok Hui Jessica Tan
- The MARCS Institute of Brain, Behaviour and Development, Western Sydney University, Australia
- Science of Learning in Education Centre, Office of Education Research, National Institute of Education, Nanyang Technological University, Singapore
- Marina Kalashnikova
- The Basque Center on Cognition, Brain and Language
- IKERBASQUE, Basque Foundation for Science
- Giovanni M Di Liberto
- ADAPT Centre, School of Computer Science and Statistics, Trinity College Institute of Neuroscience, Trinity College, The University of Dublin, Ireland
- Michael J Crosse
- SEGOTIA, Galway, Ireland
- Trinity Center for Biomedical Engineering, Department of Mechanical, Manufacturing & Biomedical Engineering, Trinity College Dublin, Dublin, Ireland
- Denis Burnham
- The MARCS Institute of Brain, Behaviour and Development, Western Sydney University, Australia

3
Kujala T, Partanen E, Virtala P, Winkler I. Prerequisites of language acquisition in the newborn brain. Trends Neurosci 2023; 46:726-737. [PMID: 37344237] [DOI: 10.1016/j.tins.2023.05.011]
Abstract
Learning to decode and produce speech is one of the most demanding tasks faced by infants. Nevertheless, infants typically utter their first words within a year, and phrases soon follow. Here we review cognitive abilities of newborn infants that promote language acquisition, focusing primarily on studies tapping neural activity. The results of these studies indicate that infants possess core adult auditory abilities already at birth, including statistical learning and rule extraction from variable speech input. Thus, the neonatal brain is ready to categorize sounds, detect word boundaries, learn words, and separate speech streams: in short, to acquire language quickly and efficiently from everyday linguistic input.
Affiliation(s)
- Teija Kujala
- Cognitive Brain Research Unit, Centre of Excellence in Music, Mind, Body and Brain, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland.
- Eino Partanen
- Cognitive Brain Research Unit, Centre of Excellence in Music, Mind, Body and Brain, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
- Paula Virtala
- Cognitive Brain Research Unit, Centre of Excellence in Music, Mind, Body and Brain, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
- István Winkler
- Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary

4
Hervé E, Mento G, Desnous B, François C. Challenges and new perspectives of developmental cognitive EEG studies. Neuroimage 2022; 260:119508. [PMID: 35882267] [DOI: 10.1016/j.neuroimage.2022.119508]
Abstract
Despite shared procedures with adults, electroencephalography (EEG) in early development presents many specificities that need to be considered for good quality data collection. In this paper, we provide an overview of the most representative early cognitive developmental EEG studies, focusing on the specificities of this neuroimaging technique in young participants, such as attrition and artifacts. We also summarize the most representative results in developmental EEG research obtained in the time and time-frequency domains, as well as those obtained with more advanced signal processing methods. Finally, we briefly introduce three recent standardized pipelines that will help promote replicability and comparability across experiments and ages. While this paper does not claim to be exhaustive, it aims to give a sufficiently large overview of the challenges and solutions available for conducting robust cognitive developmental EEG studies.
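Whatever the toolbox, the core of such ERP pipelines is epoching around event markers, rejecting high-amplitude artifact epochs (the major driver of attrition in infant data), and averaging. A toy sketch of that logic on synthetic data; the rejection threshold, sampling rate, and signals are illustrative assumptions, not taken from any of the standardized pipelines the paper discusses.

```python
import numpy as np

def epoch_and_average(signal, onsets, n_samples, reject_uv=150.0):
    """Cut fixed-length epochs at stimulus onsets, drop epochs whose
    peak-to-peak amplitude exceeds `reject_uv` (microvolts), and return
    the ERP (mean of surviving epochs) plus the attrition count."""
    epochs = []
    dropped = 0
    for t in onsets:
        ep = signal[t:t + n_samples]
        if len(ep) < n_samples:
            continue                    # epoch runs off the end of the recording
        if np.ptp(ep) > reject_uv:
            dropped += 1                # artifact: movement, blink, etc.
        else:
            epochs.append(ep)
    return np.mean(epochs, axis=0), dropped

rng = np.random.default_rng(1)
fs = 250                                                # Hz, illustrative
erp_shape = np.sin(np.linspace(0, np.pi, fs)) * 5.0     # 5 uV deflection
eeg = rng.standard_normal(fs * 60) * 10.0               # background noise, uV
onsets = np.arange(fs, fs * 55, fs * 2)
for t in onsets:
    eeg[t:t + fs] += erp_shape                          # embed the response
eeg[onsets[3]:onsets[3] + 50] += 400.0                  # simulated movement artifact
erp, dropped = epoch_and_average(eeg, onsets, fs)       # one epoch rejected
```

Real pipelines add filtering, bad-channel interpolation, and ICA-based artifact correction on top of this skeleton.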
Affiliation(s)
- Estelle Hervé
- CNRS, LPL, Aix-Marseille University, 5 Avenue Pasteur, Aix-en-Provence 13100, France
- Giovanni Mento
- Department of General Psychology, University of Padova, Padova 35131, Italy; Padua Neuroscience Center (PNC), University of Padova, Padova 35131, Italy
- Béatrice Desnous
- APHM, Reference Center for Rare Epilepsies, Timone Children Hospital, Aix-Marseille University, Marseille 13005, France; Inserm, INS, Aix-Marseille University, Marseille 13005, France
- Clément François
- CNRS, LPL, Aix-Marseille University, 5 Avenue Pasteur, Aix-en-Provence 13100, France.

5
Soares AP, Gutiérrez-Domínguez FJ, Oliveira HM, Lages A, Guerra N, Pereira AR, Tomé D, Lousada M. Explicit Instructions Do Not Enhance Auditory Statistical Learning in Children With Developmental Language Disorder: Evidence From Event-Related Potentials. Front Psychol 2022; 13:905762. [PMID: 35846717] [PMCID: PMC9282164] [DOI: 10.3389/fpsyg.2022.905762]
Abstract
A current issue in psycholinguistic research is whether the language difficulties exhibited by children with developmental language disorder [DLD, previously labeled specific language impairment (SLI)] are due to deficits in their abilities to pick up patterns in the sensory environment, an ability known as statistical learning (SL), and the extent to which explicit learning mechanisms can be used to compensate for those deficits. Studies designed to test the compensatory role of explicit learning mechanisms in children with DLD are, however, scarce, and the few conducted so far have led to inconsistent results. This work aimed to provide new insights into the role that explicit learning mechanisms might play in implicit learning deficits in children with DLD by resorting to a new approach. This approach involved not only the collection of event-related potentials (ERPs) while preschool children with DLD [relative to typical language development (TLD) controls] were exposed to a continuous auditory stream made of the repetition of three-syllable nonsense words but, importantly, also the collection of ERPs when the same children performed analogous versions of the same auditory SL task, first under incidental (implicit) and afterward under intentional (explicit) conditions. In each of these tasks, the level of predictability of the three-syllable nonsense words embedded in the speech streams was also manipulated (high vs. low) to mimic natural languages closely. At the end of both tasks' exposure phase, children performed a two-alternative forced-choice (2-AFC) task from which behavioral evidence of SL was obtained. Results from the 2-AFC tasks failed to show reliable signs of SL in either group of children. The ERP data showed, however, significant modulations in the N100 and N400 components, taken as neural signatures of word segmentation in the brain, even though a detailed analysis of the neural responses revealed that only children from the TLD group seem to have taken advantage of previous knowledge to enhance SL functioning. These results suggest that children with DLD show deficits in both implicit and explicit learning mechanisms, casting doubts on the efficiency of interventions relying on explicit instructions to help children with DLD overcome their language difficulties.
Affiliation(s)
- Ana Paula Soares
- Human Cognition Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Helena M. Oliveira
- Human Cognition Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Alexandrina Lages
- Human Cognition Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Natália Guerra
- Human Cognition Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Ana Rita Pereira
- Psychological Neuroscience Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- David Tomé
- Department of Audiology, School of Health, Polytechnic Institute of Porto, Porto, Portugal
- Neurocognition Group, Laboratory of Psychosocial Rehabilitation, CiR, Porto, Portugal
- Marisa Lousada
- Center for Health Technology and Services Research (CINTESIS@RISE), School of Health Sciences, University of Aveiro, Aveiro, Portugal

6
Relevance to the higher order structure may govern auditory statistical learning in neonates. Sci Rep 2022; 12:5905. [PMID: 35393525] [PMCID: PMC8989996] [DOI: 10.1038/s41598-022-09994-0]
Abstract
Hearing is one of the earliest senses to develop and is quite mature by birth. Contemporary theories assume that regularities in sound are exploited by the brain to create internal models of the environment. Through statistical learning, internal models extrapolate from patterns to predictions about subsequent experience. In adults, altered brain responses to sound enable us to infer the existence and properties of these models. In this study, brain potentials were used to determine whether newborns exhibit context-dependent modulations of a brain response that can be used to infer the existence and properties of internal models. Results are indicative of significant context-dependence in the responsivity to sound in newborns. When common and rare sounds continue in stable probabilities over a very long period, neonates respond to all sounds equivalently (no differentiation). However, when the same common and rare sounds at the same probabilities alternate over time, the neonate responses show clear differentiations. The context-dependence is consistent with the possibility that the neonate brain produces more precise internal models that discriminate between contexts when there is an emergent structure to be discovered but appears to adopt broader models when discrimination delivers little or no additional information about the environment.

7
Sleeping neonates track transitional probabilities in speech but only retain the first syllable of words. Sci Rep 2022; 12:4391. [PMID: 35292694] [PMCID: PMC8924158] [DOI: 10.1038/s41598-022-08411-w]
Abstract
Extracting statistical regularities from the environment is a primary learning mechanism that might support language acquisition. While it has been shown that infants are sensitive to transition probabilities between syllables in speech, it is still not known what information they encode. Here we used electrophysiology to study how full-term neonates process an artificial language constructed by randomly concatenating four pseudo-words and what information they retain after a few minutes of exposure. Neural entrainment served as a marker of the regularities the brain was tracking during learning. Then, in a post-learning phase, event-related potentials (ERPs) to different triplets explored which information was retained. After two minutes of familiarization with the artificial language, neural entrainment at the word rate emerged, demonstrating rapid learning of the regularities. ERPs in the test phase significantly differed between triplets starting or not with the correct first syllables, but no difference was associated with subsequent violations in transition probabilities. Thus, our results revealed a two-step learning process: neonates segmented the stream based on its statistical regularities, but the memory encoding probed during the word recognition phase captured only the ordinal position of the syllables and was still incomplete at that age.
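The transitional probabilities at issue in these studies are simply conditional probabilities between adjacent syllables: within a word they approach 1, while at word boundaries they stay low, which is what allows segmentation. A minimal sketch; the syllables and "words" below are invented for illustration.

```python
from collections import Counter

def transitional_probabilities(stream):
    """Forward transitional probabilities P(next | current) estimated
    from a sequence of syllables."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

# Toy artificial language: two trisyllabic "words", tu-pi-ro and go-la-bu,
# concatenated in a pseudo-random order.
words = {"W1": ["tu", "pi", "ro"], "W2": ["go", "la", "bu"]}
order = ["W1", "W2", "W1", "W1", "W2", "W2"]
stream = [syl for w in order for syl in words[w]]
tp = transitional_probabilities(stream)
# Within-word transitions are deterministic (tu -> pi has TP 1.0), while
# word-boundary transitions (ro -> go, ro -> tu) split the probability mass.
```

Dips in transitional probability mark candidate word boundaries, which is the cue the familiarization streams in these experiments provide.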

8
Soares AP, Gutiérrez-Domínguez FJ, Lages A, Oliveira HM, Vasconcelos M, Jiménez L. Learning Words While Listening to Syllables: Electrophysiological Correlates of Statistical Learning in Children and Adults. Front Hum Neurosci 2022; 16:805723. [PMID: 35280206] [PMCID: PMC8905652] [DOI: 10.3389/fnhum.2022.805723]
Abstract
From an early age, exposure to a spoken language has allowed us to implicitly capture the structure underlying the succession of speech sounds in that language and to segment it into meaningful units (words). Statistical learning (SL), the ability to pick up patterns in the sensory environment without intention or reinforcement, is thus assumed to play a central role in the acquisition of the rule-governed aspects of language, including the discovery of word boundaries in the continuous acoustic stream. Although extensive evidence has been gathered from artificial language experiments showing that children and adults are able to track the regularities embedded in the auditory input, such as the probability of one syllable following another in the speech stream, the developmental trajectory of this ability remains controversial. In this work, we collected Event-Related Potentials (ERPs) while 5-year-old children and young adults (university students) were exposed to a speech stream made of the repetition of eight three-syllable nonsense words presenting different levels of predictability (high vs. low), to mimic closely what occurs in natural languages and to gain new insights into the changes that the mechanisms underlying auditory statistical learning (aSL) might undergo across development. The participants performed the aSL task first under implicit and, subsequently, under explicit conditions to further analyze whether children take advantage of previous knowledge of the to-be-learned regularities to enhance SL, as observed with the adult participants. These findings also contribute to extending our knowledge of the mechanisms available to assist SL at each developmental stage. Although behavioral signs of learning, even under explicit conditions, were only observed for the adult participants, ERP data showed evidence of online segmentation in the brain in both groups, as indexed by modulations in the N100 and N400 components. A detailed analysis of the neural data suggests, however, that adults and children rely on different mechanisms to assist the extraction of word-like units from the continuous speech stream, hence supporting the view that SL with auditory linguistic materials changes through development.
Affiliation(s)
- Ana Paula Soares
- Human Cognition Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- *Correspondence: Ana Paula Soares
- Alexandrina Lages
- Human Cognition Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Helena M. Oliveira
- Human Cognition Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Margarida Vasconcelos
- Psychological Neuroscience Lab, CIPsi, School of Psychology, University of Minho, Braga, Portugal
- Luis Jiménez
- Department of Psychology, University of Santiago de Compostela, Santiago de Compostela, Spain

9
Marklund U, Marklund E, Gustavsson L. Relationship Between Parent Vowel Hyperarticulation in Infant-Directed Speech and Infant Phonetic Complexity on the Level of Conversational Turns. Front Psychol 2021; 12:688242. [PMID: 34421739] [PMCID: PMC8371631] [DOI: 10.3389/fpsyg.2021.688242]
Abstract
When speaking to infants, parents typically use infant-directed speech, a speech register that in several aspects differs from that directed to adults. Vowel hyperarticulation, that is, extreme articulation of vowels, is one characteristic sometimes found in infant-directed speech, and it has been suggested that there exists a relationship between how much vowel hyperarticulation parents use when speaking to their infant and infant language development. In this study, the relationship between parent vowel hyperarticulation and phonetic complexity of infant vocalizations is investigated. Previous research has shown that on the level of subject means, a positive correlational relationship exists. However, the previous findings do not provide information about the directionality of that relationship. In this study the relationship is investigated on a conversational turn level, which makes it possible to draw conclusions on whether the behavior of the infant is impacting the parent, the behavior of the parent is impacting the infant, or both. Parent vowel hyperarticulation was quantified using the vhh-index, a measure that allows vowel hyperarticulation to be estimated for individual vowel tokens. Phonetic complexity of infant vocalizations was calculated using the Word Complexity Measure for Swedish. Findings were unexpected in that a negative relationship was found between parent vowel hyperarticulation and phonetic complexity of the immediately following infant vocalization. Directionality was suggested by the fact that no such relationship was found between infant phonetic complexity and vowel hyperarticulation of the immediately following parent utterance. A potential explanation for these results is that high degrees of vowel hyperarticulation either provide, or co-occur with, large amounts of phonetic and/or linguistic information, which may occupy processing resources to an extent that affects production of the next vocalization.
Affiliation(s)
- Ulrika Marklund
- Division of Sensory Organs and Communication, Department of Biomedical and Clinical Sciences, Linköping University, Linköping, Sweden; Department of Neurology, Speech and Language Clinic, Danderyd Hospital, Stockholm, Sweden; Division of Speech and Language Pathology, Department of Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden
- Ellen Marklund
- Phonetics Laboratory, Stockholm Babylab, Department of Linguistics, Stockholm University, Stockholm, Sweden
- Lisa Gustavsson
- Division of Speech and Language Pathology, Department of Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden; Phonetics Laboratory, Stockholm Babylab, Department of Linguistics, Stockholm University, Stockholm, Sweden

10
Ferry A, Guellai B. Labels and object categorization in six- and nine-month-olds: tracking labels across varying carrier phrases. Infant Behav Dev 2021; 64:101606. [PMID: 34333262] [DOI: 10.1016/j.infbeh.2021.101606]
Abstract
Language shapes object categorization in infants. This starts as a general enhanced attentional effect of language, which narrows to a specific link between labels and categories by twelve months. The current experiments examined this narrowing effect by investigating when infants track a consistent label across varied input. Six-month-old infants (N = 48) were familiarized to category exemplars, each presented with the exact same labeling phrase or the same label in different phrases. Evidence of object categorization at test was only found with the same phrase, suggesting that infants were not tracking the label's consistency, but rather that of the entire input. Nine-month-olds (N = 24) did show evidence of categorization across the varied phrases, suggesting that they were tracking the consistent label across the varied input.
Affiliation(s)
- Alissa Ferry
- Language, Cognition, and Development Laboratory, Scuola Internazionale di Studi Avanzati, Trieste, Italy; Division of Human Communication, Development and Hearing, University of Manchester, Manchester, UK.
- Bahia Guellai
- Language, Cognition, and Development Laboratory, Scuola Internazionale di Studi Avanzati, Trieste, Italy; Laboratoire Ethologie Cognition, Développement (LECD), Université Paris Nanterre, France

11
Marklund E, Marklund U, Gustavsson L. An Association Between Phonetic Complexity of Infant Vocalizations and Parent Vowel Hyperarticulation. Front Psychol 2021; 12:693866. [PMID: 34354637] [PMCID: PMC8329736] [DOI: 10.3389/fpsyg.2021.693866]
Abstract
Extreme or exaggerated articulation of vowels, or vowel hyperarticulation, is a characteristic commonly found in infant-directed speech (IDS). High degrees of vowel hyperarticulation in parent IDS have been tied to better speech sound category development and bigger vocabulary size in infants. In the present study, the relationship between vowel hyperarticulation in Swedish IDS to 12-month-olds and the phonetic complexity of infant vocalizations is investigated. Articulatory adaptation toward hyperarticulation is quantified as the difference in vowel space area between IDS and adult-directed speech (ADS). Phonetic complexity is estimated using the Word Complexity Measure for Swedish (WCM-SE). The results show that vowels in IDS were more hyperarticulated than vowels in ADS, and that parents' articulatory adaptation in terms of hyperarticulation correlates with the phonetic complexity of infant vocalizations. This can be explained either by the parents' articulatory behavior impacting the infants' vocalization behavior, by the infants' social and communicative cues eliciting hyperarticulation in the parents' speech, or by the two variables being impacted by a third, underlying variable such as parents' general communicative adaptiveness.
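Vowel space area in such work is usually the area of the polygon spanned by corner vowels (e.g., /i/, /a/, /u/) in F1-F2 space, so hyperarticulation shows up as a larger IDS area relative to ADS. A sketch using the shoelace formula; the formant values below are invented for illustration, not data from the study.

```python
def vowel_space_area(corners):
    """Polygon area (shoelace formula) of corner vowels given as
    (F1, F2) pairs in Hz, listed in order around the polygon."""
    area = 0.0
    n = len(corners)
    for i in range(n):
        x1, y1 = corners[i]
        x2, y2 = corners[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Hypothetical /i/, /a/, /u/ formants (Hz): expanded in IDS, reduced in ADS.
ids_corners = [(300, 2800), (850, 1400), (320, 700)]
ads_corners = [(350, 2500), (750, 1450), (380, 900)]
expansion = vowel_space_area(ids_corners) / vowel_space_area(ads_corners)
# expansion > 1 indicates vowel hyperarticulation in IDS relative to ADS.
```

The same formula generalizes to quadrilateral vowel spaces by passing four corner vowels in order.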
Affiliation(s)
- Ellen Marklund
- Phonetics Laboratory, Stockholm Babylab, Department of Linguistics, Stockholm University, Stockholm, Sweden
- Ulrika Marklund
- Division of Sensory Organs and Communication, Department of Biomedical and Clinical Sciences, Linköping University, Linköping, Sweden
- Speech and Language Clinic, Department of Neurology, Danderyd Hospital, Stockholm, Sweden
- Division of Speech and Language Pathology, Department of Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden
- Lisa Gustavsson
- Phonetics Laboratory, Stockholm Babylab, Department of Linguistics, Stockholm University, Stockholm, Sweden
- Division of Speech and Language Pathology, Department of Clinical Science, Intervention and Technology, Karolinska Institutet, Stockholm, Sweden

12
Pierce LJ, Carmody Tague E, Nelson CA. Maternal stress predicts neural responses during auditory statistical learning in 26-month-old children: An event-related potential study. Cognition 2021; 213:104600. [PMID: 33509600] [DOI: 10.1016/j.cognition.2021.104600]
Abstract
Exposure to high levels of early life stress has been associated with long-term difficulties in learning, behavior, and health, with particular impact evident in the language domain. While some have proposed that the increased stress of living in a low-income household mediates observed associations between socioeconomic status (SES) and child outcomes, considerable individual differences have been observed. The extent to which specific variables associated with socioeconomic status - in particular, exposure to stressful life events - influences the neurocognitive mechanisms underlying language acquisition is not well understood. Auditory statistical learning, or the ability to segment a continuous auditory stream based on its statistical properties, develops during early infancy and is one mechanism thought to underlie language learning. The present study used an event-related potential (ERP) paradigm to test whether maternal stress, adjusting for socioeconomic variables (e.g., family income, maternal education), was associated with neurocognitive processes underlying statistical learning in a sample of 26-month-old children (n = 23) from predominantly low- to middle-income backgrounds. Event-related potentials were recorded while children listened to a continuous stream of tri-tone "words" in which tone elements varied in transitional probability. "Tone-words" were presented in random order, such that Tone 1 always predicted Tones 2 and 3 (transitional probability for Tone 3 = 1.0), but Tone 1 itself appeared unpredictably. A larger P2 amplitude was observed in response to Tone 3 compared to Tone 1, demonstrating that children implicitly tracked differences in transitional probabilities during passive listening. Maternal reports of stress at 26 months, adjusting for SES, were negatively associated with the difference in P2 amplitude between Tones 1 and 3. These findings suggest that maternal stress, within a low-SES context, is associated with the manner in which children process statistical properties of auditory input.
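The transitional-probability contrast in this kind of paradigm (within-word TP = 1.0 vs. a lower TP at word boundaries) can be made concrete with a short sketch; the tone labels and toy stream below are invented for illustration, not the study's actual stimuli:

```python
from collections import Counter

def transitional_probabilities(stream):
    """Estimate P(next | current) for each adjacent pair in a sequence."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Toy stream of tri-tone "words" (A1-A2-A3, B1-B2-B3, C1-C2-C3): within a
# word, each tone fully predicts the next (TP = 1.0); across a word boundary
# the following word is unpredictable, so the TP is lower.
stream = ["A1", "A2", "A3", "B1", "B2", "B3", "A1", "A2", "A3", "C1", "C2", "C3"]
tps = transitional_probabilities(stream)
print(tps[("A1", "A2")])  # 1.0 (within-word transition)
print(tps[("A3", "B1")])  # 0.5 (word boundary: A3 is followed by B1 or C1)
```

A learner sensitive only to these pairwise statistics can locate word boundaries as the dips in transitional probability.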
Affiliation(s)
- Lara J Pierce
- Department of Pediatrics, Division of Developmental Medicine, Boston Children's Hospital, 1 Autumn Street, Boston, MA 02115, United States; Harvard Medical School, 25 Shattuck St., Boston, MA 02115, United States.
- Erin Carmody Tague
- Department of Pediatrics, Division of Developmental Medicine, Boston Children's Hospital, 1 Autumn Street, Boston, MA 02115, United States.
- Charles A Nelson
- Department of Pediatrics, Division of Developmental Medicine, Boston Children's Hospital, 1 Autumn Street, Boston, MA 02115, United States; Harvard Medical School, 25 Shattuck St., Boston, MA 02115, United States; Harvard Graduate School of Education, 13 Appian Way, Cambridge, MA 02138, United States.
|
13
|
Transitional probabilities and expectation for word length impact verbal statistical learning. ACTA PSYCHOLOGICA SINICA 2021. [DOI: 10.3724/sp.j.1041.2021.00565] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
14
|
García-Sierra A, Ramírez-Esparza N, Wig N, Robertson D. Language learning as a function of infant directed speech (IDS) in Spanish: Testing neural commitment using the positive-MMR. BRAIN AND LANGUAGE 2021; 212:104890. [PMID: 33307333 DOI: 10.1016/j.bandl.2020.104890] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/03/2020] [Revised: 10/27/2020] [Accepted: 11/17/2020] [Indexed: 06/12/2023]
Abstract
Spanish-English bilingual families (N = 17) were recruited to assess the association between infant directed speech (IDS) in Spanish and their degree of neural commitment to the Spanish language. IDS was assessed by extracting the caregivers' Vowel Space Area (VSA) from recordings of a storybook reading task done at home. Infants' neural commitment was assessed by extracting the positive mismatch brain response (positive-MMR), an event-related potential (ERP) thought to be indicative of higher attentional processes and early language commitment. A linear mixed model analysis demonstrated that caregivers' VSA predicted the amplitude of the positive-MMR in response to a native speech contrast (Spanish), but not to a non-native speech contrast (Chinese), even after holding other predictors constant (i.e., socioeconomic status, infants' age, and fundamental frequency). Our findings provide support for the view that the quality of language exposure fosters language learning, and that this beneficial relationship extends to the bilingual population.
Affiliation(s)
- Adrián García-Sierra
- Speech, Language, and Hearing Sciences, University of Connecticut, 2 Alethia Dr. Unit 1085, Storrs, CT 06269, USA; Connecticut Institute for the Brain and Cognitive Science, University of Connecticut, 337 Mansfield Rd Unit 1272, Storrs, CT 06269, USA.
- Nairán Ramírez-Esparza
- Department of Psychological Sciences, University of Connecticut, 406 Babbidge Rd, Unit 1020, Storrs, CT 06269, USA.
- Noelle Wig
- Speech, Language, and Hearing Sciences, University of Connecticut, 2 Alethia Dr. Unit 1085, Storrs, CT 06269, USA; Connecticut Institute for the Brain and Cognitive Science, University of Connecticut, 337 Mansfield Rd Unit 1272, Storrs, CT 06269, USA.
- Dylan Robertson
- Speech, Language, and Hearing Sciences, University of Connecticut, 2 Alethia Dr. Unit 1085, Storrs, CT 06269, USA.
|
15
|
Antovich DM, Graf Estes K. One language or two? Navigating cross-language conflict in statistical word segmentation. Dev Sci 2020; 23:e12960. [PMID: 32145042 DOI: 10.1111/desc.12960] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2019] [Revised: 01/22/2020] [Accepted: 02/21/2020] [Indexed: 11/28/2022]
Abstract
Bilingual infants must navigate the similarities and differences between their languages to achieve native proficiency in childhood. Bilinguals learning to find individual words in fluent speech face the possibility of conflicting cues to word boundaries across their languages. Despite this challenge, bilingual infants typically begin to segment and learn words in both languages around the same time as monolinguals. It is possible that early bilingual experience may support infants' abilities to track regularities relevant for word segmentation separately across their languages. In a dual speech stream statistical word segmentation task, we assessed whether 16-month-old infants could track syllable co-occurrence regularities in two artificial languages despite conflicting information across the languages. We found that bilingual, but not monolingual, infants were able to segment the dual speech streams using statistical regularities. Although the two language groups did not differ on secondary measures of cognitive and linguistic development, bilingual infants' real-world experience with bilingual speakers was predictive of their performance in the dual language statistical segmentation task.
|
16
|
Sirri L, Linnert S, Reid V, Parise E. Speech Intonation Induces Enhanced Face Perception in Infants. Sci Rep 2020; 10:3225. [PMID: 32081944 PMCID: PMC7035392 DOI: 10.1038/s41598-020-60074-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2019] [Accepted: 02/05/2020] [Indexed: 11/21/2022] Open
Abstract
Infants' preference for faces with direct compared to averted eye gaze, and for infant-directed over adult-directed speech, reflects early sensitivity to social communication. Here, we studied whether infant-directed speech (IDS) could affect the processing of a face with direct gaze in 4-month-olds. In a new ERP paradigm, the word 'hello' was uttered either in IDS or adult-directed speech (ADS), followed by an upright or inverted face. We show that the face-specific N290 ERP component was larger when faces were preceded by IDS relative to ADS. Crucially, this effect was specific to upright faces, whereas inverted faces preceded by IDS elicited larger attention-related P1 and Nc components. These results suggest that IDS generates communicative expectations in infants. When such expectations are met by a following social stimulus - an upright face - infants are already prepared to process it. When the stimulus is a non-social one - an inverted face - IDS merely increases general attention.
Affiliation(s)
- Louah Sirri
- Department of Education, Manchester Metropolitan University, Manchester, UK.
- Department of Psychology, Lancaster University, Lancaster, UK.
- Szilvia Linnert
- Department of Psychology, Lancaster University, Lancaster, UK
- Vincent Reid
- Department of Psychology, Lancaster University, Lancaster, UK
- School of Psychology, University of Waikato, Hamilton, New Zealand
- Eugenio Parise
- Department of Psychology, Lancaster University, Lancaster, UK
|
17
|
Statistical learning for vocal sequence acquisition in a songbird. Sci Rep 2020; 10:2248. [PMID: 32041978 PMCID: PMC7010765 DOI: 10.1038/s41598-020-58983-8] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2018] [Accepted: 01/17/2020] [Indexed: 01/31/2023] Open
Abstract
Birdsong is a learned communicative behavior that consists of discrete acoustic elements (“syllables”) that are sequenced in a controlled manner. While the learning of the acoustic structure of syllables has been extensively studied, relatively little is known about sequence learning in songbirds. Statistical learning could contribute to the acquisition of vocal sequences, and we investigated the nature and extent of sequence learning at various levels of song organization in the Bengalese finch, Lonchura striata var. domestica. We found that, under semi-natural conditions, pupils (sons) significantly reproduced the sequence statistics of their tutor’s (father’s) songs at multiple levels of organization (e.g., syllable repertoire, prevalence, and transitions). For example, the probabilities of syllable transitions at “branch points” (relatively complex sequences that are followed by multiple types of transitions) were significantly correlated between the songs of tutors and pupils. We confirmed the contribution of learning to sequence similarities between fathers and sons by experimentally tutoring juvenile Bengalese finches with the songs of unrelated tutors. We also discovered that the extent and fidelity of sequence similarities between tutors and pupils were significantly predicted by the prevalence of sequences in the tutor’s song and that distinct types of sequence modifications (e.g., syllable additions or deletions) followed distinct patterns. Taken together, these data provide compelling support for the role of statistical learning in vocal production learning and identify factors that could modulate the extent of vocal sequence learning.
|
18
|
Kostilainen K, Partanen E, Mikkola K, Wikström V, Pakarinen S, Fellman V, Huotilainen M. Neural processing of changes in phonetic and emotional speech sounds and tones in preterm infants at term age. Int J Psychophysiol 2020; 148:111-118. [DOI: 10.1016/j.ijpsycho.2019.10.009] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2018] [Revised: 09/10/2019] [Accepted: 10/23/2019] [Indexed: 10/25/2022]
|
19
|
Snijders TM, Benders T, Fikkert P. Infants Segment Words from Songs-An EEG Study. Brain Sci 2020; 10:E39. [PMID: 31936586 PMCID: PMC7017257 DOI: 10.3390/brainsci10010039] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2019] [Revised: 12/25/2019] [Accepted: 01/06/2020] [Indexed: 12/15/2022] Open
Abstract
Children's songs are omnipresent and highly attractive stimuli in infants' input. Previous work suggests that infants process linguistic-phonetic information from simplified sung melodies. The present study investigated whether infants learn words from ecologically valid children's songs. Testing 40 Dutch-learning 10-month-olds in a familiarization-then-test electroencephalography (EEG) paradigm, this study asked whether infants can segment repeated target words embedded in songs during familiarization and subsequently recognize those words in continuous speech in the test phase. To replicate previous speech work and compare segmentation across modalities, infants participated in both song and speech sessions. Results showed a positive event-related potential (ERP) familiarity effect to the final compared to the first target occurrences during both song and speech familiarization. No evidence was found for word recognition in the test phase following either song or speech. Comparisons across the stimuli of the present and a comparable previous study suggested that acoustic prominence and speech rate may have contributed to the polarity of the ERP familiarity effect and its absence in the test phase. Overall, the present study provides evidence that 10-month-old infants can segment words embedded in songs, and it raises questions about the acoustic and other factors that enable or hinder infant word segmentation from songs and speech.
Affiliation(s)
- Tineke M. Snijders
- Language Development Department, Max Planck Institute for Psycholinguistics, 6500 Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 Nijmegen, The Netherlands;
- Titia Benders
- Department of Linguistics, Macquarie University, North Ryde 2109, Australia
- Paula Fikkert
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, 6500 Nijmegen, The Netherlands;
- Centre for Language Studies, Radboud University, 6500 Nijmegen, The Netherlands
|
20
|
Newborn infants differently process adult directed and infant directed speech. Int J Psychophysiol 2020; 147:107-112. [DOI: 10.1016/j.ijpsycho.2019.10.011] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2019] [Revised: 10/25/2019] [Accepted: 10/28/2019] [Indexed: 01/07/2023]
|
21
|
Suppanen E, Huotilainen M, Ylinen S. Rhythmic structure facilitates learning from auditory input in newborn infants. Infant Behav Dev 2019; 57:101346. [DOI: 10.1016/j.infbeh.2019.101346] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2019] [Revised: 07/31/2019] [Accepted: 08/01/2019] [Indexed: 02/01/2023]
|
22
|
Hoemann K, Xu F, Barrett LF. Emotion words, emotion concepts, and emotional development in children: A constructionist hypothesis. Dev Psychol 2019; 55:1830-1849. [PMID: 31464489 PMCID: PMC6716622 DOI: 10.1037/dev0000686] [Citation(s) in RCA: 88] [Impact Index Per Article: 17.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
In this article, we integrate two constructionist approaches - the theory of constructed emotion and rational constructivism - to introduce several novel hypotheses for understanding emotional development. We first discuss the hypothesis that emotion categories are abstract and conceptual: their instances share a goal-based function in a particular context but are highly variable in their affective, physical, and perceptual features. Next, we discuss the possibility that emotional development is the process of developing emotion concepts, and that emotion words may be a critical part of this process. We hypothesize that infants and children learn emotion categories the way they learn other abstract conceptual categories - by observing others use the same emotion word to label highly variable events. Finally, we hypothesize that emotional development can be understood as a concept-construction problem: a child becomes capable of experiencing and perceiving emotion only when her brain develops the capacity to assemble ad hoc, situated emotion concepts for the purposes of guiding behavior and giving meaning to sensory inputs. Specifically, we offer a predictive processing account of emotional development.
Affiliation(s)
- Katie Hoemann
- Department of Psychology, Northeastern University, Boston, MA
- Fei Xu
- Department of Psychology, University of California Berkeley, Berkeley, CA
- Lisa Feldman Barrett
- Department of Psychology, Northeastern University, Boston, MA
- Department of Psychiatry, Massachusetts General Hospital, Boston, MA
- Martinos Center for Biomedical Imaging, Charlestown, MA
|
23
|
Leo V, Sihvonen AJ, Linnavalli T, Tervaniemi M, Laine M, Soinila S, Särkämö T. Cognitive and neural mechanisms underlying the mnemonic effect of songs after stroke. NEUROIMAGE-CLINICAL 2019; 24:101948. [PMID: 31419766 PMCID: PMC6706631 DOI: 10.1016/j.nicl.2019.101948] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/20/2018] [Revised: 04/05/2019] [Accepted: 07/19/2019] [Indexed: 01/28/2023]
Abstract
Sung melody provides a mnemonic cue that can enhance the acquisition of novel verbal material in healthy subjects. Recent evidence suggests that stroke patients, especially those with mild aphasia, can also learn and recall novel narrative stories better when they are presented in sung rather than spoken format. Extending this finding, the present study explored the cognitive mechanisms underlying this effect by determining whether learning and recall of novel sung vs. spoken stories show a differential pattern of serial position effects (SPEs) and chunking effects in non-aphasic and aphasic stroke patients (N = 31) studied 6 months post-stroke. The structural neural correlates of these effects were also explored using voxel-based morphometry (VBM) and deterministic tractography (DT) analyses of structural MRI data. Non-aphasic patients showed more stable recall with reduced SPEs in the sung than spoken task, which was coupled with greater volume and integrity (indicated by fractional anisotropy, FA) of the left arcuate fasciculus. In contrast, compared to non-aphasic patients, the aphasic patients showed a larger recency effect (better recall of the last vs. middle part of the story) and enhanced chunking (larger units of correctly recalled consecutive items) in the sung than spoken task. In aphasics, the enhanced chunking and better recall of the middle verse in the sung vs. spoken task also correlated with better ability to perceive emotional prosody in speech. Neurally, the sung > spoken recency effect in aphasic patients was coupled with greater grey matter volume in a bilateral network of temporal, frontal, and parietal regions, as well as greater volume of the right inferior fronto-occipital fasciculus (IFOF). These results provide novel cognitive and neurobiological insight into how a repetitive sung melody can function as a verbal mnemonic aid after stroke.
Highlights: Non-aphasic stroke patients show more stable recall of sung than spoken stories. Aphasic patients show larger recency and chunking effects for sung vs. spoken stories. The left dorsal pathway mediates better recall of sung stories in non-aphasics; the right ventral pathway does so in aphasics. Large-scale bilateral cortical networks are linked to musical mnemonics in aphasia.
Affiliation(s)
- Vera Leo
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
- Aleksi J Sihvonen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland; Department of Neurosciences, Faculty of Medicine, University of Helsinki, Finland
- Tanja Linnavalli
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
- Mari Tervaniemi
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland; CICERO Learning, University of Helsinki, Finland
- Matti Laine
- Department of Psychology, Åbo Akademi University, Turku, Finland
- Seppo Soinila
- Division of Clinical Neurosciences, Turku University Hospital, Department of Neurology, University of Turku, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland.
|
24
|
Daikoku T. Neurophysiological Markers of Statistical Learning in Music and Language: Hierarchy, Entropy, and Uncertainty. Brain Sci 2018; 8:E114. [PMID: 29921829 PMCID: PMC6025354 DOI: 10.3390/brainsci8060114] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2018] [Revised: 06/14/2018] [Accepted: 06/18/2018] [Indexed: 01/07/2023] Open
Abstract
Statistical learning (SL) is learning based on the transitional probabilities embedded in sequential phenomena such as music and language. It has been considered an implicit and domain-general mechanism that is innate in the human brain and that functions independently of the intention to learn and of awareness of what has been learned. SL is an interdisciplinary notion that incorporates information technology, artificial intelligence, musicology, and linguistics, as well as psychology and neuroscience. A body of recent studies has suggested that SL can be reflected in neurophysiological responses within the framework of information theory. This paper reviews a range of work on SL in adults and children that suggests overlapping and independent neural correlates in music and language, and that indicates how SL can be impaired. Furthermore, this article discusses the relationships between the order of transitional probabilities (TPs) (i.e., the hierarchy of local statistics) and entropy (i.e., global statistics) in the SL strategies of the human brain; argues for the importance of information-theoretical approaches to understanding domain-general, higher-order, and global SL covering both real-world music and language; and proposes promising approaches for applications in therapy and pedagogy from the perspectives of psychology, neuroscience, computational studies, musicology, and linguistics.
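As a minimal illustration of this information-theoretic framing (a sketch, not code from the paper): the local TPs of a given order can be summarised by a single conditional-entropy value, the "global statistic" the abstract refers to. A fully predictable sequence yields zero entropy; an irregular one yields more.

```python
import math
from collections import Counter

def transition_entropy(stream, order=1):
    """Average Shannon entropy (bits) of the next-element distribution,
    conditioned on the preceding `order` elements."""
    ctx_next = Counter(
        (tuple(stream[i:i + order]), stream[i + order])
        for i in range(len(stream) - order)
    )
    ctx_totals = Counter()
    for (ctx, _), n in ctx_next.items():
        ctx_totals[ctx] += n
    total = sum(ctx_totals.values())
    entropy = 0.0
    for (ctx, _), n in ctx_next.items():
        p_ctx = ctx_totals[ctx] / total   # weight of this context
        p_next = n / ctx_totals[ctx]      # P(next | context)
        entropy += p_ctx * (-p_next * math.log2(p_next))
    return entropy

# A strict alternation is fully predictable; an irregular sequence is not.
print(transition_entropy(list("abababab")))              # 0.0
print(round(transition_entropy(list("aabbabba")), 2))    # 0.96
```

Raising `order` corresponds to the "hierarchy of local statistics" the review discusses: higher-order models condition on longer contexts and can resolve uncertainty that a first-order model cannot.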
Affiliation(s)
- Tatsuya Daikoku
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany.
|
25
|
Huotilainen M, Tervaniemi M. Planning music-based amelioration and training in infancy and childhood based on neural evidence. Ann N Y Acad Sci 2018; 1423:146-154. [PMID: 29727038 DOI: 10.1111/nyas.13655] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2017] [Revised: 02/01/2018] [Accepted: 02/06/2018] [Indexed: 11/30/2022]
Abstract
Music-based amelioration and training of the developing auditory system has a long tradition, and recent neuroscientific evidence supports using music in this manner. Here, we present the available evidence showing that various music-related activities result in positive changes in brain structure and function that are helpful for auditory cognitive processes in everyday life, for individuals with typical neural development and especially for individuals with hearing, learning, attention, or other deficits that may compromise auditory processing. We also compare different types of music-based training and show how their effects have been investigated with neural methods. Finally, we take a critical position on the multitude of error sources found in amelioration and training studies and on publication bias in the field. We discuss future improvements on these issues and their potential results at the neural and behavioral levels in infants and children, for the advancement of the field and for a more complete understanding of the possibilities and significance of such training.
Affiliation(s)
- Minna Huotilainen
- Cognitive Brain Research Unit and CICERO Learning Network, University of Helsinki, Helsinki, Finland
- Mari Tervaniemi
- Cognitive Brain Research Unit and CICERO Learning Network, University of Helsinki, Helsinki, Finland
|
26
|
Leo V, Sihvonen AJ, Linnavalli T, Tervaniemi M, Laine M, Soinila S, Särkämö T. Sung melody enhances verbal learning and recall after stroke. Ann N Y Acad Sci 2018; 1423:296-307. [PMID: 29542823 DOI: 10.1111/nyas.13624] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2017] [Revised: 12/18/2017] [Accepted: 12/22/2017] [Indexed: 01/20/2023]
Abstract
Coupling novel verbal material with a musical melody can potentially aid its learning and recall in healthy subjects, but this has never been systematically studied in stroke patients with cognitive deficits. In a counterbalanced design, we presented novel verbal material (short narrative stories) in both spoken and sung formats to stroke patients at the acute poststroke stage and 6 months poststroke. The task comprised three learning trials and a delayed recall trial. Memory performance on the spoken and sung tasks did not differ at the acute stage, whereas sung stories were learned and recalled significantly better than spoken stories at the 6-month poststroke stage. Interestingly, this pattern of results was evident especially in patients with mild aphasia, in whom the learning of sung versus spoken stories improved more from the acute to the 6-month stage compared with nonaphasic patients. Overall, these findings suggest that singing could be used as a mnemonic aid in the learning of novel verbal material in later stages of recovery after stroke.
Affiliation(s)
- Vera Leo
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Aleksi J Sihvonen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Faculty of Medicine, University of Turku, Turku, Finland
- Tanja Linnavalli
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Mari Tervaniemi
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- CICERO Learning, University of Helsinki, Helsinki, Finland
- Matti Laine
- Department of Psychology, Åbo Akademi University, Turku, Finland
- Seppo Soinila
- Division of Clinical Neurosciences, Department of Neurology, University of Turku, Turku University Hospital, Turku, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
|
27
|
Kakouros S, Salminen N, Räsänen O. Making predictable unpredictable with style - Behavioral and electrophysiological evidence for the critical role of prosodic expectations in the perception of prominence in speech. Neuropsychologia 2018; 109:181-199. [PMID: 29247667 DOI: 10.1016/j.neuropsychologia.2017.12.011] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2017] [Revised: 12/04/2017] [Accepted: 12/05/2017] [Indexed: 11/26/2022]
Abstract
Perceptual prominence of linguistic units such as words has been earlier connected to the concepts of predictability and attentional orientation. One hypothesis is that low-probability prosodic or lexical content is perceived as prominent due to the surprisal and high information value associated with the stimulus. However, the existing behavioral studies have used stimulus manipulations that follow or violate typical linguistic patterns present in the listeners' native language, i.e., assuming that the listeners have already established a model for acceptable prosodic patterns in the language. In the present study, we investigated whether prosodic expectations and the resulting subjective impression of prominence is affected by brief statistical adaptation to suprasegmental acoustic features in speech, also in the case where the prosodic patterns do not necessarily follow language-typical marking for prominence. We first exposed listeners to five minutes of speech with uneven distributions of falling and rising fundamental frequency (F0) trajectories on sentence-final words, and then tested their judgments of prominence on a set of new utterances. The results show that the probability of the F0 trajectory affects the perception of prominence, a less frequent F0 trajectory making a word more prominent independently of the absolute direction of F0 change. In the second part of the study, we conducted EEG-measurements on a set of new subjects listening to similar utterances with predominantly rising or falling F0 on sentence-final words. Analysis of the resulting event-related potentials (ERP) reveals a significant difference in N200 and N400 ERP-component amplitudes between standard and deviant prosody, again independently of the F0 direction and the underlying lexical content. 
Since the N400 has previously been associated with semantic processing of stimuli, this suggests that listeners implicitly track probabilities at the suprasegmental level and that the predictability of a prosodic pattern during a word has an impact on the semantic processing of that word. Overall, the study suggests that prosodic markers of prominence are at least partially driven by the statistical structure of recently perceived speech; prominence perception could therefore be based on statistical learning mechanisms similar to those observed in early word learning, but operating at the level of suprasegmental acoustic features.
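The surprisal account invoked here can be made concrete with a minimal sketch (illustrative only: the contour labels and counts below are invented, whereas the study's actual exposure phase was five minutes of natural speech):

```python
import math
from collections import Counter

def surprisal_bits(stream, event):
    """Surprisal -log2 P(event) under the empirical distribution of `stream`."""
    counts = Counter(stream)
    return -math.log2(counts[event] / len(stream))

# After exposure dominated by falling sentence-final F0 contours, a rising
# contour is low-probability, carries more information, and - by the
# hypothesis above - should be perceived as more prominent. The direction
# itself does not matter; only its relative frequency does.
exposure = ["fall"] * 7 + ["rise"]
print(surprisal_bits(exposure, "rise"))            # 3.0 bits (probability 1/8)
print(round(surprisal_bits(exposure, "fall"), 2))  # 0.19 bits (probability 7/8)
```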
Affiliation(s)
- Sofoklis Kakouros
- Department of Signal Processing and Acoustics, Aalto University, P.O. Box 12200, FI-00076, Finland.
- Nelli Salminen
- Department of Signal Processing and Acoustics, Aalto University, P.O. Box 12200, FI-00076, Finland; Aalto Behavioral Laboratory, Aalto Neuroimaging, Aalto University, FI-00076, Finland.
- Okko Räsänen
- Department of Signal Processing and Acoustics, Aalto University, P.O. Box 12200, FI-00076, Finland.
|
28
|
Healthy full-term infants' brain responses to emotionally and linguistically relevant sounds using a multi-feature mismatch negativity (MMN) paradigm. Neurosci Lett 2018; 670:110-115. [PMID: 29374541 DOI: 10.1016/j.neulet.2018.01.039] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2017] [Revised: 12/08/2017] [Accepted: 01/22/2018] [Indexed: 11/24/2022]
Abstract
We evaluated the feasibility of a multi-feature mismatch negativity (MMN) paradigm for studying auditory processing in healthy newborns. The aim was to examine the automatic change-detection and processing of semantic and emotional information in speech in newborns. Brain responses of 202 healthy newborns were recorded with a multi-feature paradigm including the Finnish bi-syllabic pseudo-word /ta-ta/ as a standard stimulus, six linguistically relevant deviant stimuli, and three emotionally relevant stimuli (happy, sad, angry). Clear responses to emotional sounds were found already in the early latency window of 100-200 ms, whereas responses to linguistically relevant minor changes and to emotional stimuli in the later latency window of 300-500 ms did not reach significance. Moreover, a significant interaction between gender and emotional stimuli was found in the early latency window. Further studies using multi-feature paradigms with linguistic and emotional stimuli in newborns are needed, especially studies including follow-ups, which would enable assessment of the predictive value of early differences between subjects.
|
29
|
Gredebäck G, Astor K, Fawcett C. Gaze Following Is Not Dependent on Ostensive Cues: A Critical Test of Natural Pedagogy. Child Dev 2018; 89:2091-2098. [PMID: 29315501 DOI: 10.1111/cdev.13026] [Citation(s) in RCA: 38] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
The theory of natural pedagogy stipulates that infants follow gaze because they are sensitive to the communicative intent of others. According to this theory, gaze following should be present if, and only if, it is accompanied by at least one of a set of specific ostensive cues. The current article demonstrates gaze following in a range of contexts, both with and without expressions of communicative intent, in a between-subjects design with a large sample of 6-month-old infants (n = 94). These findings conceptually replicate prior results from Szufnarowska et al. (2014) and falsify a central pillar of natural pedagogy theory. The results suggest that there are opportunities to learn from others' gaze independently of their displayed communicative intent.
Collapse
|
30
|
Neural processing of musical meter in musicians and non-musicians. Neuropsychologia 2017; 106:289-297. [DOI: 10.1016/j.neuropsychologia.2017.10.007] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2017] [Revised: 10/01/2017] [Accepted: 10/03/2017] [Indexed: 11/17/2022]
|
31
|
François C, Teixidó M, Takerkart S, Agut T, Bosch L, Rodriguez-Fornells A. Enhanced Neonatal Brain Responses To Sung Streams Predict Vocabulary Outcomes By Age 18 Months. Sci Rep 2017; 7:12451. [PMID: 28963569 PMCID: PMC5622081 DOI: 10.1038/s41598-017-12798-2] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2017] [Accepted: 09/15/2017] [Indexed: 12/21/2022] Open
Abstract
Words and melodies are among the basic elements infants are able to extract from the auditory input early in life. Whether melodic cues contained in songs can facilitate word-form extraction immediately after birth had remained unexplored. Here, we provide converging neural and computational evidence of the early benefit of melodies for language acquisition. Twenty-eight neonates were tested on their ability to extract word-forms from continuous streams of sung and spoken syllabic sequences. We found different brain dynamics for sung and spoken streams and observed successful detection of word-form violations in the sung condition only. Furthermore, neonatal brain responses to sung streams predicted expressive vocabulary at 18 months, as demonstrated by multiple regression and cross-validation analyses. These findings suggest that early neural individual differences in prosodic speech processing might be a good indicator of later language outcomes and could be considered a relevant factor in the development of infants' language skills.
Collapse
Affiliation(s)
- Clément François
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute IDIBELL, L'Hospitalet de Llobregat, Barcelona, Spain.
- Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain.
- Institut de Recerca Pediàtrica Hospital Sant Joan de Déu, Barcelona, Spain.
- Maria Teixidó
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute IDIBELL, L'Hospitalet de Llobregat, Barcelona, Spain
- Sylvain Takerkart
- Aix Marseille Univ, CNRS, INT, Inst Neurosci Timone, Marseille, France
- Thaïs Agut
- Institut de Recerca Pediàtrica Hospital Sant Joan de Déu, Barcelona, Spain
- Department of Neonatology, Hospital Sant Joan de Déu, Barcelona, Spain
- Laura Bosch
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute IDIBELL, L'Hospitalet de Llobregat, Barcelona, Spain
- Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain
- Institut de Recerca Pediàtrica Hospital Sant Joan de Déu, Barcelona, Spain
- Institut de Neurociències, University of Barcelona, Barcelona, Spain
- Antoni Rodriguez-Fornells
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute IDIBELL, L'Hospitalet de Llobregat, Barcelona, Spain
- Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain
- Institució Catalana de Recerca i Estudis Avançats, ICREA, Barcelona, Spain
Collapse
|