1. Yang J, Nagaraj NK, Magimairaj BM. Audiovisual perception of interrupted speech by nonnative listeners. Atten Percept Psychophys 2024. PMID: 38886302. DOI: 10.3758/s13414-024-02909-3.
Abstract
The purpose of the present study was to examine the influence of visual cues in audiovisual perception of interrupted speech by nonnative English listeners and to identify the role of working memory, long-term memory retrieval, and vocabulary knowledge in audiovisual perception by nonnative listeners. The participants included 31 Mandarin-speaking English learners between 19 and 41 years of age. The perceptual stimuli were noise-filled, periodically interrupted AzBio and QuickSIN sentences presented with or without visual cues showing a male speaker uttering the sentences. In addition to sentence recognition, the listeners completed a semantic fluency task, verbal (operation span) and visuospatial (symmetry span) working memory tasks, and two vocabulary knowledge tests (Vocabulary Level Test and Lexical Test for Advanced Learners of English). The results revealed significantly better speech recognition in the audiovisual condition than in the audio-only condition, but the magnitude of the visual benefit was substantially attenuated for sentences with limited semantic context. The listeners' vocabulary size in English played a key role in the restoration of missing speech information and in audiovisual integration during the perception of interrupted speech. Meanwhile, the listeners' verbal working memory capacity played an important role in audiovisual integration, especially for the difficult stimuli with limited semantic context.
Affiliation(s)
- Jing Yang: Program of Communication Sciences and Disorders, University of Wisconsin-Milwaukee, Enderis Hall 873, P.O. Box 413, Milwaukee, WI 53201, USA
- Naveen K Nagaraj: Cognitive Hearing Science Lab, Utah State University, Logan, UT, USA
2. Bortolotti A, Padulo C, Conte N, Fairfield B, Palumbo R. Colored valence in a lexical decision task. Acta Psychol (Amst) 2024; 244:104172. PMID: 38324933. DOI: 10.1016/j.actpsy.2024.104172.
Abstract
Color influences behavior, from the simplest to the most complex, through controlled and more automatic information elaboration processes. Nonetheless, little is known about how and when these highly interconnected processes interact. This study investigates the interaction between controlled and automatic processes during the processing of color information in a lexical decision task. Participants discriminated stimuli presented in different colors (red, blue, green) as words or pseudowords. Results showed that while color did not affect the faster and more accurate recognition of words compared to pseudowords, performance was influenced when examining words and pseudowords separately. Pseudowords were recognized faster when presented in blue or red, suggesting a potential influence of evolutionary color preferences when processing is not guided by more controlled processes. With words, emotional enhancement effects were found, with a preference for green independent of valence. These results suggest that controlled and more automatic processes do interact when processing color information according to stimulus type and task.
Affiliation(s)
- Caterina Padulo: Department of Humanities, University of Naples "Federico II", Italy
- Nadia Conte: Department of Psychological, Health and Territorial Sciences, University of Chieti, Italy
- Beth Fairfield: Department of Humanities, University of Naples "Federico II", Italy
- Riccardo Palumbo: Department of Neuroscience and Imaging, University of Chieti, Italy
3. Crespo K, Vlach H, Kaushanskaya M. The effects of speaker and exemplar variability in children's cross-situational word learning. Psychon Bull Rev 2024. PMID: 38228967. DOI: 10.3758/s13423-023-02444-6.
Abstract
Cross-situational word learning (XSWL) - children's ability to learn words by tracking co-occurrence statistics of words and their referents over time - has been identified as a fundamental mechanism underlying lexical learning. However, it is unknown whether children can acquire new words when faced with variable input in XSWL paradigms, such as varying object exemplars and variable speakers. In the present study, we examine the separate and combined effects of exemplar and speaker variability on XSWL in typically developing English-speaking monolingual children. Results revealed that variability in speakers and exemplars did not facilitate or hinder XSWL performance. However, input that varied in both speakers and exemplars simultaneously did hinder children's word learning. Results from this work suggest that XSWL mechanisms may support categorization and generalization beyond word-object associations, but that accommodating multiple forms of variable input may incur costs. Overall, this research provides new theoretical insights into how fundamental mechanisms of word learning scale to more complex and naturalistic forms of input.
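The co-occurrence-tracking mechanism behind XSWL is easy to make concrete. Below is a minimal Python sketch of the idea, with invented pseudo-words, objects, and trials rather than the study's actual stimuli: each trial is referentially ambiguous on its own, but accumulating word-object co-occurrences across trials recovers the intended mappings.

```python
# Minimal sketch of cross-situational word learning (illustrative only):
# accumulate word-object co-occurrence counts over individually ambiguous
# trials, then map each word to its most frequent co-occurring object.
from collections import defaultdict

cooccurrence = defaultdict(lambda: defaultdict(int))

# Each invented trial pairs two spoken pseudo-words with two visible
# objects, with no signal about which word names which object.
trials = [
    ({"bosa", "gedi"}, {"ball", "dog"}),
    ({"bosa", "kuta"}, {"ball", "cup"}),
    ({"gedi", "kuta"}, {"dog", "cup"}),
]

for words, objects in trials:
    for word in words:
        for obj in objects:
            cooccurrence[word][obj] += 1

# Only the correct referent co-occurs with its word on every trial,
# so the argmax over counts recovers the mapping.
lexicon = {w: max(objs, key=objs.get) for w, objs in cooccurrence.items()}
print(lexicon)  # {'bosa': 'ball', 'gedi': 'dog', 'kuta': 'cup'} (key order may vary)
```

The manipulations in this study map naturally onto the sketch: speaker variability perturbs the word tokens, exemplar variability perturbs the object tokens, and both together make the co-occurrence statistics noisier to accumulate.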
Affiliation(s)
- Kimberly Crespo: Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA, USA
- Haley Vlach: Department of Educational Psychology, University of Wisconsin-Madison, Madison, WI, USA
- Margarita Kaushanskaya: Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, USA
4. Batterink LJ, Mulgrew J, Gibbings A. Rhythmically Modulating Neural Entrainment during Exposure to Regularities Influences Statistical Learning. J Cogn Neurosci 2024; 36:107-127. PMID: 37902580. DOI: 10.1162/jocn_a_02079.
Abstract
The ability to discover regularities in the environment, such as syllable patterns in speech, is known as statistical learning. Previous studies have shown that statistical learning is accompanied by neural entrainment, in which neural activity temporally aligns with repeating patterns over time. However, it is unclear whether these rhythmic neural dynamics play a functional role in statistical learning or whether they largely reflect the downstream consequences of learning, such as the enhanced perception of learned words in speech. To better understand this issue, we manipulated participants' neural entrainment during statistical learning using continuous rhythmic visual stimulation. Participants were exposed to a speech stream of repeating nonsense words while viewing either (1) a visual stimulus with a "congruent" rhythm that aligned with the word structure, (2) a visual stimulus with an incongruent rhythm, or (3) a static visual stimulus. Statistical learning was subsequently measured using both an explicit and implicit test. Participants in the congruent condition showed a significant increase in neural entrainment over auditory regions at the relevant word frequency, over and above effects of passive volume conduction, indicating that visual stimulation successfully altered neural entrainment within relevant neural substrates. Critically, during the subsequent implicit test, participants in the congruent condition showed an enhanced ability to predict upcoming syllables and stronger neural phase synchronization to component words, suggesting that they had gained greater sensitivity to the statistical structure of the speech stream relative to the incongruent and static groups. This learning benefit could not be attributed to strategic processes, as participants were largely unaware of the contingencies between the visual stimulation and embedded words. These results indicate that manipulating neural entrainment during exposure to regularities influences statistical learning outcomes, suggesting that neural entrainment may functionally contribute to statistical learning. Our findings encourage future studies using non-invasive brain stimulation methods to further understand the role of entrainment in statistical learning.
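The frequency-tagging logic behind this entrainment measure can be illustrated with a short simulation. This is a hedged sketch with made-up numbers, not the authors' analysis pipeline: with trisyllabic nonsense words presented at a constant syllable rate (say 4 Hz), learning predicts a spectral peak at the word rate (4/3 Hz) on top of the stimulus-driven syllable-rate peak.

```python
# Illustrative frequency-tagging analysis for neural entrainment.
# The "EEG" here is simulated: a syllable-rate response, a weaker
# word-rate response (the putative learning signature), plus noise.
import numpy as np

fs, dur = 250, 60                       # sampling rate (Hz), seconds
t = np.arange(fs * dur) / fs
syll_rate, word_rate = 4.0, 4.0 / 3.0   # trisyllabic words at 4 syll/s

rng = np.random.default_rng(1)
eeg = (np.sin(2 * np.pi * syll_rate * t)
       + 0.4 * np.sin(2 * np.pi * word_rate * t)
       + rng.standard_normal(t.size))

amplitude = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Read off the spectrum at the two tagged frequencies; with 60 s of
# data both fall exactly on FFT bins (resolution = 1/60 Hz).
for label, f in [("syllable rate", syll_rate), ("word rate", word_rate)]:
    peak = amplitude[np.argmin(np.abs(freqs - f))]
    print(f"{label} ({f:.2f} Hz): spectral amplitude {peak:.3f}")
```

A word-rate peak that exceeds the noise floor indicates sensitivity to the embedded word structure, which is the logic the congruent-condition result relies on.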
5. Mitchel AD, Lusk LG, Wellington I, Mook AT. Segmenting Speech by Mouth: The Role of Oral Prosodic Cues for Visual Speech Segmentation. Lang Speech 2023; 66:819-832. PMID: 36448317. DOI: 10.1177/00238309221137607.
Abstract
Adults are able to use visual prosodic cues in the speaker's face to segment speech. Furthermore, eye-tracking data suggest that learners will shift their gaze to the mouth during visual speech segmentation. Although these findings suggest that the mouth may be viewed more than the eyes or nose during visual speech segmentation, no study has examined the direct functional importance of individual features; thus, it is unclear which visual prosodic cues are important for word segmentation. In this study, we examined the impact of first removing (Experiment 1) and then isolating (Experiment 2) individual facial features on visual speech segmentation. Segmentation performance was above chance in all conditions except for when the visual display was restricted to the eye region (eyes only condition in Experiment 2). This suggests that participants were able to segment speech when they could visually access the mouth but not when the mouth was completely removed from the visual display, providing evidence that visual prosodic cues conveyed by the mouth are sufficient and likely necessary for visual speech segmentation.
Affiliation(s)
- Laina G Lusk: Bucknell University, USA; Children's Hospital of Philadelphia, USA
- Ian Wellington: Bucknell University, USA; University of Connecticut, USA
6. Dal Ben R, Prequero IT, Souza DDH, Hay JF. Speech Segmentation and Cross-Situational Word Learning in Parallel. Open Mind (Camb) 2023; 7:510-533. PMID: 37637304. PMCID: PMC10449405. DOI: 10.1162/opmi_a_00095.
Abstract
Language learners track conditional probabilities to find words in continuous speech and to map words and objects across ambiguous contexts. It remains unclear, however, whether learners can leverage the structure of the linguistic input to do both tasks at the same time. To explore this question, we combined speech segmentation and cross-situational word learning into a single task. In Experiment 1, when adults (N = 60) simultaneously segmented continuous speech and mapped the newly segmented words to objects, they demonstrated better performance than when either task was performed alone. However, when the speech stream had conflicting statistics, participants were able to correctly map words to objects, but were at chance level on speech segmentation. In Experiment 2, we used a more sensitive speech segmentation measure to find that adults (N = 35), exposed to the same conflicting speech stream, correctly identified non-words as such, but were still unable to discriminate between words and part-words. Again, mapping was above chance. Our study suggests that learners can track multiple sources of statistical information to find and map words to objects in noisy environments. It also prompts questions on how to effectively measure the knowledge arising from these learning experiences.
Affiliation(s)
- Rodrigo Dal Ben: Universidade Federal de São Carlos, São Carlos, São Paulo, Brazil
7. Alighieri C, Haeghebaert Y, Bettens K, Kissel I, D'haeseleer E, Meerschman I, Van Der Sanden R, Van Lierde K. Peer attitudes towards adolescents with speech disorders due to cleft lip and palate. Int J Pediatr Otorhinolaryngol 2023; 165:111447. PMID: 36701818. DOI: 10.1016/j.ijporl.2023.111447.
Abstract
BACKGROUND AND AIMS: Individuals with speech disorders are often judged more negatively than peers without speech disorders. Few studies have examined the attitudes of adolescents toward peers with speech disorders due to a cleft lip with or without a cleft of the palate (CL±P). Therefore, the aim of the present study was to investigate the attitudes of peers toward the speech of adolescents with CL±P.
METHOD: Seventy-eight typically developing adolescents (15-18 years; 26 boys, 52 girls) judged audio and audiovisual samples of two adolescents with CL±P on three attitude components: cognitive, affective, and behavioral. The listeners also scored the speakers' speech intelligibility. The study investigated whether the three attitudes were determined by the speech intelligibility or the appearance of an individual with CL±P, and further explored the influence of knowing someone with a cleft and of listener age and gender.
RESULTS: Speech intelligibility correlated significantly and positively with all three attitude components: attitudes were more positive when the speaker's speech intelligibility was higher. A different appearance due to a cleft lip did not lead to more negative attitudes. Furthermore, boys held more negative attitudes toward individuals with CL±P than girls.
CONCLUSION: This study provided additional evidence that peers hold more negative attitudes toward adolescents with less intelligible speech due to CL±P. Intervention should focus on shifting the cognitive, affective, and behavioral attitudes of peers in a more positive direction and on removing the stigma attached to a cleft. Further research is needed to verify these results.
Affiliation(s)
- Cassandra Alighieri: Department of Rehabilitation Sciences, Center for Speech and Language Sciences (CESLAS), Ghent University, Ghent, Belgium
- Ymke Haeghebaert: Department of Rehabilitation Sciences, Center for Speech and Language Sciences (CESLAS), Ghent University, Ghent, Belgium
- Kim Bettens: Department of Rehabilitation Sciences, Center for Speech and Language Sciences (CESLAS), Ghent University, Ghent, Belgium
- Imke Kissel: Department of Rehabilitation Sciences, Center for Speech and Language Sciences (CESLAS), Ghent University, Ghent, Belgium
- Evelien D'haeseleer: Department of Rehabilitation Sciences, Center for Speech and Language Sciences (CESLAS), Ghent University, Ghent, Belgium; Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium
- Iris Meerschman: Department of Rehabilitation Sciences, Center for Speech and Language Sciences (CESLAS), Ghent University, Ghent, Belgium
- Rani Van Der Sanden: Department of Rehabilitation Sciences, Center for Speech and Language Sciences (CESLAS), Ghent University, Ghent, Belgium
- Kristiane Van Lierde: Department of Rehabilitation Sciences, Center for Speech and Language Sciences (CESLAS), Ghent University, Ghent, Belgium; Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium; Department of Speech-Language Therapy and Audiology, University of Pretoria, Pretoria, South Africa
8. Finley S. Morphological cues as an aid to word learning: a cross-situational word learning study. J Cogn Psychol 2022. DOI: 10.1080/20445911.2022.2113087.
Affiliation(s)
- Sara Finley: Department of Psychology, Pacific Lutheran University, Tacoma, WA, USA
9. Stärk K, Kidd E, Frost RLA. Word Segmentation Cues in German Child-Directed Speech: A Corpus Analysis. Lang Speech 2022; 65:3-27. PMID: 33517856. PMCID: PMC8886305. DOI: 10.1177/0023830920979016.
Abstract
To acquire language, infants must learn to segment words from running speech. A significant body of experimental research shows that infants use multiple cues to do so; however, little research has comprehensively examined the distribution of such cues in naturalistic speech. We conducted a comprehensive corpus analysis of German child-directed speech (CDS) using data from the Child Language Data Exchange System (CHILDES) database, investigating the availability of word stress, transitional probabilities (TPs), and lexical and sublexical frequencies as potential cues for word segmentation. Seven hours of data (~15,000 words) were coded, representing around an average day of speech to infants. The analysis revealed that for 97% of words, primary stress was carried by the initial syllable, implicating stress as a reliable cue to word onset in German CDS. Word identity was also marked by TPs between syllables, which were higher within than between words, and higher for backwards than forwards transitions. Words followed a Zipfian-like frequency distribution, and over two-thirds of words (78%) were monosyllabic. Of the 50 most frequent words, 82% were function words, which accounted for 47% of word tokens in the entire corpus. Finally, 15% of all utterances comprised single words. These results give rich novel insights into the availability of segmentation cues in German CDS, and support the possibility that infants draw on multiple converging cues to segment their input. The data, which we make openly available to the research community, will help guide future experimental investigations on this topic.
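For readers unfamiliar with the transitional-probability (TP) measure, the toy computation below shows the forward and backward TPs described above; the mini-corpus is invented for illustration, whereas the paper's analysis covered roughly 15,000 words of CHILDES data.

```python
# Toy sketch of forward and backward transitional probabilities (TPs).
# forward TP(A -> B)  = count(AB) / count(A)
# backward TP(A -> B) = count(AB) / count(B)
from collections import Counter

def transitional_probabilities(syllables):
    pair_counts = Counter(zip(syllables, syllables[1:]))
    syll_counts = Counter(syllables)
    forward = {p: c / syll_counts[p[0]] for p, c in pair_counts.items()}
    backward = {p: c / syll_counts[p[1]] for p, c in pair_counts.items()}
    return forward, backward

# Invented "corpus": the word "ba-by" recurs, so its internal transition
# should be stronger than transitions that span word boundaries.
corpus = ["ba", "by", "go", "ba", "by", "eat", "go", "ba", "by"]
fwd, bwd = transitional_probabilities(corpus)
print(fwd[("ba", "by")])   # 1.0  -- within-word transition
print(fwd[("by", "go")])   # 0.33 -- between-word transition
print(bwd[("go", "ba")])   # 0.67 -- backward TPs can differ from forward
```

The corpus finding that backward TPs were higher than forward TPs corresponds to comparing these two dictionaries over the same syllable pairs.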
Affiliation(s)
- Katja Stärk: Language Development Department, Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
- Evan Kidd: Language Development Department, Max Planck Institute for Psycholinguistics, The Netherlands; Research School of Psychology, The Australian National University, Australia; ARC Centre of Excellence for the Dynamics of Language, Australia
- Rebecca L. A. Frost: Language Development Department, Max Planck Institute for Psycholinguistics, The Netherlands
10. Matzinger T, Fitch WT. Voice modulatory cues to structure across languages and species. Philos Trans R Soc Lond B Biol Sci 2021; 376:20200393. PMID: 34719253. PMCID: PMC8558770. DOI: 10.1098/rstb.2020.0393.
Abstract
Voice modulatory cues such as variations in fundamental frequency, duration and pauses are key factors for structuring vocal signals in human speech and vocal communication in other tetrapods. Voice modulation physiology is highly similar in humans and other tetrapods due to shared ancestry and shared functional pressures for efficient communication. This has led to similarly structured vocalizations across humans and other tetrapods. Nonetheless, in their details, structural characteristics may vary across species and languages. Because data concerning voice modulation in non-human tetrapod vocal production and especially perception are relatively scarce compared to human vocal production and perception, this review focuses on voice modulatory cues used for speech segmentation across human languages, highlighting comparative data where available. Cues that are used similarly across many languages may help indicate which cues may result from physiological or basic cognitive constraints, and which cues may be employed more flexibly and are shaped by cultural evolution. This suggests promising candidates for future investigation of cues to structure in non-human tetrapod vocalizations. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part I)'.
Affiliation(s)
- Theresa Matzinger: Department of Behavioral and Cognitive Biology, University of Vienna, 1030 Vienna, Austria; Department of English, University of Vienna, 1090 Vienna, Austria
- W. Tecumseh Fitch: Department of Behavioral and Cognitive Biology, University of Vienna, 1030 Vienna, Austria; Department of English, University of Vienna, 1090 Vienna, Austria
11. Ramos-Escobar N, Segura E, Olivé G, Rodriguez-Fornells A, François C. Oscillatory activity and EEG phase synchrony of concurrent word segmentation and meaning-mapping in 9-year-old children. Dev Cogn Neurosci 2021; 51:101010. PMID: 34461393. PMCID: PMC8403737. DOI: 10.1016/j.dcn.2021.101010.
Abstract
When learning a new language, one must segment words from continuous speech and associate them with meanings. These complex processes can be boosted by attentional mechanisms triggered by multi-sensory information. Previous electrophysiological studies suggest that brain oscillations are sensitive to different hierarchical complexity levels of the input, making them a plausible neural substrate for speech parsing. Here, we investigated the functional role of brain oscillations during concurrent speech segmentation and meaning acquisition in sixty 9-year-old children. We collected EEG data during an audio-visual statistical learning task in which children were exposed to a learning condition with consistent word-picture associations and a random condition with inconsistent word-picture associations, before being tested on their ability to recall words and word-picture associations. We capitalized on the tendency of neural activity to align to the rate of an external rhythmic stimulus to explore modulations of neural synchronization, and of phase synchronization between electrodes, during multi-sensory word learning. Results showed enhanced power at both the word and syllable rates, and increased EEG phase synchronization between frontal and occipital regions, in the learning condition compared to the random condition. These findings suggest that multi-sensory cueing and attentional mechanisms play an essential role in children's successful word learning.
Affiliation(s)
- Neus Ramos-Escobar: Dept. of Cognition, Development and Educational Science, Institute of Neuroscience, University of Barcelona, L'Hospitalet de Llobregat, Barcelona 08097, Spain; Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona 08097, Spain
- Emma Segura: Dept. of Cognition, Development and Educational Science, Institute of Neuroscience, University of Barcelona, L'Hospitalet de Llobregat, Barcelona 08097, Spain; Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona 08097, Spain
- Guillem Olivé: Dept. of Cognition, Development and Educational Science, Institute of Neuroscience, University of Barcelona, L'Hospitalet de Llobregat, Barcelona 08097, Spain; Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona 08097, Spain
- Antoni Rodriguez-Fornells: Dept. of Cognition, Development and Educational Science, Institute of Neuroscience, University of Barcelona, L'Hospitalet de Llobregat, Barcelona 08097, Spain; Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona 08097, Spain; Catalan Institution for Research and Advanced Studies, ICREA, Barcelona, Spain
12. Nagaraj NK, Yang J, Robinson TL, Magimairaj BM. Auditory closure with visual cues: Relationship with working memory and semantic memory. JASA Express Lett 2021; 1:095202. PMID: 36154207. DOI: 10.1121/10.0006297.
Abstract
The role of working memory (WM) and long-term lexical-semantic memory (LTM) in the perception of interrupted speech, with and without visual cues, was studied in 29 native English speakers. Perceptual stimuli were periodically interrupted sentences filled with speech noise. The memory measures included an LTM semantic fluency task, a verbal WM task, and a visuo-spatial WM task. Whereas perceptual performance in the audio-only condition demonstrated a significant positive association with listeners' semantic fluency, perception in the audio-video mode did not. These results imply that when listening to distorted speech without visual cues, listeners rely on lexical-semantic retrieval from LTM to restore missing speech information.
Affiliation(s)
- Naveen K Nagaraj: Cognitive Hearing Science Lab, Department of Communicative Disorders and Deaf Education, Utah State University, Logan, Utah 84322, USA
- Jing Yang: Department of Communication Sciences and Disorders, University of Wisconsin-Milwaukee, P.O. Box 413, Milwaukee, Wisconsin 53201, USA
- Tanner L Robinson: Cognitive Hearing Science Lab, Department of Communicative Disorders and Deaf Education, Utah State University, Logan, Utah 84322, USA
- Beula M Magimairaj: Cognitive Hearing Science Lab, Department of Communicative Disorders and Deaf Education, Utah State University, Logan, Utah 84322, USA
13. Röttger E, Zhao F, Gaschler R, Haider H. Why Does Dual-Tasking Hamper Implicit Sequence Learning? J Cogn 2021; 4:1. PMID: 33506167. PMCID: PMC7792471. DOI: 10.5334/joc.136.
Abstract
Research on the limitations of dual-tasking might profit from using setups with a predictable sequence of stimuli and responses and assessing the acquisition of this sequence. Detrimental effects of dual-tasking on implicit sequence learning in the serial reaction time task (SRTT; Nissen & Bullemer, 1987) - when paired with an uncorrelated task - have been attributed to participants' lack of separating the streams of events in either task. Assuming that co-occurring events are automatically integrated, we reasoned that participants could need to first learn which events co-occur, before they can acquire sequence knowledge. In the training phase, we paired an 8-element visual-manual SRTT with an auditory-vocal task. Afterwards, we tested under single-tasking conditions whether SRTT sequence knowledge had been acquired. By applying different variants of probabilistic SRTT-tone pairings across three experiments, we tested what type of predictive relationship was needed to preserve sequence learning. In Experiment 1, where half of the SRTT-elements were paired to 100% with one specific tone and the other half randomly, only the fixedly paired elements were learned. Yet, no sequence learning was found when each of the eight SRTT-elements was paired with tone identity in a 75%-25% ratio (Experiment 2). Sequence learning was, however, intact when the 75%-25% ratio was applied to the four SRTT target locations instead (Experiment 3). The results suggest that participants (when lacking a separation of the task representations while dual-tasking) can learn a sequence inherent in one of two tasks to the extent that across-task contingencies can be learned first.
Affiliation(s)
- Eva Röttger: Department of Psychology, University of Bremen, Hochschulring 18, 28359 Bremen, DE
- Fang Zhao: Department of Psychology, FernUniversität in Hagen, Universitätsstr. 33, 58084 Hagen, DE
- Robert Gaschler: Department of Psychology, FernUniversität in Hagen, Universitätsstr. 33, 58084 Hagen, DE
- Hilde Haider: Department of Psychology, University of Cologne, Richard-Strauss-Str. 2, 50931 Köln, DE
14. Bermúdez-Margaretto B, Shtyrov Y, Beltrán D, Cuetos F, Domínguez A. Rapid acquisition of novel written word-forms: ERP evidence. Behav Brain Funct 2020; 16:11. PMID: 33267883. PMCID: PMC7713216. DOI: 10.1186/s12993-020-00173-7.
Abstract
BACKGROUND: Novel word acquisition is generally believed to be a rapid process, essential for ensuring a flexible and efficient communication system; at least in spoken language, learners are able to construct memory traces for new linguistic stimuli after just a few exposures. However, such rapid word learning has not been systematically found in the visual domain, with different confounding factors obscuring the orthographic learning of novel words. This study explored the changes in human brain activity occurring online, during a brief training with novel written word-forms, using a silent reading task.
RESULTS: Single-trial, cluster-based random permutation analysis revealed that training caused an extremely fast (after just one repetition) and stable facilitation in novel word processing, reflected in the modulation of P200 and N400 components, possibly indicating rapid dynamics at early and late stages of lexical processing. Furthermore, neural source estimation of these effects revealed the recruitment of brain areas involved in orthographic and lexico-semantic processing, respectively.
CONCLUSIONS: These results suggest the formation of neural memory traces for novel written word-forms after minimal exposure, even in the absence of a semantic reference, resembling the rapid learning processes known to occur in spoken language.
Affiliation(s)
- Beatriz Bermúdez-Margaretto: Centre for Cognition and Decision Making, Institute for Cognitive Neuroscience, National Research University Higher School of Economics, Moscow, Russian Federation
- Yury Shtyrov: Centre for Cognition and Decision Making, Institute for Cognitive Neuroscience, National Research University Higher School of Economics, Moscow, Russian Federation; Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark
- David Beltrán: Instituto Universitario de Neurociencia (IUNE) and Facultad de Psicología, Universidad de La Laguna, Tenerife, Spain
- Fernando Cuetos: Facultad de Psicología, Universidad de Oviedo, Oviedo, Spain
- Alberto Domínguez: Instituto Universitario de Neurociencia (IUNE) and Facultad de Psicología, Universidad de La Laguna, Tenerife, Spain
15. de la Cruz-Pavía I, Werker JF, Vatikiotis-Bateson E, Gervain J. Finding Phrases: The Interplay of Word Frequency, Phrasal Prosody and Co-speech Visual Information in Chunking Speech by Monolingual and Bilingual Adults. Lang Speech 2020; 63:264-291. PMID: 31002280. PMCID: PMC7254630. DOI: 10.1177/0023830919842353.
Abstract
The audiovisual speech signal contains multimodal cues to phrase boundaries. In three artificial language learning studies with 12 groups of adult participants, we investigated whether English monolinguals and bilingual speakers of English and a language with the opposite basic word order (i.e., one in which objects precede verbs) can use word frequency, phrasal prosody and co-speech (facial) visual information, namely head nods, to parse unknown languages into phrase-like units. We showed that monolinguals and bilinguals used the auditory and visual sources of information to chunk "phrases" from the input. These results suggest that speech segmentation is a bimodal process, though the influence of co-speech facial gestures is rather limited and linked to the presence of auditory prosody. Importantly, a pragmatic factor, namely the language of the context, seems to determine the bilinguals' segmentation, overriding the auditory and visual cues and revealing a factor that begs further exploration.
Affiliation(s)
- Irene de la Cruz-Pavía: Integrative Neuroscience and Cognition Center (INCC-UMR 8002), Université Paris Descartes-CNRS, 45 rue des Saints-Pères, 75006 Paris, France
- Janet F. Werker: Department of Psychology, University of British Columbia, Canada
- Judit Gervain: Integrative Neuroscience and Cognition Center (INCC-UMR 8002), Université Paris Descartes (Sorbonne Paris Cité), France; Integrative Neuroscience and Cognition Center (INCC-UMR 8002), CNRS, France
16. Conway CM. How does the brain learn environmental structure? Ten core principles for understanding the neurocognitive mechanisms of statistical learning. Neurosci Biobehav Rev 2020; 112:279-299. PMID: 32018038. PMCID: PMC7211144. DOI: 10.1016/j.neubiorev.2020.01.032.
Abstract
Despite a growing body of research devoted to the study of how humans encode environmental patterns, there is still no clear consensus about the nature of the neurocognitive mechanisms underpinning statistical learning nor what factors constrain or promote its emergence across individuals, species, and learning situations. Based on a review of research examining the roles of input modality and domain, input structure and complexity, attention, neuroanatomical bases, ontogeny, and phylogeny, ten core principles are proposed. Specifically, there exist two sets of neurocognitive mechanisms underlying statistical learning. First, a "suite" of associative-based, automatic, modality-specific learning mechanisms are mediated by the general principle of cortical plasticity, which results in improved processing and perceptual facilitation of encountered stimuli. Second, an attention-dependent system, mediated by the prefrontal cortex and related attentional and working memory networks, can modulate or gate learning and is necessary in order to learn nonadjacent dependencies and to integrate global patterns across time. This theoretical framework helps clarify conflicting research findings and provides the basis for future empirical and theoretical endeavors.
Affiliation(s)
- Christopher M Conway: Center for Childhood Deafness, Language, and Learning, Boys Town National Research Hospital, Omaha, NE, United States
17. de la Cruz-Pavía I, Gervain J, Vatikiotis-Bateson E, Werker JF. Finding phrases: On the role of co-verbal facial information in learning word order in infancy. PLoS One 2019; 14:e0224786. PMID: 31710615. PMCID: PMC6844464. DOI: 10.1371/journal.pone.0224786.
Abstract
The input contains perceptually available cues, which might allow young infants to discover abstract properties of the target language. Thus, word frequency and prosodic prominence correlate systematically with basic word order in natural languages. Prelexical infants are sensitive to these frequency-based and prosodic cues, and use them to parse new input into phrases that follow the order characteristic of their native languages. Importantly, young infants readily integrate auditory and visual facial information while processing language. Here, we ask whether co-verbal visual information provided by talking faces also helps prelexical infants learn the word order of their native language in addition to word frequency and prosodic prominence. We created two structurally ambiguous artificial languages containing head nods produced by an animated avatar, aligned or misaligned with the frequency-based and prosodic information. During 4 minutes, two groups of 4- and 8-month-old infants were familiarized with the artificial language containing aligned auditory and visual cues, while two further groups were exposed to the misaligned language. Using a modified Headturn Preference Procedure, we tested infants’ preference for test items exhibiting the word order of the native language, French, vs. the opposite word order. At 4 months, infants had no preference, suggesting that 4-month-olds were not able to integrate the three available cues, or had not yet built a representation of word order. By contrast, 8-month-olds showed no preference when auditory and visual cues were aligned and a preference for the native word order when visual cues were misaligned. These results imply that infants at this age start to integrate the co-verbal visual and auditory cues.
Affiliation(s)
- Irene de la Cruz-Pavía: Integrative Neuroscience and Cognition Center (INCC-UMR 8002), Université Paris Descartes (Sorbonne Paris Cité), Paris, France; Integrative Neuroscience and Cognition Center (INCC-UMR 8002), CNRS, Paris, France; Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
- Judit Gervain: Integrative Neuroscience and Cognition Center (INCC-UMR 8002), Université Paris Descartes (Sorbonne Paris Cité), Paris, France; Integrative Neuroscience and Cognition Center (INCC-UMR 8002), CNRS, Paris, France
- Eric Vatikiotis-Bateson: Department of Linguistics, University of British Columbia, Vancouver, British Columbia, Canada
- Janet F. Werker: Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
18. Forest TA, Lichtenfeld A, Alvarez B, Finn AS. Superior learning in synesthetes: Consistent grapheme-color associations facilitate statistical learning. Cognition 2019; 186:72-81. PMID: 30763803. DOI: 10.1016/j.cognition.2019.02.003.
Abstract
In synesthesia, activation in one sensory domain, such as smell or sound, triggers an involuntary and unusual secondary sensory or cognitive experience. In the present study, we ask whether the added sensory experience of synesthesia can aid statistical learning: the ability to track environmental regularities in order to segment continuous information. To investigate this, we measured statistical learning outcomes, using an aurally presented artificial language, in two groups of synesthetes alongside controls, and simulated the multimodal experience of synesthesia in non-synesthetes. One group of synesthetes exclusively had grapheme-color (GC) synesthesia, in which the experience of color is automatically triggered by exposure to written or spoken graphemes. The other group had both grapheme-color and sound-color (SC+) synesthesia, in which the experience of color is also triggered by the waveform properties of a voice, such as pitch, timbre, and/or musical chords. Unlike GC-only synesthetes, the experience of color in the SC+ group is not perfectly consistent with the statistics that signal word boundaries. We showed that GC-only synesthetes outperformed both non-synesthetes and SC+ synesthetes, likely because the visual concurrents for GC-only synesthetes are highly consistent with the artificial language. We further observed that our simulations of GC synesthesia, but not SC+ synesthesia, produced superior statistical learning, showing that synesthesia likely boosts learning outcomes by providing a consistent secondary cue. Findings are discussed with regard to how multimodal experience can improve learning, with the present data indicating that this boost is more likely to occur through explicit, as opposed to implicit, learning systems.
Affiliation(s)
- Tess Allegra Forest: Department of Psychology, University of Toronto, 100 St. George Street, 4th Floor, Sidney Smith Hall, Toronto, ON M5S 3G3, Canada
- Alessandra Lichtenfeld: Department of Psychology, University of California, Berkeley, Room 3210 Tolman Hall #1650, Berkeley, CA 94720-1650, USA
- Bryan Alvarez: Department of Psychology, University of California, Berkeley, Room 3210 Tolman Hall #1650, Berkeley, CA 94720-1650, USA
- Amy S Finn: Department of Psychology, University of Toronto, 100 St. George Street, 4th Floor, Sidney Smith Hall, Toronto, ON M5S 3G3, Canada
19. Palmer SD, Hutson J, White L, Mattys SL. Lexical knowledge boosts statistically-driven speech segmentation. J Exp Psychol Learn Mem Cogn 2018; 45:139-146. PMID: 29952630. PMCID: PMC6307531. DOI: 10.1037/xlm0000567.
Abstract
The hypothesis that known words can serve as anchors for discovering new words in connected speech has computational and empirical support. However, evidence for how the bootstrapping effect of known words interacts with other mechanisms of lexical acquisition, such as statistical learning, is incomplete. In 3 experiments, we investigated the consequences of introducing a known word in an artificial language with no segmentation cues other than cross-syllable transitional probabilities. We started with an artificial language containing 4 trisyllabic novel words and observed standard above-chance performance in a subsequent recognition memory task. We then replaced 1 of the 4 novel words with a real word (tomorrow) and noted improved segmentation of the other 3 novel words. This improvement was maintained when the real word was a different length to the novel words (philosophy), ruling out an explanation based on metrical expectation. The improvement was also maintained when the word was added to the 4 original novel words rather than replacing 1 of them. Together, these results show that known words in an otherwise meaningless stream serve as anchors for discovering new words. In interpreting the results, we contrast a mechanism where the lexical boost is merely the consequence of attending to the edges of known words, with a mechanism where known words enhance sensitivity to transitional probabilities more generally.
20. Stevenson RA, Sheffield SW, Butera IM, Gifford RH, Wallace MT. Multisensory Integration in Cochlear Implant Recipients. Ear Hear 2018; 38:521-538. PMID: 28399064. DOI: 10.1097/aud.0000000000000435.
Abstract
Speech perception is inherently a multisensory process involving integration of auditory and visual cues. Multisensory integration in cochlear implant (CI) recipients is a unique circumstance in that the integration occurs after auditory deprivation and the provision of hearing via the CI. Despite the clear importance of multisensory cues for perception, in general, and for speech intelligibility, specifically, the topic of multisensory perceptual benefits in CI users has only recently begun to emerge as an area of inquiry. We review the research that has been conducted on multisensory integration in CI users to date and suggest a number of areas needing further research. The overall pattern of results indicates that many CI recipients show at least some perceptual gain that can be attributable to multisensory integration. The extent of this gain, however, varies based on a number of factors, including age of implantation and specific task being assessed (e.g., stimulus detection, phoneme perception, word recognition). Although both children and adults with CIs obtain audiovisual benefits for phoneme, word, and sentence stimuli, neither group shows demonstrable gain for suprasegmental feature perception. Additionally, only early-implanted children and the highest performing adults obtain audiovisual integration benefits similar to individuals with normal hearing. Increasing age of implantation in children is associated with poorer gains resultant from audiovisual integration, suggesting a sensitive period in development for the brain networks that subserve these integrative functions, as well as length of auditory experience. This finding highlights the need for early detection of and intervention for hearing loss, not only in terms of auditory perception, but also in terms of the behavioral and perceptual benefits of audiovisual processing. Importantly, patterns of auditory, visual, and audiovisual responses suggest that underlying integrative processes may be fundamentally different between CI users and typical-hearing listeners. Future research, particularly in low-level processing tasks such as signal detection will help to further assess mechanisms of multisensory integration for individuals with hearing loss, both with and without CIs.
Affiliation(s)
- Ryan A Stevenson: Department of Psychology, University of Western Ontario, London, Ontario, Canada; Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada; Walter Reed National Military Medical Center, Audiology and Speech Pathology Center, Bethesda, Maryland, USA; Vanderbilt Brain Institute, Nashville, Tennessee; Vanderbilt Kennedy Center, Nashville, Tennessee; Department of Psychology, Vanderbilt University, Nashville, Tennessee; Department of Psychiatry, Vanderbilt University Medical Center, Nashville, Tennessee; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
21. Li X, Zhao X, Shi W, Lu Y, Conway CM. Lack of Cross-Modal Effects in Dual-Modality Implicit Statistical Learning. Front Psychol 2018. PMID: 29535653. PMCID: PMC5835111. DOI: 10.3389/fpsyg.2018.00146.
Abstract
A current controversy in the area of implicit statistical learning (ISL) is whether this process consists of a single, central mechanism or multiple modality-specific ones. To provide insight into this question, the current study involved three ISL experiments to explore whether multimodal input sources are processed separately in each modality or are integrated together across modalities. In Experiment 1, visual and auditory ISL were measured under unimodal conditions, with the results providing a baseline level of learning for subsequent experiments. Visual and auditory sequences were presented separately, and the underlying grammar used for both modalities was the same. In Experiment 2, visual and auditory sequences were presented simultaneously with each modality using the same artificial grammar to investigate whether redundant multisensory information would result in a facilitative effect (i.e., increased learning) compared to the baseline. In Experiment 3, visual and auditory sequences were again presented simultaneously but this time with each modality employing different artificial grammars to investigate whether an interference effect (i.e., decreased learning) would be observed compared to the baseline. Results showed that there was neither a facilitative learning effect in Experiment 2 nor an interference effect in Experiment 3. These findings suggest that participants were able to track simultaneously and independently two sets of sequential regularities under dual-modality conditions. These findings are consistent with the theories that posit the existence of multiple, modality-specific ISL mechanisms rather than a single central one.
Affiliation(s)
- Xiujun Li: School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; Department of Psychology, School of Education, Shanghai Normal University, Shanghai, China
- Xudong Zhao: Department of Psychology, School of Education, Shanghai Normal University, Shanghai, China
- Wendian Shi: Department of Psychology, School of Education, Shanghai Normal University, Shanghai, China
- Yang Lu: School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Christopher M Conway: NeuroLearn Lab, Department of Psychology, Georgia State University, Atlanta, GA, United States; Neuroscience Institute, Georgia State University, Atlanta, GA, United States
22. Havas V, Taylor J, Vaquero L, de Diego-Balaguer R, Rodríguez-Fornells A, Davis MH. Semantic and phonological schema influence spoken word learning and overnight consolidation. Q J Exp Psychol (Hove) 2018; 71:1469-1481. PMID: 28856956. DOI: 10.1080/17470218.2017.1329325.
Abstract
We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or with no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime wakefulness. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology, and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.
Affiliation(s)
- Viktória Havas: Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute, Barcelona, Spain; Department of Language and Literature, Norwegian University of Science and Technology, Trondheim, Norway
- JSH Taylor: Cognition and Brain Sciences Unit, Medical Research Council, Cambridge, UK; Department of Psychology, Royal Holloway, University of London, Egham, UK
- Lucía Vaquero: Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain
- Ruth de Diego-Balaguer: Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain; Institute of Neurosciences, University of Barcelona, Barcelona, Spain
- Antoni Rodríguez-Fornells: Department of Cognition, Development and Educational Psychology, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Matthew H Davis: Cognition and Brain Sciences Unit, Medical Research Council, Cambridge, UK
23. Lavi-Rotbain O, Arnon I. Developmental Differences Between Children and Adults in the Use of Visual Cues for Segmentation. Cogn Sci 2017. DOI: 10.1111/cogs.12528.
Affiliation(s)
- Ori Lavi-Rotbain: The Edmond and Lilly Safra Center for Brain Sciences, Hebrew University
24. Grieco-Calub TM, Simeon KM, Snyder HE, Lew-Williams C. Word segmentation from noise-band vocoded speech. Lang Cogn Neurosci 2017; 32:1344-1356. PMID: 29977950. PMCID: PMC6028043. DOI: 10.1080/23273798.2017.1354129.
Abstract
Spectral degradation reduces access to the acoustics of spoken language and compromises how learners break into its structure. We hypothesised that spectral degradation disrupts word segmentation, but that listeners can exploit other cues to restore detection of words. Normal-hearing adults were familiarised to artificial speech that was unprocessed or spectrally degraded by noise-band vocoding into 16 or 8 spectral channels. The monotonic speech stream was pause-free (Experiment 1), interspersed with isolated words (Experiment 2), or slowed by 33% (Experiment 3). Participants were tested on segmentation of familiar vs. novel syllable sequences and on recognition of individual syllables. As expected, vocoding hindered both word segmentation and syllable recognition. The addition of isolated words, but not slowed speech, improved segmentation. We conclude that syllable recognition is necessary but not sufficient for successful word segmentation, and that isolated words can facilitate listeners' access to the structure of acoustically degraded speech.
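Noise-band vocoding, the degradation used here, discards fine spectral detail while preserving each band's temporal envelope. The sketch below outlines the standard processing chain (band-pass analysis, envelope extraction, noise modulation); the band edges, filter orders, and the 30-Hz envelope cutoff are illustrative assumptions, not the study's exact parameters.

```python
# Rough sketch of an N-channel noise-band vocoder (parameters assumed).
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    noise = np.random.default_rng(0).standard_normal(len(signal))
    env_sos = butter(2, 30.0, btype="low", fs=fs, output="sos")
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(3, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)           # analysis band
        env = sosfiltfilt(env_sos, np.abs(band))       # rectify + smooth
        carrier = sosfiltfilt(band_sos, noise)         # band-limited noise
        out += np.clip(env, 0.0, None) * carrier       # modulate the carrier
    return out / np.max(np.abs(out))                   # normalize

# Example: vocode one second of a synthetic amplitude-modulated tone.
fs = 16000
t = np.arange(fs) / fs
speech_like = np.sin(2 * np.pi * 220 * t) * (1 + np.sin(2 * np.pi * 4 * t))
vocoded_8ch = noise_vocode(speech_like, fs, n_channels=8)
vocoded_16ch = noise_vocode(speech_like, fs, n_channels=16)
```

Fewer channels means coarser spectral resolution, which is why the 8-channel condition is harder than the 16-channel condition.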
Affiliation(s)
- Tina M. Grieco-Calub: The Roxelyn & Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Katherine M. Simeon: The Roxelyn & Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Hillary E. Snyder: The Roxelyn & Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
25. Peñaloza C, Mirman D, Cardona P, Juncadella M, Martin N, Laine M, Rodríguez-Fornells A. Cross-situational word learning in aphasia. Cortex 2017; 93:12-27. PMID: 28570928. DOI: 10.1016/j.cortex.2017.04.020.
Abstract
Human learners can resolve referential ambiguity and discover the relationships between words and meanings through a cross-situational learning (CSL) strategy. Some people with aphasia (PWA) can learn word-referent pairings under referential uncertainty when supported by online feedback. However, it remains unknown whether PWA can learn new words cross-situationally and whether such learning ability is supported by statistical learning (SL) mechanisms. The present study examined whether PWA can learn novel word-referent mappings in a CSL task without feedback. We also studied whether CSL is related to SL in PWA and neurologically healthy individuals. We further examined whether aphasia severity, phonological processing and verbal short-term memory (STM) predict CSL in aphasia, and whether individual differences in verbal STM modulate CSL in healthy older adults. Sixteen people with chronic aphasia underwent a CSL task that involved exposure to a series of individually ambiguous learning trials and an SL task that taps speech segmentation. Their learning ability was compared to that of 18 older controls and 39 young adults recruited for task validation. CSL in the aphasia group was below that of the older controls and young adults and took place at a slower rate. Importantly, we found a strong association between SL and CSL performance in all three groups. CSL was modulated by aphasia severity in the aphasia group, and by verbal STM capacity in the older controls. Our findings indicate that some PWA retain the ability to learn new word-referent associations cross-situationally. We suggest that both PWA and neurologically intact individuals may rely on SL mechanisms to achieve CSL and that verbal STM also influences CSL. These findings contribute to the ongoing debate on the cognitive mechanisms underlying this learning ability.
Affiliation(s)
- Claudia Peñaloza
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute - IDIBELL, L'Hospitalet de Llobregat, Barcelona, Spain
- Daniel Mirman
- Department of Psychology, University of Alabama at Birmingham, Birmingham, AL, USA; Moss Rehabilitation Research Institute, Elkins Park, PA, USA
- Pedro Cardona
- Hospital Universitari de Bellvitge (HUB), Neurology Section, Campus Bellvitge, University of Barcelona, L'Hospitalet de Llobregat, Barcelona, Spain
- Montserrat Juncadella
- Hospital Universitari de Bellvitge (HUB), Neurology Section, Campus Bellvitge, University of Barcelona, L'Hospitalet de Llobregat, Barcelona, Spain
- Nadine Martin
- Department of Communication Sciences and Disorders, Eleanor M. Saffran Center for Cognitive Neuroscience, Temple University, Philadelphia, PA, USA
- Matti Laine
- Department of Psychology, Åbo Akademi University, Turku, Finland
- Antoni Rodríguez-Fornells
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute - IDIBELL, L'Hospitalet de Llobregat, Barcelona, Spain; Department of Cognition, Development and Educational Psychology, Campus Bellvitge, University of Barcelona, L'Hospitalet de Llobregat, Barcelona, Spain; Catalan Institution for Research and Advanced Studies (ICREA), Barcelona, Spain
26
François C, Cunillera T, Garcia E, Laine M, Rodriguez-Fornells A. Neurophysiological evidence for the interplay of speech segmentation and word-referent mapping during novel word learning. Neuropsychologia 2016; 98:56-67. [PMID: 27732869 DOI: 10.1016/j.neuropsychologia.2016.10.006] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2016] [Revised: 10/03/2016] [Accepted: 10/08/2016] [Indexed: 11/16/2022]
Abstract
Learning a new language requires the identification of word units from continuous speech (the speech segmentation problem) and mapping them onto conceptual representations (the word-to-world mapping problem). Recent behavioral studies have revealed that the statistical properties found within and across modalities can serve as cues for both processes. However, segmentation and mapping have been largely studied separately, and thus it remains unclear whether both processes can be accomplished at the same time and whether they share common neurophysiological features. To address this question, we recorded the EEG of 20 adult participants during both an audio-alone speech segmentation task and an audiovisual word-to-picture association task. The participants were tested both for the implicit detection of online mismatches (structural auditory and visual semantic violations) and for the explicit recognition of words and word-to-picture associations. The ERP results from the learning phase revealed a delayed learning-related fronto-central negativity (FN400) in the audiovisual condition compared to the audio-alone condition. Interestingly, while online structural auditory violations elicited clear MMN/N200 components in the audio-alone condition, visual-semantic violations induced meaning-related N400 modulations in the audiovisual condition. The present results support the idea that speech segmentation and meaning mapping can take place in parallel and act in synergy to enhance novel word learning.
Affiliation(s)
- Clément François
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Department of Cognition, Development and Educational Science, University of Barcelona, Barcelona, Spain; Institut de Recerca Pediàtrica Hospital Sant Joan de Déu, Barcelona, Spain
- Toni Cunillera
- Department of Cognition, Development and Educational Science, University of Barcelona, Barcelona, Spain
- Enara Garcia
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Department of Cognition, Development and Educational Science, University of Barcelona, Barcelona, Spain
- Matti Laine
- Department of Psychology, Åbo Akademi University, Turku, Finland
- Antoni Rodriguez-Fornells
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Department of Cognition, Development and Educational Science, University of Barcelona, Barcelona, Spain; Catalan Institution for Research and Advanced Studies (ICREA), Barcelona, Spain
27
Walk AM, Conway CM. Cross-Domain Statistical-Sequential Dependencies Are Difficult to Learn. Front Psychol 2016; 7:250. [PMID: 26941696 PMCID: PMC4766371 DOI: 10.3389/fpsyg.2016.00250] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2015] [Accepted: 02/08/2016] [Indexed: 11/13/2022] Open
Abstract
Recent studies have demonstrated participants' ability to learn cross-modal associations during statistical learning tasks. However, these studies are all similar in that the cross-modal associations to be learned occur simultaneously, rather than sequentially. In addition, the majority of these studies focused on learning across sensory modalities but not across perceptual categories. To test both cross-modal and cross-categorical learning of sequential dependencies, we used an artificial grammar learning task consisting of a serial stream of auditory and/or visual stimuli containing both within- and cross-domain dependencies. Experiment 1 examined within-modal and cross-modal learning across two sensory modalities (audition and vision). Experiment 2 investigated within-categorical and cross-categorical learning across two perceptual categories within the same sensory modality (e.g., shape and color; tones and non-words). Our results indicated that individuals demonstrated learning of the within-modal and within-categorical but not the cross-modal or cross-categorical dependencies. These results stand in contrast to the previous demonstrations of cross-modal statistical learning, and highlight the presence of modality constraints that limit the effectiveness of learning in a multimodal environment.
Affiliation(s)
- Anne M. Walk
- Neurocognitive Kinesiology Lab, Department of Kinesiology and Community Health, University of Illinois at Urbana-Champaign, Urbana, IL, USA
28
Lusk LG, Mitchel AD. Differential Gaze Patterns on Eyes and Mouth During Audiovisual Speech Segmentation. Front Psychol 2016; 7:52. [PMID: 26869959 PMCID: PMC4735377 DOI: 10.3389/fpsyg.2016.00052] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2015] [Accepted: 01/11/2016] [Indexed: 11/17/2022] Open
Abstract
Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, whose visual component has been the target of a growing number of studies. In particular, a recent study (Mitchel and Weiss, 2014) established that adults can utilize facial cues (i.e., visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify the most highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found, with the longest durations on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention to other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation.
Affiliation(s)
- Laina G Lusk
- Neuroscience Program, Bucknell University, Lewisburg, PA, USA
- Aaron D Mitchel
- Neuroscience Program, Bucknell University, Lewisburg, PA, USA; Department of Psychology, Bucknell University, Lewisburg, PA, USA
29
Cunillera T, Laine M, Rodríguez-Fornells A. Headstart for speech segmentation: a neural signature for the anchor word effect. Neuropsychologia 2016; 82:189-199. [DOI: 10.1016/j.neuropsychologia.2016.01.011] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2015] [Revised: 01/07/2016] [Accepted: 01/10/2016] [Indexed: 11/16/2022]
30
Abstract
Language learning requires that listeners discover acoustically variable functional units like phonetic categories and words from an unfamiliar, continuous acoustic stream. Although many category learning studies have examined how listeners learn to generalize across the acoustic variability inherent in the signals that convey the functional units of language, these studies have tended to focus upon category learning across isolated sound exemplars. However, continuous input presents many additional learning challenges that may impact category learning. Listeners may not know the timescale of the functional unit, its relative position in the continuous input, or its relationship to other evolving input regularities. Moving laboratory-based studies of isolated category exemplars toward more natural input is important for modeling language learning, but very little is known about how listeners discover categories embedded in continuous sound. In 3 experiments, adult participants heard acoustically variable sound category instances embedded in acoustically variable and unfamiliar sound streams within a video game task. This task was inherently rich in multisensory regularities involving the to-be-learned categories and was likely to engage procedural learning without requiring explicit categorization, segmentation, or even attention to the sounds. After 100 min of game play, participants categorized familiar sound streams in which target words were embedded and generalized this learning to novel streams as well as to isolated instances of the target words. The findings demonstrate that even without a priori knowledge, listeners can discover input regularities that have the best predictive control over the environment, for both non-native speech and nonspeech signals, emphasizing the generality of the learning.
Affiliation(s)
- Sung-Joo Lim
- Department of Psychology, Carnegie Mellon University
- Lori L Holt
- Department of Psychology, Carnegie Mellon University
31
The effect of visual cues on top-down restoration of temporally interrupted speech, with and without further degradations. Hear Res 2015; 328:24-33. [PMID: 26117407 DOI: 10.1016/j.heares.2015.06.013] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/20/2015] [Revised: 06/15/2015] [Accepted: 06/22/2015] [Indexed: 11/21/2022]
Abstract
In complex listening situations, cognitive restoration mechanisms are commonly used to enhance perception of degraded speech with inaudible segments. Profoundly hearing-impaired people with a cochlear implant (CI) show less benefit from such mechanisms. However, both normal-hearing (NH) listeners and CI users do benefit from visual speech cues in these listening situations. In this study we investigated whether an accompanying video of the speaker can enhance the intelligibility of interrupted sentences and the phonemic restoration benefit, measured as an increase in intelligibility when the silent intervals are filled with noise. Similar to previous studies, a restoration benefit was observed with interrupted speech without spectral degradations (Experiment 1), but it was absent in acoustic simulations of CIs (Experiment 2) and present again in simulations of electric-acoustic stimulation (Experiment 3). In all experiments, the additional speech information provided by the complementary visual cues led to overall higher intelligibility; however, these cues did not influence the occurrence or extent of the phonemic restoration benefit of filler noise. The results imply that visual cues do not show a synergistic effect with the filler noise, as adding them increased the intelligibility of interrupted sentences equally with or without the filler noise.
32
Abstract
Our environment is richly structured, with objects producing correlated information within and across sensory modalities. A prominent challenge faced by our perceptual system is to learn such regularities. Here, we examined statistical learning and addressed learners' ability to track transitional probabilities between elements in the auditory and visual modalities. Specifically, we investigated whether cross-modal information affects statistical learning within a single modality. Participants were familiarized with a statistically structured modality (e.g., either audition or vision) accompanied by different types of cues in a second modality (e.g., vision or audition). The results revealed that statistical learning within either modality is affected by cross-modal information, with learning being enhanced or reduced according to the type of cue provided in the second modality.
33
Mitchel AD, Christiansen MH, Weiss DJ. Multimodal integration in statistical learning: evidence from the McGurk illusion. Front Psychol 2014; 5:407. [PMID: 24904449 PMCID: PMC4032898 DOI: 10.3389/fpsyg.2014.00407] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2014] [Accepted: 04/18/2014] [Indexed: 11/16/2022] Open
Abstract
Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker’s face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally incongruent, which we predicted would produce the McGurk illusion, resulting in the perception of an audiovisual syllable (e.g., /ni/). In this way, we used the McGurk illusion to manipulate the underlying statistical structure of the speech streams, such that perception of these illusory syllables facilitated participants’ ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.
Affiliation(s)
- Aaron D Mitchel
- Department of Psychology and Program in Neuroscience, Bucknell University, Lewisburg, PA, USA
- Morten H Christiansen
- Department of Psychology, Cornell University, Ithaca, NY, USA; Department of Language and Communication, University of Southern Denmark, Odense, Denmark; Haskins Laboratories, New Haven, CT, USA
- Daniel J Weiss
- Department of Psychology and Program in Linguistics, Pennsylvania State University, University Park, PA, USA
34
Mitchel AD, Weiss DJ. Visual speech segmentation: using facial cues to locate word boundaries in continuous speech. Lang Cogn Process 2014; 29:771-780. [PMID: 25018577 PMCID: PMC4091796 DOI: 10.1080/01690965.2013.791703] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative about word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition.
Affiliation(s)
- Aaron D. Mitchel
- Department of Psychology, Bucknell University, Lewisburg, PA 17837, USA
- Daniel J. Weiss
- Department of Psychology and Program in Linguistics, The Pennsylvania State University, 643 Moore Building, University Park, PA 16802, USA
35
Yurovsky D, Yu C, Smith LB. Statistical speech segmentation and word learning in parallel: scaffolding from child-directed speech. Front Psychol 2012; 3:374. [PMID: 23162487 PMCID: PMC3498894 DOI: 10.3389/fpsyg.2012.00374] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2012] [Accepted: 09/11/2012] [Indexed: 11/29/2022] Open
Abstract
In order to acquire their native languages, children must learn richly structured systems with regularities at multiple levels. While structure at different levels could be learned serially, e.g., speech segmentation coming before word-object mapping, redundancies across levels make parallel learning more efficient. For instance, a series of syllables is likely to be a word not only because of high transitional probabilities, but also because of a consistently co-occurring object. But additional statistics require additional processing, and thus might not be useful to cognitively constrained learners. We show that the structure of child-directed speech makes simultaneous speech segmentation and word learning tractable for human learners. First, a corpus of child-directed speech was recorded from parents and children engaged in a naturalistic free-play task. Analyses revealed two consistent regularities in the sentence structure of naming events. These regularities were subsequently encoded in an artificial language to which adult participants were exposed in the context of simultaneous statistical speech segmentation and word learning. Either regularity was independently sufficient to support successful learning, but no learning occurred in the absence of both regularities. Thus, the structure of child-directed speech plays an important role in scaffolding speech segmentation and word learning in parallel.
Affiliation(s)
- Daniel Yurovsky
- Department of Psychology, Stanford University, Stanford, CA, USA
- Chen Yu
- Department of Psychological and Brain Sciences and Program in Cognitive Science, Indiana University, Bloomington, IN, USA
- Linda B. Smith
- Department of Psychological and Brain Sciences and Program in Cognitive Science, Indiana University, Bloomington, IN, USA
36
Stagnitti K, O'Connor C, Sheppard L. Impact of the Learn to Play program on play, social competence and language for children aged 5-8 years who attend a specialist school. Aust Occup Ther J 2012; 59:302-11. [DOI: 10.1111/j.1440-1630.2012.01018.x] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/11/2012] [Indexed: 11/29/2022]
Affiliation(s)
- Karen Stagnitti
- School of Health and Social Development, Deakin University, Victoria, Australia
- Chloe O'Connor
- School of Health and Social Development, Deakin University, Victoria, Australia
37
Abstract
Infants are adept at tracking statistical regularities to identify word boundaries in pause-free speech. However, researchers have questioned the relevance of statistical learning mechanisms to language acquisition, since previous studies have used simplified artificial languages that ignore the variability of real language input. The experiments reported here embraced a key dimension of variability in infant-directed speech. English-learning infants (8-10 months) listened briefly to natural Italian speech that contained either fluent speech only or a combination of fluent speech and single-word utterances. Listening times revealed successful learning of the statistical properties of target words only when words appeared both in fluent speech and in isolation; brief exposure to fluent speech alone was not sufficient to facilitate detection of the words' statistical properties. This investigation suggests that statistical learning mechanisms actually benefit from variability in utterance length, and provides the first evidence that isolated words and longer utterances act in concert to support infant word segmentation.
Affiliation(s)
- Casey Lew-Williams
- Department of Psychology and Waisman Center, University of Wisconsin-Madison, WI 53705-2280, USA.
38
Mitchel AD, Weiss DJ. Learning across senses: cross-modal effects in multisensory statistical learning. J Exp Psychol Learn Mem Cogn 2011; 37:1081-91. [PMID: 21574745 PMCID: PMC4041380 DOI: 10.1037/a0023700] [Citation(s) in RCA: 38] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
It is currently unknown whether statistical learning is supported by modality-general or modality-specific mechanisms. One issue within this debate concerns the independence of learning in one modality from learning in other modalities. In the present study, the authors examined the extent to which statistical learning across modalities is independent by simultaneously presenting learners with auditory and visual streams. After establishing baseline rates of learning for each stream independently, they systematically varied the amount of audiovisual correspondence across 3 experiments. They found that learners were able to segment both streams successfully only when the boundaries of the audio and visual triplets were in alignment. This pattern of results suggests that learners are able to extract multiple statistical regularities across modalities provided that there is some degree of cross-modal coherence. They discuss the implications of their results in light of recent claims that multisensory statistical learning is guided by modality-independent mechanisms.
Affiliation(s)
- Aaron D Mitchel
- Department of Psychology and Program in Linguistics, Pennsylvania State University, 643 Moore Building, University Park, PA 16802, USA.
39
O'Connor C, Stagnitti K. Play, behaviour, language and social skills: the comparison of a play and a non-play intervention within a specialist school setting. Res Dev Disabil 2011; 32:1205-11. [PMID: 21282038 DOI: 10.1016/j.ridd.2010.12.037] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/01/2010] [Revised: 12/20/2010] [Accepted: 12/27/2010] [Indexed: 05/11/2023]
Abstract
The aim of the present study was to investigate the play, behaviour, language and social skills of children aged 5-8 years participating in a play intervention (based on the 'Learn to Play' program), compared to a group of children participating in traditional classroom activities within a specialist school, over a six-month period. Thirty-five children participated in the study: 19 in the play intervention group and 16 in the comparison group. Fourteen staff members at the special school were involved. A quasi-experimental design was used with pre- and post-intervention data collection. Children in the play intervention and the comparison group were assessed using the Child-Initiated Pretend Play Assessment (play), Goal Attainment Scaling (behaviour), the Preschool Language Scale (language) and the Penn Interactive Peer Play Scale (social skills) at baseline and at follow-up. Findings revealed that children participating in the play intervention showed a significant decrease in play deficits and became less socially disruptive and more socially connected with their peers. Both groups improved in their overall language skills and significantly improved in their goal attainment. This study supports the use of a play intervention to improve children's play, behaviour, language and social skills.
Affiliation(s)
- Chloe O'Connor
- School of Health and Social Development, Deakin University, 1 Gheringhap Street, Geelong, Australia