1
Seidl AH, Indarjit M, Borovsky A. Touch to learn: Multisensory input supports word learning and processing. Dev Sci 2024; 27:e13419. [PMID: 37291692 PMCID: PMC10704002 DOI: 10.1111/desc.13419]
Abstract
Infants experience language in rich multisensory environments. For example, they may first be exposed to the word applesauce while touching, tasting, smelling, and seeing applesauce. In three experiments using different methods, we asked whether the number of distinct senses linked with the semantic features of objects would impact word recognition and learning. Specifically, in Experiment 1 we asked whether words linked with more multisensory experiences were learned earlier than words linked with fewer multisensory experiences. In Experiment 2, we asked whether 2-year-olds' known words linked with more multisensory experiences were better recognized than those linked with fewer. Finally, in Experiment 3, we taught 2-year-olds labels for novel objects that were linked with either just visual or visual and tactile experiences and asked whether this impacted their ability to learn the new label-to-object mappings. Results converge to support an account in which richer multisensory experiences better support word learning. We discuss two pathways through which rich multisensory experiences might support word learning.
Affiliation(s)
- Amanda H Seidl
- Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
- Michelle Indarjit
- Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
- Arielle Borovsky
- Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
2
Ross P, Williams E, Herbert G, Manning L, Lee B. Turn that music down! Affective musical bursts cause an auditory dominance in children recognizing bodily emotions. J Exp Child Psychol 2023; 230:105632. [PMID: 36731279 DOI: 10.1016/j.jecp.2023.105632]
Abstract
Previous work has shown that different sensory channels are prioritized across the life course, with children preferentially responding to auditory information. The aim of the current study was to investigate whether the mechanism that drives this auditory dominance in children occurs at the level of encoding (overshadowing) or when the information is integrated to form a response (response competition). Given that response competition depends on a modality integration attempt, a combination of stimuli that could not be integrated was used, so that any persistence of children's auditory dominance would provide evidence for the overshadowing mechanism over the response competition mechanism. Younger children (≤7 years), older children (8-11 years), and adults (18+ years) were asked to recognize the emotion (happy or fearful) in either nonvocal auditory musical emotional bursts or human visual bodily expressions of emotion in three conditions: unimodal, congruent bimodal, and incongruent bimodal. We found that children performed significantly worse at recognizing emotional bodies when they heard (and were told to ignore) musical emotional bursts. This provides the first evidence for auditory dominance in both younger and older children presented with modally incongruent emotional stimuli. The continued presence of auditory dominance, despite the lack of modality integration, was taken as supportive evidence for the overshadowing explanation. These findings are discussed in relation to educational considerations, and directions for future sensory dominance investigations and models are proposed.
Affiliation(s)
- Paddy Ross
- Department of Psychology, Durham University, Durham DH1 3LE, UK.
- Ella Williams
- Department of Psychology, Durham University, Durham DH1 3LE, UK; Oxford Neuroscience, University of Oxford, Oxford OX3 9DU, UK
- Gemma Herbert
- Department of Psychology, Durham University, Durham DH1 3LE, UK
- Laura Manning
- Department of Psychology, Durham University, Durham DH1 3LE, UK
- Becca Lee
- Department of Psychology, Durham University, Durham DH1 3LE, UK
3
Zamuner TS, Rabideau T, McDonald M, Yeung HH. Developmental change in children's speech processing of auditory and visual cues: An eyetracking study. J Child Lang 2023; 50:27-51. [PMID: 36503546 DOI: 10.1017/s0305000921000684]
Abstract
This study investigates how children aged two to eight years (N = 129) and adults (N = 29) use auditory and visual speech for word recognition. The goal was to bridge the gap between the apparent successes of visual speech processing by young children in visual-looking tasks and the apparent difficulties of speech processing shown by older children on explicit behavioural measures. Participants were presented with familiar words in audio-visual (AV), audio-only (A-only) or visual-only (V-only) speech modalities, then presented with target and distractor images, and looking to targets was measured. Adults showed high accuracy, with slightly less target-image looking in the V-only modality. Developmentally, looking was above chance for both AV and A-only modalities, but not in the V-only modality until 6 years of age (earlier on /k/-initial words). Flexible use of visual cues for lexical access develops throughout childhood.
Affiliation(s)
- Margarethe McDonald
- Department of Linguistics, University of Ottawa, Canada
- School of Psychology, University of Ottawa, Canada
- H Henny Yeung
- Department of Linguistics, Simon Fraser University, Canada
- Integrative Neuroscience and Cognition Centre, UMR 8002, CNRS and University of Paris, France
4
Tan E, Hamlin JK. Mechanisms of social evaluation in infancy: A preregistered exploration of infants' eye-movement and pupillary responses to prosocial and antisocial events. Infancy 2021; 27:255-276. [PMID: 34873821 DOI: 10.1111/infa.12447]
Abstract
Past research shows infants selectively touch and look longer at characters who help versus hinder others (Social evaluation by preverbal infants. Nature, 2007, 450, 557; Three-month-olds show a negativity bias in their social evaluations. Developmental Science, 2010, 13, 923); however, the mechanisms underlying this tendency remain underspecified. The current preregistered experiment approaches this question by examining infants' real-time looking behaviors during prosocial and antisocial events, and exploring how individual infants' looking behaviors correlate with helper preferences. Using eye-tracking, 34 five-month-olds were familiarized with two blocks of the "hill" scenario originally developed by Kuhlmeier et al. (Attribution of dispositional states by 12-month-olds. Psychological Science, 2003, 14, 402), in which a climber tries unsuccessfully to reach the top of a hill and is alternately helped or hindered. Infants' visual preferences were assessed after each block of 6 helping and hindering events by proportional looking time to the helper versus hinderer in an image of the characters side by side. Results showed that, at the group level, infants looked longer at the helper after viewing 12 (but not after viewing 6) helping and hindering videos. Moreover, individual infants' average preference for the helper was predicted by their looking behaviors, particularly those suggestive of an understanding of the climber's unfulfilled goal. These results shed light on how infants process helping/hindering scenarios, and suggest that goal understanding is important for infants' helper preferences.
Affiliation(s)
- Enda Tan
- University of British Columbia, Vancouver, British Columbia, Canada
5
Weatherhead D, Arredondo MM, Nácar Garcia L, Werker JF. The Role of Audiovisual Speech in Fast-Mapping and Novel Word Retention in Monolingual and Bilingual 24-Month-Olds. Brain Sci 2021; 11:114. [PMID: 33467100 PMCID: PMC7830540 DOI: 10.3390/brainsci11010114]
Abstract
Three experiments examined the role of audiovisual speech in 24-month-old monolinguals' and bilinguals' performance in a fast-mapping task. In all three experiments, toddlers were exposed to familiar trials which tested their knowledge of known word–referent pairs, disambiguation trials in which novel word–referent pairs were indirectly learned, and retention trials which probed their recognition of the newly-learned word–referent pairs. In Experiment 1 (n = 48), lip movements were present during familiar and disambiguation trials, but not retention trials. In Experiment 2 (n = 48), lip movements were present during all three trial types. In Experiment 3 (bilinguals only, n = 24), a still face with no lip movements was present in all three trial types. While toddlers succeeded in the familiar and disambiguation trials of every experiment, success in the retention trials was found only in Experiment 2. This work suggests that the extra-linguistic support provided by lip movements improved the learning and recognition of the novel words.
Affiliation(s)
- Drew Weatherhead
- Department of Psychology and Neuroscience, Dalhousie University, Halifax, NS B3H 4R2, Canada
- Maria M. Arredondo
- Department of Human Development and Family Sciences, University of Texas at Austin, Austin, TX 78705, USA
- Loreto Nácar Garcia
- Department of Psychology, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
- Janet F. Werker
- Department of Psychology, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
6
Havy M, Zesiger PE. Bridging ears and eyes when learning spoken words: On the effects of bilingual experience at 30 months. Dev Sci 2020; 24:e13002. [PMID: 32506622 DOI: 10.1111/desc.13002]
Abstract
From the very first moments of their lives, infants selectively attend to the visible orofacial movements of their social partners and apply their exquisite speech perception skills to the service of lexical learning. Here we explore how early bilingual experience modulates children's ability to use visible speech as they form new lexical representations. Using a cross-modal word-learning task, bilingual children aged 30 months were tested on their ability to learn new lexical mappings in either the auditory or the visual modality. Lexical recognition was assessed either in the same modality as the one used at learning ('same modality' condition: auditory test after auditory learning, visual test after visual learning) or in the other modality ('cross-modality' condition: visual test after auditory learning, auditory test after visual learning). The results revealed that like their monolingual peers, bilingual children successfully learn new words in either the auditory or the visual modality and show cross-modal recognition of words following auditory learning. Interestingly, as opposed to monolinguals, they also demonstrate cross-modal recognition of words upon visual learning. Collectively, these findings indicate a bilingual edge in visual word learning, expressed in the capacity to form a recoverable cross-modal representation of visually learned words.
Affiliation(s)
- Mélanie Havy
- Faculty of Psychology and Educational Sciences, Geneva University, Geneva, Switzerland
- Pascal E Zesiger
- Faculty of Psychology and Educational Sciences, Geneva University, Geneva, Switzerland
7
Kehoe M, Havy M. Bilingual phonological acquisition: the influence of language-internal, language-external, and lexical factors. J Child Lang 2019; 46:292-333. [PMID: 30560762 DOI: 10.1017/s0305000918000478]
Abstract
This study examines the influence of language-internal (frequency and complexity of linguistic properties), language-external (percent French input, socioeconomic status (SES), and gender), and lexical factors (size of total and French vocabulary) on the phonological production abilities of monolingual and bilingual French-speaking children, aged 2;6. Children participated in an object and picture naming task in which they produced words selected to test different phonological properties. The bilinguals' first languages were coded in terms of the frequency and complexity of these phonological properties. Results indicated that bilinguals who spoke languages characterized by high frequency/complexity of codas and clusters showed superior coda and cluster accuracy in comparison to monolinguals. Bilinguals as a group also had better coda and cluster accuracy scores than monolinguals. These findings provide evidence for cross-linguistic interaction in combination with a 'general bilingual effect'. In addition, percent French exposure, SES, total vocabulary, and gender influenced phonological production.