1
Alviar C, Sahoo M, Edwards L, Jones W, Klin A, Lense M. Infant-directed song potentiates infants' selective attention to adults' mouths over the first year of life. Dev Sci 2023; 26:e13359. PMID: 36527322; PMCID: PMC10276172; DOI: 10.1111/desc.13359.
Abstract
The mechanisms by which infant-directed (ID) speech and song support language development in infancy are poorly understood; most prior investigations have focused on the auditory components of these signals. However, the visual components of ID communication are also of fundamental importance for language learning: over the first year of life, infants' visual attention to caregivers' faces during ID speech shifts from the eyes to the mouth, which provides synchronous visual cues that support speech and language development. Caregivers' facial displays during ID song are highly effective for sustaining infants' attention. Here we investigate whether ID song specifically enhances infants' attention to caregivers' mouths. A total of 299 typically developing infants watched clips of female actors engaging them with ID song and speech longitudinally at six time points from 3 to 12 months of age while eye-tracking data were collected. Infants' mouth-looking increased significantly over the first year of life, with a significantly greater increase during ID song than during ID speech. This difference emerged early (evident in the first 6 months of age) and was sustained over the first year. Follow-up analyses indicated that specific properties inherent to ID song (e.g., slower tempo, reduced rhythmic variability) contribute in part to infants' increased mouth-looking, with effects increasing with age. The exaggerated and expressive facial features that naturally accompany ID song may make it a particularly effective context for modulating infants' visual attention and supporting speech and language development, both in typically developing infants and in those with, or at risk for, communication challenges. A video abstract of this article can be viewed at https://youtu.be/SZ8xQW8h93A.
RESEARCH HIGHLIGHTS:
- Infants' visual attention to adults' mouths during infant-directed speech has been found to support speech and language development.
- Infant-directed (ID) song promotes mouth-looking by infants to a greater extent than does ID speech across the first year of life.
- Features characteristic of ID song, such as slower tempo, increased rhythmicity, increased audiovisual synchrony, and increased positive affect, all increase infants' attention to the mouth.
- The effects of song on infants' attention to the mouth are more prominent during the second half of the first year of life.
Affiliation(s)
- Camila Alviar
  - Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Manash Sahoo
  - Marcus Autism Center, Children's Healthcare of Atlanta, Atlanta, GA, USA
  - Emory University School of Medicine, Atlanta, GA, USA
- Laura Edwards
  - Marcus Autism Center, Children's Healthcare of Atlanta, Atlanta, GA, USA
  - Emory University School of Medicine, Atlanta, GA, USA
- Warren Jones
  - Marcus Autism Center, Children's Healthcare of Atlanta, Atlanta, GA, USA
  - Emory University School of Medicine, Atlanta, GA, USA
- Ami Klin
  - Marcus Autism Center, Children's Healthcare of Atlanta, Atlanta, GA, USA
  - Emory University School of Medicine, Atlanta, GA, USA
- Miriam Lense
  - Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
  - Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
  - The Curb Center for Art, Enterprise, and Public Policy, Vanderbilt University, Nashville, TN, USA
2
Yu CY, Cabildo A, Grahn JA, Vanden Bosch der Nederlanden CM. Perceived rhythmic regularity is greater for song than speech: examining acoustic correlates of rhythmic regularity in speech and song. Front Psychol 2023; 14:1167003. PMID: 37303916; PMCID: PMC10250601; DOI: 10.3389/fpsyg.2023.1167003.
Abstract
Rhythm is a key feature of music and language, but the way rhythm unfolds within each domain differs. Music induces perception of a beat, a regular repeating pulse spaced by roughly equal durations, whereas speech does not have the same isochronous framework. Although rhythmic regularity is a defining feature of music and language, it is difficult to derive acoustic indices of the differences in rhythmic regularity between domains. The current study examined whether participants could provide subjective ratings of rhythmic regularity for acoustically matched (syllable-, tempo-, and contour-matched) and acoustically unmatched (varying in tempo, syllable number, semantics, and contour) exemplars of speech and song. We used subjective ratings to index the presence or absence of an underlying beat and correlated ratings with stimulus features to identify acoustic metrics of regularity. Experiment 1 showed that ratings based on the term "rhythmic regularity" did not yield consistent definitions of regularity across participants, with opposite ratings for participants who adopted a beat-based definition (song greater than speech), a normal-prosody definition (speech greater than song), or an unclear definition (no difference). Experiment 2 defined rhythmic regularity as how easy it would be to tap or clap to the utterances. Participants rated song as easier to clap or tap to than speech for both acoustically matched and unmatched datasets. Subjective regularity ratings from Experiment 2 showed that stimuli with longer syllable durations and less spectral flux were rated as more rhythmically regular across domains. Our findings demonstrate that rhythmic regularity distinguishes speech from song and that several key acoustic features can be used to predict listeners' perception of rhythmic regularity both within and across domains.
Affiliation(s)
- Chu Yi Yu
  - The Brain and Mind Institute, Western University, London, ON, Canada
  - Department of Psychology, Western University, London, ON, Canada
- Anne Cabildo
  - Department of Psychology, University of Toronto, Mississauga, ON, Canada
- Jessica A. Grahn
  - The Brain and Mind Institute, Western University, London, ON, Canada
  - Department of Psychology, Western University, London, ON, Canada
- Christina M. Vanden Bosch der Nederlanden
  - The Brain and Mind Institute, Western University, London, ON, Canada
  - Department of Psychology, Western University, London, ON, Canada
  - Department of Psychology, University of Toronto, Mississauga, ON, Canada
3
Hannon EE, Schachner A, Nave-Blodgett JE. Babies know bad dancing when they see it: Older but not younger infants discriminate between synchronous and asynchronous audiovisual musical displays. J Exp Child Psychol 2017; 159:159-174. PMID: 28288412; DOI: 10.1016/j.jecp.2017.01.006.
Abstract
Movement to music is a universal human behavior, yet little is known about how observers perceive audiovisual synchrony in complex musical displays such as a person dancing to music, particularly during infancy and childhood. In the current study, we investigated how perception of musical audiovisual synchrony develops over the first year of life. We habituated infants to a video of a person dancing to music and subsequently presented videos in which the visual track was matched (synchronous) or mismatched (asynchronous) with the audio track. In a visual-only control condition, we presented the same visual stimuli with no sound. In Experiment 1, we found that older infants (8-12 months) exhibited a novelty preference for the mismatched movie when both auditory information and visual information were available and showed no preference when only visual information was available. By contrast, younger infants (5-8 months) in Experiment 2 did not discriminate matching stimuli from mismatching stimuli. This suggests that the ability to perceive musical audiovisual synchrony may develop during the second half of the first year of infancy.
Affiliation(s)
- Erin E Hannon
  - Department of Psychology, University of Nevada, Las Vegas, Las Vegas, NV 89154, USA
- Adena Schachner
  - Department of Psychology, University of California, San Diego, La Jolla, CA 92093, USA