1. Deng X, McClay E, Jastrzebski E, Wang Y, Yeung HH. Visual scanning patterns of a talking face when evaluating phonetic information in a native and non-native language. PLoS One 2024; 19:e0304150. [PMID: 38805447] [PMCID: PMC11132507] [DOI: 10.1371/journal.pone.0304150]
Abstract
When comprehending speech, listeners can use information encoded in visual cues from a face to enhance auditory speech comprehension. For example, prior work has shown that mouth movements reflect articulatory features of speech segments and durational information, while pitch and speech amplitude are primarily cued by eyebrow and head movements. Little is known about how the visual perception of segmental and prosodic speech information is influenced by linguistic experience. Using eye-tracking, we studied how perceivers' visual scanning of different regions on a talking face predicts accuracy in a task targeting segmental versus prosodic information, and asked how this is influenced by language familiarity. Twenty-four native English perceivers heard two audio sentences in either English or Mandarin (an unfamiliar, non-native language), which sometimes differed in segmental or prosodic information (or both). Perceivers then saw a silent video of a talking face and judged whether that video matched the first or the second audio sentence (or whether both sentences were the same). First, increased looking to the mouth predicted correct responses only in non-native language trials. Second, the start of a successful search for speech information in the mouth area was significantly delayed in non-native versus native trials, but only when the auditory sentences differed in prosodic information, not when they differed in segmental information. Third, in correct trials, saccade amplitude was significantly greater in native language trials than in non-native trials, indicating more narrowly focused fixations in the latter. Taken together, these results suggest that mouth-looking was more evident across all analyses when processing a non-native versus a native language; notably, when measuring perceivers' latency to fixate the mouth, this language effect was largest in trials where only prosodic information was useful for the task.
Affiliation(s)
- Xizi Deng: Department of Linguistics, Simon Fraser University, Burnaby, BC, Canada
- Elise McClay: Department of Linguistics, Simon Fraser University, Burnaby, BC, Canada
- Erin Jastrzebski: Department of Linguistics, Simon Fraser University, Burnaby, BC, Canada
- Yue Wang: Department of Linguistics, Simon Fraser University, Burnaby, BC, Canada
- H. Henny Yeung: Department of Linguistics, Simon Fraser University, Burnaby, BC, Canada
2. Urbanus E, Swaab H, Tartaglia N, van Rijn S. Social Communication in Young Children With Sex Chromosome Trisomy (XXY, XXX, XYY): A Study With Eye Tracking and Heart Rate Measures. Arch Clin Neuropsychol 2024; 39:482-497. [PMID: 37987192] [PMCID: PMC11110620] [DOI: 10.1093/arclin/acad088]
Abstract
OBJECTIVE Children with sex chromosome trisomy (SCT) have an increased risk for suboptimal development. Difficulties with language are frequently reported, start from a very young age, and encompass various domains. This cross-sectional study examined social orientation with eye tracking and physiological arousal responses to better understand how children perceive and respond to communicative bids, and evaluated the associations between social orientation and language outcomes, both concurrently and 1 year later. METHOD In total, 107 children with SCT (33 XXX, 50 XXY, and 24 XYY) and 102 controls (58 girls and 44 boys) aged between 1 and 7 years were included. Assessments took place in the USA and Western Europe. A communicative-bids eye tracking paradigm, physiological arousal measures, and receptive and expressive language outcomes were used. RESULTS Compared to controls, children with SCT showed reduced attention to the face and eyes of the on-screen interaction partner and reduced physiological arousal sensitivity in response to direct versus averted gaze. In addition, social orientation to the mouth was related to concurrent receptive and expressive language abilities in 1-year-old children with SCT. CONCLUSIONS Children with SCT may experience difficulties with social communication that extend beyond the well-recognized risk for early language delays. These difficulties may underlie the social-behavioral problems that have been described in the SCT population and are an important target for early monitoring and support.
Affiliation(s)
- Evelien Urbanus: Department of Clinical Neurodevelopmental Sciences, Leiden University, Leiden, The Netherlands; TRIXY Center of Expertise, Leiden University Treatment and Expertise Centre (LUBEC), Leiden, The Netherlands; Department of Clinical, Neuro, and Developmental Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Hanna Swaab: Department of Clinical Neurodevelopmental Sciences, Leiden University, Leiden, The Netherlands; TRIXY Center of Expertise, Leiden University Treatment and Expertise Centre (LUBEC), Leiden, The Netherlands
- Nicole Tartaglia: eXtraordinarY Kids Clinic, Developmental Pediatrics, Children's Hospital Colorado, Aurora, CO, USA; Department of Pediatrics, University of Colorado School of Medicine, Aurora, CO, USA
- Sophie van Rijn: Department of Clinical Neurodevelopmental Sciences, Leiden University, Leiden, The Netherlands; TRIXY Center of Expertise, Leiden University Treatment and Expertise Centre (LUBEC), Leiden, The Netherlands
3. Hartman J, Saffran J, Litovsky R. Word Learning in Deaf Adults Who Use Cochlear Implants: The Role of Talker Variability and Attention to the Mouth. Ear Hear 2024; 45:337-350. [PMID: 37695563] [PMCID: PMC10920394] [DOI: 10.1097/aud.0000000000001432]
Abstract
OBJECTIVES Although cochlear implants (CIs) facilitate spoken language acquisition, many CI listeners experience difficulty learning new words. Studies have shown that highly variable stimulus input and audiovisual cues improve speech perception in CI listeners. However, less is known about whether these two factors improve perception in a word learning context. Furthermore, few studies have examined how CI listeners direct their gaze to efficiently capture visual information available on a talker's face. The purpose of this study was two-fold: (1) to examine whether talker variability could improve word learning in CI listeners and (2) to examine how CI listeners direct their gaze while viewing a talker speak. DESIGN Eighteen adults with CIs and 10 adults with normal hearing (NH) learned eight novel word-object pairs spoken by a single talker or by six different talkers (multiple talkers). The word learning task consisted of nonsense words following the phonotactic rules of English. Learning was probed using a novel talker in a two-alternative forced-choice eye gaze task. Learners' eye movements to the mouth and the target object (accuracy) were tracked over time. RESULTS Both groups performed near ceiling during the test phase, regardless of whether they learned from a single talker or multiple talkers. However, compared to listeners with NH, CI listeners directed their gaze significantly more to the talker's mouth while learning the words. CONCLUSIONS Unlike NH listeners, who can successfully learn words without focusing on the talker's mouth, CI listeners tended to direct their gaze to the talker's mouth, which may facilitate learning. This finding is consistent with the hypothesis that CI listeners use a visual processing strategy that efficiently captures redundant audiovisual speech cues available at the mouth. Due to ceiling effects, however, it is unclear whether talker variability facilitated word learning for adult CI listeners, an issue that should be addressed in future work using more difficult listening conditions.
Affiliation(s)
- Jasenia Hartman: Department of Psychology and Neuroscience, Duke University, Durham, NC 27708; Neuroscience Training Program, University of Wisconsin-Madison, Madison, WI 53706
- Jenny Saffran: Department of Psychology, University of Wisconsin-Madison, Madison, WI 53706
- Ruth Litovsky: Neuroscience Training Program, University of Wisconsin-Madison, Madison, WI 53706; Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI 53706
4. Oesch N. Social Brain Perspectives on the Social and Evolutionary Neuroscience of Human Language. Brain Sci 2024; 14:166. [PMID: 38391740] [PMCID: PMC10886718] [DOI: 10.3390/brainsci14020166]
Abstract
Human language and social cognition are two key disciplines that have traditionally been studied as separate domains. Nonetheless, an emerging view suggests an alternative perspective. Drawing on the theoretical underpinnings of the social brain hypothesis (a thesis on the evolution of brain size and intelligence), the social complexity hypothesis (a thesis on the evolution of communication), and empirical research from comparative animal behavior, human social behavior, language acquisition in children, social cognitive neuroscience, and the cognitive neuroscience of language, it is argued that social cognition and language are two significantly interconnected capacities of the human species. Here, the evidence reviewed in support of this view includes (1) recent developmental studies on language learning in infants and young children, pointing to the crucial benefits associated with social stimulation for youngsters, including the quality and quantity of incoming linguistic information, dyadic infant/child-to-parent non-verbal and verbal interactions, and other important social cues integral for facilitating language learning and social bonding; (2) studies of the adult human brain, suggesting a high degree of specialization for sociolinguistic information processing, memory retrieval, and comprehension, such that these neural areas may connect social cognition with language and social bonding; (3) developmental deficits in language and social cognition, including autism spectrum disorder (ASD), which illustrate a unique developmental profile and further link language, social cognition, and social bonding; and (4) neural biomarkers that may help to identify early developmental disorders of language and social cognition. In effect, the social brain and social complexity hypotheses may jointly help to describe how neurotypical children and adults acquire language, why autistic children and adults exhibit simultaneous deficits in language and social cognition, and why nonhuman primates and other organisms with significant computational capacities cannot learn language. Perhaps most critically, this article argues that this and related research will allow scientists to generate a holistic profile and deeper understanding of the healthy adult social brain while developing more innovative and effective diagnoses, prognoses, and treatments for maladies and deficits associated with the social brain.
Affiliation(s)
- Nathan Oesch: Department of Anthropology, University of Toronto Mississauga, Mississauga, ON L5L 1C6, Canada; Department of Psychology, University of Toronto Mississauga, Mississauga, ON L5L 1C6, Canada
5. Liu S, Li X, Sun R. The effect of masks on infants' ability to fast-map and generalize new words. J Child Lang 2024:1-19. [PMID: 38189211] [DOI: 10.1017/s0305000923000697]
Abstract
Young children today are exposed to masks on a regular basis. However, there is limited empirical evidence on how masks may affect word learning. This study explored the effect of masks on infants' abilities to fast-map and generalize new words. Seventy-two Chinese infants (43 males, Mage = 18.26 months) were taught two novel word-object pairs by a speaker with or without a mask. They then heard the words and had to visually identify the correct objects, and also to generalize the words to a different speaker and to objects from the same category. Eye-tracking results indicate that infants looked longer at the target regardless of whether the speaker wore a mask. They also looked longer at the speaker's eyes than at the mouth only when words were taught through a mask. Thus, fast-mapping and generalization occur in both masked and unmasked conditions, as infants can flexibly access different visual cues during word learning.
Affiliation(s)
- Siying Liu: Institute of Linguistics, Shanghai International Studies University, Shanghai, China
- Xun Li: Institute of Linguistics, Shanghai International Studies University, Shanghai, China
- Renji Sun: East China University of Political Science and Law, China
6. Surrain S, Mesa MP, Assel MA, Zucker TA. Does Assessor Masking Affect Kindergartners' Performance on Oral Language Measures? A COVID-19 Era Experiment With Children From Diverse Home Language Backgrounds. Lang Speech Hear Serv Sch 2023; 54:1323-1332. [PMID: 37390464] [DOI: 10.1044/2023_lshss-22-00197]
Abstract
PURPOSE The ongoing COVID-19 pandemic has prompted changes to child assessment procedures in schools such as the use of face masks by assessors. Research with adults suggests that face masks diminish performance on speech processing and comprehension tasks, yet little is known about how assessor masking affects child performance. Therefore, we asked whether assessor masking impacts children's performance on a widely used, individually administered oral language assessment and if impacts vary by child home language background. METHOD A total of 96 kindergartners (5-7 years old, n = 45 with a home language other than English) were administered items from the Clinical Evaluation of Language Fundamentals Preschool-Second Edition Recalling Sentences subtest under two conditions: with and without the assessor wearing a face mask. Regression analysis was used to determine if children scored significantly lower in the masked condition and if the effect of masking depended on home language background. RESULTS Contrary to expectations, we found no evidence that students scored systematically differently in the masked condition. Children with a home language other than English scored lower overall, but masking did not increase the gap in scores by language background. CONCLUSIONS Our results suggest that children's performance on oral language measures is not adversely affected by assessor masking and imply that valid measurements of students' language skills may be obtained in masked conditions. While masking might decrease some of the social determinants of communication (e.g., recognition of emotions), masking in this experiment did not appear to detract from children's ability to hear and immediately recall verbal information. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.23567463.
Affiliation(s)
- Sarah Surrain: The Children's Learning Institute, The University of Texas Health Science Center at Houston
- Michael P Mesa: The Children's Learning Institute, The University of Texas Health Science Center at Houston
- Mike A Assel: The Children's Learning Institute, The University of Texas Health Science Center at Houston
- Tricia A Zucker: The Children's Learning Institute, The University of Texas Health Science Center at Houston
7. Byrne S, O'Flaherty E, Sledge H, Lenehan S, Jordan N, Boland F, Franklin R, Hurley S, McHugh J, Hourihane J. Infants born during the COVID-19 pandemic have less interest in masked faces than unmasked faces. Arch Dis Child 2023; 108:i. [PMID: 37541680] [DOI: 10.1136/archdischild-2022-325272]
Affiliation(s)
- Susan Byrne: Children's Health Ireland, Dublin, Ireland; Department of Paediatrics and Child Health, Royal College of Surgeons in Ireland, Dublin, Ireland; FutureNeuro SFI Research Centre, Royal College of Surgeons in Ireland, Dublin, Ireland
- Hailey Sledge: Department of Paediatrics and Child Health, Royal College of Surgeons in Ireland, Dublin, Ireland
- Sonia Lenehan: INFANT Centre, University College Cork, Cork, Ireland
- Fiona Boland: Division of Population Health Science, HRB Centre for Primary Care Research, Dublin, Ireland
- Ruth Franklin: Department of Paediatrics and Child Health, Royal College of Surgeons in Ireland, Dublin, Ireland
- Sadhbh Hurley: Children's Health Ireland, Dublin, Ireland; Department of Paediatrics and Child Health, Royal College of Surgeons in Ireland, Dublin, Ireland
- Jonathan Hourihane: Children's Health Ireland, Dublin, Ireland; Department of Paediatrics and Child Health, Royal College of Surgeons in Ireland, Dublin, Ireland
8. Alviar C, Sahoo M, Edwards L, Jones W, Klin A, Lense M. Infant-directed song potentiates infants' selective attention to adults' mouths over the first year of life. Dev Sci 2023; 26:e13359. [PMID: 36527322] [PMCID: PMC10276172] [DOI: 10.1111/desc.13359]
Abstract
The mechanisms by which infant-directed (ID) speech and song support language development in infancy are poorly understood, with most prior investigations focused on the auditory components of these signals. However, the visual components of ID communication are also of fundamental importance for language learning: over the first year of life, infants' visual attention to caregivers' faces during ID speech switches from a focus on the eyes to a focus on the mouth, which provides synchronous visual cues that support speech and language development. Caregivers' facial displays during ID song are highly effective for sustaining infants' attention. Here we investigate whether ID song specifically enhances infants' attention to caregivers' mouths. A total of 299 typically developing infants watched clips of female actors engaging them with ID song and speech longitudinally at six time points from 3 to 12 months of age while eye-tracking data were collected. Infants' mouth-looking significantly increased over the first year of life, with a significantly greater increase during ID song versus speech. This difference was early-emerging (evident in the first 6 months of age) and sustained over the first year. Follow-up analyses indicated that properties inherent to ID song (e.g., slower tempo, reduced rhythmic variability) in part contribute to infants' increased mouth-looking, with effects increasing with age. The exaggerated and expressive facial features that naturally accompany ID song may make it a particularly effective context for modulating infants' visual attention and supporting speech and language development in both typically developing infants and those with or at risk for communication challenges. A video abstract of this article can be viewed at https://youtu.be/SZ8xQW8h93A. RESEARCH HIGHLIGHTS:
- Infants' visual attention to adults' mouths during infant-directed speech has been found to support speech and language development.
- Infant-directed (ID) song promotes mouth-looking by infants to a greater extent than does ID speech across the first year of life.
- Features characteristic of ID song, such as slower tempo, increased rhythmicity, increased audiovisual synchrony, and increased positive affect, all increase infants' attention to the mouth.
- The effects of song on infants' attention to the mouth are more prominent during the second half of the first year of life.
Affiliation(s)
- Camila Alviar: Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Manash Sahoo: Marcus Autism Center, Children's Healthcare of Atlanta, Atlanta, GA, USA; Emory University School of Medicine, Atlanta, GA, USA
- Laura Edwards: Marcus Autism Center, Children's Healthcare of Atlanta, Atlanta, GA, USA; Emory University School of Medicine, Atlanta, GA, USA
- Warren Jones: Marcus Autism Center, Children's Healthcare of Atlanta, Atlanta, GA, USA; Emory University School of Medicine, Atlanta, GA, USA
- Ami Klin: Marcus Autism Center, Children's Healthcare of Atlanta, Atlanta, GA, USA; Emory University School of Medicine, Atlanta, GA, USA
- Miriam Lense: Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA; The Curb Center for Art, Enterprise, and Public Policy, Vanderbilt University, Nashville, TN, USA
9. Birulés J, Goupil L, Josse J, Fort M. The Role of Talking Faces in Infant Language Learning: Mind the Gap between Screen-Based Settings and Real-Life Communicative Interactions. Brain Sci 2023; 13:1167. [PMID: 37626523] [PMCID: PMC10452843] [DOI: 10.3390/brainsci13081167]
Abstract
Over the last few decades, developmental (psycho)linguists have demonstrated that perceiving talking faces audio-visually is important for early language acquisition. Using mostly well-controlled, screen-based laboratory approaches, this line of research has shown that paying attention to talking faces is likely to be one of the powerful strategies infants use to learn their native language(s). In this review, we combine evidence from these screen-based studies with another line of research that has studied how infants learn novel words and deploy their visual attention during naturalistic play. In our view, this is an important step toward developing an integrated account of how infants effectively extract audiovisual information from talkers' faces during early language learning. We identify three factors that have been understudied so far, despite the fact that they are likely to have an important impact on how infants deploy their attention (or not) toward talking faces during social interactions: social contingency, speaker characteristics, and task-dependencies. Last, we propose ideas to address these issues in future research, with the aim of reducing the existing knowledge gap between current experimental studies and the many ways infants can and do effectively rely upon the audiovisual information extracted from talking faces in their real-life language environment.
Affiliation(s)
- Joan Birulés: Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble Alpes, 38058 Grenoble, France
- Louise Goupil: Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble Alpes, 38058 Grenoble, France
- Jérémie Josse: Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble Alpes, 38058 Grenoble, France
- Mathilde Fort: Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble Alpes, 38058 Grenoble, France; Centre de Recherche en Neurosciences de Lyon, INSERM U1028-CNRS UMR 5292, Université Lyon 1, 69500 Bron, France
10. Edgar EV, Todd JT, Bahrick LE. Intersensory processing of faces and voices at 6 months predicts language outcomes at 18, 24, and 36 months of age. Infancy 2023; 28:569-596. [PMID: 36760157] [PMCID: PMC10564323] [DOI: 10.1111/infa.12533]
Abstract
Intersensory processing of social events (e.g., matching the sights and sounds of audiovisual speech) is a critical foundation for language development. Two recently developed protocols, the Multisensory Attention Assessment Protocol (MAAP) and the Intersensory Processing Efficiency Protocol (IPEP), assess individual differences in intersensory processing at a sufficiently fine-grained level for predicting developmental outcomes. Recent research using the MAAP demonstrates that 12-month intersensory processing of face-voice synchrony predicts language outcomes at 18 and 24 months, holding traditional predictors (parent language input, SES) constant. Here, we build on these findings by testing younger infants using the IPEP, a more comprehensive, fine-grained index of intersensory processing. Using a longitudinal sample of 103 infants, we tested whether intersensory processing (speed, accuracy) of faces and voices at 3 and 6 months predicts language outcomes at 12, 18, 24, and 36 months, holding traditional predictors constant. Results demonstrate that intersensory processing of faces and voices at 6 months (but not 3 months) accounted for significant unique variance in language outcomes at 18, 24, and 36 months, beyond that of traditional predictors. Findings highlight the importance of intersensory processing of face-voice synchrony as a foundation for language development as early as 6 months and reveal that individual differences assessed by the IPEP predict language outcomes even 2.5 years later.
11. Thorsson M, Galazka MA, Åsberg Johnels J, Hadjikhani N. A novel end-to-end dual-camera system for eye gaze synchrony assessment in face-to-face interaction. Atten Percept Psychophys 2023. [PMID: 37099200] [DOI: 10.3758/s13414-023-02679-4]
Abstract
Quantification of face-to-face interaction can provide highly relevant information in cognitive and psychological science research. Current commercial glint-dependent solutions suffer from several disadvantages and limitations when applied in face-to-face interaction, including data loss, parallax errors, the inconvenience and distracting effect of wearables, and/or the need for several cameras to capture each person. Here we present a novel eye-tracking solution, consisting of a dual-camera system used in conjunction with an individually optimized deep learning approach that aims to overcome some of these limitations. Our data show that this system can accurately classify gaze location within different areas of the face of two interlocutors, and capture subtle differences in interpersonal gaze synchrony between two individuals during a (semi-)naturalistic face-to-face interaction.
Affiliation(s)
- Max Thorsson: Gillberg Neuropsychiatry Centre, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Martyna A Galazka: Gillberg Neuropsychiatry Centre, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Jakob Åsberg Johnels: Gillberg Neuropsychiatry Centre, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Section of Speech and Language Pathology, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Nouchine Hadjikhani: Gillberg Neuropsychiatry Centre, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
12. Chawarska K, Lewkowicz D, Feiner H, Macari S, Vernetti A. Attention to audiovisual speech does not facilitate language acquisition in infants with familial history of autism. J Child Psychol Psychiatry 2022; 63:1466-1476. [PMID: 35244219] [DOI: 10.1111/jcpp.13595]
Abstract
BACKGROUND Due to familial liability, siblings of children with ASD exhibit elevated risk for language delays. The processes contributing to language delays in this population remain unclear. METHODS Considering well-established links between attention to dynamic audiovisual cues inherent in a speaker's face and speech processing, we investigated if attention to a speaker's face and mouth differs in 12-month-old infants at high familial risk for ASD but without ASD diagnosis (hr-sib; n = 91) and in infants at low familial risk (lr-sib; n = 62) for ASD and whether attention at 12 months predicts language outcomes at 18 months. RESULTS At 12 months, hr-sib and lr-sib infants did not differ in attention to face (p = .14), mouth preference (p = .30), or in receptive and expressive language scores (p = .36, p = .33). At 18 months, the hr-sib infants had lower receptive (p = .01) but not expressive (p = .84) language scores than the lr-sib infants. In the lr-sib infants, greater attention to the face (p = .022) and a mouth preference (p = .025) contributed to better language outcomes at 18 months. In the hr-sib infants, neither attention to the face nor a mouth preference was associated with language outcomes at 18 months. CONCLUSIONS Unlike low-risk infants, high-risk infants do not appear to benefit from audiovisual prosodic and speech cues in the service of language acquisition despite intact attention to these cues. We propose that impaired processing of audiovisual cues may constitute the link between genetic risk factors and poor language outcomes observed across the autism risk spectrum and may represent a promising endophenotype in autism.
Collapse
Affiliation(s)
- Katarzyna Chawarska: Child Study Center, Yale University School of Medicine, New Haven, CT, USA; Haskins Laboratories, New Haven, CT, USA
- David Lewkowicz: Child Study Center, Yale University School of Medicine, New Haven, CT, USA; Haskins Laboratories, New Haven, CT, USA
- Hannah Feiner: Child Study Center, Yale University School of Medicine, New Haven, CT, USA
- Suzanne Macari: Child Study Center, Yale University School of Medicine, New Haven, CT, USA
- Angelina Vernetti: Child Study Center, Yale University School of Medicine, New Haven, CT, USA
13. Lozano I, López Pérez D, Laudańska Z, Malinowska-Korczak A, Szmytke M, Radkowska A, Tomalski P. Changes in selective attention to articulating mouth across infancy: Sex differences and associations with language outcomes. Infancy 2022; 27:1132-1153. DOI: 10.1111/infa.12496.
Affiliation(s)
- Itziar Lozano: Department of Cognitive Psychology and Neurocognitive Science, Faculty of Psychology, University of Warsaw, Warsaw, Poland; Faculty of Psychology, Universidad Autónoma de Madrid, Madrid, Spain
- David López Pérez: Neurocognitive Development Lab, Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
- Zuzanna Laudańska: Neurocognitive Development Lab, Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
- Anna Malinowska-Korczak: Neurocognitive Development Lab, Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
- Magdalena Szmytke: Neurocognitive Development Lab, Faculty of Psychology, University of Warsaw, Warsaw, Poland
- Alicja Radkowska: Neurocognitive Development Lab, Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland; Neurocognitive Development Lab, Faculty of Psychology, University of Warsaw, Warsaw, Poland
- Przemysław Tomalski: Neurocognitive Development Lab, Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
14. Rubio-Fernandez P, Shukla V, Bhatia V, Ben-Ami S, Sinha P. Head turning is an effective cue for gaze following: Evidence from newly sighted individuals, school children and adults. Neuropsychologia 2022; 174:108330. PMID: 35843461. DOI: 10.1016/j.neuropsychologia.2022.108330.
Abstract
In referential communication, gaze is often interpreted as a social cue that facilitates comprehension and enables word learning. Here we investigated the degree to which head turning facilitates gaze following. We presented participants with static pictures of a man looking at a target object in a first and third block of trials (pre- and post-intervention), while they saw short videos of the same man turning towards the target in the second block of trials (intervention). In Experiment 1, newly sighted individuals (treated for congenital cataracts; N = 8) benefited from the motion cues, both when comparing their initial performance with static gaze cues to their performance with dynamic head turning, and their performance with static cues before and after the videos. In Experiment 2, neurotypical school children (ages 5-10 years; N = 90) and adults (N = 30) also revealed improved performance with motion cues, although most participants had started to follow the static gaze cues before they saw the videos. Our results confirm that head turning is an effective social cue when interpreting new words, offering new insights for a pathways approach to development.
Affiliation(s)
- Shlomit Ben-Ami: Massachusetts Institute of Technology, USA; Tel Aviv University, Israel
15. Bastianello T, Keren-Portnoy T, Majorano M, Vihman M. Infant looking preferences towards dynamic faces: A systematic review. Infant Behav Dev 2022; 67:101709. PMID: 35338995. DOI: 10.1016/j.infbeh.2022.101709.
Abstract
Although the pattern of visual attention towards the region of the eyes is now well-established for infants at an early stage of development, less is known about the extent to which the mouth attracts an infant's attention. Even less is known about the extent to which these specific looking behaviours towards different regions of the talking face (i.e., the eyes or the mouth) may impact on or account for aspects of language development. The aim of the present systematic review is to synthesize and analyse (i) which factors might determine different looking patterns in infants during audio-visual tasks using dynamic faces and (ii) how these patterns have been studied in relation to aspects of the baby's development. Four bibliographic databases were explored, and the records were selected following specified inclusion criteria. The search led to the identification of 19 papers (October 2021). Some studies have tried to clarify the role played by audio-visual support in speech perception and early production based on directly related factors such as the age or language background of the participants, while others have tested the child's competence in terms of linguistic or social skills. Several hypotheses have been advanced to explain the selective attention phenomenon. The results of the selected studies have led to different lines of interpretation. Some suggestions for future research are outlined.
Affiliation(s)
- Marilyn Vihman: Department of Language and Linguistic Science, University of York, UK
16. Carnevali L, Gui A, Jones EJH, Farroni T. Face processing in early development: A systematic review of behavioral studies and considerations in times of COVID-19 pandemic. Front Psychol 2022; 13:778247. PMID: 35250718; PMCID: PMC8894249. DOI: 10.3389/fpsyg.2022.778247.
Abstract
Human faces are one of the most prominent stimuli in the visual environment of young infants and convey critical information for the development of social cognition. During the COVID-19 pandemic, mask wearing has become a common practice outside the home environment. With masks covering nose and mouth regions, the facial cues available to the infant are impoverished. The impact of these changes on development is unknown but is critical to debates around mask mandates in early childhood settings. As infants grow, they increasingly interact with a broader range of familiar and unfamiliar people outside the home; in these settings, mask wearing could possibly influence social development. In order to generate hypotheses about the effects of mask wearing on infant social development, in the present work, we systematically review N = 129 studies selected based on the most recent PRISMA guidelines providing a state-of-the-art framework of behavioral studies investigating face processing in early infancy. We focused on identifying sensitive periods during which being exposed to specific facial features or to the entire face configuration has been found to be important for the development of perceptive and socio-communicative skills. For perceptive skills, infants gradually learn to analyze the eyes or the gaze direction within the context of the entire face configuration. This contributes to identity recognition as well as emotional expression discrimination. For socio-communicative skills, direct gaze and emotional facial expressions are crucial for attention engagement while eye-gaze cuing is important for joint attention. Moreover, attention to the mouth is particularly relevant for speech learning. We discuss possible implications of the exposure to masked faces for developmental needs and functions. Providing groundwork for further research, we encourage the investigation of the consequences of mask wearing for infants' perceptive and socio-communicative development, suggesting new directions within the research field.
Affiliation(s)
- Laura Carnevali: Department of Developmental Psychology and Socialization, University of Padua, Padua, Italy
- Anna Gui: Centre for Brain and Cognitive Development, Birkbeck, University of London, London, United Kingdom
- Emily J. H. Jones: Centre for Brain and Cognitive Development, Birkbeck, University of London, London, United Kingdom
- Teresa Farroni: Department of Developmental Psychology and Socialization, University of Padua, Padua, Italy
17. Santapuram P, Feldman JI, Bowman SM, Raj S, Suzman E, Crowley S, Kim SY, Keceli-Kaysili B, Bottema-Beutel K, Lewkowicz DJ, Wallace MT, Woynaroski TG. Mechanisms by which early eye gaze to the mouth during multisensory speech influences expressive communication development in infant siblings of children with and without autism. Mind Brain Educ 2022; 16:62-74. PMID: 35273650; PMCID: PMC8903197. DOI: 10.1111/mbe.12310.
Abstract
Looking to the mouth of a talker early in life predicts expressive communication. We hypothesized that looking at a talker's mouth may signal that infants are ready for increased supported joint engagement and that it subsequently facilitates prelinguistic vocal development and translates to broader gains in expressive communication. We tested this hypothesis in 50 infants aged 6-18 months with heightened and general population-level likelihood of autism diagnosis (Sibs-autism and Sibs-NA, respectively). We measured infants' gaze to a speaker's face using an eye tracking task, supported joint engagement during parent-child free play sessions, vocal complexity during a communication sample, and broader expressive communication. Looking at the mouth was indirectly associated with expressive communication via increased higher-order supported joint engagement and vocal complexity. This indirect effect did not vary according to sibling status. This study provides preliminary insights into the mechanisms by which looking at the mouth may influence expressive communication development.
Affiliation(s)
- Pooja Santapuram: Vanderbilt School of Medicine, Vanderbilt University, Nashville, TN, USA
- Jacob I Feldman: Department of Hearing & Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Sarah M Bowman: Department of Hearing & Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; present affiliation: Augusta University/University of Georgia Medical Partnership at the Medical College of Georgia, Athens, GA, USA
- Sweeya Raj: Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Evan Suzman: Master's Program in Biomedical Sciences, Vanderbilt University, Nashville, TN, USA
- Shannon Crowley: Lynch School of Education and Human Development, Boston College, Boston, MA, USA
- So Yoon Kim: present affiliation: Department of Teacher Education, Duksung Women's University, Seoul, South Korea
- Bahar Keceli-Kaysili: Department of Hearing & Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Mark T Wallace: Department of Hearing & Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center; Vanderbilt Brain Institute, Vanderbilt University; Frist Center for Autism & Innovation, Vanderbilt University; Department of Psychology, Vanderbilt University; Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center; Department of Pharmacology, Vanderbilt University
- Tiffany G Woynaroski: Department of Hearing & Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center; Vanderbilt Brain Institute, Vanderbilt University; Frist Center for Autism & Innovation, Vanderbilt University
18. Sanchez-Alonso S, Aslin RN. Towards a model of language neurobiology in early development. Brain Lang 2022; 224:105047. PMID: 34894429. DOI: 10.1016/j.bandl.2021.105047.
Abstract
Understanding language neurobiology in early childhood is essential for characterizing the developmental structural and functional changes that lead to the mature adult language network. In the last two decades, the field of language neurodevelopment has received increasing attention, particularly given the rapid advances in the implementation of neuroimaging techniques and analytic approaches that allow detailed investigations into the developing brain across a variety of cognitive domains. These methodological and analytical advances hold the promise of developing early markers of language outcomes that allow diagnosis and clinical interventions at the earliest stages of development. Here, we argue that findings in language neurobiology need to be integrated within an approach that captures the dynamic nature and inherent variability that characterizes the developing brain and the interplay between behavior and (structural and functional) neural patterns. Accordingly, we describe a framework for understanding language neurobiology in early development, which minimally requires an explicit characterization of the following core domains: i) computations underlying language learning mechanisms, ii) developmental patterns of change across neural and behavioral measures, iii) environmental variables that reinforce language learning (e.g., the social context), and iv) brain maturational constraints for optimal neural plasticity, which determine the infant's sensitivity to learning from the environment. We discuss each of these domains in the context of recent behavioral and neuroimaging findings and consider the need for quantitatively modeling two main sources of variation: individual differences or trait-like patterns of variation and within-subject differences or state-like patterns of variation. The goal is to enable models that allow prediction of language outcomes from neural measures that take into account these two types of variation. Finally, we examine how future methodological approaches would benefit from the inclusion of more ecologically valid paradigms that complement and allow generalization of traditional controlled laboratory methods.
Affiliation(s)
- Richard N Aslin: Haskins Laboratories, New Haven, CT, USA; Department of Psychology, Yale University, New Haven, CT, USA; Child Study Center, Yale University, New Haven, CT, USA
19. Galazka MA, Hadjikhani N, Sundqvist M, Åsberg Johnels J. Facial speech processing in children with and without dyslexia. Ann Dyslexia 2021; 71:501-524. PMID: 34115279; PMCID: PMC8458188. DOI: 10.1007/s11881-021-00231-3.
Abstract
What role does the presence of facial speech play for children with dyslexia? Current literature proposes two distinctive claims. One claim states that children with dyslexia make less use of visual information from the mouth during speech processing due to a deficit in recruitment of audiovisual areas. An opposing claim suggests that children with dyslexia are in fact reliant on such information in order to compensate for auditory/phonological impairments. The current paper aims at directly testing these contrasting hypotheses (here referred to as "mouth insensitivity" versus "mouth reliance") in school-age children with and without dyslexia, matched on age and listening comprehension. Using eye tracking, in Study 1, we examined how children look at the mouth across conditions varying in speech processing demands. The results did not indicate significant group differences in looking at the mouth. However, correlation analyses suggest potentially important distinctions within the dyslexia group: those children with dyslexia who are better readers attended more to the mouth while presented with a person's face in a phonologically demanding condition. In Study 2, we examined whether the presence of facial speech cues is functionally beneficial when a child is encoding written words. The results indicated lack of overall group differences on the task, although those with less severe reading problems in the dyslexia group were more accurate when reading words that were presented with articulatory facial speech cues. Collectively, our results suggest that children with dyslexia differ in their "mouth reliance" versus "mouth insensitivity," a profile that seems to be related to the severity of their reading problems.
Affiliation(s)
- Martyna A Galazka: Gillberg Neuropsychiatry Center, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden
- Nouchine Hadjikhani: Gillberg Neuropsychiatry Center, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden; Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School/MGH/MIT, Boston, MA, USA
- Maria Sundqvist: Department of Education and Special Education, University of Gothenburg, Gothenburg, Sweden
- Jakob Åsberg Johnels: Gillberg Neuropsychiatry Center, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden; Section of Speech and Language Pathology, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden
20. Visual traces of language acquisition in toddlers with autism spectrum disorder during the second year of life. J Autism Dev Disord 2021; 51:2519-2530. PMID: 33009972; PMCID: PMC8018986. DOI: 10.1007/s10803-020-04730-x.
Abstract
Infants show shifting patterns of visual engagement to faces over the first years of life. To explore the adaptive implications of this engagement, we collected eye-tracking measures on cross-sectional samples of 10- to 25-month-old typically developing toddlers (TD; N = 28) and toddlers with autism spectrum disorder (ASD; N = 54). Concurrent language assessments were conducted, and relationships between visual engagement and expressive and receptive language were analyzed between groups and within ASD subgroups. TD and ASD toddlers exhibited greater mouth- than eye-looking, with TD toddlers exhibiting higher levels of mouth-looking than ASD toddlers. Mouth-looking was positively associated with expressive language in TD toddlers, and in ASD toddlers who had acquired first words. Mouth-looking was unrelated to expressive language in ASD toddlers who had not yet acquired first words.
21. Sekiyama K, Hisanaga S, Mugitani R. Selective attention to the mouth of a talker in Japanese-learning infants and toddlers: Its relationship with vocabulary and compensation for noise. Cortex 2021; 140:145-156. PMID: 33989900. DOI: 10.1016/j.cortex.2021.03.023.
Abstract
Infants increasingly gaze at the mouth of talking faces during the latter half of the first postnatal year. This study investigated the mouth-looking behavior of 120 full-term infants and toddlers (6 months to 3 years) and 12 young adults (21-24 years) from Japanese monolingual families. The study addressed three questions: (1) Is the attentional shift to the mouth in infancy similarly observed in a Japanese environment, where the contribution of visual speech is known to be relatively weak? (2) Do noisy conditions increase the mouth-looking behavior of Japanese young children? (3) Is mouth-looking behavior related to language acquisition? To this end, movies of a talker speaking short phrases were presented while the signal-to-noise ratio was manipulated (SNR: Clear, SN+4, and SN-4). The expressive vocabulary of toddlers was obtained through parental report. The results indicated that Japanese infants initially have a strong preference for the eyes over the mouth, which weakens toward 10 months, but the shift occurred later and in a milder fashion compared with known results for English-learning infants. Even after 10 months, no clear-cut preference for the mouth was observed until 3 years of age, even in linguistically challenging situations with strong noise. In the Clear condition, gaze returned to the eyes as early as 3 years of age, while attention to the mouth increased with increasing noise level. In addition, multiple regression analyses revealed a tendency for 2- and 3-year-olds with larger vocabularies to look increasingly at the eyes. Overall, the gaze of Japanese-learning infants and toddlers was more biased toward the eyes in various respects compared with known results for English-learning infants. The present findings shed new light on our understanding of the development of selective attention to the mouth in non-Western populations.
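The SN+4 and SN-4 conditions above are defined by signal-to-noise ratio in decibels. As a minimal sketch of how such stimuli are typically constructed (the function name and the pure-tone stand-in for speech are assumptions for illustration, not the authors' materials), the noise track can be scaled so that the speech-to-noise power ratio hits the target:

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Return speech + noise, with noise scaled to the target SNR in dB.

    SNR_dB = 10 * log10(P_speech / P_noise), so the noise gain g must
    satisfy g**2 = P_speech / (P_noise * 10**(SNR_dB / 10)).
    """
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + gain * noise

# Illustrative stimulus: a 1 kHz tone standing in for speech, in white noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 16000, endpoint=False)
speech = np.sin(2.0 * np.pi * 1000.0 * t)
noise = rng.standard_normal(t.size)
sn_plus4 = mix_at_snr(speech, noise, 4.0)    # analogue of the SN+4 condition
sn_minus4 = mix_at_snr(speech, noise, -4.0)  # analogue of the SN-4 condition
```

Because the gain is derived directly from the measured powers, the achieved SNR equals the requested value by construction, which is what makes the Clear, SN+4, and SN-4 conditions directly comparable.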
Affiliation(s)
- Kaoru Sekiyama: Graduate School of Advanced Integrated Studies in Human Survivability, Kyoto University, Kyoto, Japan; Cognitive Psychology Laboratory, Faculty of Letters, Kumamoto University, Kumamoto, Japan
- Satoko Hisanaga: Cognitive Psychology Laboratory, Faculty of Letters, Kumamoto University, Kumamoto, Japan
- Ryoko Mugitani: Department of Psychology, Faculty of Integrated Arts and Social Sciences, Japan Women's University, Kanagawa, Japan
22. Haensel JX, Ishikawa M, Itakura S, Smith TJ, Senju A. Cultural influences on face scanning are consistent across infancy and adulthood. Infant Behav Dev 2020; 61:101503. PMID: 33190091; PMCID: PMC7768814. DOI: 10.1016/j.infbeh.2020.101503.
Abstract
The emergence of cultural differences in face scanning is thought to be shaped by social experience. However, previous studies mainly investigated eye movements of adults and little is known about early development. The current study recorded eye movements of British and Japanese infants (aged 10 and 16 months) and adults, who were presented with static and dynamic faces on screen. Cultural differences were observed across all age groups, with British participants exhibiting more mouth scanning, and Japanese individuals showing increased central face (nose) scanning for dynamic stimuli. Age-related influences independent of culture were also revealed, with a shift from eye to mouth scanning between 10 and 16 months, while adults distributed their gaze more flexibly. Against our prediction, no age-related increases in cultural differences were observed, suggesting the possibility that cultural differences are largely manifest by 10 months of age. Overall, the findings suggest that individuals adopt visual strategies in line with their cultural background from early in infancy, pointing to the development of a highly adaptive face processing system that is shaped by early sociocultural experience.
Affiliation(s)
- Jennifer X Haensel: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, United Kingdom
- Mitsuhiko Ishikawa: Department of Psychology, Graduate School of Letters, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto, 606-8501, Japan
- Shoji Itakura: Center for Baby Science, Doshisha University, 4-1-1 Kizugawadai, Kizugawa, Kyoto, 619-0225, Japan
- Tim J Smith: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, United Kingdom
- Atsushi Senju: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, United Kingdom
23. Havy M, Zesiger PE. Bridging ears and eyes when learning spoken words: On the effects of bilingual experience at 30 months. Dev Sci 2020; 24:e13002. PMID: 32506622. DOI: 10.1111/desc.13002.
Abstract
From the very first moments of their lives, infants selectively attend to the visible orofacial movements of their social partners and apply their exquisite speech perception skills to the service of lexical learning. Here we explore how early bilingual experience modulates children's ability to use visible speech as they form new lexical representations. Using a cross-modal word-learning task, bilingual children aged 30 months were tested on their ability to learn new lexical mappings in either the auditory or the visual modality. Lexical recognition was assessed either in the same modality as the one used at learning ('same modality' condition: auditory test after auditory learning, visual test after visual learning) or in the other modality ('cross-modality' condition: visual test after auditory learning, auditory test after visual learning). The results revealed that like their monolingual peers, bilingual children successfully learn new words in either the auditory or the visual modality and show cross-modal recognition of words following auditory learning. Interestingly, as opposed to monolinguals, they also demonstrate cross-modal recognition of words upon visual learning. Collectively, these findings indicate a bilingual edge in visual word learning, expressed in the capacity to form a recoverable cross-modal representation of visually learned words.
Affiliation(s)
- Mélanie Havy: Faculty of Psychology and Educational Sciences, Geneva University, Geneva, Switzerland
- Pascal E Zesiger: Faculty of Psychology and Educational Sciences, Geneva University, Geneva, Switzerland
24. Shic F, Wang Q, Macari SL, Chawarska K. The role of limited salience of speech in selective attention to faces in toddlers with autism spectrum disorders. J Child Psychol Psychiatry 2020; 61:459-469. PMID: 31471912; PMCID: PMC7048639. DOI: 10.1111/jcpp.13118.
Abstract
BACKGROUND: Impaired attention to faces of interactive partners is a marker for autism spectrum disorder (ASD) in early childhood. However, it is unclear whether children with ASD avoid faces or find them less salient, and whether the phenomenon is linked with the presence of eye contact or speech.
METHODS: We investigated the impacts of speech (SP) and direct gaze (DG) on attention to faces in 22-month-old toddlers with ASD (n = 50) and typically developing controls (TD; n = 47) using the Selective Social Attention 2.0 (SSA 2.0) task. The task consisted of four conditions in which the presence (+) and absence (-) of DG and SP were systematically manipulated. The severity of autism symptoms and verbal and nonverbal skills were characterized concurrently with eye tracking at 22.4 (SD = 3.2) months and prospectively at 39.8 (SD = 4.3) months.
RESULTS: Toddlers with ASD looked less than TD toddlers at face and mouth regions only when the actress was speaking (direct gaze absent with speech, DG-SP+: d = 0.99, p < .001 for the face and d = 0.98, p < .001 for the mouth region; direct gaze present with speech, DG+SP+: d = 1.47, p < .001 for the face and d = 1.01, p < .001 for the mouth region). Toddlers with ASD looked less at the eye region only when both gaze and speech cues were present (d = 0.46, p = .03). Salience of the combined DG and SP cues was associated concurrently and prospectively with the severity of autism symptoms, and the association remained significant after controlling for verbal and nonverbal levels.
CONCLUSIONS: The study links poor attention to faces with limited salience of audiovisual speech and provides no support for the face avoidance hypothesis in the early stages of ASD. These results are consequential for research on early discriminant and predictive biomarkers as well as the identification of novel treatment targets.
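The group differences above are reported as Cohen's d, the standardized mean difference. For reference, a minimal sketch of the standard pooled-SD formula (the example numbers are invented for illustration and are not values from the study):

```python
import math

def cohens_d(mean1: float, sd1: float, n1: int,
             mean2: float, sd2: float, n2: int) -> float:
    """Cohen's d: mean difference divided by the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Hypothetical proportions of looking time at the face in two groups
d = cohens_d(0.62, 0.15, 47, 0.45, 0.19, 50)
```

By Cohen's conventional benchmarks, d around 0.2 is small, 0.5 medium, and 0.8 large, which is why the DG+SP+ face effect of d = 1.47 reported above counts as very large.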
Affiliation(s)
- Frederick Shic: Child Study Center, Yale School of Medicine, 40 Temple St Ste 7D, New Haven, CT 06510; Center for Child Health, Behavior and Development, Seattle Children's Research Institute, 2001 8th Ave Ste 400, Seattle, WA 98121; Department of Pediatrics, University of Washington School of Medicine, Seattle, WA 98121
- Quan Wang: Child Study Center, Yale School of Medicine, 40 Temple St Ste 7D, New Haven, CT 06510
- Suzanne L. Macari: Child Study Center, Yale School of Medicine, 40 Temple St Ste 7D, New Haven, CT 06510
- Katarzyna Chawarska: Child Study Center, Yale School of Medicine, 40 Temple St Ste 7D, New Haven, CT 06510
25. Wallace MT, Woynaroski TG, Stevenson RA. Multisensory integration as a window into orderly and disrupted cognition and communication. Annu Rev Psychol 2020; 71:193-219. DOI: 10.1146/annurev-psych-010419-051112.
Abstract
During our everyday lives, we are confronted with a vast amount of information from several sensory modalities. This multisensory information needs to be appropriately integrated for us to effectively engage with and learn from our world. Research carried out over the last half century has provided new insights into the way such multisensory processing improves human performance and perception; the neurophysiological foundations of multisensory function; the time course for its development; how multisensory abilities differ in clinical populations; and, most recently, the links between multisensory processing and cognitive abilities. This review summarizes the extant literature on multisensory function in typical and atypical circumstances, discusses the implications of the work carried out to date for theory and research, and points toward next steps for advancing the field.
Affiliation(s)
- Mark T. Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA
- Departments of Psychology and Pharmacology, Vanderbilt University, Nashville, Tennessee 37232, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee 37232, USA
- Vanderbilt Kennedy Center, Nashville, Tennessee 37203, USA
- Tiffany G. Woynaroski
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee 37232, USA
- Vanderbilt Kennedy Center, Nashville, Tennessee 37203, USA
- Ryan A. Stevenson
- Departments of Psychology and Psychiatry and Program in Neuroscience, University of Western Ontario, London, Ontario N6A 3K7, Canada
- Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada
|
26
|
Noiray A, Wieling M, Abakarova D, Rubertus E, Tiede M. Back From the Future: Nonlinear Anticipation in Adults' and Children's Speech. J Speech Lang Hear Res 2019; 62:3033-3054. [PMID: 31465705 DOI: 10.1044/2019_jslhr-s-csmc7-18-0208] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Purpose: This study examines the temporal organization of vocalic anticipation in German adults and in children from 3 to 7 years of age. The main objective was to test for nonlinear processes in vocalic anticipation, which may result from the interaction between lingual gestural goals for individual vowels and those for their neighbors over time.
Method: Ultrasound imaging was used to record tongue movement at 5 time points throughout short utterances of the form V1#CV2. Vocalic anticipation was examined with generalized additive modeling, an analytical approach allowing for the estimation of both linear and nonlinear influences on anticipatory processes.
Results: Both adults and children exhibit nonlinear patterns of vocalic anticipation over time, with the degree and extent of anticipation varying as a function of the individual consonants and vowels assembled. However, noticeable developmental discrepancies were found: vocalic anticipation was present earlier in the utterances of children aged 3-5 years than in those of adults and, to some extent, 7-year-olds.
Conclusions: A developmental transition toward more segmentally specified coarticulatory organization seems to occur from kindergarten to primary school to adulthood. In adults, nonlinear anticipatory patterns over time suggest a strong differentiation between the gestural goals for consecutive segments. In children, this differentiation is not yet mature: vowels show greater prominence over time and seem to be activated more in phase with previous segments relative to adults.
Affiliation(s)
- Aude Noiray
- Laboratory for Oral Language Acquisition, Department of Linguistics, University of Potsdam, Germany
- Haskins Laboratories, New Haven, CT
- Martijn Wieling
- Haskins Laboratories, New Haven, CT
- Center for Language and Cognition, University of Groningen, the Netherlands
- Dzhuma Abakarova
- Laboratory for Oral Language Acquisition, Department of Linguistics, University of Potsdam, Germany
- Elina Rubertus
- Laboratory for Oral Language Acquisition, Department of Linguistics, University of Potsdam, Germany
|
27
|
Pons F, Bosch L, Lewkowicz DJ. Twelve-month-old infants’ attention to the eyes of a talking face is associated with communication and social skills. Infant Behav Dev 2019; 54:80-84. [DOI: 10.1016/j.infbeh.2018.12.003] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2017] [Revised: 12/10/2018] [Accepted: 12/10/2018] [Indexed: 12/01/2022]
|
28
|
Birulés J, Bosch L, Brieke R, Pons F, Lewkowicz DJ. Inside bilingualism: Language background modulates selective attention to a talker's mouth. Dev Sci 2018; 22:e12755. [PMID: 30251757 DOI: 10.1111/desc.12755] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2018] [Revised: 08/08/2018] [Accepted: 09/21/2018] [Indexed: 11/28/2022]
Abstract
Previous findings indicate that bilingual Catalan/Spanish-learning infants attend more to the highly salient audiovisual redundancy cues normally available in a talker's mouth than do monolingual infants. Presumably, greater attention to such cues renders the challenge of learning two languages easier. Spanish and Catalan are, however, rhythmically and phonologically close languages. This raises the possibility that bilinguals only rely on redundant audiovisual cues when their languages are close. To test this possibility, we exposed 15-month-old and 4- to 6-year-old close-language bilinguals (Spanish/Catalan) and distant-language bilinguals (Spanish/"other") to videos of a talker uttering Spanish or Catalan (native) and English (non-native) monologues and recorded eye-gaze to the talker's eyes and mouth. At both ages, the close-language bilinguals attended more to the talker's mouth than the distant-language bilinguals. This indicates that language proximity modulates selective attention to a talker's mouth during early childhood and suggests that reliance on the greater salience of audiovisual speech cues depends on the difficulty of the speech-processing task.
Affiliation(s)
- Joan Birulés
- Department of Cognition, Development and Educational Psychology, Universitat de Barcelona, Barcelona, Spain
- Laura Bosch
- Department of Cognition, Development and Educational Psychology, Universitat de Barcelona, Barcelona, Spain
- Ricarda Brieke
- Department of Cognition, Development and Educational Psychology, Universitat de Barcelona, Barcelona, Spain
- Ferran Pons
- Department of Cognition, Development and Educational Psychology, Universitat de Barcelona, Barcelona, Spain
- David J Lewkowicz
- Department of Communication Sciences and Disorders, Northeastern University, Boston, Massachusetts
|