1
Diel A, Lalgi T, Teufel M, Bäuerle A, MacDorman K. Eerie edibles: Realism and food neophobia predict an uncanny valley in AI-generated food images. Appetite 2025:107926. [PMID: 39993448] [DOI: 10.1016/j.appet.2025.107926]
Abstract
This study investigates whether imperfect AI-generated food images evoke an uncanny valley effect, making them appear uncannier than either unrealistic or realistic food images. It further explores whether this effect is a nonlinear function of realism. Underlying mechanisms are examined, including food disgust and food neophobia. The study also compares reactions to moldy and rotten food with reactions to AI-generated food. Individual differences in food disgust and food neophobia are treated as moderators of food uncanniness. The results show that a cubic function of realism best predicts uncanniness, with imperfect AI-generated food rated significantly more uncanny and less pleasant than unrealistic and realistic food. Pleasantness followed a quadratic function of realism. Food neophobia significantly moderated the uncanny valley effect, while food disgust sensitivity did not. The findings indicate that deviations from expected realism elicit discomfort, driven by novelty aversion rather than contamination-related disgust.
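To make the curve-fitting logic concrete, here is a minimal sketch (not the authors' analysis code; the data and variable names are invented) that fits quadratic and cubic polynomials to simulated realism and uncanniness ratings and compares them by AIC, mirroring the model comparison described above.

```python
# Minimal sketch (not the authors' code): compare quadratic vs. cubic fits
# of uncanniness ratings as a function of stimulus realism.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: realism scores in [0, 1] and uncanniness ratings with
# a dip-and-recover ("uncanny valley") shape plus noise.
realism = rng.uniform(0.0, 1.0, size=200)
uncanniness = 2.0 - 6.0 * realism + 14.0 * realism**2 - 9.0 * realism**3
uncanniness += rng.normal(0.0, 0.3, size=realism.size)

def fit_poly(x, y, degree):
    """Least-squares polynomial fit; return coefficients and AIC (Gaussian errors)."""
    coefs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coefs, x)
    n, k = x.size, degree + 1
    aic = n * np.log(np.mean(resid**2)) + 2 * k
    return coefs, aic

_, aic_quadratic = fit_poly(realism, uncanniness, 2)
_, aic_cubic = fit_poly(realism, uncanniness, 3)

# A lower AIC for the cubic model would mirror the reported result that a
# cubic function of realism best predicts uncanniness.
print(f"AIC quadratic: {aic_quadratic:.1f}, AIC cubic: {aic_cubic:.1f}")
```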
Affiliation(s)
- Alexander Diel
- Clinic for Psychosomatic Medicine and Psychotherapy, LVR-University Hospital Essen, University of Duisburg-Essen, Essen, Germany; Center for Translational Neuro- and Behavioral Sciences, University of Duisburg-Essen, Essen, Germany.
- Tania Lalgi
- Clinic for Psychosomatic Medicine and Psychotherapy, LVR-University Hospital Essen, University of Duisburg-Essen, Essen, Germany; Center for Translational Neuro- and Behavioral Sciences, University of Duisburg-Essen, Essen, Germany
- Martin Teufel
- Clinic for Psychosomatic Medicine and Psychotherapy, LVR-University Hospital Essen, University of Duisburg-Essen, Essen, Germany; Center for Translational Neuro- and Behavioral Sciences, University of Duisburg-Essen, Essen, Germany
- Alexander Bäuerle
- Clinic for Psychosomatic Medicine and Psychotherapy, LVR-University Hospital Essen, University of Duisburg-Essen, Essen, Germany; Center for Translational Neuro- and Behavioral Sciences, University of Duisburg-Essen, Essen, Germany
- Karl MacDorman
- Luddy School of Informatics, Computing, and Engineering, Indiana University, Indianapolis, USA
2
Panneton R, Ostroff WL, Bhullar N, Netto M. Plasticity in older infants' perception of phonetic contrasts: The role of selective attention in context. Infancy 2025; 30:e12620. [PMID: 39192613] [PMCID: PMC11647196] [DOI: 10.1111/infa.12620]
Abstract
Developmental plasticity refers to conditions and circumstances that increase phenotypic variability. In infancy, plasticity expands and contracts depending on domains of functioning, developmental history, and timing. In terms of language processing, infants attend to and discriminate both native and non-native phonetic contrasts, but selectively attune to their native phonemes by the end of the first postnatal year. However, relevant studies have excluded factors regarded as promoters of attention, such as infant-directed (ID) speech, synchronous multimodal presentations, and female speakers. Here we investigated whether English-learning 11-month-olds would discriminate a non-native phonetic contrast while manipulating these factors. Results showed significant discrimination of the non-native contrast, regardless of speech register, provided that the contrast was presented by a dynamic female speaker. Interestingly, when a static object or a dynamic male ID speaker replaced the female, no significant discrimination was found. These results show that infants can discriminate non-native phonetic contrasts in an enhanced context at an age when they have been characterized as not being able to do so. Synchronized, multimodal information from female speakers allowed infants to perceive difficult non-native phonemes, highlighting the importance of an ecologically valid context for studying speech perception and language learning in early development.
Affiliation(s)
- Robin Panneton
- Department of Psychology, Virginia Tech, Blacksburg, Virginia, USA
- Wendy L. Ostroff
- Hutchins School of Interdisciplinary Studies, Sonoma State University, Rohnert Park, California, USA
- Madeline Netto
- Department of Psychology, Virginia Tech, Blacksburg, Virginia, USA
3
Ghazanfar AA, Gomez-Marin A. The central role of the individual in the history of brains. Neurosci Biobehav Rev 2024; 163:105744. [PMID: 38825259] [PMCID: PMC11246226] [DOI: 10.1016/j.neubiorev.2024.105744]
Abstract
Every species' brain, body, and behavior are shaped by the contingencies of its evolutionary history; these exert pressures that change developmental trajectories. There is, however, another set of contingencies that shape us and other animals: those that occur during a lifetime. In this perspective piece, we show how these two histories are intertwined by focusing on the individual. We suggest that organisms (their brains and behaviors) are not solely the developmental products of genes and neural circuitry but individual centers of action unfolding in time. To unpack this idea, we first emphasize the importance of variation and the central role of the individual in biology. We then go over "errors in time" that we often make when comparing development across species. Next, we reveal how an individual's development is a process rather than a product by presenting a set of case studies. These show developmental trajectories as emerging in the contexts of "the actual now" and "the presence of the past". Our consideration reveals that individuals are slippery: they are never static; they are a set of ongoing, creative activities. In light of this, it seems that taking individual development seriously is essential if we aspire to make meaningful comparisons of neural circuits and behavior within and across species.
Affiliation(s)
- Asif A Ghazanfar
- Princeton Neuroscience Institute, and Department of Psychology, Princeton University, Princeton, NJ 08544, USA.
- Alex Gomez-Marin
- Behavior of Organisms Laboratory, Instituto de Neurociencias CSIC-UMH, Alicante 03550, Spain.
4
Nava E, Giraud M, Bolognini N. The emergence of the multisensory brain: From the womb to the first steps. iScience 2024; 27:108758. [PMID: 38230260] [PMCID: PMC10790096] [DOI: 10.1016/j.isci.2023.108758]
Abstract
The becoming of the human being is a multisensory process that starts in the womb. By integrating spontaneous neuronal activity with inputs from the external world, the developing brain learns to make sense of itself through multiple sensory experiences. Over the past ten years, advances in neuroimaging and electrophysiological techniques have allowed the exploration of the neural correlates of multisensory processing in the newborn and infant brain, thus adding an important piece of information to behavioral evidence of early sensitivity to multisensory events. Here, we review recent behavioral and neuroimaging findings to document the origins and early development of multisensory processing, particularly showing that the human brain appears naturally tuned to multisensory events at birth, although this tuning requires multisensory experience to fully mature. We conclude the review by highlighting the potential uses and benefits of multisensory interventions in promoting healthy development, drawing on emerging studies in preterm infants.
Affiliation(s)
- Elena Nava
- Department of Psychology & Milan Centre for Neuroscience (NeuroMI), University of Milan-Bicocca, Milan, Italy
- Michelle Giraud
- Department of Psychology & Milan Centre for Neuroscience (NeuroMI), University of Milan-Bicocca, Milan, Italy
- Nadia Bolognini
- Department of Psychology & Milan Centre for Neuroscience (NeuroMI), University of Milan-Bicocca, Milan, Italy
- Laboratory of Neuropsychology, IRCCS Istituto Auxologico Italiano, Milan, Italy
5
Bosten JM, Coen-Cagli R, Franklin A, Solomon SG, Webster MA. Calibrating Vision: Concepts and Questions. Vision Res 2022; 201:108131. [PMID: 37139435] [PMCID: PMC10151026] [DOI: 10.1016/j.visres.2022.108131]
Abstract
The idea that visual coding and perception are shaped by experience and adjust to changes in the environment or the observer is universally recognized as a cornerstone of visual processing, yet the functions and processes mediating these calibrations remain in many ways poorly understood. In this article we review a number of facets and issues surrounding the general notion of calibration, with a focus on plasticity within the encoding and representational stages of visual processing. These include how many types of calibrations there are - and how we decide; how plasticity for encoding is intertwined with other principles of sensory coding; how it is instantiated at the level of the dynamic networks mediating vision; how it varies with development or between individuals; and the factors that may limit the form or degree of the adjustments. Our goal is to give a small glimpse of an enormous and fundamental dimension of vision, and to point to some of the unresolved questions in our understanding of how and why ongoing calibrations are a pervasive and essential element of vision.
Affiliation(s)
- Ruben Coen-Cagli
- Department of Systems & Computational Biology, Dominick P. Purpura Department of Neuroscience, and Department of Ophthalmology and Visual Sciences, Albert Einstein College of Medicine, Bronx, NY, USA
- Samuel G Solomon
- Institute of Behavioural Neuroscience, Department of Experimental Psychology, University College London, UK
6
Belteki Z, van den Boomen C, Junge C. Face-to-face contact during infancy: How the development of gaze to faces feeds into infants' vocabulary outcomes. Front Psychol 2022; 13:997186. [PMID: 36389540] [PMCID: PMC9650530] [DOI: 10.3389/fpsyg.2022.997186]
Abstract
Infants acquire their first words through interactions with social partners. In the first year of life, infants receive a high frequency of visual and auditory input from faces, making faces a potentially strong social cue for facilitating word-to-world mappings. In this position paper, we review how and when infant gaze to faces is likely to support their subsequent vocabulary outcomes. We assess the relevance of infant gaze to faces selectively, in three domains: infant gaze to different features within a face (that is, eyes and mouth); then to faces (compared to objects); and finally to more socially relevant types of faces. We argue that infant gaze to faces could scaffold vocabulary construction, but its relevance may be affected by the developmental level of the infant and the type of task with which they are presented. Gaze to faces proves relevant to vocabulary: gaze to the eyes can signal the communicative nature of the situation or the labeled object, while gaze to the mouth can improve word processing; both are key cues for highlighting word-to-world pairings. We also identify gaps in the literature regarding how infants' gaze to faces (versus objects) or to different types of faces relates to vocabulary outcomes. An important direction for future research will be to fill these gaps to better understand the social factors that influence infant vocabulary outcomes.
7
Bosworth RG, Hwang SO, Corina DP. Visual attention for linguistic and non-linguistic body actions in non-signing and native signing children. Front Psychol 2022; 13:951057. [PMID: 36160576] [PMCID: PMC9505519] [DOI: 10.3389/fpsyg.2022.951057]
Abstract
Evidence from adult studies of deaf signers supports the dissociation between neural systems involved in processing visual linguistic and non-linguistic body actions. The question of how and when this specialization arises is poorly understood. Visual attention to these forms is likely to change with age and be affected by prior language experience. The present study used eye-tracking methodology with infants and children as they freely viewed alternating video sequences of lexical American Sign Language (ASL) signs and non-linguistic body actions (self-directed grooming action and object-directed pantomime). In Experiment 1, we quantified fixation patterns using an area of interest (AOI) approach and calculated face preference index (FPI) values to assess the developmental differences between 6- and 11-month-old hearing infants. Both groups were from monolingual English-speaking homes with no prior exposure to sign language. Six-month-olds attended to the signer's face for grooming, but for mimes and signs they were drawn to the "articulatory space" where the hands and arms primarily fall. Eleven-month-olds, on the other hand, showed similar attention to the face for all body action types. We interpret this to reflect an early visual language sensitivity that diminishes with age, just before the child's first birthday. In Experiment 2, we contrasted 18 hearing monolingual English-speaking children (mean age of 4.8 years) with 13 hearing children of deaf adults (CODAs; mean age of 5.7 years) whose primary language at home was ASL. Native signing children had a significantly greater face attentional bias than non-signing children for ASL signs, but not for grooming and mimes. The differences in visual attention patterns that are contingent on age (in infants) and language experience (in children) may be related to both linguistic specialization over time and the emerging awareness of communicative gestural acts.
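For readers unfamiliar with AOI-based eye-tracking summaries, the sketch below computes a face preference index from looking times to the face versus the articulatory-space AOI. The exact formula, field names, and numbers are illustrative assumptions, not the definition used in the paper.

```python
# Hypothetical sketch of an AOI-based face preference index (FPI).
# The formula (face minus articulatory space, normalized by their sum) is an
# assumption for illustration; the paper may define FPI differently.
from dataclasses import dataclass

@dataclass
class TrialLooking:
    face_ms: float          # total fixation time in the face AOI
    articulators_ms: float  # total fixation time in the hands/arms AOI

def face_preference_index(trial: TrialLooking) -> float:
    """Return a value in [-1, 1]: positive = face bias, negative = articulator bias."""
    total = trial.face_ms + trial.articulators_ms
    if total == 0:
        return 0.0
    return (trial.face_ms - trial.articulators_ms) / total

# Example: an infant who looks 1200 ms at the face and 1800 ms at the hands
# yields FPI = -0.2, i.e., a modest bias toward the articulatory space.
print(face_preference_index(TrialLooking(face_ms=1200, articulators_ms=1800)))
```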
Affiliation(s)
- Rain G. Bosworth
- NTID PLAY Lab, National Technical Institute for the Deaf, Rochester Institute of Technology, Rochester, NY, United States
- So One Hwang
- Center for Research in Language, University of California, San Diego, San Diego, CA, United States
- David P. Corina
- Center for Mind and Brain, University of California, Davis, Davis, CA, United States
8
Cox CMM, Keren-Portnoy T, Roepstorff A, Fusaroli R. A Bayesian meta-analysis of infants' ability to perceive audio-visual congruence for speech. Infancy 2021; 27:67-96. [PMID: 34542230] [DOI: 10.1111/infa.12436]
Abstract
This paper quantifies the extent to which infants can perceive audio-visual congruence for speech information and assesses whether this ability changes with native language exposure over time. A hierarchical Bayesian robust regression model of 92 separate effect sizes extracted from 24 studies indicates a moderate effect size in a positive direction (0.35, CI [0.21, 0.50]). This result suggests that infants possess a robust ability to detect audio-visual congruence for speech. Moderator analyses, moreover, suggest that infants' audio-visual matching ability for speech emerges at an early point in the process of language acquisition and remains stable for both native and non-native speech throughout early development. A sensitivity analysis of the meta-analytic data, however, indicates that a moderate publication bias for significant results could shift the lower credible interval to include null effects. Based on these findings, we outline recommendations for new lines of enquiry and suggest ways to improve the replicability of results in future investigations.
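As a rough illustration of the modeling approach described above, the following sketch specifies a hierarchical robust (Student-t) random-effects meta-analysis in PyMC. The priors, data, and variable names are assumptions for demonstration, not the authors' model.

```python
# Minimal sketch of a hierarchical robust random-effects meta-analysis with a
# Student-t likelihood. Priors and data are illustrative, not the authors' model.
import numpy as np
import pymc as pm
import arviz as az

# Hypothetical input: effect sizes, their standard errors, and the study each
# effect size comes from (several effect sizes can share one study).
effect_sizes = np.array([0.40, 0.10, 0.60, 0.30, 0.20, 0.50])
std_errors = np.array([0.15, 0.20, 0.25, 0.10, 0.30, 0.20])
study_idx = np.array([0, 0, 1, 1, 2, 2])
n_studies = 3

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=1.0)        # overall (pooled) effect
    tau = pm.HalfNormal("tau", sigma=0.5)          # between-study heterogeneity
    theta = pm.Normal("theta", mu=mu, sigma=tau, shape=n_studies)
    nu = pm.Gamma("nu", alpha=2.0, beta=0.1)       # heavy tails for robustness
    pm.StudentT("obs", nu=nu, mu=theta[study_idx],
                sigma=std_errors, observed=effect_sizes)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)

# The posterior for `mu` plays the role of the reported overall effect:
# a mean and credible interval for audio-visual matching ability.
print(az.summary(idata, var_names=["mu", "tau"]))
```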
Affiliation(s)
- Christopher Martin Mikkelsen Cox
- School of Communication and Culture, Aarhus University, Aarhus, Denmark; Interacting Minds Centre, Aarhus University, Aarhus, Denmark; Department of Language and Linguistic Science, University of York, Heslington, UK
- Tamar Keren-Portnoy
- Department of Language and Linguistic Science, University of York, Heslington, UK
- Andreas Roepstorff
- School of Communication and Culture, Aarhus University, Aarhus, Denmark; Interacting Minds Centre, Aarhus University, Aarhus, Denmark
- Riccardo Fusaroli
- School of Communication and Culture, Aarhus University, Aarhus, Denmark; Interacting Minds Centre, Aarhus University, Aarhus, Denmark
9
Fletcher MD. Can Haptic Stimulation Enhance Music Perception in Hearing-Impaired Listeners? Front Neurosci 2021; 15:723877. [PMID: 34531717] [PMCID: PMC8439542] [DOI: 10.3389/fnins.2021.723877]
Abstract
Cochlear implants (CIs) have been remarkably successful at restoring hearing in severely-to-profoundly hearing-impaired individuals. However, users often struggle to deconstruct complex auditory scenes with multiple simultaneous sounds, which can result in reduced music enjoyment and impaired speech understanding in background noise. Hearing aid users often have similar issues, though these are typically less acute. Several recent studies have shown that haptic stimulation can enhance CI listening by giving access to sound features that are poorly transmitted through the electrical CI signal. This “electro-haptic stimulation” improves melody recognition and pitch discrimination, as well as speech-in-noise performance and sound localization. The success of this approach suggests it could also enhance auditory perception in hearing-aid users and other hearing-impaired listeners. This review focuses on the use of haptic stimulation to enhance music perception in hearing-impaired listeners. Music is prevalent throughout everyday life, being critical to media such as film and video games, and often being central to events such as weddings and funerals. It represents the biggest challenge for signal processing, as it is typically an extremely complex acoustic signal, containing multiple simultaneous harmonic and inharmonic sounds. Signal-processing approaches developed for enhancing music perception could therefore have significant utility for other key issues faced by hearing-impaired listeners, such as understanding speech in noisy environments. This review first discusses the limits of music perception in hearing-impaired listeners and the limits of the tactile system. It then discusses the evidence around integration of audio and haptic stimulation in the brain. Next, the features, suitability, and success of current haptic devices for enhancing music perception are reviewed, as well as the signal-processing approaches that could be deployed in future haptic devices. Finally, the cutting-edge technologies that could be exploited for enhancing music perception with haptics are discussed. These include the latest micro motor and driver technology, low-power wireless technology, machine learning, big data, and cloud computing. New approaches for enhancing music perception in hearing-impaired listeners could substantially improve quality of life. Furthermore, effective haptic techniques for providing complex sound information could offer a non-invasive, affordable means for enhancing listening more broadly in hearing-impaired individuals.
Affiliation(s)
- Mark D Fletcher
- University of Southampton Auditory Implant Service, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom; Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
10
Krasotkina A, Götz A, Höhle B, Schwarzer G. Perceptual narrowing in face- and speech-perception domains in infancy: A longitudinal approach. Infant Behav Dev 2021; 64:101607. [PMID: 34274849] [DOI: 10.1016/j.infbeh.2021.101607]
Abstract
During the first year of life, infants undergo a process known as perceptual narrowing, which reduces their sensitivity to classes of stimuli that they do not encounter in their environment. It has been proposed that perceptual narrowing for faces and speech may be driven by shared domain-general processes. To investigate this theory, our study longitudinally tested 50 German Caucasian infants in both domains, first at 6 months of age and again at 9 months of age. We used an infant-controlled habituation-dishabituation paradigm to test the infants' ability to discriminate among other-race Asian faces and non-native Cantonese speech tones, as well as same-race Caucasian faces as a control. We found that while at 6 months of age infants could discriminate among all stimuli, by 9 months of age they could no longer discriminate among other-race faces or non-native tones. However, infants could discriminate among same-race stimuli both at 6 and at 9 months of age. These results demonstrate that the same infants undergo perceptual narrowing for both other-race faces and non-native speech tones between the ages of 6 and 9 months. This parallel narrowing in the face and speech perception modalities over the same period lends support to the domain-general theory of perceptual narrowing in face and speech perception.
11
Fletcher MD, Verschuur CA. Electro-Haptic Stimulation: A New Approach for Improving Cochlear-Implant Listening. Front Neurosci 2021; 15:581414. [PMID: 34177440] [PMCID: PMC8219940] [DOI: 10.3389/fnins.2021.581414]
Abstract
Cochlear implants (CIs) have been remarkably successful at restoring speech perception for severely to profoundly deaf individuals. Despite their success, several limitations remain, particularly in CI users' ability to understand speech in noisy environments, locate sound sources, and enjoy music. A new multimodal approach has been proposed that uses haptic stimulation to provide sound information that is poorly transmitted by the implant. This augmenting of the electrical CI signal with haptic stimulation (electro-haptic stimulation; EHS) has been shown to improve speech-in-noise performance and sound localization in CI users. There is also evidence that it could enhance music perception. We review the evidence of EHS enhancement of CI listening and discuss key areas where further research is required. These include understanding the neural basis of EHS enhancement, understanding the effectiveness of EHS across different clinical populations, and the optimization of signal-processing strategies. We also discuss the significant potential for a new generation of haptic neuroprosthetic devices to aid those who cannot access hearing-assistive technology, either because of biomedical or healthcare-access issues. While significant further research and development is required, we conclude that EHS represents a promising new approach that could, in the near future, offer a non-invasive, inexpensive means of substantially improving clinical outcomes for hearing-impaired individuals.
Affiliation(s)
- Mark D. Fletcher
- Faculty of Engineering and Physical Sciences, University of Southampton Auditory Implant Service, University of Southampton, Southampton, United Kingdom
- Faculty of Engineering and Physical Sciences, Institute of Sound and Vibration Research, University of Southampton, Southampton, United Kingdom
- Carl A. Verschuur
- Faculty of Engineering and Physical Sciences, University of Southampton Auditory Implant Service, University of Southampton, Southampton, United Kingdom
12
Diel A, MacDorman KF. Creepy cats and strange high houses: Support for configural processing in testing predictions of nine uncanny valley theories. J Vis 2021; 21:1. [PMID: 33792617] [PMCID: PMC8024776] [DOI: 10.1167/jov.21.4.1]
Abstract
In 1970, Masahiro Mori proposed the uncanny valley (UV), a region in a human-likeness continuum where an entity risks eliciting a cold, eerie, repellent feeling. Recent studies have shown that this feeling can be elicited by entities modeled not only on humans but also nonhuman animals. The perceptual and cognitive mechanisms underlying the UV effect are not well understood, although many theories have been proposed to explain them. To test the predictions of nine classes of theories, a within-subjects experiment was conducted with 136 participants. The theories' predictions were compared with ratings of 10 classes of stimuli on eeriness and coldness indices. One type of theory, configural processing, predicted eight out of nine significant effects. Atypicality, in its extended form, in which the uncanny valley effect is amplified by the stimulus appearing more human, also predicted eight. Threat avoidance predicted seven; atypicality, perceptual mismatch, and mismatch+ predicted six; category+, novelty avoidance, mate selection, and psychopathy avoidance predicted five; and category uncertainty predicted three. Empathy's main prediction was not supported. Given that the number of significant effects predicted depends partly on our choice of hypotheses, a detailed consideration of each result is advised. We do, however, note the methodological value of examining many competing theories in the same experiment.
Affiliation(s)
- Alexander Diel
- School of Psychology, Cardiff University, Cardiff, United Kingdom; Indiana University School of Informatics and Computing, Indianapolis, IN, USA
- Karl F MacDorman
- Indiana University School of Informatics and Computing, Indianapolis, IN, USA
13
Bayet L, Saville A, Balas B. Sensitivity to face animacy and inversion in childhood: Evidence from EEG data. Neuropsychologia 2021; 156:107838. [PMID: 33775702] [DOI: 10.1016/j.neuropsychologia.2021.107838]
Abstract
Adults exhibit relative behavioral difficulties in processing inanimate, artificial faces compared to real human faces, with implications for using artificial faces in research and designing artificial social agents. However, the developmental trajectory of inanimate face perception is unknown. To address this gap, we used electroencephalography to investigate inanimate face processing in cross-sectional groups of 5-10-year-old children and adults. A face inversion manipulation was used to test whether face animacy processing relies on expert face processing strategies. Groups of 5-7-year-olds (N = 18), 8-10-year-olds (N = 18), and adults (N = 16) watched pictures of real or doll faces presented in an upright or inverted orientation. Analyses of event-related potentials revealed larger N170 amplitudes in response to doll faces, irrespective of age group or face orientation. Thus, the N170 is sensitive to face animacy by 5-7 years of age, but such sensitivity may not reflect high-level, expert face processing. Multivariate pattern analyses of the EEG signal additionally assessed whether animacy information could be reliably extracted during face processing. Face orientation, but not face animacy, could be reliably decoded from occipitotemporal channels in children and adults. Face animacy could be decoded from whole-scalp channels in adults, but not children. Together, these results suggest that 5-10-year-old children exhibit some sensitivity to face animacy over occipitotemporal regions that is comparable to adults.
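The decoding analyses referred to above follow the general MVPA recipe sketched below: train a cross-validated classifier on multichannel EEG patterns and ask whether accuracy exceeds chance. The data are simulated, and the pipeline choices (scaler, logistic regression, 5-fold CV) are assumptions, not the authors' exact analysis.

```python
# Minimal sketch of multivariate pattern analysis (MVPA) on EEG:
# decode a binary condition (e.g., real vs. doll face) from channel patterns
# using cross-validated logistic regression. Data here are simulated.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(42)

n_trials, n_channels = 200, 32
labels = rng.integers(0, 2, size=n_trials)    # 0 = real face, 1 = doll face
X = rng.normal(size=(n_trials, n_channels))   # mean channel amplitudes per trial
X[labels == 1, :5] += 0.4                     # weak condition signal in a few channels

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, labels, cv=cv, scoring="accuracy")

# Above-chance (>0.5) cross-validated accuracy is the criterion for saying the
# condition can be "reliably decoded" from the scalp pattern.
print(f"Mean decoding accuracy: {scores.mean():.2f}")
```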
Affiliation(s)
- Laurie Bayet
- Department of Psychology and Center for Neuroscience and Behavior, American University, Washington, DC, USA.
- Alyson Saville
- Department of Psychology, North Dakota State University, Fargo, ND, USA
- Benjamin Balas
- Department of Psychology, North Dakota State University, Fargo, ND, USA.
14
Dorn K, Cauvet E, Weinert S. A cross-linguistic study of multisensory perceptual narrowing in German and Swedish infants during the first year of life. Infant Child Dev 2021. [DOI: 10.1002/icd.2217]
Affiliation(s)
- Katharina Dorn
- Department of Developmental Psychology, Otto-Friedrich University Bamberg, Germany
- Elodie Cauvet
- Department of Women's and Children's Health, Karolinska Institute of Neurodevelopmental Disorders (KIND), Stockholm, Sweden
- Sabine Weinert
- Department of Developmental Psychology, Otto-Friedrich University Bamberg, Germany
15
Byers-Heinlein K, Tsui ASM, Bergmann C, Black AK, Brown A, Carbajal MJ, Durrant S, Fennell CT, Fiévet AC, Frank MC, Gampe A, Gervain J, Gonzalez-Gomez N, Hamlin JK, Havron N, Hernik M, Kerr S, Killam H, Klassen K, Kosie JE, Kovács ÁM, Lew-Williams C, Liu L, Mani N, Marino C, Mastroberardino M, Mateu V, Noble C, Orena AJ, Polka L, Potter CE, Schreiner M, Singh L, Soderstrom M, Sundara M, Waddell C, Werker JF, Wermelinger S. A multi-lab study of bilingual infants: Exploring the preference for infant-directed speech. Adv Methods Pract Psychol Sci 2021; 4:10.1177/2515245920974622. [PMID: 35821764] [PMCID: PMC9273003] [DOI: 10.1177/2515245920974622]
Abstract
From the earliest months of life, infants prefer listening to and learn better from infant-directed speech (IDS) than adult-directed speech (ADS). Yet, IDS differs within communities, across languages, and across cultures, both in form and in prevalence. This large-scale, multi-site study used the diversity of bilingual infant experiences to explore the impact of different types of linguistic experience on infants' IDS preference. As part of the multi-lab ManyBabies 1 project, we compared lab-matched samples of 333 bilingual and 385 monolingual infants' preference for North-American English IDS (cf. ManyBabies Consortium, 2020: ManyBabies 1), tested in 17 labs in 7 countries. Those infants were tested in two age groups: 6-9 months (the younger sample) and 12-15 months (the older sample). We found that bilingual and monolingual infants both preferred IDS to ADS, and did not differ in terms of the overall magnitude of this preference. However, amongst bilingual infants who were acquiring North-American English (NAE) as a native language, greater exposure to NAE was associated with a stronger IDS preference, extending the previous finding from ManyBabies 1 that monolinguals learning NAE as a native language showed a stronger preference than infants unexposed to NAE. Together, our findings indicate that IDS preference likely makes a similar contribution to monolingual and bilingual development, and that infants are exquisitely sensitive to the nature and frequency of different types of language input in their early environments.
Affiliation(s)
- Judit Gervain
- Integrative Neuroscience and Cognition Center (INCC), CNRS & Université Paris Descartes
- Shila Kerr
- McGill University, School of Communication Sciences and Disorders
- Caterina Marino
- Integrative Neuroscience and Cognition Center (INCC), CNRS & Université Paris Descartes
- Linda Polka
- McGill University, School of Communication Sciences and Disorders
16
Ujiie Y, Kanazawa S, Yamaguchi MK. Development of the multisensory perception of water in infancy. J Vis 2020; 20:5. [PMID: 32749446] [PMCID: PMC7438635] [DOI: 10.1167/jov.20.8.5]
Abstract
Material perception is facilitated by multisensory interactions that enable us to associate the visual properties of a material with its auditory properties. Such interactions develop during infancy and are assumed to depend on the familiarity of materials. Here, we aimed to pinpoint the age at which infants acquire multisensory interactions for the perception of water, which is a familiar material to them. We presented two side-by-side movies of pouring water and ice while providing the corresponding sounds of water and ice, as well as silence. We found that infants older than 5 months of age looked longer at the water movie when they heard the sound of water. In contrast, they did not look longer at the ice movie when they heard the sound of ice. These results indicate that at approximately 5 months of age, infants develop multisensory interactions between the auditory and visual properties of water, but not of ice. The contrasting results between water and ice suggest that the development of multisensory material perception depends on the frequency of interactions with materials during infancy.
17
Ujiie Y, Kanazawa S, Yamaguchi MK. The Other-Race-Effect on Audiovisual Speech Integration in Infants: A NIRS Study. Front Psychol 2020; 11:971. [PMID: 32499746] [PMCID: PMC7243679] [DOI: 10.3389/fpsyg.2020.00971]
Abstract
Previous studies have revealed perceptual narrowing for the own-race face in face discrimination, but this phenomenon is poorly understood in face and voice integration. We focused on infants' brain responses to the McGurk effect to examine whether the other-race effect occurs in the activation patterns. In Experiment 1, we used fNIRS to test for neural responses associated with the McGurk effect in Japanese 8- to 9-month-old infants and to examine differences between the activation patterns in response to own-race-face and other-race-face stimuli. We used two race-face conditions, own-race-face (East Asian) and other-race-face (Caucasian), each of which contained audiovisual-matched and McGurk-type stimuli. While the infants (N = 34) were observing each speech stimulus for each race, we measured cerebral hemoglobin concentrations in bilateral temporal brain regions. The results showed that in the own-race-face condition, audiovisual-matched stimuli induced activation of the left temporal region, and the McGurk stimuli induced activation of the bilateral temporal regions. No significant activations were found in the other-race-face condition. These results indicate that the McGurk effect occurred only in the own-race-face condition. In Experiment 2, we used a familiarization/novelty preference procedure to confirm that the infants (N = 28) could perceive the McGurk effect in the own-race-face condition but not in the other-race-face condition. The behavioral data supported the fNIRS results, implying the presence of narrowing for the own-race face in the McGurk effect. These results suggest that narrowing of the McGurk effect may be involved in the development of relatively high-order processing, such as face-to-face communication with people surrounding the infant. We discuss the hypothesis that perceptual narrowing is a modality-general, pan-sensory process.
Affiliation(s)
- Yuta Ujiie
- Graduate School of Psychology, Chukyo University, Aichi, Japan
- Research and Development Initiative, Chuo University, Tokyo, Japan
- Japan Society for the Promotion of Science, Tokyo, Japan
- So Kanazawa
- Department of Psychology, Japan Women's University, Kawasaki, Japan
18
Tichko P, Large EW. Modeling infants' perceptual narrowing to musical rhythms: neural oscillation and Hebbian plasticity. Ann N Y Acad Sci 2019; 1453:125-139. [PMID: 31021447] [DOI: 10.1111/nyas.14050]
Abstract
Previous research suggests that infants' perception of musical rhythm is fine-tuned to culture-specific rhythmic structures over the first postnatal year of human life. To date, however, little is known about the neurobiological principles that may underlie this process. In the current study, we used a dynamical systems model featuring neural oscillation and Hebbian plasticity to simulate infants' perceptual learning of culture-specific musical rhythms. First, we demonstrate that oscillatory activity in an untrained network reflects the rhythmic structure of either a Western or a Balkan training rhythm in a veridical fashion. Next, during a period of unsupervised learning, we show that the network learns the rhythmic structure of either a Western or a Balkan training rhythm through the self-organization of network connections. Finally, we demonstrate that the learned connections affect the networks' response to violations to the metrical structure of native and nonnative rhythms, a pattern of findings that mirrors the behavioral data on infants' perceptual narrowing to musical rhythms.
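As a toy illustration of the modeling ingredients named above (oscillators plus Hebbian learning), the sketch below drives a small bank of phase oscillators with a periodic rhythm and lets a Hebbian rule strengthen the coupling of oscillators that resonate with it. It is a simplified stand-in under stated assumptions, not the gradient-frequency neural network used in the study.

```python
# Toy sketch (not the authors' model): phase oscillators driven by a periodic
# rhythm, with a Hebbian rule that strengthens the stimulus coupling of
# oscillators whose output co-varies with the input.
import numpy as np

dt, duration = 0.001, 20.0                 # seconds
steps = int(duration / dt)
freqs = np.array([1.0, 1.5, 2.0, 3.0])     # oscillator natural frequencies (Hz)
stim_freq = 2.0                            # training rhythm rate (Hz)

phases = np.zeros_like(freqs)
coupling = np.full_like(freqs, 0.1)        # stimulus-to-oscillator weights
learn_rate = 0.05

for t in range(steps):
    stim = np.sin(2 * np.pi * stim_freq * t * dt)
    # Phase dynamics: intrinsic frequency plus forcing by the stimulus.
    phases += dt * (2 * np.pi * freqs + coupling * stim * np.cos(phases))
    # Hebbian update: weights grow when oscillator output co-varies with input.
    coupling += dt * learn_rate * stim * np.sin(phases)
    coupling = np.clip(coupling, 0.0, 1.0)

# After training, oscillators near the stimulus rate should tend to accumulate
# larger coupling, a crude analogue of "tuning" to the trained rhythm.
print(dict(zip(freqs, np.round(coupling, 3))))
```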
Affiliation(s)
- Parker Tichko
- Developmental Division, Department of Psychological Sciences, College of Liberal Arts and Sciences, University of Connecticut, Storrs, Connecticut
- Edward W Large
- Perception, Action, Cognition (PAC) Division, Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut; Center for the Ecological Study of Perception & Action (CESPA), Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut; Department of Physics, University of Connecticut, Storrs, Connecticut
19
Gergely A, Petró E, Oláh K, Topál J. Auditory-Visual Matching of Conspecifics and Non-Conspecifics by Dogs and Human Infants. Animals (Basel) 2019; 9:ani9010017. [PMID: 30621092] [PMCID: PMC6357027] [DOI: 10.3390/ani9010017]
Abstract
Simple Summary: Comparative investigations on infants' and dogs' social and communicative skills revealed striking similarity, which can be attributed to convergent evolutionary and domestication processes. Using a suitable experimental method that allows systematic and direct comparisons of dogs and humans is essential. In the current study, we used non-invasive eye-tracking technology in order to investigate looking behaviour of dogs and human infants in an auditory–visual matching task. We found a similar gazing pattern in the two species when they were presented with pictures and vocalisations of a dog and a female human, that is, both dogs and infants looked longer at the dog portrait during the dog's bark, while matching human speech with the human face was less obvious. Our results suggested different mechanisms underlying this analogous behaviour and highlighted the importance of future investigations into cross-modal cognition in dogs and humans.
Abstract: We tested whether dogs and 14–16-month-old infants are able to integrate intersensory information when presented with conspecific and heterospecific faces and vocalisations. The looking behaviour of dogs and infants was recorded with a non-invasive eye-tracking technique while they were concurrently presented with a dog and a female human portrait accompanied with acoustic stimuli of female human speech and a dog's bark. Dogs showed evidence of both con- and heterospecific intermodal matching, while infants' looking preferences indicated effective auditory–visual matching only when presented with the audio and visual stimuli of the non-conspecifics. The results of the present study provided further evidence that domestic dogs and human infants have similar socio-cognitive skills and highlighted the importance of comparative examinations on intermodal perception.
Affiliation(s)
- Anna Gergely
- Institute of Cognitive Neuroscience and Psychology, Hungarian Academy of Sciences, 1117 Budapest, Hungary.
- Eszter Petró
- Institute of Cognitive Neuroscience and Psychology, Hungarian Academy of Sciences, 1117 Budapest, Hungary.
- Katalin Oláh
- Faculty of Education and Psychology, Eötvös Loránd University, 1064 Budapest, Hungary.
- József Topál
- Institute of Cognitive Neuroscience and Psychology, Hungarian Academy of Sciences, 1117 Budapest, Hungary.
20
Sorcinelli A, Vouloumanos A. Is Visual Perceptual Narrowing an Obligatory Developmental Process? Front Psychol 2018; 9:2326. [PMID: 30532728] [PMCID: PMC6265369] [DOI: 10.3389/fpsyg.2018.02326]
Abstract
Perceptual narrowing, or a diminished perceptual sensitivity to infrequently encountered stimuli, sometimes accompanied by an increased sensitivity to frequently encountered stimuli, has been observed in unimodal speech and visual perception, as well as in multimodal perception, leading to the suggestion that it is a fundamental feature of perceptual development. However, recent findings in unimodal face perception suggest that perceptual abilities are flexible in development. Similarly, in multimodal perception, new paradigms examining temporal dynamics, rather than standard overall looking time, also suggest that perceptual narrowing might not be obligatory. Across two experiments, we assess perceptual narrowing in unimodal visual perception using remote eye-tracking. We compare adults' looking at human faces and monkey faces of different species, and present analyses of standard overall looking time and temporal dynamics. As expected, adults discriminated between different human faces, but, unlike in previous studies, they also discriminated between different monkey faces. Temporal dynamics revealed that adults more readily discriminated human compared to monkey faces, suggesting a processing advantage for conspecifics compared to other animals. Adults' success in discriminating between faces of two unfamiliar monkey species calls into question whether perceptual narrowing is an obligatory developmental process. Humans undoubtedly diminish in their ability to perceive distinctions between infrequently encountered stimuli as compared to frequently encountered stimuli; however, consistent with recent findings, this narrowing should be conceptualized as a refinement and not as a loss of abilities. Perceptual abilities for infrequently encountered stimuli may be detectable, though weaker compared to adults' perception of frequently encountered stimuli. Consistent with several other accounts, we suggest that perceptual development must be more flexible than a perceptual narrowing account posits.
21
Altvater-Mackensen N, Grossmann T. Modality-independent recruitment of inferior frontal cortex during speech processing in human infants. Dev Cogn Neurosci 2018; 34:130-138. [PMID: 30391756] [PMCID: PMC6969291] [DOI: 10.1016/j.dcn.2018.10.002]
Abstract
Despite increasing interest in the development of audiovisual speech perception in infancy, the underlying mechanisms and neural processes are still only poorly understood. In addition to regions in temporal cortex associated with speech processing and multimodal integration, such as superior temporal sulcus, left inferior frontal cortex (IFC) has been suggested to be critically involved in mapping information from different modalities during speech perception. To further illuminate the role of IFC during infant language learning and speech perception, the current study examined the processing of auditory, visual and audiovisual speech in 6-month-old infants using functional near-infrared spectroscopy (fNIRS). Our results revealed that infants recruit speech-sensitive regions in frontal cortex including IFC regardless of whether they processed unimodal or multimodal speech. We argue that IFC may play an important role in associating multimodal speech information during the early steps of language learning.
Affiliation(s)
- Nicole Altvater-Mackensen
- Department of Psychology, Johannes-Gutenberg-University Mainz, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany.
- Tobias Grossmann
- Department of Psychology, University of Virginia, USA; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
22
Birulés J, Bosch L, Brieke R, Pons F, Lewkowicz DJ. Inside bilingualism: Language background modulates selective attention to a talker's mouth. Dev Sci 2018; 22:e12755. [PMID: 30251757] [DOI: 10.1111/desc.12755]
Abstract
Previous findings indicate that bilingual Catalan/Spanish-learning infants attend more to the highly salient audiovisual redundancy cues normally available in a talker's mouth than do monolingual infants. Presumably, greater attention to such cues renders the challenge of learning two languages easier. Spanish and Catalan are, however, rhythmically and phonologically close languages. This raises the possibility that bilinguals only rely on redundant audiovisual cues when their languages are close. To test this possibility, we exposed 15-month-old and 4- to 6-year-old close-language bilinguals (Spanish/Catalan) and distant-language bilinguals (Spanish/"other") to videos of a talker uttering Spanish or Catalan (native) and English (non-native) monologues and recorded eye-gaze to the talker's eyes and mouth. At both ages, the close-language bilinguals attended more to the talker's mouth than the distant-language bilinguals. This indicates that language proximity modulates selective attention to a talker's mouth during early childhood and suggests that reliance on the greater salience of audiovisual speech cues depends on the difficulty of the speech-processing task.
Affiliation(s)
- Joan Birulés
- Department of Cognition, Development and Educational Psychology, Universitat de Barcelona, Barcelona, Spain
- Laura Bosch
- Department of Cognition, Development and Educational Psychology, Universitat de Barcelona, Barcelona, Spain
- Ricarda Brieke
- Department of Cognition, Development and Educational Psychology, Universitat de Barcelona, Barcelona, Spain
- Ferran Pons
- Department of Cognition, Development and Educational Psychology, Universitat de Barcelona, Barcelona, Spain
- David J Lewkowicz
- Department of Communication Sciences and Disorders, Northeastern University, Boston, Massachusetts
23
Tsang T, Ogren M, Peng Y, Nguyen B, Johnson KL, Johnson SP. Infant perception of sex differences in biological motion displays. J Exp Child Psychol 2018; 173:338-350. [PMID: 29807312] [PMCID: PMC5986598] [DOI: 10.1016/j.jecp.2018.04.006]
Abstract
We examined mechanisms underlying infants' ability to categorize human biological motion stimuli from sex-typed walk motions, focusing on how visual attention to dynamic information in point-light displays (PLDs) contributes to infants' social category formation. We tested for categorization of PLDs produced by women and men by habituating infants to a series of female or male walk motions and then recording posthabituation preferences for new PLDs from the familiar or novel category (Experiment 1). We also tested for intrinsic preferences for female or male walk motions (Experiment 2). We found that infant boys were better able to categorize PLDs than were girls and that male PLDs were preferred overall. Neither of these effects was found to change with development across the observed age range (∼4-18 months). We conclude that infants' categorization of walk motions in PLDs is constrained by intrinsic preferences for higher motion speeds and higher spans of motion and, relatedly, by differences in walk motions produced by men and women.
Affiliation(s)
- Tawny Tsang
- University of California, Los Angeles, Los Angeles, CA 90095, USA
- Marissa Ogren
- University of California, Los Angeles, Los Angeles, CA 90095, USA
- Yujia Peng
- University of California, Los Angeles, Los Angeles, CA 90095, USA
- Bryan Nguyen
- University of California, Los Angeles, Los Angeles, CA 90095, USA
- Kerri L Johnson
- University of California, Los Angeles, Los Angeles, CA 90095, USA
- Scott P Johnson
- University of California, Los Angeles, Los Angeles, CA 90095, USA.
24
Xiao NG, Mukaida M, Quinn PC, Pascalis O, Lee K, Itakura S. Narrowing in face and speech perception in infancy: Developmental change in the relations between domains. J Exp Child Psychol 2018; 176:113-127. [PMID: 30149243] [DOI: 10.1016/j.jecp.2018.06.007]
Abstract
Although prior research has established that perceptual narrowing reflects the influence of experience on the development of face and speech processing, it is unclear whether narrowing in the two domains is related. A within-participant design (N = 72) was used to investigate discrimination of own- and other-race faces and native and non-native speech sounds in 3-, 6-, 9-, and 12-month-old infants. For face and speech discrimination, whereas 3-month-olds discriminated own-race faces and native speech sounds as well as other-race faces and non-native speech sounds, older infants discriminated only own-race faces and native speech sounds. Narrowing in face and narrowing in speech were not correlated at 6 months, negatively correlated at 9 months, and positively correlated at 12 months. The findings reveal dynamic developmental changes in the relation between modalities during the first year of life.
Affiliation(s)
- Naiqi G Xiao
- Department of Psychology, Princeton University, Princeton, NJ 08540, USA
- Mai Mukaida
- Department of Psychology, Graduate School of Letters, Kyoto University, Kyoto 606-8501, Japan
- Paul C Quinn
- Department of Psychological and Brain Sciences, University of Delaware, Newark, DE 19716, USA
- Olivier Pascalis
- Laboratoire de Psychologie et NeuroCognition-Université Grenoble Alpes, Centre National de la Recherche Scientifique, 38058 Grenoble, France
- Kang Lee
- Dr. Eric Jackman Institute of Child Study, University of Toronto, Toronto, Ontario M5R 2X2, Canada
- Shoji Itakura
- Department of Psychology, Graduate School of Letters, Kyoto University, Kyoto 606-8501, Japan.
25
Reynolds GD, Roth KC. The Development of Attentional Biases for Faces in Infancy: A Developmental Systems Perspective. Front Psychol 2018; 9:222. [PMID: 29541043] [PMCID: PMC5835799] [DOI: 10.3389/fpsyg.2018.00222]
Abstract
We present an integrative review of research and theory on major factors involved in the early development of attentional biases to faces. Research utilizing behavioral, eye-tracking, and neuroscience measures with infant participants as well as comparative research with animal subjects are reviewed. We begin with coverage of research demonstrating the presence of an attentional bias for faces shortly after birth, such as newborn infants' visual preference for face-like over non-face stimuli. The role of experience and the process of perceptual narrowing in face processing are examined as infants begin to demonstrate enhanced behavioral and neural responsiveness to mother over stranger, female over male, own- over other-race, and native over non-native faces. Next, we cover research on developmental change in infants' neural responsiveness to faces in multimodal contexts, such as audiovisual speech. We also explore the potential influence of arousal and attention on early perceptual preferences for faces. Lastly, the potential influence of the development of attention systems in the brain on social-cognitive processing is discussed. In conclusion, we interpret the findings under the framework of Developmental Systems Theory, emphasizing the combined and distributed influence of several factors, both internal (e.g., arousal, neural development) and external (e.g., early social experience) to the developing child, in the emergence of attentional biases that lead to enhanced responsiveness and processing of faces commonly encountered in the native environment.
Affiliation(s)
- Greg D. Reynolds
- Developmental Cognitive Neuroscience Laboratory, Department of Psychology, University of Tennessee, Knoxville, TN, United States

26
Loucks J, Sommerville J. Developmental Change in Action Perception: Is Motor Experience the Cause? Infancy 2018. DOI: 10.1111/infa.12231. Citations in RCA: 4; Impact Index Per Article: 0.6.

27
Van Ackeren MJ, Barbero FM, Mattioni S, Bottini R, Collignon O. Neuronal populations in the occipital cortex of the blind synchronize to the temporal dynamics of speech. eLife 2018;7:e31640. PMID: 29338838; PMCID: PMC5790372; DOI: 10.7554/elife.31640. Citations in RCA: 24; Impact Index Per Article: 3.4.
Abstract
The occipital cortex of early blind individuals (EB) activates during speech processing, challenging the notion of a hard-wired neurobiology of language. But, at what stage of speech processing do occipital regions participate in EB? Here we demonstrate that parieto-occipital regions in EB enhance their synchronization to acoustic fluctuations in human speech in the theta-range (corresponding to syllabic rate), irrespective of speech intelligibility. Crucially, enhanced synchronization to the intelligibility of speech was selectively observed in primary visual cortex in EB, suggesting that this region is at the interface between speech perception and comprehension. Moreover, EB showed overall enhanced functional connectivity between temporal and occipital cortices that are sensitive to speech intelligibility and altered directionality when compared to the sighted group. These findings suggest that the occipital cortex of the blind adopts an architecture that allows the tracking of speech material, and therefore does not fully abstract from the reorganized sensory inputs it receives.
Affiliation(s)
- Francesca M Barbero
- Institute of Research in Psychology, University of Louvain, Louvain, Belgium
- Institute of Neuroscience, University of Louvain, Louvain, Belgium
- Roberto Bottini
- Center for Mind/Brain Studies, University of Trento, Trento, Italy
- Olivier Collignon
- Center for Mind/Brain Studies, University of Trento, Trento, Italy
- Institute of Research in Psychology, University of Louvain, Louvain, Belgium
- Institute of Neuroscience, University of Louvain, Louvain, Belgium

28
Safar K, Kusec A, Moulson MC. Face Experience and the Attentional Bias for Fearful Expressions in 6- and 9-Month-Old Infants. Front Psychol 2017;8:1575. PMID: 28979221; PMCID: PMC5611515; DOI: 10.3389/fpsyg.2017.01575. Citations in RCA: 7; Impact Index Per Article: 0.9.
Abstract
Infants demonstrate an attentional bias toward fearful facial expressions that emerges in the first year of life. The current study investigated whether this attentional bias is influenced by experience with particular face types. Six-month-old (n = 33) and 9-month-old (n = 31) Caucasian infants' spontaneous preference for fearful facial expressions when expressed by own-race (Caucasian) or other-race (East Asian) faces was examined. Six-month-old infants showed a preference for fearful expressions when expressed by own-race faces, but not when expressed by other-race faces. Nine-month-old infants showed a preference for fearful expressions when expressed by both own-race faces and other-race faces. These results suggest that how infants deploy their attention to different emotional expressions is shaped by experience: Attentional biases might initially be restricted to faces with which infants have the most experience, and later be extended to faces with which they have less experience.
Affiliation(s)
- Kristina Safar
- Diagnostic Imaging, Hospital for Sick Children, Toronto, ON, Canada; Neurosciences and Mental Health Program, Research Institute, Hospital for Sick Children, Toronto, ON, Canada
- Andrea Kusec
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom

29
Rohlf S, Habets B, von Frieling M, Röder B. Infants are superior in implicit crossmodal learning and use other learning mechanisms than adults. eLife 2017;6:e28166. PMID: 28949291; PMCID: PMC5662286; DOI: 10.7554/elife.28166. Citations in RCA: 10; Impact Index Per Article: 1.3.
Abstract
During development, internal models of the sensory world must be acquired and then continuously adapted. We used event-related potentials (ERPs) to test the hypothesis that infants extract crossmodal statistics implicitly, whereas adults learn them only when they are task relevant. Participants were passively exposed to frequent standard audio-visual combinations (A1V1, A2V2, p=0.35 each), rare recombinations of these standard stimuli (A1V2, A2V1, p=0.10 each), and a rare audio-visual deviant with infrequent auditory and visual elements (A3V3, p=0.10). Although both six-month-old infants and adults differentiated rare deviants from standards at early neural processing stages, only infants were sensitive to crossmodal statistics, as indicated by a late ERP difference between standard and recombined stimuli. A second experiment revealed that adults differentiated recombined from standard combinations when the crossmodal combinations were task relevant. These results demonstrate a heightened sensitivity to crossmodal statistics in infants and a change in learning mode from infancy to adulthood.
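To make the stimulus statistics above concrete, the following is a minimal, hypothetical Python sketch (not the authors' code). The pair labels and probabilities restate the design reported in the abstract; the function name, seed, and trial count are illustrative assumptions only.

```python
import random

# Standard, recombined, and deviant audio-visual pairs with the probabilities
# reported in the abstract (they sum to 1.0).
STIMULI = {
    ("A1", "V1"): 0.35,  # standard pair
    ("A2", "V2"): 0.35,  # standard pair
    ("A1", "V2"): 0.10,  # recombined pair (familiar elements, novel pairing)
    ("A2", "V1"): 0.10,  # recombined pair
    ("A3", "V3"): 0.10,  # deviant pair (novel auditory and visual elements)
}

def make_trial_sequence(n_trials, seed=0):
    """Draw an i.i.d. sequence of audio-visual pairs with the stated probabilities."""
    rng = random.Random(seed)
    pairs = list(STIMULI)
    weights = [STIMULI[p] for p in pairs]
    return rng.choices(pairs, weights=weights, k=n_trials)

if __name__ == "__main__":
    # Usage example: check that observed pair frequencies approximate the design.
    seq = make_trial_sequence(1000)
    for pair, p in STIMULI.items():
        observed = seq.count(pair) / len(seq)
        print(pair, f"expected={p:.2f}", f"observed={observed:.3f}")
```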
Affiliation(s)
- Sophie Rohlf
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Boukje Habets
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Biological Psychology and Cognitive Neuroscience, University of Bielefeld, Bielefeld, Germany
- Marco von Frieling
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany
- Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg, Hamburg, Germany

30
Minar NJ, Lewkowicz DJ. Overcoming the other-race effect in infancy with multisensory redundancy: 10-12-month-olds discriminate dynamic other-race faces producing speech. Dev Sci 2017;21:e12604. PMID: 28944541; DOI: 10.1111/desc.12604. Citations in RCA: 14; Impact Index Per Article: 1.8.
Abstract
We tested 4-6- and 10-12-month-old infants to investigate whether the often-reported decline in infant sensitivity to other-race faces may reflect responsiveness to static or dynamic/silent faces rather than a general process of perceptual narrowing. Across three experiments, we tested discrimination of either dynamic own-race or other-race faces which were either accompanied by a speech syllable, no sound, or a non-speech sound. Results indicated that 4-6- and 10-12-month-old infants discriminated own-race as well as other-race faces accompanied by a speech syllable, that only the 10-12-month-olds discriminated silent own-race faces, and that 4-6-month-old infants discriminated own-race and other-race faces accompanied by a non-speech sound but that 10-12-month-old infants only discriminated own-race faces accompanied by a non-speech sound. Overall, the results suggest that the ORE reported to date reflects infant responsiveness to static or dynamic/silent faces rather than a general process of perceptual narrowing.
Affiliation(s)
- Nicholas J Minar
- Institute for the Study of Child Development, Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- David J Lewkowicz
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA

31
Chabrolles L, Coureaud G, Boyer N, Mathevon N, Beauchaud M. Cross-sensory modulation in a future top predator, the young Nile crocodile. R Soc Open Sci 2017;4:170386. PMID: 28680686; PMCID: PMC5493928; DOI: 10.1098/rsos.170386. Citations in RCA: 0; Impact Index Per Article: 0.
Abstract
Animals routinely receive information through different sensory channels, and inputs from one modality may modulate the perception of, and behavioural reaction to, others. In spite of their potential adaptive value, the behavioural correlates of this cross-sensory modulation have been poorly investigated. Because of their predatory lifestyle, crocodilians must deal with decisional conflicts arising from concurrent stimuli. By testing young Crocodylus niloticus with sounds in the absence or presence of chemical stimuli, we show that (i) the prandial (feeding) state modulates the animal's responsiveness to a congruent (i.e., food-related) olfactory stimulus, (ii) the prandial state alters responsiveness to an incongruent (food-independent) sound, and (iii) fasted, but not sated, crocodiles display selective attention to socially relevant sounds over noise in the presence of food odour. Cross-sensory modulation thus appears functional in young Nile crocodiles. It may contribute to decision making in the wild, for example when juveniles interact acoustically while foraging.
Affiliation(s)
- Laura Chabrolles
- Université de Lyon/Saint-Etienne, Equipe Neuro-Ethologie Sensorielle, ENES/Neuro-PSI CNRS UMR 9197, Saint-Etienne, France
- Gérard Coureaud
- Centre de Recherche en Neurosciences de Lyon, INSERM U1028/CNRS UMR 5292/Université Claude Bernard Lyon 1, Lyon, France
- Nicolas Boyer
- Université de Lyon/Saint-Etienne, Equipe Neuro-Ethologie Sensorielle, ENES/Neuro-PSI CNRS UMR 9197, Saint-Etienne, France
- Nicolas Mathevon
- Université de Lyon/Saint-Etienne, Equipe Neuro-Ethologie Sensorielle, ENES/Neuro-PSI CNRS UMR 9197, Saint-Etienne, France
- Marilyn Beauchaud
- Université de Lyon/Saint-Etienne, Equipe Neuro-Ethologie Sensorielle, ENES/Neuro-PSI CNRS UMR 9197, Saint-Etienne, France

32
Danielson DK, Bruderer AG, Kandhadai P, Vatikiotis-Bateson E, Werker JF. The organization and reorganization of audiovisual speech perception in the first year of life. Cogn Dev 2017;42:37-48. PMID: 28970650; PMCID: PMC5621752; DOI: 10.1016/j.cogdev.2017.02.004. Citations in RCA: 20; Impact Index Per Article: 2.5.
Abstract
The period between six and 12 months is a sensitive period for language learning during which infants undergo auditory perceptual attunement, and recent results indicate that this sensitive period may exist across sensory modalities. We tested infants at three stages of perceptual attunement (six, nine, and 11 months) to determine (1) whether they were sensitive to the congruence between heard and seen speech stimuli in an unfamiliar language, and (2) whether familiarization with congruent audiovisual speech could boost subsequent non-native auditory discrimination. Infants at six and nine months, but not at 11 months, detected audiovisual congruence of non-native syllables. Familiarization to incongruent, but not congruent, audiovisual speech changed auditory discrimination at test for six-month-olds but not for nine- or 11-month-olds. These results advance the proposal that speech perception is audiovisual from early in ontogeny, and that the sensitive period for audiovisual speech perception may last somewhat longer than that for auditory perception alone.
Affiliation(s)
- D. Kyle Danielson
- Department of Psychology, The University of British Columbia, 2136 West Mall, Vancouver BC V6T 1Z4, Canada
- Alison G. Bruderer
- School of Audiology and Speech Sciences, The University of British Columbia, 2177 Wesbrook Mall, Vancouver BC V6T 1Z3, Canada
- Padmapriya Kandhadai
- Department of Psychology, The University of British Columbia, 2136 West Mall, Vancouver BC V6T 1Z4, Canada
- Eric Vatikiotis-Bateson
- Department of Linguistics, The University of British Columbia, 2613 West Mall, Vancouver BC V6T 1Z4, Canada
- Janet F. Werker
- Department of Psychology, The University of British Columbia, 2136 West Mall, Vancouver BC V6T 1Z4, Canada

33
Hannon EE, Schachner A, Nave-Blodgett JE. Babies know bad dancing when they see it: Older but not younger infants discriminate between synchronous and asynchronous audiovisual musical displays. J Exp Child Psychol 2017;159:159-174. PMID: 28288412; DOI: 10.1016/j.jecp.2017.01.006. Citations in RCA: 12; Impact Index Per Article: 1.5.
Abstract
Movement to music is a universal human behavior, yet little is known about how observers perceive audiovisual synchrony in complex musical displays such as a person dancing to music, particularly during infancy and childhood. In the current study, we investigated how perception of musical audiovisual synchrony develops over the first year of life. We habituated infants to a video of a person dancing to music and subsequently presented videos in which the visual track was matched (synchronous) or mismatched (asynchronous) with the audio track. In a visual-only control condition, we presented the same visual stimuli with no sound. In Experiment 1, we found that older infants (8-12 months) exhibited a novelty preference for the mismatched movie when both auditory information and visual information were available and showed no preference when only visual information was available. By contrast, younger infants (5-8 months) in Experiment 2 did not discriminate matching stimuli from mismatching stimuli. This suggests that the ability to perceive musical audiovisual synchrony may develop during the second half of the first year of infancy.
Affiliation(s)
- Erin E Hannon
- Department of Psychology, University of Nevada, Las Vegas, Las Vegas, NV 89154, USA
- Adena Schachner
- Department of Psychology, University of California, San Diego, La Jolla, CA 92093, USA

34
Pickron CB, Fava E, Scott LS. Follow My Gaze: Face Race and Sex Influence Gaze-Cued Attention in Infancy. Infancy 2017;22:626-644. DOI: 10.1111/infa.12180. Citations in RCA: 15; Impact Index Per Article: 1.9.
Affiliation(s)
- Eswen Fava
- Psychological and Brain Sciences, University of Massachusetts Amherst

35
Emberson LL. How Does Experience Shape Early Development? Considering the Role of Top-Down Mechanisms. Adv Child Dev Behav 2017;52:1-41. PMID: 28215282; DOI: 10.1016/bs.acdb.2016.10.001. Citations in RCA: 12; Impact Index Per Article: 1.5.
Abstract
Perceptual development requires infants to adapt their perceptual systems to the structures and statistical information of their environment. In this way, perceptual development is not only important in its own right but also a case study in behavioral and neural plasticity: powerful mechanisms that have the potential to support developmental change in numerous domains starting early in life. While it is widely assumed that perceptual development is a bottom-up process, where simple exposure to sensory input modifies perceptual representations starting early in the perceptual system, there are several critical phenomena in this literature that cannot be explained with an exclusively bottom-up model. This chapter proposes a complementary mechanism where nascent top-down information, feeding back from higher-level regions of the brain, helps to guide perceptual development. Supporting this theoretical proposal, recent behavioral and neuroimaging studies have established that young infants already have the capacity to engage in top-down modulation of their perceptual systems.
Affiliation(s)
- L L Emberson
- Princeton University, Princeton, NJ, United States

36
Abstract
For both humans and other animals, the ability to combine information obtained through different senses is fundamental to the perception of the environment. It is well established that humans form systematic cross-modal correspondences between stimulus features that can facilitate the accurate combination of sensory percepts. However, the evolutionary origins of the perceptual and cognitive mechanisms involved in these cross-modal associations remain surprisingly underexplored. In this review we outline recent comparative studies investigating how non-human mammals naturally combine information encoded in different sensory modalities during communication. The results of these behavioural studies demonstrate that various mammalian species are able to combine signals from different sensory channels when they are perceived to share the same basic features, either because they can be redundantly sensed and/or because they are processed in the same way. Moreover, evidence that a wide range of mammals form complex cognitive representations about signallers, both within and across species, suggests that animals also learn to associate different sensory features which regularly co-occur. Further research is now necessary to determine how multisensory representations are formed in individual animals, including the relative importance of low level feature-related correspondences. Such investigations will generate important insights into how animals perceive and categorise their environment, as well as provide an essential basis for understanding the evolution of multisensory perception in humans.

37
Valentine T, Lewis MB, Hills PJ. Face-Space: A Unifying Concept in Face Recognition Research. Q J Exp Psychol (Hove) 2016;69:1996-2019. DOI: 10.1080/17470218.2014.990392. Citations in RCA: 74; Impact Index Per Article: 8.2.
Abstract
The concept of a multidimensional psychological space, in which faces can be represented according to their perceived properties, is fundamental to the modern theorist in face processing. Yet the idea was not clearly expressed until 1991. The background that led to the development of face-space is explained, and its continuing influence on theories of face processing is discussed. Research that has explored the properties of the face-space and sought to understand caricature, including facial adaptation paradigms, is reviewed. Face-space as a theoretical framework for understanding the effect of ethnicity and the development of face recognition is evaluated. Finally, two applications of face-space in the forensic setting are discussed. From initially being presented as a model to explain distinctiveness, inversion, and the effect of ethnicity, face-space has become a central pillar in many aspects of face processing. It is currently being developed to help us understand adaptation effects with faces. While being in principle a simple concept, face-space has shaped, and continues to shape, our understanding of face perception.
Affiliation(s)
- Tim Valentine
- Department of Psychology, Goldsmiths, University of London, London, UK
- Peter J. Hills
- Psychology Research Group, University of Bournemouth, Poole, UK

38
Chattopadhyay D, MacDorman KF. Familiar faces rendered strange: Why inconsistent realism drives characters into the uncanny valley. J Vis 2016;16:7. PMID: 27611063; PMCID: PMC5024669; DOI: 10.1167/16.11.7. Citations in RCA: 27; Impact Index Per Article: 3.0.
Abstract
Computer-modeled characters resembling real people sometimes elicit cold, eerie feelings. This effect, called the uncanny valley, has been attributed to uncertainty about whether the character is human or living or real. Uncertainty, however, explains neither why anthropomorphic characters lie in the uncanny valley nor why they seem characteristically eerie. We propose that realism inconsistency causes anthropomorphic characters to appear unfamiliar, despite their physical similarity to real people, owing to perceptual narrowing. We further propose that their unfamiliar, fake appearance elicits cold, eerie feelings, motivating threat avoidance. In our experiment, 365 participants categorized and rated objects, animals, and humans whose realism was manipulated along consistency-reduced and control transitions. These data were used to quantify a Bayesian model of categorical perception. In hypothesis testing, we found that reducing realism consistency made animals and humans, but not objects, appear less familiar, thereby eliciting cold, eerie feelings. Next, structural equation models elucidated the relation among realism inconsistency (measured objectively in a two-dimensional Morlet wavelet domain inspired by the primary visual cortex), realism, familiarity, eeriness, and warmth. The fact that reducing realism consistency only elicited cold, eerie feelings toward anthropomorphic characters, and only when it lessened familiarity, indicates the role of perceptual narrowing in the uncanny valley.
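For readers unfamiliar with the modeling approach named above, the following is a generic, illustrative Python sketch of Bayesian categorical perception over a single realism feature. It is not the authors' model: the two-category setup, Gaussian likelihoods, parameter values, and function names are assumptions chosen only to show how a posterior over categories is computed; stimuli near the category boundary are the ones where classification is most uncertain.

```python
import math

def gaussian_pdf(x, mean, sd):
    """Density of a normal distribution; used here as a category likelihood."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def posterior_real(realism, real_mean=0.8, real_sd=0.10,
                   anim_mean=0.4, anim_sd=0.15, prior_real=0.5):
    """P(category = 'real' | realism score), by Bayes' rule with two categories."""
    like_real = gaussian_pdf(realism, real_mean, real_sd)
    like_anim = gaussian_pdf(realism, anim_mean, anim_sd)
    evidence = prior_real * like_real + (1.0 - prior_real) * like_anim
    return prior_real * like_real / evidence

if __name__ == "__main__":
    # Posteriors near 0.5 mark the category boundary, where categorization is
    # most uncertain; intermediate-realism stimuli fall in this region.
    for r in (0.3, 0.5, 0.6, 0.7, 0.9):
        print(f"realism={r:.1f}  P(real)={posterior_real(r):.2f}")
```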

39
Murray MM, Lewkowicz DJ, Amedi A, Wallace MT. Multisensory Processes: A Balancing Act across the Lifespan. Trends Neurosci 2016;39:567-579. PMID: 27282408; PMCID: PMC4967384; DOI: 10.1016/j.tins.2016.05.003. Citations in RCA: 137; Impact Index Per Article: 15.2.
Abstract
Multisensory processes are fundamental in scaffolding perception, cognition, learning, and behavior. How and when stimuli from different sensory modalities are integrated rather than treated as separate entities is poorly understood. We review how the relative reliance on stimulus characteristics versus learned associations dynamically shapes multisensory processes. We illustrate the dynamism in multisensory function across two timescales: one long term that operates across the lifespan and one short term that operates during the learning of new multisensory relations. In addition, we highlight the importance of task contingencies. We conclude that these highly dynamic multisensory processes, based on the relative weighting of stimulus characteristics and learned associations, provide both stability and flexibility to brain functions over a wide range of temporal scales.
Affiliation(s)
- Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Clinical Neurosciences and Department of Radiology, University Hospital Centre and University of Lausanne, Lausanne, Switzerland; Electroencephalography Brain Mapping Core, Centre for Biomedical Imaging (CIBM), Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne, Jules Gonin Eye Hospital, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA.
- David J Lewkowicz
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
- Amir Amedi
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada (IMRIC), Hadassah Medical School, Hebrew University of Jerusalem, Jerusalem, Israel; Interdisciplinary and Cognitive Science Program, The Edmond & Lily Safra Center for Brain Sciences (ELSC), Hebrew University of Jerusalem, Jerusalem, Israel
- Mark T Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry, Vanderbilt University, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA.

40
Perszyk DR, Waxman SR. Listening to the calls of the wild: The role of experience in linking language and cognition in young infants. Cognition 2016;153:175-81. PMID: 27209387; PMCID: PMC5134735; DOI: 10.1016/j.cognition.2016.05.004. Citations in RCA: 11; Impact Index Per Article: 1.2.
Abstract
Well before they understand their first words, infants have begun to link language and cognition. This link is initially broad: At 3 months, listening to both human and nonhuman primate vocalizations supports infants' object categorization, a building block of cognition. But by 6 months, the link has narrowed: Only human vocalizations support categorization. What mechanisms underlie this rapid tuning process? Here, we document the crucial role of infants' experience as infants tune this link to cognition. Merely exposing infants to nonhuman primate vocalizations permits them to preserve, rather than sever, the link between these signals and categorization. Exposing infants to backward speech, a signal that fails to support categorization in the first year of life, does not have this advantage. This new evidence illuminates the central role of early experience as infants specify which signals, from an initially broad set, they will continue to link to core cognitive capacities.
Affiliation(s)
- Danielle R Perszyk
- Department of Psychology, Northwestern University, Evanston, IL 60208, United States
- Sandra R Waxman
- Department of Psychology, Northwestern University, Evanston, IL 60208, United States

41
Orr Y. Interspecies Semiotics and the Specter of Taboo: The Perception and Interpretation of Dogs and Rabies in Bali, Indonesia. Am Anthropol 2016. DOI: 10.1111/aman.12448. Citations in RCA: 3; Impact Index Per Article: 0.3.
Affiliation(s)
- Yancey Orr
- School of Social Science, University of Queensland, Brisbane St. Lucia 4012, Australia

42
Filippetti ML, Orioli G, Johnson MH, Farroni T. Newborn Body Perception: Sensitivity to Spatial Congruency. Infancy 2015;20:455-465. PMID: 26709351; PMCID: PMC4682457; DOI: 10.1111/infa.12083. Citations in RCA: 33; Impact Index Per Article: 3.3.
Abstract
Studies on adults have demonstrated that the perception of our own body can be manipulated by varying both temporal and spatial properties of multisensory information. While human newborns are capable of detecting the temporal synchrony of visuo-tactile body-related cues, it remains unknown whether they also utilise spatial information for body perception. Twenty newborns were presented with a video of an infant's face touched with a paintbrush, while their own face was touched either in a spatially congruent or an incongruent location. We found that newborns show a visual preference for spatially congruent synchronous events, supporting the view that newborns have a rudimentary sense of their own body.
Affiliation(s)
- Giulia Orioli
- Dipartimento di Psicologia dello Sviluppo e della Socializzazione, Università degli Studi di Padova
- Mark H Johnson
- Centre for Brain and Cognitive Development, Birkbeck College, University of London
- Teresa Farroni
- Dipartimento di Psicologia dello Sviluppo e della Socializzazione, Università degli Studi di Padova

43
Shaw K, Baart M, Depowski N, Bortfeld H. Infants' preference for native audiovisual speech dissociated from congruency preference. PLoS One 2015;10:e0126059. PMID: 25927529; PMCID: PMC4415951; DOI: 10.1371/journal.pone.0126059. Citations in RCA: 8; Impact Index Per Article: 0.8.
Abstract
Although infant speech perception is often studied in isolated modalities, infants' experience with speech is largely multimodal (i.e., speech sounds they hear are accompanied by articulating faces). Across two experiments, we tested infants' sensitivity to the relationship between the auditory and visual components of audiovisual speech in their native (English) and non-native (Spanish) language. In Experiment 1, infants' looking times were measured during a preferential looking task in which they saw two simultaneous visual speech streams articulating a story, one in English and the other in Spanish, while they heard either the English or the Spanish version of the story. In Experiment 2, looking times from another group of infants were measured as they watched single displays of congruent and incongruent combinations of English and Spanish audio and visual speech streams. Findings demonstrated an age-related increase in looking towards the native relative to the non-native visual speech stream when accompanied by the corresponding (native) auditory speech. This increase in native language preference did not appear to be driven by a difference in preference for native vs. non-native audiovisual congruence, as we observed no difference in looking times at the audiovisual streams in Experiment 2.
Affiliation(s)
- Kathleen Shaw
- Department of Psychology, University of Connecticut, Storrs, CT, United States of America
- Martijn Baart
- BCBL, Basque Center on Cognition, Brain and Language, Donostia-San Sebastián, Spain
- Nicole Depowski
- Department of Psychology, University of Connecticut, Storrs, CT, United States of America
- Heather Bortfeld
- Department of Psychology, University of Connecticut, Storrs, CT, United States of America
- Haskins Laboratories, New Haven, CT, United States of America

44
Pons F, Bosch L, Lewkowicz DJ. Bilingualism modulates infants' selective attention to the mouth of a talking face. Psychol Sci 2015;26:490-8. PMID: 25767208; PMCID: PMC4398611; DOI: 10.1177/0956797614568320. Citations in RCA: 111; Impact Index Per Article: 11.1.
Abstract
Infants growing up in bilingual environments succeed at learning two languages. What adaptive processes enable them to master the more complex nature of bilingual input? One possibility is that bilingual infants take greater advantage of the redundancy of the audiovisual speech that they usually experience during social interactions. Thus, we investigated whether bilingual infants' need to keep languages apart increases their attention to the mouth as a source of redundant and reliable speech cues. We measured selective attention to talking faces in 4-, 8-, and 12-month-old Catalan and Spanish monolingual and bilingual infants. Monolinguals looked more at the eyes than the mouth at 4 months and more at the mouth than the eyes at 8 months in response to both native and nonnative speech, but they looked more at the mouth than the eyes at 12 months only in response to nonnative speech. In contrast, bilinguals looked equally at the eyes and mouth at 4 months, more at the mouth than the eyes at 8 months, and more at the mouth than the eyes at 12 months, and these patterns of responses were found for both native and nonnative speech at all ages. Thus, to support their dual-language acquisition processes, bilingual infants exploit the greater perceptual salience of redundant audiovisual speech cues at an earlier age and for a longer time than monolingual infants.
Affiliation(s)
- Ferran Pons
- Department of Basic Psychology, Universitat de Barcelona Institute for Brain, Cognition and Behavior (IR3C), Barcelona, Spain
- Laura Bosch
- Department of Basic Psychology, Universitat de Barcelona Institute for Brain, Cognition and Behavior (IR3C), Barcelona, Spain
- David J Lewkowicz
- Department of Communication Sciences and Disorders, Northeastern University

45
Lewkowicz DJ, Minar NJ, Tift AH, Brandon M. Perception of the multisensory coherence of fluent audiovisual speech in infancy: its emergence and the role of experience. J Exp Child Psychol 2015;130:147-62. PMID: 25462038; PMCID: PMC4258456; DOI: 10.1016/j.jecp.2014.10.006. Citations in RCA: 37; Impact Index Per Article: 3.7.
Abstract
To investigate the developmental emergence of the perception of the multisensory coherence of native and non-native audiovisual fluent speech, we tested 4-, 8- to 10-, and 12- to 14-month-old English-learning infants. Infants first viewed two identical female faces articulating two different monologues in silence and then in the presence of an audible monologue that matched the visible articulations of one of the faces. Neither the 4-month-old nor 8- to 10-month-old infants exhibited audiovisual matching in that they did not look longer at the matching monologue. In contrast, the 12- to 14-month-old infants exhibited matching and, consistent with the emergence of perceptual expertise for the native language, perceived the multisensory coherence of native-language monologues earlier in the test trials than that of non-native language monologues. Moreover, the matching of native audible and visible speech streams observed in the 12- to 14-month-olds did not depend on audiovisual synchrony, whereas the matching of non-native audible and visible speech streams did depend on synchrony. Overall, the current findings indicate that the perception of the multisensory coherence of fluent audiovisual speech emerges late in infancy, that audiovisual synchrony cues are more important in the perception of the multisensory coherence of non-native speech than that of native audiovisual speech, and that the emergence of this skill most likely is affected by perceptual narrowing.
Affiliation(s)
- David J Lewkowicz
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA 02115, USA
- Nicholas J Minar
- Department of Psychology, Florida Atlantic University, Boca Raton, FL 33431, USA
- Amy H Tift
- Department of Psychology, Florida Atlantic University, Boca Raton, FL 33431, USA
- Melissa Brandon
- Department of Psychology, Florida Atlantic University, Boca Raton, FL 33431, USA

46
Hadley H, Rost GC, Fava E, Scott LS. A mechanistic approach to cross-domain perceptual narrowing in the first year of life. Brain Sci 2014;4:613-34. PMID: 25521763; PMCID: PMC4279145; DOI: 10.3390/brainsci4040613. Citations in RCA: 19; Impact Index Per Article: 1.7.
Abstract
Language and face processing develop in similar ways during the first year of life. Early in the first year of life, infants demonstrate broad abilities for discriminating among faces and speech. These discrimination abilities then become tuned to frequently experienced groups of people or languages. This process of perceptual development occurs between approximately 6 and 12 months of age and is largely shaped by experience. However, the mechanisms underlying perceptual development during this time, and whether they are shared across domains, remain largely unknown. Here, we highlight research findings across domains and propose a top-down/bottom-up processing approach as a guide for future research. It is hypothesized that perceptual narrowing and tuning in development is the result of a shift from primarily bottom-up processing to a combination of bottom-up and top-down influences. In addition, we propose word learning as an important top-down factor that shapes tuning in both the speech and face domains, leading to similar observed developmental trajectories across modalities. Importantly, we suggest that perceptual narrowing/tuning is the result of multiple interacting factors and not explained by the development of a single mechanism.
Affiliation(s)
- Hillary Hadley
- Department of Psychological and Brain Sciences, University of Massachusetts, 413 Tobin Hall/135 Hicks Way, Amherst, MA 01003, USA
- Gwyneth C Rost
- Department of Communication Disorders, University of Massachusetts, Amherst, MA 01003, USA
- Eswen Fava
- Department of Psychological and Brain Sciences, University of Massachusetts, 413 Tobin Hall/135 Hicks Way, Amherst, MA 01003, USA
- Lisa S Scott
- Department of Psychological and Brain Sciences, University of Massachusetts, 413 Tobin Hall/135 Hicks Way, Amherst, MA 01003, USA

47
A deficit in face-voice integration in developing vervet monkeys exposed to ethanol during gestation. PLoS One 2014;9:e114100. PMID: 25470725; PMCID: PMC4254919; DOI: 10.1371/journal.pone.0114100. Citations in RCA: 1; Impact Index Per Article: 0.1.
Abstract
Children with fetal alcohol spectrum disorders display behavioural and intellectual impairments that strongly implicate dysfunction within the frontal cortex. Deficits in social behaviour and cognition are amongst the most pervasive outcomes of prenatal ethanol exposure. Our naturalistic vervet monkey model of fetal alcohol exposure (FAE) provides an unparalleled opportunity to study the neurobehavioral outcomes of prenatal ethanol exposure in a controlled experimental setting. Recent work has revealed a significant reduction of the neuronal population in the frontal lobes of these monkeys. We used an intersensory matching procedure to investigate audiovisual perception of socially relevant stimuli in young FAE vervet monkeys. Here we show a domain-specific deficit in audiovisual integration of socially relevant stimuli. When FAE monkeys were shown a pair of side-by-side videos of a monkey concurrently presenting two different calls along with a single audio track matching the content of one of the calls, they were not able to match the correct video to the single audio track. This was manifest by their average looking time being equally spent towards both the matching and non-matching videos. However, a group of normally developing monkeys exhibited a significant preference for the non-matching video. This inability to integrate and thereby discriminate audiovisual stimuli was confined to the integration of faces and voices as revealed by the monkeys' ability to match a dynamic face to a complex tone or a black-and-white checkerboard to a pure tone, presumably based on duration and/or onset-offset synchrony. Together, these results suggest that prenatal ethanol exposure negatively affects a specific domain of audiovisual integration. This deficit is confined to the integration of information that is presented by the face and the voice and does not affect more elementary aspects of sensory integration.

48
Vouloumanos A, Waxman SR. Listen up! Speech is for thinking during infancy. Trends Cogn Sci 2014;18:642-6. PMID: 25457376; DOI: 10.1016/j.tics.2014.10.001. Citations in RCA: 43; Impact Index Per Article: 3.9.
Abstract
Infants' exposure to human speech within the first year promotes more than speech processing and language acquisition: new developmental evidence suggests that listening to speech shapes infants' fundamental cognitive and social capacities. Speech streamlines infants' learning, promotes the formation of object categories, signals communicative partners, highlights information in social interactions, and offers insight into the minds of others. These results, which challenge the claim that for infants, speech offers no special cognitive advantages, suggest a new synthesis. Far earlier than researchers had imagined, an intimate and powerful connection between human speech and cognition guides infant development, advancing infants' acquisition of fundamental psychological processes.
Affiliation(s)
- Athena Vouloumanos
- Department of Psychology, New York University, 6 Washington Place, New York, NY 10003-6603, USA
- Sandra R Waxman
- Department of Psychology, Institute for Policy Research, Northwestern University, 2029 Sheridan Road, Evanston, IL 60208-2710, USA

49
Watson TL, Robbins RA, Best CT. Infant perceptual development for faces and spoken words: an integrated approach. Dev Psychobiol 2014;56:1454-81. PMID: 25132626; PMCID: PMC4231232; DOI: 10.1002/dev.21243. Citations in RCA: 20; Impact Index Per Article: 1.8.
Abstract
There are obvious differences between recognizing faces and recognizing spoken words or phonemes that might suggest development of each capability requires different skills. Recognizing faces and perceiving spoken language, however, are in key senses extremely similar endeavors. Both perceptual processes are based on richly variable, yet highly structured input from which the perceiver needs to extract categorically meaningful information. This similarity could be reflected in the perceptual narrowing that occurs within the first year of life in both domains. We take the position that the perceptual and neurocognitive processes by which face and speech recognition develop are based on a set of common principles. One common principle is the importance of systematic variability in the input as a source of information rather than noise. Experience of this variability leads to perceptual tuning to the critical properties that define individual faces or spoken words versus their membership in larger groupings of people and their language communities. We argue that parallels can be drawn directly between the principles responsible for the development of face and spoken language perception.
Affiliation(s)
- Tamara L Watson
- School of Social Science and Psychology, University of Western Sydney, New South Wales, Australia
- MARCS Institute, University of Western Sydney, New South Wales, Australia
- Rachel A Robbins
- School of Social Science and Psychology, University of Western Sydney, New South Wales, Australia
- Catherine T Best
- MARCS Institute, University of Western Sydney, New South Wales, Australia
- School of Humanities and Communication Arts, University of Western Sydney, New South Wales, Australia

50
Guellaï B, Streri A, Yeung HH. The development of sensorimotor influences in the audiovisual speech domain: some critical questions. Front Psychol 2014;5:812. PMID: 25147528; PMCID: PMC4123602; DOI: 10.3389/fpsyg.2014.00812. Citations in RCA: 7; Impact Index Per Article: 0.6.
Abstract
Speech researchers have long been interested in how auditory and visual speech signals are integrated, and the recent work has revived interest in the role of speech production with respect to this process. Here, we discuss these issues from a developmental perspective. Because speech perception abilities typically outstrip speech production abilities in infancy and childhood, it is unclear how speech-like movements could influence audiovisual speech perception in development. While work on this question is still in its preliminary stages, there is nevertheless increasing evidence that sensorimotor processes (defined here as any motor or proprioceptive process related to orofacial movements) affect developmental audiovisual speech processing. We suggest three areas on which to focus in future research: (i) the relation between audiovisual speech perception and sensorimotor processes at birth, (ii) the pathways through which sensorimotor processes interact with audiovisual speech processing in infancy, and (iii) developmental change in sensorimotor pathways as speech production emerges in childhood.
Affiliation(s)
- Bahia Guellaï
- Laboratoire Ethologie, Cognition, Développement, Université Paris Ouest Nanterre La Défense, Nanterre, France
- Arlette Streri
- CNRS, Laboratoire Psychologie de la Perception, UMR 8242, Paris, France
- H. Henny Yeung
- CNRS, Laboratoire Psychologie de la Perception, UMR 8242, Paris, France
- Université Paris Descartes, Paris Sorbonne Cité, Paris, France