1. Cirelli LK, Talukder LS, Kragness HE. Infant attention to rhythmic audiovisual synchrony is modulated by stimulus properties. Front Psychol 2024; 15:1393295. PMID: 39027053; PMCID: PMC11256966; DOI: 10.3389/fpsyg.2024.1393295.
Abstract
Musical interactions are a common and multimodal part of an infant's daily experiences. Infants hear their parents sing while watching their lips move and see their older siblings dance along to music playing over the radio. Here, we explore whether 8- to 12-month-old infants associate musical rhythms they hear with synchronous visual displays by tracking their dynamic visual attention to matched and mismatched displays. Visual attention was measured using eye-tracking while infants attended to a screen displaying two videos of a finger tapping at different speeds. These videos were presented side by side while infants listened to an auditory rhythm (high or low pitch) synchronized with one of the two videos. Infants attended more to the low-pitch trials than to the high-pitch trials but did not display a preference for attending to the synchronous hand over the asynchronous hand within trials. Exploratory evidence, however, suggests that tempo, pitch, and rhythmic complexity interactively engage infants' visual attention to a tapping hand, especially when that hand is aligned with the auditory stimulus. For example, when the rhythm was complex and the auditory stimulus was low in pitch, infants attended more to the fast hand when it was aligned with the auditory stream than when it was misaligned. These results suggest that audiovisual integration in rhythmic non-speech contexts is influenced by stimulus properties.
Affiliation(s)
- Laura K. Cirelli
- Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada
- Labeeb S. Talukder
- Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada
- Haley E. Kragness
- Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada
- Psychology Department, Bucknell University, Lewisburg, PA, United States
2. Alemi R, Wolfe J, Neumann S, Manning J, Hanna L, Towler W, Wilson C, Bien A, Miller S, Schafer E, Gemignani J, Koirala N, Gracco VL, Deroche M. Motor Processing in Children With Cochlear Implants as Assessed by Functional Near-Infrared Spectroscopy. Percept Mot Skills 2024; 131:74-105. PMID: 37977135; PMCID: PMC10863375; DOI: 10.1177/00315125231213167.
Abstract
Auditory-motor and visual-motor networks are often coupled in daily activities, such as when listening to music and dancing, but these networks are known to be highly malleable as a function of sensory input. Thus, congenital deafness may modify neural activities within the connections between the motor, auditory, and visual cortices. Here, we investigated whether the cortical responses of children with cochlear implants (CI) to a simple and repetitive motor task would differ from those of children with typical hearing (TH), and we sought to understand whether this response was related to their language development. Participants were 75 school-aged children, including 50 with CI (with varying language abilities) and 25 controls with TH. We used functional near-infrared spectroscopy (fNIRS) to record cortical responses over the whole brain as children squeezed the back triggers of a joystick that either did or did not vibrate with the squeeze. Motor cortex activity was reflected by an increase in oxygenated hemoglobin concentration (HbO) and a decrease in deoxygenated hemoglobin concentration (HbR) in all children, irrespective of their hearing status. Unexpectedly, the visual cortex (supposedly an irrelevant region) was deactivated in this task, particularly for children with CI who had good language skills compared to those with CI who had language delays. The presence or absence of vibrotactile feedback made no difference in cortical activation. These findings support the potential of fNIRS to examine cognitive functions related to language in children with CI.
Affiliation(s)
- Razieh Alemi
- Department of Psychology, Concordia University, Montreal, QC, Canada
- Jace Wolfe
- Oberkotter Foundation, Oklahoma City, OK, USA
- Sara Neumann
- Hearts for Hearing Foundation, Oklahoma City, OK, USA
- Jacy Manning
- Hearts for Hearing Foundation, Oklahoma City, OK, USA
- Lindsay Hanna
- Hearts for Hearing Foundation, Oklahoma City, OK, USA
- Will Towler
- Hearts for Hearing Foundation, Oklahoma City, OK, USA
- Caleb Wilson
- Department of Otolaryngology-Head & Neck Surgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA
- Alexander Bien
- Department of Otolaryngology-Head & Neck Surgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA
- Sharon Miller
- Department of Audiology & Speech-Language Pathology, University of North Texas, Denton, TX, USA
- Erin Schafer
- Department of Audiology & Speech-Language Pathology, University of North Texas, Denton, TX, USA
- Jessica Gemignani
- Department of Developmental and Social Psychology, University of Padua, Padova, Italy
- Mickael Deroche
- Department of Psychology, Concordia University, Montreal, QC, Canada
3. Greenfield MD, Merker B. Coordinated rhythms in animal species, including humans: Entrainment from bushcricket chorusing to the philharmonic orchestra. Neurosci Biobehav Rev 2023; 153:105382. PMID: 37673282; DOI: 10.1016/j.neubiorev.2023.105382.
Abstract
Coordinated group displays featuring precise entrainment of rhythmic behavior between neighbors occur not only in human music, dance and drill, but also in the acoustic or optical signaling of a number of species of arthropods and anurans. In this review we describe the mechanisms of phase resetting and of phase and tempo adjustment that allow the periodic output of signaling individuals to be aligned in synchronized rhythmic group displays. These mechanisms are well described in some of the synchronizing arthropod species, in which conspecific signals reset an individual's endogenous output oscillators in such a way that the joint rhythmic signals are locked in phase. Some of these species are capable of mutually adjusting both the phase and tempo of their rhythmic signaling, thereby achieving what is called perfect synchrony, a capacity otherwise found only in humans. We discuss this disjoint phylogenetic distribution of inter-individual rhythmic entrainment in the context of the functions such entrainment might perform in the various species concerned, and the adaptive circumstances in which it might evolve.
Affiliation(s)
- Michael D Greenfield
- ENES Bioacoustics Research Lab, CRNL, University of Saint-Etienne, CNRS, Inserm, Saint-Etienne, France; Department of Ecology and Evolutionary Biology, University of Kansas, Lawrence, KS 66045, USA
- Bjorn Merker
- Independent Scholar, SE-29194 Kristianstad, Sweden
4. Singh M, Mehr SA. Universality, domain-specificity, and development of psychological responses to music. Nat Rev Psychol 2023; 2:333-346. PMID: 38143935; PMCID: PMC10745197; DOI: 10.1038/s44159-023-00182-z.
Abstract
Humans can find music happy, sad, fearful, or spiritual. They can be soothed by it or urged to dance. Whether these psychological responses reflect cognitive adaptations that evolved expressly for responding to music is an ongoing topic of study. In this Review, we examine three features of music-related psychological responses that help to elucidate whether the underlying cognitive systems are specialized adaptations: universality, domain-specificity, and early expression. Focusing on emotional and behavioural responses, we find evidence that the relevant psychological mechanisms are universal and arise early in development. However, the existing evidence cannot establish that these mechanisms are domain-specific. To the contrary, many findings suggest that universal psychological responses to music reflect more general properties of emotion, auditory perception, and other human cognitive capacities that evolved for non-musical purposes. Cultural evolution, driven by the tinkering of musical performers, evidently crafts music to compellingly appeal to shared psychological mechanisms, resulting in both universal patterns (such as form-function associations) and culturally idiosyncratic styles.
Affiliation(s)
- Manvir Singh
- Institute for Advanced Study in Toulouse, University of Toulouse 1 Capitole, Toulouse, France
- Samuel A. Mehr
- Yale Child Study Center, Yale University, New Haven, CT, USA
- School of Psychology, University of Auckland, Auckland, New Zealand
5. Bosworth RG, Hwang SO, Corina DP. Visual attention for linguistic and non-linguistic body actions in non-signing and native signing children. Front Psychol 2022; 13:951057. PMID: 36160576; PMCID: PMC9505519; DOI: 10.3389/fpsyg.2022.951057.
Abstract
Evidence from adult studies of deaf signers supports a dissociation between the neural systems involved in processing visual linguistic and non-linguistic body actions. How and when this specialization arises is poorly understood. Visual attention to these forms is likely to change with age and to be affected by prior language experience. The present study used eye-tracking methodology with infants and children as they freely viewed alternating video sequences of lexical American Sign Language (ASL) signs and non-linguistic body actions (self-directed grooming action and object-directed pantomime). In Experiment 1, we quantified fixation patterns using an area-of-interest (AOI) approach and calculated face preference index (FPI) values to assess developmental differences between 6- and 11-month-old hearing infants. Both groups were from monolingual English-speaking homes with no prior exposure to sign language. Six-month-olds attended to the signer's face for grooming, but for mimes and signs they were drawn to the "articulatory space" where the hands and arms primarily fall. Eleven-month-olds, on the other hand, showed similar attention to the face for all body action types. We interpret this to reflect an early visual language sensitivity that diminishes with age, just before the child's first birthday. In Experiment 2, we contrasted 18 hearing monolingual English-speaking children (mean age 4.8 years) with 13 hearing children of deaf adults (CODAs; mean age 5.7 years) whose primary language at home was ASL. Native signing children showed a significantly greater face attentional bias than non-signing children for ASL signs, but not for grooming and mimes. These differences in visual attention patterns, contingent on age (in infants) and language experience (in children), may be related both to linguistic specialization over time and to the emerging awareness of communicative gestural acts.
Affiliation(s)
- Rain G. Bosworth
- NTID PLAY Lab, National Technical Institute for the Deaf, Rochester Institute of Technology, Rochester, NY, United States
- So One Hwang
- Center for Research in Language, University of California, San Diego, San Diego, CA, United States
- David P. Corina
- Center for Mind and Brain, University of California, Davis, Davis, CA, United States
6. Fiber tracing and microstructural characterization among audiovisual integration brain regions in neonates compared with young adults. Neuroimage 2022; 254:119141. PMID: 35342006; DOI: 10.1016/j.neuroimage.2022.119141.
Abstract
Audiovisual integration (AVI) has been associated with cognitive-processing and behavioral advantages, as well as with various socio-cognitive disorders. While some studies have identified brain regions instantiating this ability shortly after birth, little is known about the structural pathways connecting them. The goal of the present study was to reconstruct the fiber tracts linking AVI regions in the newborn in-vivo brain and to assess their adult-likeness by comparing them with analogous fiber tracts of young adults. We performed probabilistic tractography and compared connective probabilities between a sample of term-born neonates (N = 311; the Developing Human Connectome Project, dHCP, http://www.developingconnectome.org) and young adults (N = 311; the Human Connectome Project, https://www.humanconnectome.org/) by means of a classification algorithm. Furthermore, we computed Dice coefficients to assess the between-group spatial similarity of the reconstructed fibers and used diffusion metrics to characterize neonates' AVI brain network in terms of microstructural properties, interhemispheric differences, and associations with perinatal covariates and biological sex. Overall, our results indicate that the AVI fiber bundles were successfully reconstructed in a vast majority of neonates, similarly to adults. Connective probability distributional similarities and spatial overlaps of AVI fibers between the two groups differed across the reconstructed fibers, and there was a rank-order correspondence of the fibers' connective strengths across the groups. Additionally, the study revealed patterns of diffusion metrics in line with early white-matter developmental trajectories and a developmental advantage for females. Altogether, these findings deliver evidence of meaningful structural connections among AVI regions in the newborn in-vivo brain.
7. Bánki A, de Eccher M, Falschlehner L, Hoehl S, Markova G. Comparing Online Webcam- and Laboratory-Based Eye-Tracking for the Assessment of Infants' Audio-Visual Synchrony Perception. Front Psychol 2022; 12:733933. PMID: 35087442; PMCID: PMC8787048; DOI: 10.3389/fpsyg.2021.733933.
Abstract
Online data collection with infants raises special opportunities and challenges for developmental research. One of the most prevalent methods in infancy research is eye-tracking, which has been widely applied in laboratory settings to assess cognitive development. Technological advances now allow conducting eye-tracking online with various populations, including infants. However, the accuracy and reliability of online infant eye-tracking remain to be comprehensively evaluated. No research to date has directly compared webcam-based and in-lab eye-tracking data from infants, as has been done with data from adults. The present study provides a direct comparison of in-lab and webcam-based eye-tracking data from infants who completed an identical looking-time paradigm in two different settings (in the laboratory or online at home). We assessed 4-6-month-old infants (n = 38) in an eye-tracking task that measured the detection of audio-visual asynchrony. Webcam-based and in-lab eye-tracking data were compared on eye-tracking and video data quality, infants' viewing behavior, and experimental effects. Results revealed no differences between the in-lab and online settings in the frequency of technical issues or participant attrition rates. Video data quality was comparable between settings in terms of completeness and brightness, despite lower frame rate and resolution online. Eye-tracking data quality was higher in the laboratory than online, except in the case of relative sample loss. Gaze data quantity recorded by eye-tracking was significantly lower than by video in both settings. In valid trials, eye-tracking and video data captured infants' viewing behavior uniformly, irrespective of setting. Despite the common challenges of infant eye-tracking across experimental settings, our results point to the need to further improve the precision of online eye-tracking with infants. Taken together, online eye-tracking is a promising tool to assess infants' gaze behavior but requires careful data quality control. The demographic composition of both samples differed from the general population in caregiver education: our samples comprised caregivers with higher-than-average education levels, challenging the notion that online studies will per se reach more diverse populations.
Affiliation(s)
- Anna Bánki
- Department of Developmental and Educational Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Martina de Eccher
- Department for Psychology of Language, Georg-Elias-Müller-Institut für Psychologie, Georg-August-Universität Göttingen, Göttingen, Germany
- Lilith Falschlehner
- Department of Developmental and Educational Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Stefanie Hoehl
- Department of Developmental and Educational Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Gabriela Markova
- Department of Developmental and Educational Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
8. Bainbridge CM, Bertolo M, Youngers J, Atwood S, Yurdum L, Simson J, Lopez K, Xing F, Martin A, Mehr SA. Infants relax in response to unfamiliar foreign lullabies. Nat Hum Behav 2021; 5:256-264. PMID: 33077883; PMCID: PMC8220405; DOI: 10.1038/s41562-020-00963-z.
Abstract
Music is characterized by acoustic forms that are predictive of its behavioural functions. For example, adult listeners accurately identify unfamiliar lullabies as infant-directed on the basis of their musical features alone. This property could reflect a function of listeners' experiences, the basic design of the human mind, or both. Here, we show that US infants (N = 144) relax in response to eight unfamiliar foreign lullabies, relative to matched non-lullaby songs from other foreign societies, as indexed by heart rate, pupillometry and electrodermal activity. They do so consistently throughout the first year of life, suggesting that the response is not a function of their musical experiences, which are limited relative to those of adults. The infants' parents overwhelmingly chose lullabies as the songs that they would use to calm their fussy infant, despite their unfamiliarity. Together, these findings suggest that infants may be predisposed to respond to common features of lullabies found in different cultures.
Affiliation(s)
- Mila Bertolo
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Julie Youngers
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada
- S Atwood
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Department of Psychology, University of Washington, Seattle, WA, USA
- Lidya Yurdum
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Jan Simson
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Kelsie Lopez
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Feng Xing
- Department of Psychology, Harvard University, Cambridge, MA, USA
- Department of Education, Johns Hopkins University, Baltimore, MD, USA
- Alia Martin
- School of Psychology, Victoria University of Wellington, Wellington, New Zealand
- Samuel A Mehr
- Department of Psychology, Harvard University, Cambridge, MA, USA
- School of Psychology, Victoria University of Wellington, Wellington, New Zealand
- Data Science Initiative, Harvard University, Cambridge, MA, USA
9. Zhou HY, Cheung EFC, Chan RCK. Audiovisual temporal integration: Cognitive processing, neural mechanisms, developmental trajectory and potential interventions. Neuropsychologia 2020; 140:107396. PMID: 32087206; DOI: 10.1016/j.neuropsychologia.2020.107396.
Abstract
To integrate auditory and visual signals into a unified percept, the paired stimuli must co-occur within a limited time window known as the Temporal Binding Window (TBW). The width of the TBW, a proxy for audiovisual temporal integration ability, has been found to be correlated with higher-order cognitive and social functions. A comprehensive review of studies investigating the audiovisual TBW reveals several findings: (1) a wide range of top-down processes and bottom-up features can modulate the width of the TBW, facilitating adaptation to a changing, multisensory external environment; (2) a large-scale brain network works in coordination to ensure successful detection of audiovisual (a)synchrony; (3) developmentally, the audiovisual TBW follows a U-shaped pattern across the lifespan, with a protracted developmental course into late adolescence and a rebound in size in late life; (4) an enlarged TBW is characteristic of a number of neurodevelopmental disorders; and (5) the TBW is highly flexible and can be modified via perceptual and musical training. Interventions targeting the TBW may be able to improve multisensory function and ameliorate social communicative symptoms in clinical populations.
Affiliation(s)
- Han-Yu Zhou
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Raymond C K Chan
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
10. Takehana A, Uehara T, Sakaguchi Y. Audiovisual synchrony perception in observing human motion to music. PLoS One 2019; 14:e0221584. PMID: 31454393; PMCID: PMC6711538; DOI: 10.1371/journal.pone.0221584.
Abstract
To examine how individuals perceive synchrony between music and body motion, we investigated the characteristics of synchrony perception during observation of a Japanese Radio Calisthenics routine. We used the method of constant stimuli to present video clips of an individual performing an exercise routine. We generated stimuli with a range of temporal shifts between the visual and auditory streams and asked participants to make synchrony judgments. We then examined which movement feature points agreed with music beats when the participants perceived synchrony. We found that the extremities (e.g., hands and feet) reached the movement endpoint or moved through the lowest position at the music beats associated with synchrony; movement onsets never agreed with music beats. To investigate whether visual information about the feature points was necessary for synchrony perception, we conducted a second experiment in which only limited portions of the video clips were presented to the participants. Participants consistently judged synchrony even when the video image did not contain the critical feature points, suggesting that a prediction mechanism contributes to synchrony perception. To discuss the meaning of these feature points with respect to synchrony perception, we examined the temporal relationship between the motion of body parts and the ground reaction force (GRF) of the exercise performers, which reflects the total force acting on the performer. Interestingly, the vertical GRF showed local peaks consistently synchronized with music beats for most exercises, with timing closely correlated with that of the movement feature points. This result suggests that synchrony perception in humans is based on a global variable anticipated from visual information, rather than on feature points found in the motion of individual body parts. In summary, the present results indicate that synchrony perception during observation of human motion to music depends largely on spatiotemporal prediction of the performer's motion.
Affiliation(s)
- Akira Takehana
- Department of Mechanical Engineering and Intelligent Systems, Graduate School of Informatics and Engineering, University of Electro-Communications, Chofu, Tokyo, Japan
- Tsukasa Uehara
- Department of Mechanical Engineering and Intelligent Systems, Graduate School of Informatics and Engineering, University of Electro-Communications, Chofu, Tokyo, Japan
- Yutaka Sakaguchi
- Department of Mechanical Engineering and Intelligent Systems, Graduate School of Informatics and Engineering, University of Electro-Communications, Chofu, Tokyo, Japan
- Research Center for Performance Art Science, University of Electro-Communications, Chofu, Tokyo, Japan
11. Metrical congruency and kinematic familiarity facilitate temporal binding between musical and dance rhythms. Psychon Bull Rev 2019; 25:1416-1422. PMID: 29766450; DOI: 10.3758/s13423-018-1480-3.
Abstract
Although music and dance are often experienced simultaneously, it is unclear what modulates their perceptual integration. This study investigated how two factors related to music-dance correspondences influenced audiovisual binding of their rhythms: the metrical match between the music and dance, and the kinematic familiarity of the dance movement. Participants watched a point-light figure dancing synchronously to a triple-meter rhythm that they heard in parallel, whereby the dance communicated a triple (congruent) or a duple (incongruent) visual meter. The movement was either the participant's own or that of another participant. Participants attended to both streams while detecting a temporal perturbation in the auditory beat. The results showed lower sensitivity to the auditory deviant when the visual dance was metrically congruent to the auditory rhythm and when the movement was the participant's own. This indicated stronger audiovisual binding and a more coherent bimodal rhythm in these conditions, thus making a slight auditory deviant less noticeable. Moreover, binding in the metrically incongruent condition involving self-generated visual stimuli was correlated with self-recognition of the movement, suggesting that action simulation mediates the perceived coherence between one's own movement and a mismatching auditory rhythm. Overall, the mechanisms of rhythm perception and action simulation could inform the perceived compatibility between music and dance, thus modulating the temporal integration of these audiovisual stimuli.
12. Cirelli LK. How interpersonal synchrony facilitates early prosocial behavior. Curr Opin Psychol 2017; 20:35-39. PMID: 28830004; DOI: 10.1016/j.copsyc.2017.08.009.
Abstract
When infants and children affiliate with others, certain cues may direct their social efforts toward 'better' social partners. Interpersonal synchrony, which occurs when two or more people move together in time, can be one such cue. In adults, experiencing interpersonal synchrony encourages affiliative behaviors. Recent studies have found that these effects also influence early prosociality: for example, 14-month-olds help a synchronous partner more than an asynchronous partner. These effects on helping are specifically directed toward the synchronous movement partner and members of that person's social group. In older children, the prosocial effects of interpersonal synchrony may even cross group divides. How synchrony and other cues for group membership influence early prosociality is a promising avenue for future research.
Affiliation(s)
- Laura K Cirelli
- Department of Psychology, University of Toronto Mississauga, Mississauga, ON L5L 1C6, Canada
13. Ravignani A, Honing H, Kotz SA. Editorial: The Evolution of Rhythm Cognition: Timing in Music and Speech. Front Hum Neurosci 2017; 11:303. PMID: 28659775; PMCID: PMC5468413; DOI: 10.3389/fnhum.2017.00303.
Affiliation(s)
- Andrea Ravignani
- Veterinary and Research Department, Sealcentre Pieterburen, Pieterburen, Netherlands
- Language and Cognition Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
- Artificial Intelligence Lab, Vrije Universiteit Brussel, Brussels, Belgium
- Henkjan Honing
- Music Cognition Group, Amsterdam Brain and Cognition, Institute for Logic, Language, and Computation, University of Amsterdam, Amsterdam, Netherlands
- Sonja A Kotz
- Basic and Applied NeuroDynamics Lab, Faculty of Psychology and Neuroscience, Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, Netherlands
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany