1
Jertberg RM, Wienicke FJ, Andruszkiewicz K, Begeer S, Chakrabarti B, Geurts HM, de Vries R, Van der Burg E. Differences between autistic and non-autistic individuals in audiovisual speech integration: A systematic review and meta-analysis. Neurosci Biobehav Rev 2024; 164:105787. PMID: 38945419. DOI: 10.1016/j.neubiorev.2024.105787.
Abstract
Research has indicated unique challenges in audiovisual integration of speech among autistic individuals, although methodological differences have led to divergent findings. We conducted a systematic literature search to identify studies that measured audiovisual speech integration in both autistic and non-autistic individuals. Across the 18 identified studies (combined N = 952), autistic individuals showed impaired audiovisual integration compared to their non-autistic peers (g = 0.69, 95% CI [0.53, 0.85], p < .001). This difference was not influenced by participants' mean ages, studies' sample sizes, risk-of-bias scores, or the paradigms employed. However, a subgroup analysis suggested that studies with children may show larger between-group differences than studies with adults. The prevailing pattern of impaired audiovisual speech integration in autism may have cascading effects on communicative and social behavior. However, small samples and inconsistency in designs and analyses translated into considerable heterogeneity in findings and opacity regarding the influence of underlying unisensory and attentional factors. We recommend three key directions for future research: larger samples, more research with adults, and standardization of methodology and analytical approaches.
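The pooled effect reported above comes from a random-effects meta-analysis. As a rough illustration of how a pooled Hedges' g and its 95% CI are typically computed, here is a minimal DerSimonian-Laird sketch; the per-study effect sizes and variances below are invented placeholders, not values from the review.

```python
# Minimal random-effects meta-analysis sketch (DerSimonian-Laird).
# Per-study g and sampling variances are hypothetical placeholders.
import numpy as np

g = np.array([0.20, 0.95, 0.75, 0.50])   # hypothetical per-study Hedges' g
v = np.array([0.04, 0.05, 0.03, 0.06])   # hypothetical sampling variances

w = 1.0 / v                               # fixed-effect (inverse-variance) weights
g_fe = np.sum(w * g) / np.sum(w)
Q = np.sum(w * (g - g_fe) ** 2)           # Cochran's Q (heterogeneity statistic)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(g) - 1)) / c)   # between-study variance estimate

w_re = 1.0 / (v + tau2)                   # random-effects weights
g_re = np.sum(w_re * g) / np.sum(w_re)    # pooled effect size
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled g = {g_re:.2f}, 95% CI [{g_re - 1.96*se:.2f}, {g_re + 1.96*se:.2f}]")
```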
Affiliation(s)
- Robert M Jertberg
- Department of Clinical and Developmental Psychology, Vrije Universiteit Amsterdam, and Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Frederik J Wienicke
- Department of Clinical Psychology, Behavioural Science Institute, Radboud University, Nijmegen, the Netherlands
- Krystian Andruszkiewicz
- Department of Clinical and Developmental Psychology, Vrije Universiteit Amsterdam, and Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Sander Begeer
- Department of Clinical and Developmental Psychology, Vrije Universiteit Amsterdam, and Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Bhismadev Chakrabarti
- Centre for Autism, School of Psychology and Clinical Language Sciences, University of Reading, UK; India Autism Center, Kolkata, India; Department of Psychology, Ashoka University, India
- Hilde M Geurts
- Department of Psychology, Universiteit van Amsterdam, the Netherlands; Leo Kannerhuis (Youz/Parnassiagroup), the Netherlands
- Ralph de Vries
- Medical Library, Vrije Universiteit, Amsterdam, the Netherlands
- Erik Van der Burg
- Department of Clinical and Developmental Psychology, Vrije Universiteit Amsterdam, and Amsterdam Public Health Research Institute, Amsterdam, the Netherlands; Department of Psychology, Universiteit van Amsterdam, the Netherlands
2
Magnotti JF, Lado A, Beauchamp MS. The noisy encoding of disparity model predicts perception of the McGurk effect in native Japanese speakers. Front Neurosci 2024; 18:1421713. PMID: 38988770; PMCID: PMC11233445. DOI: 10.3389/fnins.2024.1421713.
Abstract
In the McGurk effect, visual speech from the face of the talker alters the perception of auditory speech. The diversity of human languages has prompted many intercultural studies of the effect in both Western and non-Western cultures, including native Japanese speakers. Studies of large samples of native English speakers have shown that the McGurk effect is characterized by high variability, both in individuals' susceptibility to the illusion and in the capacity of different experimental stimuli to induce it. The noisy encoding of disparity (NED) model of the McGurk effect uses principles from Bayesian causal inference to account for this variability, separately estimating the susceptibility and sensory noise of each individual and the strength of each stimulus. To determine whether variation in McGurk perception is similar between Western and non-Western cultures, we applied the NED model to data collected from 80 native Japanese-speaking participants. Fifteen different McGurk stimuli that varied in syllable content (unvoiced auditory "pa" + visual "ka" or voiced auditory "ba" + visual "ga") were presented interleaved with audiovisual congruent stimuli. The McGurk effect was highly variable across stimuli and participants, with the percentage of illusory fusion responses ranging from 3% to 78% across stimuli and from 0% to 91% across participants. Despite this variability, the NED model accurately predicted perception, predicting fusion rates for individual stimuli with 2.1% error and for individual participants with 2.4% error. Stimuli containing the unvoiced pa/ka pairing evoked more fusion responses than the voiced ba/ga pairing. Model estimates of sensory noise were correlated with participant age, with greater sensory noise in older participants. The NED model of the McGurk effect offers a principled way to account for individual and stimulus differences when examining the McGurk effect in different cultures.
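The core of the NED model can be written as a probit relationship between stimulus disparity and participant-level parameters. A minimal sketch follows, assuming the standard formulation in which a participant reports fusion when the noisily encoded audiovisual disparity falls below their threshold; all numbers are illustrative, not fitted values from this study.

```python
# Sketch of the NED model's core equation: encoded disparity ~ N(stimulus
# disparity, participant noise); fusion is reported when the encoded value
# falls below the participant's disparity threshold (susceptibility).
from scipy.stats import norm

def p_fusion(stim_disparity, threshold, sensory_noise):
    """P(encoded disparity < threshold) for one participant and stimulus."""
    return norm.cdf((threshold - stim_disparity) / sensory_noise)

# A weak stimulus (high disparity) vs. a strong one (low disparity) for a
# participant with moderate susceptibility and sensory noise.
print(p_fusion(stim_disparity=1.5, threshold=1.0, sensory_noise=0.5))  # ~0.16
print(p_fusion(stim_disparity=0.4, threshold=1.0, sensory_noise=0.5))  # ~0.88
```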
Affiliation(s)
- John F Magnotti
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Anastasia Lado
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Michael S Beauchamp
- Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
3
Jertberg RM, Begeer S, Geurts HM, Chakrabarti B, Van der Burg E. Age, not autism, influences multisensory integration of speech stimuli among adults in a McGurk/MacDonald paradigm. Eur J Neurosci 2024; 59:2979-2994. PMID: 38570828. DOI: 10.1111/ejn.16319.
Abstract
Differences between autistic and non-autistic individuals in perception of the temporal relationships between sights and sounds are theorized to underlie difficulties in integrating relevant sensory information. These, in turn, are thought to contribute to problems with speech perception and higher-level social behaviour. However, the literature establishing this connection often involves limited sample sizes and focuses almost entirely on children. To determine whether these differences persist into adulthood, we compared 496 autistic and 373 non-autistic adults (aged 17 to 75 years). Participants completed an online version of the McGurk/MacDonald paradigm, in which a multisensory illusion indexes the ability to integrate audiovisual speech stimuli. Audiovisual asynchrony was manipulated, and participants responded both to the syllable they perceived (revealing their susceptibility to the illusion) and to whether or not the audio and video were synchronized (allowing insight into temporal processing). In contrast with prior research with smaller, younger samples, we detected no evidence of impaired temporal or multisensory processing in autistic adults. Instead, we found that in both groups multisensory integration correlated strongly with age. This contradicts prior presumptions that differences in multisensory perception persist and even increase in magnitude over the lifespan of autistic individuals. It also suggests that the compensatory role multisensory integration may play as the individual senses decline with age remains intact. These findings challenge existing theories and provide an optimistic perspective on autistic development. They also underline the importance of expanding autism research to better reflect the age range of the autistic population.
Affiliation(s)
- Robert M Jertberg
- Department of Clinical and Developmental Psychology, Vrije Universiteit Amsterdam, and Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Sander Begeer
- Department of Clinical and Developmental Psychology, Vrije Universiteit Amsterdam, and Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Hilde M Geurts
- Dutch Autism and ADHD Research Center (d'Arc), Brain & Cognition, Department of Psychology, Universiteit van Amsterdam, Amsterdam, the Netherlands
- Leo Kannerhuis (Youz/Parnassiagroup), Den Haag, the Netherlands
- Bhismadev Chakrabarti
- Centre for Autism, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- India Autism Center, Kolkata, India
- Department of Psychology, Ashoka University, Sonipat, India
- Erik Van der Burg
- Dutch Autism and ADHD Research Center (d'Arc), Brain & Cognition, Department of Psychology, Universiteit van Amsterdam, Amsterdam, the Netherlands
4
Saito H, Tiede M, Whalen DH, Ménard L. The effect of native language and bilingualism on multimodal perception in speech: A study of audio-aerotactile integration. J Acoust Soc Am 2024; 155:2209-2220. PMID: 38526052; PMCID: PMC10965246. DOI: 10.1121/10.0025381.
Abstract
Previous studies of speech perception have revealed that tactile sensation can be integrated into the perception of stop consonants. It remains uncertain whether such multisensory integration can be shaped by linguistic experience, such as the listener's native language(s). This study investigates audio-aerotactile integration in phoneme perception for English and French monolinguals as well as English-French bilingual listeners. Six-step voice onset time continua of alveolar (/da/-/ta/) and labial (/ba/-/pa/) stops constructed from both English and French end points were presented to listeners, who performed a forced-choice identification task. Air puffs were synchronized to syllable onset and randomly applied to the back of the hand. Results show that stimuli with an air puff elicited more "voiceless" responses for the /da/-/ta/ continuum from both English and French listeners. This suggests that audio-aerotactile integration can occur even though the French listeners did not have an aspiration/non-aspiration contrast in their native language. Furthermore, bilingual speakers showed larger air puff effects than monolinguals in both languages, perhaps due to bilinguals' heightened receptiveness to multimodal information in speech.
Affiliation(s)
- Haruka Saito
- Département de Linguistique, Université du Québec à Montréal, Montréal, Québec H2L 2C5, Canada
- Mark Tiede
- Department of Psychiatry, Yale School of Medicine, New Haven, Connecticut 06520, USA
- D H Whalen
- The Graduate Center, City University of New York (CUNY), New York, New York 10016, USA
- Yale Child Study Center, New Haven, Connecticut 06520, USA
- Lucie Ménard
- Département de Linguistique, Université du Québec à Montréal, Montréal, Québec H2L 2C5, Canada
5
Jertberg RM, Begeer S, Geurts HM, Chakrabarti B, Van der Burg E. Perception of temporal synchrony not a prerequisite for multisensory integration. Sci Rep 2024; 14:4982. PMID: 38424118; PMCID: PMC10904801. DOI: 10.1038/s41598-024-55572-x.
Abstract
Temporal alignment is often viewed as the most essential cue the brain can use to integrate information from across sensory modalities. However, the importance of conscious perception of synchrony to multisensory integration is a controversial topic. Conversely, the influence of cross-modal incongruence of higher-level stimulus features such as phonetics on temporal processing is poorly understood. To explore the nuances of this relationship between temporal processing and multisensory integration, we presented 101 participants (ranging from 19 to 73 years of age) with stimuli designed to elicit the McGurk/MacDonald illusion (either matched or mismatched pairs of phonemes and visemes) with varying degrees of stimulus onset asynchrony between the visual and auditory streams. We asked them to indicate which syllable they perceived and whether the video and audio were synchronized on each trial. We found that participants often experienced the illusion despite not perceiving the stimuli as synchronous, and that the same phonetic incongruence that produced the illusion also led to significant interference in simultaneity judgments. These findings challenge the longstanding assumption that perception of synchrony is a prerequisite for multisensory integration, support a more flexible view of multisensory integration, and suggest a complex, reciprocal relationship between temporal and multisensory processing.
Affiliation(s)
- Robert M Jertberg
- Department of Clinical and Developmental Psychology, Vrije Universiteit Amsterdam, and Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Sander Begeer
- Department of Clinical and Developmental Psychology, Vrije Universiteit Amsterdam, and Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Hilde M Geurts
- Dutch Autism and ADHD Research Center (d'Arc), Brain and Cognition, Department of Psychology, Universiteit van Amsterdam, Amsterdam, the Netherlands
- Bhismadev Chakrabarti
- Centre for Autism, School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- India Autism Center, Kolkata, India
- Department of Psychology, Ashoka University, Sonipat, India
- Erik Van der Burg
- Dutch Autism and ADHD Research Center (d'Arc), Brain and Cognition, Department of Psychology, Universiteit van Amsterdam, Amsterdam, the Netherlands
6
Verhaar E, Medendorp WP, Hunnius S, Stapel JC. Bayesian causal inference in visuotactile integration in children and adults. Dev Sci 2022; 25:e13184. PMID: 34698430; PMCID: PMC9285718. DOI: 10.1111/desc.13184.
Abstract
If cues from different sensory modalities share the same cause, their information can be integrated to improve perceptual precision. While it is well established that adults exploit sensory redundancy by integrating cues in a Bayes-optimal fashion, whether children under 8 years of age combine sensory information similarly is still under debate. If children differ from adults in the way they infer causality between cues, this may explain mixed findings on the development of cue integration in earlier studies. Here we investigated the role of causal inference in the development of cue integration by means of a visuotactile localization task. Young children (6-8 years), older children (9.5-12.5 years) and adults had to localize a tactile stimulus, which was presented to the forearm simultaneously with a visual stimulus at either the same or a different location. In all age groups, responses were systematically biased toward the position of the visual stimulus, but relatively more so when the distance between the visual and tactile stimuli was small rather than large. This pattern of results was better captured by a Bayesian causal inference model than by alternative models of forced fusion or full segregation of the two stimuli. Our results suggest that already from a young age the brain implicitly infers the probability that a tactile and a visual cue share the same cause and uses this probability as a weighting factor in visuotactile localization.
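A compact sketch of the Bayesian causal inference computation the abstract describes, following the standard Körding et al. (2007) formulation with a spatial prior centered at zero; the parameter values here are illustrative assumptions, not this study's fitted estimates.

```python
# Bayesian causal inference for a visuotactile localization trial:
# weigh a fused (common-cause) estimate against a segregated tactile
# estimate by the posterior probability that the cues share one cause.
import numpy as np

def bci_tactile_estimate(x_v, x_t, sv, st, sp, p_common):
    """Model-averaged tactile location estimate from visual/tactile cues."""
    # Marginal likelihood of the cue pair under one common cause
    denom = sv**2 * st**2 + sv**2 * sp**2 + st**2 * sp**2
    like_c1 = np.exp(-0.5 * ((x_v - x_t)**2 * sp**2 + x_v**2 * st**2
                             + x_t**2 * sv**2) / denom) / (2 * np.pi * np.sqrt(denom))
    # ...and under two independent causes
    like_c2 = (np.exp(-0.5 * x_v**2 / (sv**2 + sp**2))
               * np.exp(-0.5 * x_t**2 / (st**2 + sp**2))
               / (2 * np.pi * np.sqrt((sv**2 + sp**2) * (st**2 + sp**2))))
    # Posterior probability of a common cause
    p_c1 = like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))
    # Optimal estimates under each causal structure
    s_fused = (x_v / sv**2 + x_t / st**2) / (1 / sv**2 + 1 / st**2 + 1 / sp**2)
    s_tactile = (x_t / st**2) / (1 / st**2 + 1 / sp**2)
    # Model averaging
    return p_c1 * s_fused + (1 - p_c1) * s_tactile

# Nearby cues are largely fused (strong visual bias); distant cues much less so.
print(bci_tactile_estimate(x_v=2.0, x_t=0.0, sv=0.5, st=1.5, sp=10.0, p_common=0.5))
print(bci_tactile_estimate(x_v=12.0, x_t=0.0, sv=0.5, st=1.5, sp=10.0, p_common=0.5))
```

This reproduces the qualitative pattern in the abstract: the visual bias is relatively larger when the visuotactile distance is small than when it is large.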
Affiliation(s)
- Erik Verhaar
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Sabine Hunnius
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
7
Rosemann S, Gieseler A, Tahden M, Colonius H, Thiel CM. Treatment of Age-Related Hearing Loss Alters Audiovisual Integration and Resting-State Functional Connectivity: A Randomized Controlled Pilot Trial. eNeuro 2021; 8:ENEURO.0258-21.2021. PMID: 34759049; PMCID: PMC8658542. DOI: 10.1523/eneuro.0258-21.2021.
Abstract
Untreated age-related hearing loss increases audiovisual integration and impacts resting state functional brain connectivity. Further, there is a relation between crossmodal plasticity and audiovisual integration strength in cochlear implant patients. However, it is currently unclear whether amplification of the auditory input by hearing aids influences audiovisual integration and resting state functional brain connectivity. We conducted a randomized controlled pilot study to investigate how the McGurk illusion, a common measure of audiovisual integration, and resting state functional brain connectivity of the auditory cortex are altered by six months of hearing aid use. Thirty-two older participants with slight-to-moderate, symmetric, age-related hearing loss were allocated to a treatment or waiting control group and measured with functional magnetic resonance imaging one week before and six months after hearing aid fitting. Our results showed a statistical trend toward an increased McGurk illusion after six months of hearing aid use. We further demonstrated that an increase in McGurk susceptibility is related to a decreased hearing aid benefit for auditory speech intelligibility in noise. No significant interaction between group and time point was obtained in the whole-brain resting state analysis. However, a region of interest (ROI)-to-ROI analysis indicated that six months of hearing aid use was associated with a decrease in resting state functional connectivity between the auditory cortex and the fusiform gyrus, and that this decrease was related to an increase in perceived McGurk illusions. Our study therefore suggests that even short-term hearing aid use alters audiovisual integration and functional brain connectivity between auditory and visual cortices.
Affiliation(s)
- Stephanie Rosemann
- Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg 26111, Germany
- Cluster of Excellence "Hearing4all," Carl von Ossietzky Universität Oldenburg, Oldenburg 26111, Germany
- Anja Gieseler
- Cluster of Excellence "Hearing4all," Carl von Ossietzky Universität Oldenburg, Oldenburg 26111, Germany
- Cognitive Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg 26111, Germany
- Maike Tahden
- Cluster of Excellence "Hearing4all," Carl von Ossietzky Universität Oldenburg, Oldenburg 26111, Germany
- Cognitive Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg 26111, Germany
- Hans Colonius
- Cluster of Excellence "Hearing4all," Carl von Ossietzky Universität Oldenburg, Oldenburg 26111, Germany
- Cognitive Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg 26111, Germany
- Christiane M Thiel
- Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg 26111, Germany
- Cluster of Excellence "Hearing4all," Carl von Ossietzky Universität Oldenburg, Oldenburg 26111, Germany
8
Magnotti JF, Dzeda KB, Wegner-Clemens K, Rennig J, Beauchamp MS. Weak observer-level correlation and strong stimulus-level correlation between the McGurk effect and audiovisual speech-in-noise: A causal inference explanation. Cortex 2020; 133:371-383. PMID: 33221701. DOI: 10.1016/j.cortex.2020.10.002.
Abstract
The McGurk effect is a widely used measure of multisensory integration during speech perception. Two observations have raised questions about the validity of the effect as a tool for understanding speech perception. First, there is high variability in perception of the McGurk effect across different stimuli and observers. Second, across observers there is low correlation between McGurk susceptibility and recognition of visual speech paired with auditory speech-in-noise, another common measure of multisensory integration. Using the framework of the causal inference of multisensory speech (CIMS) model, we explored the relationship between the McGurk effect, syllable perception, and sentence perception in seven experiments with a total of 296 different participants. Perceptual reports revealed a relationship between the efficacy of different McGurk stimuli created from the same talker and perception of the auditory component of the McGurk stimuli presented in isolation, both with and without added noise. The CIMS model explained this strong stimulus-level correlation using the principles of noisy sensory encoding followed by optimal cue combination within a common representational space across speech types. Because the McGurk effect (but not speech-in-noise) requires the resolution of conflicting cues between modalities, there is an additional source of individual variability that can explain the weak observer-level correlation between McGurk and noisy speech. Power calculations show that detecting this weak correlation requires studies with many more participants than those conducted to date. Perception of the McGurk effect and other types of speech can be explained by a common theoretical framework that includes causal inference, suggesting that the McGurk effect is a valid and useful experimental tool.
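The power argument can be made concrete with the usual Fisher-z approximation for correlation tests; a minimal sketch follows (the correlation values are illustrative, not the paper's estimates).

```python
# Approximate sample size needed to detect a correlation r with a
# two-sided Fisher-z test at the given power and alpha.
import numpy as np
from scipy.stats import norm

def n_for_correlation(r, power=0.80, alpha=0.05):
    z_alpha, z_beta = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return int(np.ceil(((z_alpha + z_beta) / np.arctanh(r)) ** 2 + 3))

for r in (0.1, 0.2, 0.3):
    print(f"r = {r}: N ~ {n_for_correlation(r)}")
# A weak r = 0.1 needs roughly 800 participants; r = 0.3 only about 85,
# which is why small studies routinely miss weak observer-level correlations.
```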
9
Reduced resting state functional connectivity with increasing age-related hearing loss and McGurk susceptibility. Sci Rep 2020; 10:16987. PMID: 33046800; PMCID: PMC7550565. DOI: 10.1038/s41598-020-74012-0.
Abstract
Age-related hearing loss has been related to a compensatory increase in audio-visual integration and to neural reorganization, including alterations in functional resting state connectivity. How these two changes are linked in elderly listeners is unclear. The current study explored modulatory effects of hearing thresholds and audio-visual integration on resting state functional connectivity. We analysed a large set of resting state data from 65 elderly participants with widely varying degrees of untreated hearing loss. Audio-visual integration, as gauged with the McGurk effect, increased with rising hearing thresholds. On the neural level, McGurk illusions were negatively related to functional coupling between motor and auditory regions. Similarly, connectivity of the dorsal attention network to sensorimotor and primary motor cortices was reduced with increasing hearing loss. The same effect was obtained for connectivity between the salience network and visual cortex. Our findings suggest that with progressing untreated age-related hearing loss, functional coupling at rest declines, affecting connectivity of brain networks and areas associated with attentional, visual, sensorimotor and motor processes. Connectivity reductions between auditory and motor areas, especially, were related to the stronger audio-visual integration found with increasing hearing loss.
10
Beatteay A, Wilbiks JMP. The effects of major depressive disorder symptoms on audiovisual integration. J Cogn Psychol 2020. DOI: 10.1080/20445911.2020.1825452.
Affiliation(s)
- Annika Beatteay
- Department of Psychology, University of New Brunswick – Saint John, Saint John, NB, Canada
- Jonathan M. P. Wilbiks
- Department of Psychology, University of New Brunswick – Saint John, Saint John, NB, Canada
11
Dunham K, Feldman JI, Liu Y, Cassidy M, Conrad JG, Santapuram P, Suzman E, Tu A, Butera I, Simon DM, Broderick N, Wallace MT, Lewkowicz D, Woynaroski TG. Stability of Variables Derived From Measures of Multisensory Function in Children With Autism Spectrum Disorder. Am J Intellect Dev Disabil 2020; 125:287-303. PMID: 32609807; PMCID: PMC8903073. DOI: 10.1352/1944-7558-125.4.287.
Abstract
Children with autism spectrum disorder (ASD) display differences in multisensory function as quantified by several different measures. This study estimated the stability of variables derived from commonly used measures of multisensory function in school-aged children with ASD. Participants completed a simultaneity judgment task for audiovisual speech, tasks designed to elicit the McGurk effect, listening-in-noise tasks, electroencephalographic recordings, and eye-tracking tasks. Results indicate that the stability of indices derived from tasks tapping multisensory processing is variable. These findings have important implications for measurement in future research. Averaging scores across repeated observations will often be required to obtain acceptably stable estimates and, thus, to increase the likelihood of detecting effects of interest related to multisensory processing in children with ASD.
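The recommendation to average scores across repeated observations follows directly from the Spearman-Brown prophecy formula; a quick sketch, with a made-up single-session reliability for illustration:

```python
# Reliability of the mean of k parallel measurements, each with
# single-measurement reliability r_single (Spearman-Brown).
def spearman_brown(r_single, k):
    return k * r_single / (1 + (k - 1) * r_single)

for k in (1, 2, 4, 8):
    print(k, round(spearman_brown(0.5, k), 2))
# A task with single-session reliability 0.5 reaches ~0.89
# when eight sessions are averaged.
```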
Affiliation(s)
- Kacie Dunham
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Jacob I. Feldman
- Department of Hearing & Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Yupeng Liu
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Margaret Cassidy
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Julie G. Conrad
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Present Address: College of Medicine, University of Illinois, Chicago, IL, USA
- Pooja Santapuram
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Present Address: School of Medicine, Vanderbilt University, Nashville, TN, USA
- Evan Suzman
- Department of Biomedical Sciences, Vanderbilt University, Nashville, TN, USA
- Alexander Tu
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Present Address: College of Medicine, University of Nebraska Medical Center, Omaha, NE, USA
- Iliza Butera
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- David M. Simon
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Present Address: axialHealthcare, Nashville, TN, USA
- Neill Broderick
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN, USA
- Mark T. Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing & Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Pharmacology, Vanderbilt University, Nashville, TN, USA
- David Lewkowicz
- Department of Communication Sciences & Disorders, Northeastern University, Boston, MA, USA
- Tiffany G. Woynaroski
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Hearing & Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
12
Wegner-Clemens K, Rennig J, Beauchamp MS. A relationship between Autism-Spectrum Quotient and face viewing behavior in 98 participants. PLoS One 2020; 15:e0230866. PMID: 32352984; PMCID: PMC7192493. DOI: 10.1371/journal.pone.0230866.
Abstract
Faces are among the most important stimuli that we encounter, but humans vary dramatically in their behavior when viewing a face: some individuals preferentially fixate the eyes, others fixate the mouth, and still others show an intermediate pattern. The determinants of these large individual differences are unknown. However, individuals with Autism Spectrum Disorder (ASD) spend less time fixating the eyes of a viewed face than controls, suggesting the hypothesis that autistic traits in healthy adults might explain individual differences in face viewing behavior. Autistic traits were measured in 98 healthy adults recruited from an academic setting using the Autism-Spectrum Quotient, a validated 50-statement questionnaire. Fixations were measured using a video-based eye tracker while participants viewed two different types of audiovisual movies: short videos of a talker speaking single syllables and longer videos of talkers speaking sentences in a social context. For both types of movies, there was a positive correlation between Autism-Spectrum Quotient score and percent of time fixating the lower half of the face that explained from 4% to 10% of the variance in individual face viewing behavior. This effect suggests that in healthy adults, autistic traits are one of many factors that contribute to individual differences in face viewing behavior.
Affiliation(s)
- Kira Wegner-Clemens
- Department of Neurosurgery and Core for Advanced MRI, Baylor College of Medicine, Houston, Texas, United States of America
- Johannes Rennig
- Department of Neurosurgery and Core for Advanced MRI, Baylor College of Medicine, Houston, Texas, United States of America
- Michael S. Beauchamp
- Department of Neurosurgery and Core for Advanced MRI, Baylor College of Medicine, Houston, Texas, United States of America
13
Zhou X, Innes-Brown H, McKay CM. Audio-visual integration in cochlear implant listeners and the effect of age difference. J Acoust Soc Am 2019; 146:4144. PMID: 31893708. DOI: 10.1121/1.5134783.
Abstract
This study aimed to investigate differences in audio-visual (AV) integration between cochlear implant (CI) listeners and normal-hearing (NH) adults. A secondary aim was to investigate the effect of age differences by examining AV integration in groups of older and younger NH adults. Seventeen CI listeners, 13 similarly aged NH adults, and 16 younger NH adults were recruited. Two speech identification experiments were conducted to evaluate AV integration of speech cues. In the first experiment, reaction times in audio-alone (A-alone), visual-alone (V-alone), and AV conditions were measured during a speeded task in which participants were asked to identify a target sound /aSa/ among 11 alternatives. A race model was applied to evaluate AV integration. In the second experiment, identification accuracies were measured using a closed set of consonants and an open set of consonant-nucleus-consonant words. The authors quantified AV integration using a combination of a probability model and a cue integration model (which model participants' AV accuracy by assuming no integration or optimal integration, respectively). The results showed that experienced CI listeners displayed no better AV integration than similarly aged NH adults. Further, there was no significant difference in AV integration between the younger and older NH adults.
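One common way to formalize the race-model comparison for reaction times is Miller's race model inequality, under which multisensory integration is inferred wherever the audiovisual reaction-time CDF exceeds the sum of the unisensory CDFs. A minimal sketch with simulated reaction times (not the study's data, and not necessarily the authors' exact test):

```python
# Race model inequality: P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t).
# A positive violation suggests the two modalities were integrated rather
# than racing independently.
import numpy as np

rng = np.random.default_rng(0)
rt_a = rng.normal(520, 60, 200)    # hypothetical audio-alone RTs (ms)
rt_v = rng.normal(560, 70, 200)    # hypothetical visual-alone RTs (ms)
rt_av = rng.normal(470, 55, 200)   # hypothetical audiovisual RTs (ms)

def ecdf(samples, t):
    """Empirical CDF of samples evaluated at each time point in t."""
    return np.mean(samples[:, None] <= t, axis=0)

t_grid = np.linspace(350, 700, 50)
bound = np.minimum(1.0, ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid))
violation = ecdf(rt_av, t_grid) - bound
print("max race-model violation:", violation.max())  # > 0 suggests integration
```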
Affiliation(s)
- Xin Zhou
- Bionics Institute of Australia, East Melbourne, Victoria 3002, Australia
- Hamish Innes-Brown
- Bionics Institute of Australia, East Melbourne, Victoria 3002, Australia
- Colette M McKay
- Bionics Institute of Australia, East Melbourne, Victoria 3002, Australia
14
Feng G, Zhou B, Zhou W, Beauchamp MS, Magnotti JF. A Laboratory Study of the McGurk Effect in 324 Monozygotic and Dizygotic Twins. Front Neurosci 2019; 13:1029. PMID: 31636529; PMCID: PMC6787151. DOI: 10.3389/fnins.2019.01029.
Abstract
Multisensory integration of information from the talker's voice and the talker's mouth facilitates human speech perception. A popular assay of audiovisual integration is the McGurk effect, an illusion in which incongruent visual speech information categorically changes the percept of auditory speech. There is substantial interindividual variability in susceptibility to the McGurk effect. To better understand possible sources of this variability, we examined the McGurk effect in 324 native Mandarin speakers, consisting of 73 monozygotic (MZ) and 89 dizygotic (DZ) twin pairs. When tested with 9 different McGurk stimuli, some participants never perceived the illusion and others always perceived it. Within participants, perception was similar across time (r = 0.55 at a 2-year retest in 150 participants), suggesting that McGurk susceptibility reflects a stable trait rather than short-term perceptual fluctuations. To examine the effects of shared genetics and prenatal environment, we compared McGurk susceptibility between MZ and DZ twins. Both twin types had significantly greater correlation than unrelated pairs (r = 0.28 for MZ twins and r = 0.21 for DZ twins), suggesting that the genes and environmental factors shared by twins contribute to individual differences in multisensory speech perception. Conversely, the existence of substantial differences within twin pairs (even MZ co-twins) and the overall low percentage of explained variance (5.5%) argue against a deterministic view of individual differences in multisensory integration.
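As a back-of-envelope illustration only (Falconer's classic twin formulas, not the authors' own modeling), the reported MZ/DZ correlations roughly bound the additive genetic and shared-environment contributions:

```python
# Falconer's formulas applied to the correlations quoted in the abstract.
# A crude heuristic for intuition, not a substitute for formal twin modeling.
r_mz, r_dz = 0.28, 0.21
h2 = 2 * (r_mz - r_dz)   # additive genetic variance (heritability)
c2 = 2 * r_dz - r_mz     # shared-environment variance
print(f"h2 ~ {h2:.2f}, c2 ~ {c2:.2f}")  # both come out around 0.14
```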
Affiliation(s)
- Guo Feng
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Psychological Research and Counseling Center, Southwest Jiaotong University, Chengdu, China
- Bin Zhou
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Wen Zhou
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Michael S. Beauchamp
- Department of Neurosurgery and Core for Advanced MRI, Baylor College of Medicine, Houston, TX, United States
- John F. Magnotti
- Department of Neurosurgery and Core for Advanced MRI, Baylor College of Medicine, Houston, TX, United States
15
Basirat A, Allart É, Brunellière A, Martin Y. Audiovisual speech segmentation in post-stroke aphasia: a pilot study. Top Stroke Rehabil 2019; 26:588-594. PMID: 31369358. DOI: 10.1080/10749357.2019.1643566.
Abstract
Background: Stroke may cause sentence comprehension disorders. Speech segmentation, i.e. the ability to detect word boundaries while listening to continuous speech, is an initial step allowing the successful identification of words and the accurate understanding of meaning within sentences. It has received little attention in people with post-stroke aphasia (PWA). Objectives: Our goal was to study speech segmentation in PWA and examine the potential benefit of seeing the speakers' articulatory gestures while segmenting sentences. Methods: Fourteen PWA and twelve healthy controls participated in this pilot study. Performance was measured with a word-monitoring task. In the auditory-only modality, participants were presented with auditory-only stimuli, while in the audiovisual modality, visual speech cues (i.e. the speaker's articulatory gestures) accompanied the auditory input. The proportion of correct responses was calculated for each participant and each modality. Visual enhancement was then calculated in order to estimate the potential benefit of seeing the speaker's articulatory gestures. Results: In both the auditory-only and audiovisual modalities, PWA performed significantly less well than controls, who had 100% correct performance in both modalities. The performance of PWA was correlated with their phonological ability. Six PWA used the visual cues. Group-level analysis of PWA did not show any reliable difference between the auditory-only and audiovisual modalities (median visual enhancement = 7% [Q1-Q3: -5 to 39]). Conclusion: Our findings show that speech segmentation disorder may exist in PWA. This points to the importance of assessing and training speech segmentation after stroke. Further studies should investigate the characteristics of PWA who use visual speech cues during sentence processing.
Affiliation(s)
- Anahita Basirat
- UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, Univ. Lille, CNRS, CHU Lille, Lille, France
- Étienne Allart
- Neurorehabilitation Unit, Lille University Medical Center, Lille, France
- Inserm U1171, Degenerative and Vascular Cognitive Disorders, University Lille, Lille, France
- Angèle Brunellière
- UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, Univ. Lille, CNRS, CHU Lille, Lille, France
16
Shahin AJ. Neural evidence accounting for interindividual variability of the McGurk illusion. Neurosci Lett 2019; 707:134322. PMID: 31181299. DOI: 10.1016/j.neulet.2019.134322.
Abstract
The McGurk illusion is experienced to various degrees among the general population. Previous studies have implicated the left superior temporal sulcus (STS) and auditory cortex (AC) as regions associated with this interindividual variability. We sought to further investigate the neurophysiology underlying this variability using a variant of the McGurk illusion design. Electroencephalography (EEG) was recorded while human subjects were presented with videos of a speaker uttering the consonant-vowels (CVs) /ba/ and /fa/, which were mixed and matched with audio of /ba/ and /fa/ to produce congruent and incongruent conditions. Subjects were also presented with unimodal stimuli of silent videos and audios of the CVs. They responded to whether they heard (or saw, in the silent condition) /ba/ or /fa/. An illusion during the incongruent conditions was deemed successful when individuals heard the syllable conveyed by the mouth movements. We hypothesized that individuals who experience the illusion more strongly should exhibit more robust desynchronization of alpha (7-12 Hz) at fronto-central and temporal sites, reflecting greater engagement of neural generators in the AC and STS. We found, however, that compared to weaker illusion perceivers, stronger illusion perceivers exhibited greater alpha synchronization at fronto-central and posterior temporal sites, which is consistent with inhibition of auditory representations. These findings suggest that stronger McGurk illusion perceivers possess more robust cross-modal sensory gating mechanisms whereby phonetic representations not conveyed by the visual system are inhibited, in turn reinforcing perception of the visually targeted phonemes.
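For readers unfamiliar with the measure, alpha (de)synchronization is typically quantified as a baseline-normalized change in band-limited power. A generic sketch follows, assuming a band-pass filter plus Hilbert-envelope pipeline and an arbitrary pre-stimulus baseline window; this is a common approach, not necessarily the author's exact method, and the signal here is synthetic.

```python
# Generic alpha-band (7-12 Hz) event-related synchronization/desynchronization:
# filter, extract the power envelope, then express post-stimulus power as a
# change relative to a pre-stimulus baseline.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500                                   # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)              # 2 s epoch; stimulus assumed at 0.5 s
eeg = np.random.default_rng(1).normal(0, 1, t.size)  # placeholder EEG trace

b, a = butter(4, [7 / (fs / 2), 12 / (fs / 2)], btype="bandpass")
alpha = filtfilt(b, a, eeg)                # zero-phase band-pass filter
power = np.abs(hilbert(alpha)) ** 2        # instantaneous alpha power

baseline = power[t < 0.5].mean()           # pre-stimulus baseline (assumed window)
ers = (power[t >= 0.5].mean() - baseline) / baseline
print(f"alpha ERS/ERD: {ers:+.2%}")        # >0 synchronization, <0 desynchronization
```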
Affiliation(s)
- Antoine J Shahin
- Department of Cognitive and Information Sciences, University of California, Merced, CA 95343, United States, and Center for Mind and Brain, University of California, Davis, CA 95618, United States
17
Politzer-Ahles S, Pan L. Skilled musicians are indeed subject to the McGurk effect. R Soc Open Sci 2019; 6:181868. PMID: 31183122; PMCID: PMC6502376. DOI: 10.1098/rsos.181868.
Abstract
The McGurk effect is an illusion whereby speech sounds are often mis-categorized when the auditory cues in the stimulus conflict with the visual cues from the speaker's face. A recent study claims that 'skilled musicians are not subject to' this effect. It is not clear, however, if this is intended to mean that skilled musicians do not experience the McGurk effect at all, or if they just experience it to a lesser magnitude than non-musicians. The study also does not statistically demonstrate either of these conclusions, as it does report a numerical (albeit non-significant) McGurk effect for musicians and does not report a significant difference between musicians' and non-musicians' McGurk effect sizes. This article reports a pre-registered, higher-power replication of that study (using twice the sample size and changing from a between- to a within-participants manipulation). Contrary to the original study's conclusion, we find that musicians do show a large and statistically significant McGurk effect and that their effect is no smaller than that of non-musicians.
18
Rennig J, Beauchamp MS. Free viewing of talking faces reveals mouth and eye preferring regions of the human superior temporal sulcus. Neuroimage 2018; 183:25-36. PMID: 30092347; PMCID: PMC6214361. DOI: 10.1016/j.neuroimage.2018.08.008.
Abstract
During face-to-face communication, the mouth of the talker is informative about speech content, while the eyes of the talker convey other information, such as gaze location. Viewers most often fixate either the mouth or the eyes of the talker's face, presumably allowing them to sample these different sources of information. To study the neural correlates of this process, healthy humans freely viewed talking faces while brain activity was measured with BOLD fMRI and eye movements were recorded with a video-based eye tracker. Post hoc trial sorting was used to divide the data into trials in which participants fixated the mouth of the talker and trials in which they fixated the eyes. Although the audiovisual stimulus was identical, the two trial types evoked differing responses in subregions of the posterior superior temporal sulcus (pSTS). The anterior pSTS preferred trials in which participants fixated the mouth of the talker, while the posterior pSTS preferred fixations on the eyes of the talker. A second fMRI experiment demonstrated that anterior pSTS mouth-preferring regions responded more strongly to auditory and audiovisual speech than posterior pSTS eye-preferring regions. These results provide evidence for functional specialization within the pSTS under more realistic viewing and stimulus conditions than in previous neuroimaging studies.
Affiliation(s)
- Johannes Rennig
- Department of Neurosurgery and Core for Advanced MRI, Baylor College of Medicine, Houston, TX, USA
- Michael S Beauchamp
- Department of Neurosurgery and Core for Advanced MRI, Baylor College of Medicine, Houston, TX, USA
19
Brown VA, Hedayati M, Zanger A, Mayn S, Ray L, Dillman-Hasso N, Strand JF. What accounts for individual differences in susceptibility to the McGurk effect? PLoS One 2018; 13:e0207160. PMID: 30418995; PMCID: PMC6231656. DOI: 10.1371/journal.pone.0207160.
Abstract
The McGurk effect is a classic audiovisual speech illusion in which discrepant auditory and visual syllables can lead to a fused percept (e.g., an auditory /bɑ/ paired with a visual /gɑ/ often leads to the perception of /dɑ/). The McGurk effect is robust and easily replicated in pooled group data, but there is tremendous variability in the extent to which individual participants are susceptible to it. In some studies, the rate at which individuals report fusion responses ranges from 0% to 100%. Despite its widespread use in the audiovisual speech perception literature, the roots of the wide variability in McGurk susceptibility are largely unknown. This study evaluated whether several perceptual and cognitive traits are related to McGurk susceptibility through correlational analyses and mixed effects modeling. We found that an individual's susceptibility to the McGurk effect was related to their ability to extract place of articulation information from the visual signal (i.e., a more fine-grained analysis of lipreading ability), but not to scores on tasks measuring attentional control, processing speed, working memory capacity, or auditory perceptual gradiency. These results provide support for the claim that a small amount of the variability in susceptibility to the McGurk effect is attributable to lipreading skill. In contrast, cognitive and perceptual abilities that are commonly used predictors in individual differences studies do not appear to underlie susceptibility to the McGurk effect.
Affiliation(s)
- Violet A. Brown
- Department of Psychology, Carleton College, Northfield, Minnesota, United States of America
- Maryam Hedayati
- Department of Psychology, Carleton College, Northfield, Minnesota, United States of America
- Annie Zanger
- Department of Psychology, Carleton College, Northfield, Minnesota, United States of America
- Sasha Mayn
- Department of Psychology, Carleton College, Northfield, Minnesota, United States of America
- Lucia Ray
- Department of Psychology, Carleton College, Northfield, Minnesota, United States of America
- Naseem Dillman-Hasso
- Department of Psychology, Carleton College, Northfield, Minnesota, United States of America
- Julia F. Strand
- Department of Psychology, Carleton College, Northfield, Minnesota, United States of America