1
Bolam J, Diaz JA, Andrews M, Coats RO, Philiastides MG, Astill SL, Delis I. A drift diffusion model analysis of age-related impact on multisensory decision-making processes. Sci Rep 2024; 14:14895. [PMID: 38942761] [PMCID: PMC11213863] [DOI: 10.1038/s41598-024-65549-5]
Abstract
Older adults (OAs) are typically slower and/or less accurate in forming perceptual choices relative to younger adults. Despite perceptual deficits, OAs gain from integrating information across senses, yielding multisensory benefits. However, the cognitive processes underlying these seemingly discrepant ageing effects remain unclear. To address this knowledge gap, 212 participants (18-90 years old) performed an online object categorisation paradigm, whereby age-related differences in Reaction Times (RTs) and choice accuracy between audiovisual (AV), visual (V), and auditory (A) conditions could be assessed. Whereas OAs were slower and less accurate across sensory conditions, they exhibited greater RT decreases between AV and V conditions, showing a larger multisensory benefit towards decisional speed. Hierarchical Drift Diffusion Modelling (HDDM) was fitted to participants' behaviour to probe age-related impacts on the latent multisensory decision formation processes. For OAs, HDDM demonstrated slower evidence accumulation rates across sensory conditions coupled with increased response caution for AV trials of higher difficulty. Notably, for trials of lower difficulty we found multisensory benefits in evidence accumulation that increased with age, but not for trials of higher difficulty, in which increased response caution was instead evident. Together, our findings reconcile age-related impacts on multisensory decision-making, indicating greater multisensory evidence accumulation benefits with age underlying enhanced decisional speed.
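The drift diffusion framework behind the HDDM analysis can be illustrated with a minimal simulation. This is an illustrative sketch, not the authors' fitting procedure: the parameter values below are arbitrary, chosen only to show how a higher drift rate (faster evidence accumulation, as reported for multisensory trials) yields faster and more accurate choices, while a wider decision boundary (greater response caution) trades speed for accuracy.

```python
import numpy as np

def simulate_ddm(drift, boundary, n_trials=2000, ndt=0.3, dt=0.002, seed=0):
    """Simulate a basic two-boundary drift diffusion model.

    Evidence starts midway between the boundaries and accumulates with
    rate `drift` plus unit Gaussian noise until it reaches 0 (error)
    or `boundary` (correct). `ndt` is non-decision time in seconds.
    Returns (mean_rt_seconds, proportion_correct).
    """
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = boundary / 2.0, 0.0
        while 0.0 < x < boundary:
            x += drift * dt + np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + ndt)
        correct.append(x >= boundary)
    return float(np.mean(rts)), float(np.mean(correct))

# Arbitrary illustrative parameters: a higher drift rate mimics the
# reported multisensory (AV) benefit; a wider boundary mimics the
# increased response caution seen on difficult AV trials.
rt_av, acc_av = simulate_ddm(drift=2.0, boundary=1.0)      # fast accumulation
rt_v, acc_v = simulate_ddm(drift=0.4, boundary=1.0)        # slow accumulation
rt_caut, acc_caut = simulate_ddm(drift=0.4, boundary=2.0)  # cautious responding
```

With these settings, the high-drift condition is both faster and more accurate, while widening the boundary slows responding but raises accuracy, which is the speed-accuracy trade-off the model formalizes.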
Affiliation(s)
- Joshua Bolam
- School of Biomedical Sciences, University of Leeds, West Yorkshire, LS2 9JT, UK.
- Institute of Neuroscience, Trinity College Dublin, Dublin, D02 PX31, Ireland.
- Jessica A Diaz
- School of Biomedical Sciences, University of Leeds, West Yorkshire, LS2 9JT, UK
- School of Social Sciences, Birmingham City University, West Midlands, B15 3HE, UK
- Mark Andrews
- School of Social Sciences, Nottingham Trent University, Nottinghamshire, NG1 4FQ, UK
- Rachel O Coats
- School of Psychology, University of Leeds, West Yorkshire, LS2 9JT, UK
- Marios G Philiastides
- School of Neuroscience and Psychology, University of Glasgow, Lanarkshire, G12 8QB, UK
- Sarah L Astill
- School of Biomedical Sciences, University of Leeds, West Yorkshire, LS2 9JT, UK
- Ioannis Delis
- School of Biomedical Sciences, University of Leeds, West Yorkshire, LS2 9JT, UK.
2
Pepper JL, Nuttall HE. Age-Related Changes to Multisensory Integration and Audiovisual Speech Perception. Brain Sci 2023; 13:1126. [PMID: 37626483] [PMCID: PMC10452685] [DOI: 10.3390/brainsci13081126]
Abstract
Multisensory integration is essential for the quick and accurate perception of our environment, particularly in everyday tasks like speech perception. Research has highlighted the importance of investigating bottom-up and top-down contributions to multisensory integration and how these change as a function of ageing. Specifically, perceptual factors like the temporal binding window and cognitive factors like attention and inhibition appear to be fundamental in the integration of visual and auditory information; this integration may become less efficient as we age. These factors have been linked to brain areas like the superior temporal sulcus, with neural oscillations in the alpha-band frequency also being implicated in multisensory processing. Age-related changes in multisensory integration may have significant consequences for the well-being of our increasingly ageing population, affecting their ability to communicate with others and safely move through their environment; it is crucial that the evidence surrounding this subject continues to be carefully investigated. This review will discuss research into age-related changes in the perceptual and cognitive mechanisms of multisensory integration and the impact that these changes have on speech perception and fall risk. The role of oscillatory alpha activity is of particular interest, as it may be key in the modulation of multisensory integration.
Affiliation(s)
- Helen E. Nuttall
- Department of Psychology, Lancaster University, Bailrigg LA1 4YF, UK
3
Yang W, Yang X, Guo A, Li S, Li Z, Lin J, Ren Y, Yang J, Wu J, Zhang Z. Audiovisual integration of the dynamic hand-held tool at different stimulus intensities in aging. Front Hum Neurosci 2022; 16:968987. [PMID: 36590067] [PMCID: PMC9794578] [DOI: 10.3389/fnhum.2022.968987]
Abstract
Introduction: In comparison to the audiovisual integration of younger adults, the same process appears more complex and unstable in older adults. Previous research has found that stimulus intensity is one of the most important factors influencing audiovisual integration. Methods: The present study compared differences in audiovisual integration between older and younger adults using dynamic hand-held tool stimuli, such as a hammer being swung to strike the floor. In addition, the effects of stimulus intensity on audiovisual integration were compared. The intensity of the visual and auditory stimuli was regulated by modulating the contrast level and the sound pressure level. Results: Behavioral results showed that both older and younger adults responded faster and with higher hit rates to audiovisual stimuli than to visual and auditory stimuli. Event-related potentials (ERPs) further revealed that during the early stage of 60-100 ms, in the low-intensity condition, audiovisual integration in the anterior brain region was greater in older adults than in younger adults; however, in the high-intensity condition, audiovisual integration in the right hemisphere was greater in younger adults than in older adults. Moreover, in older adults, audiovisual integration was greater in the low-intensity condition than in the high-intensity condition during the 60-100 ms, 120-160 ms, and 220-260 ms periods, showing inverse effectiveness. However, there was no difference in the audiovisual integration of younger adults across intensity conditions. Discussion: The results suggest an age-related dissociation between the high- and low-intensity conditions in audiovisual integration of the dynamic hand-held tool stimulus. Older adults showed greater audiovisual integration in the lower-intensity condition, which may be due to the activation of compensatory mechanisms.
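The multisensory benefit and the inverse-effectiveness pattern described above are commonly quantified with a multisensory response enhancement index, which compares the audiovisual response against the best unisensory response. A small sketch with hypothetical hit rates (the numbers are invented for illustration, not taken from the study):

```python
def multisensory_enhancement(av, a, v):
    """Percent gain of the audiovisual hit rate over the best
    unisensory hit rate: 100 * (AV - max(A, V)) / max(A, V)."""
    best_unisensory = max(a, v)
    return 100.0 * (av - best_unisensory) / best_unisensory

# Hypothetical hit rates illustrating inverse effectiveness:
# the weaker (low-intensity) stimuli yield the larger relative gain.
gain_low = multisensory_enhancement(av=0.72, a=0.55, v=0.60)   # low intensity
gain_high = multisensory_enhancement(av=0.97, a=0.92, v=0.94)  # high intensity
```

Here the low-intensity condition yields a 20% enhancement versus roughly 3% at high intensity, the signature of inverse effectiveness.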
Affiliation(s)
- Weiping Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China; Brain and Cognition Research Center (BCRC), Faculty of Education, Hubei University, Wuhan, China
- Xiangfu Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Ao Guo
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Shengnan Li
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Zimo Li
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Jinfei Lin
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Yanna Ren
- Department of Psychology, College of Humanities and Management, Guizhou University of Traditional Chinese Medicine, Guiyang, China
- Jiajia Yang
- Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Jinglong Wu
- Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan; Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Zhilin Zhang
- Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
4
Begau A, Arnau S, Klatt LI, Wascher E, Getzmann S. Using visual speech at the cocktail-party: CNV evidence for early speech extraction in younger and older adults. Hear Res 2022; 426:108636. [DOI: 10.1016/j.heares.2022.108636]
5
Yang W, Guo A, Yao H, Yang X, Li Z, Li S, Chen J, Ren Y, Yang J, Wu J, Zhang Z. Effect of aging on audiovisual integration: Comparison of high- and low-intensity conditions in a speech discrimination task. Front Aging Neurosci 2022; 14:1010060. [DOI: 10.3389/fnagi.2022.1010060]
Abstract
Audiovisual integration is an essential process that influences speech perception in conversation. However, it is still debated whether older individuals benefit more from audiovisual integration than younger individuals. This ambiguity is likely due to stimulus features, such as stimulus intensity. The purpose of the current study was to explore the effect of aging on audiovisual integration, using event-related potentials (ERPs) at different stimulus intensities. The results showed greater audiovisual integration in older adults at 320–360 ms. Conversely, at 460–500 ms, older adults displayed attenuated audiovisual integration in the frontal, fronto-central, central, and centro-parietal regions compared to younger adults. In addition, we found older adults had greater audiovisual integration at 200–230 ms under the low-intensity condition compared to the high-intensity condition, suggesting inverse effectiveness occurred. However, inverse effectiveness was not found in younger adults. Taken together, the results suggested that there was age-related dissociation in audiovisual integration and inverse effectiveness, indicating that the neural mechanisms underlying audiovisual integration differed between older adults and younger adults.
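In ERP studies such as this one, audiovisual integration is typically indexed with the additive model: the response to the audiovisual stimulus is compared against the sum of the unisensory responses (AV vs. A + V), and the difference wave is summarised as a mean amplitude within time windows. A self-contained sketch on synthetic waveforms (the sampling rate, window bounds, and the super-additive bump are all invented for illustration):

```python
import numpy as np

fs = 500                              # assumed sampling rate, Hz
t = np.arange(-0.1, 0.8, 1.0 / fs)    # epoch: -100 ms to 800 ms

def window_mean(wave, t, start, end):
    """Mean amplitude (e.g. in microvolts) within [start, end] seconds."""
    mask = (t >= start) & (t <= end)
    return float(wave[mask].mean())

# Synthetic grand-average waveforms for one electrode (arbitrary shapes).
erp_a = np.sin(2 * np.pi * 3 * t)
erp_v = 0.5 * np.sin(2 * np.pi * 4 * t)
# Build an AV response equal to A + V plus a super-additive bump near
# 340 ms, so the additive-model difference wave isolates the bump.
bump = np.exp(-0.5 * ((t - 0.34) / 0.02) ** 2)
erp_av = erp_a + erp_v + bump

# Additive model: integration = AV - (A + V), summarised per window.
diff_wave = erp_av - (erp_a + erp_v)
integration_early = window_mean(diff_wave, t, 0.320, 0.360)  # contains bump
integration_late = window_mean(diff_wave, t, 0.600, 0.700)   # baseline
```

The difference wave is near zero wherever AV responses are simply additive, and deviates from zero only in windows, like 320-360 ms here, where integration effects occur.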
6
Schneider BA, Rabaglia C, Avivi-Reich M, Krieger D, Arnott SR, Alain C. Age-Related Differences in Early Cortical Representations of Target Speech Masked by Either Steady-State Noise or Competing Speech. Front Psychol 2022; 13:935475. [PMID: 35992450] [PMCID: PMC9389464] [DOI: 10.3389/fpsyg.2022.935475]
Abstract
Word-in-noise identification is facilitated by acoustic differences between target and competing sounds and by temporal separation between the onset of the masker and that of the target. Younger and older adults are able to take advantage of onset delay when the masker is dissimilar (Noise) to the target word, but only younger adults are able to do so when the masker is similar (Babble). We examined the neural underpinnings of this age difference using cortical evoked responses to words masked by either Babble or Noise, with the masker preceding the target word by 100 or 600 ms, in younger and older adults, after adjusting the signal-to-noise ratios (SNRs) to equate behavioural performance across age groups and conditions. For the 100 ms onset delay, the word in noise elicited an acoustic change complex (ACC) response that was comparable in younger and older adults. For the 600 ms onset delay, the ACC was modulated by both masker type and age. In older adults, the ACC to a word in babble was not affected by the increase in onset delay, whereas younger adults showed a benefit from longer delays. Hence, the age difference in sensitivity to temporal delay is indexed by early activity in the auditory cortex. These results are consistent with the hypothesis that an increase in onset delay improves stream segregation in younger adults in both noise and babble, but only in noise for older adults, and that this change in stream segregation is evident in early cortical processes.
Affiliation(s)
- Bruce A. Schneider
- Department of Psychology, Human Communication Laboratory, University of Toronto Mississauga, Mississauga, ON, Canada
- Cristina Rabaglia
- Department of Psychology, Human Communication Laboratory, University of Toronto Mississauga, Mississauga, ON, Canada
- Meital Avivi-Reich
- Department of Psychology, Human Communication Laboratory, University of Toronto Mississauga, Mississauga, ON, Canada
- Department of Communication Arts, Sciences, and Disorders, Brooklyn College, City University of New York, Brooklyn, NY, United States
- Dena Krieger
- Department of Psychology, Human Communication Laboratory, University of Toronto Mississauga, Mississauga, ON, Canada
- Claude Alain
- Rotman Research Institute, Baycrest Centre, Toronto, ON, Canada
- Department of Psychology, St. George Campus, University of Toronto, Toronto, ON, Canada
7
Wilms V, Drijvers L, Brouwer S. The Effects of Iconic Gestures and Babble Language on Word Intelligibility in Sentence Context. J Speech Lang Hear Res 2022; 65:1822-1838. [PMID: 35439423] [DOI: 10.1044/2022_jslhr-21-00387]
Abstract
PURPOSE This study investigated to what extent iconic co-speech gestures help word intelligibility in sentence context in two different linguistic maskers (native vs. foreign). It was hypothesized that sentence recognition improves with the presence of iconic co-speech gestures and with foreign compared to native babble. METHOD Thirty-two native Dutch participants performed a Dutch word recognition task in context, in which they were presented with videos in which an actress uttered short Dutch sentences (e.g., Ze begint te openen, "She starts to open"). Participants were presented with a total of six audiovisual conditions: no background noise (i.e., clear condition) without gesture, no background noise with gesture, French babble without gesture, French babble with gesture, Dutch babble without gesture, and Dutch babble with gesture; they were asked to type out what was said by the Dutch actress. Accurate identification of the action verbs at the end of the target sentences was measured. RESULTS Performance on the task was better in the gesture than in the nongesture conditions (i.e., a gesture enhancement effect). In addition, performance was better in French babble than in Dutch babble. CONCLUSIONS Listeners benefit from iconic co-speech gestures during communication, and from foreign background speech compared to native. These insights into multimodal communication may be valuable to everyone who engages in multimodal communication, especially those who often work in places where competing speech is present in the background.
Affiliation(s)
- Veerle Wilms
- Centre for Language Studies, Radboud University, Nijmegen, the Netherlands
- Linda Drijvers
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Susanne Brouwer
- Centre for Language Studies, Radboud University, Nijmegen, the Netherlands
8
Avivi-Reich M, Sran RK, Schneider BA. Do Age and Linguistic Status Alter the Effect of Sound Source Diffuseness on Speech Recognition in Noise? Front Psychol 2022; 13:838576. [PMID: 35369266] [PMCID: PMC8965325] [DOI: 10.3389/fpsyg.2022.838576]
Abstract
One aspect of auditory scenes that has received very little attention is the level of diffuseness of sound sources. This aspect is of increasing importance given the growing use of amplification systems. When an auditory stimulus is amplified and presented over multiple, spatially separated loudspeakers, the signal's timbre is altered due to comb filtering. In a previous study we examined how increasing the diffuseness of the sound sources might affect listeners' ability to recognize speech presented in different types of background noise. Listeners performed similarly when the target and the masker were presented over a similar number of loudspeakers. However, performance improved when the target was presented over a single loudspeaker (compact) and the masker over three spatially separated loudspeakers (diffuse), and worsened when the target was diffuse and the masker was compact. In the current study, we extended our research to examine whether the effect of timbre changes with age and linguistic experience. Twenty-four older adults whose first language was English (Old-EFLs) and 24 younger adults whose second language was English (Young-ESLs) were asked to repeat nonsense sentences masked by either Noise, Babble, or Speech, and their results were compared with those of the Young-EFLs previously tested. Participants were divided into two experimental groups: (1) a Compact-Target group, where the target sentences were presented over a single loudspeaker while the masker was presented over either three loudspeakers or a single loudspeaker; (2) a Diffuse-Target group, where the target sentences were diffuse while the masker was either compact or diffuse. The results indicate that target timbre has a negligible effect on thresholds when the timbre of the target matches the timbre of the masker in all three groups. When there is a timbre contrast between target and masker, thresholds are significantly lower when the target is compact than when it is diffuse for all three listening groups in a Noise background. However, while this difference is maintained for the Young- and Old-EFLs when the masker is Babble or Speech, speech reception thresholds in the Young-ESL group tend to be equivalent for all four combinations of target and masker timbre.
Affiliation(s)
- Meital Avivi-Reich
- Department of Communication Arts, Sciences and Disorders, Brooklyn College, City University of New York, Brooklyn, NY, United States
- Rupinder Kaur Sran
- Human Communication Lab, Department of Psychology, University of Toronto Mississauga, Toronto, ON, Canada
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Bruce A. Schneider
- Human Communication Lab, Department of Psychology, University of Toronto Mississauga, Toronto, ON, Canada
9
Bilinguals Show Proportionally Greater Benefit From Visual Speech Cues and Sentence Context in Their Second Compared to Their First Language. Ear Hear 2021; 43:1316-1326. [PMID: 34966162] [DOI: 10.1097/aud.0000000000001182]
Abstract
OBJECTIVES Speech perception in noise is challenging, but evidence suggests that it may be facilitated by visual speech cues (e.g., lip movements) and supportive sentence context in native speakers. Comparatively few studies have investigated speech perception in noise in bilinguals, and little is known about the impact of visual speech cues and supportive sentence context in a first language compared to a second language within the same individual. The current study addresses this gap by directly investigating the extent to which bilinguals benefit from visual speech cues and supportive sentence context under similarly noisy conditions in their first and second language. DESIGN Thirty young adult English-French/French-English bilinguals were recruited from the undergraduate psychology program at Concordia University and from the Montreal community. They completed a speech perception in noise task during which they were presented with video-recorded sentences and instructed to repeat the last word of each sentence out loud. Sentences were presented in three different modalities: visual-only, auditory-only, and audiovisual. Additionally, sentences had one of two levels of context: moderate (e.g., "In the woods, the hiker saw a bear.") and low (e.g., "I had not thought about that bear."). Each participant completed this task in both their first and second language; crucially, the level of background noise was calibrated individually for each participant and was the same throughout the first language and second language (L2) portions of the experimental task. RESULTS Overall, speech perception in noise was more accurate in bilinguals' first language compared to the second. However, participants benefited from visual speech cues and supportive sentence context to a proportionally greater extent in their second language compared to their first. 
At the individual level, performance during the speech perception in noise task was related to aspects of bilinguals' experience in their second language (i.e., age of acquisition, relative balance between the first and the second language). CONCLUSIONS Bilinguals benefit from visual speech cues and sentence context in their second language during speech in noise and do so to a greater extent than in their first language given the same level of background noise. Together, this indicates that L2 speech perception can be conceptualized within an inverse effectiveness hypothesis framework with a complex interplay of sensory factors (i.e., the quality of the auditory speech signal and visual speech cues) and linguistic factors (i.e., presence or absence of supportive context and L2 experience of the listener).
10
Gijbels L, Yeatman JD, Lalonde K, Lee AKC. Audiovisual Speech Processing in Relationship to Phonological and Vocabulary Skills in First Graders. J Speech Lang Hear Res 2021; 64:5022-5040. [PMID: 34735292] [PMCID: PMC9150669] [DOI: 10.1044/2021_jslhr-21-00196]
Abstract
PURPOSE It is generally accepted that adults use visual cues to improve speech intelligibility in noisy environments, but findings regarding visual speech benefit in children are mixed. We explored factors that contribute to audiovisual (AV) gain in young children's speech understanding. We examined whether there is an AV benefit to speech-in-noise recognition in children in first grade and whether the visual salience of phonemes influences their AV benefit. We also explored whether individual differences in AV speech enhancement could be explained by vocabulary knowledge, phonological awareness, or general psychophysical testing performance. METHOD Thirty-seven first graders completed online psychophysical experiments. We used an online single-interval, four-alternative forced-choice picture-pointing task with age-appropriate consonant-vowel-consonant words to measure auditory-only, visual-only, and AV word recognition in noise at -2 and -8 dB SNR. We obtained standard measures of vocabulary and phonological awareness and included a general psychophysical test to examine correlations with AV benefits. RESULTS We observed a significant overall AV gain among children in first grade. This effect was mainly attributable to the benefit at -8 dB SNR for visually distinct targets. Individual differences were not explained by any of the child variables. Boys showed lower auditory-only performance, leading to significantly larger AV gains. CONCLUSIONS This study shows an AV benefit of distinctive visual cues to word recognition in challenging noisy conditions in first graders. The cognitive and linguistic constraints of the task may have minimized the impact of individual differences in vocabulary and phonological awareness on AV benefit. The gender difference should be studied in a larger sample and age range.
Affiliation(s)
- Liesbeth Gijbels
- Department of Speech & Hearing Sciences, University of Washington, Seattle
- Institute for Learning & Brain Sciences, University of Washington, Seattle
- Jason D. Yeatman
- Division of Developmental-Behavioral Pediatrics, School of Medicine, Stanford University, CA
- Graduate School of Education, Stanford University, CA
- Kaylah Lalonde
- Boys Town National Research Hospital, Center for Hearing Research, Omaha, NE
- Adrian K. C. Lee
- Department of Speech & Hearing Sciences, University of Washington, Seattle
- Institute for Learning & Brain Sciences, University of Washington, Seattle
| |
11
Myerson J, Tye-Murray N, Spehar B, Hale S, Sommers M. Predicting Audiovisual Word Recognition in Noisy Situations: Toward Precision Audiology. Ear Hear 2021; 42:1656-1667. [PMID: 34320527] [PMCID: PMC8545708] [DOI: 10.1097/aud.0000000000001072]
Abstract
OBJECTIVE Spoken communication is better when one can see as well as hear the talker. Tye-Murray and colleagues found that even when age-related deficits in audiovisual (AV) speech perception were observed, AV performance could be accurately predicted from auditory-only (A-only) and visual-only (V-only) performance, and that knowing individuals' ages did not increase the accuracy of prediction. This finding contradicts conventional wisdom, according to which age-related differences in AV speech perception are due to deficits in the integration of auditory and visual information. Our primary goal was to determine whether Tye-Murray et al.'s finding with a closed-set test generalizes to situations more like those in everyday life; a second goal was to test a new predictive model that has important implications for audiological assessment. DESIGN Participants (N = 109; ages 22-93 years), previously studied by Tye-Murray et al., were administered our new, open-set Lex-List test to assess their auditory, visual, and audiovisual perception of individual words. All testing was conducted in six-talker babble (three males and three females) presented at approximately 62 dB SPL. The audio for the Lex-List items, when presented, was approximately 59 dB SPL because pilot testing suggested that this signal-to-noise ratio would avoid ceiling performance under the AV condition. RESULTS Multiple linear regression analyses revealed that A-only and V-only performance accounted for 87.9% of the variance in AV speech perception, and that the contribution of age failed to reach significance. Our new parabolic model accounted for even more (92.8%) of the variance in AV performance, and again, the contribution of age was not significant. Bayesian analyses revealed that for both the linear and the parabolic models, the present data were almost 10 times as likely to occur with a reduced model (without age) than with a full model (with age as a predictor). Furthermore, comparison of the two reduced models revealed that the data were more than 100 times as likely to occur with the parabolic model than with the linear regression model. CONCLUSIONS The present results strongly support Tye-Murray et al.'s hypothesis that AV performance can be accurately predicted from unimodal performance and that knowing individuals' ages does not increase the accuracy of that prediction. Our results represent an important initial step in extending Tye-Murray et al.'s findings to situations more like those encountered in everyday communication. The accuracy with which speech perception was predicted in this study foreshadows a form of precision audiology in which determining individual strengths and weaknesses in unimodal and multimodal speech perception facilitates the identification of targets for rehabilitative efforts aimed at recovering and maintaining the speech perception abilities critical to the quality of an older adult's life.
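The prediction setup can be sketched with ordinary least squares on synthetic data. This is not the authors' model: the generating weights, the scores, and the quadratic "parabolic-style" feature set below are all assumptions for illustration (the abstract does not specify the parabolic model's exact form), but the sketch shows how unimodal scores can be used to predict AV performance and how a curved model can outperform a purely linear one.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 109  # same nominal sample size as the study; all scores are synthetic

# Hypothetical unimodal word-recognition accuracies (proportion correct).
a_only = rng.uniform(0.2, 0.9, n)
v_only = rng.uniform(0.1, 0.6, n)
# Synthetic AV accuracy generated from the unimodal scores plus noise;
# the weights are arbitrary, not estimates from the paper.
av = np.clip(0.15 + 0.7 * a_only + 0.5 * v_only
             - 0.3 * a_only * v_only + rng.normal(0, 0.03, n), 0.0, 1.0)

def r_squared(X, y):
    """Ordinary least squares fit of y on X; returns R^2."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return float(1.0 - resid.var() / y.var())

ones = np.ones(n)
linear = np.column_stack([ones, a_only, v_only])
# One plausible reading of a "parabolic" extension: add curvature and
# interaction terms to the unimodal predictors.
parabolic = np.column_stack([ones, a_only, v_only,
                             a_only ** 2, v_only ** 2, a_only * v_only])

r2_linear = r_squared(linear, av)
r2_parabolic = r_squared(parabolic, av)
```

Because the parabolic design matrix nests the linear one, its R-squared can never be lower; the interesting question, as in the paper's Bayesian comparison, is whether the improvement justifies the extra parameters.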
Affiliation(s)
- Joel Myerson
- Department of Psychological and Brain Sciences, Washington University, Saint Louis, Missouri, U.S.A
- Nancy Tye-Murray
- Department of Otolaryngology, Washington University School of Medicine, Saint Louis, Missouri, U.S.A
- Brent Spehar
- Department of Otolaryngology, Washington University School of Medicine, Saint Louis, Missouri, U.S.A
- Sandra Hale
- Department of Psychological and Brain Sciences, Washington University, Saint Louis, Missouri, U.S.A
- Mitchell Sommers
- Department of Psychological and Brain Sciences, Washington University, Saint Louis, Missouri, U.S.A
12
Tremblay P, Basirat A, Pinto S, Sato M. Visual prediction cues can facilitate behavioural and neural speech processing in young and older adults. Neuropsychologia 2021; 159:107949. [PMID: 34228997] [DOI: 10.1016/j.neuropsychologia.2021.107949]
Abstract
The ability to process speech evolves over the course of the lifespan. Understanding speech at low acoustic intensity and in the presence of background noise becomes harder, and the ability of older adults to benefit from audiovisual speech also appears to decline. These difficulties can have important consequences for quality of life, yet a consensus on their cause is still lacking. The objective of this study was to examine the processing of speech in young and older adults under different modalities (i.e., auditory [A], visual [V], audiovisual [AV]) and in the presence of different visual prediction cues (i.e., no predictive cue (control), temporal predictive cue, phonetic predictive cue, and combined temporal and phonetic predictive cues). We focused on recognition accuracy and four auditory evoked potential (AEP) components: P1-N1-P2 and N2. Thirty-four right-handed French-speaking adults were recruited, including 17 younger adults (28 ± 2 years; 20-42 years) and 17 older adults (67 ± 3.77 years; 60-73 years). Participants completed a forced-choice speech identification task. The main findings of the study are: (1) the facilitatory effect of visual information was reduced, but present, in older compared to younger adults; (2) visual predictive cues facilitated speech recognition in younger and older adults alike; (3) age differences in AEPs were localized to later components (P2 and N2), suggesting that aging predominantly affects higher-order cortical processes related to speech processing rather than lower-level auditory processes; (4) specifically, AV facilitation on P2 amplitude was lower in older adults, there was a reduced effect of the temporal predictive cue on N2 amplitude for older compared to younger adults, and P2 and N2 latencies were longer for older adults; and (5) behavioural performance was associated with P2 amplitude in older adults. Our results indicate that aging affects speech processing at multiple levels, including audiovisual integration (P2) and auditory attentional processes (N2). These findings have important implications for understanding barriers to communication in older age, as well as for the development of compensation strategies for those with speech processing difficulties.
Affiliation(s)
- Pascale Tremblay
- Département de Réadaptation, Faculté de Médecine, Université Laval, Quebec City, Canada; Cervo Brain Research Centre, Quebec City, Canada
- Anahita Basirat
- Univ. Lille, CNRS, UMR 9193 - SCALab - Sciences Cognitives et Sciences Affectives, Lille, France
- Serge Pinto
- Aix Marseille Univ, CNRS, LPL, Aix-en-Provence, France
- Marc Sato
- Aix Marseille Univ, CNRS, LPL, Aix-en-Provence, France

13
Schubotz L, Holler J, Drijvers L, Özyürek A. Aging and working memory modulate the ability to benefit from visible speech and iconic gestures during speech-in-noise comprehension. Psychol Res 2021; 85:1997-2011. [PMID: 32627053 PMCID: PMC8289811 DOI: 10.1007/s00426-020-01363-8]
Abstract
When comprehending speech-in-noise (SiN), younger and older adults benefit from seeing the speaker's mouth, i.e. visible speech. Younger adults additionally benefit from manual iconic co-speech gestures. Here, we investigate to what extent younger and older adults benefit from perceiving both visual articulators while comprehending SiN, and whether this is modulated by working memory and inhibitory control. Twenty-eight younger and 28 older adults performed a word recognition task in three visual contexts: mouth blurred (speech-only), visible speech, or visible speech + iconic gesture. The speech signal was either clear or embedded in multitalker babble. Additionally, there were two visual-only conditions (visible speech, visible speech + gesture). Accuracy levels for both age groups were higher when both visual articulators were present compared to either one or none. However, older adults received a significantly smaller benefit than younger adults, although they performed equally well in speech-only and visual-only word recognition. Individual differences in verbal working memory and inhibitory control partly accounted for age-related performance differences. To conclude, perceiving iconic gestures in addition to visible speech improves younger and older adults' comprehension of SiN. Yet, the ability to benefit from this additional visual information is modulated by age and verbal working memory. Future research will have to show whether these findings extend beyond the single word level.
Affiliation(s)
- Louise Schubotz
- Max Planck Institute for Psycholinguistics, P.O. Box 310, 6500 AH, Nijmegen, The Netherlands
- Judith Holler
- Max Planck Institute for Psycholinguistics, P.O. Box 310, 6500 AH, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, P.O. Box 9010, 6500 GL, Nijmegen, The Netherlands
- Linda Drijvers
- Max Planck Institute for Psycholinguistics, P.O. Box 310, 6500 AH, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, P.O. Box 9010, 6500 GL, Nijmegen, The Netherlands
- Aslı Özyürek
- Max Planck Institute for Psycholinguistics, P.O. Box 310, 6500 AH, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, P.O. Box 9010, 6500 GL, Nijmegen, The Netherlands
- Centre for Language Studies, Radboud University Nijmegen, P.O. Box 9103, 6500 HD, Nijmegen, The Netherlands

14
Chauvin A, Baum S, Phillips NA. Individuals With Mild Cognitive Impairment and Alzheimer's Disease Benefit From Audiovisual Speech Cues and Supportive Sentence Context. J Speech Lang Hear Res 2021; 64:1550-1559. [PMID: 33861623 DOI: 10.1044/2021_jslhr-20-00402]
Abstract
Purpose: Speech perception in noise becomes difficult with age but can be facilitated by audiovisual (AV) speech cues and sentence context in healthy older adults. However, individuals with Alzheimer's disease (AD) may present with deficits in AV integration, potentially limiting the extent to which they can benefit from AV cues. This study investigated the benefit of these cues in individuals with mild cognitive impairment (MCI), individuals with AD, and healthy older adult controls. Method: This study compared auditory-only and AV speech perception of sentences presented in noise. These sentences had one of two levels of context: high (e.g., "Stir your coffee with a spoon") and low (e.g., "Bob didn't think about the spoon"). Fourteen older controls (M age = 72.71 years, SD = 9.39), 13 individuals with MCI (M age = 79.92 years, SD = 5.52), and nine individuals with probable Alzheimer's-type dementia (M age = 79.38 years, SD = 3.40) completed the speech perception task and were asked to repeat the terminal word of each sentence. Results: All three groups benefited (i.e., identified more terminal words) from AV and sentence context. Individuals with MCI showed a smaller AV benefit compared to controls in low-context conditions, suggesting difficulties with AV integration. Individuals with AD showed a smaller benefit in high-context conditions compared to controls, indicating difficulties with AV integration and context use in AD. Conclusions: Individuals with MCI and individuals with AD do benefit from AV speech and semantic context during speech perception in noise (albeit to a lower extent than healthy older adults). This suggests that engaging in face-to-face communication and providing ample context will likely foster more effective communication between patients and caregivers, professionals, and loved ones.
Affiliation(s)
- Alexandre Chauvin
- Department of Psychology/Centre for Research in Human Development, Concordia University, Montréal, Québec, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montréal, Québec, Canada
- Shari Baum
- Centre for Research on Brain, Language and Music, McGill University, Montréal, Québec, Canada
- School of Communication Sciences and Disorders, Faculty of Medicine and Health Sciences, McGill University, Montréal, Québec, Canada
- Natalie A Phillips
- Department of Psychology/Centre for Research in Human Development, Concordia University, Montréal, Québec, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montréal, Québec, Canada
- Bloomfield Centre for Research in Aging, Lady Davis Institute for Medical Research, Montréal, Québec, Canada

15
Jones SA, Noppeney U. Ageing and multisensory integration: A review of the evidence, and a computational perspective. Cortex 2021; 138:1-23. [PMID: 33676086 DOI: 10.1016/j.cortex.2021.02.001]
Abstract
The processing of multisensory signals is crucial for effective interaction with the environment, but our ability to perform this vital function changes as we age. In the first part of this review, we summarise existing research into the effects of healthy ageing on multisensory integration. We note that age differences vary substantially with the paradigms and stimuli used: older adults often receive at least as much benefit (to both accuracy and response times) as younger controls from congruent multisensory stimuli, but are also consistently more negatively impacted by the presence of intersensory conflict. In the second part, we outline a normative Bayesian framework that provides a principled and computationally informed perspective on the key ingredients involved in multisensory perception, and how these are affected by ageing. Applying this framework to the existing literature, we conclude that changes to sensory reliability, prior expectations (together with attentional control), and decisional strategies all contribute to the age differences observed. However, we find no compelling evidence of any age-related changes to the basic inference mechanisms involved in multisensory perception.
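The normative Bayesian benchmark the review appeals to is standard reliability-weighted (maximum-likelihood) cue combination. The sketch below is illustrative only, not code from the review, and all names are hypothetical:

```python
def integrate(mu_a, var_a, mu_v, var_v):
    """Fuse auditory and visual estimates by inverse-variance weighting.

    Each cue contributes in proportion to its reliability (1/variance);
    the fused estimate is always at least as reliable as the better cue.
    """
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)  # auditory weight
    mu_av = w_a * mu_a + (1.0 - w_a) * mu_v            # fused mean
    var_av = 1.0 / (1.0 / var_a + 1.0 / var_v)         # fused variance shrinks
    return mu_av, var_av
```

On this account, age-related changes in sensory reliability shift the weights (and thus the fused percept) without any change to the inference mechanism itself.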
Affiliation(s)
- Samuel A Jones
- The Staffordshire Centre for Psychological Research, Staffordshire University, Stoke-on-Trent, UK
- Uta Noppeney
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands

16
Effects of stimulus intensity on audiovisual integration in aging across the temporal dynamics of processing. Int J Psychophysiol 2021; 162:95-103. [PMID: 33529642 DOI: 10.1016/j.ijpsycho.2021.01.017]
Abstract
Previous studies have drawn different conclusions about whether older adults benefit more from audiovisual integration, and such conflicts may be due to the stimulus features investigated in those studies, such as stimulus intensity. In the current study, using ERPs, we compared the effects of stimulus intensity on audiovisual integration between young adults and older adults. The results showed that inverse effectiveness, the phenomenon whereby lowering the effectiveness of sensory stimuli increases the benefits of multisensory integration, was observed in young adults at earlier processing stages but was absent in older adults. Moreover, at the earlier processing stages (60-90 ms and 110-140 ms), older adults exhibited significantly greater audiovisual integration than young adults (all ps < 0.05). However, at the later processing stages (220-250 ms and 340-370 ms), young adults exhibited significantly greater audiovisual integration than older adults (all ps < 0.001). The results suggest an age-related dissociation between early integration and late integration, indicating that different audiovisual processing mechanisms are in play in older adults and young adults.
17
Muller AM, Dalal TC, Stevenson RA. Schizotypal traits are not related to multisensory integration or audiovisual speech perception. Conscious Cogn 2020; 86:103030. [PMID: 33120291 DOI: 10.1016/j.concog.2020.103030]
Abstract
Multisensory integration, the binding of sensory information from different sensory modalities, may contribute to perceptual symptomatology in schizophrenia, including hallucinations and aberrant speech perception. Differences in multisensory integration and temporal processing, an important component of multisensory integration, are consistently found in schizophrenia. Evidence is emerging that these differences extend across the schizophrenia spectrum, including individuals in the general population with higher schizotypal traits. In the current study, we investigated the relationship between schizotypal traits and perceptual functioning, using audiovisual speech-in-noise, McGurk, and ternary synchrony judgment tasks. We measured schizotypal traits using the Schizotypal Personality Questionnaire (SPQ), hypothesizing that higher scores on Unusual Perceptual Experiences and Odd Speech subscales would be associated with decreased multisensory integration, increased susceptibility to distracting auditory speech, and less precise temporal processing. Surprisingly, these measures were not associated with the predicted subscales, suggesting that these perceptual differences may not be present across the schizophrenia spectrum.
Affiliation(s)
- Anne-Marie Muller
- Department of Psychology, University of Western Ontario, London, ON, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, Canada
- Tyler C Dalal
- Department of Psychology, University of Western Ontario, London, ON, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, Canada
- Ryan A Stevenson
- Department of Psychology, University of Western Ontario, London, ON, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, Canada

18
Michaelis K, Erickson LC, Fama ME, Skipper-Kallal LM, Xing S, Lacey EH, Anbari Z, Norato G, Rauschecker JP, Turkeltaub PE. Effects of age and left hemisphere lesions on audiovisual integration of speech. Brain Lang 2020; 206:104812. [PMID: 32447050 PMCID: PMC7379161 DOI: 10.1016/j.bandl.2020.104812]
Abstract
Neuroimaging studies have implicated left temporal lobe regions in audiovisual integration of speech and inferior parietal regions in temporal binding of incoming signals. However, it remains unclear which regions are necessary for audiovisual integration, especially when the auditory and visual signals are offset in time. Aging also influences integration, but the nature of this influence is unresolved. We used a McGurk task to test audiovisual integration and sensitivity to the timing of audiovisual signals in two older adult groups: left hemisphere stroke survivors and controls. We observed a positive relationship between age and audiovisual speech integration in both groups, and an interaction indicating that lesions reduce sensitivity to timing offsets between signals. Lesion-symptom mapping demonstrated that damage to the left supramarginal gyrus and planum temporale reduces temporal acuity in audiovisual speech perception. This suggests that a process mediated by these structures identifies asynchronous audiovisual signals that should not be integrated.
Affiliation(s)
- Kelly Michaelis
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA
- Laura C Erickson
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Neuroscience Department, Georgetown University Medical Center, Washington DC, USA
- Mackenzie E Fama
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Department of Speech-Language Pathology & Audiology, Towson University, Towson, MD, USA
- Laura M Skipper-Kallal
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA
- Shihui Xing
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Department of Neurology, First Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China
- Elizabeth H Lacey
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Research Division, MedStar National Rehabilitation Hospital, Washington DC, USA
- Zainab Anbari
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA
- Gina Norato
- Clinical Trials Unit, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD, USA
- Josef P Rauschecker
- Neuroscience Department, Georgetown University Medical Center, Washington DC, USA
- Peter E Turkeltaub
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Research Division, MedStar National Rehabilitation Hospital, Washington DC, USA

19
Age-related hearing loss influences functional connectivity of auditory cortex for the McGurk illusion. Cortex 2020; 129:266-280. [PMID: 32535378 DOI: 10.1016/j.cortex.2020.04.022]
Abstract
Age-related hearing loss affects hearing at high frequencies and is associated with difficulties in understanding speech. Increased audio-visual integration has recently been found in age-related hearing impairment, but the brain mechanisms that contribute to this effect remain unclear. We used functional magnetic resonance imaging in elderly subjects with normal hearing and with mild to moderate uncompensated hearing loss. Audio-visual integration was studied using the McGurk task, in which an illusory fused percept can occur if incongruent auditory and visual syllables are presented. The paradigm included unisensory stimuli (auditory only, visual only), congruent audio-visual stimuli, and incongruent (McGurk) audio-visual stimuli. An illusory percept was reported in over 60% of incongruent trials. These McGurk illusion rates were equal in both groups of elderly subjects and correlated positively with speech-in-noise perception and daily listening effort. Normal-hearing participants showed an increased neural response in left pre- and postcentral gyri and right middle frontal gyrus for incongruent (McGurk) stimuli compared to congruent audio-visual stimuli. Activation patterns, however, did not differ between groups. Task-modulated functional connectivity did differ between groups: hard-of-hearing participants showed increased connectivity from auditory cortex to visual, parietal, and frontal areas compared to normal-hearing participants when comparing incongruent (McGurk) stimuli with congruent audio-visual stimuli. These results suggest that changes in functional connectivity of auditory cortex, rather than activation strength, during processing of audio-visual McGurk stimuli accompany age-related hearing loss.
20
Fei N, Ge J, Wang Y, Gao JH. Aging-related differences in the cortical network subserving intelligible speech. Brain Lang 2020; 201:104713. [PMID: 31759299 DOI: 10.1016/j.bandl.2019.104713]
Abstract
Language communication is crucial throughout the lifespan. The current study investigated how aging affects the brain network subserving intelligible speech. Using functional magnetic resonance imaging, we compared brain responses to intelligible and unintelligible speech between older and young adults. Univariate and multivariate analyses revealed reduced brain activation and lower regional pattern distinctions in response to intelligible versus unintelligible speech in the left anterior superior temporal gyrus (aSTG) and the left inferior frontal gyrus (IFG) in the older compared with young adults. Notably, the functional connectivity between the left IFG and the left angular gyrus (AG) was increased and a significantly enhanced bidirectional effective connectivity between the left aSTG and the left AG was observed in the older adults for processing speech intelligibility. Our study revealed aging-related differences in the cortical activity for intelligible speech and suggested that increased frontal-temporal-parietal functional integration may help facilitate spoken language processing in older adults.
Affiliation(s)
- Nanxi Fei
- Beijing City Key Lab for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing, China; Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Jianqiao Ge
- Beijing City Key Lab for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing, China; Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Yi Wang
- Public Health Science and Engineering College, Tianjin University of Traditional Chinese Medicine, Tianjin, China
- Jia-Hong Gao
- Beijing City Key Lab for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing, China; Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China; McGovern Institute for Brain Research, Peking University, Beijing, China

21
Wallace MT, Woynaroski TG, Stevenson RA. Multisensory Integration as a Window into Orderly and Disrupted Cognition and Communication. Annu Rev Psychol 2020; 71:193-219. [DOI: 10.1146/annurev-psych-010419-051112]
Abstract
During our everyday lives, we are confronted with a vast amount of information from several sensory modalities. This multisensory information needs to be appropriately integrated for us to effectively engage with and learn from our world. Research carried out over the last half century has provided new insights into the way such multisensory processing improves human performance and perception; the neurophysiological foundations of multisensory function; the time course for its development; how multisensory abilities differ in clinical populations; and, most recently, the links between multisensory processing and cognitive abilities. This review summarizes the extant literature on multisensory function in typical and atypical circumstances, discusses the implications of the work carried out to date for theory and research, and points toward next steps for advancing the field.
Affiliation(s)
- Mark T. Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA
- Departments of Psychology and Pharmacology, Vanderbilt University, Nashville, Tennessee 37232, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee 37232, USA
- Vanderbilt Kennedy Center, Nashville, Tennessee 37203, USA
- Tiffany G. Woynaroski
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee 37232, USA
- Vanderbilt Kennedy Center, Nashville, Tennessee 37203, USA
- Ryan A. Stevenson
- Departments of Psychology and Psychiatry and Program in Neuroscience, University of Western Ontario, London, Ontario N6A 3K7, Canada
- Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada

22
Scurry AN, Vercillo T, Nicholson A, Webster M, Jiang F. Aging Impairs Temporal Sensitivity, but not Perceptual Synchrony, Across Modalities. Multisens Res 2019; 32:671-692. [PMID: 31059487 DOI: 10.1163/22134808-20191343]
Abstract
Encoding the temporal properties of external signals that comprise multimodal events is a major factor guiding everyday experience. However, during the natural aging process, impairments to sensory processing can profoundly affect multimodal temporal perception. Various mechanisms can contribute to temporal perception, and thus it is imperative to understand how each can be affected by age. In the current study, using three different temporal order judgement tasks (unisensory, multisensory, and sensorimotor), we investigated the effects of age on two separate temporal processes: synchronization and integration of multiple signals. These two processes rely on different aspects of temporal information, either the temporal alignment of processed signals or the integration/segregation of signals arising from different modalities, respectively. Results showed that the ability to integrate/segregate multiple signals decreased with age regardless of the task, and that the magnitude of such impairment correlated across tasks, suggesting a widespread mechanism affected by age. In contrast, perceptual synchrony remained stable with age, revealing a distinct intact mechanism. Overall, results from this study suggest that aging has differential effects on temporal processing, and general impairments with aging may impact global temporal sensitivity while context-dependent processes remain unaffected.
Affiliation(s)
- Tiziana Vercillo
- Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY 14642, USA
- Alexis Nicholson
- Department of Psychology, University of Nevada, Reno, NV 89557, USA
- Michael Webster
- Department of Psychology, University of Nevada, Reno, NV 89557, USA
- Fang Jiang
- Department of Psychology, University of Nevada, Reno, NV 89557, USA

23
Stawicki M, Majdak P, Başkent D. Ventriloquist Illusion Produced With Virtual Acoustic Spatial Cues and Asynchronous Audiovisual Stimuli in Both Young and Older Individuals. Multisens Res 2019; 32:745-770. [DOI: 10.1163/22134808-20191430]
Abstract
The ventriloquist illusion, the change in perceived location of an auditory stimulus when a synchronously presented but spatially discordant visual stimulus is added, has previously been shown in young healthy populations to be a robust paradigm that relies mainly on automatic processes. Here, we propose the ventriloquist illusion as a potential simple test to assess audiovisual (AV) integration in young and older individuals. We used a modified version of the illusion paradigm that was adaptive, nearly bias-free, relied on binaural stimulus representation using generic head-related transfer functions (HRTFs) instead of multiple loudspeakers, and was tested with synchronous and asynchronous presentation of AV stimuli (both tone and speech). The minimum audible angle (MAA), the smallest perceptible difference in angle between two sound sources, was compared with and without the visual stimuli in young and older adults with no or minimal sensory deficits. The illusion effect, measured by means of MAAs implemented with HRTFs, was observed with both synchronous and asynchronous visual stimuli, but only with the tone stimulus, not the speech stimulus. The patterns were similar between young and older individuals, indicating the versatility of the modified ventriloquist illusion paradigm.
Affiliation(s)
- Marnix Stawicki
- Department of Otorhinolaryngology / Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Graduate School of Medical Sciences, Research School of Behavioral and Cognitive Neurosciences (BCN), University of Groningen, Groningen, The Netherlands
- Piotr Majdak
- Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria
- Deniz Başkent
- Department of Otorhinolaryngology / Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Graduate School of Medical Sciences, Research School of Behavioral and Cognitive Neurosciences (BCN), University of Groningen, Groningen, The Netherlands

24
Zhou X, Innes-Brown H, McKay CM. Audio-visual integration in cochlear implant listeners and the effect of age difference. J Acoust Soc Am 2019; 146:4144. [PMID: 31893708 DOI: 10.1121/1.5134783]
Abstract
This study aimed to investigate differences in audio-visual (AV) integration between cochlear implant (CI) listeners and normal-hearing (NH) adults. A secondary aim was to investigate the effect of age differences by examining AV integration in groups of older and younger NH adults. Seventeen CI listeners, 13 similarly aged NH adults, and 16 younger NH adults were recruited. Two speech identification experiments were conducted to evaluate AV integration of speech cues. In the first experiment, reaction times in audio-alone (A-alone), visual-alone (V-alone), and AV conditions were measured during a speeded task in which participants were asked to identify a target sound /aSa/ among 11 alternatives. A race model was applied to evaluate AV integration. In the second experiment, identification accuracies were measured using a closed set of consonants and an open set of consonant-nucleus-consonant words. The authors quantified AV integration using a combination of a probability model and a cue integration model (which model participants' AV accuracy by assuming no integration or optimal integration, respectively). The results showed that experienced CI listeners demonstrated no better AV integration than similarly aged NH adults. Further, there was no significant difference in AV integration between the younger and older NH adults.
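The race-model analysis mentioned here compares the audiovisual RT distribution against Miller's bound on statistical facilitation. A minimal sketch of that test, with hypothetical data and function names (not the authors' code):

```python
import numpy as np

def race_model_violations(rt_a, rt_v, rt_av, t_grid):
    """Return the times t at which the AV RT CDF exceeds Miller's
    race-model bound, CDF_AV(t) <= CDF_A(t) + CDF_V(t).

    Violations indicate genuine multisensory integration rather than
    mere statistical facilitation of two independent unimodal races.
    """
    def cdf(rts, t):
        # empirical CDF of the RT sample evaluated on the time grid
        return np.mean(np.asarray(rts)[:, None] <= t, axis=0)

    bound = np.minimum(cdf(rt_a, t_grid) + cdf(rt_v, t_grid), 1.0)
    return t_grid[cdf(rt_av, t_grid) > bound]
```

Violations are typically probed at the fast tail of the RT distribution, where the bound is far below 1.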
Affiliation(s)
- Xin Zhou
- Bionics Institute of Australia, 384-388 East Melbourne, Melbourne, Victoria 3002, Australia
- Hamish Innes-Brown
- Bionics Institute of Australia, 384-388 East Melbourne, Melbourne, Victoria 3002, Australia
- Colette M McKay
- Bionics Institute of Australia, 384-388 East Melbourne, Melbourne, Victoria 3002, Australia

25
van de Rijt LPH, Roye A, Mylanus EAM, van Opstal AJ, van Wanrooij MM. The Principle of Inverse Effectiveness in Audiovisual Speech Perception. Front Hum Neurosci 2019; 13:335. [PMID: 31611780 PMCID: PMC6775866 DOI: 10.3389/fnhum.2019.00335] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2019] [Accepted: 09/11/2019] [Indexed: 11/13/2022] Open
Abstract
We assessed how synchronous speech listening and lipreading affects speech recognition in acoustic noise. In simple audiovisual perceptual tasks, inverse effectiveness is often observed, which holds that the weaker the unimodal stimuli, or the poorer their signal-to-noise ratio, the stronger the audiovisual benefit. So far, however, inverse effectiveness has not been demonstrated for complex audiovisual speech stimuli. Here we assess whether this multisensory integration effect can also be observed for the recognizability of spoken words. To that end, we presented audiovisual sentences to 18 native-Dutch normal-hearing participants, who had to identify the spoken words from a finite list. Speech-recognition performance was determined for auditory-only, visual-only (lipreading), and auditory-visual conditions. To modulate acoustic task difficulty, we systematically varied the auditory signal-to-noise ratio. In line with a commonly observed multisensory enhancement on speech recognition, audiovisual words were more easily recognized than auditory-only words (recognition thresholds of -15 and -12 dB, respectively). We here show that the difficulty of recognizing a particular word, either acoustically or visually, determines the occurrence of inverse effectiveness in audiovisual word integration. Thus, words that are better heard or recognized through lipreading, benefit less from bimodal presentation. Audiovisual performance at the lowest acoustic signal-to-noise ratios (45%) fell below the visual recognition rates (60%), reflecting an actual deterioration of lipreading in the presence of excessive acoustic noise. This suggests that the brain may adopt a strategy in which attention has to be divided between listening and lipreading.
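Inverse effectiveness in this sense can be quantified per word as the relative audiovisual gain over the best unisensory performance. A toy sketch with hypothetical accuracy values (not the study's data):

```python
def multisensory_gain(acc_a, acc_v, acc_av):
    """Relative audiovisual benefit over the best unisensory accuracy."""
    best_uni = max(acc_a, acc_v)
    return (acc_av - best_uni) / best_uni

# Inverse effectiveness: the more poorly a word is heard or lipread,
# the larger its relative gain from bimodal presentation.
gain_hard = multisensory_gain(0.20, 0.30, 0.45)  # poorly recognized word
gain_easy = multisensory_gain(0.80, 0.90, 0.95)  # well recognized word
```

Plotting such gains against unisensory recognizability is one way to visualize the word-level inverse effectiveness the authors report.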
Affiliation(s)
- Luuk P. H. van de Rijt
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
- Anja Roye
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Emmanuel A. M. Mylanus
- Department of Otorhinolaryngology, Donders Institute for Brain, Cognition, and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
- A. John van Opstal
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
- Marc M. van Wanrooij
- Department of Biophysics, Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, Netherlands
26
Wulff DU, De Deyne S, Jones MN, Mata R. New Perspectives on the Aging Lexicon. Trends Cogn Sci 2019; 23:686-698. [PMID: 31288976 DOI: 10.1016/j.tics.2019.05.003] [Citation(s) in RCA: 58] [Impact Index Per Article: 11.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Revised: 05/06/2019] [Accepted: 05/07/2019] [Indexed: 12/26/2022]
Abstract
The field of cognitive aging has made considerable advances in describing the linguistic and semantic changes that occur across the adult life span and in uncovering the structure of the mental lexicon (i.e., the mental repository of lexical and conceptual representations). Nevertheless, debate continues over the sources of these changes, including the role of environmental exposure and several cognitive mechanisms associated with the learning, representation, and retrieval of information. We review the current status of research in this field and outline a framework that promises to assess the contributions of both ecological and psychological factors to the aging lexicon.
Affiliation(s)
- Dirk U Wulff
- University of Basel, Basel, Switzerland; Max Planck Institute for Human Development, Berlin, Germany
- Rui Mata
- University of Basel, Basel, Switzerland; Max Planck Institute for Human Development, Berlin, Germany
27
Age-related differences in Voice-Onset-Time in Polish language users: An ERP study. Acta Psychol (Amst) 2019; 193:18-29. [PMID: 30580059 DOI: 10.1016/j.actpsy.2018.12.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2018] [Revised: 11/15/2018] [Accepted: 12/06/2018] [Indexed: 11/23/2022] Open
Abstract
Using the Mismatch Negativity (MMN) paradigm, we investigated for the first time cortical responses to consonant-vowel (CV) syllables differing in Voice-Onset-Time (VOT) in Polish, a member of the Slavic language group. The study aimed to test age-related effects on different ERP responses in young (20-30 years of age) and elderly (60-68 years) native Polish speakers. Participants were presented with a sequence of voiced and voiceless stop CV syllables /to/ and /do/ with different VOT values (-100 ms, -70 ms, -30 ms, -20 ms, +20 ms, +50 ms). We analyzed the MMN and the P1, N1, N1', P2, and N2 components. Our results showed an age-related decline in voicing perception across all tested ERP components. This decline may reflect a general slowing of neural processing with advancing age and may be associated with difficulties in processing temporal and spectral information in elderly people. Our findings also revealed that specific features of Slavic languages influence ERP morphology differently than reported in the literature for aspirating languages.
28
Wang B, Li P, Li D, Niu Y, Yan T, Li T, Cao R, Yan P, Guo Y, Yang W, Ren Y, Li X, Wang F, Yan T, Wu J, Zhang H, Xiang J. Increased Functional Brain Network Efficiency During Audiovisual Temporal Asynchrony Integration Task in Aging. Front Aging Neurosci 2018; 10:316. [PMID: 30356825 PMCID: PMC6189604 DOI: 10.3389/fnagi.2018.00316] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2018] [Accepted: 09/19/2018] [Indexed: 01/05/2023] Open
Abstract
Audiovisual integration significantly changes over the lifespan, but age-related functional connectivity in audiovisual temporal asynchrony integration tasks remains underexplored. In the present study, electroencephalograms (EEGs) of 27 young adults (22–25 years) and 25 old adults (61–76 years) were recorded during an audiovisual temporal asynchrony integration task with seven conditions [auditory (A), visual (V), AV, A50V, A100V, V50A and V100A]. We calculated the phase lag index (PLI)-weighted connectivity networks modulated by the audiovisual tasks and found that the PLI connections showed obvious dynamic changes after stimulus onset. In the theta (4–7 Hz) and alpha (8–13 Hz) bands, the AV and V50A conditions induced stronger functional connections and higher global and local efficiencies, reflecting a stronger audiovisual integration effect, which was attributed to the auditory information arriving at the primary auditory cortex earlier than the visual information reaching the primary visual cortex. Importantly, the functional connectivity and network efficiencies of old adults revealed higher global and local efficiencies and higher degree in both the theta and alpha bands. These larger network efficiencies indicated that old adults might experience more difficulties in attention and cognitive control during the audiovisual integration task with temporal asynchrony than young adults. There were significant associations between network efficiencies and peak time of integration only in young adults. We propose that an audiovisual task with multiple conditions might arouse the appropriate attention in young adults but would lead to a ceiling effect in old adults. Our findings provide new insights into the network topography of old adults during audiovisual integration and highlight higher functional connectivity and network efficiencies due to greater cognitive demand.
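For readers unfamiliar with the connectivity measure used in this study, the phase lag index (PLI) quantifies how consistently one signal's instantaneous phase leads or lags another's, discounting zero-lag coupling (which is often volume-conduction artifact in EEG). A minimal Python sketch, using synthetic 10 Hz signals rather than real EEG, might look like:

```python
import numpy as np
from scipy.signal import hilbert

def phase_lag_index(x, y):
    """Phase Lag Index between two equal-length 1-D signals.

    PLI = |mean(sign(sin(phi_x - phi_y)))| over samples:
    0 means no consistent lead/lag, 1 means a perfectly consistent one.
    Instantaneous phases come from the analytic (Hilbert) signal.
    """
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.sign(np.sin(phase_x - phase_y))))

# Two alpha-band (10 Hz) oscillations with a consistent quarter-cycle lag
t = np.linspace(0.0, 2.0, 1000, endpoint=False)
x = np.sin(2 * np.pi * 10 * t)
y = np.sin(2 * np.pi * 10 * t - np.pi / 2)
print(phase_lag_index(x, y))  # close to 1: consistent nonzero lag
print(phase_lag_index(x, x))  # exactly 0: zero-lag coupling is ignored
```

In practice a PLI value is computed per channel pair and frequency band, and the resulting weighted network is what graph metrics such as global and local efficiency are derived from.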
Affiliation(s)
- Bin Wang
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, China; Department of Radiology, First Hospital of Shanxi Medical University, Taiyuan, China
- Peizhen Li
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, China
- Dandan Li
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, China
- Yan Niu
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, China
- Ting Yan
- Translational Medicine Research Center, Shanxi Medical University, Taiyuan, China
- Ting Li
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, China
- Rui Cao
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, China
- Pengfei Yan
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, China
- Yuxiang Guo
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, China
- Weiping Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Yanna Ren
- Medical Humanities College, Guiyang University of Traditional Chinese Medicine, Guiyang, China
- Xinrui Li
- Suzhou North America High School, Suzhou, China
- Tianyi Yan
- School of Life Science, Beijing Institute of Technology, Beijing, China; Key Laboratory of Convergence Medical Engineering System and Healthcare Technology, Ministry of Industry and Information Technology, Beijing Institute of Technology, Beijing, China; Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, Beijing Institute of Technology, Beijing, China
- Jinglong Wu
- Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, Beijing Institute of Technology, Beijing, China; Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
- Hui Zhang
- Department of Radiology, First Hospital of Shanxi Medical University, Taiyuan, China
- Jie Xiang
- College of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, China
29
McDaniel J, Camarata S, Yoder P. Comparing Auditory-Only and Audiovisual Word Learning for Children With Hearing Loss. J Deaf Stud Deaf Educ 2018; 23:382-398. [PMID: 29767759 PMCID: PMC6146754 DOI: 10.1093/deafed/eny016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/23/2017] [Revised: 04/16/2018] [Accepted: 05/04/2018] [Indexed: 06/08/2023]
Abstract
Although reducing visual input to emphasize auditory cues is a common practice in pediatric auditory (re)habilitation, the extant literature offers minimal empirical evidence for whether unisensory auditory-only (AO) or multisensory audiovisual (AV) input is more beneficial to children with hearing loss for developing spoken language skills. Using an adapted alternating treatments single case research design, we evaluated the effectiveness and efficiency of a receptive word learning intervention with and without access to visual speechreading cues. Four preschool children with prelingual hearing loss participated. Based on probes without visual cues, three participants demonstrated strong evidence for learning in the AO and AV conditions relative to a control (no-teaching) condition. No participants demonstrated a differential rate of learning between AO and AV conditions. Neither an inhibitory effect predicted by a unisensory theory nor a beneficial effect predicted by a multisensory theory for providing visual cues was identified. Clinical implications are discussed.
30
Bernstein LE. Response Errors in Females' and Males' Sentence Lipreading Necessitate Structurally Different Models for Predicting Lipreading Accuracy. Lang Learn 2018; 68:127-158. [PMID: 31485084 PMCID: PMC6724546 DOI: 10.1111/lang.12281] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Lipreaders recognize words from phonetically impoverished stimuli, an ability that is generally poor in normal-hearing adults. Individual sentence lipreading trials from 341 young adults were modeled to predict words and phonemes correct in terms of measures of phoneme response dissimilarity (PRD), the number of inserted incorrect response phonemes, lipreader gender, and a measure of speech perception in noise. Interactions with lipreaders' gender necessitated structurally different models of males' and females' lipreading. Overall, female lipreaders are more accurate, their ability to recognize words with impoverished or degraded input is consistent across visual and auditory modalities, and they amplify their correct responding through top-down insertion of text. Males' responses suggest that individuals with poorer auditory speech perception in noise amplify their responses by shifting towards including text that is more perceptually discrepant from the stimulus. Gender differences merit attention in future studies that use visual speech stimuli.
Affiliation(s)
- Lynne E Bernstein
- Department of Speech, Language, and Hearing Science, George Washington University, 2121 I St NW, Washington, DC 20052
31
Brooks CJ, Chan YM, Anderson AJ, McKendrick AM. Audiovisual Temporal Perception in Aging: The Role of Multisensory Integration and Age-Related Sensory Loss. Front Hum Neurosci 2018; 12:192. [PMID: 29867415 PMCID: PMC5954093 DOI: 10.3389/fnhum.2018.00192] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2017] [Accepted: 04/20/2018] [Indexed: 11/26/2022] Open
Abstract
Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information.
Affiliation(s)
- Cassandra J Brooks
- Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, VIC, Australia
- Yu Man Chan
- Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, VIC, Australia
- Andrew J Anderson
- Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, VIC, Australia
- Allison M McKendrick
- Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, VIC, Australia
32
Heikkilä J, Fagerlund P, Tiippana K. Semantically Congruent Visual Information Can Improve Auditory Recognition Memory in Older Adults. Multisens Res 2018; 31:213-225. [DOI: 10.1163/22134808-00002602] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2017] [Accepted: 07/31/2017] [Indexed: 11/19/2022]
Abstract
In the course of normal aging, memory functions show signs of impairment. Studies of memory in the elderly have previously focused on a single sensory modality, although multisensory encoding has been shown to improve memory performance in children and young adults. In this study, we investigated how audiovisual encoding affects auditory recognition memory in older (mean age 71 years) and younger (mean age 23 years) adults. Participants memorized auditory stimuli (sounds, spoken words) presented either alone or with semantically congruent visual stimuli (pictures, text) during encoding. Subsequent recognition memory performance of auditory stimuli was better for stimuli initially presented together with visual stimuli than for auditory stimuli presented alone during encoding. This facilitation was observed both in older and younger participants, while the overall memory performance was poorer in older participants. However, the pattern of facilitation was influenced by age. When encoding spoken words, the gain was greater for older adults. When encoding sounds, the gain was greater for younger adults. These findings show that semantically congruent audiovisual encoding improves memory performance in late adulthood, particularly for auditory verbal material.
Affiliation(s)
- Jenni Heikkilä
- Department of Psychology and Logopedics, Faculty of Medicine, P.O. Box 9, 00014 University of Helsinki, Finland
- Petra Fagerlund
- Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, Helsinki, Finland
- Kaisa Tiippana
- Department of Psychology and Logopedics, Faculty of Medicine, P.O. Box 9, 00014 University of Helsinki, Finland
33
Jansen SD, Keebler JR, Chaparro A. Shifts in Maximum Audiovisual Integration with Age. Multisens Res 2018; 31:191-212. [DOI: 10.1163/22134808-00002599] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2017] [Accepted: 07/14/2017] [Indexed: 11/19/2022]
Abstract
Listeners attempting to understand speech in noisy environments rely on visual and auditory processes, typically referred to as audiovisual processing. Noise corrupts the auditory speech signal, and listeners naturally leverage visual cues from the talker's face in an attempt to interpret the degraded auditory signal. Studies of speech intelligibility in noise show that the maximum improvement in speech recognition performance (i.e., maximum visual enhancement, or VEmax), derived from seeing an interlocutor's face, is invariant with age. Several studies have reported that VEmax is typically associated with a signal-to-noise ratio (SNR) of −12 dB; however, few studies have systematically investigated whether the SNR associated with VEmax changes with age. We investigated whether VEmax changes as a function of age, whether the SNR at VEmax changes as a function of age, and what perceptual/cognitive abilities account for or mediate such relationships. We measured VEmax in a nongeriatric adult sample ranging in age from 20 to 59 years old. We found that VEmax was age-invariant, replicating earlier studies. No perceptual/cognitive measures predicted VEmax, most likely due to limited variance in VEmax scores. Importantly, we found that the SNR at VEmax shifts toward higher (quieter) SNR levels with increasing age; however, this relationship is partially mediated by working memory capacity, in that those with larger working memory capacities (WMCs) can identify speech at lower (louder) SNR levels than their age equivalents with smaller WMCs. The current study is the first to report that individual differences in WMC partially mediate the age-related shift in SNR at VEmax.
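As a rough illustration of the VEmax measure discussed above: visual enhancement is commonly normalized as VE = (AV - A) / (1 - A), so that it expresses the AV gain relative to the room left for improvement, and VEmax is the largest VE across SNR levels. The normalization and the recognition scores below are assumptions for illustration, chosen so that VEmax falls at -12 dB; they are not data from this study:

```python
# Hypothetical proportion-correct scores across SNR, illustrating how
# VEmax and the SNR at VEmax are located. VE = (AV - A) / (1 - A).
snrs = [-18, -15, -12, -9, -6, -3]                   # dB
auditory = [0.05, 0.15, 0.35, 0.60, 0.80, 0.92]      # A-only scores (made up)
audiovisual = [0.30, 0.50, 0.78, 0.85, 0.92, 0.96]   # AV scores (made up)

ve = [(av - a) / (1 - a) for a, av in zip(auditory, audiovisual)]
ve_max = max(ve)
snr_at_ve_max = snrs[ve.index(ve_max)]
print(f"VEmax = {ve_max:.2f} at {snr_at_ve_max} dB SNR")
# VEmax = 0.66 at -12 dB SNR
```

The study's question is then whether `snr_at_ve_max`, estimated per participant, drifts toward quieter SNRs with age.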
Affiliation(s)
- Joseph R. Keebler
- Department of Human Factors and Behavioral Neurobiology, Embry-Riddle Aeronautical University, Daytona Beach, FL, USA
- Alex Chaparro
- Department of Human Factors and Behavioral Neurobiology, Embry-Riddle Aeronautical University, Daytona Beach, FL, USA
34
Do age and linguistic background alter the audiovisual advantage when listening to speech in the presence of energetic and informational masking? Atten Percept Psychophys 2017; 80:242-261. [DOI: 10.3758/s13414-017-1423-5] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
35
Abstract
Purpose of Review: The integration of information across sensory modalities into unified percepts is a fundamental sensory process upon which a multitude of cognitive processes are based. We review the literature on aging-related changes in audiovisual integration published over the last five years. Specifically, we review the impact of changes in temporal processing, the influence of the effectiveness of sensory inputs, the role of working memory, and newer studies of intra-individual variability during these processes.
Recent Findings: Work in the last five years on bottom-up influences on sensory perception has garnered significant attention. Temporal processing, a driving factor of multisensory integration, has now been shown to decouple from multisensory integration in aging, despite the two declining together with age. The impact of stimulus effectiveness also changes with age: older adults show maximal benefit from multisensory gain at high signal-to-noise ratios. Following sensory decline, high working memory capacity has been shown to be somewhat of a protective factor against age-related declines in audiovisual speech perception, particularly in noise. Finally, newer research is emerging that focuses on the intra-individual variability observed with aging.
Summary: Overall, the studies of the past five years have replicated and expanded on previous work highlighting the role of bottom-up sensory changes in aging and their influence on audiovisual integration, as well as the top-down influence of working memory.
Affiliation(s)
- Sarah H Baum
- Department of Psychology, University of Washington
- Ryan Stevenson
- Department of Psychology, Western University; Brain and Mind Institute, Western University; Department of Psychiatry, Schulich School of Medicine and Dentistry, Western University; Program in Neuroscience, Schulich School of Medicine and Dentistry, Western University; Centre for Vision Research, York University
36
Auditory and Audiovisual Close Shadowing in Post-Lingually Deaf Cochlear-Implanted Patients and Normal-Hearing Elderly Adults. Ear Hear 2017; 39:139-149. [PMID: 28753162 DOI: 10.1097/aud.0000000000000474] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES The goal of this study was to determine the effect of auditory deprivation and age-related speech decline on perceptuo-motor abilities during speech processing in post-lingually deaf cochlear-implanted participants and in normal-hearing elderly (NHE) participants. DESIGN A close-shadowing experiment was carried out on 10 cochlear-implanted patients and on 10 NHE participants, with two groups of normal-hearing young participants as controls. To this end, participants had to categorize auditory and audiovisual syllables as quickly as possible, either manually or orally. Reaction times and percentages of correct responses were compared depending on response modes, stimulus modalities, and syllables. RESULTS Responses of cochlear-implanted subjects were globally slower and less accurate than those of both young and elderly normal-hearing people. Adding the visual modality was found to enhance performance for cochlear-implanted patients, whereas no significant effect was obtained for the NHE group. Critically, oral responses were faster than manual ones for all groups. In addition, for NHE participants, manual responses were more accurate than oral responses, as was the case for normal-hearing young participants when presented with noisy speech stimuli. CONCLUSIONS Faster reaction times were observed for oral than for manual responses in all groups, suggesting that perceptuo-motor relationships remained largely functional after cochlear implantation and remain efficient in the NHE group. These results are in agreement with recent perceptuo-motor theories of speech perception. They are also supported by the theoretical assumption that implicit motor knowledge and motor representations partly constrain auditory speech processing. In this framework, oral responses would be generated at an earlier stage of a sensorimotor loop, whereas manual responses would appear later, leading to slower but more accurate responses. The difference between oral and manual responses suggests that the perceptuo-motor loop is still effective for NHE subjects and also for cochlear-implanted participants, despite degraded global performance.
37
Stevenson RA, Baum SH, Segers M, Ferber S, Barense MD, Wallace MT. Multisensory speech perception in autism spectrum disorder: From phoneme to whole-word perception. Autism Res 2017; 10:1280-1290. [PMID: 28339177 PMCID: PMC5513806 DOI: 10.1002/aur.1776] [Citation(s) in RCA: 41] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2016] [Revised: 01/19/2017] [Accepted: 02/06/2017] [Indexed: 11/11/2022]
Abstract
Speech perception in noisy environments is boosted when a listener can see the speaker's mouth and integrate the auditory and visual speech information. Autistic children have a diminished capacity to integrate sensory information across modalities, which contributes to core symptoms of autism, such as impairments in social communication. We investigated the abilities of autistic and typically-developing (TD) children to integrate auditory and visual speech stimuli in various signal-to-noise ratios (SNR). Measurements of both whole-word and phoneme recognition were recorded. At the level of whole-word recognition, autistic children exhibited reduced performance in both the auditory and audiovisual modalities. Importantly, autistic children showed reduced behavioral benefit from multisensory integration with whole-word recognition, specifically at low SNRs. At the level of phoneme recognition, autistic children exhibited reduced performance relative to their TD peers in auditory, visual, and audiovisual modalities. However, and in contrast to their performance at the level of whole-word recognition, both autistic and TD children showed benefits from multisensory integration for phoneme recognition. In accordance with the principle of inverse effectiveness, both groups exhibited greater benefit at low SNRs relative to high SNRs. Thus, while autistic children showed typical multisensory benefits during phoneme recognition, these benefits did not translate to typical multisensory benefit of whole-word recognition in noisy environments. We hypothesize that sensory impairments in autistic children raise the SNR threshold needed to extract meaningful information from a given sensory input, resulting in subsequent failure to exhibit behavioral benefits from additional sensory information at the level of whole-word recognition.
Affiliation(s)
- Ryan A. Stevenson
- Department of Psychology, Western University, London, ON, Canada
- Brain and Mind Institute, Western University, London, ON, Canada
- Sarah H. Baum
- Department of Psychology, University of Washington, Seattle, WA, USA
- Susanne Ferber
- Dept. of Psychology, University of Toronto, Toronto, ON, Canada
- Rotman Research Institute, Toronto, ON, Canada
- Morgan D. Barense
- Dept. of Psychology, University of Toronto, Toronto, ON, Canada
- Rotman Research Institute, Toronto, ON, Canada
- Mark T. Wallace
- Vanderbilt Brain Institute, Nashville, TN, USA
- Vanderbilt Kennedy Center, Nashville, TN, USA
- Vanderbilt University, Nashville, TN, USA
- Dept. of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Dept. of Psychology, Vanderbilt University, Nashville, TN, USA
38
Stevenson RA, Segers M, Ncube BL, Black KR, Bebko JM, Ferber S, Barense MD. The cascading influence of multisensory processing on speech perception in autism. Autism 2017; 22:609-624. [PMID: 28506185 DOI: 10.1177/1362361317704413] [Citation(s) in RCA: 76] [Impact Index Per Article: 10.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
It has recently been theorized that atypical sensory processing in autism relates to difficulties in social communication. Through a series of tasks concurrently assessing multisensory temporal processes, multisensory integration, and speech perception in 76 children with and without autism, we provide the first behavioral evidence of such a link. Temporal processing abilities in children with autism contributed to impairments in speech perception. This relationship was significantly mediated by their abilities to integrate social information across auditory and visual modalities. These data describe the cascading impact of sensory abilities in autism, whereby temporal processing impacts the multisensory integration of social information, which, in turn, contributes to deficits in speech perception. These relationships were found to be specific to autism, specific to multisensory but not unisensory integration, and specific to the processing of social information.
Affiliation(s)
- Susanne Ferber
- University of Toronto, Canada; Rotman Research Institute at Baycrest, Canada
- Morgan D Barense
- University of Toronto, Canada; Rotman Research Institute at Baycrest, Canada
39
Stevenson RA, Baum SH, Krueger J, Newhouse PA, Wallace MT. Links between temporal acuity and multisensory integration across life span. J Exp Psychol Hum Percept Perform 2017; 44:106-116. [PMID: 28447850 DOI: 10.1037/xhp0000424] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The temporal relationship between individual pieces of information from the different sensory modalities is one of the stronger cues to integrate such information into a unified perceptual gestalt, conveying numerous perceptual and behavioral advantages. Temporal acuity, however, varies greatly over the life span. It has previously been hypothesized that changes in temporal acuity in both development and healthy aging may thus play a key role in integrative abilities. This study tested the temporal acuity of 138 individuals ranging in age from 5 to 80. Temporal acuity and multisensory integration abilities were tested both within and across modalities (audition and vision) with simultaneity judgment and temporal order judgment tasks. We observed that temporal acuity, both within and across modalities, improved throughout development into adulthood and subsequently declined with healthy aging, as did the ability to integrate multisensory speech information. Of importance, throughout development, temporal acuity of simple stimuli (i.e., flashes and beeps) predicted individuals' abilities to integrate more complex speech information. However, in the aging population, although temporal acuity declined with healthy aging and was accompanied by declines in integrative abilities, temporal acuity was not able to predict integration at the individual level. Together, these results suggest that the impact of temporal acuity on multisensory integration varies throughout the life span. Although the maturation of temporal acuity drives the rise of multisensory integrative abilities during development, it is unable to account for changes in integrative abilities in healthy aging. The differential relationships between age, temporal acuity, and multisensory integration suggest an important role for experience in these processes.
Affiliation(s)
- Ryan A Stevenson
- Department of Psychology, Brain and Mind Institute, University of Western Ontario
- Sarah H Baum
- Department of Psychology, University of Washington
- Paul A Newhouse
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center
40
Tye-Murray N, Spehar B, Myerson J, Hale S, Sommers M. Lipreading and audiovisual speech recognition across the adult lifespan: Implications for audiovisual integration. Psychol Aging 2016; 31:380-9. [PMID: 27294718 PMCID: PMC4910521 DOI: 10.1037/pag0000094] [Citation(s) in RCA: 47] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
In this study of visual (V-only) and audiovisual (AV) speech recognition in adults aged 22-92 years, the rate of age-related decrease in V-only performance was more than twice that in AV performance. Both auditory-only (A-only) and V-only performance were significant predictors of AV speech recognition, but age did not account for additional (unique) variance. Blurring the visual speech signal decreased speech recognition, and in AV conditions involving stimuli associated with equivalent unimodal performance for each participant, speech recognition remained constant from 22 to 92 years of age. Finally, principal components analysis revealed separate visual and auditory factors, but no evidence of an AV integration factor. Taken together, these results suggest that the benefit that comes from being able to see as well as hear a talker remains constant throughout adulthood and that changes in this AV advantage are entirely driven by age-related changes in unimodal visual and auditory speech recognition.
Affiliation(s)
- Brent Spehar
- Washington University in St Louis School of Medicine
41
Kaganovich N, Schumaker J, Rowland C. Matching heard and seen speech: An ERP study of audiovisual word recognition. Brain Lang 2016; 157-158:14-24. [PMID: 27155219 PMCID: PMC4915735 DOI: 10.1016/j.bandl.2016.04.010]
Abstract
Seeing articulatory gestures while listening to speech-in-noise (SIN) significantly improves speech understanding. However, the degree of this improvement varies greatly among individuals. We examined the relationship between two distinct stages of visual articulatory processing and SIN accuracy by combining a cross-modal repetition priming task with ERP recordings. Participants first heard a word referring to a common object (e.g., pumpkin) and then decided whether the subsequently presented visual silent articulation matched the word they had just heard. Incongruent articulations elicited a significantly enhanced N400, indicative of mismatch detection at the pre-lexical level. Congruent articulations elicited a significantly larger LPC, indexing articulatory word recognition. Only the N400 difference between incongruent and congruent trials was significantly correlated with individuals' SIN accuracy improvement in the presence of the talker's face.
Affiliation(s)
- Natalya Kaganovich
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, IN 47907-2038, United States; Department of Psychological Sciences, Purdue University, 703 Third Street, West Lafayette, IN 47907-2038, United States.
- Jennifer Schumaker
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, IN 47907-2038, United States
- Courtney Rowland
- Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, IN 47907-2038, United States
42
Smayda KE, Van Engen KJ, Maddox WT, Chandrasekaran B. Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults. PLoS One 2016; 11:e0152773. [PMID: 27031343 PMCID: PMC4816421 DOI: 10.1371/journal.pone.0152773]
Abstract
Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18-35) and thirty-three older adults (ages 60-90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across signal-to-noise ratios (SNRs), modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audio-visual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. These results suggest that older adults can perceive speech as well as younger adults when both semantic and visual cues are available to the listener.
Affiliation(s)
- Kirsten E. Smayda
- Department of Psychology, The University of Texas at Austin, Austin, Texas, United States of America
- Kristin J. Van Engen
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, Missouri, United States of America
- W. Todd Maddox
- Department of Psychology, The University of Texas at Austin, Austin, Texas, United States of America
- Bharath Chandrasekaran
- Department of Psychology, The University of Texas at Austin, Austin, Texas, United States of America
- Communication Sciences and Disorders Department, The University of Texas at Austin, Austin, Texas, United States of America
43
Brooks CJ, Anderson AJ, Roach NW, McGraw PV, McKendrick AM. Age-related changes in auditory and visual interactions in temporal rate perception. J Vis 2015; 15(16):2. [PMID: 26624937 DOI: 10.1167/15.16.2]
Abstract
We investigated how aging affects the integration of temporal rate for auditory flutter (amplitude modulation) presented with visual flicker. Since older adults were poorer at detecting auditory amplitude modulation, modulation depth was individually adjusted so that temporal rate was equally discriminable for 10 Hz flutter and flicker, thereby balancing the reliability of rate information available to each sensory modality. With age-related sensory differences normalized in this way, rate asynchrony skewed both auditory and visual rate judgments to the same extent in younger and older adults. Therefore, reliability-based weighting of temporal rate is preserved in older adults. Concurrent presentation of synchronous 10 Hz flicker and flutter improved temporal rate discrimination consistent with statistically optimal integration in younger but not older adults. In a control experiment, younger adults were presented with the same physical auditory stimulus as older adults. This time, rate asynchrony skewed perceived rate with greater auditory weighting rather than balanced integration. Taken together, our results indicate that integration of discrepant auditory and visual rates is not altered due to the healthy aging process once sensory deficits are accounted for, but that aging does abolish the minor improvement in discrimination performance seen in younger observers when concordant rates are integrated.
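For readers less familiar with the benchmark invoked in this abstract, the following is a minimal sketch of reliability-weighted maximum-likelihood cue combination, the standard model of "statistically optimal integration" against which the discrimination results were evaluated. This is illustrative only, with made-up numbers; it is not the authors' analysis code, and `ml_combine` is a hypothetical helper name.

```python
def ml_combine(est_a, var_a, est_v, var_v):
    """Reliability-weighted average of an auditory and a visual estimate.

    Each cue is weighted by its reliability (inverse variance). The
    combined variance is never larger than the smaller single-cue
    variance, which is why optimal integration predicts the modest
    discrimination benefit seen in younger observers.
    """
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)
    w_v = 1 - w_a
    combined_est = w_a * est_a + w_v * est_v
    combined_var = (var_a * var_v) / (var_a + var_v)
    return combined_est, combined_var

# Example: auditory and visual cues both signalling ~10 Hz with equated
# reliability, as in the depth-adjusted flutter/flicker stimuli above.
est, var = ml_combine(est_a=10.2, var_a=1.0, est_v=9.8, var_v=1.0)
# Equal reliabilities give equal weights, and the variance halves.
```

With equated single-cue reliabilities, the model predicts balanced weighting of discrepant rates (which both age groups showed) plus a reduction in combined variance (which only the younger group showed), matching the dissociation reported above.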
44
Krueger Fister J, Stevenson RA, Nidiffer AR, Barnett ZP, Wallace MT. Stimulus intensity modulates multisensory temporal processing. Neuropsychologia 2016; 88:92-100. [PMID: 26920937 DOI: 10.1016/j.neuropsychologia.2016.02.016]
Abstract
One of the more challenging feats that multisensory systems must perform is to determine which sensory signals originate from the same external event, and thus should be integrated or "bound" into a singular perceptual object or event, and which signals should be segregated. Two important stimulus properties impacting this process are the timing and effectiveness of the paired stimuli. It has been well established that the more temporally aligned two stimuli are, the greater the degree to which they influence one another's processing. In addition, the less effective the individual unisensory stimuli are in eliciting a response, the greater the benefit when they are combined. However, the interaction between stimulus timing and stimulus effectiveness in driving multisensory-mediated behaviors had not previously been explored, and exploring it was the purpose of the current study. Participants were presented with either high- or low-intensity audiovisual stimuli in which stimulus onset asynchronies (SOAs) were parametrically varied, and were asked to report on the perceived synchrony/asynchrony of the paired stimuli. Our results revealed an interaction between the temporal relationship (SOA) and intensity of the stimuli. Specifically, individuals were more tolerant of larger temporal offsets (i.e., more likely to call them synchronous) when the paired stimuli were less effective. This interaction was also seen in response time (RT) distributions. Behavioral gains in RTs were seen with synchronous relative to asynchronous presentations, but this effect was more pronounced with high-intensity stimuli. These data suggest that stimulus effectiveness plays an underappreciated role in the perception of the timing of multisensory events, and reinforce the interdependency of the principles of multisensory integration in determining behavior and shaping perception.
Affiliation(s)
- Juliane Krueger Fister
- Neuroscience Graduate Program, Vanderbilt University Medical Center, United States; Vanderbilt Brain Institute, United States.
- Ryan A Stevenson
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, United States; Vanderbilt Brain Institute, United States; Vanderbilt University Kennedy Center, United States; Department of Psychology, University of Toronto, Canada
- Aaron R Nidiffer
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, United States
- Zachary P Barnett
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, United States
- Mark T Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, United States; Vanderbilt Brain Institute, United States; Vanderbilt University Kennedy Center, United States; Department of Psychology, Vanderbilt University, United States; Department of Psychiatry, Vanderbilt University, United States
45
Baum SH, Stevenson RA, Wallace MT. Behavioral, perceptual, and neural alterations in sensory and multisensory function in autism spectrum disorder. Prog Neurobiol 2015; 134:140-60. [PMID: 26455789 PMCID: PMC4730891 DOI: 10.1016/j.pneurobio.2015.09.007]
Abstract
Although sensory processing challenges have been noted since the first clinical descriptions of autism, it has taken until the release of the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) in 2013 for sensory problems to be included as part of the core symptoms of autism spectrum disorder (ASD) in the diagnostic profile. Because sensory information forms the building blocks for higher-order social and cognitive functions, we argue that sensory processing is not only an additional piece of the puzzle, but rather a critical cornerstone for characterizing and understanding ASD. In this review we discuss what is currently known about sensory processing in ASD, how sensory function fits within contemporary models of ASD, and what is understood about the differences in the underlying neural processing of sensory and social communication observed between individuals with and without ASD. In addition to highlighting the sensory features associated with ASD, we also emphasize the importance of multisensory processing in building perceptual and cognitive representations, and how deficits in multisensory integration may also be a core characteristic of ASD.
Affiliation(s)
- Sarah H Baum
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Ryan A Stevenson
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry, Vanderbilt University, Nashville, TN, USA.