51
Knowland VCP, Evans S, Snell C, Rosen S. Visual speech perception in children with language learning impairments. J Speech Lang Hear Res 2016;59:1-14. [PMID: 26895558] [DOI: 10.1044/2015_jslhr-s-14-0269]
Abstract
PURPOSE The purpose of the study was to assess the ability of children with developmental language learning impairments (LLIs) to use visual speech cues from the talking face. METHOD In this cross-sectional study, 41 typically developing children (mean age: 8 years 0 months, range: 4 years 5 months to 11 years 10 months) and 27 children with diagnosed LLI (mean age: 8 years 10 months, range: 5 years 2 months to 11 years 6 months) completed a silent speechreading task and a speech-in-noise task with and without visual support from the talking face. The speech-in-noise task involved the identification of a target word in a carrier sentence with a single competing speaker as a masker. RESULTS Children in the LLI group showed a deficit in speechreading when compared with their typically developing peers. Beyond the single-word level, this deficit became more apparent in older children. On the speech-in-noise task, a substantial benefit of visual cues was found regardless of age or group membership, although the LLI group showed an overall developmental delay in speech perception. CONCLUSION Although children with LLI were less accurate than their peers on the speechreading and speech-in-noise tasks, both groups were able to make equivalent use of visual cues to boost performance accuracy when listening in noise.
52
Pratt H, Bleich N, Mittelman N. Spatio-temporal distribution of brain activity associated with audio-visually congruent and incongruent speech and the McGurk effect. Brain Behav 2015;5:e00407. [PMID: 26664791] [PMCID: PMC4667754] [DOI: 10.1002/brb3.407]
Abstract
INTRODUCTION We studied the spatio-temporal distribution of cortical activity in response to audio-visual presentations of meaningless vowel-consonant-vowel utterances, and the effects of audio-visual congruence/incongruence, with emphasis on the McGurk effect. The McGurk effect occurs when a clearly audible syllable with one consonant is presented simultaneously with a visual presentation of a face articulating a syllable with a different consonant, and the resulting percept is a syllable with a consonant other than the auditorily presented one. METHODS Twenty subjects listened to pairs of audio-visually congruent or incongruent utterances and indicated whether pair members were the same or not. Source current densities of event-related potentials to the first utterance in the pair were estimated, and effects of stimulus-response combinations, brain area, hemisphere, and clarity of visual articulation were assessed. RESULTS Auditory cortex, superior parietal cortex, and middle temporal cortex were the most consistently involved areas across experimental conditions. Early (<200 msec) processing of the consonant was prominent overall in the left hemisphere, except for right hemisphere prominence in superior parietal cortex and secondary visual cortex. Clarity of visual articulation impacted activity in secondary visual cortex and Wernicke's area. McGurk perception was associated with decreased activity in primary and secondary auditory cortices and Wernicke's area before 100 msec, followed by increased activity around 100 msec that decreased again around 180 msec. Activity in Broca's area was unaffected by McGurk perception and increased only in response to congruent audio-visual stimuli 30-70 msec following consonant onset. CONCLUSIONS The results suggest left hemisphere prominence in the effects of stimulus and response conditions on eight brain areas involved in dynamically distributed parallel processing of audio-visual integration. Initially (30-70 msec), subcortical contributions to auditory cortex, superior parietal cortex, and middle temporal cortex occur. During 100-140 msec, peristriate visual influences and Wernicke's area join in the processing. Resolution of incongruent audio-visual inputs is then attempted; if successful, McGurk perception occurs and cortical activity in the left hemisphere further increases between 170 and 260 msec.
53
Lalonde K, Holt RF. Preschoolers benefit from visually salient speech cues. J Speech Lang Hear Res 2015;58:135-150. [PMID: 25322336] [PMCID: PMC4712850] [DOI: 10.1044/2014_jslhr-h-13-0343]
Abstract
PURPOSE This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that varied in perceptual difficulty and task demands. The authors also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. METHOD Twelve adults and 27 typically developing 3- and 4-year-old children completed 3 audiovisual (AV) speech integration tasks: matching, discrimination, and recognition. The authors compared AV benefit for visually salient and less visually salient speech discrimination contrasts and assessed the visual saliency of consonant confusions in auditory-only and AV word recognition. RESULTS Four-year-olds and adults demonstrated visual influence on all measures. Three-year-olds demonstrated visual influence on the speech discrimination and recognition measures. All groups demonstrated greater AV benefit for the visually salient discrimination contrasts. AV recognition benefit in 4-year-olds and adults depended on the visual saliency of the speech sounds. CONCLUSIONS Preschoolers can demonstrate AV speech integration. Their AV benefit results from efficient use of visually salient speech cues. Four-year-olds, but not 3-year-olds, used visual phonological knowledge to take advantage of visually salient speech cues, suggesting possible developmental differences in the mechanisms of AV benefit.
54
Whittingham KM, McDonald JS, Clifford CW. Synesthetes show normal sound-induced flash fission and fusion illusions. Vision Res 2014;105:1-9. [DOI: 10.1016/j.visres.2014.08.010]
55
Vercillo T, Burr D, Sandini G, Gori M. Children do not recalibrate motor-sensory temporal order after exposure to delayed sensory feedback. Dev Sci 2014;18:703-712. [PMID: 25444457] [DOI: 10.1111/desc.12247]
Abstract
Prolonged adaptation to sensory feedback delayed relative to a simple motor act (such as pressing a key) causes recalibration of sensory-motor synchronization, so instantaneous feedback appears to precede the motor act that caused it (Stetson, Cui, Montague & Eagleman, 2006). We investigated whether similar recalibration occurs in school-age children. Although plasticity may be expected to be even greater in children than in adults, we found no evidence of recalibration in children aged 8-11 years. Subjects adapted to delayed feedback for 100 trials, intermittently pressing a key that caused a tone to sound after a 200 ms delay. During the test phase, subjects responded to a visual cue by pressing a key, which triggered a tone to be played at variable intervals before or after the keypress. Subjects judged whether the tone preceded or followed the keypress, yielding psychometric functions estimating the delay at which they perceived the tone to be synchronous with the action. The psychometric functions also gave an estimate of the precision of the temporal order judgment. In agreement with previous studies, adaptation caused a shift in perceived synchrony in adults, so the keypress appeared to trail behind the auditory feedback, implying sensory-motor recalibration. However, school children of 8 to 11 years showed no measurable adaptation of perceived simultaneity, even after adaptation with 500 ms lags. Importantly, precision in the simultaneity task also improved with age, and this developmental trend correlated strongly with the magnitude of recalibration. This suggests that the lack of recalibration of sensory-motor simultaneity after adaptation in school-age children is related to their poor precision in temporal order judgments. To test this idea, we measured recalibration in adult subjects with auditory noise added to the stimuli (which hampered temporal precision). Under these conditions, recalibration was greatly reduced, with the magnitude of recalibration strongly correlating with temporal precision.
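As an illustration of the analysis this abstract describes, the sketch below fits a psychometric function to temporal order judgment data: the fitted mean gives the point of subjective simultaneity (the recalibration measure) and the fitted spread gives the precision. This is not the authors' code; the cumulative-Gaussian form is a standard assumption, and the data values are invented for the example.

# Minimal sketch (not the study's analysis code): fitting a cumulative
# Gaussian to motor-sensory temporal order judgments. The fitted mean
# (PSS) indexes recalibration; sigma indexes temporal precision.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical data: tone onset relative to keypress (ms; negative = tone
# first) and proportion of "tone followed my keypress" responses.
delays = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
p_tone_after = np.array([0.05, 0.10, 0.20, 0.35, 0.55, 0.70, 0.85, 0.95, 0.98])

def psychometric(delay, pss, sigma):
    """P('tone after keypress') as a cumulative Gaussian of the delay."""
    return norm.cdf(delay, loc=pss, scale=sigma)

(pss, sigma), _ = curve_fit(psychometric, delays, p_tone_after, p0=(0.0, 100.0))
print(f"PSS = {pss:.1f} ms; sigma = {sigma:.1f} ms (smaller = more precise)")

A shift in the fitted PSS after adaptation would indicate recalibration, while the age-related improvement in precision the authors report would appear as a shrinking sigma.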
56
Jerger S, Damian MF, Tye-Murray N, Abdi H. Children use visual speech to compensate for non-intact auditory speech. J Exp Child Psychol 2014;126:295-312. [PMID: 24974346] [PMCID: PMC4106987] [DOI: 10.1016/j.jecp.2014.05.003]
Abstract
We investigated whether visual speech fills in non-intact auditory speech (excised consonant onsets) in typically developing children from 4 to 14 years of age. Stimuli with the excised auditory onsets were presented in the audiovisual (AV) and auditory-only (AO) modes. A visual speech fill-in effect occurs when listeners experience hearing the same non-intact auditory stimulus (e.g., /-b/ag) as different depending on the presence/absence of visual speech, such as hearing /bag/ in the AV mode but /ag/ in the AO mode. We quantified the visual speech fill-in effect by the difference in the number of correct consonant onset responses between the modes. We found that easy visual speech cues such as /b/ provided greater filling in than difficult cues such as /g/. Only older children benefited from difficult visual speech cues, whereas all children benefited from easy visual speech cues, although 4- and 5-year-olds did not benefit as much as older children. To explore task demands, we compared results on our new task with those on the McGurk task. The influence of visual speech was uniquely associated with age and vocabulary abilities for the visual speech fill-in effect but with speechreading skills for the McGurk effect. This dissociation implies that visual speech, as processed by children, is a complicated and multifaceted phenomenon underpinned by heterogeneous abilities. These results emphasize that children perceive a speaker's utterance rather than the auditory stimulus per se. In children, as in adults, there is more to speech perception than meets the ear.
57
Guellaï B, Streri A, Yeung HH. The development of sensorimotor influences in the audiovisual speech domain: some critical questions. Front Psychol 2014;5:812. [PMID: 25147528] [PMCID: PMC4123602] [DOI: 10.3389/fpsyg.2014.00812]
Abstract
Speech researchers have long been interested in how auditory and visual speech signals are integrated, and recent work has revived interest in the role of speech production with respect to this process. Here, we discuss these issues from a developmental perspective. Because speech perception abilities typically outstrip speech production abilities in infancy and childhood, it is unclear how speech-like movements could influence audiovisual speech perception in development. While work on this question is still in its preliminary stages, there is nevertheless increasing evidence that sensorimotor processes (defined here as any motor or proprioceptive process related to orofacial movements) affect developmental audiovisual speech processing. We suggest three areas on which to focus in future research: (i) the relation between audiovisual speech perception and sensorimotor processes at birth, (ii) the pathways through which sensorimotor processes interact with audiovisual speech processing in infancy, and (iii) developmental change in sensorimotor pathways as speech production emerges in childhood.
58
Sekiyama K, Soshi T, Sakamoto S. Enhanced audiovisual integration with aging in speech perception: a heightened McGurk effect in older adults. Front Psychol 2014;5:323. [PMID: 24782815] [PMCID: PMC3995044] [DOI: 10.3389/fpsyg.2014.00323]
Abstract
Two experiments compared young and older adults in order to examine whether aging leads to a larger dependence on visual articulatory movements in auditory-visual speech perception. These experiments examined accuracy and response time in syllable identification for auditory-visual (AV) congruent and incongruent stimuli. There were also auditory-only (AO) and visual-only (VO) presentation modes. Data were analyzed only for participants with normal hearing. It was found that the older adults were more strongly influenced by visual speech than the younger ones for acoustically identical signal-to-noise ratios (SNRs) of auditory speech (Experiment 1). This was also confirmed when the SNRs of auditory speech were calibrated for the equivalent AO accuracy between the two age groups (Experiment 2). There were no aging-related differences in VO lipreading accuracy. Combined with response time data, this enhanced visual influence for the older adults was likely to be associated with an aging-related delay in auditory processing.
59
Stoesz BM, Jakobson LS. Developmental changes in attention to faces and bodies in static and dynamic scenes. Front Psychol 2014;5:193. [PMID: 24639664] [PMCID: PMC3944146] [DOI: 10.3389/fpsyg.2014.00193]
Abstract
Typically developing individuals show a strong visual preference for faces and face-like stimuli; however, this may come at the expense of attending to bodies or to other aspects of a scene. The primary goal of the present study was to provide additional insight into the development of attentional mechanisms that underlie perception of real people in naturalistic scenes. We examined the looking behaviors of typical children, adolescents, and young adults as they viewed static and dynamic scenes depicting one or more people. Overall, participants showed a bias to attend to faces more than to other parts of the scenes. Adding motion cues led to a reduction in the number, but an increase in the average duration, of face fixations in single-character scenes. When multiple characters appeared in a scene, motion-related effects were attenuated and participants shifted their gaze from faces to bodies, or made off-screen glances. Children showed the largest effects related to the introduction of motion cues or additional characters, suggesting that they find dynamic faces difficult to process and are especially prone to look away from faces when viewing complex social scenes, a strategy that could reduce the cognitive and affective load imposed by having to divide one's attention between multiple faces. Our findings provide new insights into the typical development of social attention during natural scene viewing, and lay the foundation for future work examining gaze behaviors in typical and atypical development.
60
Knowland VCP, Mercure E, Karmiloff-Smith A, Dick F, Thomas MSC. Audio-visual speech perception: a developmental ERP investigation. Dev Sci 2014;17:110-124. [PMID: 24176002] [PMCID: PMC3995015] [DOI: 10.1111/desc.12098]
Abstract
Being able to see a talking face confers a considerable advantage for speech perception in adulthood. However, behavioural data currently suggest that children fail to make full use of these available visual speech cues until age 8 or 9. This is particularly surprising given the potential utility of multiple informational cues during language learning. We therefore explored this at the neural level. The event-related potential (ERP) technique has been used to assess the mechanisms of audio-visual speech perception in adults, with visual cues reliably modulating auditory ERP responses to speech. Previous work has shown congruence-dependent shortening of auditory N1/P2 latency and congruence-independent attenuation of amplitude in the presence of auditory and visual speech signals, compared to auditory alone. The aim of this study was to chart the development of these well-established modulatory effects over mid-to-late childhood. Experiment 1 employed an adult sample to validate a child-friendly stimulus set and paradigm by replicating previously observed effects of N1/P2 amplitude and latency modulation by visual speech cues; it also revealed greater attenuation of component amplitude given incongruent audio-visual stimuli, pointing to a new interpretation of the amplitude modulation effect. Experiment 2 used the same paradigm to map cross-sectional developmental change in these ERP responses between 6 and 11 years of age. The effect of amplitude modulation by visual cues emerged over development, while the effect of latency modulation was stable over the child sample. These data suggest that auditory ERP modulation by visual speech represents separable underlying cognitive processes, some of which show earlier maturation than others over the course of development.
61
Foxe JJ, Molholm S, Del Bene VA, Frey HP, Russo NN, Blanco D, Saint-Amour D, Ross LA. Severe multisensory speech integration deficits in high-functioning school-aged children with Autism Spectrum Disorder (ASD) and their resolution during early adolescence. Cereb Cortex 2015;25:298-312. [PMID: 23985136] [DOI: 10.1093/cercor/bht213]
Abstract
Under noisy listening conditions, visualizing a speaker's articulations substantially improves speech intelligibility. This multisensory speech integration ability is crucial to effective communication, and the appropriate development of this capacity greatly impacts a child's ability to successfully navigate educational and social settings. Research shows that multisensory integration abilities continue developing late into childhood. The primary aim here was to track the development of these abilities in children with autism, since multisensory deficits are increasingly recognized as a component of the autism spectrum disorder (ASD) phenotype. The abilities of high-functioning ASD children (n = 84) to integrate seen and heard speech were assessed cross-sectionally, while environmental noise levels were systematically manipulated, comparing them with age-matched neurotypical children (n = 142). Severe integration deficits were uncovered in ASD, which were increasingly pronounced as background noise increased. These deficits were evident in school-aged ASD children (5-12 year olds), but were fully ameliorated in ASD children entering adolescence (13-15 year olds). The severity of multisensory deficits uncovered has important implications for educators and clinicians working in ASD. We consider the observation that the multisensory speech system recovers substantially in adolescence as an indication that it is likely amenable to intervention during earlier childhood, with potentially profound implications for the development of social communication abilities in ASD children.
62
Gleiss S, Kayser C. Eccentricity dependent auditory enhancement of visual stimulus detection but not discrimination. Front Integr Neurosci 2013;7:52. [PMID: 23882195] [PMCID: PMC3715717] [DOI: 10.3389/fnint.2013.00052]
Abstract
Sensory perception is enhanced by the complementary information provided by our different sensory modalities, and even apparently task-irrelevant stimuli in one modality can facilitate performance in another. While perception in general comprises both the detection of sensory objects and their discrimination and recognition, most studies on audio-visual interactions have focused on one or the other of these aspects. However, previous evidence, neuroanatomical projections between early sensory cortices, and computational mechanisms suggest that sounds might affect visual detection and discrimination differently, and differently at central versus peripheral retinal locations. We performed an experiment to test this directly by probing the enhancement of visual detection and discrimination by auxiliary sounds at different visual eccentricities within the same subjects. Specifically, we quantified the enhancement provided by sounds that reduce the overall uncertainty about the visual stimulus beyond basic multisensory co-stimulation. This revealed a general trend for stronger enhancement at peripheral locations in both tasks, but a statistically significant effect only for detection, and only at peripheral locations. Overall, this suggests that there are topographic differences in the auditory facilitation of basic visual processes and that these may differentially affect basic aspects of visual recognition.
63
Jerger S, Damian MF, Mills C, Bartlett J, Tye-Murray N, Abdi H. Effect of perceptual load on semantic access by speech in children. J Speech Lang Hear Res 2013;56:388-403. [PMID: 22896045] [PMCID: PMC3742031] [DOI: 10.1044/1092-4388(2012/11-0186)]
Abstract
PURPOSE To examine whether semantic access by speech requires attention in children. METHOD Children (N = 200) named pictures and ignored distractors on a cross-modal (distractors: auditory, no face) or multimodal (distractors: auditory with a static face, and audiovisual with a dynamic face) picture-word task. The cross-modal task had a low perceptual load and the multimodal task a high load (pictures were named on a blank screen vs. below the talker's face on his T-shirt, respectively). The semantic content of the distractors was manipulated to be related vs. unrelated to the picture (e.g., picture "dog" with distractors "bear" vs. "cheese"). If the irrelevant semantic content influences naming times on both tasks despite the variation in load, Lavie's (2005) perceptual load model proposes that semantic access is independent of capacity-limited attentional resources; if, however, irrelevant content influences naming only on the cross-modal (low-load) task, the model proposes that semantic access depends on attentional resources, which the higher-load task exhausts. RESULTS Irrelevant semantic content affected performance on both tasks in 6- to 9-year-olds but only on the cross-modal task in 4- to 5-year-olds. The addition of visual speech did not influence results on the multimodal task. CONCLUSION Younger and older children differ in their dependence on attentional resources for semantic access by speech.
64
65
The implicit use of spatial information develops later for crossmodal than for intramodal temporal processing. Cognition 2012;126:301-306. [PMID: 23099123] [DOI: 10.1016/j.cognition.2012.09.009]
Abstract
The integrated use of spatial and temporal information seems to support the separation of two sensory streams. The present study tested whether this facilitation depends on the encoding of sensory stimuli in externally anchored spatial coordinate systems. Fifty-nine children between 5 and 12 years as well as 12 young adults performed a crossmodal temporal order judgment (TOJ) task for simple visual and tactile stimuli. Stimuli were presented either within the same or in different hemifields. Presentation of the two modality inputs in different hemifields improved TOJ only in children aged 10 years and older. In contrast, intramodal TOJ performance (data from Pagel, Heed, & Röder, 2009) was better than crossmodal TOJ performance starting at the age of 6 years. An adult-like level of performance in the crossmodal TOJ task was evident only at the age of 12 years. We speculate that the ability to redundantly code sensory input in modality-specific and modality-independent spatial coordinates facilitates intramodal temporal processing. Further refinement of the processes providing external spatial coordinates then results in the integrated use of space and time to decide whether sensory inputs belong to a common object or to separate events.
66
Nava E, Pavani F. Changes in sensory dominance during childhood: converging evidence from the Colavita effect and the sound-induced flash illusion. Child Dev 2012;84:604-616. [DOI: 10.1111/j.1467-8624.2012.01856.x]
67
Hillock-Dunn A, Wallace MT. Developmental changes in the multisensory temporal binding window persist into adolescence. Dev Sci 2012;15:688-696. [PMID: 22925516] [DOI: 10.1111/j.1467-7687.2012.01171.x]
Abstract
We live in a world rich in sensory information, and consequently the brain is challenged with deciphering which cues from the various sensory modalities belong together. Determinations regarding the relatedness of sensory information appear to be based, at least in part, on the spatial and temporal relationships between the stimuli. Stimuli that are presented in close spatial and temporal correspondence are more likely to be associated with one another and thus 'bound' into a single perceptual entity. While there is a robust literature delineating behavioral changes in perception induced by multisensory stimuli, maturational changes in multisensory processing, particularly in the temporal realm, are poorly understood. The current study examines the developmental progression of multisensory temporal function by analyzing responses on an audiovisual simultaneity judgment task in 6- to 23-year-old participants. The overarching hypothesis for the study was that multisensory temporal function will mature with increasing age, with the developmental trajectory for this change being the primary point of inquiry. Results indeed reveal an age-dependent decrease in the size of the 'multisensory temporal binding window', the temporal interval within which multisensory stimuli are likely to be perceptually bound, with changes occurring over a surprisingly protracted time course that extends into adolescence.
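To make the dependent measure concrete, the sketch below shows one common way a temporal binding window is estimated from simultaneity-judgment data: fit a bell-shaped curve to the proportion of "simultaneous" responses across stimulus onset asynchronies and take its width at a criterion level. This is a hedged illustration with invented numbers, not the study's analysis code, and published analyses often fit the audio-first and visual-first sides separately.

# Minimal sketch (assumed analysis, synthetic data): estimating a
# multisensory temporal binding window (TBW) from simultaneity judgments.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: stimulus onset asynchrony in ms (negative = auditory
# first) and proportion of trials judged "simultaneous" at each SOA.
soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], dtype=float)
p_sync = np.array([0.10, 0.25, 0.55, 0.85, 0.95, 0.90, 0.70, 0.40, 0.15])

def synchrony_curve(soa, amplitude, center, width):
    """Gaussian-shaped synchrony function, peaking near (not necessarily at) 0 ms."""
    return amplitude * np.exp(-((soa - center) ** 2) / (2.0 * width ** 2))

(amp, center, width), _ = curve_fit(synchrony_curve, soas, p_sync,
                                    p0=(1.0, 0.0, 150.0))

# One common operationalization: full width at half maximum of the fit.
fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * width
print(f"Peak {amp:.2f} at {center:.0f} ms; TBW (FWHM) ~ {fwhm:.0f} ms")

An age-dependent narrowing of the window, as reported here, would show up as a smaller fitted width in older participants.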
68
Stevenson RA, Zemtsov RK, Wallace MT. Individual differences in the multisensory temporal binding window predict susceptibility to audiovisual illusions. J Exp Psychol Hum Percept Perform 2012;38:1517-1529. [PMID: 22390292] [DOI: 10.1037/a0027339]
Abstract
Human multisensory systems are known to bind inputs from the different sensory modalities into a unified percept, a process that leads to measurable behavioral benefits. This integrative process can be observed through multisensory illusions, including the McGurk effect and the sound-induced flash illusion, both of which demonstrate the ability of one sensory modality to modulate perception in a second modality. Such multisensory integration is highly dependent upon the temporal relationship of the different sensory inputs, with perceptual binding occurring within a limited range of asynchronies known as the temporal binding window (TBW). Previous studies have shown that this window is highly variable across individuals, but it is unclear how these variations in the TBW relate to an individual's ability to integrate multisensory cues. Here we provide evidence linking individual differences in multisensory temporal processes to differences in the individual's audiovisual integration of illusory stimuli. Our data provide strong evidence that the temporal processing of multiple sensory signals and the merging of multiple signals into a single, unified perception are highly related. Specifically, the width of the right side of an individual's TBW, where the auditory stimulus follows the visual, is significantly correlated with the strength of illusory percepts, as indexed both by stronger binding of synchronous sensory signals and by improved dissociation of asynchronous signals. These findings are discussed in terms of their possible neurobiological basis, relevance to the development of sensory integration, and possible importance for clinical conditions in which there is growing evidence that multisensory integration is compromised.
69
Neural correlates of interindividual differences in children's audiovisual speech perception. J Neurosci 2011;31:13963-13971. [PMID: 21957257] [DOI: 10.1523/jneurosci.2605-11.2011]
Abstract
Children use information from both the auditory and visual modalities to aid in understanding speech. A dramatic illustration of this multisensory integration is the McGurk effect, an illusion in which an auditory syllable is perceived differently when it is paired with an incongruent mouth movement. However, there are significant interindividual differences in McGurk perception: some children never perceive the illusion, while others always do. Because converging evidence suggests that the posterior superior temporal sulcus (STS) is a critical site for multisensory integration, we hypothesized that activity within the STS would predict susceptibility to the McGurk effect. To test this idea, we used BOLD fMRI in 17 children aged 6-12 years to measure brain responses to the following three audiovisual stimulus categories: McGurk incongruent, non-McGurk incongruent, and congruent syllables. Two separate analysis approaches, one using independent functional localizers and another using whole-brain voxel-based regression, showed differences in the left STS between perceivers and nonperceivers. The STS of McGurk perceivers responded significantly more than that of nonperceivers to McGurk syllables, but not to other stimuli, and perceivers' hemodynamic responses in the STS were significantly prolonged. In addition to the STS, weaker differences between perceivers and nonperceivers were observed in the fusiform face area and extrastriate visual cortex. These results suggest that the STS is an important source of interindividual variability in children's audiovisual speech perception.
70
David N, Schneider TR, Vogeley K, Engel AK. Impairments in multisensory processing are not universal to the autism spectrum: no evidence for crossmodal priming deficits in Asperger syndrome. Autism Res 2011;4:383-388. [DOI: 10.1002/aur.210]
71
72
Innes-Brown H, Barutchu A, Shivdasani MN, Crewther DP, Grayden DB, Paolini AG. Susceptibility to the flash-beep illusion is increased in children compared to adults. Dev Sci 2011;14:1089-1099. [DOI: 10.1111/j.1467-7687.2011.01059.x]
73
Taylor N, Isaac C, Milne E. A comparison of the development of audiovisual integration in children with autism spectrum disorders and typically developing children. J Autism Dev Disord 2010;40:1403-1411. [PMID: 20354776] [DOI: 10.1007/s10803-010-1000-4]
Abstract
This study aimed to investigate the development of audiovisual integration in children with Autism Spectrum Disorder (ASD). Audiovisual integration was measured using the McGurk effect in children with ASD aged 7-16 years and in typically developing children (control group) matched approximately for age, sex, nonverbal ability, and verbal ability. Results showed that the children with ASD were delayed in visual accuracy and audiovisual integration compared to the control group. However, on the audiovisual integration measure, children with ASD appeared to 'catch up' with their typically developing peers at the older age ranges. The suggestion that children with ASD show a deficit in audiovisual integration that diminishes with age has clinical implications for those assessing and treating these children.
74
Hillock AR, Powers AR, Wallace MT. Binding of sights and sounds: age-related changes in multisensory temporal processing. Neuropsychologia 2011;49:461-467. [PMID: 21134385] [DOI: 10.1016/j.neuropsychologia.2010.11.041]
Abstract
We live in a multisensory world, and one of the challenges the brain is faced with is deciding what information belongs together. Our ability to make assumptions about the relatedness of multisensory stimuli is partly based on their temporal and spatial relationships. Stimuli that are proximal in time and space are likely to be bound together by the brain and ascribed to a common external event. Using this framework we can describe multisensory processes in the context of spatial and temporal filters or windows that compute the probability of the relatedness of stimuli. Whereas numerous studies have examined the characteristics of these multisensory filters in adults, and discrepancies in window size have been reported between infants and adults, virtually nothing is known about multisensory temporal processing in childhood. To examine this, we compared the ability of 10- and 11-year-olds and adults to detect audiovisual temporal asynchrony. Findings revealed striking and asymmetric age-related differences. Whereas children were able to identify asynchrony as readily as adults when visual stimuli preceded auditory cues, significant group differences were identified at moderately long stimulus onset asynchronies (150-350 ms) where the auditory stimulus came first. Results suggest that changes in audiovisual temporal perception extend beyond the first decade of life. In addition to furthering our understanding of basic multisensory developmental processes, these findings have implications for disorders (e.g., autism, dyslexia) in which emerging evidence suggests alterations in multisensory temporal function.
75
Saetrevik B.
Abstract
The classic McGurk study showed that presentation of one syllable in the visual modality simultaneous with a different syllable in the auditory modality creates the perception of a third, not-presented syllable. The current study presented dichotic syllable pairs (one in each ear) simultaneously with video clips of a mouth pronouncing the syllable from one of the ears, or pronouncing a syllable that was not part of the dichotic pair. When participants were asked to report the auditory stimuli, their responses were shifted towards the auditory stimulus from the side that matched the visual stimulus.
76
77