1. Kyle FE, Trickey N. Speechreading, Phonological Skills, and Word Reading Ability in Children. Lang Speech Hear Serv Sch 2024;55:756-766. PMID: 38630019. DOI: 10.1044/2024_lshss-23-00129.
Abstract
PURPOSE: To investigate the relationship between speechreading ability, phonological skills, and word reading ability in typically developing children.
METHOD: Sixty-six typically developing children (6-7 years old) completed tasks measuring word reading, speechreading (words, sentences, and short stories), alliteration awareness, rhyme awareness, nonword reading, and rapid automatized naming (RAN).
RESULTS: Speechreading ability was significantly correlated with rhyme and alliteration awareness, phonological error rate, nonword reading, and reading ability (medium effect sizes), and with RAN (small effect size). Multiple regression analyses showed that speechreading was not a unique predictor of word reading ability beyond the contribution of phonological skills. A speechreading error analysis revealed that children tended to use a phonological strategy when speechreading, and that this strategy was used in particular by skilled speechreaders.
CONCLUSIONS: The current study provides converging evidence that speechreading and phonological skills are positively related in typically developing children. These skills are likely to have a reciprocal relationship, and children may benefit from having their attention drawn to the visual information available on the lips while learning letter sounds or learning to read, as this could augment and strengthen underlying phonological representations.
Affiliation(s)
- Fiona E Kyle
- Division of Psychology and Language Sciences, Department of Language and Cognition, University College London, United Kingdom
- UCL Deafness, Cognition and Language Research Centre, University College London, United Kingdom
- Natasha Trickey
- Division of Language and Communication Science, City, University of London, United Kingdom
2. Gijbels L, Lee AKC, Yeatman JD. Children with developmental dyslexia have equivalent audiovisual speech perception performance but their perceptual weights differ. Dev Sci 2024;27:e13431. PMID: 37403418. DOI: 10.1111/desc.13431.
Abstract
As reading is inherently a multisensory, audiovisual (AV) process in which visual symbols (i.e., letters) are connected to speech sounds, the question has been raised whether individuals with reading difficulties, such as children with developmental dyslexia (DD), have broader impairments in multisensory processing. This question has been posed before, yet it remains unanswered due to (a) the complexity and contentious etiology of DD and (b) a lack of consensus on developmentally appropriate AV processing tasks. We created an ecologically valid task for measuring multisensory AV processing by leveraging the natural phenomenon that speech perception improves when listeners are provided visual information from mouth movements (particularly when the auditory signal is degraded). We designed this AV processing task with low cognitive and linguistic demands such that children with and without DD would have equal unimodal (auditory and visual) performance. We then collected data from 135 children (ages 6.5-15) on an AV speech perception task to answer the following questions: (1) How do AV speech perception benefits manifest in children with and without DD? (2) Do all children use the same perceptual weights to create AV speech perception benefits? (3) What is the role of phonological processing in AV speech perception? We show that children with and without DD have equal AV speech perception benefits on this task, but that children with DD rely less on auditory processing in more difficult listening situations to create these benefits and weigh the two incoming information streams differently. Lastly, any reported differences in speech perception in children with DD might be better explained by differences in phonological processing than by differences in reading skills.
RESEARCH HIGHLIGHTS:
- Children with and without developmental dyslexia have equal audiovisual speech perception benefits, regardless of their phonological awareness or reading skills.
- Children with developmental dyslexia rely less on auditory performance to create audiovisual speech perception benefits.
- Individual differences in speech perception in children might be better explained by differences in phonological processing than by differences in reading skills.
Affiliation(s)
- Liesbeth Gijbels
- Department of Speech & Hearing Sciences, University of Washington, Seattle, Washington, USA
- University of Washington, Institute for Learning & Brain Sciences, Seattle, Washington, USA
- Adrian K C Lee
- Department of Speech & Hearing Sciences, University of Washington, Seattle, Washington, USA
- University of Washington, Institute for Learning & Brain Sciences, Seattle, Washington, USA
- Jason D Yeatman
- Division of Developmental-Behavioral Pediatrics, Stanford University School of Medicine, Stanford, California, USA
- Stanford University Graduate School of Education, Stanford, California, USA
- Stanford University Department of Psychology, Stanford, California, USA
3. Yu L, Xu J. The Development of Multisensory Integration at the Neuronal Level. Adv Exp Med Biol 2024;1437:153-172. PMID: 38270859. DOI: 10.1007/978-981-99-7611-9_10.
Abstract
Multisensory integration is a fundamental function of the brain. In the typical adult, multisensory neurons' responses to paired multisensory (e.g., audiovisual) cues are significantly more robust than the corresponding best unisensory response in many brain regions. Synthesizing sensory signals from multiple modalities can speed up sensory processing and improve the salience of outside events or objects. Despite its significance, multisensory integration is not a neonatal feature of the brain. Neurons' ability to effectively combine multisensory information does not appear rapidly but develops gradually during early postnatal life (in cats, roughly 4-12 weeks are required). Multisensory experience is critical for this developmental process. If animals are deprived of the relevant multisensory experience (e.g., prevented from sensing normal visual scenes or sounds), the development of the corresponding integrative ability is blocked until the appropriate multisensory experience is obtained. This section summarizes the extant literature on the development of multisensory integration (mainly using the cat superior colliculus as a model), sensory-deprivation-induced cross-modal plasticity, and how sensory experience (sensory exposure and perceptual learning) leads to plastic change and the modification of neural circuits in cortical and subcortical areas.
Affiliation(s)
- Liping Yu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China.
- Jinghong Xu
- Key Laboratory of Brain Functional Genomics (Ministry of Education and Shanghai), School of Life Sciences, East China Normal University, Shanghai, China
4. Jeschke L, Mathias B, von Kriegstein K. Inhibitory TMS over Visual Area V5/MT Disrupts Visual Speech Recognition. J Neurosci 2023;43:7690-7699. PMID: 37848284. PMCID: PMC10634547. DOI: 10.1523/jneurosci.0975-23.2023.
Abstract
During face-to-face communication, the perception and recognition of facial movements can facilitate individuals' understanding of what is said. Facial movements are a form of complex biological motion. Separate neural pathways are thought to process (1) simple, nonbiological motion, with an obligatory waypoint in the motion-sensitive visual middle temporal area (V5/MT), and (2) complex biological motion. Here, we present findings that challenge this dichotomy. Neuronavigated offline transcranial magnetic stimulation (TMS) over V5/MT in 24 participants (17 females and 7 males) led to increased response times in the recognition of simple, nonbiological motion as well as in visual speech recognition, compared with TMS over the vertex, an active control region. TMS of area V5/MT also reduced the practice effects on response times that are typically observed in both visual speech and motion recognition tasks over time. Our findings provide the first indication that area V5/MT causally influences the recognition of visual speech.
SIGNIFICANCE STATEMENT: In everyday face-to-face communication, speech comprehension is often facilitated by viewing a speaker's facial movements. Several brain areas contribute to the recognition of visual speech. One area of interest is the motion-sensitive visual middle temporal area (V5/MT), which has been associated with the perception of simple, nonbiological motion such as moving dots, as well as of more complex, biological motion such as visual speech. Here, we demonstrate using noninvasive brain stimulation that area V5/MT is causally relevant for recognizing visual speech. This finding provides new insights into the neural mechanisms that support the perception of human communication signals, which will help guide future research in typically developed individuals and populations with communication difficulties.
Affiliation(s)
- Lisa Jeschke
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, 01069 Dresden, Germany
- Brian Mathias
- School of Psychology, University of Aberdeen, Aberdeen AB243FX, United Kingdom
- Katharina von Kriegstein
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, 01069 Dresden, Germany
5. Pulliam G, Feldman JI, Woynaroski TG. Audiovisual multisensory integration in individuals with reading and language impairments: A systematic review and meta-analysis. Neurosci Biobehav Rev 2023;149:105130. PMID: 36933815. PMCID: PMC10243286. DOI: 10.1016/j.neubiorev.2023.105130.
Abstract
Differences in sensory function have been documented for a number of neurodevelopmental conditions, including reading and language impairments. Prior studies have measured audiovisual multisensory integration (i.e., the ability to combine inputs from the auditory and visual modalities) in these populations. The present study sought to systematically review and quantitatively synthesize the extant literature on audiovisual multisensory integration in individuals with reading and language impairments. A comprehensive search strategy yielded 56 reports, of which 38 were used to extract 109 group-difference and 68 correlational effect sizes. There was an overall difference between individuals with reading and language impairments and comparison groups in audiovisual integration. There was a nonsignificant trend toward moderation according to sample type (i.e., reading versus language) and publication/small-study bias for this model. Overall, there was a small but nonsignificant correlation between metrics of audiovisual integration and reading or language ability; this model was not moderated by sample or study characteristics, nor was there evidence of publication/small-study bias. Limitations and future directions for primary and meta-analytic research are discussed.
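The quantitative synthesis described above pools standardized effect sizes across studies. As a loose, self-contained illustration of that idea only (this is not the authors' actual analysis, and the function name, data, and variances below are all made up), a DerSimonian-Laird random-effects pooling of group-difference effect sizes can be sketched as:

```python
import math

def random_effects_meta(effects, variances):
    """Pool standardized effect sizes with a DerSimonian-Laird
    random-effects model; returns (pooled estimate, SE, tau^2)."""
    k = len(effects)
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    # Cochran's Q: weighted squared deviations from the fixed-effect mean
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

# Hypothetical group-difference effect sizes and their sampling variances
g = [0.9, 0.1, 0.6, -0.2, 0.45]
v = [0.05, 0.04, 0.06, 0.05, 0.03]
pooled, se, tau2 = random_effects_meta(g, v)
ci = (pooled - 1.96 * se, pooled + 1.96 * se)             # 95% confidence interval
```

A nonzero `tau2` indicates heterogeneity beyond sampling error, so each study is down-weighted by `1/(v + tau2)` rather than `1/v`.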
Affiliation(s)
- Grace Pulliam
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S, MCE South Tower 8310, Nashville 37232, TN, USA
- Jacob I Feldman
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S, MCE South Tower 8310, Nashville 37232, TN, USA; Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA.
- Tiffany G Woynaroski
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S, MCE South Tower 8310, Nashville 37232, TN, USA; Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; John A. Burns School of Medicine, University of Hawaii, Manoa, HI, USA
6. Pourhashemi F, Baart M, van Laarhoven T, Vroomen J. Want to quickly adapt to distorted speech and become a better listener? Read lips, not text. PLoS One 2022;17:e0278986. PMID: 36580461. PMCID: PMC9799298. DOI: 10.1371/journal.pone.0278986.
Abstract
When listening to distorted speech, does one become a better listener by looking at the face of the speaker or by reading subtitles that are presented along with the speech signal? We examined this question in two experiments in which we presented participants with spectrally distorted speech (4-channel noise-vocoded speech). During short training sessions, listeners received auditorily distorted words or pseudowords that were partially disambiguated by concurrently presented lipread information or text. After each training session, listeners were tested with new degraded auditory words. Learning effects (based on proportions of correctly identified words) were stronger if listeners had trained with words rather than with pseudowords (a lexical boost), and adding lipread information during training was more effective than adding text (a lipread boost). Moreover, the advantage of lipread speech over text training was also found when participants were tested more than a month later. The current results thus suggest that lipread speech may have surprisingly long-lasting effects on adaptation to distorted speech.
Affiliation(s)
- Faezeh Pourhashemi
- Dept. of Cognitive Neuropsychology, Tilburg University, Tilburg, The Netherlands
- Martijn Baart
- Dept. of Cognitive Neuropsychology, Tilburg University, Tilburg, The Netherlands
- BCBL, Basque Center on Cognition, Brain, and Language, Donostia, Spain
- Thijs van Laarhoven
- Dept. of Cognitive Neuropsychology, Tilburg University, Tilburg, The Netherlands
- Jean Vroomen
- Dept. of Cognitive Neuropsychology, Tilburg University, Tilburg, The Netherlands
7. Zhao P, Chen Y, Zhao L, Wu G, Zhou X. Generating images from audio under semantic consistency. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.03.015.
8. Galazka MA, Hadjikhani N, Sundqvist M, Åsberg Johnels J. Facial speech processing in children with and without dyslexia. Ann Dyslexia 2021;71:501-524. PMID: 34115279. PMCID: PMC8458188. DOI: 10.1007/s11881-021-00231-3.
Abstract
What role does the presence of facial speech play for children with dyslexia? Current literature proposes two distinctive claims. One claim states that children with dyslexia make less use of visual information from the mouth during speech processing due to a deficit in recruitment of audiovisual areas. An opposing claim suggests that children with dyslexia are in fact reliant on such information in order to compensate for auditory/phonological impairments. The current paper aims at directly testing these contrasting hypotheses (here referred to as "mouth insensitivity" versus "mouth reliance") in school-age children with and without dyslexia, matched on age and listening comprehension. Using eye tracking, in Study 1, we examined how children look at the mouth across conditions varying in speech processing demands. The results did not indicate significant group differences in looking at the mouth. However, correlation analyses suggest potentially important distinctions within the dyslexia group: those children with dyslexia who are better readers attended more to the mouth while presented with a person's face in a phonologically demanding condition. In Study 2, we examined whether the presence of facial speech cues is functionally beneficial when a child is encoding written words. The results indicated lack of overall group differences on the task, although those with less severe reading problems in the dyslexia group were more accurate when reading words that were presented with articulatory facial speech cues. Collectively, our results suggest that children with dyslexia differ in their "mouth reliance" versus "mouth insensitivity," a profile that seems to be related to the severity of their reading problems.
Affiliation(s)
- Martyna A Galazka
- Gillberg Neuropsychiatry Center, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden.
- Nouchine Hadjikhani
- Gillberg Neuropsychiatry Center, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden
- Harvard Medical School/MGH/MIT, Athinoula A. Martinos Center for Biomedical Imaging, Boston, MA, USA
- Maria Sundqvist
- Department of Education and Special Education, University of Gothenburg, Gothenburg, Sweden
- Jakob Åsberg Johnels
- Gillberg Neuropsychiatry Center, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden.
- Section of Speech and Language Pathology, Institute of Neuroscience and Physiology, University of Gothenburg, Gothenburg, Sweden.
9. Destoky F, Bertels J, Niesen M, Wens V, Vander Ghinst M, Leybaert J, Lallier M, Ince RAA, Gross J, De Tiège X, Bourguignon M. Cortical tracking of speech in noise accounts for reading strategies in children. PLoS Biol 2020;18:e3000840. PMID: 32845876. PMCID: PMC7478533. DOI: 10.1371/journal.pbio.3000840.
Abstract
Humans' propensity to acquire literacy relates to several factors, including the ability to understand speech in noise (SiN). Still, the nature of the relation between reading and SiN perception abilities remains poorly understood. Here, we dissect the interplay between (1) reading abilities, (2) classical behavioral predictors of reading (phonological awareness, phonological memory, and rapid automatized naming), and (3) electrophysiological markers of SiN perception in 99 elementary school children (26 with dyslexia). We demonstrate that, in typical readers, cortical representation of the phrasal content of SiN relates to the degree of development of the lexical (but not sublexical) reading strategy. In contrast, classical behavioral predictors of reading abilities and the ability to benefit from visual speech to represent the syllabic content of SiN account for global reading performance (i.e., speed and accuracy of lexical and sublexical reading). In individuals with dyslexia, we found preserved integration of visual speech information to optimize processing of syntactic information but not to sustain acoustic/phonemic processing. Finally, within children with dyslexia, measures of cortical representation of the phrasal content of SiN were negatively related to reading speed and positively related to the compromise between reading precision and reading speed, potentially owing to compensatory attentional mechanisms. These results clarify the nature of the relation between SiN perception and reading abilities in typical child readers and children with dyslexia and identify novel electrophysiological markers of emergent literacy.
Affiliation(s)
- Florian Destoky
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- Julie Bertels
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- Consciousness, Cognition and Computation group, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- Maxime Niesen
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- Service d'ORL et de chirurgie cervico-faciale, ULB-Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
- Vincent Wens
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
- Marc Vander Ghinst
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- Jacqueline Leybaert
- Laboratoire Cognition Langage et Développement, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- Marie Lallier
- BCBL, Basque Center on Cognition, Brain and Language, San Sebastian, Spain
- Robin A. A. Ince
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Joachim Gross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Institute for Biomagnetism and Biosignal analysis, University of Muenster, Muenster, Germany
- Xavier De Tiège
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- Department of Functional Neuroimaging, Service of Nuclear Medicine, CUB Hôpital Erasme, Université libre de Bruxelles (ULB), Brussels, Belgium
- Mathieu Bourguignon
- Laboratoire de Cartographie fonctionnelle du Cerveau, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- Laboratoire Cognition Langage et Développement, UNI–ULB Neuroscience Institute, Université libre de Bruxelles (ULB), Brussels, Belgium
- BCBL, Basque Center on Cognition, Brain and Language, San Sebastian, Spain
10. Conant LL, Liebenthal E, Desai A, Seidenberg MS, Binder JR. Differential activation of the visual word form area during auditory phoneme perception in youth with dyslexia. Neuropsychologia 2020;146:107543. PMID: 32598966. DOI: 10.1016/j.neuropsychologia.2020.107543.
Abstract
Developmental dyslexia is a learning disorder characterized by difficulties reading words accurately and/or fluently. Several behavioral studies have suggested the presence of anomalies at an early stage of phoneme processing, when the complex spectrotemporal patterns in the speech signal are analyzed and assigned to phonemic categories. In this study, fMRI was used to compare brain responses associated with categorical discrimination of speech syllables (P) and acoustically matched nonphonemic stimuli (N) in children and adolescents with dyslexia and in typically developing (TD) controls, aged 8-17 years. The TD group showed significantly greater activation during the P condition relative to N in an area of the left ventral occipitotemporal cortex that corresponds well with the region referred to as the "visual word form area" (VWFA). Regression analyses using reading performance as a continuous variable across the full group of participants yielded similar results. Overall, the findings are consistent with those of previous neuroimaging studies using print stimuli in individuals with dyslexia, which found reduced activation in left occipitotemporal regions. The current study shows, however, that the activation differences seen during reading are already apparent during auditory phoneme discrimination in youth with dyslexia, suggesting that the primary deficit in at least a subset of children may lie early in the speech processing stream and that categorical perception may be an important target of early intervention in children at risk for dyslexia.
Affiliation(s)
- Lisa L Conant
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA.
- Einat Liebenthal
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA; Department of Psychiatry, McLean Hospital, Harvard Medical School, Boston, MA, USA
- Anjali Desai
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
- Mark S Seidenberg
- Department of Psychology, University of Wisconsin-Madison, Madison, WI, USA
- Jeffrey R Binder
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
11. Brief Report: Speech-in-Noise Recognition and the Relation to Vocal Pitch Perception in Adults with Autism Spectrum Disorder and Typical Development. J Autism Dev Disord 2020;50:356-363. PMID: 31583624. DOI: 10.1007/s10803-019-04244-1.
Abstract
We tested the ability to recognise speech in noise, and its relation to the ability to discriminate vocal pitch, in adults with high-functioning autism spectrum disorder (ASD) and typically developed adults (matched pairwise on age, sex, and IQ). Typically developed individuals understood speech at higher noise levels than the ASD group. Within the control group, but not within the ASD group, better speech-in-noise recognition was significantly correlated with better vocal pitch discrimination. Our results show that speech-in-noise recognition is restricted in people with ASD. We speculate that perceptual impairments, such as difficulties in vocal pitch perception, might be relevant in explaining these difficulties in ASD.
12. Wallace MT, Woynaroski TG, Stevenson RA. Multisensory Integration as a Window into Orderly and Disrupted Cognition and Communication. Annu Rev Psychol 2020;71:193-219. DOI: 10.1146/annurev-psych-010419-051112.
Abstract
During our everyday lives, we are confronted with a vast amount of information from several sensory modalities. This multisensory information needs to be appropriately integrated for us to effectively engage with and learn from our world. Research carried out over the last half century has provided new insights into the way such multisensory processing improves human performance and perception; the neurophysiological foundations of multisensory function; the time course for its development; how multisensory abilities differ in clinical populations; and, most recently, the links between multisensory processing and cognitive abilities. This review summarizes the extant literature on multisensory function in typical and atypical circumstances, discusses the implications of the work carried out to date for theory and research, and points toward next steps for advancing the field.
Affiliation(s)
- Mark T. Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA
- Departments of Psychology and Pharmacology, Vanderbilt University, Nashville, Tennessee 37232, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee 37232, USA
- Vanderbilt Kennedy Center, Nashville, Tennessee 37203, USA
- Tiffany G. Woynaroski
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, Tennessee 37232, USA
- Vanderbilt Kennedy Center, Nashville, Tennessee 37203, USA
- Ryan A. Stevenson
- Departments of Psychology and Psychiatry and Program in Neuroscience, University of Western Ontario, London, Ontario N6A 3K7, Canada
- Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada
13. van Laarhoven T, Stekelenburg JJ, Vroomen J. Increased sub-clinical levels of autistic traits are associated with reduced multisensory integration of audiovisual speech. Sci Rep 2019;9:9535. PMID: 31267024. PMCID: PMC6606565. DOI: 10.1038/s41598-019-46084-0.
Abstract
Recent studies suggest that sub-clinical levels of autistic symptoms may be related to reduced processing of artificial audiovisual stimuli. It is unclear whether these findings extend to more natural stimuli such as audiovisual speech. The current study examined the relationship between autistic traits, measured by the Autism Spectrum Quotient, and audiovisual speech processing in a large non-clinical population, using a battery of experimental tasks assessing audiovisual perceptual binding, visual enhancement of speech embedded in noise, and audiovisual temporal processing. Several associations were found between autistic traits and audiovisual speech processing. Increased autistic-like imagination was related to reduced perceptual binding as measured by the McGurk illusion. Increased overall autistic symptomatology was associated with reduced visual enhancement of speech intelligibility in noise. Participants reporting increased levels of rigid and restricted behaviour were more likely to bind audiovisual speech stimuli over longer temporal intervals, while an increased tendency to focus on local aspects of sensory inputs was related to a narrower temporal binding window. These findings demonstrate that increased levels of autistic traits may be related to alterations in audiovisual speech processing, and are consistent with the notion of a spectrum of autistic traits that extends into the general population.
Affiliation(s)
- Thijs van Laarhoven
- Department of Cognitive Neuropsychology, Tilburg University, P.O. Box 90153, 5000 LE, Tilburg, The Netherlands.
- Jeroen J Stekelenburg
- Department of Cognitive Neuropsychology, Tilburg University, P.O. Box 90153, 5000 LE, Tilburg, The Netherlands
- Jean Vroomen
- Department of Cognitive Neuropsychology, Tilburg University, P.O. Box 90153, 5000 LE, Tilburg, The Netherlands
14
Cross-Modal Competition: The Default Computation for Multisensory Processing. J Neurosci 2018; 39:1374-1385. [PMID: 30573648 DOI: 10.1523/jneurosci.1806-18.2018] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2018] [Revised: 12/04/2018] [Accepted: 12/08/2018] [Indexed: 11/21/2022] Open
Abstract
Mature multisensory superior colliculus (SC) neurons integrate information across the senses to enhance their responses to spatiotemporally congruent cross-modal stimuli. The development of this neurotypic feature of SC neurons requires experience with cross-modal cues. In the absence of such experience the response of an SC neuron to congruent cross-modal cues is no more robust than its response to the most effective component cue. This "default" or "naive" state is believed to be one in which cross-modal signals do not interact. The present results challenge this characterization by identifying interactions between visual-auditory signals in male and female cats reared without visual-auditory experience. By manipulating the relative effectiveness of the visual and auditory cross-modal cues that were presented to each of these naive neurons, an active competition between cross-modal signals was revealed. Although contrary to current expectations, this result is explained by a neuro-computational model in which the default interaction is mutual inhibition. These findings suggest that multisensory neurons at all maturational stages are capable of some form of multisensory integration, and use experience with cross-modal stimuli to transition from their initial state of competition to their mature state of cooperation. By doing so, they develop the ability to enhance the physiological salience of cross-modal events, thereby increasing their impact on the sensorimotor circuitry of the SC and the likelihood that biologically significant events will elicit SC-mediated overt behaviors.
SIGNIFICANCE STATEMENT: The present results demonstrate that the default mode of multisensory processing in the superior colliculus is competition, not non-integration as previously characterized. A neuro-computational model explains how these competitive dynamics can be implemented via mutual inhibition, and how this default mode is superseded by the emergence of cooperative interactions during development.
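The contrast the abstract draws between the naive (competitive) and mature (cooperative) modes can be illustrated with a toy computation. The sketch below is an illustrative assumption of this editor, not the authors' published model: in the naive case each modality's drive is suppressed in proportion to the other's (mutual inhibition), so the combined response falls at or below the best unisensory response; in the mature case congruent cues are combined cooperatively, producing multisensory enhancement.

```python
def naive_response(v, a, w_inhib=0.5):
    """Toy "naive" SC neuron: cross-modal drives mutually inhibit.

    Each modality's drive is reduced in proportion to the other's,
    so the combined response is no more robust than the best
    unisensory response (illustrative only, not the paper's equations).
    """
    rv = max(v - w_inhib * a, 0.0)  # visual drive after auditory inhibition
    ra = max(a - w_inhib * v, 0.0)  # auditory drive after visual inhibition
    return rv + ra

def mature_response(v, a, gain=0.5):
    """Toy "mature" SC neuron: congruent cues combine cooperatively,
    enhancing the response beyond either unisensory drive alone."""
    return v + a + gain * min(v, a)

# With a strong visual and weaker auditory cue, competition depresses
# the combined response while cooperation enhances it.
v, a = 1.0, 0.5
print(naive_response(v, a))   # below max(v, a): competitive suppression
print(mature_response(v, a))  # above max(v, a): multisensory enhancement
```

The point of the toy model is only the qualitative signature: manipulating the relative effectiveness of the two cues changes the naive response in the direction predicted by mutual inhibition, which is how the authors' analysis distinguished competition from mere non-integration.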
15
Keetels M, Bonte M, Vroomen J. A Selective Deficit in Phonetic Recalibration by Text in Developmental Dyslexia. Front Psychol 2018; 9:710. [PMID: 29867675 PMCID: PMC5962785 DOI: 10.3389/fpsyg.2018.00710] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2018] [Accepted: 04/23/2018] [Indexed: 11/30/2022] Open
Abstract
Upon hearing an ambiguous speech sound, listeners may adjust their perceptual interpretation of the speech input in accordance with contextual information, like accompanying text or lipread speech (i.e., phonetic recalibration; Bertelson et al., 2003). As developmental dyslexia (DD) has been associated with reduced integration of text and speech sounds, we investigated whether this deficit becomes manifest when text is used to induce this type of audiovisual learning. Adults with DD and normal readers were exposed to ambiguous consonants halfway between /aba/ and /ada/ together with text or lipread speech. After this audiovisual exposure phase, they categorized auditory-only ambiguous test sounds. Results showed that individuals with DD, unlike normal readers, did not use text to recalibrate their phoneme categories, whereas their recalibration by lipread speech was spared. Individuals with DD demonstrated similar deficits when ambiguous vowels (halfway between /wIt/ and /wet/) were recalibrated by text. These findings indicate that DD is related to a specific letter-speech sound association deficit that extends across phoneme classes (vowels and consonants) but, as lipreading was spared, does not reflect a more general audiovisual integration deficit. In particular, these results highlight diminished reading-related audiovisual learning in addition to the commonly reported phonological problems in developmental dyslexia.
Affiliation(s)
- Mirjam Keetels
- Cognitive Neuropsychology Laboratory, Department of Cognitive Neuropsychology, Tilburg University, Tilburg, Netherlands
- Milene Bonte
- Maastricht Brain Imaging Center, Department Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Jean Vroomen
- Cognitive Neuropsychology Laboratory, Department of Cognitive Neuropsychology, Tilburg University, Tilburg, Netherlands
16
Franceschini S, Trevisan P, Ronconi L, Bertoni S, Colmar S, Double K, Facoetti A, Gori S. Action video games improve reading abilities and visual-to-auditory attentional shifting in English-speaking children with dyslexia. Sci Rep 2017; 7:5863. [PMID: 28725022 PMCID: PMC5517521 DOI: 10.1038/s41598-017-05826-8] [Citation(s) in RCA: 85] [Impact Index Per Article: 12.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2017] [Accepted: 06/02/2017] [Indexed: 11/09/2022] Open
Abstract
Dyslexia is characterized by difficulties in learning to read, and there is some evidence that action video games (AVG), without any direct phonological or orthographic stimulation, improve reading efficiency in Italian children with dyslexia. However, the cognitive mechanism underlying this improvement and the extent to which the benefits of AVG training generalize to the deep English orthography remain two critical questions. During reading acquisition, children have to integrate written letters with speech sounds, rapidly shifting their attention from the visual to the auditory modality. In our study, we tested reading skills, phonological working memory, visuo-spatial attention, localization of auditory, visual, and audiovisual stimuli, and cross-sensory attentional shifting in two matched groups of English-speaking children with dyslexia before and after they played AVG or non-action video games. The speed of word recognition and phonological decoding increased after playing AVG, but not non-action video games. Furthermore, focused visuo-spatial attention and visual-to-auditory attentional shifting also improved only after AVG training. This unconventional reading remediation program also increased phonological short-term memory and phoneme blending skills. Our report shows that an enhancement of visuo-spatial attention and phonological working memory, and an acceleration of visual-to-auditory attentional shifting, can directly translate into better reading in English-speaking children with dyslexia.
Affiliation(s)
- Sandro Franceschini
- Developmental and Cognitive Neuroscience Lab, Department of General Psychology, University of Padua, Padova, 35131, Italy; Child Psychopathology Unit, Scientific Institute "E. Medea", Bosisio Parini, Lecco, 23842, Italy.
- Piergiorgio Trevisan
- Department of Languages and Literatures, Communication, Education and Society, University of Udine, Udine, 33100, Italy
- Luca Ronconi
- Developmental and Cognitive Neuroscience Lab, Department of General Psychology, University of Padua, Padova, 35131, Italy; Child Psychopathology Unit, Scientific Institute "E. Medea", Bosisio Parini, Lecco, 23842, Italy; Center for Mind/Brain Sciences, University of Trento, Rovereto, Trento, 38068, Italy
- Sara Bertoni
- Developmental and Cognitive Neuroscience Lab, Department of General Psychology, University of Padua, Padova, 35131, Italy
- Susan Colmar
- Sydney School of Education and Social Work, University of Sydney, Sydney, NSW 2006, Australia
- Kit Double
- Sydney School of Education and Social Work, University of Sydney, Sydney, NSW 2006, Australia
- Andrea Facoetti
- Developmental and Cognitive Neuroscience Lab, Department of General Psychology, University of Padua, Padova, 35131, Italy; Child Psychopathology Unit, Scientific Institute "E. Medea", Bosisio Parini, Lecco, 23842, Italy
- Simone Gori
- Child Psychopathology Unit, Scientific Institute "E. Medea", Bosisio Parini, Lecco, 23842, Italy; Department of Human and Social Sciences, University of Bergamo, Bergamo, 24129, Italy