1
Fellinger J, Dall M, Weber C, Holzinger D. Communicative deficits associated with maladaptive behavior in individuals with deafness and special needs. Front Psychiatry 2022; 13:944719. [PMID: 35966495] [PMCID: PMC9372491] [DOI: 10.3389/fpsyt.2022.944719]
Abstract
BACKGROUND At least one in three individuals who are prelingually deaf has special needs, most commonly due to intellectual disabilities. The scant literature on challenging behavior in this population, however, suggests high prevalence rates and an important need to better understand the contributing factors. AIM We sought to analyze the prevalence of maladaptive behavior and its association with intellectual functioning, adaptive skills, language skills, and social communication in a population of adults with deafness and special needs. METHODS Participants were 61 individuals from three therapeutic living communities established for people with deafness and special needs. The participants had a mean age of 54.7 years, and 64% were male. Intellectual functioning was measured with two versions of the Snijders-Oomen Non-verbal Intelligence Scale. The Vineland-II Scales were used to assess adaptive and maladaptive behavior. Language skills were measured with instruments specifically adapted for this population, including the Reynell Developmental Language Comprehension Scale, the comprehension scale of the Child Development Inventory, and the Profile of Multiple Language Proficiencies. Because the instruments correlated highly, a composite language score was used. A questionnaire designed to measure social communication in adults with intellectual disabilities was also administered. RESULTS The mean nonverbal developmental reference age was 6.5 years, whereas the equivalent for the language measures was about 3.5 years. The prevalence rate of elevated maladaptive behavior was 41% (v-scale score ≥18), and 18% of the participants had a clinically significant score (v-scale score ≥21). Regression analyses showed that only language and social communication skills were significantly associated with maladaptive behavior; intellectual functioning and adaptive skills were not. CONCLUSION These findings emphasize the importance of the continuous promotion of communicative skills, as people with better language and social communication skills show lower levels of maladaptive behavior.
Affiliation(s)
- Johannes Fellinger
- Research Institute for Developmental Medicine, Johannes Kepler University of Linz, Linz, Austria; Institute of Neurology of Senses and Language, Hospital of St. John of God, Linz, Austria; Department of Psychiatry and Psychotherapy, Clinical Division of Social Psychiatry, Medical University of Vienna, Vienna, Austria
- Magdalena Dall
- Research Institute for Developmental Medicine, Johannes Kepler University of Linz, Linz, Austria
- Christoph Weber
- Research Institute for Developmental Medicine, Johannes Kepler University of Linz, Linz, Austria; Department for Inclusive Education, University of Education Upper Austria, Linz, Austria
- Daniel Holzinger
- Research Institute for Developmental Medicine, Johannes Kepler University of Linz, Linz, Austria; Institute of Neurology of Senses and Language, Hospital of St. John of God, Linz, Austria; Institute of Linguistics, Faculty of Humanities, University of Graz, Graz, Austria
2
Pant R, Kanjlia S, Bedny M. A sensitive period in the neural phenotype of language in blind individuals. Dev Cogn Neurosci 2020; 41:100744. [PMID: 31999565] [PMCID: PMC6994632] [DOI: 10.1016/j.dcn.2019.100744]
Abstract
Congenital blindness modifies the neural basis of language: "visual" cortices respond to linguistic information, and fronto-temporal language networks are less left-lateralized. We tested the hypothesis that this plasticity follows a sensitive period by comparing the neural basis of sentence processing in adult-onset blind (AB, n = 16), congenitally blind (CB, n = 22), and blindfolded sighted adults (n = 18). In Experiment 1, participants made semantic judgments about spoken sentences and, in a control condition, solved math equations. In Experiment 2, participants answered "who did what to whom" yes/no questions for grammatically complex (with syntactic movement) and simpler sentences; in a control condition, they performed a memory task with non-words. In both experiments, the visual cortices of CB and AB but not sighted participants responded more to sentences than to control conditions, although the effect was much larger in the CB group. Only the "visual" cortex of CB participants responded to grammatical complexity. Unlike the CB group, the AB group showed no reduction in the left-lateralization of the fronto-temporal language network relative to the sighted. These results suggest that congenital blindness modifies the neural basis of language differently from adult-onset blindness, consistent with a developmental sensitive-period hypothesis.
Affiliation(s)
- Rashi Pant
- Department of Psychological and Brain Sciences, Johns Hopkins University, USA; Biological Psychology and Neuropsychology, University of Hamburg, Germany.
- Shipra Kanjlia
- Department of Psychological and Brain Sciences, Johns Hopkins University, USA
- Marina Bedny
- Department of Psychological and Brain Sciences, Johns Hopkins University, USA
3
Hall WC. What You Don't Know Can Hurt You: The Risk of Language Deprivation by Impairing Sign Language Development in Deaf Children. Matern Child Health J 2017; 21:961-965. [PMID: 28185206] [DOI: 10.1007/s10995-017-2287-y]
Abstract
A long-standing belief is that sign language interferes with spoken language development in deaf children, despite a chronic lack of evidence supporting this belief. This deserves discussion, as poor life outcomes continue to be seen in the deaf population. This commentary synthesizes research outcomes with signing and non-signing children and highlights fully accessible language as a protective factor for healthy development. Brain changes associated with language deprivation may be misrepresented as sign language interfering with the spoken language outcomes of cochlear implants. This may lead professionals and organizations to advocate for preventing sign language exposure before implantation and to spread misinformation. The existence of a single time-sensitive language acquisition window means a strong possibility of permanent brain changes when spoken language is not fully accessible to the deaf child and sign language exposure is delayed, as is often standard practice. There is no empirical evidence of harm from sign language exposure, there is some evidence of its benefits, and there is growing evidence that lack of language access has negative implications, including cognitive delays, mental health difficulties, lower quality of life, higher trauma, and limited health literacy. Claims that cochlear implant- and spoken language-only approaches are more effective than sign language-inclusive approaches are not empirically supported. Cochlear implants are an unreliable standalone first-language intervention for deaf children. Priorities in deaf child development should focus on the healthy growth of all developmental domains through a fully accessible first-language foundation such as sign language, rather than on auditory deprivation and speech skills.
Affiliation(s)
- Wyatte C Hall
- Clinical & Translational Science Institute, University of Rochester Medical Center, Rochester, NY, USA.
4
Language deprivation syndrome: a possible neurodevelopmental disorder with sociocultural origins. Soc Psychiatry Psychiatr Epidemiol 2017; 52:761-776. [PMID: 28204923] [PMCID: PMC5469702] [DOI: 10.1007/s00127-017-1351-7]
Abstract
PURPOSE There is a need to better understand the epidemiological relationship between language development and psychiatric symptomatology. Language development can be particularly affected by social factors, as seen in the developmental choices made for deaf children, which can create language deprivation. A possible mental health syndrome may be present in deaf patients with severe language deprivation. METHODS Electronic databases were searched to identify publications focusing on language development and mental health in the deaf population. Screening narrowed the search results to 35 relevant publications. RESULTS Although empirical evidence is very limited, clinicians working with deaf patients appear to describe a consistent mental health syndrome. Possible features include language dysfluency, fund-of-knowledge deficits, and disruptions in thinking, mood, and/or behavior. CONCLUSION The clinical specialty of deaf mental health appears to be struggling with a clinically observed phenomenon that has yet to be empirically investigated and defined within the DSM. Descriptions of patients in clinical settings suggest a language deprivation syndrome. Language development experiences have an epidemiological relationship with psychiatric outcomes in deaf people. This requires more empirical attention and has implications for other populations with behavioral health disparities as well.
5
Aparicio M, Peigneux P, Charlier B, Balériaux D, Kavec M, Leybaert J. The Neural Basis of Speech Perception through Lipreading and Manual Cues: Evidence from Deaf Native Users of Cued Speech. Front Psychol 2017; 8:426. [PMID: 28424636] [PMCID: PMC5371603] [DOI: 10.3389/fpsyg.2017.00426]
Abstract
We present here the first neuroimaging data on the perception of Cued Speech (CS) by deaf adults who are native users of CS. CS is a visual mode of communicating a spoken language through a set of manual cues that accompany lipreading and disambiguate it. With CS, sublexical units of the oral language are conveyed clearly and completely through the visual modality without requiring hearing. Comparing the neural processing of CS in deaf individuals with the processing of audiovisual (AV) speech in normally hearing individuals offers a unique opportunity to explore the similarities and differences in the neural processing of an oral language delivered in a visuo-manual vs. an AV modality. The study included deaf adult participants who were early CS users and hearing native users of French who process speech audiovisually. Words were presented in an event-related fMRI design, with three conditions per group. The deaf participants saw CS words (manual cues + lipreading), words presented as manual cues alone, and words to be lipread without manual cues. The hearing group saw AV spoken words, audio-alone words, and lipread-alone words. Three findings are highlighted. First, the middle and superior temporal gyri (excluding Heschl's gyrus) and the left inferior frontal gyrus pars triangularis constituted a common, amodal neural basis for AV and CS perception. Second, integration was inferred in posterior parts of the superior temporal sulcus for audio and lipread information in AV speech, but in the occipito-temporal junction, including MT/V5, for the manual cues and lipreading in CS. Third, the perception of manual cues showed much greater overlap with the regions activated by CS (manual cues + lipreading) than lipreading alone did, supporting the notion that manual cues play a larger role than lipreading in CS processing. The present study contributes to a better understanding of the role of manual cues in supporting visual speech perception within the framework of the multimodal nature of human communication.
Affiliation(s)
- Mario Aparicio
- Laboratory of Cognition, Language and Development, Centre de Recherches Neurosciences et Cognition, Université Libre de Bruxelles, Brussels, Belgium
- Philippe Peigneux
- Neuropsychology and Functional Neuroimaging Research Unit (UR2NF), Centre de Recherches Cognition et Neurosciences, Université Libre de Bruxelles, Brussels, Belgium
- Brigitte Charlier
- Laboratory of Cognition, Language and Development, Centre de Recherches Neurosciences et Cognition, Université Libre de Bruxelles, Brussels, Belgium
- Danielle Balériaux
- Department of Radiology, Clinics of Magnetic Resonance, Erasme Hospital, Brussels, Belgium
- Martin Kavec
- Department of Radiology, Clinics of Magnetic Resonance, Erasme Hospital, Brussels, Belgium
- Jacqueline Leybaert
- Laboratory of Cognition, Language and Development, Centre de Recherches Neurosciences et Cognition, Université Libre de Bruxelles, Brussels, Belgium
6
Lane C, Kanjlia S, Richardson H, Fulton A, Omaki A, Bedny M. Reduced Left Lateralization of Language in Congenitally Blind Individuals. J Cogn Neurosci 2016; 29:65-78. [PMID: 27647280] [DOI: 10.1162/jocn_a_01045]
Abstract
Language processing depends on a left-lateralized network of frontotemporal cortical regions. This network is remarkably consistent across individuals and cultures. However, there is also evidence that developmental factors, such as delayed exposure to language, can modify this network. Recently, it has been found that, in congenitally blind individuals, the typical frontotemporal language network expands to include parts of "visual" cortices. Here, we report that blindness is also associated with reduced left lateralization in frontotemporal language areas. We analyzed fMRI data from two samples of congenitally blind adults (n = 19 and n = 13) and one sample of congenitally blind children (n = 20). Laterality indices were computed for sentence comprehension relative to three different control conditions: solving math equations (Experiment 1), a memory task with nonwords (Experiment 2), and a "does this come next?" task with music (Experiment 3). Across experiments and participant samples, the frontotemporal language network was less left-lateralized in congenitally blind than in sighted individuals. Reduction in left lateralization was not related to Braille reading ability or amount of occipital plasticity. Notably, we observed a positive correlation between the lateralization of frontotemporal cortex and that of language-responsive occipital areas in blind individuals. Blind individuals with right-lateralized language responses in frontotemporal cortices also had right-lateralized occipital responses to language. Together, these results reveal a modified neurobiology of language in blindness. Our findings suggest that, despite its usual consistency across people, the neurobiology of language can be modified by nonlinguistic experiences.
7
Tabaquim MDLM, Nardi CG, Ferrari JB, Moretti CN, Yamada MO, Bevilacqua MC. Avaliação do desenvolvimento cognitivo e afetivo-social de crianças com perda auditiva [Assessment of the cognitive and affective-social development of children with hearing loss]. Revista CEFAC 2013. [DOI: 10.1590/s1516-18462013005000051]
Abstract
OBJECTIVE to assess the level of cognitive and affective-social development of children diagnosed with hearing loss. METHOD 50 children of both genders diagnosed with hearing loss, with a mean age of 16.1 months, took part in the study. A semi-structured interview with the parents was used to obtain the child's life history, together with the Behavioral Development Scale and the Infant Withdrawal Reaction Assessment Scale. RESULTS the study identified bilateral profound hearing loss in 80% of the children. The Developmental Quotient was within the normal range in 76% of the sample, with language and personal-social functions the most impaired. Levels of affective and interactional adjustment were characterized by alarm reactions indicating problems in personal-social interaction. CONCLUSION delayed verbal and non-verbal skills had implications for cognitive development, suggestive of the condition of the child with hearing loss and the associated risk factors, with circumstantial limitation of the communicative exchanges that promote the development of adaptive skills and the strengthening of self-esteem for relationships.
8
Secora KR, Peterson JR, Urbano CM, Chung B, Okanoya K, Cooper BG. Syringeal specialization of frequency control during song production in the Bengalese finch (Lonchura striata domestica). PLoS One 2012; 7:e34135. [PMID: 22479543] [PMCID: PMC3313989] [DOI: 10.1371/journal.pone.0034135]
Abstract
Background Singing in songbirds is a complex, learned behavior that shares many parallels with human speech. The avian vocal organ (syrinx) has two potential sound sources, and each sound generator is under unilateral, ipsilateral neural control. Songbird species vary in their use of bilateral or unilateral phonation (lateralized sound production) and in rapid switching between left and right sound generation (interhemispheric switching of motor control). Bengalese finches (Lonchura striata domestica) have received considerable attention because they rapidly modify their song in response to manipulations of auditory feedback. However, how the left and right sides of the syrinx contribute to acoustic control of song has not been studied. Methodology Three manipulations of lateralized syringeal control of sound production were conducted. First, unilateral syringeal muscular control was eliminated by resecting the left or right tracheosyringeal portion of the hypoglossal nerve, which provides neuromuscular innervation of the syrinx; spectral and temporal features of song were compared before and after lateralized nerve injury. Second, either the left or right sound source was devoiced to confirm the role of each sound generator in the control of acoustic phonology. Third, air pressure was recorded before and after unilateral denervation to quantify acoustic change within individual syllables following lateralized nerve resection. Significance These experiments demonstrate that the left sound source produces louder, higher-frequency, lower-entropy sounds, and the right sound generator produces lower-amplitude, lower-frequency, higher-entropy sounds. The bilateral division of labor is complex, and the frequency specialization is the opposite of the pattern observed in most songbirds. Further, there is evidence for rapid interhemispheric switching during song production. Lateralized control of song production in Bengalese finches may enhance the acoustic complexity of song and facilitate the rapid modification of sound production following manipulations of auditory feedback.
Affiliation(s)
- Kristen R. Secora
- Department of Psychology, Texas Christian University, Fort Worth, Texas, United States of America
- Jennifer R. Peterson
- Department of Psychology, Texas Christian University, Fort Worth, Texas, United States of America
- Catherine M. Urbano
- Department of Psychology, Texas Christian University, Fort Worth, Texas, United States of America
- Boah Chung
- Department of Psychology, Texas Christian University, Fort Worth, Texas, United States of America
- Kazuo Okanoya
- Department of Cognitive and Behavioral Sciences, The University of Tokyo, Tokyo, Japan
- Brenton G. Cooper
- Department of Psychology, Texas Christian University, Fort Worth, Texas, United States of America
9
Aparicio M, Peigneux P, Charlier B, Neyrat C, Leybaert J. Early experience of Cued Speech enhances speechreading performance in deaf. Scand J Psychol 2011; 53:41-6. [PMID: 21995589] [DOI: 10.1111/j.1467-9450.2011.00919.x]
Abstract
It is known that deaf individuals usually outperform normal-hearing subjects in speechreading; however, the underlying reasons remain unclear. In the present study, speechreading performance was assessed in normal-hearing participants (NH), deaf participants who had been exposed early and intensively to the Cued Speech (CS) system, and deaf participants exposed to oral language without Cued Speech (NCS). Results show a gradation in performance, with the highest performance in the CS group, then the NCS group, and finally the NH participants. Moreover, error analysis suggests that speechreading processing is more accurate in the CS group than in the other groups. Given that early and intensive CS exposure has been shown to promote the development of accurate phonological processing, we propose that the higher speechreading performance of Cued Speech users is linked to a better capacity for phonological decoding of the visual articulators.
Affiliation(s)
- Mario Aparicio
- Laboratoire Cognition Langage Développement, Université Libre de Bruxelles, Belgium.
10
Hemispheric differences in processing of vocalizations depend on early experience. Proc Natl Acad Sci U S A 2010; 107:2301-6. [PMID: 20133876] [DOI: 10.1073/pnas.0900091107]
Abstract
An intriguing phenomenon in the neurobiology of language is lateralization: the dominant role of one hemisphere in a particular function. Lateralization is not exclusive to language because lateral differences are observed in other sensory modalities, behaviors, and animal species. Despite much scientific attention, the function of lateralization, its possible dependence on experience, and the functional implications of such dependence have yet to be clearly determined. We have explored the role of early experience in the development of lateralized sensory processing in the brain, using the songbird model of vocal learning. By controlling exposure to natural vocalizations (through isolation, song tutoring, and muting), we manipulated the postnatal auditory environment of developing zebra finches, and then assessed effects on hemispheric specialization for communication sounds in adulthood. Using bilateral multielectrode recordings from a forebrain auditory area known to selectively process species-specific vocalizations, we found that auditory responses to species-typical songs and long calls, in both male and female birds, were stronger in the right hemisphere than in the left, and that right-side responses adapted more rapidly to stimulus repetition. We describe specific instances, particularly in males, where these lateral differences show an influence of auditory experience with song and/or the bird's own voice during development.
11
Rudner M, Andin J, Rönnberg J. Working memory, deafness and sign language. Scand J Psychol 2009; 50:495-505. [DOI: 10.1111/j.1467-9450.2009.00744.x]
12
Parmanto B, Saptono A, Murthi R, Safos C, Lathan CE. Secure telemonitoring system for delivering telerehabilitation therapy to enhance children's communication function to home. Telemed J E Health 2009; 14:905-11. [PMID: 19035799] [DOI: 10.1089/tmj.2008.0003]
Abstract
A secure telemonitoring system was developed to transform the CosmoBot system, a stand-alone speech-language therapy application, into a telerehabilitation system. The CosmoBot system is a motivating, computer-based play character designed to enhance children's communication skills and stimulate verbal interaction during the remediation of speech and language disorders. It consists of the Mission Control human interface device and Cosmo's Play and Learn software, featuring a robot character named Cosmo that targets educational goals for children aged 3-5 years. The secure telemonitoring infrastructure links a distant speech-language therapist with the child and parents in home or school settings. The result is a telerehabilitation system that allows a speech-language therapist to monitor children's activities at home while providing feedback and therapy materials remotely. We have developed the means for telerehabilitation of communication skills that can be implemented in children's home settings. The architecture allows the therapist to remotely monitor the children after completion of the therapy session and to provide feedback for the following session.
Affiliation(s)
- Bambang Parmanto
- Rehabilitation Engineering Research Center (RERC) on Telerehabilitation, University of Pittsburgh, Pittsburgh, Pennsylvania 15260, USA.
13
D'Hondt M, Leybaert J. Lateralization effects during semantic and rhyme judgement tasks in deaf and hearing subjects. Brain Lang 2003; 87:227-240. [PMID: 14585292] [DOI: 10.1016/s0093-934x(03)00104-4]
Abstract
A visual hemifield experiment investigated hemispheric specialization in hearing children and adults and in prelingually, profoundly deaf youngsters who had been exposed intensively to Cued Speech (CS). Of interest was whether deaf CS users, whose development of the phonology and grammar of the spoken language is similar to that of hearing youngsters, would display similar laterality patterns in the processing of written language. Semantic, rhyme, and visual judgement tasks were used. In the visual task, no visual field (VF) advantage was observed. A right visual field (RVF, left hemisphere) advantage was obtained for both the deaf and the hearing subjects in the semantic task, supporting Neville's claim that the acquisition of competence in the grammar of a language is critical in establishing the specialization of the left hemisphere for language. For the rhyme task, however, an RVF advantage was obtained for the hearing subjects but not for the deaf ones, suggesting that different neural resources are recruited by deaf and hearing subjects. Hearing the sounds of language may be necessary to develop left-lateralised processing of rhymes.
Affiliation(s)
- Murielle D'Hondt
- Laboratoire de Psychologie Expérimentale, Free University of Brussels, Brussels, Belgium