1. Lammert JM, Levine AT, Koshkebaghi D, Butler BE. Sign language experience has little effect on face and biomotion perception in bimodal bilinguals. Sci Rep 2023; 13:15328. [PMID: 37714887 PMCID: PMC10504335 DOI: 10.1038/s41598-023-41636-x]
Abstract
Sensory and language experience can affect brain organization and domain-general abilities. For example, D/deaf individuals show superior visual perception compared to hearing controls in several domains, including the perception of faces and peripheral motion. While these enhancements may result from sensory loss and subsequent neural plasticity, they may also reflect experience using a visual-manual language, like American Sign Language (ASL), where signers must process moving hand signs and facial cues simultaneously. In an effort to disentangle these concurrent sensory experiences, we examined how learning sign language influences visual abilities by comparing bimodal bilinguals (i.e., sign language users with typical hearing) and hearing non-signers. Bimodal bilinguals and hearing non-signers completed online psychophysical measures of face matching and biological motion discrimination. No significant group differences were observed across these two tasks, suggesting that sign language experience is insufficient to induce perceptual advantages in typical-hearing adults. However, ASL proficiency (but not years of experience or age of acquisition) was found to predict performance on the motion perception task among bimodal bilinguals. Overall, the results presented here highlight a need for more nuanced study of how linguistic environments, sensory experience, and cognitive functions impact broad perceptual processes and underlying neural correlates.
Affiliation(s)
- Jessica M Lammert
- Department of Psychology, University of Western Ontario, Western Interdisciplinary Research Building Room 6126, London, ON, N6A 5C2, Canada
- Western Institute for Neuroscience, University of Western Ontario, London, Canada
- Alexandra T Levine
- Department of Psychology, University of Western Ontario, Western Interdisciplinary Research Building Room 6126, London, ON, N6A 5C2, Canada
- Western Institute for Neuroscience, University of Western Ontario, London, Canada
- Dursa Koshkebaghi
- Undergraduate Neuroscience Program, University of Western Ontario, London, Canada
- Blake E Butler
- Department of Psychology, University of Western Ontario, Western Interdisciplinary Research Building Room 6126, London, ON, N6A 5C2, Canada.
- Western Institute for Neuroscience, University of Western Ontario, London, Canada.
- National Centre for Audiology, University of Western Ontario, London, Canada.
- Children's Health Research Institute, Lawson Health Research, London, Canada.
2. Heled E, Ohayon M. Working Memory for Faces among Individuals with Congenital Deafness. J Am Acad Audiol 2022; 33:342-348. [PMID: 36446592 DOI: 10.1055/s-0042-1754369]
Abstract
BACKGROUND Studies examining face processing among individuals with congenital deafness show inconsistent results that are often accounted for by sign language skill. However, working memory for faces, as an aspect of face processing, has not yet been examined in congenital deafness. PURPOSE To explore working memory for faces among individuals with congenital deafness who are skilled in sign language. RESEARCH DESIGN A quasi-experimental study of individuals with congenital deafness and a control group. STUDY SAMPLE Sixteen individuals with congenital deafness who are skilled in sign language and 18 participants with intact hearing, matched for age and education. INTERVENTION The participants performed two conditions of the N-back test in ascending difficulty (i.e., 1-back and 2-back). DATA COLLECTION AND ANALYSIS Levene's and Shapiro-Wilk tests were used to assess group homoscedasticity and normality, respectively. A two-way repeated-measures analysis of variance was applied to compare the groups' response time and accuracy on the N-back test, and Pearson correlations were computed between response time, accuracy, and duration of sign language experience. RESULTS The congenital deafness group outperformed controls in response time but not in accuracy. An interaction effect showed that this response-time advantage was significant for the 1-back but not the 2-back condition. Further, the 2-back condition was performed worse than the 1-back, marginally so in response time but significantly in accuracy. No significant correlation was found between response time or accuracy and duration of sign language experience. CONCLUSION The face processing advantage associated with congenital deafness depends on cognitive load, but sign language duration does not affect this trend. In addition, response time and accuracy are not equally sensitive to performance differences in the N-back test.
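The assumption checks and correlations described in this analysis can be reproduced with standard tools. The sketch below runs Levene's, Shapiro-Wilk, and Pearson tests on hypothetical data (the group sizes match the abstract, but the values are invented); the repeated-measures ANOVA itself is omitted, as it would need an additional package such as statsmodels' AnovaRM.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical response times (ms): 16 deaf signers, 18 hearing controls.
rt_deaf = rng.normal(650, 80, size=16)
rt_hearing = rng.normal(700, 80, size=18)
acc_deaf = rng.uniform(0.7, 1.0, size=16)   # hypothetical accuracy scores

# Levene's test: homoscedasticity across the two groups.
lev_stat, lev_p = stats.levene(rt_deaf, rt_hearing)

# Shapiro-Wilk: normality within each group.
sw_deaf = stats.shapiro(rt_deaf)
sw_hearing = stats.shapiro(rt_hearing)

# Pearson correlation, e.g. between response time and accuracy.
r, r_p = stats.pearsonr(rt_deaf, acc_deaf)

print(f"Levene p={lev_p:.3f}, Shapiro p={sw_deaf.pvalue:.3f}, r={r:.2f}")
```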
Affiliation(s)
- Eyal Heled
- Department of Psychology, Ariel University, Ariel, Israel
- Department of Neurological Rehabilitation, Sheba Medical Center, Ramat-Gan, Israel
- Maayon Ohayon
- Department of Psychology, Ariel University, Ariel, Israel
3. Craig M, Dewar M, Turner G, Collier T, Kapur N. Evidence for superior encoding of detailed visual memories in deaf signers. Sci Rep 2022; 12:9097. [PMID: 35641543 PMCID: PMC9156778 DOI: 10.1038/s41598-022-13000-y]
Abstract
Recent evidence shows that deaf signers outperform hearing non-signers in some tests of visual attention and discrimination. Furthermore, they can retain visual information better over short periods, i.e., seconds. However, it is unknown if deaf signers’ retention of detailed visual information is superior following more extended periods. We report a study investigating this possibility. Our data revealed that deaf individuals outperformed hearing people in a visual long-term memory test that probed the fine detail of new memories. Deaf individuals also performed better in a scene-discrimination test, which correlated positively with performance on the long-term memory test. Our findings provide evidence that deaf signers can demonstrate superior visual long-term memory, possibly because of enhanced visual attention during encoding. The relative contributions of factors including sign language fluency, protracted practice, and neural plasticity are still to be established. Our findings add to evidence showing that deaf signers are at an advantage in some respects, including the retention of detailed visual memories over the longer term.
Affiliation(s)
- Michael Craig
- Department of Psychology, Faculty of Health and Life Sciences, Northumbria University, Newcastle upon Tyne, UK
- Memory Lab, Department of Psychology, School of Social Sciences, Heriot-Watt University, Edinburgh, UK
- Michaela Dewar
- Memory Lab, Department of Psychology, School of Social Sciences, Heriot-Watt University, Edinburgh, UK
- Graham Turner
- Centre for Translation and Interpreting Studies in Scotland, School of Social Sciences, Heriot-Watt University, Edinburgh, UK
- Trudi Collier
- Memory Lab, Department of Psychology, School of Social Sciences, Heriot-Watt University, Edinburgh, UK
- Centre for Translation and Interpreting Studies in Scotland, School of Social Sciences, Heriot-Watt University, Edinburgh, UK
- Narinder Kapur
- Division of Psychology and Language Sciences, Department of Clinical, Education and Health Psychology, Faculty of Brain Sciences, University College London, London, UK
4. Grégoire A, Deggouj N, Dricot L, Decat M, Kupers R. Brain Morphological Modifications in Congenital and Acquired Auditory Deprivation: A Systematic Review and Coordinate-Based Meta-Analysis. Front Neurosci 2022; 16:850245. [PMID: 35418829 PMCID: PMC8995770 DOI: 10.3389/fnins.2022.850245]
Abstract
Neuroplasticity following deafness has been widely demonstrated in both humans and animals, but the anatomical substrate of these changes is not yet clear in the human brain. The question is nonetheless important, since hearing loss is a growing problem in an aging population, and knowledge of these brain changes could help explain some disappointing outcomes with cochlear implants and thereby improve hearing rehabilitation. A systematic review and a coordinate-based meta-analysis were performed on the morphological brain changes revealed by MRI in severe to profound hearing loss, whether congenital or acquired before or after language onset. Twenty-five papers were included in our review, covering more than 400 deaf subjects, most of them presenting prelingual deafness. The most consistent finding is a volumetric decrease in gray matter around the bilateral auditory cortex, confirmed by the coordinate-based meta-analysis, which shows three converging clusters in this region. The visual areas of deaf children are also significantly affected, with a decrease in the volume of both gray and white matter. Finally, deafness is associated with a gray matter increase within the cerebellum, especially on the right side. These results are discussed at length and compared with those from deaf animal models and blind humans, which demonstrate, for example, a much more consistent gray matter decrease along their respective primary sensory pathways. In human deafness, many factors besides deafness itself may interact with brain plasticity. One of the most important is the use of sign language and its age of acquisition, which induces, among other changes, alterations within the hand motor region and the visual cortex. Other confounding factors have received too little consideration in the current literature, such as the etiology of the hearing impairment, speech-reading ability, hearing aid use, and the frequently associated vestibular dysfunction or neurocognitive impairment. Another important weakness highlighted by this review is the scarcity of papers on postlingual deafness, even though it accounts for most of the deaf population. Further studies are needed to better understand these issues and, ultimately, to improve deafness rehabilitation.
Affiliation(s)
- Anaïs Grégoire
- Department of ENT, Cliniques Universitaires Saint-Luc, Brussels, Belgium
- Institute of NeuroScience (IoNS), UCLouvain, Brussels, Belgium
- Naïma Deggouj
- Department of ENT, Cliniques Universitaires Saint-Luc, Brussels, Belgium
- Institute of NeuroScience (IoNS), UCLouvain, Brussels, Belgium
- Laurence Dricot
- Institute of NeuroScience (IoNS), UCLouvain, Brussels, Belgium
- Monique Decat
- Department of ENT, Cliniques Universitaires Saint-Luc, Brussels, Belgium
- Institute of NeuroScience (IoNS), UCLouvain, Brussels, Belgium
- Ron Kupers
- Institute of NeuroScience (IoNS), UCLouvain, Brussels, Belgium
- Department of Neuroscience, Panum Institute, University of Copenhagen, Copenhagen, Denmark
- Ecole d’Optométrie, Université de Montréal, Montréal, QC, Canada
5. Tonelli A, Togoli I, Arrighi R, Gori M. Deprivation of Auditory Experience Influences Numerosity Discrimination, but Not Numerosity Estimation. Brain Sci 2022; 12:179. [PMID: 35203942 PMCID: PMC8869924 DOI: 10.3390/brainsci12020179]
Abstract
Number sense is the ability to estimate the number of items, and it is common to many species. Despite the numerous studies dedicated to unveiling how numerosity is processed in the human brain, to date, it is not clear whether the representation of numerosity is supported by a single general mechanism or by multiple mechanisms. Since it is known that deafness entails a selective impairment in the processing of temporal information, we assessed the approximate numerical abilities of deaf individuals to disentangle these two hypotheses. We used a numerosity discrimination task (2AFC) and an estimation task, in both cases using sequential (temporal) or simultaneous (spatial) stimuli. The results showed a selective impairment of the deaf participants compared with the controls (hearing) in the temporal numerosity discrimination task, while no difference was found to discriminate spatial numerosity. Interestingly, the deaf and hearing participants did not differ in spatial or temporal numerosity estimation. Overall, our results suggest that the deficit in temporal processing induced by deafness also impacts perception in other domains such as numerosity, where sensory information is conveyed in a temporal format, which further suggests the existence of separate mechanisms subserving the processing of temporal and spatial numerosity.
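In a 2AFC design like the one used here, discrimination precision is typically estimated by fitting a cumulative-Gaussian psychometric function to the response proportions and reading off a threshold or Weber fraction. A minimal sketch on hypothetical data (not the authors' stimuli or numbers):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical 2AFC data: proportion of "test more numerous" responses
# as a function of test numerosity (reference fixed at 16 dots).
test_n = np.array([8, 10, 12, 14, 16, 18, 20, 22, 24])
p_more = np.array([0.05, 0.10, 0.20, 0.35, 0.50, 0.70, 0.85, 0.92, 0.97])

# Cumulative Gaussian: mu = point of subjective equality (PSE),
# sigma = discrimination threshold (JND).
def psychometric(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, test_n, p_more, p0=[16, 4])

weber = sigma / mu  # Weber fraction: threshold relative to the PSE
print(f"PSE={mu:.1f}, JND={sigma:.1f}, Weber fraction={weber:.2f}")
```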
Affiliation(s)
- Alessia Tonelli
- U-VIP, Unit for Visually Impaired People, Istituto Italiano di Tecnologia, 16163 Genova, Italy
- Irene Togoli
- Cognitive Neuroscience Department, International School for Advanced Studies (SISSA), 34136 Trieste, Italy
- Roberto Arrighi
- Department of Neuroscience, Psychology, Pharmacology and Child Health, University of Florence, 50121 Florence, Italy
- Monica Gori
- U-VIP, Unit for Visually Impaired People, Istituto Italiano di Tecnologia, 16163 Genova, Italy
6. Ponticorvo S, Manara R, Cassandro E, Canna A, Scarpa A, Troisi D, Cassandro C, Cuoco S, Cappiello A, Pellecchia MT, Di Salle F, Esposito F. Cross-modal connectivity effects in age-related hearing loss. Neurobiol Aging 2021; 111:1-13. [PMID: 34915240 DOI: 10.1016/j.neurobiolaging.2021.09.024]
Abstract
Age-related sensorineural hearing loss (HL) leads to localized brain changes in the primary auditory cortex and long-range functional alterations, and is considered a risk factor for dementia. Nonhuman studies have repeatedly highlighted cross-modal brain plasticity in sensory brain networks other than those primarily involved in the peripheral damage; in this study, therefore, the cortical alterations associated with HL were analyzed using a whole-brain multimodal connectomic approach. Fifty-two HL and 30 normal-hearing participants were examined in a 3T MRI study along with audiological and neurological assessments. Between-region functional connectivity and whole-brain probabilistic tractography were calculated in a connectome-based manner, and graph theory was used to obtain low-dimensional features for the analysis of brain connectivity at global and local levels. The HL condition was associated with a different functional organization of the visual subnetwork, as revealed by significant increases in global efficiency, density, and clustering coefficient. These functional effects were mirrored by similar (but more subtle) structural effects, suggesting that a functional repurposing of visual cortical centers occurs to compensate for the age-related loss of hearing abilities.
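The graph measures named in this abstract are all computable directly from a binary adjacency matrix. The following self-contained sketch (a toy 4-node graph, not the authors' connectome pipeline) shows how density, mean clustering coefficient, and global efficiency are defined:

```python
import numpy as np

def graph_metrics(A):
    """Density, mean clustering coefficient, and global efficiency of an
    undirected, unweighted graph given by adjacency matrix A."""
    n = A.shape[0]
    k = A.sum(axis=1)                   # node degrees
    density = A.sum() / (n * (n - 1))   # realized fraction of possible edges
    tri = np.diag(A @ A @ A) / 2        # triangles through each node
    possible = k * (k - 1) / 2          # possible neighbor pairs per node
    clustering = np.divide(tri, possible, out=np.zeros(n),
                           where=possible > 0).mean()
    # All-pairs shortest path lengths via Floyd-Warshall on unit edge costs.
    d = np.where(A > 0, 1.0, np.inf)
    np.fill_diagonal(d, 0.0)
    for m in range(n):
        d = np.minimum(d, d[:, m:m + 1] + d[m:m + 1, :])
    # Global efficiency: mean of 1/d over all ordered pairs i != j.
    efficiency = (1.0 / d[~np.eye(n, dtype=bool)]).mean()
    return density, clustering, efficiency

# Toy example: a 4-node ring with one chord.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [1, 1, 1, 0]])
print(graph_metrics(A))
```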
Affiliation(s)
- Sara Ponticorvo
- Department of Medicine, Surgery and Dentistry, Scuola Medica Salernitana, University of Salerno, Baronissi, Italy
- Renzo Manara
- Department of Medicine, Surgery and Dentistry, Scuola Medica Salernitana, University of Salerno, Baronissi, Italy; Department of Neuroscience, University of Padova, Padova, Italy
- Ettore Cassandro
- Department of Medicine, Surgery and Dentistry, Scuola Medica Salernitana, University of Salerno, Baronissi, Italy; University Hospital "San Giovanni di Dio e Ruggi D'Aragona", Scuola Medica Salernitana, Salerno, Italy
- Antonietta Canna
- Department of Advanced Medical and Surgical Sciences, University of Campania "Luigi Vanvitelli", Napoli, Italy
- Alfonso Scarpa
- Department of Medicine, Surgery and Dentistry, Scuola Medica Salernitana, University of Salerno, Baronissi, Italy; University Hospital "San Giovanni di Dio e Ruggi D'Aragona", Scuola Medica Salernitana, Salerno, Italy
- Donato Troisi
- Department of Medicine, Surgery and Dentistry, Scuola Medica Salernitana, University of Salerno, Baronissi, Italy; University Hospital "San Giovanni di Dio e Ruggi D'Aragona", Scuola Medica Salernitana, Salerno, Italy
- Claudia Cassandro
- University Hospital "San Giovanni di Dio e Ruggi D'Aragona", Scuola Medica Salernitana, Salerno, Italy
- Sofia Cuoco
- Department of Medicine, Surgery and Dentistry, Scuola Medica Salernitana, University of Salerno, Baronissi, Italy; University Hospital "San Giovanni di Dio e Ruggi D'Aragona", Scuola Medica Salernitana, Salerno, Italy
- Arianna Cappiello
- Department of Medicine, Surgery and Dentistry, Scuola Medica Salernitana, University of Salerno, Baronissi, Italy; University Hospital "San Giovanni di Dio e Ruggi D'Aragona", Scuola Medica Salernitana, Salerno, Italy
- Maria Teresa Pellecchia
- Department of Medicine, Surgery and Dentistry, Scuola Medica Salernitana, University of Salerno, Baronissi, Italy; University Hospital "San Giovanni di Dio e Ruggi D'Aragona", Scuola Medica Salernitana, Salerno, Italy
- Francesco Di Salle
- Department of Medicine, Surgery and Dentistry, Scuola Medica Salernitana, University of Salerno, Baronissi, Italy; University Hospital "San Giovanni di Dio e Ruggi D'Aragona", Scuola Medica Salernitana, Salerno, Italy
- Fabrizio Esposito
- Department of Advanced Medical and Surgical Sciences, University of Campania "Luigi Vanvitelli", Napoli, Italy.
7. Lasfargues-Delannoy A, Strelnikov K, Deguine O, Marx M, Barone P. Supra-normal skills in processing of visuo-auditory prosodic information by cochlear-implanted deaf patients. Hear Res 2021; 410:108330. [PMID: 34492444 DOI: 10.1016/j.heares.2021.108330]
Abstract
Cochlear-implanted (CI) adults with acquired deafness are known to depend on multisensory integration (MSI) skills for speech comprehension, fusing speech-reading skills with their deficient auditory perception. However, little is known about how CI patients perceive prosodic information relating to speech content. Our study aimed to identify how CI patients use MSI between visual and auditory information to process the paralinguistic prosodic information of multimodal speech, and which visual strategies they employ. A psychophysical assessment was developed in which CI patients and hearing controls (NH) had to distinguish between a question and a statement. The controls were separated into two age groups (young and age-matched) to dissociate any effect of aging. In addition, the oculomotor strategies used when facing a speaker in this prosodic decision task were recorded using an eye-tracking device and compared to controls. This study confirmed that prosodic processing is multisensory, but it revealed that CI patients showed significant supra-normal audiovisual integration for prosodic information compared to hearing controls, irrespective of age: CI patients had a visuo-auditory gain more than three times larger than that observed in hearing controls. Furthermore, CI participants performed better in the visuo-auditory situation through a specific oculomotor exploration of the face, fixating the mouth region significantly more than young NH participants, who fixated the eyes, whereas the age-matched controls presented an intermediate exploration pattern divided equally between the eyes and mouth. To conclude, our study demonstrates that CI patients have supra-normal MSI skills when integrating visual and auditory linguistic prosodic information, and that they develop a specific adaptive strategy that contributes directly to speech comprehension.
Affiliation(s)
- Anne Lasfargues-Delannoy
- Université Fédérale de Toulouse - Université Paul Sabatier (UPS), France; UMR 5549 CerCo, UPS CNRS, France; CHU Toulouse, Service d'Oto-Rhino-Laryngologie (ORL), Otoneurologie et ORL Pédiatrique, Hôpital Pierre Paul Riquet, site Purpan, Toulouse, France
- Kuzma Strelnikov
- Université Fédérale de Toulouse - Université Paul Sabatier (UPS), France; UMR 5549 CerCo, UPS CNRS, France; CHU Toulouse, France
- Olivier Deguine
- Université Fédérale de Toulouse - Université Paul Sabatier (UPS), France; UMR 5549 CerCo, UPS CNRS, France; CHU Toulouse, Service d'Oto-Rhino-Laryngologie (ORL), Otoneurologie et ORL Pédiatrique, Hôpital Pierre Paul Riquet, site Purpan, Toulouse, France
- Mathieu Marx
- Université Fédérale de Toulouse - Université Paul Sabatier (UPS), France; UMR 5549 CerCo, UPS CNRS, France; CHU Toulouse, Service d'Oto-Rhino-Laryngologie (ORL), Otoneurologie et ORL Pédiatrique, Hôpital Pierre Paul Riquet, site Purpan, Toulouse, France
- Pascal Barone
- Université Fédérale de Toulouse - Université Paul Sabatier (UPS), France; UMR 5549 CerCo, UPS CNRS, France
8. Cross-modal plasticity and central deficiencies: the case of deafness and the use of cochlear implants. Handb Clin Neurol 2020. [PMID: 32977890 DOI: 10.1016/b978-0-444-64148-9.00025-9]
Abstract
The primary objective of this chapter is to describe the consequences of central deficiencies on the neurodevelopment of children. We approach this topic from the standpoint of congenital deafness. Thus we first present the current state of knowledge on cortical reorganization following congenital deafness. The allocation of auditory cortices to other sensory systems can enhance sensory processing and therefore the cognitive functions related to them. Second, we explore the linguistic development of deaf children. Given that the English written system is speech-based, its acquisition is complex and atypical for deaf children, usually leading to poorer achievements. Next, we explore the impact of a neural prosthesis named the cochlear implant on the neurocognitive and linguistic development of deaf children. In some cases, it allows the individuals to, at least partially, regain access to the lost sense. We also comment on the specific needs of the deaf population when it comes to neuropsychological assessment. Finally, we touch on the specific context of deaf children born of deaf parents, and therefore naturally exposed to sign language as the only means of communication.
9. Shalev T, Schwartz S, Miller P, Hadad BS. Do deaf individuals have better visual skills in the periphery? Evidence from processing facial attributes. Vis Cogn 2020. [DOI: 10.1080/13506285.2020.1770390]
Affiliation(s)
- Tal Shalev
- Department of Special Education, University of Haifa, Haifa, Israel
- Sivan Schwartz
- Department of Psychology, University of Haifa, Haifa, Israel
- Paul Miller
- Department of Special Education, University of Haifa, Haifa, Israel
- Bat-Sheva Hadad
- Department of Special Education, University of Haifa, Haifa, Israel
- Edmond J. Safra Brain Research Center, University of Haifa, Haifa, Israel
10. Pieniak M, Lachowicz-Tabaczek K, Masalski M, Hummel T, Oleszkiewicz A. Self-rated sensory performance in profoundly deaf individuals. Do deaf people share the conviction about sensory compensation? J Sens Stud 2020. [DOI: 10.1111/joss.12572]
Affiliation(s)
- Michal Pieniak
- Institute of Psychology, University of Wroclaw, Wroclaw, Poland
- Marcin Masalski
- Department and Clinic of Otolaryngology, Head and Neck Surgery, Wroclaw Medical University, Wroclaw, Poland
- Department of Biomedical Engineering, Wroclaw University of Science and Technology, Wroclaw, Poland
- Thomas Hummel
- Taste and Smell Clinic, Department of Otorhinolaryngology, Technische Universität Dresden, Dresden, Germany
- Anna Oleszkiewicz
- Institute of Psychology, University of Wroclaw, Wroclaw, Poland
- Taste and Smell Clinic, Department of Otorhinolaryngology, Technische Universität Dresden, Dresden, Germany
11. Simon M, Campbell E, Genest F, MacLean MW, Champoux F, Lepore F. The Impact of Early Deafness on Brain Plasticity: A Systematic Review of the White and Gray Matter Changes. Front Neurosci 2020; 14:206. [PMID: 32292323 PMCID: PMC7135892 DOI: 10.3389/fnins.2020.00206]
Abstract
Background: Auditory deprivation alters cortical and subcortical brain regions, primarily linked to auditory and language processing, resulting in behavioral consequences. Neuroimaging studies have reported various degrees of structural change, yet multiple variables in deafness profiles need to be considered for proper interpretation of the results. To date, many inconsistencies are reported in the gray and white matter alterations following early profound deafness. The purpose of this study was to provide the first systematic review synthesizing gray and white matter changes in deaf individuals. Methods: We conducted a systematic review according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement in 27 studies comprising 626 deaf individuals. Results: Evidence shows that auditory deprivation significantly alters the white matter across the primary and secondary auditory cortices. The most consistent alteration across studies was in the bilateral superior temporal gyri. Furthermore, reductions are reported in the fractional anisotropy of the white matter fibers comprising the inferior fronto-occipital fasciculus, the superior longitudinal fasciculus, and the subcortical auditory pathway. The reviewed studies also suggest that gray and white matter integrity is sensitive to early sign language acquisition, attenuating the effect of auditory deprivation on neurocognitive development. Conclusions: These findings suggest that understanding cortical reorganization through gray and white matter changes in auditory and non-auditory areas is an important factor in the development of auditory rehabilitation strategies for the deaf population.
Affiliation(s)
- Marie Simon
- Département de Psychologie, Centre de Recherche en Neuropsychologie et Cognition, Université de Montréal, Montreal, QC, Canada
- Emma Campbell
- Département de Psychologie, Centre de Recherche en Neuropsychologie et Cognition, Université de Montréal, Montreal, QC, Canada
- François Genest
- Département de Psychologie, Centre de Recherche en Neuropsychologie et Cognition, Université de Montréal, Montreal, QC, Canada
- Michèle W MacLean
- Département de Psychologie, Centre de Recherche en Neuropsychologie et Cognition, Université de Montréal, Montreal, QC, Canada
- François Champoux
- École d'Orthophonie et d'Audiologie, Université de Montréal, Montreal, QC, Canada
- Franco Lepore
- Département de Psychologie, Centre de Recherche en Neuropsychologie et Cognition, Université de Montréal, Montreal, QC, Canada
12. Gwinn OS, Jiang F. Hemispheric Asymmetries in Deaf and Hearing During Sustained Peripheral Selective Attention. J Deaf Stud Deaf Educ 2020; 25:1-9. [PMID: 31407782 PMCID: PMC6951033 DOI: 10.1093/deafed/enz030]
Abstract
Previous studies have shown that, compared to hearing individuals, early deaf individuals allocate relatively more attention to the periphery than to the central visual field. However, it is not clear whether these two groups also differ in their ability to selectively attend to specific peripheral locations. We examined deaf and hearing participants' selective attention using electroencephalography (EEG) and a frequency tagging paradigm, in which participants attended to one of two peripheral displays of moving dots that changed directions at different rates. Both participant groups showed similar amplifications and reductions in the EEG signal at the attended and unattended frequencies, indicating similar control over their peripheral attention for motion stimuli. However, for deaf participants these effects were larger in a right hemispheric region of interest (ROI), while for hearing participants these effects were larger in a left ROI. These results contribute to a growing body of evidence for a right hemispheric processing advantage in deaf populations when attending to motion.
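Frequency tagging exploits the fact that a stimulus modulated at a fixed rate drives an EEG response at exactly that frequency, so attentional amplification can be read off the amplitude spectrum. A toy simulation (assumed sampling rate and tag frequencies, not the study's actual parameters) illustrates the readout:

```python
import numpy as np

fs = 500.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)    # 10 s of simulated EEG

# Two peripheral dot displays "tagged" at different rates (hypothetical).
f_attended, f_unattended = 7.5, 12.0
rng = np.random.default_rng(1)
eeg = (1.5 * np.sin(2 * np.pi * f_attended * t)      # amplified attended response
       + 0.5 * np.sin(2 * np.pi * f_unattended * t)  # reduced unattended response
       + rng.normal(0, 1, t.size))                   # background noise

# Amplitude spectrum: the tagged responses appear as peaks at 7.5 and 12 Hz.
spec = 2 * np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

amp_att = spec[np.argmin(np.abs(freqs - f_attended))]
amp_unatt = spec[np.argmin(np.abs(freqs - f_unattended))]
print(amp_att, amp_unatt)  # attention amplifies the attended frequency
```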
Affiliation(s)
- O Scott Gwinn
- University of Nevada, Reno
- College of Education, Psychology and Social Work, Flinders University, Adelaide, South Australia, Australia
13. Krejtz I, Krejtz K, Wisiecka K, Abramczyk M, Olszanowski M, Duchowski AT. Attention Dynamics During Emotion Recognition by Deaf and Hearing Individuals. J Deaf Stud Deaf Educ 2020; 25:10-21. [PMID: 31665493 DOI: 10.1093/deafed/enz036]
Abstract
The enhancement hypothesis suggests that deaf individuals are more vigilant to visual emotional cues than hearing individuals. The present eye-tracking study examined ambient-focal visual attention when encoding affect from dynamically changing emotional facial expressions. Deaf (n = 17) and hearing (n = 17) individuals watched emotional facial expressions that in 10-s animations morphed from a neutral expression to one of happiness, sadness, or anger. The task was to recognize emotion as quickly as possible. Deaf participants tended to be faster than hearing participants in affect recognition, but the groups did not differ in accuracy. In general, happy faces were more accurately and more quickly recognized than faces expressing anger or sadness. Both groups demonstrated longer average fixation duration when recognizing happiness in comparison to anger and sadness. Deaf individuals directed their first fixations less often to the mouth region than the hearing group. During the last stages of emotion recognition, deaf participants exhibited more focal viewing of happy faces than negative faces. This pattern was not observed among hearing individuals. The analysis of visual gaze dynamics, switching between ambient and focal attention, was useful in studying the depth of cognitive processing of emotional information among deaf and hearing individuals.
Affiliation(s)
- Izabela Krejtz
- SWPS University of Social Sciences and Humanities, Chodakowska 19/31, Warsaw, Poland
- Krzysztof Krejtz
- SWPS University of Social Sciences and Humanities, Chodakowska 19/31, Warsaw, Poland
- Michał Olszanowski
- SWPS University of Social Sciences and Humanities, Chodakowska 19/31, Warsaw, Poland
14
Stoll C, Palluel-Germain R, Gueriot FX, Chiquet C, Pascalis O, Aptel F. Visual field plasticity in hearing users of sign language. Vision Res 2018; 153:105-110. [PMID: 30165056 DOI: 10.1016/j.visres.2018.08.003] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2018] [Revised: 08/03/2018] [Accepted: 08/14/2018] [Indexed: 10/28/2022]
Abstract
Studies have observed that deaf signers have a larger Visual Field (VF) than hearing non-signers, with a particularly large extension in the lower part of the VF. This increment could stem from early deafness or from the extensive use of sign language, since the lower VF is critical for perceiving and understanding linguistic gestures in sign language communication. The aim of the present study was to explore the potential impact of sign language experience, without deafness, on VF sensitivity in its lower part. Using a standard Humphrey Visual Field Analyzer, we compared luminance sensitivity in the fovea and between 3 and 27 degrees of visual eccentricity for the upper and lower VF between hearing users of French Sign Language and age-matched hearing non-signers. Sensitivity in the fovea and in the upper VF was similar in both groups. Hearing signers had, however, higher luminance sensitivity than non-signers in the lower VF, but only between 3 and 15°, the visual location of sign language perception. Sign language experience, even when not associated with deafness, may thus modulate VF sensitivity, but only at the very specific locations where signs are perceived.
Affiliation(s)
- Chloé Stoll
- Univ. Grenoble-Alpes, CNRS, LPNC UMR 5105, F-38040 Grenoble, France; Laboratory for Investigative Neurophysiology, University Hospital Center and University of Lausanne, 1011 Lausanne, Switzerland; Department of Ophthalmology, Jules Gonin Eye Hospital, University of Lausanne, 1004 Lausanne, Switzerland.
- Christophe Chiquet
- Department of Ophthalmology, Centre Hospitalo-Universitaire de Grenoble, BP217, France
- Olivier Pascalis
- Univ. Grenoble-Alpes, CNRS, LPNC UMR 5105, F-38040 Grenoble, France
- Florent Aptel
- Department of Ophthalmology, Centre Hospitalo-Universitaire de Grenoble, BP217, France
15
Stoll C, Palluel-Germain R, Caldara R, Lao J, Dye MWG, Aptel F, Pascalis O. Face Recognition is Shaped by the Use of Sign Language. Journal of Deaf Studies and Deaf Education 2018; 23:62-70. [PMID: 28977622 DOI: 10.1093/deafed/enx034] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/11/2017] [Accepted: 08/24/2017] [Indexed: 06/07/2023]
Abstract
Previous research has suggested that early deaf signers differ in face processing. Which aspects of face processing are changed, and the role that sign language may have played in that change, are however unclear. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing signers, and hearing non-signers. In the face categorization task, the three groups performed similarly in terms of both response time and accuracy. However, in the face recognition task, signers (both deaf and hearing) were slower than hearing non-signers to accurately recognize faces, but had a higher accuracy rate. We conclude that sign language experience, but not deafness, drives a speed-accuracy trade-off in face recognition (but not face categorization). This suggests strategic differences in the processing of facial identity for individuals who use a sign language, regardless of their hearing status.
Affiliation(s)
- Matthew W G Dye
- National Technical Institute for the Deaf, Rochester Institute of Technology
16
Functional selectivity for face processing in the temporal voice area of early deaf individuals. Proc Natl Acad Sci U S A 2017; 114:E6437-E6446. [PMID: 28652333 DOI: 10.1073/pnas.1618287114] [Citation(s) in RCA: 55] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Brain systems supporting face and voice processing both contribute to the extraction of important information for social interaction (e.g., person identity). How does the brain reorganize when one of these channels is absent? Here, we explore this question by combining behavioral and multimodal neuroimaging measures (magneto-encephalography and functional imaging) in a group of early deaf humans. We show enhanced selective neural response for faces and for individual face coding in a specific region of the auditory cortex that is typically specialized for voice perception in hearing individuals. In this region, selectivity to face signals emerges early in the visual processing hierarchy, shortly after typical face-selective responses in the ventral visual pathway. Functional and effective connectivity analyses suggest reorganization in long-range connections from early visual areas to the face-selective temporal area in individuals with early and profound deafness. Altogether, these observations demonstrate that regions that typically specialize for voice processing in the hearing brain preferentially reorganize for face processing in born-deaf people. Our results support the idea that cross-modal plasticity in the case of early sensory deprivation relates to the original functional specialization of the reorganized brain regions.
17
Maguinness C, von Kriegstein K. Cross-modal processing of voices and faces in developmental prosopagnosia and developmental phonagnosia. Visual Cognition 2017. [DOI: 10.1080/13506285.2017.1313347] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Corrina Maguinness
- Max Planck Research Group Neural Mechanisms of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein
- Max Planck Research Group Neural Mechanisms of Human Communication, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Department of Psychology, Humboldt University of Berlin, Berlin, Germany
18
Adaptive and maladaptive neural compensatory consequences of sensory deprivation-From a phantom percept perspective. Prog Neurobiol 2017; 153:1-17. [PMID: 28408150 DOI: 10.1016/j.pneurobio.2017.03.010] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2016] [Revised: 03/21/2017] [Accepted: 03/28/2017] [Indexed: 12/19/2022]
Abstract
It is suggested that the brain undergoes plastic changes in order to adapt to changing environmental needs. Sensory deprivation results in decreased input to the brain, leading to adaptive or maladaptive changes. Although several theories hypothesize the mechanisms of these adaptive and maladaptive changes, the course of action taken by the brain depends heavily on the age at which the damage occurs. The growing body of literature on the topic proposes that maladaptive changes in the brain are instrumental in creating phantom percepts, defined as the perception of a sensory experience in the absence of a physical stimulus. The current article reviews the mechanisms of adaptive and maladaptive plasticity in the brain in congenital, early, and late-onset sensory deprivation, in conjunction with the phantom percepts in the different sensory domains. We propose that the mechanisms of adaptive and maladaptive plasticity fall under a universal construct of updating hierarchical Bayesian prediction errors. This theory of the Bayesian brain hypothesizes that the brain constantly compares its internal milieu with changing environmental cues and either adjusts its predictions or discards the change, depending on the novelty or salience of the external stimulus. We propose that adaptive plasticity reflects both successful bottom-up compensation and top-down updating of the model, while maladaptive plasticity reflects failure in one or both mechanisms, resulting in a persistent prediction error. Finally, we hypothesize that phantom percepts are generated by the brain as a solution to this prediction error and are thus a manifestation of unsuccessful adaptation to sensory deprivation.
19
Marschark M, Paivio A, Spencer LJ, Durkin A, Borgna G, Convertino C, Machmer E. Don't Assume Deaf Students are Visual Learners. Journal of Developmental and Physical Disabilities 2017; 29:153-171. [PMID: 28344430 PMCID: PMC5362161 DOI: 10.1007/s10882-016-9494-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
In the education of deaf learners, from primary school to postsecondary settings, it frequently is suggested that deaf students are visual learners. That assumption appears to be based on the visual nature of signed languages (used by some but not all deaf individuals) and the fact that, with greater hearing losses, deaf students will rely relatively more on vision than audition. However, the questions of whether individuals with hearing loss are more likely to be visual learners than verbal learners, or more likely than hearing peers to be visual learners, have not been empirically explored. Several recent studies, in fact, have indicated that hearing learners typically perform as well as or better than deaf learners on a variety of visual-spatial tasks. The present study used two standardized instruments to examine learning styles among deaf college students who primarily rely on sign language or spoken language and their hearing peers. The visual-verbal dimension was of particular interest. Consistent with recent indirect findings, results indicated that deaf students are no more likely than hearing students to be visual learners and are no stronger in their visual skills and habits than in their verbal skills and habits, nor are deaf students' visual orientations associated with sign language skills. The results clearly have specific implications for the education of deaf learners.
Affiliation(s)
- Marc Marschark
- Center for Education Research Partnerships, National Technical Institute for the Deaf – Rochester Institute of Technology, Rochester, NY 14623, USA
- School of Psychology, University of Aberdeen, Regent’s Walk, AB24 2UB Aberdeen, United Kingdom
- Allan Paivio
- Department of Psychology, University of Western Ontario, N6A 3K7, London, ON, Canada
- Linda J. Spencer
- Department of Special Education, Communication Disorders, New Mexico State University, Las Cruces, NM 88003, USA
- Andreana Durkin
- Center for Education Research Partnerships, National Technical Institute for the Deaf – Rochester Institute of Technology, Rochester, NY 14623, USA
- Georgianna Borgna
- Center for Education Research Partnerships, National Technical Institute for the Deaf – Rochester Institute of Technology, Rochester, NY 14623, USA
- Carol Convertino
- Center for Education Research Partnerships, National Technical Institute for the Deaf – Rochester Institute of Technology, Rochester, NY 14623, USA
- Elizabeth Machmer
- Center for Education Research Partnerships, National Technical Institute for the Deaf – Rochester Institute of Technology, Rochester, NY 14623, USA
20
Megreya AM, Bindemann M. A visual processing advantage for young-adolescent deaf observers: Evidence from face and object matching tasks. Sci Rep 2017; 7:41133. [PMID: 28117407 PMCID: PMC5259729 DOI: 10.1038/srep41133] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2016] [Accepted: 12/15/2016] [Indexed: 11/09/2022] Open
Abstract
It is unresolved whether the permanent auditory deprivation that deaf people experience leads to the enhanced visual processing of faces. The current study explored this question with a matching task in which observers searched for a target face among a concurrent lineup of ten faces. This was compared with a control task in which the same stimuli were presented upside down, to disrupt typical face processing, and an object matching task. A sample of young-adolescent deaf observers performed with higher accuracy than hearing controls across all of these tasks. These results clarify previous findings and provide evidence for a general visual processing advantage in deaf observers rather than a face-specific effect.
Affiliation(s)
- Ahmed M Megreya
- Department of Psychological Sciences, College of Education, Qatar University, Qatar
21
Dole M, Méary D, Pascalis O. Modifications of Visual Field Asymmetries for Face Categorization in Early Deaf Adults: A Study With Chimeric Faces. Front Psychol 2017; 8:30. [PMID: 28163692 PMCID: PMC5247456 DOI: 10.3389/fpsyg.2017.00030] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2016] [Accepted: 01/05/2017] [Indexed: 12/02/2022] Open
Abstract
Right hemisphere lateralization for face processing is well documented in typical populations. At the behavioral level, this right hemisphere bias is often related to a left visual field (LVF) bias. A conventional means of studying this phenomenon is to use chimeric faces composed of the left and right parts of two faces. In this paradigm, participants generally use the left part of the chimeric face, mostly processed through the right optic tract, to determine its identity, gender, or age. To assess the impact of early auditory deprivation on face processing abilities, we tested the LVF bias in a group of early deaf participants and hearing controls. In two experiments, deaf and hearing participants performed a gender categorization task with chimeric and normal average faces. Across the two experiments, the results confirmed the presence of an LVF bias, which was less frequent in deaf participants. This result suggests modifications of hemispheric lateralization for face processing in deaf participants. In Experiment 2 we also recorded eye movements to examine whether the LVF bias could be related to face scanning behavior. In this second study, participants performed a similar task while we recorded eye movements using an eye-tracking system. Using areas-of-interest analyses, we observed that the proportion of fixations on the mouth, relative to the other areas, was increased in deaf participants in comparison with the hearing group. This was associated with a decrease in the proportion of fixations on the eyes. In addition, these measures were correlated with the LVF bias, suggesting a relationship between the LVF bias and patterns of facial exploration. Taken together, these results suggest that early auditory deprivation results in plasticity phenomena affecting the perception of static faces through modifications of hemispheric lateralization and of gaze behavior.
Affiliation(s)
- Marjorie Dole
- Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble-Alpes, Grenoble, France; Gipsa-Lab, Département Parole et Cognition, CNRS UMR 5216, Université Grenoble-Alpes, Grenoble, France
- David Méary
- Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble-Alpes, Grenoble, France
- Olivier Pascalis
- Laboratoire de Psychologie et NeuroCognition, CNRS UMR 5105, Université Grenoble-Alpes, Grenoble, France
22
He H, Xu B, Tanaka J. Investigating the face inversion effect in a deaf population using the Dimensions Tasks. Visual Cognition 2016. [DOI: 10.1080/13506285.2016.1221488] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
23
Ding H, Ming D, Wan B, Li Q, Qin W, Yu C. Enhanced spontaneous functional connectivity of the superior temporal gyrus in early deafness. Sci Rep 2016; 6:23239. [PMID: 26984611 PMCID: PMC4794647 DOI: 10.1038/srep23239] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2015] [Accepted: 03/02/2016] [Indexed: 11/09/2022] Open
Abstract
Early auditory deprivation may drive the auditory cortex into cross-modal processing of non-auditory sensory information. In a recent study, we showed that early deaf subjects exhibited increased activation in the superior temporal gyrus (STG) bilaterally during visual spatial working memory; however, the changes in the organization of the STG-related spontaneous functional network, and their cognitive relevance, are still unknown. To clarify this issue, we applied resting-state functional magnetic resonance imaging in 42 early deaf (ED) subjects and 40 hearing controls (HC). We also acquired visual spatial and numerical n-back working memory (WM) measures in these subjects. Compared with hearing subjects, the ED subjects exhibited faster reaction times on visual WM tasks in both the spatial and numerical domains. Furthermore, ED subjects exhibited significantly increased functional connectivity between the STG (especially of the right hemisphere) and the bilateral anterior insula and dorsal anterior cingulate cortex. Finally, the functional connectivity of the STG could predict visual spatial WM performance, even after controlling for numerical WM performance. Our findings suggest that early auditory deprivation can strengthen the spontaneous functional connectivity of the STG, which may contribute to the cross-modal involvement of this region in visual working memory.
Affiliation(s)
- Hao Ding
- School of Medical Imaging, Tianjin Medical University, Tianjin 300070, People's Republic of China; Department of Biomedical Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Dong Ming
- Department of Biomedical Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Baikun Wan
- Department of Biomedical Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Qiang Li
- Technical College for the Deaf, Tianjin University of Technology, Tianjin 300384, People's Republic of China
- Wen Qin
- Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin 300052, People's Republic of China
- Chunshui Yu
- Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin 300052, People's Republic of China
24
Stropahl M, Plotz K, Schönfeld R, Lenarz T, Sandmann P, Yovel G, De Vos M, Debener S. Cross-modal reorganization in cochlear implant users: Auditory cortex contributes to visual face processing. Neuroimage 2015. [PMID: 26220741 DOI: 10.1016/j.neuroimage.2015.07.062] [Citation(s) in RCA: 58] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
There is converging evidence that the auditory cortex takes over visual functions during a period of auditory deprivation. A residual pattern of cross-modal take-over may prevent the auditory cortex from adapting to restored sensory input as delivered by a cochlear implant (CI) and limit speech intelligibility with a CI. The aim of the present study was to investigate whether visual face processing in CI users activates auditory cortex and whether this has adaptive or maladaptive consequences. High-density electroencephalogram data were recorded from CI users (n=21) and age-matched normal hearing (NH) controls (n=21) performing a face versus house discrimination task. Lip reading and face recognition abilities were measured, as well as speech intelligibility. Evaluation of event-related potential (ERP) topographies revealed significant group differences over occipito-temporal scalp regions. Distributed source analysis identified significantly higher activation in the right auditory cortex for CI users compared to NH controls, confirming visual take-over. Lip reading skills were significantly enhanced in the CI group and appeared to be particularly better after a longer duration of deafness, while face recognition was not significantly different between groups. However, auditory cortex activation in CI users was positively related to face recognition abilities. Our results confirm a cross-modal reorganization for ecologically valid visual stimuli in CI users. Furthermore, they suggest that residual take-over, which can persist even after adaptation to a CI, is not necessarily maladaptive.
Affiliation(s)
- Maren Stropahl
- Neuropsychology Lab, Department of Psychology, Carl von Ossietzky University Oldenburg, Germany.
- Karsten Plotz
- Department of Phoniatrics, Pediatric Audiology and Neurootology, Evangelisches Krankenhaus Oldenburg, Germany
- Rüdiger Schönfeld
- Department of Phoniatrics, Pediatric Audiology and Neurootology, Evangelisches Krankenhaus Oldenburg, Germany
- Thomas Lenarz
- Department of Otolaryngology, Hannover Medical School, Germany; Cluster of Excellence Hearing4all Oldenburg, Germany
- Pascale Sandmann
- Cluster of Excellence Hearing4all Oldenburg, Germany; Department of Neurology, Hannover Medical School, Germany
- Galit Yovel
- Department of Psychology, Tel Aviv University, Tel Aviv, Israel
- Maarten De Vos
- Cluster of Excellence Hearing4all Oldenburg, Germany; Department of Engineering Science, University of Oxford, UK; Methods in Cognitive Psychology, Department of Psychology, Carl von Ossietzky University Oldenburg, Germany
- Stefan Debener
- Neuropsychology Lab, Department of Psychology, Carl von Ossietzky University Oldenburg, Germany; Cluster of Excellence Hearing4all Oldenburg, Germany
25
Ding H, Qin W, Liang M, Ming D, Wan B, Li Q, Yu C. Cross-modal activation of auditory regions during visuo-spatial working memory in early deafness. Brain 2015; 138:2750-65. [PMID: 26070981 DOI: 10.1093/brain/awv165] [Citation(s) in RCA: 57] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2014] [Accepted: 04/18/2015] [Indexed: 11/13/2022] Open
Abstract
Early deafness can reshape deprived auditory regions to enable the processing of signals from the remaining intact sensory modalities. Cross-modal activation has been observed in auditory regions during non-auditory tasks in early deaf subjects. In hearing subjects, visual working memory can evoke activation of the visual cortex, which further contributes to behavioural performance. In early deaf subjects, however, whether and how auditory regions participate in visual working memory remains unclear. We hypothesized that auditory regions may be involved in visual working memory processing and that activation of auditory regions may contribute to the superior behavioural performance of early deaf subjects. In this study, 41 early deaf subjects (22 females and 19 males, age range: 20-26 years, age of onset of deafness < 2 years) and 40 age- and gender-matched hearing controls underwent functional magnetic resonance imaging during a visuo-spatial delayed recognition task that consisted of encoding, maintenance and recognition stages. The early deaf subjects exhibited faster reaction times on the spatial working memory task than did the hearing controls. Compared with hearing controls, deaf subjects exhibited increased activation in the superior temporal gyrus bilaterally during the recognition stage. This increased activation amplitude predicted faster and more accurate working memory performance in deaf subjects. Deaf subjects also had increased activation in the superior temporal gyrus bilaterally during the maintenance stage and in the right superior temporal gyrus during the encoding stage. These increased activation amplitudes also predicted faster reaction times on the spatial working memory task in deaf subjects. These findings suggest that cross-modal plasticity occurs in auditory association areas in early deaf subjects. These areas are involved in visuo-spatial working memory. Furthermore, amplitudes of cross-modal activation during the maintenance stage were positively correlated with the age of onset of hearing aid use and were negatively correlated with the percentage of lifetime hearing aid use in deaf subjects. These findings suggest that earlier and longer hearing aid use may inhibit cross-modal reorganization in early deaf subjects. Granger causality analysis revealed that, compared to the hearing controls, the deaf subjects had an enhanced net causal flow from the frontal eye field to the superior temporal gyrus. These findings indicate that a top-down mechanism may better account for the cross-modal activation of auditory regions in early deaf subjects. See MacSweeney and Cardin (doi:10.1093/brain/awv197) for a scientific commentary on this article.
Affiliation(s)
- Hao Ding
- Department of Biomedical Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Wen Qin
- Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin 300052, People's Republic of China
- Meng Liang
- School of Medical Imaging, Tianjin Medical University, Tianjin 300070, People's Republic of China
- Dong Ming
- Department of Biomedical Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Baikun Wan
- Department of Biomedical Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Qiang Li
- Technical College for the Deaf, Tianjin University of Technology, Tianjin 300384, People's Republic of China
- Chunshui Yu
- Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin 300052, People's Republic of China
26
Saegusa C, Namatame M, Watanabe K. Interpreting text messages with graphic facial expression by deaf and hearing people. Front Psychol 2015; 6:383. [PMID: 25883582 PMCID: PMC4382978 DOI: 10.3389/fpsyg.2015.00383] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2015] [Accepted: 03/18/2015] [Indexed: 01/21/2023] Open
Abstract
In interpreting verbal messages, humans use not only verbal information but also non-verbal signals such as facial expression. For example, when a person says “yes” with a troubled face, what he or she really means appears ambiguous. In the present study, we examined how deaf and hearing people differ in perceiving real meanings in texts accompanied by representations of facial expression. Deaf and hearing participants were asked to imagine that the face presented on the computer monitor was asked a question from another person (e.g., do you like her?). They observed either a realistic or a schematic face with a different magnitude of positive or negative expression on a computer monitor. A balloon that contained either a positive or negative text response to the question appeared at the same time as the face. Then, participants rated how much the individual on the monitor really meant it (i.e., perceived earnestness), using a 7-point scale. Results showed that the facial expression significantly modulated the perceived earnestness. The influence of positive expression on negative text responses was relatively weaker than that of negative expression on positive responses (i.e., “no” tended to mean “no” irrespective of facial expression) for both participant groups. However, this asymmetrical effect was stronger in the hearing group. These results suggest that the contribution of facial expression in perceiving real meanings from text messages is qualitatively similar but quantitatively different between deaf and hearing people.
Affiliation(s)
- Chihiro Saegusa
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan
- Miki Namatame
- Department of Synthetic Design, Tsukuba University of Technology, Tsukuba, Japan
- Katsumi Watanabe
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan
27
Okada R, Nakagawa J, Takahashi M, Kanaka N, Fukamauchi F, Watanabe K, Namatame M, Matsuda T. The deaf utilize phonological representations in visually presented verbal memory tasks. Neurosci Res 2014; 90:83-9. [PMID: 25498951 DOI: 10.1016/j.neures.2014.11.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2014] [Revised: 10/27/2014] [Accepted: 11/10/2014] [Indexed: 11/29/2022]
Abstract
The phonological abilities of congenitally deaf individuals are inferior to those of people who can hear. However, deaf individuals can acquire spoken languages by utilizing orthography and lip-reading. The present study used functional magnetic resonance imaging (fMRI) to show that deaf individuals utilize phonological representations via a mnemonic process. We compared the brain activation of deaf and hearing participants while they memorized serially, visually presented Japanese kana letters (Kana), finger alphabets (Finger), and Arabic letters (Arabic). Hearing participants did not know which finger alphabets corresponded to which language sounds, whereas deaf participants did. All of the participants understood the correspondence between Kana and their language sounds. None of the participants knew the correspondence between Arabic and their language sounds, so this condition was used as a baseline. We found that the left superior temporal gyrus (STG) was activated by phonological representations in the deaf group when memorizing both Kana and Finger. Additionally, the brain areas associated with phonological representations for Finger in the deaf group were the same as the areas for Kana in the hearing group. Overall, despite their superiority in visual information processing, deaf individuals utilize phonological rather than visual representations in visually presented verbal memory tasks.
Affiliation(s)
- Rieko Okada
- Tamagawa University Brain Science Institute, 6-1-1 Tamagawa Gakuen, Machida City, Tokyo 194-8610, Japan
- Jun Nakagawa
- Tamagawa University Brain Science Institute, 6-1-1 Tamagawa Gakuen, Machida City, Tokyo 194-8610, Japan; Section of Liaison Psychiatry & Palliative Medicine, Graduate School of Tokyo Medical & Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
| | - Muneyoshi Takahashi
- Tamagawa University Brain Science Institute, 6-1-1 Tamagawa Gakuen, Machida City, Tokyo 194-8610, Japan
| | - Noriko Kanaka
- Tamagawa University Brain Science Institute, 6-1-1 Tamagawa Gakuen, Machida City, Tokyo 194-8610, Japan
| | - Fumihiko Fukamauchi
- Faculty of Industrial Technology, National University Corporation Tsukuba University of Technology, 4-12-7 Kasuga, Tsukuba City, Ibaraki 305-8521, Japan; Enomoto Clinic, 1-2-5 Nishi-Ikebukuro, Toshima-ku, Tokyo 171-0021, Japan
| | - Katsumi Watanabe
- Research Center for Advanced Science and Technology, University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8904, Japan
| | - Miki Namatame
- Faculty of Industrial Technology, National University Corporation Tsukuba University of Technology, 4-12-7 Kasuga, Tsukuba City, Ibaraki 305-8521, Japan
| | - Tetsuya Matsuda
- Tamagawa University Brain Science Institute, 6-1-1 Tamagawa Gakuen, Machida City, Tokyo 194-8610, Japan.
| |
28
Kapur N, Cole J, Manly T, Viskontas I, Ninteman A, Hasher L, Pascual-Leone A. Positive Clinical Neuroscience. Neuroscientist 2013; 19:354-69. [DOI: 10.1177/1073858412470976]
Abstract
Disorders of the brain and its sensory organs have traditionally been associated with deficits in movement, perception, cognition, emotion, and behavior. It is increasingly evident, however, that positive phenomena may also occur in such conditions, with implications for the individual, science, medicine, and society. This article provides a selective review of such positive phenomena: enhanced function after brain lesions, better-than-normal performance in people with sensory loss, creativity associated with neurological disease, and enhanced performance associated with aging. We propose that, akin to the well-established field of positive psychology and the emerging field of positive clinical psychology, the nascent fields of positive neurology and positive neuropsychology offer new avenues to understand brain-behavior relationships, with both theoretical and therapeutic implications.
Affiliation(s)
- Tom Manly
- MRC Cognition and Brain Sciences Unit, Cambridge, UK
- Indre Viskontas
- University of California, San Francisco, San Francisco, CA, USA
- Lynn Hasher
- University of Toronto, Toronto, Ontario, Canada
- Alvaro Pascual-Leone
- Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA, USA
29
de Heering A, Aljuhanay A, Rossion B, Pascalis O. Early deafness increases the face inversion effect but does not modulate the composite face effect. Front Psychol 2012; 3:124. [PMID: 22539929] [PMCID: PMC3336184] [DOI: 10.3389/fpsyg.2012.00124]
Abstract
Early deprivation in audition can have striking effects on the development of visual processing. Here we investigated whether early deafness induces changes in holistic/configural face processing. To this end, we compared a group of early deaf participants to a group of hearing participants in an inversion-matching task (Experiment 1) and a composite face task (Experiment 2). We hypothesized that, if their holistic/configural face processing were enhanced, deaf individuals would show a larger inversion effect and/or a larger composite face effect than hearing controls; conversely, these effects would be reduced if deaf individuals relied more on facial features than hearing controls do. We found that deaf individuals showed an increased inversion effect for faces, but not for non-face objects. They were also significantly slower than hearing controls at matching inverted faces. However, the two populations did not differ in the overall size of their composite face effect. Altogether, these results suggest that early deafness neither enhances nor reduces the amount of holistic/configural processing devoted to faces, but may increase dependency on this mode of processing.
Affiliation(s)
- Adélaïde de Heering
- Face Categorization Lab, Faculté de Psychologie et des Sciences de l'Education, Université Catholique de Louvain, Louvain-la-Neuve, Belgium
30
Anderson JR, Fincham JM, Schneider DW, Yang J. Using brain imaging to track problem solving in a complex state space. Neuroimage 2012; 60:633-43. [PMID: 22209783] [PMCID: PMC3288582] [DOI: 10.1016/j.neuroimage.2011.12.025]
Abstract
This paper describes how behavioral and imaging data can be combined with a Hidden Markov Model (HMM) to track participants' trajectories through a complex state space. Participants completed a problem-solving variant of a memory game that involved 625 distinct states, 24 operators, and an astronomical number of paths through the state space. Three sources of information were used for classification purposes. First, an Imperfect Memory Model was used to estimate transition probabilities for the HMM. Second, behavioral data provided information about the timing of different events. Third, multivoxel pattern analysis of the imaging data was used to identify features of the operators. By combining the three sources of information, an HMM algorithm was able to efficiently identify the most probable path that participants took through the state space, achieving over 80% accuracy. These results support the approach as a general methodology for tracking mental states that occur during individual problem-solving episodes.
Affiliation(s)
- John R. Anderson
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15208
- Jon M. Fincham
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15208
- Jian Yang
- The International WIC Institute, Beijing University of Technology, No. 100 Pingleyuan, Chaoyang District, Beijing 100124, China
31
The role of visual experience for the neural basis of spatial cognition. Neurosci Biobehav Rev 2012; 36:1179-87. [PMID: 22330729] [DOI: 10.1016/j.neubiorev.2012.01.008]
Abstract
Blindness often results in the adaptive neural reorganization of the remaining modalities, producing sharper auditory and haptic behavioral performance. Yet, non-visual modalities might not be able to fully compensate for the lack of visual experience as in the case of congenital blindness. For example, developmental visual experience seems to be necessary for the maturation of multisensory neurons for spatial tasks. Additionally, the ability of vision to convey information in parallel might be taken into account as the main attribute that cannot be fully compensated by the spared modalities. Therefore, the lack of visual experience might impair all spatial tasks that require the integration of inputs from different modalities, such as having to represent a set of objects on the basis of the spatial relationships among the objects, rather than the spatial relationship that each object has with oneself. Here we integrate behavioral and neural evidence to conclude that visual experience is necessary for the neural development of normal spatial cognition.
32
Cortical plasticity for visuospatial processing and object recognition in deaf and hearing signers. Neuroimage 2011; 60:661-72. [PMID: 22210355] [DOI: 10.1016/j.neuroimage.2011.12.031]
Abstract
Experience-dependent plasticity in deaf participants has been shown in a variety of studies focused on either the dorsal or ventral aspects of the visual system, but both systems have never been investigated in concert. Using functional magnetic resonance imaging (fMRI), we investigated functional plasticity for spatial processing (a dorsal visual pathway function) and for object processing (a ventral visual pathway function) concurrently, in the context of differing sensory (auditory deprivation) and language (use of a signed language) experience. During scanning, deaf native users of American Sign Language (ASL), hearing native ASL users, and hearing participants without ASL experience attended to either the spatial arrangement of frames containing objects or the identity of the objects themselves. These two tasks revealed the expected dorsal/ventral dichotomy for spatial versus object processing in all groups. In addition, the object identity matching task contained both face and house stimuli, allowing us to examine category-selectivity in the ventral pathway in all three participant groups. When contrasting the groups we found that deaf signers differed from the two hearing groups in dorsal pathway parietal regions involved in spatial cognition, suggesting sensory experience-driven plasticity. Group differences in the object processing system indicated that responses in the face-selective right lateral fusiform gyrus and anterior superior temporal cortex were sensitive to a combination of altered sensory and language experience, whereas responses in the amygdala were more closely tied to sensory experience. By selectively engaging the dorsal and ventral visual pathways within participants in groups with different sensory and language experiences, we have demonstrated that these experiences affect the function of both of these systems, and that certain changes are more closely tied to sensory experience, while others are driven by the combination of sensory and language experience.
33
Abstract
There is growing evidence that sensory deprivation is associated with crossmodal neuroplastic changes in the brain. After visual or auditory deprivation, brain areas that are normally associated with the lost sense are recruited by spared sensory modalities. These changes underlie adaptive and compensatory behaviours in blind and deaf individuals. Although there are differences between these populations owing to the nature of the deprived sensory modality, there seem to be common principles regarding how the brain copes with sensory loss and the factors that influence neuroplastic changes. Here, we discuss crossmodal neuroplasticity with regards to behavioural adaptation after sensory deprivation and highlight the possibility of maladaptive consequences within the context of rehabilitation.
34
McCullough S, Emmorey K. Categorical perception of affective and linguistic facial expressions. Cognition 2008; 110:208-21. [PMID: 19111287] [DOI: 10.1016/j.cognition.2008.11.007]
Abstract
Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX discrimination and identification tasks on morphed affective and linguistic facial expression continua. The continua were created by morphing end-point photo exemplars into 11 images, changing linearly from one expression to another in equal steps. For both affective and linguistic expressions, hearing non-signers exhibited better discrimination across category boundaries than within categories for both experiments, thus replicating previous results with affective expressions and demonstrating CP effects for non-canonical facial expressions. Deaf signers, however, showed significant CP effects only for linguistic facial expressions. Subsequent analyses indicated that order of presentation influenced signers' response time performance for affective facial expressions: viewing linguistic facial expressions first slowed response time for affective facial expressions. We conclude that CP effects for affective facial expressions can be influenced by language experience.
35
Horton HK, Silverstein SM. Cognition and functional outcome among deaf and hearing people with schizophrenia. Schizophr Res 2007; 94:187-96. [PMID: 17560083] [PMCID: PMC3864919] [DOI: 10.1016/j.schres.2007.04.008]
Abstract
Recent research has highlighted the relationships between impairments in cognitive functioning and poorer functional outcomes among people with schizophrenia (PWS). The purpose of this study was to replicate and extend this work by testing the relationships between cognition and functional outcome among deaf adults with schizophrenia. Empirical findings from deafness-oriented research reveal enhanced abilities in certain aspects of visual-spatial processing compared to hearing people. Sixty-five PWS (34 deaf, 31 hearing) were assessed using measures of verbal and visual memory, attention, and visual processing. The first hypothesis tested whether cognition predicted functional outcome in a similar fashion for both deaf and hearing subjects (n=63). For all subjects, higher levels of cognitive ability were associated with higher levels of functional outcome, and the strongest predictors of outcome were verbal memory and visual-spatial memory in the recall condition (VSM recall). However, the deaf and hearing groups showed different patterns of relationships between cognition and functioning when all cognitive variables were examined. The second hypothesis was that deaf subjects would display superior performance in early visual processing, visual-spatial memory in the copy condition (VSM copy), and VSM recall. Deaf subjects performed better on each of these tasks, but none of the differences reached significance; instead, they outperformed hearing subjects in an unexpected domain (word memory/recognition). This study extends prior work in the area of cognition and schizophrenia and indicates that deaf and hearing subjects may benefit from interventions that address different domains of cognition.
Affiliation(s)
- Heather K Horton
- School of Social Welfare, University at Albany, Richardson Hall 280, 135 Western Avenue, Albany, NY 12203, USA.
36
Bavelier D, Dye MWG, Hauser PC. Do deaf individuals see better? Trends Cogn Sci 2006; 10:512-8. [PMID: 17015029] [PMCID: PMC2885708] [DOI: 10.1016/j.tics.2006.09.006]
Abstract
The possibility that, following early auditory deprivation, the remaining senses such as vision are enhanced has been met with much excitement. However, deaf individuals exhibit both better and worse visual skills than hearing controls. We show that, when deafness is considered to the exclusion of other confounds, enhancements in visual cognition are noted. The changes are not, however, widespread but are selective, limited, as we propose, to those aspects of vision that are attentionally demanding and would normally benefit from auditory-visual convergence. The behavioral changes are accompanied by a reorganization of multisensory areas, ranging from higher-order cortex to early cortical areas, highlighting cross-modal interactions as a fundamental feature of brain organization and cognitive processing.
Affiliation(s)
- Daphne Bavelier
- Brain and Cognitive Science Department, Meliora Hall, University of Rochester, Rochester, NY 14627-0268, USA.
37
McCullough S, Emmorey K, Sereno M. Neural organization for recognition of grammatical and emotional facial expressions in deaf ASL signers and hearing nonsigners. Cogn Brain Res 2005; 22:193-203. [PMID: 15653293] [DOI: 10.1016/j.cogbrainres.2004.08.012]
Abstract
Recognition of emotional facial expressions is universal for all humans, but signed language users must also recognize certain non-affective facial expressions as linguistic markers. fMRI was used to investigate the neural systems underlying recognition of these functionally distinct expressions, comparing deaf ASL signers and hearing nonsigners. Within the superior temporal sulcus (STS), activation for emotional expressions was right lateralized for the hearing group and bilateral for the deaf group. In contrast, activation within STS for linguistic facial expressions was left lateralized only for signers and only when linguistic facial expressions co-occurred with verbs. Within the fusiform gyrus (FG), activation was left lateralized for ASL signers for both expression types, whereas activation was bilateral for both expression types for nonsigners. We propose that left lateralization in FG may be due to continuous analysis of local facial features during on-line sign language processing. The results indicate that function in part drives the lateralization of neural systems that process human facial expressions.
Affiliation(s)
- Stephen McCullough
- Laboratory for Cognitive Neuroscience, The Salk Institute for Biological Studies, 10010 North Torrey Pines Rd., La Jolla, CA 92037, USA.
38
Abstract
The performance of ten deaf-blind and ten sighted-hearing participants on four tactile memory tasks was investigated, using recognition and recall memory tasks and a matching pairs game. It was hypothesized that deaf-blind participants would be superior on each task. Performance was measured in terms of the time taken and the number of items correctly recalled. In Experiments 1 and 2, which measured recognition memory, the hypothesis was supported in terms of the time taken to memorize the target items, but not in terms of the time taken to recognize them or the number of target items correctly identified. Experiment 3, which measured recall memory, supported the hypothesis with regard to the time taken to complete some of the tasks, but not the number of correctly recalled positions. Experiment 4, which used the matching pairs game, supported the hypothesis in terms of both the time taken and the number of moves required. It is concluded that deaf-blind people's tactile encoding is more efficient than that of sighted-hearing people, and that their storage and retrieval are probably normal.
Affiliation(s)
- Paul Arnold
- Department of Psychology, University of Manchester, UK.