1. Wainscott SD, Spurgin K. Differentiating Language for Students Who Are Deaf or Hard of Hearing: A Practice-Informed Framework for Auditory and Visual Supports. Lang Speech Hear Serv Sch 2024;55:473-494. PMID: 38324382. DOI: 10.1044/2023_lshss-22-00088.
Abstract
PURPOSE Speech-language pathologists (SLPs) serving students who are d/Deaf or hard of hearing (Deaf/hh), and their deaf education counterparts, must navigate language complexities that include modality (spoken or signed) and proficiency, which is often compromised. This tutorial describes a practice-informed framework that conceptualizes and organizes a continuum of auditory and visual language supports with the aim of informing the practice of the SLP, whose training is more inherently focused on spoken language alone, as well as the practice of the teacher of the Deaf/hh (TDHH), who may focus more on visual language supports. METHOD This product resulted from a need within interdisciplinary graduate programs for SLPs and TDHHs. Both cohorts required preparation to address the needs of diverse language learners who are Deaf/hh. This tutorial includes a brief review of the challenges in developing language proficiency and describes the complexities of effective service delivery. The process of developing a practice-informed framework for language supports is summarized, referencing established practices in auditory-based and visually based methodologies, identifying parallel practices, and summarizing the practices within a multitiered framework called the Framework of Differentiated Practices for Language Support. Recommendations for use of the framework include guidance on the identification of a student's language modality/ies and proficiency to effectively match students' needs and target supports. CONCLUSIONS An examination of established practices in language supports across auditory and visual modalities reveals clear parallels that can be organized into a tiered framework. The result is a reference for differentiating language for the interdisciplinary school team. The parallel supports also provide evidence of similarities in practice across philosophical boundaries as professionals work collaboratively.
Affiliations
- Sarah D Wainscott: Department of Communication Sciences and Oral Health, Texas Woman's University, Denton
- Kelsey Spurgin: Department of Special Education, Ball State University, Muncie, IN
2. Rönnberg J, Signoret C, Andin J, Holmer E. The cognitive hearing science perspective on perceiving, understanding, and remembering language: The ELU model. Front Psychol 2022;13:967260. PMID: 36118435. PMCID: PMC9477118. DOI: 10.3389/fpsyg.2022.967260.
Abstract
The review gives an introductory description of the successive development of data patterns based on comparisons between hearing-impaired and normal hearing participants' speech understanding skills, later prompting the formulation of the Ease of Language Understanding (ELU) model. The model builds on the interaction between an input buffer (RAMBPHO, Rapid Automatic Multimodal Binding of PHOnology) and three memory systems: working memory (WM), semantic long-term memory (SLTM), and episodic long-term memory (ELTM). RAMBPHO input may either match or mismatch multimodal SLTM representations. Given a match, lexical access is accomplished rapidly and implicitly within approximately 100-400 ms. Given a mismatch, the prediction is that WM is engaged explicitly to repair the meaning of the input, in interaction with SLTM and ELTM, taking seconds rather than milliseconds. The multimodal and multilevel nature of representations held in WM and LTM is at the center of the review, these being integral parts of the prediction and postdiction components of language understanding. Finally, some hypotheses based on a selective use-disuse of memory systems mechanism are described in relation to mild cognitive impairment and dementia. Alternative speech perception and WM models are evaluated, and recent developments and generalisations, ELU model tests, and boundaries are discussed.
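The ELU model's match/mismatch routing lends itself to a toy sketch. The code below illustrates only the control flow described in the abstract; the lexicon, the nearest-match repair rule, and all names here are invented for demonstration and are not part of the model's specification. A RAMBPHO-like input that matches a stored SLTM representation yields fast, implicit lexical access; a mismatch triggers slower, explicit WM-based repair.

```python
# Toy illustration of ELU match/mismatch routing; the lexicon and the
# repair heuristic are invented for demonstration, not model parameters.
SLTM = {"cat", "dog", "house"}  # hypothetical stored lexical representations

def edit_distance(a, b):
    """Standard Levenshtein distance via a rolling dynamic-programming row."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def understand(rambpho_input):
    """Route input: implicit lexical access on match, explicit WM repair on mismatch."""
    if rambpho_input in SLTM:
        # Match: rapid, implicit lexical access (order of 100-400 ms in the model).
        return ("implicit", rambpho_input)
    # Mismatch: explicit WM engages, interacting with SLTM/ELTM to repair meaning
    # (order of seconds in the model). Here: a naive nearest-neighbour repair.
    repaired = min(SLTM, key=lambda w: edit_distance(w, rambpho_input))
    return ("explicit", repaired)

# A clean input matches; a degraded input ("dag") is repaired toward "dog".
route_clean = understand("cat")     # ("implicit", "cat")
route_noisy = understand("dag")     # ("explicit", "dog")
```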
Affiliations
- Jerker Rönnberg: Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
3. Gong H, Lei J, Chen L. Phonological store and speechreading performance of Chinese students with hearing impairment. Clin Linguist Phon 2022;36:456-469. PMID: 34151654. DOI: 10.1080/02699206.2021.1930175.
Abstract
This study presents three experiments to examine the role of the phonological store component of working memory in the speechreading performance of students with hearing impairment (HI) in China. In Experiment 1, 86 high school students with HI completed an immediate serial recall task with four lists of monosyllabic words that differed in phonological and visual similarities. In Experiment 2 and Experiment 3, 40 participants divided into high or low phonological store capacity (PS) and 40 participants divided into high or low visual phonological store capacity (VPS) completed a speechreading test at the word, phrase and sentence levels. Results revealed that (1) immediate serial recall showed effects of phonological and visual similarity and their interaction; (2) there was no significant effect of phonological store capacities on speechreading; and (3) there was a significant effect of visual phonological store capacities on accuracy but not speed of speechreading. These findings point to a general phonological store system for visual orthographic coding and phonological coding that students with HI engage during speechreading in Chinese. This provides evidence for the contention that visual-based coding has a more direct impact on the speechreading performance of Chinese students with HI than speech-based coding.
Affiliations
- Huina Gong: Department of Special Education, Central China Normal University, Wuhan, China
- Jianghua Lei: Department of Special Education, Central China Normal University, Wuhan, China
- Liang Chen: Communication Sciences and Special Education, University of Georgia, Athens, Georgia, USA
4. Mekki Y, Guillemot V, Lemaitre H, Carrion-Castillo A, Forkel S, Frouin V, Philippe C. The genetic architecture of language functional connectivity. Neuroimage 2021;249:118795. PMID: 34929384. DOI: 10.1016/j.neuroimage.2021.118795.
Abstract
Language is a unique trait of the human species, whose genetic architecture remains largely unknown. Studies of language disorders have identified many candidate genes. However, such a complex and multifactorial trait is unlikely to be driven by only a few genes, and case-control studies, suffering from a lack of power, struggle to uncover significant variants. In parallel, neuroimaging has contributed significantly to the understanding of structural and functional aspects of language in the human brain, and the recent availability of large-scale cohorts like UK Biobank has made it possible to study language via image-derived endophenotypes in the general population. Because of its strong relationship with task-based fMRI (tbfMRI) activations and its ease of acquisition, resting-state functional MRI (rsfMRI) has become popular as a good surrogate of functional neuronal processes. Taking advantage of such a synergistic system by aggregating effects across spatially distributed traits, we performed a multivariate genome-wide association study (mvGWAS) between genetic variations and resting-state functional connectivity (FC) of classical brain language areas in the inferior frontal (pars opercularis, triangularis and orbitalis), temporal and inferior parietal lobes (angular and supramarginal gyri), in 32,186 participants from UK Biobank. Twenty genomic loci were found associated with language FCs, of which three were replicated in an independent replication sample. A locus in 3p11.1, regulating EPHA3 gene expression, is associated with FCs of the semantic component of the language network, while a locus in 15q14, regulating THBS1 gene expression, is associated with FCs of perceptual-motor language processing, bringing novel insights into the neurobiology of language.
Affiliations
- Yasmina Mekki: NeuroSpin, Institut Joliot, CEA - Université Paris-Saclay, Gif-Sur-Yvette, 91191, France
- Vincent Guillemot: Hub de Bioinformatique et Biostatistique, Département Biologie Computationnelle, Institut Pasteur, USR 3756 CNRS, Paris, France
- Hervé Lemaitre: Groupe d'Imagerie Neurofonctionnelle, Institut des Maladies Neurodégénératives, CNRS UMR 5293, Université de Bordeaux, Centre Broca Nouvelle-Aquitaine, Bordeaux, France
- Stephanie Forkel: Groupe d'Imagerie Neurofonctionnelle, Institut des Maladies Neurodégénératives, CNRS UMR 5293, Université de Bordeaux, Centre Broca Nouvelle-Aquitaine, Bordeaux, France; Brain Connectivity and Behaviour Laboratory, Sorbonne Universities, Paris, France; Department of Neuroimaging, Institute of Psychiatry, Psychology and Neurosciences, King's College London, UK
- Vincent Frouin: NeuroSpin, Institut Joliot, CEA - Université Paris-Saclay, Gif-Sur-Yvette, 91191, France
- Cathy Philippe: NeuroSpin, Institut Joliot, CEA - Université Paris-Saclay, Gif-Sur-Yvette, 91191, France
5.
Abstract
The first 40 years of research on the neurobiology of sign languages (1960-2000) established that the same key left hemisphere brain regions support both signed and spoken languages, based primarily on evidence from signers with brain injury and, at the end of the 20th century, on evidence from emerging functional neuroimaging technologies (positron emission tomography and fMRI). Building on this earlier work, this review focuses on what we have learned about the neurobiology of sign languages in the last 15-20 years, what controversies remain unresolved, and directions for future research. Production and comprehension processes are addressed separately in order to capture whether and how output and input differences between sign and speech impact the neural substrates supporting language. In addition, the review includes aspects of language that are unique to sign languages, such as pervasive lexical iconicity, fingerspelling, linguistic facial expressions, and depictive classifier constructions. Summary sketches of the neural networks supporting sign language production and comprehension are provided with the hope that these will inspire future research as we begin to develop a more complete neurobiological model of sign language processing.
6. Starowicz-Filip A, Prochwicz K, Kłosowska J, Chrobak AA, Krzyżewski R, Myszka A, Rajtar-Zembaty A, Bętkowska-Korpała B, Kwinta B. Is Addenbrooke's Cognitive Examination III Sensitive Enough to Detect Cognitive Dysfunctions in Patients with Focal Cerebellar Lesions? Arch Clin Neuropsychol 2021;37:423-436. PMID: 34128041. DOI: 10.1093/arclin/acab045.
Abstract
OBJECTIVE The main aim of the study was to evaluate whether the Addenbrooke's Cognitive Examination III (ACE III), an available brief test of mental functions, detects cognitive impairment in patients with cerebellar damage. The second goal was to show the ACE III cognitive impairment profile of patients with focal cerebellar lesions. METHOD The study sample consisted of 31 patients with focal cerebellar lesions, 78 patients with supratentorial brain damage, and 31 subjects after spine surgery or with spine degeneration, considered as a control group free of organic brain damage. The ACE III was used. RESULTS Patients with cerebellar damage obtained significantly lower results than healthy controls without brain damage in the ACE III total score and in several subscales: the attention, fluency, language, and visuospatial domains. With a cut-off of 89 points, the ACE III had a sensitivity of 71%, a specificity of 72%, and an accuracy of 72%. The cerebellar cognitive impairment profile was found to be "frontal-like" and similar to that observed in patients with anterior supratentorial brain damage, with decreased ability to retrieve previously learned material alongside preserved recognition, impaired word fluency, and executive dysfunction. The results are consistent with cerebellar cognitive affective syndrome. CONCLUSIONS The ACE III can be used as a sensitive screening tool to detect cognitive impairments in patients with cerebellar damage.
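The reported screening metrics follow directly from the standard 2x2 confusion-matrix definitions. As an illustration only (the scores and group labels below are hypothetical, not the study's data), a cut-off classifier and its sensitivity, specificity, and accuracy can be sketched as:

```python
# Illustrative only: hypothetical ACE III total scores, not the study's data.
# A score below the cut-off (89) is flagged as "impaired".
CUTOFF = 89

def screen(scores, cutoff=CUTOFF):
    """Return True (flagged impaired) for each score below the cut-off."""
    return [s < cutoff for s in scores]

def metrics(flags, truth):
    """Sensitivity, specificity, accuracy from predicted flags vs. true impairment."""
    tp = sum(f and t for f, t in zip(flags, truth))          # true positives
    tn = sum((not f) and (not t) for f, t in zip(flags, truth))  # true negatives
    fp = sum(f and (not t) for f, t in zip(flags, truth))    # false positives
    fn = sum((not f) and t for f, t in zip(flags, truth))    # false negatives
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(truth)
    return sensitivity, specificity, accuracy

# Hypothetical cohort: 4 truly impaired, 4 truly unimpaired participants.
scores = [82, 85, 90, 88, 95, 92, 87, 96]
truth = [True, True, True, True, False, False, False, False]
flags = screen(scores)
sens, spec, acc = metrics(flags, truth)  # each 0.75 for this toy cohort
```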
Affiliations
- Anna Starowicz-Filip: Chair of Psychiatry, Department of Medical Psychology, Jagiellonian University Medical College, Kraków, Poland; Department of Neurosurgery, University Hospital, Kraków, Poland
- Roger Krzyżewski: Department of Neurosurgery, Jagiellonian University Medical College, Kraków, Poland
- Aneta Myszka: Department of Neurosurgery, Jagiellonian University Medical College, Kraków, Poland
- Anna Rajtar-Zembaty: Chair of Psychiatry, Department of Medical Psychology, Jagiellonian University Medical College, Kraków, Poland
- Barbara Bętkowska-Korpała: Chair of Psychiatry, Department of Medical Psychology, Jagiellonian University Medical College, Kraków, Poland
- Borys Kwinta: Department of Neurosurgery, Jagiellonian University Medical College, Kraków, Poland
7. Rönnberg J, Holmer E, Rudner M. Cognitive Hearing Science: Three Memory Systems, Two Approaches, and the Ease of Language Understanding Model. J Speech Lang Hear Res 2021;64:359-370. PMID: 33439747. DOI: 10.1044/2020_jslhr-20-00007.
Abstract
Purpose The purpose of this study was to conceptualize the subtle balancing act between language input and prediction (cognitive priming of future input) to achieve understanding of communicated content. When understanding fails, reconstructive postdiction is initiated. Three memory systems play important roles: working memory (WM), episodic long-term memory (ELTM), and semantic long-term memory (SLTM). The axiom of the Ease of Language Understanding (ELU) model is that explicit WM resources are invoked by a mismatch between language input, in the form of rapid automatic multimodal binding of phonology, and multimodal phonological and lexical representations in SLTM. However, if there is a match between rapid automatic multimodal binding of phonology output and SLTM/ELTM representations, language processing continues rapidly and implicitly. Method and Results In our first ELU approach, we focused on experimental manipulations of signal processing in hearing aids and background noise to cause a mismatch with LTM representations; both resulted in increased dependence on WM. Our second approach, the main one relevant for this review article, focuses on the relative effects of age-related hearing loss on the three memory systems. According to the ELU, WM is predicted to be frequently occupied with reconstruction of what was actually heard, resulting in a relative disuse of phonological/lexical representations in the ELTM and SLTM systems. The prediction and results do not depend on test modality per se but rather on the particular memory system. This will be further discussed. Conclusions Related to the literature on ELTM decline as a precursor of dementia, and the fact that the risk for Alzheimer's disease increases substantially over time due to hearing loss, there is a possibility that lowered ELTM due to hearing loss and disuse may be part of the causal chain linking hearing loss and dementia. Future ELU research will focus on this possibility.
Affiliations
- Jerker Rönnberg: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Emil Holmer: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Mary Rudner: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
8. Banaszkiewicz A, Bola Ł, Matuszewski J, Szczepanik M, Kossowski B, Mostowski P, Rutkowski P, Śliwińska M, Jednoróg K, Emmorey K, Marchewka A. The role of the superior parietal lobule in lexical processing of sign language: Insights from fMRI and TMS. Cortex 2020;135:240-254. PMID: 33401098. DOI: 10.1016/j.cortex.2020.10.025.
Abstract
There is strong evidence that neuronal bases for language processing are remarkably similar for sign and spoken languages. However, as meanings and linguistic structures of sign languages are coded in movement and space and decoded through vision, differences are also present, predominantly in occipitotemporal and parietal areas, such as the superior parietal lobule (SPL). Whether the involvement of SPL reflects domain-general visuospatial attention or processes specific to sign language comprehension remains an open question. Here we conducted two experiments to investigate the role of SPL and the laterality of its engagement in sign language lexical processing. First, using unique longitudinal and between-group designs, we mapped brain responses to sign language in hearing late learners and deaf signers. Second, using transcranial magnetic stimulation (TMS) in both groups, we tested the behavioural relevance of SPL's engagement and its lateralisation during sign language comprehension. SPL activation in hearing participants was observed in the right hemisphere before and bilaterally after the sign language course. Additionally, after the course hearing learners exhibited greater activation in the occipital cortex and left SPL than deaf signers. TMS applied to the right SPL decreased accuracy in both hearing learners and deaf signers. Stimulation of the left SPL decreased accuracy only in hearing learners. Our results suggest that right SPL might be involved in visuospatial attention while left SPL might support phonological decoding of signs in non-proficient signers.
Affiliations
- A Banaszkiewicz: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Ł Bola: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- J Matuszewski: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- M Szczepanik: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- B Kossowski: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- P Mostowski: Section for Sign Linguistics, Faculty of Polish Studies, University of Warsaw, Warsaw, Poland
- P Rutkowski: Section for Sign Linguistics, Faculty of Polish Studies, University of Warsaw, Warsaw, Poland
- M Śliwińska: Department of Psychology, University of York, Heslington, UK
- K Jednoróg: Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- K Emmorey: Laboratory for Language and Cognitive Neuroscience, San Diego State University, San Diego, USA
- A Marchewka: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
9. Banaszkiewicz A, Matuszewski J, Bola Ł, Szczepanik M, Kossowski B, Rutkowski P, Szwed M, Emmorey K, Jednoróg K, Marchewka A. Multimodal imaging of brain reorganization in hearing late learners of sign language. Hum Brain Mapp 2020;42:384-397. PMID: 33098616. PMCID: PMC7776004. DOI: 10.1002/hbm.25229.
Abstract
The neural plasticity underlying language learning is a process rather than a single event. However, the dynamics of training-induced brain reorganization have rarely been examined, especially using a multimodal magnetic resonance imaging approach, which allows us to study the relationship between functional and structural changes. We focus on sign language acquisition in hearing adults who underwent an 8-month long course and five neuroimaging sessions. We assessed what neural changes occurred as participants learned a new language in a different modality, as reflected by task-based activity, connectivity changes, and co-occurring structural alterations. Major changes in the activity pattern appeared after just 3 months of learning, as indicated by increases in activation within the modality-independent perisylvian language network, together with increased activation in modality-dependent parieto-occipital, visuospatial and motion-sensitive regions. Despite further learning, no alterations in activation were detected during the following months. However, enhanced coupling between left-lateralized occipital and inferior frontal regions was observed as proficiency increased. Furthermore, an increase in gray matter volume was detected in the left inferior frontal gyrus, which peaked at the end of learning. Overall, these results showed the complexity and temporal distinctiveness of various aspects of brain reorganization associated with learning a new language in a different sensory modality.
Affiliations
- Anna Banaszkiewicz: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Jacek Matuszewski: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Łukasz Bola: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland; Institute of Psychology, Jagiellonian University, Kraków, Poland; Department of Psychology, Harvard University, Boston, Massachusetts, USA
- Michał Szczepanik: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Bartosz Kossowski: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Paweł Rutkowski: Section for Sign Linguistics, Faculty of Polish Studies, University of Warsaw, Warsaw, Poland
- Marcin Szwed: Institute of Psychology, Jagiellonian University, Kraków, Poland
- Karen Emmorey: Laboratory for Language and Cognitive Neuroscience, San Diego State University, San Diego, California, USA
- Katarzyna Jednoróg: Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
- Artur Marchewka: Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Warsaw, Poland
10. Haluts N, Trippa M, Friedmann N, Treves A. Professional or Amateur? The Phonological Output Buffer as a Working Memory Operator. Entropy 2020;22:e22060662. PMID: 33286434. PMCID: PMC7517200. DOI: 10.3390/e22060662.
Abstract
The Phonological Output Buffer (POB) is thought to be the stage in language production where phonemes are held in working memory and assembled into words. The neural implementation of the POB remains unclear despite a wealth of phenomenological data. Individuals with POB impairment make phonological errors when they produce words and non-words, including phoneme omissions, insertions, transpositions, substitutions and perseverations. Errors can apply to different kinds and sizes of units, such as phonemes, number words, morphological affixes, and function words, and evidence from POB impairments suggests that units tend to be substituted with units of the same kind (e.g., numbers with numbers and whole morphological affixes with other affixes). This suggests that different units are processed and stored in the POB in the same stage, but perhaps separately in different mini-stores. Further, similar impairments can affect the buffer used to produce sign language, which raises the question of whether it is instantiated in a distinct device with the same design. However, what appear as separate buffers may be distinct regions in the activity space of a single extended POB network, connected with a lexicon network. The self-consistency of this idea can be assessed by studying an autoassociative Potts network, as a model of memory storage distributed over several cortical areas, and testing whether the network can represent both units of words and signs, reflecting the types and patterns of errors made by individuals with POB impairment.
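The autoassociative storage idea at the end of the abstract can be illustrated with the simplest relative of the Potts network the authors study: a binary Hopfield autoassociative memory, in which stored patterns become attractors and a degraded cue is completed toward the nearest stored memory. The sketch below is a generic textbook construction, not the authors' model; the pattern count, network size, and update rule are chosen only for demonstration.

```python
import numpy as np

# Generic binary Hopfield autoassociative memory: a simplified stand-in for
# the autoassociative Potts network mentioned above. Illustrative only.
rng = np.random.default_rng(0)
N = 64                                        # number of units
patterns = rng.choice([-1, 1], size=(3, N))   # three stored "memories"

# Hebbian storage: sum of outer products, scaled by N, no self-connections.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def recall(cue, steps=20):
    """Synchronously update units until the state settles on an attractor."""
    s = cue.copy()
    for _ in range(steps):
        s_new = np.where(W @ s >= 0, 1, -1)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s

# Degrade a stored pattern by flipping ~10% of its units, then recall it.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1
retrieved = recall(cue)
overlap = (retrieved == patterns[0]).mean()   # fraction of correctly recalled units
```

With only three patterns stored in 64 units, the network is far below capacity, so retrieval from a mildly degraded cue is expected to be essentially perfect.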
Affiliations
- Neta Haluts: Language and Brain Lab, Sagol School of Neuroscience and School of Education, Tel Aviv University, Tel Aviv-Yafo 69978, Israel
- Naama Friedmann: Language and Brain Lab, Sagol School of Neuroscience and School of Education, Tel Aviv University, Tel Aviv-Yafo 69978, Israel
- Alessandro Treves: SISSA, Cognitive Neuroscience, Via Bonomea 265, 34136 Trieste, Italy
11. Martinez D, Singleton JL. The effect of bilingualism on lexical learning and memory across two language modalities: some evidence for a domain-specific, but not general, advantage. J Cogn Psychol 2019. DOI: 10.1080/20445911.2019.1634080.
Affiliations
- David Martinez: School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
12. Malaia E, Wilbur RB. Visual and linguistic components of short-term memory: Generalized Neural Model (GNM) for spoken and sign languages. Cortex 2019;112:69-79. DOI: 10.1016/j.cortex.2018.05.020.
13. Rönnberg J, Holmer E, Rudner M. Cognitive hearing science and ease of language understanding. Int J Audiol 2019;58:247-261. DOI: 10.1080/14992027.2018.1551631.
Affiliations
- Jerker Rönnberg: Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Emil Holmer: Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Mary Rudner: Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
14. Rudner M. Working Memory for Linguistic and Non-linguistic Manual Gestures: Evidence, Theory, and Application. Front Psychol 2018;9:679. PMID: 29867655. PMCID: PMC5962724. DOI: 10.3389/fpsyg.2018.00679.
Abstract
Linguistic manual gestures are the basis of sign languages used by deaf individuals. Working memory and language processing are intimately connected and thus when language is gesture-based, it is important to understand related working memory mechanisms. This article reviews work on working memory for linguistic and non-linguistic manual gestures and discusses theoretical and applied implications. Empirical evidence shows that there are effects of load and stimulus degradation on working memory for manual gestures. These effects are similar to those found for working memory for speech-based language. Further, there are effects of pre-existing linguistic representation that are partially similar across language modalities. But above all, deaf signers score higher than hearing non-signers on an n-back task with sign-based stimuli, irrespective of their semantic and phonological content, but not with non-linguistic manual actions. This pattern may be partially explained by recent findings relating to cross-modal plasticity in deaf individuals. It suggests that in linguistic gesture-based working memory, semantic aspects may outweigh phonological aspects when processing takes place under challenging conditions. The close association between working memory and language development should be taken into account in understanding and alleviating the challenges faced by deaf children growing up with cochlear implants as well as other clinical populations.
Affiliations
- Mary Rudner: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
15.
Abstract
Jones, Hughes, and Macken (2007) claim that their data and our own are inconsistent with a multicomponent working-memory model. We explain in greater detail how the model can account for the data and can address their more specific criticisms. Both sides accept that data relating to the presence of a phonological similarity effect throughout the list depend on list length. We accept that, at this point, all explanations of their interaction are speculative and require further empirical investigation. We examine J, H, & M's interpretation of their and our results in terms of an auditory modality effect, observing that their interpretation of this effect is not well supported by the literature. We suggest that their account assumes a very narrow basis for a general theory of short-term retention, in contrast to a phonological loop interpretation, which forms part of a well-developed and articulated model of working memory.
16.
Le HB, Zhang HH, Wu QL, Zhang J, Yin JJ, Ma SH. Neural Activity During Mental Rotation in Deaf Signers: The Influence of Long-Term Sign Language Experience. Ear Hear 2018; 39:1015-1024. [PMID: 29298164] [DOI: 10.1097/aud.0000000000000540]
Abstract
OBJECTIVES Mental rotation is the brain's visuospatial understanding of what objects are and where they belong. Previous research indicated that deaf signers showed behavioral enhancement for nonlinguistic visual tasks, including mental rotation. In this study, we investigated the neural differences in mental rotation processing between deaf signers and hearing nonsigners using blood oxygen level-dependent (BOLD) functional magnetic resonance imaging (fMRI). DESIGN The participants performed a block-designed experiment, consisting of alternating blocks of comparison and rotation periods, separated by a baseline or fixation period. Mental rotation tasks were performed using three-dimensional figures. fMRI images were acquired during the entire experiment, and the fMRI data were analyzed with Analysis of Functional NeuroImages. A factorial analysis of variance was used for the fMRI analyses. The differences of activation were analyzed for the main effects of group and task, as well as for the interaction of group by task. RESULTS The study showed differences in activated areas between deaf signers and hearing nonsigners on the mental rotation of three-dimensional figures. Subtracting activations of fixation from activations of rotation, both groups showed consistent activation in bilateral occipital lobe, bilateral parietal lobe, and bilateral posterior temporal lobe. The main effect of task (rotation versus comparison) showed significant activation clusters in the bilateral precuneus, the right middle frontal gyrus, the bilateral medial frontal gyrus, the right inferior frontal gyrus, the right superior frontal gyrus, the right anterior cingulate, and the bilateral posterior cingulate. There were significant interaction effects of group by task in the bilateral anterior cingulate, the right inferior frontal gyrus, the left superior frontal gyrus, the left posterior cingulate, the left middle temporal gyrus, and the right inferior parietal lobe.
In simple effects of deaf and hearing groups with rotation minus comparison, deaf signers mainly showed activity in the right hemisphere, while hearing nonsigners showed bilateral activity. In the simple effects of rotation task, decreased activities were shown for deaf signers compared with hearing nonsigners throughout several regions, including the bilateral parahippocampal gyrus, the left posterior cingulate cortex, the right anterior cingulate cortex, and the right inferior parietal lobe. CONCLUSION Decreased activations in several brain regions of deaf signers when compared to hearing nonsigners reflected increased neural efficiency and a precise functional circuitry, which was generated through long-term experience with sign language processing. In addition, we inferred tentatively that there may be a lateralization pattern to the right hemisphere for deaf signers when performing mental rotation tasks.
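The group-by-task factorial logic used above can be illustrated with a toy computation on per-condition mean signal estimates. The numbers below are hypothetical; a real analysis applies the same contrasts voxel-wise to beta estimates.

```python
# Toy 2x2 factorial contrast (group x task); values are hypothetical
# per-condition mean signals, not data from the study.
signal = {
    ("deaf", "rotation"): 1.8, ("deaf", "comparison"): 1.1,
    ("hearing", "rotation"): 1.5, ("hearing", "comparison"): 1.4,
}

def task_effect(s):
    """Main effect of task: rotation minus comparison, averaged over groups."""
    return 0.5 * ((s[("deaf", "rotation")] - s[("deaf", "comparison")])
                  + (s[("hearing", "rotation")] - s[("hearing", "comparison")]))

def interaction(s):
    """Group x task interaction: the difference between the groups'
    rotation-minus-comparison effects."""
    return ((s[("deaf", "rotation")] - s[("deaf", "comparison")])
            - (s[("hearing", "rotation")] - s[("hearing", "comparison")]))
```

A nonzero interaction is what licenses the follow-up simple-effects comparisons reported in the abstract.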
Affiliation(s)
- Hong-Bo Le
- Department of Radiology, The First Affiliated Hospital of Shantou University Medical College, Shantou, China
- Guangdong Key Laboratory of Medical Molecular Imaging, The First Affiliated Hospital of Shantou University Medical College, Shantou, China
- Hui-Hong Zhang
- Department of Radiology, Shenzhen Hospital of Southern Medical University, Shenzhen, China
- MR Division, Shantou Central Hospital, Shantou, China
- Qiu-Lin Wu
- Guangdong Key Laboratory of Medical Molecular Imaging, The First Affiliated Hospital of Shantou University Medical College, Shantou, China
- Jiong Zhang
- Department of Radiology, The First Affiliated Hospital of Shantou University Medical College, Shantou, China
- Guangdong Key Laboratory of Medical Molecular Imaging, The First Affiliated Hospital of Shantou University Medical College, Shantou, China
- Jing-Jing Yin
- Department of Radiology, The First Affiliated Hospital of Shantou University Medical College, Shantou, China
- Guangdong Key Laboratory of Medical Molecular Imaging, The First Affiliated Hospital of Shantou University Medical College, Shantou, China
- Shu-Hua Ma
- Department of Radiology, The First Affiliated Hospital of Shantou University Medical College, Shantou, China
- Guangdong Key Laboratory of Medical Molecular Imaging, The First Affiliated Hospital of Shantou University Medical College, Shantou, China

17.
Kanazawa Y, Nakamura K, Ishii T, Aso T, Yamazaki H, Omori K. Phonological memory in sign language relies on the visuomotor neural system outside the left hemisphere language network. PLoS One 2017; 12:e0177599. [PMID: 28931014] [PMCID: PMC5607140] [DOI: 10.1371/journal.pone.0177599]
Abstract
Sign language is an essential medium for everyday social interaction for deaf people and plays a critical role in verbal learning. In particular, language development in those people should heavily rely on the verbal short-term memory (STM) via sign language. Most previous studies compared neural activations during signed language processing in deaf signers and those during spoken language processing in hearing speakers. For sign language users, it thus remains unclear how visuospatial inputs are converted into the verbal STM operating in the left-hemisphere language network. Using functional magnetic resonance imaging, the present study investigated neural activation while bilinguals of spoken and signed language were engaged in a sequence memory span task. On each trial, participants viewed a nonsense syllable sequence presented either as written letters or as fingerspelling (4-7 syllables in length) and then held the syllable sequence for 12 s. Behavioral analysis revealed that participants relied on phonological memory while holding verbal information regardless of the type of input modality. At the neural level, this maintenance stage broadly activated the left-hemisphere language network, including the inferior frontal gyrus, supplementary motor area, superior temporal gyrus and inferior parietal lobule, for both letter and fingerspelling conditions. Interestingly, while most participants reported that they relied on phonological memory during maintenance, direct comparisons between letters and fingers revealed strikingly different patterns of neural activation during the same period. Namely, the effortful maintenance of fingerspelling inputs relative to letter inputs activated the left superior parietal lobule and dorsal premotor area, i.e., brain regions known to play a role in visuomotor analysis of hand/arm movements. 
These findings suggest that the dorsal visuomotor neural system subserves verbal learning via sign language by relaying gestural inputs to the classical left-hemisphere language network.
Affiliation(s)
- Yuji Kanazawa
- Human Brain Research Center, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Department of Otolaryngology-Head and Neck Surgery, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Kimihiro Nakamura
- Human Brain Research Center, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Faculty of Human Sciences, University of Tsukuba, Tsukuba, Japan
- Toru Ishii
- Human Brain Research Center, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Toshihiko Aso
- Human Brain Research Center, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Hiroshi Yamazaki
- Department of Otolaryngology-Head and Neck Surgery, Kyoto University Graduate School of Medicine, Kyoto, Japan
- Koichi Omori
- Department of Otolaryngology-Head and Neck Surgery, Kyoto University Graduate School of Medicine, Kyoto, Japan

18.
Cardin V, Rudner M, De Oliveira RF, Andin J, Su MT, Beese L, Woll B, Rönnberg J. The Organization of Working Memory Networks is Shaped by Early Sensory Experience. Cereb Cortex 2017; 28:3540-3554. [DOI: 10.1093/cercor/bhx222]
Affiliation(s)
- Velia Cardin
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Deafness Cognition and Language Research Centre, Department of Experimental Psychology, University College London, 49 Gordon Square, London, UK
- School of Psychology, University of East Anglia, Norwich Research Park, Norwich, UK
- Mary Rudner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Rita F De Oliveira
- School of Applied Science, London South Bank University, 103 Borough Road, London, UK
- Josefine Andin
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Merina T Su
- Developmental Neurosciences Programme, UCL GOS Institute of Child Health, 30 Guilford Street, London, UK
- Lilli Beese
- Deafness Cognition and Language Research Centre, Department of Experimental Psychology, University College London, 49 Gordon Square, London, UK
- Bencie Woll
- Deafness Cognition and Language Research Centre, Department of Experimental Psychology, University College London, 49 Gordon Square, London, UK
- Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden

19.
The relation between working memory and language comprehension in signers and speakers. Acta Psychol (Amst) 2017; 177:69-77. [PMID: 28477456] [DOI: 10.1016/j.actpsy.2017.04.014]
Abstract
This study investigated the relation between linguistic and spatial working memory (WM) resources and language comprehension for signed compared to spoken language. Sign languages are both linguistic and visual-spatial, and therefore provide a unique window on modality-specific versus modality-independent contributions of WM resources to language processing. Deaf users of American Sign Language (ASL), hearing monolingual English speakers, and hearing ASL-English bilinguals completed several spatial and linguistic serial recall tasks. Additionally, their comprehension of spatial and non-spatial information in ASL and spoken English narratives was assessed. Results from the linguistic serial recall tasks revealed that the often reported advantage for speakers on linguistic short-term memory tasks does not extend to complex WM tasks with a serial recall component. For English, linguistic WM predicted retention of non-spatial information, and both linguistic and spatial WM predicted retention of spatial information. For ASL, spatial WM predicted retention of spatial (but not non-spatial) information, and linguistic WM did not predict retention of either spatial or non-spatial information. Overall, our findings argue against strong assumptions of independent domain-specific subsystems for the storage and processing of linguistic and spatial information and furthermore suggest a less important role for serial encoding in signed than spoken language comprehension.
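The prediction analyses summarized above come down to regressing a comprehension score on a WM score. A minimal sketch of that logic follows; the variable names and data values are hypothetical, not the study's measures.

```python
# Ordinary least-squares fit of comprehension ~ WM span.
# All data values below are hypothetical, for illustration only.
def ols_fit(x, y):
    """Return slope and intercept of the least-squares line y = a*x + b."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var = sum((xi - mean_x) ** 2 for xi in x)
    slope = cov / var
    return slope, mean_y - slope * mean_x

wm_span = [2, 3, 4, 5, 6]             # hypothetical linguistic WM spans
comprehension = [50, 60, 70, 80, 90]  # hypothetical percent-correct scores
slope, intercept = ols_fit(wm_span, comprehension)
```

In the study's terms, "linguistic WM predicted retention" corresponds to a reliably nonzero slope for that predictor.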
20.
Miozzo M, Petrova A, Fischer-Baum S, Peressotti F. Serial position encoding of signs. Cognition 2016; 154:69-80. [DOI: 10.1016/j.cognition.2016.05.008]
21.

22.
Fogerty D, Humes LE, Busey TA. Age-Related Declines in Early Sensory Memory: Identification of Rapid Auditory and Visual Stimulus Sequences. Front Aging Neurosci 2016; 8:90. [PMID: 27199737] [PMCID: PMC4858528] [DOI: 10.3389/fnagi.2016.00090]
Abstract
Age-related temporal-processing declines of rapidly presented sequences may involve contributions of sensory memory. This study investigated recall for rapidly presented auditory (vowel) and visual (letter) sequences presented at six different stimulus onset asynchronies (SOA) that spanned threshold SOAs for sequence identification. Younger, middle-aged, and older adults participated in all tasks. Results were investigated at both equivalent performance levels (i.e., SOA threshold) and at identical physical stimulus values (i.e., SOAs). For four-item sequences, results demonstrated best performance for the first and last items in the auditory sequences, but only the first item for visual sequences. For two-item sequences, adults identified the second vowel or letter significantly better than the first. Overall, when temporal-order performance was equated for each individual by testing at SOA thresholds, recall accuracy for each position across the age groups was highly similar. These results suggest that modality-specific processing declines of older adults primarily determine temporal-order performance for rapid sequences. However, there is some evidence for a second amodal processing decline in older adults related to early sensory memory for final items in a sequence. This selective deficit was observed particularly for longer sequence lengths and was not accounted for by temporal masking.
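The serial-position analysis described above reduces to scoring recall accuracy at each list position. A minimal sketch, with hypothetical trial data (the item sets are not the study's stimuli):

```python
# Per-position recall accuracy for fixed-length sequences.
# Trial data passed in are hypothetical, for illustration only.
def position_accuracy(trials):
    """trials: list of (presented, recalled) sequences of equal length.
    Returns the proportion of correct recalls at each serial position."""
    length = len(trials[0][0])
    correct = [0] * length
    for presented, recalled in trials:
        for pos in range(length):
            correct[pos] += int(presented[pos] == recalled[pos])
    return [c / len(trials) for c in correct]
```

A primacy or recency advantage, like the first-and-last-item pattern reported for auditory sequences, would show up as higher values at the edge positions of the returned list.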
Affiliation(s)
- Daniel Fogerty
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC, USA
- Larry E. Humes
- Department of Speech and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Thomas A. Busey
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN, USA

23.
Ahmad RF, Malik AS, Kamel N, Reza F, Abdullah JM. Simultaneous EEG-fMRI for working memory of the human brain. Australas Phys Eng Sci Med 2016; 39:363-78. [PMID: 27043850] [DOI: 10.1007/s13246-016-0438-x]
Abstract
Memory plays an important role in human life. Memory can be divided into two categories: long-term memory and short-term memory (STM). STM, or working memory (WM), stores information for a short span of time and is used for information manipulation and fast-response activities. WM is generally involved in the higher cognitive functions of the brain. Different studies have been carried out by researchers to understand the WM process. Most of these studies were based on neuroimaging modalities such as fMRI, EEG, and MEG, each used on its own. Each neuroimaging modality has pros and cons. For example, EEG gives high temporal resolution but poor spatial resolution. On the other hand, fMRI results have high spatial resolution but poor temporal resolution. For a more in-depth understanding of what is happening inside the human brain during the WM process or during cognitive tasks, both high spatial and high temporal resolution are desirable. Over the past decade, researchers have been working to combine different modalities to achieve high spatial and temporal resolution at the same time. The development of MRI-compatible EEG equipment in recent times has enabled researchers to combine EEG and fMRI successfully, and research publications on simultaneous EEG-fMRI have been increasing tremendously. This review is focused on WM research involving simultaneous EEG-fMRI data acquisition and analysis. We cover the application of simultaneous EEG-fMRI to WM and the associated data processing, as well as potential fusion methods that can be used in simultaneous EEG-fMRI studies of WM and cognitive tasks.
Affiliation(s)
- Rana Fayyaz Ahmad
- Centre for Intelligent Signal and Imaging Research (CISIR), Tronoh, Malaysia
- Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, 32610, Bandar Seri Iskandar, Perak, Malaysia
- Aamir Saeed Malik
- Centre for Intelligent Signal and Imaging Research (CISIR), Tronoh, Malaysia
- Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, 32610, Bandar Seri Iskandar, Perak, Malaysia
- Nidal Kamel
- Centre for Intelligent Signal and Imaging Research (CISIR), Tronoh, Malaysia
- Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, 32610, Bandar Seri Iskandar, Perak, Malaysia
- Faruque Reza
- Department of Neurosciences, Universiti Sains Malaysia, Kubang Kerian, 16150, Kota Bharu, Kelantan, Malaysia
- Centre for Neuroscience Services and Research, Universiti Sains Malaysia, Kubang Kerian, 16150, Kota Bharu, Kelantan, Malaysia
- Jafri Malin Abdullah
- Department of Neurosciences, Universiti Sains Malaysia, Kubang Kerian, 16150, Kota Bharu, Kelantan, Malaysia
- Centre for Neuroscience Services and Research, Universiti Sains Malaysia, Kubang Kerian, 16150, Kota Bharu, Kelantan, Malaysia

24.
Ferjan Ramirez N, Leonard MK, Davenport TS, Torres C, Halgren E, Mayberry RI. Neural Language Processing in Adolescent First-Language Learners: Longitudinal Case Studies in American Sign Language. Cereb Cortex 2016; 26:1015-26. [PMID: 25410427] [PMCID: PMC4737603] [DOI: 10.1093/cercor/bhu273]
Abstract
One key question in neurolinguistics is the extent to which the neural processing system for language requires linguistic experience during early life to develop fully. We conducted a longitudinal anatomically constrained magnetoencephalography (aMEG) analysis of lexico-semantic processing in 2 deaf adolescents who had no sustained language input until 14 years of age, when they became fully immersed in American Sign Language. After 2 to 3 years of language, the adolescents' neural responses to signed words were highly atypical, localizing mainly to right dorsal frontoparietal regions and often responding more strongly to semantically primed words (Ferjan Ramirez N, Leonard MK, Torres C, Hatrak M, Halgren E, Mayberry RI. 2014. Neural language processing in adolescent first-language learners. Cereb Cortex. 24 (10): 2772-2783). Here, we show that after an additional 15 months of language experience, the adolescents' neural responses remained atypical in terms of polarity. While their responses to less familiar signed words still showed atypical localization patterns, the localization of responses to highly familiar signed words became more concentrated in the left perisylvian language network. Our findings suggest that the timing of language experience affects the organization of neural language processing; however, even in adolescence, language representation in the human brain continues to evolve with experience.
Affiliation(s)
- Naja Ferjan Ramirez
- Department of Linguistics
- Multimodal Imaging Laboratory
- Institute for Learning and Brain Sciences, University of Washington, Seattle, WA 98195, USA
- Matthew K. Leonard
- Multimodal Imaging Laboratory
- Department of Radiology
- Department of Neurological Surgery, University of California, San Francisco, CA 94158, USA
- Eric Halgren
- Multimodal Imaging Laboratory
- Department of Radiology
- Department of Neuroscience
- Kavli Institute for Brain and Mind, University of California, San Diego, La Jolla, CA 92093, USA

25.
Preexisting semantic representation improves working memory performance in the visuospatial domain. Mem Cognit 2016; 44:608-20. [DOI: 10.3758/s13421-016-0585-z]
26.
Starowicz-Filip A, Chrobak AA, Milczarek O, Kwiatkowski S. The visuospatial functions in children after cerebellar low-grade astrocytoma surgery: A contribution to the pediatric neuropsychology of the cerebellum. J Neuropsychol 2015; 11:201-221. [DOI: 10.1111/jnp.12093]
Affiliation(s)
- Anna Starowicz-Filip
- Jagiellonian University Medical College, Krakow, Poland
- Neurosurgery Department, Children's University Hospital in Krakow, Poland
- Olga Milczarek
- Jagiellonian University Medical College, Krakow, Poland
- Neurosurgery Department, Children's University Hospital in Krakow, Poland
- Stanisław Kwiatkowski
- Jagiellonian University Medical College, Krakow, Poland
- Neurosurgery Department, Children's University Hospital in Krakow, Poland

27.
Rudner M, Toscano E, Holmer E. Load and distinctness interact in working memory for lexical manual gestures. Front Psychol 2015; 6:1147. [PMID: 26321979] [PMCID: PMC4535352] [DOI: 10.3389/fpsyg.2015.01147]
Abstract
The Ease of Language Understanding model (Rönnberg et al., 2013) predicts that decreasing the distinctness of language stimuli increases working memory load; in the speech domain this notion is supported by empirical evidence. Our aim was to determine whether such an over-additive interaction can be generalized to sign processing in sign-naïve individuals and whether it is modulated by experience of computer gaming. Twenty young adults with no knowledge of sign language performed an n-back working memory task based on manual gestures lexicalized in sign language; the visual resolution of the signs and working memory load were manipulated. Performance was poorer when load was high and resolution was low. These two effects interacted over-additively, demonstrating that reducing the resolution of signed stimuli increases working memory load when there is no pre-existing semantic representation. This suggests that load and distinctness are handled by a shared amodal mechanism which can be revealed empirically when stimuli are degraded and load is high, even without pre-existing semantic representation. There was some evidence that the mechanism is influenced by computer gaming experience. Future work should explore how the shared mechanism is influenced by pre-existing semantic representation and sensory factors together with computer gaming experience.
Affiliation(s)
- Mary Rudner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Elena Toscano
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Emil Holmer
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden

28.
Emmorey K, McCullough S, Mehta S, Grabowski TJ. How sensory-motor systems impact the neural organization for language: direct contrasts between spoken and signed language. Front Psychol 2014; 5:484. [PMID: 24904497] [PMCID: PMC4033845] [DOI: 10.3389/fpsyg.2014.00484]
Abstract
To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H2(15)O PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.
Affiliation(s)
- Karen Emmorey
- Laboratory for Language and Cognitive Neuroscience, School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
- Stephen McCullough
- Laboratory for Language and Cognitive Neuroscience, School of Speech, Language, and Hearing Sciences, San Diego State University, San Diego, CA, USA
- Sonya Mehta
- Department of Psychology, University of Washington, Seattle, WA, USA
- Department of Radiology, University of Washington, Seattle, WA, USA

29.
Sörqvist P, Rönnberg J. Individual differences in distractibility: An update and a model. Psych J 2014; 3:42-57. [PMID: 25632345] [PMCID: PMC4285120] [DOI: 10.1002/pchj.47]
Abstract
This paper reviews the current literature on individual differences in susceptibility to the effects of background sound on visual-verbal task performance. A large body of evidence suggests that individual differences in working memory capacity (WMC) underpin individual differences in susceptibility to auditory distraction in most tasks and contexts. Specifically, high WMC is associated with a more steadfast locus of attention (thus overruling the call for attention that background noise may evoke) and a more constrained auditory-sensory gating (i.e., less processing of the background sound). The relation between WMC and distractibility is a general framework that may also explain distractibility differences between populations that differ along variables that covary with WMC (such as age, developmental disorders, and personality traits). A neurocognitive task-engagement/distraction trade-off (TEDTOFF) model that summarizes current knowledge is outlined and directions for future research are proposed.
Affiliation(s)
- Patrik Sörqvist
- Department of Building, Energy and Environmental Engineering, University of Gävle, Gävle, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden

30.
Rönnberg J, Lunner T, Zekveld A, Sörqvist P, Danielsson H, Lyxell B, Dahlström O, Signoret C, Stenfelt S, Pichora-Fuller MK, Rudner M. The Ease of Language Understanding (ELU) model: theoretical, empirical, and clinical advances. Front Syst Neurosci 2013; 7:31. [PMID: 23874273] [PMCID: PMC3710434] [DOI: 10.3389/fnsys.2013.00031]
Abstract
Working memory is important for online language processing during conversation. We use it to maintain relevant information, to inhibit or ignore irrelevant information, and to attend to conversation selectively. Working memory helps us to keep track of and actively participate in conversation, including taking turns and following the gist. This paper examines the Ease of Language Understanding model (i.e., the ELU model, Rönnberg, 2003; Rönnberg et al., 2008) in light of new behavioral and neural findings concerning the role of working memory capacity (WMC) in uni-modal and bimodal language processing. The new ELU model is a meaning prediction system that depends on phonological and semantic interactions in rapid implicit and slower explicit processing mechanisms that both depend on WMC albeit in different ways. It is based on findings that address the relationship between WMC and (a) early attention processes in listening to speech, (b) signal processing in hearing aids and its effects on short-term memory, (c) inhibition of speech maskers and its effect on episodic long-term memory, (d) the effects of hearing impairment on episodic and semantic long-term memory, and finally, (e) listening effort. New predictions and clinical implications are outlined. Comparisons with other WMC and speech perception models are made.
Collapse
Affiliation(s)
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linköping University Linköping, Sweden ; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University Linköping, Sweden
|
32
|
Wang J, Napier J. Signed language working memory capacity of signed language interpreters and deaf signers. J Deaf Stud Deaf Educ 2013; 18:271-286. [PMID: 23303377 DOI: 10.1093/deafed/ens068] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
This study investigated the effects of hearing status and age of signed language acquisition on signed language working memory capacity. Professional Auslan (Australian sign language)/English interpreters (hearing native signers and hearing nonnative signers) and deaf Auslan signers (deaf native signers and deaf nonnative signers) completed an Auslan working memory (WM) span task. The results revealed that the hearing signers (i.e., the professional interpreters) significantly outperformed the deaf signers on the Auslan WM span task. However, the results showed no significant differences between the native signers and the nonnative signers in their Auslan working memory capacity. Furthermore, there was no significant interaction between hearing status and age of signed language acquisition. Additionally, the study found no significant differences between the deaf native signers (adults) and the deaf nonnative signers (adults) in their Auslan working memory capacity. The findings are discussed in relation to the participants' memory strategies and their early language experience. The findings present challenges for WM theories.
Affiliation(s)
- Jihong Wang
- Department of Linguistics, Macquarie University, Sydney NSW 2109, Australia.
|
33
|
Rudner M, Karlsson T, Gunnarsson J, Rönnberg J. Levels of processing and language modality specificity in working memory. Neuropsychologia 2012; 51:656-66. [PMID: 23287569 DOI: 10.1016/j.neuropsychologia.2012.12.011] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2012] [Revised: 12/17/2012] [Accepted: 12/18/2012] [Indexed: 11/25/2022]
Abstract
Neural networks underpinning working memory demonstrate sign language specific components possibly related to differences in temporary storage mechanisms. A processing approach to memory systems suggests that the organisation of memory storage is related to type of memory processing as well. In the present study, we investigated for the first time semantic, phonological and orthographic processing in working memory for sign- and speech-based language. During fMRI we administered a picture-based 2-back working memory task with Semantic, Phonological, Orthographic and Baseline conditions to 11 deaf signers and 20 hearing non-signers. Behavioural data showed poorer and slower performance for both groups in Phonological and Orthographic conditions than in the Semantic condition, in line with depth-of-processing theory. An exclusive masking procedure revealed distinct sign-specific neural networks supporting working memory components at all three levels of processing. The overall pattern of sign-specific activations may reflect a relative intermodality difference in the relationship between phonology and semantics influencing working memory storage and processing.
Affiliation(s)
- Mary Rudner
- The Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden.
|
34
|
García-Orza J, Carratalá P. Sign recall by hearing signers: Evidences of dual coding. JOURNAL OF COGNITIVE PSYCHOLOGY 2012. [DOI: 10.1080/20445911.2012.682054] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
|
35
|
Hirshorn EA, Fernandez NM, Bavelier D. Routes to short-term memory indexing: lessons from deaf native users of American Sign Language. Cogn Neuropsychol 2012; 29:85-103. [PMID: 22871205 DOI: 10.1080/02643294.2012.704354] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Abstract
Models of working memory (WM) have been instrumental in understanding foundational cognitive processes and sources of individual differences. However, current models cannot conclusively explain the consistent group differences between deaf signers and hearing speakers on a number of short-term memory (STM) tasks. Here we take the perspective that these results are not due to a temporal order-processing deficit in deaf individuals, but rather reflect different biases in how different types of memory cues are used to do a given task. We further argue that the main driving force behind the shifts in relative biasing is a consequence of language modality (sign vs. speech) and the processing they afford, and not deafness, per se.
|
36
|
Mayberry RI, Chen JK, Witcher P, Klein D. Age of acquisition effects on the functional organization of language in the adult brain. Brain Lang 2011; 119:16-29. [PMID: 21705060 DOI: 10.1016/j.bandl.2011.05.007] [Citation(s) in RCA: 89] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/24/2010] [Revised: 03/09/2011] [Accepted: 05/23/2011] [Indexed: 05/11/2023]
Abstract
Using functional magnetic resonance imaging (fMRI), we neuroimaged deaf adults as they performed two linguistic tasks with sentences in American Sign Language, grammatical judgment and phonemic-hand judgment. Participants' age-onset of sign language acquisition ranged from birth to 14 years; length of sign language experience was substantial and did not vary in relation to age of acquisition. For both tasks, a more left lateralized pattern of activation was observed, with activity for grammatical judgment being more anterior than that observed for phonemic-hand judgment, which was more posterior by comparison. Age of acquisition was linearly and negatively related to activation levels in anterior language regions and positively related to activation levels in posterior visual regions for both tasks.
Affiliation(s)
- Rachel I Mayberry
- Department of Linguistics, University of California, San Diego, La Jolla, CA 92093-0108, USA.
|
37
|
Baddeley A. Working memory: theories, models, and controversies. Annu Rev Psychol 2012; 63:1-29. [DOI: 10.1146/annurev-psych-120710-100422]
Abstract
I present an account of the origins and development of the multicomponent approach to working memory, making a distinction between the overall theoretical framework, which has remained relatively stable, and the attempts to build more specific models within this framework. I follow this with a brief discussion of alternative models and their relationship to the framework. I conclude with speculations on further developments and a comment on the value of attempting to apply models and theories beyond the laboratory studies on which they are typically based.
Affiliation(s)
- Alan Baddeley
- Department of Psychology, University of York, United Kingdom.
|
38
|
Banai K, Sabin AT, Wright BA. Separable developmental trajectories for the abilities to detect auditory amplitude and frequency modulation. Hear Res 2011; 280:219-27. [PMID: 21664958 DOI: 10.1016/j.heares.2011.05.019] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/29/2010] [Revised: 05/23/2011] [Accepted: 05/25/2011] [Indexed: 10/18/2022]
Abstract
Amplitude modulation (AM) and frequency modulation (FM) are inherent components of most natural sounds. The ability to detect these modulations, considered critical for normal auditory and speech perception, improves over the course of development. However, the extent to which the development of AM and FM detection skills follow different trajectories, and therefore can be attributed to the maturation of separate processes, remains unclear. Here we explored the relationship between the developmental trajectories for the detection of sinusoidal AM and FM in a cross-sectional design employing children aged 8-10 and 11-12 years and adults. For FM of tonal carriers, both average performance (mean) and performance consistency (within-listener standard deviation) were adult-like in the 8-10 y/o. In contrast, in the same listeners, average performance for AM of wideband noise carriers was still not adult-like in the 11-12 y/o, though performance consistency was already mature in the 8-10 y/o. Among the children there were no significant correlations for either measure between the degrees of maturity for AM and FM detection. These differences in developmental trajectory between the two modulation cues and between average detection thresholds and performance consistency suggest that at least partially distinct processes may underlie the development of AM and FM detection as well as the abilities to detect modulation and to do so consistently.
Affiliation(s)
- Karen Banai
- Department of Communication Sciences and Disorders, University of Haifa, Haifa 31905, Israel.
|
39
|
Rönnberg J, Rudner M, Lunner T. Cognitive hearing science: the legacy of Stuart Gatehouse. Trends Amplif 2011; 15:140-8. [PMID: 21606047 DOI: 10.1177/1084713811409762] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Stuart Gatehouse was one of the pioneers of cognitive hearing science. The ease of language understanding (ELU) model (Rönnberg) is one example of a cognitive hearing science model where the interplay between memory systems and signal processing is emphasized. The mismatch notion is central to ELU and concerns how phonological information derived from the signal matches/mismatches phonological representations in lexical and semantic long-term memory (LTM). When signals match, processing is rapid, automatic, and implicit, and lexical activation proceeds smoothly. Given a mismatch, lexical activation fails, and working or short-term memory (WM/STM) is assumed to be invoked to engage in explicit repair strategies to disambiguate what was said in the conversation. In a recent study, negative long-term consequences of mismatch were found by relating hearing loss to episodic LTM in a sample of old hearing-aid wearers; STM was intact (Rönnberg et al.). Beneficial short-term consequences of a binary masking noise reduction scheme on STM were obtained in 4-talker babble for individuals with high WM capacity, but not in stationary noise backgrounds (Ng et al.). This suggests that individuals high in WM capacity inhibit semantic auditory distraction in 4-talker babble while exploiting the phonological benefits in terms of speech quality provided by binary masking (Wang). Both long-term and short-term mismatch effects, apparent in data sets including behavioral as well as subjective (Rudner et al.) data, need to be taken into account in the design of future hearing instruments.
|
40
|
Binding in visual working memory: The role of the episodic buffer. Neuropsychologia 2011; 49:1393-400. [DOI: 10.1016/j.neuropsychologia.2010.12.042] [Citation(s) in RCA: 262] [Impact Index Per Article: 20.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2010] [Revised: 12/09/2010] [Accepted: 12/30/2010] [Indexed: 11/20/2022]
|
41
|
Hall ML, Bavelier D. Short-term memory stages in sign vs. speech: the source of the serial span discrepancy. Cognition 2011; 120:54-66. [PMID: 21450284 DOI: 10.1016/j.cognition.2011.02.014] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2010] [Revised: 02/18/2011] [Accepted: 02/23/2011] [Indexed: 10/18/2022]
Abstract
Speakers generally outperform signers when asked to recall a list of unrelated verbal items. This phenomenon is well established, but its source has remained unclear. In this study, we evaluate the relative contribution of the three main processing stages of short-term memory--perception, encoding, and recall--in this effect. The present study factorially manipulates whether American Sign Language (ASL) or English is used for perception, memory encoding, and recall in hearing ASL-English bilinguals. Results indicate that using ASL during both perception and encoding contributes to the serial span discrepancy. Interestingly, performing recall in ASL slightly increased span, ruling out the view that signing is in general a poor choice for short-term memory. These results suggest that despite the general equivalence of sign and speech in other memory domains, speech-based representations are better suited for the specific task of perception and memory encoding of a series of unrelated verbal items in serial order through the phonological loop. This work suggests that interpretation of performance on serial recall tasks in English may not translate straightforwardly to serial tasks in sign language.
Affiliation(s)
- Matthew L Hall
- Department of Psychology, UC San Diego, 9500 Gilman Dr., La Jolla, CA 92039-0109, USA.
|
42
|
Gozzi M, Geraci C, Cecchetto C, Perugini M, Papagno C. Looking for an explanation for the low sign span. Is order involved? J Deaf Stud Deaf Educ 2010; 16:101-107. [PMID: 20679138 DOI: 10.1093/deafed/enq035] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
Although signed and speech-based languages have a similar internal organization of verbal short-term memory, sign span is lower than word span. We investigated whether this is due to the fact that signs are not suited for serial recall, as proposed by Bavelier, Newport, Hall, Supalla, and Boutla (2008. Ordered short-term memory differs in signers and speakers: Implications for models of short-term memory. Cognition, 107, 433-459). We administered a serial recall task with stimuli in Italian Sign Language to 12 deaf people, and we compared their performance with that of twelve age-, gender-, and education-matched hearing participants who performed the task in Italian. The results do not offer evidence for the hypothesis that serial order per se is a detrimental factor for deaf participants. An alternative explanation for the lower sign span based on signs being phonologically heavier than words is considered.
Affiliation(s)
- Marta Gozzi
- Dipartimento di Psicologia, Università di Milano-Bicocca, Piazza dell'Ateneo Nuovo 1, Milan, Italy
|
43
|
Stenfelt S, Rönnberg J. The signal-cognition interface: interactions between degraded auditory signals and cognitive processes. Scand J Psychol 2009; 50:385-93. [PMID: 19778386 DOI: 10.1111/j.1467-9450.2009.00748.x] [Citation(s) in RCA: 81] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
A hearing loss leads to problems with speech perception; these are exacerbated when competing noise is present. The speech signal is recognized by the cognitive system of the listener; noise and distortion tax the cognitive system when interpreting it. The auditory system must interact with the cognitive system for optimal signal decoding. This article discusses this interaction between the signal and the cognitive system based on two models: an auditory model describing signal transmission and degradation due to a hearing loss, and a cognitive model for Ease of Language Understanding. The signal distortion depends on the specifics of the hearing impairment, and thus differently distorted signals can affect the cognitive system in different ways. Consequently, the severity of a hearing loss may depend not only on the lesion itself but also on the cognitive resources required to interpret the signal.
Affiliation(s)
- Stefan Stenfelt
- Department of Clinical and Experimental Medicine, Linköping University, Sweden.
|
44
|
Rudner M, Davidsson L, Rönnberg J. Effects of age on the temporal organization of working memory in deaf signers. Aging Neuropsychol Cogn 2009; 17:360-83. [PMID: 19921581 DOI: 10.1080/13825580903311832] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Deaf native signers have a general working memory (WM) capacity similar to that of hearing non-signers but are less sensitive to the temporal order of stored items at retrieval. General WM capacity declines with age, but little is known of how cognitive aging affects WM function in deaf signers. We investigated WM function in elderly deaf signers (EDS) and an age-matched comparison group of hearing non-signers (EHN) using a paradigm designed to highlight differences in temporal and spatial processing of item and order information. EDS performed worse than EHN on both item and order recognition using a temporal style of presentation. Reanalysis together with earlier data showed that with the temporal style of presentation, order recognition performance for EDS was also lower than for young adult deaf signers. Older participants responded more slowly than younger participants. These findings suggest that apart from age-related slowing irrespective of sensory and language status, there is an age-related difference specific to deaf signers in the ability to retain order information in WM when temporal processing demands are high. This may be due to neural reorganisation arising from sign language use. Concurrent spatial information with the Mixed style of presentation resulted in enhanced order processing for all groups, suggesting that concurrent temporal and spatial cues may enhance learning for both deaf and hearing groups. These findings support and extend the WM model for Ease of Language Understanding.
Affiliation(s)
- Mary Rudner
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden.
|
45
|
Baumann O, Chan E, Mattingley JB. Dissociable neural circuits for encoding and retrieval of object locations during active navigation in humans. Neuroimage 2009; 49:2816-25. [PMID: 19837178 DOI: 10.1016/j.neuroimage.2009.10.021] [Citation(s) in RCA: 77] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2009] [Revised: 09/03/2009] [Accepted: 10/08/2009] [Indexed: 10/20/2022] Open
Abstract
Several cortical and subcortical circuits have been implicated in object location memory and navigation. Uncertainty remains, however, about which neural circuits are involved in the distinct processes of encoding and retrieval during active navigation through three-dimensional space. We used functional magnetic resonance imaging (fMRI) to measure neural responses as participants learned the location of a single target object relative to a small set of landmarks. Following a delay, the target was removed and participants were required to navigate back to its original position. The relative and absolute locations of landmarks and the target object were changed on every trial, so that participants had to learn a novel arrangement for each spatial scene. At encoding, greater activity within the right hippocampus and the parahippocampal gyrus bilaterally predicted more accurate navigation to the hidden target object in the retrieval phase. By contrast, during the retrieval phase, more accurate performance was associated with increased activity in the left hippocampus and the striatum bilaterally. Dividing participants into good and poor navigators, based upon behavioural performance, revealed greater striatal activity in good navigators during retrieval, perhaps reflecting superior procedural learning in these individuals. By contrast, the poor navigators showed stronger left hippocampal activity, suggesting reliance on a less effective verbal or symbolic code by this group. Our findings suggest separate neural substrates for the encoding and retrieval stages of object location memory during active navigation, which are further modulated by participants' overall navigational ability.
Affiliation(s)
- Oliver Baumann
- The University of Queensland, Queensland Brain Institute & School of Psychology, St Lucia, Queensland, 4072, Australia.
|
46
|
Rudner M, Andin J, Rönnberg J. Working memory, deafness and sign language. Scand J Psychol 2009; 50:495-505. [DOI: 10.1111/j.1467-9450.2009.00744.x] [Citation(s) in RCA: 33] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
47
|
Rönnberg J, Rudner M, Foo C, Lunner T. Cognition counts: A working memory system for ease of language understanding (ELU). Int J Audiol 2009; 47 Suppl 2:S99-105. [PMID: 19012117 DOI: 10.1080/14992020802301167] [Citation(s) in RCA: 310] [Impact Index Per Article: 20.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
48
|
Pa J, Wilson SM, Pickell H, Bellugi U, Hickok G. Neural organization of linguistic short-term memory is sensory modality-dependent: evidence from signed and spoken language. J Cogn Neurosci 2009; 20:2198-210. [PMID: 18457510 DOI: 10.1162/jocn.2008.20154] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Despite decades of research, there is still disagreement regarding the nature of the information that is maintained in linguistic short-term memory (STM). Some authors argue for abstract phonological codes, whereas others argue for more general sensory traces. We assess these possibilities by investigating linguistic STM in two distinct sensory-motor modalities, spoken and signed language. Hearing bilingual participants (native in English and American Sign Language) performed equivalent STM tasks in both languages during functional magnetic resonance imaging. Distinct, sensory-specific activations were seen during the maintenance phase of the task for spoken versus signed language. These regions have been previously shown to respond to nonlinguistic sensory stimulation, suggesting that linguistic STM tasks recruit sensory-specific networks. However, maintenance-phase activations common to the two languages were also observed, implying some form of common process. We conclude that linguistic STM involves sensory-dependent neural networks, but suggest that sensory-independent neural networks may also exist.
Affiliation(s)
- Judy Pa
- University of California, Irvine, CA 92697, USA
|
49
|
Rudner M, Foo C, Sundewall-Thorén E, Lunner T, Rönnberg J. Phonological mismatch and explicit cognitive processing in a sample of 102 hearing-aid users. Int J Audiol 2009; 47 Suppl 2:S91-8. [PMID: 19012116 DOI: 10.1080/14992020802304393] [Citation(s) in RCA: 47] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Rudner et al. (2008) showed that when compression release settings are manipulated in the hearing instruments of Swedish habitual users, the resulting mismatch between the phonological form of the input speech signal and representations stored in long-term memory leads to greater engagement of explicit cognitive processing under taxing listening conditions. The mismatch effect is manifest in significant correlations between performance on cognitive tests and aided-speech-recognition performance in modulated noise and/or with fast compression release settings. This effect is predicted by the ELU model (Rönnberg et al., 2008). In order to test whether the mismatch effect can be generalized across languages, we examined two sets of aided speech recognition data collected from a Danish population in which two cognitive tests, reading span and letter monitoring, had been administered. A reanalysis of all three datasets, including 102 participants, demonstrated the mismatch effect. These findings suggest that the effect of phonological mismatch, as predicted by the ELU model (Rönnberg et al., this issue) and tapped by the reading span test, is a stable phenomenon across these two Scandinavian languages.
Affiliation(s)
- Mary Rudner
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
|
50
|
Rudner M, Rönnberg J. Explicit processing demands reveal language modality-specific organization of working memory. J Deaf Stud Deaf Educ 2008; 13:466-484. [PMID: 18353759 PMCID: PMC2533441 DOI: 10.1093/deafed/enn005] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/09/2007] [Revised: 01/21/2008] [Accepted: 02/07/2008] [Indexed: 05/26/2023]
Abstract
The working memory model for Ease of Language Understanding (ELU) predicts that processing differences between language modalities emerge when cognitive demands are explicit. This prediction was tested in three working memory experiments with participants who were Deaf Signers (DS), Hearing Signers (HS), or Hearing Nonsigners (HN). Easily nameable pictures were used as stimuli to avoid confounds relating to sensory modality. Performance was largely similar for DS, HS, and HN, suggesting that previously identified intermodal differences may be due to differences in retention of sensory information. When explicit processing demands were high, differences emerged between DS and HN, suggesting that although working memory storage in both groups is sensitive to temporal organization, retrieval is not sensitive to temporal organization in DS. A general effect of semantic similarity was also found. These findings are discussed in relation to the ELU model.
Affiliation(s)
- Mary Rudner
- Department of Behavioural Sciences and Learning, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden.
|