1
Bezsudnova Y, Quinn AJ, Wynn SC, Jensen O. Spatiotemporal Properties of Common Semantic Categories for Words and Pictures. J Cogn Neurosci 2024; 36:1760-1769. PMID: 38739567. DOI: 10.1162/jocn_a_02182.
Abstract
The timing of semantic processing during object recognition in the brain is a topic of ongoing discussion. One way of addressing this question is by applying multivariate pattern analysis to human electrophysiological responses to object images of different semantic categories. However, although multivariate pattern analysis can reveal whether neuronal activity patterns are distinct for different stimulus categories, concerns remain about whether low-level visual features also contribute to the classification results. To circumvent this issue, we applied a cross-decoding approach to magnetoencephalography data recorded for stimuli from two different modalities: images and their corresponding written words. We employed items from three categories and presented them in a randomized order. We show that a classifier trained on words classifies pictures between 150 and 430 msec after stimulus onset, and a classifier trained on pictures classifies words between 225 and 430 msec. The topographical maps of cross-modal activation, identified using a searchlight approach in both directions, were left-lateralized, confirming the involvement of linguistic representations. These results point to semantic activation of pictorial stimuli occurring at ∼150 msec, whereas for words the semantic activation occurs at ∼230 msec.
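The cross-decoding logic described in this abstract (train a classifier on one stimulus modality, test it on the other) can be sketched in a few lines. This is a hedged toy illustration, not the authors' pipeline: `nearest_mean_crossdecode` is a hypothetical name, the data are synthetic two-category "sensor" patterns, and a simple nearest-class-mean classifier stands in for whatever regularized classifier the study actually used at each time point.

```python
import numpy as np

def nearest_mean_crossdecode(train_X, train_y, test_X, test_y):
    """Train a nearest-class-mean classifier on one modality and
    evaluate it on trials from the other modality; return accuracy."""
    classes = np.unique(train_y)
    # Class centroids estimated from the training modality (e.g. words).
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    # Assign each test trial (e.g. pictures) to the nearest centroid.
    dists = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=2)
    preds = classes[np.argmin(dists, axis=1)]
    return float((preds == test_y).mean())

# Toy data: two semantic categories, 4-"sensor" patterns per modality,
# with the same category structure in both modalities.
rng = np.random.default_rng(0)
words_X = np.vstack([rng.normal(0, 0.1, (20, 4)) + [1, 0, 0, 0],
                     rng.normal(0, 0.1, (20, 4)) + [0, 1, 0, 0]])
words_y = np.array([0] * 20 + [1] * 20)
pics_X = np.vstack([rng.normal(0, 0.1, (20, 4)) + [1, 0, 0, 0],
                    rng.normal(0, 0.1, (20, 4)) + [0, 1, 0, 0]])
pics_y = words_y.copy()

# Train on words, test on pictures (the study runs both directions).
acc = nearest_mean_crossdecode(words_X, words_y, pics_X, pics_y)
print(acc)  # well-separated toy categories -> 1.0
```

In the actual study this procedure would be repeated at every time point, which is what yields the 150-430 msec and 225-430 msec windows reported above.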
Affiliation(s)
- Syanah C Wynn
- University of Birmingham
- Gutenberg University Medical Center Mainz
2
Molinaro N, Nara S, Carreiras M. Early language dissociation in bilingual minds: magnetoencephalography evidence through a machine learning approach. Cereb Cortex 2024; 34:bhae053. PMID: 38367613. DOI: 10.1093/cercor/bhae053.
Abstract
Does neural activity reveal how balanced bilinguals choose languages? Despite using diverse neuroimaging techniques, prior studies have not provided a definitive answer to this question. Nonetheless, studies involving direct brain stimulation in bilinguals have identified distinct brain regions associated with language production in different languages. In this magnetoencephalography study with 45 proficient Spanish-Basque bilinguals, we investigated language selection during covert picture naming and word reading tasks. Participants were prompted to name line drawings or read words aloud only when the color of the stimulus changed to green (10% of trials). The task was performed either in Spanish or in Basque. Despite similar sensor-level evoked activity for both languages in both tasks, decoding analyses revealed language-specific classification ~100 ms post-stimulus onset. During picture naming, right occipito-temporal sensors predominantly contributed to language decoding, while left occipito-temporal sensors were crucial for decoding during word reading. Cross-task decoding analysis unveiled robust generalization effects from picture naming to word reading. Our methodology involved a fine-grained examination of neural responses using magnetoencephalography, offering insights into the dynamics of language processing in bilinguals. This study refines our understanding of the neural underpinnings of language selection and bridges the gap between non-invasive and invasive experimental evidence in bilingual language production.
Affiliation(s)
- Nicola Molinaro
- Basque Center on Cognition, Brain and Language, Paseo Mikeletegi, 69, 20009, Donostia/San Sebastian, Spain
- Ikerbasque, Basque Foundation for Science, 48009, Bilbao, Spain
- Sanjeev Nara
- Basque Center on Cognition, Brain and Language, Paseo Mikeletegi, 69, 20009, Donostia/San Sebastian, Spain
- Mathematical Institute, Department of Mathematics and Computer Science, Physics, Geography, Justus-Liebig-Universität Gießen (University of Giessen), 35392, Gießen, Germany
- Manuel Carreiras
- Basque Center on Cognition, Brain and Language, Paseo Mikeletegi, 69, 20009, Donostia/San Sebastian, Spain
- Ikerbasque, Basque Foundation for Science, 48009, Bilbao, Spain
- University of the Basque Country (UPV/EHU), 48940, Leioa, Spain
3
Ghazaryan G, van Vliet M, Lammi L, Lindh-Knuutila T, Kivisaari S, Hultén A, Salmelin R. Cortical time-course of evidence accumulation during semantic processing. Commun Biol 2023; 6:1242. PMID: 38066098. PMCID: PMC10709650. DOI: 10.1038/s42003-023-05611-6.
Abstract
Our understanding of the surrounding world and communication with other people are tied to mental representations of concepts. In order for the brain to recognize an object, it must determine which concept to access based on information available from sensory inputs. In this study, we combine magnetoencephalography and machine learning to investigate how concepts are represented and accessed in the brain over time. Using brain responses from a silent picture naming task, we track the dynamics of visual and semantic information processing, and show that the brain gradually accumulates information on different levels before eventually reaching a plateau. The timing of this plateau point varies across individuals and feature models, indicating notable temporal variation in visual object recognition and semantic processing.
Affiliation(s)
- Gayane Ghazaryan
- Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076, Aalto, Finland
- Marijn van Vliet
- Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076, Aalto, Finland
- Lotta Lammi
- Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076, Aalto, Finland
- Tiina Lindh-Knuutila
- Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076, Aalto, Finland
- Sasa Kivisaari
- Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076, Aalto, Finland
- Annika Hultén
- Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076, Aalto, Finland
- Aalto NeuroImaging, Aalto University, P.O. Box 12200, FI-00076, Aalto, Finland
- Riitta Salmelin
- Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076, Aalto, Finland
- Aalto NeuroImaging, Aalto University, P.O. Box 12200, FI-00076, Aalto, Finland
4
Dirani J, Pylkkänen L. The time course of cross-modal representations of conceptual categories. Neuroimage 2023; 277:120254. PMID: 37391047. DOI: 10.1016/j.neuroimage.2023.120254.
Abstract
To what extent does language production activate cross-modal conceptual representations? In picture naming, we view a specific exemplar of a concept and then name it with a label, like "dog". In overt reading, the written word does not depict a specific exemplar. Here we used a decoding approach with magnetoencephalography (MEG) to ask whether picture naming and overt word reading involve shared representations of superordinate categories (e.g., animal). This addresses a fundamental question about the modality-generality of conceptual representations and their temporal evolution. Crucially, we do this using a language production task that requires no explicit categorization judgment and that controls for word form properties across semantic categories. We trained our models to classify the animal/tool distinction using MEG data from one modality at each time point and then tested the generalization of those models on the other modality. We obtained evidence for the automatic activation of cross-modal semantic category representations for both pictures and words, later than their respective modality-specific representations. Cross-modal representations were activated at 150 ms and lasted until around 450 ms. The time course of lexical activation was also assessed, revealing that semantic category is represented before lexical access for pictures but after lexical access for words. Notably, this earlier activation of semantic category for pictures occurred simultaneously with visual representations. We thus show evidence for the spontaneous activation of cross-modal semantic categories in picture naming and word reading. These results serve to anchor a more comprehensive spatio-temporal delineation of the semantic feature space during production planning.
Affiliation(s)
- Julien Dirani
- Department of Psychology, New York University, New York, NY, 10003, USA
- Liina Pylkkänen
- Department of Psychology, New York University, New York, NY, 10003, USA
- Department of Linguistics, New York University, New York, NY, 10003, USA
- NYUAD Research Institute, New York University Abu Dhabi, Abu Dhabi, 129188, UAE
5
Leonardelli E, Fairhall SL. Similarity-based fMRI-MEG fusion reveals hierarchical organisation within the brain's semantic system. Neuroimage 2022; 259:119405. PMID: 35752412. DOI: 10.1016/j.neuroimage.2022.119405.
Abstract
Our ability to understand and interact with our environment relies upon conceptual knowledge of the meaning of objects. This process is supported by a distributed network of frontal, parietal, and temporal brain regions. Insight into the differential roles of various elements of this system can be inferred from the timing of activation, and here we use similarity-based fMRI-MEG fusion to understand when the representational spaces in different elements of the semantic system converge with representational spaces in the evolving MEG signal. Participants performed a semantic-typicality judgment of written words drawn from nine different semantic categories in separate fMRI and MEG sessions. Results indicate an initial period of congruence between MEG and fMRI informational spaces dominated by the posterior inferior temporal gyrus and the ventral temporal cortex between 350 and 450 msec. This is followed by a second period of convergence between 450 and 795 msec, where MEG and fMRI representational spaces converge in the left angular gyrus and precuneus in addition to the ventral temporal cortex. Results are consistent with the multistage recruitment of the semantic system, initially involving automatic aspects of the representational system and later extending to broader elements of the semantic system more strongly associated with internalised cognition.
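The similarity-based fusion described in this abstract compares the representational geometry of MEG time points with that of fMRI regions. A minimal sketch, under stated assumptions: `rdm` and `fusion_score` are hypothetical names, 1 - Pearson correlation stands in as the dissimilarity measure, and Pearson correlation is used to compare RDMs, whereas published fusion work often uses Spearman rank correlation.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between condition-wise activity patterns (conditions x features)."""
    return 1.0 - np.corrcoef(patterns)

def fusion_score(meg_patterns, fmri_patterns):
    """Correlate the unique off-diagonal entries of the two RDMs.
    A high score means MEG (at some time point) and fMRI (in some
    region) carry a similar representational geometry."""
    iu = np.triu_indices(len(meg_patterns), k=1)
    m, f = rdm(meg_patterns)[iu], rdm(fmri_patterns)[iu]
    return float(np.corrcoef(m, f)[0, 1])

# Toy check: fMRI patterns that are a scaled and shifted copy of the
# MEG patterns share the same geometry, so the fusion score is ~1.
rng = np.random.default_rng(1)
meg = rng.normal(size=(9, 30))   # 9 semantic categories x 30 "sensors"
fmri = 2.0 * meg + 3.0           # 9 categories x "voxels"
print(round(fusion_score(meg, fmri), 3))
```

Repeating this score over MEG time points and fMRI regions is what produces the convergence windows (350-450 msec, 450-795 msec) reported above.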
6
Iamshchinina P, Karapetian A, Kaiser D, Cichy RM. Resolving the time course of visual and auditory object categorization. J Neurophysiol 2022; 127:1622-1628. PMID: 35583972. PMCID: PMC9190735. DOI: 10.1152/jn.00515.2021.
Abstract
Humans can effortlessly categorize objects, whether they are conveyed through visual images or spoken words. To resolve the neural correlates of object categorization, studies have so far primarily focused on the visual modality. It is therefore still unclear how the brain extracts categorical information from auditory signals. In the current study, we used EEG (n = 48) and time-resolved multivariate pattern analysis to investigate 1) the time course with which object category information emerges in the auditory modality and 2) how the representational transition from individual object identification to category representation compares between the auditory and visual modalities. Our results show that 1) auditory object category representations can be reliably extracted from EEG signals and 2) a similar representational transition occurs in the visual and auditory modalities, where an initial representation at the individual-object level is followed by a subsequent representation of the objects' category membership. Altogether, our results suggest an analogous hierarchy of information processing across sensory channels. However, there was no convergence toward conceptual modality-independent representations, thus providing no evidence for a shared supramodal code.

NEW & NOTEWORTHY: Object categorization operates on inputs from different sensory modalities, such as vision and audition. This process has mainly been studied in vision. Here, we explore auditory object categorization. We show that auditory object category representations can be reliably extracted from EEG signals and that, as in vision, auditory representations initially carry information about individual objects, followed by a subsequent representation of the objects' category membership.
Affiliation(s)
- Polina Iamshchinina
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Agnessa Karapetian
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Daniel Kaiser
- Mathematical Institute, Department of Mathematics and Computer Science, Physics, Geography, Justus-Liebig-Universität Gießen, Gießen, Germany
- Center for Mind, Brain and Behavior (CMBB), Philipps-Universität Marburg and Justus-Liebig-Universität Gießen, Marburg, Germany
- Radoslaw M Cichy
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
7
Bruera A, Poesio M. Exploring the Representations of Individual Entities in the Brain Combining EEG and Distributional Semantics. Front Artif Intell 2022; 5:796793. PMID: 35280237. PMCID: PMC8905499. DOI: 10.3389/frai.2022.796793.
Abstract
Semantic knowledge about individual entities (i.e., the referents of proper names such as Jacinda Ardern) is fine-grained, episodic, and strongly social in nature, when compared with knowledge about generic entities (the referents of common nouns such as politician). We investigate the semantic representations of individual entities in the brain, and for the first time we approach this question using both neural data, in the form of newly acquired EEG data, and distributional models of word meaning, employing the latter to isolate semantic information regarding individual entities in the brain. We ran two sets of analyses. The first set of analyses is concerned only with the evoked responses to individual entities and their categories. We find that it is possible to classify them according to both their coarse and their fine-grained category at appropriate timepoints, but that it is hard to map representational information learned from individuals to their categories. In the second set of analyses, we learn to decode distributional word vectors from the evoked responses. These results indicate that such a mapping can be learnt successfully: this counts not only as a demonstration that representations of individuals can be discriminated in EEG responses, but also as a first brain-based validation of distributional semantic models as representations of individual entities. Finally, in-depth analyses of decoder performance provide additional evidence that the referents of proper names and categories have little in common when it comes to their representation in the brain.
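Learning a decoder from evoked responses to distributional word vectors, as in the second set of analyses, amounts to a multivariate regression problem. A hedged sketch under assumed shapes (trials x sensors for EEG features, trials x dimensions for word vectors): `ridge_map` is a hypothetical name, the closed-form ridge solution and nearest-neighbour matching below are generic stand-ins, not the authors' actual decoder.

```python
import numpy as np

def ridge_map(X, Y, lam=1e-3):
    """Closed-form ridge regression: find W minimizing
    ||X W - Y||^2 + lam ||W||^2, mapping EEG features X (trials x
    sensors) onto word vectors Y (trials x embedding dims)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# Toy data: "EEG" responses generated as a noiseless linear image of
# hypothetical 5-dimensional word vectors, split into train/test.
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 8))       # 60 trials x 8 "sensors"
W_true = rng.normal(size=(8, 5))   # unknown sensor-to-vector map
Y = X @ W_true                     # trials x 5 word-vector dims
W_hat = ridge_map(X[:40], Y[:40], lam=1e-8)

# Decoding: the predicted vector for each held-out trial should sit
# closest to the true vector of that same trial.
pred = X[40:] @ W_hat
dists = np.linalg.norm(pred[:, None, :] - Y[40:][None, :, :], axis=2)
acc = float((np.argmin(dists, axis=1) == np.arange(20)).mean())
print(acc)  # noiseless linear toy data -> 1.0
```

With real EEG the mapping is noisy, so evaluation typically uses cross-validated pairwise or rank-based matching rather than exact nearest-neighbour accuracy.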
Affiliation(s)
- Andrea Bruera
- Cognitive Science Research Group, School of Electronic Engineering and Computer Science, Queen Mary University of London, London, United Kingdom
8
Neudorf J, Gould L, Mickleborough MJS, Ekstrand C, Borowsky R. Unique, Shared, and Dominant Brain Activation in Visual Word Form Area and Lateral Occipital Complex during Reading and Picture Naming. Neuroscience 2022; 481:178-196. PMID: 34800577. DOI: 10.1016/j.neuroscience.2021.11.022.
Abstract
Identifying printed words and pictures concurrently is ubiquitous in daily tasks, so it is important to consider the extent to which reading words and naming pictures share a cognitive-neurophysiological functional architecture. Two functional magnetic resonance imaging (fMRI) experiments examined whether reading along the left ventral occipitotemporal region (vOT; often referred to as the visual word form area, VWFA) produces activation that is overlapping with that for referent pictures (i.e., both conditions significant and shared, or one significantly more dominant) or unique (i.e., one condition significant, the other not), and whether picture naming along the right lateral occipital complex (LOC) produces overlapping or unique activation relative to referent words. Experiment 1 used familiar regular and exception words (to force lexical reading) and their corresponding pictures in separate naming blocks, and showed dominant activation for pictures in the LOC, and shared activation in the VWFA for exception words and their corresponding pictures (regular words did not elicit significant VWFA activation). Experiment 2 controlled for visual complexity by superimposing the words and pictures and instructing participants to name either the word or the picture, and showed primarily shared activation in the VWFA and LOC regions for both word reading and picture naming, with some dominant activation for pictures in the LOC. Overall, these results highlight the importance of including exception words to force lexical reading when comparing with picture naming, and the significant shared activation in the VWFA and LOC challenges specialized models of reading or picture naming.
Affiliation(s)
- Josh Neudorf
- Cognitive Neuroscience Lab, Department of Psychology, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Layla Gould
- Division of Neurosurgery, College of Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Marla J S Mickleborough
- Cognitive Neuroscience Lab, Department of Psychology, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Chelsea Ekstrand
- Cognitive Neuroscience Lab, Department of Psychology, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Ron Borowsky
- Cognitive Neuroscience Lab, Department of Psychology, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
- Division of Neurosurgery, College of Medicine, University of Saskatchewan, Saskatoon, Saskatchewan, Canada
9
Diab MS, Elhosseini MA, El-Sayed MS, Ali HA. Brain Strategy Algorithm for Multiple Object Tracking Based on Merging Semantic Attributes and Appearance Features. Sensors 2021; 21:7604. PMID: 34833680. PMCID: PMC8625767. DOI: 10.3390/s21227604.
Abstract
The human brain can effortlessly perform visual processing through the visual system, which helps it solve multi-object tracking (MOT) problems. However, few algorithms simulate human strategies for solving MOT. Devising a method that simulates human visual behaviour is therefore a promising way to improve MOT results, especially under occlusion. Eight brain strategies were studied from a cognitive perspective and imitated to build a novel algorithm. Two of these strategies, rescue saccades and stimulus attributes, gave our algorithm novel and outstanding results. First, rescue saccades were imitated by detecting the occlusion state in each frame, the critical situation toward which the human brain saccades. Then, stimulus attributes were mimicked by using semantic attributes to re-identify persons in these occlusion states. Our algorithm performs favourably on the MOT17 dataset compared with state-of-the-art trackers. In addition, we created a new dataset of 40,000 images, 190,000 annotations, and 4 classes to train the detection model to detect occlusion and semantic attributes. The experimental results demonstrate that the Scaled-YOLOv4 detection model trained on our new dataset achieves outstanding performance, reaching 0.89 mAP@0.5.
Affiliation(s)
- Mai S. Diab
- Faculty of Computer & Artificial Intelligence, Benha University, Benha 13511, Egypt
- Intoolab Ltd., London WC2H 9JQ, UK
- Mostafa A. Elhosseini
- Computers Engineering and Control System, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
- College of Computer Science and Engineering in Yanbu, Taibah University, Madinah 46421, Saudi Arabia
- Mohamed S. El-Sayed
- Faculty of Computer & Artificial Intelligence, Benha University, Benha 13511, Egypt
- Hesham A. Ali
- Computers Engineering and Control System, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
- Faculty of Artificial Intelligence, Delta University for Science and Technology, Mansoura 35511, Egypt
10
Interference Resolution in Nonfluent Variant Primary Progressive Aphasia: Evidence From a Picture-Word Interference Task. Cogn Behav Neurol 2021; 34:11-25. PMID: 33652466. DOI: 10.1097/wnn.0000000000000255.
Abstract
BACKGROUND: Picture-word interference tasks have been used to investigate (a) the time course of lexical access in individuals with primary progressive aphasia (PPA) and (b) how these individuals resolve competition during lexical selection.
OBJECTIVE: To investigate the time course with which Greek-speaking individuals with PPA produce grammatical gender-marked determiner phrases, by examining their picture-naming latencies in the context of distractor words.
METHOD: Eight individuals with nonfluent variant PPA (nfv-PPA; mean age = 62.8 years) and eight cognitively intact controls (mean age = 61.1 years) participated in our study. In a picture-word interference task, participants named depicted objects by producing determiner + noun sequences. Interference was generated by manipulating the grammatical gender of the depicted objects and distractor words. Two stimulus onset asynchronies (SOAs) were used: +200 ms and +400 ms.
RESULTS: The individuals with nfv-PPA exhibited longer picture-naming latencies than the controls (P = 0.003). The controls exhibited interference from incongruent distractors at both SOAs (P < 0.001); the individuals with PPA exhibited interference from incongruent distractors only at the +400-ms SOA (P = 0.002). The gender-congruency effect was stronger for the individuals with PPA than for the controls at the +400-ms SOA (P = 0.05); the opposite pattern was observed at the +200-ms SOA (P = 0.024).
CONCLUSION: Gender interference resolution was abnormal in the individuals with nfv-PPA. The results point to deficits in lexicosyntactic networks that compromised the time course of picture-naming production.
11
Clarke A. Dynamic activity patterns in the anterior temporal lobe represents object semantics. Cogn Neurosci 2020; 11:111-121. PMID: 32249714. PMCID: PMC7446031. DOI: 10.1080/17588928.2020.1742678.
Abstract
The anterior temporal lobe (ATL) is considered a crucial area for the representation of transmodal concepts. Recent evidence suggests that specific regions within the ATL support the representation of individual object concepts, as shown by studies combining multivariate analysis methods and explicit measures of semantic knowledge. The present research furthers our understanding by probing conceptual representations at a spatially and temporally resolved neural scale. Representational similarity analysis was applied to human intracranial recordings from anatomically defined lateral-to-medial ATL sub-regions. Neural similarity patterns were tested against semantic similarity measures, where semantic similarity was defined by a hybrid corpus-based and feature-based approach. Analyses show that the perirhinal cortex, in the medial ATL, significantly related to the semantic measures around 200 to 400 ms, with effects greater than in more lateral ATL regions. Further, semantic effects were present in low-frequency (theta and alpha) oscillatory phase signals. These results provide converging support for the claim that medial regions of the ATL represent basic-level visual object concepts within the first 400 ms, and they bridge prior fMRI and MEG work by offering detailed evidence for the presence of conceptual representations within the ATL.
Affiliation(s)
- Alex Clarke
- Department of Psychology, University of Cambridge, Cambridge, UK
12
Bruffaerts R, Tyler LK, Shafto M, Tsvetanov KA, Clarke A. Perceptual and conceptual processing of visual objects across the adult lifespan. Sci Rep 2019; 9:13771. PMID: 31551468. PMCID: PMC6760174. DOI: 10.1038/s41598-019-50254-5.
Abstract
Making sense of the external world is vital for multiple domains of cognition, so it is crucial that object recognition is maintained across the lifespan. We investigated age differences in perceptual and conceptual processing of visual objects in a population-derived sample of 85 healthy adults (24-87 years old) by relating measures of object processing to cognition across the lifespan. Magnetoencephalography (MEG) was recorded during a picture naming task to provide a direct measure of neural activity that is not confounded by age-related vascular changes. Multiple linear regression was used to estimate neural responsivity for each individual, namely the capacity to represent visual or semantic information relating to the pictures. We find that the capacity to represent semantic information is linked to higher naming accuracy, a measure of task-specific performance. In mature adults, the capacity to represent semantic information also correlated with higher levels of fluid intelligence, reflecting domain-general performance. In contrast, the latency of visual processing did not relate to measures of cognition. These results indicate that neural responsivity measures relate to naming accuracy and fluid intelligence. We propose that maintaining neural responsivity in older age confers benefits in task-related and domain-general cognitive processes, supporting the brain-maintenance view of healthy cognitive ageing.
Affiliation(s)
- Rose Bruffaerts
- Department of Psychology, University of Cambridge, Cambridge, CB2 3EB, UK
- Laboratory for Cognitive Neurology, Department of Neurosciences, University of Leuven, 3000, Leuven, Belgium
- Neurology Department, University Hospitals Leuven, 3000, Leuven, Belgium
- Lorraine K Tyler
- Department of Psychology, University of Cambridge, Cambridge, CB2 3EB, UK
- Cambridge Centre for Ageing and Neuroscience (Cam-CAN), University of Cambridge and MRC Cognition and Brain Sciences Unit, Cambridge, CB2 7EF, UK
- Meredith Shafto
- Department of Psychology, University of Cambridge, Cambridge, CB2 3EB, UK
- Kamen A Tsvetanov
- Department of Psychology, University of Cambridge, Cambridge, CB2 3EB, UK
- Cambridge Centre for Ageing and Neuroscience (Cam-CAN), University of Cambridge and MRC Cognition and Brain Sciences Unit, Cambridge, CB2 7EF, UK
- Alex Clarke
- Department of Psychology, University of Cambridge, Cambridge, CB2 3EB, UK