1. Xiong S, Tan Y, Wang G, Yan P, Xiang X. Learning feature relationships in CNN model via relational embedding convolution layer. Neural Netw 2024; 179:106510. PMID: 39024707. DOI: 10.1016/j.neunet.2024.106510.
Abstract
Establishing the relationships among hierarchical visual attributes of objects in the visual world is crucial for human cognition. The classic convolutional neural network (CNN) can successfully extract hierarchical features but ignores the relationships among features, resulting in shortcomings relative to humans in areas such as interpretability and domain generalization. Recently, algorithms have introduced feature relationships through external prior knowledge and special auxiliary modules, which have been proven to bring improvements in many computer vision tasks. However, prior knowledge is often difficult to obtain, and auxiliary modules consume additional computing and storage resources, which limits the flexibility and practicality of such algorithms. In this paper, we aim to drive the CNN model to learn the relationships among hierarchical deep features without prior knowledge or increased resource consumption, while enhancing fundamental aspects of performance. First, the task of learning the relationships among hierarchical features in a CNN is defined, and three key problems related to this task are identified: the quantitative metric of connection intensity, the threshold for useless connections, and the updating strategy for the relation graph. Second, a Relational Embedding Convolution (RE-Conv) layer is proposed to represent feature relationships within a convolution layer, followed by a scheme called the use & disuse strategy that addresses these three problems. Finally, the improvements brought by the proposed feature relation learning scheme are demonstrated through numerous experiments covering interpretability, domain generalization, noise robustness, and inference efficiency. In particular, the proposed scheme outperforms many state-of-the-art methods in the domain generalization community and can be seamlessly integrated with existing methods for further improvement. Meanwhile, it maintains precision comparable to the original CNN model while reducing floating-point operations (FLOPs) by approximately 50%.
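The three problems above map naturally onto a gated convolution, which the following minimal PyTorch sketch illustrates: a learnable relation graph over channel pairs gates the convolution weights, and connections whose intensity falls below a threshold are dropped. The class name, the sigmoid intensity metric, and the hard threshold are illustrative assumptions, not the published RE-Conv design.

```python
# Hedged sketch of the idea behind a "relational embedding" convolution:
# a learnable graph over (output, input) channel pairs gates the convolution
# weights, and connections weaker than a threshold are pruned ("use & disuse").
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationalConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, threshold=0.05):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        # Relation graph: one connection-intensity logit per channel pair.
        self.relation = nn.Parameter(torch.zeros(out_ch, in_ch))
        self.threshold = threshold  # connections weaker than this are dropped

    def forward(self, x):
        intensity = torch.sigmoid(self.relation)         # intensity metric in (0, 1)
        mask = (intensity >= self.threshold).float()     # "disuse" weak connections
        gate = (intensity * mask)[:, :, None, None]      # broadcast over kernel dims
        return F.conv2d(x, self.conv.weight * gate, self.conv.bias,
                        padding=self.conv.padding)

layer = RelationalConv2d(in_ch=16, out_ch=32)
print(layer(torch.randn(2, 16, 8, 8)).shape)             # torch.Size([2, 32, 8, 8])
```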
Affiliation(s)
- Shengzhou Xiong
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, 430074, China; National Key Laboratory of Multispectral Information Intelligent Processing Technology, Wuhan, 430074, China.
- Yihua Tan
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, 430074, China; National Key Laboratory of Multispectral Information Intelligent Processing Technology, Wuhan, 430074, China.
- Guoyou Wang
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, 430074, China; National Key Laboratory of Multispectral Information Intelligent Processing Technology, Wuhan, 430074, China.
- Pei Yan
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, 430074, China; National Key Laboratory of Multispectral Information Intelligent Processing Technology, Wuhan, 430074, China.
- Xuanyu Xiang
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, 430074, China; National Key Laboratory of Multispectral Information Intelligent Processing Technology, Wuhan, 430074, China.
2. Kikumoto A, Shibata K, Nishio T, Badre D. Practice Reshapes the Geometry and Dynamics of Task-tailored Representations. bioRxiv [Preprint] 2024:2024.09.12.612718. PMID: 39314386. PMCID: PMC11419051. DOI: 10.1101/2024.09.12.612718.
Abstract
Extensive practice makes task performance more efficient and precise, leading to automaticity. However, theories of automaticity differ on which levels of task representations (e.g., low-level features, stimulus-response mappings, or high-level conjunctive memories of individual events) change with practice, despite predicting the same pattern of improvement (e.g., the power law of practice). To resolve this controversy, we built on recent theoretical advances in understanding computations through neural population dynamics. Specifically, we hypothesized that practice optimizes the neural representational geometry of task representations to minimally separate the highest-level task contingencies needed for successful performance. This involves efficiently reaching conjunctive neural states that integrate task-critical features nonlinearly while abstracting over non-critical dimensions. To test this hypothesis, human participants (n = 40) engaged in extensive practice of a simple, context-dependent action selection task over 3 days while EEG was recorded. During initial rapid improvement in task performance, representations of the highest-level, context-specific conjunctions of task features were enhanced as a function of the number of successful episodes. Crucially, only enhancement of these conjunctive representations, and not lower-order representations, predicted the power-law improvement in performance. Simultaneously, over sessions, these conjunctive neural states became more stable earlier in time and more aligned, abstracting over redundant task features, which correlated with offline performance gains in the form of reduced switch costs. Thus, practice optimizes the dynamic representational geometry as task-tailored neural states that minimally tessellate the task space, taming their high dimensionality.
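The power law of practice referenced above can be illustrated with a short fit: response time shrinks as RT(N) = a + b·N^(−c) over practice trials N. The data and parameters below are synthetic stand-ins, not the study's EEG or behavioral results.

```python
# Minimal sketch of the "power law of practice": RT improves as
# RT(N) = a + b * N**(-c). Synthetic data; a, b, c are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    return a + b * n ** (-c)

trials = np.arange(1, 501)
rt = power_law(trials, a=0.4, b=1.2, c=0.6) + np.random.normal(0, 0.05, trials.size)
params, _ = curve_fit(power_law, trials, rt, p0=[0.5, 1.0, 0.5])
print("fitted a, b, c:", np.round(params, 3))
```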
Affiliation(s)
- Atsushi Kikumoto
- Department of Cognitive and Psychological Sciences, Brown University, Providence, RI, USA
- RIKEN Center for Brain Science, Wako, Saitama, Japan
- David Badre
- Department of Cognitive and Psychological Sciences, Brown University, Providence, RI, USA
- Carney Institute for Brain Science, Brown University, Providence, RI, USA
3. Driscoll LN, Shenoy K, Sussillo D. Flexible multitask computation in recurrent networks utilizes shared dynamical motifs. Nat Neurosci 2024; 27:1349-1363. PMID: 38982201. PMCID: PMC11239504. DOI: 10.1038/s41593-024-01668-6.
Abstract
Flexible computation is a hallmark of intelligent behavior. However, little is known about how neural networks contextually reconfigure for different computations. In the present work, we identified an algorithmic neural substrate for modular computation through the study of multitasking artificial recurrent neural networks. Dynamical systems analyses revealed learned computational strategies mirroring the modular subtask structure of the training task set. Dynamical motifs, which are recurring patterns of neural activity that implement specific computations through dynamics, such as attractors, decision boundaries and rotations, were reused across tasks. For example, tasks requiring memory of a continuous circular variable repurposed the same ring attractor. We showed that dynamical motifs were implemented by clusters of units when the unit activation function was restricted to be positive. Cluster lesions caused modular performance deficits. Motifs were reconfigured for fast transfer learning after an initial phase of learning. This work establishes dynamical motifs as a fundamental unit of compositional computation, intermediate between neuron and network. As whole-brain studies simultaneously record activity from multiple specialized systems, the dynamical motif framework will guide questions about specialization and generalization.
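One analysis mentioned above, finding clusters of positively constrained units and lesioning them, can be sketched as follows. The toy activity matrix, the per-task variance profiles, and the cluster count are illustrative assumptions, not the authors' pipeline.

```python
# Hedged sketch: with nonnegative (ReLU-like) units, task specialization can
# be probed by clustering units on their per-task activity variance, then
# "lesioning" a cluster by zeroing its units.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_units, n_tasks, n_steps = 128, 8, 200
# Stand-in for hidden states of a trained multitask RNN: (tasks, time, units).
states = np.abs(rng.normal(size=(n_tasks, n_steps, n_units)))

task_var = states.var(axis=1)                   # (tasks, units)
profile = (task_var / task_var.sum(axis=0)).T   # each unit's normalized profile

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(profile)
lesion_mask = clusters != 0                     # drop cluster 0, keep the rest
print("units kept after lesioning cluster 0:", lesion_mask.sum())
```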
Affiliation(s)
- Laura N Driscoll
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA.
- Krishna Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Department of Neurosurgery, Stanford University, Stanford, CA, USA
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
- Bio-X Institute, Stanford University, Stanford, CA, USA
- Howard Hughes Medical Institute at Stanford University, Stanford, CA, USA
- David Sussillo
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, CA, USA
4. Tafazoli S, Bouchacourt FM, Ardalan A, Markov NT, Uchimura M, Mattar MG, Daw ND, Buschman TJ. Building compositional tasks with shared neural subspaces. bioRxiv [Preprint] 2024:2024.01.31.578263. PMID: 38352540. PMCID: PMC10862921. DOI: 10.1101/2024.01.31.578263.
Abstract
Cognition is remarkably flexible; we are able to rapidly learn and perform many different tasks [1]. Theoretical modeling has shown that artificial neural networks trained to perform multiple tasks will re-use representations [2] and computational components [3] across tasks. By composing tasks from these sub-components, an agent can flexibly switch between tasks and rapidly learn new tasks [4]. Yet, whether such compositionality is found in the brain is unknown. Here, we show that the same subspaces of neural activity represent task-relevant information across multiple tasks, with each task compositionally combining these subspaces in a task-specific manner. We trained monkeys to switch between three compositionally related tasks. Neural recordings found that task-relevant information about stimulus features and motor actions was represented in subspaces of neural activity that were shared across tasks. When monkeys performed a task, neural representations in the relevant shared sensory subspace were transformed to the relevant shared motor subspace. Subspaces were flexibly engaged as monkeys discovered the task in effect; their internal belief about the current task predicted the strength of representations in task-relevant subspaces. In sum, our findings suggest that the brain can flexibly perform multiple tasks by compositionally combining task-relevant neural representations across tasks.
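The subspace-sharing claim can be sketched with a simple cross-task projection test: fit a low-dimensional subspace to activity from one task and ask how much variance it captures in another. Everything below (the synthetic data, the three components, the variance ratio as an alignment index) is an illustrative assumption, not the paper's analysis.

```python
# Hedged sketch of testing for a shared task subspace across two tasks.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
shared = rng.normal(size=(100, 3))               # 3D latent stimulus code
mixing = rng.normal(size=(3, 60))                # map latents to 60 neurons
task_a = shared @ mixing + 0.1 * rng.normal(size=(100, 60))
task_b = shared @ mixing + 0.1 * rng.normal(size=(100, 60))

pca = PCA(n_components=3).fit(task_a)            # subspace fit on task A only
proj = pca.transform(task_b)                     # project task B into it
captured = proj.var(axis=0).sum() / task_b.var(axis=0).sum()
print(f"task-B variance captured by task-A subspace: {captured:.2f}")
```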
Affiliation(s)
- Sina Tafazoli
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Adel Ardalan
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Nikola T. Markov
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Motoaki Uchimura
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Nathaniel D. Daw
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Department of Psychology, Princeton University, Princeton, NJ, USA
- Timothy J. Buschman
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Department of Psychology, Princeton University, Princeton, NJ, USA
5. Jahn CI, Markov NT, Morea B, Daw ND, Ebitz RB, Buschman TJ. Learning attentional templates for value-based decision-making. Cell 2024; 187:1476-1489.e21. PMID: 38401541. PMCID: PMC11574977. DOI: 10.1016/j.cell.2024.01.041.
Abstract
Attention filters sensory inputs to enhance task-relevant information. It is guided by an "attentional template" that represents the stimulus features that are currently relevant. To understand how the brain learns and uses templates, we trained monkeys to perform a visual search task that required them to repeatedly learn new attentional templates. Neural recordings found that templates were represented across the prefrontal and parietal cortex in a structured manner, such that perceptually neighboring templates had similar neural representations. When the task changed, a new attentional template was learned by incrementally shifting the template toward rewarded features. Finally, we found that attentional templates transformed stimulus features into a common value representation that allowed the same decision-making mechanisms to deploy attention, regardless of the identity of the template. Altogether, our results provide insight into the neural mechanisms by which the brain learns to control attention and how attention can be flexibly deployed across tasks.
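The incremental template shift described above amounts to a delta rule in feature space. The following sketch uses an assumed learning rate and a four-dimensional feature code purely for illustration; it is not the paper's fitted learning model.

```python
# Hedged sketch: after each trial, shift the attentional template toward the
# rewarded stimulus features with a simple delta rule.
import numpy as np

rng = np.random.default_rng(2)
template = rng.normal(size=4)                 # current template over 4 features
target = np.array([1.0, 0.0, -1.0, 0.5])      # features the task currently rewards
lr = 0.1                                      # assumed learning rate

for trial in range(200):
    stimulus = target + 0.3 * rng.normal(size=4)   # noisy rewarded stimulus
    template += lr * (stimulus - template)          # incremental shift toward reward
print("learned template:", np.round(template, 2))
```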
Affiliation(s)
- Caroline I Jahn
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA.
- Nikola T Markov
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA
- Britney Morea
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA
- Nathaniel D Daw
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA; Department of Psychology, Princeton University, Princeton, NJ 08540, USA
- R Becket Ebitz
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA; Department of Neurosciences, Université de Montréal, Montréal, QC H3C 3J7, Canada
- Timothy J Buschman
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08540, USA; Department of Psychology, Princeton University, Princeton, NJ 08540, USA
6. Gurnani H, Cayco Gajic NA. Signatures of task learning in neural representations. Curr Opin Neurobiol 2023; 83:102759. PMID: 37708653. DOI: 10.1016/j.conb.2023.102759.
Abstract
While neural plasticity has long been studied as the basis of learning, the growth of large-scale neural recording techniques provides a unique opportunity to study how learning-induced activity changes are coordinated across neurons within the same circuit. These distributed changes can be understood through an evolution of the geometry of neural manifolds and latent dynamics underlying new computations. In parallel, studies of multi-task and continual learning in artificial neural networks hint at a tradeoff between non-interference and compositionality as guiding principles to understand how neural circuits flexibly support multiple behaviors. In this review, we highlight recent findings from both biological and artificial circuits that together form a new framework for understanding task learning at the population level.
Affiliation(s)
- Harsha Gurnani
- Department of Biology, University of Washington, Seattle, WA, USA.
- N Alex Cayco Gajic
- Laboratoire de Neurosciences Cognitives, Ecole Normale Supérieure, Université PSL, Paris, France.
7. Shinn M. Phantom oscillations in principal component analysis. Proc Natl Acad Sci U S A 2023; 120:e2311420120. PMID: 37988465. PMCID: PMC10691246. DOI: 10.1073/pnas.2311420120.
Abstract
Principal component analysis (PCA) is a dimensionality reduction method that is known for being simple and easy to interpret. Principal components are often interpreted as low-dimensional patterns in high-dimensional space. However, this simple interpretation fails for time series, spatial maps, and other continuous data. In these cases, nonoscillatory data may have oscillatory principal components. Here, we show that two common properties of data cause oscillatory principal components: smoothness and shifts in time or space. These two properties apply to almost all neuroscience data. We show how the oscillations produced by PCA, which we call "phantom oscillations," impact data analysis. We also show that traditional cross-validation does not detect phantom oscillations, so we suggest procedures that do. Our findings are supported by a collection of mathematical proofs. Collectively, our work demonstrates that patterns that emerge from high-dimensional data analysis may not faithfully represent the underlying data.
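The abstract's central claim is easy to reproduce: PCA applied to smooth, time-shifted, nonoscillatory signals yields oscillation-like components. The sketch below uses shifted Gaussian bumps as an assumed example; the paper's demonstrations and proofs are more general.

```python
# Hedged sketch of "phantom oscillations": PCA of smooth, shifted bumps
# produces sinusoid-like components even though no row of the data oscillates.
import numpy as np

t = np.linspace(0, 1, 500)
shifts = np.linspace(0.2, 0.8, 200)          # 200 identical bumps, shifted in time
data = np.exp(-((t[None, :] - shifts[:, None]) ** 2) / (2 * 0.05 ** 2))

data -= data.mean(axis=0)                    # center, as PCA requires
_, _, vt = np.linalg.svd(data, full_matrices=False)
pcs = vt[:3]                                 # top 3 principal components

# Each successive PC crosses zero more often -- oscillation-like structure
# emerging purely from smoothness plus shifts.
for i, pc in enumerate(pcs, 1):
    crossings = np.sum(np.diff(np.sign(pc)) != 0)
    print(f"PC{i}: {crossings} zero crossings")
```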
Affiliation(s)
- Maxwell Shinn
- University College London (UCL) Queen Square Institute of Neurology, University College London, London WC1E 6BT, United Kingdom
8. Durstewitz D, Koppe G, Thurm MI. Reconstructing computational system dynamics from neural data with recurrent neural networks. Nat Rev Neurosci 2023; 24:693-710. PMID: 37794121. DOI: 10.1038/s41583-023-00740-7.
Abstract
Computational models in neuroscience usually take the form of systems of differential equations. The behaviour of such systems is the subject of dynamical systems theory. Dynamical systems theory provides a powerful mathematical toolbox for analysing neurobiological processes and has been a mainstay of computational neuroscience for decades. Recently, recurrent neural networks (RNNs) have become a popular machine learning tool for studying the non-linear dynamics of neural and behavioural processes by emulating an underlying system of differential equations. RNNs have been routinely trained on similar behavioural tasks to those used for animal subjects to generate hypotheses about the underlying computational mechanisms. By contrast, RNNs can also be trained on the measured physiological and behavioural data, thereby directly inheriting their temporal and geometrical properties. In this way they become a formal surrogate for the experimentally probed system that can be further analysed, perturbed and simulated. This powerful approach is called dynamical system reconstruction. In this Perspective, we focus on recent trends in artificial intelligence and machine learning in this exciting and rapidly expanding field, which may be less well known in neuroscience. We discuss formal prerequisites, different model architectures and training approaches for RNN-based dynamical system reconstructions, ways to evaluate and validate model performance, how to interpret trained models in a neuroscience context, and current challenges.
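Dynamical system reconstruction, as framed above, can be sketched by fitting an RNN one step ahead on a measured trajectory so the trained network becomes a surrogate of the data-generating system. The toy signal, architecture, and teacher-forced training loop below are illustrative assumptions, not the review's recommended recipe.

```python
# Hedged sketch: fit an RNN directly to a measured trajectory (here, a
# synthetic damped oscillation) via one-step-ahead prediction.
import torch
import torch.nn as nn

torch.manual_seed(0)
t = torch.linspace(0, 8 * torch.pi, 400)
x = (torch.exp(-0.05 * t) * torch.sin(t)).reshape(1, -1, 1)  # (batch, time, 1)

rnn = nn.RNN(input_size=1, hidden_size=32, batch_first=True)
readout = nn.Linear(32, 1)
opt = torch.optim.Adam([*rnn.parameters(), *readout.parameters()], lr=1e-3)

for step in range(2000):
    h, _ = rnn(x[:, :-1])                  # teacher forcing: feed observed x(t)
    pred = readout(h)                      # predict x(t+1)
    loss = ((pred - x[:, 1:]) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(f"final one-step prediction MSE: {loss.item():.5f}")
```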
Affiliation(s)
- Daniel Durstewitz
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany.
- Interdisciplinary Center for Scientific Computing, Heidelberg University, Heidelberg, Germany.
- Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany.
- Georgia Koppe
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Dept. of Psychiatry and Psychotherapy, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Hector Institute for Artificial Intelligence in Psychiatry, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
- Max Ingo Thurm
- Dept. of Theoretical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim, Germany
9. Soo WWM, Goudar V, Wang XJ. Training biologically plausible recurrent neural networks on cognitive tasks with long-term dependencies. bioRxiv [Preprint] 2023:2023.10.10.561588. PMID: 37873445. PMCID: PMC10592728. DOI: 10.1101/2023.10.10.561588.
Abstract
Training recurrent neural networks (RNNs) has become a go-to approach for generating and evaluating mechanistic neural hypotheses for cognition. The ease and efficiency of training RNNs with backpropagation through time and the availability of robustly supported deep learning libraries have made RNN modeling more approachable and accessible to neuroscience. Yet, a major technical hindrance remains. Cognitive processes such as working memory and decision making involve neural population dynamics over long periods of time within a behavioral trial and across trials. It is difficult to train RNNs to accomplish tasks where neural representations and dynamics have long temporal dependencies without gating mechanisms such as LSTMs or GRUs, which currently lack experimental support and prohibit direct comparison between RNNs and biological neural circuits. We tackled this problem using specialized skip connections through time to support the emergence of task-relevant dynamics, subsequently reinstituting biological plausibility by reverting to the original architecture. We show that this approach enables RNNs to successfully learn cognitive tasks that prove impractical, if not impossible, to learn using conventional methods. Over the numerous tasks considered here, we achieve fewer training steps and shorter wall-clock times, particularly on tasks that require learning long-term dependencies via temporal integration over long timescales or maintaining a memory of past events in hidden states. Our methods expand the range of experimental tasks that biologically plausible RNN models can learn, thereby supporting the development of theory for the emergent neural mechanisms of computations involving long-term dependencies.
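The core idea, skip connections through time that are later removed to recover a vanilla RNN, can be sketched as follows. The skip interval, the linear mixing of current and past hidden states, and the fixed mixing constant are illustrative assumptions; the paper's exact mechanism may differ.

```python
# Hedged sketch: during training, let the hidden state also see h[t - k]
# (a skip through time) to ease long-term dependencies; annealing alpha
# toward 0 reverts to a standard, biologically plausible RNN.
import torch
import torch.nn as nn

class SkipRNNCell(nn.Module):
    def __init__(self, n_in, n_hidden, skip=10):
        super().__init__()
        self.cell = nn.RNNCell(n_in, n_hidden)
        self.skip = skip
        self.alpha = 0.5   # skip strength; anneal toward 0 to remove the skip

    def forward(self, inputs):                 # inputs: (time, batch, n_in)
        h_hist, h = [], None
        for t, x_t in enumerate(inputs):
            h = self.cell(x_t, h)
            if t >= self.skip:                 # inject state from k steps back
                h = (1 - self.alpha) * h + self.alpha * h_hist[t - self.skip]
            h_hist.append(h)
        return torch.stack(h_hist)             # (time, batch, n_hidden)

rnn = SkipRNNCell(n_in=3, n_hidden=64)
print(rnn(torch.randn(100, 8, 3)).shape)       # torch.Size([100, 8, 64])
```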
10. Vanderlip CR, Asch PA, Reynolds JH, Glavis-Bloom C. Domain-Specific Cognitive Impairment Reflects Prefrontal Dysfunction in Aged Common Marmosets. eNeuro 2023; 10:ENEURO.0187-23.2023. PMID: 37553239. PMCID: PMC10444537. DOI: 10.1523/eneuro.0187-23.2023.
Abstract
Age-related cognitive impairment is not expressed uniformly across cognitive domains. Cognitive functions that rely on brain areas that undergo substantial neuroanatomical changes with age often show age-related impairment, whereas those that rely on brain areas with minimal age-related change typically do not. The common marmoset has grown in popularity as a model for neuroscience research, but robust cognitive phenotyping, particularly as a function of age and across multiple cognitive domains, is lacking. This presents a major limitation for the development and evaluation of the marmoset as a model of cognitive aging and leaves open the question of whether they exhibit age-related cognitive impairment that is restricted to some cognitive domains, as in humans. In this study, we characterized stimulus-reward association learning and cognitive flexibility in young adult to geriatric marmosets using a Simple Discrimination task and a Serial Reversal task, respectively. We found that aged marmosets show transient impairment in learning-to-learn but have a conserved ability to form stimulus-reward associations. Furthermore, aged marmosets have impaired cognitive flexibility driven by susceptibility to proactive interference. As these impairments are in domains critically dependent on the prefrontal cortex, our findings support prefrontal cortical dysfunction as a prominent feature of neurocognitive aging. This work positions the marmoset as a key model for understanding the neural underpinnings of cognitive aging.
Affiliation(s)
- Casey R Vanderlip
- Systems Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, California 92037
- Payton A Asch
- Systems Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, California 92037
- John H Reynolds
- Systems Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, California 92037
- Courtney Glavis-Bloom
- Systems Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, California 92037
11. Zhang Y, Wang J, Gorriz JM, Wang S. Deep Learning and Vision Transformer for Medical Image Analysis. J Imaging 2023; 9:147. PMID: 37504824. PMCID: PMC10381785. DOI: 10.3390/jimaging9070147.
Abstract
Artificial intelligence (AI) refers to the field of computer science theory and technology [...].
Affiliation(s)
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Jiaji Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
- Juan Manuel Gorriz
- Department of Signal Theory, Networking, and Communications, University of Granada, 52005 Granada, Spain
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK
12. Glavis-Bloom C, Vanderlip CR, Asch PA, Reynolds JH. Domain-specific cognitive impairment reflects prefrontal dysfunction in aged common marmosets. bioRxiv [Preprint] 2023:2023.05.22.541766. PMID: 37292989. PMCID: PMC10245905. DOI: 10.1101/2023.05.22.541766.
Abstract
Age-related cognitive impairment is not expressed uniformly across cognitive domains. Cognitive functions that rely on brain areas that undergo substantial neuroanatomical changes with age often show age-related impairment, while those that rely on brain areas with minimal age-related change typically do not. The common marmoset has grown in popularity as a model for neuroscience research, but robust cognitive phenotyping, particularly as a function of age and across multiple cognitive domains, is lacking. This presents a major limitation for the development and evaluation of the marmoset as a model of cognitive aging, and leaves open the question of whether they exhibit age-related cognitive impairment that is restricted to some cognitive domains, as in humans. In this study, we characterized stimulus-reward association learning and cognitive flexibility in young adult to geriatric marmosets using a Simple Discrimination and a Serial Reversal task, respectively. We found that aged marmosets show transient impairment in "learning-to-learn" but have a conserved ability to form stimulus-reward associations. Furthermore, aged marmosets have impaired cognitive flexibility driven by susceptibility to proactive interference. Since these impairments are in domains critically dependent on the prefrontal cortex, our findings support prefrontal cortical dysfunction as a prominent feature of neurocognitive aging. This work positions the marmoset as a key model for understanding the neural underpinnings of cognitive aging.

Significance Statement
Aging is the greatest risk factor for neurodegenerative disease development, and understanding why is critical for the development of effective therapeutics. The common marmoset, a short-lived non-human primate with neuroanatomical similarity to humans, has gained traction for neuroscientific investigations. However, the lack of robust cognitive phenotyping, particularly as a function of age and across multiple cognitive domains, limits their validity as a model for age-related cognitive impairment. We demonstrate that aging marmosets, like humans, have impairment that is specific to cognitive domains reliant on brain areas that undergo substantial neuroanatomical changes with age. This work validates the marmoset as a key model for understanding region-specific vulnerability to the aging process.
Affiliation(s)
- Courtney Glavis-Bloom
- Systems Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037
- Casey R Vanderlip
- Systems Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037
- Payton A Asch
- Systems Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037
- John H Reynolds
- Systems Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037