1. Sani OG, Pesaran B, Shanechi MM. Dissociative and prioritized modeling of behaviorally relevant neural dynamics using recurrent neural networks. Nat Neurosci 2024. PMID: 39242944. DOI: 10.1038/s41593-024-01731-2.
Abstract
Understanding the dynamical transformation of neural activity to behavior requires new capabilities to nonlinearly model, dissociate and prioritize behaviorally relevant neural dynamics and test hypotheses about the origin of nonlinearity. We present dissociative prioritized analysis of dynamics (DPAD), a nonlinear dynamical modeling approach that enables these capabilities with a multisection neural network architecture and training approach. Analyzing cortical spiking and local field potential activity across four movement tasks, we demonstrate five use-cases. DPAD enabled more accurate neural-behavioral prediction. It identified nonlinear dynamical transformations of local field potentials that were more behavior predictive than traditional power features. Further, DPAD achieved behavior-predictive nonlinear neural dimensionality reduction. It enabled hypothesis testing regarding nonlinearities in neural-behavioral transformation, revealing that, in our datasets, nonlinearities could largely be isolated to the mapping from latent cortical dynamics to behavior. Finally, DPAD extended across continuous, intermittently sampled and categorical behaviors. DPAD provides a powerful tool for nonlinear dynamical modeling and investigation of neural-behavioral data.
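DPAD itself is a nonlinear, RNN-based method, but the core idea of prioritizing behaviorally relevant neural dimensions has a simple linear analogue that conveys the intuition: reduced-rank regression keeps only the neural directions most predictive of behavior. The sketch below is an illustrative toy on synthetic data; every name, dimension, and parameter is an assumption for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 10-channel "neural" activity driven by 3 latents, of which only
# the first drives a 2-D "behavior" (all sizes here are illustrative).
T = 500
latents = rng.standard_normal((T, 3))
neural = latents @ rng.standard_normal((3, 10)) + 0.1 * rng.standard_normal((T, 10))
behavior = latents[:, :1] @ np.array([[1.0, -0.5]]) + 0.1 * rng.standard_normal((T, 2))

# Linear stand-in for "prioritizing behaviorally relevant dimensions":
# reduced-rank regression keeps the neural directions most predictive of
# behavior (DPAD itself learns nonlinear, *dynamical* versions with RNNs).
B_ols, *_ = np.linalg.lstsq(neural, behavior, rcond=None)
_, _, Vt = np.linalg.svd(neural @ B_ols, full_matrices=False)
B_rr = B_ols @ Vt[:1].T @ Vt[:1]   # rank-1 behavior readout
pred = neural @ B_rr

ss_res = np.sum((behavior - pred) ** 2)
ss_tot = np.sum((behavior - behavior.mean(axis=0)) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"rank-1 behavior prediction R^2: {r2:.3f}")
```

Raising the retained rank trades behavioral prioritization for capturing more residual neural variance; DPAD additionally models the temporal dynamics of the latent states, which this static sketch ignores.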
Affiliation(s)
- Omid G Sani
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Bijan Pesaran
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, USA
- Thomas Lord Department of Computer Science, University of Southern California, Los Angeles, CA, USA
- Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, USA
- Alfred E. Mann Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA

2. Schilling A, Gerum R, Boehm C, Rasheed J, Metzner C, Maier A, Reindl C, Hamer H, Krauss P. Deep learning based decoding of single local field potential events. Neuroimage 2024; 297:120696. PMID: 38909761. DOI: 10.1016/j.neuroimage.2024.120696.
Abstract
How is information processed in the cerebral cortex? In most cases, recorded brain activity is averaged over many (stimulus) repetitions, which erases the fine structure of the neural signal. However, the brain is obviously a single-trial processor. Here, we demonstrate that an unsupervised machine learning approach can be used to extract meaningful information from electrophysiological recordings on a single-trial basis. We use an auto-encoder network to reduce the dimensions of single local field potential (LFP) events and create interpretable clusters of different neural activity patterns. Strikingly, certain LFP shapes correspond to latency differences between recording channels; LFP shapes can therefore be used to determine the direction of information flow in the cerebral cortex. Furthermore, after clustering, we decoded the cluster centroids to reverse-engineer the underlying prototypical LFP event shapes. To evaluate our approach, we applied it to both extracellular neural recordings in rodents and intracranial EEG recordings in humans. Finally, we find that single-channel LFP event shapes during spontaneous activity sample from the realm of possible stimulus-evoked event shapes, a finding that had so far only been demonstrated for multi-channel population coding.
Affiliation(s)
- Achim Schilling
- Neuroscience Lab, University Hospital Erlangen, Germany; Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, Germany
- Richard Gerum
- Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, Germany; Department of Physics and Center for Vision Research, York University, Toronto, Canada
- Claudia Boehm
- Neuroscience Lab, University Hospital Erlangen, Germany; Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, Germany
- Jwan Rasheed
- Neuroscience Lab, University Hospital Erlangen, Germany; Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, Germany
- Claus Metzner
- Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, Germany; Pattern Recognition Lab, University Erlangen-Nürnberg, Germany
- Andreas Maier
- Pattern Recognition Lab, University Erlangen-Nürnberg, Germany
- Caroline Reindl
- Epilepsy Center, Department of Neurology, University Hospital Erlangen, Germany
- Hajo Hamer
- Epilepsy Center, Department of Neurology, University Hospital Erlangen, Germany
- Patrick Krauss
- Cognitive Computational Neuroscience Group, University Erlangen-Nürnberg, Germany; Pattern Recognition Lab, University Erlangen-Nürnberg, Germany

3. Codol O, Michaels JA, Kashefi M, Pruszynski JA, Gribble PL. MotorNet, a Python toolbox for controlling differentiable biomechanical effectors with artificial neural networks. eLife 2024; 12:RP88591. PMID: 39078880. PMCID: PMC11288629. DOI: 10.7554/elife.88591.
Abstract
Artificial neural networks (ANNs) are a powerful class of computational models for unravelling neural mechanisms of brain function. However, for neural control of movement, they currently must be integrated with software simulating biomechanical effectors, leading to limiting impracticalities: (1) researchers must rely on two different platforms, and (2) biomechanical effectors are generally not differentiable, constraining researchers to reinforcement learning algorithms despite the existence and potential biological relevance of faster training methods. To address these limitations, we developed MotorNet, an open-source Python toolbox for creating arbitrarily complex, differentiable and biomechanically realistic effectors that can be trained on user-defined motor tasks using ANNs. MotorNet is designed to meet several goals: ease of installation, ease of use, a high-level user-friendly application programming interface, and a modular architecture that allows flexibility in model building. MotorNet requires no dependencies outside Python, making it easy to get started with. For instance, it allows training ANNs on typically used motor control models, such as a two-joint, six-muscle planar arm, within minutes on a typical desktop computer. MotorNet is built on PyTorch and can therefore implement any network architecture that is possible using the PyTorch framework. Consequently, it will immediately benefit from advances in artificial intelligence through PyTorch updates. Finally, it is open source, enabling users to create and share their own improvements, such as new effector and network architectures or custom task designs. MotorNet's focus on higher-order model and task design will lower the overhead cost of initiating computational projects for new researchers by providing a standalone, ready-to-go framework, and will speed up the efforts of established computational teams by enabling a focus on concepts and ideas over implementation.
Affiliation(s)
- Olivier Codol
- Western Institute for Neuroscience, University of Western Ontario, Ontario, Canada
- Department of Psychology, University of Western Ontario, Ontario, Canada
- Jonathan A Michaels
- Western Institute for Neuroscience, University of Western Ontario, Ontario, Canada
- Department of Physiology & Pharmacology, Schulich School of Medicine & Dentistry, University of Western Ontario, Ontario, Canada
- Robarts Research Institute, University of Western Ontario, Ontario, Canada
- Mehrdad Kashefi
- Western Institute for Neuroscience, University of Western Ontario, Ontario, Canada
- Department of Physiology & Pharmacology, Schulich School of Medicine & Dentistry, University of Western Ontario, Ontario, Canada
- Robarts Research Institute, University of Western Ontario, Ontario, Canada
- J Andrew Pruszynski
- Western Institute for Neuroscience, University of Western Ontario, Ontario, Canada
- Department of Psychology, University of Western Ontario, Ontario, Canada
- Department of Physiology & Pharmacology, Schulich School of Medicine & Dentistry, University of Western Ontario, Ontario, Canada
- Robarts Research Institute, University of Western Ontario, Ontario, Canada
- Paul L Gribble
- Western Institute for Neuroscience, University of Western Ontario, Ontario, Canada
- Department of Psychology, University of Western Ontario, Ontario, Canada
- Department of Physiology & Pharmacology, Schulich School of Medicine & Dentistry, University of Western Ontario, Ontario, Canada

4. Huang H. Eight challenges in developing theory of intelligence. Front Comput Neurosci 2024; 18:1388166. PMID: 39114083. PMCID: PMC11303322. DOI: 10.3389/fncom.2024.1388166.
Abstract
A good theory of mathematical beauty is more practical than any current observation, as new predictions about physical reality can be self-consistently verified. This belief applies to the current status of understanding deep neural networks, including large language models, and even biological intelligence. Toy models provide a metaphor of physical reality, allowing the reality to be formulated mathematically (i.e., as a theory) and updated as more conjectures are justified or refuted. One does not need to include every detail in a model; rather, more abstract models are constructed, since complex systems such as brains or deep networks have many sloppy dimensions but far fewer stiff dimensions that strongly impact macroscopic observables. This type of bottom-up mechanistic modeling remains promising in the modern era of understanding natural and artificial intelligence. Here, we shed light on eight challenges in developing a theory of intelligence following this theoretical paradigm. These challenges are representation learning, generalization, adversarial robustness, continual learning, causal learning, the internal model of the brain, next-token prediction, and the mechanics of subjective experience.
Affiliation(s)
- Haiping Huang
- PMI Lab, School of Physics, Sun Yat-sen University, Guangzhou, China
5. Chandran KS, Ghosh K. A deep learning based cognitive model to probe the relation between psychophysics and electrophysiology of flicker stimulus. Brain Inform 2024; 11:18. PMID: 38987386. PMCID: PMC11236830. DOI: 10.1186/s40708-024-00231-0.
Abstract
The flicker stimulus is a visual stimulus of intermittent illumination. A flicker stimulus can appear flickering or steady to a human subject, depending on the physical parameters of the stimulus. When the flickering light appears steady, flicker fusion is said to have occurred. This work aims to bridge the gap between the psychophysics of flicker fusion and the electrophysiology associated with the flicker stimulus through a deep learning-based computational model of flicker perception. Convolutional recurrent neural networks (CRNNs) were trained with psychophysics data of the flicker stimulus obtained from a human subject. We claim that many of the reported electrophysiological features of the flicker stimulus, including the presence of the fundamental and harmonics of the stimulus, can be explained as the result of a temporal convolution operation on the flicker stimulus. We further show that the convolution-layer output of a CRNN trained with psychophysics data is more responsive to specific frequencies, as in the human EEG response to flicker, and that the convolution layer of a trained CRNN can give a nearly sinusoidal output for a 10 Hz flicker stimulus, as reported for some human subjects.
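The abstract's central claim, that the fundamental and harmonics seen in the EEG can arise from a temporal convolution applied to the flicker stimulus, can be sketched numerically: convolving a square-wave flicker with a simple exponential kernel (a stand-in for one learned convolution channel; the kernel and all parameters are illustrative, not taken from the trained CRNN) yields an output whose spectrum peaks at the stimulus frequency and retains its odd harmonics.

```python
import numpy as np

fs, dur, f_stim = 256, 4.0, 10.0          # sample rate (Hz), seconds, flicker (Hz)
t = np.arange(0.0, dur, 1.0 / fs)

# Square-wave flicker: intermittent illumination at 10 Hz.
stim = (np.sin(2 * np.pi * f_stim * t) > 0).astype(float)

# Temporal convolution with an exponential smoothing kernel, standing in
# for a single convolution-layer channel.
kernel = np.exp(-np.arange(0.0, 0.05, 1.0 / fs) / 0.015)
out = np.convolve(stim - stim.mean(), kernel, mode="same")

# Spectrum of the convolved signal: energy at the fundamental and its
# odd harmonics survives the filtering.
spec = np.abs(np.fft.rfft(out))
freqs = np.fft.rfftfreq(len(out), 1.0 / fs)
peak = freqs[np.argmax(spec)]
print(f"dominant output frequency: {peak:.1f} Hz")
```

Because the kernel only filters the stimulus, the output spectrum inherits the square wave's odd harmonics; this is the sense in which a convolution stage alone can account for fundamentals and harmonics in the evoked response.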
Affiliation(s)
- Keerthi S Chandran
- Center for Soft Computing Research, Indian Statistical Institute, 203 BT Road, Kolkata, West Bengal, 700108, India
- Machine Intelligence Unit, Indian Statistical Institute, 203 BT Road, Kolkata, West Bengal, 700108, India
- Kuntal Ghosh
- Center for Soft Computing Research, Indian Statistical Institute, 203 BT Road, Kolkata, West Bengal, 700108, India
- Machine Intelligence Unit, Indian Statistical Institute, 203 BT Road, Kolkata, West Bengal, 700108, India

6. Lippl S, Kay K, Jensen G, Ferrera VP, Abbott LF. A mathematical theory of relational generalization in transitive inference. Proc Natl Acad Sci U S A 2024; 121:e2314511121. PMID: 38968113. PMCID: PMC11252811. DOI: 10.1073/pnas.2314511121.
Abstract
Humans and animals routinely infer relations between different items or events and generalize these relations to novel combinations of items. This allows them to respond appropriately to radically novel circumstances and is fundamental to advanced cognition. However, how learning systems (including the brain) can implement the necessary inductive biases has been unclear. We investigated transitive inference (TI), a classic relational task paradigm in which subjects must learn a relation (e.g., A > B and B > C) and generalize it to new combinations of items (A > C). Through mathematical analysis, we found that a broad range of biologically relevant learning models (e.g., gradient flow or ridge regression) perform TI successfully and recapitulate signature behavioral patterns long observed in living subjects. First, we found that models with item-wise additive representations automatically encode transitive relations. Second, for more general representations, a single scalar "conjunctivity factor" determines model behavior on TI and, further, the principle of norm minimization (a standard statistical inductive bias) enables models with fixed, partly conjunctive representations to generalize transitively. Finally, neural networks in the "rich regime," which enables representation learning and improves generalization on many tasks, unexpectedly show poor generalization and anomalous behavior on TI. We find that such networks implement a form of norm minimization (over hidden weights) that yields a local encoding mechanism lacking transitivity. Our findings show how minimal statistical learning principles give rise to a classical relational inductive bias (transitivity), explain empirically observed behaviors, and establish a formal approach to understanding the neural basis of relational abstraction.
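The first finding, that norm-minimizing linear readouts over item-wise additive representations generalize transitively, can be reproduced in a few lines: train ridge regression on adjacent premise pairs only, then query a novel non-adjacent pair. The item encoding and hyperparameters below are illustrative choices, not the paper's exact setup.

```python
import numpy as np

n_items = 7  # a rank ordering over items A > B > ... > G

def pair_rep(i, j):
    # Item-wise additive representation of "item i presented against item j".
    x = np.zeros(n_items)
    x[i] += 1.0
    x[j] -= 1.0
    return x

# Training set: only adjacent premise pairs (A vs B, B vs C, ...),
# labeled +1 when the higher-ranked item comes first, -1 when reversed.
X, y = [], []
for i in range(n_items - 1):
    X.append(pair_rep(i, i + 1)); y.append(+1.0)
    X.append(pair_rep(i + 1, i)); y.append(-1.0)
X, y = np.array(X), np.array(y)

# Ridge regression: the norm-minimizing linear readout.
lam = 0.1
w = np.linalg.solve(X.T @ X + lam * np.eye(n_items), X.T @ y)

# Novel, non-adjacent combination: B (index 1) vs F (index 5).
novel = float(pair_rep(1, 5) @ w)
print(f"score for B vs F: {novel:.3f}")   # positive score => model picks B
```

The learned weights order the items monotonically, so any never-seen pair is ranked correctly: transitivity falls out of norm minimization rather than being built in.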
Affiliation(s)
- Samuel Lippl
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027
- Center for Theoretical Neuroscience, Department of Neuroscience, Columbia University, New York, NY 10027
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032
- Kenneth Kay
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027
- Center for Theoretical Neuroscience, Department of Neuroscience, Columbia University, New York, NY 10027
- Grossman Center for the Statistics of Mind, Columbia University, New York, NY 10027
- Greg Jensen
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032
- Department of Psychology, Reed College, Portland, OR 97202
- Vincent P. Ferrera
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032
- Department of Psychiatry, Columbia University Medical Center, New York, NY 10032
- L. F. Abbott
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Department of Neuroscience, Columbia University, New York, NY 10027
- Center for Theoretical Neuroscience, Department of Neuroscience, Columbia University, New York, NY 10027
- Department of Neuroscience, Columbia University Medical Center, New York, NY 10032

7. Noel JP, Balzani E, Savin C, Angelaki DE. Context-invariant beliefs are supported by dynamic reconfiguration of single unit functional connectivity in prefrontal cortex of male macaques. Nat Commun 2024; 15:5738. PMID: 38982106. PMCID: PMC11233555. DOI: 10.1038/s41467-024-50203-5.
Abstract
Natural behaviors occur in closed action-perception loops and are supported by dynamic and flexible beliefs abstracted away from our immediate sensory milieu. How this real-world flexibility is instantiated in neural circuits remains unknown. Here, we have male macaques navigate in a virtual environment by primarily leveraging sensory (optic flow) signals, or by more heavily relying on acquired internal models. We record single-unit spiking activity simultaneously from the dorsomedial superior temporal area (MSTd), parietal area 7a and the dorsolateral prefrontal cortex (dlPFC). Results show that while animals were able to maintain adaptive task-relevant beliefs regardless of sensory context, the fine-grained statistical dependencies between neurons, particularly in 7a and dlPFC, dynamically remapped with the changing computational demands. In dlPFC, but not 7a, destroying these statistical dependencies abolished the area's ability for cross-context decoding. Lastly, correlational analyses suggested that the more unit-to-unit couplings remapped in dlPFC, and the less they did so in MSTd, the less population codes and behavior were impacted by the loss of sensory evidence. We conclude that dynamic functional connectivity between prefrontal neurons maintains a stable population code and context-invariant beliefs during naturalistic behavior.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York City, NY, USA
- Department of Neuroscience, University of Minnesota, Minneapolis, MN, USA
- Edoardo Balzani
- Center for Neural Science, New York University, New York City, NY, USA
- Flatiron Institute, Simons Foundation, New York, NY, USA
- Cristina Savin
- Center for Neural Science, New York University, New York City, NY, USA
- Dora E Angelaki
- Center for Neural Science, New York University, New York City, NY, USA

8. Wu N, Valera I, Sinz F, Ecker A, Euler T, Qiu Y. Probabilistic neural transfer function estimation with Bayesian system identification. PLoS Comput Biol 2024; 20:e1012354. PMID: 39083559. PMCID: PMC11318871. DOI: 10.1371/journal.pcbi.1012354.
Abstract
Neural population responses in sensory systems are driven by external physical stimuli. This stimulus-response relationship is typically characterized by receptive fields, which have been estimated with neural system identification approaches. Such models usually require a large amount of training data, yet the recording time for animal experiments is limited, giving rise to epistemic uncertainty in the learned neural transfer functions. While deep neural network models have demonstrated excellent predictive power for neural responses, they usually do not provide the uncertainty of the resulting neural representations and derived statistics, such as most exciting inputs (MEIs), from in silico experiments. Here, we present a Bayesian system identification approach to predict neural responses to visual stimuli and explore whether explicitly modeling network weight variability can be beneficial for identifying neural response properties. To this end, we use variational inference to estimate the posterior distribution of each model weight given the training data. Tests with different neural datasets demonstrate that this method can achieve higher or comparable performance on neural prediction, with much higher data efficiency, compared to Monte Carlo dropout methods and traditional models using point estimates of the model parameters. At the same time, our variational method provides us with an effectively infinite ensemble, avoiding the idiosyncrasy of any single model, to generate MEIs. This allows us to estimate the uncertainty of the stimulus-response function, which we find to be negatively correlated with predictive performance at the model level and which may serve to evaluate models. Furthermore, our approach enables us to identify response properties with credible intervals and to determine whether the inferred features are meaningful by performing statistical tests on MEIs. Finally, in silico experiments show that our model generates stimuli that drive neuronal activity significantly better than traditional models in the limited-data regime.
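As a minimal stand-in for the weight-posterior idea (the paper uses variational inference over deep-network weights), Bayesian linear regression admits the same construction in closed form: a Gaussian posterior over weights that yields a predictive mean plus an uncertainty estimate for every stimulus. Everything below, including the data and the prior/noise settings, is an illustrative toy rather than the authors' model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "stimulus -> response" data: 30 trials, 5 stimulus features.
n, d = 30, 5
true_w = rng.standard_normal(d)
X = rng.standard_normal((n, d))
y = X @ true_w + 0.5 * rng.standard_normal(n)

# Bayesian linear regression: Gaussian prior N(0, alpha^-1 I) on weights,
# Gaussian noise with precision beta.  The weight posterior is Gaussian with
# closed-form mean and covariance (the exact analogue of what variational
# inference approximates for deep networks).
alpha, beta = 1.0, 1.0 / 0.25
S_inv = alpha * np.eye(d) + beta * X.T @ X     # posterior precision
S = np.linalg.inv(S_inv)                        # posterior covariance
m = beta * S @ X.T @ y                          # posterior mean

# Predictive distribution for a new stimulus: a mean and a variance, so the
# prediction comes with "error bars" that a point-estimate fit cannot give.
x_new = rng.standard_normal(d)
pred_mean = float(x_new @ m)
pred_var = float(1.0 / beta + x_new @ S @ x_new)
print(f"prediction: {pred_mean:.2f} +/- {np.sqrt(pred_var):.2f}")
```

Sampling weights from the posterior N(m, S) gives the "effectively infinite ensemble" mentioned in the abstract; each sample is one plausible transfer function consistent with the limited data.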
Affiliation(s)
- Nan Wu
- Department of Computer Science, Saarland University, Saarbrücken, Germany
- Institute for Ophthalmic Research and Centre for Integrative Neuroscience (CIN), Tübingen University, Tübingen, Germany
- Isabel Valera
- Department of Computer Science, Saarland University, Saarbrücken, Germany
- Fabian Sinz
- Department of Computer Science and Campus Institute Data Science (CIDAS), Göttingen University, Göttingen, Germany
- Alexander Ecker
- Department of Computer Science and Campus Institute Data Science (CIDAS), Göttingen University, Göttingen, Germany
- Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Thomas Euler
- Institute for Ophthalmic Research and Centre for Integrative Neuroscience (CIN), Tübingen University, Tübingen, Germany
- Yongrong Qiu
- Institute for Ophthalmic Research and Centre for Integrative Neuroscience (CIN), Tübingen University, Tübingen, Germany
- Department of Computer Science and Campus Institute Data Science (CIDAS), Göttingen University, Göttingen, Germany
- Department of Ophthalmology, Byers Eye Institute, Stanford University School of Medicine, Stanford, California, United States of America
- Stanford Bio-X, Stanford University, Stanford, California, United States of America
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, California, United States of America

9. Ma AC, Cameron AD, Wiener M. Memorability shapes perceived time (and vice versa). Nat Hum Behav 2024; 8:1296-1308. PMID: 38649460. DOI: 10.1038/s41562-024-01863-2.
Abstract
Visual stimuli are known to vary in their perceived duration. Some visual stimuli are also known to linger for longer in memory. Yet, whether these two features of visual processing are linked is unknown. Despite early assumptions that time is an extracted or higher-order feature of perception, more recent work over the past two decades has demonstrated that timing may be instantiated within sensory modality circuits. A primary location for many of these studies is the visual system, where duration-sensitive responses have been demonstrated. Furthermore, visual stimulus features have been observed to shift perceived duration. These findings suggest that visual circuits mediate or construct perceived time. Here we present evidence across a series of experiments that perceived time is affected by the image properties of scene size, clutter and memorability. More specifically, we observe that scene size and memorability dilate time, whereas clutter contracts it. Furthermore, the durations of more memorable images are also perceived more precisely. Conversely, the longer the perceived duration of an image, the more memorable it is. To explain these findings, we applied a recurrent convolutional neural network model of the ventral visual system, in which images are progressively processed over time. We find that more memorable images are processed faster, and that this increase in processing speed predicts both the lengthening and the increased precision of perceived durations. These findings provide evidence for a link between image features, time perception and memory that can be further explored with models of visual processing.
Affiliation(s)
- Alex C Ma
- Department of Psychology, George Mason University, Fairfax, VA, USA
- Ayana D Cameron
- Department of Psychology, George Mason University, Fairfax, VA, USA
- Martin Wiener
- Department of Psychology, George Mason University, Fairfax, VA, USA

10. Ostojic S, Fusi S. Computational role of structure in neural activity and connectivity. Trends Cogn Sci 2024; 28:677-690. PMID: 38553340. DOI: 10.1016/j.tics.2024.03.003.
Abstract
One major challenge of neuroscience is identifying structure in seemingly disorganized neural activity. Different types of structure have different computational implications that can help neuroscientists understand the functional role of a particular brain area. Here, we outline a unified approach to characterize structure by inspecting the representational geometry and the modularity properties of the recorded activity and show that a similar approach can also reveal structure in connectivity. We start by setting up a general framework for determining geometry and modularity in activity and connectivity and relating these properties with computations performed by the network. We then use this framework to review the types of structure found in recent studies of model networks performing three classes of computations.
Affiliation(s)
- Srdjan Ostojic
- Laboratoire de Neurosciences Cognitives et Computationnelles, INSERM U960, Ecole Normale Superieure - PSL Research University, 75005 Paris, France
- Stefano Fusi
- Center for Theoretical Neuroscience, Columbia University, New York, NY, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA; Department of Neuroscience, Columbia University, New York, NY, USA; Kavli Institute for Brain Science, Columbia University, New York, NY, USA

11. Thieu MK, Ayzenberg V, Lourenco SF, Kragel PA. Visual looming is a primitive for human emotion. iScience 2024; 27:109886. PMID: 38799577. PMCID: PMC11126809. DOI: 10.1016/j.isci.2024.109886.
Abstract
The neural computations for looming detection are strikingly similar across species. In mammals, information about approaching threats is conveyed from the retina to the midbrain superior colliculus, where approach variables are computed to enable defensive behavior. Although neuroscientific theories posit that midbrain representations contribute to emotion through connectivity with distributed brain systems, it remains unknown whether a computational system for looming detection can predict both defensive behavior and phenomenal experience in humans. Here, we show that a shallow convolutional neural network based on the Drosophila visual system predicts defensive blinking to looming objects in infants and superior colliculus responses to optical expansion in adults. Further, the neural network's responses to naturalistic video clips predict self-reported emotion largely by way of subjective arousal. These findings illustrate how a simple neural network architecture optimized for a species-general task relevant for survival explains motor and experiential components of human emotion.
Affiliation(s)
- Vladislav Ayzenberg
- Emory University, Atlanta, GA, USA
- University of Pennsylvania, Philadelphia, PA, USA
12. Wang R, Chen ZS. Large-scale foundation models and generative AI for BigData neuroscience. Neurosci Res 2024: S0168-0102(24)00075-0. PMID: 38897235. DOI: 10.1016/j.neures.2024.06.003.
Abstract
Recent advances in machine learning have led to revolutionary breakthroughs in computer games, image and natural language understanding, and scientific discovery. Foundation models and large language models (LLMs) have recently achieved human-like performance on many tasks thanks to BigData. With the help of self-supervised learning (SSL) and transfer learning, these models may potentially reshape the landscape of neuroscience research and make a significant impact on its future. Here we present a mini-review of recent advances in foundation models and generative AI models as well as their applications in neuroscience, including natural language and speech, semantic memory, brain-machine interfaces (BMIs), and data augmentation. We argue that this paradigm-shifting framework will open new avenues for many neuroscience research directions and discuss the accompanying challenges and opportunities.
Affiliation(s)
- Ran Wang
- Department of Psychiatry, New York University Grossman School of Medicine, New York, NY 10016, USA
- Zhe Sage Chen
- Department of Psychiatry, New York University Grossman School of Medicine, New York, NY 10016, USA; Department of Neuroscience and Physiology, Neuroscience Institute, New York University Grossman School of Medicine, New York, NY 10016, USA; Department of Biomedical Engineering, New York University Tandon School of Engineering, Brooklyn, NY 11201, USA
13. Wang L, Zhou Z, Yang X, Shi S, Zeng X, Cao D. The present state and challenges of active learning in drug discovery. Drug Discov Today 2024; 29:103985. PMID: 38642700. DOI: 10.1016/j.drudis.2024.103985.
Abstract
Active learning (AL) is an iterative feedback process that efficiently identifies valuable data within a vast chemical space, even when labeled data are limited. This characteristic makes it a valuable approach to the ongoing challenges of drug discovery, such as the ever-expanding exploration space and the scarcity of labeled data. Consequently, AL is gaining prominence in the field of drug development. In this paper, we comprehensively review the application of AL at all stages of drug discovery, including compound-target interaction prediction, virtual screening, molecular generation and optimization, and molecular property prediction. Additionally, we discuss the challenges and prospects associated with the current applications of AL in drug discovery.
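The iterative feedback loop this abstract describes can be made concrete with a deliberately simplified sketch (not from the paper): pool-based active learning with uncertainty sampling, where a 1-D threshold classifier over a hypothetical "compound score" repeatedly queries the unlabeled pool point closest to its current decision boundary. All names (`fit_threshold`, `acquire`, `assay`) and the synthetic data are illustrative assumptions.

```python
import random

def fit_threshold(labeled):
    """1-D threshold classifier: predict 'active' when score >= t, with t
    the midpoint between the best-scoring inactive and the worst-scoring
    active compound labeled so far."""
    actives = [x for x, y in labeled if y == 1]
    inactives = [x for x, y in labeled if y == 0]
    return (max(inactives) + min(actives)) / 2.0

def acquire(threshold, pool):
    """Uncertainty sampling: pick the unlabeled compound whose score is
    closest to the current decision boundary."""
    return min(pool, key=lambda x: abs(x - threshold))

def assay(x):
    """Hypothetical oracle (the wet-lab 'experiment'): compounds with a
    score >= 0.6 are active."""
    return 1 if x >= 0.6 else 0

random.seed(0)
pool = [random.random() for _ in range(1000)]   # unlabeled chemical space
labeled = [(0.05, 0), (0.95, 1)]                # tiny initial labeled set

for _ in range(20):                             # active-learning iterations
    t = fit_threshold(labeled)
    query = acquire(t, pool)
    pool.remove(query)
    labeled.append((query, assay(query)))       # label only what we query
```

Because each query lands near the boundary, the labeled set stays small (22 labels here) while the boundary estimate homes in on the true cutoff; a random-sampling baseline typically needs far more labels for comparable precision.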
Affiliation(s)
- Lei Wang
- Xiangya School of Pharmaceutical Sciences, Central South University, Changsha 410013, Hunan, China
- Zhenran Zhou
- Department of Computer Science, Hunan University, Changsha 410082, Hunan, China
- Xixi Yang
- Department of Computer Science, Hunan University, Changsha 410082, Hunan, China
- Shaohua Shi
- Institute for Advancing Translational Medicine in Bone and Joint Diseases, School of Chinese Medicine, Hong Kong Baptist University, Hong Kong SAR, China
- Xiangxiang Zeng
- Department of Computer Science, Hunan University, Changsha 410082, Hunan, China
- Dongsheng Cao
- Xiangya School of Pharmaceutical Sciences, Central South University, Changsha 410013, Hunan, China
14. Ratzon A, Derdikman D, Barak O. Representational drift as a result of implicit regularization. eLife 2024; 12:RP90069. PMID: 38695551. PMCID: PMC11065423. DOI: 10.7554/elife.90069.
Abstract
Recent studies show that, even in constant environments, the tuning of single neurons changes over time in a variety of brain regions. This representational drift has been suggested to be a consequence of continuous learning under noise, but its properties are still not fully understood. To investigate the underlying mechanism, we trained an artificial network on a simplified navigational task. The network quickly reached a state of high performance, and many units exhibited spatial tuning. We then continued training the network and noticed that the activity became sparser with time. Initial learning was orders of magnitude faster than the ensuing sparsification. This sparsification is consistent with recent results in machine learning, in which networks slowly move within their solution space until they reach a flat area of the loss function. We analyzed four datasets from different labs, all demonstrating that CA1 neurons become sparser and more spatially informative with exposure to the same environment. We conclude that learning is divided into three overlapping phases: (i) fast familiarity with the environment, (ii) slow implicit regularization, and (iii) a steady state of null drift. The variability in drift dynamics opens the possibility of inferring learning algorithms from observations of drift statistics.
Affiliation(s)
- Aviv Ratzon
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Network Biology Research Laboratory, Technion - Israel Institute of Technology, Haifa, Israel
- Dori Derdikman
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Omri Barak
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa, Israel
- Network Biology Research Laboratory, Technion - Israel Institute of Technology, Haifa, Israel
15. Yoon JH, Lee D, Lee C, Cho E, Lee S, Cazenave-Gassiot A, Kim K, Chae S, Dennis EA, Suh PG. Paradigm shift required for translational research on the brain. Exp Mol Med 2024; 56:1043-1054. PMID: 38689090. PMCID: PMC11148129. DOI: 10.1038/s12276-024-01218-x.
Abstract
Biomedical research on the brain has led to many discoveries and developments, such as understanding human consciousness and the mind, and overcoming brain diseases. However, historical biomedical research on the brain has unique characteristics that differ from those of conventional biomedical research. For example, there are different scientific interpretations due to the high complexity of the brain and insufficient intercommunication between researchers of different disciplines owing to the limited conceptual and technical overlap of distinct backgrounds. Therefore, the development of biomedical research on the brain has been slower than that in other areas. Brain biomedical research has recently undergone a paradigm shift, and conducting patient-centered, large-scale brain biomedical research has become possible using emerging high-throughput analysis tools. Neuroimaging, multiomics, and artificial intelligence technology are the main drivers of this new approach, foreshadowing dramatic advances in translational research. In addition, emerging interdisciplinary cooperative studies provide insights into how unresolved questions in biomedicine can be addressed. This review presents the in-depth aspects of conventional biomedical research and discusses the future of biomedical research on the brain.
Affiliation(s)
- Jong Hyuk Yoon
- Neurodegenerative Diseases Research Group, Korea Brain Research Institute, Daegu, 41062, Republic of Korea
- Dongha Lee
- Cognitive Science Research Group, Korea Brain Research Institute, Daegu, 41062, Republic of Korea
- Chany Lee
- Cognitive Science Research Group, Korea Brain Research Institute, Daegu, 41062, Republic of Korea
- Eunji Cho
- Neurodegenerative Diseases Research Group, Korea Brain Research Institute, Daegu, 41062, Republic of Korea
- Seulah Lee
- Neurodegenerative Diseases Research Group, Korea Brain Research Institute, Daegu, 41062, Republic of Korea
- Amaury Cazenave-Gassiot
- Department of Biochemistry and Precision Medicine Translational Research Program, Yong Loo Lin School of Medicine, National University of Singapore, Singapore, 119077, Singapore
- Singapore Lipidomics Incubator (SLING), Life Sciences Institute, National University of Singapore, Singapore, 117456, Singapore
- Kipom Kim
- Research Strategy Office, Korea Brain Research Institute, Daegu, 41062, Republic of Korea
- Sehyun Chae
- Neurovascular Unit Research Group, Korea Brain Research Institute, Daegu, 41062, Republic of Korea
- Edward A Dennis
- Department of Pharmacology and Department of Chemistry and Biochemistry, University of California, San Diego, La Jolla, CA, 92093-0601, USA
- Pann-Ghill Suh
- Korea Brain Research Institute, Daegu, 41062, Republic of Korea
16. Fisco-Compte P, Aquilué-Llorens D, Roqueiro N, Fossas E, Guillamon A. Empirical modeling and prediction of neuronal dynamics. Biol Cybern 2024; 118:83-110. PMID: 38597964. PMCID: PMC11068704. DOI: 10.1007/s00422-024-00986-z.
Abstract
Mathematical modeling of neuronal dynamics has experienced fast growth in recent decades thanks to the biophysical formalism introduced by Hodgkin and Huxley in the 1950s. Other types of models (for instance, integrate-and-fire models), although less realistic, have also contributed to understanding neuronal dynamics. However, there is still a vast volume of data that has not been associated with a mathematical model, mainly because data are acquired more rapidly than they can be analyzed or because they are difficult to analyze (for instance, if the number of ionic channels involved is huge). Therefore, developing new methodologies to obtain mathematical or computational models associated with data (even without previous knowledge of the source) can be helpful for making future predictions. Here, we explore the capability of a wavelet neural network to identify neuronal (single-cell) dynamics. We present an optimized computational scheme that trains the network with biologically plausible input currents. We obtain successful identification for data generated from four different neuron models when using all variables as inputs to the network. We also show that the empirical model obtained is able to generalize and predict the neuronal dynamics generated by variable input currents different from those used to train the artificial network. In the more realistic situation of using only the voltage and the injected current as input data to train the network, we lose predictive ability, but for low-dimensional models the results are still satisfactory. We understand our contribution as a first step toward obtaining empirical models from experimental voltage traces.
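The identification pipeline sketched in this abstract (drive a neuron model with plausible currents, fit a data-driven one-step-ahead predictor, then test generalization on currents never used in training) can be illustrated with a much simpler surrogate than the paper's wavelet network: a linear one-step model fit by closed-form least squares. The leaky-integrator "neuron" and all parameter values below are toy assumptions, not the authors' models.

```python
def simulate(currents, a=0.9, b=0.5, v0=0.0):
    """Ground-truth 'neuron': a leaky integrator v[t+1] = a*v[t] + b*I[t]
    (a deliberately low-dimensional stand-in for a conductance model)."""
    v = [v0]
    for i in currents:
        v.append(a * v[-1] + b * i)
    return v

def identify(currents, voltages):
    """Least-squares fit of v[t+1] = a_hat*v[t] + b_hat*I[t], solving the
    2x2 normal equations in closed form."""
    svv = svi = sii = svy = siy = 0.0
    for t in range(len(currents)):
        v, i, y = voltages[t], currents[t], voltages[t + 1]
        svv += v * v; svi += v * i; sii += i * i
        svy += v * y; siy += i * y
    det = svv * sii - svi * svi
    return ((svy * sii - siy * svi) / det,   # a_hat
            (siy * svv - svy * svi) / det)   # b_hat

# Identify from one (varied) training current ...
train_I = [((t * 37) % 11) / 10.0 for t in range(200)]
a_hat, b_hat = identify(train_I, simulate(train_I))

# ... then roll the fitted model forward on a novel step input.
test_I = [0.3] * 50 + [1.0] * 50
v_pred = simulate(test_I, a=a_hat, b=b_hat)
v_true = simulate(test_I)
```

The same train-on-one-current, predict-another logic carries over when the predictor is a nonlinear network and the ground truth is a Hodgkin-Huxley-type model; only the function class changes.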
Affiliation(s)
- Pau Fisco-Compte
- Departament d'Enginyeria Elèctrica, CITCEA-UPC, Universitat Politècnica de Catalunya - Barcelona TECH, Av. Diagonal, 647 (Edifici ETSEIB), Barcelona, Catalonia, 08028, Spain
- David Aquilué-Llorens
- Neuroscience BU, Starlab Barcelona S.L., Av Tibidabo 47 bis, Barcelona, Catalonia, 08035, Spain
- Nestor Roqueiro
- Depto. de Automação e Sistemas, Federal University of Santa Catarina, Bairro Trindade, Caixa Postal 476, Florianopolis, Santa Catarina, 88040-900, Brazil
- Enric Fossas
- Institut d'Organització i Control, Universitat Politècnica de Catalunya - Barcelona TECH, Av. Diagonal, 647, planta 11 (Edifici ETSEIB), Barcelona, Catalonia, 08028, Spain
- Antoni Guillamon
- Departament de Matemàtiques (EPSEB) and Institut de Matemàtiques de la UPC (IMTech), Universitat Politècnica de Catalunya - Barcelona TECH, Av. Dr. Marañón, 44-50, Barcelona, Catalonia, 08028, Spain
- Centre de Recerca Matemàtica, Edifici C, Campus de Bellaterra, Cerdanyola del Vallès, Catalonia, 08193, Spain
17. Davidson G, Orhan AE, Lake BM. Spatial relation categorization in infants and deep neural networks. Cognition 2024; 245:105690. PMID: 38330851. DOI: 10.1016/j.cognition.2023.105690.
Abstract
Spatial relations, such as above, below, between, and containment, are important mediators in children's understanding of the world (Piaget, 1954). The development of these relational categories in infancy has been extensively studied (Quinn, 2003), yet little is known about their computational underpinnings. Using developmental tests, we examine the extent to which deep neural networks, pretrained on a standard vision benchmark or egocentric video captured from one baby's perspective, form categorical representations for visual stimuli depicting relations. Notably, the networks did not receive any explicit training on relations. We then analyze whether these networks recover similar patterns to ones identified in development, such as reproducing the relative difficulty of categorizing different spatial relations and different stimulus abstractions. We find that the networks we evaluate tend to recover many of the patterns observed with the simpler relations of "above versus below" or "between versus outside", but struggle to match developmental findings related to "containment". We identify factors in the choice of model architecture, pretraining data, and experimental design that contribute to the extent to which the networks match developmental patterns, and highlight experimental predictions made by our modeling results. Our results open the door to modeling infants' earliest categorization abilities with modern machine learning tools and demonstrate the utility and productivity of this approach.
Affiliation(s)
- Guy Davidson
- Center for Data Science, New York University, United States of America
- A Emin Orhan
- Center for Data Science, New York University, United States of America
- Brenden M Lake
- Center for Data Science, New York University, United States of America; Department of Psychology, New York University, United States of America
18. Heinen R, Bierbrauer A, Wolf OT, Axmacher N. Representational formats of human memory traces. Brain Struct Funct 2024; 229:513-529. PMID: 37022435. PMCID: PMC10978732. DOI: 10.1007/s00429-023-02636-9.
Abstract
Neural representations are internal brain states that constitute the brain's model of the external world or some of its features. In the presence of sensory input, a representation may reflect various properties of this input. When perceptual information is no longer available, the brain can still activate representations of previously experienced episodes due to the formation of memory traces. In this review, we aim to characterize the nature of neural memory representations and how they can be assessed with cognitive neuroscience methods, mainly focusing on neuroimaging. We discuss how multivariate analysis techniques such as representational similarity analysis (RSA) and deep neural networks (DNNs) can be leveraged to gain insights into the structure of neural representations and their different representational formats. We provide several examples of recent studies demonstrating that we can not only measure memory representations using RSA but also investigate their multiple formats using DNNs. We show that, in addition to slow generalization during consolidation, memory representations are subject to semantization already during short-term memory, as revealed by a shift from a visual to a semantic format. In addition to perceptual and conceptual formats, we describe the impact of affective evaluations as an additional dimension of episodic memories. Overall, these studies illustrate how the analysis of neural representations may help us gain a deeper understanding of the nature of human memory.
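The RSA logic invoked in this abstract fits in a few lines: build a representational dissimilarity matrix (RDM) for each "system" (a brain region, a DNN layer), then correlate the two RDMs' off-diagonal entries. A minimal pure-Python illustration with made-up activation vectors follows; the patterns, distance choice, and Pearson comparison are simplifying assumptions (published RSA work often uses correlation distance and Spearman rank correlation).

```python
def rdm(patterns):
    """Representational dissimilarity matrix: pairwise Euclidean distances
    between condition-wise activation patterns."""
    n = len(patterns)
    return [[sum((a - b) ** 2 for a, b in zip(patterns[i], patterns[j])) ** 0.5
             for j in range(n)] for i in range(n)]

def upper(m):
    """Vectorize the upper triangle (the RDM is symmetric, diagonal zero)."""
    return [m[i][j] for i in range(len(m)) for j in range(i + 1, len(m))]

def pearson(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical activation patterns for 4 stimuli in two 'systems' with
# different dimensionalities (say, an fMRI ROI and a DNN layer); system B
# preserves system A's similarity structure, not its raw values.
sys_a = [[1.0, 0.0, 0.2], [0.9, 0.1, 0.3], [0.0, 1.0, 0.8], [0.1, 0.9, 0.7]]
sys_b = [[2.1, 0.0], [1.9, 0.2], [0.1, 2.0], [0.2, 1.8]]

similarity = pearson(upper(rdm(sys_a)), upper(rdm(sys_b)))
```

Because RSA compares dissimilarity structure rather than raw activations, the two systems need not share units, scale, or dimensionality, which is exactly what makes brain-to-DNN comparisons possible.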
Affiliation(s)
- Rebekka Heinen
- Department of Neuropsychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University Bochum, Universitätsstraße 150, 44801, Bochum, Germany
- Anne Bierbrauer
- Department of Neuropsychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University Bochum, Universitätsstraße 150, 44801, Bochum, Germany
- Institute for Systems Neuroscience, Medical Center Hamburg-Eppendorf, Martinistraße 52, 20251, Hamburg, Germany
- Oliver T Wolf
- Department of Cognitive Psychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University Bochum, Universitätsstraße 150, 44801, Bochum, Germany
- Nikolai Axmacher
- Department of Neuropsychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University Bochum, Universitätsstraße 150, 44801, Bochum, Germany
19. Zanin M, Aktürk T, Yıldırım E, Yerlikaya D, Yener G, Güntekin B. Reconstructing brain functional networks through identifiability and deep learning. Netw Neurosci 2024; 8:241-259. PMID: 38562295. PMCID: PMC10923503. DOI: 10.1162/netn_a_00353.
Abstract
We propose a novel approach for reconstructing functional networks that represent brain dynamics, based on the idea that the coparticipation of two brain regions in a common cognitive task should result in a drop in their identifiability, i.e., in the uniqueness of their dynamics. This identifiability is estimated through the score obtained by deep learning models in supervised classification tasks and therefore requires no a priori assumptions about the nature of such coparticipation. The method is tested on EEG recordings obtained from Alzheimer's and Parkinson's disease patients, and matched healthy volunteers, for eyes-open and eyes-closed resting-state conditions, and the resulting functional networks are analysed through standard topological metrics. Both groups of patients are characterised by a reduction in the identifiability of the corresponding EEG signals, and by differences in the patterns that support such identifiability. The resulting functional networks are similar, but not identical, to those reconstructed using a correlation metric. Differences between control subjects and patients can be observed in network metrics like the clustering coefficient and the assortativity in different frequency bands. Differences are also observed between eyes-open and eyes-closed conditions, especially for Parkinson's disease patients.
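The core identifiability-drop idea can be sketched without any deep learning at all: score how uniquely a classifier can tell two regions' signals apart, at rest versus during a shared task, and use the drop as a candidate edge weight. The sketch below substitutes a nearest-class-mean classifier and synthetic epochs for the paper's deep models and EEG; every signal, parameter, and function name is a toy assumption.

```python
import random

def mean(xs):
    return sum(xs) / len(xs)

def identifiability(eps_a, eps_b):
    """Leave-one-out accuracy of a nearest-class-mean classifier asked to
    tell region A's epochs from region B's (a crude stand-in for the
    paper's deep learning models). 1.0 = fully unique dynamics; chance
    level (~0.5) = indistinguishable."""
    data = [(mean(e), 0) for e in eps_a] + [(mean(e), 1) for e in eps_b]
    correct = 0
    for i, (x, y) in enumerate(data):
        rest = [d for j, d in enumerate(data) if j != i]
        m0 = mean([f for f, lab in rest if lab == 0])
        m1 = mean([f for f, lab in rest if lab == 1])
        correct += (0 if abs(x - m0) <= abs(x - m1) else 1) == y
    return correct / len(data)

random.seed(1)
# At rest the two synthetic 'regions' have distinct baselines ...
rest_a = [[1.0 + random.gauss(0, 0.1) for _ in range(20)] for _ in range(30)]
rest_b = [[-1.0 + random.gauss(0, 0.1) for _ in range(20)] for _ in range(30)]
# ... but during a shared task both track a common component, which pulls
# their dynamics together and lowers identifiability.
shared = [random.gauss(0, 1.0) for _ in range(30)]
task_a = [[0.2 + shared[k] + random.gauss(0, 0.1) for _ in range(20)]
          for k in range(30)]
task_b = [[-0.2 + shared[k] + random.gauss(0, 0.1) for _ in range(20)]
          for k in range(30)]

# The identifiability drop is the candidate edge weight between the regions.
edge_weight = identifiability(rest_a, rest_b) - identifiability(task_a, task_b)
```

Repeating this over all region pairs yields a weighted adjacency matrix on which the abstract's topological metrics (clustering coefficient, assortativity) can then be computed.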
Affiliation(s)
- Massimiliano Zanin
- Instituto de Física Interdisciplinar y Sistemas Complejos IFISC (CSIC-UIB), Campus UIB, Palma de Mallorca, Spain
- Tuba Aktürk
- Program of Electroneurophysiology, Vocational School, Istanbul Medipol University, Istanbul, Turkey
- Health Sciences and Technology Research Institute (SABITA), Istanbul Medipol University, Istanbul, Turkey
- Ebru Yıldırım
- Program of Electroneurophysiology, Vocational School, Istanbul Medipol University, Istanbul, Turkey
- Deniz Yerlikaya
- Department of Neurosciences, Health Sciences Institute, Dokuz Eylül University, Izmir, Turkey
- Görsev Yener
- Department of Neurosciences, Health Sciences Institute, Dokuz Eylül University, Izmir, Turkey
- School of Medicine, Izmir University of Economics, Izmir, Turkey
- Brain Dynamics Multidisciplinary Research Center, Dokuz Eylül University, Izmir, Turkey
- Bahar Güntekin
- Health Sciences and Technology Research Institute (SABITA), Istanbul Medipol University, Istanbul, Turkey
- Department of Biophysics, School of Medicine, Istanbul Medipol University, Turkey
20. Kay K, Biderman N, Khajeh R, Beiran M, Cueva CJ, Shohamy D, Jensen G, Wei XX, Ferrera VP, Abbott LF. Emergent neural dynamics and geometry for generalization in a transitive inference task. PLoS Comput Biol 2024; 20:e1011954. PMID: 38662797. PMCID: PMC11125559. DOI: 10.1371/journal.pcbi.1011954.
Abstract
Relational cognition, the ability to infer relationships that generalize to novel combinations of objects, is fundamental to human and animal intelligence. Despite this importance, it remains unclear how relational cognition is implemented in the brain, due in part to a lack of hypotheses and predictions at the levels of collective neural activity and behavior. Here we discovered, analyzed, and experimentally tested neural networks (NNs) that perform transitive inference (TI), a classic relational task (if A > B and B > C, then A > C). We found NNs that (i) generalized perfectly, despite lacking overt transitive structure prior to training, (ii) generalized when the task required working memory (WM), a capacity thought to be essential to inference in the brain, (iii) emergently expressed behaviors long observed in living subjects, in addition to a novel order-dependent behavior, and (iv) expressed different task solutions yielding alternative behavioral and neural predictions. Further, in a large-scale experiment, we found that human subjects performing WM-based TI showed behavior inconsistent with a class of NNs that characteristically expressed an intuitive task solution. These findings provide neural insights into a classical relational ability, with wider implications for how the brain realizes relational cognition.
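The TI setup can be made concrete with a toy learner (a stand-in, not the paper's recurrent networks): train a scalar rank per item using only adjacent premise pairs (A > B, B > C, ...), then test every pair, including non-adjacent ones that were never presented. The margin-based update rule below is an illustrative assumption; it shows how a learned one-dimensional "magnitude" representation yields perfect transitive generalization.

```python
import itertools

items = list("ABCDEFG")
premises = [(items[i], items[i + 1]) for i in range(len(items) - 1)]  # A>B ... F>G
rank = {s: 0.0 for s in items}

# Training sees ONLY adjacent pairs; each update pushes the winner up and
# the loser down until the premise is satisfied with a margin of 1.
for _ in range(200):
    for hi, lo in premises:
        if rank[hi] - rank[lo] < 1.0:
            rank[hi] += 0.1
            rank[lo] -= 0.1

# Generalization test: every pair, including non-adjacent ones (e.g. B vs F)
# that never appeared during training.
correct = all(rank[a] > rank[b] for a, b in itertools.combinations(items, 2))
```

The learned ranks also reproduce the symbolic-distance effect qualitatively: pairs farther apart in the hierarchy end up with larger rank differences, hence easier discriminations.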
Affiliation(s)
- Kenneth Kay
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Grossman Center for the Statistics of Mind, Columbia University, New York, New York, United States of America
- Natalie Biderman
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Department of Psychology, Columbia University, New York, New York, United States of America
- Ramin Khajeh
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Manuel Beiran
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- Christopher J. Cueva
- Department of Brain and Cognitive Sciences, MIT, Cambridge, Massachusetts, United States of America
- Daphna Shohamy
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Department of Psychology, Columbia University, New York, New York, United States of America
- The Kavli Institute for Brain Science, Columbia University, New York, New York, United States of America
- Greg Jensen
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Department of Neuroscience, Columbia University Medical Center, New York, New York, United States of America
- Department of Psychology, Reed College, Portland, Oregon, United States of America
- Xue-Xin Wei
- Departments of Neuroscience and Psychology, The University of Texas at Austin, Austin, Texas, United States of America
- Vincent P. Ferrera
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Department of Neuroscience, Columbia University Medical Center, New York, New York, United States of America
- Department of Psychiatry, Columbia University Medical Center, New York, New York, United States of America
- LF Abbott
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America
- The Kavli Institute for Brain Science, Columbia University, New York, New York, United States of America
- Department of Neuroscience, Columbia University Medical Center, New York, New York, United States of America
21. Juliani A, Safron A, Kanai R. Deep CANALs: a deep learning approach to refining the canalization theory of psychopathology. Neurosci Conscious 2024; 2024:niae005. PMID: 38533457. PMCID: PMC10965250. DOI: 10.1093/nc/niae005.
Abstract
Psychedelic therapy has seen a resurgence of interest in the last decade, with promising clinical outcomes for the treatment of a variety of psychopathologies. In response to this success, several theoretical models have been proposed to account for the positive therapeutic effects of psychedelics. One of the more prominent models is "RElaxed Beliefs Under pSychedelics," which proposes that psychedelics act therapeutically by relaxing the strength of maladaptive high-level beliefs encoded in the brain. The more recent "CANAL" model of psychopathology builds on the explanatory framework of RElaxed Beliefs Under pSychedelics by proposing that canalization (the development of overly rigid belief landscapes) may be a primary factor in psychopathology. Here, we make use of learning theory in deep neural networks to develop a series of refinements to the original CANAL model. Our primary theoretical contribution is to disambiguate two separate optimization landscapes underlying belief representation in the brain and describe the unique pathologies which can arise from the canalization of each. Along each dimension, we identify pathologies of either too much or too little canalization, implying that the construct of canalization does not have a simple linear correlation with the presentation of psychopathology. In this expanded paradigm, we demonstrate the ability to make novel predictions regarding what aspects of psychopathology may be amenable to psychedelic therapy, as well as what forms of psychedelic therapy may ultimately be most beneficial for a given individual.
Collapse
Affiliation(s)
- Arthur Juliani
- Microsoft Research, 300 Lafayette St, New York, NY 10012, USA
- Adam Safron
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins University, 600 N Wolfe St, Baltimore, MD 21205, USA
- Ryota Kanai
- Neurotechnology R&D Unit, Araya Inc, 6F Sanpo Sakuma Building, 1-11 Kandasakumacho, Chiyoda-ku, Tokyo 101-0025, Japan
22. McMullin MA, Kumar R, Higgins NC, Gygi B, Elhilali M, Snyder JS. Preliminary evidence for global properties in human listeners during natural auditory scene perception. Open Mind (Camb) 2024; 8:333-365. PMID: 38571530. PMCID: PMC10990578. DOI: 10.1162/opmi_a_00131.
Abstract
Theories of auditory and visual scene analysis suggest the perception of scenes relies on the identification and segregation of objects within it, resembling a detail-oriented processing style. However, a more global process may occur while analyzing scenes, which has been evidenced in the visual domain. It is our understanding that a similar line of research has not been explored in the auditory domain; therefore, we evaluated the contributions of high-level global and low-level acoustic information to auditory scene perception. An additional aim was to increase the field's ecological validity by using and making available a new collection of high-quality auditory scenes. Participants rated scenes on 8 global properties (e.g., open vs. enclosed) and an acoustic analysis evaluated which low-level features predicted the ratings. We submitted the acoustic measures and average ratings of the global properties to separate exploratory factor analyses (EFAs). The EFA of the acoustic measures revealed a seven-factor structure explaining 57% of the variance in the data, while the EFA of the global property measures revealed a two-factor structure explaining 64% of the variance in the data. Regression analyses revealed each global property was predicted by at least one acoustic variable (R2 = 0.33-0.87). These findings were extended using deep neural network models where we examined correlations between human ratings of global properties and deep embeddings of two computational models: an object-based model and a scene-based model. The results support that participants' ratings are more strongly explained by a global analysis of the scene setting, though the relationship between scene perception and auditory perception is multifaceted, with differing correlation patterns evident between the two models. Taken together, our results provide evidence for the ability to perceive auditory scenes from a global perspective. Some of the acoustic measures predicted ratings of global scene perception, suggesting representations of auditory objects may be transformed through many stages of processing in the ventral auditory stream, similar to what has been proposed in the ventral visual stream. These findings and the open availability of our scene collection will make future studies on perception, attention, and memory for natural auditory scenes possible.
Affiliation(s)
- Rohit Kumar
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Nathan C. Higgins
- Department of Communication Sciences & Disorders, University of South Florida, Tampa, FL, USA
- Brian Gygi
- East Bay Institute for Research and Education, Martinez, CA, USA
- Mounya Elhilali
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Joel S. Snyder
- Department of Psychology, University of Nevada, Las Vegas, Las Vegas, NV, USA
23. Noda T, Aschauer DF, Chambers AR, Seiler JPH, Rumpel S. Representational maps in the brain: concepts, approaches, and applications. Front Cell Neurosci 2024; 18:1366200. PMID: 38584779. PMCID: PMC10995314. DOI: 10.3389/fncel.2024.1366200.
Abstract
Neural systems have evolved to process sensory stimuli in a way that allows for efficient and adaptive behavior in a complex environment. Recent technological advances enable us to investigate sensory processing in animal models by simultaneously recording the activity of large populations of neurons with single-cell resolution, yielding high-dimensional datasets. In this review, we discuss concepts and approaches for assessing the population-level representation of sensory stimuli in the form of a representational map. In such a map, not only are the identities of stimuli distinctly represented, but their relational similarity is also mapped onto the space of neuronal activity. We highlight example studies in which the structure of representational maps in the brain is estimated from recordings in humans as well as animals, and compare their methodological approaches. Finally, we integrate these aspects and provide an outlook for how the concept of representational maps could be applied to various fields in basic and clinical neuroscience.
Affiliation(s)
- Takahiro Noda
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University-Mainz, Mainz, Germany
| | - Dominik F. Aschauer
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University-Mainz, Mainz, Germany
| | - Anna R. Chambers
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA, United States
- Eaton Peabody Laboratories, Massachusetts Eye and Ear Infirmary, Boston, MA, United States
| | - Johannes P.-H. Seiler
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University-Mainz, Mainz, Germany
| | - Simon Rumpel
- Institute of Physiology, Focus Program Translational Neurosciences, University Medical Center, Johannes Gutenberg University-Mainz, Mainz, Germany
24
Sievers B, Thornton MA. Deep social neuroscience: the promise and peril of using artificial neural networks to study the social brain. Soc Cogn Affect Neurosci 2024; 19:nsae014. [PMID: 38334747] [PMCID: PMC10880882] [DOI: 10.1093/scan/nsae014]
Abstract
This review offers an accessible primer to social neuroscientists interested in neural networks. It begins by providing an overview of key concepts in deep learning. It then discusses three ways neural networks can be useful to social neuroscientists: (i) building statistical models to predict behavior from brain activity; (ii) quantifying naturalistic stimuli and social interactions; and (iii) generating cognitive models of social brain function. These applications have the potential to enhance the clinical value of neuroimaging and improve the generalizability of social neuroscience research. We also discuss the significant practical challenges, theoretical limitations and ethical issues faced by deep learning. If the field can successfully navigate these hazards, we believe that artificial neural networks may prove indispensable for the next stage of the field's development: deep social neuroscience.
Affiliation(s)
- Beau Sievers
- Department of Psychology, Stanford University, 420 Jane Stanford Way, Stanford, CA 94305, USA
- Department of Psychology, Harvard University, 33 Kirkland St., Cambridge, MA 02138, USA
- Mark A Thornton
- Department of Psychological and Brain Sciences, Dartmouth College, 6207 Moore Hall, Hanover, NH 03755, USA
25
Ratzon A, Derdikman D, Barak O. Representational drift as a result of implicit regularization. bioRxiv 2024:2023.05.04.539512 [Preprint]. [PMID: 38370656] [PMCID: PMC10871206] [DOI: 10.1101/2023.05.04.539512]
Abstract
Recent studies show that, even in constant environments, the tuning of single neurons changes over time in a variety of brain regions. This representational drift has been suggested to be a consequence of continuous learning under noise, but its properties are still not fully understood. To investigate the underlying mechanism, we trained an artificial network on a simplified navigational task. The network quickly reached a state of high performance, and many units exhibited spatial tuning. We then continued training the network and noticed that the activity became sparser with time. Initial learning was orders of magnitude faster than ensuing sparsification. This sparsification is consistent with recent results in machine learning, in which networks slowly move within their solution space until they reach a flat area of the loss function. We analyzed four datasets from different labs, all demonstrating that CA1 neurons become sparser and more spatially informative with exposure to the same environment. We conclude that learning is divided into three overlapping phases: (i) fast familiarity with the environment; (ii) slow implicit regularization; and (iii) a steady state of null drift. The variability in drift dynamics opens the possibility of inferring learning algorithms from observations of drift statistics.
Affiliation(s)
- Aviv Ratzon
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa 31096, Israel
- Network Biology Research Laboratory, Technion - Israel Institute of Technology, Haifa 32000, Israel
- Dori Derdikman
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa 31096, Israel
- Omri Barak
- Rappaport Faculty of Medicine, Technion - Israel Institute of Technology, Haifa 31096, Israel
- Network Biology Research Laboratory, Technion - Israel Institute of Technology, Haifa 32000, Israel
26
Dyballa L, Rudzite AM, Hoseini MS, Thapa M, Stryker MP, Field GD, Zucker SW. Population encoding of stimulus features along the visual hierarchy. Proc Natl Acad Sci U S A 2024; 121:e2317773121. [PMID: 38227668] [PMCID: PMC10823231] [DOI: 10.1073/pnas.2317773121]
Abstract
The retina and primary visual cortex (V1) both exhibit diverse neural populations sensitive to diverse visual features. Yet it remains unclear how neural populations in each area partition stimulus space to span these features. One possibility is that neural populations are organized into discrete groups of neurons, with each group signaling a particular constellation of features. Alternatively, neurons could be continuously distributed across feature-encoding space. To distinguish these possibilities, we presented a battery of visual stimuli to the mouse retina and V1 while measuring neural responses with multi-electrode arrays. Using machine learning approaches, we developed a manifold embedding technique that captures how neural populations partition feature space and how visual responses correlate with physiological and anatomical properties of individual neurons. We show that retinal populations discretely encode features, while V1 populations provide a more continuous representation. Applying the same analysis approach to convolutional neural networks that model visual processing, we demonstrate that they partition features much more similarly to the retina, indicating they are more like big retinas than little brains.
Affiliation(s)
- Luciano Dyballa
- Department of Computer Science, Yale University, New Haven, CT 06511
- Mahmood S. Hoseini
- Department of Physiology, University of California, San Francisco, CA 94143
- Mishek Thapa
- Department of Neurobiology, Duke University, Durham, NC 27708
- Department of Ophthalmology, David Geffen School of Medicine, Stein Eye Institute, University of California, Los Angeles, CA 90095
- Michael P. Stryker
- Department of Physiology, University of California, San Francisco, CA 94143
- Kavli Institute for Fundamental Neuroscience, University of California, San Francisco, CA 94143
- Greg D. Field
- Department of Neurobiology, Duke University, Durham, NC 27708
- Department of Ophthalmology, David Geffen School of Medicine, Stein Eye Institute, University of California, Los Angeles, CA 90095
- Steven W. Zucker
- Department of Computer Science, Yale University, New Haven, CT 06511
- Department of Biomedical Engineering, Yale University, New Haven, CT 06511
27
Thieu MK, Ayzenberg V, Lourenco SF, Kragel PA. Visual looming is a primitive for human emotion. bioRxiv 2024:2023.08.29.555380 [Preprint]. [PMID: 37693448] [PMCID: PMC10491236] [DOI: 10.1101/2023.08.29.555380]
Abstract
Looming objects afford threat of collision across the animal kingdom. Defensive responses to looming and neural computations for looming detection are strikingly conserved across species. In mammals, information about rapidly approaching threats is conveyed from the retina to the midbrain superior colliculus, where variables that indicate the position and velocity of approach are computed to enable defensive behavior. Although neuroscientific theories posit that midbrain representations contribute to emotion through connectivity with distributed brain systems, it remains unknown whether a computational system for looming detection can predict both defensive behavior and phenomenal experience in humans. Here, we show that a shallow convolutional neural network based on the Drosophila visual system predicts defensive blinking to looming objects in infants and superior colliculus responses to optical expansion in adults. Further, the responses of the convolutional network to a broad array of naturalistic video clips predict self-reported emotion largely on the basis of subjective arousal. Our findings illustrate how motor and experiential components of human emotion relate to species-general systems for survival in unpredictable environments.
28
Kim G, Kim DK, Jeong H. Spontaneous emergence of rudimentary music detectors in deep neural networks. Nat Commun 2024; 15:148. [PMID: 38168097] [PMCID: PMC10761941] [DOI: 10.1038/s41467-023-44516-0]
Abstract
Music exists in almost every society, has universal acoustic features, and is processed by distinct neural circuits in humans even with no experience of musical training. However, it remains unclear how these innate characteristics emerge and what functions they serve. Here, using an artificial deep neural network that models the auditory information processing of the brain, we show that units tuned to music can spontaneously emerge by learning natural sound detection, even without learning music. The music-selective units encoded the temporal structure of music in multiple timescales, following the population-level response characteristics observed in the brain. We found that the process of generalization is critical for the emergence of music-selectivity and that music-selectivity can work as a functional basis for the generalization of natural sound, thereby elucidating its origin. These findings suggest that evolutionary adaptation to process natural sounds can provide an initial blueprint for our sense of music.
Affiliation(s)
- Gwangsu Kim
- Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
- Dong-Kyum Kim
- Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
- Hawoong Jeong
- Department of Physics, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
- Center for Complex Systems, Korea Advanced Institute of Science and Technology, Daejeon, 34141, Korea
29
Wang C, Zhang T, Chen X, He S, Li S, Wu S. BrainPy, a flexible, integrative, efficient, and extensible framework for general-purpose brain dynamics programming. eLife 2023; 12:e86365. [PMID: 38132087] [PMCID: PMC10796146] [DOI: 10.7554/elife.86365]
Abstract
Elucidating the intricate neural mechanisms underlying brain functions requires integrative brain dynamics modeling. To facilitate this process, it is crucial to develop a general-purpose programming framework that allows users to freely define neural models across multiple scales, efficiently simulate, train, and analyze model dynamics, and conveniently incorporate new modeling approaches. In response to this need, we present BrainPy. BrainPy leverages the advanced just-in-time (JIT) compilation capabilities of JAX and XLA to provide a powerful infrastructure tailored for brain dynamics programming. It offers an integrated platform for building, simulating, training, and analyzing brain dynamics models. Models defined in BrainPy can be JIT compiled into binary instructions for various devices, including Central Processing Unit, Graphics Processing Unit, and Tensor Processing Unit, which ensures high running performance comparable to native C or CUDA. Additionally, BrainPy features an extensible architecture that allows for easy expansion of new infrastructure, utilities, and machine-learning approaches. This flexibility enables researchers to incorporate cutting-edge techniques and adapt the framework to their specific needs.
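The kind of simulation loop that frameworks like BrainPy accelerate can be sketched in plain Python. The following forward-Euler integration of a leaky integrate-and-fire neuron is a generic illustration, not BrainPy's API; BrainPy expresses equivalent update rules but JIT-compiles them via JAX/XLA for CPU, GPU, or TPU:

```python
# Hand-rolled Euler simulation of a leaky integrate-and-fire (LIF) neuron.
# This plain-Python loop is only a sketch of what brain dynamics frameworks
# compile and accelerate; all parameter values are illustrative.

def simulate_lif(i_ext, dt=0.1, tau=10.0, v_rest=0.0, v_th=1.0,
                 v_reset=0.0, steps=1000):
    v, spikes = v_rest, []
    for t in range(steps):
        # dv/dt = (-(v - v_rest) + i_ext) / tau, integrated with forward Euler
        v += dt * (-(v - v_rest) + i_ext) / tau
        if v >= v_th:              # threshold crossing: emit spike, reset
            spikes.append(t * dt)
            v = v_reset
    return spikes

spikes = simulate_lif(i_ext=2.0)   # constant suprathreshold drive -> tonic firing
quiet = simulate_lif(i_ext=0.5)    # subthreshold drive -> no spikes
```

JIT compilation matters precisely because such per-timestep loops are slow in interpreted Python; compiled to XLA they run at speeds comparable to native code, as the abstract notes.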
Affiliation(s)
- Chaoming Wang
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Guangdong Institute of Intelligence Science and Technology, Guangdong, China
- Tianqiu Zhang
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Xiaoyu Chen
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Sichao He
- Beijing Jiaotong University, Beijing, China
- Shangyang Li
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Si Wu
- School of Psychological and Cognitive Sciences, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Center of Quantitative Biology, Academy for Advanced Interdisciplinary Studies, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Guangdong Institute of Intelligence Science and Technology, Guangdong, China
30
Veit W, Browning H. Neural networks, AI, and the goals of modeling. Behav Brain Sci 2023; 46:e411. [PMID: 38054344] [DOI: 10.1017/s0140525x23001681]
Abstract
Deep neural networks (DNNs) have found many useful applications in recent years. Of particular interest have been instances where their successes imitate human cognition, and many consider artificial intelligence to offer a lens for understanding human intelligence. Here, we criticize the underlying conflation between the predictive and explanatory power of DNNs by examining the goals of modeling.
Affiliation(s)
- Walter Veit
- Department of Philosophy, University of Bristol, Bristol, UK (https://walterveit.com/)
- Heather Browning
- Department of Philosophy, University of Southampton, Southampton, UK (https://www.heatherbrowning.net/)
31
Tuckute G, Feather J, Boebinger D, McDermott JH. Many but not all deep neural network audio models capture brain responses and exhibit correspondence between model stages and brain regions. PLoS Biol 2023; 21:e3002366. [PMID: 38091351] [PMCID: PMC10718467] [DOI: 10.1371/journal.pbio.3002366]
Abstract
Models that predict brain responses to stimuli provide one measure of understanding of a sensory system and have many potential applications in science and engineering. Deep artificial neural networks have emerged as the leading such predictive models of the visual system but are less explored in audition. Prior work provided examples of audio-trained neural networks that produced good predictions of auditory cortical fMRI responses and exhibited correspondence between model stages and brain regions, but left it unclear whether these results generalize to other neural network models and, thus, how to further improve models in this domain. We evaluated model-brain correspondence for publicly available audio neural network models along with in-house models trained on 4 different tasks. Most tested models outpredicted standard spectrotemporal filter-bank models of auditory cortex and exhibited systematic model-brain correspondence: Middle stages best predicted primary auditory cortex, while deep stages best predicted non-primary cortex. However, some state-of-the-art models produced substantially worse brain predictions. Models trained to recognize speech in background noise produced better brain predictions than models trained to recognize speech in quiet, potentially because hearing in noise imposes constraints on biological auditory representations. The training task influenced the prediction quality for specific cortical tuning properties, with best overall predictions resulting from models trained on multiple tasks. The results generally support the promise of deep neural networks as models of audition, though they also indicate that current models do not explain auditory cortical responses in their entirety.
Affiliation(s)
- Greta Tuckute
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, MIT, Cambridge, Massachusetts, United States of America
- Center for Brains, Minds, and Machines, MIT, Cambridge, Massachusetts, United States of America
- Jenelle Feather
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, MIT, Cambridge, Massachusetts, United States of America
- Center for Brains, Minds, and Machines, MIT, Cambridge, Massachusetts, United States of America
- Dana Boebinger
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, MIT, Cambridge, Massachusetts, United States of America
- Center for Brains, Minds, and Machines, MIT, Cambridge, Massachusetts, United States of America
- Program in Speech and Hearing Biosciences and Technology, Harvard, Cambridge, Massachusetts, United States of America
- University of Rochester Medical Center, Rochester, New York, United States of America
- Josh H. McDermott
- Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, MIT, Cambridge, Massachusetts, United States of America
- Center for Brains, Minds, and Machines, MIT, Cambridge, Massachusetts, United States of America
- Program in Speech and Hearing Biosciences and Technology, Harvard, Cambridge, Massachusetts, United States of America
32
Dibot NM, Tieo S, Mendelson TC, Puech W, Renoult JP. Sparsity in an artificial neural network predicts beauty: Towards a model of processing-based aesthetics. PLoS Comput Biol 2023; 19:e1011703. [PMID: 38048323] [PMCID: PMC10721202] [DOI: 10.1371/journal.pcbi.1011703]
Abstract
Generations of scientists have pursued the goal of defining beauty. While early scientists initially focused on objective criteria of beauty ('feature-based aesthetics'), philosophers and artists alike have since proposed that beauty arises from the interaction between the object and the individual who perceives it. The aesthetic theory of fluency formalizes this idea of interaction by proposing that beauty is determined by the efficiency of information processing in the perceiver's brain ('processing-based aesthetics'), and that efficient processing induces a positive aesthetic experience. The theory is supported by numerous psychological results; however, to date there is no quantitative predictive model to test it on a large scale. In this work, we propose to leverage the capacity of deep convolutional neural networks (DCNN) to model the processing of information in the brain by studying the link between beauty and neuronal sparsity, a measure of information processing efficiency. Whether analyzing pictures of faces, figurative art paintings, or abstract art paintings, neuronal sparsity explains up to 28% of variance in beauty scores, and up to 47% when combined with a feature-based metric. However, we also found that sparsity is either positively or negatively correlated with beauty across the multiple layers of the DCNN. Our quantitative model stresses the importance of considering how information is processed, in addition to the content of that information, when predicting beauty, but also suggests an unexpectedly complex relationship between fluency and beauty.
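The sparsity measure here is computed over DCNN unit activations. One standard formalization, which may differ in detail from the authors' exact metric, is the Treves-Rolls/Vinje-Gallant sparseness, sketched below:

```python
# Treves-Rolls / Vinje-Gallant sparseness of an activation vector:
#   S = (1 - (mean(a))^2 / mean(a^2)) / (1 - 1/n)
# S -> 0 when activity is spread evenly across units;
# S -> 1 when a single unit carries all the activity.
# Illustrative sketch only; not the paper's exact code.

def sparseness(acts):
    n = len(acts)
    mean_a = sum(acts) / n
    mean_sq = sum(a * a for a in acts) / n
    if mean_sq == 0:          # all-zero activity: define sparseness as 0
        return 0.0
    return (1.0 - (mean_a ** 2) / mean_sq) / (1.0 - 1.0 / n)

dense = sparseness([1.0, 1.0, 1.0, 1.0])   # uniform activity
sparse = sparseness([4.0, 0.0, 0.0, 0.0])  # one active unit
```

In the study's framing, such a per-layer sparseness score would be the candidate predictor regressed against human beauty ratings.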
Affiliation(s)
- Nicolas M. Dibot
- CEFE, Univ. Montpellier, CNRS, EPHE, IRD, Montpellier, France
- LIRMM, Univ. Montpellier, CNRS, Montpellier, France
- Sonia Tieo
- CEFE, Univ. Montpellier, CNRS, EPHE, IRD, Montpellier, France
- Tamra C. Mendelson
- Department of Biological Sciences, University of Maryland, Baltimore County, Baltimore, Maryland, United States of America
33
Lee MJ, DiCarlo JJ. How well do rudimentary plasticity rules predict adult visual object learning? PLoS Comput Biol 2023; 19:e1011713. [PMID: 38079444] [PMCID: PMC10754461] [DOI: 10.1371/journal.pcbi.1011713]
Abstract
A core problem in visual object learning is using a finite number of images of a new object to accurately identify that object in future, novel images. One longstanding, conceptual hypothesis asserts that this core problem is solved by adult brains through two connected mechanisms: 1) the re-representation of incoming retinal images as points in a fixed, multidimensional neural space, and 2) the optimization of linear decision boundaries in that space, via simple plasticity rules applied to a single downstream layer. Though this scheme is biologically plausible, the extent to which it explains learning behavior in humans has been unclear: in part because of a historical lack of image-computable models of the putative neural space, and in part because of a lack of measurements of human learning behaviors in difficult, naturalistic settings. Here, we addressed these gaps by 1) drawing from contemporary, image-computable models of the primate ventral visual stream to create a large set of testable learning models (n = 2,408 models), and 2) using online psychophysics to measure human learning trajectories over a varied set of tasks involving novel 3D objects (n = 371,000 trials), which we then used to develop (and publicly release) empirical benchmarks for comparing learning models to humans. We evaluated each learning model on these benchmarks, and found those based on deep, high-level representations from neural networks were surprisingly aligned with human behavior. While no tested model explained the entirety of replicable human behavior, these results establish that rudimentary plasticity rules, when combined with appropriate visual representations, have high explanatory power in predicting human behavior with respect to this core object learning problem.
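The two-mechanism scheme described above, a fixed re-representation plus a simple downstream plasticity rule, can be illustrated with a perceptron-style error-driven update over fixed feature vectors. This is a toy sketch with hypothetical features, not one of the authors' 2,408 models:

```python
# Toy version of the conceptual scheme: images are re-represented as fixed
# feature vectors, and a single downstream layer learns a linear decision
# boundary with a simple error-driven (perceptron) plasticity rule.

def train_linear_readout(features, labels, lr=0.1, epochs=20):
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):       # labels y in {-1, +1}
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
            if pred != y:                        # update only on errors
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Two "objects" whose (hypothetical) high-level features are linearly separable.
X = [[1.0, 0.2], [0.9, 0.0], [0.1, 1.0], [0.0, 0.8]]
y = [1, 1, -1, -1]
w, b = train_linear_readout(X, y)
```

The paper's models differ mainly in where the fixed features come from (stages of deep ventral-stream models) and which plasticity rule updates the readout, but the division of labor is the same.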
Affiliation(s)
- Michael J. Lee
- Department of Brain and Cognitive Sciences, MIT, Cambridge, Massachusetts, United States of America
- Center for Brains, Minds and Machines, MIT, Cambridge, Massachusetts, United States of America
- James J. DiCarlo
- Department of Brain and Cognitive Sciences, MIT, Cambridge, Massachusetts, United States of America
- Center for Brains, Minds and Machines, MIT, Cambridge, Massachusetts, United States of America
- McGovern Institute for Brain Research, MIT, Cambridge, Massachusetts, United States of America
34
Moore JA, Wilms M, Gutierrez A, Ismail Z, Fakhar K, Hadaeghi F, Hilgetag CC, Forkert ND. Simulation of neuroplasticity in a CNN-based in-silico model of neurodegeneration of the visual system. Front Comput Neurosci 2023; 17:1274824. [PMID: 38105786] [PMCID: PMC10722164] [DOI: 10.3389/fncom.2023.1274824]
Abstract
The aim of this work was to enhance the biological feasibility of a deep convolutional neural network-based in-silico model of neurodegeneration of the visual system by equipping it with a mechanism to simulate neuroplasticity. Therefore, deep convolutional networks of multiple sizes were trained for object recognition tasks and progressively lesioned to simulate neurodegeneration of the visual cortex. More specifically, the injured parts of the network remained injured while we investigated how the added retraining steps were able to recover some of the model's object recognition baseline performance. The results showed that, with retraining, the model's object recognition abilities decline more smoothly and gradually with increasing injury levels than without retraining, and are therefore more similar to the longitudinal cognitive impairments of patients diagnosed with Alzheimer's disease (AD). Moreover, with retraining, the injured model exhibits internal activation patterns similar to those of the healthy baseline model when compared to the injured model without retraining. Furthermore, we conducted this analysis on a network that had been extensively pruned, resulting in an optimized number of parameters or synapses. Our findings show that this network exhibited a remarkably similar capability to recover task performance with decreasingly viable pathways through the network. In conclusion, adding a retraining step to the in-silico setup that simulates neuroplasticity improves the model's biological feasibility considerably and could prove valuable for testing different rehabilitation approaches in-silico.
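The lesion-and-retrain procedure can be miniaturized to a linear model: train, permanently zero ("lesion") some weights, then retrain only the surviving ones. A toy sketch (the study lesions deep convolutional networks, not a linear regressor):

```python
# Toy illustration of lesion-then-retrain: train a linear model, zero out
# ("lesion") some weights permanently, then retrain only surviving weights.
# Redundant features let the survivors partially compensate for the lesion.

def mse(w, X, y):
    return sum((sum(wi * xi for wi, xi in zip(w, x)) - t) ** 2
               for x, t in zip(X, y)) / len(X)

def train(w, X, y, mask, lr=0.1, steps=200):
    for _ in range(steps):
        grad = [0.0] * len(w)
        for x, t in zip(X, y):
            err = sum(wi * xi for wi, xi in zip(w, x)) - t
            for j, xj in enumerate(x):
                grad[j] += 2 * err * xj / len(X)
        # masked update: lesioned weights stay clamped at zero
        w = [wi - lr * g if m else 0.0 for wi, g, m in zip(w, grad, mask)]
    return w

X = [[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 1.0]]
y = [2.0, 2.0, 2.0, 3.0]                       # target mapping: w = [1, 1, 1]
healthy = train([0.0, 0.0, 0.0], X, y, mask=[1, 1, 1])
lesioned = [wi if m else 0.0 for wi, m in zip(healthy, [1, 1, 0])]  # cut w[2]
retrained = train(lesioned, X, y, mask=[1, 1, 0])                   # plasticity
```

As in the paper, the lesion stays in place; retraining the intact pathway recovers part, but not all, of baseline performance, giving the smoother decline curve.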
Affiliation(s)
- Jasmine A. Moore
- Department of Radiology, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Biomedical Engineering Program, University of Calgary, Calgary, AB, Canada
- Matthias Wilms
- Department of Radiology, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Alberta Children’s Hospital Research Institute, University of Calgary, Calgary, AB, Canada
- Alejandro Gutierrez
- Department of Radiology, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Biomedical Engineering Program, University of Calgary, Calgary, AB, Canada
- Zahinoor Ismail
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Department of Clinical Neurosciences, University of Calgary, Calgary, AB, Canada
- Kayson Fakhar
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
- Fatemeh Hadaeghi
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
- Claus C. Hilgetag
- Institute of Computational Neuroscience, University Medical Center Hamburg-Eppendorf (UKE), Hamburg, Germany
- Department of Health Sciences, Boston University, Boston, MA, United States
- Nils D. Forkert
- Department of Radiology, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Alberta Children’s Hospital Research Institute, University of Calgary, Calgary, AB, Canada
35
Karapetian A, Boyanova A, Pandaram M, Obermayer K, Kietzmann TC, Cichy RM. Empirically Identifying and Computationally Modeling the Brain-Behavior Relationship for Human Scene Categorization. J Cogn Neurosci 2023; 35:1879-1897. [PMID: 37590093] [PMCID: PMC10586810] [DOI: 10.1162/jocn_a_02043]
Abstract
Humans effortlessly make quick and accurate perceptual decisions about the nature of their immediate visual environment, such as the category of the scene they face. Previous research has revealed a rich set of cortical representations potentially underlying this feat. However, it remains unknown which of these representations are suitably formatted for decision-making. Here, we approached this question empirically and computationally, using neuroimaging and computational modeling. For the empirical part, we collected EEG data and RTs from human participants during a scene categorization task (natural vs. man-made). We then related the EEG data to behavior using a multivariate extension of signal detection theory. We observed a correlation between neural data and behavior specifically between ∼100 msec and ∼200 msec after stimulus onset, suggesting that the neural scene representations in this time period are suitably formatted for decision-making. For the computational part, we evaluated a recurrent convolutional neural network (RCNN) as a model of brain and behavior. Unifying our previous observations in an image-computable model, the RCNN accurately predicted the neural representations, the behavioral scene categorization data, and the relationship between them. Our results identify and computationally characterize the neural and behavioral correlates of scene categorization in humans.
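The logic of relating neural data to RTs can be sketched with a distance-to-bound decision variable: per-trial evidence is the signed distance of the neural pattern to a linear category boundary, and stronger evidence should predict faster responses. This is a schematic illustration with made-up numbers, not the authors' multivariate signal detection method:

```python
# Sketch of the distance-to-bound idea: per-trial neural "evidence" is the
# signed distance to a linear category boundary; trials with stronger
# evidence are expected to have faster RTs (negative evidence-RT correlation).
# Hyperplane, trials, and RTs below are all hypothetical.
from math import sqrt

def signed_distance(w, b, x):
    norm = sqrt(sum(wi * wi for wi in w))
    return (sum(wi * xi for wi, xi in zip(w, x)) + b) / norm

def pearson(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (c - mv) for a, c in zip(u, v))
    return cov / sqrt(sum((a - mu) ** 2 for a in u)
                      * sum((c - mv) ** 2 for c in v))

w, b = [1.0, -1.0], 0.0
trials = [[2.0, 0.1], [1.5, 0.4], [0.6, 0.3], [0.4, 0.3]]
rts = [350.0, 420.0, 520.0, 560.0]   # ms; slower when evidence is weaker
evidence = [abs(signed_distance(w, b, x)) for x in trials]
r = pearson(evidence, rts)
```

Repeating such a correlation at every EEG timepoint yields the kind of time-resolved brain-behavior curve from which the ∼100-200 msec window above is read off.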
Affiliation(s)
- Agnessa Karapetian
- Freie Universität Berlin, Germany
- Charité - Universitätsmedizin Berlin, Einstein Center for Neurosciences Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Germany
- Klaus Obermayer
- Charité - Universitätsmedizin Berlin, Einstein Center for Neurosciences Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Germany
- Technische Universität Berlin, Germany
- Humboldt-Universität zu Berlin, Germany
- Radoslaw M Cichy
- Freie Universität Berlin, Germany
- Charité - Universitätsmedizin Berlin, Einstein Center for Neurosciences Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Germany
- Humboldt-Universität zu Berlin, Germany
36
Palaniyappan L, Benrimoh D, Voppel A, Rocca R. Studying Psychosis Using Natural Language Generation: A Review of Emerging Opportunities. Biol Psychiatry Cogn Neurosci Neuroimaging 2023; 8:994-1004. [PMID: 38441079] [DOI: 10.1016/j.bpsc.2023.04.009]
Abstract
Disrupted language in psychotic disorders, such as schizophrenia, can manifest as false contents and formal deviations, often described as thought disorder. These features play a critical role in the social dysfunction associated with psychosis, but we continue to lack insights regarding how and why these symptoms develop. Natural language generation (NLG) is a field of computer science that focuses on generating human-like language for various applications. The theory that psychosis is related to the evolution of language in humans suggests that NLG systems that are sufficiently evolved to generate human-like language may also exhibit psychosis-like features. In this conceptual review, we propose using NLG systems that are at various stages of development as in silico tools to study linguistic features of psychosis. We argue that a program of in silico experimental research on the network architecture, function, learning rules, and training of NLG systems can help us understand better why thought disorder occurs in patients. This will allow us to gain a better understanding of the relationship between language and psychosis and potentially pave the way for new therapeutic approaches to address this vexing challenge.
Affiliation(s)
- Lena Palaniyappan
  - Douglas Mental Health University Institute, Department of Psychiatry, McGill University, Montreal, Quebec, Canada
  - Robarts Research Institute, Western University, London, Ontario, Canada
  - Department of Medical Biophysics, Western University, London, Ontario, Canada
- David Benrimoh
  - Douglas Mental Health University Institute, Department of Psychiatry, McGill University, Montreal, Quebec, Canada
  - Department of Psychiatry, Stanford University, Palo Alto, California
- Alban Voppel
  - Douglas Mental Health University Institute, Department of Psychiatry, McGill University, Montreal, Quebec, Canada
  - Department of Psychiatry, University of Groningen, Groningen, the Netherlands
- Roberta Rocca
  - Interacting Minds Centre, Department of Culture, Cognition and Computation, Aarhus University, Aarhus, Denmark

37
Lin C, Bulls LS, Tepfer LJ, Vyas AD, Thornton MA. Advancing Naturalistic Affective Science with Deep Learning. AFFECTIVE SCIENCE 2023; 4:550-562. [PMID: 37744976 PMCID: PMC10514024 DOI: 10.1007/s42761-023-00215-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/11/2023] [Accepted: 08/03/2023] [Indexed: 09/26/2023]
Abstract
People express their own emotions and perceive others' emotions via a variety of channels, including facial movements, body gestures, vocal prosody, and language. Studying these channels of affective behavior offers insight into both the experience and perception of emotion. Prior research has predominantly focused on studying individual channels of affective behavior in isolation using tightly controlled, non-naturalistic experiments. This approach limits our understanding of emotion in more naturalistic contexts where different channels of information tend to interact. Traditional methods struggle to address this limitation: manually annotating behavior is time-consuming, making it infeasible to do at large scale; manually selecting and manipulating stimuli based on hypotheses may neglect unanticipated features, potentially generating biased conclusions; and common linear modeling approaches cannot fully capture the complex, nonlinear, and interactive nature of real-life affective processes. In this methodology review, we describe how deep learning can be applied to address these challenges to advance a more naturalistic affective science. First, we describe current practices in affective research and explain why existing methods face challenges in revealing a more naturalistic understanding of emotion. Second, we introduce deep learning approaches and explain how they can be applied to tackle three main challenges: quantifying naturalistic behaviors, selecting and manipulating naturalistic stimuli, and modeling naturalistic affective processes. Finally, we describe the limitations of these deep learning methods, and how these limitations might be avoided or mitigated. By detailing the promise and the peril of deep learning, this review aims to pave the way for a more naturalistic affective science.
Affiliation(s)
- Chujun Lin
  - Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Landry S. Bulls
  - Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Lindsey J. Tepfer
  - Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Amisha D. Vyas
  - Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Mark A. Thornton
  - Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA

38
Summerfield C, Miller K. Computational and systems neuroscience: The next 20 years. PLoS Biol 2023; 21:e3002306. [PMID: 37751414 PMCID: PMC10522016 DOI: 10.1371/journal.pbio.3002306] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/28/2023] Open
Abstract
Over the past 20 years, neuroscience has been propelled forward by theory-driven experimentation. We consider the future outlook for the field in the age of big neural data and powerful artificial intelligence models.
Affiliation(s)
- Christopher Summerfield
  - Google DeepMind, London, United Kingdom
  - Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Kevin Miller
  - Google DeepMind, London, United Kingdom
  - Department of Ophthalmology, University College London, London, United Kingdom

39
Yang Z, Yao S, Heng Y, Shen P, Lv T, Feng S, Tao L, Zhang W, Qiu W, Lu H, Cai W. Automated diagnosis and management of follicular thyroid nodules based on the devised small-dataset interpretable foreground optimization network deep learning: a multicenter diagnostic study. Int J Surg 2023; 109:2732-2741. [PMID: 37204464 PMCID: PMC10498847 DOI: 10.1097/js9.0000000000000506] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2023] [Accepted: 05/10/2023] [Indexed: 05/20/2023]
Abstract
BACKGROUND: Currently, follicular thyroid carcinoma (FTC) has a relatively low incidence and lacks effective preoperative diagnostic means. To reduce the need for invasive diagnostic procedures and to address information deficiencies inherent in a small dataset, we utilized interpretable foreground optimization network deep learning to develop a reliable preoperative FTC detection system.
METHODS: In this study, a deep learning model (FThyNet) was established using preoperative ultrasound images. Data on patients in the training and internal validation cohort (n = 432) were obtained from Ruijin Hospital, China. Data on patients in the external validation cohort (n = 71) were obtained from four other clinical centers. We evaluated the predictive performance of FThyNet and its ability to generalize across multiple external centers, and compared the results with assessments from physicians directly predicting FTC outcomes. In addition, the influence of texture information around the nodule edge on the prediction results was evaluated.
RESULTS: FThyNet had a consistently high accuracy in predicting FTC, with an area under the receiver operating characteristic curve (AUC) of 89.0% [95% CI 87.0-90.9]. In particular, the AUC for grossly invasive FTC reached 90.3%, which was significantly higher than that of the radiologists (56.1% [95% CI 51.8-60.3]). The parametric visualization study found that nodules with blurred edges and relatively distorted surrounding textures were more likely to be FTC. Furthermore, edge texture information played an important role in FTC prediction, with an AUC of 68.3% [95% CI 61.5-75.5], and highly invasive malignancies had the highest texture complexity.
CONCLUSION: FThyNet could effectively predict FTC, provide explanations consistent with pathological knowledge, and improve clinical understanding of the disease.
Affiliation(s)
- Zheyu Yang
  - Department of General Surgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine
- Siqiong Yao
  - School of Life Sciences and Biotechnology, Shanghai Jiao Tong University
- Yu Heng
  - Department of Otolaryngology, Eye, Ear, Nose and Throat Hospital, Fudan University
- Pengcheng Shen
  - School of Life Sciences and Biotechnology, Shanghai Jiao Tong University
- Tian Lv
  - Department of Head, Neck and Thyroid Surgery, Zhejiang Provincial People’s Hospital, People’s Hospital of Hangzhou Medical College, Hangzhou, People’s Republic of China
- Siqi Feng
  - Department of General Surgery, Liaoning Cancer Hospital & Institute, Shenyang
- Lei Tao
  - Department of Otolaryngology, Eye, Ear, Nose and Throat Hospital, Fudan University
- Weituo Zhang
  - Shanghai Tong Ren Hospital and Clinical Research Institute
  - Hong Qiao International Institute of Medicine, Shanghai
- Weihua Qiu
  - Department of General Surgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine
  - Department of General Surgery, Ruijin Hospital Gubei Campus, Shanghai Jiao Tong University School of Medicine
- Hui Lu
  - School of Life Sciences and Biotechnology, Shanghai Jiao Tong University
- Wei Cai
  - Department of General Surgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine

40
Grogans SE, Bliss-Moreau E, Buss KA, Clark LA, Fox AS, Keltner D, Cowen AS, Kim JJ, Kragel PA, MacLeod C, Mobbs D, Naragon-Gainey K, Fullana MA, Shackman AJ. The nature and neurobiology of fear and anxiety: State of the science and opportunities for accelerating discovery. Neurosci Biobehav Rev 2023; 151:105237. [PMID: 37209932 PMCID: PMC10330657 DOI: 10.1016/j.neubiorev.2023.105237] [Citation(s) in RCA: 14] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Revised: 05/11/2023] [Accepted: 05/13/2023] [Indexed: 05/22/2023]
Abstract
Fear and anxiety play a central role in mammalian life, and there is considerable interest in clarifying their nature, identifying their biological underpinnings, and determining their consequences for health and disease. Here we provide a roundtable discussion on the nature and biological bases of fear- and anxiety-related states, traits, and disorders. The discussants include scientists familiar with a wide variety of populations and a broad spectrum of techniques. The goal of the roundtable was to take stock of the state of the science and provide a roadmap to the next generation of fear and anxiety research. Much of the discussion centered on the key challenges facing the field, the most fruitful avenues for future research, and emerging opportunities for accelerating discovery, with implications for scientists, funders, and other stakeholders. Understanding fear and anxiety is a matter of practical importance. Anxiety disorders are a leading burden on public health and existing treatments are far from curative, underscoring the urgency of developing a deeper understanding of the factors governing threat-related emotions.
Affiliation(s)
- Shannon E Grogans
  - Department of Psychology, University of Maryland, College Park, MD 20742, USA
- Eliza Bliss-Moreau
  - Department of Psychology, University of California, Davis, CA 95616, USA
  - California National Primate Research Center, University of California, Davis, CA 95616, USA
- Kristin A Buss
  - Department of Psychology, The Pennsylvania State University, University Park, PA 16802, USA
- Lee Anna Clark
  - Department of Psychology, University of Notre Dame, Notre Dame, IN 46556, USA
- Andrew S Fox
  - Department of Psychology, University of California, Davis, CA 95616, USA
  - California National Primate Research Center, University of California, Davis, CA 95616, USA
- Dacher Keltner
  - Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Jeansok J Kim
  - Department of Psychology, University of Washington, Seattle, WA 98195, USA
- Philip A Kragel
  - Department of Psychology, Emory University, Atlanta, GA 30322, USA
- Colin MacLeod
  - Centre for the Advancement of Research on Emotion, School of Psychological Science, The University of Western Australia, Perth, WA 6009, Australia
- Dean Mobbs
  - Department of Humanities and Social Sciences, California Institute of Technology, Pasadena, CA 91125, USA
  - Computation and Neural Systems Program, California Institute of Technology, Pasadena, CA 91125, USA
- Kristin Naragon-Gainey
  - School of Psychological Science, University of Western Australia, Perth, WA 6009, Australia
- Miquel A Fullana
  - Adult Psychiatry and Psychology Department, Institute of Neurosciences, Hospital Clinic, Barcelona, Spain
  - Imaging of Mood- and Anxiety-Related Disorders Group, Institut d'Investigacions Biomèdiques August Pi i Sunyer, CIBERSAM, University of Barcelona, Barcelona, Spain
- Alexander J Shackman
  - Department of Psychology, University of Maryland, College Park, MD 20742, USA
  - Neuroscience and Cognitive Science Program, University of Maryland, College Park, MD 20742, USA
  - Maryland Neuroimaging Center, University of Maryland, College Park, MD 20742, USA

41
Noel JP, Balzani E, Savin C, Angelaki DE. Context-invariant beliefs are supported by dynamic reconfiguration of single unit functional connectivity in prefrontal cortex. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.07.30.551169. [PMID: 37577498 PMCID: PMC10418097 DOI: 10.1101/2023.07.30.551169] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/15/2023]
Abstract
Natural behaviors occur in closed action-perception loops and are supported by dynamic and flexible beliefs abstracted away from our immediate sensory milieu. How this real-world flexibility is instantiated in neural circuits remains unknown. Here we have macaques navigate in a virtual environment by primarily leveraging sensory (optic flow) signals, or by more heavily relying on acquired internal models. We record single-unit spiking activity simultaneously from the dorsomedial superior temporal area (MSTd), parietal area 7a, and the dorso-lateral prefrontal cortex (dlPFC). Results show that while animals were able to maintain adaptive task-relevant beliefs regardless of sensory context, the fine-grained statistical dependencies between neurons, particularly in 7a and dlPFC, dynamically remapped with the changing computational demands. In dlPFC, but not 7a, destroying these statistical dependencies abolished the area's ability for cross-context decoding. Lastly, correlation analyses suggested that the more unit-to-unit couplings remapped in dlPFC, and the less they did so in MSTd, the less population codes and behavior were impacted by the loss of sensory evidence. We conclude that dynamic functional connectivity between prefrontal cortex neurons maintains a stable population code and context-invariant beliefs during naturalistic behavior with closed action-perception loops.
Affiliation(s)
- Jean-Paul Noel
  - Center for Neural Science, New York University, New York City, NY, USA
- Edoardo Balzani
  - Center for Neural Science, New York University, New York City, NY, USA
- Cristina Savin
  - Center for Neural Science, New York University, New York City, NY, USA
- Dora E. Angelaki
  - Center for Neural Science, New York University, New York City, NY, USA

42
Ferro M, Falagario UG, Barone B, Maggi M, Crocetto F, Busetto GM, Giudice FD, Terracciano D, Lucarelli G, Lasorsa F, Catellani M, Brescia A, Mistretta FA, Luzzago S, Piccinelli ML, Vartolomei MD, Jereczek-Fossa BA, Musi G, Montanari E, Cobelli OD, Tataru OS. Artificial Intelligence in the Advanced Diagnosis of Bladder Cancer-Comprehensive Literature Review and Future Advancement. Diagnostics (Basel) 2023; 13:2308. [PMID: 37443700 DOI: 10.3390/diagnostics13132308] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2023] [Revised: 07/03/2023] [Accepted: 07/05/2023] [Indexed: 07/15/2023] Open
Abstract
Artificial intelligence is highly regarded as the most promising future technology that will have a great impact on healthcare across all specialties. Its subsets, machine learning, deep learning, and artificial neural networks, are able to automatically learn from massive amounts of data and can improve the prediction algorithms to enhance their performance. This area is still under development, but the latest evidence shows great potential in the diagnosis, prognosis, and treatment of urological diseases, including bladder cancer, which are currently managed using old prediction tools and historical nomograms. This review focuses on significant and comprehensive literature evidence of artificial intelligence in the management of bladder cancer and examines its near-term introduction into clinical practice.
Affiliation(s)
- Matteo Ferro
  - Department of Urology, IEO-European Institute of Oncology, IRCCS-Istituto di Ricovero e Cura a Carattere Scientifico, 20141 Milan, Italy
- Ugo Giovanni Falagario
  - Department of Urology and Organ Transplantation, University of Foggia, 71121 Foggia, Italy
- Biagio Barone
  - Urology Unit, Department of Surgical Sciences, AORN Sant'Anna e San Sebastiano, 81100 Caserta, Italy
- Martina Maggi
  - Department of Maternal Infant and Urologic Sciences, Policlinico Umberto I Hospital, Sapienza University of Rome, 00161 Rome, Italy
- Felice Crocetto
  - Department of Neurosciences and Reproductive Sciences and Odontostomatology, University of Naples Federico II, 80131 Naples, Italy
- Gian Maria Busetto
  - Department of Urology and Organ Transplantation, University of Foggia, 71121 Foggia, Italy
- Francesco Del Giudice
  - Department of Maternal Infant and Urologic Sciences, Policlinico Umberto I Hospital, Sapienza University of Rome, 00161 Rome, Italy
- Daniela Terracciano
  - Department of Translational Medical Sciences, University of Naples "Federico II", 80131 Naples, Italy
- Giuseppe Lucarelli
  - Urology, Andrology and Kidney Transplantation Unit, Department of Emergency and Organ Transplantation, University of Bari, 70124 Bari, Italy
- Francesco Lasorsa
  - Urology, Andrology and Kidney Transplantation Unit, Department of Emergency and Organ Transplantation, University of Bari, 70124 Bari, Italy
- Michele Catellani
  - Department of Urology, ASST Papa Giovanni XXIII, 24127 Bergamo, Italy
- Antonio Brescia
  - Department of Urology, IEO-European Institute of Oncology, IRCCS-Istituto di Ricovero e Cura a Carattere Scientifico, 20141 Milan, Italy
- Francesco Alessandro Mistretta
  - Department of Urology, IEO-European Institute of Oncology, IRCCS-Istituto di Ricovero e Cura a Carattere Scientifico, 20141 Milan, Italy
  - Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Stefano Luzzago
  - Department of Urology, IEO-European Institute of Oncology, IRCCS-Istituto di Ricovero e Cura a Carattere Scientifico, 20141 Milan, Italy
  - Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Mattia Luca Piccinelli
  - Department of Urology, IEO-European Institute of Oncology, IRCCS-Istituto di Ricovero e Cura a Carattere Scientifico, 20141 Milan, Italy
- Barbara Alicja Jereczek-Fossa
  - Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
  - Division of Radiation Oncology, IEO-European Institute of Oncology IRCCS, 20141 Milan, Italy
- Gennaro Musi
  - Department of Urology, IEO-European Institute of Oncology, IRCCS-Istituto di Ricovero e Cura a Carattere Scientifico, 20141 Milan, Italy
  - Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Emanuele Montanari
  - Department of Urology, Foundation IRCCS Ca' Granda-Ospedale Maggiore Policlinico, 20122 Milan, Italy
  - Department of Clinical Sciences and Community Health, University of Milan, 20122 Milan, Italy
- Ottavio de Cobelli
  - Department of Urology, IEO-European Institute of Oncology, IRCCS-Istituto di Ricovero e Cura a Carattere Scientifico, 20141 Milan, Italy
  - Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Octavian Sabin Tataru
  - Department of Simulation Applied in Medicine, George Emil Palade University of Medicine, Pharmacy, Science and Technology of Târgu Mures, 540142 Târgu Mures, Romania

43
Celeghin A, Borriero A, Orsenigo D, Diano M, Méndez Guerrero CA, Perotti A, Petri G, Tamietto M. Convolutional neural networks for vision neuroscience: significance, developments, and outstanding issues. Front Comput Neurosci 2023; 17:1153572. [PMID: 37485400 PMCID: PMC10359983 DOI: 10.3389/fncom.2023.1153572] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2023] [Accepted: 06/19/2023] [Indexed: 07/25/2023] Open
Abstract
Convolutional Neural Networks (CNN) are a class of machine learning models predominately used in computer vision tasks and can achieve human-like performance through learning from experience. Their striking similarities to the structural and functional principles of the primate visual system allow for comparisons between these artificial networks and their biological counterparts, enabling exploration of how visual functions and neural representations may emerge in the real brain from a limited set of computational principles. After considering the basic features of CNNs, we discuss the opportunities and challenges of endorsing CNNs as in silico models of the primate visual system. Specifically, we highlight several emerging notions about the anatomical and physiological properties of the visual system that still need to be systematically integrated into current CNN models. These tenets include the implementation of parallel processing pathways from the early stages of retinal input and the reconsideration of several assumptions concerning the serial progression of information flow. We suggest design choices and architectural constraints that could facilitate a closer alignment with biology and provide causal evidence of the predictive link between the artificial and biological visual systems. Adopting this principled perspective could potentially lead to new research questions and applications of CNNs beyond modeling object recognition.
Affiliation(s)
- Davide Orsenigo
  - Department of Psychology, University of Torino, Turin, Italy
- Matteo Diano
  - Department of Psychology, University of Torino, Turin, Italy
- Marco Tamietto
  - Department of Psychology, University of Torino, Turin, Italy
  - Department of Medical and Clinical Psychology, and CoRPS - Center of Research on Psychology in Somatic Diseases, Tilburg University, Tilburg, Netherlands

44
Kumar C, Rahimi N, Gonjari R, McLinden J, Hosni SI, Shahriari Y, Shao M. Context-aware Multimodal Auditory BCI Classification through Graph Neural Networks. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2023; 2023:1-4. [PMID: 38083118 DOI: 10.1109/embc40787.2023.10339984] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2023]
Abstract
The potential of combining electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) with topological information about participants is left unexplored in most brain-computer interface (BCI) systems. Additionally, the joint use of these modalities in multimodal analysis, leveraging multiple brain signals to improve BCI performance, has not been fully examined. This study first presents a multimodal data fusion framework to exploit and decode the complementary synergistic properties in multimodal neural signals. Moreover, the relations among different subjects and their observations also play critical roles in classifying unknown subjects. We developed a context-aware graph neural network (GNN) model utilizing the pairwise relationships among participants to investigate performance on an auditory task classification. We explored standard and deviant auditory EEG and fNIRS data in which each subject performed an auditory oddball task and contributed multiple trials, treated as context-aware nodes in our graph construction. In experiments, our multimodal data fusion strategy showed an improvement of up to 8.40% via SVM and 2.02% via GNN, compared to single-modal EEG or fNIRS. In addition, our context-aware GNN achieved 5.30%, 4.07%, and 4.53% higher accuracy for the EEG, fNIRS, and multimodal data based experiments, respectively, compared to the baseline models.
45
Kay K, Bonnen K, Denison RN, Arcaro MJ, Barack DL. Tasks and their role in visual neuroscience. Neuron 2023; 111:1697-1713. [PMID: 37040765 DOI: 10.1016/j.neuron.2023.03.022] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Revised: 03/13/2023] [Accepted: 03/15/2023] [Indexed: 04/13/2023]
Abstract
Vision is widely used as a model system to gain insights into how sensory inputs are processed and interpreted by the brain. Historically, careful quantification and control of visual stimuli have served as the backbone of visual neuroscience. There has been less emphasis, however, on how an observer's task influences the processing of sensory inputs. Motivated by diverse observations of task-dependent activity in the visual system, we propose a framework for thinking about tasks, their role in sensory processing, and how we might formally incorporate tasks into our models of vision.
Affiliation(s)
- Kendrick Kay
  - Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, MN 55455, USA
- Kathryn Bonnen
  - School of Optometry, Indiana University, Bloomington, IN 47405, USA
- Rachel N Denison
  - Department of Psychological and Brain Sciences, Boston University, Boston, MA 02215, USA
- Mike J Arcaro
  - Department of Psychology, University of Pennsylvania, Philadelphia, PA 19146, USA
- David L Barack
  - Departments of Neuroscience and Philosophy, University of Pennsylvania, Philadelphia, PA 19146, USA

46
Sandbrink KJ, Mamidanna P, Michaelis C, Bethge M, Mathis MW, Mathis A. Contrasting action and posture coding with hierarchical deep neural network models of proprioception. eLife 2023; 12:e81499. [PMID: 37254843 PMCID: PMC10361732 DOI: 10.7554/elife.81499] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2022] [Accepted: 05/16/2023] [Indexed: 06/01/2023] Open
Abstract
Biological motor control is versatile, efficient, and depends on proprioceptive feedback. Muscles are flexible and undergo continuous changes, requiring distributed adaptive control mechanisms that continuously account for the body's state. The canonical role of proprioception is representing the body state. We hypothesize that the proprioceptive system could also be critical for high-level tasks such as action recognition. To test this theory, we pursued a task-driven modeling approach, which allowed us to isolate the study of proprioception. We generated a large synthetic dataset of human arm trajectories tracing characters of the Latin alphabet in 3D space, together with muscle activities obtained from a musculoskeletal model and model-based muscle spindle activity. Next, we compared two classes of tasks: trajectory decoding and action recognition, which allowed us to train hierarchical models to decode either the position and velocity of the end-effector of one's posture or the character (action) identity from the spindle firing patterns. We found that artificial neural networks could robustly solve both tasks, and the networks' units show tuning properties similar to neurons in the primate somatosensory cortex and the brainstem. Remarkably, we found uniformly distributed directional selective units only with the action-recognition-trained models and not the trajectory-decoding-trained models. This suggests that proprioceptive encoding is additionally associated with higher-level functions such as action recognition and therefore provides new, experimentally testable hypotheses of how proprioception aids in adaptive motor control.
Affiliation(s)
- Kai J Sandbrink
  - The Rowland Institute at Harvard, Harvard University, Cambridge, United States
- Pranav Mamidanna
  - Tübingen AI Center, Eberhard Karls Universität Tübingen & Institute for Theoretical Physics, Tübingen, Germany
- Claudio Michaelis
  - Tübingen AI Center, Eberhard Karls Universität Tübingen & Institute for Theoretical Physics, Tübingen, Germany
- Matthias Bethge
  - Tübingen AI Center, Eberhard Karls Universität Tübingen & Institute for Theoretical Physics, Tübingen, Germany
- Mackenzie Weygandt Mathis
  - The Rowland Institute at Harvard, Harvard University, Cambridge, United States
  - Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Genève, Switzerland
- Alexander Mathis
  - The Rowland Institute at Harvard, Harvard University, Cambridge, United States
  - Brain Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Genève, Switzerland

47
Doerig A, Sommers RP, Seeliger K, Richards B, Ismael J, Lindsay GW, Kording KP, Konkle T, van Gerven MAJ, Kriegeskorte N, Kietzmann TC. The neuroconnectionist research programme. Nat Rev Neurosci 2023:10.1038/s41583-023-00705-w. [PMID: 37253949 DOI: 10.1038/s41583-023-00705-w] [Citation(s) in RCA: 20] [Impact Index Per Article: 20.0] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/21/2023] [Indexed: 06/01/2023]
Abstract
Artificial neural networks (ANNs) inspired by biology are beginning to be widely used to model behavioural and neural data, an approach we call 'neuroconnectionism'. ANNs have been not only lauded as the current best models of information processing in the brain but also criticized for failing to account for basic cognitive functions. In this Perspective article, we propose that arguing about the successes and failures of a restricted set of current ANNs is the wrong approach to assess the promise of neuroconnectionism for brain science. Instead, we take inspiration from the philosophy of science, and in particular from Lakatos, who showed that the core of a scientific research programme is often not directly falsifiable but should be assessed by its capacity to generate novel insights. Following this view, we present neuroconnectionism as a general research programme centred around ANNs as a computational language for expressing falsifiable theories about brain computation. We describe the core of the programme, the underlying computational framework and its tools for testing specific neuroscientific hypotheses and deriving novel understanding. Taking a longitudinal view, we review past and present neuroconnectionist projects and their responses to challenges and argue that the research programme is highly progressive, generating new and otherwise unreachable insights into the workings of the brain.
Affiliation(s)
- Adrien Doerig
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany.
- Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands.
- Rowan P Sommers
- Department of Neurobiology of Language, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Katja Seeliger
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Blake Richards
- Department of Neurology and Neurosurgery, McGill University, Montréal, QC, Canada
- School of Computer Science, McGill University, Montréal, QC, Canada
- Mila, Montréal, QC, Canada
- Montréal Neurological Institute, Montréal, QC, Canada
- Learning in Machines and Brains Program, CIFAR, Toronto, ON, Canada
- Konrad P Kording
- Learning in Machines and Brains Program, CIFAR, Toronto, ON, Canada
- Bioengineering, Neuroscience, University of Pennsylvania, Pennsylvania, PA, USA
- Tim C Kietzmann
- Institute of Cognitive Science, University of Osnabrück, Osnabrück, Germany
48
Nelli S, Braun L, Dumbalska T, Saxe A, Summerfield C. Neural knowledge assembly in humans and neural networks. Neuron 2023; 111:1504-1516.e9. [PMID: 36898375 PMCID: PMC10618408 DOI: 10.1016/j.neuron.2023.02.014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2022] [Revised: 12/21/2022] [Accepted: 02/09/2023] [Indexed: 03/11/2023]
Abstract
Human understanding of the world can change rapidly when new information comes to light, such as when a plot twist occurs in a work of fiction. This flexible "knowledge assembly" requires few-shot reorganization of neural codes for relations among objects and events. However, existing computational theories are largely silent about how this could occur. Here, participants learned a transitive ordering among novel objects within two distinct contexts before exposure to new knowledge that revealed how they were linked. Blood-oxygen-level-dependent (BOLD) signals in dorsal frontoparietal cortical areas revealed that objects were rapidly and dramatically rearranged on the neural manifold after minimal exposure to linking information. We then adapt online stochastic gradient descent to permit similar rapid knowledge assembly in a neural network model.
Affiliation(s)
- Stephanie Nelli
- Department of Cognitive Science, Occidental College, Los Angeles, CA 90041, USA; Department of Experimental Psychology, University of Oxford, Oxford OX2 6GC, UK.
- Lukas Braun
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GC, UK
- Andrew Saxe
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GC, UK; Gatsby Unit & Sainsbury Wellcome Centre, University College London, London W1T 4JG, UK; CIFAR Azrieli Global Scholars Program, CIFAR, Toronto, ON M5G 1M1, Canada
49
Qiu Y, Klindt DA, Szatko KP, Gonschorek D, Hoefling L, Schubert T, Busse L, Bethge M, Euler T. Efficient coding of natural scenes improves neural system identification. PLoS Comput Biol 2023; 19:e1011037. [PMID: 37093861 PMCID: PMC10159360 DOI: 10.1371/journal.pcbi.1011037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2022] [Revised: 05/04/2023] [Accepted: 03/20/2023] [Indexed: 04/25/2023] Open
Abstract
Neural system identification aims at learning the response function of neurons to arbitrary stimuli using experimentally recorded data, but typically does not leverage normative principles such as efficient coding of natural environments. Visual systems, however, have evolved to efficiently process input from the natural environment. Here, we present a normative network regularization for system identification models by incorporating, as a regularizer, the efficient coding hypothesis, which states that neural response properties of sensory representations are strongly shaped by the need to preserve most of the stimulus information with limited resources. Using this approach, we explored whether a system identification model can be improved by sharing its convolutional filters with those of an autoencoder which aims to efficiently encode natural stimuli. To this end, we built a hybrid model to predict the responses of retinal neurons to noise stimuli. This approach not only yielded higher performance than the "stand-alone" system identification model but also produced more biologically plausible filters, meaning that they more closely resembled neural representation in early visual systems. We found that these results held for retinal responses to different artificial stimuli and across model architectures. Moreover, our normatively regularized model performed particularly well in predicting responses of direction-of-motion sensitive retinal neurons. The benefit of natural scene statistics became marginal, however, for predicting the responses to natural movies. In summary, our results indicate that efficiently encoding environmental inputs can improve system identification models, at least for noise stimuli, and point to the benefit of probing the visual system with naturalistic stimuli.
Affiliation(s)
- Yongrong Qiu
- Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, U Tübingen, Tübingen, Germany
- David A Klindt
- Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Department of Mathematical Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Klaudia P Szatko
- Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Graduate Training Centre of Neuroscience (GTC), International Max Planck Research School, U Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Dominic Gonschorek
- Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Research Training Group 2381, U Tübingen, Tübingen, Germany
- Larissa Hoefling
- Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Timm Schubert
- Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Laura Busse
- Division of Neurobiology, Faculty of Biology, LMU Munich, Planegg-Martinsried, Germany
- Bernstein Center for Computational Neuroscience, Planegg-Martinsried, Germany
- Matthias Bethge
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Institute for Theoretical Physics, U Tübingen, Tübingen, Germany
- Thomas Euler
- Institute for Ophthalmic Research, U Tübingen, Tübingen, Germany
- Centre for Integrative Neuroscience (CIN), U Tübingen, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
50
Alfalahi H, Dias SB, Khandoker AH, Chaudhuri KR, Hadjileontiadis LJ. A scoping review of neurodegenerative manifestations in explainable digital phenotyping. NPJ Parkinsons Dis 2023; 9:49. [PMID: 36997573 PMCID: PMC10063633 DOI: 10.1038/s41531-023-00494-0] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2022] [Accepted: 03/16/2023] [Indexed: 04/03/2023] Open
Abstract
Neurologists nowadays no longer view neurodegenerative diseases, like Parkinson's and Alzheimer's disease, as single entities, but rather as a spectrum of multifaceted symptoms with heterogeneous progression courses and treatment responses. The definition of the naturalistic behavioral repertoire of early neurodegenerative manifestations is still elusive, impeding early diagnosis and intervention. Central to this view is the role of artificial intelligence (AI) in reinforcing the depth of phenotypic information, thereby supporting the paradigm shift to precision medicine and personalized healthcare. This suggestion advocates the definition of disease subtypes in a new biomarker-supported nosology framework, yet without empirical consensus on standardization, reliability and interpretability. Although the well-defined neurodegenerative processes, linked to a triad of motor and non-motor preclinical symptoms, are detected by clinical intuition, we undertake an unbiased data-driven approach to identify different patterns of neuropathology distribution based on the naturalistic behavior data inherent to populations in-the-wild. We appraise the role of remote technologies in the definition of digital phenotyping specific to brain-, body- and social-level neurodegenerative subtle symptoms, emphasizing inter- and intra-patient variability powered by deep learning. As such, the present review endeavors to exploit digital technologies and AI to create disease-specific phenotypic explanations, facilitating the understanding of neurodegenerative diseases as "bio-psycho-social" conditions. Not only does this translational effort within explainable digital phenotyping foster the understanding of disease-induced traits, but it also enhances diagnostic and, eventually, treatment personalization.
Affiliation(s)
- Hessa Alfalahi
- Department of Biomedical Engineering, Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates.
- Healthcare Engineering Innovation Center (HEIC), Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates.
- Sofia B Dias
- Department of Biomedical Engineering, Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates
- Healthcare Engineering Innovation Center (HEIC), Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates
- CIPER, Faculdade de Motricidade Humana, University of Lisbon, Lisbon, Portugal
- Ahsan H Khandoker
- Department of Biomedical Engineering, Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates
- Healthcare Engineering Innovation Center (HEIC), Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates
- Kallol Ray Chaudhuri
- Parkinson Foundation, International Center of Excellence, King's College London, Denmark Hill, London, UK
- Department of Basic and Clinical Neurosciences, Institute of Psychiatry, Psychology and Neuroscience, King's College London, De Crespigny Park, London, UK
- Leontios J Hadjileontiadis
- Department of Biomedical Engineering, Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates
- Healthcare Engineering Innovation Center (HEIC), Khalifa University of Science and Technology, Abu Dhabi, United Arab Emirates
- Department of Electrical and Computer Engineering, Aristotle University of Thessaloniki, Thessaloniki, Greece