1
Wang Z, Yu J, Zhai M, Wang Z, Sheng K, Zhu Y, Wang T, Liu M, Wang L, Yan M, Zhang J, Xu Y, Wang X, Ma L, Hu W, Cheng H. System-level time computation and representation in the suprachiasmatic nucleus revealed by large-scale calcium imaging and machine learning. Cell Res 2024; 34:493-503. [PMID: 38605178; PMCID: PMC11217450; DOI: 10.1038/s41422-024-00956-x]
Abstract
The suprachiasmatic nucleus (SCN) is the mammalian central circadian pacemaker with heterogeneous neurons acting in concert while each neuron harbors a self-sustained molecular clockwork. Nevertheless, how system-level SCN signals encode time of the day remains enigmatic. Here we show that population-level Ca2+ signals predict hourly time, via a group decision-making mechanism coupled with a spatially modular time feature representation in the SCN. Specifically, we developed a high-speed dual-view two-photon microscope for volumetric Ca2+ imaging of up to 9000 GABAergic neurons in adult SCN slices, and leveraged machine learning methods to capture emergent properties from multiscale Ca2+ signals as a whole. We achieved hourly time prediction by polling random cohorts of SCN neurons, reaching 99.0% accuracy at a cohort size of 900. Further, we revealed that functional neuron subtypes identified by contrastive learning tend to aggregate separately in the SCN space, giving rise to bilaterally symmetrical ripple-like modular patterns. Individual modules represent distinctive time features, such that a module-specifically learned time predictor can also accurately decode hourly time from random polling of the same module. These findings open a new paradigm in deciphering the design principle of the biological clock at the system level.
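The cohort-polling idea in this abstract can be illustrated with a toy model. The sketch below is not the paper's method: the neuron count, noise level, sinusoidal activity profiles, and nearest-template decoding are all illustrative stand-ins for the learned predictors the authors applied to real Ca2+ imaging data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each simulated "neuron" has a phase-shifted 24-h sinusoidal profile.
n_neurons = 300
phases = rng.uniform(0, 2 * np.pi, n_neurons)
templates = np.stack([np.sin(2 * np.pi * h / 24 + phases) for h in range(24)])

def observe(hour, noise=0.5):
    # noisy population snapshot at a given hour
    return templates[hour] + rng.normal(0, noise, n_neurons)

def decode(sample, cohort):
    # hour whose template best matches the polled cohort's activity
    d = np.linalg.norm(templates[:, cohort] - sample[cohort], axis=1)
    return int(np.argmin(d))

def accuracy(cohort_size, trials=200):
    hits = 0
    for _ in range(trials):
        h = int(rng.integers(24))
        cohort = rng.choice(n_neurons, cohort_size, replace=False)
        hits += decode(observe(h), cohort) == h
    return hits / trials

acc_small, acc_large = accuracy(5), accuracy(100)
print(acc_small, acc_large)  # accuracy rises sharply with cohort size
```

The qualitative point carried over from the paper is only that polling larger random cohorts averages out single-neuron noise, so decoding accuracy grows with cohort size.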
Affiliation(s)
- Zichen Wang
- National Biomedical Imaging Center, State Key Laboratory of Membrane Biology, Institute of Molecular Medicine, Peking-Tsinghua Center for Life Sciences, College of Future Technology, Peking University, Beijing, China
- Research Unit of Mitochondria in Brain Diseases, Chinese Academy of Medical Sciences, PKU-Nanjing Institute of Translational Medicine, Nanjing, Jiangsu, China
- Jing Yu
- National Biomedical Imaging Center, State Key Laboratory of Membrane Biology, Institute of Molecular Medicine, Peking-Tsinghua Center for Life Sciences, College of Future Technology, Peking University, Beijing, China
- Research Unit of Mitochondria in Brain Diseases, Chinese Academy of Medical Sciences, PKU-Nanjing Institute of Translational Medicine, Nanjing, Jiangsu, China
- Muyue Zhai
- National Biomedical Imaging Center, State Key Laboratory of Membrane Biology, Institute of Molecular Medicine, Peking-Tsinghua Center for Life Sciences, College of Future Technology, Peking University, Beijing, China
- Zehua Wang
- Wangxuan Institute of Computer Technology, Peking University, Beijing, China
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- Kaiwen Sheng
- Beijing Academy of Artificial Intelligence, Beijing, China
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Yu Zhu
- Beijing Academy of Artificial Intelligence, Beijing, China
- Tianyu Wang
- National Biomedical Imaging Center, State Key Laboratory of Membrane Biology, Institute of Molecular Medicine, Peking-Tsinghua Center for Life Sciences, College of Future Technology, Peking University, Beijing, China
- Mianzhi Liu
- National Biomedical Imaging Center, State Key Laboratory of Membrane Biology, Institute of Molecular Medicine, Peking-Tsinghua Center for Life Sciences, College of Future Technology, Peking University, Beijing, China
- Lu Wang
- National Biomedical Imaging Center, State Key Laboratory of Membrane Biology, Institute of Molecular Medicine, Peking-Tsinghua Center for Life Sciences, College of Future Technology, Peking University, Beijing, China
- Miao Yan
- National Biomedical Imaging Center, State Key Laboratory of Membrane Biology, Institute of Molecular Medicine, Peking-Tsinghua Center for Life Sciences, College of Future Technology, Peking University, Beijing, China
- Jue Zhang
- Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China
- College of Engineering, Peking University, Beijing, China
- Ying Xu
- Jiangsu Key Laboratory of Neuropsychiatric Diseases and Cambridge-Su Genomic Resource Center, Medical School of Soochow University, Suzhou, Jiangsu, China
- Xianhua Wang
- National Biomedical Imaging Center, State Key Laboratory of Membrane Biology, Institute of Molecular Medicine, Peking-Tsinghua Center for Life Sciences, College of Future Technology, Peking University, Beijing, China
- Research Unit of Mitochondria in Brain Diseases, Chinese Academy of Medical Sciences, PKU-Nanjing Institute of Translational Medicine, Nanjing, Jiangsu, China
- Lei Ma
- National Biomedical Imaging Center, State Key Laboratory of Membrane Biology, Institute of Molecular Medicine, Peking-Tsinghua Center for Life Sciences, College of Future Technology, Peking University, Beijing, China
- Beijing Academy of Artificial Intelligence, Beijing, China
- Wei Hu
- Wangxuan Institute of Computer Technology, Peking University, Beijing, China
- Heping Cheng
- National Biomedical Imaging Center, State Key Laboratory of Membrane Biology, Institute of Molecular Medicine, Peking-Tsinghua Center for Life Sciences, College of Future Technology, Peking University, Beijing, China
- Research Unit of Mitochondria in Brain Diseases, Chinese Academy of Medical Sciences, PKU-Nanjing Institute of Translational Medicine, Nanjing, Jiangsu, China
2
Liu J, Younk R, Drahos LM, Nagrale SS, Yadav S, Widge AS, Shoaran M. Neural Decoding and Feature Selection Techniques for Closed-Loop Control of Defensive Behavior. bioRxiv 2024:2024.06.06.597165. [PMID: 38895388; PMCID: PMC11185693; DOI: 10.1101/2024.06.06.597165]
Abstract
Objective: Many psychiatric disorders involve excessive avoidant or defensive behavior, such as avoidance in anxiety and trauma disorders or defensive rituals in obsessive-compulsive disorders. Developing algorithms to predict these behaviors from local field potentials (LFPs) could serve as foundational technology for closed-loop control of such disorders. A significant challenge is identifying the LFP features that encode these defensive behaviors.
Approach: We analyzed LFP signals from the infralimbic cortex and basolateral amygdala of rats undergoing tone-shock conditioning and extinction, a standard paradigm for investigating defensive behaviors. We utilized a comprehensive set of neuro-markers across spectral, temporal, and connectivity domains, employing SHapley Additive exPlanations (SHAP) for feature-importance evaluation within Light Gradient-Boosting Machine (LightGBM) models. Our goal was to decode three commonly studied avoidance/defensive behaviors: freezing, bar-press suppression, and motion (accelerometry), examining the impact of different features on decoding performance.
Main results: Band power and the band-power ratio between channels emerged as optimal features across sessions. High-gamma (80-150 Hz) power, power ratios, and inter-regional correlations were more informative than bands more classically linked to defensive behaviors. Focusing on highly informative features enhanced performance. Across 4 recording sessions with 16 subjects, we achieved average coefficients of determination of 0.5357 and 0.3476, and Pearson correlation coefficients of 0.7579 and 0.6092, for accelerometry jerk and bar-press rate, respectively. Utilizing only the most informative features revealed differential encoding between accelerometry and bar-press rate: the former was decoded primarily through local spectral power and the latter via inter-regional connectivity. Our methodology demonstrated remarkably low time complexity, requiring <110 ms for training and <1 ms for inference.
Significance: Our results demonstrate the feasibility of accurately decoding defensive behaviors with minimal latency, using LFP features from neural circuits strongly linked to these behaviors. This methodology holds promise for real-time decoding to identify physiological targets in closed-loop psychiatric neuromodulation.
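The feature-importance workflow this abstract describes (band-power features ranked inside a decoder) can be sketched as follows. This is a stand-in, not the authors' pipeline: the LFP is synthetic, a ridge regression replaces LightGBM, and permutation importance serves as a rough proxy for SHAP values.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, n_trials, n_samp = 500, 120, 500          # 1-s synthetic LFP epochs
t = np.arange(n_samp) / fs

# Behavior (a stand-in for accelerometry jerk) is driven only by the
# amplitude of a 100 Hz high-gamma component.
gamma_amp = rng.uniform(0.5, 2.0, n_trials)
lfp = (gamma_amp[:, None] * np.sin(2 * np.pi * 100 * t)
       + np.sin(2 * np.pi * 6 * t)            # theta, uninformative here
       + rng.normal(0, 0.5, (n_trials, n_samp)))
behavior = gamma_amp + rng.normal(0, 0.1, n_trials)

bands = {"theta": (4, 8), "beta": (15, 30), "high_gamma": (80, 150)}

def band_power(x, lo, hi):
    f = np.fft.rfftfreq(x.shape[-1], 1 / fs)
    p = np.abs(np.fft.rfft(x)) ** 2
    return p[..., (f >= lo) & (f < hi)].mean(-1)

X = np.column_stack([band_power(lfp, lo, hi) for lo, hi in bands.values()])
X = (X - X.mean(0)) / X.std(0)
y = behavior - behavior.mean()

# Ridge decoder as a light stand-in for the paper's LightGBM models.
w = np.linalg.solve(X.T @ X + np.eye(X.shape[1]), X.T @ y)

def r2(Xm):
    return 1 - np.var(y - Xm @ w) / np.var(y)

base = r2(X)
drops = {}
for i, name in enumerate(bands):
    Xp = X.copy()
    Xp[:, i] = rng.permutation(Xp[:, i])      # permutation importance
    drops[name] = base - r2(Xp)

best = max(drops, key=drops.get)
print(best)  # the informative band should dominate the ranking
```

Since the synthetic behavior depends only on the 100 Hz component, the high-gamma feature should dominate the importance ranking, mirroring in spirit the paper's finding that high-gamma features were most informative.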
Affiliation(s)
- Jinhan Liu
- Institute of Electrical and Micro Engineering, EPFL, Lausanne, Switzerland
- Neuro-X Institute, EPFL, Geneva, Switzerland
- Rebecca Younk
- Department of Psychiatry and Behavioral Sciences, University of Minnesota, Minneapolis, MN, USA
- Lauren M Drahos
- Department of Psychiatry and Behavioral Sciences, University of Minnesota, Minneapolis, MN, USA
- Sumedh S Nagrale
- Department of Psychiatry and Behavioral Sciences, University of Minnesota, Minneapolis, MN, USA
- Shreya Yadav
- Department of Psychiatry and Behavioral Sciences, University of Minnesota, Minneapolis, MN, USA
- Alik S Widge
- Department of Psychiatry and Behavioral Sciences, University of Minnesota, Minneapolis, MN, USA
- These authors jointly supervised this work
- Mahsa Shoaran
- Institute of Electrical and Micro Engineering, EPFL, Lausanne, Switzerland
- Neuro-X Institute, EPFL, Geneva, Switzerland
- These authors jointly supervised this work
3
Zhou R, Yu Y, Li C. Revealing neural dynamical structure of C. elegans with deep learning. iScience 2024; 27:109759. [PMID: 38711456; PMCID: PMC11070340; DOI: 10.1016/j.isci.2024.109759]
Abstract
Caenorhabditis elegans serves as a common model for investigating neural dynamics and the functions of biological neural networks. Data-driven approaches have been employed to reconstruct neural dynamics. However, challenges remain regarding the curse of dimensionality and stochasticity in realistic systems. In this study, we develop a deep neural network (DNN) approach to reconstruct the neural dynamics of C. elegans and study neural mechanisms for locomotion. Our model identifies two limit cycles in the neural activity space: one underpins basic pirouette behavior, essential for navigation, and the other introduces extra Ω turns. The combination of the two limit cycles elucidates the predominant locomotion patterns in neural imaging data. The corresponding energy landscape quantitatively explains the switching strategies between the two limit cycles and provides testable predictions on neural functions and circuit roles. Our work provides a general approach to studying neural dynamics by combining imaging data and stochastic modeling.
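The energy-landscape analysis mentioned in this abstract can be illustrated on a generic noisy limit cycle. The sketch below is not the authors' fitted C. elegans model: it simulates a Hopf-like oscillator with Euler-Maruyama steps and estimates a landscape as the negative log of the empirical stationary density, one common way such landscapes are defined.

```python
import numpy as np

rng = np.random.default_rng(2)

# Euler-Maruyama simulation of a noisy Hopf-like oscillator: the radial
# drift r - r^3 sustains a limit cycle of radius ~1.
dt, steps, sigma = 0.01, 100_000, 0.3
noise = rng.normal(size=(steps, 2)) * sigma * np.sqrt(dt)
traj = np.empty((steps, 2))
x, y = 1.0, 0.0
for i in range(steps):
    r2 = x * x + y * y
    x, y = (x + (x * (1 - r2) - y) * dt + noise[i, 0],
            y + (y * (1 - r2) + x) * dt + noise[i, 1])
    traj[i] = x, y

# "Energy landscape" as the negative log of the stationary density.
H, xe, ye = np.histogram2d(traj[:, 0], traj[:, 1], bins=40,
                           range=[[-2, 2], [-2, 2]])
U = -np.log(H + 1)                 # +1 avoids log(0)

i, j = np.unravel_index(np.argmin(U), U.shape)
cx, cy = (xe[i] + xe[i + 1]) / 2, (ye[j] + ye[j + 1]) / 2
print(np.hypot(cx, cy))            # deepest wells lie near the r = 1 cycle
```

The minima of U trace out the limit cycle, which is the sense in which a landscape summarizes where the stochastic dynamics spend their time and how costly it is to switch between attractors.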
Affiliation(s)
- Ruisong Zhou
- School of Mathematical Sciences and Shanghai Center for Mathematical Sciences, Fudan University, Shanghai 200433, China
- Yuguo Yu
- Research Institute of Intelligent and Complex Systems, State Key Laboratory of Medical Neurobiology, MOE Frontiers Center for Brain Science, and Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China
- Shanghai Artificial Intelligence Laboratory, Shanghai 200232, China
- Chunhe Li
- School of Mathematical Sciences and Shanghai Center for Mathematical Sciences, Fudan University, Shanghai 200433, China
- Institute of Science and Technology for Brain-Inspired Intelligence and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200433, China
4
Manley J, Lu S, Barber K, Demas J, Kim H, Meyer D, Traub FM, Vaziri A. Simultaneous, cortex-wide dynamics of up to 1 million neurons reveal unbounded scaling of dimensionality with neuron number. Neuron 2024; 112:1694-1709.e5. [PMID: 38452763; PMCID: PMC11098699; DOI: 10.1016/j.neuron.2024.02.011]
Abstract
The brain's remarkable properties arise from the collective activity of millions of neurons. Widespread application of dimensionality reduction to multi-neuron recordings implies that neural dynamics can be approximated by low-dimensional "latent" signals reflecting neural computations. However, can such low-dimensional representations truly explain the vast range of brain activity, and if not, what is the appropriate resolution and scale of recording to capture them? Imaging neural activity at cellular resolution and near-simultaneously across the mouse cortex, we demonstrate an unbounded scaling of dimensionality with neuron number in populations up to 1 million neurons. Although half of the neural variance is contained within sixteen dimensions correlated with behavior, our discovered scaling of dimensionality corresponds to an ever-increasing number of neuronal ensembles without immediate behavioral or sensory correlates. The activity patterns underlying these higher dimensions are fine grained and cortex wide, highlighting that large-scale, cellular-resolution recording is required to uncover the full substrates of neuronal computations.
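The scaling-of-dimensionality claim can be illustrated with a toy model: if many small ensembles each drive only a few neurons, sampling more neurons keeps revealing new variance dimensions. All sizes below are illustrative, and the simple explained-variance criterion is a stand-in for the paper's more careful reliability-based estimates.

```python
import numpy as np

rng = np.random.default_rng(3)

# Many small "ensembles", each driving only a few neurons.
T, N, K = 600, 400, 300
Z = rng.normal(size=(T, K))                        # latent ensemble signals
W = np.zeros((K, N))
for k in range(K):
    W[k, rng.choice(N, 4, replace=False)] = 1.0    # each latent hits 4 neurons
X = Z @ W + 0.3 * rng.normal(size=(T, N))          # activity + private noise

def n_dims(Xs, frac=0.8):
    # number of principal components needed to explain `frac` of variance
    s = np.linalg.svd(Xs - Xs.mean(0), compute_uv=False)
    c = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(c, frac) + 1)

for n in (25, 100, 400):
    cells = rng.choice(N, n, replace=False)
    print(n, n_dims(X[:, cells]))   # dimensionality keeps growing with n
```

Because any small sample of neurons intersects only a fraction of the ensembles, the estimated dimensionality is bounded by the sample size and keeps rising as more neurons are recorded, the qualitative signature the abstract describes.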
Affiliation(s)
- Jason Manley
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA; The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
- Sihao Lu
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Kevin Barber
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Jeffrey Demas
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA; The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
- Hyewon Kim
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- David Meyer
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Francisca Martínez Traub
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA
- Alipasha Vaziri
- Laboratory of Neurotechnology and Biophysics, The Rockefeller University, New York, NY 10065, USA; The Kavli Neural Systems Institute, The Rockefeller University, New York, NY 10065, USA
5
Grabowska A, Zabielski J, Senderecka M. Machine learning reveals differential effects of depression and anxiety on reward and punishment processing. Sci Rep 2024; 14:8422. [PMID: 38600089; DOI: 10.1038/s41598-024-58031-9]
Abstract
Recent studies suggest that depression and anxiety are associated with unique aspects of EEG responses to reward and punishment, respectively; also, abnormal responses to punishment in depressed individuals are related to anxiety, the symptoms of which are comorbid with depression. In a non-clinical sample, we aimed to investigate the relationships between reward processing and anxiety, between punishment processing and anxiety, between reward processing and depression, and between punishment processing and depression. Towards this aim, we separated feedback-related brain activity into delta and theta bands to isolate activity that indexes functionally distinct processes. Based on the delta/theta frequency and feedback valence, we then used machine learning (ML) to classify individuals with high severity of depressive symptoms and individuals with high severity of anxiety symptoms versus controls. The significant difference between the depression and control groups was driven mainly by delta activity; there were no differences between reward- and punishment-theta activities. The high severity of anxiety symptoms was marginally more strongly associated with the punishment- than the reward-theta feedback processing. The findings provide new insights into the differences in the impacts of anxiety and depression on reward and punishment processing; our study shows the utility of ML in testing brain-behavior hypotheses and emphasizes the joint effect of theta-RewP/FRN and delta frequency on feedback-related brain activity.
Affiliation(s)
- Anna Grabowska
- Doctoral School in the Social Sciences, Jagiellonian University, Main Square 34, 30-010, Kraków, Poland
- Institute of Philosophy, Jagiellonian University, Grodzka 52, 31-044, Kraków, Poland
- Jakub Zabielski
- Institute of Philosophy, Jagiellonian University, Grodzka 52, 31-044, Kraków, Poland
- Magdalena Senderecka
- Institute of Philosophy, Jagiellonian University, Grodzka 52, 31-044, Kraków, Poland
6
Borra D, Filippini M, Ursino M, Fattori P, Magosso E. Convolutional neural networks reveal properties of reach-to-grasp encoding in posterior parietal cortex. Comput Biol Med 2024; 172:108188. [PMID: 38492454; DOI: 10.1016/j.compbiomed.2024.108188]
Abstract
Deep neural networks (DNNs) are widely adopted to decode motor states from both non-invasively and invasively recorded neural signals, e.g., for realizing brain-computer interfaces. However, the neurophysiological interpretation of how DNNs make decisions based on the input neural activity has received limited attention, especially when they are applied to invasively recorded data. This reduces decoder reliability and transparency, and prevents the exploitation of decoders to better comprehend motor neural encoding. Here, we adopted an explainable artificial intelligence approach - based on a convolutional neural network and an explanation technique - to reveal spatial and temporal neural properties of reach-to-grasping from single-neuron recordings of the posterior parietal area V6A. The network was able to accurately decode 5 different grip types, and the explanation technique automatically identified the cells and temporal samples that most influenced the network prediction. Grip encoding in V6A neurons started as early as movement preparation and peaked during movement execution. A difference was found within V6A: dorsal V6A neurons progressively encoded more for increasingly advanced grips, while ventral V6A neurons encoded more for increasingly rudimentary grips, with both subareas following a linear trend between the amount of grip encoding and the level of grip skill. By revealing the elements of the neural activity most relevant for each grip with no a priori assumptions, our approach supports and advances current knowledge about reach-to-grasp encoding in V6A, and it may represent a general tool able to investigate neural correlates of motor or cognitive tasks (e.g., attention and memory tasks) from single-neuron recordings.
Affiliation(s)
- Davide Borra
- Department of Electrical, Electronic and Information Engineering "Guglielmo Marconi" (DEI), University of Bologna, Cesena Campus, Cesena, 47522, Italy
- Matteo Filippini
- Department of Biomedical and Neuromotor Sciences (DIBINEM), University of Bologna, Bologna, 40126, Italy
- Mauro Ursino
- Department of Electrical, Electronic and Information Engineering "Guglielmo Marconi" (DEI), University of Bologna, Cesena Campus, Cesena, 47522, Italy; Alma Mater Research Institute for Human-Centered Artificial Intelligence, University of Bologna, Bologna, 40126, Italy
- Patrizia Fattori
- Alma Mater Research Institute for Human-Centered Artificial Intelligence, University of Bologna, Bologna, 40126, Italy; Department of Biomedical and Neuromotor Sciences (DIBINEM), University of Bologna, Bologna, 40126, Italy
- Elisa Magosso
- Department of Electrical, Electronic and Information Engineering "Guglielmo Marconi" (DEI), University of Bologna, Cesena Campus, Cesena, 47522, Italy; Alma Mater Research Institute for Human-Centered Artificial Intelligence, University of Bologna, Bologna, 40126, Italy
7
Qian Y, Alhaskawi A, Dong Y, Ni J, Abdalbary S, Lu H. Transforming medicine: artificial intelligence integration in the peripheral nervous system. Front Neurol 2024; 15:1332048. [PMID: 38419700; PMCID: PMC10899496; DOI: 10.3389/fneur.2024.1332048]
Abstract
In recent years, artificial intelligence (AI) has undergone remarkable advancements, exerting a significant influence across a multitude of fields. One area that has particularly garnered attention and witnessed substantial progress is its integration into the realm of the nervous system. This article provides a comprehensive examination of AI's applications within the peripheral nervous system, with a specific focus on AI-enhanced diagnostics for peripheral nervous system disorders, AI-driven pain management, advancements in neuroprosthetics, and the development of neural network models. By illuminating these facets, we unveil the burgeoning opportunities for revolutionary medical interventions and the enhancement of human capabilities, thus paving the way for a future in which AI becomes an integral component of our nervous system's interface.
Affiliation(s)
- Yue Qian
- Rehabilitation Center, Hangzhou Wuyunshan Hospital (Hangzhou Institute of Health Promotion), Hangzhou, China
- Ahmad Alhaskawi
- Department of Orthopedics, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Yanzhao Dong
- Department of Orthopedics, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Juemin Ni
- Rehabilitation Center, Hangzhou Wuyunshan Hospital (Hangzhou Institute of Health Promotion), Hangzhou, China
- Sahar Abdalbary
- Department of Orthopedic Physical Therapy, Faculty of Physical Therapy, Nahda University in Beni Suef, Beni Suef, Egypt
- Hui Lu
- Department of Orthopedics, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Alibaba-Zhejiang University Joint Research Center of Future Digital Healthcare, Zhejiang University, Hangzhou, China
8
Lauria S, Saleh MF. Conditional recurrent neural networks for broad applications in nonlinear optics. Opt Express 2024; 32:5582-5591. [PMID: 38439280; DOI: 10.1364/oe.506519]
Abstract
We present a novel implementation of conditional long short-term memory recurrent neural networks that successfully predict the spectral evolution of a pulse in nonlinear periodically-poled waveguides. The developed networks offer large flexibility by allowing the propagation of optical pulses with ranges of energies and temporal widths in waveguides with different poling periods. The results show very high agreement with the traditional numerical models. Moreover, we are able to use a single network to calculate both the real and imaginary parts of the pulse complex envelope, allowing for successfully retrieving the pulse temporal and spectral evolution using the same network.
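The value of conditioning a sequence model on pulse parameters can be shown in miniature. The sketch below is not the authors' LSTM: a one-step linear predictor fit by least squares stands in for it, with a per-sequence decay rate `a` playing the role of the conditioning inputs (pulse energy, temporal width, poling period).

```python
import numpy as np

rng = np.random.default_rng(4)

# Sequences decaying at different rates `a`; `a` is the condition.
n_seq, T = 50, 30
a = rng.uniform(0.7, 0.99, n_seq)
seqs = a[:, None] ** np.arange(T)            # x_t = a^t

x_t = seqs[:, :-1].ravel()
x_next = seqs[:, 1:].ravel()
cond = np.repeat(a, T - 1)

def lstsq_fit(F, y):
    w, *_ = np.linalg.lstsq(F, y, rcond=None)
    return w

# Unconditioned: x_{t+1} ~ w * x_t (one shared rate for all sequences)
w_u = lstsq_fit(x_t[:, None], x_next)
err_u = np.mean((x_next - x_t * w_u[0]) ** 2)

# Conditioned: x_{t+1} ~ w * (a * x_t) (rate supplied as an input)
F = (cond * x_t)[:, None]
w_c = lstsq_fit(F, x_next)
err_c = np.mean((x_next - F[:, 0] * w_c[0]) ** 2)

print(err_u, err_c)  # conditioning cuts the one-step error dramatically
```

The conditioned predictor fits all sequences at once because the rate is supplied as an input, whereas the unconditioned one must average over them; the same logic lets a single conditional network cover ranges of pulse energies, widths, and poling periods.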
9
Mitchell EC, Story B, Boothe D, Franaszczuk PJ, Maroulas V. A topological deep learning framework for neural spike decoding. Biophys J 2024:S0006-3495(24)00041-9. [PMID: 38402607; DOI: 10.1016/j.bpj.2024.01.025]
Abstract
The brain's spatial orientation system uses different neuron ensembles to aid in environment-based navigation. Two of the ways brains encode spatial information are through head direction cells and grid cells. Brains use head direction cells to determine orientation, whereas grid cells consist of layers of decked neurons that overlay to provide environment-based navigation. These neurons fire in ensembles where several neurons fire at once to activate a single head direction or grid. We want to capture this firing structure and use it to decode head direction and animal location from head direction and grid cell activity. Understanding, representing, and decoding these neural structures require models that encompass higher-order connectivity, more than the one-dimensional connectivity that traditional graph-based models provide. To that end, in this work, we develop a topological deep learning framework for neural spike train decoding. Our framework combines unsupervised simplicial complex discovery with the power of deep learning via a new architecture we develop herein called a simplicial convolutional recurrent neural network. Simplicial complexes, topological spaces that use not only vertices and edges but also higher-dimensional objects, naturally generalize graphs and capture more than just pairwise relationships. Additionally, this approach does not require prior knowledge of the neural activity beyond spike counts, which removes the need for similarity measurements. The effectiveness and versatility of the simplicial convolutional neural network is demonstrated on head direction and trajectory prediction via head direction and grid cell datasets.
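The higher-order structure this abstract appeals to can be made concrete with a minimal flag-complex construction (illustrative, not the authors' pipeline): neurons that co-fire above a threshold define edges, and triples whose three edges are all present become 2-simplices.

```python
import numpy as np
from itertools import combinations

# Toy co-firing matrix: rows = time bins, columns = neurons.
# Neurons 0-2 fire together as one ensemble, neurons 3-4 as another.
spikes = np.array([
    [1, 1, 1, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 1],
])
n_bins, n_cells = spikes.shape
co = spikes.T @ spikes / n_bins            # pairwise co-firing rates

edges = {(i, j) for i, j in combinations(range(n_cells), 2)
         if co[i, j] >= 0.3}
triangles = {t for t in combinations(range(n_cells), 3)
             if all(e in edges for e in combinations(t, 2))}

print(sorted(edges), sorted(triangles))
```

Unlike a plain graph, the 2-simplex (0, 1, 2) records that those three neurons are active together as one ensemble rather than merely pairwise connected, which is the extra information simplicial models feed to the decoder.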
Affiliation(s)
- Edward C Mitchell
- University of Tennessee Knoxville, Knoxville, Tennessee; Joe Gibbs Human Performance Institute, Huntersville, North Carolina
- Brittany Story
- University of Tennessee Knoxville, Knoxville, Tennessee; Army Research Lab, Aberdeen, Maryland
- Piotr J Franaszczuk
- Army Research Lab, Aberdeen, Maryland; Johns Hopkins University, Baltimore, Maryland
10
Çakar T, Son-Turan S, Girişken Y, Sayar A, Ertuğrul S, Filiz G, Tuna E. Unlocking the neural mechanisms of consumer loan evaluations: an fNIRS and ML-based consumer neuroscience study. Front Hum Neurosci 2024; 18:1286918. [PMID: 38375365; PMCID: PMC10875049; DOI: 10.3389/fnhum.2024.1286918]
Abstract
Introduction: This study conducts a comprehensive exploration of the neurocognitive processes underlying consumer credit decision-making using cutting-edge techniques from neuroscience and machine learning (ML). Employing functional near-infrared spectroscopy (fNIRS), the research examines the hemodynamic responses of participants while they evaluate diverse credit offers.
Methods: The experimental phase of this study investigates the hemodynamic responses collected from 39 healthy participants with respect to different loan offers. The study integrates fNIRS data with advanced ML algorithms, specifically Extreme Gradient Boosting, CatBoost, Extra Trees Classifier, and Light Gradient-Boosted Machine, to predict participants' credit decisions based on prefrontal cortex (PFC) activation patterns.
Results: Findings reveal distinctive PFC regions correlating with credit behaviors, including the dorsolateral prefrontal cortex (dlPFC) associated with strategic decision-making, the orbitofrontal cortex (OFC) linked to emotional valuations, and the ventromedial prefrontal cortex (vmPFC) reflecting brand integration and reward processing. Notably, the right dorsomedial prefrontal cortex (dmPFC) and the right vmPFC contribute to positive credit preferences.
Discussion: This interdisciplinary approach bridges neuroscience, machine learning, and finance, offering unprecedented insights into the neural mechanisms guiding financial choices among different loan offers. The study's predictive model holds promise for refining financial services and illuminating human financial behavior within the burgeoning field of neurofinance. The work exemplifies the potential of interdisciplinary research to enhance our understanding of human financial decision-making.
Affiliation(s)
- Tuna Çakar
- Department of Computer Engineering, MEF University, Istanbul, Türkiye
- Semen Son-Turan
- Department of Business Administration, MEF University, Maslak, Türkiye
- Yener Girişken
- Faculty of Economics and Administrative Sciences, Final International University, Istanbul, Türkiye
- Alperen Sayar
- Informatics Technologies Master Program, MEF University, Istanbul, Türkiye
- Seyit Ertuğrul
- Informatics Technologies Master Program, MEF University, Istanbul, Türkiye
- Gözde Filiz
- Computer Science and Engineering Ph.D. Program, MEF University, Istanbul, Türkiye
- Esin Tuna
- Department of Psychology, MEF University, Istanbul, Türkiye
11
Çakar T, Filiz G. Unraveling neural pathways of political engagement: bridging neuromarketing and political science for understanding voter behavior and political leader perception. Front Hum Neurosci 2023; 17:1293173. [PMID: 38188505; PMCID: PMC10771297; DOI: 10.3389/fnhum.2023.1293173]
Abstract
Introduction: Political neuromarketing is an emerging interdisciplinary field integrating marketing, neuroscience, and psychology to decipher voter behavior and political leader perception. This interdisciplinary field offers novel techniques to understand complex phenomena such as voter engagement, political leadership, and party branding.
Methods: This study aims to understand the neural activation patterns of voters exposed to political leaders, using functional near-infrared spectroscopy (fNIRS) and machine learning methods. We recruited participants and recorded their brain activity with fNIRS while they viewed images of different political leaders.
Results: This neuroimaging method reveals brain regions central to brand perception, including the dorsolateral prefrontal cortex (dlPFC), the dorsomedial prefrontal cortex (dmPFC), and the ventromedial prefrontal cortex (vmPFC). Machine learning methods were used to predict the participants' perceptions of leaders based on their brain activity. The study identified the brain regions involved in processing political stimuli and making judgments about political leaders. The best-performing machine learning model, LightGBM, achieved an accuracy of 0.78, underscoring its efficacy in predicting voters' perceptions of political leaders from their brain activity.
Discussion: The findings provide new insights into the neural basis of political decision-making and the development of effective political marketing campaigns while bridging neuromarketing, political science, and machine learning, enabling predictive insights into voter preferences and behavior.
Affiliation(s)
- Tuna Çakar
- Department of Computer Engineering, MEF University, Istanbul, Türkiye
- Graduate School of Science and Engineering, Computer Science and Engineering PhD Program, MEF University, Istanbul, Türkiye
- Gözde Filiz
- Department of Computer Engineering, MEF University, Istanbul, Türkiye
- Graduate School of Science and Engineering, Computer Science and Engineering PhD Program, MEF University, Istanbul, Türkiye
12
Chu KC, Huang HJ, Huang YS. Validity of Diagnostic Support Model for Attention Deficit Hyperactivity Disorder: A Machine Learning Approach. J Pers Med 2023; 13:1525. [PMID: 38003840 PMCID: PMC10672705 DOI: 10.3390/jpm13111525] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/29/2023] [Revised: 07/28/2023] [Accepted: 07/31/2023] [Indexed: 11/26/2023]
Abstract
An accurate and early diagnosis of attention deficit hyperactivity disorder can improve health outcomes and prevent unnecessary medical expenses. This study developed a diagnostic support model using a machine learning approach to effectively screen individuals for attention deficit hyperactivity disorder. Three models were developed: a logistic regression model, a classification and regression tree (CART), and a neural network. The models were assessed using receiver operating characteristic (ROC) analysis. In total, 74 participants were enrolled in the disorder group and 21 participants in the control group. The sensitivity and specificity of each model, indicating the rates of true positive and true negative results, respectively, were assessed. The CART model outperformed the other two models, with areas under the ROC curve in the following order: CART (0.848) > logistic regression model (0.826) > neural network (0.67). The sensitivity and specificity of the CART model were 78.8% and 50%, respectively. This approach can also be applied to other neuroscience research fields, including the diagnosis of autism spectrum disorder, Tourette syndrome, and dementia, enhancing the practical value of this research.
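The model comparison described above can be sketched in a few lines of scikit-learn. This is an illustrative stand-in, not the authors' pipeline: the data are synthetic, and the sample sizes and class weights only mimic the 74-vs-21 enrolment.

```python
# Hypothetical sketch: comparing three screening models by ROC AUC.
# Synthetic data stand in for the study's clinical variables.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# ~21 controls vs ~74 cases, 10 illustrative features
X, y = make_classification(n_samples=95, n_features=10,
                           weights=[0.22, 0.78], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "cart": DecisionTreeClassifier(max_depth=3, random_state=0),
    "mlp": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0),
}
aucs = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    # The area under the ROC curve summarises the sensitivity/specificity
    # trade-off in a single number, as in the study's comparison.
    aucs[name] = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(aucs)
```

On real clinical data the ranking would of course depend on the features and sample; the point is only the evaluation scaffold.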
Affiliation(s)
- Kuo-Chung Chu
- Department of Information Management, National Taipei University of Nursing and Health Sciences, Taipei 112, Taiwan; (K.-C.C.)
- Department of Education and Research, Taipei City Hospital, Taipei 103, Taiwan
- Hsin-Jou Huang
- Department of Information Management, National Taipei University of Nursing and Health Sciences, Taipei 112, Taiwan; (K.-C.C.)
- Yu-Shu Huang
- Department of Child Psychiatry and Sleep Center, Chang Gung Memorial Hospital at Linkou, Taoyuan City 333, Taiwan
- College of Medicine, Chang Gung University, Taoyuan 333, Taiwan
13
Segraves MA. Using Natural Scenes to Enhance our Understanding of the Cerebral Cortex's Role in Visual Search. Annu Rev Vis Sci 2023; 9:435-454. [PMID: 37164028 DOI: 10.1146/annurev-vision-100720-124033] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 05/12/2023]
Abstract
Using natural scenes is an approach to studying the visual and eye movement systems that approximates how these systems function in everyday life. This review examines results from behavioral and neurophysiological studies using natural scene viewing in humans and monkeys. The use of natural scenes for the study of cerebral cortical activity is relatively new and presents challenges for data analysis. Methods and results from the use of natural scenes for the study of the visual and eye movement cortex are presented, with emphasis on the new insights this method provides beyond what conventional methods have revealed about these cortical regions.
Affiliation(s)
- Mark A Segraves
- Department of Neurobiology, Northwestern University, Evanston, Illinois, USA
14
Thölke P, Mantilla-Ramos YJ, Abdelhedi H, Maschke C, Dehgan A, Harel Y, Kemtur A, Mekki Berrada L, Sahraoui M, Young T, Bellemare Pépin A, El Khantour C, Landry M, Pascarella A, Hadid V, Combrisson E, O'Byrne J, Jerbi K. Class imbalance should not throw you off balance: Choosing the right classifiers and performance metrics for brain decoding with imbalanced data. Neuroimage 2023:120253. [PMID: 37385392 DOI: 10.1016/j.neuroimage.2023.120253] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Received: 04/18/2023] [Revised: 06/05/2023] [Accepted: 06/26/2023] [Indexed: 07/01/2023]
Abstract
Machine learning (ML) is increasingly used in cognitive, computational and clinical neuroscience. The reliable and efficient application of ML requires a sound understanding of its subtleties and limitations. Training ML models on datasets with imbalanced classes is a particularly common problem, and it can have severe consequences if not adequately addressed. With the neuroscience ML user in mind, this paper provides a didactic assessment of the class imbalance problem and illustrates its impact through systematic manipulation of data imbalance ratios in (i) simulated data and (ii) brain data recorded with electroencephalography (EEG), magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI). Our results illustrate how the widely used Accuracy (Acc) metric, which measures the overall proportion of successful predictions, yields misleadingly high performances as class imbalance increases. Because Acc weights the per-class ratios of correct predictions proportionally to class size, it largely disregards performance on the minority class. A binary classification model that learns to systematically vote for the majority class will yield an artificially high decoding accuracy that directly reflects the imbalance between the two classes, rather than any genuine, generalizable ability to discriminate between them. We show that other evaluation metrics, such as the Area Under the Curve (AUC) of the Receiver Operating Characteristic (ROC) and the less common Balanced Accuracy (BAcc) metric (defined as the arithmetic mean of sensitivity and specificity), provide more reliable performance evaluations for imbalanced data. Our findings also highlight the robustness of Random Forest (RF), and the benefits of using stratified cross-validation and hyperparameter optimization to tackle data imbalance.
Critically, for neuroscience ML applications that seek to minimize overall classification error, we recommend the routine use of BAcc, which in the specific case of balanced data is equivalent to using standard Acc, and readily extends to multi-class settings. Importantly, we present a list of recommendations for dealing with imbalanced data, as well as open-source code to allow the neuroscience community to replicate and extend our observations and explore alternative approaches to coping with imbalanced data.
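The core pitfall the paper describes is easy to reproduce. On a synthetic 9:1 dataset whose features carry no signal, a classifier that always votes for the majority class looks strong under plain accuracy but is exposed by balanced accuracy (this sketch uses synthetic data, not the paper's EEG/MEG/fMRI recordings):

```python
# Minimal demonstration: accuracy vs balanced accuracy under class imbalance.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, balanced_accuracy_score

rng = np.random.default_rng(0)
y_true = np.array([0] * 900 + [1] * 100)   # 9:1 class imbalance
X = rng.normal(size=(1000, 5))             # features carry no signal at all

# A "blind" model that systematically votes for the majority class
majority = DummyClassifier(strategy="most_frequent").fit(X, y_true)
y_pred = majority.predict(X)

acc = accuracy_score(y_true, y_pred)            # inflated by imbalance
bacc = balanced_accuracy_score(y_true, y_pred)  # mean of per-class recalls
print(acc, bacc)   # 0.9 vs 0.5
```

The 0.9 accuracy simply restates the imbalance ratio, while BAcc sits at chance (0.5), exactly the failure mode the authors warn against.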
Affiliation(s)
- Philipp Thölke
- Cognitive and Computational Neuroscience Laboratory (CoCo Lab), University of Montreal, 2900, boul. Edouard-Montpetit, Montreal, H3T 1J4, Quebec, Canada; Institute of Cognitive Science, Osnabrück University, Neuer Graben 29/Schloss, Osnabrück, 49074, Lower Saxony, Germany.
- Yorguin-Jose Mantilla-Ramos
- Cognitive and Computational Neuroscience Laboratory (CoCo Lab), University of Montreal, 2900, boul. Edouard-Montpetit, Montreal, H3T 1J4, Quebec, Canada; Neuropsychology and Behavior Group (GRUNECO), Faculty of Medicine, Universidad de Antioquia, 53-108, Medellin, Aranjuez, Medellin, 050010, Colombia
- Hamza Abdelhedi
- Cognitive and Computational Neuroscience Laboratory (CoCo Lab), University of Montreal, 2900, boul. Edouard-Montpetit, Montreal, H3T 1J4, Quebec, Canada
- Charlotte Maschke
- Cognitive and Computational Neuroscience Laboratory (CoCo Lab), University of Montreal, 2900, boul. Edouard-Montpetit, Montreal, H3T 1J4, Quebec, Canada; Integrated Program in Neuroscience, McGill University, 1033 Pine Ave, Montreal, H3A 0G4, Canada
- Arthur Dehgan
- Cognitive and Computational Neuroscience Laboratory (CoCo Lab), University of Montreal, 2900, boul. Edouard-Montpetit, Montreal, H3T 1J4, Quebec, Canada; Institut de Neurosciences de la Timone (INT), CNRS, Aix Marseille University, Marseille, 13005, France
- Yann Harel
- Cognitive and Computational Neuroscience Laboratory (CoCo Lab), University of Montreal, 2900, boul. Edouard-Montpetit, Montreal, H3T 1J4, Quebec, Canada
- Anirudha Kemtur
- Cognitive and Computational Neuroscience Laboratory (CoCo Lab), University of Montreal, 2900, boul. Edouard-Montpetit, Montreal, H3T 1J4, Quebec, Canada
- Loubna Mekki Berrada
- Cognitive and Computational Neuroscience Laboratory (CoCo Lab), University of Montreal, 2900, boul. Edouard-Montpetit, Montreal, H3T 1J4, Quebec, Canada
- Myriam Sahraoui
- Cognitive and Computational Neuroscience Laboratory (CoCo Lab), University of Montreal, 2900, boul. Edouard-Montpetit, Montreal, H3T 1J4, Quebec, Canada
- Tammy Young
- Cognitive and Computational Neuroscience Laboratory (CoCo Lab), University of Montreal, 2900, boul. Edouard-Montpetit, Montreal, H3T 1J4, Quebec, Canada; Department of Computing Science, University of Alberta, 116 St & 85 Ave, Edmonton, T6G 2R3, AB, Canada
- Antoine Bellemare Pépin
- Cognitive and Computational Neuroscience Laboratory (CoCo Lab), University of Montreal, 2900, boul. Edouard-Montpetit, Montreal, H3T 1J4, Quebec, Canada; Department of Music, Concordia University, 1550 De Maisonneuve Blvd. W., Montreal, H3H 1G8, QC, Canada
- Clara El Khantour
- Cognitive and Computational Neuroscience Laboratory (CoCo Lab), University of Montreal, 2900, boul. Edouard-Montpetit, Montreal, H3T 1J4, Quebec, Canada
- Mathieu Landry
- Cognitive and Computational Neuroscience Laboratory (CoCo Lab), University of Montreal, 2900, boul. Edouard-Montpetit, Montreal, H3T 1J4, Quebec, Canada
- Annalisa Pascarella
- Institute for Applied Mathematics Mauro Picone, National Research Council, Roma, Italy
- Vanessa Hadid
- Cognitive and Computational Neuroscience Laboratory (CoCo Lab), University of Montreal, 2900, boul. Edouard-Montpetit, Montreal, H3T 1J4, Quebec, Canada
- Etienne Combrisson
- Institut de Neurosciences de la Timone (INT), CNRS, Aix Marseille University, Marseille, 13005, France
- Jordan O'Byrne
- Cognitive and Computational Neuroscience Laboratory (CoCo Lab), University of Montreal, 2900, boul. Edouard-Montpetit, Montreal, H3T 1J4, Quebec, Canada
- Karim Jerbi
- Cognitive and Computational Neuroscience Laboratory (CoCo Lab), University of Montreal, 2900, boul. Edouard-Montpetit, Montreal, H3T 1J4, Quebec, Canada; Mila (Quebec Machine Learning Institute), 6666 Rue Saint-Urbain, Montreal, H2S 3H1, QC, Canada; UNIQUE Centre (Quebec Neuro-AI Research Centre), 3744 rue Jean-Brillant, Montreal, H3T 1P1, QC, Canada
15
Thiele F, Windebank AJ, Siddiqui AM. Motivation for using data-driven algorithms in research: A review of machine learning solutions for image analysis of micrographs in neuroscience. J Neuropathol Exp Neurol 2023; 82:595-610. [PMID: 37244652 PMCID: PMC10280360 DOI: 10.1093/jnen/nlad040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 05/29/2023]
Abstract
Machine learning is a powerful tool that is increasingly being used in many research areas, including neuroscience. The recent development of new algorithms and network architectures, especially in the field of deep learning, has made machine learning models more reliable, accurate, and useful for the biomedical research sector. By minimizing the effort necessary to extract valuable features from datasets, they can be used to find trends in data automatically and make predictions about future data, thereby improving the reproducibility and efficiency of research. One application is the automatic evaluation of micrograph images, which is of great value in neuroscience research. While the development of novel models has enabled numerous new research applications, the barrier to using these new algorithms has also been lowered by the integration of deep learning models into familiar applications such as microscopy image viewers. For researchers unfamiliar with machine learning algorithms, the steep learning curve can hinder the successful implementation of these methods into their workflows. This review explores the use of machine learning in neuroscience, including its potential applications and limitations, and provides guidance on how to select a suitable framework for real-life research projects.
Affiliation(s)
- Frederic Thiele
- Department of Neurology, Mayo Clinic, Rochester, Minnesota, USA
- Department of Neurosurgery, Medical Center of the University of Munich, Munich, Germany
- Ahad M Siddiqui
- Department of Neurology, Mayo Clinic, Rochester, Minnesota, USA
16
Moldoveanu M, Zaidi A. In-Network Learning: Distributed Training and Inference in Networks. Entropy (Basel) 2023; 25:920. [PMID: 37372264 DOI: 10.3390/e25060920] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/27/2023] [Revised: 06/02/2023] [Accepted: 06/06/2023] [Indexed: 06/29/2023]
Abstract
In this paper, we study distributed inference and learning over networks which can be modeled by a directed graph. A subset of the nodes observes different features, which are all relevant/required for the inference task that needs to be performed at some distant end (fusion) node. We develop a learning algorithm and an architecture that can combine the information from the observed distributed features, using the processing units available across the networks. In particular, we employ information-theoretic tools to analyze how inference propagates and fuses across a network. Based on the insights gained from this analysis, we derive a loss function that effectively balances the model's performance with the amount of information transmitted across the network. We study the design criterion of our proposed architecture and its bandwidth requirements. Furthermore, we discuss implementation aspects using neural networks in typical wireless radio access and provide experiments that illustrate benefits over state-of-the-art techniques.
Affiliation(s)
- Matei Moldoveanu
- Laboratoire d'Informatique Gaspard-Monge, Université Paris-Est, 77454 Marne-la-Vallée, France
- Mathematical and Algorithmic Sciences Lab, Paris Research Center, Huawei Technologies, 92100 Boulogne-Billancourt, France
- Abdellatif Zaidi
- Laboratoire d'Informatique Gaspard-Monge, Université Paris-Est, 77454 Marne-la-Vallée, France
- Mathematical and Algorithmic Sciences Lab, Paris Research Center, Huawei Technologies, 92100 Boulogne-Billancourt, France
17
Borra D, Bossi F, Rivolta D, Magosso E. Deep learning applied to EEG source-data reveals both ventral and dorsal visual stream involvement in holistic processing of social stimuli. Sci Rep 2023; 13:7365. [PMID: 37147445 PMCID: PMC10162973 DOI: 10.1038/s41598-023-34487-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/06/2023] [Accepted: 05/02/2023] [Indexed: 05/07/2023]
Abstract
Perception of social stimuli (faces and bodies) relies on "holistic" (i.e., global) mechanisms, as supported by picture-plane inversion: perceiving inverted faces/bodies is harder than perceiving their upright counterparts. Although neuroimaging evidence suggests the involvement of face-specific brain areas in holistic processing, their spatiotemporal dynamics and selectivity for social stimuli are still debated. Here, we investigate the spatiotemporal dynamics of holistic processing for faces, bodies and houses (adopted as a control non-social category) by applying deep learning to high-density electroencephalographic (EEG) signals at the source level. Convolutional neural networks were trained to classify cortical EEG responses to stimulus orientation (upright/inverted), separately for each stimulus type (faces, bodies, houses); the networks performed well above chance for faces and bodies, and close to chance for houses. By explaining the networks' decisions, the 150-200 ms time interval and a few visual ventral-stream regions were identified as most relevant for discriminating face and body orientation (lateral occipital cortex and, for faces only, precuneus cortex, fusiform and lingual gyri), together with two additional dorsal-stream areas (superior and inferior parietal cortices). Overall, the proposed approach is sensitive in detecting cortical activity underlying perceptual phenomena and, by maximally exploiting the discriminant information contained in the data, may reveal previously undisclosed spatiotemporal features, stimulating novel investigations.
Affiliation(s)
- Davide Borra
- Department of Electrical, Electronic and Information Engineering "Guglielmo Marconi" (DEI), University of Bologna, Cesena Campus, Cesena, Italy
- Francesco Bossi
- MoMiLab Research Unit, IMT School for Advanced Studies Lucca, Lucca, Italy
- Davide Rivolta
- Department of Education, Psychology, and Communication, University of Bari Aldo Moro, Bari, Italy
- Elisa Magosso
- Department of Electrical, Electronic and Information Engineering "Guglielmo Marconi" (DEI), University of Bologna, Cesena Campus, Cesena, Italy.
- Alma Mater Research Institute for Human-Centered Artificial Intelligence, University of Bologna, Bologna, Italy.
18
Huang L. A quasi-comprehensive exploration of the mechanisms of spatial working memory. Nat Hum Behav 2023; 7:729-739. [PMID: 36959326 DOI: 10.1038/s41562-023-01559-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/05/2022] [Accepted: 02/16/2023] [Indexed: 03/25/2023]
Abstract
Why are some spatial patterns remembered more easily than others? There are many possible mechanisms underlying spatial working memory function. Here, the author explores different mechanisms simultaneously in a single conceptual model. He conducts a large-scale experiment (35.4 million responses used to measure human observers' spatial working memory across 80,000 patterns) and builds a convolutional neural network as a benchmark for what is expected to be explainable. The author then creates a quasi-comprehensive exploration model of spatial working memory based on classic concepts, as well as new notions, including spatial uncertainty, Bayesian integration, out-of-range responses, averaging, grouping, categorical memory, line detection, gap detection, blurring, lateral inhibition, chunking, multiple spatial-frequency channels, redundancy, response bias and random guess. This model provides a tentative overarching framework for the mechanisms of spatial working memory.
Affiliation(s)
- Liqiang Huang
- Department of Psychology, The Chinese University of Hong Kong, Hong Kong, China.
19
Liu Y, Zhao R, Xiong X, Ren X. A Bibliometric Analysis of Consumer Neuroscience towards Sustainable Consumption. Behav Sci (Basel) 2023; 13:298. [PMID: 37102812 PMCID: PMC10136158 DOI: 10.3390/bs13040298] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 02/13/2023] [Revised: 03/13/2023] [Accepted: 03/15/2023] [Indexed: 04/03/2023]
Abstract
Consumer neuroscience is a new paradigm for studying consumer behavior, focusing on neuroscientific tools to explore the underlying neural processes and behavioral implications of consumption. Based on the bibliometric analysis tools, this paper provides a review of progress in research on consumer neuroscience during 2000–2021. In this paper, we identify research hotspots and frontiers in the field through a statistical analysis of bibliometric indicators, including the number of publications, countries, institutions, and keywords. Aiming at facilitating carbon neutrality via sustainable consumption, this paper discusses the prospects of applying neuroscience to sustainable consumption. The results show 364 publications in the field during 2000–2021, showing a rapid upward trend, indicating that consumer neuroscience research is gaining ground. The majority of these consumer neuroscience studies chose to use electroencephalogram tools, accounting for 63.8% of the total publications; the cutting-edge research mainly involved event-related potential (ERP) studies of various marketing stimuli interventions, functional magnetic resonance imaging (fMRI)-based studies of consumer decision-making and emotion-specific brain regions, and machine-learning-based studies of consumer decision-making optimization models.
20
Grecucci A, Sorella S, Consolini J. Decoding individual differences in expressing and suppressing anger from structural brain networks: A supervised machine learning approach. Behav Brain Res 2023; 439:114245. [PMID: 36470420 DOI: 10.1016/j.bbr.2022.114245] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/19/2022] [Revised: 11/28/2022] [Accepted: 11/29/2022] [Indexed: 12/12/2022]
Abstract
Anger can be broken down into different elements: a transitory state (state anger), a stable personality feature (trait anger), a tendency to express it (anger-out), or to suppress it (anger-in), and the ability to regulate it (anger control). These elements are characterized by individual differences that vary across a continuum. Among them, the abilities to express and suppress anger are of particular relevance as they determine outcomes and enable successful anger management in daily situations. The aim of this study was to demonstrate that anger suppression and expression can be decoded by patterns of grey matter of specific well-known brain networks. To this aim, a supervised machine learning technique, known as Kernel Ridge Regression, was used to predict anger expression and suppression scores of 212 healthy subjects from the grey matter concentration. Results show that individual differences in anger suppression were predicted by two grey matter patterns associated with the Default-Mode Network and the Salience Network. Additionally, individual differences in anger expression were predicted by a circuit mainly involving subcortical and fronto-temporal regions when considering whole brain grey matter features. These results expand previous findings regarding the neural bases of anger by showing that individual differences in specific anger-related components can be predicted by the grey matter features of specific networks.
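The decoding setup described above can be illustrated with a small scikit-learn sketch. This is not the authors' pipeline: the subject count (212) is taken from the abstract, but the grey-matter features, the trait scores, and all hyperparameters below are simulated assumptions.

```python
# Illustrative sketch: Kernel Ridge Regression predicting a continuous trait
# score from high-dimensional (grey-matter-like) features, evaluated with
# cross-validated predictions. All data here are synthetic.
import numpy as np
from scipy.stats import pearsonr
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(42)
n_subjects, n_features = 212, 500
X = rng.normal(size=(n_subjects, n_features))        # grey-matter stand-ins
w = rng.normal(size=n_features)
y = X @ w + rng.normal(scale=5.0, size=n_subjects)   # simulated anger score

model = KernelRidge(kernel="linear", alpha=1.0)
# Out-of-fold predictions mimic decoding individual scores from unseen subjects
y_hat = cross_val_predict(model, X, y, cv=10)
r, p = pearsonr(y, y_hat)
print(f"prediction-outcome correlation r = {r:.2f}")
```

The prediction-outcome correlation between observed and cross-validated predicted scores is a common way to quantify how well such a decoder generalizes across individuals.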
Affiliation(s)
- Alessandro Grecucci
- Clinical and Affective Neuroscience Lab, Cli.A.N. Lab, Department of Psychology and Cognitive Sciences - DiPSCo, University of Trento, Rovereto, Italy; Center for Medical Sciences, CISMed, University of Trento, Trento, Italy.
- Sara Sorella
- Clinical and Affective Neuroscience Lab, Cli.A.N. Lab, Department of Psychology and Cognitive Sciences - DiPSCo, University of Trento, Rovereto, Italy.
- Jennifer Consolini
- Clinical and Affective Neuroscience Lab, Cli.A.N. Lab, Department of Psychology and Cognitive Sciences - DiPSCo, University of Trento, Rovereto, Italy.
21
Zheng S, Li Y, Luo C, Chen F, Ling G, Zheng B. Machine Learning for Predicting the Development of Postoperative Acute Kidney Injury After Coronary Artery Bypass Grafting Without Extracorporeal Circulation. Cardiovascular Innovations and Applications 2023. [DOI: 10.15212/cvia.2023.0006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 02/13/2023]
Abstract
Background: Cardiac surgery-associated acute kidney injury (CSA-AKI) is a major complication that increases morbidity and mortality after cardiac surgery. Most established predictive models are limited to the analysis of nonlinear relationships and do not adequately consider intraoperative variables and early postoperative variables. Nonextracorporeal circulation coronary artery bypass grafting (off-pump CABG) remains the procedure of choice for most coronary surgeries, and refined CSA-AKI predictive models for off-pump CABG are notably lacking. Therefore, this study used an artificial intelligence-based machine learning approach to predict CSA-AKI from comprehensive perioperative data.
Methods: In total, 293 variables were analysed in the clinical data of patients undergoing off-pump CABG in the Department of Cardiac Surgery at the First Affiliated Hospital of Guangxi Medical University between 2012 and 2021. According to the KDIGO criteria, postoperative AKI was defined by an elevation of at least 50% within 7 days, or 0.3 mg/dL within 48 hours, with respect to the reference serum creatinine level. Five machine learning algorithms—a simple decision tree, random forest, support vector machine, extreme gradient boosting and gradient boosting decision tree (GBDT)—were used to construct the CSA-AKI predictive model. The performance of these models was evaluated with the area under the receiver operating characteristic curve (AUC). Shapley additive explanation (SHAP) values were used to explain the predictive model.
Results: The three most influential features in the importance matrix plot were 1-day postoperative serum potassium concentration, 1-day postoperative serum magnesium ion concentration, and 1-day postoperative serum creatine phosphokinase concentration.
Conclusion: GBDT exhibited the largest AUC (0.87) and can be used to predict the risk of AKI development after surgery, thus enabling clinicians to optimise treatment strategies and minimise postoperative complications.
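The modelling step can be sketched with scikit-learn's GradientBoostingClassifier standing in for the paper's GBDT implementation, and impurity-based feature importances standing in for SHAP values. The simulated variables only borrow the names of the paper's top features; their distributions and effect sizes are invented for illustration.

```python
# Hedged sketch: gradient-boosted trees scored by ROC AUC on simulated
# perioperative data. Feature names mirror the paper's top predictors, but
# the data-generating process here is an assumption.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 600
potassium = rng.normal(4.2, 0.5, n)   # 1-day postoperative serum K+ (mmol/L)
magnesium = rng.normal(0.9, 0.1, n)   # 1-day postoperative serum Mg2+ (mmol/L)
cpk = rng.lognormal(5.0, 0.6, n)      # 1-day postoperative CPK (U/L)
noise = rng.normal(size=(n, 3))       # uninformative extra covariates
X = np.column_stack([potassium, magnesium, cpk, noise])

# Simulated AKI risk rises with potassium and CPK, falls with magnesium
logit = 2.0 * (potassium - 4.2) - 5.0 * (magnesium - 0.9) + 0.002 * (cpk - 150)
y = (logit + rng.normal(scale=1.0, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
gbdt = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, gbdt.predict_proba(X_te)[:, 1])
importances = gbdt.feature_importances_   # crude stand-in for SHAP ranking
print(f"test AUC = {auc:.2f}")
```

In practice SHAP values (via the `shap` package) give per-patient, signed attributions, which is what lets clinicians see how each lab value pushed an individual prediction.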
Affiliation(s)
- Sai Zheng
- The First Affiliated Hospital of Guangxi Medical University, Cardiac Surgery, Nanning, Guangxi, China
- Yugui Li
- The First Affiliated Hospital of Guangxi Medical University, Cardiac Surgery, Nanning, Guangxi, China
- Cheng Luo
- The First Affiliated Hospital of Guangxi Medical University, Cardiac Surgery, Nanning, Guangxi, China
- Fang Chen
- The First Affiliated Hospital of Guangxi Medical University, Cardiac Surgery, Nanning, Guangxi, China
- Guoxing Ling
- The First Affiliated Hospital of Guangxi Medical University, Cardiac Surgery, Nanning, Guangxi, China
- Baoshi Zheng
- The First Affiliated Hospital of Guangxi Medical University, Cardiac Surgery, Nanning, Guangxi, China
22
Suomala J, Kauttonen J. Computational meaningfulness as the source of beneficial cognitive biases. Front Psychol 2023; 14:1189704. [PMID: 37205079 PMCID: PMC10187636 DOI: 10.3389/fpsyg.2023.1189704] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/19/2023] [Accepted: 04/05/2023] [Indexed: 05/21/2023]
Abstract
The human brain has evolved to solve the problems it encounters in multiple environments. In solving these challenges, it forms mental simulations of multidimensional information about the world. These processes produce context-dependent behaviors. The brain, as an overparameterized modeling organ, is an evolutionary solution for producing behavior in a complex world. One of the most essential characteristics of living creatures is that they compute the value of the information they receive from external and internal contexts. As a result of this computation, a creature can behave optimally in each environment. Whereas most other living creatures compute almost exclusively biological values (e.g., how to get food), the human, as a cultural creature, computes meaningfulness from the perspective of one's own activity. Computational meaningfulness refers to the process by which the human brain tries to make the situation at hand comprehensible in order to know how to behave optimally. This paper challenges the bias-centric approach of behavioral economics by exploring the different possibilities opened up by computational meaningfulness from a wider perspective. We concentrate on confirmation bias and the framing effect as behavioral-economics examples of cognitive biases. We conclude that, from the computational meaningfulness perspective of the brain, the use of these biases is an indispensable property of an optimally designed computational system such as the human brain. From this perspective, cognitive biases can be rational under some conditions. Whereas the bias-centric approach relies on small-scale interpretable models that include only a few explanatory variables, the computational meaningfulness perspective emphasizes behavioral models that allow multiple variables. People are used to working in multidimensional and varying environments. The human brain is at its best in such environments, and scientific study should increasingly take place in situations simulating the real environment. By using naturalistic stimuli (e.g., videos and VR) we can create more realistic, life-like contexts for research purposes and analyze the resulting data using machine learning algorithms. In this manner, we can better explain, understand and predict human behavior and choice in different contexts.
Affiliation(s)
- Jyrki Suomala
- Department of NeuroLab, Laurea University of Applied Sciences, Vantaa, Finland
- Janne Kauttonen
- Competences, RDI and Digitalization, Haaga-Helia University of Applied Sciences, Helsinki, Finland
23
Machine learning of large-scale multimodal brain imaging data reveals neural correlates of hand preference. Neuroimage 2022; 262:119534. [PMID: 35931311 DOI: 10.1016/j.neuroimage.2022.119534] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/05/2022] [Revised: 07/31/2022] [Accepted: 08/01/2022] [Indexed: 11/22/2022]
Abstract
Lateralization is a fundamental characteristic of many behaviors and the organization of the brain, and atypical lateralization has been suggested to be linked to various brain-related disorders such as autism and schizophrenia. Right-handedness is one of the most prominent markers of human behavioural lateralization, yet its neurobiological basis remains to be determined. Here, we present a large-scale analysis of handedness, as measured by self-reported direction of hand preference, and its variability related to brain structural and functional organization in the UK Biobank (N = 36,024). A multivariate machine learning approach with multi-modalities of brain imaging data was adopted, to reveal how well brain imaging features could predict individual's handedness (i.e., right-handedness vs. non-right-handedness) and further identify the top brain signatures that contributed to the prediction. Overall, the results showed a good prediction performance, with an area under the receiver operating characteristic curve (AUROC) score of up to 0.72, driven largely by resting-state functional measures. Virtual lesion analysis and large-scale decoding analysis suggested that the brain networks with the highest importance in the prediction showed functional relevance to hand movement and several higher-level cognitive functions including language, arithmetic, and social interaction. Genetic analyses of contributions of common DNA polymorphisms to the imaging-derived handedness prediction score showed a significant heritability (h2=7.55%, p <0.001) that was similar to and slightly higher than that for the behavioural measure itself (h2=6.74%, p <0.001). The genetic correlation between the two was high (rg=0.71), suggesting that the imaging-derived score could be used as a surrogate in genetic studies where the behavioural measure is not available. 
This large-scale study using multimodal brain imaging and multivariate machine learning has shed new light on the neural correlates of human handedness.
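An AUROC of the kind reported above can be computed directly from prediction scores and binary labels via the rank-sum identity. A minimal numpy sketch (all data and names below are illustrative, not the study's):

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the rank-sum identity: the probability that a randomly
    chosen positive outscores a randomly chosen negative (ties count 1/2)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # Pairwise comparisons; fine for moderate n (O(n_pos * n_neg) memory)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Toy stand-in for an imaging-derived handedness score
rng = np.random.default_rng(0)
labels = rng.random(1000) < 0.5               # right- vs non-right-handed
scores = labels + rng.normal(0.0, 1.2, 1000)  # noisy predictor of the label
auc = auroc(scores, labels)
```

An AUROC of 0.5 is chance level; the noise level above yields a value in the same ballpark as the 0.72 reported.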
Collapse
|
24
|
Andreev AV, Badarin AA, Maximenko VA, Hramov AE. Forecasting macroscopic dynamics in adaptive Kuramoto network using reservoir computing. CHAOS (WOODBURY, N.Y.) 2022; 32:103126. [PMID: 36319291 DOI: 10.1063/5.0114127] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/25/2022] [Accepted: 09/30/2022] [Indexed: 06/16/2023]
Abstract
Forecasting a system's behavior is an essential task in complex systems theory. Machine learning offers supervised algorithms, e.g., recurrent neural networks and reservoir computers, that predict the behavior of model systems whose states consist of multidimensional time series. In real life, we often have limited information about the behavior of complex systems. The brightest example is the brain neural network described by the electroencephalogram. Forecasting the behavior of such systems is a more challenging task but offers potential for real-life application. Here, we trained a reservoir computer (RC) to predict the macroscopic signal produced by a network of phase oscillators. Lyapunov analysis revealed the chaotic nature of the signal, and the RC failed to forecast it. Augmenting the feature space using Takens' theorem improved the quality of forecasting. The RC achieved the best prediction score when the number of signals coincided with the embedding dimension estimated via the false nearest neighbors method. We found that short-time prediction required a large number of features, while long-time prediction utilized a limited number of features. These results reflect the bias-variance trade-off, an important concept in machine learning.
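The augmentation step described here, delay-embedding the observable per Takens' theorem before training a readout, can be sketched on a toy system. In the following, fixed random tanh features with a ridge readout stand in for a trained reservoir (all names, data, and sizes are illustrative): the Hénon map is observed through one coordinate, so one-step forecasting genuinely requires embedding dimension 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hénon map observed through one coordinate: predicting x[t+1] needs both
# x[t] and x[t-1], so the raw scalar observable alone is insufficient
n = 3000
x, y = np.empty(n), np.empty(n)
x[0], y[0] = 0.1, 0.1
for t in range(n - 1):
    x[t + 1] = 1 - 1.4 * x[t] ** 2 + y[t]
    y[t + 1] = 0.3 * x[t]
s = x  # the scalar observable

def delay_embed(s, dim):
    """Rows are (s[t], s[t-1], ..., s[t-dim+1]): Takens embedding, tau = 1."""
    return np.column_stack([s[dim - 1 - k : len(s) - k] for k in range(dim)])

def forecast_nrmse(dim, n_features=300, lam=1e-6):
    """One-step forecast error of a ridge readout on fixed random tanh
    features (schematically, the trainable part of a reservoir computer)."""
    E = delay_embed(s, dim)
    X, target = E[:-1], E[1:, 0]               # predict the next sample
    W = rng.normal(0.0, 1.0, (dim, n_features))
    b = rng.uniform(-1.0, 1.0, n_features)
    H = np.tanh(X @ W + b)                     # random nonlinear expansion
    W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_features), H.T @ target)
    return np.sqrt(np.mean((H @ W_out - target) ** 2)) / np.std(target)

nrmse_raw = forecast_nrmse(dim=1)       # observable alone misses hidden state
nrmse_embedded = forecast_nrmse(dim=2)  # matches the true embedding dimension
```

The embedded forecaster reaches near-zero error, while the unembedded one is left with the irreducible error from the hidden state, mirroring the paper's finding that matching the embedding dimension matters.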
Collapse
Affiliation(s)
- Andrey V Andreev
- Engineering School of Information Technologies, Telecommunications and Control Systems, Ural Federal University named after the first President of Russia B.N.Yeltsin, 19 Mira str., 620002 Ekaterinburg, Russia
| | - Artem A Badarin
- Engineering School of Information Technologies, Telecommunications and Control Systems, Ural Federal University named after the first President of Russia B.N.Yeltsin, 19 Mira str., 620002 Ekaterinburg, Russia
| | - Vladimir A Maximenko
- Engineering School of Information Technologies, Telecommunications and Control Systems, Ural Federal University named after the first President of Russia B.N.Yeltsin, 19 Mira str., 620002 Ekaterinburg, Russia
| | - Alexander E Hramov
- Engineering School of Information Technologies, Telecommunications and Control Systems, Ural Federal University named after the first President of Russia B.N.Yeltsin, 19 Mira str., 620002 Ekaterinburg, Russia
| |
Collapse
|
25
|
Scherer M, Wang T, Guggenberger R, Milosevic L, Gharabaghi A. FiNN: A toolbox for neurophysiological network analysis. Netw Neurosci 2022; 6:1205-1218. [PMID: 38800466 PMCID: PMC11117079 DOI: 10.1162/netn_a_00265] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2022] [Accepted: 06/23/2022] [Indexed: 05/29/2024] Open
Abstract
Recently, neuroscience has seen a shift from localist approaches to network-wide investigations of brain function. Neurophysiological signals across different spatial and temporal scales provide insight into neural communication. However, additional methodological considerations arise when investigating network-wide brain dynamics rather than local effects. Specifically, larger amounts of data, investigated across a higher dimensional space, are necessary. Here, we present FiNN (Find Neurophysiological Networks), a novel toolbox for the analysis of neurophysiological data with a focus on functional and effective connectivity. FiNN provides a wide range of data processing methods and statistical and visualization tools to facilitate inspection of connectivity estimates and the resulting metrics of brain dynamics. The Python toolbox and its documentation are freely available as Supporting Information. We evaluated FiNN against a number of established frameworks on both a conceptual and an implementation level. We found FiNN to require much less processing time and memory than other toolboxes. In addition, FiNN adheres to a design philosophy of easy access and modifiability, while providing efficient data processing implementations. Since the investigation of network-level neural dynamics is experiencing increasing interest, we place FiNN at the disposal of the neuroscientific community as open-source software.
Collapse
Affiliation(s)
- Maximilian Scherer
- Institute for Neuromodulation and Neurotechnology, University Hospital and University of Tübingen, Tübingen, Germany
- Krembil Brain Institute, University Health Network, and Institute of Biomedical Engineering, University of Toronto, Toronto, Canada
| | - Tianlu Wang
- Institute for Neuromodulation and Neurotechnology, University Hospital and University of Tübingen, Tübingen, Germany
| | - Robert Guggenberger
- Institute for Neuromodulation and Neurotechnology, University Hospital and University of Tübingen, Tübingen, Germany
| | - Luka Milosevic
- Institute for Neuromodulation and Neurotechnology, University Hospital and University of Tübingen, Tübingen, Germany
- Krembil Brain Institute, University Health Network, and Institute of Biomedical Engineering, University of Toronto, Toronto, Canada
| | - Alireza Gharabaghi
- Institute for Neuromodulation and Neurotechnology, University Hospital and University of Tübingen, Tübingen, Germany
| |
Collapse
|
26
|
Objective Supervised Machine Learning-Based Classification and Inference of Biological Neuronal Networks. Molecules 2022; 27:molecules27196256. [PMID: 36234792 PMCID: PMC9573053 DOI: 10.3390/molecules27196256] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2022] [Revised: 08/29/2022] [Accepted: 09/15/2022] [Indexed: 11/16/2022] Open
Abstract
The classification of biological neuron types and networks poses challenges to the full understanding of the human brain’s organisation and functioning. In this paper, we develop a novel objective classification model of biological neuronal morphology and electrical types and their networks, based on the attributes of neuronal communication, using supervised machine learning solutions. This presents advantages over existing approaches in neuroinformatics, since data on mutual information or delay between neurons obtained from spike trains are more abundant than conventional morphological data. We constructed two open-access computational platforms of various neuronal circuits from the Blue Brain Project realistic models, named Neurpy and Neurgen. Then, we investigated how we could perform network tomography with cortical neuronal circuits for the morphological, topological and electrical classification of neurons. We extracted simulated data from 10,000 network topology combinations with five layers, 25 morphological-type (m-type) cells, and 14 electrical-type (e-type) cells. We applied the data to several different classifiers (including Support Vector Machines (SVM), Decision Trees, Random Forests, and Artificial Neural Networks). We achieved accuracies of up to 70%, and the inference of biological network structures using network tomography reached up to 65% accuracy. Objective classification of biological networks can thus be achieved with cascaded machine learning methods using neuron communication data, with SVM performing best amongst the techniques used. Our research not only contributes to existing classification efforts but sets the roadmap for future usage of brain–machine interfaces towards an in vivo objective classification of neurons as a sensing mechanism of the brain’s structure.
Collapse
|
27
|
Liu F, Meamardoost S, Gunawan R, Komiyama T, Mewes C, Zhang Y, Hwang E, Wang L. Deep learning for neural decoding in motor cortex. J Neural Eng 2022; 19. [PMID: 36148535 DOI: 10.1088/1741-2552/ac8fb5] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Accepted: 09/06/2022] [Indexed: 11/12/2022]
Abstract
Objective. Neural decoding is an important tool in neural engineering and neural data analysis. Of the various machine learning algorithms adopted for neural decoding, the recently introduced deep learning is promising to excel. Therefore, we sought to apply deep learning to decode movement trajectories from the activity of motor cortical neurons. Approach. In this paper, we assessed the performance of deep learning methods in three different decoding schemes: concurrent, time-delay, and spatiotemporal. In the concurrent decoding scheme, where the input to the network is the neural activity coincidental to the movement, deep learning networks including the artificial neural network (ANN) and long short-term memory (LSTM) were applied to decode movement and compared with traditional machine learning algorithms. Both ANN and LSTM were further evaluated in the time-delay decoding scheme, in which temporal delays are allowed between neural signals and movements. Lastly, in the spatiotemporal decoding scheme, we trained a convolutional neural network (CNN) to extract movement information from images representing the spatial arrangement of neurons, their activity, and connectomes (i.e., the relative strengths of connectivity between neurons), and combined CNN and ANN to develop a hybrid spatiotemporal network. To reveal the input features that the CNN in the hybrid network discovered for movement decoding, we performed a sensitivity analysis and identified specific regions in the spatial domain. Main results. Deep learning networks (ANN and LSTM) outperformed traditional machine learning algorithms in the concurrent decoding scheme. The results of ANN and LSTM in the time-delay decoding scheme showed that including neural data from time points preceding movement enabled decoders to perform more robustly when the temporal relationship between neural activity and movement changes dynamically over time. In the spatiotemporal decoding scheme, the hybrid spatiotemporal network containing the concurrent ANN decoder outperformed single-network concurrent decoders. Significance. Taken together, our study demonstrates that deep learning could become a robust and effective method for the neural decoding of behavior.
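The contrast between the concurrent and time-delay schemes can be illustrated with a toy linear decoder; a least-squares readout stands in for the ANN/LSTM, and the synthetic data are purely illustrative. When movement lags neural activity, a decoder fed only coincident activity fails, while one fed preceding time bins succeeds.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "motor cortex": movement depends on activity 3 time bins earlier
T, n_neurons, lag = 2000, 20, 3
R = rng.normal(0.0, 1.0, (T, n_neurons))   # firing-rate-like signals
w_true = rng.normal(0.0, 1.0, n_neurons)
movement = np.roll(R @ w_true, lag)        # neural activity leads movement
movement[:lag] = 0.0

def decode_nmse(window):
    """Normalized MSE of a least-squares decoder fed `window` time bins of
    activity up to and including the current one (window = 1 is concurrent)."""
    X = np.column_stack([np.roll(R, k, axis=0) for k in range(window)])
    X, target = X[window:], movement[window:]  # drop rows with wrapped samples
    W, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.mean((target - X @ W) ** 2) / np.var(target)

nmse_concurrent = decode_nmse(window=1)  # misses the neural-to-movement delay
nmse_delay = decode_nmse(window=5)       # preceding bins cover the true lag
```

The concurrent decoder is near chance (normalized error close to 1), while the time-delay decoder recovers the movement essentially perfectly once its window covers the true lag.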
Collapse
Affiliation(s)
- Fangyu Liu
- Department of Civil and Environmental Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061, United States of America
| | - Saber Meamardoost
- Department of Chemical and Biological Engineering, University at Buffalo, Buffalo, NY 14260, United States of America
| | - Rudiyanto Gunawan
- Department of Chemical and Biological Engineering, University at Buffalo, Buffalo, NY 14260, United States of America
| | - Takaki Komiyama
- Department of Neurobiology, Center for Neural Circuits and Behavior, and Department of Neurosciences, University of California San Diego, La Jolla, CA 92093, United States of America
| | - Claudia Mewes
- Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487, United States of America
| | - Ying Zhang
- Department of Cell and Molecular Biology, University of Rhode Island, Kingston, RI 02881, United States of America
| | - EunJung Hwang
- Department of Neurobiology, Center for Neural Circuits and Behavior, and Department of Neurosciences, University of California San Diego, La Jolla, CA 92093, United States of America.,Cell Biology and Anatomy Discipline, Center for Brain Function and Repair, Chicago Medical School, Rosalind Franklin University of Medicine and Science, North Chicago, IL 60064, United States of America
| | - Linbing Wang
- Department of Civil and Environmental Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061, United States of America
| |
Collapse
|
28
|
Xu H, Liu M, Zhang D. How does the brain represent the semantic content of an image? Neural Netw 2022; 154:31-42. [DOI: 10.1016/j.neunet.2022.06.034] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2021] [Revised: 04/13/2022] [Accepted: 06/28/2022] [Indexed: 11/24/2022]
|
29
|
Combrisson E, Allegra M, Basanisi R, Ince RAA, Giordano B, Bastin J, Brovelli A. Group-level inference of information-based measures for the analyses of cognitive brain networks from neurophysiological data. Neuroimage 2022; 258:119347. [PMID: 35660460 DOI: 10.1016/j.neuroimage.2022.119347] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2021] [Revised: 05/24/2022] [Accepted: 05/30/2022] [Indexed: 12/30/2022] Open
Abstract
The reproducibility crisis in neuroimaging, in particular in the case of underpowered studies, has introduced doubts about our ability to reproduce, replicate and generalize findings. As a response, we have seen the emergence of suggested guidelines and principles for neuroscientists, known as Good Scientific Practice, for conducting more reliable research. Still, every study remains almost unique in its combination of analytical and statistical approaches. While this is understandable considering the diversity of designs and brain data recordings, it also works against reproducibility. Here, we propose a non-parametric permutation-based statistical framework, primarily designed for neurophysiological data, to perform group-level inferences on non-negative measures of information, encompassing metrics from information theory, machine learning and measures of distance. The framework supports both fixed- and random-effect models to adapt to inter-individual and inter-session variability. Using numerical simulations, we compared the accuracy of both group models in retrieving the ground truth, with both test- and cluster-wise corrections for multiple comparisons. We then reproduced and extended existing results using both spatially uniform MEG data and non-uniform intracranial neurophysiological data. We showed how the framework can be used to extract stereotypical task- and behavior-related effects across the population, covering scales from the local level of brain regions, through inter-areal functional connectivity, to measures summarizing network properties. We also present an open-source Python toolbox called Frites that includes the proposed statistical pipeline using information-theoretic metrics, such as single-trial functional connectivity estimations, for the extraction of cognitive brain networks. Taken together, we believe that this framework deserves careful attention, as its robustness and flexibility could be the starting point toward the uniformization of statistical approaches.
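The max-statistic flavor of permutation-based correction that such a framework relies on can be sketched as follows. For brevity the permutation distribution is simulated directly from the null rather than obtained by reshuffling trials within each subject, and all sizes and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy group data: a non-negative information measure (e.g. MI) for
# n_subjects x n_roi brain regions; only ROI 0 carries a genuine effect
n_subjects, n_roi, n_perm = 20, 50, 2000
data = rng.gamma(shape=1.0, scale=0.1, size=(n_subjects, n_roi))
data[:, 0] += 0.5

observed = data.mean(axis=0)  # fixed-effect-style group statistic per ROI

# Null distribution of the group statistic; a real pipeline would compute
# these by permuting trial labels within each subject before re-estimating
# the measure; here we draw them from the null directly for brevity
null_means = rng.gamma(1.0, 0.1, size=(n_perm, n_subjects, n_roi)).mean(axis=1)

# Max-statistic correction: compare every ROI to the permutation maximum
# across all ROIs, which controls the family-wise error rate
max_null = null_means.max(axis=1)
pvals = (max_null[:, None] >= observed[None, :]).mean(axis=0)
significant = pvals < 0.05
```

Only the ROI with the genuine effect survives the correction; the 49 null ROIs are compared against the distribution of the across-ROI maximum and are rejected only at the family-wise 5% rate.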
Collapse
Affiliation(s)
- Etienne Combrisson
- Institut de Neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, 13005, Marseille, France.
| | - Michele Allegra
- Institut de Neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, 13005, Marseille, France; Dipartimento di Fisica e Astronomia "Galileo Galilei", Università di Padova, via Marzolo 8, 35131 Padova, Italy; Padua Neuroscience Center, Università di Padova, via Orus 2, 35131 Padova, Italy
| | - Ruggero Basanisi
- Institut de Neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, 13005, Marseille, France
| | - Robin A A Ince
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
| | - Bruno Giordano
- Institut de Neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, 13005, Marseille, France
| | - Julien Bastin
- Univ. Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, 38000 Grenoble, France
| | - Andrea Brovelli
- Institut de Neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, 13005, Marseille, France.
| |
Collapse
|
30
|
Suomala J, Kauttonen J. Human’s Intuitive Mental Models as a Source of Realistic Artificial Intelligence and Engineering. Front Psychol 2022; 13:873289. [PMID: 35707640 PMCID: PMC9189375 DOI: 10.3389/fpsyg.2022.873289] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Accepted: 04/29/2022] [Indexed: 11/13/2022] Open
Abstract
Despite the success of artificial intelligence (AI), we are still far from AI that models the world as humans do. This study focuses on explaining human behavior from the perspective of intuitive mental models. We describe how behavior arises in biological systems and how a better understanding of these systems can lead to advances in the development of human-like AI. Humans build intuitive models from physical, social, and cultural situations and follow Bayesian inference to combine intuitive models with new information to make decisions. We should build similar intuitive models and Bayesian algorithms for new AI. We suggest that probability calculation in the Bayesian sense is sensitive to the semantic properties of the object combinations formed by observation and prior experience. We call this brain process computational meaningfulness; it is closer to the Bayesian ideal when the occurrence probabilities of these objects are believable. How does the human brain form models of the world and apply these models in its behavior? We outline answers from three perspectives. First, intuitive models support an individual in using information in meaningful ways in the current context. Second, neuroeconomics proposes that the valuation network in the brain plays an essential role in human decision making; it combines psychological, economic, and neuroscientific approaches to reveal the biological mechanisms by which decisions are made. On this view, the brain is an over-parameterized modeling organ that produces optimal behavior in a complex world. Finally, progress in AI data analysis techniques has allowed us to decipher how the human brain valuates different options in complex situations. By combining big datasets with machine learning models, it is possible to gain insight from complex neural data beyond what was possible before. We describe these solutions by reviewing the current research from this perspective.
In this study, we outline the basic aspects of human-like AI and discuss how science can benefit from AI. The better we understand the brain's mechanisms, the better we can apply this understanding to building new AI. The development of AI and the understanding of human behavior go hand in hand.
Collapse
Affiliation(s)
- Jyrki Suomala
- NeuroLab, Laurea University of Applied Sciences, Vantaa, Finland
| | - Janne Kauttonen
- Competences, RDI and Digitalization, Haaga-Helia University of Applied Sciences, Helsinki, Finland
- *Correspondence: Janne Kauttonen,
| |
Collapse
|
31
|
Hirschmann J, Steina A, Vesper J, Florin E, Schnitzler A. Neuronal oscillations predict deep brain stimulation outcome in Parkinson's disease. Brain Stimul 2022; 15:792-802. [PMID: 35568311 DOI: 10.1016/j.brs.2022.05.008] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2022] [Revised: 05/06/2022] [Accepted: 05/07/2022] [Indexed: 11/17/2022] Open
Abstract
BACKGROUND Neuronal oscillations are linked to symptoms of Parkinson's disease. This relation can be exploited for optimizing deep brain stimulation (DBS), e.g. by informing a device or human about the optimal location, time and intensity of stimulation. Whether oscillations predict individual DBS outcome has not been clear so far. OBJECTIVE To predict motor symptom improvement from subthalamic power and subthalamo-cortical coherence. METHODS We applied machine learning techniques to simultaneously recorded magnetoencephalography and local field potential data from 36 patients with Parkinson's disease. Gradient-boosted tree learning was applied in combination with feature importance analysis to generate and understand out-of-sample predictions. RESULTS A few features sufficed for making accurate predictions. A model operating on five coherence features, for example, achieved correlations of r > 0.8 between actual and predicted outcomes. Coherence packed more information into fewer features than subthalamic power, although in general their information content was comparable. Both signals predicted akinesia/rigidity reduction best. The most important local feature was subthalamic high-beta power (20-35 Hz). The most important connectivity features were subthalamo-parietal coherence in the very high frequency band (>200 Hz) and in the low-gamma band (36-60 Hz). Successful prediction was not due to the model inferring distance to target or symptom severity from neuronal oscillations. CONCLUSION This study demonstrates for the first time that neuronal oscillations are predictive of DBS outcome. Coherence between subthalamic and parietal oscillations is particularly informative. These results highlight the clinical relevance of inter-areal synchrony in basal ganglia-cortex loops and might facilitate further improvements of DBS in the future.
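Gradient-boosted tree learning of the kind used here can be illustrated with a hand-rolled version (squared loss, depth-1 "stump" trees, each round fitted to the current residuals). The data below are synthetic stand-ins for coherence features and motor improvement, not the study's, and the evaluation mimics the paper's actual-vs-predicted out-of-sample correlation:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in: a few "coherence features" predicting "improvement"
n, d = 400, 5
X = rng.uniform(0.0, 1.0, (n, d))
signal = 2 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.5 * (X[:, 2] > 0.6)
y = signal + rng.normal(0.0, 0.2, n)

def fit_stump(X, r):
    """Best depth-1 regression tree (feature, threshold, leaf means) for r."""
    best, best_sse = None, np.inf
    for j in range(X.shape[1]):
        for thr in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= thr
            if left.all() or not left.any():
                continue
            ml, mr = r[left].mean(), r[~left].mean()
            sse = ((r[left] - ml) ** 2).sum() + ((r[~left] - mr) ** 2).sum()
            if sse < best_sse:
                best_sse, best = sse, (j, thr, ml, mr)
    return best

def predict_stump(stump, X):
    j, thr, ml, mr = stump
    return np.where(X[:, j] <= thr, ml, mr)

def boost(X, y, n_rounds=200, lr=0.1):
    """Gradient boosting for squared loss: each stump fits the residuals."""
    pred, stumps = np.full(len(y), y.mean()), []
    for _ in range(n_rounds):
        stump = fit_stump(X, y - pred)
        stumps.append(stump)
        pred += lr * predict_stump(stump, X)
    return y.mean(), stumps

def boost_predict(model, X, lr=0.1):  # lr must match the one used in boost()
    base, stumps = model
    pred = np.full(len(X), base)
    for stump in stumps:
        pred += lr * predict_stump(stump, X)
    return pred

# Out-of-sample correlation between actual and predicted outcomes
train, test = slice(0, 300), slice(300, None)
model = boost(X[train], y[train])
r = np.corrcoef(boost_predict(model, X[test]), y[test])[0, 1]
```

In practice one would use a tuned library implementation with deeper trees and feature-importance output; the sketch only shows the residual-fitting loop that defines the method.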
Collapse
Affiliation(s)
- Jan Hirschmann
- Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich Heine University, 40225, Düsseldorf, Germany.
| | - Alexandra Steina
- Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich Heine University, 40225, Düsseldorf, Germany
| | - Jan Vesper
- Functional Neurosurgery and Stereotaxy, Department of Neurosurgery, Medical Faculty, Heinrich Heine University, 40225, Düsseldorf, Germany
| | - Esther Florin
- Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich Heine University, 40225, Düsseldorf, Germany
| | - Alfons Schnitzler
- Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich Heine University, 40225, Düsseldorf, Germany; Center for Movement Disorders and Neuromodulation, Department of Neurology, Medical Faculty, Heinrich Heine University, 40225, Düsseldorf, Germany
| |
Collapse
|
32
|
Bârzan H, Ichim AM, Moca VV, Mureşan RC. Time-Frequency Representations of Brain Oscillations: Which One Is Better? Front Neuroinform 2022; 16:871904. [PMID: 35492077 PMCID: PMC9050353 DOI: 10.3389/fninf.2022.871904] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Accepted: 03/21/2022] [Indexed: 02/02/2023] Open
Abstract
Brain oscillations are thought to subserve important functions by organizing the dynamical landscape of neural circuits. The expression of such oscillations in neural signals is usually evaluated using time-frequency representations (TFR), which resolve oscillatory processes in both time and frequency. While a vast number of methods exist to compute TFRs, there is often no objective criterion to decide which one is better. In feature-rich data, such as that recorded from the brain, sources of noise and unrelated processes abound and contaminate results. The impact of these distractor sources is especially problematic, such that TFRs that are more robust to contaminants are expected to provide more useful representations. In addition, the minutiae of the techniques themselves impart better or worse time and frequency resolutions, which also influence the usefulness of the TFRs. Here, we introduce a methodology to evaluate the "quality" of TFRs of neural signals by quantifying how much information they retain about the experimental condition during visual stimulation and recognition tasks, in mice and humans, respectively. We used machine learning to discriminate between various experimental conditions based on TFRs computed with different methods. We found that various methods provide more or less informative TFRs depending on the characteristics of the data. In general, however, more advanced techniques, such as the superlet transform, seem to provide better results for complex time-frequency landscapes, such as those extracted from electroencephalography signals. Finally, we introduce a method based on feature perturbation that is able to quantify how much time-frequency components contribute to the correct discrimination among experimental conditions. The methodology introduced in the present study may be extended to other analyses of neural data, enabling the discovery of data features that are modulated by the experimental manipulation.
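A minimal TFR of the wavelet family discussed here might look like the following plain Morlet transform (the superlet mentioned above geometrically averages such responses over several cycle counts); all parameters and the test signal are illustrative:

```python
import numpy as np

fs = 500.0                     # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
# Test signal: a 10 Hz oscillation in the first second, 40 Hz in the second
sig = np.where(t < 1, np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 40 * t))

def morlet_tfr(sig, fs, freqs, n_cycles=7):
    """Magnitude time-frequency representation via convolution with complex
    Morlet wavelets; n_cycles trades time against frequency resolution."""
    tfr = np.empty((len(freqs), len(sig)))
    for i, f in enumerate(freqs):
        sd = n_cycles / (2 * np.pi * f)        # gaussian envelope width (s)
        wt = np.arange(-3 * sd, 3 * sd, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * wt) * np.exp(-wt**2 / (2 * sd**2))
        wavelet /= np.abs(wavelet).sum()       # comparable gain across freqs
        tfr[i] = np.abs(np.convolve(sig, wavelet, mode="same"))
    return tfr

freqs = np.arange(5.0, 60.0, 1.0)
tfr = morlet_tfr(sig, fs, freqs)

# Dominant frequency in the middle of each half of the recording
f_early = freqs[tfr[:, 125:375].mean(axis=1).argmax()]
f_late = freqs[tfr[:, 625:875].mean(axis=1).argmax()]
```

The TFR correctly localizes each oscillation in both time and frequency; methods then differ, as the abstract notes, in how gracefully such estimates degrade under noise and distractor processes.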
Collapse
Affiliation(s)
- Harald Bârzan
- Department of Theoretical and Experimental Neuroscience, Transylvanian Institute of Neuroscience, Cluj-Napoca, Romania
- Department of Electronics, Telecommunications and Informational Technologies, Technical University of Cluj-Napoca, Cluj-Napoca, Romania
| | - Ana-Maria Ichim
- Department of Theoretical and Experimental Neuroscience, Transylvanian Institute of Neuroscience, Cluj-Napoca, Romania
- Department of Electronics, Telecommunications and Informational Technologies, Technical University of Cluj-Napoca, Cluj-Napoca, Romania
| | - Vasile Vlad Moca
- Department of Theoretical and Experimental Neuroscience, Transylvanian Institute of Neuroscience, Cluj-Napoca, Romania
| | - Raul Cristian Mureşan
- Department of Theoretical and Experimental Neuroscience, Transylvanian Institute of Neuroscience, Cluj-Napoca, Romania
| |
Collapse
|
33
|
Yoder JA, Anderson CB, Wang C, Izquierdo EJ. Reinforcement Learning for Central Pattern Generation in Dynamical Recurrent Neural Networks. Front Comput Neurosci 2022; 16:818985. [PMID: 35465269 PMCID: PMC9028035 DOI: 10.3389/fncom.2022.818985] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2021] [Accepted: 03/10/2022] [Indexed: 11/21/2022] Open
Abstract
Lifetime learning, the change or acquisition of behaviors during a lifetime based on experience, is a hallmark of living organisms. Multiple mechanisms may be involved, but biological neural circuits have repeatedly demonstrated a vital role in the learning process. These neural circuits are recurrent, dynamic, and non-linear, and models of neural circuits employed in neuroscience and neuroethology accordingly tend to involve continuous-time, non-linear, and recurrently interconnected components. Currently, the main approach for finding configurations of dynamical recurrent neural networks that demonstrate behaviors of interest is stochastic search, such as evolutionary algorithms. In an evolutionary algorithm, these dynamic recurrent neural networks are evolved to perform the behavior over multiple generations, through selection, inheritance, and mutation, across a population of solutions. Although these systems can be evolved to exhibit lifetime learning behavior, there are no explicit rules built into these dynamic recurrent neural networks that facilitate learning during their lifetime (e.g., reward signals). In this work, we examine a biologically plausible lifetime learning mechanism for dynamical recurrent neural networks. We focus on a recently proposed reinforcement learning mechanism inspired by neuromodulatory reward signals and ongoing fluctuations in synaptic strengths. Specifically, we extend one of the best-studied and most commonly used dynamic recurrent neural networks to incorporate the reinforcement learning mechanism. First, we demonstrate that this extended dynamical system (model and learning mechanism) can autonomously learn to perform a central pattern generation task. Second, we compare the robustness and efficiency of the reinforcement learning rules against two baseline models, a random walk and a hill-climbing walk through parameter space.
Third, we systematically study the effect of the different meta-parameters of the learning mechanism on the behavioral learning performance. Finally, we report on preliminary results exploring the generality and scalability of this learning mechanism for dynamical neural networks as well as directions for future work.
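The reward-gated core shared by such learning mechanisms and the hill-climbing baseline, versus a pure random walk, can be caricatured in a few lines. A static reward landscape over three synaptic weights stands in for scoring the oscillation a central pattern generator actually produces; everything here is an illustrative assumption, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Schematic reward landscape over synaptic weights; in the real task the
# reward would score the rhythmic output the network generates
w_target = np.array([0.8, -0.5, 0.3])
def reward(w):
    return -np.sum((w - w_target) ** 2)

def run(gated, steps=2000, sigma=0.05):
    """Ongoing random fluctuations in synaptic strengths. When `gated`, a
    reward signal consolidates only improvements (the greedy core of
    reward-modulated search / hill climbing); otherwise the weights follow
    a pure random walk."""
    w = np.zeros(3)
    best = reward(w)
    for _ in range(steps):
        trial = w + rng.normal(0.0, sigma, 3)
        if not gated:
            w = trial                          # random walk baseline
        elif reward(trial) > best:
            w, best = trial, reward(trial)     # reward-gated consolidation
    return reward(w)

final_gated = run(gated=True)
final_walk = run(gated=False)
```

Gating the same synaptic fluctuations by reward converges near the optimum, while the ungated walk diffuses away, which is the qualitative contrast the paper's baseline comparison probes.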
Collapse
Affiliation(s)
- Jason A. Yoder
- Computer Science and Software Engineering Department, Rose-Hulman Institute of Technology, Terre Haute, IN, United States
- *Correspondence: Jason A. Yoder
| | - Cooper B. Anderson
- Computer Science and Software Engineering Department, Rose-Hulman Institute of Technology, Terre Haute, IN, United States
| | - Cehong Wang
- Computer Science and Software Engineering Department, Rose-Hulman Institute of Technology, Terre Haute, IN, United States
| | - Eduardo J. Izquierdo
- Computational Neuroethology Lab, Cognitive Science Program, Indiana University, Bloomington, IN, United States
| |
Collapse
|
34
|
Zapp SJ, Nitsche S, Gollisch T. Retinal receptive-field substructure: scaffolding for coding and computation. Trends Neurosci 2022; 45:430-445. [DOI: 10.1016/j.tins.2022.03.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Revised: 02/28/2022] [Accepted: 03/17/2022] [Indexed: 11/29/2022]
|
35
|
Bae H, Lee S, Lee CY, Kim CE. A Novel Framework for Understanding the Pattern Identification of Traditional Asian Medicine From the Machine Learning Perspective. Front Med (Lausanne) 2022; 8:763533. [PMID: 35186965 PMCID: PMC8853725 DOI: 10.3389/fmed.2021.763533] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2021] [Accepted: 12/23/2021] [Indexed: 11/13/2022] Open
Abstract
Pattern identification (PI), a unique diagnostic system of traditional Asian medicine, is the process of inferring the pathological nature or location of lesions based on observed symptoms. Despite its critical role in theory and practice, the information processing principles underlying PI systems are generally unclear. We present a novel framework for comprehending the PI system from a machine learning perspective. After a brief introduction to the dimensionality of the data, we propose that the PI system can be modeled as a dimensionality reduction process and discuss analytical issues that can be addressed using our framework. Our framework promotes a new approach in understanding the underlying mechanisms of the PI process with strong mathematical tools, thereby enriching the explanatory theories of traditional Asian medicine.
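Modeling PI as dimensionality reduction can be made concrete with a PCA sketch: high-dimensional symptom profiles are projected onto a few latent "pattern" coordinates. All data, sizes, and names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy clinical data: symptom profiles generated from a few latent
# "pattern" factors plus observation noise
n_patients, n_symptoms, n_patterns = 200, 30, 3
loadings = rng.normal(0.0, 1.0, (n_patterns, n_symptoms))
factors = rng.normal(0.0, 1.0, (n_patients, n_patterns))
symptoms = factors @ loadings + 0.1 * rng.normal(0.0, 1.0, (n_patients, n_symptoms))

# PCA via SVD: find the low-dimensional pattern space behind the symptoms
Xc = symptoms - symptoms.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
var_explained = (S ** 2) / (S ** 2).sum()
scores = Xc @ Vt[:3].T   # each patient summarized by 3 pattern coordinates

top3 = var_explained[:3].sum()
```

Because the symptoms were generated from three latent factors, the first three components capture nearly all of the variance; in the proposed framework, a PI system plays an analogous role, compressing observed symptoms into a small set of diagnostic pattern coordinates.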
Affiliation(s)
- Hyojin Bae
- Department of Physiology, Gachon University College of Korean Medicine, Seongnam, South Korea
- Sanghun Lee
- Korean Medicine Data Division, Korea Institute of Oriental Medicine, Daejeon, South Korea; Department of Korean Convergence Medical Science, University of Science and Technology, Daejeon, South Korea
- Choong-Yeol Lee
- Department of Physiology, Gachon University College of Korean Medicine, Seongnam, South Korea
- Chang-Eop Kim
- Department of Physiology, Gachon University College of Korean Medicine, Seongnam, South Korea
|
36
|
Lin B. Regularity Normalization: Neuroscience-Inspired Unsupervised Attention across Neural Network Layers. ENTROPY 2021; 24:e24010059. [PMID: 35052085 PMCID: PMC8774926 DOI: 10.3390/e24010059] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/18/2021] [Revised: 12/20/2021] [Accepted: 12/23/2021] [Indexed: 11/17/2022]
Abstract
Inspired by the adaptation phenomenon of neuronal firing, we propose regularity normalization (RN) as an unsupervised attention mechanism (UAM) that computes the statistical regularity in the implicit space of neural networks under the Minimum Description Length (MDL) principle. Treating the neural network optimization process as a partially observable model selection problem, regularity normalization constrains the implicit space by a normalization factor, the universal code length. We compute this universal code incrementally across neural network layers and demonstrate the flexibility to include data priors such as top-down attention and other oracle information. Empirically, our approach outperforms existing normalization methods in tackling limited, imbalanced and non-stationary input distributions in image classification, classic control, procedurally-generated reinforcement learning, generative modeling, handwriting generation and question answering tasks with various neural network architectures. Lastly, the unsupervised attention mechanism is a useful probing tool for neural networks, tracking the dependency and critical learning stages across layers and recurrent time steps of deep networks.
Affiliation(s)
- Baihan Lin
- Department of Neuroscience, Columbia University Irving Medical Center, New York, NY 10032, USA
- Department of Systems Biology, Columbia University Irving Medical Center, New York, NY 10032, USA
|
37
|
Foundations of Bayesian Learning in Clinical Neuroscience. ACTA NEUROCHIRURGICA. SUPPLEMENT 2021; 134:75-78. [PMID: 34862530 DOI: 10.1007/978-3-030-85292-4_10] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
There is increasing interest in using prediction models to forecast clinical outcomes within the fields of neurosurgery and clinical neuroscience. The present chapter outlines the foundations of Bayesian learning and introduces Bayes' theorem and its use in machine learning methodology. The use of Bayesian networks to structure and define associations between outcome predictors and final outcomes is highlighted, and Naïve Bayes classifiers are outlined for use in predicting neurosurgical outcomes, where the understanding of underlying causes is less important. The present work aims to orient researchers in Bayesian machine learning methods and when and how to use them. When used correctly, these tools have the potential to improve the understanding of factors influencing neurosurgical outcomes, aid in structuring the relationships between them, and provide reliable machine learning classification models for predicting neurosurgical outcomes.
|
38
|
Machine Learning in Neuro-Oncology, Epilepsy, Alzheimer's Disease, and Schizophrenia. ACTA NEUROCHIRURGICA. SUPPLEMENT 2021; 134:349-361. [PMID: 34862559 DOI: 10.1007/978-3-030-85292-4_39] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
Applications of machine learning (ML) in translational medicine include therapeutic drug creation, diagnostic development, surgical planning, outcome prediction, and intraoperative assistance. Opportunities in the neurosciences are rich given advancement in our understanding of the brain, expanding indications for intervention, and diagnostic challenges often characterized by multiple clinical and environmental factors. We present a review of ML in neuro-oncology, epilepsy, Alzheimer's disease, and schizophrenia to highlight recent progress in these fields and how machine learning capabilities are being used in their current forms. Supervised learning appears to be the most commonly incorporated class of algorithm across the reviewed neuroscience disciplines, with diagnosis as the primary aim. Reported accuracies are high, ranging from 63% to 99% across all algorithms investigated. Machine learning contributions to neurosurgery, neurology, psychiatry, and the clinical and basic science neurosciences may enhance current medical best practices while also broadening our understanding of dynamic neural networks and the brain.
|
39
|
Early prediction of developing spontaneous activity in cultured neuronal networks. Sci Rep 2021; 11:20407. [PMID: 34650146 PMCID: PMC8516856 DOI: 10.1038/s41598-021-99538-9] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Accepted: 09/27/2021] [Indexed: 11/18/2022] Open
Abstract
Synchronization and bursting activity are intrinsic electrophysiological properties of in vivo and in vitro neural networks. During early development, cortical cultures exhibit a wide repertoire of synchronous bursting dynamics whose characterization may help to understand the parameters governing the transition from immature to mature networks. Here we used machine learning techniques to characterize and predict the developing spontaneous activity in mouse cortical neurons on microelectrode arrays (MEAs) during the first three weeks in vitro. Network activity at three stages of early development was defined by 18 electrophysiological features of spikes, bursts, synchrony, and connectivity. The variability of neuronal network activity during early development was investigated by applying k-means and self-organizing map (SOM) clustering analysis to features of bursts and synchrony. These electrophysiological features were predicted at the third week in vitro with high accuracy from those at earlier times using three machine learning models: Multivariate Adaptive Regression Splines, Support Vector Machines, and Random Forest. Our results indicate that initial patterns of electrical activity during the first week in vitro may already predetermine the final development of the neuronal network activity. The methodological approach used here may be applied to explore the biological mechanisms underlying the complex dynamics of spontaneous activity in developing neuronal cultures.
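The clustering step of the pipeline summarized above can be sketched in a few lines of pure Python. This is a toy illustration with hypothetical two-feature burst data and a deterministic centroid seeding of our own choosing; the study itself used 18 electrophysiological features and also SOM clustering:

```python
import random

random.seed(1)

# Hypothetical burst features (burst rate, synchrony index) for 40 cultures
# drawn from two developmental regimes; the study used 18 such features.
data = ([(random.gauss(1.0, 0.2), random.gauss(0.2, 0.05)) for _ in range(20)] +
        [(random.gauss(3.0, 0.2), random.gauss(0.8, 0.05)) for _ in range(20)])

def kmeans(points, k, iters=20):
    """Plain k-means: assign points to the nearest centroid, recompute means."""
    # Deterministic seeding for this sketch: spread initial centroids
    # across the dataset rather than sampling at random.
    centroids = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        # New centroid = mean of assigned points (keep old one if empty).
        centroids = [tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl
                     else centroids[i] for i, cl in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans(data, k=2)  # two putative developmental stages
```

The prediction step (the MARS/SVM/Random Forest models) would then be trained on such cluster-resolved features from the early weeks to forecast week-three activity.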
|
40
|
41
|
Simpson S, Chen Y, Wellmeyer E, Smith LC, Aragon Montes B, George O, Kimbrough A. The Hidden Brain: Uncovering Previously Overlooked Brain Regions by Employing Novel Preclinical Unbiased Network Approaches. Front Syst Neurosci 2021; 15:595507. [PMID: 33967705 PMCID: PMC8097000 DOI: 10.3389/fnsys.2021.595507] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2020] [Accepted: 03/26/2021] [Indexed: 12/18/2022] Open
Abstract
A large focus of modern neuroscience has revolved around preselected brain regions of interest based on prior studies. While there are reasons to focus on brain regions implicated in prior work, the result has been a biased assessment of brain function. Thus, many brain regions that may prove crucial in a wide range of neurobiological problems, including neurodegenerative diseases and neuropsychiatric disorders, have been neglected. Advances in neuroimaging and computational neuroscience have made it possible to make unbiased assessments of whole-brain function and identify previously overlooked regions of the brain. This review will discuss the tools that have been developed to advance neuroscience and network-based computational approaches used to further analyze the interconnectivity of the brain. Furthermore, it will survey examples of neural network approaches that assess connectivity in clinical (i.e., human) and preclinical (i.e., animal model) studies and discuss how preclinical studies of neurodegenerative diseases and neuropsychiatric disorders can greatly benefit from the unbiased nature of whole-brain imaging and network neuroscience.
Affiliation(s)
- Sierra Simpson
- Department of Psychiatry, School of Medicine, University of California, San Diego, San Diego, CA, United States
- Yueyi Chen
- Department of Psychiatry, School of Medicine, University of California, San Diego, San Diego, CA, United States; Department of Basic Medical Sciences, College of Veterinary Medicine, Purdue University, West Lafayette, IN, United States
- Emma Wellmeyer
- Department of Psychiatry, School of Medicine, University of California, San Diego, San Diego, CA, United States
- Lauren C Smith
- Department of Psychiatry, School of Medicine, University of California, San Diego, San Diego, CA, United States
- Brianna Aragon Montes
- Department of Psychiatry, School of Medicine, University of California, San Diego, San Diego, CA, United States
- Olivier George
- Department of Psychiatry, School of Medicine, University of California, San Diego, San Diego, CA, United States
- Adam Kimbrough
- Department of Basic Medical Sciences, College of Veterinary Medicine, Purdue University, West Lafayette, IN, United States; Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, United States; Purdue Institute for Inflammation, Immunology, and Infectious Disease, West Lafayette, IN, United States
|
42
|
Tuckute G, Hansen ST, Kjaer TW, Hansen LK. Real-Time Decoding of Attentional States Using Closed-Loop EEG Neurofeedback. Neural Comput 2021; 33:967-1004. [PMID: 33513324 DOI: 10.1162/neco_a_01363] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2020] [Accepted: 10/16/2020] [Indexed: 11/04/2022]
Abstract
Sustained attention is the cognitive ability to maintain task focus over extended periods of time (Mackworth, 1948; Chun, Golomb, & Turk-Browne, 2011). In this study, scalp electroencephalography (EEG) signals were processed in real time using a system of 32 dry electrodes during a sustained visual attention task. An attention training paradigm was implemented, as designed in DeBettencourt, Cohen, Lee, Norman, and Turk-Browne (2015), in which the composition of a sequence of blended images is updated based on the participant's decoded attentional level to a primed image category. It was hypothesized that a single neurofeedback training session would improve sustained attention abilities. Twenty-two participants were trained in a single neurofeedback session with behavioral pretraining and posttraining sessions within three consecutive days. Half of the participants functioned as controls in a double-blinded design and received sham neurofeedback. During the neurofeedback session, attentional states to primed categories were decoded in real time and used to provide a continuous feedback signal customized to each participant in a closed-loop approach. We report a mean classifier decoding error rate of 34.3% (chance = 50%). Within the neurofeedback group, there was a greater level of task-relevant attentional information decoded in the participant's brain before a correct behavioral response than before an incorrect response. This effect was not visible in the control group (interaction p = 7.23e-4), which strongly indicates that we were able to achieve a meaningful measure of subjective attentional state in real time and control participants' behavior during the neurofeedback session. We do not provide conclusive evidence as to whether the single neurofeedback session per se produced lasting effects on sustained attention abilities.
We developed a portable EEG neurofeedback system capable of decoding attentional states and predicting behavioral choices in the attention task at hand. The neurofeedback code framework is Python based and open source, and it allows users to actively engage in the development of neurofeedback tools for scientific and translational use.
Affiliation(s)
- Greta Tuckute
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark, and Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, USA
- Sofie Therese Hansen
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
- Troels Wesenberg Kjaer
- Department of Neurology, Zealand University Hospital, 4000 Roskilde, Denmark, and Department of Clinical Medicine, University of Copenhagen, 2200 Copenhagen, Denmark
- Lars Kai Hansen
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, 2800 Kgs. Lyngby, Denmark
|
43
|
Wang PY, Sapra S, George VK, Silva GA. Generalizable Machine Learning in Neuroscience Using Graph Neural Networks. Front Artif Intell 2021; 4:618372. [PMID: 33748747 PMCID: PMC7971515 DOI: 10.3389/frai.2021.618372] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/16/2020] [Accepted: 01/12/2021] [Indexed: 11/17/2022] Open
Abstract
Although a number of studies have explored deep learning in neuroscience, the application of these algorithms to neural systems on a microscopic scale, i.e., parameters relevant to lower scales of organization, remains relatively novel. Motivated by advances in whole-brain imaging, we examined the performance of deep learning models on microscopic neural dynamics and resulting emergent behaviors using calcium imaging data from the nematode C. elegans. As one of the only species for which neuron-level dynamics can be recorded, C. elegans serves as the ideal organism for designing and testing models bridging recent advances in deep learning and established concepts in neuroscience. We show that neural networks perform remarkably well on both neuron-level dynamics prediction and behavioral state classification. In addition, we compared the performance of structure-agnostic neural networks and graph neural networks to investigate whether graph structure can be exploited as a favourable inductive bias. To perform this experiment, we designed a graph neural network which explicitly infers relations between neurons from neural activity and leverages the inferred graph structure during computations. In our experiments, we found that graph neural networks generally outperformed structure-agnostic models and excel in generalization to unseen organisms, implying a potential path to generalizable machine learning in neuroscience.
Affiliation(s)
- Paul Y. Wang
- Center for Engineered Natural Intelligence, University of California San Diego, La Jolla, CA, United States
- Department of Physics, University of California San Diego, La Jolla, CA, United States
- Sandalika Sapra
- Center for Engineered Natural Intelligence, University of California San Diego, La Jolla, CA, United States
- Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA, United States
- Vivek Kurien George
- Center for Engineered Natural Intelligence, University of California San Diego, La Jolla, CA, United States
- Department of Bioengineering, University of California San Diego, La Jolla, CA, United States
- Gabriel A. Silva
- Center for Engineered Natural Intelligence, University of California San Diego, La Jolla, CA, United States
- Department of Bioengineering, University of California San Diego, La Jolla, CA, United States
- Department of Neurosciences, University of California San Diego, La Jolla, CA, United States
|
44
|
Bae H, Kim SJ, Kim CE. Lessons From Deep Neural Networks for Studying the Coding Principles of Biological Neural Networks. Front Syst Neurosci 2021; 14:615129. [PMID: 33519390 PMCID: PMC7843526 DOI: 10.3389/fnsys.2020.615129] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2020] [Accepted: 12/14/2020] [Indexed: 12/26/2022] Open
Abstract
One of the central goals in systems neuroscience is to understand how information is encoded in the brain, and the standard approach is to identify the relation between a stimulus and a neural response. However, the feature of a stimulus is typically defined by the researcher's hypothesis, which may cause biases in the research conclusion. To demonstrate potential biases, we simulate four likely scenarios using deep neural networks trained on the image classification dataset CIFAR-10 and show the possibility of selecting suboptimal/irrelevant features or overestimating the network feature representation/noise correlation. Additionally, we present studies investigating neural coding principles in biological neural networks to which our points can be applied. This study aims to not only highlight the importance of careful assumptions and interpretations regarding the neural response to stimulus features but also suggest that the comparative study between deep and biological neural networks from the perspective of machine learning can be an effective strategy for understanding the coding principles of the brain.
Affiliation(s)
- Hyojin Bae
- Department of Physiology, Gachon University College of Korean Medicine, Seongnam, South Korea
- Sang Jeong Kim
- Laboratory of Neurophysiology, Department of Physiology, Seoul National University College of Medicine, Seoul, South Korea
- Chang-Eop Kim
- Department of Physiology, Gachon University College of Korean Medicine, Seongnam, South Korea
|
45
|
Bučková B, Brunovský M, Bareš M, Hlinka J. Predicting Sex From EEG: Validity and Generalizability of Deep-Learning-Based Interpretable Classifier. Front Neurosci 2020; 14:589303. [PMID: 33192274 PMCID: PMC7652844 DOI: 10.3389/fnins.2020.589303] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2020] [Accepted: 09/17/2020] [Indexed: 11/13/2022] Open
Abstract
Explainable artificial intelligence holds great promise for neuroscience and plays an important role in the hypothesis generation process. We follow up on a recent machine-learning-oriented study that constructed a deep convolutional neural network to automatically identify biological sex from EEG recordings in healthy individuals and highlighted the discriminative role of beta-band power. If it generalizes, this finding would be relevant not only theoretically, by pointing to specific neurobiological sexual dimorphisms, but potentially also as a relevant confound in quantitative EEG diagnostic practice. To put this finding to the test, we assess whether the automatic identification of biological sex generalizes to another dataset, particularly in the presence of a psychiatric disease, by testing the hypothesis of higher beta power in women compared to men on 134 patients suffering from Major Depressive Disorder. Moreover, we construct ROC curves and compare the performance of the classifiers in determining sex both before and after antidepressant treatment. We replicate the observation of a significant difference in beta-band power between men and women, with a classification accuracy of nearly 77%. The difference was consistent across the majority of electrodes; however, multivariate classification models did not generally improve performance. Similar results were also observed after antidepressant treatment (classification accuracy above 70%), further supporting the robustness of the initial finding.
Affiliation(s)
- Barbora Bučková
- Department of Cybernetics, Faculty of Electrical Engineering, Czech Technical University in Prague, Prague, Czechia; Department of Complex Systems, Institute of Computer Science of the Czech Academy of Sciences, Prague, Czechia
- Martin Brunovský
- National Institute of Mental Health, Klecany, Czechia; Third Faculty of Medicine, Charles University, Prague, Czechia
- Martin Bareš
- National Institute of Mental Health, Klecany, Czechia; Third Faculty of Medicine, Charles University, Prague, Czechia
- Jaroslav Hlinka
- Department of Complex Systems, Institute of Computer Science of the Czech Academy of Sciences, Prague, Czechia; National Institute of Mental Health, Klecany, Czechia
|
46
|
Ienca M, Ignatiadis K. Artificial Intelligence in Clinical Neuroscience: Methodological and Ethical Challenges. AJOB Neurosci 2020; 11:77-87. [PMID: 32228387 DOI: 10.1080/21507740.2020.1740352] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023]
Abstract
Clinical neuroscience is increasingly relying on the collection of large volumes of differently structured data and the use of intelligent algorithms for data analytics. In parallel, the ubiquitous collection of unconventional data sources (e.g. mobile health, digital phenotyping, consumer neurotechnology) is increasing the variety of data points. Big data analytics and approaches to Artificial Intelligence (AI) such as advanced machine learning are showing great potential to make sense of these larger and heterogeneous data flows. AI provides great opportunities for making new discoveries about the brain, improving current preventative and diagnostic models in both neurology and psychiatry and developing more effective assistive neurotechnologies. Concurrently, it raises many new methodological and ethical challenges. Given their transformative nature, it is still largely unclear how AI-driven approaches to the study of the human brain will meet adequate standards of scientific validity and affect normative instruments in neuroethics and research ethics. This manuscript provides an overview of current AI-driven approaches to clinical neuroscience and an assessment of the associated key methodological and ethical challenges. In particular, it will discuss what ethical principles are primarily affected by AI approaches to human neuroscience, and what normative safeguards should be enforced in this domain.
Affiliation(s)
- Marcello Ienca
- Swiss Federal Institute of Technology, ETH Zurich, Department of Health Sciences and Technology
- Karolina Ignatiadis
- Swiss Federal Institute of Technology, ETH Zurich, Department of Health Sciences and Technology
|
47
|
Machine Learning for Neural Decoding. eNeuro 2020; 7:ENEURO.0506-19.2020. [PMID: 32737181 PMCID: PMC7470933 DOI: 10.1523/eneuro.0506-19.2020] [Citation(s) in RCA: 74] [Impact Index Per Article: 18.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2019] [Revised: 07/01/2020] [Accepted: 07/03/2020] [Indexed: 01/11/2023] Open
Abstract
Despite rapid advances in machine learning tools, the majority of neural decoding approaches still use traditional methods. Modern machine learning tools, which are versatile and easy to use, have the potential to significantly improve decoding performance. This tutorial describes how to effectively apply these algorithms for typical decoding problems. We provide descriptions, best practices, and code for applying common machine learning methods, including neural networks and gradient boosting. We also provide detailed comparisons of the performance of various methods at the task of decoding spiking activity in motor cortex, somatosensory cortex, and hippocampus. Modern methods, particularly neural networks and ensembles, significantly outperform traditional approaches, such as Wiener and Kalman filters. Improving the performance of neural decoding algorithms allows neuroscientists to better understand the information contained in a neural population and can help to advance engineering applications such as brain–machine interfaces. Our code package is available at github.com/kordinglab/neural_decoding.
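As a flavour of what the tutorial covers, a linear decoder (a stand-in for the Wiener-filter baseline it compares against) can be sketched in pure Python on hypothetical spike-count data. The data, dimensions, and variable names here are ours, not the tutorial's:

```python
import random

random.seed(0)

# Hypothetical data: spike counts from 3 neurons over 200 time bins, and a
# 1-D behavioural variable (e.g. hand velocity) they jointly encode.
n_bins, n_neurons = 200, 3
true_w = [0.8, -0.5, 0.3]
X = [[random.gauss(5, 1) for _ in range(n_neurons)] for _ in range(n_bins)]
y = [sum(w * x for w, x in zip(true_w, row)) + random.gauss(0, 0.1) for row in X]

def fit_ols(X, y):
    """Ordinary least squares via the normal equations X'X w = X'y."""
    n = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    Xty = [sum(r[i] * t for r, t in zip(X, y)) for i in range(n)]
    # Solve the small linear system by Gauss-Jordan elimination.
    A = [row[:] + [b] for row, b in zip(XtX, Xty)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

w = fit_ols(X, y)
pred = [sum(wi * xi for wi, xi in zip(w, row)) for row in X]
mean_y = sum(y) / len(y)
r2 = 1 - (sum((p - t) ** 2 for p, t in zip(pred, y)) /
          sum((t - mean_y) ** 2 for t in y))
```

The tutorial's point is that modern methods (neural networks, ensembles) substantially outperform such linear baselines on real motor, somatosensory, and hippocampal recordings.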
|
48
|
Agrawal M, Peterson JC, Griffiths TL. Scaling up psychology via Scientific Regret Minimization. Proc Natl Acad Sci U S A 2020; 117:8825-8835. [PMID: 32241896 PMCID: PMC7183163 DOI: 10.1073/pnas.1915841117] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Do large datasets provide value to psychologists? Without a systematic methodology for working with such datasets, there is a valid concern that analyses will produce noise artifacts rather than true effects. In this paper, we offer a way for researchers to systematically build models and identify novel phenomena in large datasets. One traditional approach is to analyze the residuals of models (the biggest errors they make in predicting the data) to discover what might be missing from those models. However, once a dataset is sufficiently large, machine learning algorithms approximate the true underlying function better than the data itself, suggesting, instead, that the predictions of these data-driven models should be used to guide model building. We call this approach "Scientific Regret Minimization" (SRM), as it focuses on minimizing errors for cases that we know should have been predictable. We apply this exploratory method to a subset of the Moral Machine dataset, a public collection of roughly 40 million moral decisions. Using SRM, we find that incorporating a set of deontological principles that capture dimensions along which groups of agents can vary (e.g., sex and age) improves a computational model of human moral judgment. Furthermore, we are able to identify and independently validate three interesting moral phenomena: criminal dehumanization, age of responsibility, and asymmetric notions of responsibility.
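The contrast between classical residual analysis and SRM can be illustrated with a toy sketch. All data, group labels, and the stand-in "flexible model" below are invented for illustration and have nothing to do with the Moral Machine analysis itself:

```python
import random

random.seed(2)

# Hypothetical outcomes: a "child" subgroup effect that a grand-mean model
# misses; groups and values are invented for illustration.
data = [("adult", random.gauss(0.50, 0.05)) for _ in range(40)] + \
       [("child", random.gauss(0.90, 0.05)) for _ in range(10)]

grand_mean = sum(v for _, v in data) / len(data)        # simple model

# Stand-in for a flexible machine-learning model: per-group means, which
# approximate the underlying function better than the raw noisy data.
group_means = {g: sum(v for gg, v in data if gg == g) /
                  sum(1 for gg, _ in data if gg == g)
               for g in {"adult", "child"}}

# Scientific Regret Minimization: rank cases by the simple model's error
# against the flexible model's *predictions*, not against noisy outcomes.
regret = sorted(data, key=lambda kv: abs(group_means[kv[0]] - grand_mean),
                reverse=True)
flagged_group = regret[0][0]
```

Cases with high "scientific regret" then point to terms worth adding to the simple model; here the hypothetical child/adult distinction is flagged.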
Affiliation(s)
- Mayank Agrawal
- Department of Psychology, Princeton University, Princeton, NJ 08544
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ 08544
- Joshua C Peterson
- Department of Computer Science, Princeton University, Princeton, NJ 08544
- Thomas L Griffiths
- Department of Psychology, Princeton University, Princeton, NJ 08544
- Department of Computer Science, Princeton University, Princeton, NJ 08544
|
49
|
Nakagome S, Luu TP, He Y, Ravindran AS, Contreras-Vidal JL. An empirical comparison of neural networks and machine learning algorithms for EEG gait decoding. Sci Rep 2020; 10:4372. [PMID: 32152333 PMCID: PMC7062700 DOI: 10.1038/s41598-020-60932-4] [Citation(s) in RCA: 32] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2019] [Accepted: 02/03/2020] [Indexed: 11/09/2022] Open
Abstract
Previous studies of brain-computer interfaces (BCIs) based on scalp electroencephalography (EEG) have demonstrated the feasibility of decoding kinematics for lower-limb movements during walking. In this computational study, we investigated offline decoding analysis with different models and conditions to assess how they influence the performance and stability of the decoder. Specifically, we conducted three computational decoding experiments that investigated decoding accuracy: (1) based on delta-band time-domain features, (2) when downsampling data, and (3) of different frequency-band features. In each experiment, eight different decoder algorithms were compared, including the current state of the art. Different tap sizes (sample window sizes) were also evaluated for a real-time applicability assessment. A feature-importance analysis was conducted to ascertain which features were most relevant for decoding; moreover, stability to perturbations was assessed to quantify the robustness of the methods. Results indicated that the Gated Recurrent Unit (GRU) and Quasi-Recurrent Neural Network (QRNN) generally outperformed other methods in terms of decoding accuracy and stability. The previous state-of-the-art Unscented Kalman Filter (UKF) still outperformed other decoders when using smaller tap sizes, with fast convergence in performance, but at the cost of vulnerability to noise. Downsampling and the inclusion of other frequency-band features yielded overall improvements in performance. The results suggest that neural-network-based decoders with downsampling or a wide range of frequency-band features could improve not only decoder performance but also robustness, with applications for stable use of BCIs.
Affiliation(s)
- Sho Nakagome
- Non-Invasive Brain Machine Interface Laboratory, Electrical and Computer Engineering Department, Houston, 77004, USA
- Trieu Phat Luu
- Non-Invasive Brain Machine Interface Laboratory, Electrical and Computer Engineering Department, Houston, 77004, USA
- Yongtian He
- Non-Invasive Brain Machine Interface Laboratory, Electrical and Computer Engineering Department, Houston, 77004, USA
- Akshay Sujatha Ravindran
- Non-Invasive Brain Machine Interface Laboratory, Electrical and Computer Engineering Department, Houston, 77004, USA
- Jose L Contreras-Vidal
- Non-Invasive Brain Machine Interface Laboratory, Electrical and Computer Engineering Department, Houston, 77004, USA
|
50
|
Wang X, Lin X, Dang X. Supervised learning in spiking neural networks: A review of algorithms and evaluations. Neural Netw 2020; 125:258-280. [PMID: 32146356 DOI: 10.1016/j.neunet.2020.02.011] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2019] [Revised: 12/15/2019] [Accepted: 02/20/2020] [Indexed: 01/08/2023]
Abstract
As a new brain-inspired computational model of artificial neural networks, a spiking neural network encodes and processes neural information through precisely timed spike trains. Spiking neural networks are composed of biologically plausible spiking neurons, which have become suitable tools for processing complex temporal or spatiotemporal information. However, because of their intricately discontinuous and implicit nonlinear mechanisms, the formulation of efficient supervised learning algorithms for spiking neural networks is difficult and has become an important problem in this research field. This article presents a comprehensive review of supervised learning algorithms for spiking neural networks and evaluates them qualitatively and quantitatively. First, a comparison between spiking neural networks and traditional artificial neural networks is provided. The general framework and some related theories of supervised learning for spiking neural networks are then introduced. Furthermore, the state-of-the-art supervised learning algorithms of recent years are reviewed from the perspectives of applicability to spiking neural network architectures and the inherent mechanisms of the learning algorithms. A performance comparison of spike train learning for some representative algorithms is also made. In addition, we provide five qualitative performance evaluation criteria for supervised learning algorithms for spiking neural networks and present a new taxonomy of these algorithms based on the five criteria. Finally, some future research directions in this field are outlined.
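The "precisely timed spike trains" at the heart of the review can be made concrete with a textbook leaky integrate-and-fire (LIF) neuron. This is a generic sketch; the parameters and units are illustrative and not taken from the review:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks toward rest, integrates input, and emits a precisely timed spike
# on crossing threshold, after which it is reset.
def lif_spike_times(input_current, dt=1.0, tau=10.0, v_rest=0.0,
                    v_thresh=1.0, v_reset=0.0):
    v, spikes = v_rest, []
    for step, i_in in enumerate(input_current):
        v += dt / tau * (-(v - v_rest) + i_in)   # leaky integration step
        if v >= v_thresh:                        # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset
    return spikes

# Constant suprathreshold drive yields a regular spike train; the precise
# spike times are the quantity spiking networks use to encode information.
times = lif_spike_times([1.5] * 100)
```

Supervised learning in this setting means adjusting weights so that output neurons fire at desired target times, which is exactly where the discontinuity of the spike mechanism makes gradient-based training difficult.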
Affiliation(s)
- Xiangwen Wang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
- Xianghong Lin
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
- Xiaochao Dang
- College of Computer Science and Engineering, Northwest Normal University, Lanzhou, 730070, People's Republic of China
|