1. Fietkiewicz C, McDougal RA, Corrales Marco D, Chiel HJ, Thomas PJ. Tutorial: using NEURON for neuromechanical simulations. Front Comput Neurosci 2023; 17:1143323. PMID: 37583894; PMCID: PMC10424731; DOI: 10.3389/fncom.2023.1143323.
Abstract
The dynamical properties of the brain and the dynamics of the body strongly influence one another. Their interaction generates complex adaptive behavior. While a wide variety of simulation tools exist for neural dynamics or biomechanics separately, there are few options for integrated brain-body modeling. Here, we provide a tutorial to demonstrate how the widely-used NEURON simulation platform can support integrated neuromechanical modeling. As a first step toward incorporating biomechanics into a NEURON simulation, we provide a framework for integrating inputs from a "periphery" and outputs to that periphery. In other words, "body" dynamics are driven in part by "brain" variables, such as voltages or firing rates, and "brain" dynamics are influenced by "body" variables through sensory feedback. To couple the "brain" and "body" components, we use NEURON's pointer construct to share information between "brain" and "body" modules. This approach allows separate specification of brain and body dynamics and code reuse. Though simple in concept, the use of pointers can be challenging due to a complicated syntax and several different programming options. In this paper, we present five different computational models, with increasing levels of complexity, to demonstrate the concepts of code modularity using pointers and the integration of neural and biomechanical modeling within NEURON. The models include: (1) a neuromuscular model of calcium dynamics and muscle force, (2) a neuromechanical, closed-loop model of a half-center oscillator coupled to a rudimentary motor system, (3) a closed-loop model of neural control for respiration, (4) a pedagogical model of a non-smooth "brain/body" system, and (5) a closed-loop model of feeding behavior in the sea hare Aplysia californica that incorporates biologically-motivated non-smooth dynamics. This tutorial illustrates how NEURON can be integrated with a broad range of neuromechanical models. 
Code available at https://github.com/fietkiewicz/PointerBuilder.
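The pointer-based coupling the tutorial describes can be illustrated in plain Python, independent of NEURON itself: a "brain" variable drives a "body" variable while sensory feedback closes the loop. This is a conceptual sketch of the coupling pattern only; the rate/activation dynamics and time constants are invented for illustration and this is not NEURON's actual POINTER syntax:

```python
import numpy as np

def simulate(t_stop=2.0, dt=0.001):
    """Closed-loop 'brain'/'body' toy: each module reads the other's variable every step."""
    n = int(t_stop / dt)
    rate = 0.0        # "brain" firing rate (the variable a body module would point to)
    act = 0.0         # "body" muscle activation (fed back to the brain as sensory input)
    trace = np.empty((n, 2))
    for i in range(n):
        drive = 1.0 - 0.5 * act            # sensory feedback reduces the neural drive
        rate += dt * (drive - rate) / 0.1  # brain dynamics, tau = 100 ms
        act += dt * (rate - act) / 0.3     # body dynamics, tau = 300 ms
        trace[i] = (rate, act)
    return trace

trace = simulate()  # both variables settle at the closed-loop fixed point (2/3)
```

Separating the two update lines into distinct modules that share state by reference is the essence of what NEURON's pointer construct provides across compiled mechanisms.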
Affiliation(s)
- Chris Fietkiewicz
- Department of Mathematics and Computer Science, Hobart and William Smith Colleges, Geneva, NY, United States
- Robert A. McDougal
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, United States
- Wu Tsai Institute, Yale University, New Haven, CT, United States
- Program in Computational Biology and Bioinformatics, Yale University, New Haven, CT, United States
- Section for Biomedical Informatics, Yale School of Medicine, New Haven, CT, United States
- David Corrales Marco
- Department of Mathematics and Computer Science, Hobart and William Smith Colleges, Geneva, NY, United States
- Hillel J. Chiel
- Department of Biology, Case Western Reserve University, Cleveland, OH, United States
- Department of Neurosciences, Case Western Reserve University, Cleveland, OH, United States
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH, United States
- Peter J. Thomas
- Department of Biology, Case Western Reserve University, Cleveland, OH, United States
- Department of Mathematics, Applied Mathematics and Statistics, Case Western Reserve University, Cleveland, OH, United States
- Department of Cognitive Science, Case Western Reserve University, Cleveland, OH, United States
- Department of Electrical, Control, and Systems Engineering, Case Western Reserve University, Cleveland, OH, United States
- Department of Data and Computer Science, Case Western Reserve University, Cleveland, OH, United States
2. Dura-Bernal S, Neymotin SA, Suter BA, Dacre J, Moreira JVS, Urdapilleta E, Schiemann J, Duguid I, Shepherd GMG, Lytton WW. Multiscale model of primary motor cortex circuits predicts in vivo cell-type-specific, behavioral state-dependent dynamics. Cell Rep 2023; 42:112574. PMID: 37300831; PMCID: PMC10592234; DOI: 10.1016/j.celrep.2023.112574.
Abstract
Understanding cortical function requires studying multiple scales: molecular, cellular, circuit, and behavioral. We develop a multiscale, biophysically detailed model of mouse primary motor cortex (M1) with over 10,000 neurons and 30 million synapses. Neuron types, densities, spatial distributions, morphologies, biophysics, connectivity, and dendritic synapse locations are constrained by experimental data. The model includes long-range inputs from seven thalamic and cortical regions and noradrenergic inputs. Connectivity depends on cell class and cortical depth at sublaminar resolution. The model accurately predicts in vivo layer- and cell-type-specific responses (firing rates and LFP) associated with behavioral states (quiet wakefulness and movement) and experimental manipulations (noradrenaline receptor blockade and thalamus inactivation). We generate mechanistic hypotheses for the observed activity and analyze low-dimensional population latent dynamics. This quantitative theoretical framework can be used to integrate and interpret M1 experimental data and sheds light on the cell-type-specific multiscale dynamics associated with several experimental conditions and behaviors.
Affiliation(s)
- Salvador Dura-Bernal
- Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA; Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA.
- Samuel A Neymotin
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA; Department of Psychiatry, Grossman School of Medicine, New York University (NYU), New York, NY, USA
- Benjamin A Suter
- Department of Physiology, Northwestern University, Evanston, IL, USA
- Joshua Dacre
- Centre for Discovery Brain Sciences, Edinburgh Medical School: Biomedical Sciences, University of Edinburgh, Edinburgh, UK
- Joao V S Moreira
- Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA
- Eugenio Urdapilleta
- Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA
- Julia Schiemann
- Centre for Discovery Brain Sciences, Edinburgh Medical School: Biomedical Sciences, University of Edinburgh, Edinburgh, UK; Center for Integrative Physiology and Molecular Medicine, Saarland University, Saarbrücken, Germany
- Ian Duguid
- Centre for Discovery Brain Sciences, Edinburgh Medical School: Biomedical Sciences, University of Edinburgh, Edinburgh, UK
- William W Lytton
- Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA; Aligning Science Across Parkinson's (ASAP) Collaborative Research Network, Chevy Chase, MD, USA; Department of Neurology, Kings County Hospital Center, Brooklyn, NY, USA
3. Haşegan D, Deible M, Earl C, D’Onofrio D, Hazan H, Anwar H, Neymotin SA. Training spiking neuronal networks to perform motor control using reinforcement and evolutionary learning. Front Comput Neurosci 2022; 16:1017284. PMID: 36249482; PMCID: PMC9563231; DOI: 10.3389/fncom.2022.1017284.
Abstract
Artificial neural networks (ANNs) have been successfully trained to perform a wide range of sensory-motor behaviors. In contrast, the performance of spiking neuronal network (SNN) models trained to perform similar behaviors remains relatively suboptimal. In this work, we aimed to push the field of SNNs forward by exploring the potential of different learning mechanisms to achieve optimal performance. We trained SNNs to solve the CartPole reinforcement learning (RL) control problem using two learning mechanisms operating at different timescales: (1) spike-timing-dependent reinforcement learning (STDP-RL) and (2) evolutionary strategy (EVOL). Though the role of STDP-RL in biological systems is well established, several other mechanisms, though not fully understood, work in concert during learning in vivo. Recreating accurate models that capture the interaction of STDP-RL with these diverse learning mechanisms is extremely difficult. EVOL is an alternative method and has been successfully used in many studies to fit model neural responsiveness to electrophysiological recordings and, in some cases, for classification problems. One advantage of EVOL is that it may not need to capture all interacting components of synaptic plasticity and thus provides a better alternative to STDP-RL. Here, we compared the performance of each algorithm after training, which revealed EVOL as a powerful method for training SNNs to perform sensory-motor behaviors. Our modeling opens up new capabilities for SNNs in RL and could serve as a testbed for neurobiologists aiming to understand multi-timescale learning mechanisms and dynamics in neuronal circuits.
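The EVOL idea, treating episode return as a black-box fitness and estimating a search direction from random parameter perturbations, can be sketched as a basic evolutionary strategy. The quadratic toy fitness below stands in for an episode return such as CartPole time-to-failure; it is not the authors' implementation, and all hyperparameters are illustrative:

```python
import numpy as np

def evolve(fitness, dim, pop=50, sigma=0.1, lr=0.05, gens=300, seed=0):
    """Basic evolutionary strategy: perturb, evaluate, recombine by fitness-weighted sum."""
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)  # current parameter vector (e.g. SNN weights)
    for _ in range(gens):
        noise = rng.standard_normal((pop, dim))
        scores = np.array([fitness(w + sigma * z) for z in noise])
        scores = (scores - scores.mean()) / (scores.std() + 1e-8)  # rank-free normalization
        w += lr / (pop * sigma) * noise.T @ scores                 # stochastic gradient estimate
    return w

# Toy stand-in for an episode return: higher is better, peak at `target`.
target = np.array([0.5, -1.0, 2.0])
best = evolve(lambda w: -np.sum((w - target) ** 2), dim=3)
```

Note that nothing in the loop needs gradients or a plasticity model, which is why EVOL sidesteps the difficulty of modeling interacting synaptic learning mechanisms.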
Affiliation(s)
- Daniel Haşegan
- Vilcek Institute of Graduate Biomedical Sciences, NYU Grossman School of Medicine, New York, NY, United States
- Matt Deible
- Department of Computer Science, University of Pittsburgh, Pittsburgh, PA, United States
- Christopher Earl
- Department of Computer Science, University of Massachusetts Amherst, Amherst, MA, United States
- David D’Onofrio
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- Hananel Hazan
- Allen Discovery Center, Tufts University, Boston, MA, United States
- Haroon Anwar
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- Samuel A. Neymotin
- Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, United States
- Department of Psychiatry, NYU Grossman School of Medicine, New York, NY, United States
4. Anwar H, Caby S, Dura-Bernal S, D’Onofrio D, Hasegan D, Deible M, Grunblatt S, Chadderdon GL, Kerr CC, Lakatos P, Lytton WW, Hazan H, Neymotin SA. Training a spiking neuronal network model of visual-motor cortex to play a virtual racket-ball game using reinforcement learning. PLoS One 2022; 17:e0265808. PMID: 35544518; PMCID: PMC9094569; DOI: 10.1371/journal.pone.0265808.
Abstract
Recent models of spiking neuronal networks have been trained to perform behaviors in static environments using a variety of learning rules, with varying degrees of biological realism. Most of these models have not been tested in dynamic visual environments where models must make predictions on future states and adjust their behavior accordingly. The models using these learning rules are often treated as black boxes, with little analysis of the circuit architectures and learning mechanisms supporting optimal performance. Here we developed visual/motor spiking neuronal network models and trained them to play a virtual racket-ball game using several reinforcement learning algorithms inspired by the dopaminergic reward system. We systematically investigated how different architectures and circuit motifs (feed-forward, recurrent, feedback) contributed to learning and performance. We also developed a new biologically-inspired learning rule that significantly enhanced performance, while reducing training time. Our models included visual areas encoding game inputs and relaying the information to motor areas, which used this information to learn to move the racket to hit the ball. Neurons in the early visual area relayed information encoding object location and motion direction across the network. Neuronal association areas encoded spatial relationships between objects in the visual scene. Motor populations received inputs from visual and association areas representing the dorsal pathway. Two populations of motor neurons generated commands to move the racket up or down. Model-generated actions updated the environment and triggered reward or punishment signals that adjusted synaptic weights so that the models could learn which actions led to reward. Here we demonstrate that our biologically-plausible learning rules were effective in training spiking neuronal network models to solve problems in dynamic environments.
We used our models to dissect the circuit architectures and learning rules most effective for learning. Our model shows that learning mechanisms involving different neural circuits produce similar performance in sensory-motor tasks. In biological networks, all learning mechanisms may complement one another, accelerating the learning capabilities of animals. Furthermore, this also highlights the resilience and redundancy in biological systems.
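The reward/punishment weight adjustment described above can be caricatured with a two-state, two-action racket task: the synapse that drove an action is strengthened on reward and weakened on punishment. This is a minimal sketch of reward-modulated learning, not the paper's spiking model; states, actions, and parameters are invented for illustration:

```python
import numpy as np

def train(episodes=2000, lr=0.1, seed=1):
    """Reward-modulated learning: credit or blame the weights behind each chosen action."""
    rng = np.random.default_rng(seed)
    w = np.zeros((2, 2))             # w[state, action]; state 0 = ball above, 1 = ball below
    for _ in range(episodes):
        s = rng.integers(2)          # where is the ball this episode?
        p = np.exp(w[s]) / np.exp(w[s]).sum()  # softmax action selection
        a = rng.choice(2, p=p)       # action 0 = move racket up, 1 = move down
        r = 1.0 if a == s else -1.0  # reward if the racket moves toward the ball
        w[s, a] += lr * r            # strengthen/weaken the synapse that drove the action
    return w

w = train()
policy = w.argmax(axis=1)  # greedy action per state after training
```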
Affiliation(s)
- Haroon Anwar
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Simon Caby
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Salvador Dura-Bernal
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Dept. Physiology & Pharmacology, State University of New York Downstate, Brooklyn, New York, United States of America
- David D’Onofrio
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Daniel Hasegan
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Matt Deible
- University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Sara Grunblatt
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- George L. Chadderdon
- Dept. Physiology & Pharmacology, State University of New York Downstate, Brooklyn, New York, United States of America
- Cliff C. Kerr
- Dept Physics, University of Sydney, Sydney, Australia
- Institute for Disease Modeling, Global Health Division, Bill & Melinda Gates Foundation, Seattle, Washington, United States of America
- Peter Lakatos
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Dept. Psychiatry, NYU Grossman School of Medicine, New York, New York, United States of America
- William W. Lytton
- Dept. Physiology & Pharmacology, State University of New York Downstate, Brooklyn, New York, United States of America
- Dept Neurology, Kings County Hospital Center, Brooklyn, New York, United States of America
- Hananel Hazan
- Dept of Biology, Tufts University, Medford, Massachusetts, United States of America
- Samuel A. Neymotin
- Center for Biomedical Imaging and Neuromodulation, Nathan Kline Institute for Psychiatric Research, Orangeburg, New York, United States of America
- Dept. Psychiatry, NYU Grossman School of Medicine, New York, New York, United States of America
5. Azimirad V, Ramezanlou MT, Sotubadi SV, Janabi-Sharifi F. A consecutive hybrid spiking-convolutional (CHSC) neural controller for sequential decision making in robots. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.11.097.
6. Li K, Príncipe JC. Biologically-Inspired Pulse Signal Processing for Intelligence at the Edge. Front Artif Intell 2021; 4:568384. PMID: 34568811; PMCID: PMC8457635; DOI: 10.3389/frai.2021.568384.
Abstract
There is an ever-growing mismatch between the proliferation of data-intensive, power-hungry deep learning solutions in the machine learning (ML) community and the need for agile, portable solutions in resource-constrained devices, particularly for intelligence at the edge. In this paper, we present a fundamentally novel approach that leverages data-driven intelligence with biologically-inspired efficiency. The proposed Sparse Embodiment Neural-Statistical Architecture (SENSA) decomposes the learning task into two distinct phases: a training phase and a hardware embedment phase where prototypes are extracted from the trained network and used to construct fast, sparse embodiment for hardware deployment at the edge. Specifically, we propose the Sparse Pulse Automata via Reproducing Kernel (SPARK) method, which first constructs a learning machine in the form of a dynamical system using energy-efficient spike or pulse trains, commonly used in neuroscience and neuromorphic engineering, then extracts a rule-based solution in the form of automata or lookup tables for rapid deployment in edge computing platforms. We propose to use the theoretically-grounded unifying framework of the Reproducing Kernel Hilbert Space (RKHS) to provide interpretable, nonlinear, and nonparametric solutions, compared to the typical neural network approach. In kernel methods, the explicit representation of the data is of secondary nature, allowing the same algorithm to be used for different data types without altering the learning rules. To showcase SPARK’s capabilities, we carried out the first proof-of-concept demonstration on the task of isolated-word automatic speech recognition (ASR) or keyword spotting, benchmarked on the TI-46 digit corpus. Together, these energy-efficient and resource-conscious techniques will bring advanced machine learning solutions closer to the edge.
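As a concrete example of the RKHS view of pulse trains, a common spike-train kernel sums a Laplacian kernel over all pairs of spike times, yielding a valid inner product between trains without any explicit vector representation of the data. This is a generic cross-intensity-style construction for illustration, not necessarily the kernel used in SPARK, and tau is arbitrary:

```python
import numpy as np

def spike_kernel(s, t, tau=0.01):
    """Inner product of two spike trains in an RKHS (Laplacian kernel on spike-time pairs)."""
    diff = np.subtract.outer(np.asarray(s, float), np.asarray(t, float))
    return float(np.exp(-np.abs(diff) / tau).sum())

a = [0.010, 0.052, 0.100]        # spike times in seconds
b = [0.012, 0.055]
similarity = spike_kernel(a, b)  # large when spikes co-occur within ~tau
```

Because the kernel is positive definite, standard kernel machines (regression, classification) can operate directly on spike trains, which is the sense in which "the explicit representation of the data is of secondary nature."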
Affiliation(s)
- Kan Li
- Computational NeuroEngineering Laboratory (CNEL), Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, United States
- José C Príncipe
- Computational NeuroEngineering Laboratory (CNEL), Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, United States
7. Romeni S, Zoccolan D, Micera S. A machine learning framework to optimize optic nerve electrical stimulation for vision restoration. Patterns (N Y) 2021; 2:100286. PMID: 34286301; PMCID: PMC8276026; DOI: 10.1016/j.patter.2021.100286.
Abstract
Optic nerve electrical stimulation is a promising technique to restore vision in blind subjects. Machine learning methods can be used to select effective stimulation protocols, but they require a model of the stimulated system to generate enough training data. Here, we use a convolutional neural network (CNN) as a model of the ventral visual stream. A genetic algorithm drives the activation of the units in a layer of the CNN representing a cortical region toward a desired pattern, by refining the activation imposed at a layer representing the optic nerve. To simulate the pattern of activation elicited by the sites of an electrode array, a simple point-source model was introduced and its optimization process was investigated for static and dynamic scenes. Psychophysical data confirm that our stimulation evolution framework produces results compatible with natural vision. Machine learning approaches could become a very powerful tool to optimize and personalize neuroprosthetic systems.
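The optimization loop described above, a genetic algorithm refining an input-layer activation so that a downstream layer matches a desired pattern, can be sketched with a small random network standing in for the CNN. All sizes and hyperparameters are illustrative, and the random map is a stand-in for the trained ventral-stream model, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 16))  # stand-in for a trained visual pathway

def stream(x):
    """Map an 'optic nerve' activation x to a 'cortical' activation pattern."""
    return np.tanh(A @ x)

target = stream(rng.standard_normal(16))  # desired cortical pattern (reachable by construction)

def ga(loss, dim, pop=40, elite=10, sigma=0.2, gens=200):
    """Minimal elitist genetic algorithm: keep the best, refill with mutated copies."""
    P = rng.standard_normal((pop, dim))
    for _ in range(gens):
        order = np.argsort([loss(x) for x in P])
        elites = P[order[:elite]]
        parents = elites[rng.integers(elite, size=pop - elite)]
        P = np.vstack([elites, parents + sigma * rng.standard_normal(parents.shape)])
    return P[np.argmin([loss(x) for x in P])]

best = ga(lambda x: np.sum((stream(x) - target) ** 2), dim=16)
```

In the paper's setting the optimized vector would additionally be constrained by a point-source electrode model rather than being free to take any value.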
Affiliation(s)
- Simone Romeni
- Bertarelli Foundation Chair in Translational NeuroEngineering, Center for Neuroprosthetics and Institute of Bioengineering, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Davide Zoccolan
- Visual Neuroscience Lab, International School for Advanced Studies (SISSA), Trieste, Italy
- Silvestro Micera
- Bertarelli Foundation Chair in Translational NeuroEngineering, Center for Neuroprosthetics and Institute of Bioengineering, École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- The Biorobotics Institute and Department of Excellence in Robotics and AI, Scuola Superiore Sant’Anna, Pontedera, Italy
8. Subramaniam S, Blake DT, Constantinidis C. Cholinergic Deep Brain Stimulation for Memory and Cognitive Disorders. J Alzheimers Dis 2021; 83:491-503. PMID: 34334401; PMCID: PMC8543284; DOI: 10.3233/JAD-210425.
Abstract
Memory and cognitive impairment as sequelae of neurodegeneration in Alzheimer's disease and age-related dementia are major health issues with increasing social and economic burden. Deep brain stimulation (DBS) has emerged as a potential treatment to slow or halt progression of the disease state. The selection of stimulation target is critical, and structures that have been targeted for memory and cognitive enhancement include the Papez circuit, structures projecting to the frontal lobe such as the ventral internal capsule, and the cholinergic forebrain. Recent human clinical and animal model results imply that DBS of the nucleus basalis of Meynert can induce a therapeutic modulation of neuronal activity. Benefits include enhanced activity across the cortical mantle, and potential for amelioration of neuropathological mechanisms associated with Alzheimer's disease. The choice of stimulation parameters is also critical. High-frequency, continuous stimulation is used for movement disorders as a way of inhibiting their output; however, no such overexcitation has been hypothesized in Alzheimer's disease, and a lower stimulation frequency or intermittent patterns of stimulation (periods of stimulation interleaved with periods of no stimulation) are likely to be more effective for the cholinergic forebrain. Efficacy and long-term tolerance in human patients remain open questions, though the cumulative experience gained by DBS for movement disorders provides assurance for the safety of the procedure.
Affiliation(s)
- Saravanan Subramaniam
- Department of Neurobiology & Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, USA
- David T. Blake
- Brain and Behavior Discovery Institute, Department of Neurology, Medical College of Georgia, Augusta University, Augusta, GA, USA
- Christos Constantinidis
- Department of Neurobiology & Anatomy, Wake Forest School of Medicine, Winston-Salem, NC, USA
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Neuroscience Program, Vanderbilt University, Nashville, TN, USA
- Department of Ophthalmology and Visual Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
9. Kumaravelu K, Tomlinson T, Callier T, Sombeck J, Bensmaia SJ, Miller LE, Grill WM. A comprehensive model-based framework for optimal design of biomimetic patterns of electrical stimulation for prosthetic sensation. J Neural Eng 2020; 17:046045. PMID: 32759488; PMCID: PMC8559728; DOI: 10.1088/1741-2552/abacd8.
Abstract
OBJECTIVE: Touch and proprioception are essential to motor function as shown by the movement deficits that result from the loss of these senses, e.g. due to neuropathy of sensory nerves. To achieve a high-performance brain-controlled prosthetic arm/hand thus requires the restoration of somatosensation, perhaps through intracortical microstimulation (ICMS) of somatosensory cortex (S1). The challenge is to generate patterns of neuronal activation that evoke interpretable percepts. We present a framework to design optimal spatiotemporal patterns of ICMS (STIM) that evoke naturalistic patterns of neuronal activity and demonstrate performance superior to four previous approaches.
APPROACH: We recorded multiunit activity from S1 during a center-out reach task (from proprioceptive neurons in Brodmann's area 2) and during application of skin indentations (from cutaneous neurons in Brodmann's area 1). We implemented a computational model of a cortical hypercolumn and used a genetic algorithm to design STIM that evoked patterns of model neuron activity that mimicked their experimentally-measured counterparts. Finally, from the ICMS patterns, the evoked neuronal activity, and the stimulus parameters that gave rise to it, we trained a recurrent neural network (RNN) to learn the mapping function between the physical stimulus and the biomimetic stimulation pattern, i.e. the sensory encoder to be integrated into a neuroprosthetic device.
MAIN RESULTS: We identified ICMS patterns that evoked simulated responses that closely approximated the measured responses for neurons within 50 µm of the electrode tip. The RNN-based sensory encoder generalized well to untrained limb movements or skin indentations. STIM designed using the model-based optimization approach outperformed STIM designed using existing linear and nonlinear mappings.
SIGNIFICANCE: The proposed framework produces an encoder that converts limb state or patterns of pressure exerted onto the prosthetic hand into STIM that evoke naturalistic patterns of neuronal activation.
Collapse
Affiliation(s)
- Thierri Callier
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL
- Joseph Sombeck
- Department of Biomedical Engineering, Northwestern University, Chicago, IL
- Sliman J. Bensmaia
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL
- Lee E. Miller
- Department of Biomedical Engineering, Northwestern University, Chicago, IL
- Department of Physical Medicine and Rehabilitation, Northwestern University, Chicago, IL
- Department of Physiology, Northwestern University, Chicago, IL
- Warren M. Grill
- Department of Biomedical Engineering, Duke University, Durham, NC
- Department of Electrical and Computer Engineering, Duke University, Durham, NC
- Department of Neurobiology, Duke University, Durham, NC
- Department of Neurosurgery, Duke University, Durham, NC
10. Ciba M, Bestel R, Nick C, de Arruda GF, Peron T, Comin CH, Costa LDF, Rodrigues FA, Thielemann C. Comparison of Different Spike Train Synchrony Measures Regarding Their Robustness to Erroneous Data From Bicuculline-Induced Epileptiform Activity. Neural Comput 2020; 32:887-911. PMID: 32187002; DOI: 10.1162/neco_a_01277.
Abstract
As synchronized activity is associated with basic brain functions and pathological states, spike train synchrony has become an important measure to analyze experimental neuronal data. Many measures of spike train synchrony have been proposed, but there is no gold standard allowing for comparison of results from different experiments. This work aims to provide guidance on which synchrony measure is best suited to quantify the effect of epileptiform-inducing substances (e.g., bicuculline, BIC) in in vitro neuronal spike train data. Spike train data from recordings are likely to suffer from erroneous spike detection, such as missed spikes (false negative) or noise (false positive). Therefore, different timescale-dependent (cross-correlation, mutual information, spike time tiling coefficient) and timescale-independent (Spike-contrast, phase synchronization (PS), A-SPIKE-synchronization, A-ISI-distance, ARI-SPIKE-distance) synchrony measures were compared in terms of their robustness to erroneous spike trains. For this purpose, erroneous spike trains were generated by randomly adding (false positive) or deleting (false negative) spikes (in silico manipulated data) from experimental data. In addition, experimental data were analyzed using different spike detection threshold factors in order to confirm the robustness of the synchrony measures. All experimental data were recorded from cortical neuronal networks on microelectrode array chips, which show epileptiform activity induced by the substance BIC. As a result of the in silico manipulated data, Spike-contrast was the only measure that was robust to false-negative as well as false-positive spikes. Analyzing the experimental data set revealed that all measures were able to capture the effect of BIC in a statistically significant way, with Spike-contrast showing the highest statistical significance even at low spike detection thresholds. 
In summary, we suggest using Spike-contrast to complement established synchrony measures because it is timescale independent and robust to erroneous spike trains.
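The in-silico manipulation used in the study, randomly deleting spikes (false negatives) and injecting noise spikes (false positives), is straightforward to reproduce. Below it is paired with a simple binned cross-correlation as a stand-in timescale-dependent synchrony measure; this is not the paper's Spike-contrast implementation, and the train statistics and error rates are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def corrupt(spikes, p_del=0.1, p_add=0.1, t_max=10.0):
    """In-silico manipulation: drop spikes (false negatives), inject noise (false positives)."""
    kept = spikes[rng.random(len(spikes)) > p_del]
    extra = rng.uniform(0, t_max, rng.binomial(len(spikes), p_add))
    return np.sort(np.concatenate([kept, extra]))

def binned_correlation(a, b, t_max=10.0, bin_size=0.05):
    """Timescale-dependent synchrony: Pearson correlation of binned spike counts."""
    edges = np.arange(0, t_max + bin_size, bin_size)
    ca, cb = np.histogram(a, edges)[0], np.histogram(b, edges)[0]
    return np.corrcoef(ca, cb)[0, 1]

train = np.sort(rng.uniform(0, 10.0, 200))         # one synthetic spike train
clean = binned_correlation(train, train)           # identical trains: synchrony = 1
noisy = binned_correlation(train, corrupt(train))  # erroneous copy: synchrony degrades
```

Sweeping `p_del` and `p_add` and watching how each measure's value decays is exactly the robustness comparison the study performs across its eight synchrony measures.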
Affiliation(s)
- Manuel Ciba
- Biomems Lab, University of Applied Science Aschaffenburg, 63743 Aschaffenburg, Germany
- Robert Bestel
- Biomems Lab, University of Applied Science Aschaffenburg, 63743 Aschaffenburg, Germany
- Christoph Nick
- Biomems Lab, University of Applied Science Aschaffenburg, 63743 Aschaffenburg, Germany
- Thomas Peron
- Institute of Mathematics and Computer Science, University of São Paulo, São Carlos SP 13566-590, Brazil
- César Henrique Comin
- Department of Computer Science, Federal University of São Carlos, São Carlos SP 13565-905, Brazil
- Christiane Thielemann
- Biomems Lab, University of Applied Science Aschaffenburg, 63743 Aschaffenburg, Germany
11. Couraud M, Cattaert D, Paclet F, Oudeyer PY, de Rugy A. Model and experiments to optimize co-adaptation in a simplified myoelectric control system. J Neural Eng 2019; 15:026006. PMID: 28832013; DOI: 10.1088/1741-2552/aa87cf.
Abstract
OBJECTIVE To compensate for a limb lost to amputation, myoelectric prostheses use surface electromyography (EMG) from the remaining muscles to control the prosthesis. Despite considerable progress, myoelectric controls remain markedly different from the way we normally control movements and require intense user adaptation. To overcome this, our goal is to explore concurrent machine co-adaptation techniques that were developed in the field of brain-machine interfaces and are beginning to be used in myoelectric controls. APPROACH We combined a simplified myoelectric control with a perturbation for which human adaptation is well characterized and modeled, in order to explore co-adaptation settings in a principled manner. RESULTS First, we reproduced results obtained in a classical visuomotor rotation paradigm in our simplified myoelectric context, where we rotated the muscle pulling vectors used to reconstruct wrist force from EMG. Then, a model of human adaptation in response to directional error was used to simulate various co-adaptation settings, in which perturbations and machine co-adaptation are both applied to the muscle pulling vectors. These simulations established that a relatively low machine co-adaptation gain, which minimizes final errors, generates slow and incomplete adaptation, while higher gains increase the adaptation rate but also increase errors by amplifying noise. After experimental verification on real subjects, we tested a variable gain that combines the advantages of both, and implemented it with directionally tuned neurons similar to those used to model human adaptation. This enables machine co-adaptation to locally improve myoelectric control and to absorb more challenging perturbations. SIGNIFICANCE The simplified context used here enabled us to explore co-adaptation settings in both simulations and experiments, and to raise important considerations such as the need for a variable gain encoded locally.
The benefits and limits of extending this approach to more complex and functional myoelectric contexts are discussed.
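The gain trade-off reported in the RESULTS can be caricatured with a trial-by-trial toy model in which a directional error is jointly absorbed by human adaptation and by a machine co-adaptation step. This is not the paper's actual model; the linear error-absorption dynamics, rates, and names are illustrative assumptions.

```python
def simulate_coadaptation(perturb=30.0, a_human=0.2, g_machine=0.1, n_trials=100):
    """Toy co-adaptation: on each trial the residual directional error (deg)
    is reduced both by human adaptation (rate a_human) and by a machine
    counter-rotation update (gain g_machine)."""
    human = machine = 0.0
    errors = []
    for _ in range(n_trials):
        err = perturb - human - machine   # residual directional error
        human += a_human * err            # human visuomotor adaptation step
        machine += g_machine * err        # machine co-adaptation step
        errors.append(err)
    return errors

slow = simulate_coadaptation(g_machine=0.0)   # human adapts alone
fast = simulate_coadaptation(g_machine=0.2)   # machine helps absorb error
```

In this caricature the error decays geometrically with factor `1 - a_human - g_machine`, so a higher machine gain speeds convergence; the noise-amplification downside discussed in the abstract would require adding trial-to-trial noise to `err`.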
Affiliation(s)
- M Couraud
- Institut de Neurosciences Cognitives et Intégratives d'Aquitaine, CNRS UMR 5287, Université de Bordeaux, France
12
Dura-Bernal S, Suter BA, Gleeson P, Cantarelli M, Quintana A, Rodriguez F, Kedziora DJ, Chadderdon GL, Kerr CC, Neymotin SA, McDougal RA, Hines M, Shepherd GMG, Lytton WW. NetPyNE, a tool for data-driven multiscale modeling of brain circuits. eLife 2019; 8:e44494. [PMID: 31025934 PMCID: PMC6534378 DOI: 10.7554/elife.44494] [Citation(s) in RCA: 68] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2018] [Accepted: 04/25/2019] [Indexed: 12/22/2022] Open
Abstract
Biophysical modeling of neuronal networks helps to integrate and interpret rapidly growing and disparate experimental datasets at multiple scales. The NetPyNE tool (www.netpyne.org) provides both programmatic and graphical interfaces to develop data-driven multiscale network models in NEURON. NetPyNE clearly separates model parameters from implementation code. Users provide specifications at a high level via a standardized declarative language, for example connectivity rules, to create millions of cell-to-cell connections. NetPyNE then enables users to generate the NEURON network, run efficiently parallelized simulations, optimize and explore network parameters through automated batch runs, and use built-in functions for visualization and analysis - connectivity matrices, voltage traces, spike raster plots, local field potentials, and information theoretic measures. NetPyNE also facilitates model sharing by exporting and importing standardized formats (NeuroML and SONATA). NetPyNE is already being used to teach computational neuroscience students and by modelers to investigate brain regions and phenomena.
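The declarative style described above — high-level rules such as "connect population E to population I with probability 0.3" expanded into millions of explicit cell-to-cell connections — can be illustrated schematically. This sketch uses a made-up specification format, not NetPyNE's actual API; see www.netpyne.org for the real declarative language.

```python
import random

# Hypothetical, NetPyNE-flavored declarative specification: populations and a
# probabilistic connectivity rule, kept separate from the expansion code.
net_spec = {
    "pops": {"E": 80, "I": 20},
    "conns": [{"pre": "E", "post": "I", "prob": 0.3, "weight": 0.01}],
}

def expand_connections(spec, rng=None):
    """Expand declarative rules into explicit (pre_gid, post_gid, weight)
    connections, assigning global cell IDs per population."""
    rng = rng or random.Random(1)
    gids, start = {}, 0
    for pop, n in spec["pops"].items():
        gids[pop] = range(start, start + n)
        start += n
    conns = []
    for rule in spec["conns"]:
        for pre in gids[rule["pre"]]:
            for post in gids[rule["post"]]:
                if rng.random() < rule["prob"]:
                    conns.append((pre, post, rule["weight"]))
    return conns

conns = expand_connections(net_spec)
```

Separating the specification dictionary from the expansion code is the key design idea: the same rules can later be re-expanded onto a different backend, exported, or parameter-swept without touching implementation code.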
Affiliation(s)
- Salvador Dura-Bernal
- Department of Physiology & Pharmacology, State University of New York Downstate Medical Center, Brooklyn, United States
- Benjamin A Suter
- Department of Physiology, Northwestern University, Chicago, United States
- Padraig Gleeson
- Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Facundo Rodriguez
- Department of Physiology & Pharmacology, State University of New York Downstate Medical Center, Brooklyn, United States
- MetaCell LLC, Boston, United States
- David J Kedziora
- Complex Systems Group, School of Physics, University of Sydney, Sydney, Australia
- George L Chadderdon
- Department of Physiology & Pharmacology, State University of New York Downstate Medical Center, Brooklyn, United States
- Cliff C Kerr
- Complex Systems Group, School of Physics, University of Sydney, Sydney, Australia
- Samuel A Neymotin
- Department of Physiology & Pharmacology, State University of New York Downstate Medical Center, Brooklyn, United States
- Nathan Kline Institute for Psychiatric Research, Orangeburg, United States
- Robert A McDougal
- Department of Neuroscience and School of Medicine, Yale University, New Haven, United States
- Center for Medical Informatics, Yale University, New Haven, United States
- Michael Hines
- Department of Neuroscience and School of Medicine, Yale University, New Haven, United States
- William W Lytton
- Department of Physiology & Pharmacology, State University of New York Downstate Medical Center, Brooklyn, United States
- Department of Neurology, Kings County Hospital, Brooklyn, United States
13
Li K, Príncipe JC. Biologically-Inspired Spike-Based Automatic Speech Recognition of Isolated Digits Over a Reproducing Kernel Hilbert Space. Front Neurosci 2018; 12:194. [PMID: 29666568 PMCID: PMC5891646 DOI: 10.3389/fnins.2018.00194] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2017] [Accepted: 03/12/2018] [Indexed: 11/13/2022] Open
Abstract
This paper presents a novel real-time dynamic framework for quantifying time-series structure in spoken words using spikes. Audio signals are converted into multi-channel spike trains using a biologically-inspired leaky integrate-and-fire (LIF) spike generator. These spike trains are mapped into a function space of infinite dimension, i.e., a Reproducing Kernel Hilbert Space (RKHS), using point-process kernels, where a state-space model learns the dynamics of the multidimensional spike input using gradient descent learning. This kernelized recurrent system is very parsimonious and achieves the necessary memory depth via feedback of its internal states when trained discriminatively, utilizing the full context of the phoneme sequence. A main advantage of modeling nonlinear dynamics using state-space trajectories in the RKHS is that it imposes no restriction on the relationship between the exogenous input and its internal state. We are free to choose the input representation with an appropriate kernel, and changing the kernel impacts neither the system nor the learning algorithm. Moreover, we show that this novel framework can outperform both traditional hidden Markov model (HMM) speech processing and neuromorphic implementations based on spiking neural networks (SNNs), yielding accurate and ultra-low power word spotters. As a proof of concept, we demonstrate its capabilities using the benchmark TI-46 digit corpus for isolated-word automatic speech recognition (ASR) or keyword spotting. Compared to an HMM using a Mel-frequency cepstral coefficient (MFCC) front-end without time-derivatives, our MFCC-KAARMA offered improved performance. For the spike-train front-end, spike-KAARMA also outperformed state-of-the-art SNN solutions. Furthermore, compared to MFCCs, spike trains provided enhanced noise robustness in certain low signal-to-noise ratio (SNR) regimes.
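A minimal leaky integrate-and-fire encoder of the kind described above — a leaky membrane integrates the input and emits a spike on threshold crossing — might look like this. The parameter values and names are illustrative assumptions, not those of the paper's multi-channel front-end.

```python
import math

def lif_spike_train(signal, dt=0.001, tau=0.02, threshold=1.0, gain=100.0):
    """Leaky integrate-and-fire encoder: integrate input x with leak time
    constant tau (Euler step dt); emit a spike time and reset v at threshold."""
    v, spikes = 0.0, []
    for i, x in enumerate(signal):
        v += dt * (-v / tau + gain * x)   # leaky integration of input current
        if v >= threshold:
            spikes.append(i * dt)          # record spike time (s)
            v = 0.0                        # reset membrane state
    return spikes

# Encode one second of a rectified 5 Hz tone into a spike train
sig = [max(0.0, math.sin(2 * math.pi * 5 * i * 0.001)) for i in range(1000)]
spikes = lif_spike_train(sig)
```

A filterbank front-end would run one such unit per frequency channel, producing the multi-channel spike trains that the point-process kernels then map into the RKHS.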
Affiliation(s)
- Kan Li
- Computational NeuroEngineering Laboratory, Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, United States
- José C Príncipe
- Computational NeuroEngineering Laboratory, Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, United States
14
Ciba M, Isomura T, Jimbo Y, Bahmer A, Thielemann C. Spike-contrast: A novel time scale independent and multivariate measure of spike train synchrony. J Neurosci Methods 2017; 293:136-143. [PMID: 28935422 DOI: 10.1016/j.jneumeth.2017.09.008] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2017] [Revised: 09/14/2017] [Accepted: 09/15/2017] [Indexed: 11/28/2022]
Abstract
BACKGROUND Synchrony is thought to be a fundamental feature of neuronal networks. In order to quantify synchrony between spike trains, various synchrony measures have been developed. Most of them are time-scale dependent and thus require the setting of an appropriate time scale. Recently, alternative methods have been developed, such as the time-scale independent SPIKE-distance by Kreuz et al. NEW METHOD In this study, a novel time-scale independent spike train synchrony measure called Spike-contrast is proposed. The algorithm is based on the temporal "contrast" (activity vs. non-activity in certain temporal bins) and provides not only a single synchrony value, but also a synchrony curve as a function of the bin size. RESULTS For most test data sets, synchrony values obtained with Spike-contrast are highly correlated with those of the SPIKE-distance (Spearman correlation of 0.99). Correlation was lower for data containing multiple time scales (Spearman correlation of 0.89). When analyzing large sets of data, Spike-contrast performed faster. COMPARISON WITH EXISTING METHODS Spike-contrast is compared to the SPIKE-distance algorithm. The test data consisted of artificial spike trains with various levels of synchrony, including Poisson spike trains and bursts, spike trains from simulated Izhikevich neuronal networks, and bursts made of smaller bursts (sub-bursts). CONCLUSIONS The high correlation of Spike-contrast with the established SPIKE-distance for most test data suggests the suitability of the proposed measure. Both measures are complementary, as SPIKE-distance provides a synchrony profile over time, whereas Spike-contrast provides a synchrony curve over bin size.
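The core idea — scanning a bin-based "contrast" statistic across bin sizes to obtain a synchrony curve — can be caricatured as follows. This is a deliberate simplification for illustration, not the published Spike-contrast algorithm: here "synchrony at bin size b" is just the fraction of active bins in which every train is active.

```python
def synchrony_curve(trains, t_max, bin_sizes):
    """Simplified bin-based synchrony curve: for each bin size, the fraction
    of bins containing any spike in which *all* trains contain a spike."""
    curve = []
    for b in bin_sizes:
        n_bins = max(1, int(t_max / b))
        active = [set(min(int(t / b), n_bins - 1) for t in tr) for tr in trains]
        any_active = set().union(*active)          # bins with activity anywhere
        all_active = set.intersection(*active)     # bins where every train fires
        curve.append(len(all_active) / len(any_active) if any_active else 0.0)
    return curve

sync = [[0.1, 0.5, 0.9], [0.11, 0.52, 0.88]]   # near-coincident trains
desync = [[0.1, 0.5, 0.9], [0.3, 0.7, 0.99]]   # offset trains
bins = [0.05, 0.1, 0.2]
curve_sync = synchrony_curve(sync, 1.0, bins)
curve_desync = synchrony_curve(desync, 1.0, bins)
```

As in the paper, a single synchrony value could then be taken from the curve (e.g., its maximum over bin sizes), while the full curve shows at which time scale the synchrony lives.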
Affiliation(s)
- Manuel Ciba
- BioMEMS Lab, University of Applied Sciences Aschaffenburg, 63743 Aschaffenburg, Germany
- Takuya Isomura
- Department of Human and Engineered Environmental Studies, Graduate School of Frontier Sciences, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656, Japan
- Yasuhiko Jimbo
- Department of Precision Engineering, School of Engineering, The University of Tokyo, Bunkyo-ku, Tokyo 113-8656, Japan
- Andreas Bahmer
- University ENT-Clinic Würzburg, Theoretical and Experimental Neurophysiology, 97080 Würzburg, Germany
- Christiane Thielemann
- BioMEMS Lab, University of Applied Sciences Aschaffenburg, 63743 Aschaffenburg, Germany
15
Satuvuori E, Mulansky M, Bozanic N, Malvestio I, Zeldenrust F, Lenk K, Kreuz T. Measures of spike train synchrony for data with multiple time scales. J Neurosci Methods 2017; 287:25-38. [PMID: 28583477 PMCID: PMC5508708 DOI: 10.1016/j.jneumeth.2017.05.028] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2017] [Revised: 05/04/2017] [Accepted: 05/30/2017] [Indexed: 10/29/2022]
Abstract
BACKGROUND Measures of spike train synchrony are widely used in both experimental and computational neuroscience. Time-scale independent and parameter-free measures, such as the ISI-distance, the SPIKE-distance and SPIKE-synchronization, are preferable to time-scale parametric measures, since by adapting to the local firing rate they take into account all the time scales of a given dataset. NEW METHOD In data containing multiple time scales (e.g., regular spiking and bursts) one is typically less interested in the smallest time scales, and a more adaptive approach is needed. Here we propose the A-ISI-distance, the A-SPIKE-distance and A-SPIKE-synchronization, which generalize the original measures by considering the local time scales relative to the global ones. For the A-SPIKE-distance we also introduce a rate-independent extension called the RIA-SPIKE-distance, which focuses specifically on spike timing. RESULTS The adaptive generalizations A-ISI-distance and A-SPIKE-distance make it possible to disregard spike time differences that are not relevant on a more global scale. A-SPIKE-synchronization no longer demands an unreasonably high accuracy for spike doublets and coinciding bursts. Finally, the RIA-SPIKE-distance proves to be independent of rate ratios between spike trains. COMPARISON WITH EXISTING METHODS We find that, compared to the original versions, the A-ISI-distance and the A-SPIKE-distance yield improvements for spike trains containing different time scales without exhibiting any unwanted side effects in other examples. A-SPIKE-synchronization matches spikes more efficiently than SPIKE-synchronization. CONCLUSIONS With these proposals we have completed the picture, since we now provide adaptive generalized measures that are sensitive to firing rate only (A-ISI-distance), to timing only (RIA-SPIKE-distance), and to both at the same time (A-SPIKE-distance).
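For reference, the plain (non-adaptive) ISI-distance that these measures generalize can be sketched as a sampled time average of the normalized difference between the two trains' current interspike intervals. This is a simplified illustration under that definition, not the authors' implementation (which integrates the profile exactly rather than sampling it).

```python
import bisect

def isi_at(train, t):
    """Current interspike interval of a sorted spike train at time t,
    or None if t lies before the first or after the last spike."""
    i = bisect.bisect_right(train, t)
    if i == 0 or i == len(train):
        return None
    return train[i] - train[i - 1]

def isi_distance(train1, train2, t0, t1, n_samples=1000):
    """Sampled ISI-distance: time average of |x1 - x2| / max(x1, x2),
    where x1, x2 are the current interspike intervals of the two trains."""
    total = count = 0
    for k in range(n_samples):
        t = t0 + (t1 - t0) * k / n_samples
        x1, x2 = isi_at(train1, t), isi_at(train2, t)
        if x1 and x2:
            total += abs(x1 - x2) / max(x1, x2)
            count += 1
    return total / count if count else 0.0

a = [0.1 * i for i in range(21)]   # 10 Hz regular train
b = [0.2 * i for i in range(11)]   # 5 Hz regular train
```

The adaptive A-ISI-distance replaces the local normalization `max(x1, x2)` with one that also takes the global time scale of the dataset into account, so that sub-millisecond interval differences inside bursts stop dominating the distance.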
Affiliation(s)
- Eero Satuvuori
- Institute for Complex Systems, CNR, Sesto Fiorentino, Italy; Department of Physics and Astronomy, University of Florence, Sesto Fiorentino, Italy; MOVE Research Institute, Department of Human Movement Sciences, Vrije Universiteit Amsterdam, The Netherlands
- Mario Mulansky
- Institute for Complex Systems, CNR, Sesto Fiorentino, Italy
- Nebojsa Bozanic
- Institute for Complex Systems, CNR, Sesto Fiorentino, Italy
- Irene Malvestio
- Institute for Complex Systems, CNR, Sesto Fiorentino, Italy; Department of Physics and Astronomy, University of Florence, Sesto Fiorentino, Italy; Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Fleur Zeldenrust
- Donders Institute for Brain, Cognition and Behaviour, Radboud Universiteit, Nijmegen, The Netherlands
- Kerstin Lenk
- BioMediTech, Tampere University of Technology, Tampere, Finland; DFG-Center for Regenerative Therapies Dresden, Technische Universität Dresden, Dresden, Germany
- Thomas Kreuz
- Institute for Complex Systems, CNR, Sesto Fiorentino, Italy
16
Dura-Bernal S, Neymotin SA, Kerr CC, Sivagnanam S, Majumdar A, Francis JT, Lytton WW. Evolutionary algorithm optimization of biological learning parameters in a biomimetic neuroprosthesis. IBM JOURNAL OF RESEARCH AND DEVELOPMENT 2017; 61:6.1-6.14. [PMID: 29200477 PMCID: PMC5708558 DOI: 10.1147/jrd.2017.2656758] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Biomimetic simulation permits neuroscientists to better understand the complex neuronal dynamics of the brain. Embedding a biomimetic simulation in a closed-loop neuroprosthesis, which can read and write signals from the brain, will permit applications for amelioration of motor, psychiatric, and memory-related brain disorders. Biomimetic neuroprostheses require real-time adaptation to changes in the external environment, thus constituting an example of a dynamic data-driven application system. As model fidelity increases, so does the number of parameters and the complexity of finding appropriate parameter configurations. Instead of adapting synaptic weights via machine learning, we employed major biological learning methods: spike-timing dependent plasticity and reinforcement learning. We optimized the learning metaparameters using evolutionary algorithms, which were implemented in parallel and which used an island model approach to obtain sufficient speed. We employed these methods to train a cortical spiking model to utilize macaque brain activity, indicating a selected target, to drive a virtual musculoskeletal arm with realistic anatomical and biomechanical properties to reach to that target. The optimized system was able to reproduce macaque data from a comparable experimental motor task. These techniques can be used to efficiently tune the parameters of multiscale systems, linking realistic neuronal dynamics to behavior, and thus providing a useful tool for neuroscience and neuroprosthetics.
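The island-model evolutionary optimization mentioned above — independent subpopulations that evolve in parallel and periodically exchange their best individuals — can be sketched on a toy one-parameter problem. Everything here (population sizes, ring migration, truncation selection, the fitness function) is an illustrative assumption, not the paper's configuration.

```python
import random

def evolve_islands(fitness, n_islands=4, pop_size=20, n_gens=50,
                   migrate_every=10, sigma=0.3, rng=None):
    """Toy island-model evolutionary algorithm for maximizing `fitness` over
    one real parameter: each island evolves by Gaussian mutation plus
    truncation selection; islands pass their elite around a ring."""
    rng = rng or random.Random(42)
    islands = [[rng.uniform(-5, 5) for _ in range(pop_size)]
               for _ in range(n_islands)]
    for gen in range(n_gens):
        for isl in islands:
            isl.sort(key=fitness, reverse=True)
            parents = isl[: pop_size // 2]                       # keep best half
            isl[:] = parents + [p + rng.gauss(0, sigma) for p in parents]
        if gen % migrate_every == 0:                              # ring migration
            best = [max(isl, key=fitness) for isl in islands]
            for i, isl in enumerate(islands):
                isl[-1] = best[(i - 1) % n_islands]
    return max((max(isl, key=fitness) for isl in islands), key=fitness)

# Maximize a simple unimodal fitness with its optimum at x = 2
best = evolve_islands(lambda x: -(x - 2.0) ** 2)
```

In the paper's setting, each fitness evaluation is itself a full closed-loop simulation of the spiking model driving the virtual arm, which is why the islands are run in parallel: migration keeps the subpopulations from diverging while letting most generations proceed without synchronization.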
17
Ghazi-Zahedi K, Haeufle DFB, Montúfar G, Schmitt S, Ay N. Evaluating Morphological Computation in Muscle and DC-Motor Driven Models of Hopping Movements. Front Robot AI 2016. [DOI: 10.3389/frobt.2016.00042] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
18
Neymotin SA, Dura-Bernal S, Lakatos P, Sanger TD, Lytton WW. Multitarget Multiscale Simulation for Pharmacological Treatment of Dystonia in Motor Cortex. Front Pharmacol 2016; 7:157. [PMID: 27378922 PMCID: PMC4906029 DOI: 10.3389/fphar.2016.00157] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2016] [Accepted: 05/30/2016] [Indexed: 12/20/2022] Open
Abstract
A large number of physiomic pathologies can produce hyperexcitability in cortex. Depending on severity, cortical hyperexcitability may manifest clinically as a hyperkinetic movement disorder or as epilepsy. We focus here on dystonia, a movement disorder that produces involuntary muscle contractions and involves pathology in multiple brain areas including basal ganglia, thalamus, cerebellum, and sensory and motor cortices. Most research in dystonia has focused on basal ganglia, while much pharmacological treatment is provided directly at muscles to prevent contraction. Motor cortex is another potential target for therapy that exhibits pathological dynamics in dystonia, including heightened activity and altered beta oscillations. We developed a multiscale model of primary motor cortex, ranging from molecular, up to cellular, and network levels, containing 1715 compartmental model neurons with multiple ion channels and intracellular molecular dynamics. We wired the model based on electrophysiological data obtained from mouse motor cortex circuit mapping experiments. We used the model to reproduce patterns of heightened activity seen in dystonia by applying independent random variations in parameters to identify pathological parameter sets. These models demonstrated degeneracy, meaning that there were many ways of obtaining the pathological syndrome. There was no single parameter alteration which would consistently distinguish pathological from physiological dynamics. At higher dimensions in parameter space, we were able to use support vector machines to distinguish the two patterns in different regions of space and thereby trace multitarget routes from dystonic to physiological dynamics. These results suggest the use of in silico models for discovery of multitarget drug cocktails.
Affiliation(s)
- Samuel A Neymotin
- Department Physiology and Pharmacology, SUNY Downstate Medical Center, State University of New YorkBrooklyn, NY, USA; Department Neuroscience, Yale University School of MedicineNew Haven, CT, USA
| | - Salvador Dura-Bernal
- Department Physiology and Pharmacology, SUNY Downstate Medical Center, State University of New York Brooklyn, NY, USA
| | - Peter Lakatos
- Nathan S. Kline Institute for Psychiatric Research Orangeburg, NY, USA
| | - Terence D Sanger
- Department Biomedical Engineering, University of Southern CaliforniaLos Angeles, CA, USA; Division Neurology, Child Neurology and Movement Disorders, Children's Hospital Los AngelesLos Angeles, CA, USA
| | - William W Lytton
- Department Physiology and Pharmacology, SUNY Downstate Medical Center, State University of New YorkBrooklyn, NY, USA; Department Neurology, SUNY Downstate Medical CenterBrooklyn, NY, USA; Department Neurology, Kings County Hospital CenterBrooklyn, NY, USA; The Robert F. Furchgott Center for Neural and Behavioral ScienceBrooklyn, NY, US
| |