1. Timme NM, Ardinger CE, Weir SDC, Zelaya-Escobar R, Kruger R, Lapish CC. Non-consummatory behavior signals predict aversion-resistant alcohol drinking in head-fixed mice. Neuropharmacology 2024; 242:109762. PMID: 37871677; PMCID: PMC10872650; DOI: 10.1016/j.neuropharm.2023.109762.
Abstract
A key facet of alcohol use disorder is continuing to drink alcohol despite negative consequences (so-called "aversion-resistant drinking"). In this study, we sought to assess the degree to which head-fixed mice exhibit aversion-resistant drinking and to leverage behavioral analysis techniques available in the head-fixed preparation to relate non-consummatory behaviors to aversion-resistant drinking. We assessed aversion-resistant drinking in head-fixed female and male C57BL/6J mice. We adulterated 20% (v/v) alcohol with varying concentrations of the bitter tastant quinine to measure the degree to which mice would continue to drink despite this aversive stimulus. We recorded high-resolution video of the mice during head-fixed drinking, tracked body parts with machine vision tools, and analyzed body movements in relation to consumption. Female and male head-fixed mice exhibited heterogeneous levels of aversion-resistant drinking. Additionally, non-consummatory behaviors, such as paw movement and snout movement, were related to the intensity of aversion-resistant drinking. These studies demonstrate that head-fixed mice exhibit aversion-resistant drinking and that non-consummatory behaviors can be used to assess perceived aversiveness in this paradigm. Furthermore, these studies lay the groundwork for future experiments that will utilize advanced electrophysiological techniques to record from large populations of neurons during aversion-resistant drinking to understand the neurocomputational processes that drive this clinically relevant behavior. This article is part of the Special Issue on "PFC circuit function in psychiatric disease and relevant models".
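The analysis above relates tracked non-consummatory movement to consumption. As a rough illustration of that idea only (this is not the authors' pipeline; all arrays and values below are synthetic placeholders), per-trial paw movement can be summarized from frame-by-frame keypoint coordinates and correlated with intake:

```python
# Minimal sketch (not the authors' analysis): summarize tracked keypoint motion
# per trial and relate it to consumption. Assumes x/y keypoint traces shaped
# (n_trials, n_frames) and a per-trial intake measure, all hypothetical here.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_trials, n_frames = 40, 600                     # hypothetical session layout
paw_x = np.cumsum(rng.normal(size=(n_trials, n_frames)), axis=1)
paw_y = np.cumsum(rng.normal(size=(n_trials, n_frames)), axis=1)
intake_ul = rng.uniform(0, 30, size=n_trials)    # hypothetical intake per trial

# Frame-to-frame displacement summed into total paw movement per trial
step = np.hypot(np.diff(paw_x, axis=1), np.diff(paw_y, axis=1))
movement_per_trial = step.sum(axis=1)

rho, p = spearmanr(movement_per_trial, intake_ul)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```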
Affiliation(s)
- Nicholas M Timme
  - Department of Psychology, Indiana University - Purdue University Indianapolis, 402 N. Blackford St, LD 124, Indianapolis, IN, 46202, USA
- Cherish E Ardinger
  - Department of Psychology, Indiana University - Purdue University Indianapolis, 402 N. Blackford St, LD 124, Indianapolis, IN, 46202, USA
- Seth D C Weir
  - Department of Psychology, Indiana University - Purdue University Indianapolis, 402 N. Blackford St, LD 124, Indianapolis, IN, 46202, USA
- Rachel Zelaya-Escobar
  - Department of Psychology, Indiana University - Purdue University Indianapolis, 402 N. Blackford St, LD 124, Indianapolis, IN, 46202, USA
- Rachel Kruger
  - Department of Psychology, Indiana University - Purdue University Indianapolis, 402 N. Blackford St, LD 124, Indianapolis, IN, 46202, USA
- Christopher C Lapish
  - Department of Anatomy, Cell Biology, and Physiology, Indiana University School of Medicine, 635 Barnhill Drive, MSB 5035, Indianapolis, IN, 46202, USA
  - Stark Neuroscience Institute, Indiana University School of Medicine, 320 W. 15th St, NB 414, Indianapolis, IN, 46202, USA
2. Zhang A, Zador AM. Neurons in the primary visual cortex of freely moving rats encode both sensory and non-sensory task variables. PLoS Biol 2023; 21:e3002384. PMID: 38048367; PMCID: PMC10721203; DOI: 10.1371/journal.pbio.3002384.
Abstract
Neurons in primary visual cortex (area V1) are strongly driven by both sensory stimuli and non-sensory events. However, although the representation of sensory stimuli has been well characterized, much less is known about the representation of non-sensory events. Here, we characterize the specificity and organization of non-sensory representations in rat V1 during a freely moving visual decision task. We find that single neurons encode diverse combinations of task features simultaneously and across task epochs. Despite heterogeneity at the level of single neuron response patterns, both visual and nonvisual task variables could be reliably decoded from small neural populations (5 to 40 units) throughout a trial. Interestingly, in animals trained to make an auditory decision following passive observation of a visual stimulus, some but not all task features could also be decoded from V1 activity. Our results support the view that even in V1-the earliest stage of the cortical hierarchy-bottom-up sensory information may be combined with top-down non-sensory information in a task-dependent manner.
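To make the population-decoding result concrete, the sketch below decodes a binary task variable from spike counts of subsampled populations of 5-40 units with a cross-validated linear classifier. The data are synthetic and the classifier choice is an assumption for illustration, not the authors' code or recordings:

```python
# Illustrative sketch only (not the authors' pipeline): decode a binary task
# variable from spike counts of small neural populations of increasing size.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_units = 400, 60
choice = rng.integers(0, 2, n_trials)                          # e.g., left vs right
counts = rng.poisson(5.0, size=(n_trials, n_units)).astype(float)
counts += 1.5 * choice[:, None] * (rng.random(n_units) < 0.3)  # weak tuning in a subset

for pop_size in (5, 10, 20, 40):
    units = rng.choice(n_units, pop_size, replace=False)       # random subpopulation
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          counts[:, units], choice, cv=5).mean()
    print(f"{pop_size:2d} units: decoding accuracy = {acc:.2f}")
```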
Affiliation(s)
- Anqi Zhang
  - Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, United States of America
  - Cold Spring Harbor Laboratory School of Biological Sciences, Cold Spring Harbor, New York, United States of America
- Anthony M. Zador
  - Cold Spring Harbor Laboratory, Cold Spring Harbor, New York, United States of America
3. Timme NM, Ardinger CE, Weir SDC, Zelaya-Escobar R, Kruger R, Lapish CC. Non-Consummatory Behavior Signals Predict Aversion-Resistant Alcohol Drinking in Head-Fixed Mice. bioRxiv 2023:2023.06.20.545767. PMID: 37873153; PMCID: PMC10592797; DOI: 10.1101/2023.06.20.545767.
Abstract
A key facet of alcohol use disorder is continuing to drink alcohol despite negative consequences (so-called "aversion-resistant drinking"). In this study, we sought to assess the degree to which head-fixed mice exhibit aversion-resistant drinking and to leverage behavioral analysis techniques available in the head-fixed preparation to relate non-consummatory behaviors to aversion-resistant drinking. We assessed aversion-resistant drinking in head-fixed female and male C57BL/6J mice. We adulterated 20% (v/v) alcohol with varying concentrations of the bitter tastant quinine to measure the degree to which mice would continue to drink despite this aversive stimulus. We recorded high-resolution video of the mice during head-fixed drinking, tracked body parts with machine vision tools, and analyzed body movements in relation to consumption. Female and male head-fixed mice exhibited heterogeneous levels of aversion-resistant drinking. Additionally, non-consummatory behaviors, such as paw movement and snout movement, were related to the intensity of aversion-resistant drinking. These studies demonstrate that head-fixed mice exhibit aversion-resistant drinking and that non-consummatory behaviors can be used to assess perceived aversiveness in this paradigm. Furthermore, these studies lay the groundwork for future experiments that will utilize advanced electrophysiological techniques to record from large populations of neurons during aversion-resistant drinking to understand the neurocomputational processes that drive this clinically relevant behavior.
Affiliation(s)
- Nicholas M. Timme
  - Department of Psychology, Indiana University – Purdue University Indianapolis, 402 N. Blackford St, LD 124, Indianapolis, IN, 46202, USA
- Cherish E. Ardinger
  - Department of Psychology, Indiana University – Purdue University Indianapolis, 402 N. Blackford St, LD 124, Indianapolis, IN, 46202, USA
- Seth D. C. Weir
  - Department of Psychology, Indiana University – Purdue University Indianapolis, 402 N. Blackford St, LD 124, Indianapolis, IN, 46202, USA
- Rachel Zelaya-Escobar
  - Department of Psychology, Indiana University – Purdue University Indianapolis, 402 N. Blackford St, LD 124, Indianapolis, IN, 46202, USA
- Rachel Kruger
  - Department of Psychology, Indiana University – Purdue University Indianapolis, 402 N. Blackford St, LD 124, Indianapolis, IN, 46202, USA
- Christopher C. Lapish
  - Department of Anatomy, Cell Biology, and Physiology, Indiana University School of Medicine, 635 Barnhill Drive, MSB 5035, Indianapolis, IN, 46202, USA
  - Stark Neuroscience Institute, Indiana University School of Medicine, 320 W. 15th St, NB 414, Indianapolis, IN 46202, USA
4. Phadke RA, Wetzel AM, Fournier LA, Sha M, Padró-Luna NM, Cruz-Martín A. REVEALS: An Open Source Multi Camera GUI For Rodent Behavior Acquisition. bioRxiv 2023:2023.08.22.554365. PMID: 37662188; PMCID: PMC10473639; DOI: 10.1101/2023.08.22.554365.
Abstract
Understanding the rich behavioral data generated by mice is essential for deciphering the function of the healthy and diseased brain. However, the current landscape lacks effective, affordable, and accessible methods for acquiring such data, especially when employing multiple cameras simultaneously. We have developed REVEALS (Rodent BEhaVior Multi-camErA Laboratory AcquiSition), a graphical user interface (GUI) written in Python for acquiring rodent behavioral data via commonly used USB3 cameras. REVEALS allows for user-friendly control of recording from one or multiple cameras simultaneously while streamlining the data acquisition process, enabling researchers to collect and analyze large datasets efficiently. We release this software package as a stand-alone, open-source framework for researchers to use and modify according to their needs. We describe the details of the GUI implementation, including the camera control software and the video recording functionality. We present validation results demonstrating the GUI's stability, reliability, and accuracy for capturing and analyzing rodent behavior using DeepLabCut pose estimation in both an object and social interaction assay. REVEALS can also be incorporated into other custom pipelines to analyze complex behavior, such as MoSeq. In summary, REVEALS provides an interface for collecting behavioral data from one or multiple perspectives that, combined with deep learning algorithms, will allow the scientific community to discover and characterize complex behavioral phenotypes to understand brain function better.
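For readers who want a feel for the core acquisition problem REVEALS addresses, the sketch below grabs frames from two USB cameras in parallel with plain OpenCV. It is an illustrative stand-in under assumed camera indices, codec, and frame rate, not the REVEALS implementation (which wraps acquisition in a GUI and targets USB3 camera SDKs):

```python
# Not the REVEALS code: a minimal OpenCV sketch of simultaneous multi-camera
# recording, one thread and one output file per camera.
import threading
import cv2

def record(cam_index: int, out_path: str, n_frames: int = 300) -> None:
    cap = cv2.VideoCapture(cam_index)
    fourcc = cv2.VideoWriter_fourcc(*"MJPG")          # assumed codec
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, fourcc, 30.0, (w, h))
    for _ in range(n_frames):
        ok, frame = cap.read()
        if not ok:                                     # stop if the camera drops out
            break
        writer.write(frame)
    cap.release()
    writer.release()

# Assumed camera indices 0 and 1; each stream is written independently
threads = [threading.Thread(target=record, args=(i, f"cam{i}.avi")) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```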
Affiliation(s)
- Rhushikesh A. Phadke
  - Molecular Biology, Cell Biology and Biochemistry Program, Boston University, Boston, MA, USA
- Austin M. Wetzel
  - Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Luke A. Fournier
  - Neurobiology Section in the Department of Biology, Boston University, Boston, MA, USA
- Mingqi Sha
  - Neurobiology Section in the Department of Biology, Boston University, Boston, MA, USA
- Nicole M. Padró-Luna
  - Summer Undergraduate Research Fellowship Program, Boston University, Boston, MA, USA
  - College of Natural Sciences, Río Piedras Campus, University of Puerto Rico, Río Piedras, PR
- Alberto Cruz-Martín
  - Neurobiology Section in the Department of Biology, Boston University, Boston, MA, USA
5. Wang Y, LeDue JM, Murphy TH. Multiscale imaging informs translational mouse modeling of neurological disease. Neuron 2022; 110:3688-3710. PMID: 36198319; DOI: 10.1016/j.neuron.2022.09.006.
Abstract
Multiscale neurophysiology reveals that simple motor actions are associated with changes in neuronal firing in virtually every brain region studied. Accordingly, the assessment of focal pathology such as stroke or progressive neurodegenerative diseases must also extend widely across brain areas. To derive mechanistic information through imaging, multiple resolution scales and multimodal factors must be included, such as the structure and function of specific neurons and glial cells and the dynamics of specific neurotransmitters. Emerging multiscale methods in preclinical animal studies that span micro- to macroscale examinations fill this gap, allowing a circuit-based understanding of pathophysiological mechanisms. Combined with high-performance computation and open-source data repositories, these emerging multiscale and large field-of-view techniques include live functional ultrasound, multi- and single-photon wide-scale light microscopy, video-based miniscopes, and tissue-penetrating fiber photometry, as well as variants of post-mortem expansion microscopy. We present these technologies and outline use cases and data pipelines to uncover new knowledge within animal models of stroke, Alzheimer's disease, and movement disorders.
Affiliation(s)
- Yundi Wang
  - University of British Columbia, Department of Psychiatry, Kinsmen Laboratory of Neurological Research, Detwiller Pavilion, 2255 Wesbrook Mall, Vancouver, BC V6T 1Z3, Canada
  - Djavad Mowafaghian Centre for Brain Health, University of British Columbia, 2215 Wesbrook Mall, Vancouver, BC V6T 1Z3, Canada
- Jeffrey M LeDue
  - University of British Columbia, Department of Psychiatry, Kinsmen Laboratory of Neurological Research, Detwiller Pavilion, 2255 Wesbrook Mall, Vancouver, BC V6T 1Z3, Canada
  - Djavad Mowafaghian Centre for Brain Health, University of British Columbia, 2215 Wesbrook Mall, Vancouver, BC V6T 1Z3, Canada
- Timothy H Murphy
  - University of British Columbia, Department of Psychiatry, Kinsmen Laboratory of Neurological Research, Detwiller Pavilion, 2255 Wesbrook Mall, Vancouver, BC V6T 1Z3, Canada
  - Djavad Mowafaghian Centre for Brain Health, University of British Columbia, 2215 Wesbrook Mall, Vancouver, BC V6T 1Z3, Canada
6. Baker S, Tekriwal A, Felsen G, Christensen E, Hirt L, Ojemann SG, Kramer DR, Kern DS, Thompson JA. Automatic extraction of upper-limb kinematic activity using deep learning-based markerless tracking during deep brain stimulation implantation for Parkinson's disease: A proof of concept study. PLoS One 2022; 17:e0275490. PMID: 36264986; PMCID: PMC9584454; DOI: 10.1371/journal.pone.0275490.
Abstract
Optimal placement of deep brain stimulation (DBS) therapy for treating movement disorders routinely relies on intraoperative motor testing for target determination. However, in current practice, motor testing relies on subjective interpretation and correlation of motor and neural information. Recent advances in computer vision could improve assessment accuracy. We describe our application of deep learning-based computer vision to conduct markerless tracking for measuring motor behaviors of patients undergoing DBS surgery for the treatment of Parkinson's disease. Video recordings were acquired during intraoperative kinematic testing (N = 5 patients), as part of standard of care for accurate implantation of the DBS electrode. Kinematic data were extracted from videos post-hoc using the Python-based computer vision suite DeepLabCut. Both manual and automated (80.00% accuracy) approaches were used to extract kinematic episodes from threshold-derived kinematic fluctuations. Active motor epochs were compressed by modeling upper limb deflections with a parabolic fit. A semi-supervised classification model, support vector machine (SVM), trained on the parameters defined by the parabolic fit, reliably predicted movement type. Across all cases, tracking was well calibrated (i.e., reprojection pixel errors 0.016-0.041; accuracies >95%). SVM-predicted classification demonstrated high accuracy (85.70%) including for two common upper limb movements, arm chain pulls (92.30%) and hand clenches (76.20%), with accuracy validated using a leave-one-out process for each patient. These results demonstrate successful capture and categorization of motor behaviors critical for assessing the optimal brain target for DBS surgery. Conventional motor testing procedures have proven informative and contributory to targeting but have largely remained subjective and inaccessible to non-Western and rural DBS centers with limited resources. This approach could automate the process and improve accuracy for neuro-motor mapping, to improve surgical targeting, optimize DBS therapy, provide accessible avenues for neuro-motor mapping and DBS implantation, and advance our understanding of the function of different brain areas.
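As a hedged illustration of the compression-and-classification step described above (a parabolic fit of movement epochs followed by SVM classification), the sketch below uses synthetic deflection traces and made-up movement classes rather than the study's data or code:

```python
# Illustrative sketch (not the authors' code): compress an upper-limb
# deflection trace with a parabolic fit and classify movement type from the
# three fit coefficients with a support vector machine.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 120)                       # one normalized motor epoch

def synthetic_epoch(kind: int) -> np.ndarray:
    # kind 0 and kind 1 are made-up movement classes differing in amplitude
    amp = 3.0 if kind == 0 else 1.0
    return amp * (1 - (2 * t - 1) ** 2) + 0.2 * rng.normal(size=t.size)

labels = rng.integers(0, 2, 200)
features = np.array([np.polyfit(t, synthetic_epoch(k), 2) for k in labels])

acc = cross_val_score(SVC(kernel="rbf"), features, labels, cv=5).mean()
print(f"cross-validated classification accuracy: {acc:.2f}")
```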
Affiliation(s)
- Sunderland Baker
  - Department of Human Biology and Kinesiology, Colorado College, Colorado Springs, Colorado, United States of America
- Anand Tekriwal
  - Department of Neurosurgery, University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States of America
  - Department of Physiology and Biophysics, University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States of America
  - Neuroscience Graduate Program, University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States of America
  - Medical Scientist Training Program, University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States of America
- Gidon Felsen
  - Department of Physiology and Biophysics, University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States of America
- Elijah Christensen
  - Neuroscience Graduate Program, University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States of America
  - Medical Scientist Training Program, University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States of America
- Lisa Hirt
  - Department of Neurosurgery, University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States of America
- Steven G. Ojemann
  - Department of Neurosurgery, University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States of America
- Daniel R. Kramer
  - Department of Neurosurgery, University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States of America
- Drew S. Kern
  - Department of Neurosurgery, University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States of America
  - Department of Neurology, University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States of America
- John A. Thompson
  - Department of Neurosurgery, University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States of America
  - Neuroscience Graduate Program, University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States of America
  - Department of Neurology, University of Colorado Anschutz Medical Campus, Aurora, Colorado, United States of America
7. Suryanto ME, Saputra F, Kurnia KA, Vasquez RD, Roldan MJM, Chen KHC, Huang JC, Hsiao CD. Using DeepLabCut as a Real-Time and Markerless Tool for Cardiac Physiology Assessment in Zebrafish. Biology 2022; 11:1243. PMID: 36009871; PMCID: PMC9405297; DOI: 10.3390/biology11081243.
Abstract
DeepLabCut (DLC) is a deep learning-based tool initially invented for markerless pose estimation in mammals. In this study, we explored the possibility of adopting this tool for conducting markerless cardiac physiology assessment in an important aquatic toxicology model, the zebrafish (Danio rerio). Initially, high-definition videography was applied to capture heartbeat information at a frame rate of 30 frames per second (fps). Next, 20 videos from different individuals were used to perform convolutional neural network training by labeling the heart chamber (ventricle) with eight landmarks. Using Residual Network (ResNet) 152, a 152-layer convolutional neural network, trained for 500,000 iterations, we obtained a model that can track the heart chamber in real time. Later, we validated DLC performance with the previously published ImageJ Time Series Analysis (TSA) and Kymograph (KYM) methods. We also evaluated DLC performance by challenging experimental animals with ethanol and ponatinib to induce cardiac abnormality and heartbeat irregularity. The results showed that DLC is more accurate than the TSA method in several parameters tested. The DLC-trained model also detected the ventricle of zebrafish embryos even in the presence of heart abnormalities, such as pericardial edema. We believe that this tool is beneficial for research studies, especially for cardiac physiology assessment in zebrafish embryos.
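To illustrate how landmark tracking of the ventricle can yield a cardiac-rhythm readout, the sketch below tracks the area enclosed by landmark coordinates across frames and counts its peaks. It is a sketch under stated assumptions with synthetic coordinates, not the authors' validation pipeline:

```python
# Sketch only: estimate heart rate from the oscillation of the area enclosed
# by DLC-style ventricle landmarks. Coordinates here are synthetic.
import numpy as np
from scipy.signal import find_peaks

fps = 30                                   # frame rate reported in the study
n_frames, n_landmarks = 900, 8
frames = np.arange(n_frames)

# Synthetic ventricle: 8 points on a circle whose radius beats at ~2.5 Hz
angles = np.linspace(0, 2 * np.pi, n_landmarks, endpoint=False)
radius = 20 + 3 * np.sin(2 * np.pi * 2.5 * frames / fps)[:, None]
x = radius * np.cos(angles)
y = radius * np.sin(angles)

def shoelace_area(px: np.ndarray, py: np.ndarray) -> float:
    """Polygon area from ordered landmark coordinates (shoelace formula)."""
    return 0.5 * abs(np.dot(px, np.roll(py, -1)) - np.dot(py, np.roll(px, -1)))

area = np.array([shoelace_area(x[i], y[i]) for i in range(n_frames)])
peaks, _ = find_peaks(area, distance=fps // 4)      # roughly one peak per beat
bpm = len(peaks) / (n_frames / fps) * 60
print(f"estimated heart rate: {bpm:.0f} beats per minute")
```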
Affiliation(s)
- Michael Edbert Suryanto
  - Department of Chemistry, Chung Yuan Christian University, Taoyuan 320314, Taiwan
  - Department of Bioscience Technology, Chung Yuan Christian University, Taoyuan 320314, Taiwan
- Ferry Saputra
  - Department of Chemistry, Chung Yuan Christian University, Taoyuan 320314, Taiwan
  - Department of Bioscience Technology, Chung Yuan Christian University, Taoyuan 320314, Taiwan
- Kevin Adi Kurnia
  - Department of Chemistry, Chung Yuan Christian University, Taoyuan 320314, Taiwan
  - Department of Bioscience Technology, Chung Yuan Christian University, Taoyuan 320314, Taiwan
- Ross D. Vasquez
  - Department of Pharmacy, Research Center for Natural and Applied Sciences, University of Santo Tomas, Manila 1008, Philippines
- Marri Jmelou M. Roldan
  - Faculty of Pharmacy, The Graduate School, University of Santo Tomas, Manila 1008, Philippines
- Kelvin H.-C. Chen
  - Department of Applied Chemistry, National Pingtung University, Pingtung 90003, Taiwan
- Jong-Chin Huang
  - Department of Applied Chemistry, National Pingtung University, Pingtung 90003, Taiwan
- Chung-Der Hsiao
  - Department of Chemistry, Chung Yuan Christian University, Taoyuan 320314, Taiwan
  - Department of Bioscience Technology, Chung Yuan Christian University, Taoyuan 320314, Taiwan
  - Center for Nanotechnology, Chung Yuan Christian University, Taoyuan 320314, Taiwan
  - Research Center for Aquatic Toxicology and Pharmacology, Chung Yuan Christian University, Taoyuan 320314, Taiwan
8. Hardin A, Schlupp I. Using machine learning and DeepLabCut in animal behavior. Acta Ethol 2022. DOI: 10.1007/s10211-022-00397-y.
9. Marks M, Qiuhan J, Sturman O, von Ziegler L, Kollmorgen S, von der Behrens W, Mante V, Bohacek J, Yanik MF. Deep-learning based identification, tracking, pose estimation, and behavior classification of interacting primates and mice in complex environments. Nat Mach Intell 2022; 4:331-340. PMID: 35465076; PMCID: PMC7612650; DOI: 10.1038/s42256-022-00477-5.
Abstract
The quantification of behaviors of interest from video data is commonly used to study brain function, the effects of pharmacological interventions, and genetic alterations. Existing approaches lack the capability to analyze the behavior of groups of animals in complex environments. We present a novel deep learning architecture for classifying individual and social animal behavior, even in complex environments directly from raw video frames, while requiring no intervention after initial human supervision. Our behavioral classifier is embedded in a pipeline (SIPEC) that performs segmentation, identification, pose-estimation, and classification of complex behavior, outperforming the state of the art. SIPEC successfully recognizes multiple behaviors of freely moving individual mice as well as socially interacting non-human primates in 3D, using data only from simple mono-vision cameras in home-cage setups.
Affiliation(s)
- Markus Marks
  - Institute of Neuroinformatics ETH Zürich and University of Zürich, Switzerland
  - Neuroscience Center Zurich, ETH Zürich and University of Zürich, Switzerland
- Jin Qiuhan
  - Laboratory for Neuro- & Psychophysiology, Department of Neurosciences, KU Leuven, Belgium
- Oliver Sturman
  - Laboratory of Molecular and Behavioral Neuroscience, Institute for Neuroscience, Department of Health Sciences and Technology, ETH Zurich, Switzerland
  - Neuroscience Center Zurich, ETH Zürich and University of Zürich, Switzerland
- Lukas von Ziegler
  - Laboratory of Molecular and Behavioral Neuroscience, Institute for Neuroscience, Department of Health Sciences and Technology, ETH Zurich, Switzerland
  - Neuroscience Center Zurich, ETH Zürich and University of Zürich, Switzerland
- Sepp Kollmorgen
  - Institute of Neuroinformatics ETH Zürich and University of Zürich, Switzerland
  - Neuroscience Center Zurich, ETH Zürich and University of Zürich, Switzerland
- Wolfger von der Behrens
  - Institute of Neuroinformatics ETH Zürich and University of Zürich, Switzerland
  - Neuroscience Center Zurich, ETH Zürich and University of Zürich, Switzerland
- Valerio Mante
  - Institute of Neuroinformatics ETH Zürich and University of Zürich, Switzerland
  - Neuroscience Center Zurich, ETH Zürich and University of Zürich, Switzerland
- Johannes Bohacek
  - Laboratory of Molecular and Behavioral Neuroscience, Institute for Neuroscience, Department of Health Sciences and Technology, ETH Zurich, Switzerland
  - Neuroscience Center Zurich, ETH Zürich and University of Zürich, Switzerland
- Mehmet Fatih Yanik
  - Institute of Neuroinformatics ETH Zürich and University of Zürich, Switzerland
  - Neuroscience Center Zurich, ETH Zürich and University of Zürich, Switzerland
10. Vagvolgyi BP, Jayakumar RP, Madhav MS, Knierim JJ, Cowan NJ. Wide-angle, monocular head tracking using passive markers. J Neurosci Methods 2022; 368:109453. PMID: 34968626; PMCID: PMC8857048; DOI: 10.1016/j.jneumeth.2021.109453.
Abstract
BACKGROUND: Camera images can encode large amounts of visual information of an animal and its environment, enabling high fidelity 3D reconstruction of the animal and its environment using computer vision methods. Most systems, both markerless (e.g. deep learning based) and marker-based, require multiple cameras to track features across multiple points of view to enable such 3D reconstruction. However, such systems can be expensive and are challenging to set up in small animal research apparatuses.
NEW METHODS: We present an open-source, marker-based system for tracking the head of a rodent for behavioral research that requires only a single camera with a potentially wide field of view. The system features a lightweight visual target and computer vision algorithms that together enable high-accuracy tracking of the six-degree-of-freedom position and orientation of the animal's head. The system, which only requires a single camera positioned above the behavioral arena, robustly reconstructs the pose over a wide range of head angles (360° in yaw, and approximately ± 120° in roll and pitch).
RESULTS: Experiments with live animals demonstrate that the system can reliably identify rat head position and orientation. Evaluations using a commercial optical tracker device show that the system achieves accuracy that rivals commercial multi-camera systems.
COMPARISON WITH EXISTING METHODS: Our solution significantly improves upon existing monocular marker-based tracking methods, both in accuracy and in allowable range of motion.
CONCLUSIONS: The proposed system enables the study of complex behaviors by providing robust, fine-scale measurements of rodent head motions in a wide range of orientations.
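The core geometric step in this style of single-camera, marker-based pose recovery is a perspective-n-point (PnP) solve. The sketch below is generic, with made-up target geometry, intrinsics, and ground-truth pose, and is not the authors' exact algorithm; it simulates a detection and recovers the 6-DOF pose with OpenCV:

```python
# Generic monocular pose-recovery sketch: known 3D marker geometry plus
# detected 2D image points -> 6-DOF pose via solvePnP. All numbers are made up.
import numpy as np
import cv2

# Hypothetical 3D marker positions on the rigid head-mounted target (mm)
object_pts = np.array([[0, 0, 0], [20, 0, 0], [0, 20, 0],
                       [20, 20, 0], [10, 10, 8]], dtype=np.float64)

# Assumed pinhole intrinsics and zero lens distortion
K = np.array([[900.0, 0.0, 640.0],
              [0.0, 900.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros((5, 1))

# Simulate a detection by projecting the target under a known ground-truth pose
rvec_true = np.array([[0.1], [-0.2], [0.05]])
tvec_true = np.array([[5.0], [-10.0], [300.0]])
image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, dist)

# Recover the pose from the 2D-3D correspondences
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)                 # 3x3 rotation (head orientation)
print("recovered translation (mm):", tvec.ravel())
print("recovered rotation matrix:\n", R)
```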
Affiliation(s)
- Balazs P. Vagvolgyi (corresponding author)
  - Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, U.S.A.
- Ravikrishnan P. Jayakumar
  - Mind/Brain Institute, Johns Hopkins University, Baltimore, MD, U.S.A.
  - Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, U.S.A.
  - Mechanical Engineering Department, Johns Hopkins University, Baltimore, MD, U.S.A.
- Manu S. Madhav
  - Mind/Brain Institute, Johns Hopkins University, Baltimore, MD, U.S.A.
  - Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD, U.S.A.
  - Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, U.S.A.
  - School of Biomedical Engineering, Djawad Mowafaghian Centre for Brain Health, University of British Columbia, BC, Canada
- James J. Knierim
  - Mind/Brain Institute, Johns Hopkins University, Baltimore, MD, U.S.A.
  - Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD, U.S.A.
- Noah J. Cowan
  - Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, U.S.A.
  - Mechanical Engineering Department, Johns Hopkins University, Baltimore, MD, U.S.A.
11. Solby H, Radovanovic M, Sommerville JA. A New Look at Infant Problem-Solving: Using DeepLabCut to Investigate Exploratory Problem-Solving Approaches. Front Psychol 2021; 12:705108. PMID: 34819894; PMCID: PMC8606407; DOI: 10.3389/fpsyg.2021.705108.
Abstract
When confronted with novel problems, problem-solvers must decide whether to copy a modeled solution or to explore their own unique solutions. While past work has established that infants can learn to solve problems both through their own exploration and through imitation, little work has explored the factors that influence which of these approaches infants select to solve a given problem. Moreover, past work has treated imitation and exploration as qualitatively distinct, although these two possibilities may exist along a continuum. Here, we apply a program novel to developmental psychology (DeepLabCut) to archival data (Lucca et al., 2020) to investigate the influence of the effort and success of an adult's modeled solution, and infants' firsthand experience with failure, on infants' imitative versus exploratory problem-solving approaches. Our results reveal that tendencies toward exploration are relatively immune to the information from the adult model, but that exploration generally increased in response to firsthand experience with failure. In addition, we found that increases in maximum force and decreases in trying time were associated with greater exploration, and that exploration subsequently predicted problem-solving success on a new iteration of the task. Thus, our results demonstrate that infants increase exploration in response to failure and that exploration may operate in a larger motivational framework with force, trying time, and expectations of task success.
12. North R, Wurr R, Macon R, Mannion C, Hyde J, Torres-Espin A, Rosenzweig ES, Ferguson AR, Tuszynski MH, Beattie MS, Bresnahan JC, Joiner WM. Quantifying the kinematic features of dexterous finger movements in nonhuman primates with markerless tracking. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:6110-6115. PMID: 34892511; DOI: 10.1109/embc46164.2021.9630018.
Abstract
Research using nonhuman primate models for human disease frequently requires behavioral observational techniques to quantify functional outcomes. The ability to assess reaching and grasping patterns is of particular interest in clinical conditions that affect the motor system (e.g., spinal cord injury, SCI). Here we explored the use of DeepLabCut, an open-source deep learning toolset, in combination with a standard behavioral task (Brinkman Board) to quantify nonhuman primate performance in precision grasping. We examined one male rhesus macaque (Macaca mulatta) in the task which involved retrieving rewards from variously-oriented shallow wells. Simultaneous recordings were made using GoPro Hero7 Black cameras (resolution 1920 x 1080 at 120 fps) from two different angles (from the side and top of the hand motion). The task/device design necessitates use of the right hand to complete the task. Two neural networks (corresponding to the top and side view cameras) were trained using 400 manually annotated images, tracking 19 unique landmarks each. Based on previous reports, this produced sufficient tracking (Side: trained pixel error of 2.15, test pixel error of 11.25; Top: trained pixel error of 2.06, test pixel error of 30.31) so that landmarks could be tracked on the remaining frames. Landmarks included in the tracking were the spatial location of the knuckles and the fingernails of each digit, and three different behavioral measures were quantified for assessment of hand movement (finger separation, middle digit extension and preshaping distance). Together, our preliminary results suggest that this markerless approach is a possible method to examine specific kinematic features of dexterous function.
Clinical Relevance: The methodology presented below allows for the markerless tracking of kinematic features of dexterous finger movement by non-human primates. This method could allow for direct comparisons between human patients and non-human primate models of clinical conditions (e.g., spinal cord injury). This would provide objective quantitative metrics and crucial information for assessing movement impairments across populations and the potential translation of treatments, interventions and their outcomes.
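As a toy illustration of the kinematic features named above (finger separation and middle-digit extension) computed from tracked landmarks, the placeholder coordinates below stand in for pose-estimation output for a single frame; this is not the study's code:

```python
# Illustrative only: simple per-frame kinematic features from tracked landmarks.
import numpy as np

# Placeholder (x, y) pixel coordinates for five fingertips and five knuckles
fingertips = np.array([[100, 50], [130, 40], [160, 42], [188, 55], [210, 80]], dtype=float)
knuckles = np.array([[110, 120], [135, 115], [162, 116], [185, 124], [205, 140]], dtype=float)

# Finger separation: mean distance between adjacent fingertips
adjacent_gaps = np.linalg.norm(np.diff(fingertips, axis=0), axis=1)
finger_separation = adjacent_gaps.mean()

# Middle-digit extension: fingertip-to-knuckle distance of the middle digit
middle_extension = np.linalg.norm(fingertips[2] - knuckles[2])

print(f"mean adjacent fingertip separation: {finger_separation:.1f} px")
print(f"middle digit extension: {middle_extension:.1f} px")
```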
13. Macpherson T, Churchland A, Sejnowski T, DiCarlo J, Kamitani Y, Takahashi H, Hikida T. Natural and Artificial Intelligence: A brief introduction to the interplay between AI and neuroscience research. Neural Netw 2021; 144:603-613. PMID: 34649035; DOI: 10.1016/j.neunet.2021.09.018.
Abstract
Neuroscience and artificial intelligence (AI) share a long history of collaboration. Advances in neuroscience, alongside huge leaps in computer processing power over the last few decades, have given rise to a new generation of in silico neural networks inspired by the architecture of the brain. These AI systems are now capable of many of the advanced perceptual and cognitive abilities of biological systems, including object recognition and decision making. Moreover, AI is now increasingly being employed as a tool for neuroscience research and is transforming our understanding of brain functions. In particular, deep learning has been used to model how convolutional layers and recurrent connections in the brain's cerebral cortex control important functions, including visual processing, memory, and motor control. Excitingly, the use of neuroscience-inspired AI also holds great promise for understanding how changes in brain networks result in psychopathologies, and could even be utilized in treatment regimes. Here we discuss recent advances in four areas in which the relationship between neuroscience and AI has led to major advancements in the field: (1) AI models of working memory, (2) AI visual processing, (3) AI analysis of big neuroscience datasets, and (4) computational psychiatry.
Affiliation(s)
- Tom Macpherson
  - Laboratory for Advanced Brain Functions, Institute for Protein Research, Osaka University, Osaka, Japan
- Anne Churchland
  - Cold Spring Harbor Laboratory, Neuroscience, Cold Spring Harbor, NY, USA
- Terry Sejnowski
  - Computational Neurobiology Laboratory, Salk Institute for Biological Studies, CA, USA
  - Division of Biological Sciences, University of California San Diego, CA, USA
- James DiCarlo
  - Brain and Cognitive Sciences, Massachusetts Institute of Technology, MA, USA
- Yukiyasu Kamitani
  - Department of Neuroinformatics, ATR Computational Neuroscience Laboratories, Kyoto, Japan
  - Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Hidehiko Takahashi
  - Department of Psychiatry and Behavioral Sciences, Tokyo Medical and Dental University Graduate School, Tokyo, Japan
- Takatoshi Hikida
  - Laboratory for Advanced Brain Functions, Institute for Protein Research, Osaka University, Osaka, Japan
14. Bowles S, Williamson WR, Nettles D, Hickman J, Welle CG. Closed-loop automated reaching apparatus (CLARA) for interrogating complex motor behaviors. J Neural Eng 2021; 18. PMID: 34407518; PMCID: PMC8699662; DOI: 10.1088/1741-2552/ac1ed1.
Abstract
Objective. Closed-loop neuromodulation technology is a rapidly expanding category of therapeutics for a broad range of indications. Development of these innovative neurological devices requires high-throughput systems for closed-loop stimulation of model organisms, while monitoring physiological signals and complex, naturalistic behaviors. To address this need, we developed CLARA, a closed-loop automated reaching apparatus.
Approach. Using breakthroughs in computer vision, CLARA integrates fully-automated, markerless kinematic tracking of multiple features to classify animal behavior and precisely deliver neural stimulation based on behavioral outcomes. CLARA is compatible with advanced neurophysiological tools, enabling the testing of neurostimulation devices and identification of novel neurological biomarkers.
Results. The CLARA system tracks unconstrained skilled reach behavior in 3D at 150 Hz without physical markers. The system fully automates trial initiation and pellet delivery and is capable of accurately delivering stimulation in response to trial outcome with short latency. Kinematic data from the CLARA system provided novel insights into the dynamics of reach consistency over the course of learning, suggesting that learning selectively improves reach failures but does not alter the kinematics of successful reaches. Additionally, using the closed-loop capabilities of CLARA, we demonstrate that vagus nerve stimulation (VNS) improves skilled reach performance and increases reach trajectory consistency in healthy animals.
Significance. The CLARA system is the first mouse behavior apparatus that uses markerless pose tracking to provide real-time closed-loop stimulation in response to the outcome of an unconstrained motor task. Additionally, we demonstrate that the CLARA system was essential for investigating the role of closed-loop VNS stimulation on motor performance in healthy animals. This approach has high translational relevance for developing neurostimulation technology based on complex human behavior.
Affiliation(s)
- S Bowles
  - Neurosurgery, The University of Colorado Anschutz Medical Campus, Aurora, CO 80045, United States of America
  - These authors contributed equally
- W R Williamson
  - NeuroTechnology Center, The University of Colorado Anschutz Medical Campus, Aurora, CO 80045, United States of America
  - These authors contributed equally
- D Nettles
  - Neurosurgery, The University of Colorado Anschutz Medical Campus, Aurora, CO 80045, United States of America
- J Hickman
  - Neurosurgery, The University of Colorado Anschutz Medical Campus, Aurora, CO 80045, United States of America
- C G Welle
  - Neurosurgery, The University of Colorado Anschutz Medical Campus, Aurora, CO 80045, United States of America
15. Mah KM, Torres-Espín A, Hallworth BW, Bixby JL, Lemmon VP, Fouad K, Fenrich KK. Automation of training and testing motor and related tasks in pre-clinical behavioural and rehabilitative neuroscience. Exp Neurol 2021; 340:113647. PMID: 33600814; PMCID: PMC10443427; DOI: 10.1016/j.expneurol.2021.113647.
Abstract
Testing and training animals in motor and related tasks is a cornerstone of pre-clinical behavioural and rehabilitative neuroscience. Yet manually testing and training animals in these tasks is time-consuming and analyses are often subjective. Consequently, there have been many recent advances in automating both the administration and analyses of animal behavioural training and testing. This review is an in-depth appraisal of the history of, and recent developments in, the automation of animal behavioural assays used in neuroscience. We describe the use of common locomotor and non-locomotor tasks used for motor training and testing before and after nervous system injury. This includes a discussion of how these tasks help us to understand the underlying mechanisms of neurological repair and the utility of some tasks for the delivery of rehabilitative training to enhance recovery. We propose two general approaches to automation: automating the physical administration of behavioural tasks (i.e., devices used to facilitate task training, rehabilitative training, and motor testing) and leveraging the use of machine learning in behaviour analysis to generate large volumes of unbiased and comprehensive data. The advantages and disadvantages of automating various motor tasks as well as the limitations of machine learning analyses are examined. In closing, we provide a critical appraisal of the current state of automation in animal behavioural neuroscience and a perspective on some of the advances in machine learning we believe will dramatically enhance the usefulness of these approaches for behavioural neuroscientists.
Affiliation(s)
- Kar Men Mah
  - Department of Neurological Surgery, The Miami Project to Cure Paralysis, University of Miami, Miami, FL 33136, USA
- Abel Torres-Espín
  - Brain and Spinal Injury Center, Department of Neurological Surgery, University of California San Francisco, San Francisco, CA, USA
- Ben W Hallworth
  - Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada
  - Department of Mechanical Engineering, University of Alberta, Edmonton, Alberta, Canada
- John L Bixby
  - Department of Neurological Surgery, The Miami Project to Cure Paralysis, University of Miami, Miami, FL 33136, USA
  - Department of Molecular & Cellular Pharmacology, University of Miami, Miller School of Medicine, Miami, FL 33136, USA
- Vance P Lemmon
  - Department of Neurological Surgery, The Miami Project to Cure Paralysis, University of Miami, Miami, FL 33136, USA
- Karim Fouad
  - Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada
  - Department of Physical Therapy, University of Alberta, Edmonton, Alberta, Canada
  - Faculty of Rehabilitation Medicine, University of Alberta, Edmonton, Alberta, Canada
- Keith K Fenrich
  - Neuroscience and Mental Health Institute, University of Alberta, Edmonton, Alberta, Canada
  - Faculty of Rehabilitation Medicine, University of Alberta, Edmonton, Alberta, Canada
16. Real-Time Closed-Loop Feedback in Behavioral Time Scales Using DeepLabCut. eNeuro 2021; 8:ENEURO.0415-20.2021. PMID: 33547045; PMCID: PMC8174057; DOI: 10.1523/eneuro.0415-20.2021.
Abstract
Computer vision approaches have made significant inroads into offline tracking of behavior and estimating animal poses. In particular, because of their versatility, deep-learning approaches have been gaining attention in behavioral tracking without any markers. Here, we developed an approach using DeepLabCut for real-time estimation of movement. We trained a deep-neural network (DNN) offline with high-speed video data of a mouse whisking, then transferred the trained network to work with the same mouse, whisking in real-time. With this approach, we tracked the tips of three whiskers in an arc and converted positions into a TTL output within behavioral time scales, i.e., 10.5 ms. With this approach, it is possible to trigger output based on movement of individual whiskers, or on the distance between adjacent whiskers. Flexible closed-loop systems like the one we have deployed here can complement optogenetic approaches and can be used to directly manipulate the relationship between movement and neural activity.
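A stripped-down sketch of the downstream trigger logic described above follows. It is not the published implementation: `read_whisker_tips` and `set_digital_line` are hypothetical stand-ins for the real-time tracker output and the DAQ/TTL call, and the threshold is arbitrary.

```python
# Sketch only: turn tracked whisker-tip positions into a TTL-style trigger
# when the spread between adjacent whiskers exceeds a threshold.
import numpy as np

THRESHOLD_PX = 40.0

def read_whisker_tips() -> np.ndarray:
    """Placeholder: return a (3, 2) array of whisker-tip x/y positions."""
    return np.random.default_rng().uniform(0, 100, size=(3, 2))

def set_digital_line(high: bool) -> None:
    """Placeholder for a DAQ digital-output call."""
    print("TTL", "HIGH" if high else "LOW")

for _ in range(10):                       # stand-in for the acquisition loop
    tips = read_whisker_tips()
    spread = np.linalg.norm(np.diff(tips, axis=0), axis=1).max()
    set_digital_line(spread > THRESHOLD_PX)
```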
17. Dennis EJ, El Hady A, Michaiel A, Clemens A, Tervo DRG, Voigts J, Datta SR. Systems Neuroscience of Natural Behaviors in Rodents. J Neurosci 2021; 41:911-919. PMID: 33443081; PMCID: PMC7880287; DOI: 10.1523/jneurosci.1877-20.2020.
Abstract
Animals evolved in complex environments, producing a wide range of behaviors, including navigation, foraging, prey capture, and conspecific interactions, which vary over timescales ranging from milliseconds to days. Historically, these behaviors have been the focus of study for ecology and ethology, while systems neuroscience has largely focused on short timescale behaviors that can be repeated thousands of times and occur in highly artificial environments. Thanks to recent advances in machine learning, miniaturization, and computation, it is newly possible to study freely moving animals in more natural conditions while applying systems techniques: performing temporally specific perturbations, modeling behavioral strategies, and recording from large numbers of neurons while animals are freely moving. The authors of this review are a group of scientists with deep appreciation for the common aims of systems neuroscience, ecology, and ethology. We believe it is an extremely exciting time to be a neuroscientist, as we have an opportunity to grow as a field, to embrace interdisciplinary, open, collaborative research to provide new insights and allow researchers to link knowledge across disciplines, species, and scales. Here we discuss the origins of ethology, ecology, and systems neuroscience in the context of our own work and highlight how combining approaches across these fields has provided fresh insights into our research. We hope this review facilitates some of these interactions and alliances and helps us all do even better science, together.
Affiliation(s)
- Emily Jane Dennis
  - Princeton University and Howard Hughes Medical Institute, Princeton, New Jersey, 08540
- Ahmed El Hady
  - Princeton University and Howard Hughes Medical Institute, Princeton, New Jersey, 08540
- Ann Clemens
  - University of Edinburgh, Edinburgh, Scotland, EH8 9JZ
- Jakob Voigts
  - Massachusetts Institute of Technology, Cambridge, Massachusetts, 02139
18. Schweihoff JF, Loshakov M, Pavlova I, Kück L, Ewell LA, Schwarz MK. DeepLabStream enables closed-loop behavioral experiments using deep learning-based markerless, real-time posture detection. Commun Biol 2021; 4:130. PMID: 33514883; PMCID: PMC7846585; DOI: 10.1038/s42003-021-01654-9.
Abstract
In general, animal behavior can be described as the neuronal-driven sequence of reoccurring postures through time. Most of the available current technologies focus on offline pose estimation with high spatiotemporal resolution. However, to correlate behavior with neuronal activity it is often necessary to detect and react online to behavioral expressions. Here we present DeepLabStream, a versatile closed-loop tool providing real-time pose estimation to deliver posture dependent stimulations. DeepLabStream has a temporal resolution in the millisecond range, can utilize different input, as well as output devices and can be tailored to multiple experimental designs. We employ DeepLabStream to semi-autonomously run a second-order olfactory conditioning task with freely moving mice and optogenetically label neuronal ensembles active during specific head directions.
Affiliation(s)
- Jens F Schweihoff
  - Functional Neuroconnectomics Group, Institute of Experimental Epileptology and Cognition Research, Medical Faculty, University of Bonn, Bonn, Germany
- Matvey Loshakov
  - Functional Neuroconnectomics Group, Institute of Experimental Epileptology and Cognition Research, Medical Faculty, University of Bonn, Bonn, Germany
- Irina Pavlova
  - Functional Neuroconnectomics Group, Institute of Experimental Epileptology and Cognition Research, Medical Faculty, University of Bonn, Bonn, Germany
- Laura Kück
  - Institute of Experimental Epileptology and Cognition Research, Medical Faculty, University of Bonn, Bonn, Germany
- Laura A Ewell
  - Institute of Experimental Epileptology and Cognition Research, Medical Faculty, University of Bonn, Bonn, Germany
- Martin K Schwarz
  - Functional Neuroconnectomics Group, Institute of Experimental Epileptology and Cognition Research, Medical Faculty, University of Bonn, Bonn, Germany
19. Kane GA, Lopes G, Saunders JL, Mathis A, Mathis MW. Real-time, low-latency closed-loop feedback using markerless posture tracking. eLife 2020; 9:e61909. PMID: 33289631; PMCID: PMC7781595; DOI: 10.7554/elife.61909.
Abstract
The ability to control a behavioral task or stimulate neural activity based on animal behavior in real-time is an important tool for experimental neuroscientists. Ideally, such tools are noninvasive, low-latency, and provide interfaces to trigger external hardware based on posture. Recent advances in pose estimation with deep learning allow researchers to train deep neural networks to accurately quantify a wide variety of animal behaviors. Here, we provide a new DeepLabCut-Live! package that achieves low-latency real-time pose estimation (within 15 ms, >100 FPS), with an additional forward-prediction module that achieves zero-latency feedback, and a dynamic-cropping mode that allows for higher inference speeds. We also provide three options for using this tool with ease: (1) a stand-alone GUI (called DLC-Live! GUI), and integration into (2) Bonsai, and (3) AutoPilot. Lastly, we benchmarked performance on a wide range of systems so that experimentalists can easily decide what hardware is required for their needs.
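A minimal usage sketch is given below, assuming the package's documented DLCLive class and Processor hook; the exported-model path, keypoint index, and trigger rule are placeholders, and argument details may differ across package versions.

```python
# Sketch of DeepLabCut-Live usage with a custom Processor for posture-triggered
# hardware events. The model path and trigger condition are placeholders.
import numpy as np
from dlclive import DLCLive, Processor

class TriggerProcessor(Processor):
    """Toy processor: inspect each pose and decide whether to 'trigger'."""
    def process(self, pose, **kwargs):
        # pose is an array of (x, y, likelihood) rows, one per keypoint
        if pose[0, 2] > 0.9 and pose[0, 0] > 300:   # hypothetical rule
            print("trigger external hardware here")
        return pose

dlc_live = DLCLive("/path/to/exported_model", processor=TriggerProcessor())

frame = np.zeros((480, 640, 3), dtype=np.uint8)      # placeholder camera frame
dlc_live.init_inference(frame)                        # first call loads the model
pose = dlc_live.get_pose(frame)                       # subsequent real-time calls
```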
Affiliation(s)
- Gary A Kane
  - The Rowland Institute at Harvard, Harvard University, Cambridge, United States
- Jonny L Saunders
  - Institute of Neuroscience, Department of Psychology, University of Oregon, Eugene, United States
- Alexander Mathis
  - The Rowland Institute at Harvard, Harvard University, Cambridge, United States
  - Center for Neuroprosthetics, Center for Intelligent Systems, & Brain Mind Institute, School of Life Sciences, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland
- Mackenzie W Mathis
  - The Rowland Institute at Harvard, Harvard University, Cambridge, United States
  - Center for Neuroprosthetics, Center for Intelligent Systems, & Brain Mind Institute, School of Life Sciences, Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland