1. Duncan L, Zhu S, Pergolotti M, Giri S, Salsabili H, Faezipour M, Ostadabbas S, Mirbozorgi SA. Camera-Based Short Physical Performance Battery and Timed Up and Go Assessment for Older Adults With Cancer. IEEE Trans Biomed Eng 2023;70:2529-2539. [PMID: 37028022] [DOI: 10.1109/tbme.2023.3253061]
Abstract
This paper presents an automatic camera-based device to monitor and evaluate the gait speed, standing balance, and five-times sit-to-stand (5TSS) tests of the Short Physical Performance Battery (SPPB) and the Timed Up and Go (TUG) test. The proposed design measures and calculates the parameters of the SPPB tests automatically, and the SPPB data can be used for physical performance assessment of older patients undergoing cancer treatment. This stand-alone device comprises a Raspberry Pi (RPi) computer, three cameras, and two DC motors. The left and right cameras are used for the gait speed tests. The center camera is used for the standing balance, 5TSS, and TUG tests and for angle positioning of the camera platform toward the subject, with the DC motors turning the platform left/right and tilting it up/down. The key algorithm for operating the proposed system is built on Channel and Spatial Reliability Tracking from the cv2 module in Python. Graphical user interfaces (GUIs) on the RPi are used to run tests and adjust the cameras, controlled remotely via a smartphone and its Wi-Fi hotspot. We tested the implemented camera setup prototype and extracted all SPPB and TUG parameters in 69 test runs on a population of 8 volunteers (male and female, light and dark complexions). The measured data and calculated outputs of the system cover gait speed (0.041 to 1.92 m/s, with average accuracy >95%) and the standing balance, 5TSS, and TUG tests, all with average time accuracy >97%.
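The abstract credits its tracking to Channel and Spatial Reliability Tracking (CSRT) as exposed by OpenCV's cv2 module. A minimal sketch of that tracker's init/update loop, with a hypothetical video path and operator-drawn bounding box standing in for the paper's three-camera pipeline:

```python
import cv2

# Minimal CSRT tracking loop; the video path and initial box are placeholders.
cap = cv2.VideoCapture("subject_walk.mp4")   # hypothetical recording
ok, frame = cap.read()
bbox = cv2.selectROI("init", frame)          # operator draws the subject box

tracker = cv2.TrackerCSRT_create()           # cv2.legacy.TrackerCSRT_create() in some OpenCV 4.x builds
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, bbox = tracker.update(frame)      # bbox = (x, y, w, h) in pixels
    if found:
        x, y, w, h = map(int, bbox)
        # The box center could drive pan/tilt motors to keep the subject
        # framed, as the paper's center camera does.
        cx, cy = x + w // 2, y + h // 2
```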
2. A Large-Scale Mouse Pose Dataset for Mouse Pose Estimation. Symmetry (Basel) 2022. [DOI: 10.3390/sym14050875]
Abstract
Mouse pose estimation has important applications in animal behavior research, biomedicine, and animal conservation studies, so accurate and efficient computer-vision methods for it are needed. Although methods for mouse pose estimation have advanced, bottlenecks remain; one of the most prominent is the lack of uniform, standardized training datasets. Here, we address this difficulty by introducing a mouse pose dataset containing 40,000 frames of RGB images with large-scale 2D ground-truth annotations. All images were captured from interacting laboratory mice through a stable single viewpoint, covering 5 distinct species and 20 mice in total. Moreover, to improve annotation efficiency, we propose five keypoints per mouse: one at the body center and two symmetric left/right pairs. We also created simple yet effective software for annotating the images. As a further step toward a benchmark for 2D mouse pose estimation, we employed modified object detection and pose estimation algorithms to achieve precise, effective, and robust performance. As the first large, standardized mouse pose dataset, the proposed dataset will help advance research on animal pose estimation and assist application areas involving animal experiments.
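To make the five-keypoint layout concrete (one center point plus two symmetric pairs), here is a sketch of one plausible encoding, including the left/right index swap needed if images are horizontally flipped for augmentation; the keypoint names and indices are illustrative assumptions, not taken from the dataset:

```python
# Hypothetical encoding of a 5-keypoint mouse schema: one center point
# and two left/right-symmetric pairs.
KEYPOINTS = ["center", "left_ear", "right_ear", "left_hip", "right_hip"]
FLIP_PAIRS = [(1, 2), (3, 4)]  # indices that swap under a horizontal flip

def flip_keypoints(kps, image_width):
    """Mirror (x, y) keypoints for a horizontally flipped image."""
    flipped = [(image_width - 1 - x, y) for x, y in kps]
    for i, j in FLIP_PAIRS:
        flipped[i], flipped[j] = flipped[j], flipped[i]
    return flipped
```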
3. Han Y, Huang K, Chen K, Pan H, Ju F, Long Y, Gao G, Wu R, Wang A, Wang L, Wei P. MouseVenue3D: A Markerless Three-Dimension Behavioral Tracking System for Matching Two-Photon Brain Imaging in Free-Moving Mice. Neurosci Bull 2022;38:303-317. [PMID: 34637091] [PMCID: PMC8975979] [DOI: 10.1007/s12264-021-00778-6]
Abstract
Understanding the connection between brain and behavior in animals requires precise monitoring of their behaviors in three-dimensional (3-D) space, yet no available 3-D behavior-capture system focuses on rodents. Here, we present MouseVenue3D, an automated and low-cost system for efficient markerless capture of 3-D skeleton trajectories in rodents. We improved the most time-consuming step in 3-D behavior capture by developing an automatic calibration module. We then validated the system on behavior recognition tasks, showing that 3-D behavioral data achieve higher accuracy than 2-D data. Subsequently, MouseVenue3D was combined with fast, high-resolution miniature two-photon microscopy for synchronous neural recording and behavioral tracking in freely moving mice. Finally, we successfully decoded spontaneous neuronal activity from the 3-D behavior of mice. Our findings reveal that subtle, spontaneous behavior modules are strongly correlated with spontaneous neuronal activity patterns.
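Reconstructing 3-D skeleton trajectories from calibrated 2-D views typically reduces to triangulation from camera projection matrices; a sketch of that standard step with OpenCV, where the projection matrices and pixel detections are placeholders rather than MouseVenue3D's actual calibration output:

```python
import cv2
import numpy as np

# P1, P2: 3x4 projection matrices from a calibration step (placeholders).
P1 = np.eye(3, 4, dtype=np.float64)
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

# One keypoint seen in two views, as 2xN arrays of pixel coordinates.
pts1 = np.array([[320.0], [240.0]])
pts2 = np.array([[300.0], [241.0]])

Xh = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous coordinates
X = (Xh[:3] / Xh[3]).ravel()                    # 3-D point in world units
```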
Affiliation(s)
- Yaning Han, Kang Huang, Ke Chen, Hongli Pan, Furong Ju, Yueyue Long, Gao Gao, Liping Wang, Pengfei Wei: Shenzhen Key Laboratory of Neuropsychiatric Modulation and Collaborative Innovation Center for Brain Science, Guangdong Provincial Key Laboratory of Brain Connectome and Behavior, CAS Center for Excellence in Brain Science and Intelligence Technology, Brain Cognition and Brain Disease Institute, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen-Hong Kong Institute of Brain Science, Shenzhen Fundamental Research Institutions, Shenzhen, 518055, China
- Yaning Han, Kang Huang, Ke Chen, Liping Wang, Pengfei Wei: also University of Chinese Academy of Sciences, Beijing, 100049, China
- Yueyue Long: also University of Rochester, Rochester, NY, 14627, USA
- Gao Gao: also Honam University, Gwangju, 62399, South Korea
- Runlong Wu: State Key Laboratory of Membrane Biology, Institute of Molecular Medicine, Peking University, Beijing, 100101, China
- Aimin Wang: Department of Electronics, Peking University, Beijing, 100871, China; State Key Laboratory of Advanced Optical Communication Systems and Networks, Peking University, Beijing, 100101, China
4. Duncan L, Gulati P, Giri S, Ostadabbas S, Mirbozorgi SA. Camera-Based Human Gait Speed Monitoring and Tracking for Performance Assessment of Elderly Patients with Cancer. Annu Int Conf IEEE Eng Med Biol Soc 2021;2021:3522-3525. [PMID: 34891999] [DOI: 10.1109/embc46164.2021.9630474]
Abstract
This paper presents a camera-based device for monitoring walking gait speed. The gait speed data will be used for performance assessment of elderly patients with cancer and for calibrating wearable gait speed monitoring devices. This standalone device has a Raspberry Pi computer, three cameras (two for finding the trajectory and gait speed of the subject and one for tracking the subject), and two stepper motors. The stepper motors turn the camera platform left and right and tilt it up and down based on video footage from the center camera, while the left and right cameras record videos of the person walking. The algorithm for operating the proposed system is developed in Python. The measured data and calculated outputs of the system consist of frame times, distances from the center camera, horizontal angles, distances moved, instantaneous (frame-by-frame) gait speed, total distance walked, and average speed. The system covers a large lab area of 134.3 m² and achieves errors of less than 5% in gait speed calculation. Clinical relevance: this project will help specialists adjust chemotherapy dosage for elderly patients with cancer, and the results will also be used to analyze human walking movements for frailty estimation and rehabilitation applications.
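Given the per-frame positions and timestamps the system reports, instantaneous and average gait speed follow from simple frame-to-frame differencing; a sketch under that assumption (the input arrays are hypothetical, not the device's data format):

```python
import numpy as np

def gait_speeds(positions_m, times_s):
    """Instantaneous frame-by-frame speed plus totals.

    positions_m: (N, 2) subject positions on the floor plane, in meters.
    times_s: (N,) frame timestamps, in seconds.
    """
    steps = np.linalg.norm(np.diff(positions_m, axis=0), axis=1)  # per-frame displacement
    dts = np.diff(times_s)
    inst = steps / dts                         # m/s for each frame interval
    total = steps.sum()                        # total distance walked
    avg = total / (times_s[-1] - times_s[0])   # average speed over the run
    return inst, total, avg
```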
5. Wang X, Wang W, Tang Y, Wang H, Zhang L, Wang J. Apparatus and methods for mouse behavior recognition on foot contact features. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107088]
6. Valientes DA, Raus AM, Ivy AS. An Improved Method for Individual Tracking of Voluntary Wheel Running in Pair-housed Juvenile Mice. Bio Protoc 2021;11:e4071. [PMID: 34327268] [DOI: 10.21769/bioprotoc.4071]
Abstract
Rodent cages equipped with a voluntary running wheel are commonly used to study the effects of aerobic physical activity on physiology and behavior. Notable discoveries in exercise neurobiology, including the key role of brain-derived neurotrophic factor (BDNF) in neural plasticity and cognition, have been made using rodents housed with voluntary running wheels. A major advantage of home-cage running wheels over treadmills is the elimination of the stress potentially associated with forced running; in addition, voluntary wheel running may simulate a more natural running pattern in laboratory mice. Mice are traditionally singly housed with voluntary running wheels to obtain exact per-animal distances, but the social isolation stress this causes is often ignored in favor of precise measurements. Moreover, voluntary exercise studies in adolescent mice must consider the neurodevelopmental implications of isolation stress. In this protocol, we wean 21-day-old mouse pups directly into running wheel-equipped cages and pair-house them to reduce the impact of social isolation and other developmentally specific factors that could adversely affect their behavior or development. Individual running distances are obtained for each mouse in the cage using a radio-frequency identification (RFID) ear tag and a hidden antenna placed directly under the running wheel. We have demonstrated that voluntary running during a specific juvenile-adolescent developmental period can improve hippocampal memory when tested during adolescence (Ivy et al., 2020). Individual exercise tracking of group-housed mice can enable future studies to precisely correlate the amount of exercise with readouts such as cell-specific gene expression, epigenetic mechanisms, serum biomarkers, and behavior, in an intra-individual manner. Graphic abstract: Figure 1. Illustration of the dual RFID and Vital View system for individual mouse running in a pair-housed cage.
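The protocol's central bookkeeping, crediting wheel revolutions to whichever mouse's RFID tag the under-wheel antenna last detected, can be sketched as follows; the event tuples and function name are assumptions, not the Vital View interface:

```python
# Hypothetical event streams: RFID reads as (timestamp_s, tag_id) and
# wheel-revolution ticks as (timestamp_s, None). Each revolution is
# credited to the tag most recently detected under the wheel.
def attribute_revolutions(rfid_reads, wheel_ticks):
    distances = {}                     # tag_id -> revolution count
    reads = sorted(rfid_reads)
    ticks = sorted(wheel_ticks)
    current_tag, i = None, 0
    for t, _ in ticks:
        # Advance to the most recent RFID read at or before this tick.
        while i < len(reads) and reads[i][0] <= t:
            current_tag = reads[i][1]
            i += 1
        if current_tag is not None:
            distances[current_tag] = distances.get(current_tag, 0) + 1
    return distances
```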
Affiliation(s)
- David A Valientes: Department of Pediatrics, University of California Irvine School of Medicine, Irvine, CA, USA
- Anthony M Raus: Department of Physiology and Biophysics, University of California Irvine School of Medicine, Irvine, CA, USA
- Autumn S Ivy: Department of Pediatrics, University of California Irvine School of Medicine, Irvine, CA, USA; Department of Physiology and Biophysics, University of California Irvine School of Medicine, Irvine, CA, USA; Department of Anatomy/Neurobiology, University of California Irvine School of Medicine, Irvine, CA, USA; Children's Hospital of Orange County, Orange, CA, USA
7. Schweihoff JF, Loshakov M, Pavlova I, Kück L, Ewell LA, Schwarz MK. DeepLabStream enables closed-loop behavioral experiments using deep learning-based markerless, real-time posture detection. Commun Biol 2021;4:130. [PMID: 33514883] [PMCID: PMC7846585] [DOI: 10.1038/s42003-021-01654-9]
Abstract
In general, animal behavior can be described as a neuronally driven sequence of recurring postures through time. Most currently available technologies focus on offline pose estimation with high spatiotemporal resolution; however, correlating behavior with neuronal activity often requires detecting and reacting to behavioral expressions online. Here we present DeepLabStream, a versatile closed-loop tool that provides real-time pose estimation to deliver posture-dependent stimulations. DeepLabStream has temporal resolution in the millisecond range, can utilize different input and output devices, and can be tailored to multiple experimental designs. We employ DeepLabStream to semi-autonomously run a second-order olfactory conditioning task with freely moving mice and to optogenetically label neuronal ensembles active during specific head directions.
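The closed-loop pattern described, pose in, posture test, stimulus out, can be illustrated with a simple head-direction trigger; every name here, from the pose dictionary to the trigger callback, is a schematic stand-in rather than DeepLabStream's actual API:

```python
import numpy as np

def head_direction_deg(nose_xy, neck_xy):
    """Head direction from two tracked keypoints, in degrees."""
    v = np.asarray(nose_xy) - np.asarray(neck_xy)
    return np.degrees(np.arctan2(v[1], v[0]))

def closed_loop_step(pose, trigger_fn, lo=-30.0, hi=30.0):
    """Fire the output device while the head points into a target sector."""
    angle = head_direction_deg(pose["nose"], pose["neck"])
    if lo <= angle <= hi:
        trigger_fn()  # e.g., a TTL pulse to the laser for optogenetic labeling
```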
Affiliation(s)
- Jens F Schweihoff, Matvey Loshakov, Irina Pavlova, Martin K Schwarz: Functional Neuroconnectomics Group, Institute of Experimental Epileptology and Cognition Research, Medical Faculty, University of Bonn, Bonn, Germany
- Laura Kück, Laura A Ewell: Institute of Experimental Epileptology and Cognition Research, Medical Faculty, University of Bonn, Bonn, Germany
8. Improved 3D tracking and automated classification of rodents' behavioral activity using depth-sensing cameras. Behav Res Methods 2021;52:2156-2167. [PMID: 32232737] [DOI: 10.3758/s13428-020-01381-9]
Abstract
Analysis of rodents' behavior and activity is of fundamental importance in many research fields, yet many behavioral experiments still rely on manual scoring, with obvious problems for reproducibility. Despite important advances in video-analysis systems and computational ethology, automated behavior quantification remains a challenge: the need for large training datasets, background-stability requirements, and reduction to two-dimensional analysis (which impairs full posture characterization) limit the use of existing tools. Here we present an integrated solution for behavioral analysis of individual rats that combines video segmentation, tracking of body parts, and automated classification of behaviors using machine learning and computer vision methods. Low-cost depth cameras (RGB-D) enable three-dimensional tracking and classification in dark conditions and in the absence of color contrast. Our solution automatically tracks five anatomical landmarks in dynamic environments and recognizes seven distinct behaviors within the accuracy range of human annotations. The developed free software was validated in experiments where behavioral differences between Wistar Kyoto and Wistar rats were automatically quantified, revealing its capability for effective automated phenotyping; an extended annotated RGB-D dataset is also made publicly available. The proposed solution is an easy-to-use tool with a low-cost setup and powerful 3D segmentation methods for static and dynamic environments. The ability to work in the dark means that natural animal behavior is not affected by recording lights, and automated classification is possible with only ~30 minutes of annotated video. By creating conditions for high-throughput analysis and reproducible quantitative measurement of animal behavior experiments, we believe this contribution can greatly improve behavioral analysis research.
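The 3-D tracking rests on back-projecting depth pixels through the RGB-D camera's intrinsics; a sketch of that standard pinhole conversion, with placeholder intrinsic values rather than a real calibration:

```python
import numpy as np

# Placeholder pinhole intrinsics; a real RGB-D camera reports its own.
FX, FY, CX, CY = 580.0, 580.0, 320.0, 240.0

def deproject(u, v, depth_m):
    """Back-project pixel (u, v) with metric depth into camera coordinates."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])
```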
9. A Robust Real-Time Detecting and Tracking Framework for Multiple Kinds of Unmarked Object. Sensors (Basel) 2019;20:2. [PMID: 31861254] [PMCID: PMC6982905] [DOI: 10.3390/s20010002]
Abstract
A real-time rodent tracking framework is proposed to automatically detect and track multiple objects and output the coordinates of each, combining deep learning (YOLO v3: You Only Look Once, v3), a Kalman filter, an improved Hungarian algorithm, and a nine-point position-correction algorithm. A Rat-YOLO model is trained for our experiment. The Kalman filter, built on an acceleration model, predicts each rat's position in the next frame; the prediction fills in a rat's position whenever Rat-YOLO fails in the current frame and helps associate IDs between the last frame and the current frame. The Hungarian assignment algorithm matches the objects of the last frame to the objects of the current frame and carries over their IDs, while the nine-point position-correction algorithm adjusts both the Rat-YOLO detections and the predicted positions. Because deep-learning training requires more data than our experiment provides, and manual labeling is time-consuming, we also propose software that automatically generates labeled datasets for a fixed scene, with the labels then verified manually for correctness. In addition, in an offline experiment, a mask is applied to remove specular highlights. In this experiment, we selected 500 frames as training data and labeled these images with the automatic label-generating software. On a test video of 2,892 frames, the trained model detects all three rats with an accuracy of around 72.545%; combining Rat-YOLO with the Kalman filter and the nine-point position correction improves the accuracy to 95.194%.
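The frame-to-frame ID association the abstract describes is classically solved with the Hungarian algorithm on a pairwise cost matrix; a sketch using SciPy's implementation, where the Euclidean-distance cost and gating threshold are assumptions about the paper's exact cost design:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_ids(prev_centers, curr_centers, max_dist=50.0):
    """Associate previous-frame objects with current detections.

    Returns (prev_idx, curr_idx) pairs; pairs whose distance exceeds
    max_dist are treated as lost or new objects and dropped.
    """
    cost = np.linalg.norm(
        np.asarray(prev_centers)[:, None, :] - np.asarray(curr_centers)[None, :, :],
        axis=2,
    )  # pairwise Euclidean distances in pixels
    rows, cols = linear_sum_assignment(cost)  # optimal one-to-one assignment
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
```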
10. Lee B, Jia Y, Mirbozorgi SA, Connolly M, Tong X, Zeng Z, Mahmoudi B, Ghovanloo M. An Inductively-Powered Wireless Neural Recording and Stimulation System for Freely-Behaving Animals. IEEE Trans Biomed Circuits Syst 2019;13:413-424. [PMID: 30624226] [PMCID: PMC6510586] [DOI: 10.1109/tbcas.2019.2891303]
Abstract
An inductively powered wireless integrated neural recording and stimulation (WINeRS-8) system-on-a-chip (SoC), compatible with the EnerCage-HC2 for wireless, battery-less operation, is presented for neuroscience experiments on freely behaving animals. WINeRS-8 includes a 32-channel recording analog front end, a 4-channel current-controlled stimulator, and a 434 MHz on-off-keying data link to an external software-defined-radio wideband receiver (Rx); the headstage also has a Bluetooth Low Energy link for controlling the SoC. Together, the WINeRS-8 and EnerCage-HC2 systems form a bidirectional wireless, battery-less neural interface within a standard homecage, which can support longitudinal experiments in an enriched environment. Both systems were verified in vivo in a rat model, and the recorded signals were compared with hardwired and battery-powered recording results. Real-time stimulation and recording verified the system's potential for bidirectional neural interfacing within the homecage, while continuously delivering 35 mW to the hybrid WINeRS-8 headstage over an unlimited period.
Affiliation(s)
- Byunghun Lee: School of Electrical Engineering, Incheon National University, South Korea
- Yaoyao Jia, S. Abdollah Mirbozorgi, Maysam Ghovanloo: GT-Bionics Lab, School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30308, USA
- Mark Connolly, Babak Mahmoudi: Department of Physiology, Emory University, Atlanta, GA 30329, USA
- Xingyuan Tong: School of Electronics Engineering, Xi'an University of Posts and Telecommunications, Xi'an, 710121, China
11. Pereira TD, Aldarondo DE, Willmore L, Kislin M, Wang SSH, Murthy M, Shaevitz JW. Fast animal pose estimation using deep neural networks. Nat Methods 2018;16:117-125. [PMID: 30573820] [DOI: 10.1038/s41592-018-0234-5]
Abstract
The need for automated and efficient systems for tracking full animal pose has increased with the complexity of behavioral data and analyses. Here we introduce LEAP (LEAP Estimates Animal Pose), a deep-learning-based method for predicting the positions of animal body parts. The framework provides a graphical interface for labeling body parts and training the network. LEAP offers fast prediction on new data, and training with as few as 100 frames yields 95% of peak performance. We validated LEAP on videos of freely behaving fruit flies, tracking 32 distinct points that describe the pose of the head, body, wings, and legs with an error rate of <3% of body length. We recapitulated reported findings on insect gait dynamics, demonstrated LEAP's applicability to unsupervised behavioral classification, and extended the method to more challenging imaging situations and videos of freely moving mice.
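LEAP reports localization error as a percentage of body length; that normalization is easy to make concrete (the head/tail indices below are illustrative, not LEAP's skeleton definition):

```python
import numpy as np

def error_percent_body_length(pred, truth, head_idx=0, tail_idx=-1):
    """Mean keypoint error as a percentage of body length.

    pred, truth: (K, 2) arrays of predicted and ground-truth keypoints;
    body length is taken as the head-to-tail distance in the ground truth.
    """
    body_len = np.linalg.norm(truth[head_idx] - truth[tail_idx])
    err = np.linalg.norm(pred - truth, axis=1).mean()
    return 100.0 * err / body_len
```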
Affiliation(s)
- Talmo D Pereira, Lindsay Willmore, Mikhail Kislin: Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Diego E Aldarondo: Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA; Program in Neuroscience, Harvard University, Cambridge, MA, USA
- Samuel S-H Wang, Mala Murthy: Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA; Department of Molecular Biology, Princeton University, Princeton, NJ, USA
- Joshua W Shaevitz: Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA; Lewis-Sigler Institute for Integrative Genomics, Princeton University, Princeton, NJ, USA; Department of Physics, Princeton University, Princeton, NJ, USA