1
Smith TJ, Smith TR, Faruk F, Bendea M, Tirumala Kumara S, Capadona JR, Hernandez-Reynoso AG, Pancrazio JJ. Real-Time Assessment of Rodent Engagement Using ArUco Markers: A Scalable and Accessible Approach for Scoring Behavior in a Nose-Poking Go/No-Go Task. eNeuro 2024; 11:ENEURO.0500-23.2024. PMID: 38351132; PMCID: PMC11046262; DOI: 10.1523/eneuro.0500-23.2024. Received 11/28/2023; Revised 01/30/2024; Accepted 02/05/2024. Open access.
Abstract
In the field of behavioral neuroscience, the classification and scoring of animal behavior play pivotal roles in the quantification and interpretation of complex behaviors displayed by animals. Traditional methods have relied on video examination by investigators, which is labor-intensive and susceptible to bias. To address these challenges, research efforts have focused on computational methods and image-processing algorithms for automated behavioral classification. Two primary approaches have emerged: marker-based and markerless tracking systems. In this study, we showcase the utility of "Augmented Reality University of Cordoba" (ArUco) markers as a marker-based tracking approach for assessing rat engagement during a nose-poking go/no-go behavioral task. In addition, we introduce a two-state engagement model based on ArUco marker tracking data that can be analyzed with a rectangular kernel convolution to identify critical transition points between states of engagement and distraction. We hypothesized that ArUco markers could be utilized to accurately estimate animal engagement in a nose-poking go/no-go behavioral task, enabling the computation of optimal task durations for behavioral testing. Here, we present the performance of our ArUco tracking program, demonstrating a classification accuracy of 98% that was validated against manual curation of video data. Furthermore, our convolution analysis revealed that, on average, our animals became disengaged from the behavioral task at ∼75 min, providing a quantitative basis for limiting experimental session durations. Overall, our approach offers a scalable, efficient, and accessible solution for automated scoring of rodent engagement during behavioral data collection.
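The abstract describes the two-state model only at a high level. As an illustrative sketch (not the authors' code; the function names, kernel width, threshold, and toy trace below are all hypothetical), a rectangular-kernel convolution over a binary engaged/disengaged trace smooths momentary lapses and exposes the transition point where the animal disengages:

```python
import numpy as np

def smooth_engagement(binary_states, kernel_len):
    """Convolve a 0/1 engagement trace with a normalized rectangular kernel,
    yielding the local fraction of engaged samples at each time point."""
    kernel = np.ones(kernel_len) / kernel_len
    return np.convolve(binary_states, kernel, mode="same")

def first_disengagement(smoothed, threshold=0.5):
    """Return the index of the first sample where the smoothed engagement
    fraction drops below threshold, or None if it never does."""
    below = np.flatnonzero(smoothed < threshold)
    return int(below[0]) if below.size else None

# Toy trace: engaged for 80 samples, then disengaged for 40.
trace = np.r_[np.ones(80), np.zeros(40)]
sm = smooth_engagement(trace, kernel_len=11)
print(first_disengagement(sm))  # → 80
```

With a wider kernel, brief distractions inside an otherwise engaged period are averaged away, which is the practical point of convolving rather than thresholding the raw trace.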
Affiliation(s)
- Thomas J Smith
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas 75080
- Trevor R Smith
- Department of Mechanical and Aerospace Engineering, West Virginia University, Morgantown, West Virginia 26506
- Fareeha Faruk
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas 75080
- Mihai Bendea
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas 75080
- Shreya Tirumala Kumara
- Department of Bioengineering, The University of Texas at Dallas, Richardson, Texas 75080
- Jeffrey R Capadona
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio 44106
- Advanced Platform Technology Center, Louis Stokes Cleveland Veterans Affairs Medical Center, Cleveland, Ohio 44106
- Joseph J Pancrazio
- Department of Bioengineering, The University of Texas at Dallas, Richardson, Texas 75080
2
Jin M, Li J, Zhang L. DOPE++: 6D pose estimation algorithm for weakly textured objects based on deep neural networks. PLoS One 2022; 17:e0269175. PMID: 35675352; PMCID: PMC9176784; DOI: 10.1371/journal.pone.0269175. Received 02/14/2022; Accepted 05/15/2022. Open access.
Abstract
This paper focuses on 6D pose estimation for weakly textured targets from RGB-D images. A deep-neural-network-based 6D pose estimation algorithm (DOPE++) is proposed to address the poor real-time performance and low recognition efficiency that arise when robots grasp weakly textured parts. More specifically, we first introduce the depthwise separable convolution operation to lighten the original deep object pose estimation (DOPE) network structure and improve its operation speed. Second, an attention mechanism is introduced to improve network accuracy. In response to the original DOPE network's low recognition efficiency for parts with occlusion relationships and its misrecognition of parts at scales that are too large or too small, a random mask local processing method and a multiscale fusion pose estimation module are proposed. To address the single background representation of the part pose estimation dataset, a virtual dataset is constructed for data expansion, forming a hybrid dataset. The results show that our proposed DOPE++ network improves the real-time performance of 6D pose estimation and enhances the recognition of parts at different scales without loss of accuracy.
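The lightweighting step above rests on a standard piece of arithmetic: a depthwise separable convolution replaces one k×k×Cin×Cout filter bank with Cin depthwise k×k filters plus a 1×1 pointwise convolution. A minimal sketch of that parameter-count comparison (function names are illustrative; bias terms are omitted for simplicity):

```python
def standard_conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution layer (no bias)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise stage: one k x k filter per input channel.
    Pointwise stage: a 1 x 1 convolution mixing channels."""
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 64, 128
std = standard_conv_params(k, c_in, c_out)
dws = depthwise_separable_params(k, c_in, c_out)
print(std, dws, round(std / dws, 1))  # → 73728 8768 8.4
```

The roughly 8× reduction for this layer shape is what makes the lightened network faster at inference time, at the cost of a less expressive per-layer transform.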
Affiliation(s)
- Mei Jin
- School of Electrical Engineering, Yanshan University, Qinhuangdao, China
- Hebei Key Laboratory of Testing and Metrology Technology and Instruments, Yanshan University, Qinhuangdao, China
- Jiaqing Li
- School of Electrical Engineering, Yanshan University, Qinhuangdao, China
- Hebei Key Laboratory of Testing and Metrology Technology and Instruments, Yanshan University, Qinhuangdao, China
- * E-mail:
- Liguo Zhang
- School of Electrical Engineering, Yanshan University, Qinhuangdao, China
- Hebei Key Laboratory of Testing and Metrology Technology and Instruments, Yanshan University, Qinhuangdao, China
3
Vagvolgyi BP, Jayakumar RP, Madhav MS, Knierim JJ, Cowan NJ. Wide-angle, monocular head tracking using passive markers. J Neurosci Methods 2022; 368:109453. PMID: 34968626; PMCID: PMC8857048; DOI: 10.1016/j.jneumeth.2021.109453. Received 07/23/2021; Revised 11/22/2021; Accepted 12/17/2021.
Abstract
BACKGROUND: Camera images can encode large amounts of visual information about an animal and its environment, enabling high-fidelity 3D reconstruction of both using computer vision methods. Most systems, whether markerless (e.g., deep learning based) or marker-based, require multiple cameras to track features across multiple points of view to enable such 3D reconstruction. However, such systems can be expensive and are challenging to set up in small-animal research apparatuses.
NEW METHODS: We present an open-source, marker-based system for tracking the head of a rodent for behavioral research that requires only a single camera with a potentially wide field of view. The system features a lightweight visual target and computer vision algorithms that together enable high-accuracy tracking of the six-degree-of-freedom position and orientation of the animal's head. The system, which requires only a single camera positioned above the behavioral arena, robustly reconstructs the pose over a wide range of head angles (360° in yaw, and approximately ±120° in roll and pitch).
RESULTS: Experiments with live animals demonstrate that the system can reliably identify rat head position and orientation. Evaluations using a commercial optical tracker device show that the system achieves accuracy that rivals commercial multi-camera systems.
COMPARISON WITH EXISTING METHODS: Our solution significantly improves upon existing monocular marker-based tracking methods, both in accuracy and in allowable range of motion.
CONCLUSIONS: The proposed system enables the study of complex behaviors by providing robust, fine-scale measurements of rodent head motions in a wide range of orientations.
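The core geometric task in such a system is recovering a six-degree-of-freedom rigid transform (rotation plus translation) of the marker target. As a simplified illustration only — using known 3D point correspondences and the Kabsch algorithm, rather than the paper's actual image-plane detection pipeline — the pose-recovery step can be sketched as follows (all names and the toy marker geometry are hypothetical):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t such that
    dst ≈ R @ src + t, via the Kabsch/SVD method (rows are points)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy target: four non-coplanar marker points, rotated 90° about z and shifted.
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1.0]])
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])
obs = pts @ Rz.T + np.array([0.5, -0.2, 2.0])
R, t = rigid_transform(pts, obs)
print(np.allclose(R, Rz), np.allclose(t, [0.5, -0.2, 2.0]))  # → True True
```

A real monocular pipeline must first estimate the 3D marker positions from 2D detections (e.g., a perspective-n-point solve against a calibrated camera), which is where the paper's contribution over naive approaches lies.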
Affiliation(s)
- Balazs P. Vagvolgyi
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, U.S.A.
- Corresponding author: Balazs P. Vagvolgyi
- Ravikrishnan P. Jayakumar
- Mind/Brain Institute, Johns Hopkins University, Baltimore, MD, U.S.A.
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, U.S.A.
- Mechanical Engineering Department, Johns Hopkins University, Baltimore, MD, U.S.A.
- Manu S. Madhav
- Mind/Brain Institute, Johns Hopkins University, Baltimore, MD, U.S.A.
- Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD, U.S.A.
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, U.S.A.
- School of Biomedical Engineering, Djawad Mowafaghian Centre for Brain Health, University of British Columbia, BC, Canada
- James J. Knierim
- Mind/Brain Institute, Johns Hopkins University, Baltimore, MD, U.S.A.
- Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD, U.S.A.
- Noah J. Cowan
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, U.S.A.
- Mechanical Engineering Department, Johns Hopkins University, Baltimore, MD, U.S.A.