1
Smith TJ, Smith TR, Faruk F, Bendea M, Tirumala Kumara S, Capadona JR, Hernandez-Reynoso AG, Pancrazio JJ. Real-Time Assessment of Rodent Engagement Using ArUco Markers: A Scalable and Accessible Approach for Scoring Behavior in a Nose-Poking Go/No-Go Task. eNeuro 2024; 11:ENEURO.0500-23.2024. PMID: 38351132; PMCID: PMC11046262; DOI: 10.1523/ENEURO.0500-23.2024. Received 11/28/2023; revised 01/30/2024; accepted 02/05/2024. Open access.
Abstract
In behavioral neuroscience, the classification and scoring of animal behavior play pivotal roles in the quantification and interpretation of complex behaviors. Traditional methods have relied on video examination by investigators, which is labor-intensive and susceptible to bias. To address these challenges, research efforts have focused on computational methods and image-processing algorithms for automated behavioral classification, where two primary approaches have emerged: marker-based and markerless tracking systems. In this study, we showcase the utility of "Augmented Reality University of Cordoba" (ArUco) markers as a marker-based tracking approach for assessing rat engagement during a nose-poking go/no-go behavioral task. In addition, we introduce a two-state engagement model, based on ArUco marker tracking data, that can be analyzed with a rectangular kernel convolution to identify critical transition points between states of engagement and distraction. We hypothesized that ArUco markers could be used to accurately estimate animal engagement in this task, enabling the computation of optimal task durations for behavioral testing. Here, we present the performance of our ArUco tracking program, demonstrating a classification accuracy of 98% validated against manual curation of the video data. Furthermore, our convolution analysis revealed that, on average, our animals became disengaged from the behavioral task at ∼75 min, providing a quantitative basis for limiting experimental session durations. Overall, our approach offers a scalable, efficient, and accessible solution for automated scoring of rodent engagement during behavioral data collection.
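The two-state engagement model described above can be sketched in a few lines. The following is a minimal illustration, not the authors' code: it smooths a hypothetical binary per-frame engagement signal with a rectangular (boxcar) kernel via NumPy's `convolve` and reports where the thresholded state flips; the kernel length, threshold, and toy signal are assumed values.

```python
import numpy as np

def engagement_transitions(engaged, kernel_len=5, threshold=0.5):
    """Smooth a binary engagement signal with a rectangular kernel and
    return the smoothed signal plus indices where the thresholded
    state flips between 'engaged' and 'distracted'."""
    engaged = np.asarray(engaged, dtype=float)
    kernel = np.ones(kernel_len) / kernel_len        # rectangular (boxcar) kernel
    smoothed = np.convolve(engaged, kernel, mode="same")
    state = smoothed >= threshold                    # True = engaged
    transitions = np.flatnonzero(np.diff(state.astype(int)) != 0) + 1
    return smoothed, transitions

# Toy signal: mostly engaged early, mostly distracted late, with noise.
signal = [1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0]
smoothed, flips = engagement_transitions(signal)
```

With this toy signal the convolution absorbs the isolated noisy frames on both sides, leaving a single clean engaged-to-distracted transition.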
Affiliation(s)
- Thomas J Smith
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas 75080
- Trevor R Smith
- Department of Mechanical and Aerospace Engineering, West Virginia University, Morgantown, West Virginia 26506
- Fareeha Faruk
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas 75080
- Mihai Bendea
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, Texas 75080
- Shreya Tirumala Kumara
- Department of Bioengineering, The University of Texas at Dallas, Richardson, Texas 75080
- Jeffrey R Capadona
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, Ohio 44106
- Advanced Platform Technology Center, Louis Stokes Cleveland Veterans Affairs Medical Center, Cleveland, Ohio 44106
- Joseph J Pancrazio
- Department of Bioengineering, The University of Texas at Dallas, Richardson, Texas 75080
2
Zhang J, Yang Z, Jiang S, Zhou Z. A spatial registration method based on 2D-3D registration for an augmented reality spinal surgery navigation system. Int J Med Robot 2023:e2612. PMID: 38113328; DOI: 10.1002/rcs.2612. Received 08/08/2023; revised 09/27/2023; accepted 12/06/2023.
Abstract
BACKGROUND: A spatial registration method is proposed to provide accurate and reliable image guidance for augmented reality (AR) spinal surgery navigation. METHODS: In the AR spinal surgery navigation system, grayscale-based 2D/3D registration was used to register preoperative computed tomography images with intraoperative X-ray images to complete the spatial registration; the virtual image was then fused with the real spine. RESULTS: In the image registration experiment, the success rate of spine model registration was 90%. In the spinal model verification experiment, the surface registration error ranged from 0.361 to 0.612 mm, with a total average surface registration error of 0.501 mm. CONCLUSION: The spatial registration method based on 2D/3D registration can be used in AR spinal surgery navigation systems and is highly accurate and minimally invasive.
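Grayscale-based 2D/3D registration optimises the pose of the CT volume so that its simulated projection (a digitally reconstructed radiograph, DRR) matches the intraoperative X-ray under an image-similarity metric. The sketch below is a deliberately reduced stand-in, not the paper's method: it scores candidate poses with normalized cross-correlation and searches a single translation parameter; all names and values are illustrative.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-shape images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def register_shift(fixed, moving, search=range(-5, 6)):
    """Grid-search the horizontal shift of `moving` that best matches
    `fixed` under NCC -- a 1-DOF toy analogue of a 6-DOF 2D/3D pose search."""
    return max(search, key=lambda s: ncc(fixed, np.roll(moving, s, axis=1)))

rng = np.random.default_rng(0)
xray = rng.random((32, 32))           # stand-in intraoperative image
drr = np.roll(xray, -3, axis=1)       # stand-in DRR, rendered 3 px off
shift = register_shift(xray, drr)     # recovers the 3 px offset
```

In the real setting the single shift parameter becomes a 6-DOF rigid pose and the exhaustive grid search becomes a nonlinear optimiser, but the similarity-driven loop is the same.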
Affiliation(s)
- Jingqi Zhang
- School of Mechanical Engineering, Tianjin University, Tianjin, China
- Zhiyong Yang
- School of Mechanical Engineering, Tianjin University, Tianjin, China
- Shan Jiang
- School of Mechanical Engineering, Tianjin University, Tianjin, China
- Zeyang Zhou
- School of Mechanical Engineering, Tianjin University, Tianjin, China
3
Vysocký A, Poštulka T, Chlebek J, Kot T, Maslowski J, Grushko S. Hand Gesture Interface for Robot Path Definition in Collaborative Applications: Implementation and Comparative Study. Sensors (Basel) 2023; 23:4219. PMID: 37177421; PMCID: PMC10180605; DOI: 10.3390/s23094219. Received 03/27/2023; revised 04/21/2023; accepted 04/21/2023.
Abstract
The article explores the use of hand gestures as a control interface for robotic systems in a collaborative workspace. Hand gesture control interfaces have become increasingly important in everyday life as well as in professional contexts such as manufacturing. We present a system designed to facilitate human-robot collaboration in manufacturing processes that require frequent revisions of the robot path; unlike existing systems, it allows direct definition of the waypoints. We introduce a novel and intuitive approach to human-robot cooperation through the use of simple gestures. The proposed interface was developed and implemented as part of a robotic workspace, utilising three RGB-D sensors to monitor the operator's hand movements within the workspace. The system employs distributed data processing across multiple Jetson Nano units, each processing data from a single camera. The MediaPipe solution is used to localise the hand landmarks in the RGB image, enabling gesture recognition. We compare conventional methods of defining robot trajectories with the developed gesture-based system in an experiment with 20 volunteers, conducted under realistic conditions in a real workspace closely resembling the intended industrial application. Data collected during the experiment included both objective and subjective parameters. The results indicate that the gesture-based interface enables users to define a given path measurably faster than conventional methods. We critically analyse the features and limitations of the developed system and suggest directions for future research. Overall, the experimental results indicate the usefulness of the developed system, as it can speed up the definition of the robot's path.
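Gesture recognition on top of hand landmarks can be illustrated compactly. The sketch below is not the paper's gesture set: it assumes MediaPipe's 21-landmark hand convention (wrist = 0, middle-finger base = 9, fingertips = 4, 8, 12, 16, 20) and classifies a toy open-palm versus fist pose from fingertip-to-wrist distances; the margin and the synthetic landmarks are invented for the example.

```python
import numpy as np

FINGERTIPS = [4, 8, 12, 16, 20]   # MediaPipe fingertip landmark indices

def classify_gesture(landmarks, open_margin=0.5):
    """Toy open-palm/fist classifier: compare the mean fingertip-to-wrist
    distance against the hand's own scale (wrist to middle-finger base)."""
    pts = np.asarray(landmarks, dtype=float)
    wrist = pts[0]
    scale = np.linalg.norm(pts[9] - wrist)          # wrist -> middle MCP
    tips = np.linalg.norm(pts[FINGERTIPS] - wrist, axis=1).mean()
    return "open" if tips / scale > 1.0 + open_margin else "fist"

def synthetic_hand(tip_dist):
    """Build a fake 21-landmark hand with all fingertips `tip_dist` away."""
    pts = np.zeros((21, 2))
    pts[9] = [0.0, 1.0]                             # defines unit hand scale
    for i in FINGERTIPS:
        pts[i] = [0.0, tip_dist]
    return pts
```

Normalising by the hand's own scale keeps such a rule invariant to the hand's distance from the camera, which matters when three cameras observe the workspace from different ranges.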
Affiliation(s)
- Aleš Vysocký
- Department of Robotics, Faculty of Mechanical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, 708 00 Ostrava, Czech Republic
| | - Tomáš Poštulka
- Department of Robotics, Faculty of Mechanical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, 708 00 Ostrava, Czech Republic
| | - Jakub Chlebek
- Department of Robotics, Faculty of Mechanical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, 708 00 Ostrava, Czech Republic
| | - Tomáš Kot
- Department of Robotics, Faculty of Mechanical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, 708 00 Ostrava, Czech Republic
| | - Jan Maslowski
- Department of Robotics, Faculty of Mechanical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, 708 00 Ostrava, Czech Republic
| | - Stefan Grushko
- Department of Robotics, Faculty of Mechanical Engineering, VSB-Technical University of Ostrava, 17. Listopadu 2172/15, 708 00 Ostrava, Czech Republic
| |
4
Wojtowicz K, Wojciechowski P. Synchronous Control of a Group of Flying Robots Following a Leader UAV in an Unfamiliar Environment. Sensors (Basel) 2023; 23:740. PMID: 36679536; PMCID: PMC9867249; DOI: 10.3390/s23020740. Received 12/05/2022; revised 12/29/2022; accepted 01/04/2023.
Abstract
An increasing number of professional drone flights require situational awareness of the aerial vehicles: each vehicle in a group of drones must be aware of its surroundings and of the other group members, so the amount of data to be exchanged, and with it the total cost, is rising rapidly. This paper presents an implementation and assessment of an organized drone group comprising a fully aware leader and much less expensive followers. The solution achieves a significant cost reduction by decreasing the number of sensors onboard the followers, while improving the organization and manageability of the group. In this project, a group of quadrotor drones was evaluated: an automatically flying leader was followed by drones equipped only with low-end cameras, tasked with following ArUco markers mounted on the preceding drone. Several test tasks were designed and conducted. The presented system proved appropriate for slowly moving groups of drones.
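A follower's control law in this kind of leader-follower scheme can be as simple as proportional feedback on the detected marker: steer to keep the marker centred in the image and move to hold a fixed apparent marker size (i.e., a fixed following distance). The sketch below is illustrative, not the paper's controller; the image dimensions, target size, and gains are assumed values.

```python
def follow_command(cx, cy, size_px, img_w=640, img_h=480,
                   target_size=80.0, k_yaw=0.002, k_z=0.002, k_x=0.01):
    """Proportional follower commands from the detected ArUco marker:
    pixel centre (cx, cy) and apparent side length size_px."""
    yaw_rate = k_yaw * (cx - img_w / 2)      # marker right of centre -> yaw right
    climb = -k_z * (cy - img_h / 2)          # marker below centre -> descend
    forward = k_x * (target_size - size_px)  # marker too small -> close in
    return yaw_rate, climb, forward

# Centred marker at the target size -> hold position.
hold = follow_command(320, 240, 80.0)
# Marker drifted right and shrank -> yaw right and move forward.
chase = follow_command(400, 240, 60.0)
```

This is exactly why a single low-end camera suffices on each follower: the marker's pixel position and size stand in for the expensive range and state sensors carried by the leader.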
5
Oščádal P, Kot T, Spurný T, Suder J, Vocetka M, Dobeš L, Bobovský Z. Camera Arrangement Optimization for Workspace Monitoring in Human-Robot Collaboration. Sensors (Basel) 2022; 23:295. PMID: 36616896; PMCID: PMC9823859; DOI: 10.3390/s23010295. Received 10/13/2022; revised 12/23/2022; accepted 12/26/2022.
Abstract
Human-robot interaction is becoming an integral part of industrial practice, with a growing emphasis on safety in workplaces where a robot may collide with a worker. Existing solutions control the robot based on the potential energy of a collision or re-plan its straight-line trajectory; in either case, a sensor system must be designed to detect obstacles across the human-robot shared workspace. So far, there is no procedure that engineers can follow in practice to deploy sensors ideally. We propose classifying the workspace with an importance index that determines which parts of the workspace the sensors should cover to ensure ideal obstacle sensing; the ideal camera positions can then be found automatically from this classified map. In our experiment, the coverage of the important volume by the calculated camera position was on average 37% greater than that of a camera placed intuitively by test subjects. With two cameras at the workplace, the calculated positions were 27% more effective than the subjects' placements; with three cameras, they were 13% better, with total coverage of more than 99% of the classified map.
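One way to turn such a classified importance map into camera positions is greedy weighted coverage: repeatedly pick the candidate pose that adds the most not-yet-covered importance. The paper's actual optimization procedure is not reproduced here; the sketch below, with invented importance weights and visibility masks, only illustrates the idea.

```python
import numpy as np

def greedy_camera_placement(importance, coverage, n_cameras):
    """Pick `n_cameras` candidate poses greedily.
    importance: (V,) weight of each workspace voxel.
    coverage:   (C, V) bool, voxels visible from each candidate pose."""
    covered = np.zeros(importance.shape[0], dtype=bool)
    chosen = []
    for _ in range(n_cameras):
        gains = (coverage & ~covered) @ importance   # newly covered weight
        gains[chosen] = -1.0                         # never reuse a pose
        best = int(np.argmax(gains))
        chosen.append(best)
        covered |= coverage[best]
    return chosen

importance = np.array([1.0, 1.0, 5.0, 5.0])          # 4 voxels, 2 important
coverage = np.array([[1, 1, 0, 0],
                     [0, 0, 1, 1],
                     [1, 0, 1, 0]], dtype=bool)      # 3 candidate poses
picked = greedy_camera_placement(importance, coverage, 2)
```

The greedy pass first takes the pose covering both high-importance voxels, then the pose adding the most of the remainder; weighting by importance is what lets a calculated placement beat an intuitive one.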
Affiliation(s)
- Petr Oščádal
- Department of Robotics, Faculty of Mechanical Engineering, VSB-TU Ostrava, 70833 Ostrava, Czech Republic
- Tomáš Kot
- Department of Robotics, Faculty of Mechanical Engineering, VSB-TU Ostrava, 70833 Ostrava, Czech Republic
- Tomáš Spurný
- Department of Robotics, Faculty of Mechanical Engineering, VSB-TU Ostrava, 70833 Ostrava, Czech Republic
- Jiří Suder
- Department of Robotics, Faculty of Mechanical Engineering, VSB-TU Ostrava, 70833 Ostrava, Czech Republic
- Michal Vocetka
- Department of Robotics, Faculty of Mechanical Engineering, VSB-TU Ostrava, 70833 Ostrava, Czech Republic
- Libor Dobeš
- Moravskoslezský Automobilový Klastr, z.s., Business Incubator VŠB-TU Ostrava, 70833 Ostrava, Czech Republic
- Zdenko Bobovský
- Department of Robotics, Faculty of Mechanical Engineering, VSB-TU Ostrava, 70833 Ostrava, Czech Republic
6
Talmazov G, Michaud PL. Comments on "Jaw tracking integration to the virtual patient: A 4D dynamic approach". J Prosthet Dent 2022; 128:1414. PMID: 36424210; DOI: 10.1016/j.prosdent.2022.08.038. Received 08/11/2022; revised 08/16/2022; accepted 08/17/2022.
Affiliation(s)
- Pierre-Luc Michaud
- Associate Professor, Department of Dental Clinical Sciences, Faculty of Dentistry, Dalhousie University, Halifax, NS, Canada
7
Deep learning based 3D target detection for indoor scenes. Appl Intell 2022. DOI: 10.1007/s10489-022-03888-4.
8
Distributed Camera Subsystem for Obstacle Detection. Sensors (Basel) 2022; 22:4588. PMID: 35746381; PMCID: PMC9228584; DOI: 10.3390/s22124588. Received 04/29/2022; revised 06/15/2022; accepted 06/16/2022.
Abstract
This work focuses on improving a camera system for sensing a workspace in which dynamic obstacles must be detected. The currently available state-of-the-art solution (MoveIt!) processes data centrally from cameras that must be registered before the system starts. Our solution enables distributed data processing and dynamic changes in the number of sensors at runtime. The distributed camera data processing is implemented using a dedicated control unit per camera, on which filtering is performed by comparing the real and expected depth images. As a performance benchmark, the speed of processing all sensor data into a global voxel map was compared between the centralized system (MoveIt!) and the new distributed system. The distributed system scales better with the number of cameras, offers better framerate stability, and allows the camera count to be changed on the fly. The effects of voxel grid size and camera resolution were also compared, with the distributed system again showing better results, and its network data-transmission overhead is considerably lower. Overall, the decentralized system proves to be faster by 38.7% with one camera and 71.5% with four cameras.
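The per-unit filtering step, comparing a measured depth image against the depth expected for an obstacle-free scene, can be sketched as follows. This is an illustration of the idea rather than the system's code; the tolerance value is assumed.

```python
import numpy as np

def obstacle_mask(real_depth, expected_depth, tol=0.05):
    """Flag pixels where the measured depth is more than `tol` metres
    closer than the expected (obstacle-free) depth. Pixels with no
    depth reading (value 0) are ignored."""
    real = np.asarray(real_depth, dtype=float)
    expected = np.asarray(expected_depth, dtype=float)
    valid = real > 0.0
    return valid & (expected - real > tol)

expected = np.full((4, 4), 2.0)       # empty workspace: wall at 2 m
real = expected.copy()
real[1, 1] = 1.2                      # an obstacle 1.2 m away
real[2, 2] = 0.0                      # a dropout pixel, not an obstacle
mask = obstacle_mask(real, expected)
```

Only the genuinely closer pixel is flagged; dropouts are ignored, which keeps false obstacles out of the shared voxel map that the units jointly populate.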
9
Jang M, Yoon H, Lee S, Kang J, Lee S. A Comparison and Evaluation of Stereo Matching on Active Stereo Images. Sensors (Basel) 2022; 22:3332. PMID: 35591022; PMCID: PMC9100404; DOI: 10.3390/s22093332. Received 03/11/2022; revised 04/19/2022; accepted 04/24/2022.
Abstract
Disparity and depth at corresponding pixels are inversely proportional. Thus, to accurately estimate depth from stereo vision, it is important to obtain accurate disparity maps, which encode the difference between the horizontal coordinates of corresponding image points. Stereo vision can be classified as passive or active: active stereo vision projects a pattern texture onto the scene to fill textureless regions, which passive stereo vision lacks. In passive stereo vision, many surveys have found that disparity accuracy relies heavily on attributes such as radiometric variation and color variation, and have identified the best-performing conditions. In active stereo matching, however, the accuracy of the disparity map is influenced not only by the factors affecting the passive technique, but also by the attributes of the generated pattern textures. Therefore, in this paper, we analyze and evaluate the relationship between the performance of the active stereo technique and the attributes of the pattern texture. Experiments are conducted under various settings that may affect overall performance, such as changing the pattern intensity, pattern contrast, number of pattern dots, and global gain. Our findings can serve as a reference for constructing an active stereo system.
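The inverse relationship mentioned above is the standard rectified-stereo formula Z = f·B/d (focal length in pixels, baseline in metres, disparity in pixels). A minimal sketch, with illustrative numbers:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a point from its stereo disparity: Z = f * B / d,
    valid for a rectified pinhole stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 10 cm baseline, 35 px disparity -> 2 m.
z = depth_from_disparity(35.0, 700.0, 0.1)
```

Because Z is proportional to 1/d, a one-pixel disparity error costs far more depth accuracy at long range than at short range, which is why accurate disparity in textureless regions, the regions active patterns are designed to fill, matters so much.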
Affiliation(s)
- Mingyu Jang
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul 03722, Korea
- Hyunse Yoon
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul 03722, Korea
- Seongmin Lee
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul 03722, Korea
- Jiwoo Kang
- Department of IT Engineering, Sookmyung Women’s University, Seoul 04310, Korea
- Sanghoon Lee
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul 03722, Korea
- Department of Radiology, College of Medicine, Yonsei University, Seoul 03722, Korea
10
A Non-Anthropomorphic Bipedal Walking Robot with a Vertically Stabilized Base. Appl Sci (Basel) 2022. DOI: 10.3390/app12094108.
Abstract
The paper presents a proposed concept for a biped robot with vertical stabilization of the robot’s base and minimization of its sideways oscillations. The robot uses six actuators, which provides a favourable energy balance compared with purely articulated bipedal robots. In addition, the linear actuators used are self-locking, so no additional energy is required for braking or for holding a stable position. The direct and inverse kinematics problems are solved by means of a kinematic model of the robot, and a solution for locomotion on an inclined plane is also provided. Special attention is paid to the position of the robot’s center of gravity and its stability in motion. The simulation results confirm that the proposed concept meets all expectations. The robot can be used as a mechatronic assistant or as a carrier for handling extensions.
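The abstract does not reproduce the direct and inverse kinematics it mentions; as a generic illustration of the kind of closed-form solution involved, the sketch below solves the planar two-link case. The link lengths and target point are invented, and the robot's actual six-actuator kinematics is different.

```python
import math

def two_link_ik(x, y, l1, l2):
    """Closed-form inverse kinematics of a planar two-link chain:
    returns the (hip, knee) joint angles reaching point (x, y)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    knee = math.acos(c2)                      # knee-bent solution
    hip = math.atan2(y, x) - math.atan2(l2 * math.sin(knee),
                                        l1 + l2 * math.cos(knee))
    return hip, knee

hip, knee = two_link_ik(1.2, 0.5, 1.0, 1.0)
```

Substituting the angles back into the forward kinematics reproduces the target point, which is the usual correctness check when building such a kinematic model.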
11
Finding the Optimal Pose of 2D LLT Sensors to Improve Object Pose Estimation. Sensors (Basel) 2022; 22:1536. PMID: 35214438; PMCID: PMC8879124; DOI: 10.3390/s22041536. Received 01/07/2022; revised 01/28/2022; accepted 02/15/2022.
Abstract
In this paper, we examine a method for improving pose estimation by correctly positioning the sensors relative to the scanned object. Three objects, made of different materials and produced with different manufacturing technologies, were selected for the experiment. To collect input data for orientation estimation, a simulation environment was created in which each object was scanned at different poses. A simulation model of the laser line triangulation sensor was created for scanning, and the optical surface properties of the scanned objects were set to simulate real scanning conditions. The simulation was verified on a real system using a UR10e robot to rotate and move the object. The presented results show that the simulation matches the real measurements and that appropriate placement of the sensors improves the orientation estimation.
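Orientation estimation from a set of scanned surface points, the quantity whose accuracy the sensor placement affects, is commonly done with a PCA/SVD plane fit. The sketch below is a generic illustration with synthetic points, not the paper's pipeline:

```python
import numpy as np

def estimate_normal(points):
    """Estimate a surface normal from 3D scan points: the right singular
    vector of the centred point cloud with the smallest singular value."""
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    n = vt[-1]
    return n / np.linalg.norm(n)

# Synthetic scan of the tilted plane z = 0.5 * x.
xs = np.array([0.0, 1.0, 2.0, 0.0, 1.0, 2.0])
ys = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
pts = np.stack([xs, ys, 0.5 * xs], axis=1)
normal = estimate_normal(pts)
```

A grazing sensor pose concentrates the scanned points into a near-degenerate strip, which is exactly the situation where such a fit, and hence the estimated orientation, becomes unreliable; choosing the sensor pose to avoid it is the point of the optimisation.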
12
Camera-Based Method for Identification of the Layout of a Robotic Workcell. Appl Sci (Basel) 2020. DOI: 10.3390/app10217679.
Abstract
In this paper, a new method for the calibration of robotic cell components is presented and demonstrated by identifying an industrial robotic manipulator’s base and end-effector frames in a workplace. It is based on a mathematical approach using a Jacobian matrix, and the method can also identify other kinematic parameters of the robot. The Universal Robots UR3 was chosen to prove the working principle in both simulation and experiment, using a simple, repeatable, low-cost solution: image analysis to detect tag markers. Results showing the accuracy of the system are included and discussed.
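Jacobian-based identification of kinematic parameters typically iterates Gauss-Newton updates: compare predicted and observed end-effector positions, build a Jacobian of the prediction with respect to the unknown parameters, and correct the parameter estimate. The sketch below reduces this to a single unknown link length with a numerically built Jacobian; it is a toy analogue, not the paper's formulation, and all values are invented.

```python
import numpy as np

def fk(theta, l):
    """Forward kinematics of a single planar link of length l."""
    return np.array([l * np.cos(theta), l * np.sin(theta)])

def identify_length(thetas, observed, l0=1.0, iters=10, eps=1e-6):
    """Gauss-Newton identification of the link length from measured
    end-effector positions at known joint angles."""
    l = l0
    for _ in range(iters):
        residual = np.concatenate([obs - fk(t, l)
                                   for t, obs in zip(thetas, observed)])
        jac = np.concatenate([(fk(t, l + eps) - fk(t, l)) / eps
                              for t in thetas])                # d fk / d l
        l += float(jac @ residual) / float(jac @ jac)          # GN step
    return l

true_l = 1.37
thetas = [0.1, 0.7, 1.3]
observed = [fk(t, true_l) for t in thetas]     # simulated tag measurements
estimate = identify_length(thetas, observed)
```

The same structure generalises to the multi-parameter case, where `jac` becomes a matrix, the update becomes a least-squares solve, and the observations come from camera-detected tag markers rather than simulation.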