1. Yan C, Ren J, Wang R, Chen Y, Zhang J. Target Detection-Based Control Method for Archive Management Robot. Sensors (Basel) 2023;23:5343. [PMID: 37300070; PMCID: PMC10256058; DOI: 10.3390/s23115343]
Abstract
With the increasing demand for efficient archive management, robots have been employed for paper-based archive management in large, unmanned archives. Because such systems operate without human supervision, their reliability requirements are high. To address this, this study proposes a servo-controlled robotic arm system with adaptive recognition for handling complex archive-box access scenarios. The vision component employs the YOLOv5 algorithm to identify feature regions, sort and filter the detections, and estimate the target center position, while the servo control component uses closed-loop control to adjust the arm's posture. The proposed feature-region-based sorting and matching algorithm enhances accuracy and reduces the probability of shaking by 1.27% in restricted viewing scenarios. Integrated with a lifting device, the system enables the effective storage and retrieval of archive boxes of varying heights and provides a reliable, cost-effective solution for paper archive access in complex scenarios. The experimental results demonstrate the effectiveness of the proposed adaptive box access system for unmanned archival storage, which exhibits a higher storage success rate than existing commercial archival management robotic systems. Further research should evaluate the system's performance, scalability, and generalizability.
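The abstract names YOLOv5 as the detector used to locate feature regions and estimate the target center. As a rough illustration of that step only, the sketch below loads a YOLOv5 model through the public ultralytics/yolov5 torch.hub interface, filters and sorts detections by confidence, and returns the pixel center of the best box; the threshold, sorting rule, and helper name are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: estimate a target center from YOLOv5 detections.
# The weights, confidence threshold, and sorting rule are assumptions for
# illustration; the paper's actual detection pipeline is not public.
import torch

# Load a pretrained YOLOv5 model via the public torch.hub interface.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def estimate_box_center(image, min_conf=0.5):
    """Return the (x, y) pixel center of the highest-confidence detection."""
    results = model(image)                          # run inference
    det = results.xyxy[0]                           # rows: [x1, y1, x2, y2, conf, cls]
    det = det[det[:, 4] >= min_conf]                # filter low-confidence boxes
    if det.shape[0] == 0:
        return None                                 # nothing detected
    det = det[det[:, 4].argsort(descending=True)]   # sort by confidence
    x1, y1, x2, y2 = det[0, :4].tolist()
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)       # box center in pixels

print(estimate_box_center("archive_box.jpg"))       # hypothetical input image
```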
Affiliation(s)
- Cheng Yan: College of Automation, Nanjing University of Science & Technology, Xiaolingwei Street, Nanjing 210094, China
- Jieqi Ren: Second Academy of Aerospace Science and Industry, Yongding Road, Beijing 100854, China
- Rui Wang: College of Automation, Nanjing University of Science & Technology, Xiaolingwei Street, Nanjing 210094, China
- Yaowei Chen: College of Automation, Nanjing University of Science & Technology, Xiaolingwei Street, Nanjing 210094, China
- Jie Zhang: College of Automation, Nanjing University of Science & Technology, Xiaolingwei Street, Nanjing 210094, China
2. Yang X, Chen F, Wang F, Zheng L, Wang S, Qi W, Su H. Sensor Fusion-Based Teleoperation Control of Anthropomorphic Robotic Arm. Biomimetics (Basel) 2023;8:169. [PMID: 37092421; PMCID: PMC10123651; DOI: 10.3390/biomimetics8020169]
Abstract
Sensor fusion combines information from multiple sensors to improve the accuracy and reliability of the collected data. In the teleoperation control of an anthropomorphic robotic arm, sensor fusion can enhance precise control by combining data from sensors such as cameras, data gloves, and force sensors. By fusing and processing this sensing information, the motion of the human operator's arm and hand can be replicated in real time by the anthropomorphic robotic arm and dexterous hand. In this paper, we present a sensor fusion-based teleoperation control system for an anthropomorphic robotic arm and dexterous hand that uses a filter to fuse data from multiple sensors in real time. The perceived posture and motion of the human arm are analyzed and processed, and wireless communication is used to control the anthropomorphic robotic arm and dexterous hand flexibly, allowing the user to perform anthropomorphic operations in a stable and reliable manner. We also discuss the implementation and experimental evaluation of the system, showing that it achieves improved performance and stability compared with traditional teleoperation control methods.
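The abstract states that a filter fuses data from cameras, data gloves, and force sensors but does not specify the filter design. The sketch below is a minimal 1-D Kalman-style fusion of two noisy joint-angle measurements under assumed noise variances; the sensor names and parameters are illustrative, not the authors' implementation.

```python
# Minimal sketch of sensor fusion for one joint angle: a 1-D Kalman-style
# filter that blends a camera-based estimate and a data-glove estimate.
# Noise variances and sensor names are assumptions for illustration.
def fuse_joint_angle(x_prev, p_prev, z_camera, z_glove,
                     q=1e-4, r_camera=4e-2, r_glove=1e-2):
    # Predict: assume the angle changes slowly between samples.
    x_pred, p_pred = x_prev, p_prev + q
    # Update with the camera measurement.
    k = p_pred / (p_pred + r_camera)
    x, p = x_pred + k * (z_camera - x_pred), (1 - k) * p_pred
    # Update with the data-glove measurement.
    k = p / (p + r_glove)
    x, p = x + k * (z_glove - x), (1 - k) * p
    return x, p

# Usage: feed each new measurement pair through the filter.
x, p = 0.0, 1.0
for z_cam, z_glv in [(0.52, 0.50), (0.55, 0.53), (0.60, 0.58)]:
    x, p = fuse_joint_angle(x, p, z_cam, z_glv)
print(round(x, 3))
```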
Affiliation(s)
- Xiaolong Yang: Key Laboratory of Bionic Engineering, Ministry of Education, Jilin University, Changchun 130022, China; College of Mechanical and Electrical Engineering, Changchun University of Science and Technology, Changchun 130022, China; Weihai Institute for Bionics, Jilin University, Weihai 264402, China
- Furong Chen: Key Laboratory of Bionic Engineering, Ministry of Education, Jilin University, Changchun 130022, China; College of Mechanical and Electrical Engineering, Changchun University of Science and Technology, Changchun 130022, China; Weihai Institute for Bionics, Jilin University, Weihai 264402, China
- Feilong Wang: Key Laboratory of Bionic Engineering, Ministry of Education, Jilin University, Changchun 130022, China; College of Mechanical and Electrical Engineering, Changchun University of Science and Technology, Changchun 130022, China; Weihai Institute for Bionics, Jilin University, Weihai 264402, China
- Long Zheng: Key Laboratory of Bionic Engineering, Ministry of Education, Jilin University, Changchun 130022, China; Weihai Institute for Bionics, Jilin University, Weihai 264402, China
- Shukun Wang: College of Mechanical and Electrical Engineering, Changchun University of Science and Technology, Changchun 130022, China
- Wen Qi: School of Future Technology, South China University of Technology, Guangzhou 511436, China
- Hang Su: Weihai Institute for Bionics, Jilin University, Weihai 264402, China; Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan 20133, Italy
3. On lightmyography based muscle-machine interfaces for the efficient decoding of human gestures and forces. Sci Rep 2023;13:327. [PMID: 36609654; PMCID: PMC9822960; DOI: 10.1038/s41598-022-25982-w]
Abstract
Conventional muscle-machine interfaces such as electromyography (EMG) have significant drawbacks, including crosstalk, a non-linear relationship between the signal and the corresponding motion, and increased signal processing requirements. In this work, we introduce a new muscle-machine interfacing technique called lightmyography (LMG), which can be used to efficiently decode human hand gestures, motion, and forces from the detected contractions of the human muscles. LMG utilizes light propagation through elastic media and human tissue, measuring changes in light luminosity to detect muscle movement. Similar to forcemyography, LMG infers muscular contractions through tissue deformation and skin displacements. In this study, we examine how different characteristics of the light source and silicone medium affect the performance of LMG, and we compare LMG-based and EMG-based gesture decoding using various machine learning techniques. To do that, we design an armband equipped with five LMG modules and use it to collect the required LMG data. Three different machine learning methods are employed: Random Forests, Convolutional Neural Networks, and Temporal Multi-Channel Vision Transformers. The system has also been used efficiently to decode the forces exerted during power grasping. The results demonstrate that LMG outperforms EMG for most methods and subjects.
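Random Forests are one of the three classifiers the study compares for LMG gesture decoding. The sketch below shows what that classification step could look like with scikit-learn; the synthetic data, feature layout (five channels times four window statistics), and gesture count are placeholders for real LMG recordings.

```python
# Minimal sketch: gesture classification from multi-channel LMG windows
# with a Random Forest. The synthetic features and labels are placeholders;
# the authors' preprocessing and windowing are not reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5 * 4))      # 600 windows, 5 channels x 4 statistics
y = rng.integers(0, 8, size=600)       # 8 hypothetical hand gestures

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```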
4. Recent Advances in Bipedal Walking Robots: Review of Gait, Drive, Sensors and Control Systems. Sensors (Basel) 2022;22:4440. [PMID: 35746222; PMCID: PMC9229068; DOI: 10.3390/s22124440]
Abstract
Currently, there is intensive development of bipedal walking robots. The best-known solutions are based on the principles of human gait created in nature during evolution, and modern bipedal robots are also based on the locomotion manners of birds. This review presents the current state of the art of bipedal walking robots based on natural bipedal movements (human and bird) as well as on innovative synthetic solutions. Firstly, an overview of the scientific analysis of human gait is provided as a basis for the design of bipedal robots. The full human gait cycle, which consists of two main phases, is analysed, and attention is paid to the problem of balance and stability, especially in the single-support phase, when the bipedal movement is unstable. The influences of passive or active gait on energy demand are also discussed; most studies are based on the zero-moment-point (ZMP) approach. Furthermore, a review of the knowledge on the specific locomotor characteristics of birds, whose kinematics are derived from dinosaurs and provide them with both walking and running abilities, is presented. Secondly, many types of bipedal robot solutions are reviewed, including nature-inspired robots (human-like and birdlike robots) and innovative robots using new heuristic, synthetic ideas for locomotion. In total, 45 robotic solutions were gathered by the bibliographic search method. Atlas is highlighted as one of the most advanced human-like robots, while Cassie and Digit are presented as the birdlike cases. Innovative robots are described, such as a slider robot without knees, robots with rotating feet (3 and 4 degrees of freedom), and the hybrid robot Leo, which can both walk on surfaces and fly. In particular, the paper details the robots' propulsion systems (electric, hydraulic), the structure of the lower limb (serial, parallel, and mixed mechanisms), the types and structures of control and sensor systems, and the energy efficiency of the robots. Terrain roughness recognition systems using different sensor systems based on light detection and ranging or multiple cameras are introduced. A comparison of the performance, control and sensor systems, drive systems, and achievements of known human-like and birdlike robots is provided. Thirdly, for the first time, the review comments on the future of bipedal robots in relation to the concepts of conventional (natural bipedal) and synthetic unconventional gait. We critically assess and compare prospective directions for further research that involve the development of navigation systems, artificial intelligence, collaboration with humans, and areas for the development of bipedal robot applications in everyday life, therapy, and industry.
5. Peng J, Yuan Y. Moving Object Grasping Method of Mechanical Arm Based on Deep Deterministic Policy Gradient and Hindsight Experience Replay. Journal of Advanced Computational Intelligence and Intelligent Informatics 2022. [DOI: 10.20965/jaciii.2022.p0051]
Abstract
The mechanical arm is an important component in many types of robots; however, in certain production lines, the conventional grasping strategy cannot satisfy the demands of modern production because of interference factors such as vibration, noise, and light pollution. This paper proposes a new grasping method for manipulators in automated stamping production lines. Considering the factors that affect grasping in the production environment, the deep deterministic policy gradient (DDPG) method is selected as the basic reinforcement-learning algorithm and is used to grasp moving objects on the line. Because the conventional DDPG algorithm has a low success rate, hindsight experience replay (HER) is used to improve the sample utilization efficiency of the agent and learn more effective tracking strategies. Simulation results show an 82% mean success rate for the optimized DDPG-HER algorithm, which is 31% higher than that of the conventional DDPG algorithm. This method provides ideas for the research and design of sorting systems used in automated stamping production lines.
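To make the HER mechanism referenced above concrete, the sketch below relabels transitions from a failed episode with goals that were actually achieved, so the replay buffer still receives informative rewards; the transition fields, sparse reward rule, and threshold are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of hindsight experience replay (HER): relabel stored
# transitions with achieved goals so failed grasping episodes still
# produce useful reward signal for the DDPG replay buffer.
import random

def her_relabel(episode, k=4, threshold=0.05):
    """episode: list of dicts with keys 'obs', 'action', 'achieved', 'goal'."""
    relabeled = []
    for t, step in enumerate(episode):
        future = episode[t:]                              # candidate future states
        for _ in range(k):
            new_goal = random.choice(future)["achieved"]
            reward = 0.0 if abs(step["achieved"] - new_goal) < threshold else -1.0
            relabeled.append({**step, "goal": new_goal, "reward": reward})
    return relabeled

# Usage: extend the DDPG replay buffer with the relabeled transitions.
episode = [{"obs": 0.0, "action": 0.1, "achieved": 0.1 * i, "goal": 1.0}
           for i in range(5)]
buffer = her_relabel(episode)
print(len(buffer), buffer[0]["reward"])
```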
6.
7. A "Global-Local" Visual Servo System for Picking Manipulators. Sensors (Basel) 2020;20:3366. [PMID: 32545849; PMCID: PMC7348848; DOI: 10.3390/s20123366]
Abstract
During automated crop picking, the two hand–eye coordination configurations, "eye to hand" and "eye in hand", have their respective advantages and disadvantages, and it is challenging to achieve both high operational accuracy and high speed with a single manipulator. In response to this problem, this study constructs a "global–local" visual servo picking system based on a prototype picking robot, in which binocular vision provides a global field of view and a monocular visual servo carries out the picking operation. Using tomato picking as an example, experiments were conducted to measure the accuracy of fruit maturity judgment and of fruit ranging, and fruit-bearing scenarios were simulated over the working area to examine the system's success rate in continuous fruit picking. The results show that the global–local visual servo picking system achieved an average fruit maturity judgment accuracy of 92.8%, an average fruit ranging error of 0.485 cm, an average time for continuous fruit picking of 20.06 s, and an average picking success rate of 92.45%.
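As a rough illustration of the "local" monocular servo stage described above, the sketch below turns the pixel offset of a detected fruit into a proportional velocity command that recentres the fruit in the image; the gain, resolution, and sign convention are assumptions, not the paper's calibrated control law.

```python
# Minimal sketch of an image-based centring step for the monocular servo:
# convert the fruit's pixel offset from the image centre into a proportional
# camera-frame velocity command. Gain and sign convention are illustrative.
import numpy as np

IMG_W, IMG_H = 640, 480
K_GAIN = 0.002                      # pixels -> m/s, assumed gain

def servo_velocity(fruit_center_px):
    """Return a (vx, vy) camera-frame velocity command from the pixel error."""
    err = np.array([fruit_center_px[0] - IMG_W / 2,
                    fruit_center_px[1] - IMG_H / 2])
    # Proportional correction; the sign depends on the camera mounting.
    return -K_GAIN * err

print(servo_velocity((400, 300)))
```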
8. A Self-triggered Position Based Visual Servoing Model Predictive Control Scheme for Underwater Robotic Vehicles. Machines 2020. [DOI: 10.3390/machines8020033]
Abstract
An efficient position-based visual servoing control approach for Autonomous Underwater Vehicles (AUVs) employing Non-linear Model Predictive Control (N-MPC) is designed and presented in this work. In the proposed scheme, a mechanism is incorporated within the vision-based controller that determines when the Visual Tracking Algorithm (VTA) should be activated and new control inputs should be calculated. More specifically, the control loop does not close periodically; between two consecutive activations (triggering instants), the control inputs calculated by the N-MPC at the previous triggering instant are applied to the underwater robot in open loop. This leads to a significantly smaller number of requested measurements from the visual tracking algorithm, as well as less frequent computations of the non-linear predictive control law, reducing processing time and energy consumption and thereby increasing the accuracy and autonomy of the vehicle. The latter is of paramount importance for persistent underwater inspection tasks. Moreover, the Field of View (FoV) constraints, control input saturation, the kinematic limitations due to the underactuated degree of freedom in the sway direction, and the effects of model uncertainties and external disturbances are considered during the control design. In addition, the stability and convergence of the closed-loop system are guaranteed analytically. Finally, the efficiency and performance of the proposed vision-based control framework are demonstrated through a comparative real-time experimental study using a small underwater vehicle.
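To make the self-triggered mechanism concrete, the toy loop below applies the stored control sequence in open loop and only re-runs the (expensive) vision-plus-MPC step when the plan is exhausted or a predicted error bound is exceeded; the one-dimensional dynamics, bound, and MPC stub are placeholders, not the paper's N-MPC formulation.

```python
# Minimal sketch of self-triggered MPC: hold the last plan in open loop and
# only re-plan (i.e., call the vision tracker and solve the MPC again) when
# the plan runs out or the predicted error exceeds a bound. All numbers and
# the toy 1-D dynamics are illustrative assumptions.
def fake_mpc(state, goal):
    # Placeholder for the N-MPC solve: a short sequence of control inputs.
    return [0.1 * (goal - state)] * 5

def self_triggered_loop(state, goal, steps=20, err_bound=0.2):
    plan, k = fake_mpc(state, goal), 0
    for _ in range(steps):
        u = plan[min(k, len(plan) - 1)]         # apply stored input, open loop
        state = state + u                       # toy 1-D dynamics
        k += 1
        if k >= len(plan) or abs(goal - state) > err_bound:
            plan, k = fake_mpc(state, goal), 0  # trigger: new measurement + MPC
    return state

print(self_triggered_loop(0.0, 1.0))
```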
9. Wang H, Huang Q, Shi Q, Yue T, Chen S, Nakajima M, Takeuchi M, Fukuda T. Automated Assembly of Vascular-Like Microtube With Repetitive Single-Step Contact Manipulation. IEEE Trans Biomed Eng 2016;62:2620-2628. [PMID: 26513766; DOI: 10.1109/tbme.2015.2437952]
Abstract
Fabricated vessel-mimetic microtubes are essential for delivering sufficient nutrients to engineered composite tissues. In this paper, vascular-like microtubes are engineered by the automated assembly of donut-shaped micromodules that embed fibroblast cells. A microrobotic system is set up with dual manipulators of 30-nm positioning resolution under an optical microscope. The system assembles the micromodules by repeated single-step pick-up motions. This process is specifically designed to avoid human interference and ensure high reproducibility for automation. We optimized the single-step motion by calibrating the key parameters (the micromodule dimensions) in a force analysis; the optimal motion achieved a 98% pick-up success rate. The automated repetitive single-step assembly is achieved by an algorithm that acquires the 3-D location and tracks the micromanipulator without being affected by low contrast. The accuracy of the acquired 3-D location was experimentally determined as approximately 1 pixel (2 μm under 4× magnification), and the tracking proved effective under different observation conditions. Finally, we automatically assembled microtubes at 6 micromodules/min, sufficiently fast for fabricating macroscopic vessel-mimetic substitutes in biological applications.
10. Pérez L, Rodríguez Í, Rodríguez N, Usamentiaga R, García DF. Robot Guidance Using Machine Vision Techniques in Industrial Environments: A Comparative Review. Sensors (Basel) 2016;16:335. [PMID: 26959030; PMCID: PMC4813910; DOI: 10.3390/s16030335]
Abstract
In the factory of the future, most operations will be performed by autonomous robots that need visual feedback to move around the workspace while avoiding obstacles, to work collaboratively with humans, to identify and locate working parts, and to complement the information provided by other sensors and thereby improve positioning accuracy. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight, and laser triangulation, among others, are widely used for inspection and quality control processes in industry and are now being used for robot guidance. Choosing which type of vision system to use depends strongly on the parts that need to be located or measured. Thus, this paper presents a comparative review of different machine vision techniques for robot guidance. The work analyzes accuracy, range, sensor weight, safety, processing time, and environmental influences. Researchers and developers can take it as background information for their future work.
Affiliation(s)
- Luis Pérez: Fundación PRODINTEC, Avda. Jardín Botánico 1345, 33203 Gijón (Asturias), Spain
- Íñigo Rodríguez: Fundación PRODINTEC, Avda. Jardín Botánico 1345, 33203 Gijón (Asturias), Spain
- Nuria Rodríguez: Fundación PRODINTEC, Avda. Jardín Botánico 1345, 33203 Gijón (Asturias), Spain
- Rubén Usamentiaga: Department of Computer Science and Engineering, Universidad de Oviedo, Campus de Viesques, 33203 Gijón (Asturias), Spain
- Daniel F García: Department of Computer Science and Engineering, Universidad de Oviedo, Campus de Viesques, 33203 Gijón (Asturias), Spain
11.
Abstract
This paper describes an integrated quasi-autonomous four-limbed robot, named Capuchin, which is equipped with appropriate sensing, planning and control capabilities to “free-climb” vertical terrain. Unlike aid climbing that takes advantage of special tools and/or engineered terrain features, free climbing only relies on friction at the contacts between the climber and the rigid terrain. While moving, Capuchin adjusts its body posture (hence, the position of its centre of mass) and exerts appropriate forces at the contacts in order to remain in equilibrium. Vision is used to achieve precise contacts and force sensing to control contact forces. The robot's planner is based on a pre-existing two-stage “stance-before-motion” approach. Its controller applies a novel “lazy” force control strategy that performs force adjustments only when these are needed. Experiments demonstrate that Capuchin can reliably climb vertical terrain with irregular features.
Affiliation(s)
- Ruixiang Zhang: Computer Science Department, Stanford University, Stanford, CA, USA
12. Minati L, Nigri A, Rosazza C, Bruzzone MG. Thoughts turned into high-level commands: Proof-of-concept study of a vision-guided robot arm driven by functional MRI (fMRI) signals. Med Eng Phys 2012;34:650-658. [PMID: 22405803; DOI: 10.1016/j.medengphy.2012.02.004]
Abstract
Previous studies have demonstrated the possibility of using functional MRI to control a robot arm through a brain-machine interface by directly coupling haemodynamic activity in the sensory-motor cortex to the position of two axes. Here, we extend this work by implementing interaction at a more abstract level, whereby imagined actions deliver structured commands to a robot arm guided by a machine vision system. Rather than extracting signals from a small number of pre-selected regions, the proposed system adaptively determines, at the individual level, how to map representative brain areas to the input nodes of a classifier network. In this initial study, a median action recognition accuracy of 90% was attained by five volunteers performing a game consisting of collecting randomly positioned coloured pawns and placing them into cups. The "pawn" and "cup" instructions were imparted through four mental imagery tasks, linked to robot arm actions by a state machine. With the current implementation in the MATLAB language, the median action recognition time was 24.3 s and the robot execution time was 17.7 s. We demonstrate the notion of combining haemodynamic brain-machine interfacing with computer vision to implement interaction at the level of high-level commands rather than individual movements, which may find application in future fMRI approaches relevant to brain-lesioned patients, and we provide source code supporting further work on larger command sets and real-time processing.
Affiliation(s)
- Ludovico Minati: Scientific Department, Fondazione IRCCS Istituto Neurologico Carlo Besta, Milan, Italy
13. Cruz-Ramírez SR, Arai T, Mae Y, Takubo T, Ohara K. Recognition and Removal of Interior Facilities by Vision-Based Robot System. Journal of Robotics and Mechatronics 2010. [DOI: 10.20965/jrm.2010.p0050]
Abstract
For future dismantling jobs in the renovation of office building interiors, we propose a robotic dismantling system that assists human workers with the heavy work. As an application of the robotic system, this paper presents the process of removing ceiling fixtures, such as lamp panels (LPs) and air conditioning vents (ACVs), through human-robot collaboration. In this collaboration, a robot arm provides assistance by holding and collecting the fixtures, and the human worker only removes screws and/or nuts. To lead the robot to a holding position, the human worker indicates a position on the fixture to the robot with brief and simple instructions. The robot estimates the pose of the fixture through 3D model-based object recognition with a hand-mounted stereo camera. The integration of multiple viewpoints with an active lighting system enhances the recognition performance against both natural lighting changes at the site and variability in the pose between the camera and the object to be recognized. As a verification experiment, the sequential removal of several different ceiling fixtures is presented. In this experiment, robust recognition is achieved with an average accuracy of 10 mm. The feasibility of the system is verified in terms of the completion time and the precision requirements of a practical environment.
14.
15. Chesi G, Hung YS. Image noise induced errors in camera positioning. IEEE Trans Pattern Anal Mach Intell 2007;29:1476-1480. [PMID: 17568150; DOI: 10.1109/tpami.2007.70723]
Abstract
The problem of evaluating worst-case camera positioning error induced by unknown-but-bounded (UBB) image noise for a given object-camera configuration is considered. Specifically, it is shown that upper bounds to the rotation and translation worst-case error for a certain image noise intensity can be obtained through convex optimizations. These upper bounds, contrary to lower bounds provided by standard optimization tools, allow one to design robust visual servo systems.
Affiliation(s)
- Graziano Chesi: Department of Electrical and Electronic Engineering, University of Hong Kong, Pokfulam Road, Hong Kong