1
Li Q, Kroemer O, Su Z, Veiga FF, Kaboli M, Ritter HJ. A Review of Tactile Information: Perception and Action Through Touch. IEEE T ROBOT 2020. [DOI: 10.1109/tro.2020.3003230]
2
Abstract
Object grasping and manipulation in robotics have been largely approached using visual feedback. Human studies, on the other hand, have demonstrated the importance of tactile and force feedback in guiding the interaction between the fingers and the object. Inspired by these observations, we propose an approach that consists of guiding a robot's actions mainly by tactile feedback, with remote sensing such as vision used only as a complement. Directly sensing the interaction forces between the object, the environment, and the robot's hand gives the robot information relevant to the task, which it can use to perform the task more reliably. This approach (which we call sensitive manipulation) requires important changes in the hardware and in the way the robot is programmed. At the hardware level, we exploit compliant actuators and novel sensors that allow the robot to safely interact with and sense the environment. We developed strategies to perform manipulation tasks that take advantage of these new sensing and actuation capabilities. In this paper, we demonstrate that, using these strategies, the humanoid robot Obrero can safely find, reach, and grab unknown objects that are neither held in place by a fixture nor placed in a specific orientation. The robot can also make insertions by “feeling” the hole, without specialized mechanisms such as a remote center of compliance (RCC).
Affiliation(s)
- Eduardo Torres-Jara
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, 32 Vassar St., Cambridge, Massachusetts 02139, USA
- Lorenzo Natale
- iCub Facility, Istituto Italiano di Tecnologia, Via Morego 30, Genova 16163, Italy
3
Balasubramanian K, Vaidya M, Southerland J, Badreldin I, Eleryan A, Takahashi K, Qian K, Slutzky MW, Fagg AH, Oweiss K, Hatsopoulos NG. Changes in cortical network connectivity with long-term brain-machine interface exposure after chronic amputation. Nat Commun 2017; 8:1796. [PMID: 29180616] [PMCID: PMC5703974] [DOI: 10.1038/s41467-017-01909-2]
Abstract
Studies on neural plasticity associated with brain-machine interface (BMI) exposure have primarily documented changes in single-neuron activity, and largely in intact subjects. Here, we demonstrate significant changes in ensemble-level functional connectivity among primary motor cortex (M1) neurons of chronically amputated monkeys learning to control a multiple-degree-of-freedom robot arm. A multi-electrode array was implanted in M1 contralateral or ipsilateral to the amputation in three animals. Two clusters of stably recorded neurons were arbitrarily assigned to control reach and grasp movements, respectively. With exposure, network density increased in a nearly monotonic fashion in the contralateral monkeys, whereas the ipsilateral monkey pruned the existing network before re-forming a denser connectivity. Excitatory connections were denser among neurons within a cluster, whereas inhibitory connections were denser among neurons across the two clusters. These results indicate that cortical network connectivity can be modified with BMI learning, even among neurons that have been chronically de-efferented and de-afferented by amputation.
Affiliation(s)
- Mukta Vaidya
- Committee on Computational Neuroscience, University of Chicago, Chicago, IL, 60637, USA
- Department of Neurology, Northwestern University, Chicago, IL, 60611, USA
- Joshua Southerland
- School of Computer Science, University of Oklahoma, Norman, OK, 73019, USA
- Islam Badreldin
- Department of Electrical & Computer Engineering, University of Florida, Gainesville, FL, 32611, USA
- Ahmed Eleryan
- Department of Electrical & Computer Engineering, Michigan State University, East Lansing, MI, 48824, USA
- Kazutaka Takahashi
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, 60637, USA
- Kai Qian
- Department of Biomedical Engineering, Illinois Institute of Technology, Chicago, IL, 60616, USA
- Marc W Slutzky
- Departments of Neurology, Physiology, and Physical Medicine and Rehabilitation, Northwestern University Feinberg School of Medicine, Chicago, IL, 60611, USA
- Andrew H Fagg
- School of Computer Science, University of Oklahoma, Norman, OK, 73019, USA
- Karim Oweiss
- Department of Electrical & Computer Engineering, University of Florida, Gainesville, FL, 32611, USA
- Nicholas G Hatsopoulos
- Department of Organismal Biology and Anatomy, University of Chicago, Chicago, IL, 60637, USA
- Committee on Computational Neuroscience, University of Chicago, Chicago, IL, 60637, USA
4
Abstract
Recently, a number of grasp detection methods have been proposed that can be used to localize robotic grasp configurations directly from sensor data without estimating object pose. The underlying idea is to treat grasp perception analogously to object detection in computer vision. These methods take as input a noisy and partially occluded RGBD image or point cloud and produce as output pose estimates of viable grasps, without assuming a known CAD model of the object. Although these methods generalize grasp knowledge to new objects well, they have not yet been demonstrated to be reliable enough for wide use. Many grasp detection methods achieve grasp success rates (grasp successes as a fraction of the total number of grasp attempts) between 75% and 95% for novel objects presented in isolation or in light clutter. Not only are these success rates too low for practical grasping applications, but the light clutter scenarios that are evaluated often do not reflect the realities of real-world grasping. This paper proposes a number of innovations that together result in an improvement in grasp detection performance. The specific improvement in performance due to each of our contributions is quantitatively measured either in simulation or on robotic hardware. Ultimately, we report a series of robotic experiments that average a 93% end-to-end grasp success rate for novel objects presented in dense clutter.
5
Fernández C, Vicente MA, Ñeco RP, Puerto R. Robot Grasp Learning by Demonstration without Predefined Rules. INT J ADV ROBOT SYST 2017. [DOI: 10.5772/50908]
Abstract
A learning-based approach to autonomous robot grasping is presented. Pattern recognition techniques are used to measure the similarity between a set of previously stored example grasps and all the possible candidate grasps for a new object. Two sets of features are defined in order to characterize grasps: point attributes describe the surroundings of a contact point; point-set attributes describe the relationship among the set of n contact points (assuming an n-fingered robot gripper is used). In the experiments performed, the nearest-neighbour classifier outperforms other approaches, such as multilayer perceptrons, radial basis function networks, and decision trees, in terms of classification accuracy, while the computational load remains acceptable for a real-time application (a grasp is fully synthesized in 0.2 seconds). The results obtained on a synthetic database show that the proposed system is able to imitate the grasping behaviour of the user (e.g. the system learns to grasp a mug by its handle). All the code has been made available for testing purposes.
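The nearest-neighbour scheme described in this abstract can be sketched in a few lines. The feature vectors and labels below are hypothetical stand-ins; a real system would use the paper's point and point-set attributes extracted from sensor data.

```python
import math

# Hypothetical stored example grasps: each is a flattened feature vector
# (point attributes of the contact neighbourhoods plus point-set attributes
# of the contact configuration) and a label taken from the demonstration.
EXAMPLES = [
    ([0.90, 0.10, 0.05], "handle"),
    ([0.20, 0.80, 0.30], "rim"),
    ([0.85, 0.15, 0.07], "handle"),
]

def nearest_neighbour(candidate):
    """Label a candidate grasp with its closest stored example in feature space."""
    return min(EXAMPLES, key=lambda ex: math.dist(ex[0], candidate))[1]

print(nearest_neighbour([0.88, 0.12, 0.06]))  # closest to the 'handle' examples
```

The appeal of the nearest-neighbour classifier here is that it needs no training phase: adding a new demonstration is just appending one labelled feature vector.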
6
Sui Z, Xiang L, Jenkins OC, Desingh K. Goal-directed robot manipulation through axiomatic scene estimation. Int J Rob Res 2017. [DOI: 10.1177/0278364916683444]
Abstract
Performing robust goal-directed manipulation tasks remains a crucial challenge for autonomous robots. In an ideal case, shared autonomous control of manipulators would allow human users to specify their intent as a goal state and have the robot reason over the actions and motions needed to achieve it. However, realizing this remains elusive due to the problem of perceiving the robot's environment. We address the problem of axiomatic scene estimation for robot manipulation in cluttered scenes: the estimation of a tree-structured scene graph describing the configuration of objects observed by the robot's sensors. We propose generative approaches to inferring the robot's environment as a scene graph: the axiomatic particle filter, and axiomatic scene estimation by a Markov chain Monte Carlo based sampler. The results of axiomatic scene estimation (AxScEs) are axioms amenable to goal-directed manipulation through symbolic inference for task planning and collision-free motion planning and execution. We demonstrate these results for goal-directed manipulation of multi-object scenes by a PR2 robot.
Affiliation(s)
- Zhiqiang Sui
- Department of Electrical Engineering and Computer Science, University of Michigan, USA
- Lingzhu Xiang
- Institute for Aerospace Studies, University of Toronto, Canada
- Odest C Jenkins
- Department of Electrical Engineering and Computer Science, University of Michigan, USA
- Karthik Desingh
- Department of Electrical Engineering and Computer Science, University of Michigan, USA
7
Hang K, Li M, Stork JA, Bekiroglu Y, Pokorny FT, Billard A, Kragic D. Hierarchical Fingertip Space: A Unified Framework for Grasp Planning and In-Hand Grasp Adaptation. IEEE T ROBOT 2016. [DOI: 10.1109/tro.2016.2588879]
8
Wu FY, Asada HH. Implicit and Intuitive Grasp Posture Control for Wearable Robotic Fingers: A Data-Driven Method Using Partial Least Squares. IEEE T ROBOT 2016. [DOI: 10.1109/tro.2015.2506731]
9
Koval MC, Pollard NS, Srinivasa SS. Pre- and post-contact policy decomposition for planar contact manipulation under uncertainty. Int J Rob Res 2015. [DOI: 10.1177/0278364915594474]
Abstract
We consider the problem of using real-time feedback from contact sensors to create closed-loop pushing actions. To do so, we formulate the problem as a partially observable Markov decision process (POMDP) with a transition model based on a physics simulator and a reward function that drives the robot towards a successful grasp. We demonstrate that it is intractable to solve the full POMDP with traditional techniques and introduce a novel decomposition of the policy into pre- and post-contact stages to reduce the computational complexity. Our method uses an offline point-based solver on a variable-resolution discretization of the state space to solve for a post-contact policy as a pre-computation step. Then, at runtime, we use an A* search to compute a pre-contact trajectory. We prove that the value of the resulting policy is within a bound of the value of the optimal policy and give intuition about when it performs well. Additionally, we show the policy produced by our algorithm achieves a successful grasp more quickly and with higher probability than a baseline QMDP policy on two different objects in simulation. Finally, we validate our simulation results on a real robot using commercially available tactile sensors.
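For context, the QMDP baseline this abstract compares against is simple to state: it assumes full observability after one step and picks the action maximizing the expected Q-value under the current belief. A minimal sketch follows; the states, actions, and values are hypothetical, not the paper's model.

```python
def qmdp_action(belief, Q, actions):
    """belief: dict mapping state -> probability; Q: dict mapping
    (state, action) -> fully observable value. QMDP ignores the value of
    information-gathering, which is one reason a contact-aware policy
    can outperform it."""
    return max(actions, key=lambda a: sum(p * Q[s, a] for s, p in belief.items()))

# Two equally likely object poses; 'sweep' works from either pose,
# 'grasp' only pays off in pose 1.
belief = {0: 0.5, 1: 0.5}
Q = {(0, "sweep"): 0.6, (1, "sweep"): 0.6, (0, "grasp"): 0.0, (1, "grasp"): 1.0}
print(qmdp_action(belief, Q, ["sweep", "grasp"]))  # -> 'sweep'
```

Under this belief QMDP hedges with the sweep, whereas once the belief collapses onto pose 1 it would commit to the grasp.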
Affiliation(s)
- Michael C. Koval
- The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Nancy S. Pollard
- The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
10
Sánchez-Durán JA, Hidalgo-López JA, Castellanos-Ramos J, Oballe-Peinado Ó, Vidal-Verdú F. Influence of Errors in Tactile Sensors on Some High Level Parameters Used for Manipulation with Robotic Hands. SENSORS 2015; 15:20409-35. [PMID: 26295393] [PMCID: PMC4570428] [DOI: 10.3390/s150820409]
Abstract
Tactile sensors suffer from many types of interference and error, such as crosstalk, non-linearity, drift, and hysteresis, so calibration should be carried out to compensate for these deviations. However, this procedure is difficult for sensors mounted on artificial hands for robots or prostheses, for instance, where the sensor usually bends to cover a curved surface. Moreover, the calibration procedure should be repeated often, because the correction parameters are easily altered by time and surrounding conditions. This intensive and complex calibration, however, may matter less than expected, or could at least be simplified: manipulation algorithms commonly use not the whole tactile image but only a few parameters derived from it, such as its moments. These parameters may be less affected by common errors and interference, or at least their variations may be on the order of those caused by accepted limitations, such as reduced spatial resolution. This paper presents experiments that support this idea, carried out both with a high-performance commercial sensor and with a low-cost, error-prone sensor built with a procedure common in robotics.
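To illustrate why such high-level parameters can be robust, here is how the moments of a tactile image reduce a full pressure map to a handful of numbers (numpy-based sketch; the sensor image is synthetic, not from the paper's hardware):

```python
import numpy as np

def tactile_moments(img):
    """Total force (zeroth moment), centre of pressure (first moments), and
    second central moments of a 2-D tactile pressure image."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    cx, cy = (img * xs).sum() / m00, (img * ys).sum() / m00
    mu20 = (img * (xs - cx) ** 2).sum() / m00
    mu02 = (img * (ys - cy) ** 2).sum() / m00
    mu11 = (img * (xs - cx) * (ys - cy)).sum() / m00
    return m00, (cx, cy), (mu20, mu02, mu11)

# Synthetic 3x3 tactile image with pressure concentrated at the centre taxel.
img = np.array([[0., 1., 0.], [1., 4., 1.], [0., 1., 0.]])
m00, centre, _ = tactile_moments(img)
print(m00, centre)  # 8.0 (1.0, 1.0)
```

A small crosstalk error on a single low-pressure taxel shifts the centre of pressure only slightly, because the moments average over the whole image, which is the intuition the experiments in the paper probe.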
Affiliation(s)
- José A Sánchez-Durán
- Departamento de Electrónica, ETSI Informática Universidad de Málaga, Andalucía Tech, Campus de Teatinos, Málaga 29071, Spain.
- Instituto de Investigación Biomédica de Málaga (IBIMA), Málaga 29010, Spain.
- José A Hidalgo-López
- Departamento de Electrónica, ETSI Informática Universidad de Málaga, Andalucía Tech, Campus de Teatinos, Málaga 29071, Spain.
- Instituto de Investigación Biomédica de Málaga (IBIMA), Málaga 29010, Spain.
- Julián Castellanos-Ramos
- Departamento de Electrónica, ETSI Informática Universidad de Málaga, Andalucía Tech, Campus de Teatinos, Málaga 29071, Spain.
- Instituto de Investigación Biomédica de Málaga (IBIMA), Málaga 29010, Spain.
- Óscar Oballe-Peinado
- Departamento de Electrónica, ETSI Informática Universidad de Málaga, Andalucía Tech, Campus de Teatinos, Málaga 29071, Spain.
- Instituto de Investigación Biomédica de Málaga (IBIMA), Málaga 29010, Spain.
- Fernando Vidal-Verdú
- Departamento de Electrónica, ETSI Informática Universidad de Málaga, Andalucía Tech, Campus de Teatinos, Málaga 29071, Spain.
- Instituto de Investigación Biomédica de Málaga (IBIMA), Málaga 29010, Spain.
11
Koval MC, Pollard NS, Srinivasa SS. Pose estimation for planar contact manipulation with manifold particle filters. Int J Rob Res 2015. [DOI: 10.1177/0278364915571007]
Abstract
We investigate the problem of using contact sensors to estimate the pose of an object during planar pushing by a fixed-shape hand. Contact sensors are unique because they inherently discriminate between “contact” and “no-contact” configurations. As a result, the set of object configurations that activates a sensor constitutes a lower-dimensional contact manifold in the configuration space of the object. This causes conventional state estimation methods, such as the particle filter, to perform poorly during periods of contact due to particle starvation. In this paper, we introduce the manifold particle filter as a principled way of solving the state estimation problem when the state moves between multiple manifolds of different dimensionality. The manifold particle filter avoids particle starvation during contact by adaptively sampling particles that reside on the contact manifold from the dual proposal distribution. We describe three techniques, one analytical and two sample-based, of sampling from the dual proposal distribution and compare their relative strengths and weaknesses. We present simulation results that show that all three techniques outperform the conventional particle filter in both speed and accuracy. In addition, we implement the manifold particle filter on a real robot and show that it successfully tracks the pose of a pushed object using commercially available tactile sensors.
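A toy one-dimensional version conveys the key idea of the manifold particle filter: when the binary sensor reports contact, particles are drawn directly on the lower-dimensional contact set and reweighted by the prior, instead of hoping that propagated particles happen to land on it. Everything below (the scalar state, noise levels, weighting) is a simplified illustration, not the paper's implementation:

```python
import math
import random

def mpf_step(particles, in_contact, n=200):
    """One filter step. The state is the object's signed distance to the
    fingertip; the contact manifold is the set {distance ~ 0}."""
    if in_contact:
        # Dual proposal: sample directly on/near the contact manifold, then
        # weight each sample by how well the prior (old particles) supports
        # it -- this is what avoids particle starvation at contact.
        proposals = [random.gauss(0.0, 0.005) for _ in range(n)]
        weights = [sum(math.exp(-200 * (p - q) ** 2) for q in particles)
                   for p in proposals]
    else:
        # Conventional propagate-and-weight step away from contact.
        proposals = [random.gauss(p, 0.01) for p in particles]
        weights = [1.0 if p > 0 else 1e-6 for p in proposals]
    total = sum(weights) or 1.0
    return random.choices(proposals, weights=[w / total for w in weights], k=n)

random.seed(0)
prior = [random.uniform(0.0, 0.2) for _ in range(200)]
posterior = mpf_step(prior, in_contact=True)
print(max(abs(p) for p in posterior))  # every particle now sits near the manifold
```

A conventional filter given the same contact reading would assign near-zero likelihood to almost every propagated particle, which is exactly the starvation the dual proposal sidesteps.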
Affiliation(s)
- Michael C. Koval
- The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Nancy S. Pollard
- The Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
12
13
Abstract
Considering undesired slippage between the manipulated object and the fingertips of a multi-robot system, this paper addresses adaptive control synthesis for object grasping and manipulation. Although many studies in the literature deal with grasp analysis and grasp synthesis, most assume no slippage between the fingertips and the object. Slippage can occur for many reasons, such as disturbances, parameter uncertainties, and the dynamics of the system. In this paper, the system dynamics is analyzed using a new representation of friction and slippage dynamics. An adaptive control law is then proposed for trajectory tracking and slippage control of the object, as well as compensation for parameter uncertainties of the system, such as mass properties and coefficients of friction. The stability of the proposed adaptive controller is studied analytically, and the performance of the system is studied numerically.
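The abstract does not give the control law itself, but gradient-type parameter adaptation of the kind used in such schemes has a standard shape. The sketch below uses hypothetical regressor and gain values and omits the plant model entirely:

```python
def adapt_parameters(theta, phi, error, gamma=0.5, dt=0.01):
    """One Euler step of a standard gradient adaptation law,
    theta_dot = -gamma * phi * error: uncertain parameters (e.g. masses,
    friction coefficients) are adjusted in proportion to the regressor phi
    and the current tracking error."""
    return [th - gamma * p * error * dt for th, p in zip(theta, phi)]

# Hypothetical: two uncertain parameters, constant regressor and error,
# integrated for one second of 10 ms steps.
theta = [0.0, 0.0]
for _ in range(100):
    theta = adapt_parameters(theta, phi=[1.0, 0.5], error=1.0)
print(theta)
```

In a full design the error would be fed back from the closed-loop dynamics and shrink as the estimates converge; here it is held constant purely to show the update's direction and scale.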
14
15
Abstract
In this paper, a kinematic model of a dual-arm/hand robotic system is derived, which allows the computation of the object position and orientation from the joint variables of each arm and each finger, as well as from a suitable set of contact variables. On the basis of this model, a motion planner is designed in which the kinematic redundancy of the system is exploited to satisfy secondary tasks aimed at ensuring grasp stability and manipulation dexterity without violating physical constraints. To this purpose, prioritized task sequencing with smooth transitions between tasks is adopted. A controller is then designed to execute the motion references provided by the planner and, at the same time, achieve a desired contact force exerted by each finger on the grasped object. To this end, parallel position/force control is considered. A simulation case study has been developed using the dynamic simulator GraspIt!, which has been suitably adapted and redistributed.
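The prioritized redundancy resolution underlying such a planner can be sketched with the classical two-task null-space formula (numpy sketch; the Jacobians and velocities below are illustrative, not taken from the paper):

```python
import numpy as np

def prioritized_joint_velocities(J1, dx1, J2, dx2):
    """Primary task J1 dq = dx1 is satisfied exactly; the secondary task
    (e.g. a grasp-stability or dexterity objective) acts only through the
    null space of J1, so it can never disturb the primary task."""
    J1p = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1p @ J1          # null-space projector of J1
    return J1p @ dx1 + np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ (J1p @ dx1))

# Illustrative 3-DOF system: the primary task fixes the velocities of
# joints 1-2; the secondary task still commands joint 3 via the null space.
J1 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
J2 = np.array([[0.0, 0.0, 1.0]])
dq = prioritized_joint_velocities(J1, np.array([1.0, 2.0]), J2, np.array([3.0]))
print(dq)  # [1. 2. 3.]
```

When the two tasks conflict (the secondary Jacobian has no component in the null space), the pseudoinverse simply returns the least-squares compromise for the secondary task while leaving the primary task intact.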
16
17
Abstract
In this paper we propose an intuitive and practical grasp quality measure for grasping 3D objects with a multi-fingered robot hand. The proposed measure takes the object's geometry into account through the concept of the object wrench space. Physically, a positive measure value is the magnitude of the minimum single disturbance that the grasp cannot resist, while a negative measure value is the magnitude of the minimum helping force needed to restore a non-force-closure grasp to force closure. We show that the measure value is invariant between similar grasps and between different torque origins. We verify the validity of the proposed measure in simulation, using computer models of a three-fingered robot hand and polygonal objects.
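A drastically simplified planar, force-only analogue of such a measure: the quality is the signed radius of the largest origin-centred disc inside the convex hull of the contact forces, positive exactly when the grasp has force closure. This pure-Python sketch assumes every force vector is a vertex of the hull, and it ignores torques and the object wrench space that the paper's measure accounts for:

```python
import math

def planar_epsilon_quality(forces):
    """Signed radius of the largest origin-centred disc inside the convex
    hull of 2-D contact force vectors; a negative value means the origin is
    outside the hull (no force closure)."""
    pts = sorted(forces, key=lambda p: math.atan2(p[1], p[0]))  # CCW by angle
    q = math.inf
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        cross = x1 * y2 - x2 * y1                # 2x signed area of the edge triangle
        q = min(q, cross / math.hypot(x2 - x1, y2 - y1))
    return q

# Four opposing unit forces: force closure, quality 1/sqrt(2).
print(planar_epsilon_quality([(1, 0), (0, 1), (-1, 0), (0, -1)]))
# Two forces in the same half-plane: negative, i.e. a helping force is needed.
print(planar_epsilon_quality([(1, 0), (0, 1)]))
```

The negative branch mirrors the abstract's interpretation: its magnitude indicates how much external help would be needed to restore force closure in this simplified setting.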
18
Villani L, Ficuciello F, Lippiello V, Palli G, Ruggiero F, Siciliano B. Grasping and Control of Multi-Fingered Hands. SPRINGER TRACTS IN ADVANCED ROBOTICS 2012. [DOI: 10.1007/978-3-642-29041-1_5]