1. Wang Y, Ye Z, Wen M, Liang H, Zhang X. TransVFS: A spatio-temporal local-global transformer for vision-based force sensing during ultrasound-guided prostate biopsy. Med Image Anal 2024;94:103130. [PMID: 38437787] [DOI: 10.1016/j.media.2024.103130]
Abstract
Robot-assisted prostate biopsy is a new technology for diagnosing prostate cancer, but its safety is limited by the inability of robots to accurately sense the tool-tissue interaction force during biopsy. Vision-based force sensing (VFS) offers a potential solution by inferring the interaction force from image sequences. However, existing mainstream VFS methods cannot achieve accurate force sensing because they rely on convolutional or recurrent neural networks to learn deformation from optical images, and some are inefficient, especially when recurrent convolutional operations are involved. This paper presents a Transformer-based VFS (TransVFS) method that leverages ultrasound volume sequences acquired during prostate biopsy. TransVFS uses a spatio-temporal local-global Transformer to capture local image details and global dependencies simultaneously, learning prostate deformations for force estimation. Distinctively, the method exploits both spatial and temporal attention mechanisms for image feature learning, thereby addressing the impact of low ultrasound image resolution and unclear prostate boundaries on force estimation accuracy. Two efficient local-global attention modules reduce the 4D spatio-temporal computation burden through a factorized spatio-temporal processing strategy, enabling fast force estimation. Experiments on a prostate phantom and on beagle dogs show that the method significantly outperforms existing VFS methods and other spatio-temporal Transformer models. On the transabdominal ultrasound dataset of dogs, TransVFS surpasses the most competitive baseline, ResNet3dGRU, with a mean absolute force estimation error of 70.4 ± 60.0 millinewtons (mN) versus 123.7 ± 95.6 mN.
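The factorized spatio-temporal processing mentioned in this abstract (attend within each frame, then across frames at each position) can be sketched generically. This is an illustrative pure-Python toy, not the authors' TransVFS implementation; the token layout and single-head attention are assumptions for the example.

```python
import math

def attention(q, k, v):
    """Scaled dot-product attention over lists of vectors (pure Python)."""
    d = len(q[0])
    out = []
    for qi in q:
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        m = max(scores)
        w = [math.exp(s - m) for s in scores]  # numerically stable softmax
        z = sum(w)
        w = [x / z for x in w]
        out.append([sum(wi * vj[c] for wi, vj in zip(w, v))
                    for c in range(len(v[0]))])
    return out

def factorized_spatiotemporal(tokens):
    """tokens[t][n] is the n-th spatial token of frame t.
    Spatial attention within each frame, then temporal attention across
    frames at each position: cost O(T*N^2 + N*T^2) instead of the
    O((T*N)^2) of joint spatio-temporal attention."""
    # Step 1: spatial self-attention inside every frame.
    spatial = [attention(frame, frame, frame) for frame in tokens]
    # Step 2: temporal self-attention along each token position.
    T, N = len(spatial), len(spatial[0])
    out = [[None] * N for _ in range(T)]
    for n in range(N):
        seq = [spatial[t][n] for t in range(T)]
        mixed = attention(seq, seq, seq)
        for t in range(T):
            out[t][n] = mixed[t]
    return out
```

The factorization is what keeps 4D (volume sequence) processing tractable: the quadratic cost is paid separately over space and over time, never over their product.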
Affiliation(s)
- Yibo Wang, Mingwei Wen, Xuming Zhang: Department of Biomedical Engineering, College of Life Science and Technology, Huazhong University of Science and Technology, No 1037, Luyou Road, Wuhan, China
- Zhichao Ye, Huageng Liang: Department of Urology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, No 13, Hangkong Road, Wuhan, China
2. Chua Z, Okamura AM. A Modular 3-Degrees-of-Freedom Force Sensor for Robot-Assisted Minimally Invasive Surgery Research. Sensors (Basel) 2023;23:5230. [PMID: 37299958] [DOI: 10.3390/s23115230]
Abstract
Effective force modulation during tissue manipulation is important for ensuring safe robot-assisted minimally invasive surgery (RMIS). Strict requirements for in vivo applications have led to prior sensor designs that trade off ease of manufacture and integration against force measurement accuracy along the tool axis. Due to this trade-off, no commercial off-the-shelf 3-degrees-of-freedom (3DoF) force sensors for RMIS are available to researchers, making it challenging to develop new approaches to indirect sensing and haptic feedback for bimanual telesurgical manipulation. We present a modular 3DoF force sensor that integrates easily with an existing RMIS tool. We achieve this by relaxing biocompatibility and sterilizability requirements and by using commercial load cells and common electromechanical fabrication techniques. The sensor has a range of ±5 N axially and ±3 N laterally, with errors below 0.15 N and maximum errors below 11% of the sensing range in all directions. During telemanipulation, a pair of jaw-mounted sensors achieved average errors below 0.15 N in all directions and an average grip force error of 0.156 N. The sensor is intended for bimanual haptic feedback and robotic force control in delicate tissue telemanipulation. As an open-source design, it can be adapted to suit other non-RMIS robotic applications.
Affiliation(s)
- Zonghe Chua: Department of Electrical, Computer and Systems Engineering, Case Western Reserve University, 10900 Euclid Avenue, Glennan Building 514A, Cleveland, OH 44106, USA
- Allison M Okamura: Department of Mechanical Engineering, Stanford University, Stanford, CA 94305, USA
3. Optical force estimation for interactions between tool and soft tissues. Sci Rep 2023;13:506. [PMID: 36627354] [PMCID: PMC9831996] [DOI: 10.1038/s41598-022-27036-7]
Abstract
Robotic assistance in minimally invasive surgery offers numerous advantages for both patient and surgeon. However, the lack of force feedback in robotic surgery is a major limitation, and accurately estimating tool-tissue interaction forces remains a challenge. Image-based force estimation offers a promising solution without the need to integrate sensors into surgical tools. In this indirect approach, interaction forces are derived from the observed deformation, with learning-based methods improving accuracy and real-time capability. However, the relationship between deformation and force is determined by the stiffness of the tissue. Consequently, both deformation and local tissue properties must be observed for an approach applicable to heterogeneous tissue. In this work, we use optical coherence tomography, which can combine the detection of tissue deformation with shear wave elastography in a single modality. We present a multi-input deep learning network for processing of local elasticity estimates and volumetric image data. Our results demonstrate that accounting for elastic properties is critical for accurate image-based force estimation across different tissue types and properties. Joint processing of local elasticity information yields the best performance throughout our phantom study. Furthermore, we test our approach on soft tissue samples that were not present during training and show that generalization to other tissue properties is possible.
4. Sánchez-Brizuela G, Santos-Criado FJ, Sanz-Gobernado D, de la Fuente-López E, Fraile JC, Pérez-Turiel J, Cisnal A. Gauze Detection and Segmentation in Minimally Invasive Surgery Video Using Convolutional Neural Networks. Sensors (Basel) 2022;22:5180. [PMID: 35890857] [PMCID: PMC9319965] [DOI: 10.3390/s22145180]
Abstract
Medical instrument detection in laparoscopic video has been used to increase the autonomy of surgical robots, evaluate skills, and index recordings. However, it has not been extended to surgical gauze. Gauze can provide valuable information for numerous tasks in the operating room, but the lack of an annotated dataset has hampered research. In this article, we present a segmentation dataset with 4003 hand-labelled frames from laparoscopic video. To demonstrate the dataset's potential, we analyzed several baselines: detection using YOLOv3, coarse segmentation, and segmentation with a U-Net. Our results show that YOLOv3 can be executed in real time but provides only modest recall. Coarse segmentation yields satisfactory results but lacks inference speed. Finally, the U-Net baseline achieves a good speed-quality compromise, running above 30 FPS while obtaining an IoU of 0.85. The accuracy and execution speed reached by the U-Net demonstrate that precise, real-time gauze segmentation can be achieved by training convolutional neural networks on the proposed dataset.
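The IoU figure reported here is the standard intersection-over-union metric for segmentation masks. A minimal sketch over flattened binary masks (the flat 0/1 encoding is an assumption for the example, not from the paper):

```python
def iou(pred, target):
    """Intersection over Union for two binary masks given as equal-length
    flat 0/1 sequences: |pred AND target| / |pred OR target|."""
    assert len(pred) == len(target)
    inter = sum(1 for p, t in zip(pred, target) if p and t)
    union = sum(1 for p, t in zip(pred, target) if p or t)
    # Two empty masks agree perfectly by convention.
    return inter / union if union else 1.0
```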
Affiliation(s)
- Guillermo Sánchez-Brizuela, Daniel Sanz-Gobernado, Eusebio de la Fuente-López, Juan-Carlos Fraile, Javier Pérez-Turiel, Ana Cisnal: Instituto de las Tecnologías Avanzadas de la Producción (ITAP), Universidad de Valladolid, Paseo del Cauce 59, 47011 Valladolid, Spain
- Francisco-Javier Santos-Criado: Escuela Técnica Superior de Ingenieros Industriales, Universidad Politécnica de Madrid, Calle de José Gutiérrez Abascal, 2, 28006 Madrid, Spain
5. Tukra S, Lidströmer N, Ashrafian H, Giannarou S. AI in Surgical Robotics. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_323]
6. Othman W, Vandyck KE, Abril C, Barajas-Gamboa JS, Pantoja JP, Kroh M, Qasaimeh MA. Stiffness Assessment and Lump Detection in Minimally Invasive Surgery Using In-House Developed Smart Laparoscopic Forceps. IEEE J Transl Eng Health Med 2022;10:2500410. [PMID: 35774413] [PMCID: PMC9216325] [DOI: 10.1109/jtehm.2022.3180937]
Abstract
Minimally invasive surgery (MIS) uses surgical instruments inserted through small incisions. Despite its potential advantages, the lack of tactile sensation and haptic feedback due to the indirect contact between the surgeon's hands and the tissue prevents surgeons from sensing the strength of applied forces or obtaining information about the biomechanical properties of the tissue under operation. There is therefore a crucial need for intelligent systems that provide artificial tactile sensation to MIS surgeons and trainees. This study evaluates the potential of real-time feedback of grasping forces and deformation angles to assist surgeons in assessing tissue stiffness. A prototype was developed from a standard laparoscopic grasper by integrating a force-sensitive resistor on one grasping jaw and a tunneling magneto-resistor on the handle's joint, measuring the grasping force and the jaws' opening angle, respectively. The sensor data are analyzed by a microcontroller, and the output is displayed on a small screen and saved to a log file. The system was evaluated in multiple grasp-release tests on elastomeric and biological tissue samples, in which the average force-to-angle-change ratio closely tracked the stiffness of the grasped samples. Another feature is the detection of hidden lumps by palpation, looking for sudden variations in the measured stiffness. In experiments, the real-time grasping feedback helped improve the surgeons' accuracy in sorting test models by stiffness. The tool demonstrates great potential for low-cost tactile sensing in MIS procedures, with room for future improvement. Significance: the proposed method can contribute to MIS by assessing stiffness, detecting hidden lumps, preventing excessive forces during operation, and reducing the learning curve for trainees.
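The force-to-angle-change ratio used here as a stiffness cue can be computed as a least-squares slope over a grasp cycle. A minimal sketch (the function name and data format are assumptions for illustration, not from the paper):

```python
def stiffness_index(grasp_forces, angle_changes):
    """Least-squares slope of grasping force versus jaw-angle change,
    used as a relative stiffness index: stiffer samples require more
    force per unit of jaw closure."""
    n = len(grasp_forces)
    mf = sum(grasp_forces) / n
    ma = sum(angle_changes) / n
    num = sum((a - ma) * (f - mf) for a, f in zip(angle_changes, grasp_forces))
    den = sum((a - ma) ** 2 for a in angle_changes)
    return num / den
```

A sudden jump in this index while sweeping the jaws along tissue is the kind of signal the lump-detection feature looks for.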
Affiliation(s)
- Wael Othman, Kojo E. Vandyck, Mohammad A. Qasaimeh: Engineering Division, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Carlos Abril, Juan S. Barajas-Gamboa, Juan P. Pantoja: Department of General Surgery, Cleveland Clinic Abu Dhabi, Digestive Disease Institute, Abu Dhabi, United Arab Emirates
- Matthew Kroh: Department of General Surgery, Cleveland Clinic Ohio, Digestive Disease and Surgery Institute, Cleveland, OH, USA
7. End-Effector Force and Joint Torque Estimation of a 7-DoF Robotic Manipulator Using Deep Learning. Electronics 2021;10:2963. [DOI: 10.3390/electronics10232963]
Abstract
When a mobile robotic manipulator interacts with other robots, people, or the environment in general, the end-effector forces need to be measured to assess whether a task has been completed successfully. Traditional force and torque estimation methods are usually based on observers, which require knowledge of the robot dynamics. In contrast, our approach involves two methods based on deep neural networks: robot end-effector force estimation and joint torque estimation. These methods require no knowledge of robot dynamics and are computationally efficient, but they require a force sensor under the robot base. Several architectures were considered for the tasks, and the best performers were identified. First, the data for training the networks were obtained in simulation. The trained networks showed reasonably good performance, especially with the LSTM architecture (root mean squared error (RMSE) of 0.1533 N for end-effector force estimation and 0.5115 Nm for joint torque estimation). Afterward, data were collected on a real Franka Emika Panda robot and used to train the same networks for joint torque estimation. The results are slightly worse than in simulation (0.5115 Nm in simulation vs. 0.6189 Nm on the real robot, by the RMSE metric) but still reasonably good, demonstrating the validity of the proposed approach.
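Sequence models such as the LSTM above are typically trained on fixed-length windows of the robot's state history, each paired with the target force or torque at the window's final time step. A minimal windowing sketch (the framing details are an assumption for illustration, not taken from the paper):

```python
def make_windows(signal, targets, length):
    """Build (window, target) training pairs for a sequence model:
    each window holds `length` consecutive samples of the input signal
    and is paired with the target at the window's last time step."""
    pairs = []
    for end in range(length, len(signal) + 1):
        pairs.append((signal[end - length:end], targets[end - 1]))
    return pairs
```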
8. Hadi Hosseinabadi AH, Salcudean SE. Force sensing in robot-assisted keyhole endoscopy: A systematic survey. Int J Rob Res 2021. [DOI: 10.1177/02783649211052067]
Abstract
Instrument–tissue interaction forces in minimally invasive surgery (MIS) provide valuable information that can be used to provide haptic perception, monitor tissue trauma, develop training guidelines, and evaluate the skill level of novice and expert surgeons. Force and tactile sensing is lost in many robot-assisted surgery (RAS) systems. Therefore, many researchers have focused on recovering this information through sensing systems and estimation algorithms. This article provides a comprehensive systematic review of the current force sensing research aimed at RAS and, more generally, keyhole endoscopy, in which instruments enter the body through small incisions. Articles published between January 2011 and May 2020 are considered, following the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines. The literature search resulted in 110 papers on different force estimation algorithms and sensing technologies, sensor design specifications, and fabrication techniques.
Affiliation(s)
- Amir Hossein Hadi Hosseinabadi, Septimiu E. Salcudean: Robotics and Controls Laboratory (RCL), Electrical and Computer Engineering Department, University of British Columbia, Vancouver, British Columbia, Canada
9. Lee KW, Ko DK, Lim SC. Toward Vision-Based High Sampling Interaction Force Estimation With Master Position and Orientation for Teleoperation. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3094848]
10. Tariverdi A, Venkiteswaran VK, Richter M, Elle OJ, Tørresen J, Mathiassen K, Misra S, Martinsen ØG. A Recurrent Neural-Network-Based Real-Time Dynamic Model for Soft Continuum Manipulators. Front Robot AI 2021;8:631303. [PMID: 33869294] [PMCID: PMC8044932] [DOI: 10.3389/frobt.2021.631303]
Abstract
This paper introduces and validates a real-time dynamic predictive model for soft continuum manipulators based on a neural network approach. The model provides a real-time prediction framework combining neural-network-based strategies with continuum mechanics principles. A time-space integration scheme discretizes the continuous dynamics and decouples the dynamic equations for translation and rotation at each node of a soft continuum manipulator. The resulting architecture is used to develop distributed prediction algorithms based on recurrent neural networks. The proposed RNN-based parallel predictive scheme does not rely on computationally intensive algorithms and is therefore suitable for real-time applications. Simulations illustrate the performance of the approach on soft continuum elastica, and the approach is also validated experimentally on a magnetically actuated soft continuum manipulator. The results demonstrate that the model can outperform classical modeling approaches such as the Cosserat rod model while also showing potential for practical use.
Affiliation(s)
- Michiel Richter: Department of Biomechanical Engineering, University of Twente, Enschede, Netherlands
- Ole J Elle: The Intervention Centre, Oslo University Hospital, Oslo, Norway; Department of Informatics, University of Oslo, Oslo, Norway
- Jim Tørresen: Department of Informatics, University of Oslo, Oslo, Norway
- Kim Mathiassen: Department of Technology Systems, University of Oslo, Oslo, Norway
- Sarthak Misra: Department of Biomechanical Engineering, University of Twente, Enschede, Netherlands; Department of Biomedical Engineering, University of Groningen and University Medical Centre Groningen, Groningen, Netherlands
- Ørjan G Martinsen: Department of Physics, University of Oslo, Oslo, Norway; Department of Clinical and Biomedical Engineering, Oslo University Hospital, Oslo, Norway
11. Roveda L, Piga D. Sensorless environment stiffness and interaction force estimation for impedance control tuning in robotized interaction tasks. Auton Robots 2021. [DOI: 10.1007/s10514-021-09970-z]
Abstract
Industrial robots are increasingly used to perform tasks requiring interaction with the surrounding environment (e.g., assembly tasks). Such environments are usually (partially) unknown to the robot, requiring the implemented controllers to suitably react to the established interaction. Standard controllers require force/torque measurements to close the loop, but most industrial manipulators do not have embedded force/torque sensors, and their integration entails additional costs and implementation effort. To extend the use of compliant controllers to sensorless interaction control, a model-based methodology is presented in this paper. Relying on sensorless Cartesian impedance control, two Extended Kalman Filters (EKFs) are proposed: one for interaction force estimation and one for environment stiffness estimation. Exploiting these estimates, a control architecture is proposed that implements a sensorless force loop (using the estimated force) with adaptive Cartesian impedance control and coupling dynamics compensation (using the estimated environment stiffness). The approach has been validated in both simulations and experiments on a Franka EMIKA Panda robot. A probing task involving materials with different (unknown) stiffness properties demonstrates the capabilities of the developed EKFs, which converge with limited errors, and of the control tuning, which preserves stability. Additionally, a polishing-like task and an assembly task demonstrate the performance of the proposed methodology.
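The idea of estimating an unmeasured interaction force with a Kalman filter can be illustrated in a deliberately simplified 1-D linear setting, where the external force is modeled as a random-walk state and only velocity is measured (a plain linear KF then suffices; the paper's formulation over full Cartesian impedance dynamics is more involved and requires an EKF). All parameters and names below are assumptions for this toy:

```python
def estimate_contact_force(meas_v, u, dt, m, q=1e-2, r=1e-4):
    """Linear Kalman filter with state x = [velocity, external force]
    for a 1-D point mass m driven by known input u plus an unknown
    contact force. The force is modeled as a random walk; only velocity
    is measured. Returns the force estimate after the last sample."""
    x = [0.0, 0.0]                      # state estimate [v, f]
    P = [[1.0, 0.0], [0.0, 1.0]]        # estimate covariance
    a = dt / m                          # coupling of force into velocity
    for z, uk in zip(meas_v, u):
        # Predict: v <- v + dt*(u + f)/m, f unchanged (A = [[1, a], [0, 1]]).
        x = [x[0] + a * (uk + x[1]), x[1]]
        p00 = P[0][0] + a * (P[1][0] + P[0][1]) + a * a * P[1][1]
        p01 = P[0][1] + a * P[1][1]
        p10 = P[1][0] + a * P[1][1]
        P = [[p00 + q, p01], [p10, P[1][1] + q]]
        # Update with the velocity measurement z (H = [1, 0]).
        s = P[0][0] + r
        k0, k1 = P[0][0] / s, P[1][0] / s
        innov = z - x[0]
        x = [x[0] + k0 * innov, x[1] + k1 * innov]
        P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
             [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
    return x[1]
```

Because force only enters the dynamics through its effect on velocity, the filter recovers it indirectly from the velocity innovation, which is exactly the "virtual sensor" idea.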
12. Tukra S, Lidströmer N, Ashrafian H, Giannarou S. AI in Surgical Robotics. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_323-1]
13. Vision-Based Suture Tensile Force Estimation in Robotic Surgery. Sensors (Basel) 2020;21:110. [PMID: 33375388] [PMCID: PMC7796030] [DOI: 10.3390/s21010110]
Abstract
Compared to laparoscopy, robot-assisted minimally invasive surgery lacks force feedback, which is important for preventing suture breakage. To overcome this problem, surgeons infer the suture force from proprioception and the 2D image, drawing on their training experience. Based on this idea, a deep-learning method is proposed that uses a single image and the robot position to estimate the tensile force on the suture without a force sensor. A neural network combining a modified Inception-ResNet-V2 with Long Short-Term Memory (LSTM) networks estimates the suture pulling force. The feasibility of the proposed network is verified on a generated database recording interactions with two different artificial skins in two different situations (in vivo and in vitro) at 13 image viewing angles, collected by varying the tool positions of a master-slave robotic system. In the evaluation, the proposed models successfully estimated the tensile force at 10 viewing angles unseen during training.
14. Roveda L, Piga D. Robust state dependent Riccati equation variable impedance control for robotic force-tracking tasks. Int J Intell Robot Appl 2020. [DOI: 10.1007/s41315-020-00153-0]
Abstract
Industrial robots are increasingly used in highly flexible interaction tasks, where intrinsic variability makes it difficult to pre-program the manipulator for every scenario. In such applications, the interaction environment is commonly (partially) unknown to the robot, requiring the implemented controllers to ensure the stability of the interaction. While standard controllers are sensor-based, there is a growing need to make sensorless robots (most commercial robots are not equipped with force/torque sensors) able to sense the environment and react properly to the established interaction. This paper proposes a new methodology for sensorless force control of manipulators. On the basis of sensorless Cartesian impedance control, an Extended Kalman Filter (EKF) is designed to estimate the interaction force exchanged between the robot and the environment. This estimate is then used to close a robust, high-performance force loop, designed by exploiting variable impedance control and a State Dependent Riccati Equation (SDRE) force controller. The approach has been validated in simulations with a Franka EMIKA Panda robot as the test platform. A probing task involving materials with different stiffness properties demonstrates the capabilities of the developed EKF, which converges with limited errors, and of the controller, which preserves stability and avoids overshoots. The proposed controller outperforms an LQR controller in comparison.
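A crude 1-D illustration of variable impedance for force tracking: soften the stiffness when the estimated contact force overshoots the desired force. This hand-tuned linear schedule is only a stand-in for the paper's SDRE-optimized law; every gain and name below is an assumption for the sketch:

```python
def impedance_command(x, v, x_des, f_des, f_est, k_base, d, alpha, k_min, k_max):
    """One step of a simplified 1-D variable impedance law: the command
    is a spring-damper around x_des, and stiffness is reduced in
    proportion to the force-tracking error so contact softens when the
    estimated force exceeds the desired force."""
    k = k_base - alpha * (f_est - f_des)
    k = min(max(k, k_min), k_max)  # clamp stiffness to a safe range
    u = k * (x_des - x) - d * v    # spring-damper command
    return u, k
```

In the paper this stiffness adaptation is not a fixed linear schedule but is computed by the SDRE controller from the estimated interaction force; the clamping step stands in for the stability guarantees established there.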
15. 6D Virtual Sensor for Wrench Estimation in Robotized Interaction Tasks Exploiting Extended Kalman Filter. Machines 2020;8:67. [DOI: 10.3390/machines8040067]
Abstract
Industrial robots are commonly used to perform interaction tasks (such as assembly or polishing) that require the robot to be in contact with the surrounding environment. Such environments are (partially) unknown to the robot controller, so interaction controllers capable of suitably reacting to the established contacts must be implemented. Although standard force controllers require force/torque measurements to close the loop, most industrial manipulators do not have force/torque sensors installed, and integrating external sensors entails additional costs and implementation effort that are not affordable in many contexts/applications. To extend the use of compliant controllers to sensorless interaction control, a model-based methodology is presented in this paper for the online estimation of the interaction wrench, implementing a 6D virtual sensor. Relying on sensorless Cartesian impedance control, an Extended Kalman Filter (EKF) is proposed for interaction wrench estimation. The approach has been validated in simulations across four different scenarios and experimentally on a Franka EMIKA Panda robot. A human-robot interaction scenario and an assembly task show the capabilities of the developed EKF, which performs the estimation with high bandwidth and converges with limited errors.
16. Edwards PJE, Colleoni E, Sridhar A, Kelly JD, Stoyanov D. Visual kinematic force estimation in robot-assisted surgery – application to knot tying. Comput Methods Biomech Biomed Eng Imaging Vis 2020. [DOI: 10.1080/21681163.2020.1833368]
Affiliation(s)
- Emanuele Colleoni, Danail Stoyanov: Department of Computer Science, Surgical Robot Vision Group, WEISS, UCL, London, UK
- Ashwin Sridhar, John D. Kelly: Urology Department, Westmoreland Street Hospital, UCLH, London, UK
17. Behrendt F, Gessert N, Schlaefer A. Generalization of spatio-temporal deep learning for vision-based force estimation. Curr Dir Biomed Eng 2020. [DOI: 10.1515/cdbme-2020-0024]
Abstract
Robot-assisted minimally invasive surgery is increasingly used in clinical practice. Vision-based force estimation offers potential for developing haptic feedback in surgical systems: forces can be estimated by capturing the deformation observed in 2D image sequences with deep learning models. Variations in tissue appearance and mechanical properties are likely to influence how well force estimation methods generalize. In this work, we study the generalization capabilities of different spatial and spatio-temporal deep learning methods across different tissue samples. We acquire several datasets using a clinical laparoscope and use both purely spatial and spatio-temporal deep learning models. The results show that generalization across different tissues is challenging. Nevertheless, we demonstrate that using spatio-temporal data instead of individual frames is valuable for force estimation. In particular, processing spatial and temporal data separately with a combination of a ResNet and a GRU architecture shows promising results, with a mean absolute error of 15.450 mN compared to 19.744 mN for a purely spatial CNN.
Affiliation(s)
- Finn Behrendt, Nils Gessert, Alexander Schlaefer: Institute of Medical Technology, Hamburg University of Technology, Hamburg, Germany
18. Neidhardt M, Gessert N, Gosau T, Kemmling J, Feldhaus S, Schumacher U, Schlaefer A. Force estimation from 4D OCT data in a human tumor xenograft mouse model. Curr Dir Biomed Eng 2020. [DOI: 10.1515/cdbme-2020-0022]
Abstract
Minimally invasive robotic surgery offers benefits such as reduced physical trauma, faster recovery, and less pain for the patient. In these procedures, visual and haptic feedback are crucial for the surgeon when operating surgical tools without line-of-sight through a robot. External force sensors are biased by friction at the tool shaft and therefore cannot estimate forces between the tool tip and tissue. As an alternative, vision-based force estimation has been proposed, in which interaction forces are learned directly from the deformation observed by an external imaging system. Recently, an approach based on optical coherence tomography and deep learning has shown promising results. However, most experiments have been performed on ex vivo tissue. In this work, we demonstrate that models trained on dead tissue do not perform well on in vivo data. We performed multiple experiments on a human tumor xenograft mouse model, on both in vivo, perfused tissue and dead tissue, and compared two deep learning models in different training scenarios. Training on perfused, in vivo data improved model performance for in vivo force estimation by 24%.
Affiliation(s)
- Maximilian Neidhardt
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, Hamburg, Germany
- Nils Gessert
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, Hamburg, Germany
- Tobias Gosau
- Department of Anatomy and Experimental Morphology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Julia Kemmling
- Department of Anatomy and Experimental Morphology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Susanne Feldhaus
- Department of Anatomy and Experimental Morphology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Udo Schumacher
- Department of Anatomy and Experimental Morphology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Alexander Schlaefer
- Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, Hamburg, Germany
19
Gessert N, Bengs M, Schlüter M, Schlaefer A. Deep learning with 4D spatio-temporal data representations for OCT-based force estimation. Med Image Anal 2020; 64:101730. [PMID: 32492583 DOI: 10.1016/j.media.2020.101730] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2019] [Revised: 05/20/2020] [Accepted: 05/20/2020] [Indexed: 10/24/2022]
Abstract
Estimating the forces acting between instruments and tissue is a challenging problem for robot-assisted minimally-invasive surgery. Recently, numerous vision-based methods have been proposed to replace electro-mechanical approaches. Moreover, optical coherence tomography (OCT) and deep learning have been used for estimating forces based on deformation observed in volumetric image data, demonstrating the advantage of deep learning with 3D volumetric data over 2D depth images for force estimation. In this work, we extend the problem of deep learning-based force estimation to 4D spatio-temporal data with streams of 3D OCT volumes. For this purpose, we design and evaluate several methods extending spatio-temporal deep learning to 4D, which is largely unexplored so far. Furthermore, we provide an in-depth analysis of multi-dimensional image data representations for force estimation, comparing our 4D approach to previous, lower-dimensional methods. Also, we analyze the effect of temporal information and study the prediction of short-term future force values, which could facilitate safety features. For our 4D force estimation architectures, we find that efficient decoupling of spatial and temporal processing is advantageous. We show that using 4D spatio-temporal data outperforms all previously used data representations, with a mean absolute error of 10.7 mN. We find that temporal information is valuable for force estimation and demonstrate the feasibility of force prediction.
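The decoupling of spatial and temporal processing that this abstract finds advantageous can be illustrated by encoding each 3D OCT volume with a small 3D CNN and feeding the resulting feature sequence to a recurrent layer. The following PyTorch sketch uses illustrative layer sizes and input dimensions that are not taken from the paper:

```python
import torch
import torch.nn as nn

class Factorized4DForceNet(nn.Module):
    """Encode each 3D volume spatially, then model the temporal stream."""
    def __init__(self, feat_dim=32, hidden=64):
        super().__init__()
        self.spatial = nn.Sequential(      # per-volume 3D encoder
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, feat_dim),
        )
        self.temporal = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # scalar force estimate

    def forward(self, vols):               # vols: (B, T, 1, D, H, W)
        b, t = vols.shape[:2]
        # Spatial stage: encode every volume in the stream independently.
        f = self.spatial(vols.flatten(0, 1)).view(b, t, -1)
        # Temporal stage: integrate the volume-feature sequence.
        h, _ = self.temporal(f)
        return self.head(h[:, -1])

net = Factorized4DForceNet()
stream = torch.randn(2, 4, 1, 16, 32, 32)  # 4 OCT volumes over time
print(net(stream).shape)  # torch.Size([2, 1])
```

Compared to a full 4D convolution, this factorization keeps the cost close to that of per-volume 3D processing while still exploiting the temporal stream.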
Affiliation(s)
- Nils Gessert
- Hamburg University of Technology, Institute of Medical Technology, Am Schwarzenberg-Campus 3, 21073 Hamburg, Germany
- Marcel Bengs
- Hamburg University of Technology, Institute of Medical Technology, Am Schwarzenberg-Campus 3, 21073 Hamburg, Germany
- Matthias Schlüter
- Hamburg University of Technology, Institute of Medical Technology, Am Schwarzenberg-Campus 3, 21073 Hamburg, Germany
- Alexander Schlaefer
- Hamburg University of Technology, Institute of Medical Technology, Am Schwarzenberg-Campus 3, 21073 Hamburg, Germany
20
Abdi E, Kulic D, Croft E. Haptics in Teleoperated Medical Interventions: Force Measurement, Haptic Interfaces and Their Influence on User's Performance. IEEE Trans Biomed Eng 2020; 67:3438-3451. [PMID: 32305890 DOI: 10.1109/tbme.2020.2987603] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
OBJECTIVES Haptics in teleoperated medical interventions enables the measurement and transfer of force information to the operator during robot-environment interaction. This paper provides an overview of current research in this domain and guidelines for future investigations. METHODS We review current technologies in force measurement and haptic devices, as well as their experimental evaluation and influence on users' performance. RESULTS Force sensing is moving away from conventional proximal measurement methods toward distal sensing and contact-less methods. Wearable devices that deliver haptic feedback on different body parts are playing an increasingly important role. Performance and accuracy improvements are the most widely reported benefits of haptic feedback, while there is debate on its effect on task completion time and exerted force. CONCLUSION With the surge of new ideas, there is a need for better and more systematic validation of new sensing and feedback technology, through better user studies and novel methods such as validated benchmarks and new taxonomies. SIGNIFICANCE This review investigates haptics from sensing to interfaces within the context of users' performance and validation procedures, to highlight salient advances. It provides guidelines for future developments and highlights shortcomings in the field.
21
Urias MG, Patel N, He C, Ebrahimi A, Kim JW, Iordachita I, Gehlbach PL. Artificial intelligence, robotics and eye surgery: are we overfitted? Int J Retina Vitreous 2019; 5:52. [PMID: 31890281 PMCID: PMC6912992 DOI: 10.1186/s40942-019-0202-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2019] [Accepted: 11/25/2019] [Indexed: 11/10/2022] Open
Abstract
Eye surgery, specifically retinal micro-surgery, involves sensory and motor skill that approaches human boundaries and physiological limits for steadiness, accuracy, and the ability to detect the small forces involved. Despite assumptions as to the benefit of robots in surgery, and despite great development effort, numerous challenges to the full development and adoption of robotic assistance in surgical ophthalmology remain. Historically, the first in-human robot-assisted retinal surgery occurred nearly 30 years after the first experimental papers on the subject. Similarly, artificial intelligence emerged decades ago and is only now being more fully realized in ophthalmology. The delay between conception and application has in part been due to the technological advances required to implement new processing strategies, chief among these the well-matched processing power of specialty graphics processing units for machine learning. Transcending the classic concept of robots performing repetitive tasks, artificial intelligence and machine learning are related concepts that have proven their ability to design concepts and solve problems. The implication of such abilities is that future machines may further intrude on the domain of heretofore "human-reserved" tasks. Although the potential of artificial intelligence/machine learning is profound, present marketing promises and hype exceed its stage of development, analogous to the seventeenth-century mathematical "boom" with algebra. Nevertheless, robotic systems augmented by machine learning may eventually improve robot-assisted retinal surgery and could potentially transform the discipline. This commentary analyzes advances in retinal robotic surgery, its current drawbacks and limitations, and the potential role of artificial intelligence in robotic retinal surgery.
Affiliation(s)
- Müller G. Urias
- Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD 21287 USA
- Federal University of Sao Paulo, São Paulo, 04023-062 Brazil
- Niravkumar Patel
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218 USA
- Changyan He
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218 USA
- School of Mechanical Engineering and Automation, Beihang University, Beijing, 100191 China
- Ali Ebrahimi
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218 USA
- Ji Woong Kim
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218 USA
- Iulian Iordachita
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218 USA
- Peter L. Gehlbach
- Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD 21287 USA
- Whiting School of Engineering, Johns Hopkins University, Baltimore, MD 21218 USA
22
Thomas AW, Heekeren HR, Müller KR, Samek W. Analyzing Neuroimaging Data Through Recurrent Deep Learning Models. Front Neurosci 2019; 13:1321. [PMID: 31920491 PMCID: PMC6914836 DOI: 10.3389/fnins.2019.01321] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2019] [Accepted: 11/25/2019] [Indexed: 01/25/2023] Open
Abstract
The application of deep learning (DL) models to neuroimaging data poses several challenges, due to the high dimensionality, low sample size, and complex temporo-spatial dependency structure of these data. Further, DL models often act as black boxes, impeding insight into the association of cognitive state and brain activity. To approach these challenges, we introduce the DeepLight framework, which utilizes long short-term memory (LSTM) based DL models to analyze whole-brain functional Magnetic Resonance Imaging (fMRI) data. To decode a cognitive state (e.g., seeing the image of a house), DeepLight separates an fMRI volume into a sequence of axial brain slices, which is then sequentially processed by an LSTM. To maintain interpretability, DeepLight adapts the layer-wise relevance propagation (LRP) technique, thereby decomposing its decoding decision into the contributions of the single input voxels. Importantly, the decomposition is performed on the level of single fMRI volumes, enabling DeepLight to study the associations between cognitive state and brain activity on several levels of data granularity, from the group level down to the level of single time points. To demonstrate the versatility of DeepLight, we apply it to a large fMRI dataset of the Human Connectome Project. We show that DeepLight outperforms conventional approaches of uni- and multivariate fMRI analysis in decoding cognitive states and in identifying the physiologically appropriate brain regions associated with these states. We further demonstrate DeepLight's ability to study the fine-grained temporo-spatial variability of brain activity over sequences of single fMRI samples.
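The core decoding idea — treating a 3D volume as an ordered sequence of axial slices consumed by an LSTM — can be sketched as follows. This is a simplified illustration with assumed slice sizes and class counts; DeepLight itself extracts convolutional features per slice rather than flattening raw voxels as done here:

```python
import torch
import torch.nn as nn

class SliceSequenceDecoder(nn.Module):
    """Treat a volume as a sequence of axial slices fed to an LSTM."""
    def __init__(self, slice_pixels=32 * 32, hidden=64, n_states=4):
        super().__init__()
        self.lstm = nn.LSTM(slice_pixels, hidden, batch_first=True)
        self.classify = nn.Linear(hidden, n_states)

    def forward(self, volume):             # volume: (B, n_slices, H, W)
        b, s = volume.shape[:2]
        seq = volume.view(b, s, -1)        # each axial slice becomes one timestep
        out, _ = self.lstm(seq)
        return self.classify(out[:, -1])   # cognitive-state logits after last slice

dec = SliceSequenceDecoder()
vol = torch.randn(2, 20, 32, 32)           # 20 axial slices per volume
logits = dec(vol)
print(logits.shape)  # torch.Size([2, 4])
```

Because the decision is produced per volume, relevance-propagation techniques such as LRP can then attribute it back to individual input voxels, which is what gives the framework its interpretability.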
Affiliation(s)
- Armin W. Thomas
- Machine Learning Group, Technische Universität Berlin, Berlin, Germany
- Center for Cognitive Neuroscience Berlin, Freie Universität Berlin, Berlin, Germany
- Max Planck School of Cognition, Leipzig, Germany
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Hauke R. Heekeren
- Center for Cognitive Neuroscience Berlin, Freie Universität Berlin, Berlin, Germany
- Department of Education and Psychology, Freie Universität Berlin, Berlin, Germany
- Klaus-Robert Müller
- Machine Learning Group, Technische Universität Berlin, Berlin, Germany
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Max Planck Institute for Informatics, Saarbrücken, Germany
- Wojciech Samek
- Machine Learning Group, Fraunhofer Heinrich Hertz Institute, Berlin, Germany
23
A Clamping Force Estimation Method Based on a Joint Torque Disturbance Observer Using PSO-BPNN for Cable-Driven Surgical Robot End-Effectors. SENSORS 2019; 19:s19235291. [PMID: 31805636 PMCID: PMC6929025 DOI: 10.3390/s19235291] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/28/2019] [Revised: 11/29/2019] [Accepted: 11/29/2019] [Indexed: 01/17/2023]
Abstract
The ability to sense external force is an important technique for force feedback, haptics and safe interaction control in minimally-invasive surgical robots (MISRs). Moreover, this ability plays a significant role in refined surgical operations. The wrist joints of surgical robot end-effectors are usually actuated by several long-distance wire cables; the two forceps are each actuated by two cables. The scope of force sensing includes multidimensional external force and one-dimensional clamping force. This paper focuses on a one-dimensional clamping force sensing method that does not require any internal force sensor integrated in the end-effector's forceps. A new clamping force estimation method is proposed based on a joint torque disturbance observer (JTDO) for a cable-driven surgical robot end-effector. The JTDO essentially considers the variation between the actual cable tension and the cable tension estimated under free motion using a Particle Swarm Optimization Back Propagation Neural Network (PSO-BPNN). Furthermore, a clamping force estimator is proposed based on the forceps' JTDO and their mechanical relations. According to comparative analyses in experimental studies, the detection resolutions of collision force and clamping force were 0.11 N. The experimental results verify the feasibility and effectiveness of the proposed clamping force sensing method.
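The observer idea underlying this method — predict the free-motion cable tension, then attribute the residual between measured and predicted tension to the clamp — can be sketched without the PSO-BPNN specifics. In this NumPy illustration a simple polynomial fit stands in for the trained neural tension model, and the tension-to-force gain is an invented constant, not a value from the paper:

```python
import numpy as np

# Free-motion calibration: learn cable tension as a function of joint angle.
theta_cal = np.linspace(-1.0, 1.0, 50)
tension_cal = 2.0 + 0.5 * theta_cal + 0.1 * theta_cal**2   # free-motion tension (N)
coeffs = np.polyfit(theta_cal, tension_cal, deg=2)          # stand-in for the PSO-BPNN

def estimate_clamping_force(theta, measured_tension, gain=4.0):
    """Disturbance-observer residual between measured and predicted
    free-motion tension, scaled by an illustrative tension-to-force gain."""
    predicted = np.polyval(coeffs, theta)
    disturbance = measured_tension - predicted               # JTDO residual
    return gain * disturbance

# During grasping, the measured tension exceeds the free-motion prediction.
theta = 0.3
free_tension = np.polyval(coeffs, theta)
force = estimate_clamping_force(theta, free_tension + 0.25)
print(round(float(force), 2))  # 1.0
```

The appeal of this structure is that no sensor sits in the forceps: everything is inferred from proximal cable tension plus a model of what that tension would be with nothing in the jaws.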