1
Tyree A, Bhatia A, Hong M, Hanna J, Kasper KA, Good B, Perez D, Govalla DN, Hunt A, Sathishkumaraselvam V, Hoffman JP, Rozenblit JW, Gutruf P. Biosymbiotic haptic feedback - Sustained long term human machine interfaces. Biosens Bioelectron 2024; 261:116432. [PMID: 38861810] [DOI: 10.1016/j.bios.2024.116432]
Abstract
Haptic technology permeates diverse fields and is receiving renewed attention for VR and AR applications. Advances in flexible electronics facilitate the integration of haptic technologies into soft wearable systems; however, small-footprint requirements create operational-time challenges, demanding either large batteries, wired connections, or frequent recharging. This restricts haptic devices to short-duration tasks or low duty cycles and prohibits continuously assisting applications; many chronic applications remain uninvestigated because of this technological gap. Here, we address wireless power and operation challenges with a biosymbiotic approach that enables continuous operation without user intervention, facilitated by wireless power transfer, eliminating the need for large batteries and offering long-term haptic feedback without adhesive attachment to the body. These capabilities enable haptic feedback for robotic surgery training and posture correction over weeks of use with neural-net computation. The demonstrations show that this device class expands use beyond conventional brick-and-strap or epidermally attached devices, enabling new fields of use for imperceptible therapeutic and assistive haptic technologies supporting care and disease management.
Affiliation(s)
- Amanda Tyree
- Department of Biomedical Engineering, University of Arizona, Tucson, AZ, 85721, USA
- Aman Bhatia
- Department of Biomedical Engineering, University of Arizona, Tucson, AZ, 85721, USA
- Minsik Hong
- Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, 85721, USA
- Jessica Hanna
- Department of Biomedical Engineering, University of Arizona, Tucson, AZ, 85721, USA
- Kevin Albert Kasper
- Department of Biomedical Engineering, University of Arizona, Tucson, AZ, 85721, USA
- Brandon Good
- Department of Biomedical Engineering, University of Arizona, Tucson, AZ, 85721, USA
- Dania Perez
- Department of Biomedical Engineering, University of Arizona, Tucson, AZ, 85721, USA
- Dema Nua Govalla
- Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, 85721, USA
- Abigail Hunt
- Department of Biomedical Engineering, University of Arizona, Tucson, AZ, 85721, USA
- Jerzy W Rozenblit
- Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, 85721, USA; Bio5 Institute, University of Arizona, Tucson, AZ, 85721, USA.
- Philipp Gutruf
- Department of Biomedical Engineering, University of Arizona, Tucson, AZ, 85721, USA; Department of Electrical and Computer Engineering, University of Arizona, Tucson, AZ, 85721, USA; Bio5 Institute, University of Arizona, Tucson, AZ, 85721, USA; Neuroscience GIDP, University of Arizona, Tucson, AZ, 85721, USA.
2
Gillani M, Rupji M, Paul Olson TJ, Sullivan P, Shaffer VO, Balch GC, Shields MC, Liu Y, Rosen SA. Objective performance indicators differ in obese and nonobese patients during robotic proctectomy. Surgery 2024:S0039-6060(24)00594-4. [PMID: 39304451] [DOI: 10.1016/j.surg.2024.08.015]
Abstract
BACKGROUND Robotic surgery is perceived to be more complex in obese patients. Objective performance indicators, machine learning-enabled metrics, can provide objective data regarding surgeon movements and robotic arm kinematics. In this feasibility study, we identified differences in objective performance indicators during robotic proctectomy in obese and nonobese patients. METHODS Endoscopic videos were annotated to delineate individual surgical steps across 39 robotic proctectomies (1880 total steps). Thirteen patients were obese and 26 were nonobese. Objective performance indicators during the following steps were analyzed: splenic flexure mobilization, left colon mobilization, pelvic dissection, and rectal transection. RESULTS The following differences were noted during robotic proctectomy in obese patients: during splenic flexure mobilization, more arm swaps, longer camera path length and velocity; during left colon mobilization, longer step time, more arm swaps, higher camera-related metrics (movement, path length, velocity, acceleration, and jerk), greater dominant arm path length, moving time, and wrist articulation; during anterior pelvic dissection, longer energy activation time, camera path length, and moving time; during posterior pelvic dissection, lower nondominant arm velocity, jerk, and acceleration; during left pelvic dissection, longer energy activation time; during right pelvic dissection, greater camera-related metrics (movement, path length, moving time, and velocity); and during rectal transection, longer step time, more arm swaps, master clutch use and camera movements, greater dominant wrist articulation, and longer dominant arm path length. CONCLUSION We report step-specific objective performance indicators that differ during robotic proctectomy for obese and nonobese patients. This is the first study to use objective performance indicators to correlate a patient attribute with surgeon movements and robotic arm kinematics during robotic colorectal surgery.
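The kinematic metrics named above (path length, velocity, acceleration, jerk) are, at their core, discrete derivatives of sampled tool-tip or camera trajectories. As an illustrative sketch only, not the study's actual objective-performance-indicator pipeline, the basic quantities could be computed like this (the function name, units, and sampling interval are invented for illustration):

```python
import numpy as np

def opi_kinematics(positions, dt):
    """Compute basic OPI-style kinematic metrics from a trajectory
    sampled at interval dt (seconds).

    positions: (N, 3) array of x, y, z samples in millimetres.
    Returns path length (mm), mean speed (mm/s), and RMS jerk (mm/s^3).
    """
    positions = np.asarray(positions, dtype=float)
    steps = np.diff(positions, axis=0)              # displacement per sample
    path_length = np.linalg.norm(steps, axis=1).sum()
    velocity = steps / dt                           # first derivative
    acceleration = np.diff(velocity, axis=0) / dt   # second derivative
    jerk = np.diff(acceleration, axis=0) / dt       # third derivative
    mean_speed = np.linalg.norm(velocity, axis=1).mean()
    rms_jerk = np.sqrt((np.linalg.norm(jerk, axis=1) ** 2).mean())
    return path_length, mean_speed, rms_jerk

# Straight-line motion at constant speed: jerk should be zero.
traj = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0],
                 [3.0, 0.0, 0.0], [4.0, 0.0, 0.0]])
length, speed, jerk = opi_kinematics(traj, dt=0.1)
```

Higher RMS jerk indicates less smooth motion, which is one way differences like those reported for camera handling in obese patients become measurable.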
Affiliation(s)
- Mishal Gillani
- Department of Surgery, Emory University School of Medicine, Atlanta, GA
- Manali Rupji
- Winship Cancer Institute, Emory University, Atlanta, GA
- Patrick Sullivan
- Department of Surgery, Emory University School of Medicine, Atlanta, GA
- Glen C Balch
- Department of Surgery, Emory University School of Medicine, Atlanta, GA
- Yuan Liu
- Rollins School of Public Health, Emory University, Atlanta, GA
- Seth A Rosen
- Department of Surgery, Emory University School of Medicine, Atlanta, GA.
3
Bourdillon AT, Garg A, Wang H, Woo YJ, Pavone M, Boyd J. Integration of Reinforcement Learning in a Virtual Robotic Surgical Simulation. Surg Innov 2023; 30:94-102. [PMID: 35503302] [DOI: 10.1177/15533506221095298]
Abstract
Background. The revolutions in AI hold tremendous capacity to augment human achievements in surgery, but robust integration of deep learning algorithms with high-fidelity surgical simulation remains a challenge. We present a novel application of reinforcement learning (RL) for automating surgical maneuvers in a graphical simulation. Methods. In the Unity3D game engine, the Machine Learning-Agents package was integrated with the NVIDIA FleX particle simulator to develop autonomously behaving RL-trained scissors. Proximal Policy Optimization (PPO) was used to reward desired behaviors such as movement along a desired trajectory and optimized cutting maneuvers along the deformable tissue-like object. Constant and proportional reward functions were tested, and TensorFlow analytics was used to inform hyperparameter tuning and evaluate performance. Results. RL-trained scissors reliably manipulated the rendered tissue, which was simulated with soft-tissue properties. A desirable trajectory of the autonomously behaving scissors was achieved along one axis. Proportional rewards performed better than constant rewards. Cumulative reward and PPO metrics did not consistently improve across RL-trained scissors in the setting of movement across two axes (horizontal and depth). Conclusion. Game engines hold promising potential for the design and implementation of RL-based solutions to simulated surgical subtasks. Task completion was sufficiently achieved in one-dimensional movement in simulations with and without tissue rendering. Further work is needed to optimize network architecture and parameter tuning for increasing complexity.
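The constant-versus-proportional reward comparison can be illustrated with a toy shaping function: a constant scheme pays a fixed bonus whenever the agent makes progress toward the target, while a proportional scheme scales the reward with how much progress was made. This is a hedged sketch; the bonus, scale, and distance signal are invented, and the paper's actual reward terms for cutting maneuvers are more involved:

```python
def constant_reward(prev_dist, new_dist, bonus=0.1):
    """Fixed bonus for any step that reduces distance to the target."""
    return bonus if new_dist < prev_dist else -bonus

def proportional_reward(prev_dist, new_dist, scale=1.0):
    """Reward proportional to the progress made this step; negative
    when the agent moves away from the target."""
    return scale * (prev_dist - new_dist)

# A large step toward the goal earns more under the proportional scheme,
# giving the policy a gradient toward bigger, more direct progress.
r_const = constant_reward(1.0, 0.2)
r_prop = proportional_reward(1.0, 0.2)
```

Proportional shaping of this kind is one plausible reason the proportional rewards trained better: the signal distinguishes small progress from large progress rather than treating both identically.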
Affiliation(s)
- Animesh Garg
- Vector Institute and Department of Computer Science, University of Toronto, Toronto, ON, Canada
- Hanjay Wang
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA
- Y Joseph Woo
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA; Department of Bioengineering, Stanford University, Stanford, CA, USA
- Marco Pavone
- Department of Aeronautics and Astronautics, Stanford University, Stanford, CA, USA
- Jack Boyd
- Department of Cardiothoracic Surgery, Stanford University, Stanford, CA, USA
4
Abstract
Because of the increasing use of laparoscopic surgeries, robotic technologies have been developed to overcome the challenges these surgeries impose on surgeons. This paper presents an overview of the current state of surgical robots used in laparoscopic surgeries. Four main categories were discussed: handheld laparoscopic devices, laparoscope positioning robots, master–slave teleoperated systems with dedicated consoles, and robotic training systems. A generalized control block diagram is developed to demonstrate the general control scheme for each category of surgical robots. In order to review these robotic technologies, related published works were investigated and discussed. Detailed discussions and comparison tables are presented to compare their effectiveness in laparoscopic surgeries. Each of these technologies has proved to be beneficial in laparoscopic surgeries.
5
Kim JS, Piozzi GN, Kwak J, Kim J, Kim T, Choo J, Yang G, Lee TH, Baek SJ, Kim J, Kim SH. Quality of laparoscopic camera navigation in robot‐assisted versus conventional laparoscopic surgery for rectal cancer: An analysis of surgical videos through a video processing computer software. Int J Med Robot 2022; 18:e2393. [DOI: 10.1002/rcs.2393]
Affiliation(s)
- Ji Seon Kim
- Division of Colon and Rectal Surgery, Department of Surgery, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Korea
- Guglielmo Niccolo Piozzi
- Division of Colon and Rectal Surgery, Department of Surgery, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Korea
- Jung‐Myun Kwak
- Division of Colon and Rectal Surgery, Department of Surgery, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Korea
- Jinhee Kim
- Kim Jaechul School of Artificial Intelligence, KAIST, Daejeon, Korea
- Taesung Kim
- Kim Jaechul School of Artificial Intelligence, KAIST, Daejeon, Korea
- Jaegul Choo
- Kim Jaechul School of Artificial Intelligence, KAIST, Daejeon, Korea
- Gene Yang
- Division of Minimally Invasive Surgery, Department of Surgery, University at Buffalo, Buffalo, New York, USA
- Tae Hoon Lee
- Division of Colon and Rectal Surgery, Department of Surgery, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Korea
- Se Jin Baek
- Division of Colon and Rectal Surgery, Department of Surgery, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Korea
- Jin Kim
- Division of Colon and Rectal Surgery, Department of Surgery, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Korea
- Seon Hahn Kim
- Division of Colon and Rectal Surgery, Department of Surgery, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Korea
6
Li B, Lu B, Wang Z, Zhong F, Dou Q, Liu YH. Learning Laparoscope Actions via Video Features for Proactive Robotic Field-of-View Control. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3173442]
Affiliation(s)
- Bin Li
- T Stone Robotics Institute, The Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong
- Bo Lu
- T Stone Robotics Institute, The Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong
- Ziyi Wang
- T Stone Robotics Institute, The Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong
- Fangxun Zhong
- T Stone Robotics Institute, The Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong
- Qi Dou
- Department of Computer Science and Engineering, and T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong
- Yun-Hui Liu
- T Stone Robotics Institute, The Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong
7
Gruijthuijsen C, Garcia-Peraza-Herrera LC, Borghesan G, Reynaerts D, Deprest J, Ourselin S, Vercauteren T, Vander Poorten E. Robotic Endoscope Control Via Autonomous Instrument Tracking. Front Robot AI 2022; 9:832208. [PMID: 35480090] [DOI: 10.3389/frobt.2022.832208]
Abstract
Many keyhole interventions rely on bi-manual handling of surgical instruments, forcing the main surgeon to rely on a second surgeon to act as a camera assistant. In addition to the burden of excessively involving surgical staff, this may lead to reduced image stability, increased task completion time and sometimes errors due to the monotony of the task. Robotic endoscope holders, controlled by a set of basic instructions, have been proposed as an alternative, but their unnatural handling may increase the cognitive load of the (solo) surgeon, which hinders their clinical acceptance. More seamless integration in the surgical workflow would be achieved if robotic endoscope holders collaborated with the operating surgeon via semantically rich instructions that closely resemble instructions that would otherwise be issued to a human camera assistant, such as “focus on my right-hand instrument.” As a proof of concept, this paper presents a novel system that paves the way towards a synergistic interaction between surgeons and robotic endoscope holders. The proposed platform allows the surgeon to perform a bimanual coordination and navigation task, while a robotic arm autonomously performs the endoscope positioning tasks. Within our system, we propose a novel tooltip localization method based on surgical tool segmentation and a novel visual servoing approach that ensures smooth and appropriate motion of the endoscope camera. We validate our vision pipeline and run a user study of this system. The clinical relevance of the study is ensured through the use of a laparoscopic exercise validated by the European Academy of Gynaecological Surgery which involves bi-manual coordination and navigation. Successful application of our proposed system provides a promising starting point towards broader clinical adoption of robotic endoscope holders.
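The two ingredients described here, a tooltip estimate obtained from surgical tool segmentation and a visual servoing step that moves the camera toward it, can be sketched in simplified form. This is a stand-in for the paper's pipeline, not its actual method: the centroid of a binary tool mask serves as a crude tooltip proxy, and a proportional controller drives the image centre toward it (the gain and frame size are illustrative assumptions):

```python
import numpy as np

def tooltip_from_mask(mask):
    """Centroid of a binary segmentation mask as a crude tooltip proxy."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

def servo_step(tooltip_px, frame_size, gain=0.5):
    """Proportional visual-servoing step: pixel error between the
    tooltip and the image centre, scaled into a camera command."""
    centre = np.array(frame_size, dtype=float) / 2.0
    error = tooltip_px - centre
    return -gain * error  # move the camera so the error shrinks

mask = np.zeros((480, 640), dtype=bool)
mask[100:110, 300:310] = True          # a small simulated tool blob
tip = tooltip_from_mask(mask)
cmd = servo_step(tip, (640, 480))
```

A real implementation would replace the centroid with a learned tooltip localizer and filter the command for the smooth camera motion the paper emphasizes.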
Affiliation(s)
- Luis C. Garcia-Peraza-Herrera
- Department of Medical Physics and Biomedical Engineering, University College London, London, United Kingdom
- Department of Surgical and Interventional Engineering, King’s College London, London, United Kingdom
- Correspondence: Luis C. Garcia-Peraza-Herrera
- Gianni Borghesan
- Department of Mechanical Engineering, KU Leuven, Leuven, Belgium
- Core Lab ROB, Flanders Make, Lommel, Belgium
- Jan Deprest
- Department of Development and Regeneration, Division Woman and Child, KU Leuven, Leuven, Belgium
- Sebastien Ourselin
- Department of Surgical and Interventional Engineering, King’s College London, London, United Kingdom
- Tom Vercauteren
- Department of Surgical and Interventional Engineering, King’s College London, London, United Kingdom
8
A Natural Language Interface for an Autonomous Camera Control System on the da Vinci Surgical Robot. Robotics 2022. [DOI: 10.3390/robotics11020040]
Abstract
Positioning a camera during laparoscopic and robotic procedures is challenging and essential for successful operations. During surgery, if the camera view is not optimal, surgery becomes more complex and potentially error-prone. To address this need, we have developed a voice interface to an autonomous camera system that can trigger behavioral changes and act more as a partner to the surgeon. Like a human operator, the camera can take cues from the surgeon to help create optimized surgical camera views. It has the advantage of nominal behavior that is helpful in most general cases, and its natural language interface makes it dynamically customizable and on-demand. It permits the control of a camera at a higher level of abstraction. This paper shows the implementation details and usability of a voice-activated autonomous camera system. A voice activation test on a limited set of practiced key phrases was performed using both online and offline voice recognition systems. The results show, on average, greater than 94% recognition accuracy for the online system and 86% accuracy for the offline system. However, the response time of the online system was greater than 1.5 s, whereas that of the offline system was 0.6 s. This work is a step towards cooperative surgical robots that will effectively partner with human operators to enable more robust surgeries. A video link of the system in operation is provided in this paper.
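A "limited set of practiced key phrases" suggests a closed-vocabulary dispatch layer between the speech recognizer's transcript and the camera commands. A minimal sketch of such a layer using fuzzy string matching follows; the phrase list, command names, and matching threshold are invented for illustration, since the paper does not publish its grammar:

```python
import difflib

# Hypothetical closed vocabulary mapping phrases to camera behaviors.
COMMANDS = {
    "focus on my right instrument": "TRACK_RIGHT",
    "focus on my left instrument": "TRACK_LEFT",
    "keep both tools in view": "TRACK_MIDPOINT",
    "zoom in": "ZOOM_IN",
    "zoom out": "ZOOM_OUT",
}

def dispatch(transcript, cutoff=0.6):
    """Map a (possibly noisy) transcript to the closest known command,
    or None if nothing is close enough to act on safely."""
    hits = difflib.get_close_matches(transcript.lower(), list(COMMANDS), n=1, cutoff=cutoff)
    return COMMANDS[hits[0]] if hits else None

# Tolerates a minor recognition error ("instruments" vs "instrument").
action = dispatch("focus on my right instruments")
```

Rejecting transcripts below the similarity cutoff is one simple way to avoid triggering camera motion on misrecognized speech.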
9
Huber M, Ourselin S, Bergeles C, Vercauteren T. Deep homography estimation in dynamic surgical scenes for laparoscopic camera motion extraction. Comput Methods Biomech Biomed Eng Imaging Vis 2022; 10:321-329. [PMID: 38013837] [DOI: 10.1080/21681163.2021.2002195]
Abstract
Current laparoscopic camera motion automation relies on rule-based approaches or focuses only on surgical tools. Imitation Learning (IL) methods could alleviate these shortcomings but have so far been applied to oversimplified setups. Instead of extracting actions from such setups, in this work we introduce a method that extracts a laparoscope holder's actions from videos of laparoscopic interventions. We synthetically add camera motion to a newly acquired dataset of camera-motion-free da Vinci surgery image sequences through a novel homography generation algorithm. The synthetic camera motion serves as a supervisory signal for camera motion estimation that is invariant to object and tool motion. We perform an extensive evaluation of state-of-the-art (SOTA) Deep Neural Networks (DNNs) across multiple compute regimes, finding that our method transfers from our camera-motion-free da Vinci surgery dataset to videos of laparoscopic interventions, outperforming classical homography estimation approaches in both precision (by 41%) and runtime on a CPU (by 43%).
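The idea of synthetically adding camera motion can be illustrated by composing a small random homography and warping image points through it: the known homography then serves as a free supervisory label for the induced motion. This sketch uses a simple similarity-type perturbation with invented ranges; the paper's homography generation algorithm is more sophisticated:

```python
import numpy as np

def random_homography(rng, max_angle=0.05, max_shift=10.0, max_scale=0.02):
    """Compose a small similarity-type homography (rotation, isotropic
    scale, translation) as a stand-in for synthetic camera motion."""
    a = rng.uniform(-max_angle, max_angle)
    s = 1.0 + rng.uniform(-max_scale, max_scale)
    tx, ty = rng.uniform(-max_shift, max_shift, size=2)
    c, si = np.cos(a), np.sin(a)
    return np.array([[s * c, -s * si, tx],
                     [s * si,  s * c, ty],
                     [0.0,     0.0,   1.0]])

def warp_points(H, pts):
    """Apply homography H to (N, 2) pixel coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]

rng = np.random.default_rng(0)
H = random_homography(rng)
corners = np.array([[0.0, 0.0], [640.0, 0.0], [640.0, 480.0], [0.0, 480.0]])
moved = warp_points(H, corners)   # corner flow = self-supervised label
flow = moved - corners
```

Training a network to predict `flow` (or `H` itself) from the warped and original frames yields a camera motion estimator without manual labels, which is the self-supervision principle the abstract describes.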
Affiliation(s)
- Martin Huber
- School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King’s College London, London, UK
- Sébastien Ourselin
- School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King’s College London, London, UK
- Christos Bergeles
- School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King’s College London, London, UK
- Tom Vercauteren
- School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King’s College London, London, UK
10
Da Col T, Caccianiga G, Catellani M, Mariani A, Ferro M, Cordima G, De Momi E, Ferrigno G, de Cobelli O. Automating Endoscope Motion in Robotic Surgery: A Usability Study on da Vinci-Assisted Ex Vivo Neobladder Reconstruction. Front Robot AI 2021; 8:707704. [PMID: 34901168] [DOI: 10.3389/frobt.2021.707704]
Abstract
Robots for minimally invasive surgery introduce many advantages but still require the surgeon to alternately control the surgical instruments and the endoscope. This work aims at providing autonomous navigation of the endoscope during a surgical procedure. The autonomous endoscope motion was based on kinematic tracking of the surgical instruments and integrated with the da Vinci Research Kit. A preclinical usability study was conducted with 10 urologists. They carried out an ex vivo orthotopic neobladder reconstruction twice, using both traditional and autonomous endoscope control. The usability of the system was tested by asking participants to fill in standard system usability scales. Moreover, the effectiveness of the method was assessed by analyzing the total procedure time and the time spent with the instruments out of the field of view. The average system usability score exceeded the threshold usually identified as the limit for good usability (average score = 73.25 > 68). The average total procedure time with autonomous endoscope navigation was comparable with classic control (p = 0.85 > 0.05), yet it significantly reduced the time out of the field of view (p = 0.022 < 0.05). Based on our findings, the autonomous endoscope improves the usability of the surgical system, and it has the potential to be an additional and customizable tool for the surgeon, who can always take control of the endoscope or leave it to move autonomously.
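One of the study's effectiveness measures, the time instruments spend outside the field of view, is straightforward to compute once per-frame tool positions are projected into image coordinates. A minimal sketch under that assumption (the frame size, sampling interval, and data layout are illustrative, not the study's actual analysis code):

```python
def out_of_view_time(tool_px, frame_w, frame_h, dt):
    """Total time (s) during which any listed tool-tip pixel position
    falls outside the image bounds, given samples every dt seconds.

    tool_px: list of per-frame lists of (x, y) tool positions.
    """
    lost_frames = 0
    for frame in tool_px:
        if any(not (0 <= x < frame_w and 0 <= y < frame_h) for x, y in frame):
            lost_frames += 1
    return lost_frames * dt

# Three frames sampled 0.5 s apart; the second has one tool off-screen.
samples = [
    [(100, 200), (500, 300)],
    [(700, 200), (500, 300)],   # x = 700 is outside a 640-wide frame
    [(120, 210), (480, 290)],
]
t_lost = out_of_view_time(samples, 640, 480, dt=0.5)
```

Counting any frame with at least one lost tool is a conservative choice; a per-tool breakdown would simply accumulate the counts separately.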
Affiliation(s)
- Tommaso Da Col
- Neuro-Engineering and Medical Robotics Laboratory (NEARLab), Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Guido Caccianiga
- Haptic Intelligence Department, Max-Planck-Institute for Intelligent Systems, Stuttgart, Germany
- Michele Catellani
- Division of Urology, European Institute of Oncology, IRCCS, Milan, Italy
- Andrea Mariani
- Excellence in Robotics and AI Department, Sant’Anna School of Advanced Studies, Pisa, Italy
- Matteo Ferro
- Division of Urology, European Institute of Oncology, IRCCS, Milan, Italy
- Giovanni Cordima
- Division of Urology, European Institute of Oncology, IRCCS, Milan, Italy
- Elena De Momi
- Neuro-Engineering and Medical Robotics Laboratory (NEARLab), Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Giancarlo Ferrigno
- Neuro-Engineering and Medical Robotics Laboratory (NEARLab), Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milan, Italy
- Ottavio de Cobelli
- Division of Urology, European Institute of Oncology, IRCCS, Milan, Italy
11
Huber M, Mitchell JB, Henry R, Ourselin S, Vercauteren T, Bergeles C. Homography-based Visual Servoing with Remote Center of Motion for Semi-autonomous Robotic Endoscope Manipulation. Int Symp Med Robot 2021:1-7. [PMID: 39351396] [DOI: 10.1109/ismr48346.2021.9661563]
Abstract
The dominant visual servoing approaches in Minimally Invasive Surgery (MIS) follow single points or adapt the endoscope's field of view based on the surgical tools' distance. These methods rely on point positions with respect to the camera frame to infer a control policy. Deviating from the dominant methods, we formulate a robotic controller that allows for image-based visual servoing that requires neither explicit tool and camera positions nor any explicit image depth information. The proposed method relies on homography-based image registration, which changes the automation paradigm from a point-centric towards a surgical-scene-centric approach. It simultaneously respects a programmable Remote Center of Motion (RCM). Our approach allows a surgeon to build a graph of desired views from which, once built, views can be manually selected and automatically servoed to, irrespective of changes in the robot-patient frame transformation. We evaluate our method on an abdominal phantom and provide an open-source ROS MoveIt integration for use with any serial manipulator. A video is provided.
Affiliation(s)
- Martin Huber
- School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King's College London, London, United Kingdom
- John Bason Mitchell
- School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King's College London, London, United Kingdom
- Department of Medical Physics and Biomedical Engineering, Faculty of Engineering Sciences, University College London, London, United Kingdom
- Ross Henry
- School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King's College London, London, United Kingdom
- Sébastien Ourselin
- School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King's College London, London, United Kingdom
- Tom Vercauteren
- School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King's College London, London, United Kingdom
- Christos Bergeles
- School of Biomedical Engineering & Image Sciences, Faculty of Life Sciences & Medicine, King's College London, London, United Kingdom
12
Multirobot Confidence and Behavior Modeling: An Evaluation of Semiautonomous Task Performance and Efficiency. Robotics 2021. [DOI: 10.3390/robotics10020071]
Abstract
There is considerable interest in multirobot systems capable of performing spatially distributed, hazardous, and complex tasks as a team leveraging the unique abilities of humans and automated machines working alongside each other. The limitations of human perception and cognition affect operators’ ability to integrate information from multiple mobile robots, switch between their spatial frames of reference, and divide attention among many sensory inputs and command outputs. Automation is necessary to help the operator manage increasing demands as the number of robots (and humans) scales up. However, more automation does not necessarily equate to better performance. A generalized robot confidence model was developed, which transforms key operator attention indicators to a robot confidence value for each robot to enable the robots’ adaptive behaviors. This model was implemented in a multirobot test platform with the operator commanding robot trajectories using a computer mouse and an eye tracker providing gaze data used to estimate dynamic operator attention. The human-attention-based robot confidence model dynamically adapted the behavior of individual robots in response to operator attention. The model was successfully evaluated to reveal evidence linking average robot confidence to multirobot search task performance and efficiency. The contributions of this work provide essential steps toward effective human operation of multiple unmanned vehicles to perform spatially distributed and hazardous tasks in complex environments for space exploration, defense, homeland security, search and rescue, and other real-world applications.
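The transformation from operator-attention indicators to a per-robot confidence value can be illustrated with a toy model: exponentially smooth each robot's share of the operator's gaze and scale an autonomy parameter (here, an allowed speed) with the result. The indicator, smoothing constant, and speed mapping are all invented for illustration; the paper's generalized confidence model is more elaborate:

```python
class RobotConfidence:
    """Toy attention-based confidence: exponential smoothing of the
    fraction of recent operator gaze samples that landed on this robot."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha        # smoothing factor in (0, 1]
        self.confidence = 0.0     # starts unattended

    def update(self, gaze_fraction):
        """gaze_fraction in [0, 1]: share of recent gaze on this robot."""
        self.confidence += self.alpha * (gaze_fraction - self.confidence)
        return self.confidence

    def speed_limit(self, v_max=1.0, v_min=0.2):
        """Adaptive behavior: unattended robots slow down so they stay
        predictable; attended robots may move at full speed."""
        return v_min + (v_max - v_min) * self.confidence

rc = RobotConfidence()
for _ in range(30):
    rc.update(1.0)        # operator attends to this robot continuously
fast = rc.speed_limit()   # approaches v_max
```

The smoothing keeps confidence from jumping on a single glance, which mirrors the paper's point that the model must track *dynamic* operator attention rather than instantaneous gaze.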
13
A learning robot for cognitive camera control in minimally invasive surgery. Surg Endosc 2021; 35:5365-5374. [PMID: 33904989] [DOI: 10.1007/s00464-021-08509-8]
Abstract
Background We demonstrate the first self-learning, context-sensitive, autonomous camera-guiding robot applicable to minimally invasive surgery. The majority of surgical robots nowadays are telemanipulators without autonomous capabilities. Autonomous systems have been developed for laparoscopic camera guidance; however, they follow simple rules and do not adapt their behavior to specific tasks, procedures, or surgeons. Methods The methodology presented herein allows different robot kinematics to perceive their environment, interpret it according to a knowledge base, and perform context-aware actions. For training, twenty operations were conducted with human camera guidance by a single surgeon. Subsequently, we experimentally evaluated the cognitive robotic camera control. A VIKY EP system and a KUKA LWR 4 robot were trained on data from manual camera guidance after completion of the surgeon's learning curve. Second, only data from the VIKY EP were used to train the LWR, and finally data from training with the LWR were used to re-train the LWR. Results The duration of each operation decreased with the robot's increasing experience from 1704 s ± 244 s to 1406 s ± 112 s, and 1197 s. Camera guidance quality (good/neutral/poor) improved from 38.6/53.4/7.9% to 49.4/46.3/4.1% and 56.2/41.0/2.8%. Conclusions The cognitive camera robot improved its performance with experience, laying the foundation for a new generation of cognitive surgical robots that adapt to a surgeon's needs. Supplementary Information The online version contains supplementary material available at 10.1007/s00464-021-08509-8.
14
Abstract
The advent of telerobotic systems has revolutionized various aspects of industry and human life. This technology is designed to augment human sensorimotor capabilities and extend them beyond natural competence. Classic examples are space and underwater applications, where distance and access are the two major physical barriers to be overcome with this technology. In modern examples, telerobotic systems have been used in several clinical applications, including teleoperated surgery and telerehabilitation. In this regard, there has been a significant amount of research and development due to the major benefits in terms of medical outcomes. Recently, telerobotic systems have been combined with advanced artificial intelligence modules to better share agency with the operator and open new doors of medical automation. In this review paper, we provide a comprehensive analysis of the literature considering various topologies of telerobotic systems in the medical domain, while shedding light on the different levels of autonomy for this technology, starting from direct control and going up to command-tracking autonomous telerobots. Existing challenges, including instrumentation, transparency, autonomy, stochastic communication delays, and stability, are discussed, together with current research directions related to benefits in telemedicine and medical automation and the future vision for this technology.
15
Mariani A, Colaci G, Da Col T, Sanna N, Vendrame E, Menciassi A, De Momi E. An Experimental Comparison Towards Autonomous Camera Navigation to Optimize Training in Robot Assisted Surgery. IEEE Robot Autom Lett 2020. [DOI: 10.1109/lra.2020.2965067]
16
Eslamian S, Reisner LA, Pandya AK. Development and evaluation of an autonomous camera control algorithm on the da Vinci Surgical System. Int J Med Robot 2019; 16:e2036. [PMID: 31490615] [DOI: 10.1002/rcs.2036]
Abstract
BACKGROUND Manual control of the camera arm in telerobotic surgical systems requires the surgeon to repeatedly interrupt the flow of the surgery. During surgery, there are instances when one or even both tools can drift out of the field of view. These issues may lead to increased workload and potential errors. METHODS We performed a 20-participant study (including four surgeons) to compare different methods of camera control on a customized da Vinci Surgical System. We tested (a) an autonomous camera algorithm, (b) standard clutched control, and (c) an experienced camera operator using a joystick. RESULTS The automated algorithm surpassed the traditional method of clutched camera control in measures of user-perceived workload, efficiency, and progress. Additionally, it was consistently able to generate more centered and appropriately zoomed viewpoints than the other methods while keeping both tools safely inside the camera's field of view. CONCLUSIONS Clinical systems of the future should consider automating the camera control aspects of robotic surgery.
Affiliation(s)
- Shahab Eslamian
- Department of Electrical and Computer Engineering, Wayne State University, Detroit, Michigan
- Luke A Reisner
- Department of Electrical and Computer Engineering, Wayne State University, Detroit, Michigan
- Abhilash K Pandya
- Department of Electrical and Computer Engineering, Wayne State University, Detroit, Michigan
17
Prince SW, Kang C, Simonelli J, Lee Y, Gerber MJ, Lim C, Chu K, Dutson EP, Tsao T. A robotic system for telementoring and training in laparoscopic surgery. Int J Med Robot 2019; 16:e2040. [DOI: 10.1002/rcs.2040]
Affiliation(s)
- Stephen W. Prince
- Mechanical and Aerospace Engineering Department, University of California, Los Angeles, California
- Christopher Kang
- Mechanical and Aerospace Engineering Department, University of California, Los Angeles, California
- James Simonelli
- Mechanical and Aerospace Engineering Department, University of California, Los Angeles, California
- Yu‐Hsiu Lee
- Mechanical and Aerospace Engineering Department, University of California, Los Angeles, California
- Matthew J. Gerber
- Mechanical and Aerospace Engineering Department, University of California, Los Angeles, California
- Christopher Lim
- Mechanical and Aerospace Engineering Department, University of California, Los Angeles, California
- Kevin Chu
- Mechanical and Aerospace Engineering Department, University of California, Los Angeles, California
- Erik P. Dutson
- Center for Advanced Surgical and Interventional Technology, University of California, Los Angeles, California
- Tsu‐Chin Tsao
- Mechanical and Aerospace Engineering Department, University of California, Los Angeles, California

18
Talha M, Stolkin R. Preliminary Evaluation of an Orbital Camera for Teleoperation of Remote Manipulators. 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2019. [DOI: 10.1109/iros40897.2019.8968218]
19
Zhou Y, Jiang G, Zhang C, Wang Z, Zhang Z, Liu H. Modeling of a joint-type flexible endoscope based on elastic deformation and internal friction. Adv Robot 2019. [DOI: 10.1080/01691864.2019.1657947]
Affiliation(s)
- Yuanyuan Zhou
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Shenyang, People’s Republic of China
- Shenyang Institute of Automation, University of Chinese Academy of Sciences, Beijing, People’s Republic of China
- Institute for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, People’s Republic of China
- Key Laboratory of Minimally Invasive Surgical Robot, Shenyang, People’s Republic of China
- Guohao Jiang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Shenyang, People’s Republic of China
- Shenyang Institute of Automation, University of Chinese Academy of Sciences, Beijing, People’s Republic of China
- Institute for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, People’s Republic of China
- Key Laboratory of Minimally Invasive Surgical Robot, Shenyang, People’s Republic of China
- Cheng Zhang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Shenyang, People’s Republic of China
- Institute for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, People’s Republic of China
- Key Laboratory of Minimally Invasive Surgical Robot, Shenyang, People’s Republic of China
- Zhidong Wang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Shenyang, People’s Republic of China
- Institute for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, People’s Republic of China
- Department of Advanced Robotics, Chiba Institute of Technology, Narashino, Chiba, Japan
- Zhongtao Zhang
- Department of General Surgery, Beijing Friendship Hospital Capital Medical University, Beijing, People’s Republic of China
- Hao Liu
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Shenyang, People’s Republic of China
- Institute for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, People’s Republic of China
- Key Laboratory of Minimally Invasive Surgical Robot, Shenyang, People’s Republic of China

20
Abstract
Vision-based technologies are becoming ubiquitous when considering sensing systems for measuring the response of structures. The availability of proprietary camera systems has opened up the scope for many bridge monitoring projects. Even though structural response can be measured at high accuracy when analyzing target motions, the main limitations to achieving even better results are camera costs and image resolution. Conventional camera systems capture either the entire structure or a larger or smaller part of it. This study introduces a low-cost robotic camera system (RCS) for accurate measurement collection of structural response. The RCS automatically captures images of parts of a structure under loading, thereby (i) giving a higher pixel density than conventional cameras capturing the entire structure, thus allowing for greater measurement accuracy, and (ii) capturing multiple parts of the structure. The proposed camera system consists of a modified action camera with a zoom lens, a robotic mechanism for camera rotation, and open-source software that enables wireless communication. A data processing strategy, together with image processing techniques, is introduced and explained. A laboratory beam subjected to static loading serves to evaluate the performance of the RCS. The response of the beam is also monitored with contact sensors and calculated from images captured with a smartphone. The RCS provides accurate response measurements. Such camera systems could be employed for long-term bridge monitoring, in which strains are collected at strategic locations and response time-histories are formed for further analysis.
21
Cheng T, Li W, Ng CSH, Chiu PWY, Li Z. Visual Servo Control of a Novel Magnetic Actuated Endoscope for Uniportal Video-Assisted Thoracic Surgery. IEEE Robot Autom Lett 2019. [DOI: 10.1109/lra.2019.2924838]
22
Remote Presence: Development and Usability Evaluation of a Head-Mounted Display for Camera Control on the da Vinci Surgical System. Robotics 2019. [DOI: 10.3390/robotics8020031]
Abstract
This paper describes the development of a new method to control the camera arm of a surgical robot and create a better sense of remote presence for the surgeon. Current surgical systems are entirely controlled by the surgeon, using hand controllers and foot pedals to manipulate either the instrument or the camera arms. The surgeon must pause the operation to move the camera arm to obtain a desired view and then resume the operation. The camera and tools cannot be moved simultaneously, leading to interrupted and unnatural movements. These interruptions can lead to medical errors and extended operation times. In our system, the surgeon controls the camera arm with natural head movements while immersed in a 3D stereo view of the scene through a head-mounted display (HMD). The novel approach enables the camera arm to be maneuvered based on the HMD's sensors. We implemented this method on a da Vinci Standard Surgical System using the HTC Vive headset along with the Unity engine and the Robot Operating System framework. This paper includes the results of a subjective six-participant usability study that compares the workload of the traditional clutched camera control method against the HMD-based control. Initial results indicate that the system is usable and stable and imposes a lower physical and mental workload when using the HMD control method.
23
A Robotic Recording and Playback Platform for Training Surgeons and Learning Autonomous Behaviors Using the da Vinci Surgical System. Robotics 2019. [DOI: 10.3390/robotics8010009]
Abstract
This paper describes a recording and playback system developed using a da Vinci Standard Surgical System and research kit. The system records stereo laparoscopic videos, robot arm joint angles, and surgeon–console interactions in a synchronized manner. A user can then, on demand and at adjustable speeds, watch the stereo videos and feel the recorded movements of entire procedures or subprocedures on the hand controllers. Currently, there is no reported comprehensive ability to capture expert surgeon movements and insights and reproduce them directly on hardware. This system has important applications in several areas: (1) training of surgeons, (2) collection of learning data for the development of advanced control algorithms and intelligent autonomous behaviors, and (3) use as a "black box" for retrospective error analysis. We show a prototype of such an immersive system on a clinically relevant platform along with its recording and playback fidelity. Lastly, we convey possible research avenues to create better systems for training and assisting robotic surgeons.
24
Robust tracking of dexterous continuum robots: Fusing FBG shape sensing and stereo vision. Annu Int Conf IEEE Eng Med Biol Soc 2017; 2017:925-928. [PMID: 29060024] [DOI: 10.1109/embc.2017.8036976]
Abstract
Robust and efficient tracking of continuum robots is important for improving patient safety during space-confined minimally invasive surgery; however, it has been a particularly challenging task for researchers. In this paper, we present a novel tracking scheme that fuses fiber Bragg grating (FBG) shape sensing and stereo vision to estimate the position of continuum robots. Previous visual tracking easily suffers from a lack of robustness and leads to failure, while the FBG shape sensor can only reconstruct the local shape with integral cumulative error. The proposed fusion is anticipated to compensate for their shortcomings and improve the tracking accuracy. To verify its effectiveness, the robot's centerline is recognized by morphology operations and reconstructed by a stereo matching algorithm. The shape obtained by the FBG sensor is transformed into the distal tip position with respect to the camera coordinate system through previously calibrated registration matrices. An experimental platform was set up and repeated tracking experiments were carried out. The accuracy, estimated by averaging the absolute positioning errors between shape sensing and stereo vision, is 0.67 ± 0.65 mm, 0.41 ± 0.25 mm, and 0.72 ± 0.43 mm for x, y, and z, respectively. Results indicate that the proposed fusion is feasible and can be used for closed-loop control of continuum robots.
25
Vedula SS, Ishii M, Hager GD. Objective Assessment of Surgical Technical Skill and Competency in the Operating Room. Annu Rev Biomed Eng 2017; 19:301-325. [PMID: 28375649] [DOI: 10.1146/annurev-bioeng-071516-044435]
Abstract
Training skillful and competent surgeons is critical to ensure high quality of care and to minimize disparities in access to effective care. Traditional models to train surgeons are being challenged by rapid advances in technology, an intensified patient-safety culture, and a need for value-driven health systems. Simultaneously, technological developments are enabling capture and analysis of large amounts of complex surgical data. These developments are motivating a "surgical data science" approach to objective computer-aided technical skill evaluation (OCASE-T) for scalable, accurate assessment; individualized feedback; and automated coaching. We define the problem space for OCASE-T and summarize 45 publications representing recent research in this domain. We find that most studies on OCASE-T are simulation based; very few are in the operating room. The algorithms and validation methodologies used for OCASE-T are highly varied; there is no uniform consensus. Future research should emphasize competency assessment in the operating room, validation against patient outcomes, and effectiveness for surgical training.
Affiliation(s)
- S Swaroop Vedula
- Malone Center for Engineering in Healthcare, Department of Computer Science, The Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland 21218
- Masaru Ishii
- Department of Otolaryngology-Head and Neck Surgery, The Johns Hopkins University School of Medicine, Baltimore, Maryland 21287
- Gregory D Hager
- Malone Center for Engineering in Healthcare, Department of Computer Science, The Johns Hopkins University Whiting School of Engineering, Baltimore, Maryland 21218

26
Fard MJ, Ameri S, Chinnam RB, Ellis RD. Soft Boundary Approach for Unsupervised Gesture Segmentation in Robotic-Assisted Surgery. IEEE Robot Autom Lett 2017. [DOI: 10.1109/lra.2016.2585303]
27
Yu L, Li H, Zhao L, Ren S, Gu Q. Automatic guidance of laparoscope based on the region of interest for robot assisted laparoscopic surgery. Comput Assist Surg (Abingdon) 2016. [DOI: 10.1080/24699322.2016.1240309]
Affiliation(s)
- Lingtao Yu
- College of Mechanical and Electrical Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, PR China
- Hongwei Li
- College of Mechanical and Electrical Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, PR China
- Lingyan Zhao
- College of Mechanical and Electrical Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, PR China
- Sixu Ren
- College of Mechanical and Electrical Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, PR China
- Qing Gu
- College of Mechanical and Electrical Engineering, Harbin Engineering University, Harbin, Heilongjiang Province, PR China

28
Fard MJ, Pandya AK, Chinnam RB, Klein MD, Ellis RD. Distance-based time series classification approach for task recognition with application in surgical robot autonomy. Int J Med Robot 2016; 13. [DOI: 10.1002/rcs.1766]
Affiliation(s)
- Mahtab J. Fard
- Department of Industrial and Systems Engineering, Wayne State University, Detroit, MI, USA
- Abhilash K. Pandya
- Department of Electrical and Computer Engineering, Wayne State University, Detroit, MI, USA
- Ratna B. Chinnam
- Department of Industrial and Systems Engineering, Wayne State University, Detroit, MI, USA
- Michael D. Klein
- Department of Surgery, Wayne State University School of Medicine and Pediatric Surgery, Children's Hospital of Michigan, Detroit, MI, USA
- R. Darin Ellis
- Department of Industrial and Systems Engineering, Wayne State University, Detroit, MI, USA

29
Ellis RD, Munaco AJ, Reisner LA, Klein MD, Composto AM, Pandya AK, King BW. Task analysis of laparoscopic camera control schemes. Int J Med Robot 2015; 12:576-584. [PMID: 26648563] [DOI: 10.1002/rcs.1716]
Abstract
BACKGROUND Minimally invasive surgeries rely on laparoscopic camera views to guide the procedure. Traditionally, an expert surgical assistant operates the camera. In some cases, a robotic system is used to help position the camera, but the surgeon is required to direct all movements of the system. Some prior research has focused on developing automated robotic camera control systems, but that work has been limited to rudimentary control schemes due to a lack of understanding of how the camera should be moved for different surgical tasks. METHODS This research used task analysis with a sample of eight expert surgeons to discover and document several salient methods of camera control and their related task contexts. RESULTS Desired camera placements and behaviours were established for two common surgical subtasks (suturing and knot tying). CONCLUSION The results can be used to develop better robotic control algorithms that will be more responsive to surgeons' needs.
Affiliation(s)
- R Darin Ellis
- Department of Industrial and Systems Engineering, Wayne State University, Detroit, MI, USA
- Anthony J Munaco
- Department of Pediatric Surgery, Children's Hospital of Michigan, Detroit, MI, USA
- Luke A Reisner
- Department of Electrical and Computer Engineering, Wayne State University, Detroit, MI, USA
- Michael D Klein
- Department of Pediatric Surgery, Children's Hospital of Michigan, Detroit, MI, USA
- Anthony M Composto
- Department of Electrical and Computer Engineering, Wayne State University, Detroit, MI, USA
- Abhilash K Pandya
- Department of Electrical and Computer Engineering, Wayne State University, Detroit, MI, USA
- Brady W King
- Department of Pediatric Surgery, Children's Hospital of Michigan, Detroit, MI, USA

30
Kenngott HG, Wagner M, Nickel F, Wekerle AL, Preukschas A, Apitz M, Schulte T, Rempel R, Mietkowski P, Wagner F, Termer A, Müller-Stich BP. Computer-assisted abdominal surgery: new technologies. Langenbecks Arch Surg 2015; 400:273-281. [PMID: 25701196] [DOI: 10.1007/s00423-015-1289-8]
Abstract
BACKGROUND Computer-assisted surgery is a wide field of technologies with the potential to enable the surgeon to improve the efficiency and efficacy of diagnosis, treatment, and clinical management. PURPOSE This review provides an overview of the most important new technologies and their applications. METHODS A MEDLINE database search was performed, revealing a total of 1702 references. All references were considered for information on six main topics, namely image guidance and navigation, robot-assisted surgery, human-machine interfaces, surgical processes and clinical pathways, computer-assisted surgical training, and clinical decision support. Further references were obtained through cross-referencing the bibliography cited in each work. Based on their respective fields of expertise, the authors chose 64 publications relevant for the purpose of this review. CONCLUSION Computer-assisted systems are increasingly used not only in experimental studies but also in clinical studies. Although computer-assisted abdominal surgery is still in its infancy, the number of studies is constantly increasing, and clinical studies are beginning to show the benefits of computers used not only as tools for documentation and accounting but also for directly assisting surgeons during the diagnosis and treatment of patients. Further developments in the field of clinical decision support even have the potential to cause a paradigm shift in how patients are diagnosed and treated.
Affiliation(s)
- H G Kenngott
- Department of General, Abdominal and Transplant Surgery, Ruprecht-Karls-University, Heidelberg, Germany