1
Schonfeld E, Mordekai N, Berg A, Johnstone T, Shah A, Shah V, Haider G, Marianayagam NJ, Veeravagu A. Machine Learning in Neurosurgery: Toward Complex Inputs, Actionable Predictions, and Generalizable Translations. Cureus 2024; 16:e51963. [PMID: 38333513 PMCID: PMC10851045 DOI: 10.7759/cureus.51963]
Abstract
Machine learning can predict neurosurgical diagnosis and outcomes, power imaging analysis, and perform robotic navigation and tumor labeling. State-of-the-art models can reconstruct and generate images, predict surgical events from video, and assist in intraoperative decision-making. In this review, we will detail the neurosurgical applications of machine learning, ranging from simple to advanced models, and their potential to transform patient care. As machine learning techniques, outputs, and methods become increasingly complex, their performance is often more impactful yet increasingly difficult to evaluate. We aim to introduce these advancements to the neurosurgical audience while suggesting major potential roadblocks to their safe and effective translation. Unlike the previous generation of machine learning in neurosurgery, the safe translation of recent advancements will be contingent on neurosurgeons' involvement in model development and validation.
Affiliation(s)
- Ethan Schonfeld
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Alex Berg
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Thomas Johnstone
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Aaryan Shah
- School of Humanities and Sciences, Stanford University, Stanford, USA
- Vaibhavi Shah
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Ghani Haider
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
- Anand Veeravagu
- Neurosurgery, Stanford University School of Medicine, Stanford, USA
2
Hutchinson K, Reyes I, Li Z, Alemzadeh H. COMPASS: a formal framework and aggregate dataset for generalized surgical procedure modeling. Int J Comput Assist Radiol Surg 2023; 18:2143-2154. [PMID: 37145250 DOI: 10.1007/s11548-023-02922-1]
Abstract
PURPOSE We propose a formal framework for the modeling and segmentation of minimally invasive surgical tasks using a unified set of motion primitives (MPs) to enable more objective labeling and the aggregation of different datasets. METHODS We model dry-lab surgical tasks as finite state machines, representing how the execution of MPs as the basic surgical actions results in the change of surgical context, which characterizes the physical interactions among tools and objects in the surgical environment. We develop methods for labeling surgical context based on video data and for automatic translation of context to MP labels. We then use our framework to create the COntext and Motion Primitive Aggregate Surgical Set (COMPASS), including six dry-lab surgical tasks from three publicly available datasets (JIGSAWS, DESK, and ROSMA), with kinematic and video data and context and MP labels. RESULTS Our context labeling method achieves near-perfect agreement between consensus labels from crowd-sourcing and expert surgeons. Segmentation of tasks to MPs results in the creation of the COMPASS dataset that nearly triples the amount of data for modeling and analysis and enables the generation of separate transcripts for the left and right tools. CONCLUSION The proposed framework results in high quality labeling of surgical data based on context and fine-grained MPs. Modeling surgical tasks with MPs enables the aggregation of different datasets and the separate analysis of left and right hands for bimanual coordination assessment. Our formal framework and aggregate dataset can support the development of explainable and multi-granularity models for improved surgical process analysis, skill assessment, error detection, and autonomy.
Affiliation(s)
- Kay Hutchinson
- Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA, 22903, USA
- Ian Reyes
- Department of Computer Science, University of Virginia, Charlottesville, VA, 22903, USA
- IBM, RTP, Durham, NC, 27709, USA
- Zongyu Li
- Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA, 22903, USA
- Homa Alemzadeh
- Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, VA, 22903, USA
- Department of Computer Science, University of Virginia, Charlottesville, VA, 22903, USA
3
Tukra S, Lidströmer N, Ashrafian H, Giannarou S. AI in Surgical Robotics. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_323]
4
Fisher C, Harty J, Yee A, Li CL, Komolibus K, Grygoryev K, Lu H, Burke R, Wilson BC, Andersson-Engels S. Perspective on the integration of optical sensing into orthopedic surgical devices. J Biomed Opt 2022; 27:010601. [PMID: 34984863 PMCID: PMC8727454 DOI: 10.1117/1.jbo.27.1.010601]
Abstract
SIGNIFICANCE Orthopedic surgery currently comprises over 1.5 million cases annually in the United States alone and is growing rapidly with aging populations. Emerging optical sensing techniques promise fewer side effects with new, more effective approaches aimed at improving patient outcomes following orthopedic surgery. AIM The aim of this perspective paper is to outline potential applications where fiberoptic-based approaches can complement ongoing development of minimally invasive surgical procedures for use in orthopedic applications. APPROACH Several procedures involving orthopedic and spinal surgery, along with the clinical challenge associated with each, are considered. The current and potential applications of optical sensing within these procedures are discussed and future opportunities, challenges, and competing technologies are presented for each surgical application. RESULTS Strong research efforts involving sensor miniaturization and integration of optics into existing surgical devices, including K-wires and cranial perforators, provided the impetus for this perspective analysis. These advances have made it possible to envision a next-generation set of devices that can be rigorously evaluated in controlled clinical trials to become routine tools for orthopedic surgery. CONCLUSIONS Integration of optical devices into surgical drills and burrs to discern bone/tissue interfaces could be used to reduce complication rates across a spectrum of orthopedic surgery procedures or to aid less-experienced surgeons in complex techniques, such as laminoplasty or osteotomy. These developments present both opportunities and challenges for the biomedical optics community.
Affiliation(s)
- Carl Fisher
- Biophotonics@Tyndall, IPIC, Tyndall National Institute, Lee Maltings, Dyke Parade, Cork, Ireland
- James Harty
- Cork University Hospital and South Infirmary Victoria University Hospital, Department of Orthopaedic Surgery, Cork, Ireland
- Albert Yee
- University of Toronto, Sunnybrook Research Institute, Department of Surgery, Holland Bone and Joint Program, Division of Orthopaedic Surgery, Sunnybrook Health Sciences; Orthopaedic Biomechanics Laboratory, Physical Sciences Platform, Toronto, Canada
- Celina L. Li
- Biophotonics@Tyndall, IPIC, Tyndall National Institute, Lee Maltings, Dyke Parade, Cork, Ireland
- Katarzyna Komolibus
- Biophotonics@Tyndall, IPIC, Tyndall National Institute, Lee Maltings, Dyke Parade, Cork, Ireland
- Konstantin Grygoryev
- Biophotonics@Tyndall, IPIC, Tyndall National Institute, Lee Maltings, Dyke Parade, Cork, Ireland
- Huihui Lu
- Biophotonics@Tyndall, IPIC, Tyndall National Institute, Lee Maltings, Dyke Parade, Cork, Ireland
- Ray Burke
- Biophotonics@Tyndall, IPIC, Tyndall National Institute, Lee Maltings, Dyke Parade, Cork, Ireland
- Brian C. Wilson
- University of Toronto, Princess Margaret Cancer Centre/University Health Network, Department of Medical Biophysics, Toronto, Canada
- Stefan Andersson-Engels
- Biophotonics@Tyndall, IPIC, Tyndall National Institute, Lee Maltings, Dyke Parade, Cork, Ireland
- University College Cork, Department of Physics, Cork, Ireland
5
Biggar O, Zamani M, Shames I. An Expressiveness Hierarchy of Behavior Trees and Related Architectures. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3074337]
6
Abdelaal AE, Liu J, Hong N, Hager GD, Salcudean SE. Parallelism in Autonomous Robotic Surgery. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3060402]
7
Hua J, Zeng L, Li G, Ju Z. Learning for a Robot: Deep Reinforcement Learning, Imitation Learning, Transfer Learning. Sensors 2021; 21:1278. [PMID: 33670109 PMCID: PMC7916895 DOI: 10.3390/s21041278]
Abstract
Dexterous manipulation is an important part of realizing robot intelligence, yet manipulators can currently perform only simple tasks, such as sorting and packing, in structured environments. In view of this problem, this paper presents a state-of-the-art survey on intelligent robots capable of autonomous decision-making and learning. The paper first reviews the main achievements in robotics research, which were largely based on breakthroughs in automatic control and in mechanical hardware. With the evolution of artificial intelligence, much research has made further progress in adaptive and robust control. The survey reveals that the latest research in deep learning and reinforcement learning has paved the way for robots to perform highly complex tasks. Furthermore, deep reinforcement learning, imitation learning, and transfer learning in robot control are discussed in detail. Finally, major achievements based on these methods are summarized and analyzed thoroughly, and future research challenges are proposed.
Affiliation(s)
- Jiang Hua
- Key Laboratory of Metallurgical Equipment and Control Technology, Ministry of Education, Wuhan University of Science and Technology, Wuhan 430081, China
- Liangcai Zeng
- Key Laboratory of Metallurgical Equipment and Control Technology, Ministry of Education, Wuhan University of Science and Technology, Wuhan 430081, China
- Gongfa Li
- Key Laboratory of Metallurgical Equipment and Control Technology, Ministry of Education, Wuhan University of Science and Technology, Wuhan 430081, China
- Zhaojie Ju
- School of Computing, University of Portsmouth, Portsmouth 03801, UK
8
Tukra S, Lidströmer N, Ashrafian H, Giannarou S. AI in Surgical Robotics. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_323-1]
9
Ligot A, Kuckling J, Bozhinoski D, Birattari M. Automatic modular design of robot swarms using behavior trees as a control architecture. PeerJ Comput Sci 2020; 6:e314. [PMID: 33816965 PMCID: PMC7924474 DOI: 10.7717/peerj-cs.314]
Abstract
We investigate the possibilities, challenges, and limitations that arise from the use of behavior trees in the context of the automatic modular design of collective behaviors in swarm robotics. To do so, we introduce Maple, an automatic design method that combines predefined modules (low-level behaviors and conditions) into a behavior tree that encodes the individual behavior of each robot of the swarm. We present three empirical studies based on two missions: aggregation and foraging. To explore the strengths and weaknesses of adopting behavior trees as a control architecture, we compare Maple with Chocolate, a previously proposed automatic design method that uses probabilistic finite state machines instead. In the first study, we assess Maple's ability to produce control software that crosses the reality gap satisfactorily. In the second study, we investigate Maple's performance as a function of the design budget, that is, the maximum number of simulation runs that the design process is allowed to perform. In the third study, we explore a number of possible variants of Maple that differ in the constraints imposed on the structure of the behavior trees generated. The results of the three studies indicate that, in the context of swarm robotics, behavior trees might be appealing but in many settings do not produce better solutions than finite state machines.
Affiliation(s)
- Antoine Ligot
- IRIDIA, Université Libre de Bruxelles, Brussels, Belgium
- Jonas Kuckling
- IRIDIA, Université Libre de Bruxelles, Brussels, Belgium
- Darko Bozhinoski
- IRIDIA, Université Libre de Bruxelles, Brussels, Belgium
- Cognitive Robotics, Delft University of Technology, Delft, Netherlands
10
Rahman MM, Balakuntala MV, Gonzalez G, Agarwal M, Kaur U, Venkatesh VLN, Sanchez-Tamayo N, Xue Y, Voyles RM, Aggarwal V, Wachs J. SARTRES: a semi-autonomous robot teleoperation environment for surgery. Comput Methods Biomech Biomed Eng Imaging Vis 2020. [DOI: 10.1080/21681163.2020.1834878]
Affiliation(s)
- Md Masudur Rahman
- Department of Computer Science, Purdue University, West Lafayette, IN, USA
- Glebys Gonzalez
- School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
- Mridul Agarwal
- School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Upinder Kaur
- School of Engineering Technology, Purdue University, West Lafayette, IN, USA
- Yexiang Xue
- Department of Computer Science, Purdue University, West Lafayette, IN, USA
- Richard M. Voyles
- School of Engineering Technology, Purdue University, West Lafayette, IN, USA
- Vaneet Aggarwal
- School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
- School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Juan Wachs
- School of Industrial Engineering, Purdue University, West Lafayette, IN, USA
11
Sekhar LN, Juric-Sekhar G, Qazi Z, Patel A, McGrath LB, Pridgeon J, Kalavakonda N, Hannaford B. The Future of Skull Base Surgery: A View Through Tinted Glasses. World Neurosurg 2020; 142:29-42. [PMID: 32599213 PMCID: PMC7319930 DOI: 10.1016/j.wneu.2020.06.172]
Abstract
In the present report, we have broadly outlined the potential advances in the field of skull base surgery, which might occur within the next 20 years based on the many areas of current research in biology and technology. Many of these advances will also be broadly applicable to other areas of neurosurgery. We have grounded our predictions for future developments in an exploration of what patients and surgeons most desire as outcomes for care. We next examined the recent developments in the field and outlined several promising areas of future improvement in skull base surgery, per se, as well as identifying the new hospital support systems needed to accommodate these changes. These include, but are not limited to, advances in imaging, Raman spectroscopy and microscopy, 3-dimensional printing and rapid prototyping, master-slave and semiautonomous robots, artificial intelligence applications in all areas of medicine, telemedicine, and green technologies in hospitals. In addition, we have reviewed the therapeutic approaches using nanotechnology, genetic engineering, antitumor antibodies, and stem cell technologies to repair damage caused by traumatic injuries, tumors, and iatrogenic injuries to the brain and cranial nerves. Additionally, we have discussed the training requirements for future skull base surgeons and stressed the need for adaptability and change. However, the essential requirements for skull base surgeons will remain unchanged, including knowledge, attention to detail, technical skill, innovation, judgment, and compassion. We believe that active involvement in these rapidly evolving technologies will enable us to shape some of the future of our discipline to address the needs of both patients and our profession.
Affiliation(s)
- Laligam N Sekhar
- Department of Neurosurgery, University of Washington, Seattle, Washington, USA
- Zeeshan Qazi
- Department of Neurosurgery, University of Washington, Seattle, Washington, USA
- Anoop Patel
- Department of Neurosurgery, University of Washington, Seattle, Washington, USA
- Lynn B McGrath
- Department of Neurosurgery, University of Washington, Seattle, Washington, USA
- James Pridgeon
- Department of Neurosurgery, University of Washington, Seattle, Washington, USA
- Niveditha Kalavakonda
- Department of Electrical and Computer Engineering, University of Washington, Seattle, Washington, USA
- Blake Hannaford
- Department of Electrical and Computer Engineering, University of Washington, Seattle, Washington, USA
12
Ferguson JM, Pitt B, Kuntz A, Granna J, Kavoussi NL, Nimmagadda N, Barth EJ, Herrell SD, Webster RJ. Comparing the accuracy of the da Vinci Xi and da Vinci Si for image guidance and automation. Int J Med Robot 2020; 16:1-10. [DOI: 10.1002/rcs.2149]
Affiliation(s)
- James M. Ferguson
- Department of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Vanderbilt Institute for Surgery and Engineering (VISE), Nashville, Tennessee, USA
- Bryn Pitt
- Department of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Vanderbilt Institute for Surgery and Engineering (VISE), Nashville, Tennessee, USA
- Alan Kuntz
- Robotics Center and School of Computing, University of Utah, Salt Lake City, Utah, USA
- Josephine Granna
- Department of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Vanderbilt Institute for Surgery and Engineering (VISE), Nashville, Tennessee, USA
- Nicholas L. Kavoussi
- Vanderbilt Institute for Surgery and Engineering (VISE), Nashville, Tennessee, USA
- Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Naren Nimmagadda
- Vanderbilt Institute for Surgery and Engineering (VISE), Nashville, Tennessee, USA
- Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Eric J. Barth
- Department of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Vanderbilt Institute for Surgery and Engineering (VISE), Nashville, Tennessee, USA
- Stanley Duke Herrell
- Vanderbilt Institute for Surgery and Engineering (VISE), Nashville, Tennessee, USA
- Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Robert J. Webster
- Department of Mechanical Engineering, Vanderbilt University, Nashville, Tennessee, USA
- Vanderbilt Institute for Surgery and Engineering (VISE), Nashville, Tennessee, USA
- Vanderbilt University Medical Center, Nashville, Tennessee, USA
13
Keller B, Draelos M, Zhou K, Qian R, Kuo A, Konidaris G, Hauser K, Izatt J. Optical Coherence Tomography-Guided Robotic Ophthalmic Microsurgery via Reinforcement Learning from Demonstration. IEEE Trans Robot 2020; 36:1207-1218. [PMID: 36168513 PMCID: PMC9511825 DOI: 10.1109/tro.2020.2980158]
Abstract
Ophthalmic microsurgery is technically difficult because the scale of required surgical tool manipulations challenges the limits of the surgeon's visual acuity, sensory perception, and physical dexterity. Intraoperative optical coherence tomography (OCT) imaging with micrometer-scale resolution is increasingly being used to monitor and provide enhanced real-time visualization of ophthalmic surgical maneuvers, but surgeons still face physical limitations when manipulating instruments inside the eye. Autonomously controlled robots are one avenue for overcoming these physical limitations. We demonstrate the feasibility of using learning from demonstration and reinforcement learning with an industrial robot to perform OCT-guided corneal needle insertions in an ex vivo model of deep anterior lamellar keratoplasty (DALK) surgery. Our reinforcement learning agent, trained on ex vivo human corneas, outperformed surgical fellows in reaching a target needle insertion depth in mock corneal surgery trials. This work shows that the combination of learning from demonstration and reinforcement learning is a viable option for performing OCT-guided robotic ophthalmic surgery.
Affiliation(s)
- Brenton Keller
- Department of Biomedical Engineering, Duke University, Durham, NC, USA
- Mark Draelos
- Department of Biomedical Engineering, Duke University, Durham, NC, USA
- Kevin Zhou
- Department of Biomedical Engineering, Duke University, Durham, NC, USA
- Ruobing Qian
- Department of Biomedical Engineering, Duke University, Durham, NC, USA
- Anthony Kuo
- Department of Ophthalmology, Duke University Medical Center, Durham, NC, USA
- George Konidaris
- Department of Computer Science, Brown University, Providence, RI, USA
- Kris Hauser
- Department of Electrical and Computer Engineering, Duke University, Durham, NC, USA
- Joseph Izatt
- Department of Biomedical Engineering, Duke University, Durham, NC, USA
14
Chen C, Liu H, Zhu X, Wu D, Xie Y. The Impact of the Electronic Skin Substrate on the Robotic Tactile Sensing. Int J Hum Robot 2019. [DOI: 10.1142/s0219843619500269]
Abstract
Tactile sensing is of significant interest for coexisting-cooperative-cognitive robots (Tri-Co robots). To improve the tactile sensing performance of a robot via an electronic skin (e-skin), an auxiliary elastomeric substrate is required. This paper first investigates the effect of the substrate, including its elastic modulus, thickness, and location, on static sensing. It is found that a thick substrate with a small elastic modulus can effectively even out the force distribution and improve contact-area sensing, but it introduces noise and crosstalk on the e-skin when the substrate undergoes large deformation. For dynamic tactile sensing, the impact of substrate thickness and elastic modulus was also studied; a smaller elastic modulus helps the e-skin sense larger and higher-frequency stimuli.
Affiliation(s)
- Chuhao Chen
- School of Aerospace Engineering, Xiamen University, Xiamen 361102, P. R. China
- Shenzhen Research Institute of Xiamen University, Shenzhen 518000, P. R. China
- Houde Liu
- Graduate School at Shenzhen, Tsinghua University, Shenzhen 518055, P. R. China
- Xiaojun Zhu
- Graduate School at Shenzhen, Tsinghua University, Shenzhen 518055, P. R. China
- Dezhi Wu
- School of Aerospace Engineering, Xiamen University, Xiamen 361102, P. R. China
- Yu Xie
- School of Aerospace Engineering, Xiamen University, Xiamen 361102, P. R. China
- Graduate School at Shenzhen, Tsinghua University, Shenzhen 518055, P. R. China
15
Sun Y, Pan B, Fu Y, Cao F. Development of a novel intelligent laparoscope system for semi-automatic minimally invasive surgery. Int J Med Robot 2019; 16:e2049. [PMID: 31677231 DOI: 10.1002/rcs.2049]
Abstract
BACKGROUND Intelligent surgical robots are of great significance for alleviating surgeon fatigue. In a minimally invasive surgical robot system, adding intelligent control to the laparoscope is both feasible and valuable. METHODS A depth-independent image Jacobian matrix was modified to make it suitable for the laparoscope trocar constraint. We propose a method for intelligent, autonomous adjustment of the surgeon's field of view, enabling the system to track and predict the motion trajectory of surgical instruments. RESULTS Experimental results show that the proposed method can track surgical instruments and adjust the surgical field of view autonomously. In case of occlusion, the motion trajectory of surgical instruments can be predicted. CONCLUSION The intelligent laparoscope system could raise the level of intelligence of the surgical robot system. By providing "a third hand" for the surgeon, the proposed system is a substantial improvement for semi-autonomous surgical robot systems.
Affiliation(s)
- Yanwen Sun
- State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin, China
16
Kawashima K, Kanno T, Tadano K. Robots in laparoscopic surgery: current and future status. BMC Biomed Eng 2019; 1:12. [PMID: 32903302 PMCID: PMC7422514 DOI: 10.1186/s42490-019-0012-1]
Abstract
In this paper, we focus on robots used for laparoscopic surgery, which is one of the most active areas for research and development of surgical robots. We introduce research and development of laparoscope-holder robots, master-slave robots, and hand-held robotic forceps. We then discuss future directions for surgical robots. On the hardware side, snake-like flexible mechanisms for single-port access surgery (SPA) and NOTES (natural orifice transluminal endoscopic surgery), as well as applications of soft robotics, are being actively pursued. On the software side, research such as the automation of surgical procedures using machine learning is one of the hot topics.
17
Hu D, Gong Y, Seibel EJ, Sekhar LN, Hannaford B. Semi-autonomous image-guided brain tumour resection using an integrated robotic system: A bench-top study. Int J Med Robot 2018; 14:e1872. [PMID: 29105281 PMCID: PMC5762424 DOI: 10.1002/rcs.1872]
Abstract
BACKGROUND Complete brain tumour resection is an extremely critical factor for patients' survival rate and long-term quality of life. This paper introduces a prototype medical robotic system that aims to automatically detect and clean up brain tumour residues after the removal of tumour bulk through conventional surgery. METHODS We focus on the development of an integrated surgical robotic system for image-guided robotic brain surgery. The Behavior Tree framework is explored to coordinate cross-platform medical subtasks. RESULTS The integrated system was tested on a simulated laboratory platform. Results and performance indicate the feasibility of supervised semi-automation for residual brain tumour ablation in a simulated surgical cavity with sub-millimetre accuracy. The modularity in the control architecture allows straightforward integration of further medical devices. CONCLUSIONS This work presents a semi-automated laboratory setup, simulating an intraoperative robotic neurosurgical procedure with real-time endoscopic image guidance and provides a foundation for the future transition from engineering approaches to clinical application.
Affiliation(s)
- Danying Hu
- Biorobotics Laboratory, Department of Electrical Engineering, University of Washington, Seattle, WA, USA
- Yuanzheng Gong
- Human Photonics Laboratory, Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Eric J Seibel
- Human Photonics Laboratory, Department of Mechanical Engineering, University of Washington, Seattle, WA, USA
- Laligam N Sekhar
- Department of Neurological Surgery, School of Medicine, University of Washington, Seattle, WA, USA
- Blake Hannaford
- Biorobotics Laboratory, Department of Electrical Engineering, University of Washington, Seattle, WA, USA
18
Behavior Trees as a Control Architecture in the Automatic Modular Design of Robot Swarms. Lecture Notes in Computer Science 2018. [DOI: 10.1007/978-3-030-00533-7_3]
19
Shademan A, Decker RS, Opfermann JD, Leonard S, Krieger A, Kim PCW. Supervised autonomous robotic soft tissue surgery. Sci Transl Med 2016; 8:337ra64. [PMID: 27147588 DOI: 10.1126/scitranslmed.aad9398]
Abstract
The current paradigm of robot-assisted surgeries (RASs) depends entirely on an individual surgeon's manual capability. Autonomous robotic surgery, removing the surgeon's hands, promises enhanced efficacy, safety, and improved access to optimized surgical techniques. Surgeries involving soft tissue have not been performed autonomously because of technological limitations, including lack of vision systems that can distinguish and track the target tissues in dynamic surgical environments and lack of intelligent algorithms that can execute complex surgical tasks. We demonstrate in vivo supervised autonomous soft tissue surgery in an open surgical setting, enabled by a plenoptic three-dimensional and near-infrared fluorescent (NIRF) imaging system and an autonomous suturing algorithm. Inspired by the best human surgical practices, a computer program generates a plan to complete complex surgical tasks on deformable soft tissue, such as suturing and intestinal anastomosis. We compared metrics of anastomosis (including the consistency of suturing informed by the average suture spacing, the pressure at which the anastomosis leaked, the number of mistakes that required removing the needle from the tissue, completion time, and lumen reduction in intestinal anastomoses) between our supervised autonomous system, manual laparoscopic surgery, and clinically used RAS approaches. Despite dynamic scene changes and tissue movement during surgery, we demonstrate that the outcome of supervised autonomous procedures is superior to surgery performed by expert surgeons and RAS techniques in ex vivo porcine tissues and in living pigs. These results demonstrate the potential for autonomous robots to improve the efficacy, consistency, functional outcome, and accessibility of surgical techniques.
Collapse
Affiliation(s)
- Azad Shademan
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, 111 Michigan Avenue Northwest, Washington, DC 20010, USA
| | - Ryan S Decker
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, 111 Michigan Avenue Northwest, Washington, DC 20010, USA
| | - Justin D Opfermann
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, 111 Michigan Avenue Northwest, Washington, DC 20010, USA
| | - Simon Leonard
- Department of Computer Science, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218, USA
| | - Axel Krieger
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, 111 Michigan Avenue Northwest, Washington, DC 20010, USA
| | - Peter C W Kim
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, 111 Michigan Avenue Northwest, Washington, DC 20010, USA.
| |
Collapse
|
20
|
Yeoh IL, Reinhall PG, Berg MC, Chizeck HJ, Seibel EJ. Run-to-Run Optimization Control Within Exact Inverse Framework for Scan Tracking. JOURNAL OF DYNAMIC SYSTEMS, MEASUREMENT, AND CONTROL 2017; 139:0910111-9101112. [PMID: 28690340 PMCID: PMC5467038 DOI: 10.1115/1.4036231] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/07/2016] [Revised: 02/11/2017] [Indexed: 06/07/2023]
Abstract
A run-to-run optimization controller uses a reduced set of measurement parameters, in comparison to more general feedback controllers, to converge to the best control point for a repetitive process. A new run-to-run optimization controller is presented for the scanning fiber device used for image acquisition and display. This controller utilizes very sparse measurements to estimate a system energy measure and updates the input parameterizations iteratively within a feedforward exact-inversion framework. Analysis, simulation, and experimental investigations on the scanning fiber device demonstrate improved scan accuracy over previous methods and automatic controller adaptation to changing operating temperature. A specific application example and quantitative error analyses are provided for a scanning fiber endoscope that maintains high image quality continuously across a 20 °C temperature rise without interruption of the 56 Hz video.
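The core run-to-run idea, an iterative between-repetition parameter update driven by a sparsely measured energy error, can be sketched generically. This is a toy coordinate-descent loop under assumed names (`measure_energy`, `theta`), not the paper's exact-inverse controller:

```python
# Generic run-to-run sketch: between runs of the repetitive process, nudge
# one input parameter and keep the change only if the sparsely measured
# energy error decreased. (Illustrative stand-in for the paper's method.)
def run_to_run(measure_energy, theta, step=0.1, runs=50):
    best = measure_energy(theta)
    for _ in range(runs):
        for sign in (+1.0, -1.0):
            trial = theta + sign * step
            e = measure_energy(trial)
            if e < best:
                theta, best = trial, e
                break
    return theta

# Toy plant: the energy measure is minimized at theta = 2.0 (e.g., a drive
# parameter matching the fiber's response at the current temperature).
theta_hat = run_to_run(lambda t: (t - 2.0) ** 2, theta=0.0)
print(round(theta_hat, 1))  # 2.0
```

In the actual controller the update runs inside a feedforward exact-inversion framework, which this sketch omits.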
Collapse
Affiliation(s)
- Ivan L Yeoh
- Department of Mechanical Engineering, University of Washington, Seattle, WA 98195
| | - Per G Reinhall
- Department of Mechanical Engineering, University of Washington, Seattle, WA 98195
| | - Martin C Berg
- Department of Mechanical Engineering, University of Washington, Seattle, WA 98195
| | - Howard J Chizeck
- Department of Electrical Engineering, University of Washington, Seattle, WA 98195
| | - Eric J Seibel
- Department of Mechanical Engineering, University of Washington, Seattle, WA 98195
| |
Collapse
|
21
|
Hu D, Jiang Y, Belykh E, Gong Y, Preul MC, Hannaford B, Seibel EJ. Toward real-time tumor margin identification in image-guided robotic brain tumor resection. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2017; 10135. [PMID: 34321709 DOI: 10.1117/12.2255417] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
For patients with malignant brain tumors (glioblastomas), safe maximal resection of the tumor is critical for an increased survival rate. However, complete resection of the cancer is hard to achieve because of the invasive nature of these tumors: the margins blur from frank tumor into more normal-appearing brain tissue, which single cells or clusters of malignant cells may nonetheless have invaded. Recent developments in fluorescence imaging techniques have shown great potential for improved surgical outcomes by providing surgeons with intraoperative contrast-enhanced visualization of tumor in neurosurgery. Current near-infrared (NIR) fluorophores, such as indocyanine green (ICG), cyanine5.5 (Cy5.5), and 5-aminolevulinic acid (5-ALA)-induced protoporphyrin IX (PpIX), are showing clinical potential for targeting and guiding resections of such tumors. Real-time tumor margin identification in NIR imaging could benefit both surgeons and patients by reducing the operation time and space required by other imaging modalities such as intraoperative MRI, and it has the potential to integrate with robotically assisted surgery. In this paper, a segmentation method based on the Chan-Vese model was developed for identifying tumor boundaries in an ex vivo mouse brain from relatively noisy fluorescence images acquired by a multimodal scanning fiber endoscope (mmSFE). Tumor contours were obtained iteratively by minimizing an energy function formed by a level set function and the segmentation model. Quantitative segmentation metrics based on the tumor-to-background (T/B) ratio were evaluated. The results demonstrated the feasibility of detecting brain tumor margins in quasi-real time, with the potential to yield more precise brain tumor resection techniques or even robotic interventions in the future.
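The Chan-Vese model segments by minimizing an energy that rewards each region for being close to its own mean intensity. A minimal sketch of just that data term (the level-set curvature/length penalty is omitted, and the intensities are illustrative, not mmSFE data) alternates between re-estimating the region means c1/c2 and reassigning pixels:

```python
# Minimal Chan-Vese data-term sketch: assign each pixel to tumor (c1) or
# background (c2) by whichever region mean its intensity is closer to,
# then re-estimate the means; iterate. Length penalty omitted.
def chan_vese_data_term(pixels, iters=10):
    mask = [p > sum(pixels) / len(pixels) for p in pixels]  # init at global mean
    for _ in range(iters):
        fg = [p for p, m in zip(pixels, mask) if m]
        bg = [p for p, m in zip(pixels, mask) if not m]
        c1 = sum(fg) / len(fg)   # mean inside the contour (tumor)
        c2 = sum(bg) / len(bg)   # mean outside (background)
        mask = [(p - c1) ** 2 < (p - c2) ** 2 for p in pixels]
    return mask, c1 / c2         # segmentation and T/B ratio

# Synthetic 1-D "fluorescence" trace: bright tumor pixels among dim background.
img = [0.1, 0.2, 0.15, 0.9, 0.85, 0.95, 0.2, 0.1]
mask, tb = chan_vese_data_term(img)
print(mask, round(tb, 1))  # T/B ratio of about 6 on this toy trace
```

A full Chan-Vese implementation evolves a level-set function with the length penalty included (e.g., scikit-image's `segmentation.chan_vese`); this sketch only illustrates the region-mean fidelity terms.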
Collapse
Affiliation(s)
- Danying Hu
- Biorobotics Laboratory, Dept. of Electrical Engr., Univ. of Washington, Seattle, WA 98195
| | - Yang Jiang
- Human Photonics Lab, Dept. of Mechanical Engr., Univ. of Washington, Seattle, WA 98195
| | - Evgenii Belykh
- Division of Neurological Surgery, Barrow Neurological Institute, St. Joseph's Hospital and Medical Center, 350 West Thomas Road, Phoenix, AZ 85013.,School of Life Sciences, Arizona State University, Tempe, AZ 85287.,Department of Neurosurgery, Irkutsk State Medical University, Irkutsk, Russia, 664003
| | - Yuanzheng Gong
- Human Photonics Lab, Dept. of Mechanical Engr., Univ. of Washington, Seattle, WA 98195
| | - Mark C Preul
- Division of Neurological Surgery, Barrow Neurological Institute, St. Joseph's Hospital and Medical Center, 350 West Thomas Road, Phoenix, AZ 85013
| | - Blake Hannaford
- Biorobotics Laboratory, Dept. of Electrical Engr., Univ. of Washington, Seattle, WA 98195
| | - Eric J Seibel
- Human Photonics Lab, Dept. of Mechanical Engr., Univ. of Washington, Seattle, WA 98195
| |
Collapse
|
22
|
Hu D, Gong Y, Hannaford B, Seibel EJ. Path Planning for Semi-automated Simulated Robotic Neurosurgery. PROCEEDINGS OF THE ... IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS. IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS 2015; 2015:2639-2645. [PMID: 26705501 DOI: 10.1109/iros.2015.7353737] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
This paper considers a semi-automated robotic surgical procedure for removing brain tumor margins, for which manual operation is a tedious and time-consuming task for surgeons. We present robust path planning methods for robotic ablation of tumor residues of various shapes, represented as point clouds rather than analytical geometry. Along with the path plans, corresponding metrics are delivered to the surgeon for selecting the optimal candidate for automated robotic ablation. The selected path plan is then executed and tested on the RAVEN™ II surgical robot platform as part of a semi-automated robotic brain tumor ablation surgery in a simulated tissue phantom.
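The planning problem described, ordering a point cloud of tumor residue into an ablation path and reporting a comparison metric, can be sketched with a simple greedy nearest-neighbor ordering. This stand-in planner and its total-travel metric are hypothetical, not the paper's planners:

```python
# Hypothetical sketch: order residual-tumor points (a point cloud, not an
# analytic shape) into an ablation path, and report total tool travel as a
# metric the surgeon could use to compare candidate plans.
from math import dist

def greedy_ablation_path(points):
    path, rest = [points[0]], list(points[1:])
    while rest:
        nxt = min(rest, key=lambda p: dist(path[-1], p))  # nearest unvisited
        rest.remove(nxt)
        path.append(nxt)
    travel = sum(dist(a, b) for a, b in zip(path, path[1:]))
    return path, travel

cloud = [(0, 0, 0), (0, 2, 0), (0, 1, 0), (0, 3, 0)]
path, travel = greedy_ablation_path(cloud)
print(path, travel)  # visits points in y-order, total travel 3.0
```

A real planner would also enforce tool-orientation and tissue-clearance constraints, which this sketch ignores.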
Collapse
Affiliation(s)
- Danying Hu
- Biorobotics Laboratory, Department of Electrical Engineering, University of Washington, Seattle, WA 98195, USA
| | - Yuanzheng Gong
- Human Photonics Laboratory, Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA
| | - Blake Hannaford
- Biorobotics Laboratory, Department of Electrical Engineering, University of Washington, Seattle, WA 98195, USA
| | - Eric J Seibel
- Human Photonics Laboratory, Department of Mechanical Engineering, University of Washington, Seattle, WA 98195, USA
| |
Collapse
|
23
|
Gong Y, Hu D, Hannaford B, Seibel EJ. Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2015; 9415:94150C. [PMID: 25821389 PMCID: PMC4376325 DOI: 10.1117/12.2082872] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/28/2022]
Abstract
The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field during robotically assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which had been reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the pose measured with a micro-positioning stage. In these preliminary results, the computational efficiency of the MATLAB implementation is near real time (2.5 s per pose estimate), which can be improved by implementation in C++. Error analysis showed an average 3-mm position error and 2.5° orientation error. The sources of these errors are 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of the endoscope's intrinsic parameters, such as focal length; and 3) endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
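Bundle adjustment recovers the camera pose by minimizing the reprojection error between model points projected through the camera and the matched image features. A minimal sketch of that error term, with an illustrative pinhole model, translation-only pose, and made-up focal length (not the paper's constrained formulation), is:

```python
# Sketch of the quantity pose estimation minimizes: squared reprojection
# error between projected 3D model points and matched 2D image features.
# Pose is reduced to a translation (tx, ty, tz) for illustration.
def project(point_3d, focal, tx=0.0, ty=0.0, tz=0.0):
    """Pinhole projection after translating the point into the camera frame."""
    x, y, z = point_3d[0] + tx, point_3d[1] + ty, point_3d[2] + tz
    return (focal * x / z, focal * y / z)

def reprojection_error(points_3d, features_2d, focal, t):
    err = 0.0
    for p, (u, v) in zip(points_3d, features_2d):
        pu, pv = project(p, focal, *t)
        err += (pu - u) ** 2 + (pv - v) ** 2
    return err

pts = [(0.0, 0.0, 10.0), (1.0, 0.0, 10.0)]
feats = [project(p, focal=500.0) for p in pts]  # synthetic "observations"
print(reprojection_error(pts, feats, 500.0, (0.0, 0.0, 0.0)))  # 0.0 at true pose
```

A full constrained bundle adjustment would also optimize rotation and apply the paper's constraints while minimizing this error over all matched features; lens distortion (error source 3 above) would enter through the projection model.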
Collapse
Affiliation(s)
- Yuanzheng Gong
- Human Photonics Lab, Dept. of Mechanical Engineering, Univ. of Washington, Seattle, WA 98195
| | - Danying Hu
- Biorobotics Lab, Dept. of Electrical Engineering, Univ. of Washington, Seattle, WA 98195
| | - Blake Hannaford
- Biorobotics Lab, Dept. of Electrical Engineering, Univ. of Washington, Seattle, WA 98195
| | - Eric J. Seibel
- Human Photonics Lab, Dept. of Mechanical Engineering, Univ. of Washington, Seattle, WA 98195
| |
Collapse
|