1
Zhou WK, Wang JJ, Jiang YH, Yang L, Luo YL, Man Y, Wang J. Clinical and in vitro application of robotic computer-assisted implant surgery: a scoping review. Int J Oral Maxillofac Surg 2024:S0901-5027(24)00371-0. PMID: 39366877; DOI: 10.1016/j.ijom.2024.09.006.
Abstract
In recent years, the emergence and application of robotic computer-assisted implant surgery (r-CAIS) has resulted in a revolutionary shift in conventional implant diagnosis and treatment. This scoping review was performed to verify the null hypothesis that r-CAIS has a relatively high accuracy of within 1 mm, with relatively few complications and a short operative time. This review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR). From the 3355 publications identified in the PubMed, Scopus, Web of Science, and Google Scholar databases, 28 were finally included after a comprehensive review and analysis. The null hypothesis is partly accepted, as r-CAIS has a relatively high accuracy (coronal and apical deviation within 1 mm), and no significant adverse events or complications have been reported to date, although additional confirmatory studies are needed. However, there is insufficient evidence for a shorter surgical time, and further clinical research on this topic is required.
Affiliation(s)
- W K Zhou
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- J J Wang
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Y H Jiang
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- L Yang
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Y L Luo
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China; Department of Oral Implantology, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- Y Man
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China; Department of Oral Implantology, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
- J Wang
- State Key Laboratory of Oral Diseases and National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China; Department of Oral Implantology, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China
2
Li C, Zhang G, Zhao B, Xie D, Du H, Duan X, Hu Y, Zhang L. Advances of surgical robotics: image-guided classification and application. Natl Sci Rev 2024; 11:nwae186. PMID: 39144738; PMCID: PMC11321255; DOI: 10.1093/nsr/nwae186.
Abstract
Surgical robotics applications in minimally invasive surgery have developed rapidly and attracted increasing research attention in recent years. A consensus has emerged that surgical procedures will become less traumatic while incorporating greater intelligence and higher autonomy, which poses a serious challenge for the environmental-sensing capabilities of robotic systems. Images are one of the main sources of environmental information for robots and form the basis of robot vision. In this review article, we divide clinical images into direct and indirect categories based on the object of information acquisition, and into continuous, intermittent continuous, and discontinuous categories according to the target-tracking frequency. The characteristics and applications of existing surgical robots in each category are introduced along these two dimensions. Our purpose in conducting this review was to analyze, summarize, and discuss the current evidence on the general rules governing the application of imaging technologies for medical purposes. Our analysis provides insight and guidance conducive to the development of more advanced surgical robotic systems in the future.
Affiliation(s)
- Changsheng Li
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- Gongzi Zhang
- Department of Orthopedics, Chinese PLA General Hospital, Beijing 100141, China
- Baoliang Zhao
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Dongsheng Xie
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Hailong Du
- Department of Orthopedics, Chinese PLA General Hospital, Beijing 100141, China
- Xingguang Duan
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Ying Hu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Lihai Zhang
- Department of Orthopedics, Chinese PLA General Hospital, Beijing 100141, China
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
3
Younis R, Yamlahi A, Bodenstedt S, Scheikl PM, Kisilenko A, Daum M, Schulze A, Wise PA, Nickel F, Mathis-Ullrich F, Maier-Hein L, Müller-Stich BP, Speidel S, Distler M, Weitz J, Wagner M. A surgical activity model of laparoscopic cholecystectomy for co-operation with collaborative robots. Surg Endosc 2024; 38:4316-4328. PMID: 38872018; PMCID: PMC11289174; DOI: 10.1007/s00464-024-10958-w.
Abstract
BACKGROUND Laparoscopic cholecystectomy is a very frequent surgical procedure. However, in an ageing society, fewer surgical staff will be available to operate on patients. Collaborative surgical robots (cobots) could address surgical staff shortages and workload. To achieve context-awareness for surgeon-robot collaboration, intraoperative recognition of the action workflow is a key challenge. METHODS A surgical process model was developed for intraoperative surgical activities, including actor, instrument, action, and target, in laparoscopic cholecystectomy (excluding camera guidance). These activities, as well as instrument presence and surgical phases, were annotated in videos of laparoscopic cholecystectomy performed on human patients (n = 10) and on explanted porcine livers (n = 10). The machine learning algorithm Distilled-Swin was trained on our own annotated dataset and the CholecT45 dataset. The model was validated using fivefold cross-validation. RESULTS In total, 22,351 activities were annotated, with a cumulative duration of 24.9 h of video segments. The machine learning algorithm trained and validated on our own dataset scored a mean average precision (mAP) of 25.7% and a top-K (K = 5) accuracy of 85.3%. With training and validation on our dataset and CholecT45, the algorithm scored a mAP of 37.9%. CONCLUSIONS An activity model was developed and applied for the fine-granular annotation of laparoscopic cholecystectomies in two surgical settings. A machine recognition algorithm trained on our own annotated dataset and CholecT45 achieved higher performance than training only on CholecT45 and can recognize frequently occurring activities well, but not infrequent ones. Analysis of the annotated dataset allowed quantification of the potential of collaborative surgical robots to address the workload of surgical staff: if such robots could grasp and hold tissue, up to 83.5% of the assistant's tissue-interacting tasks (i.e. excluding camera guidance) could be performed by robots.
Affiliation(s)
- R Younis
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- A Yamlahi
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- S Bodenstedt
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- P M Scheikl
- Surgical Planning and Robotic Cognition (SPARC), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg, Erlangen, Germany
- A Kisilenko
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- M Daum
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- A Schulze
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- P A Wise
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- F Nickel
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- Department of General, Visceral and Thoracic Surgery, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- F Mathis-Ullrich
- Surgical Planning and Robotic Cognition (SPARC), Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg, Erlangen, Germany
- L Maier-Hein
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Division of Intelligent Medical Systems (IMSY), German Cancer Research Center (DKFZ), Heidelberg, Germany
- B P Müller-Stich
- Department for Abdominal Surgery, University Center for Gastrointestinal and Liver Diseases, Basel, Switzerland
- S Speidel
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- M Distler
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- J Weitz
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
- M Wagner
- Department for General, Visceral and Transplant Surgery, Heidelberg University Hospital, Heidelberg, Germany
- National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany
- Centre for the Tactile Internet with Human-in-the-Loop (CeTI), TUD Dresden University of Technology, Dresden, Germany
- Department of Visceral, Thoracic and Vascular Surgery, Faculty of Medicine and University Hospital Carl Gustav Carus, TUD Dresden University of Technology, Fetscherstraße 74, 01307, Dresden, Germany
4
Tao Q, Liu J, Zheng Y, Yang Y, Lin C, Guang C. Evaluation of an Active Disturbance Rejection Controller for Ophthalmic Robots with Piezo-Driven Injector. Micromachines 2024; 15:833. PMID: 39064342; PMCID: PMC11278564; DOI: 10.3390/mi15070833.
Abstract
Retinal vein cannulation involves puncturing an occluded vessel on the micron scale. Even a single millinewton of force can cause permanent damage. An ophthalmic robot with a piezo-driven injector is precise enough to perform this delicate procedure, but the uncertain viscoelastic characteristics of the vessel make it difficult to achieve the desired contact force without harming the retina. To address this issue, this paper utilizes a viscoelastic contact model to explain the mechanical characteristics of retinal blood vessels. The uncertainty in the viscoelastic properties is treated as an internal disturbance of the contact model, and an active disturbance rejection controller is then proposed to precisely control the contact force. The experimental results show that this method can precisely adjust the contact force at the millinewton level even when the viscoelastic parameters vary significantly (by up to 403.8%). The root mean square (RMS) and maximum values of the steady-state error are 0.32 mN and 0.41 mN, respectively. The response time is below 2.51 s, with no obvious overshoot.
Affiliation(s)
- Qiannan Tao
- School of Energy and Power Engineering, Beihang University, Beijing 100191, China
- Jianjun Liu
- School of Mechanical Engineering and Automation, Beihang University, Beijing 100191, China
- Yu Zheng
- College of Automation and College of Artificial Intelligence, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
- Yang Yang
- School of Mechanical Engineering and Automation, Beihang University, Beijing 100191, China
- Chuang Lin
- School of Mechanical Engineering and Automation, Beihang University, Beijing 100191, China
- Chenhan Guang
- School of Mechanical and Materials Engineering, North China University of Technology, Beijing 100144, China
5
Su K, Liu J, Ren X, Huo Y, Du G, Zhao W, Wang X, Liang B, Li D, Liu PX. A fully autonomous robotic ultrasound system for thyroid scanning. Nat Commun 2024; 15:4004. PMID: 38734697; DOI: 10.1038/s41467-024-48421-y.
Abstract
Current thyroid ultrasound relies heavily on the experience and skills of the sonographer and the expertise of the radiologist, and the process is physically and cognitively exhausting. In this paper, we report a fully autonomous robotic ultrasound system that is able to scan thyroid regions without human assistance and identify malignant nodules. In this system, human skeleton point recognition, reinforcement learning, and force feedback are used to deal with the difficulties in locating thyroid targets. The orientation of the ultrasound probe is adjusted dynamically via Bayesian optimization. Experimental results on human participants demonstrated that this system can perform high-quality ultrasound scans close to manual scans obtained by clinicians. Additionally, it has the potential to detect thyroid nodules and provide data on nodule characteristics for American College of Radiology Thyroid Imaging Reporting and Data System (ACR TI-RADS) calculation.
Affiliation(s)
- Kang Su
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510006, China
- Jingwei Liu
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510006, China
- Xiaoqi Ren
- School of Future Technology, South China University of Technology, Guangzhou, 511442, China
- Peng Cheng Laboratory, Shenzhen, 518000, China
- Yingxiang Huo
- School of Future Technology, South China University of Technology, Guangzhou, 511442, China
- Peng Cheng Laboratory, Shenzhen, 518000, China
- Guanglong Du
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510006, China
- Wei Zhao
- Division of Vascular and Interventional Radiology, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Xueqian Wang
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, 518055, China
- Bin Liang
- Department of Automation, Tsinghua University, 100854, Beijing, China
- Di Li
- School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou, 510641, China
- Peter Xiaoping Liu
- Department of Systems and Computer Engineering, Carleton University, Ottawa, ON, K1S 5B6, Canada
6
Elameen AM, Dahy AA. Surgical outcomes of robotic versus conventional autologous breast reconstruction: a systematic review and meta-analysis. J Robot Surg 2024; 18:189. PMID: 38693427; PMCID: PMC11063005; DOI: 10.1007/s11701-024-01913-x.
Abstract
Breast reconstruction is an integral part of breast cancer management. Conventional techniques of flap harvesting for autologous breast reconstruction are associated with considerable complications. Robotic surgery has enabled a new spectrum of minimally invasive breast surgeries. The current systematic review and meta-analysis was designed to retrieve the surgical and clinical outcomes of robotic versus conventional techniques for autologous breast reconstruction. An extensive systematic literature review was performed from inception to 25 April 2023. All clinical studies comparing the outcomes of robotic and conventional autologous breast reconstruction were included for meta-analysis. The present meta-analysis included seven articles comprising 783 patients. Of them, 263 patients received robotic breast reconstruction, while 520 patients received the conventional technique. Of note, 477 patients received a latissimus dorsi flap (LDF) and 306 were subjected to a deep inferior epigastric artery perforator (DIEP) flap. There was a significantly prolonged duration of surgery (MD 58.36; 95% CI 32.05, 84.67; P < 0.001) and duration of anaesthesia (MD 47; 95% CI 16.23, 77.77; P = 0.003) among patients who underwent robotic surgery. The risk of complications was similar between robotic and conventional surgeries. The mean level of pain intensity was lower among patients who received robotic breast surgery, although the difference was not statistically significant (MD -0.28; 95% CI -0.73, 0.17; P = 0.22). The length of hospitalization was longer among patients with conventional DIEP flap surgery (MD -0.59; 95% CI -1.13, -0.05; P = 0.03). The present meta-analysis highlighted the feasibility, safety, and effectiveness of robotic autologous breast reconstruction, including the successful harvesting of LDF and DIEP flaps with acceptable surgical and functional outcomes.
Affiliation(s)
- Ali Mohamed Elameen
- Department of Plastic and Reconstructive Surgery, El-Sahel Teaching Hospital, Cairo, Egypt
- Asmaa Ali Dahy
- Department of Plastic and Reconstructive Surgery, Faculty of Medicine For Girls, Al-Azhar University, Gameat Al Azhar, Nasr City, Cairo, Egypt
7
Pak S, Park SG, Park J, Cho ST, Lee YG, Ahn H. Applications of artificial intelligence in urologic oncology. Investig Clin Urol 2024; 65:202-216. PMID: 38714511; PMCID: PMC11076794; DOI: 10.4111/icu.20230435.
Abstract
PURPOSE With the recent rising interest in artificial intelligence (AI) in medicine, many studies have explored the potential and usefulness of AI in urological diseases. This study aimed to comprehensively review recent applications of AI in urologic oncology. MATERIALS AND METHODS We searched the PubMed-MEDLINE databases for articles in English on machine learning (ML) and deep learning (DL) models related to general surgery and prostate, bladder, and kidney cancer. The search terms were a combination of keywords, including both "urology" and "artificial intelligence" with one of the following: "machine learning," "deep learning," "neural network," "renal cell carcinoma," "kidney cancer," "urothelial carcinoma," "bladder cancer," "prostate cancer," and "robotic surgery." RESULTS A total of 58 articles were included. The studies on prostate cancer were related to grade prediction, improved diagnosis, and predicting outcomes and recurrence. The studies on bladder cancer mainly used radiomics to identify aggressive tumors and predict treatment outcomes, recurrence, and survival rates. Most studies on the application of ML and DL in kidney cancer were focused on the differentiation of benign and malignant tumors as well as prediction of their grade and subtype. Most studies suggested that methods using AI may be better than or similar to existing traditional methods. CONCLUSIONS AI technology is actively being investigated in the field of urological cancers as a tool for diagnosis, prediction of prognosis, and decision-making and is expected to be applied in additional clinical areas soon. Despite technological, legal, and ethical concerns, AI will change the landscape of urological cancer management.
Affiliation(s)
- Sahyun Pak
- Department of Urology, Kangnam Sacred Heart Hospital, Hallym University College of Medicine, Seoul, Korea
- Sung Gon Park
- Department of Urology, Kangnam Sacred Heart Hospital, Hallym University College of Medicine, Seoul, Korea
- Sung Tae Cho
- Department of Urology, Kangnam Sacred Heart Hospital, Hallym University College of Medicine, Seoul, Korea
- Young Goo Lee
- Department of Urology, Kangnam Sacred Heart Hospital, Hallym University College of Medicine, Seoul, Korea
- Hanjong Ahn
- Department of Urology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
8
Lee A, Baker TS, Bederson JB, Rapoport BI. Levels of autonomy in FDA-cleared surgical robots: a systematic review. NPJ Digit Med 2024; 7:103. PMID: 38671232; PMCID: PMC11053143; DOI: 10.1038/s41746-024-01102-y.
Abstract
The integration of robotics in surgery has increased over the past decade, and advances in the autonomous capabilities of surgical robots have paralleled those of assistive and industrial robots. However, classification and regulatory frameworks have not kept pace with the increasing autonomy of surgical robots. There is a need to modernize our classification to understand technological trends and to prepare to regulate and streamline surgical practice around these robotic systems. We present a systematic review of all surgical robots cleared by the United States Food and Drug Administration (FDA) from 2015 to 2023, utilizing a classification system that we call Levels of Autonomy in Surgical Robotics (LASR) to categorize each robot's decision-making and action-taking abilities from Level 1 (Robot Assistance) to Level 5 (Full Autonomy). We searched the 510(k), De Novo, and AccessGUDID databases in December 2023 and included all medical devices fitting our definition of a surgical robot. A total of 37,981 records were screened to identify 49 surgical robots. Most surgical robots were at Level 1 (86%), and some reached Level 3 (Conditional Autonomy) (6%). Two surgical robots were recognized by the FDA to have machine learning-enabled capabilities, while more were reported to have these capabilities in their marketing materials. Most surgical robots were introduced via the 510(k) pathway, but a growing number via the De Novo pathway. This review highlights trends toward greater autonomy in surgical robotics. Implementing regulatory frameworks that acknowledge varying levels of autonomy in surgical robots may help ensure their safe and effective integration into surgical practice.
Affiliation(s)
- Audrey Lee
- Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Sinai BioDesign, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Turner S Baker
- Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Sinai BioDesign, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Department of Population Health Science and Policy, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Joshua B Bederson
- Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Sinai BioDesign, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Benjamin I Rapoport
- Department of Neurosurgery, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Sinai BioDesign, Icahn School of Medicine at Mount Sinai, New York, New York, USA
9
Chen D, Zhao Z, Zhang S, Chen S, Wu X, Shi J, Liu N, Pan C, Tang Y, Meng C, Zhao X, Tao B, Liu W, Chen D, Ding H, Zhang P, Tang Z. Evolving Therapeutic Landscape of Intracerebral Hemorrhage: Emerging Cutting-Edge Advancements in Surgical Robots, Regenerative Medicine, and Neurorehabilitation Techniques. Transl Stroke Res 2024:10.1007/s12975-024-01244-x. PMID: 38558011; DOI: 10.1007/s12975-024-01244-x.
Abstract
Intracerebral hemorrhage (ICH) is the most serious form of stroke and has limited available therapeutic options. As knowledge on ICH rapidly develops, cutting-edge techniques in the fields of surgical robots, regenerative medicine, and neurorehabilitation may revolutionize ICH treatment. However, these new advances still must be translated into clinical practice. In this review, we examined several emerging therapeutic strategies and their major challenges in managing ICH, with a particular focus on innovative therapies involving robot-assisted minimally invasive surgery, stem cell transplantation, in situ neuronal reprogramming, and brain-computer interfaces. Despite the limited expansion of the drug armamentarium for ICH over the past few decades, the judicious selection of more efficacious therapeutic modalities and the exploration of multimodal combination therapies represent opportunities to improve patient prognoses after ICH.
Affiliation(s)
- Danyang Chen
- Department of Neurology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Zhixian Zhao
- Department of Neurology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Shenglun Zhang
- School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Shiling Chen
- Department of Neurology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Xuan Wu
- Department of Neurology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Jian Shi
- School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Na Liu
- Department of Neurology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Chao Pan
- Department of Neurology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Yingxin Tang
- Department of Neurology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Cai Meng
- School of Astronautics, Beihang University, Beijing, China
- Xingwei Zhao
- School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Bo Tao
- School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Wenjie Liu
- Beijing WanTeFu Medical Instrument Co., Ltd., Beijing, China
- Diansheng Chen
- Institute of Robotics, School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Han Ding
- School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Ping Zhang
- Department of Neurology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Zhouping Tang
- Department of Neurology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, China
10
Yang J, Barragan JA, Farrow JM, Sundaram CP, Wachs JP, Yu D. An Adaptive Human-Robotic Interaction Architecture for Augmenting Surgery Performance Using Real-Time Workload Sensing-Demonstration of a Semi-autonomous Suction Tool. Hum Factors 2024; 66:1081-1102. PMID: 36367971; DOI: 10.1177/00187208221129940.
Abstract
OBJECTIVE This study developed and evaluated a mental workload-based adaptive automation (MWL-AA) system that monitors surgeon cognitive load and assists surgeons during cognitively demanding tasks in robotic-assisted surgery (RAS). BACKGROUND The introduction of RAS can overwhelm operators. Precise, continuous assessment of human mental workload (MWL) states is needed to identify when interventions should be delivered to moderate operators' MWL. METHOD The MWL-AA presented in this study was a semi-autonomous suction tool. The first experiment recruited ten participants to perform surgical tasks under different MWL levels. Their physiological responses were captured and used to develop a real-time multi-sensing model for MWL detection. The second experiment evaluated the effectiveness of the MWL-AA, in which nine brand-new surgical trainees performed the surgical task with and without the MWL-AA. Mixed-effect models were used to compare task performance and objectively and subjectively measured MWL. RESULTS The proposed system predicted high-MWL hemorrhage conditions with an accuracy of 77.9%. In the MWL-AA evaluation, the surgeons' gaze behaviors and brain activities suggested lower perceived MWL with the MWL-AA than without it. This was further supported by lower self-reported MWL and better task performance in the task condition with the MWL-AA. CONCLUSION An MWL-AA system can reduce surgeons' workload and improve performance in a high-stress hemorrhaging scenario. The findings highlight the potential of MWL-AA to enhance collaboration between autonomous systems and surgeons. Developing a robust and personalized MWL-AA is the first step and can be used to develop additional use cases in future studies. APPLICATION The proposed framework can be expanded and applied to more complex environments to improve human-robot collaboration.
Collapse
Affiliation(s)
- Jing Yang
- School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA
| | | | - Jason Michael Farrow
- Department of Urology, Indiana University School of Medicine, Indianapolis, Indiana, USA
| | - Chandru P Sundaram
- Department of Urology, Indiana University School of Medicine, Indianapolis, Indiana, USA
| | - Juan P Wachs
- School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA
| | - Denny Yu
- School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA
| |
Collapse
|
11
|
Dahlin E. And say the AI responded? Dancing around 'autonomy' in AI/human encounters. SOCIAL STUDIES OF SCIENCE 2024; 54:59-77. [PMID: 37650577 PMCID: PMC10832316 DOI: 10.1177/03063127231193947] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
Abstract
The article explores technology-human relations in a time of artificial intelligence (AI) and in the context of long-standing problems in social theory about agency, nonhumans, and autonomy. Most theorizations of AI are grounded in dualistic thinking and traditional views of technology, oversimplifying real-world settings. This article works to unfold modes of existence at play in AI/human relations. Materials from ethnographic fieldwork are used to highlight the significance of autonomy in AI/human relations. The analysis suggests that the idea of autonomy is a double-edged sword, showing that humans not only coordinate their perception of autonomy but also switch between registers by sometimes ascribing certain autonomous features to the AI system and in other situations denying the system such features. As a result, AI/human relations prove to be not so much determined by any ostensive delegation of tasks as by the way in which AI and humans engage with each other in practice. The article suggests a theory of relationality that redirects focus away from questions of agency towards questions of what it means to be in relations.
Collapse
|
12
|
Georgadarellis GL, Cobb T, Vital CJ, Sup FC. Nursing Perceptions of Robotic Technology in Healthcare: A Pretest-Posttest Survey Analysis Using an Educational Video. IISE Trans Occup Ergon Hum Factors 2024; 12:68-83. [PMID: 38456754 DOI: 10.1080/24725838.2024.2323061] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/30/2023] [Accepted: 02/21/2024] [Indexed: 03/09/2024]
Abstract
OCCUPATIONAL APPLICATIONS We used a survey to evaluate the perceptions of nurses and nursing students regarding robotic technology for nursing care before and after they reviewed an educational video that included examples of medical, care, and healthcare-service robotic technology. We found that the perception of robotic technology was initially favorable and became more favorable after the video. Because nurses comprise the largest group of healthcare professionals in hospitals and are the end users of these devices, it is beneficial for engineers to incorporate nurses' frontline knowledge into the design process from the beginning, while functional changes can still be implemented. Educating nurses in state-of-the-art technology specific to what designers are developing can enable them to provide relevant insight. Designers and engineers can use this insight to create user-friendly, effective technology that improves not only patient care but also nurse job satisfaction.
Collapse
Affiliation(s)
- Gina L Georgadarellis
- Mechanical and Industrial Engineering, University of Massachusetts Amherst, Amherst, MA, USA
| | - Tracey Cobb
- Elaine Marieb College of Nursing University of Massachusetts Amherst, Amherst, MA, USA
| | | | - Frank C Sup
- Mechanical and Industrial Engineering, University of Massachusetts Amherst, Amherst, MA, USA
| |
Collapse
|
13
|
Yu H, Wang H, Rong Y, Fang J, Niu J. Design and evaluation of a wearable vascular interventional surgical robot system. Int J Med Robot 2023:e2616. [PMID: 38131502 DOI: 10.1002/rcs.2616] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2023] [Revised: 12/06/2023] [Accepted: 12/12/2023] [Indexed: 12/23/2023]
Abstract
BACKGROUND Remote-controlled robotic vascular interventional surgery can reduce radiation exposure to interventional physicians and improve safety. However, inconvenient operation and lack of force feedback limit its application. MATERIALS AND METHODS A new wearable robotic system for vascular interventional surgery is designed, which is more flexible in operation. It ensures the safety of surgery through haptic force feedback. The system was evaluated by human vascular models and animal experiments. RESULTS The average static error of the system is 0.048 mm when the axial motion is 250 mm and 1.259° when the rotational motion is 400°. The average error of the force feedback is 0.021 N. The results of vascular model experiments and animal experiments demonstrate the feasibility and safety of the system. CONCLUSIONS The proposed robotic system can assist physicians in remotely delivering standard catheters or guidewires. The system is more flexible and uses haptic force feedback to ensure surgical safety.
Collapse
Affiliation(s)
- Haoyang Yu
- Hebei Provincial Key Laboratory of Parallel Robot and Mechatronic System, Yanshan University, Qinhuangdao, Hebei, China
| | - Hongbo Wang
- Hebei Provincial Key Laboratory of Parallel Robot and Mechatronic System, Yanshan University, Qinhuangdao, Hebei, China
- Academy for Engineering & Technology, Fudan University, Shanghai, China
| | - Yu Rong
- College of Vehicles and Energy, Yanshan University, Qinhuangdao, Hebei, China
| | - Junyu Fang
- Key Laboratory of Advanced Forging & Stamping Technology and Science (Yanshan University), Ministry of Education of China, Qinhuangdao, Hebei, China
| | - Jianye Niu
- Hebei Provincial Key Laboratory of Parallel Robot and Mechatronic System, Yanshan University, Qinhuangdao, Hebei, China
- Key Laboratory of Advanced Forging & Stamping Technology and Science (Yanshan University), Ministry of Education of China, Qinhuangdao, Hebei, China
| |
Collapse
|
14
|
Guan B, Zou Y, Zhao J, Pan L, Yi B, Li J. Clean visual field reconstruction in robot-assisted laparoscopic surgery based on dynamic prediction. Comput Biol Med 2023; 165:107472. [PMID: 37713788 DOI: 10.1016/j.compbiomed.2023.107472] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2023] [Revised: 08/24/2023] [Accepted: 09/04/2023] [Indexed: 09/17/2023]
Abstract
Robot-assisted minimally invasive surgery has been broadly employed in complicated operations. However, multiple surgical instruments may occupy a large amount of visual space in complex operations performed in narrow spaces, which affects the surgeon's judgment of the shape and position of the lesion as well as the course of its adjacent vessels/lacunae. In this paper, a surgical scene reconstruction method is proposed that involves the tracking and removal of surgical instruments and the dynamic prediction of the obscured region. For instrument tracking and segmentation, the image sequences are processed by a modified U-Net architecture composed of a pre-trained ResNet101 encoder and a redesigned decoder. The segmentation boundaries of the instrument shafts are then extended using image filtering and a real-time index mask algorithm to achieve precise localization of the obscured elements. For predicting the deformation of soft tissues, an algorithm is proposed based on a dense optical flow gravitational field and entropy increase, which achieves local dynamic visualization of the surgical scene by integrating image morphological operations. Finally, preliminary experiments and a pre-clinical evaluation demonstrate the performance of the proposed method. The results show that the method can provide the surgeon with a clean and comprehensive surgical scene, reconstruct the course of important vessels/lacunae, and avoid inadvertent injuries.
Collapse
Affiliation(s)
- Bo Guan
- The Key Lab for Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, No. 92 Weijin Road, Nankai District, Tianjin, 300072, China
| | - Yuelin Zou
- The Key Lab for Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, No. 92 Weijin Road, Nankai District, Tianjin, 300072, China
| | - Jianchang Zhao
- National Engineering Research Center of Neuromodulation, School of Aerospace Engineering, Tsinghua University, No. 30 Shuangqing Road, Haidian District, Beijing, 100084, China
| | - Lizhi Pan
- The Key Lab for Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, No. 92 Weijin Road, Nankai District, Tianjin, 300072, China
| | - Bo Yi
- Third Xiangya Hospital, Central South University, No. 138 Tongzipo Road, Yuelu District, Changsha, 410013, China.
| | - Jianmin Li
- The Key Lab for Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, No. 92 Weijin Road, Nankai District, Tianjin, 300072, China.
| |
Collapse
|
15
|
Jiang Z, Salcudean SE, Navab N. Robotic ultrasound imaging: State-of-the-art and future perspectives. Med Image Anal 2023; 89:102878. [PMID: 37541100 DOI: 10.1016/j.media.2023.102878] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Revised: 04/27/2023] [Accepted: 06/22/2023] [Indexed: 08/06/2023]
Abstract
Ultrasound (US) is one of the most widely used modalities for clinical intervention and diagnosis owing to the merits of providing non-invasive, radiation-free, and real-time images. However, free-hand US examinations are highly operator-dependent. Robotic US Systems (RUSS) aim to overcome this shortcoming by offering reproducibility, while also aiming at improved dexterity and intelligent, anatomy- and disease-aware imaging. In addition to enhancing diagnostic outcomes, RUSS also hold the potential to provide medical interventions for populations suffering from a shortage of experienced sonographers. In this paper, we categorize RUSS as teleoperated or autonomous. Regarding teleoperated RUSS, we summarize their technical developments and clinical evaluations. This survey then focuses on recent work on autonomous robotic US imaging. We demonstrate that machine learning and artificial intelligence provide the key techniques that enable intelligent, patient- and process-specific, motion- and deformation-aware robotic image acquisition. We also show that research on artificial intelligence for autonomous RUSS has directed the research community toward understanding and modeling expert sonographers' semantic reasoning and action; we call this process the recovery of the "language of sonography". This side result of research on autonomous robotic US acquisition could prove as valuable and essential as the progress made in the robotic US examination itself. This article provides both engineers and clinicians with a comprehensive understanding of RUSS by surveying the underlying techniques. Additionally, we present the challenges that the scientific community needs to face in the coming years to achieve its ultimate goal of developing intelligent robotic sonographer colleagues, capable of collaborating with human sonographers in dynamic environments to enhance both diagnostic and intraoperative imaging.
Collapse
Affiliation(s)
- Zhongliang Jiang
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany.
| | - Septimiu E Salcudean
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC V6T 1Z4, Canada
| | - Nassir Navab
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany; Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
| |
Collapse
|
16
|
Wang Y, Wang W, Cai Y, Zhao Q, Wang Y. Preoperative Planning Framework for Robot-Assisted Dental Implant Surgery: Finite-Parameter Surrogate Model and Optimization of Instrument Placement. Bioengineering (Basel) 2023; 10:952. [PMID: 37627837 PMCID: PMC10451750 DOI: 10.3390/bioengineering10080952] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2023] [Revised: 08/05/2023] [Accepted: 08/07/2023] [Indexed: 08/27/2023] Open
Abstract
For robot-assisted dental implant surgery, it is necessary to feed the instrument into a specified position to perform surgery. To improve safety and efficiency, a preoperative planning framework, including a finite-parameter surrogate model (FPSM) and an automatic instrument-placement method, is proposed in this paper. This framework is implemented via two-stage optimization. In the first stage, a group of closed curves in polar coordinates is used to represent the oral cavity. By optimizing a finite number of parameters for these curves, the oral structure is simplified to form the FPSM. In the second stage, the FPSM serves as a fast safety estimator with which the target position/orientation of the instrument for the feeding motion is automatically determined through particle swarm optimization (PSO). The optimized feeding target can be used to generate a virtual fixture (VF) to avoid undesired operations and to lower the risk of collision. This proposed framework has the advantages of being safe, fast, and accurate, overcoming the computational burden and insufficient real-time performance of complex 3D models. The framework has been developed and tested, preliminarily verifying its feasibility, efficiency, and effectiveness.
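The second-stage search described in the abstract is standard particle swarm optimization. As an illustration only (not the authors' code), the following minimal PSO sketch in Python substitutes a toy quadratic for the paper's FPSM-based safety cost; all names, bounds, and hyperparameters here are assumptions for demonstration:

```python
import random

def pso_minimize(f, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over box `bounds` with a basic global-best PSO."""
    rng = random.Random(seed)
    dim = len(bounds)
    # random initial positions, zero initial velocities
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (personal best) + social pull (global best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d], lo), hi)  # clamp to bounds
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# toy stand-in for the FPSM safety cost: optimum at target pose (1, 2)
best, val = pso_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2,
                         [(-5.0, 5.0), (-5.0, 5.0)])
```

In the paper's setting, the objective evaluated by the swarm would be the FPSM safety estimate for a candidate instrument position/orientation rather than this quadratic.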
Collapse
Affiliation(s)
| | | | - Yueri Cai
- School of Mechanical Engineering and Automation, Beihang University, Beijing 100191, China; (Y.W.); (W.W.); (Q.Z.); (Y.W.)
| | | | | |
Collapse
|
17
|
Hu J, Liu J, Guo Y, Cao Z, Chen X, Zhang C. A collaborative robotic platform for sensor-aware fibula osteotomies in mandibular reconstruction surgery. Comput Biol Med 2023; 162:107040. [PMID: 37263153 DOI: 10.1016/j.compbiomed.2023.107040] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2022] [Revised: 04/17/2023] [Accepted: 05/12/2023] [Indexed: 06/03/2023]
Abstract
Precision and safety are crucial when performing fibula osteotomy during mandibular reconstruction with the free fibula flap (FFF). However, current clinical methods, such as template-guided osteotomy, have the potential to damage the fibular vessels. To address this challenge, this paper introduces a surgical robot for fibula osteotomies in mandibular reconstruction surgery and proposes a sensor-aware hybrid force-motion control algorithm for safe osteotomy, comprising three parts: osteotomy motion modeling from surgeons' demonstrations, dynamic-system-based admittance control, and osteotomy sawed-through detection. As a result, the average linear variation of the osteotomized segments was 1.08 ± 0.41 mm, and the average angular variation was 1.32 ± 0.65°. The sawed-through detection threshold is 0.5, at which the average offset is 0.5 mm. In conclusion, with the assistance of the surgical robot for mandibular reconstruction, surgeons can perform fibula osteotomy precisely and safely.
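Admittance control, one of the three components named in the abstract, maps a measured tool-tissue force to compliant motion. A minimal one-dimensional sketch follows; the mass and damping values are illustrative assumptions, not the paper's controller parameters:

```python
def admittance_step(f_ext, v, dt, mass=1.0, damping=10.0):
    """One explicit-Euler step of the admittance model M*dv/dt + B*v = f_ext.

    Returns the updated commanded tool velocity. Under a constant
    external force, the velocity settles toward f_ext / damping.
    """
    a = (f_ext - damping * v) / mass
    return v + a * dt

# simulate a constant 5 N contact force for 1 s at 1 kHz
v = 0.0
for _ in range(1000):
    v = admittance_step(5.0, v, dt=0.001)
```

A full controller would run this per axis, feed the commanded velocity to the robot's motion interface, and gate it with the sawed-through detector; those details are specific to the paper's system and are not reproduced here.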
Collapse
Affiliation(s)
- Junlei Hu
- Department of Oral Maxillofacial - Head & Neck Oncology, Shanghai Ninth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China; School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
| | - Jiannan Liu
- Department of Oral Maxillofacial - Head & Neck Oncology, Shanghai Ninth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China.
| | - Yan Guo
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
| | - Zhenggang Cao
- Institute of Medical Robot, Shanghai Jiao Tong University, Shanghai, 200240, China
| | - Xiaojun Chen
- School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China; Institute of Medical Robot, Shanghai Jiao Tong University, Shanghai, 200240, China.
| | - Chenping Zhang
- Department of Oral Maxillofacial - Head & Neck Oncology, Shanghai Ninth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, 200011, China.
| |
Collapse
|
18
|
Seetohul J, Shafiee M, Sirlantzis K. Augmented Reality (AR) for Surgical Robotic and Autonomous Systems: State of the Art, Challenges, and Solutions. SENSORS (BASEL, SWITZERLAND) 2023; 23:6202. [PMID: 37448050 DOI: 10.3390/s23136202] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/22/2023] [Revised: 06/09/2023] [Accepted: 07/03/2023] [Indexed: 07/15/2023]
Abstract
Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the center of focus in most devices remains improving end-effector dexterity and precision, as well as access to minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings in current optimization algorithms for surgical robots (such as YOLO and LSTM) while providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
Collapse
Affiliation(s)
- Jenna Seetohul
- Mechanical Engineering Group, School of Engineering, University of Kent, Canterbury CT2 7NT, UK
| | - Mahmood Shafiee
- Mechanical Engineering Group, School of Engineering, University of Kent, Canterbury CT2 7NT, UK
- School of Mechanical Engineering Sciences, University of Surrey, Guildford GU2 7XH, UK
| | - Konstantinos Sirlantzis
- School of Engineering, Technology and Design, Canterbury Christ Church University, Canterbury CT1 1QU, UK
- Intelligent Interactions Group, School of Engineering, University of Kent, Canterbury CT2 7NT, UK
| |
Collapse
|
19
|
Kume K. Flexible robotic endoscopy for treating gastrointestinal neoplasms. World J Gastrointest Endosc 2023; 15:434-439. [PMID: 37397973 PMCID: PMC10308274 DOI: 10.4253/wjge.v15.i6.434] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Revised: 04/14/2023] [Accepted: 05/04/2023] [Indexed: 06/14/2023] Open
Abstract
Therapeutic flexible endoscopic robotic systems have been developed primarily as a platform for endoscopic submucosal dissection (ESD) in the treatment of early-stage gastrointestinal cancer. Since ESD can only be performed by highly skilled endoscopists, the goal is to lower the technical hurdles to ESD by introducing a robot. In some cases, such robots have already been used clinically, but they are still in the research and development stage. This paper outlined the current status of development, including a system by the author’s group, and discussed future challenges.
Collapse
Affiliation(s)
- Keiichiro Kume
- Third Department of Internal Medicine, University of Occupational and Environmental Health, Kitakyushu 8078555, Japan
| |
Collapse
|
20
|
Hutler B, Rieder TN, Mathews DJH, Handelman DA, Greenberg AM. Designing robots that do no harm: understanding the challenges of Ethics for Robots. AI AND ETHICS 2023:1-9. [PMID: 37360148 PMCID: PMC10108783 DOI: 10.1007/s43681-023-00283-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/24/2023] [Accepted: 03/28/2023] [Indexed: 06/28/2023]
Abstract
This article describes key challenges in creating an ethics "for" robots. Robot ethics is not only a matter of the effects caused by robotic systems or the uses to which they may be put, but also the ethical rules and principles that these systems ought to follow: what we call "Ethics for Robots." We suggest that the Principle of Nonmaleficence, or "do no harm," is one of the basic elements of an ethics for robots, especially robots that will be used in a healthcare setting. We argue, however, that the implementation of even this basic principle will raise significant challenges for robot designers. In addition to technical challenges, such as ensuring that robots are able to detect salient harms and dangers in the environment, designers will need to determine an appropriate sphere of responsibility for robots and to specify which of various types of harms must be avoided or prevented. These challenges are amplified by the fact that the robots we are currently able to design possess a form of semi-autonomy that differs from other, more familiar semi-autonomous agents such as animals or young children. In short, robot designers must identify and overcome the key challenges of an ethics for robots before they may ethically utilize robots in practice.
Collapse
Affiliation(s)
- Brian Hutler
- Department of Philosophy, Temple University, 1114 Polett Walk, Philadelphia, PA 19122 USA
| | - Travis N. Rieder
- Berman Institute of Bioethics, Johns Hopkins University, 1809 Ashland Ave, Baltimore, MD 21205 USA
| | - Debra J. H. Mathews
- Berman Institute of Bioethics, Johns Hopkins University, 1809 Ashland Ave, Baltimore, MD 21205 USA
- Department of Genetic Medicine, Johns Hopkins University School of Medicine, 733 N. Broadway, Baltimore, MD 21205 USA
| | - David A. Handelman
- Applied Physics Laboratory, Johns Hopkins University, 11100 Johns Hopkins Road, Laurel, MD 20723 USA
| | - Ariel M. Greenberg
- Applied Physics Laboratory, Johns Hopkins University, 11100 Johns Hopkins Road, Laurel, MD 20723 USA
| |
Collapse
|
21
|
Chadebecq F, Lovat LB, Stoyanov D. Artificial intelligence and automation in endoscopy and surgery. Nat Rev Gastroenterol Hepatol 2023; 20:171-182. [PMID: 36352158 DOI: 10.1038/s41575-022-00701-y] [Citation(s) in RCA: 21] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 10/03/2022] [Indexed: 11/10/2022]
Abstract
Modern endoscopy relies on digital technology, from high-resolution imaging sensors and displays to electronics connecting configurable illumination and actuation systems for robotic articulation. In addition to enabling more effective diagnostic and therapeutic interventions, the digitization of the procedural toolset enables video data capture of the internal human anatomy at unprecedented levels. Interventional video data encapsulate functional and structural information about a patient's anatomy as well as events, activity and action logs about the surgical process. This detailed but difficult-to-interpret record from endoscopic procedures can be linked to preoperative and postoperative records or patient imaging information. Rapid advances in artificial intelligence, especially in supervised deep learning, can utilize data from endoscopic procedures to develop systems for assisting procedures leading to computer-assisted interventions that can enable better navigation during procedures, automation of image interpretation and robotically assisted tool manipulation. In this Perspective, we summarize state-of-the-art artificial intelligence for computer-assisted interventions in gastroenterology and surgery.
Collapse
Affiliation(s)
- François Chadebecq
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
| | - Laurence B Lovat
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK
| | - Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, London, UK.
| |
Collapse
|
22
|
Rashidi Fathabadi F, Grantner JL, Shebrain SA, Abdel-Qader I. 3D Autonomous Surgeon's Hand Movement Assessment Using a Cascaded Fuzzy Supervisor in Multi-Thread Video Processing. SENSORS (BASEL, SWITZERLAND) 2023; 23:s23052623. [PMID: 36904830 PMCID: PMC10007173 DOI: 10.3390/s23052623] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/27/2022] [Revised: 02/25/2023] [Accepted: 02/25/2023] [Indexed: 06/02/2023]
Abstract
The purpose of Fundamentals of Laparoscopic Surgery (FLS) training is to develop laparoscopic surgery skills through simulation experiences. Several advanced simulation-based training methods have been created to enable training in a non-patient environment. Laparoscopic box trainers (cheap, portable devices) have long been deployed to offer training opportunities, competence evaluations, and performance reviews. However, trainees must be supervised by medical experts who can evaluate their abilities, which is an expensive and time-consuming operation. Thus, a high level of surgical skill, determined by assessment, is necessary to prevent intraoperative issues and malfunctions during a real laparoscopic procedure. To guarantee that laparoscopic surgical training methods result in surgical skill improvement, it is necessary to measure and assess surgeons' skills during tests. We used our intelligent box-trainer system (IBTS) as a platform for skill training. The main aim of this study was to monitor the movement of the surgeon's hands within a predefined field of interest. To evaluate hand movement in 3D space, an autonomous evaluation system using two cameras and multi-thread video processing is proposed. The method detects laparoscopic instruments and applies a cascaded fuzzy logic assessment system composed of two fuzzy logic systems executing in parallel: the first level assesses the left- and right-hand movements simultaneously, and its outputs are cascaded into the final fuzzy logic assessment at the second level. The algorithm is completely autonomous and removes the need for human monitoring or intervention. The experimental work included nine physicians (surgeons and residents) from the surgery and obstetrics/gynecology (OB/GYN) residency programs at WMU Homer Stryker MD School of Medicine (WMed) with different levels of laparoscopic skill and experience. They were recruited to participate in the peg-transfer task. The participants' performances were assessed, and videos were recorded throughout the exercises. The results were delivered autonomously about 10 s after the experiments concluded. In the future, we plan to increase the computing power of the IBTS to achieve real-time performance assessment.
Collapse
Affiliation(s)
| | - Janos L. Grantner
- Electrical & Computer Engineering Department, Western Michigan University, Kalamazoo, MI 49008, USA
| | - Saad A. Shebrain
- Department of Surgery, Homer Stryker MD School of Medicine, Western Michigan University, Kalamazoo, MI 49008, USA
| | - Ikhlas Abdel-Qader
- Electrical & Computer Engineering Department, Western Michigan University, Kalamazoo, MI 49008, USA
| |
Collapse
|
23
|
Bourdillon AT, Garg A, Wang H, Woo YJ, Pavone M, Boyd J. Integration of Reinforcement Learning in a Virtual Robotic Surgical Simulation. Surg Innov 2023; 30:94-102. [PMID: 35503302 DOI: 10.1177/15533506221095298] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Background. The revolutions in AI hold tremendous capacity to augment human achievements in surgery, but robust integration of deep learning algorithms with high-fidelity surgical simulation remains a challenge. We present a novel application of reinforcement learning (RL) for automating surgical maneuvers in a graphical simulation. Methods. In the Unity3D game engine, the Machine Learning-Agents package was integrated with the NVIDIA FleX particle simulator to develop autonomously behaving RL-trained scissors. Proximal Policy Optimization (PPO) was used to reward desired behavior, such as movement along a desired trajectory and optimized cutting maneuvers along the deformable tissue-like object. Constant and proportional reward functions were tested, and TensorFlow analytics was used to inform hyperparameter tuning and to evaluate performance. Results. The RL-trained scissors reliably manipulated the rendered tissue that was simulated with soft-tissue properties. A desirable trajectory of the autonomously behaving scissors was achieved along one axis. Proportional rewards performed better than constant rewards. Cumulative reward and PPO metrics did not consistently improve across RL-trained scissors in the setting of movement across two axes (horizontal and depth). Conclusion. Game engines hold promising potential for the design and implementation of RL-based solutions to simulated surgical subtasks. Task completion was sufficiently achieved in one-dimensional movement in simulations with and without tissue rendering. Further work is needed to optimize network architecture and parameter tuning to handle increasing complexity.
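The constant-versus-proportional reward comparison in the abstract can be made concrete. The sketch below is purely illustrative (the function names, threshold, and scale are assumptions, not the paper's implementation): a constant scheme pays a fixed bonus only inside a tolerance band, while a proportional scheme shrinks smoothly with distance to the target trajectory, giving the PPO learner a denser gradient signal:

```python
def constant_reward(dist, thresh=0.05, bonus=1.0, penalty=-0.01):
    """Fixed bonus when the tool tip is within `thresh` of the target path,
    small penalty otherwise; the signal is flat away from the band."""
    return bonus if dist < thresh else penalty

def proportional_reward(dist, scale=1.0):
    """Reward decays smoothly with distance to the target trajectory,
    so every step toward the path increases the return."""
    return scale / (1.0 + dist)

# the proportional scheme distinguishes two off-path states
# that the constant scheme scores identically
r_near = proportional_reward(0.1)
r_far = proportional_reward(0.5)
```

This denser shaping is one plausible reason proportional rewards outperformed constant rewards in the study, though the abstract does not state the exact functional forms used.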
Collapse
Affiliation(s)
| | - Animesh Garg
- Vector Institute and Department of Computer Science, University of Toronto, Toronto, ON, Canada
| | - Hanjay Wang
- Department of Cardiothoracic Surgery, 198869Stanford University, Stanford, CA, USA
| | - Y Joseph Woo
- Department of Cardiothoracic Surgery, 198869Stanford University, Stanford, CA, USA.,Department of Bioengineering, 198869Stanford University, Stanford, CA, USA
| | - Marco Pavone
- Department of Aeronautics and Astronautics, 198869Stanford University, Stanford, CA, USA
| | - Jack Boyd
- Department of Cardiothoracic Surgery, 198869Stanford University, Stanford, CA, USA
| |
Collapse
|
24
|
Sands T. Inducing Performance of Commercial Surgical Robots in Space. SENSORS (BASEL, SWITZERLAND) 2023; 23:1510. [PMID: 36772552 PMCID: PMC9920638 DOI: 10.3390/s23031510] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/14/2022] [Revised: 01/22/2023] [Accepted: 01/24/2023] [Indexed: 06/18/2023]
Abstract
Pre-existing surgical robotic systems are sold with electronics (sensors and controllers) that can prove difficult to retroactively improve when newly developed methods are proposed. Improvements must somehow be "imposed" upon the original robotic systems. What options are available for imposing performance on pre-existing, common systems, and how do the options compare? Optimization often assumes idealized systems, leading to open-loop results (lacking feedback from sensors); this manuscript investigates the utility of prefiltering and other modern methods applied to non-idealized systems, including fusion of noisy sensors and the so-called "fictional forces" associated with measurement of displacements in rotating reference frames. A dozen modern approaches are compared as the main contribution of this work. Four methods are idealized cases establishing a valid theoretical comparative benchmark. Subsequently, eight modern methods are compared against the theoretical benchmark and against the pre-existing robotic systems. The two best performing methods included one modern application of a classical approach (velocity control) and one modern approach derived using Pontryagin's methods of systems theory, including Hamiltonian minimization, adjoint equations, and terminal transversality of the endpoint Lagrangian. The key novelty presented is the best performing method, called prefiltered open-loop optimal + transport decoupling, which achieved 1-3 percent attitude tracking performance of the robotic instrument with a two percent reduction in computational burden and without increased costs (effort).
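The prefiltering idea studied above can be illustrated with a minimal first-order low-pass sketch applied to an open-loop command sequence before it reaches the robot. The smoothing factor and the command sequence are assumptions for illustration, not the manuscript's actual filter design.

```python
def prefilter(commands, alpha=0.2):
    """First-order low-pass prefilter: smooths an open-loop command
    sequence so abrupt steps do not excite the robot's dynamics.
    (alpha is an assumed smoothing factor in (0, 1].)"""
    state = commands[0]
    filtered = []
    for u in commands:
        state = alpha * u + (1 - alpha) * state
        filtered.append(state)
    return filtered
```

Because the filter acts on the command stream rather than on the controller internals, it is one way to "impose" improved behavior on a closed commercial system without modifying its electronics.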
Collapse
Affiliation(s)
- Timothy Sands
- Department of Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY 14853, USA
| |
Collapse
|
25
|
Design and evaluation of vascular interventional robot system for complex coronary artery lesions. Med Biol Eng Comput 2023; 61:1365-1380. [PMID: 36705768 DOI: 10.1007/s11517-023-02775-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2022] [Accepted: 01/05/2023] [Indexed: 01/28/2023]
Abstract
At present, most vascular intervention robots cannot cope with the complex coronary lesions that are common in the clinic. Moreover, the lack of effective force feedback increases the risk of surgery. In this paper, a vascular interventional robot that can collaboratively deliver multiple interventional instruments has been developed to assist doctors in operating on complex lesions. Based on the doctor's skills and the delivery principle of interventional instruments, the master and slave manipulators of the robot system are designed. Haptic force feedback is generated through a resistance-measuring mechanism and an active drag system. In addition, a force feedback control strategy based on force-velocity mapping is proposed to realize continuous changes in force and avoid vibration. The proposed robot system was evaluated through a series of experiments. The experimental results show that the system can accurately measure the delivery resistance of interventional instruments and provide haptic force feedback to doctors. The capability of the system to collaboratively deliver multiple interventional instruments is effective. Therefore, the robot system can be considered feasible and effective.
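The force-velocity mapping strategy described above can be sketched as a continuous ramp-down of feed velocity as measured delivery resistance rises. All parameter values and the linear mapping itself are hypothetical; the abstract does not specify the actual mapping function.

```python
def force_to_velocity(resistance_n, v_max=5.0, f_safe=2.0):
    """Map measured delivery resistance (N) to a guidewire feed
    velocity (mm/s): full speed at zero resistance, continuously
    ramped down to zero at an assumed safety threshold f_safe, and
    zero beyond it. A continuous mapping avoids the abrupt velocity
    steps that would be felt as vibration at the master handle."""
    if resistance_n >= f_safe:
        return 0.0
    return v_max * (1.0 - resistance_n / f_safe)
```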
Collapse
|
26
|
Chellal AA, Lima J, Gonçalves J, Fernandes FP, Pacheco F, Monteiro F, Brito T, Soares S. Robot-Assisted Rehabilitation Architecture Supported by a Distributed Data Acquisition System. SENSORS (BASEL, SWITZERLAND) 2022; 22:9532. [PMID: 36502234 PMCID: PMC9740827 DOI: 10.3390/s22239532] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 11/20/2022] [Accepted: 11/26/2022] [Indexed: 06/12/2023]
Abstract
Rehabilitation robotics aims to facilitate the rehabilitation procedure for patients and physical therapists. The field has a relatively long history dating back to the 1990s; however, the implementation of such robots and the standardisation of their application in the medical field have not kept the same pace, mainly due to the complexity of reproducing them and the need for approval by the authorities. This paper describes an architecture that can be applied to industrial robots to promote their application in healthcare ecosystems. Control of the robotic arm is performed using software called SmartHealth, offering 2 Degrees of Autonomy (DOA). Data are gathered through electromyography (EMG) and force sensors at a frequency of 45 Hz. The work also demonstrates the capability of such small robots to perform medical procedures. Four exercises focused on shoulder rehabilitation (passive, restricted active-assisted, free active-assisted, and Activities of Daily Living (ADL)) were carried out, confirming the viability of the proposed architecture and the potential of small robots (i.e., the UR3) in accomplishing rehabilitation procedures. This robot can perform the majority of the default exercises in addition to ADLs; nevertheless, its limits were also uncovered, mainly its restricted Range of Motion (ROM) and cost.
Collapse
Affiliation(s)
- Arezki Abderrahim Chellal
- Research Centre in Digitalization and Intelligent Robotics CeDRI, Instituto Politécnico de Bragança, 5300-252 Bragança, Portugal
- Laboratório para a Sustentabilidade e Tecnologia em Regiões de Montanha (SusTEC), Instituto Politécnico de Bragança, 5300-252 Bragança, Portugal
- Engineering Department, School of Sciences and Technology, UTAD, 5000-801 Vila Real, Portugal
| | - José Lima
- Research Centre in Digitalization and Intelligent Robotics CeDRI, Instituto Politécnico de Bragança, 5300-252 Bragança, Portugal
- Laboratório para a Sustentabilidade e Tecnologia em Regiões de Montanha (SusTEC), Instituto Politécnico de Bragança, 5300-252 Bragança, Portugal
- INESC TEC—INESC Technology and Science, 4200-465 Porto, Portugal
| | - José Gonçalves
- Research Centre in Digitalization and Intelligent Robotics CeDRI, Instituto Politécnico de Bragança, 5300-252 Bragança, Portugal
- Laboratório para a Sustentabilidade e Tecnologia em Regiões de Montanha (SusTEC), Instituto Politécnico de Bragança, 5300-252 Bragança, Portugal
- INESC TEC—INESC Technology and Science, 4200-465 Porto, Portugal
| | - Florbela P. Fernandes
- Research Centre in Digitalization and Intelligent Robotics CeDRI, Instituto Politécnico de Bragança, 5300-252 Bragança, Portugal
- Laboratório para a Sustentabilidade e Tecnologia em Regiões de Montanha (SusTEC), Instituto Politécnico de Bragança, 5300-252 Bragança, Portugal
| | - Fátima Pacheco
- Research Centre in Digitalization and Intelligent Robotics CeDRI, Instituto Politécnico de Bragança, 5300-252 Bragança, Portugal
- Laboratório para a Sustentabilidade e Tecnologia em Regiões de Montanha (SusTEC), Instituto Politécnico de Bragança, 5300-252 Bragança, Portugal
| | - Fernando Monteiro
- Research Centre in Digitalization and Intelligent Robotics CeDRI, Instituto Politécnico de Bragança, 5300-252 Bragança, Portugal
- Laboratório para a Sustentabilidade e Tecnologia em Regiões de Montanha (SusTEC), Instituto Politécnico de Bragança, 5300-252 Bragança, Portugal
| | - Thadeu Brito
- Research Centre in Digitalization and Intelligent Robotics CeDRI, Instituto Politécnico de Bragança, 5300-252 Bragança, Portugal
- Laboratório para a Sustentabilidade e Tecnologia em Regiões de Montanha (SusTEC), Instituto Politécnico de Bragança, 5300-252 Bragança, Portugal
- INESC TEC—INESC Technology and Science, 4200-465 Porto, Portugal
- Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal
| | - Salviano Soares
- Engineering Department, School of Sciences and Technology, UTAD, 5000-801 Vila Real, Portugal
- IEETA—Institute of Electronics and Informatics Engineering of Aveiro, 3810-193 Aveiro, Portugal
| |
Collapse
|
27
|
Oliveira B, Morais P, Torres HR, Baptista AL, Fonseca JC, Vilaça JL. Characterization of the Workspace and Limits of Operation of Laser Treatments for Vascular Lesions of the Lower Limbs. SENSORS (BASEL, SWITZERLAND) 2022; 22:7481. [PMID: 36236577 PMCID: PMC9573018 DOI: 10.3390/s22197481] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/09/2022] [Revised: 09/26/2022] [Accepted: 09/28/2022] [Indexed: 06/16/2023]
Abstract
The growth of the aging population brings numerous challenges to the health and aesthetic segments. Here, the use of laser therapy in dermatology is expected to increase, since it allows non-invasive and infection-free treatments. However, existing laser devices require doctors to manually handle the laser and visually inspect the skin. As such, the treatment outcome is dependent on the user's expertise, which frequently results in ineffective treatments and side effects. This study aims to determine the workspace and limits of operation of laser treatments for vascular lesions of the lower limbs. The results of this study can be used to develop a robotic-guided technology to help address the aforementioned problems. Specifically, the workspace and limits of operation were studied in eight vascular laser treatments. To this end, an electromagnetic tracking system was used to collect the real-time position of the laser during the treatments. The computed average workspace length, height, and width were 0.84 ± 0.15, 0.41 ± 0.06, and 0.78 ± 0.16 m, respectively. This corresponds to an average treatment volume of 0.277 ± 0.093 m3. The average treatment time was 23.2 ± 10.2 min, with an average laser orientation of 40.6 ± 5.6 degrees. Additionally, average velocities of 0.124 ± 0.103 m/s and 31.5 ± 25.4 deg/s were measured. This knowledge characterizes the vascular laser treatment workspace and limits of operation, which may ease the understanding required for future robotic system development.
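The workspace characterization above reduces to computing axis-aligned extents and an enclosed volume from tracked tool positions. A minimal sketch follows; the sample coordinates in the test are illustrative, not the study's data.

```python
def workspace_extent(samples):
    """Axis-aligned workspace extents (length, height, width) and
    enclosed volume from a list of (x, y, z) tracker positions in
    metres, as recorded by an electromagnetic tracking system."""
    xs, ys, zs = zip(*samples)
    dims = (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))
    return dims, dims[0] * dims[1] * dims[2]
```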
Collapse
Affiliation(s)
- Bruno Oliveira
- 2Ai—School of Technology, IPCA, 4750-810 Barcelos, Portugal
- Algoritmi Center, School of Engineering, University of Minho, 4800-058 Guimarães, Portugal
- LASI—Associate Laboratory of Intelligent Systems, 4800-058 Guimarães, Portugal
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, 4710-057 Braga, Portugal
- ICVS/3B’s—PT Government Associate Laboratory, 4710-057 Braga/Guimarães, Portugal
| | - Pedro Morais
- 2Ai—School of Technology, IPCA, 4750-810 Barcelos, Portugal
- LASI—Associate Laboratory of Intelligent Systems, 4800-058 Guimarães, Portugal
| | - Helena R. Torres
- 2Ai—School of Technology, IPCA, 4750-810 Barcelos, Portugal
- Algoritmi Center, School of Engineering, University of Minho, 4800-058 Guimarães, Portugal
- LASI—Associate Laboratory of Intelligent Systems, 4800-058 Guimarães, Portugal
- Life and Health Sciences Research Institute (ICVS), School of Medicine, University of Minho, 4710-057 Braga, Portugal
- ICVS/3B’s—PT Government Associate Laboratory, 4710-057 Braga/Guimarães, Portugal
| | | | - Jaime C. Fonseca
- Algoritmi Center, School of Engineering, University of Minho, 4800-058 Guimarães, Portugal
- LASI—Associate Laboratory of Intelligent Systems, 4800-058 Guimarães, Portugal
| | - João L. Vilaça
- 2Ai—School of Technology, IPCA, 4750-810 Barcelos, Portugal
- LASI—Associate Laboratory of Intelligent Systems, 4800-058 Guimarães, Portugal
| |
Collapse
|
28
|
Horowitz MC, Kahn L, Macdonald J, Schneider J. COVID-19 and public support for autonomous technologies—Did the pandemic catalyze a world of robots? PLoS One 2022; 17:e0273941. [PMID: 36170283 PMCID: PMC9518891 DOI: 10.1371/journal.pone.0273941] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2022] [Accepted: 08/18/2022] [Indexed: 11/25/2022] Open
Abstract
By introducing a novel risk to human interaction, COVID-19 may have galvanized interest in uses of artificial intelligence (AI). But was the pandemic a large enough catalyst to change public attitudes about the costs and benefits of autonomous systems whose operations increasingly rely on AI? To answer this question, we use a preregistered research design that exploits variation across the 2018 and 2020 waves of the CCES/CES, a nationally representative survey of adults in the United States. We compare support for autonomous cars, autonomous surgeries, weapons, and cyber defense before and after the beginning of the COVID-19 pandemic. We find that, despite the incentives created by COVID-19, the pandemic did not increase support for most of these technologies, except in the case of autonomous surgery among those who know someone who died of COVID-19. The results hold even when controlling for a variety of relevant political and demographic factors. The pandemic did little to push potential autonomous vehicle users to support adoption. Further, American concerns about autonomous weapons, including cyber defense, remain sticky and were perhaps exacerbated over the last two years. These findings suggest that the relationship between the COVID-19 pandemic and the adoption of many of these systems is far more nuanced and complex than headlines may suggest.
Collapse
Affiliation(s)
- Michael C. Horowitz
- Department of Political Science, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
| | - Lauren Kahn
- Council on Foreign Relations, Washington, D.C., United States of America
| | - Julia Macdonald
- Department of Political Science, University of Denver, Denver, Colorado, United States of America
| | - Jacquelyn Schneider
- Freeman Spogli Institute, Stanford University, Stanford, California, United States of America
| |
Collapse
|
29
|
Alahmari AR, Alrabghi KK, Dighriri IM. An Overview of the Current State and Perspectives of Pharmacy Robot and Medication Dispensing Technology. Cureus 2022; 14:e28642. [PMID: 36196333 PMCID: PMC9525046 DOI: 10.7759/cureus.28642] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/31/2022] [Indexed: 11/20/2022] Open
Abstract
It has been widely reported that many patients die from errors in the dispensing of prescribed medications. These errors occur for a wide range of reasons, but the common denominator in all cases is human involvement. A hospital pharmacy has a very critical task, especially with growing patient numbers. The increasing number of prescriptions that need to be filled daily reduces the time that staff can devote to each individual prescription, which may increase the rate of human error. From this arises the need for robot-assisted pharmacies to dispense drugs and thereby eradicate or substantially reduce human error. The pharmacy robot is one of the most significant technologies playing a prominent role in the advancement of hospital pharmacy systems. The purpose of this review paper is to cover the pharmacy robot concept and the published literature reporting on pharmacy robot technology as one of the most important applications of artificial intelligence (AI) in pharmacology. Although the reported impact of pharmacy robots has been increasingly beneficial for overall improvement, staff morale, and pharmacy functionality, mechanical errors still occur. These errors, in turn, require human intervention. The key takeaway from this study is that robots or machines cannot entirely replace human duties, which means that human interventions will continue to have an impact on workflow and throughput.
Collapse
|
30
|
Ehrlich J, Jamzad A, Asselin M, Rodgers JR, Kaufmann M, Haidegger T, Rudan J, Mousavi P, Fichtinger G, Ungi T. Sensor-Based Automated Detection of Electrosurgical Cautery States. SENSORS (BASEL, SWITZERLAND) 2022; 22:5808. [PMID: 35957364 PMCID: PMC9371045 DOI: 10.3390/s22155808] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Revised: 07/30/2022] [Accepted: 08/01/2022] [Indexed: 02/04/2023]
Abstract
In computer-assisted surgery, it is typically required to detect when the tool comes into contact with the patient. In activated electrosurgery, this is known as the energy event. By continuously tracking the electrosurgical tools' location using a navigation system, energy events can help determine the locations of sensor-classified tissues. Our objective was to detect the energy event and determine the settings of the electrosurgical cautery, robustly and automatically, based on sensor data. This study aims to demonstrate the feasibility of using the cautery state to detect surgical incisions without disrupting the surgical workflow. We detected current changes in the wires of the cautery device and grounding pad using non-invasive current sensors and an oscilloscope. Open-source software was implemented to apply machine learning to the sensor data to detect energy events and cautery settings. Our methods classified each cautery state with an average accuracy of 95.56% across different tissue types and energy-level parameters altered by surgeons during an operation. Our results demonstrate the feasibility of automatically identifying energy events during surgical incisions, which could be an important safety feature in robotic and computer-integrated surgery. This study provides a key step towards locating tissue classifications during breast cancer operations and reducing the rate of positive margins.
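The sensor-based detection pipeline above can be sketched as feature extraction over a window of current samples followed by a simple classifier. This nearest-centroid example is a stand-in for the paper's unspecified machine-learning model, and all feature choices and values are invented for illustration.

```python
def features(window):
    """RMS amplitude and absolute peak of one current-sensor window
    (two assumed features; the study's actual features are not
    specified in the abstract)."""
    rms = (sum(s * s for s in window) / len(window)) ** 0.5
    peak = max(abs(s) for s in window)
    return (rms, peak)

def classify(window, centroids):
    """Nearest-centroid classifier over (rms, peak) features;
    `centroids` maps each cautery-state label to its mean feature
    pair learned from labeled recordings."""
    f = features(window)
    return min(centroids,
               key=lambda label: sum((a - b) ** 2
                                     for a, b in zip(f, centroids[label])))
```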
Collapse
Affiliation(s)
- Josh Ehrlich
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada; (J.E.); (A.J.); (M.A.); (J.R.R.); (P.M.); (G.F.)
| | - Amoon Jamzad
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada; (J.E.); (A.J.); (M.A.); (J.R.R.); (P.M.); (G.F.)
| | - Mark Asselin
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada; (J.E.); (A.J.); (M.A.); (J.R.R.); (P.M.); (G.F.)
| | - Jessica Robin Rodgers
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada; (J.E.); (A.J.); (M.A.); (J.R.R.); (P.M.); (G.F.)
| | - Martin Kaufmann
- Department of Surgery, Kingston Health Sciences Centre, Kingston, ON K7L 2V7, Canada; (M.K.); (J.R.)
| | - Tamas Haidegger
- University Research and Innovation Center (EKIK), Óbuda University, 1034 Budapest, Hungary
| | - John Rudan
- Department of Surgery, Kingston Health Sciences Centre, Kingston, ON K7L 2V7, Canada; (M.K.); (J.R.)
| | - Parvin Mousavi
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada; (J.E.); (A.J.); (M.A.); (J.R.R.); (P.M.); (G.F.)
| | - Gabor Fichtinger
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada; (J.E.); (A.J.); (M.A.); (J.R.R.); (P.M.); (G.F.)
| | - Tamas Ungi
- School of Computing, Queen’s University, Kingston, ON K7L 3N6, Canada; (J.E.); (A.J.); (M.A.); (J.R.R.); (P.M.); (G.F.)
| |
Collapse
|
31
|
Patel N, Urias M, Ebrahimi A, Taylor RH, Gehlbach P, Iordachita I. Force-based Control for Safe Robot-assisted Retinal Interventions: In Vivo Evaluation in Animal Studies. IEEE TRANSACTIONS ON MEDICAL ROBOTICS AND BIONICS 2022; 4:578-587. [PMID: 36033345 PMCID: PMC9410268 DOI: 10.1109/tmrb.2022.3191441] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
In recent years, robotic assistance in vitreoretinal surgery has moved from the benchtop environment to the operating room. Emerging robotic systems improve tool manoeuvrability, provide precise tool motions in a constrained intraocular environment, and reduce or remove hand tremor. However, often due to their stiff and bulky mechanical structure, they diminish the perception of tool-to-sclera (scleral) forces, on which the surgeon relies for eyeball manipulation. In this paper, we measure these scleral forces and actively control the robot to keep them under a predefined threshold. Scleral forces are measured using a Fiber Bragg Grating (FBG) based force-sensing instrument in an in vivo rabbit eye model under manual operation, cooperative robotic assistance with no scleral force control (NC), adaptive scleral force norm control (ANC), and adaptive scleral force component control (ACC). To the best of our knowledge, this is the first time that scleral forces have been measured in an in vivo eye model during robot-assisted vitreoretinal procedures. An experienced retinal surgeon repeated an intraocular tool manipulation (ITM) task 10 times in four in vivo rabbit eyes and a phantom eyeball, for a total of 50 repetitions in each control mode. Statistical analysis shows that the ANC and ACC control schemes restricted the duration of undesired scleral forces to 4.41% and 14.53%, compared to 43.30% and 35.28% in the manual and NC cases, respectively, during the in vivo studies. These results show that active robot control schemes can maintain applied scleral forces below a desired threshold during robot-assisted vitreoretinal surgery. The scleral force measurements in this study may enable a better understanding of tool-to-sclera interactions during vitreoretinal surgery, and the proposed control strategies could be extended to other microsurgical and robot-assisted interventions.
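The adaptive norm control (ANC) idea described above can be sketched as a threshold-gated corrective velocity opposing the measured scleral force. The threshold and gain values here are assumptions for illustration, not the study's parameters.

```python
def scleral_norm_control(fx, fy, threshold=0.12, gain=5.0):
    """Norm-control sketch: while the measured scleral force norm
    stays below an assumed threshold (N), the robot is left alone;
    above it, return a corrective velocity opposing the force
    direction, scaled by the excess over the threshold."""
    norm = (fx * fx + fy * fy) ** 0.5
    if norm <= threshold:
        return (0.0, 0.0)
    excess = norm - threshold
    return (-gain * excess * fx / norm, -gain * excess * fy / norm)
```

Gating on the norm rather than individual components is what distinguishes ANC from the component-wise ACC variant reported in the study.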
Collapse
Affiliation(s)
- Niravkumar Patel
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Indian Institute of Technology Madras, Chennai, India
| | - Muller Urias
- Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD 21287 USA
| | - Ali Ebrahimi
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA
| | - Russell H Taylor
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA
| | - Peter Gehlbach
- Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD 21287 USA
| | - Iulian Iordachita
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA
| |
Collapse
|
32
|
Shape estimation of the anterior part of a flexible ureteroscope for intraoperative navigation. Int J Comput Assist Radiol Surg 2022; 17:1787-1799. [DOI: 10.1007/s11548-022-02670-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/25/2021] [Accepted: 05/01/2022] [Indexed: 11/05/2022]
|
33
|
Fiorini P, Goldberg KY, Liu Y, Taylor RH. Concepts and Trends in Autonomy for Robot-Assisted Surgery. PROCEEDINGS OF THE IEEE. INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS 2022; 110:993-1011. [PMID: 35911127 PMCID: PMC7613181 DOI: 10.1109/jproc.2022.3176828] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/02/2023]
Abstract
Surgical robots have been widely adopted, with over 4000 robots being used in practice daily. However, these are telerobots that are fully controlled by skilled human surgeons. Introducing "surgeon-assist" capabilities (some forms of autonomy) has the potential to reduce tedium and increase consistency, analogous to driver-assist functions for lane keeping, cruise control, and parking. This article examines the scientific and technical background of robotic autonomy in surgery and some of its ethical, social, and legal implications. We describe several autonomous surgical tasks that have been automated in laboratory settings, as well as research concepts and trends.
Collapse
Affiliation(s)
- Paolo Fiorini
- Department of Computer Science, University of Verona, 37134 Verona, Italy
| | - Ken Y. Goldberg
- Department of Industrial Engineering and Operations Research and the Department of Electrical Engineering and Computer Science, University of California at Berkeley, Berkeley, CA 94720 USA
| | - Yunhui Liu
- Department of Mechanical and Automation Engineering, T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong, China
| | - Russell H. Taylor
- Department of Computer Science, the Department of Mechanical Engineering, the Department of Radiology, the Department of Surgery, and the Department of Otolaryngology, Head-and-Neck Surgery, Johns Hopkins University, Baltimore, MD 21218 USA, and also with the Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD 21218 USA
| |
Collapse
|
34
|
Cheng Z, Savarimuthu TR. Monopolar, bipolar, tripolar, and tetrapolar configurations in robot assisted electrical impedance scanning. Biomed Phys Eng Express 2022; 8. [PMID: 35728560 DOI: 10.1088/2057-1976/ac7adb] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2022] [Accepted: 06/21/2022] [Indexed: 11/11/2022]
Abstract
OBJECTIVE Tissue recognition is a critical process during robot-assisted minimally invasive surgery (RMIS), and it relies on the involvement of advanced sensing technology. APPROACH In this paper, the concept of Robot Assisted Electrical Impedance Sensing (RAEIS) is utilized and further developed, aiming to sense the electrical bioimpedance of target tissue directly using existing robotic instruments and control strategies. Specifically, we present a new sensing configuration called the pseudo-tetrapolar method. With the help of robotic control, we can achieve a configuration similar to the traditional tetrapolar one, with better accuracy. MAIN RESULTS Five configurations, including monopolar, bipolar, tripolar, tetrapolar, and pseudo-tetrapolar, are analyzed and compared through simulation experiments. The advantages and disadvantages of each configuration are discussed. SIGNIFICANCE This study investigates the measurement of tissue electrical properties directly with existing robotic surgical instruments. Different sensing configurations can be realized through different connection and control strategies, making them suitable for different application scenarios.
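A key difference between the configurations compared above is whether electrode contact impedance contaminates the tissue reading. The idealized measurement-model sketch below covers only the bipolar and (pseudo-)tetrapolar cases as an illustration; the impedance values are invented, and the paper's actual simulation models are more detailed.

```python
def measured_impedance(z_tissue, z_contact, config):
    """Idealized measurement model (ohms): a bipolar reading includes
    both current-carrying electrode contact impedances, while a
    tetrapolar (or the robot's pseudo-tetrapolar) reading rejects
    them, because its separate sense electrodes carry negligible
    current and so drop negligible voltage across their contacts."""
    if config == "bipolar":
        return z_tissue + 2 * z_contact
    if config in ("tetrapolar", "pseudo-tetrapolar"):
        return z_tissue
    raise ValueError("unknown configuration: " + config)
```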
Collapse
Affiliation(s)
- Zhuoqi Cheng
- MMMI, SDU, Campusvej 55, 5230 Odense, Denmark
| | | |
Collapse
|
35
|
Infrastructural Requirements and Regulatory Challenges of a Sustainable Urban Air Mobility Ecosystem. BUILDINGS 2022. [DOI: 10.3390/buildings12060747] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
The United Nations has long put the sustainability challenges of urbanization on the discussion agenda; these have both direct and indirect effects on future regulation strategies. Undoubtedly, most initiatives target better quality of life, improved access to services and goods, and environmental protection. As commercial aerial urban transportation may become feasible in the near future, the connection possibilities between cities and regions scale up. It is expected that the growing number of vertical takeoff and landing vehicles used for passenger and goods transportation will change the infrastructure of cities and have a significant effect on cityscapes as well. In addition to the widely discussed regulatory and safety issues, the introduction of elevated traffic also raises environmental concerns, which influence the existing and required service and control infrastructure and thus significantly affect sustainability. This paper provides a narrated overview of the most common aspects of safety, licensing, and regulation for passenger vertical takeoff and landing vehicles, and highlights the most important aspects of infrastructure planning, design, and operation that should be taken into account to maintain and efficiently operate this new mode of transportation, leading to a sustainable urban air mobility ecosystem.
Collapse
|
36
|
Fathabadi FR, Grantner JL, Shebrain SA, Abdel-Qader I. Fuzzy logic supervisor –A surgical skills assessment system using multi-class detection of laparoscopic box-trainer instruments. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-213243] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Recent developments in deep learning can be used in skill assessment for laparoscopic surgeons. In Minimally Invasive Surgery (MIS), surgeons must acquire many skills before carrying out a real operation. The laparoscopic surgical box-trainer allows surgery residents to train on specific skills that are not traditionally taught to them. This study aims to automatically detect the tips of laparoscopic instruments, localize a point, and evaluate the detection accuracy in order to provide a valuable assessment, expedite the development of surgical skills, and assess trainees' performance using a Multi-Input-Single-Output fuzzy logic supervisor system. The output of the fuzzy logic assessment is the performance evaluation of the surgeon, quantified as a percentage. Based on the experimental results, the trained SSD MobileNet V2 FPN can identify each instrument with 70% fidelity, while the trained SSD ResNet50 V1 FPN can detect each instrument with 90% fidelity, in each location within a region of interest, and determine their relative distance with over 65% and 80% reliability, respectively. This method can be applied to different types of laparoscopic tooltip detection. Because there were a few instances in which detection failed, and the system was designed to generate a pass-fail assessment, we recommend improving the measurement algorithm and the performance assessment by adding a camera to the system and measuring the distance from multiple perspectives.
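A Multi-Input-Single-Output fuzzy supervisor of the kind described above can be sketched with triangular membership functions and a weighted-average defuzzification producing a percentage. The membership breakpoints, input choices, and weights below are invented for illustration and are not the study's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a, peaks at b,
    falls back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def performance_score(detection_quality, task_time_s):
    """Two-input, single-output fuzzy sketch: degree of membership
    in 'good' for each input, combined by an assumed weighted
    average and scaled to a percentage."""
    good_detection = tri(detection_quality, 0.5, 1.0, 1.5)
    good_time = tri(task_time_s, 0.0, 30.0, 120.0)
    return 100.0 * (0.6 * good_detection + 0.4 * good_time)
```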
Collapse
Affiliation(s)
| | - Janos L. Grantner
- Electrical and Computer Engineering Department, Western Michigan University, USA
| | - Saad A. Shebrain
- Department of Surgery, of the Homer Stryker M.D. School of Medicine, Western Michigan University, USA
| | - Ikhlas Abdel-Qader
- Electrical and Computer Engineering Department, Western Michigan University, USA
| |
Collapse
|
37
|
Abstract
Although substantial advancements have been achieved in robot-assisted surgery, the blueprint of existing snake robotics predominantly focuses on preliminary structural design, control, and human–robot interfaces, with features that have not been particularly explored in the literature. This paper aims to review the planning and operation concepts of hyper-redundant serpentine robots for surgical use, as well as future challenges and solutions for better manipulation. Current researchers in the manufacture and navigation of snake robots face issues such as the low dexterity of end-effectors around delicate organs, state estimation, and the lack of depth perception on two-dimensional screens. A wide range of robots has been analysed, such as the i²Snake robot, inspiring the use of force and position feedback, visual servoing, and augmented reality (AR). We present the types of actuation methods, robot kinematics, dynamics, sensing, and the prospects of AR integration in snake robots, while addressing their shortcomings to facilitate the surgeon's task. For smoother gait control, validation and optimization algorithms such as deep learning databases are examined to mitigate redundancy in module linkage backlash and accidental self-collision. In essence, we aim to provide an outlook on robot configurations during motion by enhancing their material compositions within anatomical biocompatibility standards.
Collapse
|
38
|
Romanov D, Korostynska O, Lekang OI, Mason A. Towards human-robot collaboration in meat processing: Challenges and possibilities. J FOOD ENG 2022. [DOI: 10.1016/j.jfoodeng.2022.111117] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
|
39
|
Zhu Y, Smith A, Hauser K. Automated Heart and Lung Auscultation in Robotic Physical Examinations. IEEE Robot Autom Lett 2022. [DOI: 10.1109/lra.2022.3149576] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
40
|
Nagy TD, Haidegger T. Performance and Capability Assessment in Surgical Subtask Automation. SENSORS (BASEL, SWITZERLAND) 2022; 22:2501. [PMID: 35408117 PMCID: PMC9002652 DOI: 10.3390/s22072501] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/28/2022] [Revised: 03/16/2022] [Accepted: 03/19/2022] [Indexed: 02/04/2023]
Abstract
Robot-Assisted Minimally Invasive Surgery (RAMIS) has reshaped standard clinical practice during the past two decades. Many believe that the next big step in the advancement of RAMIS will be partial autonomy, which may reduce the fatigue and cognitive load on the surgeon by performing the monotonous, time-consuming subtasks of the surgical procedure autonomously. Although serious research efforts are being devoted to this area worldwide, standard evaluation methods, metrics and benchmarking techniques have not yet been established. This article aims to fill that void in the research domain of surgical subtask automation by proposing standard methodologies for performance evaluation. For that purpose, a novel characterization model for surgical automation is presented. Current metrics for performance evaluation and comparison are reviewed and analyzed, and a workflow model is presented that can help researchers identify and apply their choice of metrics. Existing systems and setups that serve, or could serve, as benchmarks are also introduced, and the need for standard benchmarks in the field is articulated. Finally, the matters of Human-Machine Interface (HMI) quality and robustness, and the related legal and ethical issues, are presented.
Collapse
Affiliation(s)
- Tamás D. Nagy
- Antal Bejczy Center for Intelligent Robotics, EKIK, Óbuda University, Bécsi út 96/B, 1034 Budapest, Hungary;
- Doctoral School of Applied Informatics and Applied Mathematics, Óbuda University, Bécsi út 96/B, 1034 Budapest, Hungary
- Biomatics Institute, John von Neumann Faculty of Informatics, Óbuda University, Bécsi út 96/B, 1034 Budapest, Hungary
| | - Tamás Haidegger
- Antal Bejczy Center for Intelligent Robotics, EKIK, Óbuda University, Bécsi út 96/B, 1034 Budapest, Hungary;
- Austrian Center for Medical Innovation and Technology (ACMIT), Viktor-Kaplan-Straße 2/1, 2700 Wiener Neustadt, Austria
| |
Collapse
|
41
|
Rácz M, Noboa E, Détár B, Nemes Á, Galambos P, Szűcs L, Márton G, Eigner G, Haidegger T. PlatypOUs-A Mobile Robot Platform and Demonstration Tool Supporting STEM Education. SENSORS (BASEL, SWITZERLAND) 2022; 22:2284. [PMID: 35336455 PMCID: PMC8949973 DOI: 10.3390/s22062284] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/05/2021] [Revised: 03/07/2022] [Accepted: 03/08/2022] [Indexed: 11/17/2022]
Abstract
Given the rising popularity of robotics, student-driven robot development projects play a key role in attracting more people to engineering and science studies. This article presents the early development process of an open-source mobile robot platform, named PlatypOUs, which can be remotely controlled via electromyography (EMG) using the MindRove brain-computer interface (BCI) headset as the signal-acquisition sensor. The gathered bio-signals are classified by a Support Vector Machine (SVM), whose results are translated into motion commands for the mobile platform. Along with the physical mobile robot platform, a virtual environment was implemented using Gazebo (an open-source 3D robotics simulator) inside the Robot Operating System (ROS) framework, with the same capabilities as the real-world device; this can be used for development and testing purposes. The main goal of the PlatypOUs project is to create a tool for STEM education and extracurricular activities, particularly laboratory practices and demonstrations. With the physical robot, the aim is to improve awareness of STEM outside and beyond the scope of regular education programmes. The project involves several disciplines, including system design, control engineering, mobile robotics and machine learning, with several application aspects in each. Using the PlatypOUs platform and the simulator provides students and self-learners with a firsthand exercise, and teaches them to deal with complex engineering problems in a professional, yet intriguing way.
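The signal-to-command pipeline above could be sketched as follows. To stay dependency-free, a nearest-centroid classifier stands in for the paper's SVM, and the window features (mean absolute value, zero crossings) and command names are illustrative assumptions.

```python
def features(window):
    """Two classic surface-EMG window features."""
    mav = sum(abs(s) for s in window) / len(window)               # mean absolute value
    zc = sum(1 for a, b in zip(window, window[1:]) if a * b < 0)  # zero crossings
    return (mav, zc)

class CentroidClassifier:
    """Minimal stand-in for an SVM: assign to the nearest class centroid."""
    def fit(self, X, y):
        groups = {}
        for x, label in zip(X, y):
            groups.setdefault(label, []).append(x)
        self.centroids = {
            label: tuple(sum(col) / len(col) for col in zip(*pts))
            for label, pts in groups.items()
        }
        return self

    def predict(self, x):
        return min(self.centroids,
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(x, self.centroids[c])))

# Toy training data: low-amplitude windows -> "stop", high-amplitude -> "forward".
train = [([0.1, -0.1, 0.05, -0.05], "stop"),
         ([0.9, -0.8, 1.0, -0.9], "forward")]
clf = CentroidClassifier().fit([features(w) for w, _ in train],
                               [label for _, label in train])
cmd = clf.predict(features([0.8, -0.7, 0.9, -1.0]))
```

In the real system the predicted label would be mapped to a ROS velocity command for the mobile base.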
Collapse
Affiliation(s)
- Melinda Rácz
- Research Centre for Natural Sciences, Eötvös Loránd Research Network, Magyar Tudósok krt. 2., H-1117 Budapest, Hungary; (M.R.); (G.M.)
- János Szentágothai Doctoral School of Neurosciences, Semmelweis University, Üllői út 26, H-1085 Budapest, Hungary
- Selye János Doctoral College for Advanced Studies, Semmelweis University, Üllői út 22, H-1085 Budapest, Hungary
| | - Erick Noboa
- Antal Bejczy Center for Intelligent Robotics, Robotics Special College, University Research and Innovation Center, Óbuda University, Bécsi út 96/B, H-1034 Budapest, Hungary; (E.N.); (B.D.); (Á.N.); (P.G.); (L.S.); (T.H.)
| | - Borsa Détár
- Antal Bejczy Center for Intelligent Robotics, Robotics Special College, University Research and Innovation Center, Óbuda University, Bécsi út 96/B, H-1034 Budapest, Hungary; (E.N.); (B.D.); (Á.N.); (P.G.); (L.S.); (T.H.)
| | - Ádám Nemes
- Antal Bejczy Center for Intelligent Robotics, Robotics Special College, University Research and Innovation Center, Óbuda University, Bécsi út 96/B, H-1034 Budapest, Hungary; (E.N.); (B.D.); (Á.N.); (P.G.); (L.S.); (T.H.)
| | - Péter Galambos
- Antal Bejczy Center for Intelligent Robotics, Robotics Special College, University Research and Innovation Center, Óbuda University, Bécsi út 96/B, H-1034 Budapest, Hungary; (E.N.); (B.D.); (Á.N.); (P.G.); (L.S.); (T.H.)
- Biomatics and Applied Artificial Intelligence Institution, John von Neumann Faculty of Informatics, Óbuda University, Bécsi út 96/B, H-1034 Budapest, Hungary
| | - László Szűcs
- Antal Bejczy Center for Intelligent Robotics, Robotics Special College, University Research and Innovation Center, Óbuda University, Bécsi út 96/B, H-1034 Budapest, Hungary; (E.N.); (B.D.); (Á.N.); (P.G.); (L.S.); (T.H.)
| | - Gergely Márton
- Research Centre for Natural Sciences, Eötvös Loránd Research Network, Magyar Tudósok krt. 2., H-1117 Budapest, Hungary; (M.R.); (G.M.)
- MindRove Kft., Hédervári út 43, H-9026 Győr, Hungary
- Faculty of Information Technology and Bionics, Pázmány Péter Catholic University, Práter utca 50/a, H-1083 Budapest, Hungary
| | - György Eigner
- Antal Bejczy Center for Intelligent Robotics, Robotics Special College, University Research and Innovation Center, Óbuda University, Bécsi út 96/B, H-1034 Budapest, Hungary; (E.N.); (B.D.); (Á.N.); (P.G.); (L.S.); (T.H.)
- Biomatics and Applied Artificial Intelligence Institution, John von Neumann Faculty of Informatics, Óbuda University, Bécsi út 96/B, H-1034 Budapest, Hungary
- Physiological Controls Research Center, University Research and Innovation Center, Óbuda University, Bécsi út 96/B, H-1034 Budapest, Hungary
| | - Tamás Haidegger
- Antal Bejczy Center for Intelligent Robotics, Robotics Special College, University Research and Innovation Center, Óbuda University, Bécsi út 96/B, H-1034 Budapest, Hungary; (E.N.); (B.D.); (Á.N.); (P.G.); (L.S.); (T.H.)
- Biomatics and Applied Artificial Intelligence Institution, John von Neumann Faculty of Informatics, Óbuda University, Bécsi út 96/B, H-1034 Budapest, Hungary
| |
Collapse
|
42
|
Saeidi H, Opfermann JD, Kam M, Wei S, Leonard S, Hsieh MH, Kang JU, Krieger A. Autonomous robotic laparoscopic surgery for intestinal anastomosis. Sci Robot 2022; 7:eabj2908. [PMID: 35080901 PMCID: PMC8992572 DOI: 10.1126/scirobotics.abj2908] [Citation(s) in RCA: 62] [Impact Index Per Article: 31.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Autonomous robotic surgery has the potential to provide efficacy, safety, and consistency independent of an individual surgeon's skill and experience. Autonomous anastomosis is a challenging soft-tissue surgical task because it requires intricate imaging, tissue tracking, and surgical planning techniques, as well as precise execution via highly adaptable control strategies, often in unstructured and deformable environments. In the laparoscopic setting, such surgeries are even more challenging because of the need for high maneuverability and repeatability under motion and vision constraints. Here we describe an enhanced autonomous strategy for laparoscopic soft-tissue surgery and demonstrate robotic laparoscopic small bowel anastomosis in phantom and in vivo intestinal tissues. This enhanced autonomous strategy allows the operator to select among autonomously generated surgical plans, and the robot executes a wide range of tasks independently. We then use this strategy to perform in vivo autonomous robotic laparoscopic surgery for intestinal anastomosis on porcine models over a 1-week survival period. We compared the anastomosis quality criteria, including needle placement corrections, suture spacing, suture bite size, completion time, lumen patency, and leak pressure, of the developed autonomous system, manual laparoscopic surgery, and robot-assisted surgery (RAS). Data from a phantom model indicate that our system outperforms expert surgeons' manual and RAS techniques in terms of consistency and accuracy, and this was replicated in the in vivo model. These results demonstrate that surgical robots exhibiting high levels of autonomy have the potential to improve consistency, patient outcomes, and access to a standard surgical technique.
Collapse
Affiliation(s)
- H. Saeidi
- Department of Computer Science, University of North Carolina Wilmington, Wilmington, NC, 28403, USA
- Department of Mechanical Engineering, Johns Hopkins University; Baltimore, MD 21211, USA
| | - J. D. Opfermann
- Department of Mechanical Engineering, Johns Hopkins University; Baltimore, MD 21211, USA
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University; Baltimore, MD 21211, USA
| | - M. Kam
- Department of Mechanical Engineering, Johns Hopkins University; Baltimore, MD 21211, USA
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University; Baltimore, MD 21211, USA
| | - S. Wei
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University; Baltimore, MD 21211, USA
- Department of Electrical and Computer Engineering, Johns Hopkins University; Baltimore, MD 21211, USA
| | - S. Leonard
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University; Baltimore, MD 21211, USA
| | - M. H. Hsieh
- Department of Urology, Children’s National Hospital; 111 Michigan Ave. N.W., Washington, DC 20010, USA
| | - J. U. Kang
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University; Baltimore, MD 21211, USA
- Department of Electrical and Computer Engineering, Johns Hopkins University; Baltimore, MD 21211, USA
| | - A. Krieger
- Department of Mechanical Engineering, Johns Hopkins University; Baltimore, MD 21211, USA
- Laboratory for Computational Sensing and Robotics, Johns Hopkins University; Baltimore, MD 21211, USA
| |
Collapse
|
43
|
Robot-assisted surgery in space: pros and cons. A review from the surgeon's point of view. NPJ Microgravity 2021; 7:56. [PMID: 34934056 PMCID: PMC8692617 DOI: 10.1038/s41526-021-00183-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2020] [Accepted: 11/24/2021] [Indexed: 12/12/2022] Open
Abstract
The target of human spaceflight has shifted from permanence on the International Space Station to missions beyond low Earth orbit and the Lunar Gateway, for deep space exploration and missions to Mars. Several conditions affecting such missions must be considered: for example, the effects of weightlessness and radiation on the human body, behavioral health decrements, communication latency, and consumable resupply. Telemedicine and telerobotic applications, and robot-assisted surgery, with some hints on experimental surgical procedures carried out in previous missions, must be considered as well. The need for greater crew autonomy in health issues is related to the increasing severity of medical and surgical interventions that could occur on these missions, and the presence of a highly trained surgeon on board would be recommended. A surgical robot could be a valuable aid, but only insofar as it is provided with multiple functions, including the capability to perform certain procedures autonomously. Space missions in deep space or on other planets present new challenges for crew health, and providing a multi-function surgical robot is the new frontier. Research in this field will pave the way for the development of new structured plans for human health in space, as well as new suggestions for clinical applications on Earth.
Collapse
|
44
|
Sahu SK, Sozer C, Rosa B, Tamadon I, Renaud P, Menciassi A. Shape Reconstruction Processes for Interventional Application Devices: State of the Art, Progress, and Future Directions. Front Robot AI 2021; 8:758411. [PMID: 34869615 PMCID: PMC8640970 DOI: 10.3389/frobt.2021.758411] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2021] [Accepted: 10/11/2021] [Indexed: 01/02/2023] Open
Abstract
Soft and continuum robots are transforming medical interventions thanks to their flexibility, miniaturization, and multidirectional movement abilities. Although flexibility enables reaching targets in unstructured and dynamic environments, it also creates challenges for control, especially due to interactions with the anatomy. Thus, in recent years considerable effort has been devoted to the development of shape reconstruction methods, with the advancement of different kinematic models, sensors, and imaging techniques. These methods can increase the performance of the control action as well as provide the tip position of robotic manipulators relative to the anatomy. Each method, however, has its advantages and disadvantages and can be worthwhile in different situations. For example, electromagnetic (EM) and Fiber Bragg Grating (FBG) sensor-based shape reconstruction methods can be used in small-scale robots thanks to their miniaturization, fast response, and high sensitivity. Yet, electromagnetic interference in the case of EM sensors, and poor response to high strains in the case of FBG sensors, need to be considered. To help the reader make a suitable choice, this paper presents a review of recent progress on shape reconstruction methods, based on a systematic literature search excluding pure kinematic models. Methods are classified into two categories. First, sensor-based techniques are presented, discussing the use of various sensors such as FBG, EM, and passive stretchable sensors for reconstructing the shape of the robots. Second, imaging-based methods are discussed, which utilize images from different imaging systems such as fluoroscopy, endoscopy cameras, and ultrasound for the shape reconstruction process. The applicability, benefits, and limitations of each method are discussed. Finally, the paper outlines promising future directions for enhancing shape reconstruction methods, discussing open questions and alternative approaches.
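The FBG-based branch of shape reconstruction can be illustrated with the standard planar pipeline: wavelength shift to strain, strain to curvature, then curvature integrated along the arc length. The constants below (photo-elastic coefficient, core offset) are typical textbook values assumed for illustration, not parameters of any cited system.

```python
import math

PE = 0.22           # photo-elastic coefficient of silica fiber (typical value)
CORE_OFFSET = 5e-4  # distance of FBG core from the neutral axis [m] (assumed)

def curvature(d_lambda, lambda0):
    """Bragg wavelength shift -> strain -> curvature kappa = eps / d [1/m]."""
    strain = d_lambda / (lambda0 * (1.0 - PE))
    return strain / CORE_OFFSET

def reconstruct(kappas, seg_len):
    """Integrate piecewise-constant curvature into 2D backbone points (planar case)."""
    x = y = theta = 0.0
    pts = [(x, y)]
    for k in kappas:
        theta += k * seg_len            # tangent angle update
        x += seg_len * math.cos(theta)
        y += seg_len * math.sin(theta)
        pts.append((x, y))
    return pts
```

With zero wavelength shifts the reconstruction is a straight fiber; constant shifts produce a circular arc, which is the basic sanity check used for such sensors.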
Collapse
Affiliation(s)
- Sujit Kumar Sahu
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- Department of Excellence in Robotics & AI, Scuola Superiore Sant’Anna, Pisa, Italy
- ICube, CNRS, INSA Strasbourg, University of Strasbourg, Strasbourg, France
| | - Canberk Sozer
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- Department of Excellence in Robotics & AI, Scuola Superiore Sant’Anna, Pisa, Italy
| | - Benoit Rosa
- ICube, CNRS, INSA Strasbourg, University of Strasbourg, Strasbourg, France
| | - Izadyar Tamadon
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- Department of Excellence in Robotics & AI, Scuola Superiore Sant’Anna, Pisa, Italy
| | - Pierre Renaud
- ICube, CNRS, INSA Strasbourg, University of Strasbourg, Strasbourg, France
| | - Arianna Menciassi
- The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy
- Department of Excellence in Robotics & AI, Scuola Superiore Sant’Anna, Pisa, Italy
| |
Collapse
|
45
|
Yang YJ, Vadivelu AN, Pilgrim CHC, Kulic D, Abdi E. A Novel Perception Framework for Automatic Laparoscope Zoom Factor Control Using Tool Geometry. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:4700-4704. [PMID: 34892261 DOI: 10.1109/embc46164.2021.9629987] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
In conventional Minimally Invasive Surgery, the surgeon conducts the operation while a human or a robot holds the laparoscope. Laparoscope control is returned to the surgeon in teleoperated camera-holding robots, but simultaneously controlling the laparoscope and the surgical tools can be cognitively demanding. On the other hand, fully automated camera holders are still limited in their performance. To help the surgeon focus on the main operation while maintaining control authority, we propose an automatic laparoscope zoom factor control framework for Robot-Assisted Minimally Invasive Surgery. In this paper, we present the perception section of the framework. It extracts and uses the surgical tool's geometric characteristics to adjust the laparoscope's zoom factor, without any artificial markers. The acceptable range and the frequency of tooltip positions during operations are analysed based on the gallbladder removal surgery dataset (Cholec80). The common range and the tooltip heatmap are identified and presented quantitatively.
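One plausible realization of geometry-driven zoom control is a proportional controller on the tool's apparent width in the image. The target width, deadband, gain, and zoom limits below are invented for illustration and are not from the paper.

```python
def zoom_update(zoom, tool_width_px, target_px=120.0, deadband_px=20.0,
                gain=0.005, z_min=1.0, z_max=4.0):
    """Zoom in when the tool appears too small, zoom out when too large."""
    err = target_px - tool_width_px
    if abs(err) <= deadband_px:        # hold zoom inside the acceptable range
        return zoom
    return max(z_min, min(z_max, zoom + gain * err))
```

The deadband plays the role of the "acceptable range" identified from the Cholec80 analysis, preventing constant zoom jitter around the target.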
Collapse
|
46
|
Hagmann K, Hellings-Kuß A, Klodmann J, Richter R, Stulp F, Leidner D. A Digital Twin Approach for Contextual Assistance for Surgeons During Surgical Robotics Training. Front Robot AI 2021; 8:735566. [PMID: 34621791 PMCID: PMC8491613 DOI: 10.3389/frobt.2021.735566] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2021] [Accepted: 09/06/2021] [Indexed: 11/13/2022] Open
Abstract
Minimally invasive robotic surgery addresses some of the surgeon-side disadvantages of minimally invasive surgery while preserving the advantages for the patient. Most commercially available robotic systems are telemanipulated with haptic input devices. Exploiting the haptic channel, e.g., by means of Virtual Fixtures, would allow for an individualized enhancement of surgical performance with contextual assistance. However, this remains an open field of research, as it is non-trivial to estimate the task context itself during a surgery. In contrast, surgical training allows one to abstract away from a real operation and thus makes it possible to model the task accurately. The presented approach exploits this fact to parameterize Virtual Fixtures during surgical training, proposing a Shared Control Parametrization Engine that retrieves procedural context information from a Digital Twin. This approach accelerates proficient use of the robotic system for novice surgeons by augmenting the surgeon's performance through haptic assistance. With this, our aim is to reduce the required skill level and cognitive load of a surgeon performing minimally invasive robotic surgery. A pilot study is performed on the DLR MiroSurge system to evaluate the presented approach. The participants are tasked with two benchmark scenarios of surgical training, whose execution requires basic skills such as pick, place and path following. The evaluation of the pilot study shows the promising trend that novice users profit from the haptic augmentation during training of certain tasks.
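A guidance-type Virtual Fixture of the kind parameterized here can be sketched as a spring force pulling the tool tip toward a reference path. The stiffness value and the vertex-level (rather than segment-level) projection are simplifying assumptions for illustration.

```python
def fixture_force(tip, path_pts, stiffness=200.0):
    """Return the 3D assistance force [N] toward the nearest path vertex."""
    closest = min(path_pts,
                  key=lambda p: sum((a - b) ** 2 for a, b in zip(tip, p)))
    return tuple(stiffness * (c - t) for c, t in zip(closest, tip))
```

In a shared-control setting, a parametrization engine would adjust `stiffness` (and the reference path) from the task context, which is what the Digital Twin supplies during training.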
Collapse
Affiliation(s)
- Katharina Hagmann
- German Aerospace Center (DLR), Institute of Robotics and Mechatronics Center, Weßling, Germany
| | - Anja Hellings-Kuß
- German Aerospace Center (DLR), Institute of Robotics and Mechatronics Center, Weßling, Germany
| | - Julian Klodmann
- German Aerospace Center (DLR), Institute of Robotics and Mechatronics Center, Weßling, Germany
| | - Rebecca Richter
- German Aerospace Center (DLR), Institute of Robotics and Mechatronics Center, Weßling, Germany
| | - Freek Stulp
- German Aerospace Center (DLR), Institute of Robotics and Mechatronics Center, Weßling, Germany
| | - Daniel Leidner
- German Aerospace Center (DLR), Institute of Robotics and Mechatronics Center, Weßling, Germany
| |
Collapse
|
47
|
Uncertainty-Aware Knowledge Distillation for Collision Identification of Collaborative Robots. SENSORS 2021; 21:s21196674. [PMID: 34640993 PMCID: PMC8512717 DOI: 10.3390/s21196674] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/22/2021] [Revised: 10/03/2021] [Accepted: 10/05/2021] [Indexed: 02/07/2023]
Abstract
Human-robot interaction has received considerable attention as collaborative robots have become widely utilized in many industrial fields. Among techniques for human-robot interaction, collision identification is an indispensable element of collaborative robots for preventing fatal accidents. This paper proposes a deep learning method for identifying external collisions in 6-DoF articulated robots. The proposed method expands the idea of CollisionNet, previously proposed for collision detection, to identify the locations of external forces. The key contribution of this paper is uncertainty-aware knowledge distillation for improving the accuracy of a deep neural network: sample-level uncertainties are estimated from a teacher network, and larger penalties are imposed on uncertain samples during the training of a student network. Experiments demonstrate that the proposed method is effective at improving the performance of collision identification.
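An uncertainty-weighted distillation loss in the spirit described above can be sketched as per-sample KL(teacher || student), scaled up when the teacher is uncertain (here measured by normalized predictive entropy). The exact weighting form is an illustrative assumption, not the paper's objective.

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0.0)

def kd_loss(teacher_logits, student_logits, alpha=1.0):
    """Mean uncertainty-weighted KL divergence over a batch of logit pairs."""
    total = 0.0
    for t_log, s_log in zip(teacher_logits, student_logits):
        t, s = softmax(t_log), softmax(s_log)
        kl = sum(ti * math.log(ti / si) for ti, si in zip(t, s) if ti > 0.0)
        u = entropy(t) / math.log(len(t))   # teacher uncertainty in [0, 1]
        total += (1.0 + alpha * u) * kl     # larger penalty when uncertain
    return total / len(teacher_logits)
```

Setting `alpha=0` recovers plain distillation, so the uncertainty term can be ablated directly.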
Collapse
|
48
|
Biswas SK. The Digital Era and the Future of Pediatric Surgery. J Indian Assoc Pediatr Surg 2021; 26:279-286. [PMID: 34728911 PMCID: PMC8515525 DOI: 10.4103/jiaps.jiaps_136_21] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Accepted: 07/09/2021] [Indexed: 11/21/2022] Open
|
49
|
Lajkó G, Nagyné Elek R, Haidegger T. Endoscopic Image-Based Skill Assessment in Robot-Assisted Minimally Invasive Surgery. SENSORS (BASEL, SWITZERLAND) 2021; 21:5412. [PMID: 34450854 PMCID: PMC8398563 DOI: 10.3390/s21165412] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Revised: 08/02/2021] [Accepted: 08/05/2021] [Indexed: 02/06/2023]
Abstract
Objective, skill-assessment-based personal performance feedback is a vital part of surgical training. Either kinematic data-acquired through surgical robotic systems, sensors mounted on tooltips or wearable sensors-or visual input data can be employed to perform objective, algorithm-driven skill assessment. Kinematic data have been successfully linked with the expertise of surgeons performing Robot-Assisted Minimally Invasive Surgery (RAMIS) procedures, but for traditional, manual Minimally Invasive Surgery (MIS) they are not readily available. Evaluation methods based on 3D visual features tend to outperform 2D methods, but their utility is limited and not suited to MIS training; therefore our proposed solution relies on 2D features. The application of additional sensors potentially enhances the performance of either approach. This paper introduces a general 2D image-based solution that enables the creation and application of surgical skill assessment in any training environment. The 2D features were processed using the feature extraction techniques of a previously published benchmark to assess the attainable accuracy. We relied on the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS) dataset, co-developed by Johns Hopkins University and Intuitive Surgical Inc. Using this well-established set gives us the opportunity to comparatively evaluate different feature extraction techniques. The algorithm reached up to 95.74% accuracy in individual trials. The highest mean accuracy, averaged over five cross-validation trials, was 83.54% for the surgical subtask of Knot-Tying, 84.23% for Needle-Passing and 81.58% for Suturing. The proposed method measured well against the state of the art in 2D visual-based skill assessment, with more than 80% accuracy for all three surgical subtasks available in JIGSAWS (Knot-Tying, Suturing and Needle-Passing).
Classification accuracy can be further improved by introducing new visual features, such as image-based orientation and image-based collision detection, or, on the evaluation side, by utilising other Support Vector Machine kernels, tuning the hyperparameters, or using other classification methods (e.g., the boosted trees algorithm). We showed the potential of optical flow as an input for RAMIS skill assessment, highlighting the maximum accuracy achievable with these data by evaluating its methods independently against an established skill assessment benchmark. The highest performing method, the Residual Neural Network, reached mean accuracies of 81.89%, 84.23% and 83.54% for the skills of Suturing, Needle-Passing and Knot-Tying, respectively.
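Two classic motion-economy features often fed to such classifiers are path length and mean squared jerk of the tooltip trace (smoother, shorter motion correlating with expertise). This is a generic illustration, not the benchmark's actual feature extractor.

```python
def diff(xs, dt):
    """Finite-difference derivative of a sampled signal."""
    return [(b - a) / dt for a, b in zip(xs, xs[1:])]

def skill_features(traj, dt=1.0):
    """Return (path_length, mean_squared_jerk) for a 1-D tooltip coordinate."""
    path = sum(abs(b - a) for a, b in zip(traj, traj[1:]))
    jerk = diff(diff(diff(traj, dt), dt), dt)     # third derivative of position
    msj = sum(j * j for j in jerk) / len(jerk) if jerk else 0.0
    return path, msj
```

In an optical-flow pipeline, `traj` would be the tracked tooltip coordinate extracted from the endoscopic video rather than robot kinematics.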
Collapse
Affiliation(s)
- Gábor Lajkó
- Autonomous Systems Track, Double Degree Programme, EIT Digital Master School, Technische Universität Berlin, Straße des 17. Juni 135, 10623 Berlin, Germany;
- ELTE Faculty of Informatics, Pázmány Péter Sétány 1/C, Eötvös Loránd University, Egyetem tér 1-3, 1117 Budapest, Hungary
| | - Renáta Nagyné Elek
- Antal Bejczy Center for Intelligent Robotics, University Research and Innovation Center, Óbuda University, 1034 Budapest, Hungary;
- Doctoral School of Applied Informatics and Applied Mathematics, Óbuda University, Bécsi út 96/b, 1034 Budapest, Hungary
- John von Neumann Faculty of Informatics, Óbuda University, Bécsi út 96/b, 1034 Budapest, Hungary
| | - Tamás Haidegger
- Antal Bejczy Center for Intelligent Robotics, University Research and Innovation Center, Óbuda University, 1034 Budapest, Hungary;
- Austrian Center for Medical Innovation and Technology, Viktor Kaplan-Straße 2/1, 2700 Wiener Neustadt, Austria
| |
Collapse
|
50
|
Force-guided autonomous robotic ultrasound scanning control method for soft uncertain environment. Int J Comput Assist Radiol Surg 2021; 16:2189-2199. [PMID: 34373973 DOI: 10.1007/s11548-021-02462-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Accepted: 07/14/2021] [Indexed: 10/20/2022]
Abstract
PURPOSE: Autonomous ultrasound imaging by robotic ultrasound scanning systems in complex, soft, uncertain clinical environments is important and challenging for assisting therapy. To cope with the complex environment faced by the ultrasound probe during scanning, we propose an autonomous robotic ultrasound (US) control method based on a reinforcement learning (RL) model that builds the relationship between the environment and the system. The proposed method requires only contact force as input to control the posture and contact force of the probe, without any a priori information about the target or the environment. METHODS: First, an RL agent is trained with a policy-gradient-theorem-based RL model on the 6-degree-of-freedom (DOF) contact force of the US probe, to learn the relationship between contact force and output force directly. Then, a force control strategy based on an admittance controller is proposed for synchronous force, orientation and position control, with the desired contact force defined as the action space. RESULTS: The proposed method was evaluated on US images, contact forces and scan trajectories collected by scanning an unknown soft phantom. The experimental results indicated that US images obtained with the proposed method differ from those of the free-hand scanning approach by only 3 ± 0.4%. The analysis of contact forces and trajectories indicated that our method can maintain a stable scanning process on a soft, uncertain skin surface and obtain US images. CONCLUSION: We propose a concise and efficient force-guided US robot scanning control method for soft, uncertain environments based on reinforcement learning. Experimental results validated the method's feasibility and validity for complex skin-surface scanning, and the volunteer experiments indicated its potential application value in complex clinical environments for robotic US imaging systems, especially with limited visual information.
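The admittance-control building block named in the METHODS can be sketched as a 1-DOF virtual mass-damper along the probe axis, driven by the error between measured and desired contact force so that excess force retracts the probe. The parameter values are illustrative assumptions; in the paper, the desired force itself is chosen by the RL policy.

```python
def admittance_step(z, v, f_meas, f_des=5.0, m=1.0, d=20.0, dt=0.01):
    """One step of m*dv/dt + d*v = f_des - f_meas; z is probe advance [m]."""
    a = (f_des - f_meas - d * v) / m   # virtual mass-damper acceleration
    v = v + a * dt
    z = z + v * dt
    return z, v
```

Iterating this step while streaming force readings yields a probe motion that regulates contact force toward `f_des` on a compliant, uncertain surface.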
Collapse
|