1. Remote Ultrasound Scan Procedures with Medical Robots: Towards New Perspectives between Medicine and Engineering. Appl Bionics Biomech 2022; 2022:1072642. PMID: 35154375; PMCID: PMC8832154; DOI: 10.1155/2022/1072642.
Abstract
Background: This review explores state-of-the-art teleoperated robots for medical ultrasound scan procedures, providing a comprehensive overview that includes recent trends arising from the COVID-19 pandemic. Methods: Physicians' experience is included to highlight the importance of their role in the design of improved medical robots. From this perspective, novel classes of equipment for remote diagnostics based on medical robotics are discussed in terms of innovative engineering technologies. Results: The relevant literature is reviewed from a systems engineering point of view, with the discussion organized around the main technological focus of each contribution. Conclusions: This contribution aims to stimulate new research toward faster results in teleoperated robotics for ultrasound diagnostics, in response to the high demand raised by the ongoing pandemic.
3. Luan K, Li Z, Li J. An efficient end-to-end CNN for segmentation of bone surfaces from ultrasound. Comput Med Imaging Graph 2020; 84:101766. PMID: 32781381; DOI: 10.1016/j.compmedimag.2020.101766.
Abstract
The application of ultrasound (US) imaging in orthopedic surgery has long been an active research direction. However, various problems with US imaging hinder the development of US-guided computer-assisted orthopedic surgery. US bone segmentation is an important yet challenging task for many clinical applications. We propose a new end-to-end fully convolutional network, called BoneNet, for real-time and accurate segmentation of bone surfaces from US images. BoneNet employs squeeze-and-excitation residual blocks to realize robust feature learning. To speed up segmentation, we reduced the convolution kernel size and used depth-wise separable convolutions to reduce the number of network parameters. In addition, we assessed the impact of different normalization operations and loss functions on network performance. Finally, we compared the segmentation performance and generalization ability of existing real-time US bone surface segmentation networks on a unified dataset. We achieved an average Dice coefficient of 93.03% on the segmentation performance test and 91.25% on the generalization ability test. The results show that our proposed method maintains real-time performance while achieving significant improvements in accuracy, substantially outperforming the state of the art.
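The Dice coefficient used above to score BoneNet's masks measures the overlap between a predicted and a ground-truth binary segmentation. A minimal sketch of the metric, assuming NumPy arrays as masks (the function name and the epsilon smoothing term are our own, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2*|A intersect B| / (|A| + |B|) for two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # eps keeps the ratio defined when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

A Dice of 1.0 means perfect overlap; the paper's 93.03% corresponds to an average value of about 0.93 over the test set.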
Affiliation(s)
- Kuan Luan, Department of Automation, Harbin Engineering University, China
- Zeyu Li, Department of Automation, Harbin Engineering University, China
- Jin Li, Department of Automation, Harbin Engineering University, China
4. Lindenroth L, Housden RJ, Wang S, Back J, Rhode K, Liu H. Design and Integration of a Parallel, Soft Robotic End-Effector for Extracorporeal Ultrasound. IEEE Trans Biomed Eng 2020; 67:2215-2229. PMID: 31804926; PMCID: PMC7115900; DOI: 10.1109/tbme.2019.2957609.
Abstract
Objective: In this work we address limitations in state-of-the-art ultrasound robots by designing and integrating a novel soft robotic system for ultrasound imaging. It employs the inherent qualities of soft fluidic actuators to establish safe, adaptable interaction between the ultrasound probe and the patient. Methods: We acquired clinical data to determine the movement ranges and force levels required in prenatal foetal ultrasound imaging and designed the soft robotic end-effector accordingly. We verified its mechanical characteristics, derived and validated a kinetostatic model, and demonstrated controllability and imaging capabilities on an ultrasound phantom. Results: The soft robot exhibits the desired stiffness characteristics and reaches 100% of the required workspace when no external force is present, and 95% of the workspace when its compliance is taken into account. The model predicts the end-effector pose with a mean error of 1.18±0.29 mm in position and 0.92±0.47° in orientation. The derived controller tracks a target pose efficiently, with an average position error of 0.39 mm, both without and with externally applied loads. Ultrasound images acquired with the system are of comparable quality to a manual sonographer scan. Conclusion: The system withstands loads commonly applied during foetal ultrasound scans and remains controllable, with a motion range similar to manual scanning. Significance: The proposed soft robot presents a safe, cost-effective solution for offloading sonographers in day-to-day scanning routines. The design and modelling paradigms are highly generalizable and particularly suitable for designing soft robots for physical interaction tasks.
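The position and orientation errors reported above compare a predicted end-effector pose against a measured one. A small sketch of how such pose errors are commonly computed from positions and rotation matrices, assuming NumPy (the function name is ours, and the abstract does not specify the paper's exact error definition):

```python
import numpy as np

def pose_errors(p_pred, p_true, R_pred, R_true):
    """Euclidean position error and relative rotation angle (degrees) between two poses."""
    pos_err = np.linalg.norm(p_pred - p_true)
    # Relative rotation taking the predicted frame onto the true frame
    R_rel = R_pred.T @ R_true
    # Rotation angle from the trace; clip guards against numerical drift outside [-1, 1]
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    ang_err = np.degrees(np.arccos(cos_theta))
    return pos_err, ang_err
```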
5. Pandey PU, Quader N, Guy P, Garbi R, Hodgson AJ. Ultrasound Bone Segmentation: A Scoping Review of Techniques and Validation Practices. Ultrasound Med Biol 2020; 46:921-935. PMID: 31982208; DOI: 10.1016/j.ultrasmedbio.2019.12.014.
Abstract
Ultrasound bone segmentation is an important yet challenging task for many clinical applications. Several works have emerged attempting to improve and automate bone segmentation, leading to a variety of computational techniques, validation practices and applied clinical scenarios. We characterize this exciting and growing body of research by reviewing published ultrasound bone segmentation techniques. We review 56 articles in detail, categorizing and discussing the image analysis techniques that have been used for bone segmentation. We highlight the general trends of this field in terms of clinical motivation, image analysis techniques, ultrasound modalities and the types of validation practices used to quantify segmentation performance. Finally, we present an outlook on promising areas of research based on the unaddressed needs in ultrasound bone segmentation.
Affiliation(s)
- Prashant U Pandey, Biomedical Engineering Department, University of British Columbia, Vancouver, British Columbia, Canada
- Niamul Quader, Electrical and Computer Engineering Department, University of British Columbia, Vancouver, British Columbia, Canada
- Pierre Guy, Department of Orthopaedics, University of British Columbia, Vancouver, British Columbia, Canada
- Rafeef Garbi, Electrical and Computer Engineering Department, University of British Columbia, Vancouver, British Columbia, Canada
- Antony J Hodgson, Mechanical Engineering Department, University of British Columbia, Vancouver, British Columbia, Canada
6. Computer Vision Intelligent Approaches to Extract Human Pose and Its Activity from Image Sequences. Electronics 2020. DOI: 10.3390/electronics9010159.
Abstract
The purpose of this work is to develop computational intelligence models based on neural networks (NN), fuzzy models (FM), support vector machines (SVM) and long short-term memory networks (LSTM) to predict human pose and activity from image sequences, using computer vision approaches to gather the required features. To obtain the human pose semantics (the output classes) from a set of 3D points describing the human body model (the input variables of the predictive model), prediction models were learned from the acquired data, for example, video images. In the same way, to predict the semantics of the atomic activities that compose an activity, again based on the human body model extracted at each video frame, prediction models were learned using LSTM networks. In both cases, the best learned models were implemented in an application to test the systems. The SVM model achieved 95.97% correct classification of the six human poses tackled in this work, in tests under conditions different from those of the training phase. The implemented LSTM model achieved an overall accuracy of 88% under the same kind of test conditions. These results demonstrate the validity of both approaches for predicting human pose and activity from image sequences. Moreover, the system is capable of identifying the atomic activities and quantifying the time interval in which each activity takes place.
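As a toy illustration of turning a 3D body model into input features for a pose classifier such as the SVM above, one common choice (our assumption, not necessarily the paper's feature set) is the vector of scale-normalized pairwise joint distances, which is invariant to translation, rotation and body size:

```python
import numpy as np

def pose_features(joints):
    """Scale-normalized pairwise distances from an (N, 3) array of 3D body points."""
    joints = np.asarray(joints, dtype=float)
    # All pairwise difference vectors, then their Euclidean lengths
    diffs = joints[:, None, :] - joints[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Keep the upper triangle only (no duplicates, no zero self-distances)
    iu = np.triu_indices(len(joints), k=1)
    feats = dists[iu]
    # Divide by the largest distance so features are invariant to body scale
    return feats / feats.max()
```

Feature vectors of this kind could then be fed to any off-the-shelf classifier (e.g., an SVM) for pose prediction.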
7. Recent Trends, Technical Concepts and Components of Computer-Assisted Orthopedic Surgery Systems: A Comprehensive Review. Sensors 2019; 19:5199. PMID: 31783631; PMCID: PMC6929084; DOI: 10.3390/s19235199.
Abstract
Computer-assisted orthopedic surgery (CAOS) systems have become one of the most important and challenging types of system in clinical orthopedics, as they enable precise treatment of musculoskeletal diseases employing modern clinical navigation systems and surgical tools. This paper provides a comprehensive review of recent trends in and possibilities of CAOS systems. Surgical planning systems fall into three types: systems based on volumetric images (computed tomography (CT), magnetic resonance imaging (MRI) or ultrasound images), systems that utilize either 2D or 3D fluoroscopic images, and systems that utilize kinetic information about the joints and morphological information about the target bones. This review focuses on three fundamental aspects of CAOS systems: their essential components, the types of CAOS systems, and the mechanical tools used in them. We also outline the possibilities for using ultrasound computer-assisted orthopedic surgery (UCAOS) systems as an alternative to conventionally used CAOS systems.
8. Development and Experimental Evaluation of a 3D Vision System for Grinding Robot. Sensors 2018; 18:3078. PMID: 30217055; PMCID: PMC6164533; DOI: 10.3390/s18093078.
Abstract
If a grinding robot could automatically locate and measure the machining target on the workpiece, its machining efficiency and level of intelligence would improve significantly. Unfortunately, current grinding robots cannot do this, for reasons of cost and precision. This paper proposes a 3D vision system mounted on the robot's fourth joint, used to detect the machining target of the grinding robot. The hardware architecture and data processing method of the 3D vision system are described in detail. In the data processing pipeline, we first use a voxel grid filter to preprocess the point cloud and obtain the feature descriptors. We then use the fast library for approximate nearest neighbors (FLANN) to search out the difference point cloud from the precisely registered point cloud pair, and apply the point cloud segmentation method proposed in this paper to extract machining path points. Finally, a detection error compensation model is used to accurately calibrate the 3D vision system so that the machining information can be transformed into the grinding robot's base frame. Experimental results show that the absolute average error of repeated measurements at different locations is 0.154 mm, and the absolute measurement error of the vision system caused by compound error is usually less than 0.25 mm. The proposed 3D vision system can easily be integrated into an intelligent grinding system and may be suitable for industrial sites.
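The voxel grid filter mentioned in the pipeline above downsamples a point cloud by replacing all points that fall in the same voxel with their centroid. A minimal NumPy sketch of that preprocessing step (the function name and signature are ours; libraries such as PCL or Open3D provide production implementations):

```python
import numpy as np

def voxel_grid_filter(points, voxel_size):
    """Downsample an (N, 3) point cloud by averaging the points inside each voxel."""
    points = np.asarray(points, dtype=float)
    # Integer voxel index of every point
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points sharing a voxel index and average each group
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True, return_counts=True)
    inverse = np.asarray(inverse).reshape(-1)  # flatten for NumPy-version compatibility
    sums = np.zeros((len(counts), 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]
```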
9. Gibaud B, Forestier G, Feldmann C, Ferrigno G, Gonçalves P, Haidegger T, Julliard C, Katić D, Kenngott H, Maier-Hein L, März K, de Momi E, Nagy DÁ, Nakawala H, Neumann J, Neumuth T, Rojas Balderrama J, Speidel S, Wagner M, Jannin P. Toward a standard ontology of surgical process models. Int J Comput Assist Radiol Surg 2018; 13:1397-1408. PMID: 30006820; DOI: 10.1007/s11548-018-1824-5.
Abstract
Purpose: The development of common ontologies has recently been identified as one of the key challenges in the emerging field of surgical data science (SDS). However, past and existing initiatives in the domain of surgery have mainly been confined to individual groups and have failed to achieve widespread international acceptance by the research community. To address this challenge, the authors of this paper launched a European initiative, the OntoSPM Collaborative Action, with the goal of establishing a framework for the joint development of ontologies in the field of SDS. This manuscript summarizes the goals and current status of the international initiative. Methods: A workshop was organized in 2016, gathering the main European research groups with experience in developing and using ontologies in this domain. It led to the conclusion that a common ontology for surgical process models (SPM) was absolutely needed, and that the existing OntoSPM ontology could provide a good starting point toward the collaborative design and promotion of common, standard ontologies on SPM. Results: The workshop led to the OntoSPM Collaborative Action, launched in mid-2016, with the objective to develop, maintain and promote the use of common ontologies of SPM relevant to the whole domain of SDS. The fundamental concept, the architecture, and the management and curation of the common ontology have been established, making it ready for wider public use. Conclusion: The OntoSPM Collaborative Action has been in operation for 24 months, with a growing dedicated membership. Its main result is a modular ontology, undergoing constant updates and extensions based on the experts' suggestions. It remains an open collaborative action, which always welcomes new contributors and applications.
Affiliation(s)
- Carolin Feldmann, Division of Computer Assisted Medical Interventions, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Paulo Gonçalves, Instituto Politécnico de Castelo Branco, Castelo Branco, Portugal; IDMEC, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
- Tamás Haidegger, Antal Bejczy Center for Intelligent Robotics, Óbuda University, Budapest, Hungary; Austrian Center for Medical Innovation and Technology (ACMIT), Wiener Neustadt, Austria
- Chantal Julliard, Inserm, LTSI - UMR_S 1099, Univ Rennes, Rennes, France; LIRMM, Université de Montpellier, Montpellier, France; Stryker GmbH, Freiburg, Germany
- Darko Katić, Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Karlsruhe, Germany; ArtiMinds Robotics GmbH, Karlsruhe, Germany
- Hannes Kenngott, Department of General, Abdominal and Transplantation Surgery, University of Heidelberg, Heidelberg, Germany
- Lena Maier-Hein, Division of Computer Assisted Medical Interventions, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Keno März, Division of Computer Assisted Medical Interventions, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Dénes Ákos Nagy, Antal Bejczy Center for Intelligent Robotics, Óbuda University, Budapest, Hungary; Austrian Center for Medical Innovation and Technology (ACMIT), Wiener Neustadt, Austria
- Juliane Neumann, Innovation Center Computer Assisted Surgery, Leipzig University, Leipzig, Germany
- Thomas Neumuth, Innovation Center Computer Assisted Surgery, Leipzig University, Leipzig, Germany
- Martin Wagner, Department of General, Abdominal and Transplantation Surgery, University of Heidelberg, Heidelberg, Germany
- Pierre Jannin, Inserm, LTSI - UMR_S 1099, Univ Rennes, Rennes, France
10. Minimally invasive registration for computer-assisted orthopedic surgery: combining tracked ultrasound and bone surface points via the P-IMLOP algorithm. Int J Comput Assist Radiol Surg 2015; 10:761-71. PMID: 25895079; DOI: 10.1007/s11548-015-1188-z.
Abstract
Purpose: We present a registration method for computer-assisted total hip replacement (THR) surgery, which we demonstrate to improve on the state of the art by both reducing the invasiveness of current methods and increasing registration accuracy. A critical element of computer-guided procedures is determining the spatial correspondence between the patient and a computational model of the patient's anatomy. The current method for establishing this correspondence in robot-assisted THR is to register points sampled intraoperatively by a tracked pointer from the exposed proximal femur and, via auxiliary incisions, from the distal femur. Methods: In this paper, we demonstrate a noninvasive technique for sampling points on the distal femur using tracked B-mode ultrasound imaging and present a new algorithm, Projected Iterative Most-Likely Oriented Point (P-IMLOP), for registering these data. Points and normal orientations of the distal bone surface are segmented from the ultrasound images and registered to the patient model along with points sampled from the exposed proximal femur via a tracked pointer. Results: The proposed approach is evaluated using a bone- and tissue-mimicking leg phantom constructed to enable accurate assessment of experimental registration accuracy with respect to a CT-image-based model of the phantom. These experiments demonstrate that localization of the femur shaft is greatly improved by tracked ultrasound. They further demonstrate that, for ultrasound-based data, the P-IMLOP algorithm significantly improves registration accuracy compared to the standard ICP algorithm. Conclusion: Registration via tracked ultrasound and the P-IMLOP algorithm has high potential to reduce the invasiveness and improve the registration accuracy of computer-assisted orthopedic procedures.
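P-IMLOP itself augments registration with surface-normal orientations and a projection step, but the standard ICP baseline it is compared against alternates nearest-neighbor matching with a closed-form rigid alignment. A sketch of that alignment step (the Kabsch/SVD solution for already-matched point pairs; the function name is ours):

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t mapping matched (N, 3) sets src -> dst.
    This is the closed-form alignment step inside each standard ICP iteration."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Cross-covariance of the centered sets, then its SVD
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (det = -1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Full ICP would loop: match each source point to its nearest destination point, call `rigid_align` on the matches, apply the transform, and repeat until convergence.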