1. Wang T, Li H, Pu T, Yang L. Microsurgery Robots: Applications, Design, and Development. Sensors (Basel) 2023; 23:8503. PMID: 37896597; PMCID: PMC10611418; DOI: 10.3390/s23208503.
Abstract
Microsurgical techniques have been widely utilized in various surgical specialties, such as ophthalmology, neurosurgery, and otolaryngology, which require intricate and precise surgical tool manipulation on a small scale. In microsurgery, operations on delicate vessels or tissues demand exceptionally high skill from surgeons, which leads to a steep learning curve and lengthy training before surgeons can perform microsurgical procedures with quality outcomes. The microsurgery robot (MSR), which can improve surgeons' operation skills through various functions, has received extensive research attention in the past three decades. Many review papers have summarized MSR research for specific surgical specialties; however, an in-depth review of the relevant technologies used in MSR systems is limited in the literature. This review details the technical challenges in microsurgery and systematically summarizes the key technologies in MSR from a developmental perspective: from basic structural mechanism design, to perception and human-machine interaction methods, and further to the ability to achieve a certain level of autonomy. By presenting and comparing the methods and technologies in this cutting-edge research, this paper aims to provide readers with a comprehensive understanding of the current state of MSR research and to identify potential directions for future development.
Affiliations:
- Tiexin Wang: ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China; School of Mechanical Engineering, Zhejiang University, Hangzhou 310058, China
- Haoyu Li: ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Tanhong Pu: ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Liangjing Yang: ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China; School of Mechanical Engineering, Zhejiang University, Hangzhou 310058, China; Department of Mechanical Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
2. Wang Y, Qu D, Wang S, Chen J, Qiu L. Correction of Rotational Eccentricity Based on Model and Microvision in the Wire-Traction Micromanipulation System. Micromachines (Basel) 2023; 14:963. PMID: 37241587; DOI: 10.3390/mi14050963.
Abstract
In automatic wire-traction micromanipulation systems, aligning the central axis of the coil with the rotation axis of the rotary stage can be a challenge, which leads to eccentricity during rotation. Because wire-traction is performed on micron-scale electrode wires with micron-level manipulation precision, eccentricity has a significant impact on the control accuracy of the system. To resolve this problem, a method for measuring and correcting coil eccentricity is proposed in this paper. First, models of radial and tilt eccentricity are established based on the sources of eccentricity. Then, a measurement approach combining the eccentricity models with microscopic vision is proposed: the models predict the eccentricity, and visual image-processing algorithms calibrate the model parameters. In addition, a correction scheme based on the compensation model and hardware is designed to compensate for the eccentricity. The experimental results demonstrate that the models predict eccentricity accurately, as evaluated by the root-mean-square error (RMSE); the maximal residual error after correction was within 6 μm, and the compensation rate was approximately 99.6%. The proposed method, which combines an eccentricity model with microvision for measuring and correcting eccentricity, offers improved wire-traction micromanipulation accuracy, enhanced efficiency, and an integrated system, making it suitable for wider applications in micromanipulation and microassembly.
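As a rough illustration of the model-based compensation idea in this abstract (a minimal sketch, not the authors' implementation): for pure radial eccentricity, the lateral offset of the coil center varies sinusoidally with the rotation angle, so its amplitude and phase can be calibrated from a handful of vision measurements by linear least squares, and the RMSE of the residuals quantifies prediction accuracy. The scalar one-dimensional model and all function names here are assumptions for the sketch.

```python
import math

def calibrate_eccentricity(thetas, displacements):
    # Model: d(theta) = a*cos(theta) + b*sin(theta) = e*cos(theta - phi).
    # The model is linear in (a, b), so solve the 2x2 normal equations.
    Scc = sum(math.cos(t) ** 2 for t in thetas)
    Sss = sum(math.sin(t) ** 2 for t in thetas)
    Scs = sum(math.cos(t) * math.sin(t) for t in thetas)
    Sdc = sum(d * math.cos(t) for t, d in zip(thetas, displacements))
    Sds = sum(d * math.sin(t) for t, d in zip(thetas, displacements))
    det = Scc * Sss - Scs * Scs
    a = (Sdc * Sss - Sds * Scs) / det
    b = (Sds * Scc - Sdc * Scs) / det
    e = math.hypot(a, b)      # eccentricity magnitude
    phi = math.atan2(b, a)    # phase of the eccentric axis
    return e, phi

def rmse(thetas, displacements, e, phi):
    # Root-mean-square error of the model prediction against measurements.
    r = [d - e * math.cos(t - phi) for t, d in zip(thetas, displacements)]
    return math.sqrt(sum(x * x for x in r) / len(r))
```

Once calibrated, the predicted offset `e*cos(theta - phi)` can be fed to a compensation stage; a small residual RMSE indicates the model captures the eccentricity well.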
Affiliations:
- Yuezong Wang, Daoduo Qu, Shengyi Wang, Jiqiang Chen, Lina Qiu: Faculty of Materials and Manufacturing, Beijing University of Technology, Beijing 100124, China
3. Lin C, Zheng Y, Guang C, Ma K, Yang Y. Precision forceps tracking and localisation using a Kalman filter for continuous curvilinear capsulorhexis. Int J Med Robot 2022; 18:e2432. DOI: 10.1002/rcs.2432.
Affiliations:
- Chuang Lin, Yu Zheng, Chenhan Guang, Yang Yang: School of Mechanical Engineering and Automation, Beihang University, Beijing, China
- Ke Ma: Eye Center of Beijing Tongren Hospital, Capital Medical University, Beijing, China
4. Intensity-based nonrigid endomicroscopic image mosaicking incorporating texture relevance for compensation of tissue deformation. Comput Biol Med 2021; 142:105169. PMID: 34974384; DOI: 10.1016/j.compbiomed.2021.105169.
Abstract
Image mosaicking has emerged as a universal technique to broaden the field-of-view of the probe-based confocal laser endomicroscopy (pCLE) imaging system. However, due to the influence of probe-tissue contact forces and optical components on imaging quality, existing mosaicking methods remain insufficient to deal with practical challenges. In this paper, we present the texture encoded sum of conditional variance (TESCV) as a novel similarity metric, and effectively incorporate it into a sequential mosaicking scheme to simultaneously correct rigid probe shift and nonrigid tissue deformation. TESCV combines both intensity dependency and texture relevance to quantify the differences between pCLE image frames, where a discriminative binary descriptor named fully cross-detected local derivative pattern (FCLDP) is designed to extract more detailed structural textures. Furthermore, we analytically derive the closed-form gradient of TESCV with respect to the transformation variables. Experiments on the circular dataset highlighted the advantage of the TESCV metric in improving mosaicking performance compared with four other recently published metrics. The comparison with four other state-of-the-art mosaicking methods on the spiral and manual datasets indicated that the proposed TESCV-based method not only worked stably at different contact forces, but was also suitable for both low- and high-resolution imaging systems. With more accurate and delicate mosaics, the proposed method holds promise for meeting clinical demands for intraoperative optical biopsy.
5. Xue Y, Li Y, Liu S, Wang P, Qian X. Oriented Localization of Surgical Tools by Location Encoding. IEEE Trans Biomed Eng 2021; 69:1469-1480. PMID: 34652994; DOI: 10.1109/tbme.2021.3120430.
Abstract
Surgical tool localization is the foundation of a series of advanced surgical functions, e.g., image-guided surgical navigation. In precision scenarios such as surgical tool localization, sophisticated tools and sensitive tissues can be quite close together, which demands higher localization accuracy than general object localization. It is also meaningful to know the orientation of the tools. To achieve these goals, this paper proposes a Compressive Sensing based Location Encoding (CSLE) scheme, which reformulates surgical tool localization in pixel space as vector regression in an encoding space. With this scheme, the method can capture the orientation of surgical tools rather than simply outputting horizontal bounding boxes. To prevent vanishing gradients, a novel back-propagation rule for sparse reconstruction is derived; the rule is applicable to different implementations of sparse reconstruction and renders the entire network end-to-end trainable. The proposed approach gives more accurate bounding boxes while capturing tool orientation, and achieves state-of-the-art performance compared with 9 competitive oriented and non-oriented localization methods (RRD, RefineDet, etc.) on a mainstream surgical image dataset, m2cai16-tool-locations. A range of experiments supports our claim that regression in CSLE space outperforms traditional bounding-box detection in pixel space.
6. Orthogonality Measurement of Three-Axis Motion Trajectories for Micromanipulation Robot Systems. Micromachines (Basel) 2021; 12:344. PMID: 33807003; PMCID: PMC8005171; DOI: 10.3390/mi12030344.
Abstract
In robotic micromanipulation systems, the orthogonality of the three-axis motion trajectories of the motion control systems influences the accuracy of micromanipulation. A method of measuring and evaluating the orthogonality of three-axis motion trajectories is proposed in this paper. First, a system for three-axis motion trajectory measurement is developed and an orthogonal reference coordinate system is designed. The influence of the assembly error of the laser displacement sensors on the reference coordinate system is analyzed using simulation. An approach to estimating the orthogonality of three-axis motion trajectories and compensating for its error is presented using spatial line fitting and vector operations. The simulation results show that when the assembly angle of the laser displacement sensors is limited to within 10°, the relative angle deviation of the coordinate axes of the reference coordinate frame is approximately 0.09%. The experimental results show that the precision of spatial line fitting is approximately 0.02 mm and the relative error of the orthogonality measurement is approximately 0.3%.
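The spatial-line-fitting and vector-operation step described above can be sketched as follows: fit a direction vector to each axis's measured trajectory points, then take the angle between fitted directions and compare it against 90°. This is an illustrative sketch under simplifying assumptions (power iteration on the scatter matrix as a stand-in for full SVD line fitting; the function names are invented here).

```python
import math

def fit_direction(points):
    # Principal direction of a 3-D point cloud: center the points, then
    # run power iteration on the 3x3 scatter matrix (stand-in for SVD).
    n = len(points)
    c = [sum(p[i] for p in points) / n for i in range(3)]
    q = [[p[i] - c[i] for i in range(3)] for p in points]
    S = [[sum(r[i] * r[j] for r in q) for j in range(3)] for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(200):
        w = [sum(S[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

def angle_deg(u, v):
    # Unsigned angle between two fitted axis directions, in degrees.
    dot = abs(sum(a * b for a, b in zip(u, v)))
    return math.degrees(math.acos(min(1.0, dot)))
```

The orthogonality error of an axis pair is then simply the deviation of `angle_deg` from 90°.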
7. Kim JW, Zhang P, Gehlbach P, Iordachita I, Kobilarov M. Towards Autonomous Eye Surgery by Combining Deep Imitation Learning with Optimal Control. Proc Mach Learn Res 2021; 155:2347-2358. PMID: 34712957; PMCID: PMC8549631.
Abstract
During retinal microsurgery, precise manipulation of the delicate retinal tissue is required for a positive surgical outcome. However, accurate manipulation and navigation of surgical tools remain difficult due to a constrained workspace and the top-down view during the surgery, which limits the surgeon's ability to estimate depth. To alleviate such difficulty, we propose to automate the tool-navigation task by learning to predict the relative goal position on the retinal surface from the current tool-tip position. Given an estimated target on the retina, we generate an optimal trajectory leading to the predicted goal while imposing safety-related physical constraints aimed at minimizing tissue damage. As an extended task, we generate goal predictions to various points across the retina to localize eye geometry and further generate safe trajectories within the estimated confines. Through experiments in both simulation and with several eye phantoms, we demonstrate that our framework can permit navigation to various points on the retina within 0.089 mm and 0.118 mm in xy error, which is less than a human surgeon's mean tool-tip tremor of 0.180 mm. All safety constraints were fulfilled, and the algorithm was robust to previously unseen eyes as well as unseen objects in the scene. Live video demonstration is available here: https://youtu.be/n5j5jCCelXk.
Affiliations:
- Ji Woong Kim, Peiyao Zhang, Marin Kobilarov: Department of Mechanical Engineering, Johns Hopkins University
- Peter Gehlbach: Wilmer Eye Institute, Johns Hopkins University School of Medicine
8. Kim JW, He C, Urias M, Gehlbach P, Hager GD, Iordachita I, Kobilarov M. Autonomously Navigating a Surgical Tool Inside the Eye by Learning from Demonstration. IEEE Int Conf Robot Autom 2020. PMID: 34621556; DOI: 10.1109/icra40945.2020.9196537.
Abstract
A fundamental challenge in retinal surgery is safely navigating a surgical tool to a desired goal position on the retinal surface while avoiding damage to surrounding tissues, a procedure that typically requires tens-of-microns accuracy. In practice, the surgeon relies on depth-estimation skills to localize the tool-tip with respect to the retina in order to perform the tool-navigation task, which can be prone to human error. To alleviate such uncertainty, prior work has introduced ways to assist the surgeon by estimating the tool-tip distance to the retina and providing haptic or auditory feedback. However, automating the tool-navigation task itself remains unsolved and largely unexplored. Such a capability, if reliably automated, could serve as a building block to streamline complex procedures and reduce the chance for tissue damage. Towards this end, we propose to automate the tool-navigation task by learning to mimic expert demonstrations of the task. Specifically, a deep network is trained to imitate expert trajectories toward various locations on the retina based on recorded visual servoing to a given goal specified by the user. The proposed autonomous navigation system is evaluated in simulation and in physical experiments using a silicone eye phantom. We show that the network can reliably navigate a needle surgical tool to various desired locations within 137 μm accuracy in physical experiments and 94 μm in simulation on average, and generalizes well to unseen situations such as in the presence of auxiliary surgical tools, variable eye backgrounds, and brightness conditions.
Affiliations:
- Ji Woong Kim, Changyan He, Iulian Iordachita, Marin Kobilarov: Laboratory for Computational Sensing and Robotics (LCSR), Johns Hopkins University, Baltimore, MD 21218, USA
- Muller Urias, Peter Gehlbach: Wilmer Eye Institute, Johns Hopkins Hospital, Baltimore, MD 21287, USA
- Gregory D Hager: Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
9. Wang Y, Geng B, Long C. Contour extraction of a laser stripe located on a microscope image from a stereo light microscope. Microsc Res Tech 2019; 82:260-271. PMID: 30633434; DOI: 10.1002/jemt.23168.
Abstract
A stereo light microscope has a special optical structure consisting of two optical paths. With two cameras fitted on its imaging planes, a microscopic binocular vision system can be constructed and applied to three-dimensional (3D) shape measurement of a microscopic object. In this article, a novel shape-reconstruction system composed of laser fringe scanning and a stereo light microscope is developed. A laser projector emits a thin light sheet and projects it onto the surface of the microscopic object. The object is placed on a micro-displacement mechanism and moved in a given direction so that the light sheet scans its entire surface. The system captures a series of microscopic images of the laser stripe, which are used to restore the 3D shape of the object. We mainly focus on the laser stripe detection method, which is derived from the Canny rule. First, the Canny rule outputs pixels at the left and right edges of the laser stripe. Then, a subpixel edge-extraction method based on polynomial fitting is proposed that outputs the center curve of the laser stripe. Finally, an edge filter smooths the edge burrs, and Hermite interpolation links the broken edges to construct continuous, smooth edge contours. The results show that this method can effectively find the subpixel position of the laser stripe and output high-quality edge contours. The method is suitable for extracting the contours of a laser stripe located in a microscope image.
Affiliations:
- Yuezong Wang, Benliang Geng, Chao Long: College of Mechanical Engineering and Applied Electronics Technology, Beijing University of Technology, Beijing, China
10. Hao R, Özgüner O, Çavuşoğlu MC. Vision-Based Surgical Tool Pose Estimation for the da Vinci® Robotic Surgical System. IEEE/RSJ Int Conf Intell Robot Syst 2018; 2018:1298-1305. PMID: 31440395; PMCID: PMC6706092; DOI: 10.1109/iros.2018.8594471.
Abstract
This paper presents an approach to surgical tool tracking using stereo vision for the da Vinci® Surgical Robotic System. The proposed method is based on robot kinematics, computer vision techniques, and Bayesian state estimation. It employs a silhouette rendering algorithm to create virtual images of the surgical tool by generating the silhouette of the defined tool geometry as seen through the da Vinci® endoscopes. The virtual rendering method provides the tool representation in image form, which makes it possible to measure the distance between the rendered tool and the real tool in endoscopic stereo image streams. A particle filter algorithm employing the virtual rendering method is then used for surgical tool tracking. The tracking performance is evaluated on an actual da Vinci® surgical robotic system and a ROS/Gazebo-based simulation of the da Vinci® system.
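The Bayesian-estimation core of such a tracker can be sketched as a generic particle filter step. The paper tracks a full tool pose against rendered silhouettes; here a scalar state, a Gaussian measurement likelihood, and the function names are all stand-in assumptions to show the predict/update/resample cycle.

```python
import math
import random

def particle_filter_step(particles, measurement, rng,
                         motion_std=0.05, meas_std=0.1):
    n = len(particles)
    # Predict: diffuse each particle with process noise.
    particles = [p + rng.gauss(0.0, motion_std) for p in particles]
    # Update: weight by the Gaussian likelihood of the measurement
    # (in the paper this would be a rendered-vs-real image distance).
    weights = [math.exp(-0.5 * ((measurement - p) / meas_std) ** 2)
               for p in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Systematic resampling: concentrate particles on likely states.
    start = rng.random() / n
    cumulative, acc = [], 0.0
    for w in weights:
        acc += w
        cumulative.append(acc)
    resampled, j = [], 0
    for i in range(n):
        pos = start + i / n
        while j < n - 1 and cumulative[j] < pos:
            j += 1
        resampled.append(particles[j])
    return resampled
```

Iterating this step concentrates the particle cloud around the state best explaining the measurements, and the cloud's mean serves as the pose estimate.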
Affiliations:
- Ran Hao, Orhan Özgüner, M. Cenk Çavuşoğlu: Department of Electrical Engineering and Computer Science, Case Western Reserve University, Cleveland, OH
11. Yang S, Martel JN, Lobes LA, Riviere CN. Techniques for robot-aided intraocular surgery using monocular vision. Int J Rob Res 2018; 37:931-952. PMID: 30739976; DOI: 10.1177/0278364918778352.
Abstract
This paper presents techniques for robot-aided intraocular surgery using monocular vision in order to overcome erroneous stereo reconstruction in an intact eye. We propose a new retinal surface estimation method based on a structured-light approach. A handheld robot known as the Micron enables automatic scanning of a laser probe, creating projected beam patterns on the retinal surface. Geometric analysis of the patterns then allows planar reconstruction of the surface. To realize automated surgery in an intact eye, monocular hybrid visual servoing is accomplished through a scheme that incorporates surface reconstruction and partitioned visual servoing. We investigate the sensitivity of the estimation method according to relevant parameters and also evaluate its performance in both dry and wet conditions. The approach is validated through experiments for automated laser photocoagulation in a realistic eye phantom in vitro. Finally, we present the first demonstration of automated intraocular laser surgery in porcine eyes ex vivo.
Affiliations:
- Sungwook Yang: Center for BioMicrosystems, Korea Institute of Science and Technology, Korea
- Joseph N Martel, Louis A Lobes: Department of Ophthalmology, University of Pittsburgh, Pittsburgh, USA
12. Zhao Z, Voros S, Weng Y, Chang F, Li R. Tracking-by-detection of surgical instruments in minimally invasive surgery via the convolutional neural network deep learning-based method. Comput Assist Surg (Abingdon) 2017; 22:26-35. DOI: 10.1080/24699322.2017.1378777.
Affiliations:
- Zijian Zhao, Faliang Chang: School of Control Science and Engineering, Shandong University, Jinan, China
- Sandrine Voros: CNRS, INSERM, TIMC-IMAG, University Grenoble-Alpes, Grenoble, France
- Ying Weng: School of Computer Science, Bangor University, Bangor, UK
- Ruijian Li: Department of Cardiology, Qilu Hospital of Shandong University, Jinan, China
13. Wang Y. A stereovision model applied in bio-micromanipulation system based on stereo light microscope. Microsc Res Tech 2017; 80:1256-1269. DOI: 10.1002/jemt.22924.
Affiliations:
- Yuezong Wang: College of Mechanical Engineering and Applied Electronics Technology, Beijing University of Technology, Beijing 100124, China
14. Mukherjee S, Yang S, MacLachlan RA, Lobes LA, Martel JN, Riviere CN. Toward Monocular Camera-Guided Retinal Vein Cannulation with an Actively Stabilized Handheld Robot. IEEE Int Conf Robot Autom 2017; 2017:2951-2956. PMID: 28966797; DOI: 10.1109/icra.2017.7989341.
Abstract
In this paper we describe work toward retinal vessel cannulation using an actively stabilized handheld robot guided by monocular vision. We employ a previously developed monocular-camera-based surface reconstruction method using automated laser beam scanning over the retina, and use the reconstructed plane to find a coordinate transform between the 2D image-plane coordinate system and the global 3D frame. Within a hemispherical region around the target, motion scaling is used for higher precision. The contributions of this work are the homography matrix estimation using monocular vision and the application of the previously developed laser surface reconstruction to Micron-guided vein cannulation. Experiments conducted in a wet eye phantom show the higher accuracy of the surface reconstruction compared with standard stereo reconstruction, and further experiments show the increased surgical accuracy due to motion scaling.
Affiliations:
- Shohin Mukherjee, Cameron N Riviere: Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Sungwook Yang: Center for BioMicrosystems, Korea Institute of Science and Technology, Seoul 136-791, Korea
- Louis A Lobes, Joseph N Martel: Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA 15213, USA
15.
Abstract
Healthcare in general, and surgery/interventional care in particular, is evolving through rapid advances in technology and increasing complexity of care, with the goal of maximizing the quality and value of care. Whereas innovations in diagnostic and therapeutic technologies have driven past improvements in the quality of surgical care, future transformation in care will be enabled by data. Conventional methodologies, such as registry studies, are limited in their scope for discovery and research, extent and complexity of data, breadth of analytical techniques, and translation or integration of research findings into patient care. We foresee the emergence of surgical/interventional data science (SDS) as a key element to addressing these limitations and creating a sustainable path toward evidence-based improvement of interventional healthcare pathways. SDS will create tools to measure, model, and quantify the pathways or processes within the context of patient health states or outcomes and use information gained to inform healthcare decisions, guidelines, best practices, policy, and training, thereby improving the safety and quality of healthcare and its value. Data are pervasive throughout the surgical care pathway; thus, SDS can impact various aspects of care, including prevention, diagnosis, intervention, or postoperative recovery. The existing literature already provides preliminary results, suggesting how a data science approach to surgical decision-making could more accurately predict severe complications using complex data from preoperative, intraoperative, and postoperative contexts, how it could support intraoperative decision-making using both existing knowledge and continuous data streams throughout the surgical care pathway, and how it could enable effective collaboration between human care providers and intelligent technologies. 
In addition, SDS is poised to play a central role in surgical education, for example, through objective assessments, automated virtual coaching, and robot-assisted active learning of surgical skill. However, the potential for transforming surgical care and training through SDS may only be realized through a cultural shift that not only institutionalizes technology to seamlessly capture data but also assimilates individuals with expertise in data science into clinical research teams. Furthermore, collaboration with industry partners from the inception of the discovery process promotes optimal design of data products as well as their efficient translation and commercialization. As surgery continues to evolve through advances in technology that enhance delivery of care, SDS represents a new knowledge domain to engineer surgical care of the future.
Affiliations:
- S Swaroop Vedula, Gregory D Hager: The Malone Center for Engineering in Healthcare, The Johns Hopkins University, Baltimore, USA
16. Griffin JA, Zhu W, Nam CS. The Role of Haptic Feedback in Robotic-Assisted Retinal Microsurgery Systems: A Systematic Review. IEEE Trans Haptics 2017; 10:94-105. PMID: 28328500; DOI: 10.1109/toh.2016.2598341.
Abstract
Retinal microsurgery is one of the most technically difficult surgeries since it is performed at the threshold of human capability. If certain retinal conditions are left untreated, they can lead to severe damage, including irreversible blindness. Thus, techniques for reliable retinal microsurgery operations are critical. Recent research shows promise for improving surgical safety by implementing various types of sensory input and output. Sensory information is used to inform the surgeon about the environment inside the eye in real time. This review examines literature that discusses human factors and ergonomics (HFE) of sensory inputs and outputs of retinal microsurgery instrumentation with a focus on force and haptic feedback. Thirty-four studies were reviewed on the following topics: (1) variation between different input sensory devices and their performance, (2) variation between alternative output sensory devices and their performance, and (3) variation between alternative output sensory devices and their user satisfaction. This review finds that the implementation of HFE is important for the consideration of retinal microsurgery devices, but it is largely missing from current research. The addition of direct comparisons between devices, measures of user acceptance, usability evaluations, and greater realism in testing would help advance the use of haptic sensory feedback for retinal microsurgery instruments.
17. Wang Y, Zhao Z, Wang J. Microscopic vision modeling method by direct mapping analysis for micro-gripping system with stereo light microscope. Micron 2016; 83:93-109. DOI: 10.1016/j.micron.2016.01.005.
18
Wang Y, Jin Y, Wang L. Image distortion correction for micromanipulation system based on SLM microscopic vision. Microsc Res Tech 2016; 79:162-77. [PMID: 26789139] [DOI: 10.1002/jemt.22617]
Abstract
The stereo light microscope (SLM) simulates the stereo imaging principle of human eyes, and SLM-based microscopic vision systems have become important visual tools for micro-measurement, micromanipulation, and microinjection. We develop a micromanipulation system based on SLM and present an image distortion correction method that addresses two kinds of image distortion: lateral and vertical. Distortion correction consists of two steps. First, a linear fitting algorithm is applied to each row or column of target points, and the fitting errors are calculated. If the fitting errors are smaller than a given threshold, the linear fitting results are kept; otherwise, a polynomial fitting procedure is used. Second, the parallelism of straight lines is corrected. The results show that a line in the world coordinate frame (WCF) is not necessarily a straight line in the image coordinate frame (ICF), and two parallel lines in WCF may not be parallel in ICF; distortion correction restores the linear and parallel relationships. For the distorted left and right images, the magnitude of distortion exceeds 6 and 4 pixels in the horizontal direction, and 1.2 and 1.7 pixels in the vertical direction, respectively. After correction, distortion in the left and right images is reduced to 0.8 and 0.7 pixels in the horizontal direction, and 0.96 and 1.3 pixels in the vertical direction, respectively. These results show that the distortion parameters obtained from the proposed method can effectively correct distorted images.
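The two-stage fitting procedure described in this abstract (a per-row linear fit on calibration target points, with a polynomial fallback when the fitting error exceeds a threshold) can be sketched as follows. This is an illustrative sketch only: the threshold value, the polynomial degree, and the function name `fit_row` are assumptions for illustration, not taken from the paper.

```python
# Sketch of the two-stage fitting step: a linear fit per row of target
# points, falling back to a polynomial fit when the residual error is
# too large. Threshold and degree are illustrative assumptions.
import numpy as np

def fit_row(x, y, err_threshold=0.5, poly_degree=3):
    """Fit one row of calibration target points.

    Try a linear fit first; if the RMS residual exceeds the
    threshold, fall back to a polynomial fit of higher degree.
    """
    lin = np.polyfit(x, y, 1)                 # straight-line model
    resid = y - np.polyval(lin, x)
    rms = np.sqrt(np.mean(resid ** 2))
    if rms <= err_threshold:
        return lin                            # linear model suffices
    return np.polyfit(x, y, poly_degree)      # distortion needs a curve

# Example: a gently curved row of points, as lens distortion would produce.
x = np.linspace(0, 100, 11)
y = 0.002 * (x - 50) ** 2 + 0.1 * x
coeffs = fit_row(x, y)
print(len(coeffs))  # 4 -> the polynomial fallback was chosen
```

For an undistorted (straight) row the linear fit leaves near-zero residuals, so the two-coefficient linear model is kept instead.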
Affiliation(s)
- Yuezong Wang
- College of Mechanical Engineering and Applied Electronics Technology, Beijing University of Technology, Beijing, 100124, China
- Lika Wang
- College of Mechanical Engineering and Applied Electronics Technology, Beijing University of Technology, Beijing, 100124, China
19
Gupta A, Gonenc B, Balicki M, Olds K, Handa J, Gehlbach P, Taylor RH, Iordachita I. Human eye phantom for developing computer and robot-assisted epiretinal membrane peeling. Annu Int Conf IEEE Eng Med Biol Soc 2014; 2014:6864-7. [PMID: 25571573] [DOI: 10.1109/embc.2014.6945205]
Abstract
A number of technologies are being developed to facilitate key intraoperative actions in vitreoretinal microsurgery. There is a need for cost-effective, reusable benchtop eye phantoms to enable frequent evaluation of these developments. In this study, we describe an artificial eye phantom for developing intraocular imaging and force-sensing tools. We test four candidate materials for simulating epiretinal membranes using a handheld tremor-canceling micromanipulator with force-sensing micro-forceps tip and demonstrate peeling forces comparable to those encountered in clinical practice.
20
Fogli G, Orsi G, De Maria C, Montemurro F, Palla M, Rizzo S, Vozzi G. New eye phantom for ophthalmic surgery. J Biomed Opt 2014; 19:068001. [PMID: 24887746] [DOI: 10.1117/1.jbo.19.6.068001]
Abstract
In this work, we designed and realized a new phantom able to mimic the principal mechanical, rheological, and physical cues of the human eye, which can be used as a common benchmark to validate new surgical procedures and innovative vitrectomes, and as a training system for surgeons. The phantom, in particular its synthetic vitreous humor, aims to reproduce the diffusion properties of the natural eye and can be used to evaluate the pharmacokinetics of drugs and to optimize their doses, limiting animal experiments. The eye phantom was built layer by layer, from the sclera up to the retina, using low-cost, easy-to-process polymers. The phantom was validated by mechanical characterization of each layer, by diffusion tests with commercial drugs in a purposely developed apparatus, and finally by a team of ophthalmic surgeons. Experiments demonstrated that polycaprolactone, polydimethylsiloxane, and gelatin, properly prepared, are the best materials to mimic the mechanical properties of the sclera, choroid, and retina, respectively. A polyvinyl alcohol-gelatin polymeric system best mimics the viscosity of human vitreous humor, even though the bevacizumab half-life is lower than in the human eye.
Affiliation(s)
- Gessica Fogli
- University of Pisa, Research Centre "E. Piaggio," Largo Lucio Lazzarino 1, Pisa 56126, Italy
- Gianni Orsi
- University of Pisa, Research Centre "E. Piaggio," Largo Lucio Lazzarino 1, Pisa 56126, Italy; University of Pisa, Department of Ingegneria Civile e Industriale, Largo Lucio Lazzarino 1, Pisa 56126, Italy
- Carmelo De Maria
- University of Pisa, Research Centre "E. Piaggio," Largo Lucio Lazzarino 1, Pisa 56126, Italy; University of Pisa, Department of Ingegneria dell'Informazione, Via G. Caruso 16, Pisa 56126, Italy
- Francesca Montemurro
- University of Pisa, Research Centre "E. Piaggio," Largo Lucio Lazzarino 1, Pisa 56126, Italy
- Michele Palla
- Azienda Ospedaliera Universitaria Pisana-Cisanello, Eye Surgery Clinic, Via Paradisa 2, Pisa 56124, Italy
- Stanislao Rizzo
- Azienda Ospedaliera Universitaria Pisana-Cisanello, Eye Surgery Clinic, Via Paradisa 2, Pisa 56124, Italy
- Giovanni Vozzi
- University of Pisa, Research Centre "E. Piaggio," Largo Lucio Lazzarino 1, Pisa 56126, Italy; University of Pisa, Department of Ingegneria dell'Informazione, Via G. Caruso 16, Pisa 56126, Italy
21
Muhit AA, Pickering MR, Scarvell JM, Ward T, Smith PN. Image-assisted non-invasive and dynamic biomechanical analysis of human joints. Phys Med Biol 2013; 58:4679-702. [DOI: 10.1088/0031-9155/58/13/4679]