1. Ali JT, Yang G, Green CA, Reed BL, Madani A, Ponsky TA, Hazey J, Rothenberg SS, Schlachta CM, Oleynikov D, Szoka N. Defining digital surgery: a SAGES white paper. Surg Endosc 2024; 38:475-487. [PMID: 38180541] [DOI: 10.1007/s00464-023-10551-7]
Abstract
BACKGROUND Digital surgery is a new paradigm within the surgical innovation space that is rapidly advancing and encompasses multiple areas. METHODS This white paper from the SAGES Digital Surgery Working Group outlines the scope of digital surgery, defines key terms, and analyzes the challenges and opportunities surrounding this disruptive technology. RESULTS In its simplest form, digital surgery inserts a computer interface between surgeon and patient. We divide the digital surgery space into the following elements: advanced visualization, enhanced instrumentation, data capture, data analytics with artificial intelligence/machine learning, connectivity via telepresence, and robotic surgical platforms. We will define each area, describe specific terminology, review current advances as well as discuss limitations and opportunities for future growth. CONCLUSION Digital Surgery will continue to evolve and has great potential to bring value to all levels of the healthcare system. The surgical community has an essential role in understanding, developing, and guiding this emerging field.
Affiliation(s)
- Jawad T Ali: University of Texas at Austin, Austin, TX, USA
- Gene Yang: University at Buffalo, Buffalo, NY, USA
- Amin Madani: University of Toronto, Toronto, ON, Canada; Surgical Artificial Intelligence Research Academy, University Health Network, Toronto, ON, Canada
- Todd A Ponsky: Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Dmitry Oleynikov: Monmouth Medical Center, Robert Wood Johnson Barnabas Health, Rutgers School of Medicine, Long Branch, NJ, USA
- Nova Szoka: Department of Surgery, West Virginia University, Suite 7500 HSS, PO Box 9238, Morgantown, WV 26506-9238, USA
2. Ciocan RA, Graur F, Ciocan A, Cismaru CA, Pintilie SR, Berindan-Neagoe I, Hajjar NA, Gherman CD. Robot-Guided Ultrasonography in Surgical Interventions. Diagnostics (Basel) 2023; 13:2456. [PMID: 37510199] [PMCID: PMC10378616] [DOI: 10.3390/diagnostics13142456]
Abstract
INTRODUCTION The introduction of robot-guided procedures into surgical technique has brought an increase in the accuracy and control of resections. Surgery has evolved since the development of laparoscopy, which added visualisation of the peritoneal cavity from a different perspective. Multi-armed robots associated with real-time intraoperative imaging devices bring important improvements in manoeuvrability and dexterity to certain surgical fields. MATERIALS AND METHODS The present study is designed to synthesise the development of imaging techniques, with a focus on ultrasonography in robotic surgery, over the last ten years of abdominal surgical interventions. RESULTS All studies involved abdominal surgery. Out of the seven studies, two were performed as clinical trials. The other five were performed on organs or simulators and attempted to develop a hybrid surgical technique using ultrasonography and robotic surgery. Most studies aimed to identify both blood vessels and nerve structures surgically through this combined technique (surgery and imaging). CONCLUSIONS Ultrasonography is often used in minimally invasive surgical techniques. It aids the visualisation of blood vessels, the correct identification of tumour margins, and the localisation of surgical instruments in the tissue. The development of ultrasound technology from 2D to 3D and 4D has brought improvements to minimally invasive and robotic surgical techniques, and it should be studied further to bring surgery to a higher level.
Affiliation(s)
- Răzvan Alexandru Ciocan: Department of Surgery-Practical Abilities, "Iuliu Hațieganu" University of Medicine and Pharmacy Cluj-Napoca, Marinescu Street, No. 23, 400337 Cluj-Napoca, Romania
- Florin Graur: Department of Surgery, "Iuliu Hațieganu" University of Medicine and Pharmacy Cluj-Napoca, Croitorilor Street, No. 19-21, 400162 Cluj-Napoca, Romania
- Andra Ciocan: Department of Surgery, "Iuliu Hațieganu" University of Medicine and Pharmacy Cluj-Napoca, Croitorilor Street, No. 19-21, 400162 Cluj-Napoca, Romania
- Cosmin Andrei Cismaru: Research Center for Functional Genomics, Biomedicine and Translational Medicine, "Iuliu Hațieganu" University of Medicine and Pharmacy Cluj-Napoca, Victor Babeș Street, No. 8, 400347 Cluj-Napoca, Romania
- Sebastian Romeo Pintilie: "Iuliu Hațieganu" University of Medicine and Pharmacy Cluj-Napoca, Victor Babeș Street, No. 8, 400347 Cluj-Napoca, Romania
- Ioana Berindan-Neagoe: Research Center for Functional Genomics, Biomedicine and Translational Medicine, "Iuliu Hațieganu" University of Medicine and Pharmacy Cluj-Napoca, Victor Babeș Street, No. 8, 400347 Cluj-Napoca, Romania
- Nadim Al Hajjar: Department of Surgery, "Iuliu Hațieganu" University of Medicine and Pharmacy Cluj-Napoca, Croitorilor Street, No. 19-21, 400162 Cluj-Napoca, Romania
- Claudia Diana Gherman: Department of Surgery-Practical Abilities, "Iuliu Hațieganu" University of Medicine and Pharmacy Cluj-Napoca, Marinescu Street, No. 23, 400337 Cluj-Napoca, Romania
3. Dowrick T, Xiao G, Nikitichev D, Dursun E, van Berkel N, Allam M, Koo B, Ramalhinho J, Thompson S, Gurusamy K, Blandford A, Stoyanov D, Davidson BR, Clarkson MJ. Evaluation of a calibration rig for stereo laparoscopes. Med Phys 2023; 50:2695-2704. [PMID: 36779419] [PMCID: PMC10614700] [DOI: 10.1002/mp.16310]
Abstract
BACKGROUND Accurate camera and hand-eye calibration are essential to ensure high-quality results in image-guided surgery applications. The process must also be able to be undertaken by a nonexpert user in a surgical setting. PURPOSE This work seeks to identify a suitable method for tracked stereo laparoscope calibration within theater. METHODS A custom calibration rig, to enable rapid calibration in a surgical setting, was designed. The rig was compared against freehand calibration. Stereo reprojection, stereo reconstruction, tracked stereo reprojection, and tracked stereo reconstruction error metrics were used to evaluate calibration quality. RESULTS Use of the calibration rig reduced mean errors: reprojection (1.47 px [SD 0.13] vs. 3.14 px [SD 2.11], p-value 1e-8), reconstruction (1.37 mm [SD 0.10] vs. 10.10 mm [SD 4.54], p-value 6e-7), and tracked reconstruction (1.38 mm [SD 0.10] vs. 12.64 mm [SD 4.34], p-value 1e-6) compared with freehand calibration. The use of a ChArUco pattern yielded slightly lower reprojection errors, while a dot grid produced lower reconstruction errors and was more robust under strong global illumination. CONCLUSION The use of the calibration rig results in a statistically significant decrease in calibration error metrics, versus freehand calibration, and represents the preferred approach for use in the operating theater.
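For readers who want to reproduce the reprojection metric, the sketch below is a minimal monocular ChArUco calibration using OpenCV's legacy aruco module (opencv-contrib-python before 4.7); the board geometry and file names are placeholder assumptions, and the paper's tracked stereo rig is not reproduced.

```python
# Minimal ChArUco calibration sketch (assumes opencv-contrib-python < 4.7).
# The 7x5 board, square/marker sizes (metres) and file names are placeholders.
import cv2

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
board = cv2.aruco.CharucoBoard_create(7, 5, 0.03, 0.022, aruco_dict)

all_corners, all_ids, image_size = [], [], None
for fname in ["calib_00.png", "calib_01.png", "calib_02.png"]:
    img = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    image_size = img.shape[::-1]
    marker_corners, marker_ids, _ = cv2.aruco.detectMarkers(img, aruco_dict)
    if marker_ids is None:
        continue
    n, corners, ids = cv2.aruco.interpolateCornersCharuco(
        marker_corners, marker_ids, img, board)
    if n is not None and n > 3:
        all_corners.append(corners)
        all_ids.append(ids)

# calibrateCameraCharuco returns the RMS reprojection error in pixels,
# the unit in which the rig-vs-freehand comparison above is reported.
rms, K, dist, rvecs, tvecs = cv2.aruco.calibrateCameraCharuco(
    all_corners, all_ids, board, image_size, None, None)
print(f"RMS reprojection error: {rms:.2f} px")
```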
Affiliation(s)
- Thomas Dowrick, Guofang Xiao, Daniil Nikitichev, Eren Dursun, Niels van Berkel, Bongjin Koo, Joao Ramalhinho, Stephen Thompson, Ann Blandford, Danail Stoyanov: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, UCL, London, UK
- Moustafa Allam: Royal Free Campus, UCL Medical School, Royal Free Hospital, London, UK
4.
Abstract
INTRODUCTION During an operation, augmented reality (AR) enables surgeons to enrich their vision of the operating field by means of digital imagery, particularly as regards tumors and anatomical structures. While in some specialties this type of technology is routinely utilized, in liver surgery its applications remain limited, owing to the complexity of modeling organ deformation in real time. At present, numerous teams are attempting to find a solution applicable to current practice, the objective being to overcome the difficulties of intraoperative navigation in an opaque organ. OBJECTIVE To identify, itemize and analyze series reporting AR techniques tested in liver surgery, so as to establish a state of the art and outline perspectives for the future. METHODS In compliance with the PRISMA guidelines and using the PubMed, Embase and Cochrane databases, we identified English-language articles published between January 2020 and January 2022 corresponding to the following keywords: augmented reality, hepatic surgery, liver and hepatectomy. RESULTS Initially, 102 titles, studies and summaries were preselected. Twenty-eight studies meeting the inclusion criteria were included, reporting on 183 patients operated on with the help of AR by laparotomy (n=31) or laparoscopy (n=152). Several techniques of acquisition and visualization were reported. Anatomical precision was the main assessment criterion in 19 articles, with values ranging from 3 mm to 14 mm, followed by time of acquisition and clinical feasibility. CONCLUSION While several AR technologies are presently being developed, their clinical applications have remained limited due to insufficient anatomical precision. That said, numerous teams are currently working toward their optimization, and it is highly likely that in the short term the application of AR in liver surgery will become more frequent and effective. As for its clinical impact, notably in oncology, it remains to be assessed.
Affiliation(s)
- B Acidi: Department of Surgery, AP-HP Hôpital Paul-Brousse, Hepato-Biliary Center, 12, avenue Paul-Vaillant Couturier, 94804 Villejuif cedex, France; Augmented Operating Room Innovation Chair (BOPA), France; Inria « Mimesis », Strasbourg, France
- M Ghallab: Department of Surgery, AP-HP Hôpital Paul-Brousse, Hepato-Biliary Center, 12, avenue Paul-Vaillant Couturier, 94804 Villejuif cedex, France; Augmented Operating Room Innovation Chair (BOPA), France
- S Cotin: Augmented Operating Room Innovation Chair (BOPA), France; Inria « Mimesis », Strasbourg, France
- E Vibert: Department of Surgery, AP-HP Hôpital Paul-Brousse, Hepato-Biliary Center, 12, avenue Paul-Vaillant Couturier, 94804 Villejuif cedex, France; Augmented Operating Room Innovation Chair (BOPA), France; DHU Hepatinov, 94800 Villejuif, France; Inserm, Paris-Saclay University, UMRS 1193, Pathogenesis and treatment of liver diseases; FHU Hepatinov, 94800 Villejuif, France
- N Golse: Department of Surgery, AP-HP Hôpital Paul-Brousse, Hepato-Biliary Center, 12, avenue Paul-Vaillant Couturier, 94804 Villejuif cedex, France; Augmented Operating Room Innovation Chair (BOPA), France; Inria « Mimesis », Strasbourg, France; DHU Hepatinov, 94800 Villejuif, France; Inserm, Paris-Saclay University, UMRS 1193, Pathogenesis and treatment of liver diseases; FHU Hepatinov, 94800 Villejuif, France
5. Bogomolova K, Vorstenbosch MATM, El Messaoudi I, Holla M, Hovius SER, van der Hage JA, Hierck BP. Effect of binocular disparity on learning anatomy with stereoscopic augmented reality visualization: A double center randomized controlled trial. Anat Sci Educ 2023; 16:87-98. [PMID: 34894205] [PMCID: PMC10078652] [DOI: 10.1002/ase.2164]
Abstract
Binocular disparity provides one of the important depth cues within stereoscopic three-dimensional (3D) visualization technology. However, there is limited research on its effect on learning within a 3D augmented reality (AR) environment. This study evaluated the effect of binocular disparity on the acquisition of anatomical knowledge and perceived cognitive load in relation to visual-spatial abilities. In a double-center randomized controlled trial, first-year (bio)medical undergraduates studied lower extremity anatomy in an interactive 3D AR environment either with a stereoscopic 3D view (n = 32) or monoscopic 3D view (n = 34). Visual-spatial abilities were tested with a mental rotation test. Anatomical knowledge was assessed by a validated 30-item written test and 30-item specimen test. Cognitive load was measured by the NASA-TLX questionnaire. Students in the stereoscopic 3D and monoscopic 3D groups performed equally well in terms of percentage correct answers (written test: 47.9 ± 15.8 vs. 49.1 ± 18.3; P = 0.635; specimen test: 43.0 ± 17.9 vs. 46.3 ± 15.1; P = 0.429), and perceived cognitive load scores (6.2 ± 1.0 vs. 6.2 ± 1.3; P = 0.992). Regardless of intervention, visual-spatial abilities were positively associated with the specimen test scores (η2 = 0.13, P = 0.003), perceived representativeness of the anatomy test questions (P = 0.010) and subjective improvement in anatomy knowledge (P < 0.001). In conclusion, binocular disparity does not improve learning anatomy. Motion parallax should be considered as another important depth cue that contributes to depth perception during learning in a stereoscopic 3D AR environment.
Affiliation(s)
- Katerina Bogomolova: Department of Surgery, Leiden University Medical Center, Leiden, the Netherlands; Center for Innovation of Medical Education, Leiden University Medical Center, Leiden, the Netherlands
- Inssaf El Messaoudi: Department of Orthopedics, Faculty of Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Micha Holla: Department of Orthopedics, Faculty of Medicine, Radboud University Medical Center, Nijmegen, the Netherlands
- Steven E. R. Hovius: Department of Plastic and Reconstructive Surgery, Radboud University Medical Center, Nijmegen, the Netherlands
- Jos A. van der Hage: Department of Surgery, Leiden University Medical Center, Leiden, the Netherlands; Center for Innovation of Medical Education, Leiden University Medical Center, Leiden, the Netherlands
- Beerend P. Hierck: Department of Anatomy and Physiology, Clinical Sciences, Veterinary Medicine Faculty, Utrecht, the Netherlands
6. Minimally invasive and invasive liver surgery based on augmented reality training: a review of the literature. J Robot Surg 2022; 17:753-763. [DOI: 10.1007/s11701-022-01499-2]
7. Singh A, Kusunose J, Phipps MA, Wang F, Chen LM, Caskey CF. Guiding and monitoring focused ultrasound mediated blood-brain barrier opening in rats using power Doppler imaging and passive acoustic mapping. Sci Rep 2022; 12:14758. [PMID: 36042266] [PMCID: PMC9427847] [DOI: 10.1038/s41598-022-18328-z]
Abstract
The blood-brain barrier (BBB) prevents harmful toxins from entering the brain but can also block therapeutic molecules designed to treat neurodegenerative diseases. Focused ultrasound (FUS) combined with microbubbles can enhance the permeability of the BBB and is often performed under MRI guidance. We present an all-ultrasound system capable of targeting desired regions to open the BBB with millimeter-scale accuracy in two dimensions based on Doppler images. We registered imaging coordinates to FUS coordinates with a target registration error of 0.6 ± 0.3 mm and used the system to target microbubbles flowing in a cellulose tube in two in vitro scenarios (agarose-embedded and through a rat skull), while receiving echoes on the imaging transducer. We created passive acoustic maps from the received echoes and found the error between the intended location in the imaging plane and the location of the maximum-intensity pixel after passive acoustic map reconstruction to be within 2 mm in 5/6 cases. We validated the ultrasound-guided procedure in three in vivo rat brains by delivering MRI contrast agent to cortical regions of the rat brains after BBB opening. Landmark-based registration of vascular maps created with MRI and Doppler ultrasound revealed BBB opening inside the intended focus with targeting accuracy within 1.5 mm. The combined use of power Doppler imaging and passive acoustic mapping demonstrates an ultrasound-based solution for guiding focused ultrasound with high precision in rodents.
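The landmark-based registration and target registration error (TRE) reported above can be illustrated with the standard SVD-based rigid fit (Arun's method); the sketch below uses synthetic landmarks and an assumed noise level rather than the study's data.

```python
# Sketch: rigid landmark registration (SVD) plus TRE at a non-fiducial target.
import numpy as np

def register_rigid(src, dst):
    """Least-squares rigid transform (R, t) mapping src landmarks onto dst."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

rng = np.random.default_rng(0)
theta = np.deg2rad(5)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([2.0, -1.0, 0.5])

img_pts = rng.uniform(0, 20, (6, 3))                        # fiducials (mm)
fus_pts = img_pts @ R_true.T + t_true + rng.normal(0, 0.2, (6, 3))

R, t = register_rigid(img_pts, fus_pts)
target = np.array([10.0, 10.0, 5.0])                        # away from fiducials
tre = np.linalg.norm((R @ target + t) - (R_true @ target + t_true))
print(f"TRE at target: {tre:.2f} mm")
```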
Affiliation(s)
- Aparna Singh: Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA; Vanderbilt University Institute of Imaging Science, Nashville, TN, USA
- Jiro Kusunose, M Anthony Phipps, Feng Wang, Li Min Chen: Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt University Institute of Imaging Science, Nashville, TN, USA
- Charles F Caskey: Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA; Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt University Institute of Imaging Science, Nashville, TN, USA
8. Tanji A, Nagura T, Iwamoto T, Matsumura N, Nakamura M, Matsumoto M, Sato K. Total elbow arthroplasty using an augmented reality-assisted surgical technique. J Shoulder Elbow Surg 2022; 31:175-184. [PMID: 34175467] [DOI: 10.1016/j.jse.2021.05.019]
Abstract
BACKGROUND Precision placement of implants in total elbow arthroplasty (TEA) using conventional surgical techniques can be difficult and riddled with errors. Modern technologies such as augmented reality (AR) and 3-dimensional (3D) printing have already found useful applications in many fields of medicine. We proposed a cutting-edge surgical technique, augmented reality total elbow arthroplasty (ARTEA), that uses AR and 3D printing to provide 3D information for intuitive preoperative planning. The purpose of this study was to evaluate the accuracy of humeral and ulnar component placement using ARTEA. METHODS Twelve upper extremities from human frozen cadavers were used for experiments performed in this study. We scanned the extremities via computed tomography prior to performing TEA to plan placement sites using computer simulations. The ARTEA technique was used to perform TEA surgery on 6 of the extremities, whereas conventional (non-ARTEA) techniques were used on the other 6 extremities. Computed tomography scanning was repeated after TEA completion, and the error between the planned and actual placements of humeral and ulnar components was calculated and compared. RESULTS For humeral component placement, the mean positional error ± standard deviation of ARTEA vs. non-ARTEA was 1.4° ± 0.6° vs. 4.4° ± 0.9° in total rotation (P = .002) and 1.5 ± 0.6 mm vs. 8.6 ± 1.3 mm in total translation (P = .002). For ulnar component placement, the mean positional error ± standard deviation of ARTEA vs. non-ARTEA was 5.5° ± 3.1° vs. 19.5° ± 9.8° in total rotation (P = .004) and 1.5 ± 0.4 mm vs. 6.9 ± 1.6 mm in total translation (P = .002). Both rotational accuracy and translational accuracy were greater for joint components replaced using the ARTEA technique compared with the non-ARTEA technique (P < .05). CONCLUSION Compared with conventional surgical techniques, ARTEA had greater accuracy in prosthetic implant placement when used to perform TEA.
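The total rotation and translation errors compared above reduce to the difference between a planned and an achieved implant pose; a minimal sketch, assuming poses given as 4x4 homogeneous matrices (the paper's exact error decomposition may differ):

```python
# Sketch: total rotational/translational error between planned and actual poses.
import numpy as np

def pose_error(T_planned, T_actual):
    """Angle (deg) and translation (input units) between two 4x4 poses."""
    dR = T_planned[:3, :3].T @ T_actual[:3, :3]
    cos_angle = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)
    angle = np.degrees(np.arccos(cos_angle))
    translation = np.linalg.norm(T_actual[:3, 3] - T_planned[:3, 3])
    return angle, translation

# Placeholder poses: 2 degrees about z plus a 1.5 mm offset
theta = np.deg2rad(2.0)
T_plan = np.eye(4)
T_act = np.eye(4)
T_act[:3, :3] = [[np.cos(theta), -np.sin(theta), 0.0],
                 [np.sin(theta),  np.cos(theta), 0.0],
                 [0.0, 0.0, 1.0]]
T_act[:3, 3] = [1.0, 1.0, 0.5]
print(pose_error(T_plan, T_act))   # ~ (2.0, 1.5)
```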
Affiliation(s)
- Atsushi Tanji: Department of Orthopedic Surgery, Japanese Red Cross Ashikaga Hospital, Ashikaga, Japan; Department of Orthopedic Surgery, Keio University, Tokyo, Japan
- Takeo Nagura, Takuji Iwamoto, Masaya Nakamura, Morio Matsumoto, Kazuki Sato: Department of Orthopedic Surgery, Keio University, Tokyo, Japan
9. Wahba R, Thomas MN, Bunck AC, Bruns CJ, Stippel DL. Clinical use of augmented reality, mixed reality, three-dimensional-navigation and artificial intelligence in liver surgery. Artif Intell Gastroenterol 2021; 2:94-104. [DOI: 10.35712/aig.v2.i4.94]
Abstract
A precise knowledge of the intra-parenchymal vascular and biliary architecture and of the location of lesions in relation to this complex anatomy is indispensable for liver surgery. Virtual three-dimensional (3D) reconstruction models from computed tomography/magnetic resonance imaging scans of the liver can therefore be helpful for visualization. Augmented reality, mixed reality and 3D navigation can transfer such 3D image data directly into the operating theater to support the surgeon. This review examines the literature on the clinical and intraoperative use of these image guidance techniques in liver surgery and provides the reader with the opportunity to learn about them. Augmented reality and mixed reality have been shown to be feasible for use in open and minimally invasive liver surgery. 3D navigation facilitated the targeting of intraparenchymal lesions. The existing data are limited to small cohorts and descriptions of technical details, e.g., the accordance between the virtual 3D model and the real liver anatomy. Randomized controlled trials regarding clinical data or oncological outcome are not available. To date, there is no intraoperative application of artificial intelligence in liver surgery. The usability of all these sophisticated image guidance tools has still not reached the degree of immersion that would be necessary for widespread use in the daily surgical routine. Although there are many challenges, augmented reality, mixed reality, 3D navigation and artificial intelligence are emerging fields in hepato-biliary surgery.
Affiliation(s)
- Roger Wahba, Michael N Thomas, Christiane J Bruns, Dirk L Stippel: Department of General, Visceral, Cancer and Transplantation Surgery, University of Cologne, Faculty of Medicine and University Hospital Cologne, Cologne 50937, Germany
- Alexander C Bunck: Department of Diagnostic and Interventional Radiology, University of Cologne, Faculty of Medicine and University Hospital Cologne, Cologne 50937, Germany
10. Pladere T, Luguzis A, Zabels R, Smukulis R, Barkovska V, Krauze L, Konosonoka V, Svede A, Krumina G. When virtual and real worlds coexist: Visualization and visual system affect spatial performance in augmented reality. J Vis 2021; 21:17. [PMID: 34388233] [PMCID: PMC8363769] [DOI: 10.1167/jov.21.8.17]
Abstract
New visualization approaches are being actively developed aiming to mitigate the effect of vergence-accommodation conflict in stereoscopic augmented reality; however, high interindividual variability in spatial performance makes it difficult to predict user gain. To address this issue, we investigated the effects of consistent and inconsistent binocular and focus cues on perceptual matching in the stereoscopic environment of augmented reality using a head-mounted display that was driven in multifocal and single focal plane modes. Participants matched the distance of a real object with images projected at three viewing distances, concordant with the display focal planes when driven in the multifocal mode. As a result, consistency of depth cues facilitated faster perceptual judgments on spatial relations. Moreover, the individuals with mild binocular and accommodative disorders benefited from the visualization of information on the focal planes corresponding to image planes more than individuals with normal vision, which was reflected in performance accuracy. Because symptoms and complaints may be absent when the functionality of the sensorimotor system is reduced, the results indicate the need for a detailed assessment of visual functions in research on spatial performance. This study highlights that the development of a visualization system that reduces visual stress and improves user performance should be a priority for the successful implementation of augmented reality displays.
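The vergence-accommodation conflict that multifocal displays aim to reduce is the dioptric gap between where the eyes converge (the rendered image distance) and where they must focus (the display focal plane); a sketch with assumed distances, not the study's focal-plane layout:

```python
# Dioptric conflict between rendered image distance and display focal plane.
def conflict_diopters(image_distance_m: float, focal_plane_m: float) -> float:
    return abs(1.0 / image_distance_m - 1.0 / focal_plane_m)

for d_img in (0.5, 1.0, 2.0):           # rendered viewing distances (m)
    for d_focal in (0.5, 1.8):          # hypothetical display focal planes (m)
        print(f"image {d_img} m, focus {d_focal} m: "
              f"{conflict_diopters(d_img, d_focal):.2f} D")
```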
Affiliation(s)
- Tatjana Pladere, Viktorija Barkovska, Linda Krauze, Vita Konosonoka, Aiga Svede, Gunta Krumina: Department of Optometry and Vision Science, Faculty of Physics, Mathematics and Optometry, University of Latvia, Riga, Latvia
- Artis Luguzis: Department of Optometry and Vision Science, Faculty of Physics, Mathematics and Optometry, University of Latvia, Riga, Latvia; Laboratory of Statistical Research and Data Analysis, Faculty of Physics, Mathematics and Optometry, University of Latvia, Riga, Latvia
11. Schneider C, Allam M, Stoyanov D, Hawkes DJ, Gurusamy K, Davidson BR. Performance of image guided navigation in laparoscopic liver surgery - A systematic review. Surg Oncol 2021; 38:101637. [PMID: 34358880] [DOI: 10.1016/j.suronc.2021.101637]
Abstract
BACKGROUND Compared to open surgery, minimally invasive liver resection has improved short term outcomes. It is however technically more challenging. Navigated image guidance systems (IGS) are being developed to overcome these challenges. The aim of this systematic review is to provide an overview of their current capabilities and limitations. METHODS Medline, Embase and Cochrane databases were searched using free text terms and corresponding controlled vocabulary. Titles and abstracts of retrieved articles were screened for inclusion criteria. Due to the heterogeneity of the retrieved data it was not possible to conduct a meta-analysis. Therefore results are presented in tabulated and narrative format. RESULTS Out of 2015 articles, 17 pre-clinical and 33 clinical papers met inclusion criteria. Data from 24 articles that reported on accuracy indicates that in recent years navigation accuracy has been in the range of 8-15 mm. Due to discrepancies in evaluation methods it is difficult to compare accuracy metrics between different systems. Surgeon feedback suggests that current state of the art IGS may be useful as a supplementary navigation tool, especially in small liver lesions that are difficult to locate. They are however not able to reliably localise all relevant anatomical structures. Only one article investigated IGS impact on clinical outcomes. CONCLUSIONS Further improvements in navigation accuracy are needed to enable reliable visualisation of tumour margins with the precision required for oncological resections. To enhance comparability between different IGS it is crucial to find a consensus on the assessment of navigation accuracy as a minimum reporting standard.
Affiliation(s)
- C Schneider: Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK
- M Allam: Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK; General Surgery Department, Tanta University, Egypt
- D Stoyanov: Department of Computer Science, University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK
- D J Hawkes: Centre for Medical Image Computing (CMIC), University College London, London, UK; Wellcome/EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK
- K Gurusamy: Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK
- B R Davidson: Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK
12. Wachs JP, Kirkpatrick AW, Tisherman SA. Procedural Telementoring in Rural, Underdeveloped, and Austere Settings: Origins, Present Challenges, and Future Perspectives. Annu Rev Biomed Eng 2021; 23:115-139. [PMID: 33770455] [DOI: 10.1146/annurev-bioeng-083120-023315]
Abstract
Telemedicine is perhaps the most rapidly growing area in health care. Approximately 15 million Americans receive medical assistance remotely every year. Yet rural communities face significant challenges in securing subspecialist care. In the United States, 25% of the population resides in rural areas, where less than 15% of physicians work. Current surgery residency programs do not adequately prepare surgeons for rural practice. Telementoring, wherein a remote expert guides a less experienced caregiver, has been proposed to address this challenge. Nonetheless, existing mentoring technologies are not widely available to rural communities, due to a lack of infrastructure and mentor availability. For this reason, some clinicians prefer simpler and more reliable technologies. This article presents past and current telementoring systems, with a focus on rural settings, and proposes a set of requirements for such systems. We conclude with a perspective on the future of telementoring systems and the integration of artificial intelligence within those systems.
Affiliation(s)
- Juan P Wachs: School of Industrial Engineering, Purdue University, West Lafayette, Indiana 47907, USA
- Andrew W Kirkpatrick: Departments of Critical Care Medicine, Surgery, and Medicine, Snyder Institute for Chronic Diseases, and the Trauma Program, University of Calgary and Alberta Health Services, Calgary, Alberta T2N 2T9, Canada; Tele-Mentored Ultrasound Supported Medical Interaction (TMUSMI) Research Group, Foothills Medical Centre, Calgary, Alberta T2N 2T9, Canada
- Samuel A Tisherman: Department of Surgery and the Program in Trauma, University of Maryland School of Medicine, Baltimore, Maryland 21201, USA
13. Liu X, Plishker W, Shekhar R. Hybrid electromagnetic-ArUco tracking of laparoscopic ultrasound transducer in laparoscopic video. J Med Imaging (Bellingham) 2021; 8:015001. [PMID: 33585664] [PMCID: PMC7857492] [DOI: 10.1117/1.jmi.8.1.015001]
Abstract
Purpose: The purpose of this work was to develop a new method of tracking a laparoscopic ultrasound (LUS) transducer in laparoscopic video by combining the hardware [e.g., electromagnetic (EM)] and the computer vision-based (e.g., ArUco) tracking methods. Approach: We developed a special tracking mount for the imaging tip of the LUS transducer. The mount incorporated an EM sensor and an ArUco pattern registered to it. The hybrid method used ArUco tracking for ArUco-success frames (i.e., frames where ArUco succeeds in detecting the pattern) and used corrected EM tracking for the ArUco-failure frames. The corrected EM tracking result was obtained by applying correction matrices to the original EM tracking result. The correction matrices were calculated in previous ArUco-success frames by comparing the ArUco result and the original EM tracking result. Results: We performed phantom and animal studies to evaluate the performance of our hybrid tracking method. The corrected EM tracking results showed significant improvements over the original EM tracking results. In the animal study, 59.2% frames were ArUco-success frames. For the ArUco-failure frames, mean reprojection errors for the original EM tracking method and for the corrected EM tracking method were 30.8 pixel and 10.3 pixel, respectively. Conclusions: The new hybrid method is more reliable than using ArUco tracking alone and more accurate and practical than using EM tracking alone for tracking the LUS transducer in the laparoscope camera image. The proposed method has the potential to significantly improve tracking performance for LUS-based augmented reality applications.
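The correction-matrix idea described in the Approach can be written down compactly: while ArUco succeeds, cache the offset between the vision-based and EM poses; when ArUco fails, apply the cached offset to the raw EM pose. The sketch below assumes 4x4 camera-space pose matrices; the class and variable names are ours, not the paper's.

```python
import numpy as np

class HybridTracker:
    """Fuse ArUco (when available) with correction-applied EM tracking."""
    def __init__(self):
        self.T_corr = np.eye(4)                 # last known ArUco-vs-EM offset

    def update(self, T_aruco, T_em):
        """Return the fused transducer pose for one video frame."""
        if T_aruco is not None:                 # ArUco-success frame
            self.T_corr = T_aruco @ np.linalg.inv(T_em)
            return T_aruco
        return self.T_corr @ T_em               # ArUco-failure: corrected EM
```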
Affiliation(s)
- Xinyang Liu: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, United States
- Raj Shekhar: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, United States; IGI Technologies, Inc., Silver Spring, Maryland, United States
14. Bari H, Wadhwani S, Dasari BVM. Role of artificial intelligence in hepatobiliary and pancreatic surgery. World J Gastrointest Surg 2021; 13:7-18. [PMID: 33552391] [PMCID: PMC7830072] [DOI: 10.4240/wjgs.v13.i1.7]
Abstract
Over the past decade, enhanced preoperative imaging and visualization, improved delineation of the complex anatomical structures of the liver and pancreas, and intraoperative technological advances have helped deliver liver and pancreatic surgery with increased safety and better postoperative outcomes. Artificial intelligence (AI) has a major role to play in the 3D visualization, virtual simulation, and augmented reality that support the training of surgeons and the future delivery of conventional, laparoscopic, and robotic hepatobiliary and pancreatic (HPB) surgery; artificial neural networks and machine learning have the potential to revolutionize individualized patient care during preoperative imaging and postoperative surveillance. In this paper, we reviewed the existing evidence and outlined the potential for applying AI in the perioperative care of patients undergoing HPB surgery.
Affiliation(s)
- Hassaan Bari: Department of HPB and Liver Transplantation Surgery, Queen Elizabeth Hospital, Birmingham B15 2TH, United Kingdom
- Sharan Wadhwani: Department of Radiology, Queen Elizabeth Hospital, Birmingham B15 2TH, United Kingdom
- Bobby V M Dasari: Department of HPB and Liver Transplantation Surgery, Queen Elizabeth Hospital, Birmingham B15 2TH, United Kingdom
15. Advances and Trends in Pediatric Minimally Invasive Surgery. J Clin Med 2020; 9:3999. [PMID: 33321836] [PMCID: PMC7764454] [DOI: 10.3390/jcm9123999]
Abstract
As many meta-analyses comparing pediatric minimally invasive to open surgery can be found in the literature, the aim of this review is to summarize the current state of minimally invasive pediatric surgery and specifically focus on the trends and developments we expect in the upcoming years. Print and electronic databases were systematically searched for specific keywords, and cross-link searches with references found in the literature were added. Full-text articles were obtained, and eligibility criteria were applied independently. Pediatric minimally invasive surgery is a wide field, ranging from minimally invasive fetal surgery over microlaparoscopy in newborns to robotic surgery in adolescents. New techniques and devices, like natural orifice transluminal endoscopic surgery (NOTES), single-incision and endoscopic surgery, as well as the artificial uterus as a backup for surgery in preterm fetuses, all contribute to the development of less invasive procedures for children. In spite of all the promising technical developments that will change the way pediatric surgeons perform minimally invasive procedures in the upcoming years, one must bear in mind that only hard data from prospective randomized controlled and double-blind trials can validate whether these techniques and devices really improve the surgical outcome of our patients.
16. Prevost GA, Eigl B, Paolucci I, Rudolph T, Peterhans M, Weber S, Beldi G, Candinas D, Lachenmayer A. Efficiency, Accuracy and Clinical Applicability of a New Image-Guided Surgery System in 3D Laparoscopic Liver Surgery. J Gastrointest Surg 2020; 24:2251-2258. [PMID: 31621024] [DOI: 10.1007/s11605-019-04395-7]
Abstract
BACKGROUND To investigate the efficiency, accuracy and clinical benefit of a new augmented reality system for 3D laparoscopic liver surgery. METHODS All patients who received laparoscopic liver resection by a new image-guided surgery system with augmented 3D-imaging in a university hospital were included for analysis. Digitally processed preoperative cross-sectional imaging was merged with the laparoscopic image. Intraoperative efficiency of the procedure was measured as the time needed to achieve sufficient registration accuracy. Technical accuracy was reported as the fiducial registration error (FRE). Clinical benefit was assessed through a questionnaire, reporting measures in a 5-point Likert scale format ranging from 1 (high) to 5 (low). RESULTS From January to March 2018, ten laparoscopic liver resections of a total of 18 lesions were performed using the novel augmented reality system. Median time for registration was 8:50 min (range 1:31-23:56). The mean FRE was reduced from 14.0 mm (SD 5.0) in the first registration attempt to 9.2 mm (SD 2.8) in the last attempt. The questionnaire revealed the ease of use of the system (1.2, SD 0.4) and the benefit for resection of vanishing lesions (1.0, SD 0.0) as convincing positive aspects, whereas image registration accuracy for resection guidance was consistently judged as too inaccurate. CONCLUSIONS Augmented reality in 3D laparoscopic liver surgery with a landmark-based registration technique is feasible with only little impact on the intraoperative workflow. The benefit for detecting vanishing lesions in particular is high. For an additional benefit during the resection process, registration accuracy has to be improved, and non-rigid registration algorithms will be required to address intraoperative anatomical deformation.
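The fiducial registration error (FRE) quoted above is the RMS residual over registered landmark pairs; a minimal sketch, assuming a rigid fit (R, t) obtained as in the registration sketch under reference 7:

```python
import numpy as np

def fre(src, dst, R, t):
    """RMS distance between registered source fiducials and their targets."""
    residuals = dst - (src @ R.T + t)
    return float(np.sqrt((residuals ** 2).sum(axis=1).mean()))

# e.g. fre(ct_landmarks_mm, laparoscopic_landmarks_mm, R, t) -> FRE in mm
```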
Affiliation(s)
- Gian Andrea Prevost, Guido Beldi, Daniel Candinas, Anja Lachenmayer: Department of Visceral Surgery and Medicine, Inselspital, University Hospital Bern, University of Bern, 3010 Bern, Switzerland
- Benjamin Eigl: ARTORG Center for Biomedical Engineering Research, University of Bern, 3010 Bern, Switzerland; CAScination AG, 3008 Bern, Switzerland
- Iwan Paolucci, Stefan Weber: ARTORG Center for Biomedical Engineering Research, University of Bern, 3010 Bern, Switzerland
17. Virtual Reality Simulation and Augmented Reality-Guided Surgery for Total Maxillectomy: A Case Report. Appl Sci (Basel) 2020. [DOI: 10.3390/app10186288]
Abstract
With the improvement in computer graphics and sensors, technologies like virtual reality (VR) and augmented reality (AR) have created new possibilities for developing diagnostic and surgical techniques in the field of surgery. VR and AR are the latest technological modalities that have been integrated into clinical practice and medical education, and are rapidly emerging as powerful tools in the field of maxillofacial surgery. In this report, we describe a case of total maxillectomy and orbital floor reconstruction in a patient with malignant fibrous histiocytoma of the maxilla, with preoperative planning via VR simulation and AR-guided surgery. Future developments in VR and AR technologies will increase their utility and effectiveness in the field of surgery.
18. Kosieradzki M, Lisik W, Gierwiało R, Sitnik R. Applicability of Augmented Reality in an Organ Transplantation. Ann Transplant 2020; 25:e923597. [PMID: 32732862] [PMCID: PMC7418780] [DOI: 10.12659/aot.923597]
Abstract
Augmented reality (AR) delivers virtual information or some of its elements to the real world. This technology, which has been used primarily for entertainment and military applications, has vigorously entered medicine, especially radiology and surgery, yet has never been used in organ transplantation. AR could be useful in training transplant surgeons, promoting organ donation, graft retrieval and allocation, microscopic diagnosis of rejection, treatment of complications, and post-transplantation neoplasms. The availability of AR display tools such as smartphone screens and head-mounted goggles, the accessibility of software for automated image segmentation and 3-dimensional reconstruction, and algorithms allowing registration make augmented reality an attractive tool for surgery, including transplantation. The shortage of hospital IT specialists and insufficient investment from medical equipment manufacturers in the development of AR technology remain the most significant obstacles to its broader application.
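One of the building blocks named here, automated segmentation followed by 3-dimensional reconstruction, reduces in its simplest form to extracting an isosurface from a binary mask; a sketch with a placeholder mask file and an assumed voxel spacing:

```python
import numpy as np
from skimage import measure

mask = np.load("organ_mask.npy")                # hypothetical binary CT segmentation
verts, faces, normals, _ = measure.marching_cubes(
    mask.astype(np.float32), level=0.5,
    spacing=(1.0, 0.7, 0.7))                    # assumed voxel size in mm
print(f"{len(verts)} vertices, {len(faces)} triangles")
```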
Affiliation(s)
- Maciej Kosieradzki, Wojciech Lisik: Department of General and Transplantation Surgery, The Medical University of Warsaw, Warsaw, Poland
- Radosław Gierwiało, Robert Sitnik: Virtual Reality Techniques Division, Institute of Micromechanics and Photonics, Faculty of Mechatronics, Warsaw University of Technology, Warsaw, Poland
19. Wang Z, Kasman M, Martinez M, Rege R, Zeh H, Scott D, Fey AM. A Comparative Human-Centric Analysis of Virtual Reality and Dry Lab Training Tasks on the da Vinci Surgical Platform. J Med Robot Res 2020. [DOI: 10.1142/s2424905x19420078]
Abstract
There is a growing, widespread trend of adopting robot-assisted minimally invasive surgery (RMIS) in clinical care. Dry lab robot training and virtual reality simulation are commonly used to train surgical residents; however, it is unclear whether both types of training are equivalent or interchangeable, achieving the same training outcomes. In this paper, we take the first step in comparing the effects of physical and simulated surgical training tasks on human operator kinematics and physiological response, to provide a richer understanding of exactly how the user interacts with the actual or simulated surgical robot. Four subjects, with expertise levels ranging from novice to expert surgeon, were recruited to perform three surgical tasks (Continuous Suture, Pick and Place, and Tubes, with three repetitions each) on two training platforms: (1) the da Vinci Si Skills Simulator and (2) the da Vinci S robot, in randomized order. We collected physiological response and kinematic movement data through body-worn sensors for a total of 72 individual experimental trials. A range of expertise was chosen for this experiment to wash out inherent differences based on expertise and focus only on inherent differences between the virtual reality and dry lab platforms. Our results show statistically significant differences between tasks done on the simulator and on the surgical robot. Specifically, robotic tasks resulted in significantly higher muscle activation and path length, and significantly lower economy of volume. The individual tasks also showed significant differences in various kinematic and physiological metrics, leading to significant interaction effects between task type and training platform. These results indicate that the presence of the robotic system may make surgical training tasks more difficult for the human operator. Thus, the potentially detrimental effects of virtual reality training alone are an important topic for future investigation.
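Two of the kinematic metrics named here can be computed directly from a tool trajectory. "Economy of volume" is taken below as path length per unit convex-hull volume of the workspace, one plausible definition; the paper's exact formula may differ, and the trajectory is a synthetic placeholder.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Placeholder trajectory: Nx3 tool positions in metres (random walk)
traj = np.cumsum(np.random.default_rng(1).normal(0, 1e-3, (500, 3)), axis=0)

path_length = np.linalg.norm(np.diff(traj, axis=0), axis=1).sum()
workspace_volume = ConvexHull(traj).volume
economy_of_volume = path_length / workspace_volume   # assumed definition
print(path_length, workspace_volume, economy_of_volume)
```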
Affiliation(s)
- Ziheng Wang: Department of Mechanical Engineering, University of Texas at Dallas, Richardson, TX 75080, USA
- Michael Kasman: Department of Electrical & Computer Engineering, University of Texas at Dallas, Richardson, TX 75080, USA
- Marco Martinez: Department of Surgery, Naval Medical Center, San Diego, CA 92134, USA
- Robert Rege, Herbert Zeh, Daniel Scott: Department of Surgery, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Ann Majewicz Fey: Department of Mechanical Engineering, University of Texas at Dallas, Richardson, TX 75080, USA; Department of Surgery, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
20. Huang B, Tsai YY, Cartucho J, Vyas K, Tuch D, Giannarou S, Elson DS. Tracking and visualization of the sensing area for a tethered laparoscopic gamma probe. Int J Comput Assist Radiol Surg 2020; 15:1389-1397. [PMID: 32556919] [PMCID: PMC7351835] [DOI: 10.1007/s11548-020-02205-z]
Abstract
Purpose In surgical oncology, complete cancer resection and lymph node identification are challenging due to the lack of reliable intraoperative visualization. Recently, endoscopic radio-guided cancer resection has been introduced where a novel tethered laparoscopic gamma detector can be used to determine the location of tracer activity, which can complement preoperative nuclear imaging data and endoscopic imaging. However, these probes do not clearly indicate where on the tissue surface the activity originates, making localization of pathological sites difficult and increasing the mental workload of the surgeons. Therefore, a robust real-time gamma probe tracking system integrated with augmented reality is proposed. Methods A dual-pattern marker has been attached to the gamma probe, which combines chessboard vertices and circular dots for higher detection accuracy. Both patterns are detected simultaneously based on blob detection and the pixel intensity-based vertices detector and used to estimate the pose of the probe. Temporal information is incorporated into the framework to reduce tracking failure. Furthermore, we utilized the 3D point cloud generated from structure from motion to find the intersection between the probe axis and the tissue surface. When presented as an augmented image, this can provide visual feedback to the surgeons. Results The method has been validated with ground truth probe pose data generated using the OptiTrack system. When detecting the orientation of the pose using circular dots and chessboard dots alone, the mean errors obtained are 0.05° and 0.06°, respectively. As for the translation, the mean error for each pattern is 1.78 mm and 1.81 mm. The detection limits for pitch, roll and yaw are 360°, 360° and 8°–82° ∪ 188°–352°, respectively. Conclusion The performance evaluation results show that this dual-pattern marker can provide high detection rates, as well as more accurate pose estimation and a larger workspace than the previously proposed hybrid markers. The augmented reality will be used to provide visual feedback to the surgeons on the location of the affected lymph nodes or tumor.
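Once the probe pose is known, the step of intersecting the probe axis with the reconstructed tissue surface can be approximated on a point cloud as the closest in-front point near the axis; a sketch, with the off-axis threshold as an assumption:

```python
import numpy as np

def axis_surface_hit(tip, direction, cloud, max_off_axis=2.0):
    """First cloud point (mm) near the probe axis and in front of the tip."""
    d = direction / np.linalg.norm(direction)
    rel = cloud - tip
    along = rel @ d                                  # distance along the axis
    off = np.linalg.norm(rel - np.outer(along, d), axis=1)
    ok = (along > 0) & (off < max_off_axis)
    if not ok.any():
        return None
    idx = np.where(ok)[0][np.argmin(along[ok])]      # nearest hit in front
    return cloud[idx]
```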
Affiliation(s)
- Baoru Huang, Ya-Yen Tsai, João Cartucho, Stamatia Giannarou, Daniel S Elson: The Hamlyn Centre for Robotic Surgery, Department of Surgery and Cancer, Imperial College London, London, SW7 2AZ, UK
21. Liu X, Plishker W, Kane TD, Geller DA, Lau LW, Tashiro J, Sharma K, Shekhar R. Preclinical evaluation of ultrasound-augmented needle navigation for laparoscopic liver ablation. Int J Comput Assist Radiol Surg 2020; 15:803-810. [PMID: 32323211] [DOI: 10.1007/s11548-020-02164-5]
Abstract
PURPOSE For laparoscopic ablation to be successful, accurate placement of the needle into the tumor is essential. Laparoscopic ultrasound is an essential tool to guide needle placement, but the ultrasound image is generally presented separately from the laparoscopic image. We aim to evaluate an augmented reality (AR) system which combines the laparoscopic ultrasound image, laparoscope video, and the needle trajectory in a unified view. METHODS We created a tissue phantom made of gelatin. Artificial tumors represented by plastic spheres were secured in the gelatin at various depths. The top point of the sphere surface was our target, and its 3D coordinates were known. The participants were invited to perform needle placement with and without AR guidance. Once the participant reported that the needle tip had reached the target, the needle tip location was recorded and compared to the ground truth location of the target, and the difference was the target localization error (TLE). The time of the needle placement was also recorded. We further tested the technical feasibility of the AR system in vivo on a 40-kg swine. RESULTS The AR guidance system was evaluated by two experienced surgeons and two surgical fellows. The users performed needle placement on a total of 26 targets, 13 with AR and 13 without (i.e., the conventional approach). The average TLE for the conventional and the AR approaches was 14.9 mm and 11.1 mm, respectively. The average needle placement time needed for the conventional and AR approaches was 59.4 s and 22.9 s, respectively. For the animal study, ultrasound image and needle trajectory were successfully fused with the laparoscopic video in real time and presented on a single screen for the surgeons. CONCLUSION By providing projected needle trajectory, we believe our AR system can assist the surgeon with more efficient and precise needle placement.
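The target localization error (TLE) used here is the Euclidean distance between the reported needle-tip position and the known target; a short sketch with placeholder coordinates:

```python
import numpy as np

tip = np.array([52.1, 30.4, 18.9])      # recorded needle tip (mm, placeholder)
target = np.array([50.0, 28.7, 20.0])   # ground-truth target (mm, placeholder)
tle = np.linalg.norm(tip - target)
print(f"TLE: {tle:.1f} mm")
```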
Collapse
Affiliation(s)
- Xinyang Liu
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA
| | | | - Timothy D Kane
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA
| | - David A Geller
- Department of Surgery, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
| | - Lung W Lau
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA
| | - Jun Tashiro
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA
| | - Karun Sharma
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA
| | - Raj Shekhar
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC, USA.
- IGI Technologies, Inc., College Park, MD, USA.
| |
Collapse
|
22
|
Luo H, Yin D, Zhang S, Xiao D, He B, Meng F, Zhang Y, Cai W, He S, Zhang W, Hu Q, Guo H, Liang S, Zhou S, Liu S, Sun L, Guo X, Fang C, Liu L, Jia F. Augmented reality navigation for liver resection with a stereoscopic laparoscope. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2020; 187:105099. [PMID: 31601442 DOI: 10.1016/j.cmpb.2019.105099] [Citation(s) in RCA: 37] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/10/2019] [Revised: 08/14/2019] [Accepted: 09/27/2019] [Indexed: 06/10/2023]
Abstract
OBJECTIVE Understanding the three-dimensional (3D) spatial position and orientation of vessels and tumor(s) is vital in laparoscopic liver resection procedures. Augmented reality (AR) techniques can help surgeons see the patient's internal anatomy in conjunction with laparoscopic video images. METHOD In this paper, we present an AR-assisted navigation system for liver resection based on a rigid stereoscopic laparoscope. The stereo image pairs from the laparoscope are used by an unsupervised convolutional neural network (CNN) framework to estimate depth and generate an intraoperative 3D liver surface. Meanwhile, 3D models of the patient's surgical field are segmented from preoperative CT images using a V-Net architecture for volumetric image data in an end-to-end predictive style. A globally optimal iterative closest point (Go-ICP) algorithm is adopted to register the pre- and intraoperative models into a unified coordinate space; then, the preoperative 3D models are superimposed on the live laparoscopic images to provide the surgeon with detailed information about the subsurface of the patient's anatomy, including tumors, their resection margins, and vessels. RESULTS The proposed navigation system was tested on four ex vivo porcine livers in the laboratory and in five in vivo porcine experiments in the operating theatre to validate its accuracy. The ex vivo and in vivo reprojection errors (RPE) were 6.04 ± 1.85 mm and 8.73 ± 2.43 mm, respectively. CONCLUSION AND SIGNIFICANCE Both the qualitative and quantitative results indicate that our AR-assisted navigation system shows promise and has the potential to be highly useful in clinical practice.
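The registration step above uses Go-ICP; as a rough illustration of the underlying iterative-closest-point idea, here is a plain local point-to-point ICP sketch in Python (a Kabsch fit per iteration, not the authors' globally optimal implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=50):
    """Local point-to-point ICP on (N, 3) point arrays: alternate
    nearest-neighbour matching with a Kabsch rigid fit."""
    src = np.asarray(source, dtype=float).copy()
    tgt = np.asarray(target, dtype=float)
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(tgt)
    for _ in range(iters):
        _, idx = tree.query(src)               # closest target point per source point
        matched = tgt[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)  # cross-covariance of matched pairs
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                     # optimal rotation (Kabsch)
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

Go-ICP wraps such a local fit in a branch-and-bound search over rigid motions, so the result does not depend on the initial alignment.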
Collapse
Affiliation(s)
- Huoling Luo
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
| | - Dalong Yin
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China; Department of Hepatobiliary Surgery, Shengli Hospital Affiliated to University of Science and Technology of China, Hefei, China
| | - Shugeng Zhang
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China; Department of Hepatobiliary Surgery, Shengli Hospital Affiliated to University of Science and Technology of China, Hefei, China
| | - Deqiang Xiao
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
| | - Baochun He
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Fanzheng Meng
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
| | - Yanfang Zhang
- Department of Interventional Radiology, Shenzhen People's Hospital, Shenzhen, China
| | - Wei Cai
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
| | - Shenghao He
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Wenyu Zhang
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
| | - Qingmao Hu
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
| | - Hongrui Guo
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
| | - Shuhang Liang
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
| | - Shuo Zhou
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
| | - Shuxun Liu
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
| | - Linmao Sun
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
| | - Xiao Guo
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China
| | - Chihua Fang
- Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China
| | - Lianxin Liu
- Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China; Department of Hepatobiliary Surgery, Shengli Hospital Affiliated to University of Science and Technology of China, Hefei, China.
| | - Fucang Jia
- Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China.
| |
Collapse
|
23
|
Ramírez-Hernández LR, Rodríguez-Quiñonez JC, Castro-Toscano MJ, Hernández-Balbuena D, Flores-Fuentes W, Rascón-Carmona R, Lindner L, Sergiyenko O. Improve three-dimensional point localization accuracy in stereo vision systems using a novel camera calibration method. INT J ADV ROBOT SYST 2020. [DOI: 10.1177/1729881419896717] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Computer vision systems have proven useful in autonomous navigation applications, especially stereo vision systems for three-dimensional mapping of the environment. This article presents a novel camera calibration method to improve the accuracy of stereo vision systems for three-dimensional point localization. The proposed camera calibration method uses the least squares method to model the error caused by image digitalization and lens distortion. To obtain the coordinates of a particular three-dimensional point, a stereo vision system uses the information from two images taken by two different cameras. The system then locates the two-dimensional pixel coordinates of the three-dimensional point in both images and converts them into angles. With the obtained angles, the system finds the three-dimensional point coordinates through a triangulation process. The proposed camera calibration method is applied to the stereo vision system, and a comparative analysis between the real and calibrated three-dimensional data points is performed to validate the improvements. Moreover, the developed method is compared with three classical calibration methods to analyze its accuracy advantages over the tested methods.
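The pixel-to-angle conversion and ray intersection described above can be sketched in planar form; the pinhole model below is a simplified illustration, not the authors' calibrated error model:

```python
import numpy as np

def pixel_to_angle(u, cx, fx):
    """Viewing angle of pixel column u relative to the optical axis for a
    pinhole camera with principal point cx and focal length fx (pixels).
    These axis-relative angles must still be re-expressed relative to the
    camera baseline using each camera's orientation."""
    return np.arctan2(u - cx, fx)

def triangulate_planar(baseline, alpha, beta):
    """Intersect two rays in the plane: left camera at the origin, right
    camera at (baseline, 0); alpha and beta are ray angles measured from
    the baseline. Returns the (x, z) coordinates of the point."""
    x = baseline * np.tan(beta) / (np.tan(alpha) + np.tan(beta))
    return x, x * np.tan(alpha)
```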
Collapse
Affiliation(s)
| | | | | | | | - Wendy Flores-Fuentes
- Facultad de Ingeniería, Universidad Autónoma de Baja California, Baja California, México
| | - Raúl Rascón-Carmona
- Facultad de Ingeniería, Universidad Autónoma de Baja California, Baja California, México
| | - Lars Lindner
- Instituto de Ingeniería, Universidad Autónoma de Baja California, Baja California, México
| | - Oleg Sergiyenko
- Instituto de Ingeniería, Universidad Autónoma de Baja California, Baja California, México
| |
Collapse
|
24
|
Schneider C, Thompson S, Totz J, Song Y, Allam M, Sodergren MH, Desjardins AE, Barratt D, Ourselin S, Gurusamy K, Stoyanov D, Clarkson MJ, Hawkes DJ, Davidson BR. Comparison of manual and semi-automatic registration in augmented reality image-guided liver surgery: a clinical feasibility study. Surg Endosc 2020; 34:4702-4711. [PMID: 32780240 PMCID: PMC7524854 DOI: 10.1007/s00464-020-07807-x] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2020] [Accepted: 07/10/2020] [Indexed: 02/06/2023]
Abstract
BACKGROUND The laparoscopic approach to liver resection may reduce morbidity and hospital stay. However, uptake has been slow due to concerns about patient safety and oncological radicality. Image guidance systems (IGS) may improve patient safety by enabling 3D visualisation of critical intra- and extrahepatic structures. Current systems suffer from non-intuitive visualisation and a complicated setup process. A novel image guidance system (SmartLiver), offering augmented reality visualisation and semi-automatic registration, has been developed to address these issues. A clinical feasibility study evaluated the performance and usability of SmartLiver with either manual or semi-automatic registration. METHODS Intraoperative image guidance data were recorded and analysed in patients undergoing laparoscopic liver resection or cancer staging. Stereoscopic surface reconstruction and iterative closest point matching facilitated semi-automatic registration. The primary endpoint was defined as successful registration as determined by the operating surgeon. Secondary endpoints were system usability as assessed by a surgeon questionnaire and comparison of manual vs. semi-automatic registration accuracy. Since SmartLiver is still in development, no attempt was made to evaluate its impact on perioperative outcomes. RESULTS The primary endpoint was achieved in 16 out of 18 patients. Initially, semi-automatic registration failed because the IGS could not distinguish the liver surface from surrounding structures. Implementation of a deep learning algorithm enabled the IGS to overcome this issue and facilitate semi-automatic registration. Mean registration accuracy was 10.9 ± 4.2 mm (manual) vs. 13.9 ± 4.4 mm (semi-automatic) (mean difference −3 mm; p = 0.158). Surgeon feedback was positive about IGS handling and improved intraoperative orientation but also highlighted the need for a simpler setup process and better integration with laparoscopic ultrasound. CONCLUSION The technical feasibility of using SmartLiver intraoperatively has been demonstrated. With further improvements, semi-automatic registration may enhance the user friendliness and workflow of SmartLiver. Manual and semi-automatic registration accuracy were comparable, but evaluation on a larger patient cohort is required to confirm these findings.
Collapse
Affiliation(s)
- C. Schneider
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London, NW3 2QG UK
| | - S. Thompson
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
| | - J. Totz
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
| | - Y. Song
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
| | - M. Allam
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London, NW3 2QG UK
| | - M. H. Sodergren
- Centre for Medical Image Computing (CMIC), University College London, London, UK
| | - A. E. Desjardins
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
| | - D. Barratt
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
| | - S. Ourselin
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
| | - K. Gurusamy
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London, NW3 2QG UK; Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Department of Hepatopancreatobiliary and Liver Transplant Surgery, Royal Free Hospital, London, UK
| | - D. Stoyanov
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Computer Science, University College London, London, UK
| | - M. J. Clarkson
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
| | - D. J. Hawkes
- Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK; Department of Medical Physics and Bioengineering, University College London, London, UK
| | - B. R. Davidson
- Division of Surgery & Interventional Science, Royal Free Campus, University College London, Pond Street, London, NW3 2QG UK; Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK; Department of Hepatopancreatobiliary and Liver Transplant Surgery, Royal Free Hospital, London, UK
| |
Collapse
|
25
|
Chang F, Laguna B, Uribe J, Vu L, Zapala MA, Devincent C, Courtier J. Evaluating the Performance of Augmented Reality in Displaying Magnetic Resonance Imaging-Derived Three-Dimensional Holographic Models. J Med Imaging Radiat Sci 2019; 51:95-102. [PMID: 31862176 DOI: 10.1016/j.jmir.2019.10.006] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2019] [Revised: 08/29/2019] [Accepted: 10/23/2019] [Indexed: 10/25/2022]
Abstract
INTRODUCTION/BACKGROUND Establishing the accuracy and precision of magnetic resonance (MR)-derived augmented reality (AR) models is critical before clinical utilization, particularly in preoperative planning. We investigate the performance of an AR application in representing and displaying MR-derived three-dimensional holographic models. METHODS Thirty gold standard (GS) measurements were obtained on a magnetic resonance imaging (MRI) phantom (six interfiducial distances and five configurations). Four MRI pulse sequences were obtained for each of the five configurations, and distances were measured in a Picture Archiving and Communication System (PACS). Digital Imaging and Communications in Medicine files were translated into three-dimensional models and then loaded onto a novel AR platform. Measurements were also obtained with the software's AR caliper tool. Significant differences among the three groups (GS, PACS, and AR) were assessed with the Kruskal-Wallis test and nonsample median test. Accuracy analysis of GS vs. AR was performed. Precision (percent deviation) of the AR-based caliper tool was also assessed. RESULTS No statistically significant difference existed between AR and GS measurements (P = .6208). PACS demonstrated a mean squared error (MSE) of 0.29%. The AR digital caliper demonstrated an MSE of 0.3%. Three-dimensional T2 CUBE AR measurements using the platform's AR caliper tool demonstrated an MSE of 8.6%. Percent deviation of the AR software caliper tool ranged between 1.9% and 3.9%. DISCUSSION AR demonstrated a high degree of accuracy in comparison to GS, comparable to PACS-based measurements. The AR caliper tool demonstrated overall lower accuracy than the physical calipers, although the MSE remained <10% and the greatest measured difference from GS was <5 mm. The AR-based caliper demonstrated a high degree of precision. CONCLUSION There was no statistically significant difference between GS measurements and three-dimensional AR measurements in MRI phantom models.
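The three-group comparison reported here can be outlined with SciPy's Kruskal-Wallis test; the measurements below are invented for illustration:

```python
from scipy.stats import kruskal

# Hypothetical distance measurements (mm) of the same phantom targets
gs   = [20.1, 35.4, 50.2, 28.8, 42.0]   # gold standard (physical calipers)
pacs = [20.3, 35.1, 50.5, 28.6, 42.3]   # PACS measurements
ar   = [20.6, 34.9, 50.9, 28.2, 42.8]   # AR caliper tool

stat, p = kruskal(gs, pacs, ar)
print(f"H = {stat:.3f}, p = {p:.3f}")   # p > 0.05: no significant group difference
```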
Collapse
Affiliation(s)
- Frank Chang
- UCSF Department of Radiology and Biomedical Imaging, Masters of Science in Biomedical Imaging Program, San Francisco, California, USA
| | - Ben Laguna
- UCSF Department of Radiology and Biomedical Imaging, San Francisco, California, USA
| | - Jesus Uribe
- UCSF School of Medicine, San Francisco, California, USA
| | - Lan Vu
- Division of Pediatric Surgery, UCSF Department of Surgery, San Francisco, California, USA
| | - Matthew A Zapala
- UCSF Department of Radiology and Biomedical Imaging, San Francisco, California, USA
| | - Craig Devincent
- UCSF Department of Radiology and Biomedical Imaging, San Francisco, California, USA
| | - Jesse Courtier
- UCSF Department of Radiology and Biomedical Imaging, San Francisco, California, USA.
| |
Collapse
|
26
|
A Reflective Augmented Reality Integral Imaging 3D Display by Using a Mirror-Based Pinhole Array. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9153124] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
In this paper, we propose a reflective augmented reality (AR) display system based on integral imaging (II) using a mirror-based pinhole array (MBPA). The MBPA, obtained by punching pinholes in a mirror, functions as a three-dimensional (3D) imaging device as well as an image combiner. The pinhole array of the MBPA can realize a pinhole array-based II display, while the mirror of the MBPA can image real objects, combining the images of the real objects with the reconstructed 3D images. The structure of the proposed reflective AR display is very simple: only a projection system or a two-dimensional display screen is needed in combination with the MBPA. In our experiment, a 25 cm × 14 cm AR display was built, and a combination of a 3D virtual image and a real 3D object was presented by the proposed AR 3D display. The proposed device could realize an AR display of large size due to its compact form factor and low weight.
Collapse
|
27
|
Sato M, Koizumi M, Nakabayashi M, Inaba K, Takahashi Y, Nagashima N, Ki H, Itaoka N, Ueshima C, Nakata M, Hasumi Y. Computer vision for total laparoscopic hysterectomy. Asian J Endosc Surg 2019; 12:294-300. [PMID: 30066473 DOI: 10.1111/ases.12632] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/15/2018] [Revised: 06/06/2018] [Accepted: 06/24/2018] [Indexed: 11/29/2022]
Abstract
INTRODUCTION Laparoscopic surgery is widely performed in various surgical fields, but the technique takes time for surgeons to master. At the same time, visualizing the operative field through a camera has many advantages: with augmented reality and computer vision, we can visualize what we cannot see with our own eyes. We therefore investigated the possibilities and usefulness of computer vision in total laparoscopic hysterectomy. METHODS This study was approved by the Mitsui Memorial Hospital ethics committee. Patients who underwent total laparoscopic hysterectomy at Mitsui Memorial Hospital from January 2015 to December 2015 were enrolled. We evaluated 19 cases in which total laparoscopic hysterectomy was performed by the same operator and assistant. We used the Open Source Computer Vision Library (OpenCV) for computer vision analysis. The development platform used in this study was a computer running Mac OS X 10.11.3. RESULTS We created panoramic images by matching features with the AKAZE algorithm. Noise reduction methods improved the haziness caused by using energy devices. By extracting the color of the suture string, we succeeded in isolating the suture string in the videos. We could not achieve satisfactory results in detecting ureters, and we expect that creative ideas for ureter detection may arise from collaborations between surgeons and medical engineers. CONCLUSIONS Although this was a preliminary study, the results suggest the utility of computer vision in assisting laparoscopic surgery.
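The AKAZE feature-matching step maps directly onto OpenCV's API; a minimal sketch (the frame file names are hypothetical):

```python
import cv2

img1 = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)

akaze = cv2.AKAZE_create()
kp1, des1 = akaze.detectAndCompute(img1, None)
kp2, des2 = akaze.detectAndCompute(img2, None)

# AKAZE produces binary descriptors, so match with Hamming distance
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.8 * n.distance]  # Lowe ratio test
print(f"{len(good)} matches available for panorama stitching")
```

The filtered matches would then feed cv2.findHomography to warp consecutive frames into a panorama.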
Collapse
Affiliation(s)
- Masakazu Sato
- Department of Obstetrics and Gynecology, Mitsui Memorial Hospital, Tokyo, Japan; Department of Obstetrics and Gynecology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
| | - Minako Koizumi
- Department of Obstetrics and Gynecology, Mitsui Memorial Hospital, Tokyo, Japan
| | - Minoru Nakabayashi
- Department of Obstetrics and Gynecology, Mitsui Memorial Hospital, Tokyo, Japan
| | - Kei Inaba
- Department of Obstetrics and Gynecology, Mitsui Memorial Hospital, Tokyo, Japan
| | - Yu Takahashi
- Department of Obstetrics and Gynecology, Mitsui Memorial Hospital, Tokyo, Japan
| | - Natsuki Nagashima
- Department of Obstetrics and Gynecology, Mitsui Memorial Hospital, Tokyo, Japan
| | - Hiroshi Ki
- Department of Obstetrics and Gynecology, Mitsui Memorial Hospital, Tokyo, Japan
| | - Nao Itaoka
- Department of Obstetrics and Gynecology, Mitsui Memorial Hospital, Tokyo, Japan
| | - Chiharu Ueshima
- Department of Obstetrics and Gynecology, Mitsui Memorial Hospital, Tokyo, Japan
| | - Maki Nakata
- Department of Obstetrics and Gynecology, Mitsui Memorial Hospital, Tokyo, Japan
| | - Yoko Hasumi
- Department of Obstetrics and Gynecology, Mitsui Memorial Hospital, Tokyo, Japan
| |
Collapse
|
28
|
Comparison of Two Innovative Strategies Using Augmented Reality for Communication in Aesthetic Dentistry: A Pilot Study. JOURNAL OF HEALTHCARE ENGINEERING 2019; 2019:7019046. [PMID: 31073394 PMCID: PMC6470451 DOI: 10.1155/2019/7019046] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/22/2018] [Revised: 02/28/2019] [Accepted: 03/14/2019] [Indexed: 11/18/2022]
Abstract
During dental prosthetic rehabilitation, communication and conception are achieved using rigorous methodologies such as smile design protocols. The aim of the present pilot study was to compare two innovative strategies that use augmented reality for communication in dentistry. These strategies enable the user to instantly try a virtual smile proposition, either by taking a set of pictures from different points of view or by using the iPad as an enhanced mirror. Sixth-year dental students (n = 18; 13 women, 5 men; mean age 23.8 years) were included in this pilot study and were asked to answer a 5-question questionnaire on the user experience using a visual analog scale (VAS). Answers were converted into a numerical result ranging from 0 to 100 for statistical analysis. Participants did not report a difference between the two strategies in terms of handling of the device (p=0.45), quality of the reconstruction (p=0.73), or fluidity of the software (p=0.67). Although the participants' experience with the enhanced mirror was more often reported as immersive and more likely to be integrated into daily dental office practice, no significant difference was found (p=0.15 and p=0.07, respectively). Further investigations are required to evaluate time and cost savings in daily practice. Software accuracy is also a major point to investigate in order to advance clinical applications.
Collapse
|
29
|
Liu X, Kane TD, Shekhar R. GPS Laparoscopic Ultrasound: Embedding an Electromagnetic Sensor in a Laparoscopic Ultrasound Transducer. ULTRASOUND IN MEDICINE & BIOLOGY 2019; 45:989-997. [PMID: 30709691 DOI: 10.1016/j.ultrasmedbio.2018.11.014] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/10/2018] [Revised: 11/21/2018] [Accepted: 11/29/2018] [Indexed: 06/09/2023]
Abstract
Tracking the location and orientation of a laparoscopic ultrasound (LUS) transducer is a prerequisite in many surgical visualization and navigation applications. Electromagnetic (EM) tracking is a preferred method to track an LUS transducer with an articulating imaging tip. The conventional approach to integrating EM tracking with LUS is to attach an EM sensor on the outer surface of the imaging tip (external setup), which is not ideal for routine clinical use. In this work, we embedded an EM sensor inside a standard LUS transducer. We found that ultrasound image quality and the four-way articulation function of the transducer were not affected by this sensor integration. Furthermore, we found that the tracking accuracy of our integrated transducer was comparable to that of the external setup. An animal study conducted using the developed transducer suggests that an internally embedded EM sensor is a clinically more viable approach, and may be the future of tracking an articulating LUS transducer.
Collapse
Affiliation(s)
- Xinyang Liu
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Medical Center, Washington, DC, USA
| | - Timothy D Kane
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Medical Center, Washington, DC, USA
| | - Raj Shekhar
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Medical Center, Washington, DC, USA.
| |
Collapse
|
31
|
Xiao G, Bonmati E, Thompson S, Evans J, Hipwell J, Nikitichev D, Gurusamy K, Ourselin S, Hawkes DJ, Davidson B, Clarkson MJ. Electromagnetic tracking in image-guided laparoscopic surgery: Comparison with optical tracking and feasibility study of a combined laparoscope and laparoscopic ultrasound system. Med Phys 2018; 45:5094-5104. [PMID: 30247765 PMCID: PMC6282846 DOI: 10.1002/mp.13210] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2018] [Revised: 09/07/2018] [Accepted: 09/07/2018] [Indexed: 11/23/2022] Open
Abstract
PURPOSE In image-guided laparoscopy, optical tracking is commonly employed, but electromagnetic (EM) systems have been proposed in the literature. In this paper, we provide a thorough comparison of EM and optical tracking systems for use in image-guided laparoscopic surgery and a feasibility study of a combined, EM-tracked laparoscope and laparoscopic ultrasound (LUS) image guidance system. METHODS We first assess the tracking accuracy of a laparoscope with two optical trackers tracking retroreflective markers mounted on the shaft and an EM tracker with the sensor embedded at the proximal end, using a standard evaluation plate. We then use a stylus to test the precision of position measurement and accuracy of distance measurement of the trackers. Finally, we assess the accuracy of an image guidance system composed of an EM-tracked laparoscope and an EM-tracked LUS probe. RESULTS In the experiment using a standard evaluation plate, the two optical trackers show less jitter in position and orientation measurement than the EM tracker. Also, the optical trackers demonstrate better consistency of orientation measurement within the test volume. However, their accuracy of measuring relative positions decreases significantly with longer distances, whereas the EM tracker's performance is stable; at 50 mm distance, the RMS errors for the two optical trackers are 0.210 and 0.233 mm, respectively, and it is 0.214 mm for the EM tracker; at 250 mm distance, the RMS errors for the two optical trackers become 1.031 and 1.178 mm, respectively, while it is 0.367 mm for the EM tracker. In the experiment using the stylus, the two optical trackers have RMS errors of 1.278 and 1.555 mm in localizing the stylus tip, and it is 1.117 mm for the EM tracker. Our prototype of a combined, EM-tracked laparoscope and LUS system using representative calibration methods showed an RMS point localization error of 3.0 mm for the laparoscope and 1.3 mm for the LUS probe, the larger error of the former being predominantly due to the triangulation error when using a narrow-baseline stereo laparoscope. CONCLUSIONS The errors incurred by optical trackers, due to the lever-arm effect and variation in tracking accuracy in the depth direction, would make EM-tracked solutions preferable if the EM sensor is placed at the proximal end of the laparoscope.
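For reference, the distance-accuracy figures quoted above are root-mean-square errors over repeated measurements of known plate spacings; a minimal sketch with invented numbers:

```python
import numpy as np

def rms_error(measured_mm, true_mm):
    """RMS error between tracker-measured distances and ground truth."""
    e = np.asarray(measured_mm, dtype=float) - np.asarray(true_mm, dtype=float)
    return float(np.sqrt(np.mean(e ** 2)))

# Hypothetical repeated measurements of a 50 mm plate spacing
print(rms_error([50.2, 49.8, 50.3, 49.7], [50.0] * 4))  # ~0.25 mm
```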
Collapse
Affiliation(s)
- Guofang Xiao
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
| | - Ester Bonmati
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
| | - Stephen Thompson
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
| | - Joe Evans
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
| | - John Hipwell
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
| | - Daniil Nikitichev
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
| | - Kurinchi Gurusamy
- Division of Surgery and Interventional Science, University College London, London, UK
| | - Sébastien Ourselin
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
| | - David J. Hawkes
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
| | - Brian Davidson
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK
- Division of Surgery and Interventional Science, University College London, London, UK
| | - Matthew J. Clarkson
- Wellcome/EPSRC Center for Interventional and Surgical Sciences, University College London, London, UK
- Center for Medical Image Computing, University College London, London, UK
- Department of Medical Physics and Biomedical Engineering, University College London, London, UK
| |
Collapse
|
32
|
In vivo estimation of target registration errors during augmented reality laparoscopic surgery. Int J Comput Assist Radiol Surg 2018; 13:865-874. [PMID: 29663273 PMCID: PMC5973973 DOI: 10.1007/s11548-018-1761-3] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2018] [Accepted: 04/02/2018] [Indexed: 11/02/2022]
Abstract
PURPOSE Successful use of augmented reality for laparoscopic surgery requires that the surgeon has a thorough understanding of the likely accuracy of any overlay. Whilst the accuracy of such systems can be estimated in the laboratory, it is difficult to extend such methods to the in vivo clinical setting. Herein we describe a novel method that enables the surgeon to estimate in vivo errors during use. We show that the method enables quantitative evaluation of in vivo data gathered with the SmartLiver image guidance system. METHODS The SmartLiver system utilises an intuitive display to enable the surgeon to compare the positions of landmarks visible in both a projected model and in the live video stream. From this the surgeon can estimate the system accuracy when using the system to locate subsurface targets not visible in the live video. Visible landmarks may be either point or line features. We test the validity of the algorithm using an anatomically representative liver phantom, applying simulated perturbations to achieve clinically realistic overlay errors. We then apply the algorithm to in vivo data. RESULTS The phantom results show that using projected errors of surface features provides a reliable predictor of subsurface target registration error for a representative human liver shape. Applying the algorithm to in vivo data gathered with the SmartLiver image-guided surgery system shows that the system is capable of accuracies around 12 mm; however, achieving this reliably remains a significant challenge. CONCLUSION We present an in vivo quantitative evaluation of the SmartLiver image-guided surgery system, together with a validation of the evaluation algorithm. This is the first quantitative in vivo analysis of an augmented reality system for laparoscopic surgery.
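In outline, the method's raw input is the pixel discrepancy between projected model landmarks and the landmarks the surgeon identifies in the live video; a hedged OpenCV sketch of that measurement (not the SmartLiver code, which further maps such errors to subsurface TRE estimates):

```python
import cv2
import numpy as np

def overlay_errors_px(model_pts, rvec, tvec, K, dist, video_pts):
    """Per-landmark pixel error between model landmarks projected with the
    current registration (rvec, tvec, intrinsics K, distortion dist) and
    the matching landmarks picked by the surgeon in the video frame."""
    proj, _ = cv2.projectPoints(np.asarray(model_pts, dtype=float), rvec, tvec, K, dist)
    return np.linalg.norm(proj.reshape(-1, 2) - np.asarray(video_pts, dtype=float), axis=1)
```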
Collapse
|
33
|
Augmented reality technology for preoperative planning and intraoperative navigation during hepatobiliary surgery: A review of current methods. Hepatobiliary Pancreat Dis Int 2018; 17:101-112. [PMID: 29567047 DOI: 10.1016/j.hbpd.2018.02.002] [Citation(s) in RCA: 61] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/31/2017] [Accepted: 11/16/2017] [Indexed: 02/05/2023]
Abstract
BACKGROUND Augmented reality (AR) technology is used to reconstruct three-dimensional (3D) images of hepatic and biliary structures from computed tomography and magnetic resonance imaging data, and to superimpose the virtual images onto a view of the surgical field. In liver surgery, these superimposed virtual images help the surgeon visualize intrahepatic structures and, therefore, operate precisely and improve clinical outcomes. DATA SOURCES The keywords "augmented reality", "liver", "laparoscopic" and "hepatectomy" were used for searching publications in the PubMed database. The primary source of literature was peer-reviewed journals up to December 2016. Additional articles were identified by manual search of references found in the key articles. RESULTS In general, AR technology mainly includes 3D reconstruction, display, registration, and tracking techniques, and has recently been gradually adopted for liver surgeries, including laparoscopy and laparotomy, with video-based AR-assisted laparoscopic resection as the main technical application. By applying AR technology, blood vessels and tumor structures in the liver can be displayed during surgery, which permits precise navigation during complex surgical procedures. Liver deformation and registration errors during surgery were the main factors limiting the application of AR technology. CONCLUSIONS With recent advances, AR technologies have the potential to improve hepatobiliary surgical procedures. However, additional clinical studies will be required to evaluate AR as a tool for reducing postoperative morbidity and mortality and for the improvement of long-term clinical outcomes. Future research is needed in the fusion of multiple imaging modalities, improving biomechanical liver modeling, and enhancing image data processing and tracking technologies to increase the accuracy of current AR methods.
Collapse
|
34
|
Luo X, Mori K, Peters TM. Advanced Endoscopic Navigation: Surgical Big Data, Methodology, and Applications. Annu Rev Biomed Eng 2018; 20:221-251. [PMID: 29505729 DOI: 10.1146/annurev-bioeng-062117-120917] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022]
Abstract
Interventional endoscopy (e.g., bronchoscopy, colonoscopy, laparoscopy, cystoscopy) is a widely performed procedure that involves either diagnosis of suspicious lesions or guidance for minimally invasive surgery in a variety of organs within the body cavity. Endoscopy may also be used to guide the introduction of certain items (e.g., stents) into the body. Endoscopic navigation systems seek to integrate big data with multimodal information (e.g., computed tomography, magnetic resonance images, endoscopic video sequences, ultrasound images, external trackers) relative to the patient's anatomy, control the movement of medical endoscopes and surgical tools, and guide the surgeon's actions during endoscopic interventions. Nevertheless, it remains challenging to realize the next generation of context-aware navigated endoscopy. This review presents a broad survey of various aspects of endoscopic navigation, particularly with respect to the development of endoscopic navigation techniques. First, we investigate big data with multimodal information involved in endoscopic navigation. Next, we focus on numerous methodologies used for endoscopic navigation. We then review different endoscopic procedures in clinical applications. Finally, we discuss novel techniques and promising directions for the development of endoscopic navigation.
Collapse
Affiliation(s)
- Xiongbiao Luo
- Department of Computer Science, Fujian Key Laboratory of Computing and Sensing for Smart City, Xiamen University, Xiamen 361005, China;
| | - Kensaku Mori
- Department of Intelligent Systems, Graduate School of Informatics, Nagoya University, Nagoya 464-8601, Japan;
| | - Terry M Peters
- Robarts Research Institute, Western University, London, Ontario N6A 3K7, Canada;
| |
Collapse
|
35
|
Siddaiah-Subramanya M, Tiang KW, Nyandowe M. A New Era of Minimally Invasive Surgery: Progress and Development of Major Technical Innovations in General Surgery Over the Last Decade. Surg J (N Y) 2017; 3:e163-e166. [PMID: 29134202 PMCID: PMC5680046 DOI: 10.1055/s-0037-1608651] [Citation(s) in RCA: 30] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2017] [Accepted: 09/26/2017] [Indexed: 12/15/2022] Open
Abstract
Minimally invasive surgery (MIS) continues to play an important role in general surgery as an alternative to traditional open surgery as well as to traditional laparoscopic techniques. Since the 1980s, technological advancement and innovation have seen surgical techniques in MIS grow rapidly, as it is viewed as more desirable. MIS, which includes natural orifice transluminal endoscopic surgery (NOTES) and single-incision laparoscopic surgery (SILS), is less invasive and has better cosmetic results. The technological growth and adoption of NOTES and SILS by clinicians in the last decade have, however, not been uniform. We look at the differences in new developments and advancements in the different techniques over the last 10 years. We also aim to explain these differences as well as their implications for general surgery in the future.
Collapse
Affiliation(s)
- Manjunath Siddaiah-Subramanya
- Department of Surgery, Logan Hospital, Brisbane, Queensland, Australia; Department of Medicine, Griffith University, Queensland, Australia; Department of Medicine, University of Queensland, Queensland, Australia
| | - Kor Woi Tiang
- Department of Surgery, Logan Hospital, Brisbane, Queensland, Australia; Department of Medicine, Griffith University, Queensland, Australia; Department of Medicine, University of Queensland, Queensland, Australia
| | - Masimba Nyandowe
- Department of Surgery, Townsville Hospital, Townsville, Queensland, Australia
| |
Collapse
|
36
|
Recent Development of Augmented Reality in Surgery: A Review. JOURNAL OF HEALTHCARE ENGINEERING 2017; 2017:4574172. [PMID: 29065604 PMCID: PMC5585624 DOI: 10.1155/2017/4574172] [Citation(s) in RCA: 156] [Impact Index Per Article: 22.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/12/2017] [Accepted: 07/03/2017] [Indexed: 12/11/2022]
Abstract
Introduction The development of augmented reality devices allows physicians to incorporate data visualization into diagnostic and treatment procedures to improve work efficiency, safety, and cost and to enhance surgical training. However, awareness of the possibilities of augmented reality is generally low. This review evaluates whether augmented reality can presently improve the results of surgical procedures. Methods We performed a review of available literature dating from 2010 to November 2016 by searching PubMed and Scopus using the terms “augmented reality” and “surgery.” Results The initial search yielded 808 studies. After removing duplicates and including only journal articles, a total of 417 studies were identified. By reading the abstracts, 91 relevant studies were chosen for inclusion; 11 further references were gathered by cross-referencing. A total of 102 studies were included in this review. Conclusions The present literature suggests an increasing interest of surgeons in employing augmented reality in surgery, leading to improved safety and efficacy of surgical procedures. Many studies showed that the performance of newly devised augmented reality systems is comparable to traditional techniques. However, several problems need to be addressed before augmented reality is implemented into routine practice.
Collapse
|
37
|
Liu X, Rice CE, Shekhar R. Fast calibration of electromagnetically tracked oblique-viewing rigid endoscopes. Int J Comput Assist Radiol Surg 2017. [PMID: 28623479 DOI: 10.1007/s11548-017-1623-4] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
PURPOSE The oblique-viewing (i.e., angled) rigid endoscope is a commonly used tool in conventional endoscopic surgeries. The relative rotation between its two movable parts, the telescope and the camera head, creates a rotation offset between an actual object and its projection in the camera image. A calibration method tailored to compensate for this offset is needed. METHODS We developed a fast calibration method for oblique-viewing rigid endoscopes suitable for clinical use. In contrast to prior approaches based on optical tracking, we used electromagnetic (EM) tracking as the external tracking hardware to improve compactness and practicality. Two EM sensors were mounted on the telescope and the camera head, respectively, with considerations to minimize EM tracking errors. Single-image calibration was incorporated into the method, and a sterilizable plate, laser-marked with the calibration pattern, was also developed. Furthermore, we proposed a general algorithm to estimate the rotation center in the camera image. Formulas for updating the camera matrix in terms of clockwise and counterclockwise rotations were also developed. RESULTS The proposed calibration method was validated using a conventional [Formula: see text], 5-mm laparoscope. Freehand calibrations were performed using the proposed method, and the calibration time averaged 2 min and 8 s. The calibration accuracy was evaluated in a simulated clinical setting with several surgical tools present in the magnetic field of the EM tracker. The root-mean-square re-projection error averaged 4.9 pixels (range 2.4-8.5 pixels, with image resolution of [Formula: see text]) for rotation angles ranging from [Formula: see text] to [Formula: see text]. CONCLUSIONS We developed a method for fast and accurate calibration of oblique-viewing rigid endoscopes. The method was also designed to be performed in the operating room and will therefore support clinical translation of many emerging endoscopic computer-assisted surgical systems.
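At its core, the compensated offset is a 2D rotation of projected image points about the estimated rotation center; a minimal sketch under that interpretation (the paper's full camera-matrix update formulas are not reproduced here):

```python
import numpy as np

def compensate_rotation(pts_px, center_px, theta_rad):
    """Rotate image points about the rotation center by theta to undo the
    offset introduced when the camera head rotates relative to the telescope."""
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    R = np.array([[c, -s], [s, c]])
    p = np.asarray(pts_px, dtype=float) - center_px
    return p @ R.T + center_px
```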
Collapse
Affiliation(s)
- Xinyang Liu
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, 111 Michigan Avenue NW, Washington, DC, 20010, USA
| | - Christina E Rice
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, 111 Michigan Avenue NW, Washington, DC, 20010, USA; Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, NJ, 08544, USA
| | - Raj Shekhar
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, 111 Michigan Avenue NW, Washington, DC, 20010, USA.
| |
Collapse
|
38
|
Morgan I, Jayarathne U, Rankin A, Peters TM, Chen ECS. Hand-eye calibration for surgical cameras: a Procrustean Perspective-n-Point solution. Int J Comput Assist Radiol Surg 2017; 12:1141-1149. [PMID: 28425030 DOI: 10.1007/s11548-017-1590-9] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2017] [Accepted: 04/10/2017] [Indexed: 11/30/2022]
Abstract
PURPOSE Surgical cameras are prevalent in modern operating theatres often used as surrogates for direct vision. A surgical navigational system is a useful adjunct, but requires an accurate "hand-eye" calibration to determine the geometrical relationship between the surgical camera and tracking markers. METHODS Using a tracked ball-tip stylus, we formulated hand-eye calibration as a Perspective-n-Point problem, which can be solved efficiently and accurately using as few as 15 measurements. RESULTS The proposed hand-eye calibration algorithm was applied to three types of camera and validated against five other widely used methods. Using projection error as the accuracy metric, our proposed algorithm compared favourably with existing methods. CONCLUSION We present a fully automated hand-eye calibration technique, based on Procrustean point-to-line registration, which provides superior results for calibrating surgical cameras when compared to existing methods.
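Casting hand-eye calibration as Perspective-n-Point means that 3D stylus-tip positions expressed in the tracker frame, together with their 2D projections in the camera image, directly yield the camera-from-tracker pose. A self-contained synthetic OpenCV sketch (all values hypothetical; the paper's Procrustean point-to-line refinement is omitted):

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])

# 15 simulated ball-tip positions in the tracker frame (mm), as few as the paper uses
pts_tracker = rng.uniform([-50, -50, 150], [50, 50, 250], (15, 3))

# Ground-truth camera-from-tracker pose, used only to synthesize the 2D data
R_true, _ = cv2.Rodrigues(np.array([0.1, -0.2, 0.05]))
t_true = np.array([5.0, -3.0, 20.0])
cam = pts_tracker @ R_true.T + t_true
uv = cam @ K.T
uv = uv[:, :2] / uv[:, 2:]

ok, rvec, tvec = cv2.solvePnP(pts_tracker, uv, K, None)
print(ok, tvec.ravel())  # recovers t_true in this noise-free simulation
```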
Collapse
Affiliation(s)
| | | | - Adam Rankin
- Robarts Research Institute, Western University, London, ON, Canada
| | - Terry M Peters
- Robarts Research Institute, Western University, London, ON, Canada
| | - Elvis C S Chen
- Robarts Research Institute, Western University, London, ON, Canada.
| |
Collapse
|
39
|
Liu X, Kang S, Plishker W, Zaki G, Kane TD, Shekhar R. Laparoscopic stereoscopic augmented reality: toward a clinically viable electromagnetic tracking solution. J Med Imaging (Bellingham) 2016; 3:045001. [PMID: 27752522 DOI: 10.1117/1.jmi.3.4.045001] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2016] [Accepted: 09/08/2016] [Indexed: 11/14/2022] Open
Abstract
The purpose of this work was to develop a clinically viable laparoscopic augmented reality (AR) system employing stereoscopic (3-D) vision, laparoscopic ultrasound (LUS), and electromagnetic (EM) tracking to achieve image registration. We investigated clinically feasible solutions to mount the EM sensors on the 3-D laparoscope and the LUS probe. This led to a solution of integrating an externally attached EM sensor near the imaging tip of the LUS probe, only slightly increasing the overall diameter of the probe. Likewise, a solution for mounting an EM sensor on the handle of the 3-D laparoscope was proposed. The spatial image-to-video registration accuracy of the AR system was measured to be [Formula: see text] and [Formula: see text] for the left- and right-eye channels, respectively. The AR system contributed 58-ms latency to stereoscopic visualization. We further performed an animal experiment to demonstrate the use of the system as a visualization approach for laparoscopic procedures. In conclusion, we have developed an integrated, compact, and EM tracking-based stereoscopic AR visualization system, which has the potential for clinical use. The system has been demonstrated to achieve clinically acceptable accuracy and latency. This work is a critical step toward clinical translation of AR visualization for laparoscopic procedures.
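The image-to-video registration in such a system is a composition of rigid transforms along the chain ultrasound image → LUS sensor → EM field generator → laparoscope sensor → camera; a small bookkeeping sketch (frame names are assumptions, not the authors' software):

```python
import numpy as np

def invert_rigid(T):
    """Invert a 4x4 rigid transform without a general matrix inverse."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3], Ti[:3, 3] = R.T, -R.T @ t
    return Ti

def us_to_camera(cam_T_scope, em_T_scope, em_T_lus, lus_T_img):
    """Compose the AR chain mapping an ultrasound image point into the
    laparoscope camera frame, ready for projection onto the video."""
    return cam_T_scope @ invert_rigid(em_T_scope) @ em_T_lus @ lus_T_img
```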
Collapse
Affiliation(s)
- Xinyang Liu
- Sheikh Zayed Institute for Pediatric Surgical Innovation , Children's National Health System, 111 Michigan Avenue NW, Washington, DC 20010, United States
| | - Sukryool Kang
- Sheikh Zayed Institute for Pediatric Surgical Innovation , Children's National Health System, 111 Michigan Avenue NW, Washington, DC 20010, United States
| | - William Plishker
- IGI Technologies, Inc. , 387 Technology Drive #3110D, College Park, Maryland 20742, United States
| | - George Zaki
- IGI Technologies, Inc. , 387 Technology Drive #3110D, College Park, Maryland 20742, United States
| | - Timothy D Kane
- Sheikh Zayed Institute for Pediatric Surgical Innovation , Children's National Health System, 111 Michigan Avenue NW, Washington, DC 20010, United States
| | - Raj Shekhar
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, 111 Michigan Avenue NW, Washington, DC 20010, United States; IGI Technologies, Inc., 387 Technology Drive #3110D, College Park, Maryland 20742, United States
| |
Collapse
|
40
|
Ntourakis D, Memeo R, Soler L, Marescaux J, Mutter D, Pessaux P. Augmented Reality Guidance for the Resection of Missing Colorectal Liver Metastases: An Initial Experience. World J Surg 2016; 40:419-26. [PMID: 26316112 DOI: 10.1007/s00268-015-3229-8] [Citation(s) in RCA: 58] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
BACKGROUND Modern chemotherapy achieves such shrinking of colorectal cancer liver metastases (CRLM) that they may disappear from radiological imaging. Disappearing CRLM rarely represent a complete pathological remission and carry an important risk of recurrence. Augmented reality (AR) consists of the fusion of real-time patient images with a computer-generated 3D virtual patient model created from pre-operative medical imaging. The aim of this prospective pilot study was to investigate the potential of AR navigation as a tool to help locate and surgically resect missing CRLM. METHODS A 3D virtual anatomical model was created from thoracoabdominal CT scans using customary software (VR RENDER(®), IRCAD). The virtual model was superimposed onto the operative field using an exoscope (VITOM(®), Karl Storz, Tuttlingen, Germany). Virtual and real images were manually registered in real time using a video mixer, based on external anatomical landmarks, with an estimated accuracy of 5 mm. This modality was tested in three patients, with four missing CRLM 12 to 24 mm in size, undergoing laparotomy after receiving pre-operative oxaliplatin-based chemotherapy. RESULTS AR display and fine registration were performed within 6 min. AR helped detect all four missing CRLM and guided their resection. In all cases the planned safety margin of 1 cm was clear, and resections were confirmed to be R0 by pathology. There was no major postoperative morbidity or mortality. No local recurrence occurred in the follow-up period of 6-22 months. CONCLUSIONS This initial experience suggests that AR may be a helpful navigation tool for the resection of missing CRLM.
Collapse
Affiliation(s)
- Dimitrios Ntourakis
- IRCAD-IHU, University of Strasbourg, 1 place de l'Hôpital, 67091, Strasbourg, France.
| | - Ricardo Memeo
- IRCAD-IHU, University of Strasbourg, 1 place de l'Hôpital, 67091, Strasbourg, France
| | - Luc Soler
- IRCAD-IHU, University of Strasbourg, 1 place de l'Hôpital, 67091, Strasbourg, France
| | - Jacques Marescaux
- IRCAD-IHU, University of Strasbourg, 1 place de l'Hôpital, 67091, Strasbourg, France
| | - Didier Mutter
- IRCAD-IHU, University of Strasbourg, 1 place de l'Hôpital, 67091, Strasbourg, France
| | - Patrick Pessaux
- IRCAD-IHU, University of Strasbourg, 1 place de l'Hôpital, 67091, Strasbourg, France.
| |
Collapse
|
42
|
Bruckheimer E, Rotschild C, Dagan T, Amir G, Kaufman A, Gelman S, Birk E. Computer-generated real-time digital holography: first time use in clinical medical imaging. Eur Heart J Cardiovasc Imaging 2016; 17:845-9. [DOI: 10.1093/ehjci/jew087] [Citation(s) in RCA: 66] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/01/2016] [Accepted: 04/03/2016] [Indexed: 11/12/2022] Open
|
43
|
On-demand calibration and evaluation for electromagnetically tracked laparoscope in augmented reality visualization. Int J Comput Assist Radiol Surg 2016; 11:1163-71. [PMID: 27250853 DOI: 10.1007/s11548-016-1406-3] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2016] [Accepted: 03/24/2016] [Indexed: 10/21/2022]
Abstract
PURPOSE Common camera calibration methods employed in current laparoscopic augmented reality systems require the acquisition of multiple images of an entire checkerboard pattern from various poses. This lengthy procedure prevents laparoscope calibration from being performed in the operating room (OR). The purpose of this work was to develop a fast calibration method for electromagnetically (EM) tracked laparoscopes, such that the calibration can be performed in the OR on demand. METHODS We designed a mechanical tracking mount to uniquely and snugly position an EM sensor at an appropriate location on a conventional laparoscope. A tool named fCalib was developed to calibrate intrinsic camera parameters, distortion coefficients, and extrinsic parameters (the transformation between the scope lens coordinate system and the EM sensor coordinate system) using a single image showing an arbitrary portion of a special target pattern. For quick evaluation of calibration results in the OR, we integrated a tube phantom with the fCalib prototype and overlaid a virtual representation of the tube on the live video scene. RESULTS We compared spatial target registration error between the common OpenCV method and the fCalib method in a laboratory setting. In addition, we compared the calibration re-projection error between the EM tracking-based fCalib and the optical tracking-based fCalib in a clinical setting. Our results suggest that the proposed method is comparable to the OpenCV method. However, changing the environment, e.g., inserting or removing surgical tools, might affect re-projection accuracy for the EM tracking-based approach. Computational time of the fCalib method averaged 14.0 s (range 3.5-22.7 s). CONCLUSIONS We developed and validated a prototype for fast calibration and evaluation of EM-tracked conventional (forward-viewing) laparoscopes. The calibration method achieved acceptable accuracy and was relatively fast and easy to perform in the OR on demand.
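fCalib itself is not publicly available. As a point of reference, the sketch below shows the conventional multi-image OpenCV checkerboard calibration the authors compare against; its need for many full views of the pattern is precisely what makes it impractical in the OR. The pattern geometry, square size, and image folder are assumptions for illustration.

```python
import glob

import cv2
import numpy as np

pattern = (9, 6)            # inner-corner count of the checkerboard (assumed)
square = 5.0                # checkerboard square size in mm (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in glob.glob("calib_images/*.png"):   # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

assert img_pts, "no usable checkerboard views found"
# rms is the mean re-projection error in pixels; K holds the intrinsic
# matrix and dist the distortion coefficients discussed above.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print(f"re-projection RMS: {rms:.3f} px")
```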
|
44
|
Thompson S, Stoyanov D, Schneider C, Gurusamy K, Ourselin S, Davidson B, Hawkes D, Clarkson MJ. Hand-eye calibration for rigid laparoscopes using an invariant point. Int J Comput Assist Radiol Surg 2016; 11:1071-80. [PMID: 26995597 PMCID: PMC4893361 DOI: 10.1007/s11548-016-1364-9] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2016] [Accepted: 02/24/2016] [Indexed: 01/22/2023]
Abstract
PURPOSE Laparoscopic liver resection has significant advantages over open surgery due to less patient trauma and faster recovery times, yet it can be difficult because of the restricted field of view and lack of haptic feedback. Image guidance provides a potential solution, but one current challenge is accurate "hand-eye" calibration, which determines the position and orientation of the laparoscope camera relative to the tracking markers. METHODS In this paper, we propose a simple and clinically feasible calibration method based on a single invariant point. The method requires no additional hardware, can be constructed by theatre staff during surgical setup, requires minimal image processing, and can be visualised in real time. Real-time visualisation allows the surgical team to assess the calibration accuracy before use in surgery. In addition, in the laboratory, we have developed a laparoscope with an electromagnetic tracking sensor attached to the camera end and an optical tracking marker attached to the distal end, enabling a comparison of tracking performance. RESULTS We have evaluated our method in the laboratory and compared it to two widely used methods, "Tsai's method" and "direct" calibration. The new method is of comparable accuracy to existing methods, with an RMS projected error due to calibration of 1.95 mm for optical tracking and 0.85 mm for EM tracking, versus 4.13 mm and 1.00 mm, respectively, for the existing methods. The new method has also been shown to be workable under sterile conditions in the operating room. CONCLUSION We have proposed a new method of hand-eye calibration based on a single invariant point. Initial experience has shown that the method provides visual feedback, satisfactory accuracy, and can be performed during surgery. We also show that an EM sensor placed near the camera would provide significantly improved image overlay accuracy.
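The invariant-point method proposed here has no off-the-shelf implementation, but the baseline it is compared against, Tsai's method, ships with OpenCV as cv2.calibrateHandEye. The sketch below exercises that baseline on synthetic tracker/camera pose pairs; every pose is fabricated for illustration, and the variable names (marker, pattern) are assumptions mapped onto OpenCV's gripper/target terminology.

```python
import cv2
import numpy as np

rng = np.random.default_rng(42)

def rand_pose():
    """Random rigid transform as a 4x4 homogeneous matrix."""
    R, _ = cv2.Rodrigues(rng.uniform(-0.5, 0.5, 3))
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, rng.uniform(-100, 100, 3)
    return T

def inv(T):
    """Invert a rigid transform without a general matrix inverse."""
    Ti = np.eye(4)
    Ti[:3, :3] = T[:3, :3].T
    Ti[:3, 3] = -T[:3, :3].T @ T[:3, 3]
    return Ti

X = rand_pose()        # ground-truth hand-eye: camera w.r.t. tracking marker
W = rand_pose()        # fixed calibration pattern pose in the tracker frame

R_marker, t_marker, R_cam, t_cam = [], [], [], []
for _ in range(10):    # ten stations of the laparoscope
    G = rand_pose()                    # marker w.r.t. tracker ("gripper2base")
    C = inv(X) @ inv(G) @ W            # pattern w.r.t. camera ("target2cam")
    R_marker.append(G[:3, :3]); t_marker.append(G[:3, 3])
    R_cam.append(C[:3, :3]);    t_cam.append(C[:3, 3])

R_he, t_he = cv2.calibrateHandEye(R_marker, t_marker, R_cam, t_cam,
                                  method=cv2.CALIB_HAND_EYE_TSAI)
print("rotation recovered:", np.allclose(R_he, X[:3, :3], atol=1e-4))
print("translation recovered:", np.allclose(t_he.ravel(), X[:3, 3], atol=1e-4))
```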
Affiliation(s)
- Stephen Thompson, Danail Stoyanov, Sébastien Ourselin, David Hawkes, Matthew J Clarkson: Centre for Medical Image Computing, Front Engineering Building, University College London, Malet Place, London, UK
- Crispin Schneider, Kurinchi Gurusamy, Brian Davidson: Division of Surgery, UCL Medical School, Royal Free Hospital, 9th Floor, Rowland Hill Street, Hampstead Campus, London, UK
|
45
|
Markman A, Shen X, Hua H, Javidi B. Augmented reality three-dimensional object visualization and recognition with axially distributed sensing. Opt Lett 2016; 41:297-300. [PMID: 26766698 DOI: 10.1364/ol.41.000297] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
An augmented reality (AR) smartglass display combines real-world scenes with digital information, a capability that has enabled the rapid growth of AR-based applications. We present an AR-based approach for three-dimensional (3D) optical visualization and object recognition using axially distributed sensing (ADS). For object recognition, the 3D scene is reconstructed, and feature extraction is performed by computing the histogram of oriented gradients (HOG) of a sliding window. A support vector machine (SVM) is then used for classification. Once an object has been identified, the 3D reconstructed scene with the detected object is optically displayed in the smartglasses, allowing the user to see the object, remove partial occlusions of it, and obtain critical information such as its 3D coordinates, which is not possible with conventional AR devices. To the best of our knowledge, this is the first report combining axially distributed sensing with 3D object visualization and recognition for augmented reality applications. The proposed approach can benefit many applications, including medical, military, transportation, and manufacturing.
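The recognition stage described here, HOG features over a sliding window fed to an SVM, is straightforward to prototype with scikit-image and scikit-learn. The sketch below is a plain 2D stand-in (the paper operates on reconstructed ADS scenes); the window size, step, and random training data are illustrative assumptions, not anything from the paper.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

WIN, STEP = 64, 16          # window size and stride in pixels (assumed)

def hog_vec(patch):
    """HOG descriptor for one window."""
    return hog(patch, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2))

# Train on labelled patches (1 = object, 0 = background); random noise
# stands in for real training crops here.
rng = np.random.default_rng(1)
X_train = [hog_vec(rng.random((WIN, WIN))) for _ in range(40)]
y_train = rng.integers(0, 2, 40)
clf = LinearSVC().fit(X_train, y_train)

def detect(image):
    """Slide a WIN x WIN window over the image and return the top-left
    corners of windows the SVM classifies as the object."""
    hits = []
    H, W = image.shape
    for y in range(0, H - WIN + 1, STEP):
        for x in range(0, W - WIN + 1, STEP):
            if clf.predict([hog_vec(image[y:y + WIN, x:x + WIN])])[0] == 1:
                hits.append((x, y))
    return hits

print(detect(rng.random((128, 128))))
```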
|
46
|
Precision insertion of percutaneous sacroiliac screws using a novel augmented reality-based navigation system: a pilot study. Int Orthop 2015; 40:1941-7. [PMID: 26572882 DOI: 10.1007/s00264-015-3028-8] [Citation(s) in RCA: 54] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/14/2015] [Accepted: 10/21/2015] [Indexed: 10/22/2022]
Abstract
PURPOSE Augmented reality (AR) enables superimposition of virtual images onto the real world. The aim of this study is to present a novel AR-based navigation system for sacroiliac screw insertion and to evaluate its feasibility and accuracy in cadaveric experiments. METHODS Six cadavers with intact pelvises were employed in our study. They were CT scanned and the pelvis and vessels were segmented into 3D models. The ideal trajectory of the sacroiliac screw was planned and represented visually as a cylinder. For the intervention, the head mounted display created a real-time AR environment by superimposing the virtual 3D models onto the surgeon's field of view. The screws were drilled into the pelvis as guided by the trajectory represented by the cylinder. Following the intervention, a repeat CT scan was performed to evaluate the accuracy of the system, by assessing the screw positions and the deviations between the planned trajectories and inserted screws. RESULTS Post-operative CT images showed that all 12 screws were correctly placed with no perforation. The mean deviation between the planned trajectories and the inserted screws was 2.7 ± 1.2 mm at the bony entry point, 3.7 ± 1.1 mm at the screw tip, and the mean angular deviation between the two trajectories was 2.9° ± 1.1°. The mean deviation at the nerve root tunnels region on the sagittal plane was 3.6 ± 1.0 mm. CONCLUSIONS This study suggests an intuitive approach for guiding screw placement by way of AR-based navigation. This approach was feasible and accurate. It may serve as a valuable tool for assisting percutaneous sacroiliac screw insertion in live surgery.
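The accuracy metrics reported above (deviation at the bony entry point, deviation at the screw tip, and the angle between planned and achieved trajectories) reduce to simple vector arithmetic. A worked example with made-up coordinates, not values from the study:

```python
import numpy as np

# Planned and achieved screw trajectories as entry/tip points in mm
# (illustrative placeholders).
planned_entry, planned_tip = np.array([0.0, 0, 0]), np.array([0.0, 0, 70])
actual_entry, actual_tip = np.array([2.0, 1, 0.5]), np.array([3.0, 1.5, 69])

entry_dev = np.linalg.norm(actual_entry - planned_entry)   # entry-point deviation
tip_dev = np.linalg.norm(actual_tip - planned_tip)         # tip deviation

# Angular deviation between the two trajectory direction vectors.
u = planned_tip - planned_entry
v = actual_tip - actual_entry
cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

print(f"entry {entry_dev:.1f} mm, tip {tip_dev:.1f} mm, angle {angle:.1f} deg")
```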
|
47
|
Suenaga H, Tran HH, Liao H, Masamune K, Dohi T, Hoshi K, Takato T. Vision-based markerless registration using stereo vision and an augmented reality surgical navigation system: a pilot study. BMC Med Imaging 2015; 15:51. [PMID: 26525142 PMCID: PMC4630916 DOI: 10.1186/s12880-015-0089-5] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2015] [Accepted: 10/09/2015] [Indexed: 11/15/2022] Open
Abstract
Background This study evaluated the use of an augmented reality (AR) navigation system that provides markerless registration using stereo vision in oral and maxillofacial surgery. Method A feasibility study was performed on a volunteer subject, with a stereo camera used for tracking and markerless registration. Computed tomography (CT) data obtained from the volunteer were used to create an integral videography (IV) image and a 3-dimensional rapid prototype model of the jaw. The overlay of the subject's anatomic site and its 3D IV image was displayed in real space using a 3D AR display. Extraction of characteristic points and teeth matching were performed using parallax images from the two stereo cameras for patient-image registration. Results Accurate registration of the volunteer's anatomy with the IV stereoscopic images via image matching was achieved by the fully automated markerless system, which recognized the incisal edges of the teeth and captured their position with an average target registration error of < 1 mm. These 3D CT images were then displayed in real space with high accuracy using AR. Even when the viewing position was changed, the 3D images could be observed as if they were floating in real space, without special glasses. Conclusion Teeth were successfully used for registration via 3D image (contour) matching. Without references or fiducial markers, the system displayed 3D CT images in real space with high accuracy, providing real-time markerless registration and 3D image matching via stereo vision, which, combined with AR, could have significant clinical applications.
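A core step above is recovering 3D feature points (the tooth edges) from a calibrated stereo pair. The sketch below shows the standard triangulation this kind of system relies on, using cv2.triangulatePoints with an idealised two-camera rig; the intrinsics, baseline, and points are placeholders, not the paper's setup. The subsequent alignment to the CT model is the same rigid landmark registration sketched earlier in this list.

```python
import cv2
import numpy as np

# Idealised rectified rig: left camera at the origin, right camera 50 mm
# along x, identical intrinsics K (all values assumed for illustration).
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-50.0], [0], [0]])])

# Three hypothetical 3D feature points (3xN, mm), projected into each view.
X_true = np.array([[0.0, 10, 400], [5, -8, 420], [-12, 3, 390]]).T

def project(P, X):
    x = P @ np.vstack([X, np.ones((1, X.shape[1]))])
    return x[:2] / x[2]            # 2xN pixel coordinates

pts1, pts2 = project(P1, X_true), project(P2, X_true)
Xh = cv2.triangulatePoints(P1, P2, pts1, pts2)     # 4xN homogeneous
X_rec = Xh[:3] / Xh[3]
print("3D points recovered:", np.allclose(X_rec, X_true, atol=1e-6))
```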
Affiliation(s)
- Hideyuki Suenaga, Kazuto Hoshi, Tsuyoshi Takato: Department of Oral-Maxillofacial Surgery, Dentistry and Orthodontics, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
- Huy Hoang Tran, Ken Masamune: Department of Mechano-Informatics, Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Hongen Liao: Department of Bioengineering, Graduate School of Engineering, The University of Tokyo, Tokyo, Japan; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Ken Masamune (additional affiliation): Faculty of Advanced Technology and Surgery, Institute of Advanced Biomedical Engineering and Science, Tokyo Women's Medical University, Tokyo, Japan
- Takeyoshi Dohi: Department of Mechanical Engineering, School of Engineering, Tokyo Denki University, Tokyo, Japan
|
48
|
Luo B, Ge S. Augmented reality for material processing within shielded radioactive environment. In: 2015 8th International Congress on Image and Signal Processing (CISP); 2015. [DOI: 10.1109/cisp.2015.7407856] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
|
49
|
Mahmud N, Cohen J, Tsourides K, Berzin TM. Computer vision and augmented reality in gastrointestinal endoscopy. Gastroenterol Rep (Oxf) 2015; 3:179-84. [PMID: 26133175 PMCID: PMC4527270 DOI: 10.1093/gastro/gov027] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/30/2015] [Accepted: 06/07/2015] [Indexed: 02/06/2023] Open
Abstract
Augmented reality (AR) is an environment-enhancing technology, widely applied in the computer sciences, which has only recently begun to permeate the medical field. Gastrointestinal endoscopy—which relies on the integration of high-definition video data with pathologic correlates—requires endoscopists to assimilate and process a tremendous amount of data in real time. We believe that AR is well positioned to provide computer-guided assistance with a wide variety of endoscopic applications, beginning with polyp detection. In this article, we review the principles of AR, describe its potential integration into an endoscopy set-up, and envisage a series of novel uses. With close collaboration between physicians and computer scientists, AR promises to contribute significant improvements to the field of endoscopy.
Affiliation(s)
- Nadim Mahmud: Department of Internal Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Jonah Cohen, Tyler M Berzin: The Center for Advanced Endoscopy, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- Kleovoulos Tsourides: Brain and Cognitive Sciences, Massachusetts Institute of Technology, Boston, MA, USA
|
50
|
Liu X, Su H, Kang S, Kane TD, Shekhar R. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization. Proc SPIE Int Soc Opt Eng 2015; 9415. [PMID: 28943703 DOI: 10.1117/12.2082194] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
Accurate calibration of laparoscopic cameras is essential for many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious, requiring the acquisition of multiple images of a target pattern in its entirety to produce a satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and 1.13 ± 0.32 mm for the 10-mm scope), comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method shows promise for application to our augmented reality visualization system for laparoscopic surgery.
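Target registration error, the figure of merit in this comparison, is simply the residual distance at target points after mapping them through the calibrated chain; the mean and standard deviation over targets give numbers of the form quoted above. A minimal sketch with fabricated values (neither point set comes from the study):

```python
import numpy as np

# targets_true: target positions measured by the tracker (ground truth);
# targets_mapped: the same targets after passing through the calibrated
# AR overlay chain. Both are hypothetical Nx3 point sets in mm.
targets_true = np.array([[10.0, 20, 30], [40, 25, 32], [22, 48, 28]])
targets_mapped = targets_true + np.array([[1.0, -0.5, 0.3],
                                          [-0.8, 0.9, -0.4],
                                          [0.6, 0.2, -1.1]])

# Per-target Euclidean error, then summary statistics.
tre = np.linalg.norm(targets_mapped - targets_true, axis=1)
print(f"mean TRE = {tre.mean():.2f} +/- {tre.std(ddof=1):.2f} mm")
```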
Affiliation(s)
- Xinyang Liu, Sukryool Kang, Timothy D Kane, Raj Shekhar: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, 111 Michigan Avenue NW, Washington, DC 20010, USA
- He Su: Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Health System, 111 Michigan Avenue NW, Washington, DC 20010, USA; School of Mechanical Engineering, Tianjin University, 92 Weijin Road, Nankai District, Tianjin 300072, China
|