1. Lee Y. Three-Dimensional Dense Reconstruction: A Review of Algorithms and Datasets. Sensors (Basel) 2024; 24:5861. [PMID: 39338606] [PMCID: PMC11435907] [DOI: 10.3390/s24185861]
Abstract
Three-dimensional dense reconstruction involves extracting the full shape and texture details of three-dimensional objects from two-dimensional images. Although 3D reconstruction is a crucial and well-researched area, it remains an unsolved challenge in dynamic or complex environments. This work provides a comprehensive overview of classical 3D dense reconstruction techniques, including those based on geometric and optical models, as well as approaches leveraging deep learning. It also discusses the datasets used for deep learning and evaluates the performance and the strengths and limitations of deep learning methods on these datasets.
Affiliation(s)
- Yangming Lee: RoCAL Lab, Rochester Institute of Technology, Rochester, NY 14623, USA
2. Nardone V, Marmorino F, Germani MM, Cichowska-Cwalińska N, Menditti VS, Gallo P, Studiale V, Taravella A, Landi M, Reginelli A, Cappabianca S, Girnyi S, Cwalinski T, Boccardi V, Goyal A, Skokowski J, Oviedo RJ, Abou-Mrad A, Marano L. The Role of Artificial Intelligence on Tumor Boards: Perspectives from Surgeons, Medical Oncologists and Radiation Oncologists. Curr Oncol 2024; 31:4984-5007. [PMID: 39329997] [PMCID: PMC11431448] [DOI: 10.3390/curroncol31090369]
Abstract
The integration of multidisciplinary tumor boards (MTBs) is fundamental in delivering state-of-the-art cancer treatment, facilitating collaborative diagnosis and management by a diverse team of specialists. Despite the clear benefits in personalized patient care and improved outcomes, the increasing burden on MTBs due to rising cancer incidence and financial constraints necessitates innovative solutions. The advent of artificial intelligence (AI) in the medical field offers a promising avenue to support clinical decision-making. This review explores the perspectives of clinicians dedicated to the care of cancer patients (surgeons, medical oncologists, and radiation oncologists) on the application of AI within MTBs. Additionally, it examines the role of AI across various clinical specialties involved in cancer diagnosis and treatment. By analyzing both the potential and the challenges, this study underscores how AI can enhance multidisciplinary discussions and optimize treatment plans. The findings highlight the transformative role that AI may play in refining oncology care and sustaining the efficacy of MTBs amidst growing clinical demands.
Affiliation(s)
- Valerio Nardone: Department of Precision Medicine, University of Campania "L. Vanvitelli", 80131 Naples, Italy
- Federica Marmorino: Unit of Medical Oncology 2, Azienda Ospedaliera Universitaria Pisana, 56126 Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Marco Maria Germani: Unit of Medical Oncology 2, Azienda Ospedaliera Universitaria Pisana, 56126 Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Paolo Gallo: Department of Precision Medicine, University of Campania "L. Vanvitelli", 80131 Naples, Italy
- Vittorio Studiale: Unit of Medical Oncology 2, Azienda Ospedaliera Universitaria Pisana, 56126 Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Ada Taravella: Unit of Medical Oncology 2, Azienda Ospedaliera Universitaria Pisana, 56126 Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Matteo Landi: Unit of Medical Oncology 2, Azienda Ospedaliera Universitaria Pisana, 56126 Pisa, Italy; Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, 56126 Pisa, Italy
- Alfonso Reginelli: Department of Precision Medicine, University of Campania "L. Vanvitelli", 80131 Naples, Italy
- Salvatore Cappabianca: Department of Precision Medicine, University of Campania "L. Vanvitelli", 80131 Naples, Italy
- Sergii Girnyi: Department of General Surgery and Surgical Oncology, "Saint Wojciech" Hospital, "Nicolaus Copernicus" Health Center, 80-462 Gdańsk, Poland
- Tomasz Cwalinski: Department of General Surgery and Surgical Oncology, "Saint Wojciech" Hospital, "Nicolaus Copernicus" Health Center, 80-462 Gdańsk, Poland
- Virginia Boccardi: Division of Gerontology and Geriatrics, Department of Medicine and Surgery, University of Perugia, 06123 Perugia, Italy
- Aman Goyal: Adesh Institute of Medical Sciences and Research, Bathinda 151109, Punjab, India
- Jaroslaw Skokowski: Department of General Surgery and Surgical Oncology, "Saint Wojciech" Hospital, "Nicolaus Copernicus" Health Center, 80-462 Gdańsk, Poland; Department of Medicine, Academy of Applied Medical and Social Sciences-AMiSNS: Akademia Medycznych I Spolecznych Nauk Stosowanych, 82-300 Elbląg, Poland
- Rodolfo J Oviedo: Nacogdoches Medical Center, Nacogdoches, TX 75965, USA; Tilman J. Fertitta Family College of Medicine, University of Houston, Houston, TX 77021, USA; College of Osteopathic Medicine, Sam Houston State University, Conroe, TX 77304, USA
- Adel Abou-Mrad: Centre Hospitalier Universitaire d'Orléans, 45100 Orléans, France
- Luigi Marano: Department of General Surgery and Surgical Oncology, "Saint Wojciech" Hospital, "Nicolaus Copernicus" Health Center, 80-462 Gdańsk, Poland; Department of Medicine, Academy of Applied Medical and Social Sciences-AMiSNS: Akademia Medycznych I Spolecznych Nauk Stosowanych, 82-300 Elbląg, Poland
3. Yang Z, Dai J, Pan J. 3D reconstruction from endoscopy images: A survey. Comput Biol Med 2024; 175:108546. [PMID: 38704902] [DOI: 10.1016/j.compbiomed.2024.108546]
Abstract
Three-dimensional reconstruction of images acquired through endoscopes plays a vital role in a growing number of medical applications. Endoscopes used in the clinic are commonly classified as monocular or binocular. We review depth estimation methods according to the type of endoscope. Fundamentally, depth estimation relies on image feature matching and multi-view geometry. However, these traditional techniques face many problems in the endoscopic environment. With the rapid development of deep learning, a growing number of works use learning-based methods to address challenges such as inconsistent illumination and texture sparsity. We reviewed over 170 papers published in the 10 years from 2013 to 2023. The commonly used public datasets and performance metrics are summarized. We also give a taxonomy of methods and analyze the advantages and drawbacks of the algorithms. Summary tables and a results atlas are provided to facilitate comparison of the qualitative and quantitative performance of different methods in each category. In addition, we summarize commonly used scene representation methods in endoscopy and discuss the prospects of depth estimation research in medical applications. We also compare the robustness, processing time, and scene representation of the methods to help doctors and researchers select appropriate methods for their surgical applications.
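The multi-view geometry underlying these classical pipelines reduces, at its core, to triangulating matched image features from calibrated views. As a minimal illustration (a generic DLT triangulation sketch with made-up camera parameters, not any specific surveyed method):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views.

    P1, P2: 3x4 camera projection matrices; x1, x2: 2D pixel
    coordinates of the same scene point in each image (e.g. obtained
    by feature matching). Returns the 3D point in world coordinates.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The null vector of A (last right-singular vector) is the point.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize
```

With noise-free matches this recovers the 3D point exactly; real endoscopic imagery adds specular highlights, texture scarcity, and tissue deformation, which is precisely where the learning-based methods surveyed above come in.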
Affiliation(s)
- Zhuoyue Yang: State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, 37 Xueyuan Road, Haidian District, Beijing, 100191, China; Peng Cheng Lab, 2 Xingke 1st Street, Nanshan District, Shenzhen, Guangdong Province, 518000, China
- Ju Dai: Peng Cheng Lab, 2 Xingke 1st Street, Nanshan District, Shenzhen, Guangdong Province, 518000, China
- Junjun Pan: State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, 37 Xueyuan Road, Haidian District, Beijing, 100191, China; Peng Cheng Lab, 2 Xingke 1st Street, Nanshan District, Shenzhen, Guangdong Province, 518000, China
4. Schmidt A, Mohareri O, DiMaio S, Yip MC, Salcudean SE. Tracking and mapping in medical computer vision: A review. Med Image Anal 2024; 94:103131. [PMID: 38442528] [DOI: 10.1016/j.media.2024.103131]
Abstract
As computer vision algorithms increase in capability, their applications in clinical systems will become more pervasive. These applications include diagnostics, such as colonoscopy and bronchoscopy; guiding biopsies, minimally invasive interventions, and surgery; automating instrument motion; and providing image guidance using pre-operative scans. Many of these applications depend on the specific visual nature of medical scenes and require algorithms designed to perform in this environment. In this review, we provide an update on the field of camera-based tracking and scene mapping in surgery and diagnostics in medical computer vision. We begin by describing our review process, which results in a final list of 515 papers that we cover. We then give a high-level summary of the state of the art and provide relevant background for those who need tracking and mapping for their clinical applications. After that, we review datasets provided in the field and the clinical needs that motivate their design. We then delve into the algorithmic side and summarize recent developments. This summary should be especially useful for algorithm designers and for those looking to understand the capability of off-the-shelf methods. We focus on algorithms for deformable environments while also reviewing the essential building blocks of rigid tracking and mapping, since there is a large amount of crossover in methods. With the field summarized, we discuss the current state of tracking and mapping methods, along with needs for future algorithms, needs for quantification, and the viability of clinical applications. We then provide some research directions and questions. We conclude that new methods need to be designed or combined to support clinical applications in deformable environments, and that more focus needs to be put into collecting datasets for training and evaluation.
Affiliation(s)
- Adam Schmidt: Department of Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver V6T 1Z4, BC, Canada
- Omid Mohareri: Advanced Research, Intuitive Surgical, 1020 Kifer Rd, Sunnyvale, CA 94086, USA
- Simon DiMaio: Advanced Research, Intuitive Surgical, 1020 Kifer Rd, Sunnyvale, CA 94086, USA
- Michael C Yip: Department of Electrical and Computer Engineering, University of California San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA
- Septimiu E Salcudean: Department of Electrical and Computer Engineering, University of British Columbia, 2329 West Mall, Vancouver V6T 1Z4, BC, Canada
5. Song J, Zhang R, Zhu Q, Lin J, Ghaffari M. BDIS-SLAM: a lightweight CPU-based dense stereo SLAM for surgery. Int J Comput Assist Radiol Surg 2024; 19:811-820. [PMID: 38238493] [DOI: 10.1007/s11548-023-03055-1]
Abstract
PURPOSE: Common dense stereo simultaneous localization and mapping (SLAM) approaches in minimally invasive surgery (MIS) require high-end parallel computational resources for real-time implementation. This is not always feasible, since computational resources must also be allocated to other tasks such as segmentation, detection, and tracking. To address the problem of limited parallel computational power, this research aims at a lightweight dense stereo SLAM system that works on a single-core CPU and achieves real-time performance (more than 30 Hz in typical scenarios).
METHODS: A new dense stereo mapping module is integrated with the ORB-SLAM2 system and named BDIS-SLAM. The module comprises stereo matching and 3D dense depth mosaicking. Stereo matching uses the recently proposed CPU-level real-time matching algorithm Bayesian Dense Inverse Searching (BDIS). A BDIS-based shape recovery and a depth mosaic strategy are integrated as a new thread and coupled with the backbone ORB-SLAM2 system for real-time stereo shape recovery.
RESULTS: Experiments on in vivo datasets show that BDIS-SLAM runs at over 30 Hz on a modern single-core CPU in typical endoscopy/colonoscopy scenarios, consuming only around 12% more time than the backbone ORB-SLAM2. Although the lightweight BDIS-SLAM simplifies the process by ignoring deformation and fusion procedures, it provides usable dense mapping for modern MIS on computationally constrained devices.
CONCLUSION: The proposed BDIS-SLAM is a lightweight dense stereo SLAM system for MIS. It achieves 30 Hz on a modern single-core CPU in typical endoscopy/colonoscopy scenarios (image size around 640 × 480). BDIS-SLAM provides a low-cost solution for dense mapping in MIS and has the potential to be applied in surgical robots and AR systems. Code is available at https://github.com/JingweiSong/BDIS-SLAM .
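The dense-mapping step in any rectified-stereo pipeline like the one described ultimately converts per-pixel disparities into metric depth via Z = f·B/d. A tiny sketch of that conversion (illustrative only; BDIS itself is a Bayesian patch-matching algorithm, and the focal length and baseline below are made-up values):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m):
    """Convert a rectified-stereo disparity map to metric depth.

    For a rectified pair, depth Z = f * B / d, where f is the focal
    length in pixels, B the stereo baseline in meters, and d the
    per-pixel disparity. Invalid (non-positive) disparities map to inf.
    """
    d = np.asarray(disparity, dtype=float)
    # Guard the division; np.where then masks out invalid pixels.
    depth = focal_px * baseline_m / np.maximum(d, 1e-12)
    return np.where(d > 0, depth, np.inf)
```

A 50-pixel disparity with a 500-pixel focal length and a 5 mm baseline, for instance, corresponds to 5 cm depth, a plausible working distance in endoscopy.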
Affiliation(s)
- Jingwei Song: United Imaging Research Institute of Intelligent Imaging, Beijing, 100144, China; University of Michigan, Ann Arbor, MI, 48109, USA
- Ray Zhang: University of Michigan, Ann Arbor, MI, 48109, USA
- Qiuchen Zhu: University of Technology Sydney, Sydney, NSW, 2007, Australia
- Jianyu Lin: Imperial College London, London, SW7 2AZ, UK
6. Yu X, Zhao J, Wu H, Wang A. A Novel Evaluation Method for SLAM-Based 3D Reconstruction of Lumen Panoramas. Sensors (Basel) 2023; 23:7188. [PMID: 37631725] [PMCID: PMC10459170] [DOI: 10.3390/s23167188]
Abstract
Laparoscopy is employed in conventional minimally invasive surgery to inspect internal cavities by viewing two-dimensional images on a monitor. This method has a limited field of view and provides insufficient information for surgeons, increasing surgical complexity. Utilizing simultaneous localization and mapping (SLAM) technology to reconstruct laparoscopic scenes can offer more comprehensive and intuitive visual feedback. Moreover, the precision of the reconstructed models is a crucial factor for further applications of surgical assistance systems. However, challenges such as data scarcity and scale uncertainty hinder effective assessment of the accuracy of endoscopic monocular SLAM reconstructions. Therefore, this paper proposes a technique that incorporates existing knowledge from calibration objects to supplement metric information and resolve scale ambiguity issues, and it quantifies the endoscopic reconstruction accuracy based on local alignment metrics. The experimental results demonstrate that the reconstructed models restore realistic scales and enable error analysis for laparoscopic SLAM reconstruction systems. This suggests that for the evaluation of monocular SLAM three-dimensional (3D) reconstruction accuracy in minimally invasive surgery scenarios, our proposed scheme for recovering scale factors is viable, and our evaluation outcomes can serve as criteria for measuring reconstruction precision.
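The core of the proposed scale-recovery scheme, fixing a monocular reconstruction's unknown global scale using an object of known physical size, can be sketched in a few lines (a simplified illustration with hypothetical marker indices, not the paper's exact local-alignment metric):

```python
import numpy as np

def recover_scale(recon_pts, marker_a_idx, marker_b_idx, true_dist):
    """Resolve monocular-SLAM scale ambiguity with a known-size object.

    Monocular reconstructions are correct only up to a global scale s.
    Given two reconstructed points whose real-world separation
    true_dist is known (e.g. features on a calibration object), the
    scale factor is s = true_dist / reconstructed_dist; applying it
    restores metric units to the whole point cloud.
    """
    d = np.linalg.norm(recon_pts[marker_a_idx] - recon_pts[marker_b_idx])
    s = true_dist / d
    return s, recon_pts * s  # scale factor and metric reconstruction
```

Once the model is in metric units, point-wise errors against a reference model become meaningful, which is what enables the quantitative accuracy evaluation described above.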
Affiliation(s)
- Xiaoyu Yu: College of Electron and Information, University of Electronic Science and Technology of China, Zhongshan Institute, Zhongshan 528402, China; Heilongjiang Province Key Laboratory of Laser Spectroscopy Technology and Application, Harbin University of Science and Technology, Harbin 150080, China
- Jianbo Zhao: Heilongjiang Province Key Laboratory of Laser Spectroscopy Technology and Application, Harbin University of Science and Technology, Harbin 150080, China
- Haibin Wu: Heilongjiang Province Key Laboratory of Laser Spectroscopy Technology and Application, Harbin University of Science and Technology, Harbin 150080, China
- Aili Wang: Heilongjiang Province Key Laboratory of Laser Spectroscopy Technology and Application, Harbin University of Science and Technology, Harbin 150080, China
7. Liu R, Liu Z, Lu J, Zhang G, Zuo Z, Sun B, Zhang J, Sheng W, Guo R, Zhang L, Hua X. Sparse-to-dense coarse-to-fine depth estimation for colonoscopy. Comput Biol Med 2023; 160:106983. [PMID: 37187133] [DOI: 10.1016/j.compbiomed.2023.106983]
Abstract
Colonoscopy, the gold standard for screening colon cancer and related diseases, offers considerable benefits to patients. However, it also imposes challenges on diagnosis and potential surgery due to its narrow observation perspective and limited perception dimension. Dense depth estimation can overcome these limitations and offer doctors straightforward 3D visual feedback. To this end, we propose a novel sparse-to-dense, coarse-to-fine depth estimation solution for colonoscopic scenes based on a direct SLAM algorithm. The highlight of our solution is that we utilize the scattered 3D points obtained from SLAM to generate an accurate, dense, full-resolution depth map. This is done by a deep learning (DL)-based depth completion network and a reconstruction system. The depth completion network extracts texture, geometry, and structure features from the sparse depth along with RGB data to recover the dense depth map. The reconstruction system further refines the dense depth map using photometric error-based optimization and mesh modeling to reconstruct a more accurate 3D colon model with detailed surface texture. We show the effectiveness and accuracy of our depth estimation method on near photo-realistic, challenging colon datasets. Experiments demonstrate that the sparse-to-dense, coarse-to-fine strategy significantly improves depth estimation performance and smoothly fuses direct SLAM and DL-based depth estimation into a complete dense reconstruction system.
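As a toy stand-in for the depth completion step (nearest-neighbour fill rather than the paper's learned network, which additionally exploits RGB texture and geometry cues), densifying a handful of scattered SLAM depths can be sketched as:

```python
import numpy as np

def densify_nearest(sparse_depth):
    """Naive sparse-to-dense depth completion by nearest-neighbour fill.

    sparse_depth: 2D array holding depth at a few pixels and 0
    elsewhere (e.g. depths of scattered SLAM map points projected
    into the image). Every empty pixel takes the value of its nearest
    valid sample; assumes at least one valid sample exists.
    """
    ys, xs = np.nonzero(sparse_depth)
    vals = sparse_depth[ys, xs]
    h, w = sparse_depth.shape
    gy, gx = np.mgrid[0:h, 0:w]
    # Squared distance from every pixel to every valid sample: (h, w, K).
    d2 = (gy[..., None] - ys) ** 2 + (gx[..., None] - xs) ** 2
    return vals[np.argmin(d2, axis=-1)]
```

The output is blocky and piecewise constant, which is exactly why a learned completion network plus photometric refinement, as described above, is needed for a usable colon model.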
Affiliation(s)
- Ruyu Liu: School of Information Science and Technology, Hangzhou Normal University, Hangzhou, 311121, China; Haixi Institutes, Chinese Academy of Sciences Quanzhou Institute of Equipment Manufacturing, Quanzhou, 362000, China
- Zhengzhe Liu: School of Information Science and Technology, Hangzhou Normal University, Hangzhou, 311121, China
- Jiaming Lu: School of Computer Science and Engineering, Tianjin University of Technology, Tianjin, 300384, China
- Guodao Zhang: Department of Digital Media Technology, Hangzhou Dianzi University, Hangzhou, 310018, China
- Zhigui Zuo: Department of Colorectal Surgery, the First Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325035, China
- Bo Sun: Haixi Institutes, Chinese Academy of Sciences Quanzhou Institute of Equipment Manufacturing, Quanzhou, 362000, China
- Jianhua Zhang: School of Computer Science and Engineering, Tianjin University of Technology, Tianjin, 300384, China
- Weiguo Sheng: School of Information Science and Technology, Hangzhou Normal University, Hangzhou, 311121, China
- Ran Guo: Cyberspace Institute Advanced Technology, Guangzhou University, Guangzhou, 510006, China
- Lejun Zhang: Cyberspace Institute Advanced Technology, Guangzhou University, Guangzhou, 510006, China; College of Information Engineering, Yangzhou University, Yangzhou, 225127, China; Research and Development Center for E-Learning, Ministry of Education, Beijing, 100039, China
- Xiaozhen Hua: Department of Pediatrics, Cangnan Affiliated Hospital of Wenzhou Medical University, Wenzhou, 325800, China
8. Mehedi IM, Rao KP, Alotaibi FM, Alkanfery HM. Intelligent Wireless Capsule Endoscopy for the Diagnosis of Gastrointestinal Diseases. Diagnostics (Basel) 2023; 13:1445. [PMID: 37189546] [DOI: 10.3390/diagnostics13081445]
Abstract
Through a wireless capsule endoscope (WCE) fitted with a miniature camera (about an inch long), this study examines the role of wireless capsule endoscopy in the diagnosis, monitoring, and evaluation of gastrointestinal (GI) disorders. The capsule travels through the digestive tract taking pictures, which are stored by a wearable belt recorder. The study attempts to identify miniature components that can be used to enhance the WCE. To accomplish this, we followed these steps: surveying current capsule endoscopy research through databases; designing and simulating the device on computers; implementing the system and finding miniature components compatible with the capsule size; testing the system and eliminating noise and other problems; and analyzing the results. The present study shows that a spherical WCE shape and a smaller WCE (13.5 in diameter) with high resolution and a high frame rate (8-32 fps) could reduce the pain patients experience with traditional capsules, provide more accurate pictures, and prolong battery life. In addition, the capsule can be used to reconstruct 3D images. Simulation experiments showed that spherical endoscopic devices are more advantageous than commercial capsule-shaped devices for wireless applications; we found that the sphere's velocity through the fluid was greater than the capsule's.
Affiliation(s)
- Ibrahim M Mehedi: Department of Electrical and Computer Engineering (ECE), King Abdulaziz University, Jeddah 21589, Saudi Arabia; Center of Excellence in Intelligent Engineering Systems (CEIES), King Abdulaziz University, Jeddah 21589, Saudi Arabia
- K Prahlad Rao: Department of Electrical and Computer Engineering (ECE), King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Fahad Mushhabbab Alotaibi: Department of Electrical and Computer Engineering (ECE), King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Hadi Mohsen Alkanfery: Department of Electrical and Computer Engineering (ECE), King Abdulaziz University, Jeddah 21589, Saudi Arabia
9. Li Y. Deep causal learning for robotic intelligence. Front Neurorobot 2023; 17:1128591. [PMID: 36910267] [PMCID: PMC9992986] [DOI: 10.3389/fnbot.2023.1128591]
Abstract
This invited Review discusses causal learning in the context of robotic intelligence. The Review introduces the psychological findings on causal learning in human cognition, as well as the traditional statistical solutions for causal discovery and causal inference. Additionally, we examine recent deep causal learning algorithms, with a focus on their architectures and the benefits of using deep nets, and discuss the gap between deep causal learning and the needs of robotic intelligence.
Affiliation(s)
- Yangming Li: RoCAL, Rochester Institute of Technology, Rochester, NY, United States
10. Song Z, Zhang W, Zhang W, Paolo D. A Novel Biopsy Capsule Robot Based on High-Speed Cutting Tissue. Cyborg Bionic Syst 2022; 2022:9783517. [PMID: 39081833] [PMCID: PMC11288281] [DOI: 10.34133/2022/9783517]
Abstract
The capsule robot (CR) is a promising endoscopic method for gastrointestinal diagnosis because of its low discomfort to users. Most CRs are used only to acquire image information and lack the ability to collect samples. Although some biopsy capsule robots (BCRs) have been developed, it remains challenging to acquire intestinal tissue while avoiding tearing and adhesion, due to the flexibility of colonic tissue. In this study, we develop a BCR with a novel sampling strategy in which soft tissue is scratched by sharp blades rotating at high speed to avoid tissue tearing. In the BCR design, a spiral spring with prestored energy is used to release high energy within a short period of time, which is difficult for a motor or magnet to achieve within the small installation space. The energy of the tightened spiral spring is transmitted through a designed gear mechanism to drive the sharp blades to rotate quickly. To guarantee reliable sampling, a Bowden cable transmits the user's manipulation to trigger the rotation of the blades, and the triggering force transmitted by the cable is monitored in real time by a force sensor installed at the manipulating end. A prototype of the proposed BCR is designed and fabricated, and its performance is tested through in vitro experiments. The results show that the proposed BCR is effective and that the size of its acquired samples satisfies clinical requirements.
Affiliation(s)
- Zhibin Song: Key Laboratory of Mechanism Theory and Equipment Design of the Ministry of Education, Tianjin University, Tianjin 300072, China
- Wenjie Zhang: Key Laboratory of Mechanism Theory and Equipment Design of the Ministry of Education, Tianjin University, Tianjin 300072, China
- Wenhui Zhang: Beijing Shijitan Hospital, Capital Medical University, Beijing 100084, China
- Dario Paolo: Key Laboratory of Mechanism Theory and Equipment Design of the Ministry of Education, Tianjin University, Tianjin 300072, China; The BioRobotics Institute, Scuola Superiore Sant’Anna, 56100 Pisa, Italy
11. Zhang S, Lin B, Zhou P, Liu S. Analysis of clinical efficacy and prognostic side effects of radiotherapy with Teggio capsule on 78 elderly patients with esophageal cancer. Minerva Med 2022; 113:758-759. [DOI: 10.23736/s0026-4806.20.06839-1]
12. Liu X, Li Z, Ishii M, Hager GD, Taylor RH, Unberath M. SAGE: SLAM with Appearance and Geometry Prior for Endoscopy. IEEE Int Conf Robot Autom (ICRA) 2022; 2022:5587-5593. [PMID: 36937551] [PMCID: PMC10018746] [DOI: 10.1109/icra46639.2022.9812257]
Abstract
In endoscopy, many applications (e.g., surgical navigation) would benefit from a real-time method that can simultaneously track the endoscope and reconstruct the dense 3D geometry of the observed anatomy from a monocular endoscopic video. To this end, we develop a Simultaneous Localization and Mapping (SLAM) system that combines learning-based appearance priors, optimizable geometry priors, and factor graph optimization. The appearance and geometry priors are explicitly learned in an end-to-end differentiable training pipeline to master the task of pair-wise image alignment, one of the core components of the SLAM system. In our experiments, the proposed SLAM system robustly handles the challenges of texture scarceness and illumination variation that are commonly seen in endoscopy. The system generalizes well to unseen endoscopes and subjects and performs favorably compared with a state-of-the-art feature-based SLAM system. The code repository is available at https://github.com/lppllppl920/SAGE-SLAM.git.
Affiliation(s)
- Xingtong Liu: Computer Science Department, Johns Hopkins University (JHU), Baltimore, MD 21287 USA
- Zhaoshuo Li: Computer Science Department, Johns Hopkins University (JHU), Baltimore, MD 21287 USA
- Masaru Ishii: Johns Hopkins Medical Institutions, Baltimore, MD 21224 USA
- Gregory D Hager: Computer Science Department, Johns Hopkins University (JHU), Baltimore, MD 21287 USA
- Russell H Taylor: Computer Science Department, Johns Hopkins University (JHU), Baltimore, MD 21287 USA
- Mathias Unberath: Computer Science Department, Johns Hopkins University (JHU), Baltimore, MD 21287 USA
13. AIM in Endoscopy Procedures. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_164]
14. Fu Z, Jin Z, Zhang C, Dai Y, Gao X, Wang Z, Li L, Ding G, Hu H, Wang P, Ye X. Visual-electromagnetic system: A novel fusion-based monocular localization, reconstruction, and measurement for flexible ureteroscopy. Int J Med Robot 2021; 17:e2274. [PMID: 33960604] [DOI: 10.1002/rcs.2274]
Abstract
BACKGROUND: During flexible ureteroscopy (FURS), surgeons may lose orientation due to intrarenal structural similarities and the complex shape of the pyelocaliceal cavity. Decision-making required after initially misjudging stone size also increases the operative time and the risk of severe complications.
METHODS: An intraoperative navigation system based on electromagnetic tracking (EMT) and simultaneous localization and mapping (SLAM) is proposed to track the tip of the ureteroscope and reconstruct a dense intrarenal three-dimensional (3D) map. Furthermore, the contours of stones are segmented to measure their size.
RESULTS: Our system was evaluated on a kidney phantom, achieving an absolute trajectory accuracy root mean square error (RMSE) of 0.6 mm. The median errors of the longitudinal and transversal measurements were 0.061 and 0.074 mm, respectively. An in vivo experiment also demonstrated the system's effectiveness.
CONCLUSION: The proposed system worked effectively for tracking and measurement. Furthermore, it can be extended to other surgical applications involving cavities and branches, and to intelligent robotic surgery.
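The trajectory accuracy figure quoted above is an RMSE over corresponding tip positions, the standard absolute-trajectory-error metric. Assuming estimated and ground-truth positions are already expressed in a common frame, it can be computed as:

```python
import numpy as np

def ate_rmse(est, gt):
    """Absolute trajectory error (RMSE) between corresponding positions.

    est, gt: (N, 3) arrays of estimated vs. ground-truth positions,
    assumed already aligned to the same coordinate frame. Returns the
    root mean square of the per-pose Euclidean errors.
    """
    err = np.linalg.norm(est - gt, axis=1)  # per-pose position error
    return float(np.sqrt(np.mean(err ** 2)))
```

In practice, evaluations usually first rigidly align the two trajectories (e.g., with a least-squares alignment such as Umeyama's method) before computing this number; the sketch omits that step.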
Affiliation(s)
- Zuoming Fu
- Biosensor National Special Laboratory, Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Ziyi Jin
- Biosensor National Special Laboratory, Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Chongan Zhang
- Biosensor National Special Laboratory, Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Yu Dai
- College of Artificial Intelligence, Nankai University, Tianjin, China
- Xiaofeng Gao
- Department of Urology, Changhai Hospital, Shanghai, China
- Zeyu Wang
- Department of Urology, Changhai Hospital, Shanghai, China
- Ling Li
- Department of Urology, Changhai Hospital, Shanghai, China
- Guoqing Ding
- Department of Urology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Haiyi Hu
- Department of Urology, Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Hangzhou, China
- Peng Wang
- Biosensor National Special Laboratory, Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Xuesong Ye
- Biosensor National Special Laboratory, Key Laboratory of Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China

15
Chang KP, Lin SH, Chu YW. Artificial intelligence in gastrointestinal radiology: A review with special focus on recent development of magnetic resonance and computed tomography. Artif Intell Gastroenterol 2021; 2:27-41. DOI: 10.35712/aig.v2.i2.27.
Abstract
Artificial intelligence (AI), particularly deep learning technology, has proven influential in radiology over the recent decade. Its ability in image classification, segmentation, detection, and reconstruction tasks has substantially assisted diagnostic radiology, and it has even been viewed as having the potential to outperform radiologists in some tasks. Gastrointestinal radiology, an important subspecialty dealing with complex anatomy and various modalities including endoscopy, has especially attracted the attention of AI researchers and engineers worldwide. Consequently, many tools have recently been developed for lesion detection and image construction in gastrointestinal radiology, particularly in fields for which public databases are available, such as diagnostic abdominal magnetic resonance imaging (MRI) and computed tomography (CT). This review provides a framework for understanding recent advancements of AI in gastrointestinal radiology, with a special focus on hepatic and pancreatobiliary diagnostic radiology with MRI and CT. For fields where AI is less developed, this review also explains the difficulty of AI model training and possible strategies to overcome the technical issues. The authors' insights into possible future developments are addressed in the last section.
Affiliation(s)
- Kai-Po Chang
- PhD Program in Medical Biotechnology, National Chung Hsing University, Taichung 40227, Taiwan
- Department of Pathology, China Medical University Hospital, Taichung 40447, Taiwan
- Shih-Huan Lin
- PhD Program in Medical Biotechnology, National Chung Hsing University, Taichung 40227, Taiwan
- Yen-Wei Chu
- PhD Program in Medical Biotechnology, National Chung Hsing University, Taichung 40227, Taiwan
- Institute of Genomics and Bioinformatics, National Chung Hsing University, Taichung 40227, Taiwan
- Institute of Molecular Biology, National Chung Hsing University, Taichung 40227, Taiwan
- Agricultural Biotechnology Center, National Chung Hsing University, Taichung 40227, Taiwan
- Biotechnology Center, National Chung Hsing University, Taichung 40227, Taiwan
- PhD Program in Translational Medicine, National Chung Hsing University, Taichung 40227, Taiwan
- Rong Hsing Research Center for Translational Medicine, Taichung 40227, Taiwan

16
Yang H, Hu B. Application of artificial intelligence to endoscopy on common gastrointestinal benign diseases. Artif Intell Gastrointest Endosc 2021; 2:25-35. DOI: 10.37126/aige.v2.i2.25.
Abstract
Artificial intelligence (AI) has been widely involved in every aspect of healthcare, currently at the preclinical stage. In the digestive system, AI has been trained to assist auxiliary examinations including histopathology, endoscopy, ultrasonography, computed tomography, and magnetic resonance imaging in detection, diagnosis, classification, differentiation, prognosis, and quality control. In the field of endoscopy, the application of AI to early gastrointestinal (GI) cancers, such as automatic detection, diagnosis, classification, and invasion-depth assessment, has received wide attention. By contrast, there is a paucity of studies of endoscopy-based AI applications for common GI benign diseases. In this review, we provide an overview of AI applications to endoscopy for common GI benign diseases of the esophagus, stomach, intestine, and colon. The evidence indicates that AI will gradually become an indispensable part of routine endoscopic detection and diagnosis of common GI benign diseases as clinical data, algorithms, and other related work are continually refined.
Affiliation(s)
- Hang Yang
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China
- Bing Hu
- Department of Gastroenterology, West China Hospital, Sichuan University, Chengdu 610041, Sichuan Province, China

17
Ozyoruk KB, Gokceler GI, Bobrow TL, Coskun G, Incetan K, Almalioglu Y, Mahmood F, Curto E, Perdigoto L, Oliveira M, Sahin H, Araujo H, Alexandrino H, Durr NJ, Gilbert HB, Turan M. EndoSLAM dataset and an unsupervised monocular visual odometry and depth estimation approach for endoscopic videos. Med Image Anal 2021; 71:102058. PMID: 33930829. DOI: 10.1016/j.media.2021.102058.
Abstract
Deep learning techniques hold promise for developing dense topography reconstruction and pose estimation methods for endoscopic videos. However, currently available datasets do not support effective quantitative benchmarking. In this paper, we introduce a comprehensive endoscopic SLAM dataset consisting of 3D point cloud data for six porcine organs, capsule and standard endoscopy recordings, synthetically generated data, as well as a recording of a phantom colon made with a conventional endoscope in clinical use, with computed tomography (CT) scan ground truth. A Panda robotic arm, two commercially available capsule endoscopes, three conventional endoscopes with different camera properties, two high-precision 3D scanners, and a CT scanner were employed to collect data from eight ex vivo porcine gastrointestinal (GI) tract organs and a silicone colon phantom model. In total, 35 sub-datasets are provided with 6D pose ground truth for the ex vivo part: 18 sub-datasets for the colon, 12 for the stomach, and 5 for the small intestine, while four of these contain polyp-mimicking elevations created by an expert gastroenterologist. To verify the applicability of this data to real clinical systems, we recorded a video sequence with a state-of-the-art colonoscope from a full-representation silicone colon phantom. Synthetic capsule endoscopy frames from the stomach, colon, and small intestine with both depth and pose annotations are included to facilitate the study of simulation-to-real transfer learning algorithms. Additionally, we propose Endo-SfMLearner, an unsupervised monocular depth and pose estimation method that combines residual networks with a spatial attention module to direct the network's focus toward distinguishable and highly textured tissue regions. The proposed approach makes use of a brightness-aware photometric loss to improve robustness under the fast frame-to-frame illumination changes that are commonly seen in endoscopic videos. To exemplify the use case of the EndoSLAM dataset, the performance of Endo-SfMLearner is extensively compared with the state of the art: SC-SfMLearner, Monodepth2, and SfMLearner. The code and a link to the dataset are publicly available at https://github.com/CapsuleEndoscope/EndoSLAM. A video demonstrating the experimental setup and procedure is accessible as Supplementary Video 1.
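The brightness-aware photometric loss mentioned above can be illustrated with a minimal sketch: normalizing each image by its mean intensity before comparing makes the loss insensitive to global illumination changes between frames. This is a simplified stand-in, not Endo-SfMLearner's exact formulation:

```python
import numpy as np

def brightness_aware_photometric_loss(target, warped, eps=1e-6):
    """Mean absolute photometric error after per-image mean-intensity
    normalization, so a global brightness change between the target and
    the warped source frame does not dominate the loss.
    (Simplified sketch; the paper's exact loss differs.)"""
    t = target / (target.mean() + eps)
    w = warped / (warped.mean() + eps)
    return float(np.abs(t - w).mean())
```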
Affiliation(s)
- Taylor L Bobrow
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Gulfize Coskun
- Institute of Biomedical Engineering, Bogazici University, Turkey
- Kagan Incetan
- Institute of Biomedical Engineering, Bogazici University, Turkey
- Faisal Mahmood
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Data Science, Dana Farber Cancer Institute, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Eva Curto
- Institute for Systems and Robotics, University of Coimbra, Portugal
- Luis Perdigoto
- Institute for Systems and Robotics, University of Coimbra, Portugal
- Marina Oliveira
- Institute for Systems and Robotics, University of Coimbra, Portugal
- Hasan Sahin
- Institute of Biomedical Engineering, Bogazici University, Turkey
- Helder Araujo
- Institute for Systems and Robotics, University of Coimbra, Portugal
- Henrique Alexandrino
- Faculty of Medicine, Clinical Academic Center of Coimbra, University of Coimbra, Coimbra, Portugal
- Nicholas J Durr
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Hunter B Gilbert
- Department of Mechanical and Industrial Engineering, Louisiana State University, Baton Rouge, LA, USA
- Mehmet Turan
- Institute of Biomedical Engineering, Bogazici University, Turkey

18
Marzullo A, Moccia S, Calimeri F, De Momi E. AIM in Endoscopy Procedures. Artif Intell Med 2021. DOI: 10.1007/978-3-030-58080-3_164-1.
19
Application of artificial intelligence in surgery. Front Med 2020; 14:417-430. DOI: 10.1007/s11684-020-0770-0.
20
Feng XW, Feng DZ. A Robust Nonrigid Point Set Registration Method Based on Collaborative Correspondences. Sensors 2020; 20:3248. PMID: 32517316. PMCID: PMC7308981. DOI: 10.3390/s20113248.
Abstract
Nonrigid point set registration is a bottleneck problem with wide applications in computer vision, pattern recognition, image fusion, video processing, and so on. In a nonrigid point set registration problem, finding point-to-point correspondences is challenging because of various image degradations. In this paper, a robust method is proposed to accurately determine correspondences by fusing two complementary structural features: the spatial location of a point and the local structure around it. The former is used to define the absolute distance (AD), and the latter is exploited to define the relative distance (RD). AD-correspondences and RD-correspondences can be established based on AD and RD, respectively. Neighboring correspondence consistency is employed to assign a confidence to each RD-correspondence. The proposed heuristic method combines the AD-correspondences and the RD-correspondences to determine the corresponding relationship between two point sets, which significantly improves correspondence accuracy. Subsequently, the thin plate spline (TPS) is employed as the transformation function. At each step, the closed-form solutions of the affine and nonaffine parts of TPS can be independently and robustly solved, which facilitates analyzing and controlling the registration process. Experimental results demonstrate that our method achieves better performance than several existing state-of-the-art methods.
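The TPS transformation used as the warp model above can be sketched compactly: given matched control points, the affine and nonaffine coefficients follow from a single linear system. This is a generic TPS fit under a small regularizer (function names are illustrative); the paper's collaborative correspondence step is not shown:

```python
import numpy as np

def tps_fit(src, dst, reg=1e-8):
    """Fit a 2-D thin plate spline mapping src control points to dst.
    Returns the nonaffine weights W and the affine coefficients A."""
    n = src.shape[0]
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = np.where(d2 > 0, 0.5 * d2 * np.log(d2 + 1e-12), 0.0)  # U(r) = r^2 log r
    P = np.hstack([np.ones((n, 1)), src])
    M = np.zeros((n + 3, n + 3))
    M[:n, :n] = K + reg * np.eye(n)  # regularized kernel block
    M[:n, n:] = P
    M[n:, :n] = P.T
    b = np.vstack([dst, np.zeros((3, 2))])
    params = np.linalg.solve(M, b)
    return params[:n], params[n:]

def tps_apply(src, W, A, pts):
    """Warp arbitrary points with the fitted spline."""
    d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    U = np.where(d2 > 0, 0.5 * d2 * np.log(d2 + 1e-12), 0.0)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ W + P @ A
```

For a purely affine deformation the nonaffine weights come out near zero, so the fit reproduces the target control points.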
21
Gulati S, Emmanuel A, Patel M, Williams S, Haji A, Hayee B, Neumann H. Artificial intelligence in luminal endoscopy. Ther Adv Gastrointest Endosc 2020; 13:2631774520935220. PMID: 32637935. PMCID: PMC7315657. DOI: 10.1177/2631774520935220.
Abstract
Artificial intelligence is a strong focus of interest for global health development. Diagnostic endoscopy is an attractive substrate for artificial intelligence, with real potential to improve patient care through standardisation of endoscopic diagnosis and by serving as an adjunct to enhanced imaging diagnosis. The possibility of amassing large datasets to refine algorithms makes the adoption of artificial intelligence into global practice a potential reality. Initial studies in luminal endoscopy involved machine learning and were retrospective. Improvement in diagnostic performance is appreciable through the adoption of deep learning. Research foci in the upper gastrointestinal tract include the diagnosis of neoplasia, including Barrett's, squamous cell, and gastric neoplasia, where prospective and real-time artificial intelligence studies have been completed, demonstrating a benefit of artificial intelligence-augmented endoscopy. Deep learning applied to small bowel capsule endoscopy also appears to enhance pathology detection and reduce capsule reading time. Prospective evaluation, including the first randomised trial, has been performed in the colon, demonstrating improved polyp and adenoma detection rates; however, these gains appear to be limited to small polyps. There are potential additional roles for artificial intelligence in improving the quality of endoscopic examinations, training, and the triaging of referrals. Further large-scale, multicentre, and cross-platform validation studies are required for the robust incorporation of artificial intelligence-augmented diagnostic luminal endoscopy into routine clinical practice.
Affiliation(s)
- Shraddha Gulati
- King’s Institute of Therapeutic Endoscopy, King’s College Hospital NHS Foundation Trust, London, UK
- Andrew Emmanuel
- King’s Institute of Therapeutic Endoscopy, King’s College Hospital NHS Foundation Trust, London, UK
- Mehul Patel
- King’s Institute of Therapeutic Endoscopy, King’s College Hospital NHS Foundation Trust, London, UK
- Sophie Williams
- King’s Institute of Therapeutic Endoscopy, King’s College Hospital NHS Foundation Trust, London, UK
- Amyn Haji
- King’s Institute of Therapeutic Endoscopy, King’s College Hospital NHS Foundation Trust, London, UK
- Bu’Hussain Hayee
- King’s Institute of Therapeutic Endoscopy, King’s College Hospital NHS Foundation Trust, London, UK
- Helmut Neumann
- Department of Interdisciplinary Endoscopy, University Hospital Mainz, 55131 Mainz, Germany

22
A High-Accuracy Indoor-Positioning Method with Automated RGB-D Image Database Construction. Remote Sensing 2019. DOI: 10.3390/rs11212572.
Abstract
High-accuracy indoor positioning is a prerequisite to satisfying the increasing demands of position-based services in complex indoor scenes. Current indoor visual-positioning methods mainly include image-retrieval-based, visual-landmark-based, and learning-based methods. To better overcome the limitations of traditional methods, such as being labor-intensive, inaccurate, and time-consuming, this paper proposes a novel indoor-positioning method with automated red, green, blue and depth (RGB-D) image database construction. First, strategies for automated database construction are developed to reduce the workload of manually selecting database images and to meet the requirements of high-accuracy indoor positioning. The database is constructed automatically according to these rules, which is more objective and improves the efficiency of the image-retrieval process. Second, by combining the automated database construction module, a convolutional neural network (CNN)-based image-retrieval module, and a strict geometric-relations-based pose estimation module, we obtain a high-accuracy indoor-positioning system. Furthermore, to verify the proposed method, we conducted extensive experiments on a public indoor-environment dataset. The detailed experimental results demonstrate the effectiveness and efficiency of our indoor-positioning method.
23
Son D, Gilbert H, Sitti M. Magnetically Actuated Soft Capsule Endoscope for Fine-Needle Biopsy. Soft Robot 2019; 7:10-21. PMID: 31418640. DOI: 10.1089/soro.2018.0171.
Abstract
Wireless capsule endoscopes have revolutionized diagnostic procedures in the gastrointestinal (GI) tract by minimizing discomfort and trauma. Biopsy procedures, which are often necessary for a confirmed diagnosis of an illness, have recently been incorporated into robotic capsule endoscopes to extend their diagnostic functionality beyond imaging alone. However, capsule robots to date have only been able to acquire biopsy samples of superficial tissues of the GI tract, which can generate false-negative diagnostic results if the diseased tissue lies under the surface. To improve their diagnostic accuracy for submucosal tumors and diseases, we propose a magnetically actuated soft robotic capsule that takes biopsy samples from deep tissue in the stomach using the fine-needle biopsy technique. We present the design, control, and human-machine interfacing methods for the fine-needle biopsy capsule robot. Ex vivo experiments in a porcine stomach show an 85% yield for the biopsy of phantom tumors located underneath the first layers of the stomach wall.
Affiliation(s)
- Donghoon Son
- Physical Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
- Hunter Gilbert
- Physical Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
- Department of Mechanical and Industrial Engineering, Louisiana State University, Baton Rouge, Louisiana
- Metin Sitti
- Physical Intelligence Department, Max Planck Institute for Intelligent Systems, Stuttgart, Germany
- School of Medicine and School of Engineering, Koc University, Istanbul, Turkey

24
Wang M, Shi Q, Song S, Hu C, Meng MQH. A Novel Relative Position Estimation Method for Capsule Robot Moving in Gastrointestinal Tract. Sensors 2019; 19:2746. PMID: 31248092. PMCID: PMC6630487. DOI: 10.3390/s19122746.
Abstract
Recently, a variety of positioning and tracking methods have been proposed for capsule robots moving in the gastrointestinal (GI) tract to provide real-time, unobstructed spatial pose results. However, current absolute-position-based results cannot be matched to the GI structure because of its unstructured environment. To overcome this disadvantage and provide a position description that matches the GI tract, we present a relative position estimation method for tracking the capsule robot, which uses the moving distance of the robot along the GI tract to indicate its position. The procedure of the proposed method is as follows: first, the absolute position of the capsule robot is obtained with a magnetic tracking method; then, the moving status of the robot along the GI tract is determined according to its moving direction; and finally, the movement trajectory of the capsule robot is fitted with a Bézier curve, from which the moving distance is evaluated by integration. Compared with state-of-the-art capsule tracking methods, the proposed method can directly help guide medical instruments by providing physicians with the insertion distance inside the patient's body, which cannot be derived from absolute position results alone. Moreover, since relative distance information is used, no reference tracking objects need to be mounted on the human body. The experimental results show that the proposed method achieves a good distance estimate for a capsule robot moving in the simulation platform.
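The final step above, evaluating the moving distance as the length of a fitted Bézier curve, can be sketched with de Casteljau evaluation and numerical arc-length integration (an illustrative sketch, not the paper's exact integral method):

```python
import numpy as np

def bezier_point(ctrl, t):
    """Evaluate a Bezier curve at parameter t by de Casteljau's algorithm."""
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1.0 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def bezier_arc_length(ctrl, samples=1000):
    """Approximate the curve's arc length by summing chord lengths
    of a finely sampled polyline."""
    ts = np.linspace(0.0, 1.0, samples + 1)
    pts = np.array([bezier_point(ctrl, t) for t in ts])
    return float(np.linalg.norm(np.diff(pts, axis=0), axis=1).sum())
```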
Affiliation(s)
- Min Wang
- Shenzhen Engineering Lab for Medical Intelligent Wireless Ultrasonic Imaging Technology, Harbin Institute of Technology, Shenzhen 518055, China
- Qinyuan Shi
- Shenzhen Engineering Lab for Medical Intelligent Wireless Ultrasonic Imaging Technology, Harbin Institute of Technology, Shenzhen 518055, China
- Shuang Song
- Shenzhen Engineering Lab for Medical Intelligent Wireless Ultrasonic Imaging Technology, Harbin Institute of Technology, Shenzhen 518055, China
- Chao Hu
- Ningbo Institute of Technology, Zhejiang University, Ningbo 315000, China
- Max Q-H Meng
- Robotics, Perception and Artificial Intelligence Lab, The Chinese University of Hong Kong, N.T., Hong Kong 999077, China

25
Park J, Hwang Y, Yoon JH, Park MG, Kim J, Lim YJ, Chun HJ. Recent Development of Computer Vision Technology to Improve Capsule Endoscopy. Clin Endosc 2019; 52:328-333. PMID: 30786704. PMCID: PMC6680009. DOI: 10.5946/ce.2018.172.
Abstract
Capsule endoscopy (CE) is a preferred diagnostic method for analyzing small bowel diseases. However, capsule endoscopes capture only a sparse set of images because of their mechanical limitations. Post-procedural management using computational methods can enhance image quality. Additional information, including depth, can be obtained using recently developed computer vision techniques. It is possible to measure the size of lesions and track the trajectory of capsule endoscopes using computer vision technology, without requiring additional equipment. Moreover, computational analysis of CE images can help detect lesions more accurately within a shorter time. Newly introduced deep learning-based methods have shown more remarkable results than traditional computerized approaches. A large-scale standard dataset should be prepared to develop optimal algorithms for improving the diagnostic yield of CE. Close collaboration between information technology and medical professionals is needed.
Affiliation(s)
- Junseok Park
- Digestive Disease Center, Institute for Digestive Research, Department of Internal Medicine, Soonchunhyang University College of Medicine, Seoul, Korea
- Youngbae Hwang
- Intelligent Image Processing Research Center, Korea Electronics Technology Institute (KETI), Seongnam, Korea
- Ju-Hong Yoon
- Intelligent Image Processing Research Center, Korea Electronics Technology Institute (KETI), Seongnam, Korea
- Min-Gyu Park
- Intelligent Image Processing Research Center, Korea Electronics Technology Institute (KETI), Seongnam, Korea
- Jungho Kim
- Intelligent Image Processing Research Center, Korea Electronics Technology Institute (KETI), Seongnam, Korea
- Yun Jeong Lim
- Department of Internal Medicine, Dongguk University Ilsan Hospital, Dongguk University College of Medicine, Goyang, Korea
- Hoon Jai Chun
- Division of Gastroenterology and Hepatology, Department of Internal Medicine, Institute of Gastrointestinal Medical Instrument Research, Korea University College of Medicine, Seoul, Korea

26
Mahmoud N, Collins T, Hostettler A, Soler L, Doignon C, Montiel JMM. Live Tracking and Dense Reconstruction for Handheld Monocular Endoscopy. IEEE Trans Med Imaging 2019; 38:79-89. PMID: 30010552. DOI: 10.1109/tmi.2018.2856109.
Abstract
Contemporary endoscopic simultaneous localization and mapping (SLAM) methods accurately compute endoscope poses; however, they only provide a sparse 3-D reconstruction that poorly describes the surgical scene. We propose a novel dense SLAM method whose qualities are: 1) monocular, requiring only RGB images of a handheld monocular endoscope; 2) fast, providing endoscope positional tracking and 3-D scene reconstruction running in parallel threads; 3) dense, yielding an accurate dense reconstruction; 4) robust to the severe illumination changes, poor texture, and small deformations that are typical in endoscopy; and 5) self-contained, needing no fiducials or external tracking devices, so it can be smoothly integrated into the surgical workflow. It works as follows. The system segments clusters of video frames according to parallax criteria, and accurate cluster frame poses are estimated using the sparse SLAM feature matches. Next, dense matches between cluster frames are computed in parallel by a variational approach that combines zero-mean normalized cross-correlation and a gradient Huber-norm regularizer. This combination copes with challenging lighting and textures at an affordable time budget on a modern GPU. It can outperform pure stereo reconstructions because the frame cluster can provide larger parallax from the endoscope's motion. We provide extensive experimental validation on real sequences of the porcine abdominal cavity, both in vivo and ex vivo, and a qualitative evaluation on human liver. In addition, we show a comparison with other dense SLAM methods, demonstrating the performance gain in terms of accuracy, density, and computation time.
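The zero-mean normalized cross-correlation (ZNCC) score at the core of the dense matching step can be sketched as follows; by construction it is invariant to affine brightness changes between patches (the Huber-regularized variational solver it is combined with is not shown):

```python
import numpy as np

def zncc(patch_a, patch_b, eps=1e-12):
    """Zero-mean normalized cross-correlation between two equally
    sized image patches; the score lies in [-1, 1]."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps
    return float((a * b).sum() / denom)
```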
27
Tabak AF. Hydrodynamic Impedance Correction for Reduced-Order Modeling of Spermatozoa-Like Soft Micro-Robots. Adv Theory Simul 2018. DOI: 10.1002/adts.201800130.
Affiliation(s)
- Ahmet Fatih Tabak
- Mechatronics Engineering Department, Faculty of Engineering, Okan University, Akfirat-Tuzla/Istanbul 34959, Turkey
28
Song J, Wang J, Zhao L, Huang S, Dissanayake G. MIS-SLAM: Real-Time Large-Scale Dense Deformable SLAM System in Minimal Invasive Surgery Based on Heterogeneous Computing. IEEE Robot Autom Lett 2018. DOI: 10.1109/lra.2018.2856519.
29
Turan M, Almalioglu Y, Araujo H, Konukoglu E, Sitti M. Deep EndoVO: A recurrent convolutional neural network (RCNN) based visual odometry approach for endoscopic capsule robots. Neurocomputing 2018. DOI: 10.1016/j.neucom.2017.10.014.
30
A deep learning based fusion of RGB camera information and magnetic localization information for endoscopic capsule robots. Int J Intell Robot Appl 2017; 1:442-450. PMID: 29250590. PMCID: PMC5727155. DOI: 10.1007/s41315-017-0039-1.
Abstract
A reliable, real-time localization capability is crucial for actively controlled capsule endoscopy robots, an emerging, minimally invasive diagnostic and therapeutic technology for the gastrointestinal (GI) tract. In this study, we extend the success of deep learning approaches from various research fields to the problem of sensor fusion for endoscopic capsule robots. We propose a multi-sensor fusion-based localization approach that combines endoscopic camera information and magnetic sensor-based localization information. Results on a real pig-stomach dataset show that our method achieves sub-millimeter precision for both translational and rotational movements.
31
Chen L, Tang W, John NW. Real-time geometry-aware augmented reality in minimally invasive surgery. Healthc Technol Lett 2017; 4:163-167. PMID: 29184658. PMCID: PMC5683199. DOI: 10.1049/htl.2017.0068.
Abstract
The potential of augmented reality (AR) technology to assist minimally invasive surgery (MIS) lies in its computational performance and accuracy in dealing with challenging MIS scenes. Even with the latest hardware and software technologies, achieving both real-time and accurate augmented information overlay in MIS is still a formidable task. In this Letter, the authors present a novel real-time AR framework for MIS that achieves interactive, geometry-aware AR in endoscopic surgery with stereo views. The authors' framework tracks the movement of the endoscopic camera and simultaneously reconstructs a dense geometric mesh of the MIS scene. The movement of the camera is predicted by minimising the re-projection error to achieve fast tracking performance, while the three-dimensional mesh is incrementally built by a dense zero-mean normalised cross-correlation stereo-matching method to improve the accuracy of the surface reconstruction. The proposed system does not require any prior template or pre-operative scan and can infer the geometric information intra-operatively in real time. With the geometric information available, the proposed AR framework is able to interactively add annotations, localisation of tumours and vessels, and measurement labelling with greater precision and accuracy compared with state-of-the-art approaches.
Affiliation(s)
- Long Chen
- Department of Creative Technology, Bournemouth University, Poole, UK
- Wen Tang
- Department of Creative Technology, Bournemouth University, Poole, UK
- Nigel W. John
- Department of Computer Science, University of Chester, Chester, UK