1
Shi Y, Zhu P, Wang T, Mai H, Yeh X, Yang L, Wang J. Dynamic Virtual Fixture Generation Based on Intra-Operative 3D Image Feedback in Robot-Assisted Minimally Invasive Thoracic Surgery. Sensors (Basel) 2024;24:492. PMID: 38257585; PMCID: PMC10820968; DOI: 10.3390/s24020492. [Received: 11/12/2023; Revised: 01/09/2024; Accepted: 01/10/2024; Indexed: 01/24/2024]
Abstract
This paper proposes a method for generating dynamic virtual fixtures with real-time 3D image feedback to facilitate human-robot collaboration in medical robotics. Seamless shared control in a dynamic environment, like that of a surgical field, remains challenging despite extensive research on collaborative control and planning. To address this problem, our method dynamically creates virtual fixtures to guide the manipulation of a trocar-placing robot arm using the force field generated by point cloud data from an RGB-D camera. Additionally, the "view scope" concept selectively determines the region for computational points, thereby reducing computational load. In a phantom experiment for robot-assisted port incision in minimally invasive thoracic surgery, our method demonstrates substantially improved accuracy for port placement, reducing error and completion time by 50% (p = 1.06 × 10⁻²) and 35% (p = 3.23 × 10⁻²), respectively. These results suggest that our proposed approach is promising in improving surgical human-robot collaboration.
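The abstract describes virtual fixtures derived from a point-cloud force field, with a "view scope" that restricts computation to a local region. The paper's actual formulation is not reproduced here; the following is a minimal sketch of the general idea using a classic potential-field repulsion term, with the function name and parameters (`influence_radius`, `gain`) chosen for illustration only.

```python
import numpy as np

def repulsive_force(tool_pos, cloud, influence_radius=0.05, gain=1.0):
    """Sum repulsive forces exerted on the tool tip by point-cloud obstacles.

    tool_pos: (3,) tool-tip position; cloud: (N, 3) point cloud (meters).
    Points farther than influence_radius contribute nothing, which mirrors
    the idea of restricting computation to a local "view scope".
    """
    diff = tool_pos - cloud                      # (N, 3) vectors from points to tool
    dist = np.linalg.norm(diff, axis=1)          # (N,) distances
    mask = (dist < influence_radius) & (dist > 1e-9)
    if not mask.any():
        return np.zeros(3)
    d = dist[mask][:, None]
    # Classic potential-field repulsion: magnitude grows sharply as d -> 0
    # and vanishes smoothly at the edge of the influence region.
    mag = gain * (1.0 / d - 1.0 / influence_radius) / d**2
    return (mag * diff[mask] / d).sum(axis=0)
```

A guidance controller would add this force to the operator's commanded motion so the arm is pushed away from anatomy captured by the RGB-D camera; only points inside the influence radius are touched, keeping the per-cycle cost proportional to the local neighborhood rather than the full cloud.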
Affiliation(s)
- Yunze Shi
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310058, China
- Peizhang Zhu
- Flexiv Ltd., Santa Clara, CA 95054, USA
- Tengyue Wang
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310058, China
- Haonan Mai
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- Xiyang Yeh
- Flexiv Ltd., Santa Clara, CA 95054, USA
- Liangjing Yang
- ZJU-UIUC Institute, International Campus, Zhejiang University, Haining 314400, China
- School of Mechanical Engineering, Zhejiang University, Hangzhou 310058, China
- Department of Mechanical Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801, USA
- Jingfan Wang
- Flexiv Ltd., Santa Clara, CA 95054, USA
2
Han Z, Tian H, Han X, Wu J, Zhang W, Li C, Qiu L, Duan X, Tian W. A Respiratory Motion Prediction Method Based on LSTM-AE with Attention Mechanism for Spine Surgery. Cyborg Bionic Syst 2024;5:0063. PMID: 38188983; PMCID: PMC10769044; DOI: 10.34133/cbsystems.0063. [Received: 05/08/2023; Accepted: 09/21/2023; Indexed: 01/09/2024]
Abstract
Respiratory motion-induced vertebral movement can adversely affect intraoperative spine surgery, producing inaccurate positional information for the target region and risking unexpected damage during the operation. In this paper, we propose a novel deep learning architecture for respiratory motion prediction that can adapt to different patients. The method uses an LSTM autoencoder (LSTM-AE) network with an attention mechanism that can be trained intraoperatively on few-shot datasets. To ensure real-time performance, a dimension-reduction method based on the respiration-induced physical movement of the spinal vertebral bodies is introduced. Data were collected from prone-positioned patients under general anaesthesia to validate the prediction accuracy and time efficiency of the LSTM-AE-based motion prediction method. The experimental results demonstrate that the presented method (RMSE: 4.39%) outperforms other methods in accuracy within a learning time of 2 min. The maximum predictive errors under a latency of 333 ms along the x, y, and z axes of the optical camera system were 0.13, 0.07, and 0.10 mm, respectively, within a motion range of 2 mm.
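The abstract's real-time trick is dimension reduction based on the physical motion of the vertebral bodies, so the network only has to model a low-dimensional time series. The paper's LSTM-AE itself is not reproduced here; the sketch below illustrates one plausible form of that reduction step, projecting 3D marker trajectories onto their dominant motion axis via SVD (the use of a principal-axis projection is an assumption, and the function names are illustrative).

```python
import numpy as np

def reduce_to_principal_axis(traj):
    """Project 3D marker positions onto their dominant motion axis.

    traj: (T, 3) positions over time. Returns (scalar_series, axis, mean).
    Respiration moves a vertebra roughly along one direction, so a single
    principal component retains most of the signal and the downstream
    predictor only has to model a 1D time series.
    """
    mean = traj.mean(axis=0)
    centered = traj - mean
    # SVD of the centered trajectory: the first right-singular vector
    # is the direction of maximum variance (the main breathing axis).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]
    series = centered @ axis          # 1D projection along the breathing axis
    return series, axis, mean

def reconstruct(series, axis, mean):
    """Map predicted 1D values back to 3D positions for the robot."""
    return mean + np.outer(series, axis)
```

In such a pipeline the predictor forecasts `series` a few hundred milliseconds ahead (matching the 333 ms latency studied in the paper) and `reconstruct` maps the forecast back into camera coordinates.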
Affiliation(s)
- Zhe Han
- School of Medical Technology, Beijing Institute of Technology, Beijing, China
- Huanyu Tian
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China
- Weijun Zhang
- School of Medical Technology, Beijing Institute of Technology, Beijing, China
- Changsheng Li
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China
- Liang Qiu
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
- Xingguang Duan
- School of Medical Technology, Beijing Institute of Technology, Beijing, China
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China
- Wei Tian
- School of Medical Technology, Beijing Institute of Technology, Beijing, China
- Ji Shui Tan Hospital, Beijing, China