1.
Chen Z, Li H, Yu H, Zhao Y, Ma J, Zhang C, Zhang H. Designing of Airspeed Measurement Method for UAVs Based on MEMS Pressure Sensors. Sensors (Basel) 2024; 24:5853. [PMID: 39275762] [DOI: 10.3390/s24175853] [Received: 08/06/2024; Revised: 08/29/2024; Accepted: 08/29/2024]
Abstract
Airspeed measurement is crucial for UAV control. To achieve accurate airspeed measurement for UAVs, this paper computes airspeed from measured changes in air pressure and temperature. On this basis, a data processing method combining mechanical filtering with an improved AR-SHAKF algorithm is proposed to measure airspeed indirectly with high precision. First, a mathematical model of the airspeed measurement system was established, and an installation scheme for the pressure sensors was designed to measure total pressure, static pressure, and temperature. Second, the measurement principle of the sensor was analyzed, and a metal tube was installed to act as a mechanical filter, particularly where the airframe strongly disturbs the surrounding flow field. Furthermore, a time-series model was used to establish the sensor state equation and initial noise values, and the Sage-Husa adaptive filter was enhanced to account for the unavoidable errors in those initial noise values; by constraining the range of the measurement noise, adaptive noise estimation was achieved. To validate the proposed method, a low-complexity airspeed measurement device based on MEMS pressure sensors was built. The results demonstrate that the device and the designed velocity measurement method calculate airspeed effectively, with high measurement accuracy and strong interference resistance.
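The indirect measurement described above rests on the standard pitot relation: dynamic pressure is total minus static pressure, and air density follows from the ideal gas law. A minimal sketch of that conversion (textbook formulas for incompressible flow, not the paper's code; sensor values in the comments are illustrative):

```python
import math

R_AIR = 287.05  # specific gas constant of dry air, J/(kg*K)

def airspeed(p_total, p_static, temp_k):
    """Airspeed (m/s) from total/static pressure (Pa) and temperature (K)."""
    rho = p_static / (R_AIR * temp_k)    # air density via the ideal gas law
    q = max(p_total - p_static, 0.0)     # dynamic pressure, clamped at zero
    return math.sqrt(2.0 * q / rho)      # Bernoulli: q = 0.5 * rho * v^2

# e.g. a 612.5 Pa differential at sea-level standard conditions
# corresponds to roughly 31.6 m/s
```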
Affiliation(s)
- Zhipeng Chen, Haojie Li, Hang Yu, Yuan Zhao, Jing Ma, Chuanhao Zhang, He Zhang: School of Mechanical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
2.
Wen M, Shcherbakov P, Xu Y, Li J, Hu Y, Zhou Q, Liang H, Yuan L, Zhang X. A temporal enhanced semi-supervised training framework for needle segmentation in 3D ultrasound images. Phys Med Biol 2024; 69:115023. [PMID: 38684166] [DOI: 10.1088/1361-6560/ad450b] [Received: 11/03/2023; Accepted: 04/29/2024]
Abstract
Objective. Automated biopsy needle segmentation in 3D ultrasound images can be used for biopsy navigation, but it is quite challenging due to low ultrasound image resolution and interference with an appearance similar to the needle's. Deep learning networks such as convolutional neural networks and transformers have been investigated for 3D medical image segmentation. However, these methods require large amounts of labeled training data, have difficulty meeting real-time segmentation requirements, and consume considerable memory. Approach. In this paper, we propose a temporal-information-based semi-supervised training framework for fast and accurate needle segmentation. First, a novel circle transformer module based on static and dynamic features is placed after the encoders to extract and fuse temporal information. Then, consistency constraints between the outputs before and after combining temporal information provide semi-supervision for unlabeled volumes. Finally, the model is trained with a loss function combining cross-entropy and Dice similarity coefficient (DSC) based segmentation losses with a mean-square-error consistency loss. The trained model, taking a single ultrasound volume as input, is applied to needle segmentation. Main results. Experimental results on three needle ultrasound datasets acquired during beagle biopsies show that our approach is superior to the most competitive mainstream temporal segmentation model and semi-supervised method, providing higher DSC (77.1% versus 76.5%) and smaller needle tip position (1.28 mm versus 1.87 mm) and length (1.78 mm versus 2.19 mm) errors on the kidney dataset, as well as better DSC (78.5% versus 76.9%) and needle tip position (0.86 mm versus 1.12 mm) and length (1.01 mm versus 1.26 mm) errors on the prostate dataset. Significance. The proposed method can significantly enhance needle segmentation accuracy by training with sequential images at no additional cost, which may further improve the effectiveness of biopsy navigation systems.
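The DSC figures quoted above are instances of the standard Dice overlap between a predicted and a ground-truth binary mask; a minimal sketch of that generic formula (not the paper's implementation):

```python
def dice(pred, target):
    """Dice similarity coefficient between two binary masks (iterables of 0/1)."""
    pred, target = list(pred), list(target)
    inter = sum(p * t for p, t in zip(pred, target))  # |A intersect B|
    denom = sum(pred) + sum(target)                   # |A| + |B|
    return 2.0 * inter / denom if denom else 1.0      # both empty -> perfect match
```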
Affiliation(s)
- Mingwei Wen, Department of Biomedical Engineering, College of Life Science and Technology, Huazhong University of Science and Technology, No 1037, Luoyu Road, Wuhan 430074, People's Republic of China
- Pavel Shcherbakov, Institute for Control Science, Russian Academy of Sciences, 65, Profsoyuznaya str., Moscow 117997, Russia
- Yang Xu, Department of Biomedical Engineering, College of Life Science and Technology, Huazhong University of Science and Technology, Wuhan 430074; Hubei Medical Devices Quality Supervision and Test Institute, Wuhan 430075, People's Republic of China
- Jing Li, Hubei Medical Devices Quality Supervision and Test Institute, Wuhan 430075, People's Republic of China
- Yi Hu, Hubei Medical Devices Quality Supervision and Test Institute, Wuhan 430075, People's Republic of China
- Quan Zhou, Department of Biomedical Engineering, College of Life Science and Technology, Huazhong University of Science and Technology, No 1037, Luoyu Road, Wuhan 430074, People's Republic of China
- Huageng Liang, Department of Urology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, No 13, Hangkong Road, Wuhan 430022, People's Republic of China
- Li Yuan, Department of Ultrasound Imaging, Wuhan Children's Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, People's Republic of China
- Xuming Zhang, Department of Biomedical Engineering, College of Life Science and Technology, Huazhong University of Science and Technology, No 1037, Luoyu Road, Wuhan 430074, People's Republic of China
3.
Che H, Qin J, Chen Y, Ji Z, Yan Y, Yang J, Wang Q, Liang C, Wu J. Improving Needle Tip Tracking and Detection in Ultrasound-Based Navigation System Using Deep Learning-Enabled Approach. IEEE J Biomed Health Inform 2024; 28:2930-2942. [PMID: 38215329] [DOI: 10.1109/jbhi.2024.3353343]
Abstract
Ultrasound-guided percutaneous interventions have numerous advantages over traditional techniques. Accurate needle placement in the target anatomy is crucial for successful intervention, and reliable visual information is essential to achieve this. However, previous studies have revealed several challenges, such as the variability in needle echogenicity and the common misalignment of the ultrasound beam and the needle. Advanced techniques have been developed to optimize needle visualization, including hardware-based and image-processing-based methods. This paper proposes a novel strategy of integrating ultrasound-based deep learning approaches into an optical navigation system to enhance needle visualization and improve tip positioning accuracy. Both the tracking and detection algorithms are optimized utilizing optical tracking information. The information is introduced into the tracking network to define the search patch update strategy and form a trajectory reference to correct tracking results. In the detection network, the original image is processed according to the needle insertion position and current position given by the optical localization system to locate a coarse region, and the depth-score criterion is adopted to optimize detection results. Extensive experiments demonstrate that our approach achieves promising tip tracking and detection performance with tip localization errors of 1.11 ± 0.59 mm and 1.17 ± 0.70 mm, respectively. Moreover, we establish a paired dataset consisting of ultrasound images and their corresponding spatial tip coordinates acquired from the optical tracking system and conduct real puncture experiments to verify the effectiveness of the proposed methods. Our approach significantly improves needle visualization and provides physicians with visual guidance for posture adjustment.
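The trajectory-reference correction described above can be illustrated by a toy rule that pulls a visual tracking estimate back toward the position predicted from the optical tracker when it strays too far; the `radius` threshold and the projection rule here are illustrative assumptions, not the paper's actual correction criterion:

```python
def correct_with_trajectory(track_xy, ref_xy, radius):
    """Clamp a visual tracking result to within `radius` of an
    optical-trajectory reference position (illustrative rule only)."""
    dx, dy = track_xy[0] - ref_xy[0], track_xy[1] - ref_xy[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= radius:
        return track_xy               # visual result agrees with the reference
    s = radius / dist                 # project onto the circle of trust
    return (ref_xy[0] + dx * s, ref_xy[1] + dy * s)
```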
4.
Yan W, Ding Q, Chen J, Yan K, Tang RSY, Cheng SS. Learning-based needle tip tracking in 2D ultrasound by fusing visual tracking and motion prediction. Med Image Anal 2023; 88:102847. [PMID: 37307759] [DOI: 10.1016/j.media.2023.102847] [Received: 04/10/2022; Revised: 01/29/2023; Accepted: 05/17/2023]
Abstract
Visual trackers are the most commonly adopted approach for needle tip tracking in ultrasound (US)-based procedures. However, they often perform unsatisfactorily in biological tissues due to the significant background noise and anatomical occlusion. This paper presents a learning-based needle tip tracking system, which consists of not only a visual tracking module, but also a motion prediction module. In the visual tracking module, two sets of masks are designed to improve the tracker's discriminability, and a template update submodule is used to keep up to date with the needle tip's current appearance. In the motion prediction module, a Transformer network-based prediction architecture estimates the target's current position according to its historical position data to tackle the problem of target's temporary disappearance. A data fusion module then integrates the results from the visual tracking and motion prediction modules to provide robust and accurate tracking results. Our proposed tracking system showed distinct improvement against other state-of-the-art trackers during the motorized needle insertion experiments in both gelatin phantom and biological tissue environments (e.g. 78% against <60% in terms of the tracking success rate in the most challenging scenario of "In-plane-static" during the tissue experiments). Its robustness was also verified in manual needle insertion experiments under varying needle velocities and directions, and occasional temporary needle tip disappearance, with its tracking success rate being >18% higher than the second best performing tracking system. The proposed tracking system, with its computational efficiency, tracking robustness, and tracking accuracy, will lead to safer targeting during existing clinical practice of US-guided needle operations and potentially be integrated in a tissue biopsy robotic system.
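The fusion idea above, falling back on motion prediction when the visual measurement is missing or unreliable, can be sketched in 1D with a toy constant-velocity predictor (the paper uses a Transformer-based predictor and a dedicated fusion module; this confidence-weighted blend is an illustrative stand-in):

```python
class TipFusion:
    """Toy 1D fusion of a visual tracker output and a constant-velocity
    motion prediction (illustrative sketch, not the paper's architecture)."""

    def __init__(self, x0):
        self.x = x0    # last fused position
        self.v = 0.0   # estimated velocity per frame

    def update(self, meas, conf):
        """meas: visual estimate or None (tip disappeared); conf in [0, 1]."""
        pred = self.x + self.v
        # blend measurement and prediction by tracker confidence
        new_x = conf * meas + (1 - conf) * pred if meas is not None else pred
        self.v = new_x - self.x
        self.x = new_x
        return self.x
```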
Affiliation(s)
- Wanquan Yan, Department of Mechanical and Automation Engineering and T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong
- Qingpeng Ding, Department of Mechanical and Automation Engineering and T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong
- Jianghua Chen, Department of Mechanical and Automation Engineering and T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong
- Kim Yan, Department of Mechanical and Automation Engineering and T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong
- Raymond Shing-Yan Tang, Department of Medicine and Therapeutics and Institute of Digestive Disease, The Chinese University of Hong Kong, Hong Kong
- Shing Shin Cheng, Department of Mechanical and Automation Engineering and T Stone Robotics Institute; Institute of Medical Intelligence and XR, Multi-scale Medical Robotics Center, and Shun Hing Institute of Advanced Engineering, The Chinese University of Hong Kong, Hong Kong
5.
Slope Estimation Method of Electric Vehicles Based on Improved Sage–Husa Adaptive Kalman Filter. Energies 2022. [DOI: 10.3390/en15114126]
Abstract
To deal with the many factors that influence electric vehicles driving under complex conditions, this paper establishes the system state equation based on the longitudinal dynamics of the vehicle. Combined with an improved Sage–Husa adaptive Kalman filter algorithm, a road slope estimation model is established. Given the driving speed and a rough slope observation as inputs, the model yields an accurate estimate of the current road slope. The method is compared with the original Sage–Husa adaptive Kalman filter through three groups of road tests over different slope ranges, verifying the accuracy and stability advantages of the proposed algorithm on roads with large slopes.
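The Sage–Husa filter used here (and in several other entries above) augments a standard Kalman filter with an online, exponentially faded estimate of the measurement-noise statistics. A scalar sketch under a random-walk state model (the parameters and the simplified model are illustrative assumptions, not the paper's vehicle dynamics):

```python
def sage_husa_kf(zs, q=1e-4, r0=1.0, b=0.98):
    """Scalar Sage-Husa adaptive Kalman filter: estimates the state and the
    measurement-noise variance R online, with forgetting factor b.
    Illustrative sketch; a real application needs a proper process model."""
    x, p, r = zs[0], 1.0, r0
    out = []
    for k, z in enumerate(zs):
        d = (1 - b) / (1 - b ** (k + 1))   # fading weight for the R update
        x_pred, p_pred = x, p + q          # random-walk prediction step
        e = z - x_pred                     # innovation
        # adaptive measurement-noise estimate, kept positive
        r = max((1 - d) * r + d * (e * e - p_pred), 1e-8)
        gain = p_pred / (p_pred + r)       # Kalman gain
        x = x_pred + gain * e
        p = (1 - gain) * p_pred
        out.append(x)
    return out
```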
6.
Radar Target Tracking for Unmanned Surface Vehicle Based on Square Root Sage-Husa Adaptive Robust Kalman Filter. Sensors (Basel) 2022; 22:2924. [PMID: 35458914] [PMCID: PMC9030864] [DOI: 10.3390/s22082924] [Received: 03/01/2022; Revised: 04/03/2022; Accepted: 04/04/2022]
Abstract
Dynamic information such as the position and velocity of a target detected by marine radar is frequently corrupted by external measurement white noise generated by the oscillations of an unmanned surface vehicle (USV) and the target. Although the Sage–Husa adaptive Kalman filter (SHAKF) has been applied to target tracking, its precision and stability remain to be improved. In this paper, a square root Sage–Husa adaptive robust Kalman filter (SR-SHARKF) algorithm combined with a constant-jerk model is proposed, which not only mitigates the filtering divergence triggered by numerical rounding errors and inaccurate system and noise statistical models, but also improves filtering accuracy. First, a novel square root decomposition method is introduced in the SR-SHARKF algorithm to decompose the covariance matrix of the SHAKF and ensure its non-negative definiteness. Then, a three-segment approach balances the observed and predicted states by evaluating an adaptive scale factor. Finally, the unbiased and biased noise estimators are integrated, while the interval of the measurement noise is constrained, to jointly estimate the measurement noise for better adaptability and reliability. Simulation and experimental results demonstrate the effectiveness of the proposed algorithm in eliminating white noise caused by USV and target oscillations.
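The square-root idea above propagates a Cholesky factor of the covariance rather than the covariance itself, so rounding errors cannot destroy its non-negative definiteness. A minimal 2x2 factorization sketch (generic linear algebra, not the paper's specific decomposition):

```python
def cholesky2(m):
    """Cholesky factor L (lower triangular) of a 2x2 SPD matrix m = L @ L^T.
    Square-root Kalman filters update L instead of the full covariance,
    which stays valid under floating-point rounding."""
    a, b, c = m[0][0], m[1][0], m[1][1]
    l11 = a ** 0.5
    l21 = b / l11
    l22 = (c - l21 * l21) ** 0.5   # Schur complement of the (0,0) entry
    return [[l11, 0.0], [l21, l22]]
```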
7.
Yan Y, Tang L, Huang H, Yu Q, Xu H, Chen Y, Chen M, Zhang Q. Four-quadrant fast compressive tracking of breast ultrasound videos for computer-aided response evaluation of neoadjuvant chemotherapy in mice. Comput Methods Programs Biomed 2022; 217:106698. [PMID: 35217304] [DOI: 10.1016/j.cmpb.2022.106698] [Received: 05/07/2021; Revised: 01/26/2022; Accepted: 02/08/2022]
Abstract
BACKGROUND AND OBJECTIVE: Neoadjuvant chemotherapy (NAC) is a valuable treatment approach for locally advanced breast cancer. Contrast-enhanced ultrasound (CEUS) potentially enables the assessment of therapeutic response to NAC. To evaluate the response accurately, quantitatively and objectively, a method that can effectively compensate for the motion of breast lesions in CEUS videos is urgently needed.
METHODS: We proposed the four-quadrant fast compressive tracking (FQFCT) approach to automatically perform CEUS video tracking and motion compensation for mice undergoing NAC. The FQFCT divides a tracking window into four smaller windows at the four quadrants of a breast lesion and formulates the tracking at each quadrant as a binary classification task. After FQFCT of the breast cancer videos, quantitative CEUS features including the mean transit time (MTT) were computed. All mice showed a pathological response to NAC, and the features were statistically compared between pre-treatment (day 1) and post-treatment (days 3 and 5) in these responders.
RESULTS: When tracking the CEUS videos of mice with the FQFCT, the average tracking error was 0.65 mm, a 46.72% reduction compared with the classic fast compressive tracking method (1.22 mm). After compensation with the FQFCT, the MTT on day 5 of NAC differed significantly from the MTT before NAC (day 1) (p = 0.013).
CONCLUSIONS: The FQFCT improves the accuracy of CEUS video tracking and contributes to the computer-aided response evaluation of NAC for breast cancer in mice.
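The MTT feature above is commonly computed as the intensity-weighted mean of time over the contrast time-intensity curve; a minimal sketch under that common definition (the paper's exact perfusion estimator may differ):

```python
def mean_transit_time(times, intensities):
    """Mean transit time of a contrast time-intensity curve:
    the intensity-weighted average of time, a common CEUS perfusion feature."""
    total = sum(intensities)
    # each time point contributes in proportion to its contrast intensity
    return sum(t * i for t, i in zip(times, intensities)) / total
```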
Affiliation(s)
- Yifei Yan, The SMART (Smart Medicine and AI-Based Radiology Technology) Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University; School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Lei Tang, Department of Ultrasound, Tongren Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai 200050, China
- Haibo Huang, The SMART Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University; School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Qihui Yu, The SMART Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University; School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Haohao Xu, The SMART Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University; School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Ying Chen, The SMART Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University; School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China
- Man Chen, Department of Ultrasound, Tongren Hospital, School of Medicine, Shanghai Jiaotong University, Shanghai 200050, China
- Qi Zhang, The SMART Lab, Shanghai Institute for Advanced Communication and Data Science, Shanghai University; School of Communication and Information Engineering, Shanghai University, Shanghai 200444, China