1
De Sanctis L, Carnevale A, Antonacci C, Faiella E, Schena E, Longo UG. Six-Degree-of-Freedom Freehand 3D Ultrasound: A Low-Cost Computer Vision-Based Approach for Orthopedic Applications. Diagnostics (Basel) 2024; 14:1501. [PMID: 39061637] [DOI: 10.3390/diagnostics14141501]
Abstract
In orthopedics, X-rays and computed tomography (CT) scans play pivotal roles in diagnosing and treating bone pathologies. Machine bulkiness and the emission of ionizing radiation remain the main problems associated with these techniques. The accessibility and low risks related to ultrasound handling make it a popular 2D imaging method. Indeed, 3D ultrasound assembles 2D slices into a 3D volume. This study aimed to implement a probe-tracking method for 6 DoF 3D ultrasound. The proposed method involves a dodecahedron with ArUco markers attached, enabling computer vision tracking of the ultrasound probe's position and orientation. The algorithm focuses on the data acquisition phase but covers the basic reconstruction required for data generation and analysis. In the best case, the analysis revealed an average error norm of 2.858 mm with a standard deviation norm of 5.534 mm compared to an infrared optical tracking system used as a reference. This study demonstrates the feasibility of performing volumetric imaging without ionizing radiation or bulky systems. This marker-based approach shows promise for enhancing orthopedic imaging, providing a more accessible imaging modality for helping clinicians to diagnose pathologies regarding complex joints, such as the shoulder, replacing standard infrared tracking systems known to suffer from marker occlusion problems.
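A minimal sketch of the marker-based probe tracking idea described above, assuming OpenCV's ArUco module (whose API differs between OpenCV versions) and placeholder calibration values (camera matrix, marker size, marker-to-probe transform); it is an illustration, not the authors' implementation.

```python
# Sketch: estimate a 6-DoF probe pose from one visible ArUco marker face.
# K, dist, marker_len and T_marker_to_probe are assumed calibration values.
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # camera intrinsics (assumed)
dist = np.zeros(5)                                            # lens distortion (assumed)
marker_len = 0.02                                             # marker side length in metres (assumed)
T_marker_to_probe = np.eye(4)                                 # from hand-eye calibration (assumed)

# 3D corners of a square marker in its own frame (z = 0 plane)
obj_pts = marker_len / 2 * np.array(
    [[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]], dtype=np.float32)

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def probe_pose(frame):
    """Return a 4x4 camera-to-probe transform, or None if no marker is visible."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    if ids is None:
        return None
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners[0][0], K, dist)
    if not ok:
        return None
    T_cam_to_marker = np.eye(4)
    T_cam_to_marker[:3, :3], _ = cv2.Rodrigues(rvec)
    T_cam_to_marker[:3, 3] = tvec.ravel()
    return T_cam_to_marker @ T_marker_to_probe
```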
Affiliation(s)
- Lorenzo De Sanctis
- Fondazione Policlinico Universitario Campus Bio-Medico, Via Álvaro del Portillo, 200, 00128 Rome, Italy
- Research Unit of Orthopaedic and Trauma Surgery, Department of Medicine and Surgery, Università Campus Bio-Medico di Roma, Via Álvaro del Portillo, 21, 00128 Rome, Italy
- Arianna Carnevale
- Fondazione Policlinico Universitario Campus Bio-Medico, Via Álvaro del Portillo, 200, 00128 Rome, Italy
- Carla Antonacci
- Fondazione Policlinico Universitario Campus Bio-Medico, Via Álvaro del Portillo, 200, 00128 Rome, Italy
- Laboratory of Measurement and Biomedical Instrumentation, Department of Engineering, Università Campus Bio-Medico di Roma, Via Álvaro del Portillo, 21, 00128 Rome, Italy
- Eliodoro Faiella
- Fondazione Policlinico Universitario Campus Bio-Medico, Via Álvaro del Portillo, 200, 00128 Rome, Italy
- Emiliano Schena
- Fondazione Policlinico Universitario Campus Bio-Medico, Via Álvaro del Portillo, 200, 00128 Rome, Italy
- Laboratory of Measurement and Biomedical Instrumentation, Department of Engineering, Università Campus Bio-Medico di Roma, Via Álvaro del Portillo, 21, 00128 Rome, Italy
- Umile Giuseppe Longo
- Fondazione Policlinico Universitario Campus Bio-Medico, Via Álvaro del Portillo, 200, 00128 Rome, Italy
- Research Unit of Orthopaedic and Trauma Surgery, Department of Medicine and Surgery, Università Campus Bio-Medico di Roma, Via Álvaro del Portillo, 21, 00128 Rome, Italy
2
Fan K, Cai Y, Shen E, Wang Y, Yuan J, Tao C, Liu X. Elevation Resolution Enhancement Oriented 3D Ultrasound Imaging. ULTRASONIC IMAGING 2024:1617346241259049. [PMID: 38903053] [DOI: 10.1177/01617346241259049]
Abstract
Three-dimensional (3D) ultrasound imaging can be accomplished by reconstructing a sequence of two-dimensional (2D) ultrasound images. However, 2D ultrasound images usually suffer from low resolution in the elevation direction, thereby impacting the accuracy of 3D reconstructed results. The lateral resolution of 2D ultrasound is known to significantly exceed the elevation resolution. By combining scanning sequences acquired from orthogonal directions, the effects of poor elevation resolution can be mitigated through a composite reconstructing process. Moreover, capturing ultrasound images from multiple perspectives necessitates a precise probe positioning method with a wide angle of coverage. Optical tracking is popularly used for probe positioning for its high accuracy and environment-robustness. In this paper, a novel large-angle accurate optical positioning method is used for enhancing resolution in 3D ultrasound imaging through orthogonal-view scanning and composite reconstruction. Experiments on two phantoms proved that our method could significantly improve reconstruction accuracy in the elevation direction of the probe compared with single-angle parallel scanning. The results indicate that our method holds the potential to improve current 3D ultrasound imaging techniques.
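A toy illustration of the compounding step, assuming two already-reconstructed volumes (one per orthogonal scan direction) resampled onto a common voxel grid; the simple averaging rule and the function name are illustrative, not the authors' algorithm.

```python
import numpy as np

def compound(vol_a, vol_b, filled_a, filled_b):
    """Fuse two co-registered volumes from orthogonal sweeps.

    vol_a / vol_b   : float arrays on the same voxel grid
    filled_a/_b     : boolean masks of voxels actually covered by each sweep
    Voxels seen by both sweeps are averaged; voxels seen by one sweep keep
    that sweep's value, which mitigates the poor elevational resolution of
    a single scan direction.
    """
    out = np.zeros_like(vol_a)
    both = filled_a & filled_b
    out[both] = 0.5 * (vol_a[both] + vol_b[both])
    only_a = filled_a & ~filled_b
    only_b = filled_b & ~filled_a
    out[only_a] = vol_a[only_a]
    out[only_b] = vol_b[only_b]
    return out
```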
Affiliation(s)
- Kai Fan
- School of Electronic Science and Engineering, Nanjing University, Nanjing, China
- Yunye Cai
- School of Electronic Science and Engineering, Nanjing University, Nanjing, China
- Enxiang Shen
- School of Electronic Science and Engineering, Nanjing University, Nanjing, China
- Yuxin Wang
- School of Electronic Science and Engineering, Nanjing University, Nanjing, China
- Jie Yuan
- School of Electronic Science and Engineering, Nanjing University, Nanjing, China
- Chao Tao
- School of Physics, Nanjing University, Nanjing, China
- Xiaojun Liu
- School of Physics, Nanjing University, Nanjing, China
3
Liu T, Han S, Xie L, Xing W, Liu C, Li B, Ta D. Super-resolution reconstruction of ultrasound image using a modified diffusion model. Phys Med Biol 2024; 69:125026. [PMID: 38636526] [DOI: 10.1088/1361-6560/ad4086]
Abstract
Objective. This study aims to perform super-resolution (SR) reconstruction of ultrasound images using a modified diffusion model, designated as the diffusion model for ultrasound image super-resolution (DMUISR). SR involves converting low-resolution images to high-resolution ones, and the proposed model is designed to enhance the suitability of diffusion models for this task in the context of ultrasound imaging. Approach. DMUISR incorporates a multi-layer self-attention (MLSA) mechanism and a wavelet-transform based low-resolution image (WTLR) encoder to enhance its suitability for ultrasound image SR tasks. The model takes interpolated and magnified images as input and outputs high-quality, detailed SR images. The study utilized 1,334 ultrasound images from the public fetal head-circumference dataset (HC18) for evaluation. Main results. Experiments were conducted at 2×, 4×, and 8× magnification factors. DMUISR outperformed mainstream ultrasound SR methods (Bicubic, VDSR, DECUSR, DRCN, REDNet, SRGAN) across all scales, providing high-quality images with clear structures and rich detailed textures in both hard and soft tissue regions. DMUISR successfully accomplished multiscale SR reconstruction while suppressing over-smoothing and mode collapse problems. Quantitative results showed that DMUISR achieved the best performance in terms of learned perceptual image patch similarity, with a significant decrease of over 50% at all three magnification factors (2×, 4×, and 8×), as well as improvements in peak signal-to-noise ratio and structural similarity index measure. Ablation experiments validated the effectiveness of the MLSA mechanism and WTLR encoder in improving DMUISR's SR performance. Furthermore, by reducing the number of diffusion steps, the computational time of DMUISR was shortened to nearly one-tenth of its original while maintaining image quality without significant degradation. Significance. This study demonstrates that the modified diffusion model, DMUISR, provides superior performance for SR reconstruction of ultrasound images and has potential in improving imaging quality in the medical ultrasound field.
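A minimal sketch of the wavelet-transform low-resolution (WTLR) conditioning idea, assuming PyWavelets; stacking the four DWT sub-bands as extra channels is an illustration of the concept, not the paper's exact encoder design.

```python
import numpy as np
import pywt

def wavelet_condition(lr_image):
    """Stack the 2D DWT sub-bands of a low-resolution image as conditioning
    channels (approximation + horizontal/vertical/diagonal details)."""
    cA, (cH, cV, cD) = pywt.dwt2(lr_image.astype(np.float32), "haar")
    return np.stack([cA, cH, cV, cD], axis=0)   # shape: (4, H/2, W/2)
```

In a conditional diffusion setup these channels would be concatenated to the noisy input of the denoiser at each reverse step.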
Affiliation(s)
- Tianyu Liu
- Institute of Biomedical Engineering & Technology, Academy for Engineering and Technology, Fudan University, Shanghai 200433, People's Republic of China
- Shuai Han
- Institute of Biomedical Engineering & Technology, Academy for Engineering and Technology, Fudan University, Shanghai 200433, People's Republic of China
- Linru Xie
- Institute of Biomedical Engineering & Technology, Academy for Engineering and Technology, Fudan University, Shanghai 200433, People's Republic of China
- Wenyu Xing
- Institute of Biomedical Engineering & Technology, Academy for Engineering and Technology, Fudan University, Shanghai 200433, People's Republic of China
- Chengcheng Liu
- Institute of Biomedical Engineering & Technology, Academy for Engineering and Technology, Fudan University, Shanghai 200433, People's Republic of China
- State Key Laboratory of Integrated Chips and Systems, Fudan University, Shanghai 201203, People's Republic of China
- Boyi Li
- Institute of Biomedical Engineering & Technology, Academy for Engineering and Technology, Fudan University, Shanghai 200433, People's Republic of China
- Dean Ta
- State Key Laboratory of Integrated Chips and Systems, Fudan University, Shanghai 201203, People's Republic of China
- Department of Biomedical Engineering, School of Information Science and Technology, Fudan University, Shanghai 200438, People's Republic of China
4
Harindranath A, Shah K, Devadass D, George A, Banerjee Krishnan K, Arora M. IMU-Assisted Manual 3D-Ultrasound Imaging Using Motion-Constrained Swept-Fan Scans. ULTRASONIC IMAGING 2024; 46:164-177. [PMID: 38597330] [DOI: 10.1177/01617346241242718]
Abstract
Three-dimensional (3D) ultrasonic imaging can enable post-facto plane-of-interest selection. It can be performed with devices such as wobbler probes, matrix probes, and sensor-based probes. Ultrasound systems that support 3D imaging are expensive, with added hardware complexity compared to 2D imaging systems. An inertial measurement unit (IMU) can potentially be used for 3D imaging by tracking the motion of a one-dimensional array probe and constraining its motion to one-degree-of-freedom (1-DoF) rotation (swept-fan). This work demonstrates the feasibility of an affordable IMU-assisted manual 3D-ultrasound scanner (IAM3US). A consumer-grade IMU-assisted 3D scanner prototype is designed with two support structures for the swept-fan. After proper IMU calibration, an appropriate Kalman filter (KF)-based algorithm estimates the probe orientation during the swept-fan. An improved scanline-based reconstruction method is used for volume reconstruction. The IAM3US system is evaluated by imaging a tennis ball filled with water and the head region of a fetal phantom. From the reconstructed fetal phantom volumes, suitable 2D planes are extracted for manual biparietal diameter (BPD) measurements. Later, in-vivo data are collected. The novel contributions of this paper are (1) the application of a recently proposed algorithm for orientation estimation of the swept-fan for 3D imaging, chosen based on the noise characteristics of the selected consumer-grade IMU; (2) assessment of the quality of the 1-DoF swept-fan scan with a deflection detector, along with monitoring of the maximum angular rate during the scan; (3) two probe holder designs to aid the operator in performing the 1-DoF rotational motion; and (4) end-to-end 3D-imaging system integration. Phantom studies and preliminary in-vivo obstetric scans performed on two patients illustrate the usability of the system for diagnostic purposes.
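A generic 1-DoF orientation Kalman filter is sketched below, fusing integrated gyro rate with an accelerometer-derived tilt angle; this illustrates the kind of estimator involved, under assumed noise parameters, and is not the specific algorithm the authors selected.

```python
import numpy as np

class SweepAngleKF:
    """Minimal 1-DoF Kalman filter: state = [angle, gyro_bias].
    Predicts with the gyro rate and corrects with an accelerometer tilt angle."""
    def __init__(self, dt, q_angle=1e-4, q_bias=1e-6, r_acc=1e-2):
        self.dt = dt
        self.x = np.zeros(2)                      # [angle (rad), bias (rad/s)]
        self.P = np.eye(2)
        self.Q = np.diag([q_angle, q_bias])
        self.R = r_acc
        self.H = np.array([[1.0, 0.0]])

    def step(self, gyro_rate, acc_angle):
        F = np.array([[1.0, -self.dt], [0.0, 1.0]])
        B = np.array([self.dt, 0.0])
        # predict with the (bias-corrected) gyro rate
        self.x = F @ self.x + B * gyro_rate
        self.P = F @ self.P @ F.T + self.Q
        # correct with the accelerometer-derived tilt angle
        y = acc_angle - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T / S
        self.x = self.x + (K * y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]                          # current sweep angle estimate
```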
Affiliation(s)
- Aparna Harindranath
- Centre for Product Design and Manufacturing, Indian Institute of Science, Bangalore, India
- Department of Earth Science and Engineering, Royal School of Mines, Imperial College London, London, UK
- Komal Shah
- Centre for Product Design and Manufacturing, Indian Institute of Science, Bangalore, India
- Arun George
- St. Johns Research Institute, Bangalore, India
- Manish Arora
- Centre for Product Design and Manufacturing, Indian Institute of Science, Bangalore, India
5
Hêches J, Marcadent S, Fernandez A, Adjahou S, Meuwly JY, Thiran JP, Desseauve D, Favre J. Accuracy and Reliability of Pelvimetry Measures Obtained by Manual or Automatic Labeling of Three-Dimensional Pelvic Models. J Clin Med 2024; 13:689. [PMID: 38337383] [PMCID: PMC10856490] [DOI: 10.3390/jcm13030689]
Abstract
(1) Background: The morphology of the pelvic cavity is important for decision-making in obstetrics. This study aimed to estimate the accuracy and reliability of pelvimetry measures obtained when radiologists manually label anatomical landmarks on three-dimensional (3D) pelvic models. A second objective was to design an automatic labeling method. (2) Methods: Three operators segmented 10 computed tomography scans each. Three radiologists then labeled 12 anatomical landmarks on the pelvic models, which allowed for the calculation of 15 pelvimetry measures. Additionally, an automatic labeling method was developed based on a reference pelvic model, including reference anatomical landmarks, matching the individual pelvic models. (3) Results: Heterogeneity among landmarks in radiologists' labeling accuracy was observed, with some landmarks being rarely mislabeled by more than 4 mm and others being frequently mislabeled by 10 mm or more. The propagation to the pelvimetry measures was limited; only one out of the 15 measures reported a median error above 5 mm or 5°, and all measures showed moderate to excellent inter-radiologist reliability. The automatic method outperformed manual labeling. (4) Conclusions: This study confirmed the suitability of pelvimetry measures based on manual labeling of 3D pelvic models. Automatic labeling offers promising perspectives to decrease the demand on radiologists, standardize the labeling, and describe the pelvic cavity in more detail.
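A simplified sketch of the landmark-transfer idea: rigidly align a reference pelvic model to a subject model using matched anchor points (Kabsch) and snap the reference landmarks to the nearest subject vertex. The anchor-point correspondences and nearest-vertex snapping are illustrative assumptions, not the authors' full matching pipeline.

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) that best maps point set P onto Q (both N x 3)."""
    Pc, Qc = P - P.mean(0), Q - Q.mean(0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(0) - R @ P.mean(0)
    return R, t

def transfer_landmarks(ref_landmarks, ref_corr, subj_corr, subj_vertices):
    """Map reference landmarks onto a subject pelvis model.

    ref_corr / subj_corr : matched anchor points (N x 3) on the two models
    subj_vertices        : vertices of the subject surface mesh (M x 3)
    The aligned landmarks are snapped to the nearest subject vertex."""
    R, t = kabsch(ref_corr, subj_corr)
    moved = ref_landmarks @ R.T + t
    idx = np.argmin(((subj_vertices[None, :, :] - moved[:, None, :]) ** 2).sum(-1), axis=1)
    return subj_vertices[idx]
```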
Affiliation(s)
- Johann Hêches
- Swiss BioMotion Lab, Lausanne University Hospital (CHUV) and University of Lausanne (UNIL), CH-1011 Lausanne, Switzerland
- Sandra Marcadent
- Signal Processing Laboratory 5, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland
- Anna Fernandez
- Women-Mother-Child Department, Lausanne University Hospital (CHUV) and University of Lausanne (UNIL), CH-1011 Lausanne, Switzerland
- Stephen Adjahou
- Women-Mother-Child Department, Lausanne University Hospital (CHUV) and University of Lausanne (UNIL), CH-1011 Lausanne, Switzerland
- Jean-Yves Meuwly
- Department of Radiology, Lausanne University Hospital (CHUV) and University of Lausanne (UNIL), CH-1011 Lausanne, Switzerland
- Jean-Philippe Thiran
- Signal Processing Laboratory 5, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland
- Department of Radiology, Lausanne University Hospital (CHUV) and University of Lausanne (UNIL), CH-1011 Lausanne, Switzerland
- David Desseauve
- Women-Mother-Child Department, Lausanne University Hospital (CHUV) and University of Lausanne (UNIL), CH-1011 Lausanne, Switzerland
- Julien Favre
- Swiss BioMotion Lab, Lausanne University Hospital (CHUV) and University of Lausanne (UNIL), CH-1011 Lausanne, Switzerland
- The Sense Innovation and Research Center, CH-1007 Lausanne, Switzerland
6
Kim M, Pelivanov I, O'Donnell M. Review of Deep Learning Approaches for Interleaved Photoacoustic and Ultrasound (PAUS) Imaging. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2023; 70:1591-1606. [PMID: 37910419] [PMCID: PMC10788151] [DOI: 10.1109/tuffc.2023.3329119]
Abstract
Photoacoustic (PA) imaging provides optical contrast at relatively large depths within the human body, compared to other optical methods, at ultrasound (US) spatial resolution. By integrating real-time PA and US (PAUS) modalities, PAUS imaging has the potential to become a routine clinical modality bringing the molecular sensitivity of optics to medical US imaging. For applications where the full capabilities of clinical US scanners must be maintained in PAUS, conventional limited view and bandwidth transducers must be used. This approach, however, cannot provide high-quality maps of PA sources, especially vascular structures. Deep learning (DL) using data-driven modeling with minimal human design has been very effective in medical imaging, medical data analysis, and disease diagnosis, and has the potential to overcome many of the technical limitations of current PAUS imaging systems. The primary purpose of this article is to summarize the background and current status of DL applications in PAUS imaging. It also looks beyond current approaches to identify remaining challenges and opportunities for robust translation of PAUS technologies to the clinic.
7
Men Q, Teng C, Drukker L, Papageorghiou AT, Noble JA. Gaze-probe joint guidance with multi-task learning in obstetric ultrasound scanning. Med Image Anal 2023; 90:102981. [PMID: 37863638] [PMCID: PMC7615231] [DOI: 10.1016/j.media.2023.102981]
Abstract
In this work, we exploit multi-task learning to jointly predict the two decision-making processes of gaze movement and probe manipulation that an experienced sonographer would perform in routine obstetric scanning. A multimodal guidance framework, Multimodal-GuideNet, is proposed to detect the causal relationship between a real-world ultrasound video signal, synchronized gaze, and probe motion. The association between the multi-modality inputs is learned and shared through a modality-aware spatial graph that leverages useful cross-modal dependencies. By estimating the probability distribution of probe and gaze movements in real scans, the predicted guidance signals also allow inter- and intra-sonographer variations and avoid a fixed scanning path. We validate the new multi-modality approach on three types of obstetric scanning examinations, and the result consistently outperforms single-task learning under various guidance policies. To simulate sonographer's attention on multi-structure images, we also explore multi-step estimation in gaze guidance, and its visual results show that the prediction allows multiple gaze centers that are substantially aligned with underlying anatomical structures.
Affiliation(s)
- Qianhui Men
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, OX3 7DQ, United Kingdom
- Clare Teng
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, OX3 7DQ, United Kingdom
- Lior Drukker
- Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, OX3 9DU, United Kingdom; Department of Obstetrics and Gynecology, Tel-Aviv University, Tel Aviv, Ramat Aviv, 69978, Israel
- Aris T Papageorghiou
- Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, OX3 9DU, United Kingdom
- J Alison Noble
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, OX3 7DQ, United Kingdom
8
Lu Y, Fan K, Yuan J, Chen Y, Ge Y, Tao C, Liu X. Free scan real time 3D ultrasound imaging with shading artefacts removal. ULTRASONICS 2023; 135:107091. [PMID: 37515837] [DOI: 10.1016/j.ultras.2023.107091]
Abstract
Ultrasound imaging (USI) is a widely adopted imaging method in clinical diagnosis owing to its low cost, convenience, and safety. However, due to complex acoustic attenuation, two-dimensional (2D) USI lacks the capability to achieve a clear imaging result when the target is shaded by highly echogenic tissues. This paper proposes a three-dimensional (3D) free-scan real-time ultrasound imaging (FRUSI) method. By integrating 2D ultrasound image sequences around the region of interest (ROI) with a real-time and spatially accurate probe tracking method, the proposed FRUSI system provides clear and accurate ultrasound images for medical study. The experimental results on reconstruction precision and accuracy show the potential of the proposed system to provide high-quality 3D ultrasound imaging. Moreover, previously shaded targets can be discerned clearly in the same scan plane in both phantom studies and in vivo studies on the human finger joint. The performance of the proposed FRUSI system demonstrates its potential value for clinical diagnosis, providing high ultrasound imaging quality and rich spatial detail. Due to the convenient setup, the FRUSI system might potentially be expanded to other ultrasound imaging modalities.
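A generic voxel bin-filling step for tracked freehand reconstruction is sketched below: each tracked 2D frame is scattered into a world-aligned voxel grid, and the accumulated volume is later divided by the weight grid to obtain averaged intensities. This is a common baseline, assumed here for illustration; it is not the FRUSI reconstruction itself.

```python
import numpy as np

def insert_frame(volume, weight, frame, T_img_to_world, spacing_mm, voxel_mm, origin_mm):
    """Scatter one tracked 2D frame into a 3D voxel grid (weighted averaging).

    frame          : 2D grey-level image (rows x cols)
    T_img_to_world : 4x4 pose mapping image-plane mm coordinates to world mm
    spacing_mm     : (row, col) pixel size of the frame in mm
    voxel_mm       : isotropic voxel size of the output grid
    origin_mm      : world coordinate of voxel (0, 0, 0)
    """
    rows, cols = frame.shape
    r, c = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    pts = np.stack([c * spacing_mm[1], r * spacing_mm[0],
                    np.zeros_like(r, dtype=float), np.ones_like(r, dtype=float)], axis=-1)
    world = pts.reshape(-1, 4) @ T_img_to_world.T               # N x 4 homogeneous points
    idx = np.round((world[:, :3] - origin_mm) / voxel_mm).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array(volume.shape)), axis=1)
    idx, vals = idx[ok], frame.reshape(-1)[ok]
    np.add.at(volume, (idx[:, 0], idx[:, 1], idx[:, 2]), vals)
    np.add.at(weight, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    # after all frames: volume / np.maximum(weight, 1) gives the averaged grid
```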
Affiliation(s)
- Yanchen Lu
- School of Electronic Science and Engineering, Nanjing University, Nanjing 210046, China
- Kai Fan
- School of Electronic Science and Engineering, Nanjing University, Nanjing 210046, China
- Jie Yuan
- School of Electronic Science and Engineering, Nanjing University, Nanjing 210046, China
- Ying Chen
- School of Electronic Science and Engineering, Nanjing University, Nanjing 210046, China
- Yun Ge
- School of Electronic Science and Engineering, Nanjing University, Nanjing 210046, China
- Chao Tao
- School of Physics, Nanjing University, Nanjing 210046, China
- Xiaojun Liu
- School of Physics, Nanjing University, Nanjing 210046, China
9
Jiang Z, Zhou Y, Cao D, Navab N. DefCor-Net: Physics-aware ultrasound deformation correction. Med Image Anal 2023; 90:102923. [PMID: 37688982] [DOI: 10.1016/j.media.2023.102923]
Abstract
The recovery of morphologically accurate anatomical images from deformed ones is challenging in ultrasound (US) image acquisition, but crucial to accurate and consistent diagnosis, particularly in the emerging field of computer-assisted diagnosis. This article presents a novel physics-aware deformation correction approach based on a coarse-to-fine, multi-scale deep neural network (DefCor-Net). To achieve pixel-wise performance, DefCor-Net incorporates biomedical knowledge by estimating pixel-wise stiffness online using a U-shaped feature extractor. The deformation field is then computed using polynomial regression by integrating the measured force applied by the US probe. Based on real-time estimation of pixel-by-pixel tissue properties, the learning-based approach enables the potential for anatomy-aware deformation correction. To demonstrate the effectiveness of the proposed DefCor-Net, images recorded at multiple locations on forearms and upper arms of six volunteers are used to train and validate DefCor-Net. The results demonstrate that DefCor-Net can significantly improve the accuracy of deformation correction to recover the original geometry (Dice Coefficient: from 14.3±20.9 to 82.6±12.1 when the force is 6 N). Code: https://github.com/KarolineZhy/DefCorNet.
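A toy numpy sketch of the "polynomial regression against probe force" idea: fit a per-pixel polynomial relating observed displacement to applied force, then evaluate it to predict the deformation to undo. The data shapes and degree are assumptions for illustration, not the DefCor-Net implementation.

```python
import numpy as np

def fit_force_model(forces, displacements, degree=2):
    """Fit displacement(force) with a polynomial per pixel.

    forces        : (K,) probe forces for K compression states (K >= degree + 1)
    displacements : (K, H, W) observed axial displacement maps
    Returns coefficients of shape (degree + 1, H, W)."""
    K, H, W = displacements.shape
    coeffs = np.polyfit(forces, displacements.reshape(K, -1), deg=degree)
    return coeffs.reshape(degree + 1, H, W)

def predict_displacement(coeffs, force):
    """Evaluate the per-pixel polynomial at a measured force (highest power first)."""
    degree = coeffs.shape[0] - 1
    powers = np.array([force ** (degree - i) for i in range(degree + 1)])
    return np.tensordot(powers, coeffs, axes=1)   # (H, W) displacement map
```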
Affiliation(s)
- Zhongliang Jiang
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
- Yue Zhou
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
- Nassir Navab
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany; Computer Aided Medical Procedures, Johns Hopkins University, Baltimore, MD, USA
10
Li Q, Shen Z, Li Q, Barratt DC, Dowrick T, Clarkson MJ, Vercauteren T, Hu Y. Long-term Dependency for 3D Reconstruction of Freehand Ultrasound Without External Tracker. IEEE Trans Biomed Eng 2023; PP:1033-1042. [PMID: 37856260] [DOI: 10.1109/tbme.2023.3325551]
Abstract
OBJECTIVE Reconstructing freehand ultrasound in 3D without any external tracker has been a long-standing challenge in ultrasound-assisted procedures. We aim to define new ways of parameterising long-term dependencies and evaluate the performance. METHODS First, long-term dependency is encoded by transformation positions within a frame sequence. This is achieved by combining a sequence model with a multi-transformation prediction. Second, two dependency factors, anatomical image content and scanning protocol, are proposed as contributing towards accurate reconstruction. Each factor is quantified experimentally by reducing the respective training variances. RESULTS 1) The added long-term dependency, up to 400 frames at 20 frames per second (fps), indeed improved reconstruction, with an up to 82.4% lower accumulated error compared with the baseline performance. The improvement was found to be dependent on sequence length, transformation interval and scanning protocol and, unexpectedly, not on the use of recurrent networks with long-short term modules; 2) Decreasing either anatomical or protocol variance in training led to poorer reconstruction accuracy. Interestingly, greater performance was gained from representative protocol patterns than from representative anatomical features. CONCLUSION The proposed algorithm uses hyperparameter tuning to effectively utilise long-term dependency. The proposed dependency factors are of practical significance in collecting diverse training data, regulating scanning protocols and developing efficient networks. SIGNIFICANCE The proposed methodology, with publicly available volunteer data and code, offers new ways of parameterising the long-term dependency that are experimentally shown to be valid sources of performance improvement, which could potentially lead to better model development and practical optimisation of the reconstruction application.
11
Amiri Tehrani Zade A, Jalili Aziz M, Majedi H, Mirbagheri A, Ahmadian A. Spatiotemporal analysis of speckle dynamics to track invisible needle in ultrasound sequences using convolutional neural networks: a phantom study. Int J Comput Assist Radiol Surg 2023; 18:1373-1382. [PMID: 36745339] [DOI: 10.1007/s11548-022-02812-y]
Abstract
PURPOSE Accurate needle placement into the target point is critical for ultrasound interventions like biopsies and epidural injections. However, aligning the needle to the thin plane of the transducer is a challenging issue, as it leads to decay of visibility to the naked eye. Therefore, we have developed a CNN-based framework to track the needle using the spatiotemporal features of speckle dynamics. METHODS There are three key techniques to optimize the network for our application. First, we used Gunnar-Farneback (GF) as a traditional motion field estimation technique to augment the model input with the spatiotemporal features extracted from the stack of consecutive frames. We also designed an efficient network based on the state-of-the-art Yolo framework (nYolo). Lastly, the Assisted Excitation (AE) module was added at the neck of the network to handle the imbalance problem. RESULTS Fourteen freehand ultrasound sequences were collected by inserting an injection needle steeply into the Ultrasound Compatible Lumbar Epidural Simulator and Femoral Vascular Access Ezono test phantoms. We divided the dataset into two sub-categories. In the second category, in which the situation is more challenging and the needle is totally invisible, the angle and tip localization errors were 2.43 ± 1.14° and 2.3 ± 1.76 mm using Yolov3+GF+AE and 2.08 ± 1.18° and 2.12 ± 1.43 mm using nYolo+GF+AE. CONCLUSION The proposed method has the potential to track the needle more reliably than other state-of-the-art methods and can accurately localize it in 2D B-mode US images in real time, allowing it to be used in current ultrasound intervention procedures.
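A minimal sketch of the Gunnar-Farneback augmentation step using OpenCV: compute dense optical flow between consecutive B-mode frames and stack its magnitude as extra input channels. Parameter values and the channel layout are illustrative assumptions, not the authors' exact preprocessing.

```python
import cv2
import numpy as np

def flow_augmented_stack(frames):
    """Build a multi-channel input from N consecutive 8-bit grayscale frames:
    the frames themselves plus the Farneback flow magnitude between each
    consecutive pair, highlighting speckle motion around the needle."""
    channels = [frames[0].astype(np.float32) / 255.0]
    for prev, nxt in zip(frames[:-1], frames[1:]):
        # positional args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=-1)
        channels.append(nxt.astype(np.float32) / 255.0)
        channels.append(mag / (mag.max() + 1e-6))
    return np.stack(channels, axis=0)   # shape: (2N - 1, H, W)
```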
Affiliation(s)
- Amin Amiri Tehrani Zade
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Image-Guided Surgery Group, Research Centre for Biomedical Technologies and Robotics (RCBTR), Tehran University of Medical Sciences, Tehran, Iran
- Maryam Jalili Aziz
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Image-Guided Surgery Group, Research Centre for Biomedical Technologies and Robotics (RCBTR), Tehran University of Medical Sciences, Tehran, Iran
- Hossein Majedi
- Pain Research Center, Neuroscience Institute, Tehran University of Medical Sciences, Tehran, Iran
- Department of Anesthesiology, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Alireza Mirbagheri
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Robotic Group, Research Centre for Biomedical Technologies and Robotics (RCBTR), Tehran University of Medical Sciences, Tehran, Iran
- Alireza Ahmadian
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Image-Guided Surgery Group, Research Centre for Biomedical Technologies and Robotics (RCBTR), Tehran University of Medical Sciences, Tehran, Iran
12
Bottenus N. Implementation of constrained swept synthetic aperture using a mechanical fixture. APPLIED SCIENCES (BASEL, SWITZERLAND) 2023; 13:4797. [PMID: 38711800] [PMCID: PMC11072168] [DOI: 10.3390/app13084797]
Abstract
Resolution and target detectability in ultrasound imaging are directly tied to the size of the imaging array. This is especially important for imaging at depth, such as in the detection and diagnosis of hepatocellular carcinoma and other lesions in the liver. Swept synthetic aperture (SSA) imaging has shown promise for building large effective apertures from small physical arrays using motion, but has required bulky fixtures and external motion tracking for precise positioning. In this study we present an approach that constrains the transducer motion with a simple linear sliding fixture and estimates motion from the ultrasound data itself using either speckle tracking or channel correlation. We demonstrate in simulation and phantom experiments the ability of both techniques to accurately estimate lateral transducer motion and form SSA images with improved resolution and target detectability. We observed errors under 83 μm across a 50 mm sweep in simulation and found improvements of up to 61% in resolution and up to 33% in lesion detectability experimentally even imaging through ex vivo tissue layers. This approach will increase the accessibility of SSA imaging and allow us to test its use in clinical settings.
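A simple speckle-tracking sketch for the data-driven motion estimate: search over integer lateral shifts and pick the one maximizing normalized cross-correlation between two frames. The sign convention and search range are assumptions, and the actual work also describes a channel-correlation variant not shown here.

```python
import numpy as np

def lateral_shift(frame_a, frame_b, max_shift=40):
    """Estimate the lateral (column) translation between two frames by
    normalized cross-correlation of zero-mean images over integer shifts."""
    a = frame_a.astype(np.float64) - frame_a.mean()
    b = frame_b.astype(np.float64) - frame_b.mean()
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            ov_a, ov_b = a[:, s:], b[:, :a.shape[1] - s]
        else:
            ov_a, ov_b = a[:, :s], b[:, -s:]
        score = (ov_a * ov_b).sum() / (np.linalg.norm(ov_a) * np.linalg.norm(ov_b) + 1e-12)
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift   # in pixels; multiply by the pixel pitch for mm
```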
Affiliation(s)
- Nick Bottenus
- Department of Mechanical Engineering, University of Colorado Boulder, Boulder, CO 80516, USA
13
Luo M, Yang X, Wang H, Dou H, Hu X, Huang Y, Ravikumar N, Xu S, Zhang Y, Xiong Y, Xue W, Frangi AF, Ni D, Sun L. RecON: Online learning for sensorless freehand 3D ultrasound reconstruction. Med Image Anal 2023; 87:102810. [PMID: 37054648] [DOI: 10.1016/j.media.2023.102810]
Abstract
Sensorless freehand 3D ultrasound (US) reconstruction based on deep networks shows promising advantages, such as large field of view, relatively high resolution, low cost, and ease of use. However, existing methods mainly consider vanilla scan strategies with limited inter-frame variations. These methods thus are degraded on complex but routine scan sequences in clinics. In this context, we propose a novel online learning framework for freehand 3D US reconstruction under complex scan strategies with diverse scanning velocities and poses. First, we devise a motion-weighted training loss in training phase to regularize the scan variation frame-by-frame and better mitigate the negative effects of uneven inter-frame velocity. Second, we effectively drive online learning with local-to-global pseudo supervisions. It mines both the frame-level contextual consistency and the path-level similarity constraint to improve the inter-frame transformation estimation. We explore a global adversarial shape before transferring the latent anatomical prior as supervision. Third, we build a feasible differentiable reconstruction approximation to enable the end-to-end optimization of our online learning. Experimental results illustrate that our freehand 3D US reconstruction framework outperformed current methods on two large, simulated datasets and one real dataset. In addition, we applied the proposed framework to clinical scan videos to further validate its effectiveness and generalizability.
14
Heinrich MP, Siebert H, Graf L, Mischkewitz S, Hansen L. Robust and Realtime Large Deformation Ultrasound Registration Using End-to-End Differentiable Displacement Optimisation. SENSORS (BASEL, SWITZERLAND) 2023; 23:2876. [PMID: 36991588] [PMCID: PMC10056872] [DOI: 10.3390/s23062876]
Abstract
Image registration for temporal ultrasound sequences can be very beneficial for image-guided diagnostics and interventions. Cooperative human-machine systems that enable seamless assistance for both inexperienced and expert users during ultrasound examinations rely on robust, realtime motion estimation. Yet rapid and irregular motion patterns, varying image contrast and domain shifts in imaging devices pose a severe challenge to conventional realtime registration approaches. While learning-based registration networks have the promise of abstracting relevant features and delivering very fast inference times, they come at the potential risk of limited generalisation and robustness for unseen data; in particular, when trained with limited supervision. In this work, we demonstrate that these issues can be overcome by using end-to-end differentiable displacement optimisation. Our method involves a trainable feature backbone, a correlation layer that evaluates a large range of displacement options simultaneously and a differentiable regularisation module that ensures smooth and plausible deformation. In extensive experiments on public and private ultrasound datasets with very sparse ground truth annotation the method showed better generalisation abilities and overall accuracy than a VoxelMorph network with the same feature backbone, while being two times faster at inference.
Affiliation(s)
- Mattias P. Heinrich
- Institute of Medical Informatics, Universität zu Lübeck, 23562 Lübeck, Germany
- Hanna Siebert
- Institute of Medical Informatics, Universität zu Lübeck, 23562 Lübeck, Germany
- Laura Graf
- Institute of Medical Informatics, Universität zu Lübeck, 23562 Lübeck, Germany
15
Guo H, Chao H, Xu S, Wood BJ, Wang J, Yan P. Ultrasound Volume Reconstruction From Freehand Scans Without Tracking. IEEE Trans Biomed Eng 2023; 70:970-979. [PMID: 36103448] [PMCID: PMC10011008] [DOI: 10.1109/tbme.2022.3206596]
Abstract
Transrectal ultrasound is commonly used for guiding prostate cancer biopsy, where 3D ultrasound volume reconstruction is often desired. Current methods for 3D reconstruction from freehand ultrasound scans require external tracking devices to provide spatial information of an ultrasound transducer. This paper presents a novel deep learning approach for sensorless ultrasound volume reconstruction, which efficiently exploits content correspondence between ultrasound frames to reconstruct 3D volumes without external tracking. The underlying deep learning model, deep contextual-contrastive network (DC2-Net), utilizes self-attention to focus on the speckle-rich areas to estimate spatial movement and then minimizes a margin ranking loss for contrastive feature learning. A case-wise correlation loss over the entire input video helps further smooth the estimated trajectory. We train and validate DC2-Net on two independent datasets, one containing 619 transrectal scans and the other having 100 transperineal scans. Our proposed approach attained superior performance compared with other methods, with a drift rate of 9.64% and a prostate Dice of 0.89. The promising results demonstrate the capability of deep neural networks for universal ultrasound volume reconstruction from freehand 2D ultrasound scans without tracking information.
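A PyTorch sketch of the two loss terms named above, a margin ranking loss and a case-wise correlation loss; tensor shapes and the small constants are assumptions for illustration and do not reproduce the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def correlation_loss(pred, target):
    """Case-wise correlation loss: 1 - Pearson r between predicted and
    ground-truth per-frame motion parameters over a whole video (T x D)."""
    p = pred - pred.mean(dim=0, keepdim=True)
    t = target - target.mean(dim=0, keepdim=True)
    r = (p * t).sum(dim=0) / (p.norm(dim=0) * t.norm(dim=0) + 1e-8)
    return (1.0 - r).mean()

def ranking_loss(sim_pos, sim_neg, margin=0.1):
    """Contrastive ranking: similarity of a true frame pair should exceed
    that of a mismatched pair by at least `margin`."""
    target = torch.ones_like(sim_pos)
    return F.margin_ranking_loss(sim_pos, sim_neg, target, margin=margin)
```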
16
Van Heumen S, Riksen JJ, Singh MKA, Van Soest G, Vasilic D. LED-based photoacoustic imaging for preoperative visualization of lymphatic vessels in patients with secondary limb lymphedema. PHOTOACOUSTICS 2023; 29:100446. [PMID: 36632606] [PMCID: PMC9826814] [DOI: 10.1016/j.pacs.2022.100446]
Abstract
Lymphedema is the accumulation of protein-rich fluid in the interstitium (i.e., dermal backflow (DBF)). Preoperative imaging of the lymphatic vessels is a prerequisite for lymphovenous bypass surgical planning. We investigated the visualization of lymphatic vessels and veins using light-emitting diode (LED)-based photoacoustic imaging (PAI). Indocyanine-green mediated near-infrared fluorescence lymphography (NIRF-L) was done in fifteen patients with secondary limb lymphedema. Photoacoustic images were acquired in locations where lymphatic vessels and DBF were observed with NIRF-L. We demonstrated that LED-based PAI can visualize and differentiate lymphatic vessels and veins even in the presence of DBF. We observed lymphatic and blood vessels up to depths of 8.3 and 8.6 mm, respectively. Superficial lymphatic vessels and veins can be visualized using LED-based PAI even in the presence of DBF showing the potential for pre-operative assessment. Further development of the technique is needed to improve its usability in clinical settings.
Affiliation(s)
- Saskia Van Heumen
- Department of Plastic and Reconstructive Surgery, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Department of Cardiology, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Jonas J.M. Riksen
- Department of Cardiology, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Gijs Van Soest
- Department of Cardiology, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
- Dalibor Vasilic
- Department of Plastic and Reconstructive Surgery, Erasmus MC, University Medical Center, Rotterdam, the Netherlands
17
Men Q, Teng C, Drukker L, Papageorghiou AT, Noble JA. Multimodal-GuideNet: Gaze-Probe Bidirectional Guidance in Obstetric Ultrasound Scanning. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2022; 13437:94-103. [PMID: 36649382] [PMCID: PMC7614062] [DOI: 10.1007/978-3-031-16449-1_10]
Abstract
Eye trackers can provide visual guidance to sonographers during ultrasound (US) scanning. Such guidance is potentially valuable for less experienced operators to improve their scanning skills on how to manipulate the probe to achieve the desired plane. In this paper, a multimodal guidance approach (Multimodal-GuideNet) is proposed to capture the stepwise dependency between a real-world US video signal, synchronized gaze, and probe motion within a unified framework. To understand the causal relationship between gaze movement and probe motion, our model exploits multitask learning to jointly learn two related tasks: predicting gaze movements and probe signals that an experienced sonographer would perform in routine obstetric scanning. The two tasks are associated by a modality-aware spatial graph to detect the co-occurrence among the multi-modality inputs and share useful cross-modal information. Instead of a deterministic scanning path, Multimodal-GuideNet allows for scanning diversity by estimating the probability distribution of real scans. Experiments performed with three typical obstetric scanning examinations show that the new approach outperforms single-task learning for both probe motion guidance and gaze movement prediction. Multimodal-GuideNet also provides a visual guidance signal with an error rate of less than 10 pixels for a 224 × 288 US image.
Affiliation(s)
- Qianhui Men
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
- Clare Teng
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
- Lior Drukker
- Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, UK
- Department of Obstetrics and Gynecology, Tel-Aviv University, Israel
- Aris T Papageorghiou
- Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, UK
- J Alison Noble
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
18
Zhao C, Droste R, Drukker L, Papageorghiou AT, Alison Noble J. USPoint: Self-Supervised Interest Point Detection and Description for Ultrasound-Probe Motion Estimation During Fine-Adjustment Standard Fetal Plane Finding. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2022; 2022:104-114. [PMID: 37223131] [PMCID: PMC7614558] [DOI: 10.1007/978-3-031-16449-1_11]
Abstract
Ultrasound (US)-probe motion estimation is a fundamental problem in automated standard plane locating during obstetric US diagnosis. Most existing recent works employ a deep neural network (DNN) to regress the probe motion. However, these deep regression-based methods tend to overfit the specific training data and naturally lack generalization ability for clinical application. In this paper, we return to generalized US feature learning rather than deep parameter regression. We propose a self-supervised learned local detector and descriptor, named USPoint, for US-probe motion estimation during the fine-adjustment phase of fetal plane acquisition. Specifically, a hybrid neural architecture is designed to simultaneously extract local features and further estimate the probe motion. By embedding a differentiable USPoint-based motion estimation inside the proposed network architecture, the USPoint learns the keypoint detector, scores and descriptors from motion error alone, which does not require expensive human annotation of local features. The two tasks, local feature learning and motion estimation, are jointly learned in a unified framework to enable collaborative learning with the aim of mutual benefit. To the best of our knowledge, this is the first learned local detector and descriptor tailored to US images. Experimental evaluation on real clinical data demonstrates the resultant performance improvement on feature matching and motion estimation for potential clinical value. A video demo can be found online: https://youtu.be/JGzHuTQVlBs.
Affiliation(s)
- Cheng Zhao
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
- Richard Droste
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
- Lior Drukker
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- Aris T Papageorghiou
- Nuffield Department of Women's and Reproductive Health, University of Oxford, Oxford, UK
- J Alison Noble
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
19
Mikaeili M, Bilge HŞ. Trajectory estimation of ultrasound images based on convolutional neural network. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103965]
20
Stretched reconstruction based on 2D freehand ultrasound for peripheral artery imaging. Int J Comput Assist Radiol Surg 2022; 17:1281-1288. [PMID: 35486303] [DOI: 10.1007/s11548-022-02636-w]
Abstract
PURPOSE Endovascular revascularization is becoming the established first-line treatment of peripheral artery disease (PAD). Ultrasound (US) imaging is used pre-operatively to make the first diagnosis and is often followed by a CT angiography (CTA). US provides a non-invasive and non-ionizing method for the visualization of arteries and lesion(s). This paper proposes to generate a 3D stretched reconstruction of the femoral artery from a sequence of 2D US B-mode frames. METHODS The proposed method is solely image-based. A Mask-RCNN is used to segment the femoral artery on the 2D US frames. In-plane registration is achieved by aligning the artery segmentation masks. Subsequently, a convolutional neural network (CNN) predicts the out-of-plane translation. After processing all input frames and re-sampling the volume according to the vessel's centerline, the whole femoral artery can be visualized on a single slice of the resulting stretched view. RESULTS 111 tracked US sequences of the left or right femoral arteries were acquired from 18 healthy volunteers. Fivefold cross-validation was used to validate our method, achieving an absolute mean error of 0.28 ± 0.28 mm and a median drift error of 8.98%. CONCLUSION This study demonstrates the feasibility of freehand US stretched reconstruction following a deep learning strategy for imaging the femoral artery. Stretched views are generated and can give rich diagnostic information in the pre-operative planning of PAD procedures. This visualization could replace traditional 3D imaging in the pre-operative planning process, and during the pre-operative diagnosis phase, to identify, locate, and size stenosis/thrombosis lesions.
21
Bharadwaj S, Prasad S, Almekkawy M. An Upgraded Siamese Neural Network for Motion Tracking in Ultrasound Image Sequences. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2021; 68:3515-3527. [PMID: 34232873] [DOI: 10.1109/tuffc.2021.3095299]
Abstract
Deep learning is increasingly being adopted to solve problems in medical imaging applications, and Siamese neural networks are the front runners of motion tracking. In this article, we propose to upgrade one such Siamese architecture-based neural network for robust and accurate landmark tracking in ultrasound images to improve the quality of image-guided radiation therapy. Although several researchers have improved the Siamese architecture-based networks with sophisticated detection modules and by incorporating transfer learning, the inherent assumptions of the constant position model and the missing motion model remain unaddressed limitations. In our proposed model, we overcome these limitations by introducing two modules into the original architecture. We employ a reference template update to resolve the constant position model and a linear Kalman filter (LKF) to address the missing motion model. Moreover, we demonstrate that the proposed architecture provides promising results without transfer learning. The proposed model was submitted to an open challenge organized by MICCAI and was evaluated exhaustively on the Liver US Tracking (CLUST) 2D dataset. Experimental results showed that the proposed model tracked the landmarks with promising accuracy. Furthermore, we also induced synthetic occlusions to perform a qualitative analysis of the proposed approach. The evaluations were performed on the training set of the CLUST 2D dataset. The proposed method outperformed the original Siamese architecture by a significant margin.
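A generic constant-velocity linear Kalman filter for smoothing a 2-D landmark position is sketched below, illustrating what "adding a motion model" looks like in practice; the noise values are assumptions and this is not the authors' exact tracker.

```python
import numpy as np

class LandmarkKF:
    """Constant-velocity Kalman filter for a 2-D landmark position.
    The Siamese-network detection is fed in as the measurement each frame."""
    def __init__(self, dt=1.0, q=1e-2, r=1.0):
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt                  # x += vx*dt, y += vy*dt
        self.H = np.zeros((2, 4)); self.H[0, 0] = self.H[1, 1] = 1.0
        self.Q = q * np.eye(4)
        self.R = r * np.eye(2)
        self.x = np.zeros(4)
        self.P = np.eye(4) * 10.0

    def update(self, detection_xy):
        # predict with the motion model
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # correct with the network's detection
        z = np.asarray(detection_xy, dtype=float)
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                                  # filtered landmark position
```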
22
Dong J, Fu T, Lin Y, Deng Q, Fan J, Song H, Cheng Z, Liang P, Wang Y, Yang J. Hole-filling based on content loss indexed 3D partial convolution network for freehand ultrasound reconstruction. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 211:106421. [PMID: 34583228] [DOI: 10.1016/j.cmpb.2021.106421]
Abstract
BACKGROUND AND OBJECTIVE During the 3D reconstruction of an ultrasound volume from 2D B-scan ultrasound images, holes are usually found in the reconstructed 3D volumes due to fast scanning. This affects the clinician's localization and assessment of the lesion. Hence, in this study, we propose to fill the holes by using a novel content loss indexed 3D partial convolution network for 3D freehand ultrasound volume reconstruction. The network can synthesize novel ultrasound volume structures and reconstruct ultrasound volumes with missing regions of variable size at arbitrary locations. METHODS First, the 3D partial convolution is introduced into the convolutional layer, which is masked and renormalized to be conditioned on only valid voxels. Then, the mask in the next layer is automatically updated as a part of the forward pass. To better preserve texture and structure details of the reconstruction results, we couple the adversarial loss of the least squares generative adversarial network (LSGAN) with a novel content loss, which consists of the context loss, the feature-matching loss and the total variation loss. Thereafter, we introduce a novel spectral-normalized LSGAN by adding spectral normalization (SN) to the generator and discriminator of the LSGAN. The proposed method is simple in formulation and stable in training. RESULTS Experiments on public and in-vivo ultrasound datasets and comparisons with popular algorithms demonstrate that the proposed approach can generate high-quality hole-filling results with preserved perceptual image details. CONCLUSIONS Considering the high quality of the hole-filling results, the proposed method can effectively fill the missing regions in the reconstructed 3D ultrasound volume from 2D ultrasound image sequences.
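A minimal PyTorch sketch of a 3-D partial convolution layer with mask renormalization and mask update, in the spirit of the masked-and-renormalized convolution described in the METHODS; the bias handling and layer hyperparameters are assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv3d(nn.Module):
    """3-D partial convolution: the output is conditioned only on valid voxels
    and the validity mask is updated for the next layer."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size, stride, padding, bias=True)
        # fixed all-ones kernel used to count valid voxels under the window
        self.register_buffer(
            "mask_kernel", torch.ones(1, 1, kernel_size, kernel_size, kernel_size))
        self.window = kernel_size ** 3
        self.stride, self.padding = stride, padding

    def forward(self, x, mask):
        # mask: (N, 1, D, H, W) with 1 = valid voxel, 0 = hole
        with torch.no_grad():
            valid = F.conv3d(mask, self.mask_kernel, stride=self.stride, padding=self.padding)
        out = self.conv(x * mask)
        bias = self.conv.bias.view(1, -1, 1, 1, 1)
        scale = self.window / valid.clamp(min=1.0)         # renormalize by the valid count
        out = (out - bias) * scale + bias
        out = out * (valid > 0).float()                    # zero output where no valid input
        new_mask = (valid > 0).float()
        return out, new_mask
```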
Affiliation(s)
- Jiahui Dong
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Tianyu Fu
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yucong Lin
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China; Institute of Engineering Medicine, Beijing Institute of Technology, Beijing 100081, China
- Qiaoling Deng
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Jingfan Fan
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Hong Song
- School of Software, Beijing Institute of Technology, Beijing 100081, China
- Zhigang Cheng
- Department of Interventional Ultrasound, Chinese PLA General Hospital, Beijing 100853, China
- Ping Liang
- Department of Interventional Ultrasound, Chinese PLA General Hospital, Beijing 100853, China
- Yongtian Wang
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Jian Yang
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
23
de Ruijter J, Muijsers JJM, van de Vosse FN, van Sambeek MRHM, Lopata RGP. A Generalized Approach for Automatic 3-D Geometry Assessment of Blood Vessels in Transverse Ultrasound Images Using Convolutional Neural Networks. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2021; 68:3326-3335. [PMID: 34143734] [DOI: 10.1109/tuffc.2021.3090461]
Abstract
Accurate 3-D geometries of arteries and veins are important clinical data for diagnosis of arterial disease and intervention planning. Automatic segmentation of vessels in the transverse view suffers from the low lateral resolution and contrast. Convolutional neural networks are a promising tool for automatic segmentation of medical images, outperforming the traditional segmentation methods with high robustness. In this study, we aim to create a general, robust, and accurate method to segment the lumen-wall boundary of healthy central and peripheral vessels in large field-of-view freehand ultrasound (US) datasets. Data were acquired using the freehand US, in combination with a probe tracker. A total of ±36 000 cross-sectional images, acquired in the common, internal, and external carotid artery ( N = 37 ), in the radial, ulnar artery, and cephalic vein ( N = 12 ), and in the femoral artery ( N = 5 ) were included. To create masks (of the lumen) for training data, a conventional automatic segmentation method was used. The neural networks were trained on: 1) data of all vessels and 2) the carotid artery only. The performance was compared and tested using an open-access dataset. The recall, precision, DICE, and intersection over union (IoU) were calculated. Overall, segmentation was successful in the carotid and peripheral arteries. The Multires U-net architecture performs best overall with DICE = 0.93 when trained on the total dataset. Future studies will focus on the inclusion of vascular pathologies.
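For reference, the overlap metrics reported above are typically computed from binary masks as follows; this is the standard definition, shown as a small sketch rather than the authors' evaluation code.

```python
import numpy as np

def dice_iou(pred_mask, true_mask):
    """Dice coefficient and intersection-over-union for binary lumen masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    dice = 2.0 * inter / (pred.sum() + true.sum() + 1e-8)
    iou = inter / (union + 1e-8)
    return dice, iou
```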
Collapse
|
24
|
Xie Y, Liao H, Zhang D, Zhou L, Chen F. Image-Based 3D Ultrasound Reconstruction with Optical Flow via Pyramid Warping Network. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:3539-3542. [PMID: 34892003 DOI: 10.1109/embc46164.2021.9630853] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
3D ultrasound (US) contains rich spatial information that is helpful for medical diagnosis. However, current reconstruction methods that rely on tracking devices are not well suited to clinical application, while sensorless freehand methods, which reconstruct from US images alone, are less accurate. In this paper, we propose a network that reconstructs the US volume from US image features and optical flow features. We introduce a pyramid warping layer that merges image features and optical flow features through a warping operation. To fuse the warped features of different scales at different pyramid levels, we adopt a fusion module based on an attention mechanism, and we additionally apply channel attention and spatial attention in our network. Our method was evaluated on 100 freehand US sweeps of human forearms and showed efficient volume reconstruction performance compared with other methods.
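For readers unfamiliar with warping layers, the following minimal PyTorch sketch shows how a feature map can be warped by a per-pixel optical flow field, the core operation inside a pyramid warping layer. It is a generic illustration under assumed conventions (pixel-unit flow, bilinear sampling), not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def warp(feat: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp a feature map (N, C, H, W) by a flow field (N, 2, H, W) given in pixels."""
    n, _, h, w = feat.shape
    # Base sampling grid in pixel coordinates (x, y)
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(feat.device)   # (2, H, W)
    grid = grid.unsqueeze(0) + flow                                # displaced sample positions
    # Normalise to [-1, 1] as required by grid_sample
    grid_x = 2.0 * grid[:, 0] / max(w - 1, 1) - 1.0
    grid_y = 2.0 * grid[:, 1] / max(h - 1, 1) - 1.0
    grid_norm = torch.stack((grid_x, grid_y), dim=-1)              # (N, H, W, 2)
    return F.grid_sample(feat, grid_norm, align_corners=True)
```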
Collapse
|
25
|
Tang S, Yang X, Shajudeen P, Sears C, Taraballi F, Weiner B, Tasciotti E, Dollahon D, Park H, Righetti R. A CNN-based method to reconstruct 3-D spine surfaces from US images in vivo. Med Image Anal 2021; 74:102221. [PMID: 34520960 DOI: 10.1016/j.media.2021.102221] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2020] [Revised: 08/26/2021] [Accepted: 08/27/2021] [Indexed: 01/12/2023]
Abstract
Three-dimensional (3-D) reconstruction of the spine surface is of strong clinical relevance for the diagnosis and prognosis of spine disorders and intra-operative image guidance. In this paper, we report a new technique to reconstruct lumbar spine surfaces in 3-D from non-invasive ultrasound (US) images acquired in free-hand mode. US images randomly sampled from in vivo scans of 9 rabbits were used to train a U-net convolutional neural network (CNN). More specifically, a late fusion (LF)-based U-net trained jointly on B-mode and shadow-enhanced B-mode images was generated by fusing two individual U-nets and expanding the set of trainable parameters to around twice the capacity of a basic U-net. This U-net was then applied to predict spine surface labels in in vivo images obtained from another rabbit, which were then used for 3-D spine surface reconstruction. The underlying pose of the transducer during the scan was estimated by registering stacks of US images to a geometrical model derived from corresponding CT data and used to align detected surface points. Final performance of the reconstruction method was assessed by computing the mean absolute error (MAE) between pairs of spine surface points detected from US and CT and by counting the total number of surface points detected from US. Comparison was made between the LF-based U-net and a previously developed phase symmetry (PS)-based method. Using the LF-based U-net, the averaged number of US surface points across the lumbar region increased by 21.61% and MAE reduced by 26.28% relative to the PS-based method. The overall MAE (in mm) was 0.24±0.29. Based on these results, we conclude that: 1) the proposed U-net can detect the spine posterior arch with low MAE and large number of US surface points and 2) the newly proposed reconstruction framework may complement and, under certain circumstances, be used without the aid of an external tracking system in intra-operative spine applications.
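A minimal sketch of the kind of surface-to-surface error reported here: the mean absolute distance from US-detected surface points to the nearest CT-derived surface points. Pairing by nearest neighbour is an illustrative assumption, not necessarily the authors' exact protocol:

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_mae(us_points: np.ndarray, ct_points: np.ndarray) -> float:
    """Mean absolute distance (mm) from each US surface point (N, 3)
    to its nearest CT surface point (M, 3)."""
    tree = cKDTree(ct_points)
    dists, _ = tree.query(us_points)
    return float(np.mean(np.abs(dists)))
```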
Collapse
Affiliation(s)
- Songyuan Tang
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843, USA
| | - Xu Yang
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843, USA
| | - Peer Shajudeen
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843, USA
| | - Candice Sears
- Houston Methodist Hospital, Department of Orthopedics and Sports Medicine, Center for Musculoskeletal Regeneration, Houston 77030, USA
| | - Francesca Taraballi
- Houston Methodist Hospital, Department of Orthopedics and Sports Medicine, Center for Musculoskeletal Regeneration, Houston 77030, USA
| | - Bradley Weiner
- Houston Methodist Hospital, Department of Orthopedics and Sports Medicine, Center for Musculoskeletal Regeneration, Houston 77030, USA
| | - Ennio Tasciotti
- Houston Methodist Hospital, Department of Orthopedics and Sports Medicine, Center for Musculoskeletal Regeneration, Houston 77030, USA
| | - Devon Dollahon
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843, USA
| | - Hangue Park
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843, USA
| | - Raffaella Righetti
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, TX 77843, USA.
| |
Collapse
|
26
|
Cai Q, Peng C, Lu JY, Prieto JC, Rosenbaum AJ, Stringer JSA, Jiang X. Performance Enhanced Ultrasound Probe Tracking With a Hemispherical Marker Rigid Body. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2021; 68:2155-2163. [PMID: 33560983 DOI: 10.1109/tuffc.2021.3058145] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Among tracking techniques applied in 3-D freehand ultrasound (US), the camera-based tracking method is relatively mature and reliable. However, constrained by manufactured marker rigid bodies, the US probe is usually limited to operating within a narrow rotational range before occlusion issues affect accurate and robust tracking performance. Thus, this study proposed a hemispherical marker rigid body to hold passive noncoplanar markers so that the markers could be identified by the camera, mitigating self-occlusion. The enlarged rotational range provides greater freedom for sonographers while performing examinations. The single-axis rotational and translational tracking performances of the system, equipped with the newly designed marker rigid body, were investigated and evaluated. Tracking with the designed marker rigid body achieved high tracking accuracy, with 0.57° for the single-axis rotation and 0.01 mm for the single-axis translation at sensor distances between 1.5 and 2 m. In addition to maintaining high accuracy, the system also captured over 99.76% of the motion data in the experiments. The results demonstrated that with the designed marker rigid body, the missing data were markedly reduced from over 15% to less than 0.5%, which enables interpolation in the data postprocessing. An imaging test was further conducted, and the volume reconstruction of a four-month fetal phantom was demonstrated using the motion data obtained from the tracking system.
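The pose of a marker rigid body is typically recovered from matched 3-D marker coordinates; the sketch below shows a generic least-squares (Kabsch) solution and illustrates only the principle, not the tracking system's actual algorithm:

```python
import numpy as np

def rigid_pose(model_pts: np.ndarray, observed_pts: np.ndarray):
    """Least-squares rotation R and translation t mapping model marker coordinates
    (K, 3) onto their camera-observed positions (K, 3) via the Kabsch algorithm."""
    mc, oc = model_pts.mean(axis=0), observed_pts.mean(axis=0)
    H = (model_pts - mc).T @ (observed_pts - oc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = oc - R @ mc
    return R, t
```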
Collapse
|
27
|
Tattoo tomography: Freehand 3D photoacoustic image reconstruction with an optical pattern. Int J Comput Assist Radiol Surg 2021; 16:1101-1110. [PMID: 33993409 PMCID: PMC8260532 DOI: 10.1007/s11548-021-02399-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Accepted: 05/02/2021] [Indexed: 11/28/2022]
Abstract
Purpose Photoacoustic tomography (PAT) is a novel imaging technique that can spatially resolve both morphological and functional tissue properties, such as vessel topology and tissue oxygenation. While this capacity makes PAT a promising modality for the diagnosis, treatment, and follow-up of various diseases, a current drawback is the limited field of view provided by the conventionally applied 2D probes.
Methods In this paper, we present a novel approach to 3D reconstruction of PAT data (Tattoo tomography) that does not require an external tracking system and can smoothly be integrated into clinical workflows. It is based on an optical pattern placed on the region of interest prior to image acquisition. This pattern is designed in a way that a single tomographic image of it enables the recovery of the probe pose relative to the coordinate system of the pattern, which serves as a global coordinate system for image compounding.
Results To investigate the feasibility of Tattoo tomography, we assessed the quality of 3D image reconstruction with experimental phantom data and in vivo forearm data. The results obtained with our prototype indicate that the Tattoo method enables the accurate and precise 3D reconstruction of PAT data and may be better suited for this task than the baseline method using optical tracking.
Conclusions In contrast to previous approaches to 3D ultrasound (US) or PAT reconstruction, the Tattoo approach neither requires complex external hardware nor training data acquired for a specific application. It could thus become a valuable tool for clinical freehand PAT.
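Once a per-frame pose is available (whether recovered from the optical pattern described here or from any other tracker), tomographic frames can be compounded into a regular volume. The following Python sketch illustrates simple nearest-voxel compounding under simplified axis conventions; it is an illustration of the general idea, not the authors' pipeline:

```python
import numpy as np

def compound_frames(frames, poses, spacing_mm, vol_shape, vol_origin_mm, voxel_mm):
    """Scatter tracked 2-D frames into a regular 3-D grid (nearest-voxel compounding).
    frames: list of (H, W) images; poses: list of 4x4 image-to-world transforms;
    spacing_mm: (row, col) pixel spacing. Axis conventions are simplified."""
    vol = np.zeros(vol_shape, dtype=np.float32)
    hits = np.zeros(vol_shape, dtype=np.uint16)
    for img, T in zip(frames, poses):
        h, w = img.shape
        rr, cc = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        pts = np.stack([cc * spacing_mm[1], rr * spacing_mm[0],
                        np.zeros_like(rr), np.ones_like(rr)], axis=-1).reshape(-1, 4)
        world = (T @ pts.T).T[:, :3]                      # pixel -> world coordinates (mm)
        idx = np.round((world - vol_origin_mm) / voxel_mm).astype(int)
        ok = np.all((idx >= 0) & (idx < np.array(vol_shape)), axis=1)
        i, j, k = idx[ok].T
        np.add.at(vol, (i, j, k), img.reshape(-1)[ok])    # accumulate intensities
        np.add.at(hits, (i, j, k), 1)                     # count contributions per voxel
    return vol / np.maximum(hits, 1)                      # average overlapping samples
```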
Collapse
|
28
|
Borgbjerg J, Hørlyck A. Web-Based GPU-Accelerated Application for Multiplanar Reconstructions from Conventional 2D Ultrasound. ULTRASCHALL IN DER MEDIZIN (STUTTGART, GERMANY : 1980) 2021; 42:194-201. [PMID: 31487752 DOI: 10.1055/a-0999-5347] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
PURPOSE In ultrasound education, there is a need for interactive web-based learning resources. The purpose of this project was to develop a web-based application that enables the generation and exploration of volumetric datasets from cine loops obtained with conventional 2D ultrasound. MATERIALS AND METHODS JavaScript code for ultrasound video loading and the generation of volumetric datasets was created and merged with an existing web-based imaging viewer based on JavaScript and HTML5. The Web Graphics Library was utilized to enable hardware-accelerated image rendering. RESULTS The result is a web application that works in most major browsers without any plug-ins. It allows users to load a conventional 2D ultrasound cine loop, which can subsequently be manipulated with on-the-fly multiplanar reconstructions as in a Digital Imaging and Communications in Medicine (DICOM) viewer. The application is freely accessible at (http://www.castlemountain.dk/atlas/index.php?page=mulrecon&mulreconPage=sonoviewer), where a demonstration of web-based sharing of generated cases can also be found. CONCLUSION The developed web-based application is unique in allowing users to easily load their own ultrasound clips, perform multiplanar reconstructions, and share the resulting interactive cases on the Internet.
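The application itself is written in JavaScript with WebGL, but the underlying idea can be sketched in a few lines of Python: assuming the cine loop approximates a uniform linear sweep, the frames are stacked into a volume that can then be re-sliced along arbitrary orthogonal planes (the essence of multiplanar reconstruction):

```python
import numpy as np

def cine_to_volume(frames):
    """Stack a list of equally spaced 2-D frames (H, W) into a (D, H, W) volume.
    Assumes an approximately uniform, linear probe sweep."""
    return np.stack(frames, axis=0)

def sagittal_slice(volume, column):
    """Extract a reconstructed plane orthogonal to the original imaging planes."""
    return volume[:, :, column]          # shape (D, H)
```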
Collapse
Affiliation(s)
| | - Arne Hørlyck
- Radiology, Aarhus University Hospital, Aarhus, Denmark
| |
Collapse
|
29
|
Ramalhinho J, Tregidgo HFJ, Gurusamy K, Hawkes DJ, Davidson B, Clarkson MJ. Registration of Untracked 2D Laparoscopic Ultrasound to CT Images of the Liver Using Multi-Labelled Content-Based Image Retrieval. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:1042-1054. [PMID: 33326379 DOI: 10.1109/tmi.2020.3045348] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Laparoscopic Ultrasound (LUS) is recommended as a standard-of-care when performing laparoscopic liver resections as it images sub-surface structures such as tumours and major vessels. Given that LUS probes are difficult to handle and some tumours are iso-echoic, registration of LUS images to a pre-operative CT has been proposed as an image-guidance method. This registration problem is particularly challenging due to the small field of view of LUS, and usually depends on both a manual initialisation and tracking to compose a volume, hindering clinical translation. In this paper, we extend a proposed registration approach using Content-Based Image Retrieval (CBIR), removing the requirement for tracking or manual initialisation. Pre-operatively, a set of possible LUS planes is simulated from CT and a descriptor generated for each image. Then, a Bayesian framework is employed to estimate the most likely sequence of CT simulations that matches a series of LUS images. We extend our CBIR formulation to use multiple labelled objects and constrain the registration by separating liver vessels into portal vein and hepatic vein branches. The value of this new labeled approach is demonstrated in retrospective data from 5 patients. Results show that, by including a series of 5 untracked images in time, a single LUS image can be registered with accuracies ranging from 5.7 to 16.4 mm with a success rate of 78%. Initialisation of the LUS to CT registration with the proposed framework could potentially enable the clinical translation of these image fusion techniques.
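Only the retrieval step is sketched below (the Bayesian sequence model over multiple untracked images is omitted): given a descriptor computed from the current LUS image and the descriptors of the CT-simulated planes, the best-matching simulation is found by a nearest-neighbour search. The function names and the Euclidean distance metric are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def retrieve_best_plane(lus_descriptor: np.ndarray, ct_descriptors: np.ndarray) -> int:
    """Return the index of the simulated CT plane whose descriptor is closest
    to the descriptor of the current LUS image."""
    d = np.linalg.norm(ct_descriptors - lus_descriptor[None, :], axis=1)
    return int(np.argmin(d))
```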
Collapse
|
30
|
Shin Y, Yang J, Lee YH, Kim S. Artificial intelligence in musculoskeletal ultrasound imaging. Ultrasonography 2021; 40:30-44. [PMID: 33242932 PMCID: PMC7758096 DOI: 10.14366/usg.20080] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2020] [Revised: 09/04/2020] [Accepted: 09/06/2020] [Indexed: 12/14/2022] Open
Abstract
Ultrasonography (US) is noninvasive and offers real-time, low-cost, and portable imaging that facilitates the rapid and dynamic assessment of musculoskeletal components. Significant technological improvements have contributed to the increasing adoption of US for musculoskeletal assessments, as artificial intelligence (AI)-based computer-aided detection and computer-aided diagnosis are being utilized to improve the quality, efficiency, and cost of US imaging. This review provides an overview of classical machine learning techniques and modern deep learning approaches for musculoskeletal US, with a focus on the key categories of detection and diagnosis of musculoskeletal disorders, predictive analysis with classification and regression, and automated image segmentation. Moreover, we outline challenges and a range of opportunities for AI in musculoskeletal US practice.
Collapse
Affiliation(s)
- YiRang Shin
- Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul, Korea
| | - Jaemoon Yang
- Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul, Korea
- Systems Molecular Radiology at Yonsei (SysMolRaY), Seoul, Korea
- Severance Biomedical Science Institute (SBSI), Yonsei University College of Medicine, Seoul, Korea
| | - Young Han Lee
- Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul, Korea
| | - Sungjun Kim
- Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science (CCIDS), Yonsei University College of Medicine, Seoul, Korea
| |
Collapse
|
31
|
Evain E, Faraz K, Grenier T, Garcia D, De Craene M, Bernard O. A Pilot Study on Convolutional Neural Networks for Motion Estimation From Ultrasound Images. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2020; 67:2565-2573. [PMID: 32112679 DOI: 10.1109/tuffc.2020.2976809] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
In recent years, deep learning (DL) has been successfully applied to the analysis and processing of ultrasound images. To date, most of this research has focused on segmentation and view recognition. This article benchmarks different convolutional neural network algorithms for motion estimation in ultrasound imaging. We evaluated and compared several networks derived from FlowNet2, one of the most efficient architectures in computer vision. The networks were tested with and without transfer learning, and the best configuration was compared against the particle imaging velocimetry method, a popular state-of-the-art block-matching algorithm. Rotations are known to be difficult to track from ultrasound images due to a significant speckle decorrelation. We thus focused on the images of rotating disks, which could be tracked through speckle features only. Our database consisted of synthetic and in vitro B-mode images after log compression and covered a large range of rotational speeds. One of the FlowNet2 subnetworks, FlowNet2SD, produced competitive results with a motion field error smaller than 1 pixel on real data after transfer learning based on the simulated data. These errors remain small for a large velocity range without the need for hyperparameter tuning, which indicates the high potential and adaptability of DL solutions to motion estimation in ultrasound imaging.
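The endpoint error used to compare estimated and reference motion fields can be computed as in this short sketch (a generic definition, not the authors' evaluation code):

```python
import numpy as np

def mean_endpoint_error(flow_est: np.ndarray, flow_ref: np.ndarray) -> float:
    """Average endpoint error (in pixels) between two (H, W, 2) flow fields."""
    return float(np.mean(np.linalg.norm(flow_est - flow_ref, axis=-1)))
```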
Collapse
|
32
|
Droste R, Drukker L, Papageorghiou AT, Noble JA. Automatic Probe Movement Guidance for Freehand Obstetric Ultrasound. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2020; 12263:583-592. [PMID: 33103163 DOI: 10.1007/978-3-030-59716-0_56] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/02/2022]
Abstract
We present the first system that provides real-time probe movement guidance for acquiring standard planes in routine freehand obstetric ultrasound scanning. Such a system can contribute to the world-wide deployment of obstetric ultrasound scanning by lowering the required level of operator expertise. The system employs an artificial neural network that receives the ultrasound video signal and the motion signal of an inertial measurement unit (IMU) that is attached to the probe, and predicts a guidance signal. The network termed US-GuideNet predicts either the movement towards the standard plane position (goal prediction), or the next movement that an expert sonographer would perform (action prediction). While existing models for other ultrasound applications are trained with simulations or phantoms, we train our model with real-world ultrasound video and probe motion data from 464 routine clinical scans by 17 accredited sonographers. Evaluations for 3 standard plane types show that the model provides a useful guidance signal with an accuracy of 88.8 % for goal prediction and 90.9 % for action prediction.
Collapse
Affiliation(s)
- Richard Droste
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
- Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, UK
| | - Lior Drukker
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
- Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, UK
| | - Aris T Papageorghiou
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
- Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, UK
| | - J Alison Noble
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK
- Nuffield Department of Women's & Reproductive Health, University of Oxford, Oxford, UK
| |
Collapse
|
33
|
Yi J, Kang HK, Kwon JH, Kim KS, Park MH, Seong YK, Kim DW, Ahn B, Ha K, Lee J, Hah Z, Bang WC. Technology trends and applications of deep learning in ultrasonography: image quality enhancement, diagnostic support, and improving workflow efficiency. Ultrasonography 2020; 40:7-22. [PMID: 33152846 PMCID: PMC7758107 DOI: 10.14366/usg.20102] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2020] [Accepted: 09/14/2020] [Indexed: 12/12/2022] Open
Abstract
In this review of the most recent applications of deep learning to ultrasound imaging, the architectures of deep learning networks are briefly explained for the medical imaging applications of classification, detection, segmentation, and generation. Ultrasonography applications for image processing and diagnosis are then reviewed and summarized, along with some representative imaging studies of the breast, thyroid, heart, kidney, liver, and fetal head. Efforts towards workflow enhancement are also reviewed, with an emphasis on view recognition, scanning guidance, image quality assessment, and quantification and measurement. Finally, some future prospects are presented regarding image quality enhancement, diagnostic support, and improvements in workflow efficiency, along with remarks on hurdles, benefits, and necessary collaborations.
Collapse
Affiliation(s)
- Jonghyon Yi
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
| | - Ho Kyung Kang
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
| | - Jae-Hyun Kwon
- DR Imaging R&D Lab, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
| | - Kang-Sik Kim
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
| | - Moon Ho Park
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
| | - Yeong Kyeong Seong
- Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
| | - Dong Woo Kim
- Product Strategy Group, Samsung Medison Co., Ltd., Seongnam, Korea
| | - Byungeun Ahn
- Product Strategy Group, Samsung Medison Co., Ltd., Seongnam, Korea
| | - Kilsu Ha
- Product Strategy Group, Samsung Medison Co., Ltd., Seongnam, Korea
| | - Jinyong Lee
- System R&D Group, Samsung Medison Co., Ltd., Seongnam, Korea
| | - Zaegyoo Hah
- System R&D Group, Samsung Medison Co., Ltd., Seongnam, Korea
| | - Won-Chul Bang
- Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seoul, Korea
- Product Strategy Team, Samsung Medison Co., Ltd., Seoul, Korea
| |
Collapse
|
34
|
Zaffino P, Moccia S, De Momi E, Spadea MF. A Review on Advances in Intra-operative Imaging for Surgery and Therapy: Imagining the Operating Room of the Future. Ann Biomed Eng 2020; 48:2171-2191. [PMID: 32601951 DOI: 10.1007/s10439-020-02553-6] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2020] [Accepted: 06/17/2020] [Indexed: 12/19/2022]
Abstract
With the advent of Minimally Invasive Surgery (MIS), intra-operative imaging has become crucial for surgery and therapy guidance, making it possible to partially compensate for the lack of information typical of MIS. This paper reviews the advancements in both classical (i.e., ultrasound, X-ray, optical coherence tomography, and magnetic resonance imaging) and more recent (i.e., multispectral, photoacoustic, and Raman imaging) intra-operative imaging modalities. Each imaging modality was analyzed, focusing on benefits and disadvantages in terms of compatibility with the operating room, costs, acquisition time, and image characteristics. Tables are included to summarize this information. New-generation hybrid surgical rooms and algorithms for real-time/in-room image processing were also investigated. Each imaging modality has its own (site- and procedure-specific) peculiarities in terms of spatial and temporal resolution, field of view, and contrasted tissues. Besides the benefits that each technique offers for guidance, considerations about operator and patient risk, costs, and extra time required for surgical procedures have to be taken into account. The current trend is to equip surgical rooms with multimodal imaging systems, so as to integrate multiple sources of information for real-time data extraction and computer-assisted processing. The future of surgery is to enhance the surgeon's eye to minimize intra- and post-operative adverse events and provide surgeons with all possible support to objectify and optimize the care-delivery process.
Collapse
Affiliation(s)
- Paolo Zaffino
- Department of Experimental and Clinical Medicine, Università della Magna Graecia, Catanzaro, Italy
| | - Sara Moccia
- Department of Information Engineering (DII), Università Politecnica delle Marche, via Brecce Bianche, 12, 60131, Ancona, AN, Italy.
| | - Elena De Momi
- Department of Electronics, Information and Bioengineering (DEIB), Politecnico di Milano, Piazza Leonardo da Vinci, 32, 20133, Milano, MI, Italy
| | - Maria Francesca Spadea
- Department of Experimental and Clinical Medicine, Università della Magna Graecia, Catanzaro, Italy
| |
Collapse
|
35
|
Carton FX, Chabanas M, Le Lann F, Noble JH. Automatic segmentation of brain tumor resections in intraoperative ultrasound images using U-Net. J Med Imaging (Bellingham) 2020; 7:031503. [PMID: 32090137 PMCID: PMC7026519 DOI: 10.1117/1.jmi.7.3.031503] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2019] [Accepted: 01/17/2020] [Indexed: 11/14/2022] Open
Abstract
To compensate for the intraoperative brain tissue deformation, computer-assisted intervention methods have been used to register preoperative magnetic resonance images with intraoperative images. In order to model the deformation due to tissue resection, the resection cavity needs to be segmented in intraoperative images. We present an automatic method to segment the resection cavity in intraoperative ultrasound (iUS) images. We trained and evaluated two-dimensional (2-D) and three-dimensional (3-D) U-Net networks on two datasets of 37 and 13 cases that contain images acquired from different ultrasound systems. The best overall performing method was the 3-D network, which resulted in a 0.72 mean and 0.88 median Dice score over the whole dataset. The 2-D network also had good results with less computation time, with a median Dice score over 0.8. We also evaluated the sensitivity of network performance to training and testing with images from different ultrasound systems and image field of view. In this application, we found specialized networks to be more accurate for processing similar images than a general network trained with all the data. Overall, promising results were obtained for both datasets using specialized networks. This motivates further studies with additional clinical data, to enable training and validation of a clinically viable deep-learning model for automated delineation of the tumor resection cavity in iUS images.
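A compact 2-D U-Net of the kind evaluated here can be sketched in PyTorch as follows; the depth, channel counts, and normalization choices below are illustrative assumptions rather than the authors' exact architecture:

```python
import torch
import torch.nn as nn

def double_conv(cin, cout):
    """Two 3x3 convolutions with batch norm and ReLU, the basic U-Net block."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )

class TinyUNet2D(nn.Module):
    """Two-level 2-D U-Net for single-channel iUS images, producing one probability mask."""
    def __init__(self, base=16):
        super().__init__()
        self.enc1 = double_conv(1, base)
        self.enc2 = double_conv(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = double_conv(base * 2, base * 4)
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = double_conv(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = double_conv(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return torch.sigmoid(self.head(d1))

# Usage: probability mask for a 256x256 intraoperative ultrasound image
net = TinyUNet2D()
mask = net(torch.randn(1, 1, 256, 256))   # -> (1, 1, 256, 256)
```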
Collapse
Affiliation(s)
- François-Xavier Carton
- University of Grenoble Alpes, CNRS, Grenoble INP, TIMC-IMAG, Grenoble, France
- Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
| | - Matthieu Chabanas
- University of Grenoble Alpes, CNRS, Grenoble INP, TIMC-IMAG, Grenoble, France
- Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
| | - Florian Le Lann
- Grenoble Alpes University Hospital, Department of Neurosurgery, Grenoble, France
| | - Jack H. Noble
- Vanderbilt University, Department of Electrical Engineering and Computer Science, Nashville, Tennessee, United States
| |
Collapse
|
36
|
Jiang Z, Grimm M, Zhou M, Esteban J, Simson W, Zahnd G, Navab N. Automatic Normal Positioning of Robotic Ultrasound Probe Based Only on Confidence Map Optimization and Force Measurement. IEEE Robot Autom Lett 2020. [DOI: 10.1109/lra.2020.2967682] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
37
|
3D Hermite Transform Optical Flow Estimation in Left Ventricle CT Sequences. SENSORS 2020; 20:s20030595. [PMID: 31973153 PMCID: PMC7038175 DOI: 10.3390/s20030595] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/22/2019] [Revised: 12/19/2019] [Accepted: 01/10/2020] [Indexed: 12/23/2022]
Abstract
Heart diseases are the most important causes of death in the world. Over the years, the study of cardiac movement has been carried out mainly in two dimensions; however, it is important to consider that the deformations due to the movement of the heart occur in a three-dimensional space. The 3D + t analysis allows describing most of the motions of the heart, for example, the twisting motion that takes place on every beat cycle, which allows us to identify abnormalities of the heart walls. Therefore, it is necessary to develop algorithms that help specialists understand the cardiac movement. In this work, we developed a new approach to determine the cardiac movement in three dimensions using a differential optical flow approach in which we use the steered Hermite transform (SHT), which allows us to decompose cardiac volumes, taking advantage of it as a model of the human vision system (HVS). Our proposal was tested on complete cardiac computed tomography (CT) volumes (3D + t), as well as on their respective left ventricular segmentations. The robustness to noise was tested with good results. The evaluation of the results was carried out through errors in forward reconstruction, from the volume at time t to time t + 1, using the optical flow obtained (interpolation errors). The parameters were tuned extensively. In the case of the 2D algorithm, the interpolation errors and normalized interpolation errors are very close and below the values reported for ground-truth flows. In the case of the 3D algorithm, the results were compared with another similar method in 3D, and the interpolation errors remained below 0.1. These results of interpolation errors for complete cardiac volumes and the left ventricle are shown graphically for clarity. Finally, a series of graphs shows that the characteristic contraction and dilation of the left ventricle is evident through the representation of the 3D optical flow.
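The interpolation-error evaluation described above can be sketched as follows: the volume at time t is resampled with the estimated 3-D flow and compared with the volume at time t + 1. A backward-mapping flow convention and voxel-unit displacements are assumptions made for illustration:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def interpolation_error(vol_t, vol_t1, flow):
    """Reconstruct volume t+1 from volume t with the estimated 3-D flow
    (flow[..., d] is the displacement along axis d, in voxels, backward mapping)
    and return the mean absolute reconstruction error."""
    grid = np.indices(vol_t.shape).astype(np.float64)      # (3, Z, Y, X) voxel grid
    sample = grid + np.moveaxis(flow, -1, 0)               # displaced sampling positions
    recon = map_coordinates(vol_t, sample, order=1, mode="nearest")
    return float(np.mean(np.abs(recon - vol_t1)))
```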
Collapse
|
38
|
Vercauteren T, Unberath M, Padoy N, Navab N. CAI4CAI: The Rise of Contextual Artificial Intelligence in Computer Assisted Interventions. PROCEEDINGS OF THE IEEE. INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS 2020; 108:198-214. [PMID: 31920208 PMCID: PMC6952279 DOI: 10.1109/jproc.2019.2946993] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/01/2019] [Revised: 09/12/2019] [Accepted: 10/04/2019] [Indexed: 05/10/2023]
Abstract
Data-driven computational approaches have evolved to enable extraction of information from medical images with a reliability, accuracy and speed which is already transforming their interpretation and exploitation in clinical practice. While similar benefits are longed for in the field of interventional imaging, this ambition is challenged by a much higher heterogeneity. Clinical workflows within interventional suites and operating theatres are extremely complex and typically rely on poorly integrated intra-operative devices, sensors, and support infrastructures. Taking stock of some of the most exciting developments in machine learning and artificial intelligence for computer assisted interventions, we highlight the crucial need to take context and human factors into account in order to address these challenges. Contextual artificial intelligence for computer assisted intervention, or CAI4CAI, arises as an emerging opportunity feeding into the broader field of surgical data science. Central challenges being addressed in CAI4CAI include how to integrate the ensemble of prior knowledge and instantaneous sensory information from experts, sensors and actuators; how to create and communicate a faithful and actionable shared representation of the surgery among a mixed human-AI actor team; how to design interventional systems and associated cognitive shared control schemes for online uncertainty-aware collaborative decision making ultimately producing more precise and reliable interventions.
Collapse
Affiliation(s)
- Tom Vercauteren
- School of Biomedical Engineering & Imaging Sciences, King’s College London, London WC2R 2LS, U.K.
| | - Mathias Unberath
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA
| | - Nicolas Padoy
- ICube Institute, CNRS, IHU Strasbourg, University of Strasbourg, 67081 Strasbourg, France
| | - Nassir Navab
- Fakultät für Informatik, Technische Universität München, 80333 Munich, Germany
| |
Collapse
|
39
|
Zheng M, Szabo TL, Mohamadi A, Snyder BD. Long-Duration Tracking of Cervical-Spine Kinematics With Ultrasound. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2019; 66:1699-1707. [PMID: 31484114 DOI: 10.1109/tuffc.2019.2928184] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Cervical-spine (C-spine) pathoanatomy is commonly evaluated by plane radiographs, computed tomography (CT), or magnetic resonance imaging (MRI); however, these modalities are unable to directly measure the dynamic mechanical properties of the functional spinal units (FSU) comprising the C-spine that account for its functional performance. We have developed an ultrasound-based technique that provides a non-invasive, real-time, quantitative, in vivo assessment of C-spine kinematics and FSU viscoelastic properties. The fidelity of the derived measurements is predicated on accurate tracking of vertebral motion over a prolonged time duration. The purpose of this work was to present a bundle adjustment method that enables accurate tracking of the relative motion of contiguous cervical vertebrae from ultrasound radio-frequency data. The tracking method was validated using both a plastic anatomical model of a cervical vertebra undergoing prescribed displacements and also human cadaveric C-spine specimens subjected to physiologically relevant loading configurations. While the velocity of motion and thickness of the surrounding soft tissue envelope affected accuracy, using the bundle adjustment method, B-mode ultrasound was capable of accurately tracking vertebral motion under clinically relevant physiologic conditions. Therefore, B-mode ultrasound can be used to evaluate in vivo real-time C-spine kinematics and FSU mechanical properties in environments where radiographs, CT, or MRI cannot be used.
Collapse
|
40
|
Malathi M, Sinthia P. Brain Tumour Segmentation Using Convolutional Neural Network with Tensor Flow. Asian Pac J Cancer Prev 2019; 20:2095-2101. [PMID: 31350971 PMCID: PMC6745230 DOI: 10.31557/apjcp.2019.20.7.2095] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2019] [Accepted: 07/09/2019] [Indexed: 11/25/2022] Open
Abstract
Introduction: The determination of tumour extent is a major challenging task in brain tumour planning and quantitative evaluation. Magnetic Resonance Imaging (MRI) is a non-invasive technique that has emerged as a front-line diagnostic tool for brain tumours without ionizing radiation. Objective: Among brain tumours, gliomas are the most common and aggressive, leading to a very short life expectancy in their highest grade. In clinical practice, manual segmentation is a time-consuming task, and its performance is highly dependent on the operator’s experience. Methods: This paper proposes fully automatic segmentation of brain tumours using a convolutional neural network. Further, it uses high-grade glioma brain images from the BRATS 2015 database. The suggested work accomplishes brain tumour segmentation using TensorFlow, in which the Anaconda framework is used to implement high-level mathematical functions. The survival rates of patients are improved by early diagnosis of brain tumours. Results: Hence, the research work segments brain tumours into four classes: edema, non-enhancing tumour, enhancing tumour, and necrotic tumour. Brain tumour segmentation needs to separate healthy tissue from tumour regions such as advancing tumour, necrotic core, and surrounding edema. This is an essential step in diagnosis and treatment planning, both of which need to take place quickly in case of a malignancy in order to maximize the likelihood of successful treatment.
Collapse
Affiliation(s)
- M Malathi
- Saveetha Engineering College, Chennai, India.
| | - P Sinthia
- Saveetha Engineering College, Chennai, India.
| |
Collapse
|
41
|
Evaluation of a novel tomographic ultrasound device for abdominal examinations. PLoS One 2019; 14:e0218754. [PMID: 31242250 PMCID: PMC6594674 DOI: 10.1371/journal.pone.0218754] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2019] [Accepted: 06/09/2019] [Indexed: 11/24/2022] Open
Abstract
Conventional ultrasound (US) is the first-line imaging method for abdominal pathologies, but its diagnostic accuracy is operator-dependent, and data storage is usually limited to two-dimensional images. A novel tomographic US system (Curefab CS, Munich, Germany) processes imaging data combined with three-dimensional spatial information obtained from magnetic field tracking. This enables standardized image presentation in axial planes and a review of the entire examination. The applicability and diagnostic performance of this tomographic US approach was analyzed in an abdominal setting using conventional US as reference. Tomographic US data were successfully compiled in all subjects of a training cohort (20 healthy volunteers) and in 50 patients with abdominal lesions. Image quality (35% and 79% for the training and patient cohorts, respectively) and completeness of organ visualization (45% and 44%) were frequently impaired in tomographic US compared to conventional US. Conventional and tomographic US showed good agreement for measurement of organ sizes in the training cohort (right liver lobe and both kidneys with a median deviation of 5%). In the patient cohort, tomographic US identified 57 of 74 hepatic or renal lesions detected by conventional ultrasound (sensitivity 77%). In conclusion, this study illustrates the diagnostic potential of abdominal tomographic US, but the current significant limitations of the tomographic ultrasound device demand further technical improvements before this and comparable approaches can be implemented in clinical practice.
Collapse
|