1. Grube S, Latus S, Behrendt F, Riabova O, Neidhardt M, Schlaefer A. Needle tracking in low-resolution ultrasound volumes using deep learning. Int J Comput Assist Radiol Surg 2024;19:1975-1981. PMID: 39002100; PMCID: PMC11442564; DOI: 10.1007/s11548-024-03234-8.
Abstract
PURPOSE: Clinical needle insertion into tissue, commonly assisted by 2D ultrasound imaging for real-time navigation, faces the challenge of precisely aligning the needle and the probe to reduce out-of-plane movement. Recent studies investigate 3D ultrasound imaging together with deep learning to overcome this problem, focusing on acquiring high-resolution images to create optimal conditions for needle tip detection. However, high resolution also requires considerable time for image acquisition and processing, which limits real-time capability. We therefore aim to maximize the ultrasound volume rate, accepting low image resolution as the trade-off, and propose a deep learning approach to extract the 3D needle tip position directly from sparsely sampled ultrasound volumes. METHODS: We design an experimental setup with a robot inserting a needle into water and chicken liver tissue. Instead of manual annotation, we derive the needle tip position from the known robot pose. During insertion, we acquire a large data set of low-resolution volumes using a 16 × 16 element matrix transducer at a volume rate of 4 Hz, and compare the performance of our deep learning approach with conventional needle segmentation. RESULTS: Our experiments in water and liver show that deep learning outperforms the conventional approach while achieving sub-millimeter accuracy, with mean position errors of 0.54 mm in water and 1.54 mm in liver. CONCLUSION: Our study underlines the strength of deep learning in predicting 3D needle positions from low-resolution ultrasound volumes. This is an important milestone for real-time needle navigation, simplifying the alignment of needle and ultrasound probe and enabling 3D motion analysis.
Affiliation(s)
- Sarah Grube, Sarah Latus, Finn Behrendt, Oleksandra Riabova, Maximilian Neidhardt, Alexander Schlaefer: Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, Hamburg, Germany
2. Bengs M, Sprenger J, Gerlach S, Neidhardt M, Schlaefer A. Real-Time Motion Analysis With 4D Deep Learning for Ultrasound-Guided Radiotherapy. IEEE Trans Biomed Eng 2023;70:2690-2699. PMID: 37030809; DOI: 10.1109/tbme.2023.3262422.
Abstract
Motion compensation in radiation therapy is a challenging scenario that requires estimating and forecasting the motion of tissue structures to deliver the target dose. Ultrasound offers direct, real-time imaging of tissue and is considered for image guidance in radiation therapy. Recently, fast volumetric ultrasound has gained traction, but motion analysis with such high-dimensional data remains difficult. While deep learning could bring many advantages, such as fast data processing and high performance, it remains unclear how to process sequences of hundreds of image volumes efficiently and effectively. We present a 4D deep learning approach for real-time motion estimation and forecasting using long-term 4D ultrasound data. Using motion traces acquired during radiation therapy combined with various tissue types, our results demonstrate that long-term motion estimation can be performed without markers, with a tracking error of 0.35 ± 0.2 mm and an inference time of less than 5 ms. We also demonstrate forecasting directly from the image data up to 900 ms into the future. Overall, our findings highlight that 4D deep learning is a promising approach for motion analysis during radiotherapy.
3. Secoli R, Matheson E, Pinzi M, Galvan S, Donder A, Watts T, Riva M, Zani DD, Bello L, Rodriguez y Baena F. Modular robotic platform for precision neurosurgery with a bio-inspired needle: System overview and first in-vivo deployment. PLoS One 2022;17:e0275686. PMID: 36260553; PMCID: PMC9581417; DOI: 10.1371/journal.pone.0275686.
Abstract
Over the past 10 years, minimally invasive surgery (MIS) has shown significant benefits compared to conventional surgical techniques, with reduced trauma, shorter hospital stays, and shorter patient recovery times. In neurosurgical MIS procedures, inserting a straight tool (e.g. a catheter) is common practice in applications ranging from biopsy and laser ablation to drug delivery and fluid evacuation. Handling tissue deformation, target migration, and access to deep-seated anatomical structures remain open challenges, affecting both the preoperative planning phase and the eventual surgical intervention. Here, we present the first neurosurgical platform in the literature able to deliver an implantable steerable needle for a range of diagnostic and therapeutic applications, with a short-term focus on localised drug delivery. This work presents the system's architecture and its first in vivo deployment, with an optimised surgical workflow designed for pre-clinical trials in the ovine model, which demonstrated appropriate function and safe implantation.
Affiliation(s)
- Riccardo Secoli, Eloise Matheson, Marlene Pinzi, Stefano Galvan, Abdulhamit Donder, Thomas Watts, Ferdinando Rodriguez y Baena: The Mechatronics in Medicine Lab, Department of Mechanical Engineering, Imperial College London, London, United Kingdom
- Marco Riva: Department of Biomedical Sciences, Humanitas University, Milan, Italy; Istituto di Ricovero e Cura a Carattere Scientifico Humanitas Research Hospital Rozzano, Rozzano, Italy
- Davide Danilo Zani: Department of Veterinary Medicine, Università degli Studi di Milano, Lodi, Italy
- Lorenzo Bello: Department of Oncology and Hematology-Oncology, Università degli Studi di Milano, Milan, Italy
4. Sprenger J, Bengs M, Gerlach S, Neidhardt M, Schlaefer A. Systematic analysis of volumetric ultrasound parameters for markerless 4D motion tracking. Int J Comput Assist Radiol Surg 2022;17:2131-2139. PMID: 35597846; PMCID: PMC9515030; DOI: 10.1007/s11548-022-02665-5.
Abstract
OBJECTIVES: Motion compensation is an interesting approach to improve treatments of moving structures. For example, target motion can substantially affect dose delivery in radiation therapy, where methods to detect and mitigate the motion are widely used. Recent advances in fast, volumetric ultrasound have rekindled interest in ultrasound for motion tracking. We present a setup to evaluate ultrasound-based motion tracking, and we study the effect of imaging rate and motion artifacts on its performance. METHODS: We describe an experimental setup to acquire markerless 4D ultrasound data with precise ground truth from a robot and evaluate different real-world trajectories and system settings toward accurate motion estimation. We analyze motion artifacts in continuously acquired data by comparing to data recorded in a step-and-shoot fashion. Furthermore, we investigate the trade-off between imaging frequency and resolution. RESULTS: The mean tracking errors show that continuously acquired data leads to results similar to those of data acquired in a step-and-shoot fashion. We report mean tracking errors of up to 2.01 mm and 1.36 mm on the continuous data for the lower and higher resolution, respectively, while step-and-shoot data leads to mean tracking errors of 2.52 mm and 0.98 mm. CONCLUSIONS: We perform a quantitative analysis of different system settings for motion tracking with 4D ultrasound. We show that precise tracking is feasible and that the additional motion in continuously acquired data does not impair the tracking. Moreover, the analysis of the frequency-resolution trade-off shows that a high imaging resolution is beneficial in ultrasound tracking.
Affiliation(s)
- Johanna Sprenger, Marcel Bengs, Stefan Gerlach, Maximilian Neidhardt, Alexander Schlaefer: Institute of Medical Technology and Intelligent Systems, Hamburg University of Technology, Hamburg, Germany
5. Jamal A, Yuan T, Galvan S, Castellano A, Riva M, Secoli R, Falini A, Bello L, Rodriguez y Baena F, Dini D. Insights into Infusion-Based Targeted Drug Delivery in the Brain: Perspectives, Challenges and Opportunities. Int J Mol Sci 2022;23:3139. PMID: 35328558; PMCID: PMC8949870; DOI: 10.3390/ijms23063139.
Abstract
Targeted drug delivery in the brain is instrumental in the treatment of lethal brain diseases, such as glioblastoma multiforme, the most aggressive primary central nervous system tumour in adults. Infusion-based drug delivery techniques, which administer drugs directly into the tissue for local treatment, as in convection-enhanced delivery (CED), provide an important opportunity; however, poor understanding of the pressure-driven drug transport mechanisms in the brain has hindered their ultimate success in clinical applications. In this review, we focus on the biomechanical and biochemical aspects of infusion-based targeted drug delivery in the brain and look into the underlying molecular-level mechanisms. We discuss recent advances and challenges in the complementary field of medical robotics and its use in targeted drug delivery in the brain. A critical overview of current research in these areas and their clinical implications is provided. This review delivers new ideas and perspectives for further studies of targeted drug delivery in the brain.
Affiliation(s)
- Asad Jamal, Tian Yuan, Stefano Galvan, Riccardo Secoli, Ferdinando Rodriguez y Baena, Daniele Dini: Department of Mechanical Engineering, Imperial College London, London SW7 2AZ, UK
- Antonella Castellano, Andrea Falini: Vita-Salute San Raffaele University, 20132 Milan, Italy; Neuroradiology Unit and CERMAC, IRCCS Ospedale San Raffaele, 20132 Milan, Italy
- Marco Riva: Department of Medical Biotechnology and Translational Medicine, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Lorenzo Bello: Department of Oncology and Hematology-Oncology, Università degli Studi di Milano, 20122 Milan, Italy
6. Delaunay R, Hu Y, Vercauteren T. An unsupervised learning approach to ultrasound strain elastography with spatio-temporal consistency. Phys Med Biol 2021;66. PMID: 34298531; PMCID: PMC8417818; DOI: 10.1088/1361-6560/ac176a.
Abstract
Quasi-static ultrasound elastography (USE) is an imaging modality that measures the deformation (i.e. strain) of tissue in response to an applied mechanical force. In USE, the strain is traditionally obtained by differentiating the displacement field estimated between a pair of radio-frequency frames. In this work we propose a recurrent network architecture with convolutional long short-term memory decoder blocks to improve displacement estimation and spatio-temporal continuity between time-series ultrasound frames. The network is trained in an unsupervised way, by optimising a similarity metric between the reference and compressed image. Our training loss also comprises a regularisation term that preserves displacement continuity by directly optimising the strain smoothness, and a temporal continuity term that enforces consistency between successive strain predictions. In addition, we propose an open-access in vivo database for quasi-static USE, which consists of radio-frequency data sequences captured on the arm of a human volunteer. Our results from numerical simulation and in vivo data suggest that our recurrent neural network can account for larger deformations than two other feed-forward neural networks. In all experiments, our recurrent network outperformed the state of the art for both learning-based and optimisation-based methods in terms of elastographic signal-to-noise ratio, strain consistency, and image similarity. Finally, our open-source code provides a 3D Slicer visualisation module that can be used to process ultrasound RF frames in real time, at a rate of up to 20 frames per second, using a standard GPU.
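As background for the displacement-to-strain step mentioned above, axial strain is the spatial derivative of the estimated axial displacement. The following is a minimal NumPy illustration under assumed conditions (a displacement map on a uniform grid, with hypothetical names and values); it is not the paper's learning-based estimator:

```python
import numpy as np

def axial_strain(displacement, dz):
    """Axial strain as the axial gradient of the displacement field.

    displacement : (n_axial, n_lateral) axial displacement map [mm]
    dz           : axial sample spacing [mm]
    Uses central differences in the interior, one-sided at the borders.
    """
    return np.gradient(displacement, dz, axis=0)

# Illustrative uniform compression: displacement grows linearly with depth,
# so the strain is constant (1%) over the whole map.
depth = np.arange(0, 10.0, 0.5)              # axial sample positions [mm]
disp = np.outer(0.01 * depth, np.ones(4))    # 4 lateral lines, 1% compression
strain = axial_strain(disp, 0.5)             # constant 0.01 everywhere
```

In practice the displacement field is noisy, which is why the paper's smoothness regularisation on the strain matters; this sketch only shows the differentiation step itself.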
Affiliation(s)
- Rémi Delaunay, Tom Vercauteren: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower Street, London WC1E 6BT, United Kingdom; School of Biomedical Engineering & Imaging Sciences, King's College London, Strand, London WC2R 2LS, United Kingdom
- Yipeng Hu: Wellcome/EPSRC Centre for Interventional and Surgical Sciences, University College London, Gower Street, London WC1E 6BT, United Kingdom
7. Stuart MB, Jensen PM, Olsen JTR, Kristensen AB, Schou M, Dammann B, Sorensen HHB, Jensen JA. Real-Time Volumetric Synthetic Aperture Software Beamforming of Row-Column Probe Data. IEEE Trans Ultrason Ferroelectr Freq Control 2021;68:2608-2618. PMID: 33830920; DOI: 10.1109/tuffc.2021.3071810.
Abstract
Two delay-and-sum beamformers for 3-D synthetic aperture imaging with row-column addressed arrays are presented. Both beamformers are software implementations for graphics processing unit (GPU) execution with dynamic apodizations and third-order polynomial subsample interpolation. The first beamformer was written in the MATLAB programming language and the second in C/C++ with the compute unified device architecture (CUDA) extensions by NVIDIA. Performance was measured as volume rate and sample throughput on three different GPUs: a 1050 Ti, a 1080 Ti, and a TITAN V. The beamformers were evaluated across 112 combinations of output geometry, depth range, transducer array size, number of virtual sources, floating-point precision, and Nyquist-rate or in-phase/quadrature beamforming using analytic signals. Real-time imaging, defined as more than 30 volumes per second, was attained by the CUDA beamformer on the three GPUs for 13, 27, and 43 setups, respectively. The MATLAB beamformer did not attain real-time imaging for any setup. The median single-precision sample throughput of the CUDA beamformer was 4.9, 20.8, and 33.5 Gsamples/s on the three GPUs, respectively, an order of magnitude higher than that of the MATLAB beamformer.
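The delay-and-sum principle underlying the beamformers above can be sketched compactly. The following NumPy sketch is illustrative only: it assumes a plane-wave transmit (so the transmit delay is simply depth over sound speed), uses nearest-sample lookup instead of the paper's third-order polynomial subsample interpolation, omits apodization, and all names and parameters are hypothetical:

```python
import numpy as np

def das_beamform(rf, element_x, focus_x, focus_z, fs, c=1540.0):
    """Beamform one focal point: coherently sum channels at geometric delays.

    rf        : (n_elements, n_samples) receive channel data
    element_x : (n_elements,) lateral element positions [m]
    focus_x, focus_z : focal point coordinates [m]
    fs        : sampling rate [Hz]; c : speed of sound [m/s]
    """
    t_tx = focus_z / c                              # transmit travel time [s]
    d_rx = np.hypot(element_x - focus_x, focus_z)   # receive path lengths [m]
    idx = ((t_tx + d_rx / c) * fs).astype(int)      # round-trip delay [samples]
    valid = idx < rf.shape[1]                       # drop out-of-range channels
    return rf[np.nonzero(valid)[0], idx[valid]].sum()
```

A real implementation evaluates this for every voxel of the output volume with dynamic apodization weights and subsample interpolation, which is exactly the per-sample work the paper's GPU kernels parallelize.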
8. Khan C, Dei K, Schlunk S, Ozgun K, Byram B. A Real-Time, GPU-Based Implementation of Aperture Domain Model Image REconstruction. IEEE Trans Ultrason Ferroelectr Freq Control 2021;68:2101-2116. PMID: 33531299; PMCID: PMC8532145; DOI: 10.1109/tuffc.2021.3056334.
Abstract
Multipath and off-axis scattering are two of the primary mechanisms for ultrasound image degradation. To address their impact, we have proposed Aperture Domain Model Image REconstruction (ADMIRE). This algorithm utilizes a model-based approach in order to identify and suppress sources of acoustic clutter. The ability of ADMIRE to suppress clutter and improve image quality has been demonstrated in previous works, but its use for real-time imaging has been infeasible due to its significant computational requirements. However, in recent years, the use of graphics processing units (GPUs) for general-purpose computing has enabled the significant acceleration of compute-intensive algorithms. This is because many modern GPUs have thousands of computational cores that can be utilized to perform massively parallel processing. Therefore, in this work, we have developed a GPU-based implementation of ADMIRE. The implementation on a single GPU provides a speedup of two orders of magnitude when compared to a serial CPU implementation, and additional speedup is achieved when the computations are distributed across two GPUs. In addition, we demonstrate the feasibility of the GPU implementation to be used for real-time imaging by interfacing it with a Verasonics Vantage 128 ultrasound research system. Moreover, we show that other beamforming techniques, such as delay-and-sum (DAS) and short-lag spatial coherence (SLSC), can be computed and simultaneously displayed with ADMIRE. The frame rate depends upon various parameters, and this is exhibited in the multiple imaging cases that are presented. An open-source code repository containing CPU and GPU implementations of ADMIRE is also provided.
9. Pinzi M, Vakharia VN, Hwang BY, Anderson WS, Duncan JS, Baena FRY. Computer Assisted Planning for Curved Laser Interstitial Thermal Therapy. IEEE Trans Biomed Eng 2021;68:2957-2964. PMID: 33534700; DOI: 10.1109/tbme.2021.3056749.
Abstract
Laser interstitial thermal therapy (LiTT) is a minimally invasive alternative to conventional open surgery for drug-resistant focal mesial temporal lobe epilepsy (MTLE). Recent studies suggest that higher seizure-freedom rates are correlated with maximal ablation of the mesial hippocampal head, whilst sparing the parahippocampal gyrus (PHG) may reduce neuropsychological sequelae. Current commercially available laser catheters are inserted along manually planned straight-line trajectories, which cannot conform to curved brain structures, such as the hippocampus, without causing collateral damage or requiring multiple insertions. The clinical feasibility and potential of curved LiTT trajectories through steerable needles has yet to be investigated; this is the focus of our work. We propose a GPU-accelerated computer-assisted planning (CAP) algorithm for steerable needle insertions that generates optimized curved 3D trajectories with maximal ablation of the amygdalohippocampal complex and minimal collateral damage to nearby structures, while accounting for a variable ablation diameter (5-15 mm). Simulated trajectories and ablations were performed on 5 patients with mesial temporal sclerosis (MTS), identified from a prospectively managed database. The algorithm generated obstacle-free paths with significantly greater target-area ablation coverage and lower PHG ablation variance compared to straight-line trajectories. The presented CAP algorithm returns increased ablation of the amygdalohippocampal complex, with lower patient risk scores compared to straight-line trajectories. This is the first clinical application of preoperative planning for steerable-needle-based LiTT. This study suggests that steerable needles have the potential to improve the efficacy of LiTT procedures whilst improving safety, and should thus be investigated further.
10. Tong Y, Lu W, Yu Y, Shen Y. Application of machine learning in ophthalmic imaging modalities. Eye and Vision 2020;7:22. PMID: 32322599; PMCID: PMC7160952; DOI: 10.1186/s40662-020-00183-6.
Abstract
In clinical ophthalmology, a variety of image-related diagnostic techniques have begun to offer unprecedented insights into eye diseases based on morphological datasets with millions of data points. Artificial intelligence (AI), inspired by the human multilayered neuronal system, has shown astonishing success within some visual and auditory recognition tasks. In these tasks, AI can analyze digital data in a comprehensive, rapid and non-invasive manner. Bioinformatics has become a focus particularly in the field of medical imaging, where it is driven by enhanced computing power and cloud storage, as well as utilization of novel algorithms and generation of data in massive quantities. Machine learning (ML) is an important branch in the field of AI. The overall potential of ML to automatically pinpoint, identify and grade pathological features in ocular diseases will empower ophthalmologists to provide high-quality diagnosis and facilitate personalized health care in the near future. This review offers perspectives on the origin, development, and applications of ML technology, particularly regarding its applications in ophthalmic imaging modalities.
Affiliation(s)
- Yan Tong, Wei Lu, Yue Yu: Eye Center, Renmin Hospital of Wuhan University, Wuhan, 430060 Hubei, China
- Yin Shen: Eye Center, Renmin Hospital of Wuhan University, Wuhan, 430060 Hubei, China; Medical Research Institute, Wuhan University, Wuhan, Hubei, China
11. Evaluation of a novel tomographic ultrasound device for abdominal examinations. PLoS One 2019;14:e0218754. PMID: 31242250; PMCID: PMC6594674; DOI: 10.1371/journal.pone.0218754.
Abstract
Conventional ultrasound (US) is the first-line imaging method for abdominal pathologies, but its diagnostic accuracy is operator-dependent, and data storage is usually limited to two-dimensional images. A novel tomographic US system (Curefab CS, Munich, Germany) processes imaging data combined with three-dimensional spatial information using magnetic field tracking. This enables standardized image presentation in axial planes and a review of the entire examination. The applicability and diagnostic performance of this tomographic US approach were analyzed in an abdominal setting using conventional US as reference. Tomographic US data were successfully compiled in all subjects of a training cohort (20 healthy volunteers) and in 50 patients with abdominal lesions. Image quality (35% and 79% for the training and patient cohorts, respectively) and completeness of organ visualization (45% and 44%) were frequently impaired in tomographic US compared to conventional US. Conventional and tomographic US showed good agreement for measurement of organ sizes in the training cohort (right liver lobe and both kidneys with a median deviation of 5%). In the patient cohort, tomographic US identified 57 of 74 hepatic or renal lesions detected by conventional ultrasound (sensitivity 77%). In conclusion, this study illustrates the diagnostic potential of abdominal tomographic US, but the current significant limitations of the tomographic ultrasound device demand further technical improvements before this and comparable approaches can be implemented in clinical practice.
12. Hyun D, Crowley ALC, LeFevre M, Cleve J, Rosenberg J, Dahl JJ. Improved Visualization in Difficult-to-Image Stress Echocardiography Patients Using Real-Time Harmonic Spatial Coherence Imaging. IEEE Trans Ultrason Ferroelectr Freq Control 2019;66:433-441. PMID: 30530322; PMCID: PMC7012506; DOI: 10.1109/tuffc.2018.2885777.
Abstract
Stress echocardiography is used to detect myocardial ischemia by evaluating cardiovascular function both at rest and at elevated heart rates. Stress echocardiography requires excellent visualization of the left ventricle (LV) throughout the cardiac cycle. However, LV endocardial border visualization is often negatively impacted by high levels of clutter associated with patient obesity, which has risen dramatically worldwide in recent decades. Short-lag spatial coherence (SLSC) imaging has demonstrated reduced clutter in several applications. In this work, a computationally efficient formulation of SLSC was implemented in an object-oriented, graphics processing unit-based software beamformer, enabling real-time (>30 frames per second) SLSC echocardiography on a research ultrasound scanner. The system was then used to image 15 difficult-to-image stress echocardiography patients in a comparison study of tissue harmonic imaging (THI) and harmonic spatial coherence imaging (HSCI). Video clips of four standard stress echocardiography views acquired with either THI or HSCI were provided in random shuffled order to three experienced readers. Each reader rated the visibility of 17 LV segments as "invisible," "suboptimally visualized," or "well visualized," with the first two categories indicating a need for contrast agent. In a symmetry test unadjusted for patientwise clustering, HSCI demonstrated a clear superiority over THI. When measured on a per-patient basis, the median total score significantly favored HSCI. When collapsing the ratings to a two-level scale ("needs contrast" versus "well visualized"), HSCI once again showed an overall superiority over THI by a McNemar test adjusted for clustering.
13. Teikari P, Najjar RP, Schmetterer L, Milea D. Embedded deep learning in ophthalmology: making ophthalmic imaging smarter. Ther Adv Ophthalmol 2019;11:2515841419827172. PMID: 30911733; PMCID: PMC6425531; DOI: 10.1177/2515841419827172.
Abstract
Deep learning has recently gained high interest in ophthalmology due to its ability to detect clinically significant features for diagnosis and prognosis. Despite these significant advances, little is known about the ability of various deep learning systems to be embedded within ophthalmic imaging devices, allowing automated image acquisition. In this work, we review existing and future directions for 'active acquisition'-embedded deep learning, leading to high-quality images with little intervention by the human operator. In clinical practice, the improved image quality should translate into more robust deep learning-based clinical diagnostics. Embedded deep learning will be enabled by constantly improving hardware performance at low cost. We briefly review possible computation methods in larger clinical systems. Briefly, they can be included in a three-layer framework composed of edge, fog, and cloud layers, the former being performed at the device level. Improved edge-layer performance via 'active acquisition' serves as an automatic data curation operator, translating to better-quality data in electronic health records, as well as on the cloud layer, for improved deep learning-based clinical data mining.
Affiliation(s)
- Petteri Teikari: Visual Neurosciences Group, Singapore Eye Research Institute, Singapore; Advanced Ocular Imaging, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Raymond P. Najjar: Visual Neurosciences Group, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, National University of Singapore, Singapore
- Leopold Schmetterer: Visual Neurosciences Group, Singapore Eye Research Institute, Singapore; Advanced Ocular Imaging, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria; Christian Doppler Laboratory for Ocular and Dermal Effects of Thiomers, Medical University of Vienna, Vienna, Austria
- Dan Milea: Visual Neurosciences Group, Singapore Eye Research Institute, Singapore; Ophthalmology and Visual Sciences Academic Clinical Program, Duke-NUS Medical School, National University of Singapore, Singapore; Neuro-Ophthalmology Department, Singapore National Eye Centre, Singapore