1. Zhang J, Bell MAL. Overfit detection method for deep neural networks trained to beamform ultrasound images. Ultrasonics 2025;148:107562. PMID: 39746284; PMCID: PMC11839378; DOI: 10.1016/j.ultras.2024.107562. Received 10/20/2024; revised 12/18/2024; accepted 12/20/2024.
Abstract
Deep neural networks (DNNs) have remarkable potential to reconstruct ultrasound images. However, this promise can suffer from overfitting to training data, which is typically detected via loss function monitoring during an otherwise time-consuming training process or via access to new sources of test data. We present a method to detect overfitting with associated evaluation approaches that only require knowledge of a network architecture and associated trained weights. Three types of artificial DNN inputs (i.e., zeros, ones, and Gaussian noise), unseen during DNN training, were input to three DNNs designed for ultrasound image formation, trained on multi-site data, and submitted to the Challenge on Ultrasound Beamforming with Deep Learning (CUBDL). Overfitting was detected using these artificial DNN inputs. Qualitative and quantitative comparisons of DNN-created images to ground truth images immediately revealed signs of overfitting (e.g., zeros input produced mean output values ≥0.08, ones input produced mean output values ≤0.07, with corresponding image-to-image normalized correlations ≤0.8). The proposed approach is promising to detect overfitting without requiring lengthy network retraining or the curation of additional test data. Potential applications include sanity checks during federated learning, as well as optimization, security, public policy, regulation creation, and benchmarking.
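The probing scheme this abstract describes can be sketched generically. The zeros/ones thresholds below are the values quoted in the abstract; the model callable and input shape are placeholders, not the paper's actual networks:

```python
import numpy as np

def overfit_probe(model, input_shape, zeros_thresh=0.08, ones_thresh=0.07, seed=0):
    """Feed artificial inputs (zeros, ones, Gaussian noise) to a trained
    beamforming model and report mean absolute outputs. The thresholds
    mirror the heuristics quoted in the abstract; `model` is any callable
    mapping an input array to an output array (hypothetical here)."""
    rng = np.random.default_rng(seed)
    probes = {
        "zeros": np.zeros(input_shape),
        "ones": np.ones(input_shape),
        "noise": rng.standard_normal(input_shape),
    }
    means = {name: float(np.mean(np.abs(model(x)))) for name, x in probes.items()}
    suspicious = {
        "zeros": means["zeros"] >= zeros_thresh,  # zeros input should map near zero
        "ones": means["ones"] <= ones_thresh,     # ones input should not collapse
    }
    return means, suspicious

# Toy stand-in "network": the identity map, which raises no flags.
means, flags = overfit_probe(lambda x: x, (16, 16))
```

The appeal of the approach is visible even in this toy: no retraining, no test data, only the trained weights and a forward pass.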
Affiliation(s)
- Jiaxin Zhang: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Muyinatu A Lediju Bell: Department of Electrical and Computer Engineering; Department of Biomedical Engineering; Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
2. Xiao D, Yu ACH. Beamforming-integrated neural networks for ultrasound imaging. Ultrasonics 2025;145:107474. PMID: 39378772; DOI: 10.1016/j.ultras.2024.107474. Received 06/19/2024; revised 08/18/2024; accepted 09/13/2024.
Abstract
Sparse matrix beamforming (SMB) is a computationally efficient reformulation of delay-and-sum (DAS) beamforming as a single sparse matrix multiplication. This reformulation can dovetail with machine learning platforms like TensorFlow and PyTorch that already support sparse matrix operations. In this work, using SMB principles, we present the development of beamforming-integrated neural networks (BINNs) that can rationally infer ultrasound images directly from pre-beamforming channel-domain radiofrequency (RF) datasets. To demonstrate feasibility, a toy BINN was first designed with two 2D convolution layers, placed before and after an SMB layer, respectively. This toy BINN correctly updated kernel weights in all convolution layers and was efficient in both training (PyTorch: 133 ms; TensorFlow: 22 ms) and inference (PyTorch: 4 ms; TensorFlow: 5 ms). As an application demonstration, another BINN with two RF-domain convolution layers, an SMB layer, and three image-domain convolution layers was designed to infer high-quality B-mode images in vivo from single-shot plane-wave channel RF data. When trained using 31-angle compounded plane wave images (3000 frames from 22 human volunteers), this BINN showed mean-square logarithmic error improvements of 21.3% and 431% in inferred B-mode image quality compared with an image-to-image convolutional neural network (CNN) and an RF-to-image CNN, respectively, each with the same number of layers and learnable parameters (3,777). Overall, by including an SMB layer that incorporates prior knowledge of DAS beamforming, BINN shows potential as a new type of informed machine learning framework for ultrasound imaging.
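The core SMB idea, DAS as one matrix multiplication, can be illustrated with a toy example. The delay table and uniform apodization here are hypothetical, and the matrix is kept dense only for readability:

```python
import numpy as np

def build_das_matrix(delays_samples, n_samples):
    """Build a matrix S so that image = S @ rf.ravel() performs DAS
    beamforming in one multiplication. Shown dense for clarity; in
    practice S is stored in a sparse format (e.g., CSR), which is what
    makes the reformulation efficient. Delays are integer round-trip
    sample indices from a hypothetical geometry; apodization is uniform."""
    n_pixels, n_channels = delays_samples.shape
    S = np.zeros((n_pixels, n_channels * n_samples))
    for p in range(n_pixels):
        for c in range(n_channels):
            d = delays_samples[p, c]
            if 0 <= d < n_samples:
                S[p, c * n_samples + d] = 1.0 / n_channels
    return S

# Toy example: 2 pixels, 2 channels, 4 samples per channel.
delays = np.array([[0, 1], [2, 3]])
rf = np.arange(8, dtype=float).reshape(2, 4)  # rf[channel, sample]
S = build_das_matrix(delays, n_samples=4)
img = S @ rf.ravel()
```

Because S is fixed by geometry, it can sit inside an autodiff graph as a linear layer, which is what lets convolution layers be trained on both sides of it.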
Affiliation(s)
- Di Xiao: Schlegel-UW Research Institute for Aging, University of Waterloo, Waterloo, Canada
- Alfred C H Yu: Schlegel-UW Research Institute for Aging, University of Waterloo, Waterloo, Canada
3. Cho H, Park S, Kang J, Yoo Y. Deep coherence learning: An unsupervised deep beamformer for high quality single plane wave imaging in medical ultrasound. Ultrasonics 2024;143:107408. PMID: 39094387; DOI: 10.1016/j.ultras.2024.107408. Received 11/17/2023; revised 07/16/2024; accepted 07/17/2024.
Abstract
Plane wave imaging (PWI) in medical ultrasound is becoming an important reconstruction method with high frame rates and new clinical applications. Recently, single PWI based on deep learning (DL) has been studied to overcome the lowered frame rates of traditional PWI with multiple PW transmissions. However, due to the lack of appropriate ground truth images, improving the performance of DL-based PWI remains challenging. To address this issue, we propose a new unsupervised learning approach, a deep coherence learning (DCL)-based DL beamformer (DL-DCL), for high-quality single PWI. In DL-DCL, the DL network is trained to predict highly correlated signals with a unique loss function from a set of PW data, and the trained DL model encourages high-quality PWI from low-quality single PW data. In addition, the DL-DCL framework operates on complex baseband signals, enabling a universal beamformer. To assess the performance of DL-DCL, simulation, phantom, and in vivo studies were conducted with public datasets, and DL-DCL was compared with traditional beamformers (DAS with 75 PWs and DMAS with 1 PW) and other DL-based methods (a supervised learning approach with 1 PW and a generative adversarial network (GAN) with 1 PW). In these experiments, the proposed DL-DCL showed spatial resolution comparable to DMAS with 1 PW and DAS with 75 PWs, and it outperformed all comparison methods in contrast resolution. These results demonstrate that the proposed unsupervised learning approach can address the inherent limitations of traditional DL-based PWI, and it shows great potential in clinical settings with minimal artifacts.
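A generic coherence-style loss, in the spirit of (but not identical to) the paper's unique loss, might look like:

```python
import numpy as np

def coherence_loss(pred, ref, eps=1e-9):
    """Negative normalized correlation between a predicted beamformed
    signal and a reference; minimizing it drives a network toward highly
    correlated (coherent) outputs. A generic stand-in for the paper's
    loss, not its exact form."""
    pred = pred - pred.mean()
    ref = ref - ref.mean()
    ncc = float((pred * ref).sum() / (np.linalg.norm(pred) * np.linalg.norm(ref) + eps))
    return -ncc

sig = np.array([1.0, 2.0, 3.0, 4.0])
perfect = coherence_loss(sig, sig)    # near -1: fully coherent
opposite = coherence_loss(sig, -sig)  # near +1: anti-correlated
```

The attraction of such a target is that it needs no ground truth image, only agreement across acquisitions, which is exactly what an unsupervised PWI setting provides.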
Affiliation(s)
- Hyunwoo Cho: Department of Electronic Engineering, Sogang University, Seoul 04107, South Korea
- Seongjun Park: Department of Electronic Engineering, Sogang University, Seoul 04107, South Korea
- Jinbum Kang: Department of Biomedical Software Engineering, The Catholic University of Korea, Bucheon 14662, South Korea
- Yangmo Yoo: Department of Electronic Engineering; Department of Biomedical Engineering, Sogang University, Seoul 04107, South Korea
4. Song P, Andre M, Chitnis P, Xu S, Croy T, Wear K, Sikdar S. Clinical, Safety, and Engineering Perspectives on Wearable Ultrasound Technology: A Review. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2024;71:730-744. PMID: 38090856; PMCID: PMC11416895; DOI: 10.1109/tuffc.2023.3342150.
Abstract
Wearable ultrasound has the potential to become a disruptive technology enabling new applications not only in traditional clinical settings, but also in settings where ultrasound is not currently used. Understanding the basic engineering principles and limitations of wearable ultrasound is critical for clinicians, scientists, and engineers to advance potential applications and translate the technology from bench to bedside. Wearable ultrasound devices, especially monitoring devices, have the potential to apply acoustic energy to the body for far longer durations than conventional diagnostic ultrasound systems. Thus, bioeffects associated with prolonged acoustic exposure as well as skin health need to be carefully considered for wearable ultrasound devices. This article reviews emerging clinical applications, safety considerations, and future engineering and clinical research directions for wearable ultrasound technology.
5. Spainhour J, Smart K, Becker S, Bottenus N. Optimization of array encoding for ultrasound imaging. Phys Med Biol 2024;69:125024. PMID: 38815603; DOI: 10.1088/1361-6560/ad5249. Received 02/22/2024; accepted 05/30/2024.
Abstract
Objective: The transmit encoding model for synthetic aperture imaging is a robust and flexible framework for understanding the effects of acoustic transmission on ultrasound image reconstruction. Our objective is to use machine learning (ML) to construct scanning sequences, parameterized by time delays and apodization weights, that produce high-quality B-mode images. Approach: We use a custom ML model in PyTorch with simulated RF data from Field II to probe the space of possible encoding sequences for those that minimize a loss function that describes image quality. This approach is made computationally feasible by a novel formulation of the derivative for delay-and-sum beamforming. Main results: When trained for a specified experimental setting (imaging domain, hardware restrictions, etc.), our ML model produces optimized encoding sequences that, when deployed in the REFoCUS imaging framework, improve a number of standard quality metrics over conventional sequences, including resolution, field of view, and contrast. We demonstrate these results experimentally on both wire targets and a tissue-mimicking phantom. Significance: This work demonstrates that the set of commonly used encoding schemes represents only a narrow subset of those available. It also demonstrates the value, for ML tasks in synthetic transmit aperture imaging, of considering the beamformer within the model instead of purely as a post-processing step.
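A minimal sketch of the delay/apodization parameterization being optimized, using integer-sample delays and a toy pulse rather than the paper's Field II setup:

```python
import numpy as np

def encode_transmit(element_signals, delays, weights):
    """Synthesize one encoded transmit event as a delayed, apodized sum
    of per-element waveforms (integer-sample delays; a toy version of
    the delay/apodization parameterization the paper optimizes with
    differentiable beamforming)."""
    n_elem, n_samp = element_signals.shape
    out = np.zeros(n_samp)
    for e in range(n_elem):
        d = int(delays[e])
        out[d:] += weights[e] * element_signals[e, : n_samp - d]
    return out

# Two elements firing the same pulse with different delays and weights.
pulse = np.array([1.0, 0.0, 0.0, 0.0])
signals = np.stack([pulse, pulse])
tx = encode_transmit(signals, delays=[0, 1], weights=[0.5, 2.0])
```

In the paper, delays and weights like these are continuous, learnable parameters; making the downstream beamformer differentiable is what allows gradient descent over the encoding itself.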
Affiliation(s)
- Jacob Spainhour: Department of Applied Mathematics, University of Colorado Boulder, Boulder, CO, USA
- Korben Smart: Department of Physics, University of Colorado Boulder, Boulder, CO, USA
- Stephen Becker: Department of Applied Mathematics, University of Colorado Boulder, Boulder, CO, USA
- Nick Bottenus: Paul M. Rady Department of Mechanical Engineering, University of Colorado Boulder, Boulder, CO, USA
6. Bosco E, Spairani E, Toffali E, Meacci V, Ramalli A, Matrone G. A Deep Learning Approach for Beamforming and Contrast Enhancement of Ultrasound Images in Monostatic Synthetic Aperture Imaging: A Proof-of-Concept. IEEE Open Journal of Engineering in Medicine and Biology 2024;5:376-382. PMID: 38899024; PMCID: PMC11186640; DOI: 10.1109/ojemb.2024.3401098. Received 10/10/2023; revised 03/29/2024; accepted 05/08/2024. Open access.
Abstract
Goal: In this study, we demonstrate that a deep neural network (DNN) can be trained to reconstruct high-contrast images, resembling those produced by the multistatic Synthetic Aperture (SA) method using a 128-element array, leveraging pre-beamforming radiofrequency (RF) signals acquired through the monostatic SA approach. Methods: A U-net was trained using 27200 pairs of RF signals, simulated considering a monostatic SA architecture, with their corresponding delay-and-sum beamformed target images in a multistatic 128-element SA configuration. The contrast was assessed on 500 simulated test images of anechoic/hyperechoic targets. The DNN's performance in reconstructing experimental images of a phantom and different in vivo scenarios was tested too. Results: The DNN, compared to the simple monostatic SA approach used to acquire pre-beamforming signals, generated better-quality images with higher contrast and reduced noise/artifacts. Conclusions: The obtained results suggest the potential for the development of a single-channel setup, simultaneously providing good-quality images and reducing hardware complexity.
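Papers in this list (e.g., entry 2) score inferred B-mode quality with mean-square logarithmic error; under the assumption of non-negative envelope data, that metric can be written as:

```python
import numpy as np

def msle(pred, target):
    """Mean-squared logarithmic error between two non-negative images
    (e.g., envelope-detected data). A generic definition; exact
    preprocessing in any given paper may differ."""
    return float(np.mean((np.log1p(pred) - np.log1p(target)) ** 2))

img = np.array([0.0, 1.0, 2.0])
zero_err = msle(img, img)
```

The log compresses the large dynamic range of ultrasound envelopes, so errors in dim regions count comparably to errors in bright ones.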
Affiliation(s)
- Edoardo Bosco: Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
- Edoardo Spairani: Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
- Eleonora Toffali: Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
- Valentino Meacci: Department of Information Engineering, University of Florence, 50134 Florence, Italy
- Alessandro Ramalli: Department of Information Engineering, University of Florence, 50134 Florence, Italy
- Giulia Matrone: Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
7. Pitman WMK, Xiao D, Yiu BYS, Chee AJY, Yu ACH. Branched Convolutional Neural Networks for Receiver Channel Recovery in High-Frame-Rate Sparse-Array Ultrasound Imaging. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2024;71:558-571. PMID: 38564354; DOI: 10.1109/tuffc.2024.3383660.
Abstract
High-frame-rate ultrasound imaging remains challenging to implement on compact systems, which must operate with a sparse imaging configuration and limited array channels. One key issue is that the resulting image quality is known to be mediocre, not only because unfocused plane-wave excitations are used, but also because grating lobes emerge in sparse-array configurations. In this article, we present the design and use of a new channel recovery framework to infer full-array plane-wave channel datasets for periodically sparse arrays that operate with as few as one-quarter of the full-array aperture. This framework is based on a branched encoder-decoder convolutional neural network (CNN) architecture, which was trained using full-array plane-wave channel data collected from human carotid arteries (59,864 training acquisitions; 5-MHz imaging frequency; 20-MHz sampling rate; plane-wave steering angles between -15° and 15° in 1° increments). Three branched encoder-decoder CNNs were separately trained to recover missing channels after differing degrees of channelwise downsampling (2, 3, and 4 times). The framework's performance was tested on full-array and downsampled plane-wave channel data acquired from an in vitro point target, human carotid arteries, and human brachioradialis muscle. Results show that when inferred full-array plane-wave channel data were used for beamforming, spatial aliasing artifacts in the B-mode images were suppressed for all degrees of channel downsampling. In addition, the image contrast was enhanced compared with B-mode images obtained from beamforming with downsampled channel data. When the recovery framework was implemented on an RTX-2080 GPU, the three investigated degrees of downsampling all achieved the same inference time of 4 ms. Overall, the proposed framework shows promise in enhancing the quality of high-frame-rate ultrasound images generated using a sparse-array imaging setup.
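For context, the naive baseline such a CNN improves upon, recovering skipped channels by simple interpolation across the aperture, can be sketched as:

```python
import numpy as np

def linear_channel_recovery(kept_channels, factor):
    """Naive recovery of skipped receive channels by linear interpolation
    across the aperture. The branched CNN in the paper learns a far
    richer mapping; this baseline only illustrates the recovery problem
    setup (every `factor`-th channel kept)."""
    n_kept, n_samples = kept_channels.shape
    n_full = (n_kept - 1) * factor + 1
    kept_idx = np.arange(n_kept) * factor
    full = np.empty((n_full, n_samples))
    for s in range(n_samples):
        full[:, s] = np.interp(np.arange(n_full), kept_idx, kept_channels[:, s])
    return full

# Toy example: every other channel kept (downsampling factor 2).
kept = np.array([[0.0, 0.0], [2.0, 4.0]])
full = linear_channel_recovery(kept, factor=2)
```

Linear interpolation cannot restore the phase structure that suppresses grating lobes, which is why a learned recovery of the channel data itself pays off at beamforming time.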
8. Zhao L, Fong TC, Bell MAL. Detection of COVID-19 features in lung ultrasound images using deep neural networks. Communications Medicine 2024;4:41. PMID: 38467808; PMCID: PMC10928066; DOI: 10.1038/s43856-024-00463-5. Received 05/25/2023; accepted 02/16/2024. Open access.
Abstract
Background: Deep neural networks (DNNs) to detect COVID-19 features in lung ultrasound B-mode images have primarily relied on either in vivo or simulated images as training data. However, in vivo images suffer from limited access to required manual labeling of thousands of training image examples, and simulated images can suffer from poor generalizability to in vivo images due to domain differences. We address these limitations and identify the best training strategy. Methods: We investigated in vivo COVID-19 feature detection with DNNs trained on our carefully simulated datasets (40,000 images), publicly available in vivo datasets (174 images), in vivo datasets curated by our team (958 images), and a combination of simulated and internal or external in vivo datasets. Seven DNN training strategies were tested on in vivo B-mode images from COVID-19 patients. Results: Here, we show that Dice similarity coefficients (DSCs) between ground truth and DNN predictions are maximized when simulated data are mixed with external in vivo data and tested on internal in vivo data (i.e., 0.482 ± 0.211), compared with using only simulated B-mode image training data (i.e., 0.464 ± 0.230) or only external in vivo B-mode training data (i.e., 0.407 ± 0.177). Additional maximization is achieved when a separate subset of the internal in vivo B-mode images is included in the training dataset, with the greatest maximization of DSC (and minimization of required training time, or epochs) obtained after mixing simulated data with internal and external in vivo data during training, then testing on the held-out subset of the internal in vivo dataset (i.e., 0.735 ± 0.187). Conclusions: DNNs trained with simulated and in vivo data are promising alternatives to training with only real or only simulated data when segmenting in vivo COVID-19 lung ultrasound features.
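The Dice similarity coefficient used to score these training strategies is, for binary masks:

```python
import numpy as np

def dice(pred, gt, eps=1e-9):
    """Dice similarity coefficient between two binary masks: twice the
    intersection over the sum of mask sizes. 1.0 means perfect overlap,
    0.0 means none."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return float(2.0 * inter / (pred.sum() + gt.sum() + eps))

mask = np.array([[1, 1], [0, 0]])
```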
Affiliation(s)
- Lingyi Zhao: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Tiffany Clair Fong: Department of Emergency Medicine, Johns Hopkins Medicine, Baltimore, MD, USA
- Muyinatu A Lediju Bell: Department of Electrical and Computer Engineering; Department of Biomedical Engineering; Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
9. Li X, Zhang X, Fan C, Chen Y, Zheng J, Gao J, Shen Y. Deconvolution based on sparsity and continuity improves the quality of ultrasound image. Comput Biol Med 2024;169:107860. PMID: 38159397; DOI: 10.1016/j.compbiomed.2023.107860. Received 09/08/2023; revised 12/13/2023; accepted 12/13/2023.
Abstract
The application of ultrasound (US) imaging has been limited by restricted resolution, inherent speckle noise, and the impact of clutter and artifacts, especially in miniaturized devices with constrained hardware. To solve these problems, many researchers have explored hardware modifications as well as algorithmic improvements, but further gains in resolution, signal-to-noise ratio (SNR), and contrast are still needed. In this paper, a deconvolution algorithm based on sparsity and continuity (DBSC) is proposed to obtain higher resolution, SNR, and contrast. The algorithm begins with a relatively aggressive Wiener filter that enhances image resolution in preprocessing, but this also introduces ringing noise and compromises the SNR. In further processing, the noise is suppressed based on the property that adjacent pixels of a US image vary continuously as long as the Nyquist sampling criterion is met, and the extraction of high-frequency information is balanced by exploiting relative sparsity. Theory and experiments then demonstrate that relative sparsity and continuity are general properties of US images. DBSC is compared with other deconvolution strategies through simulations and experiments, and US imaging under different transmission channels is also investigated. The final results show that the proposed method greatly improves resolution, provides significant advantages in contrast and SNR, and is feasible for devices with limited hardware.
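The Wiener filtering step that opens the DBSC pipeline can be sketched in one dimension; the point-spread function and noise-to-signal ratio here are assumed inputs, not the paper's estimates:

```python
import numpy as np

def wiener_deconv(y, h, nsr):
    """1-D frequency-domain Wiener deconvolution. y: observed signal;
    h: assumed point-spread function (zero-padded to len(y)); nsr:
    noise-to-signal power ratio acting as regularization (a small nsr
    deconvolves aggressively, as in the paper's preprocessing, at the
    cost of amplifying noise)."""
    Y = np.fft.fft(y)
    H = np.fft.fft(h, n=len(y))
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(G * Y))

# With a delta PSF and zero noise-to-signal ratio, the input is recovered.
y = np.array([0.0, 1.0, 2.0, 3.0])
h = np.array([1.0, 0.0, 0.0, 0.0])
restored = wiener_deconv(y, h, nsr=0.0)
```

A realistic PSF with spectral nulls is what produces the ringing the abstract mentions; the subsequent sparsity/continuity priors are the paper's answer to that trade-off.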
Affiliation(s)
- Xiangyu Li: Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Xin Zhang: Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Chaolin Fan: Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Yifei Chen: Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Jie Zheng: Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Jie Gao: Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
- Yi Shen: Department of Control Science and Engineering, Harbin Institute of Technology, Harbin, 150001, China
10. Simson WA, Paschali M, Sideri-Lampretsa V, Navab N, Dahl JJ. Investigating pulse-echo sound speed estimation in breast ultrasound with deep learning. Ultrasonics 2024;137:107179. PMID: 37939413; PMCID: PMC10842235; DOI: 10.1016/j.ultras.2023.107179. Received 05/02/2023; revised 09/30/2023; accepted 10/07/2023.
Abstract
Ultrasound is an adjunct tool to mammography that can quickly and safely aid physicians in diagnosing breast abnormalities. Clinical ultrasound often assumes a constant sound speed to form diagnostic B-mode images. However, the components of breast tissue, such as glandular tissue, fat, and lesions, differ in sound speed. Given a constant sound speed assumption, these differences can degrade the quality of reconstructed images via phase aberration. Sound speed images can be a powerful tool for improving image quality and identifying diseases if properly estimated. To this end, we propose a supervised deep-learning approach for sound speed estimation from analytic ultrasound signals. We develop a large-scale simulated ultrasound dataset that generates representative breast tissue samples by modeling breast gland, skin, and lesions with varying echogenicity and sound speed. We adopt a fully convolutional neural network architecture trained on a simulated dataset to produce an estimated sound speed map. The simulated tissue is interrogated with a plane wave transmit sequence, and the complex-valued reconstructed images are used as input for the convolutional network. The network is trained on the sound speed distribution map of the simulated data, and the trained model can estimate sound speed given reconstructed pulse-echo signals. We further incorporate thermal noise augmentation during training to enhance model robustness to artifacts found in real ultrasound data. To highlight the ability of our model to provide accurate sound speed estimations, we evaluate it on simulated, phantom, and in vivo breast ultrasound data.
Affiliation(s)
- Walter A Simson: Chair for Computer Aided Medical Procedures and Augmented Reality, Technical University of Munich, Munich, Germany; Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Magdalini Paschali: Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Vasiliki Sideri-Lampretsa: Institute for Artificial Intelligence and Informatics in Medicine, Technical University of Munich, Munich, Germany
- Nassir Navab: Chair for Computer Aided Medical Procedures and Augmented Reality, Technical University of Munich, Munich, Germany; Chair for Computer Aided Medical Procedures, Whiting School of Engineering, Johns Hopkins University, Baltimore, MD, USA
- Jeremy J Dahl: Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
11. Sharahi HJ, Acconcia CN, Li M, Martel A, Hynynen K. A Convolutional Neural Network for Beamforming and Image Reconstruction in Passive Cavitation Imaging. Sensors (Basel) 2023;23:8760. PMID: 37960460; PMCID: PMC10650508; DOI: 10.3390/s23218760. Received 08/11/2023; revised 10/18/2023; accepted 10/20/2023.
Abstract
Convolutional neural networks (CNNs), initially developed for image processing applications, have recently received significant attention within the field of medical ultrasound imaging. In this study, passive cavitation imaging/mapping (PCI/PAM), which is used to map cavitation sources based on the correlation of signals across an array of receivers, is evaluated. Traditional reconstruction techniques in PCI, such as delay-and-sum, yield high spatial resolution at the cost of substantial computational time, which results from the resource-intensive process of determining sensor weights for individual pixels. Consequently, conventional image reconstruction algorithms cannot meet the speed requirements essential for real-time monitoring. Here, we show that a three-dimensional (3D) convolutional network can learn the image reconstruction algorithm for a 16×16 element matrix probe with a receive frequency ranging from 256 kHz up to 1.0 MHz. The network was trained and evaluated using simulated data representing point sources, resulting in the successful reconstruction of volumetric images with high sensitivity, especially for single isolated sources (100% in the test set). As the number of simultaneous sources increased, the network's ability to detect weaker-intensity sources diminished, although it always correctly identified the main lobe. Notably, network inference was remarkably fast, completing the task in approximately 178 s for a dataset comprising 650 frames of 413 volume images with a signal duration of 20 μs. This processing speed is roughly thirty times faster than a parallelized implementation of the traditional time exposure acoustics algorithm on the same GPU device, opening a new door for PCI in the real-time monitoring of ultrasound ablation.
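The delay-and-sum style computation that the CNN learns to replace can be sketched per pixel; the delays and geometry here are hypothetical, with integer-sample alignment for simplicity:

```python
import numpy as np

def pam_energy(channels, delays_samples):
    """Passive-map value for one candidate source pixel: delay-align the
    receiver channels (integer-sample delays from a hypothetical
    geometry), sum across the array, and integrate the energy. This is
    the delay-and-sum / time-exposure-acoustics step that makes
    conventional PCI expensive when repeated for every pixel."""
    n_ch, n_s = channels.shape
    aligned = np.zeros(n_s)
    for c in range(n_ch):
        d = delays_samples[c]
        aligned[: n_s - d] += channels[c, d:]
    return float(np.sum(aligned ** 2))

# Two channels whose pulses align after delays of 1 and 2 samples.
ch = np.zeros((2, 5))
ch[0, 1] = 1.0
ch[1, 2] = 1.0
energy = pam_energy(ch, [1, 2])
```

Coherent alignment makes the summed amplitude scale with channel count before squaring, which is why a true source stands out over this map and why amortizing the whole map into one network pass is attractive.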
Affiliation(s)
- Hossein J. Sharahi: Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Christopher N. Acconcia: Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Matthew Li: Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
- Anne Martel: Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada; Department of Medical Biophysics, University of Toronto, Toronto, ON M5G 1L7, Canada
- Kullervo Hynynen: Physical Sciences Platform, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada; Department of Medical Biophysics, University of Toronto, Toronto, ON M5G 1L7, Canada; Institute of Biomedical Engineering, University of Toronto, Toronto, ON M5S 3G9, Canada
12. Qu X, Ren C, Wang Z, Fan S, Zheng D, Wang S, Lin H, Jiang J, Xing W. Complex Transformer Network for Single-Angle Plane-Wave Imaging. Ultrasound in Medicine & Biology 2023;49:2234-2246. PMID: 37544831; DOI: 10.1016/j.ultrasmedbio.2023.07.005. Received 03/06/2023; revised 06/05/2023; accepted 07/09/2023.
Abstract
Objective: Plane-wave imaging (PWI) is a high-frame-rate imaging technique that sacrifices image quality. Deep learning can potentially enhance plane-wave image quality, but processing complex in-phase and quadrature (IQ) data and suppressing incoherent signals pose challenges. To address these challenges, we present a complex transformer network (CTN) that integrates complex convolution and complex self-attention (CSA) modules. Methods: The CTN operates in a four-step process: delaying complex IQ data from a 0° single-angle plane wave for each pixel as CTN input data; extracting reconstruction features with a complex convolution layer; suppressing irrelevant features derived from incoherent signals with two CSA modules; and forming output images with another complex convolution layer. The training labels are generated by minimum variance (MV) beamforming. Results: Simulation, phantom, and in vivo experiments revealed that CTN produced images of comparable or even higher quality than MV, with much shorter computation time. Evaluation metrics (contrast ratio, contrast-to-noise ratio, generalized contrast-to-noise ratio, and lateral and axial full width at half-maximum) were -11.59 dB, 1.16, 0.68, 278 μm, and 329 μm for simulation, respectively, and 9.87 dB, 0.96, 0.62, 357 μm, and 305 μm for the phantom experiment, respectively. In vivo experiments further indicated that CTN could significantly improve details that were previously vague or even invisible in DAS and MV images. After GPU acceleration, the CTN runtime (76.03 ms) was comparable to that of delay-and-sum (DAS, 61.24 ms). Conclusion: The proposed CTN significantly improved image contrast, resolution, and some previously unclear details relative to the MV beamformer, making it an efficient tool for high-frame-rate imaging.
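The complex convolution building block mentioned above is commonly implemented with four real convolutions; a minimal 1-D sketch (not the paper's exact layer):

```python
import numpy as np

def complex_conv1d(x, w):
    """Complex 1-D convolution built from four real convolutions, the
    way complex-valued network layers on IQ data are typically
    implemented: (a+ib)(c+id) = (ac - bd) + i(ad + bc)."""
    conv = lambda u, v: np.convolve(u, v, mode="valid")
    real = conv(x.real, w.real) - conv(x.imag, w.imag)
    imag = conv(x.real, w.imag) + conv(x.imag, w.real)
    return real + 1j * imag

iq = np.array([1 + 2j, 3 - 1j, 0.5j])
kern = np.array([2 - 1j, 1j])
out = complex_conv1d(iq, kern)
```

Keeping the arithmetic complex preserves the phase of the IQ data, which is the information a coherence-suppressing module like CSA depends on.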
Affiliation(s)
- Xiaolei Qu: School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Chujian Ren: School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Zihao Wang: School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Shuangchun Fan: School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Dezhi Zheng: Research Institute for Frontier Science, Beihang University, Beijing, China
- Shuai Wang: Research Institute for Frontier Science, Beihang University, Beijing, China
- Hongxiang Lin: Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, China
- Jue Jiang: Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
- Weiwei Xing: School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
13. Ali R, Brevett T, Zhuang L, Bendjador H, Podkowa AS, Hsieh SS, Simson W, Sanabria SJ, Herickhoff CD, Dahl JJ. Aberration correction in diagnostic ultrasound: A review of the prior field and current directions. Z Med Phys 2023;33:267-291. PMID: 36849295; PMCID: PMC10517407; DOI: 10.1016/j.zemedi.2023.01.003. Received 07/07/2022; revised 12/17/2022; accepted 01/09/2023.
Abstract
Medical ultrasound images are reconstructed with simplifying assumptions on wave propagation, one of the most prominent being that the imaging medium has a constant sound speed. When this assumption is violated, as is the case in most in vivo or clinical imaging scenarios, the transmitted and received ultrasound wavefronts are distorted, degrading image quality. This distortion is known as aberration, and the techniques used to correct for it are known as aberration correction techniques. Several models have been proposed to understand and correct for aberration. In this review paper, aberration and aberration correction are explored from the early models and correction techniques, including the near-field phase screen model and its associated correction techniques such as nearest-neighbor cross-correlation, to more recent models and correction techniques that incorporate spatially varying aberration and diffractive effects, such as those that rely on estimating the sound speed distribution in the imaging medium. In addition to historical models, future directions of ultrasound aberration correction are proposed.
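The nearest-neighbor cross-correlation technique mentioned above can be sketched compactly: adjacent element signals are cross-correlated, the peak lags are integrated across the aperture, and the bulk delay is removed. The following NumPy sketch is an idealized, integer-lag illustration under the near-field phase screen model (the function name and simplifications are assumptions, not code from the review):

```python
import numpy as np

def estimate_phase_screen(channel_data, fs):
    """Estimate per-element arrival-time errors (a near-field phase screen)
    by cross-correlating each element's signal with its neighbor.

    channel_data: (n_elements, n_samples) array of received echoes
    fs: sampling frequency in Hz
    Returns per-element delay estimates in seconds (zero-mean)."""
    n_elem = channel_data.shape[0]
    lag_steps = np.zeros(n_elem)
    for i in range(1, n_elem):
        a, b = channel_data[i - 1], channel_data[i]
        xc = np.correlate(b, a, mode="full")          # cross-correlation
        lag_steps[i] = np.argmax(xc) - (len(a) - 1)   # peak lag, in samples
    delays = np.cumsum(lag_steps) / fs                # integrate neighbor lags
    return delays - delays.mean()                     # remove bulk offset
```

Real implementations add sub-sample interpolation of the correlation peak and iterate the estimate jointly with beamforming; this sketch shows only the core neighbor-lag integration.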
Affiliation(s)
- Rehman Ali
- Department of Imaging Sciences, University of Rochester Medical Center, Rochester, NY, USA
- Thurston Brevett
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Louise Zhuang
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Hanna Bendjador
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Anthony S Podkowa
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Scott S Hsieh
- Department of Radiology, Mayo Clinic, Rochester, MN, USA
- Walter Simson
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA
- Sergio J Sanabria
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA; University of Deusto/Ikerbasque Basque Foundation for Science, Bilbao, Spain
- Carl D Herickhoff
- Department of Biomedical Engineering, University of Memphis, TN, USA
- Jeremy J Dahl
- Department of Radiology, Stanford University School of Medicine, Stanford, CA, USA.
14
Wasih M, Ahmad S, Almekkawy M. A robust cascaded deep neural network for image reconstruction of single plane wave ultrasound RF data. ULTRASONICS 2023; 132:106981. [PMID: 36913830 DOI: 10.1016/j.ultras.2023.106981] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/06/2022] [Revised: 03/02/2023] [Accepted: 03/03/2023] [Indexed: 05/29/2023]
Abstract
Reconstruction of ultrasound images from single plane wave Radio Frequency (RF) data is a challenging task. The traditional Delay and Sum (DAS) method produces an image with low resolution and contrast if employed with RF data from only a single plane wave. A Coherent Compounding (CC) method that reconstructs the image by coherently summing the individual DAS images was proposed to enhance the image quality. However, CC relies on a large number of plane waves to accurately sum the individual DAS images; hence it produces high-quality images but at a low frame rate that may not be suitable for time-demanding applications. Therefore, there is a need for a method that can create a high-quality image at higher frame rates. Furthermore, the method needs to be robust against the transmission angle of the input plane wave. To reduce the method's dependence on the input angle, we propose to unify the RF data at different angles by learning a linear transformation from data acquired at different angles to common 0° data. We further propose a cascade of two independent neural networks to reconstruct an image, similar in quality to CC, from a single plane wave. The first network, denoted "PixelNet", is a fully Convolutional Neural Network (CNN) that takes the transformed time-delayed RF data as input. PixelNet learns optimal pixel weights that are element-wise multiplied with the single-angle DAS image. The second network is a conditional Generative Adversarial Network (cGAN) used to further enhance the image quality. Our networks were trained on the publicly available PICMUS and CPWC datasets and evaluated on a completely separate dataset, CUBDL, obtained with different acquisition settings than the training data. The results obtained on the testing dataset demonstrate the networks' ability to generalize well to unseen data, with frame rates better than the CC method. This paves the way for applications that require high-quality images reconstructed at higher frame rates.
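For reference, the DAS and coherent compounding (CC) baselines discussed in this abstract reduce to a gather-and-sum per pixel followed by a coherent sum across transmit angles. A minimal NumPy sketch with precomputed integer receive delays (array names, shapes, and the integer-delay simplification are assumptions for illustration):

```python
import numpy as np

def das_plane_wave(rf, delays_samples):
    """Delay-and-sum for one plane-wave transmit.
    rf: (n_elements, n_samples) channel data
    delays_samples: (n_pixels, n_elements) integer receive delays per pixel
    Returns a (n_pixels,) beamformed signal summed over the aperture."""
    n_pix, n_elem = delays_samples.shape
    out = np.zeros(n_pix)
    for e in range(n_elem):
        out += rf[e, delays_samples[:, e]]  # gather each element at its delay
    return out

def coherent_compound(rf_per_angle, delays_per_angle):
    """Coherently sum single-angle DAS images across plane-wave angles."""
    return sum(das_plane_wave(rf, d)
               for rf, d in zip(rf_per_angle, delays_per_angle))
```

The trade-off the paper targets is visible here: image quality grows with the number of angles summed in `coherent_compound`, while frame rate falls in proportion.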
Affiliation(s)
- Mohammad Wasih
- The Pennsylvania State University, University Park, PA, 16802, USA.
- Sahil Ahmad
- The Pennsylvania State University, University Park, PA, 16802, USA.
15
Collins GC, Rojas SS, Bercu ZL, Desai JP, Lindsey BD. Supervised segmentation for guiding peripheral revascularization with forward-viewing, robotically steered ultrasound guidewire. Med Phys 2023; 50:3459-3474. [PMID: 36906877 PMCID: PMC10272103 DOI: 10.1002/mp.16350] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2022] [Revised: 01/19/2023] [Accepted: 02/26/2023] [Indexed: 03/13/2023] Open
Abstract
BACKGROUND Approximately 500 000 patients present with critical limb ischemia (CLI) each year in the U.S., requiring revascularization to avoid amputation. While peripheral arteries can be revascularized via minimally invasive procedures, 25% of cases with chronic total occlusions are unsuccessful due to inability to route the guidewire beyond the proximal occlusion. Improvements to guidewire navigation would lead to limb salvage in a greater number of patients. PURPOSE Integrating ultrasound imaging into the guidewire could enable direct visualization of routes for guidewire advancement. In order to navigate a robotically-steerable guidewire with integrated imaging beyond a chronic occlusion proximal to the symptomatic lesion for revascularization, acquired ultrasound images must be segmented to visualize the path for guidewire advancement. METHODS The first approach for automated segmentation of viable paths through occlusions in peripheral arteries is demonstrated in simulations and experimentally-acquired data with a forward-viewing, robotically-steered guidewire imaging system. B-mode ultrasound images formed via synthetic aperture focusing (SAF) were segmented using a supervised approach (U-net architecture). A total of 2500 simulated images were used to train the classifier to distinguish the vessel wall and occlusion from viable paths for guidewire advancement. First, the size of the synthetic aperture resulting in the highest classification performance was determined in simulations (90 test images) and compared with traditional classifiers (global thresholding, local adaptive thresholding, and hierarchical classification). Next, classification performance as a function of the diameter of the remaining lumen (0.5 to 1.5 mm) in the partially-occluded artery was tested using both simulated (60 test images at each of 7 diameters) and experimental data sets. 
Experimental test data sets were acquired in four 3D-printed phantoms based on human anatomy and six ex vivo porcine arteries. Accuracy of classifying the path through the artery was evaluated using microcomputed tomography of phantoms and ex vivo arteries as a ground truth for comparison. RESULTS An aperture size of 3.8 mm resulted in the best-performing classification based on sensitivity and Jaccard index, with a significant increase in Jaccard index (p < 0.05) as aperture diameter increased. In comparing the performance of the supervised classifier and traditional classification strategies with simulated test data, sensitivity and F1 score for U-net were 0.95 ± 0.02 and 0.96 ± 0.01, respectively, compared to 0.83 ± 0.03 and 0.41 ± 0.13 for the best-performing conventional approach, hierarchical classification. In simulated test images, sensitivity (p < 0.05) and Jaccard index both increased with increasing artery diameter (p < 0.05). Classification of images acquired in artery phantoms with remaining lumen diameters ≥ 0.75 mm resulted in accuracies > 90%, while mean accuracy decreased to 82% when artery diameter decreased to 0.5 mm. For testing in ex vivo arteries, average binary accuracy, F1 score, Jaccard index, and sensitivity each exceeded 0.9. CONCLUSIONS Segmentation of ultrasound images of partially-occluded peripheral arteries acquired with a forward-viewing, robotically-steered guidewire system was demonstrated for the first time using representation learning. This could represent a fast, accurate approach for guiding peripheral revascularization.
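The Jaccard index and F1 score reported above follow their standard definitions on binary masks; a minimal sketch of both (function names are illustrative, not from the paper):

```python
import numpy as np

def jaccard(pred, truth):
    """Jaccard index (intersection over union) of two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union

def f1(pred, truth):
    """F1 score (equivalent to the Dice coefficient) of two binary masks."""
    tp = np.logical_and(pred, truth).sum()
    return 2 * tp / (pred.sum() + truth.sum())
```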
Affiliation(s)
- Graham C. Collins
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA, 30309
- Stephan Strassle Rojas
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA, 30309
- Zachary L. Bercu
- Interventional Radiology, Emory University School of Medicine, Atlanta, GA, USA, 30308
- Jaydev P. Desai
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA, 30309
- Brooks D. Lindsey
- Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology and Emory University, Atlanta, GA, USA, 30309
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA, USA, 30309
16
Artificial intelligence-aided method to detect uterine fibroids in ultrasound images: a retrospective study. Sci Rep 2023; 13:3714. [PMID: 36878941 PMCID: PMC9988965 DOI: 10.1038/s41598-022-26771-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2022] [Accepted: 12/20/2022] [Indexed: 03/08/2023] Open
Abstract
We explored a new artificial intelligence-assisted method to help junior ultrasonographers improve their diagnostic performance for uterine fibroids and compared it with the performance of senior ultrasonographers to confirm its effectiveness and feasibility. In this retrospective study, we collected a total of 3870 ultrasound images from 667 patients with a mean age of 42.45 years ± 6.23 [SD] who received a pathologically confirmed diagnosis of uterine fibroids and 570 women with a mean age of 39.24 years ± 5.32 [SD] without uterine lesions from Shunde Hospital of Southern Medical University between 2015 and 2020. The deep convolutional neural network (DCNN) model was trained and developed on the training dataset (2706 images) and internal validation dataset (676 images). To evaluate the performance of the model on the external validation dataset (488 images), we compared the diagnostic performance of the DCNN with that of ultrasonographers possessing different levels of seniority. The DCNN model aided the junior ultrasonographers (Averaged) in diagnosing uterine fibroids with higher accuracy (94.72% vs. 86.63%, P < 0.001), sensitivity (92.82% vs. 83.21%, P = 0.001), specificity (97.05% vs. 90.80%, P = 0.009), positive predictive value (97.45% vs. 91.68%, P = 0.007), and negative predictive value (91.73% vs. 81.61%, P = 0.001) than they achieved alone. Their ability was comparable to that of senior ultrasonographers (Averaged) in terms of accuracy (94.72% vs. 95.24%, P = 0.66), sensitivity (92.82% vs. 93.66%, P = 0.73), specificity (97.05% vs. 97.16%, P = 0.79), positive predictive value (97.45% vs. 97.57%, P = 0.77), and negative predictive value (91.73% vs. 92.63%, P = 0.75). The DCNN-assisted strategy can considerably improve the uterine fibroid diagnosis performance of junior ultrasonographers, making them more comparable to senior ultrasonographers.
17
Luijten B, Chennakeshava N, Eldar YC, Mischi M, van Sloun RJG. Ultrasound Signal Processing: From Models to Deep Learning. ULTRASOUND IN MEDICINE & BIOLOGY 2023; 49:677-698. [PMID: 36635192 DOI: 10.1016/j.ultrasmedbio.2022.11.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Revised: 11/02/2022] [Accepted: 11/05/2022] [Indexed: 06/17/2023]
Abstract
Medical ultrasound imaging relies heavily on high-quality signal processing to provide reliable and interpretable image reconstructions. Conventionally, reconstruction algorithms have been derived from physical principles. These algorithms rely on assumptions and approximations of the underlying measurement model, limiting image quality in settings where these assumptions break down. Conversely, more sophisticated solutions based on statistical modeling, careful parameter tuning, or increased model complexity can be sensitive to different environments. Recently, deep learning-based methods, which are optimized in a data-driven fashion, have gained popularity. These model-agnostic techniques often rely on generic model structures and require vast training data to converge to a robust solution. A relatively new paradigm combines the power of the two: leveraging data-driven deep learning while exploiting domain knowledge. These model-based solutions yield high robustness and require fewer parameters and training data than conventional neural networks. In this work, we provide an overview of these techniques from the recent literature and discuss a wide variety of ultrasound applications. We aim to inspire the reader to perform further research in this area and to address the opportunities within the field of ultrasound signal processing. We conclude with a future perspective on model-based deep learning techniques for medical ultrasound.
Affiliation(s)
- Ben Luijten
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands.
- Nishith Chennakeshava
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Yonina C Eldar
- Faculty of Math and Computer Science, Weizmann Institute of Science, Rehovot, Israel
- Massimo Mischi
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Ruud J G van Sloun
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands; Philips Research, Eindhoven, The Netherlands
18
Fouad M, Abd El Ghany MA, Schmitz G. A Single-Shot Harmonic Imaging Approach Utilizing Deep Learning for Medical Ultrasound. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2023; PP:237-252. [PMID: 37018250 DOI: 10.1109/tuffc.2023.3234230] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Tissue Harmonic Imaging (THI) is an invaluable tool in clinical ultrasound owing to its enhanced contrast resolution and reduced reverberation clutter in comparison to fundamental mode imaging. However, harmonic content separation based on high-pass filtering suffers from potential contrast degradation or lower axial resolution due to spectral leakage, whereas nonlinear multi-pulse harmonic imaging schemes, such as amplitude modulation and pulse inversion, suffer from a reduced frame rate and comparatively higher motion artifacts due to the necessity of at least two pulse-echo acquisitions. To address these problems, we propose a deep-learning-based single-shot harmonic imaging technique capable of generating image quality comparable to pulse amplitude modulation methods, yet at a higher frame rate and with fewer motion artifacts. Specifically, an asymmetric convolutional encoder-decoder structure is designed to estimate the combination of the echoes resulting from the half-amplitude transmissions, using the echo produced from the full-amplitude transmission as input. The echoes were acquired with the checkerboard amplitude modulation technique for training. The model was evaluated across various targets and samples to illustrate generalizability as well as the possibility and impact of transfer learning. Furthermore, for possible interpretability of the network, we investigate whether the latent space of the encoder holds information on the nonlinearity parameter of the medium. We demonstrate the ability of the proposed approach to generate harmonic images with a single firing that are comparable to those from a multi-pulse acquisition.
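Pulse inversion, one of the multi-pulse baselines that the single-shot network aims to replace, cancels the fundamental by summing echoes from a pulse and its inverted copy. A toy sketch with an assumed quadratic propagation nonlinearity (the function names and the nonlinearity model are illustrative assumptions):

```python
import numpy as np

def simulate_echo(tx, beta=0.1):
    """Toy nonlinearity: the echo carries a quadratic distortion term."""
    return tx + beta * tx ** 2

def pulse_inversion(echo_pos, echo_neg):
    """Summing echoes from a pulse and its inverted copy cancels odd
    (fundamental) terms and retains even-harmonic terms."""
    return echo_pos + echo_neg
```

With this quadratic model the summed output is 2*beta*tx**2, i.e., pure second-harmonic content; the cost is the second transmission, which halves frame rate and invites motion artifacts, exactly the trade-off the single-shot approach targets.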
19
Gao J, Xu L, Zou Q, Zhang B, Wang D, Wan M. A progressively dual reconstruction network for plane wave beamforming with both paired and unpaired training data. ULTRASONICS 2023; 127:106833. [PMID: 36070635 DOI: 10.1016/j.ultras.2022.106833] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/12/2022] [Revised: 08/12/2022] [Accepted: 08/16/2022] [Indexed: 06/15/2023]
Abstract
High-frame-rate plane wave (PW) imaging suffers from unsatisfactory image quality due to the absence of focusing in transmission. Although coherent compounding of tens of PWs can improve PW image quality, it in turn decreases the frame rate, limiting the tracking of fast-moving tissue. To overcome the trade-off between frame rate and image quality, we propose a progressively dual reconstruction network (PDRN) to achieve adaptive beamforming and enhance the image quality via both supervised and transfer learning when only one or a few PWs are transmitted. Specifically, the proposed model contains a progressive network and a dual network that form a closed loop and provide collaborative supervision for model optimization. The progressive network takes the channel delay of each spatial point as input and progressively learns coherent compounding beamformed data with increasing numbers of steered PWs, step by step. The dual network learns the downsampling process and reconstructs the beamformed data with decreasing numbers of steered PWs, which reduces the space of possible learning functions and improves the model's discriminative ability. In addition, the dual network is leveraged to perform transfer learning for training data without sufficient steered PWs. Simulated data, in vivo vocal cord (VC) data, and the publicly available CUBDL dataset were collected for model evaluation.
Affiliation(s)
- Junling Gao
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, PR China
- Lei Xu
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, PR China; Xi'an Hospital of Traditional Chinese Medicine, Xi'an 710021, PR China
- Qin Zou
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, PR China
- Bo Zhang
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, PR China
- Diya Wang
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, PR China.
- Mingxi Wan
- The Key Laboratory of Biomedical Information Engineering of Ministry of Education, School of Life Science and Technology, Xi'an Jiaotong University, Xi'an 710049, PR China.
20
Eslami L, Mohammadzadeh Asl B. Adaptive subarray coherence based post-filter using array gain in medical ultrasound imaging. ULTRASONICS 2022; 126:106808. [PMID: 35921724 DOI: 10.1016/j.ultras.2022.106808] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/10/2021] [Revised: 07/15/2022] [Accepted: 07/19/2022] [Indexed: 06/15/2023]
Abstract
This paper presents an adaptive subarray coherence-based post-filter (ASCBP) applied to the eigenspace-based forward-backward minimum variance (ESB-FBMV) beamformer to simultaneously improve image quality and beamformer robustness. Additionally, the ASCBP can separate close targets. The ASCBP uses an adaptive noise power weight based on the concept of the beamformer's array gain (AG) to suppress noise adaptively and achieve improved images. Moreover, a square neighborhood average was applied to the ASCBP to provide smoother square neighborhood ASCBP (SN-ASCBP) values and improve the speckle quality. Through simulations of point and cyst phantoms and experimental validation, the performance of the proposed methods was compared to that of delay-and-sum (DAS), MV-based beamformers, and the subarray coherence-based post-filter (SCBP). The simulated results demonstrated that the ASCBP method improved the full width at half maximum (FWHM) by 57% and the coherent interference suppression power (CISP) by 52 dB compared to the SCBP post-filter. Considering the experimental results, the SN-ASCBP method presented the best enhancement in terms of generalized contrast-to-noise ratio (gCNR) and contrast ratio (CR), while the ASCBP showed the best improvement in FWHM among the tested methods. Furthermore, the proposed methods performed notably well at low SNRs. Evaluations under aberration and sound speed error further illustrated the better robustness of the proposed methods compared with the others.
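As background for the coherence-based post-filters compared above, the classic coherence factor (CF) weights each pixel by the ratio of coherent to incoherent energy across the aperture. The sketch below shows only this generic CF idea; the authors' adaptive noise-power weighting (ASCBP) and subarray averaging are not reproduced here:

```python
import numpy as np

def coherence_factor(delayed, eps=1e-12):
    """Classic coherence factor: per-pixel ratio of coherent to incoherent
    energy across the aperture. Multiplying the DAS output by it suppresses
    incoherent clutter.

    delayed: (n_elements, n_pixels) delayed (aligned) channel data
    Returns CF values in [0, 1] per pixel."""
    coherent = np.abs(delayed.sum(axis=0)) ** 2        # |sum over aperture|^2
    incoherent = (np.abs(delayed) ** 2).sum(axis=0)    # sum of |channel|^2
    n = delayed.shape[0]
    return coherent / (n * incoherent + eps)
```

Perfectly aligned channels give CF near 1, while signals with random or alternating phase across the aperture give CF near 0, which is the behavior the post-filters above refine with adaptive noise weighting.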
Affiliation(s)
- Leila Eslami
- Department of Biomedical Engineering, Tarbiat Modares University, Tehran 14115-111, Iran
21
Noda T, Azuma T, Ohtake Y, Sakuma I, Tomii N. Ultrasound Imaging With a Flexible Probe Based on Element Array Geometry Estimation Using Deep Neural Network. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2022; 69:3232-3242. [PMID: 36170409 DOI: 10.1109/tuffc.2022.3210701] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Conventionally, ultrasound (US) diagnosis is performed using hand-held rigid probes. Such devices are difficult to use for long-term monitoring because they need to be continuously pressed against the body to remove the air between the probe and the body. Flexible probes, which can deform and effectively adhere to the body, are a promising technology for long-term monitoring applications. However, owing to the flexible element array geometry, the reconstructed image becomes blurred and distorted. In this study, we propose a flexible-probe US imaging method based on element array geometry estimation from radio frequency (RF) data using a deep neural network (DNN). The input and output of the DNN are the RF data and the parameters that determine the element array geometry, respectively. The DNN was first trained from scratch with simulation data and then fine-tuned with in vivo data. The DNN performance was evaluated according to the element position mean absolute error (MAE) and the reconstructed image quality. The reconstructed image quality was evaluated with peak signal-to-noise ratio (PSNR) and mean structural similarity (MSSIM). In the test conducted with simulation data, the average element position MAE was 0.86 mm, and the average reconstructed image PSNR and MSSIM were 20.6 and 0.791, respectively. In the test conducted with in vivo data, the average element position MAE was 1.11 mm, and the average reconstructed image PSNR and MSSIM were 19.4 and 0.798, respectively. The average estimation time was 0.045 s. These results demonstrate the feasibility of the proposed method for long-term real-time monitoring using flexible probes.
22
Goudarzi S, Basarab A, Rivaz H. Inverse Problem of Ultrasound Beamforming With Denoising-Based Regularized Solutions. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2022; 69:2906-2916. [PMID: 35969567 DOI: 10.1109/tuffc.2022.3198874] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
During the past few years, inverse problem formulations of ultrasound beamforming have attracted growing interest. They usually pose beamforming as a minimization problem of a fidelity term resulting from the measurement model plus a regularization term that enforces a certain class on the resulting image. Here, we take advantage of the alternating direction method of multipliers (ADMM) to propose a flexible framework in which each term is optimized separately. Furthermore, the proposed beamforming formulation is extended to replace the regularization term with a denoising algorithm, based on the recent approaches called plug-and-play (PnP) and regularization by denoising (RED). Such regularizations are shown in this work to better preserve speckle texture, an important feature in ultrasound imaging, than sparsity-based approaches previously proposed in the literature. The efficiency of the proposed methods is evaluated on simulations, real phantoms, and in vivo data available from a plane-wave imaging challenge in medical ultrasound. Furthermore, a comprehensive comparison with existing ultrasound beamforming methods is also provided. These results show that the RED algorithm gives the best image quality in terms of contrast index while preserving the speckle statistics.
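The PnP strategy described above alternates a data-fidelity step with a denoising step inside ADMM. A minimal dense-matrix sketch (the small-problem direct solver, fixed penalty rho, and function name are simplifying assumptions, not the paper's beamforming operator):

```python
import numpy as np

def pnp_admm(A, y, denoise, rho=1.0, n_iter=50):
    """Plug-and-play ADMM sketch for min_x 0.5*||Ax - y||^2 + R(x),
    where the proximal step of R is replaced by a denoiser (PnP).

    A: (m, n) measurement matrix; y: (m,) data; denoise: callable on (n,) arrays."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    AtA_rho = A.T @ A + rho * np.eye(n)  # small problems: solve directly
    Aty = A.T @ y
    for _ in range(n_iter):
        x = np.linalg.solve(AtA_rho, Aty + rho * (z - u))  # data-fidelity step
        z = denoise(x + u)                                 # denoiser replaces the prox
        u = u + x - z                                      # dual update
    return x
```

In the paper's setting, A would be the ultrasound measurement model and the denoiser a speckle-preserving algorithm; here any callable denoiser can be plugged in.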
23
Zhang F, Luo L, Zhang Y, Gao X, Li J. A Convolutional Neural Network for Ultrasound Plane Wave Image Segmentation With a Small Amount of Phase Array Channel Data. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2022; 69:2270-2281. [PMID: 35552134 DOI: 10.1109/tuffc.2022.3174637] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Single-angle plane wave imaging has huge potential for high-frame-rate ultrasound but suffers from difficulties such as low imaging quality and poor segmentation results. To overcome these difficulties, this article proposes an end-to-end convolutional neural network (CNN) that segments images directly from single-angle channel data. The network removes the traditional beamforming process and uses raw radio frequency (RF) data as input to directly obtain the segmented image. The signal features at each depth are extracted and concatenated into a feature map by a dedicated depth signal extraction module, and the feature map is then passed through a residual encoder and decoder to obtain the output. A simulated hypoechoic cyst dataset of 2000 images and an actual industrial defect dataset of 900 images were used for training separately. Good results were achieved in both simulated medical cyst segmentation and actual industrial defect segmentation. Experiments were conducted on both datasets with phased-array sparse element data as input, and segmentation results were obtained for both. Overall, this work achieved better-quality segmented images with shorter processing time from single-angle plane-wave channel data using CNNs; compared with other methods, our network shows great improvement in intersection over union (IoU), F1 score, and processing time. The results also indicate that the feasibility of applying deep learning to image segmentation can be improved by using phased-array sparse element data as input.
24
Mamistvalov A, Amar A, Kessler N, Eldar YC. Deep-Learning Based Adaptive Ultrasound Imaging From Sub-Nyquist Channel Data. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2022; 69:1638-1648. [PMID: 35312618 DOI: 10.1109/tuffc.2022.3160859] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Traditional beamforming of medical ultrasound images relies on sampling rates significantly higher than the actual Nyquist rate of the received signals. This results in large amounts of data to store and process, imposing hardware and software challenges on the development of ultrasound machinery and algorithms, and impacting the resulting performance. In light of the capabilities demonstrated by deep learning methods over the past years across a variety of fields, including medical imaging, it is natural to consider their ability to recover high-quality ultrasound images from partial data. Here, we propose an approach for deep-learning-based reconstruction of B-mode images from temporally and spatially sub-sampled channel data. We begin by considering sub-Nyquist sampled data, time-aligned in the frequency domain and transformed back to the time domain. The data are further sampled spatially so that only a subset of the received signals is acquired. The partial data is used to train an encoder-decoder convolutional neural network (CNN), using as targets minimum-variance (MV) beamformed signals that were generated from the original, fully-sampled data. Our approach yields high-quality B-mode images, with up to two times higher resolution than previously proposed reconstruction approaches (NESTA) from compressed data as well as delay-and-sum (DAS) beamforming of the fully-sampled data. In terms of contrast-to-noise ratio (CNR), our results are comparable to MV beamforming of the fully-sampled data, and provide up to 2 dB higher CNR values than DAS and NESTA, thus enabling better and more efficient imaging than what is used in clinical practice today.
25
Zhao L, Lediju Bell MA. A Review of Deep Learning Applications in Lung Ultrasound Imaging of COVID-19 Patients. BME FRONTIERS 2022; 2022:9780173. [PMID: 36714302 PMCID: PMC9880989 DOI: 10.34133/2022/9780173] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023] Open
Abstract
The massive and continuous spread of COVID-19 has motivated researchers around the world to intensely explore, understand, and develop new techniques for diagnosis and treatment. Although lung ultrasound imaging is a less established approach when compared to other medical imaging modalities such as X-ray and CT, multiple studies have demonstrated its promise to diagnose COVID-19 patients. At the same time, many deep learning models have been built to improve the diagnostic efficiency of medical imaging. The integration of these initially parallel efforts has led multiple researchers to report deep learning applications in medical imaging of COVID-19 patients, most of which demonstrate the outstanding potential of deep learning to aid in the diagnosis of COVID-19. This invited review is focused on deep learning applications in lung ultrasound imaging of COVID-19 and provides a comprehensive overview of ultrasound systems utilized for data acquisition, associated datasets, deep learning models, and comparative performance.
Affiliation(s)
- Lingyi Zhao
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
- Muyinatu A. Lediju Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA; Department of Computer Science, Johns Hopkins University, Baltimore, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, USA
|
26
|
Vayyeti A, Thittai AK. Optimally-weighted non-linear beamformer for conventional focused beam ultrasound imaging systems. Sci Rep 2021; 11:21622. [PMID: 34732736 PMCID: PMC8566575 DOI: 10.1038/s41598-021-00741-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2021] [Accepted: 10/14/2021] [Indexed: 11/09/2022] Open
Abstract
A novel non-linear beamforming method, namely, filtered delay optimally-weighted multiply and sum (F-DowMAS) beamforming, is reported for the conventional focused beamforming (CFB) technique. The performance of F-DowMAS was compared against the delay and sum (DAS), filtered delay multiply and sum (F-DMAS), filtered delay weight multiply and sum (F-DwMAS), and filtered delay Euclidean weighted multiply and sum (F-DewMAS) methods. Notably, in the proposed method the optimal adaptive weights are computed at each imaging point to compensate for spatial variations in the beam pattern of the CFB technique. F-DowMAS, F-DMAS, and DAS were compared in terms of the resulting image quality metrics: lateral resolution (LR), axial resolution (AR), contrast ratio (CR), and contrast-to-noise ratio (CNR), estimated from experiments on a commercially available tissue-mimicking phantom. The results demonstrate that F-DowMAS improved the AR by 57.04% and 46.95%, the LR by 58.21% and 53.40%, the CR by 67.35% and 39.25%, and the CNR by 44.04% and 30.57% compared to those obtained using DAS and F-DMAS, respectively. Thus, it can be concluded that the newly proposed F-DowMAS outperforms DAS and F-DMAS. As an aside, we also show that the optimal weighting strategy can be extended to benefit DAS.
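To make the baseline beamformers in this comparison concrete, here is a minimal sketch of the DAS and DMAS cores operating on already time-aligned channel samples. DAS is a coherent sum across channels; DMAS sums the signed square-roots of all pairwise channel products (the "filtered" variants additionally band-pass filter the DMAS output, which is omitted here). Function names and the toy data are illustrative assumptions, not code from the cited paper.

```python
import math

def das(channels):
    """Delay-and-sum: coherent sum of time-aligned channel samples."""
    return [sum(samples) for samples in zip(*channels)]

def dmas(channels):
    """Delay-multiply-and-sum core (pre-filtering): sum over all channel
    pairs (i, j), i < j, of sign(s_i*s_j) * sqrt(|s_i*s_j|)."""
    n = len(channels)
    out = []
    for samples in zip(*channels):
        acc = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                p = samples[i] * samples[j]
                acc += math.copysign(math.sqrt(abs(p)), p)
        out.append(acc)
    return out
```

The pairwise products reward inter-channel coherence, which is why DMAS-family beamformers tend to suppress incoherent clutter relative to DAS; the optimal per-pixel weighting proposed in F-DowMAS builds on this core.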
Affiliation(s)
- Anudeep Vayyeti
- Biomedical Ultrasound Laboratory, Department of Applied Mechanics, Indian Institute of Technology, Madras, Chennai, India
- Arun K Thittai
- Biomedical Ultrasound Laboratory, Department of Applied Mechanics, Indian Institute of Technology, Madras, Chennai, India
|