1
Pitman WMK, Xiao D, Yiu BYS, Chee AJY, Yu ACH. Branched Convolutional Neural Networks for Receiver Channel Recovery in High-Frame-Rate Sparse-Array Ultrasound Imaging. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2024;71:558-571. [PMID: 38564354] [DOI: 10.1109/tuffc.2024.3383660]
Abstract
High-frame-rate ultrasound imaging remains challenging to implement on compact systems with sparse imaging configurations and limited array channels. One key issue is that the resulting image quality is mediocre, not only because unfocused plane-wave excitations are used but also because grating lobes emerge in sparse-array configurations. In this article, we present the design and use of a new channel recovery framework that infers full-array plane-wave channel datasets for periodically sparse arrays operating with as few as one-quarter of the full-array aperture. The framework is based on a branched encoder-decoder convolutional neural network (CNN) architecture, trained on full-array plane-wave channel data collected from human carotid arteries (59,864 training acquisitions; 5-MHz imaging frequency; 20-MHz sampling rate; plane-wave steering angles between -15° and 15° in 1° increments). Three branched encoder-decoder CNNs were trained separately to recover missing channels after different degrees of channelwise downsampling (2, 3, and 4 times). The framework's performance was tested on full-array and downsampled plane-wave channel data acquired from an in vitro point target, human carotid arteries, and human brachioradialis muscle. Results show that when the inferred full-array plane-wave channel data were used for beamforming, spatial aliasing artifacts in the B-mode images were suppressed for all degrees of channel downsampling. In addition, image contrast was enhanced compared with B-mode images beamformed from downsampled channel data. When the recovery framework was implemented on an RTX 2080 GPU, all three investigated degrees of downsampling achieved the same inference time of 4 ms. Overall, the proposed framework shows promise in enhancing the quality of high-frame-rate ultrasound images generated with a sparse-array imaging setup.
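The periodic channelwise downsampling described above (keeping every k-th receive channel) and the recovery target can be sketched as follows. This is an illustrative numpy sketch of the data layout only, not the authors' CNN; the array sizes and function names are assumptions.

```python
import numpy as np

def downsample_channels(full_data, factor):
    """Keep every `factor`-th receive channel of a (channels, samples) RF dataset,
    emulating a periodically sparse array (factor = 2, 3, or 4 in the paper)."""
    return full_data[::factor, :]

def zero_fill(sparse_data, factor, n_channels):
    """Naive baseline: place the retained channels back on the full-array grid,
    leaving the missing channels as zeros (the data the CNN is trained to infer)."""
    n_samples = sparse_data.shape[1]
    full = np.zeros((n_channels, n_samples), dtype=sparse_data.dtype)
    full[::factor, :] = sparse_data
    return full

rng = np.random.default_rng(0)
full = rng.standard_normal((128, 2048))   # hypothetical 128-channel plane-wave acquisition
sparse = downsample_channels(full, 4)     # 4x downsampling -> 32 active channels
recovered_grid = zero_fill(sparse, 4, 128)
```

A learned recovery network would replace the zeros in `recovered_grid` with inferred channel signals before beamforming.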
2
Qi B, Tian X, Fu L, Li Y, Chan KS, Ling C, Yim W, Zhang S, Jokerst JV. Deep learning assisted sparse array ultrasound imaging. PLoS One 2023;18:e0293468. [PMID: 37903113] [PMCID: PMC10615290] [DOI: 10.1371/journal.pone.0293468]
Abstract
This study aims to suppress grating lobe artifacts and improve the image resolution of sparse array ultrasonography via a deep learning predictive model. A deep learning assisted sparse array was developed using only 64 or 16 of the 128 channels, so that the pitch is two or eight times that of the original array. The deep learning assisted sparse array imaging system was demonstrated on ex vivo porcine teeth. The 64- and 16-channel sparse array images were used as the input, and the corresponding 128-channel dense array images were used as the ground truth. The structural similarity index measure, mean squared error, and peak signal-to-noise ratio of the predicted images improved significantly (p < 0.0001). The resolution of the predicted images was close to that of the ground truth images (0.18 mm and 0.15 mm versus 0.15 mm). The gingival thickness measurement showed a high level of agreement between the predicted sparse array images and the ground truth images, with a bias of -0.01 mm and 0.02 mm for the 64- and 16-channel predicted images, respectively, and a Pearson's r = 0.99 (p < 0.0001) for both. The bias between gingival thickness measured by deep learning assisted sparse array imaging and by a clinical probing needle was <0.05 mm. To conclude, the deep learning assisted sparse array can reconstruct high-resolution ultrasound images using as few as 16 of 128 channels. The model generalized well for the 64-channel array, whereas generalization for the 16-channel array would require further optimization.
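The mean squared error and peak signal-to-noise ratio used to compare predicted and ground-truth images can be computed as below; a minimal sketch, with the `peak` parameter standing in for the images' dynamic range (the study's exact normalization is not stated here).

```python
import numpy as np

def mse(pred, truth):
    """Mean squared error between a predicted image and its ground truth."""
    return float(np.mean((pred - truth) ** 2))

def psnr(pred, truth, peak=1.0):
    """Peak signal-to-noise ratio in dB; `peak` is the images' dynamic range."""
    m = mse(pred, truth)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

For example, a uniform error of 0.1 on unit-range images gives an MSE of 0.01 and a PSNR of 20 dB.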
Affiliation(s)
- Baiyan Qi: Materials Science and Engineering Program, University of California San Diego, La Jolla, California, United States of America
- Xinyu Tian: Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
- Lei Fu: Department of NanoEngineering, University of California San Diego, La Jolla, California, United States of America
- Yi Li: Department of NanoEngineering, University of California San Diego, La Jolla, California, United States of America
- Kai San Chan: Biomedical Engineering Program, The University of Hong Kong, Hong Kong SAR, China
- Chuxuan Ling: Department of NanoEngineering, University of California San Diego, La Jolla, California, United States of America
- Wonjun Yim: Materials Science and Engineering Program, University of California San Diego, La Jolla, California, United States of America
- Shiming Zhang: Department of Electrical and Electronic Engineering, The University of Hong Kong, Hong Kong SAR, China
- Jesse V. Jokerst: Materials Science and Engineering Program; Department of NanoEngineering; and Department of Radiology, University of California San Diego, La Jolla, California, United States of America
3
Nguon LS, Park S. Extended aperture image reconstruction for plane-wave imaging. Ultrasonics 2023;134:107096. [PMID: 37392616] [DOI: 10.1016/j.ultras.2023.107096]
Abstract
B-mode images degrade in the boundary region because of the limited number of elements in the ultrasound probe. Herein, a deep learning-based extended aperture image reconstruction method is proposed to reconstruct a B-mode image with an enhanced boundary region. The proposed network can reconstruct an image from pre-beamformed raw data received by the half-aperture of the probe. To generate a high-quality training target without degradation in the boundary region, the target data were acquired using the full aperture. Training data were acquired from an experimental study using a tissue-mimicking phantom, a vascular phantom, and simulations of random point scatterers. Compared with plane-wave images from delay-and-sum beamforming, the proposed extended aperture image reconstruction method improves the boundary region in terms of multi-scale structural similarity and peak signal-to-noise ratio by 8% and 4.10 dB in the resolution evaluation phantom, 7% and 3.15 dB in the contrast speckle phantom, and 5% and 3 dB in an in vivo study of carotid artery imaging. The findings demonstrate the feasibility of a deep learning-based extended aperture image reconstruction method for boundary region improvement.
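The conventional delay-and-sum baseline the paper compares against can be sketched for a 0° plane-wave transmit as follows. This is a generic nearest-sample DAS beamformer under assumed names and geometry, not the authors' reconstruction network or their exact beamforming pipeline.

```python
import numpy as np

def das_plane_wave(rf, elem_x, fs, c, xs, zs):
    """Minimal delay-and-sum beamformer for a 0-degree plane-wave transmit.
    rf: (n_elements, n_samples) receive data; elem_x: element x-positions (m);
    fs: sampling rate (Hz); c: speed of sound (m/s); xs, zs: image grid (m)."""
    n_elem, n_samp = rf.shape
    img = np.zeros((len(zs), len(xs)))
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            t_tx = z / c                                    # plane wave reaches depth z
            t_rx = np.sqrt((x - elem_x) ** 2 + z ** 2) / c  # echo back to each element
            idx = np.round((t_tx + t_rx) * fs).astype(int)  # nearest-sample delay
            valid = (idx >= 0) & (idx < n_samp)
            img[iz, ix] = rf[np.arange(n_elem)[valid], idx[valid]].sum()
    return img
```

Restricting `rf` and `elem_x` to half the elements gives the half-aperture input the network learns to compensate for.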
Affiliation(s)
- Leang Sim Nguon: Department of Electronic and Electrical Engineering, Ewha Womans University, Seoul 03760, Korea
- Suhyun Park: Department of Electronic and Electrical Engineering, Ewha Womans University, Seoul 03760, Korea
4
Soylu U, Oelze ML. A Data-Efficient Deep Learning Strategy for Tissue Characterization via Quantitative Ultrasound: Zone Training. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2023;70:368-377. [PMID: 37027531] [PMCID: PMC10224776] [DOI: 10.1109/tuffc.2023.3245988]
Abstract
Deep learning (DL)-powered biomedical ultrasound imaging is an emerging research field in which researchers adapt the image analysis capabilities of DL algorithms to biomedical ultrasound imaging settings. A major roadblock to wider adoption is that acquiring the large and diverse datasets required for successful DL implementation is expensive in clinical settings. Hence, there is a constant need for data-efficient DL techniques to make DL-powered biomedical ultrasound imaging a reality. In this work, we develop a data-efficient DL training strategy, named zone training, for classifying tissues based on ultrasonic backscattered RF data, i.e., quantitative ultrasound (QUS). In zone training, we propose to divide the complete field of view of an ultrasound image into multiple zones associated with different regions of a diffraction pattern and then train a separate DL network for each zone. The main advantage of zone training is that it requires less training data to achieve high accuracy. In this work, three different tissue-mimicking phantoms were classified by a DL network. The results demonstrate that, in the low-data regime, zone training can require a factor of 2-3 less training data to achieve classification accuracies similar to a conventional training strategy.
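The zone-training idea (partition the field of view by depth, train one model per zone) can be sketched as follows. As a hedged illustration, each per-zone "network" is replaced by a toy nearest-class-mean classifier; the zone count, feature shapes, and class names are assumptions, not the paper's setup.

```python
import numpy as np

def assign_zone(depth_idx, n_rows, n_zones):
    """Map a patch's depth (row index) to one of `n_zones` equal-depth zones."""
    return min(depth_idx * n_zones // n_rows, n_zones - 1)

class ZoneTrainedClassifier:
    """Toy stand-in for zone training: one independent classifier per depth zone.
    Each 'classifier' here is a nearest-class-mean model on patch features; the
    paper trains a separate DL network per zone instead."""
    def __init__(self, n_zones):
        self.n_zones = n_zones
        self.means = {}  # zone -> {label: mean feature vector}

    def fit(self, feats, labels, zones):
        for z in range(self.n_zones):
            sel = zones == z
            self.means[z] = {l: feats[sel & (labels == l)].mean(axis=0)
                             for l in np.unique(labels[sel])}
        return self

    def predict(self, feats, zones):
        out = np.empty(len(feats), dtype=int)
        for i, (f, z) in enumerate(zip(feats, zones)):
            cls = self.means[int(z)]
            out[i] = min(cls, key=lambda l: np.linalg.norm(f - cls[l]))
        return out
```

Because each model only sees data from its own diffraction-pattern region, the per-zone decision problem is simpler, which is the intuition behind the reduced data requirement.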