1
Spainhour J, Smart K, Becker S, Bottenus N. Optimization of array encoding for ultrasound imaging. Phys Med Biol 2024; 69:125024. [PMID: 38815603] [DOI: 10.1088/1361-6560/ad5249] [Received: 02/22/2024] [Accepted: 05/30/2024]
Abstract
Objective. The transmit encoding model for synthetic aperture imaging is a robust and flexible framework for understanding the effects of acoustic transmission on ultrasound image reconstruction. Our objective is to use machine learning (ML) to construct scanning sequences, parameterized by time delays and apodization weights, that produce high-quality B-mode images. Approach. We use a custom ML model in PyTorch with simulated RF data from Field II to probe the space of possible encoding sequences for those that minimize a loss function that describes image quality. This approach is made computationally feasible by a novel formulation of the derivative for delay-and-sum beamforming. Main results. When trained for a specified experimental setting (imaging domain, hardware restrictions, etc.), our ML model produces optimized encoding sequences that, when deployed in the REFoCUS imaging framework, improve a number of standard quality metrics over conventional sequences, including resolution, field of view, and contrast. We demonstrate these results experimentally on both wire targets and a tissue-mimicking phantom. Significance. This work demonstrates that the set of commonly used encoding schemes represents only a narrow subset of those available. Additionally, it demonstrates the value for ML tasks in synthetic transmit aperture imaging of considering the beamformer within the model, instead of purely as a post-processing step.
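The delay-and-sum step that this paper differentiates through can be illustrated in a few lines; the following is a minimal NumPy sketch (function and variable names are ours, not the authors' code), where the linear interpolation is what makes the output a piecewise-linear, hence differentiable, function of the transmit/receive delays:

```python
import numpy as np

def das_beamform(rf, delays_samples):
    """Delay-and-sum along one image line: shift each receive channel
    by its focusing delay (in samples, via linear interpolation) and
    sum across the aperture."""
    n_ch, n_t = rf.shape
    t = np.arange(n_t, dtype=float)
    out = np.zeros(n_t)
    for c in range(n_ch):
        # sample channel c at t + delay_c, zero outside the trace
        out += np.interp(t + delays_samples[c], t, rf[c],
                         left=0.0, right=0.0)
    return out
```

With delays matched to a target's geometric time of flight the echoes add coherently, while a mismatched delay profile lowers the summed amplitude; an image-quality loss can exploit exactly this dependence during optimization.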
Affiliation(s)
- Jacob Spainhour
- Department of Applied Mathematics, University of Colorado Boulder, Boulder, CO, United States of America
- Korben Smart
- Department of Physics, University of Colorado Boulder, Boulder, CO, United States of America
- Stephen Becker
- Department of Applied Mathematics, University of Colorado Boulder, Boulder, CO, United States of America
- Nick Bottenus
- Paul M. Rady Department of Mechanical Engineering, University of Colorado Boulder, Boulder, CO, United States of America
2
Lu J, Millioz F, Varray F, Poree J, Provost J, Bernard O, Garcia D, Friboulet D. Ultrafast Cardiac Imaging Using Deep Learning for Speckle-Tracking Echocardiography. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2023; 70:1761-1772. [PMID: 37862280] [DOI: 10.1109/tuffc.2023.3326377]
Abstract
High-quality ultrafast ultrasound imaging is based on coherent compounding from multiple transmissions of plane waves (PW) or diverging waves (DW). However, compounding reduces the frame rate and suffers destructive interference from high-velocity tissue motion if motion compensation (MoCo) is not considered. While many studies have recently shown the potential of deep learning for the reconstruction of high-quality static images from PW or DW, its ability to achieve such performance while maintaining the capability of tracking cardiac motion has yet to be assessed. In this article, we addressed this issue by deploying a complex-weighted convolutional neural network (CNN) for image reconstruction and a state-of-the-art speckle-tracking method. The evaluation of this approach was first performed by designing an adapted simulation framework, which provides specific reference data, i.e., high-quality, motion-artifact-free cardiac images. The obtained results showed that, while using only three DWs as input, the CNN-based approach yielded an image quality and a motion accuracy equivalent to those obtained by compounding 31 DWs free of motion artifacts. The performance was then further evaluated on nonsimulated, experimental in vitro data using a spinning-disk phantom. This experiment demonstrated that our approach yielded high-quality image reconstruction and motion estimation under a large range of velocities and outperformed a state-of-the-art MoCo-based approach at high velocities. Our method was finally assessed on in vivo datasets and showed consistent improvement in image quality and motion estimation compared to standard compounding. This demonstrates the feasibility and effectiveness of deep learning reconstruction for ultrafast speckle-tracking echocardiography.
3
Qu X, Ren C, Wang Z, Fan S, Zheng D, Wang S, Lin H, Jiang J, Xing W. Complex Transformer Network for Single-Angle Plane-Wave Imaging. Ultrasound in Medicine & Biology 2023; 49:2234-2246. [PMID: 37544831] [DOI: 10.1016/j.ultrasmedbio.2023.07.005] [Received: 03/06/2023] [Revised: 06/05/2023] [Accepted: 07/09/2023]
Abstract
OBJECTIVE Plane-wave imaging (PWI) is a high-frame-rate imaging technique that sacrifices image quality. Deep learning can potentially enhance plane-wave image quality, but processing complex in-phase and quadrature (IQ) data and suppressing incoherent signals pose challenges. To address these challenges, we present a complex transformer network (CTN) that integrates complex convolution and complex self-attention (CSA) modules. METHODS The CTN operates in a four-step process: delaying complex IQ data from a 0° single-angle plane wave for each pixel as CTN input data; extracting reconstruction features with a complex convolution layer; suppressing irrelevant features derived from incoherent signals with two CSA modules; and forming output images with another complex convolution layer. The training labels are generated by minimum variance (MV) beamforming. RESULTS Simulation, phantom and in vivo experiments revealed that CTN produced comparable- or even higher-quality images than MV, but with much shorter computation time. Evaluation metrics included contrast ratio, contrast-to-noise ratio, generalized contrast-to-noise ratio and lateral and axial full width at half-maximum, which were -11.59 dB, 1.16, 0.68, 278 μm and 329 μm for the simulation, respectively, and 9.87 dB, 0.96, 0.62, 357 μm and 305 μm for the phantom experiment, respectively. In vivo experiments further indicated that CTN could significantly improve details that were previously vague or even invisible in DAS and MV images. After GPU acceleration, the CTN runtime (76.03 ms) was comparable to that of delay-and-sum beamforming (DAS, 61.24 ms). CONCLUSION The proposed CTN significantly improved image contrast, resolution and details left unclear by the MV beamformer, making it an efficient tool for high-frame-rate imaging.
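The complex convolution used by such networks expands into four real convolutions; a minimal NumPy sketch of the identity (names are ours, not the paper's implementation, and a CNN would pair real convolution layers the same way):

```python
import numpy as np

def complex_conv1d(x, w):
    """Complex convolution built from real ones, via
    (a + ib) * (c + id) = (ac - bd) + i(ad + bc),
    for a complex IQ trace x and a complex kernel w."""
    rr = np.convolve(x.real, w.real)
    ii = np.convolve(x.imag, w.imag)
    ri = np.convolve(x.real, w.imag)
    ir = np.convolve(x.imag, w.real)
    return (rr - ii) + 1j * (ri + ir)
```

The result matches convolving the complex arrays directly, which is why a "complex layer" can be implemented with standard real-valued convolution primitives.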
Affiliation(s)
- Xiaolei Qu
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Chujian Ren
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Zihao Wang
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Shuangchun Fan
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
- Dezhi Zheng
- Research Institute for Frontier Science, Beihang University, Beijing, China
- Shuai Wang
- Research Institute for Frontier Science, Beihang University, Beijing, China
- Hongxiang Lin
- Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, China
- Jue Jiang
- Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, USA
- Weiwei Xing
- School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
4
Bilodeau M, Amyot FA, Masson P, Quaegebeur N. Real-time ultrasound phase imaging. Ultrasonics 2023; 134:107086. [PMID: 37390638] [DOI: 10.1016/j.ultras.2023.107086] [Received: 01/13/2023] [Revised: 06/14/2023] [Accepted: 06/16/2023]
Abstract
The Correlation-Based (CB) imaging method is characterized by its high spatial resolution capabilities, but it is known to require heavy computational resources due to its high complexity. This paper shows that the CB imaging method can be used to estimate the phase of the complex reflection coefficients contained in the observation window. The resulting Correlation-Based Phase Imaging (CBPI) method can be used to segment and identify different features or tissue elasticity variations in a given medium. A numerical validation is first proposed by considering a set of fifteen point-like scatterers on a Verasonics simulator. Then, three experimental datasets are used to show the potential of CBPI on scatterers and specular reflectors. In vitro imaging results are first presented to show that CBPI allows retrieving phase information on hyperechoic reflectors, but also on weak reflectors such as elasticity targets. It is demonstrated that CBPI helps distinguish regions of different elasticity, but of the same low-contrast echogenicity, which is otherwise impossible with standard B-mode or Synthetic Aperture Focusing Techniques (SAFT). Then, CBPI of a needle in an ex vivo chicken breast is performed to show that the method works on specular reflectors. It is shown that the phases of the different interfaces associated with the first wall of the needle are well reconstructed using CBPI. The heterogeneous architecture used to enable real-time CBPI is presented. An Nvidia GeForce RTX 2080 Ti Graphics Processing Unit (GPU) is used to process the real-time acquired signals from a Verasonics Vantage 128 research system. A frame rate of 18 frames per second is achieved for the whole acquisition and signal processing chain on a standard 500 × 200 pixel grid.
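The principle behind reading a reflection coefficient's phase from a correlation can be sketched generically: correlate the analytic (complex) echo with the analytic transmit pulse and take the angle at the echo's lag. This is an illustration of the idea only, not the authors' CBPI estimator:

```python
import numpy as np

def analytic(x):
    """Analytic signal via the FFT (positive frequencies doubled)."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spec * h)

def echo_phase(echo, pulse, lag):
    """Phase of the complex reflection coefficient at a known lag,
    from the correlation of the analytic echo with the analytic pulse
    (np.correlate conjugates its second argument)."""
    c = np.correlate(analytic(echo), analytic(pulse), mode="full")
    return np.angle(c[len(pulse) - 1 + lag])
```

A soft-to-hard interface flips the pulse's sign, so its estimated phase lands near ±π while an in-phase reflector lands near 0; that phase contrast is what separates regions of identical echogenicity.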
Affiliation(s)
- Maxime Bilodeau
- GAUS, Department of Mechanical Engineering, Sherbrooke, J1K 2R1, QC, Canada.
- Patrice Masson
- GAUS, Department of Mechanical Engineering, Sherbrooke, J1K 2R1, QC, Canada; CRCHUS, Université de Sherbrooke, Sherbrooke, J1K 2R1, QC, Canada.
- Nicolas Quaegebeur
- GAUS, Department of Mechanical Engineering, Sherbrooke, J1K 2R1, QC, Canada; CRCHUS, Université de Sherbrooke, Sherbrooke, J1K 2R1, QC, Canada.
5
Soylu U, Oelze ML. A Data-Efficient Deep Learning Strategy for Tissue Characterization via Quantitative Ultrasound: Zone Training. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2023; 70:368-377. [PMID: 37027531] [PMCID: PMC10224776] [DOI: 10.1109/tuffc.2023.3245988]
Abstract
Deep learning (DL)-powered biomedical ultrasound imaging is an emerging research field where researchers adapt the image analysis capabilities of DL algorithms to biomedical ultrasound imaging settings. A major roadblock to wider adoption is that acquiring the large and diverse datasets required for successful DL implementation is expensive in clinical settings. Hence, there is a constant need for data-efficient DL techniques to turn DL-powered biomedical ultrasound imaging into reality. In this work, we develop a data-efficient DL training strategy, which we name zone training, for classifying tissues based on ultrasonic backscattered RF data, i.e., quantitative ultrasound (QUS). In zone training, we propose to divide the complete field of view of an ultrasound image into multiple zones associated with different regions of the diffraction pattern and then train separate DL networks for each zone. The main advantage of zone training is that it requires less training data to achieve high accuracy. In this work, three different tissue-mimicking phantoms were classified by a DL network. The results demonstrate that, in the low-data regime, zone training can require a factor of 2-3 less training data to achieve classification accuracies similar to a conventional training strategy.
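The zone-training idea, stripped to its essentials, is "partition by depth, fit one model per partition". A toy NumPy sketch with a nearest-class-mean rule standing in for the DL network (all names and the classifier choice are ours, for illustration only):

```python
import numpy as np

def make_zones(depth_max, n_zones):
    """Fixed depth-zone edges covering the field of view."""
    return np.linspace(0.0, depth_max, n_zones + 1)

def zone_of(edges, z):
    return int(np.clip(np.searchsorted(edges, z, side="right") - 1,
                       0, len(edges) - 2))

def fit_zone_models(feats, depths, labels, edges):
    """One nearest-class-mean model per zone: each model only sees
    patches from its own depth zone, i.e. one region of the
    diffraction pattern."""
    zones = np.array([zone_of(edges, z) for z in depths])
    models = []
    for k in range(len(edges) - 1):
        in_k = zones == k
        models.append({c: feats[in_k & (labels == c)].mean(axis=0)
                       for c in np.unique(labels[in_k])})
    return models

def predict(models, edges, feat, z):
    """Route the patch to its zone's model, then classify."""
    means = models[zone_of(edges, z)]
    return min(means, key=lambda c: np.linalg.norm(feat - means[c]))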
6
Zhang J, Huang L, Luo J. Deep Null Space Learning Improves Dataset Recovery for High Frame Rate Synthetic Transmit Aperture Imaging. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2022; PP:219-236. [PMID: 37015712] [DOI: 10.1109/tuffc.2022.3232139]
Abstract
Synthetic transmit aperture (STA) imaging benefits from two-way dynamic focusing to achieve optimal lateral resolution and contrast resolution in the full field of view, at the cost of low frame rate (FR) and low signal-to-noise ratio (SNR). In our previous studies, compressed sensing based synthetic transmit aperture (CS-STA) and minimal l2-norm least squares (LS-STA) methods were proposed to recover the complete STA dataset from fewer Hadamard-encoded plane wave (PW) transmissions. Results demonstrated that, compared with STA imaging, CS/LS-STA can maintain the high resolution of STA in the full field of view and improve the contrast in the deep region with increased FR. However, these methods introduce errors into the recovered STA datasets and subsequently produce severe artifacts in the beamformed images, especially in the shallow region. Recently, we discovered the theoretical explanation for the error introduced in the LS-STA-based recovery: the LS-STA method neglects the null space component of the real STA dataset. To deal with this problem, we propose to train a convolutional neural network under the null space learning framework (CNN-Null) to estimate the missing null space component for high-accuracy recovery of the STA dataset from fewer Hadamard-encoded PW transmissions. The mapping between the low-quality STA dataset (i.e., the range space component of the real STA dataset recovered using the LS-STA method) and the missing null space component of the real STA dataset was learned by the network, with the high-quality STA dataset (obtained using full Hadamard-encoded STA imaging, HE-STA) as training labels. The performance of the proposed CNN-Null method was compared with the baseline LS-STA, conventional STA, and HE-STA methods in terms of visual quality, normalized root-mean-square error (NRMSE), generalized contrast-to-noise ratio (gCNR), and lateral full width at half maximum (FWHM). The results demonstrate that the proposed method can greatly improve the recovery accuracy of the STA datasets (lower NRMSE) and therefore effectively suppress the artifacts present in the images (especially in the shallow region) obtained using the LS-STA method (with a gCNR improvement of 0.4 in the cross-sectional carotid artery images). In addition, the proposed method can maintain the high lateral resolution of STA with fewer (as low as 16) PW transmissions, as LS-STA does.
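The range/null-space split motivating CNN-Null can be written down directly; a small generic linear-algebra sketch (not the paper's code):

```python
import numpy as np

def split_range_null(A, x):
    """Decompose x into the component recoverable from y = A @ x by a
    minimal-l2-norm least-squares solve (the row-space part) and the
    null-space component that any such solve necessarily misses,
    which is the part a learned model must supply."""
    x_range = np.linalg.pinv(A) @ (A @ x)  # = minimal-norm LS recovery
    return x_range, x - x_range
```

By construction A @ x_null = 0, so no linear recovery from the encoded measurements can ever see x_null; it must come from prior knowledge, here a trained network.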
7
Bottenus N, Spainhour J, Becker S. Comparison of spatial encodings for ultrasound imaging. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2022; PP:52-63. [PMID: 37015484] [DOI: 10.1109/tuffc.2022.3228218]
Abstract
Ultrasound pulse sequencing and receive signal focusing work hand-in-hand to determine image quality. These are commonly linked by geometry, for example using focused beams or plane waves in transmission paired with appropriate time-of-flight calculations for focusing. Spatial encoding allows a broader class of array transmissions but requires decoding of the recorded echoes before geometric focusing can be applied. Recent work has expanded spatial encoding to include not only element apodizations but also element time delays. This powerful technique allows for a unified beamforming strategy across different pulse sequences and increased flexibility in array signal processing given access to estimates of individual transmit element signals, but the trade-offs in image quality between these encodings have not been previously studied. We evaluate in simulation several commonly used time delay and amplitude encodings and investigate optimization of the parameter space for each. Using signal-to-noise ratio (SNR), point resolution, and lesion detectability, we found trade-offs between focused beams, plane waves, and Hadamard weight encodings. Beams with broader geometries maintained a wider field of view after decoding at the cost of SNR and lesion detectability. Focused beams and plane waves showed slightly reduced resolution compared to Hadamard weights in some cases, especially close to the array. We also found overall degraded image quality using random weight or random delay encodings. We validate these findings with experimental phantom imaging for select cases. We believe that these findings provide a starting point for sequence optimization and for improved image quality using the spatial encoding approach for imaging.
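In the linear encoding model, Hadamard weight encoding and its decoding step look like this; a minimal NumPy sketch under the noiseless, ideal-propagation assumption (names ours):

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def hadamard_encode_decode(element_signals):
    """Transmit k drives all N elements with weights H[k]; decoding
    applies H.T / N to the recorded set to recover the individual
    element responses (exact here because H.T @ H = N * I)."""
    N = element_signals.shape[0]
    H = hadamard(N)
    encoded = H @ element_signals   # N spatially encoded transmits
    return (H.T @ encoded) / N      # decoded per-element signals
```

Because every transmit fires all N elements at full amplitude, the decoded per-element signals carry an SNR gain over firing elements one at a time, which is the classic appeal of this encoding.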
8
Eslami L, Mohammadzadeh Asl B. Adaptive subarray coherence based post-filter using array gain in medical ultrasound imaging. Ultrasonics 2022; 126:106808. [PMID: 35921724] [DOI: 10.1016/j.ultras.2022.106808] [Received: 11/10/2021] [Revised: 07/15/2022] [Accepted: 07/19/2022]
Abstract
This paper presents an adaptive subarray coherence-based post-filter (ASCBP) applied to the eigenspace-based forward-backward minimum variance (ESB-FBMV) beamformer to simultaneously improve image quality and beamformer robustness. Additionally, the ASCBP can separate closely spaced targets. The ASCBP uses an adaptive noise power weight based on the concept of the beamformer's array gain (AG) to suppress noise adaptively and achieve improved images. Moreover, a square neighborhood average was applied to the ASCBP to provide smoother square neighborhood ASCBP (SN-ASCBP) values and improve the speckle quality. Through simulations of point and cyst phantoms and experimental validation, the performance of the proposed methods was compared to that of delay-and-sum (DAS), MV-based beamformers, and the subarray coherence-based post-filter (SCBP). The simulated results demonstrated that the ASCBP method improved the full width at half maximum (FWHM) by 57% and the coherent interference suppression power (CISP) by 52 dB compared to the SCBP post-filter. In the experimental results, the SN-ASCBP method showed the best enhancement in terms of generalized contrast-to-noise ratio (gCNR) and contrast ratio (CR), while the ASCBP showed the best improvement in FWHM among the compared methods. Furthermore, the proposed methods performed strikingly well at low SNRs. Evaluations under aberration and sound speed error illustrated the better robustness of the proposed methods in comparison with the others.
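As background for coherence-based post-filtering, the classic coherence factor (a much simpler relative of SCBP/ASCBP, shown here only to illustrate the family of methods) weights each pixel by how coherently the channels add:

```python
import numpy as np

def coherence_factor(channel_data):
    """Coherence factor per sample: |coherent sum|^2 divided by N
    times the incoherent (power) sum. Equals 1 for perfectly aligned
    channels and approaches 0 for uncorrelated noise, so multiplying
    a DAS image by it suppresses incoherent clutter."""
    N = channel_data.shape[0]
    coh = np.abs(channel_data.sum(axis=0)) ** 2
    inc = N * (np.abs(channel_data) ** 2).sum(axis=0)
    return np.divide(coh, inc, out=np.zeros_like(coh), where=inc > 0)
```

Adaptive variants like ASCBP replace this fixed ratio with subarray statistics and a noise-power weight derived from the array gain, trading simplicity for robustness.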
Affiliation(s)
- Leila Eslami
- Department of Biomedical Engineering, Tarbiat Modares University, Tehran 14115-111, Iran
9
Xiao D, Pitman WMK, Yiu BYS, Chee AJY, Yu ACH. Minimizing Image Quality Loss After Channel Count Reduction for Plane Wave Ultrasound via Deep Learning Inference. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2022; 69:2849-2861. [PMID: 35862334] [DOI: 10.1109/tuffc.2022.3192854]
Abstract
High-frame-rate ultrasound imaging uses unfocused transmissions to insonify an entire imaging view for each transmit event, thereby enabling frame rates over 1000 frames per second (fps). At these high frame rates, it is naturally challenging to realize real-time transfer of channel-domain raw data from the transducer to the system back end. Our work seeks to halve the total data transfer rate by uniformly decimating the receive channel count by 50% and, in turn, doubling the array pitch. We show that despite the reduced channel count and the inevitable use of a sparse array aperture, the resulting beamformed image quality can be maintained by designing a custom convolutional encoder-decoder neural network to infer the radio frequency (RF) data of the nullified channels. This deep learning framework was trained with in vivo human carotid data (5-MHz plane wave imaging, 128 channels, 31 steering angles over a 30° span, and 62 799 frames in total). After training, the network was tested on an in vitro point target scenario that was dissimilar to the training data, in addition to in vivo carotid validation datasets. In the point target phantom image beamformed from inferred channel data, spatial aliasing artifacts attributed to array pitch doubling were found to be reduced by up to 10 dB. For carotid imaging, our proposed approach yielded a lumen-to-tissue contrast that was on average within 3 dB compared to the full-aperture image, whereas without channel data inferencing, the carotid lumen was obscured. When implemented on an RTX-2080 GPU, the inference time to apply the trained network was 4 ms, which favors real-time imaging. Overall, our technique shows that with the help of deep learning, channel data transfer rates can be effectively halved with limited impact on the resulting image quality.
10
Zhang J, Liu J, Fan W, Qiu W, Luo J. Partial Hadamard encoded synthetic transmit aperture for high frame rate imaging with minimal l2-norm least squares method. Phys Med Biol 2022; 67. [PMID: 35349987] [DOI: 10.1088/1361-6560/ac6202] [Received: 11/05/2021] [Accepted: 03/29/2022]
Abstract
Objective. Synthetic transmit aperture (STA) ultrasound imaging is well known for ideal focusing in the full field of view. However, it suffers from low signal-to-noise ratio (SNR) and low frame rate, because each transducer element must be activated individually. In our previous study, we encoded all the transducer elements with a partial Hadamard matrix and reconstructed the complete STA dataset with a compressed sensing (CS) algorithm (CS-STA). As all the elements are activated in each transmission and the number of transmissions is smaller than that of STA, this method can achieve higher SNR and higher frame rate. Its main drawback is the time-consuming CS reconstruction (∼hours). In this study, we propose to accelerate the complete STA dataset reconstruction with a minimal l2-norm least squares method. Approach. Partial Hadamard-apodized plane wave (PW) transmissions were performed to acquire the PW dataset. Thereafter, the complete STA dataset can be reconstructed from the PW dataset with the minimal l2-norm least squares method. Due to the orthogonality of the partial Hadamard matrix, the minimal l2-norm least squares solution can be easily calculated. Main results. The proposed method is tested with simulation data and experimental phantom and in vivo data. The results demonstrate that the proposed method achieves ∼5 × 10³ times faster reconstruction than the CS algorithm. The simulation results demonstrate that the proposed method is capable of achieving the same accuracy as the conventional CS-STA method for the STA dataset reconstruction. The simulations and the phantom and in vivo experiments show that the proposed method is capable of improving the generalized contrast-to-noise ratio (gCNR) and SNR with maintained spatial resolution and fewer transmissions, compared with STA. Significance. In conclusion, the improved image quality and reduced computational time of LS-STA pave the way for its real-time application in the clinic.
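Because the M encoding rows are mutually orthogonal (A Aᵀ = N I), the minimal-l2-norm least-squares recovery has a closed form, a single matrix product instead of an iterative CS solve. A toy NumPy sketch of that step (dimensions and names illustrative only):

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def ls_sta_recover(pw_data, n_elements):
    """Minimal-l2-norm least-squares recovery of the STA dataset from
    M < N partial-Hadamard-encoded plane-wave acquisitions:
    x = A.T @ inv(A @ A.T) @ y = A.T @ y / N, since A @ A.T = N * I
    for the first M rows of an N x N Hadamard matrix."""
    M = pw_data.shape[0]
    A = hadamard(n_elements)[:M]
    return A.T @ pw_data / n_elements
```

This closed form reproduces the pseudoinverse solution exactly, which is why the reconstruction drops from hours of iterative CS to a single matrix multiply per receive channel.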
Affiliation(s)
- Jingke Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, People's Republic of China
- Jing Liu
- Shenzhen Mindray Bio-Medical Electronics Co., Ltd, Shenzhen 518057, People's Republic of China
- Wei Fan
- Shenzhen Mindray Bio-Medical Electronics Co., Ltd, Shenzhen 518057, People's Republic of China
- Weibao Qiu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China; Shenzhen Key Laboratory of Ultrasound Imaging and Therapy, Shenzhen 518055, People's Republic of China
- Jianwen Luo
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, People's Republic of China
11
Wang Y, Xie X, He Q, Liao H, Zhang H, Luo J. Hadamard-Encoded Synthetic Transmit Aperture Imaging for Improved Lateral Motion Estimation in Ultrasound Elastography. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2022; 69:1204-1218. [PMID: 35100113] [DOI: 10.1109/tuffc.2022.3148332]
Abstract
Lateral motion estimation has been a challenge in ultrasound elastography mainly due to the low resolution, low sampling frequency, and lack of phase information in the lateral direction. Synthetic transmit aperture (STA) can achieve high resolution due to two-way focusing and can beamform high-density image lines for improved lateral motion estimation with the disadvantages of low signal-to-noise ratio (SNR) and limited penetration depth. In this study, Hadamard-encoded STA (Hadamard-STA) is proposed for the improvement of lateral motion estimation in elastography, and it is compared with STA and conventional focused wave (CFW) imaging. Simulations, phantom, and in vivo experiments were conducted to make the comparison. The normalized root mean square error (NRMSE) and the contrast-to-noise ratio (CNR) were calculated as the evaluation criteria in the simulations. The results show that, at a noise level of -10 dB and an applied strain of -1% (compression), Hadamard-STA decreases the NRMSEs of lateral displacements by 46.92% and 35.35%, decreases the NRMSEs of lateral strains by 52.34% and 39.75%, and increases the CNRs by 9.70 and 9.75 dB compared with STA and CFW, respectively. In the phantom experiments performed on a heterogeneous tissue-mimicking phantom, the sum of squared differences (SSD) between the reference and the motion-compensated RF data, and the CNR were calculated as the evaluation criteria. At an applied strain of -1.80%, Hadamard-STA is found to decrease the SSDs by 20.91% and 30.99% and increase the CNRs by 14.15 and 24.66 dB compared with STA and CFW, respectively. In the experiments performed on a breast phantom, Hadamard-STA achieves better visualization of the breast inclusion with a clearer boundary between the inclusion and the background than STA and CFW. 
The in vivo experiments were performed on a patient with a breast tumor, and the tumor could also be better visualized, with a more homogeneous background, in the strain image obtained by Hadamard-STA than in those obtained by STA and CFW. These results demonstrate that Hadamard-STA achieves a substantial improvement in lateral motion estimation and may be a competitive method for quasi-static elastography.
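The lateral tracking that benefits from STA's denser, higher-resolution image lines ultimately reduces to locating a correlation peak between pre- and post-deformation data. A minimal integer-shift sketch (real speckle trackers add subsample interpolation and a 2-D axial/lateral search; names ours):

```python
import numpy as np

def lateral_shift(ref, cur, max_shift=8):
    """Integer lateral displacement (in image lines) between two RF
    blocks, as the argmax of their lateral cross-correlation."""
    best_val, best_s = -np.inf, 0
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(cur, -s, axis=1)
        val = np.sum(ref * shifted)
        if val > best_val:
            best_val, best_s = val, s
    return best_s
```

The denser the line grid and the sharper the lateral point spread function, the narrower this correlation peak, which is why two-way focusing and high line density improve lateral motion estimates.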
12
Zhang J, Wang Y, Liu J, He Q, Wang R, Liao H, Luo J. Acceleration of reconstruction for compressed sensing based synthetic transmit aperture imaging by using in-phase/quadrature data. Ultrasonics 2022; 118:106576. [PMID: 34530394] [DOI: 10.1016/j.ultras.2021.106576] [Received: 05/27/2021] [Revised: 09/01/2021] [Accepted: 09/01/2021]
Abstract
Compressed sensing-based synthetic transmit aperture (CS-STA) was previously proposed to recover the full radio-frequency (RF) channel dataset of synthetic transmit aperture (STA) from that of a smaller number of randomly apodized plane wave (PW) transmissions. In this way, the imaging frame rate (FR) and contrast are improved with maintained spatial resolution, compared with those of STA. Because CS-STA reconstruction is repeated for all receive elements and RF samples (with a high sampling frequency), the recovery of STA dataset in RF domain is time-consuming. In the meantime, a large amount of RF data needs to be transferred and stored, resulting in an increase of system complexity and required memory space. In this study, CS-STA is extended to in-phase/quadrature (IQ) domain (with lower sampling frequency) for the recovery of baseband STA IQ dataset to accelerate the CS-STA reconstruction by reducing the amount of data to be processed. More importantly, CS-STA reconstruction using IQ data is of practical importance, as clinical ultrasound systems typically record baseband IQ signal instead of RF signal. Simulations, phantom and in vivo experiments verify the feasibility of CS-STA in IQ domain for the recovery of STA dataset. More specifically, CS-STA using IQ data achieves similar image quality and appreciably improves reconstruction speed (by ∼3 times) compared with that using RF data. These findings demonstrate that IQ-domain CS-STA is capable of relieving the computational and storage burdens, which may facilitate the implementation of CS-STA in practical ultrasound systems.
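The RF-to-IQ step that enables this saving can be sketched as carrier mixing followed by low-pass filtering; a generic software demodulator (parameter names assumed; clinical systems typically do this in hardware):

```python
import numpy as np

def rf_to_iq(rf, fc, fs, taps=64):
    """Baseband IQ demodulation: mix the real RF trace down by the
    carrier fc, then low-pass with a normalized Hamming FIR to reject
    the 2*fc image. The result is a slowly varying complex envelope
    that can be sampled far more coarsely than the RF."""
    n = np.arange(len(rf))
    mixed = rf * np.exp(-2j * np.pi * fc * n / fs)
    h = np.hamming(taps)
    h /= h.sum()
    return np.convolve(mixed, h, mode="same")
```

A pure tone at the carrier demodulates to a near-constant complex value of half its amplitude, preserving phase; this phase preservation is what lets beamforming and CS-STA recovery operate on the smaller IQ dataset.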
Affiliation(s)
- Jingke Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Yuanyuan Wang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Jing Liu
- Shenzhen Mindray Bio-Medical Electronics Co., LTD, Shenzhen 518055, China
- Qiong He
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China; Tsinghua-Peking Joint Center for Life Sciences Department, Tsinghua University, Beijing 100084, China
- Rui Wang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Hongen Liao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Jianwen Luo
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China