1. Viñals R, Thiran JP. A KL Divergence-Based Loss for In Vivo Ultrafast Ultrasound Image Enhancement with Deep Learning. J Imaging 2023; 9:256. PMID: 38132674; PMCID: PMC10744220; DOI: 10.3390/jimaging9120256.
Abstract
Ultrafast ultrasound imaging, characterized by high frame rates, generates low-quality images. Convolutional neural networks (CNNs) have demonstrated great potential to enhance image quality without compromising the frame rate. However, CNNs have been mostly trained on simulated or phantom images, leading to suboptimal performance on in vivo images. In this study, we present a method to enhance the quality of single plane wave (PW) acquisitions using a CNN trained on in vivo images. Our contribution is twofold. Firstly, we introduce a training loss function that accounts for the high dynamic range of the radio frequency data and uses the Kullback-Leibler divergence to preserve the probability distributions of the echogenicity values. Secondly, we conduct an extensive performance analysis on a large new in vivo dataset of 20,000 images, comparing the predicted images to the target images resulting from the coherent compounding of 87 PWs. Applying a volunteer-based dataset split, the peak signal-to-noise ratio and structural similarity index measure increase, respectively, from 16.466 ± 0.801 dB and 0.105 ± 0.060, calculated between the single PW and target images, to 20.292 ± 0.307 dB and 0.272 ± 0.040, between predicted and target images. Our results demonstrate significant improvements in image quality, effectively reducing artifacts.
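The distribution-preserving idea in this abstract can be illustrated with a minimal numpy sketch: compute the Kullback-Leibler divergence between the echogenicity (log-compressed envelope) histograms of a predicted and a target image. The function name, bin count, and dB normalization below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def echogenicity_kl(pred, target, bins=64, eps=1e-10):
    """KL divergence between the echogenicity (log-envelope) histograms
    of a predicted and a target ultrasound image."""
    # Log-compress the envelopes to echogenicity values in dB.
    pred_db = 20 * np.log10(np.abs(pred) + eps)
    target_db = 20 * np.log10(np.abs(target) + eps)
    # Shared bin edges so the two histograms are comparable.
    lo = min(pred_db.min(), target_db.min())
    hi = max(pred_db.max(), target_db.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(pred_db, bins=edges)
    q, _ = np.histogram(target_db, bins=edges)
    # Normalize to probability distributions; eps avoids log(0).
    p = p / p.sum() + eps
    q = q / q.sum() + eps
    return float(np.sum(q * np.log(q / p)))
```

Identical images give a divergence of (numerically) zero, while a scale shift in echogenicity produces a clearly positive value.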
Affiliation(s)
- Roser Viñals
- Signal Processing Laboratory 5 (LTS5), École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland;
- Jean-Philippe Thiran
- Signal Processing Laboratory 5 (LTS5), École Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne, Switzerland;
- Department of Radiology, University Hospital Center (CHUV) and University of Lausanne (UNIL), 1011 Lausanne, Switzerland
2. Tunable image quality control of 3-D ultrasound using switchable CycleGAN. Med Image Anal 2023; 83:102651. PMID: 36327653; DOI: 10.1016/j.media.2022.102651.
Abstract
In contrast to 2-D ultrasound (US) for uniaxial plane imaging, a 3-D US imaging system can visualize a volume along three axial planes. This allows for a full view of the anatomy, which is useful for gynecological (GYN) and obstetrical (OB) applications. Unfortunately, 3-D US has an inherent limitation in resolution compared to 2-D US. In the case of 3-D US with a 3-D mechanical probe, for example, the image quality is comparable along the beam direction, but significant deterioration in image quality is often observed in the other two axial image planes. To address this, here we propose a novel unsupervised deep learning approach to improve 3-D US image quality. In particular, using unmatched high-quality 2-D US images as a reference, we trained a recently proposed switchable CycleGAN architecture so that every mapping plane in 3-D US can learn the image quality of 2-D US images. Thanks to the switchable architecture, our network can also provide real-time control of the image enhancement level based on user preference, which is ideal for a user-centric scanner setup. Extensive experiments with clinical evaluation confirm that our method offers significantly improved image quality as well as user-friendly flexibility.
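The "switchable" control described above can be sketched as interpolating between two learned normalization styles with a user-chosen weight alpha; this toy instance-norm formulation and its parameter names are assumptions for illustration, not the paper's exact architecture.

```python
import numpy as np

def switchable_instance_norm(x, gamma_a, beta_a, gamma_b, beta_b, alpha):
    """Instance-normalize x, then apply affine parameters interpolated
    between two learned styles; alpha in [0, 1] selects the enhancement level."""
    mu, sigma = x.mean(), x.std() + 1e-8
    x_norm = (x - mu) / sigma
    # Linear interpolation between the two styles gives continuous,
    # real-time control of the output appearance.
    gamma = (1 - alpha) * gamma_a + alpha * gamma_b
    beta = (1 - alpha) * beta_a + alpha * beta_b
    return gamma * x_norm + beta
```

Because the affine parameters are linear in alpha, the output moves continuously between the two styles as the user turns the knob.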
3. Wifstad SV, Lovstakken L, Avdal J, Berg EAR, Torp H, Grenne B, Fiorentini S. Quantifying Valve Regurgitation Using 3-D Doppler Ultrasound Images and Deep Learning. IEEE Trans Ultrason Ferroelectr Freq Control 2022; 69:3317-3326. PMID: 36315529; DOI: 10.1109/tuffc.2022.3218281.
Abstract
Accurate quantification of cardiac valve regurgitation jets is fundamental for guiding treatment. Cardiac ultrasound is the preferred diagnostic tool, but current methods for measuring the regurgitant volume (RVol) are limited by low accuracy and high interobserver variability. Following recent research, quantitative estimators of orifice size and RVol based on high frame rate 3-D ultrasound have been proposed, but measurement accuracy is limited by the wide point spread function (PSF) relative to the orifice size. The aim of this article was to investigate the use of deep learning to estimate both the orifice size and the RVol. A simulation model was developed to simulate the power-Doppler images of blood flow through orifices with different geometries. A convolutional neural network (CNN) was trained on 30 000 image pairs. The network was used to reconstruct orifices from power-Doppler data, which facilitated estimators for regurgitant orifice areas and flow volumes. We demonstrate that the network improves orifice shape reconstruction, as well as the accuracy of orifice area and flow volume estimation, compared with a previous approach based on thresholding of the power-Doppler signal (THD), and compared with spatially invariant deconvolution (DC). Our approach reduces the area estimation error on simulations: (THD: 13.2 ± 9.9 mm2, DC: 12.8 ± 15.8 mm2, and ours: 3.5 ± 3.2 mm2). In a phantom experiment, our approach reduces both area estimation error (THD: 10.4 ± 8.4 mm2, DC: 10.98 ± 8.17 mm2, and ours: 9.9 ± 6.0 mm2) and flow rate estimation error (THD: 20.3 ± 9.9 ml/s, DC: 18.14 ± 13.01 ml/s, and ours: 7.1 ± 10.6 ml/s). We also demonstrate in vivo feasibility for six patients with aortic insufficiency, compared with standard echocardiography and magnetic resonance references.
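The thresholding baseline (THD) that the authors compare against can be sketched in a few lines of numpy; the -3 dB threshold and pixel pitch below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def orifice_area_thd(power_db, pixel_area_mm2, threshold_db=-3.0):
    """Estimate orifice area by thresholding a power-Doppler image
    (in dB, normalized so the peak is 0 dB) and summing the
    supra-threshold pixels."""
    mask = power_db >= threshold_db
    return mask.sum() * pixel_area_mm2
```

Because the PSF is wide relative to the orifice, such a threshold tends to overestimate small orifices, which is the limitation the learned reconstruction addresses.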
4. Moinuddin M, Khan S, Alsaggaf AU, Abdulaal MJ, Al-Saggaf UM, Ye JC. Medical ultrasound image speckle reduction and resolution enhancement using texture compensated multi-resolution convolution neural network. Front Physiol 2022; 13:961571. DOI: 10.3389/fphys.2022.961571.
Abstract
Ultrasound (US) imaging is a mature technology with widespread applications, especially in the healthcare sector. Despite its popularity, it has an inherent disadvantage: ultrasound images are prone to speckle and other kinds of noise. Image quality in low-cost ultrasound systems is further degraded by such noise and by their low resolution. Herein, we propose a method for image enhancement in which the overall quality of US images is improved by simultaneous resolution enhancement and noise suppression. To avoid over-smoothing while preserving structural/texture information, we devise a texture compensation scheme that retains useful anatomical features. Moreover, we utilize knowledge of US image formation physics to generate augmentation datasets that improve the training of the proposed method. Our experimental results showcase the performance of the proposed network as well as the effectiveness of using US physics knowledge to generate augmentation datasets.
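The physics-based augmentation can be illustrated with the classical multiplicative speckle model, in which fully developed ultrasound speckle has a Rayleigh-distributed envelope; this is a toy sketch under that standard assumption, not the authors' pipeline.

```python
import numpy as np

def add_speckle(clean, scale=1.0, seed=None):
    """Multiplicative speckle: modulate each pixel by a Rayleigh-distributed
    factor, the classical model for fully developed ultrasound speckle."""
    rng = np.random.default_rng(seed)
    # The mean of Rayleigh(scale) is scale*sqrt(pi/2); divide it out so
    # the expected brightness of the image is preserved.
    factor = rng.rayleigh(scale=scale, size=clean.shape)
    factor /= scale * np.sqrt(np.pi / 2)
    return clean * factor
```

Applying this to clean training images yields speckle-corrupted/clean pairs for supervised denoising without additional acquisitions.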
5. Qu H, Liu H, Jiang S, Wang J, Hou Y. Discovery the inverse variational problems from noisy data by physics-constrained machine learning. Appl Intell 2022. DOI: 10.1007/s10489-022-04079-x.
6. Li H, Bhatt M, Qu Z, Zhang S, Hartel MC, Khademhosseini A, Cloutier G. Deep learning in ultrasound elastography imaging: A review. Med Phys 2022; 49:5993-6018. PMID: 35842833; DOI: 10.1002/mp.15856.
Abstract
It is known that changes in the mechanical properties of tissues are associated with the onset and progression of certain diseases. Ultrasound elastography is a technique to characterize tissue stiffness using ultrasound imaging, either by measuring tissue strain using quasi-static elastography or natural organ pulsation elastography, or by tracking a propagating shear wave induced by a source or a natural vibration using dynamic elastography. In recent years, deep learning has begun to emerge in ultrasound elastography research. In this review, several common deep learning frameworks from the computer vision community, such as the multilayer perceptron, convolutional neural network, and recurrent neural network, are described. Then, recent advances in ultrasound elastography using such deep learning techniques are revisited in terms of algorithm development and clinical diagnosis. Finally, the current challenges and future developments of deep learning in ultrasound elastography are discussed.
Affiliation(s)
- Hongliang Li
- Laboratory of Biorheology and Medical Ultrasonics, University of Montreal Hospital Research Center, Montréal, Québec, Canada; Institute of Biomedical Engineering, University of Montreal, Montréal, Québec, Canada
- Manish Bhatt
- Laboratory of Biorheology and Medical Ultrasonics, University of Montreal Hospital Research Center, Montréal, Québec, Canada
- Zhen Qu
- Laboratory of Biorheology and Medical Ultrasonics, University of Montreal Hospital Research Center, Montréal, Québec, Canada
- Shiming Zhang
- California Nanosystems Institute, University of California, Los Angeles, California, USA
- Martin C Hartel
- California Nanosystems Institute, University of California, Los Angeles, California, USA
- Ali Khademhosseini
- California Nanosystems Institute, University of California, Los Angeles, California, USA
- Guy Cloutier
- Laboratory of Biorheology and Medical Ultrasonics, University of Montreal Hospital Research Center, Montréal, Québec, Canada; Institute of Biomedical Engineering, University of Montreal, Montréal, Québec, Canada; Department of Radiology, Radio-Oncology and Nuclear Medicine, University of Montreal, Montréal, Québec, Canada
7. Lu J, Millioz F, Garcia D, Salles S, Ye D, Friboulet D. Complex Convolutional Neural Networks for Ultrafast Ultrasound Imaging Reconstruction From In-Phase/Quadrature Signal. IEEE Trans Ultrason Ferroelectr Freq Control 2022; 69:592-603. PMID: 34767508; DOI: 10.1109/tuffc.2021.3127916.
Abstract
Ultrafast ultrasound imaging remains an active area of interest in the ultrasound community due to its ultrahigh frame rates. Recently, a wide variety of studies based on deep learning have sought to improve ultrafast ultrasound imaging. Most of these approaches have been performed on radio frequency (RF) signals. However, in-phase/quadrature (I/Q) digital beamformers are now widely used as low-cost strategies. In this work, we used complex convolutional neural networks for the reconstruction of ultrasound images from I/Q signals. We recently described a convolutional neural network architecture called ID-Net, which exploits an inception layer designed for the reconstruction of RF diverging-wave ultrasound images. In the present study, we derive the complex equivalent of this network, i.e., the complex-valued inception for diverging-wave network (CID-Net), which operates on I/Q data. We provide experimental evidence that CID-Net yields the same image quality as RF-trained convolutional neural networks: using only three I/Q images, CID-Net produces high-quality images that compete with those obtained by coherently compounding 31 RF images. Moreover, we show that CID-Net outperforms the straightforward architecture that processes the real and imaginary parts of the I/Q signal separately, indicating the importance of processing I/Q signals with a network that exploits their complex nature.
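The core operation of such a complex-valued network reduces to four real convolutions, since (a + ib)(c + id) = (ac - bd) + i(ad + bc); below is a minimal 1-D numpy sketch of that decomposition, with names chosen for illustration.

```python
import numpy as np

def complex_conv1d(iq, w):
    """Convolve a complex I/Q signal with a complex kernel using four
    real convolutions, as complex-valued CNN layers do internally."""
    a, b = iq.real, iq.imag  # in-phase and quadrature components
    c, d = w.real, w.imag
    real = np.convolve(a, c, mode="valid") - np.convolve(b, d, mode="valid")
    imag = np.convolve(a, d, mode="valid") + np.convolve(b, c, mode="valid")
    return real + 1j * imag
```

This is exactly why a network that shares structure across the four real convolutions can outperform one that treats the real and imaginary channels as unrelated inputs.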
8. Usman M, Khan S, Park S, Wahab A. AFP-SRC: identification of antifreeze proteins using sparse representation classifier. Neural Comput Appl 2022. DOI: 10.1007/s00521-021-06558-7.
9. AoP-LSE: Antioxidant Proteins Classification Using Deep Latent Space Encoding of Sequence Features. Curr Issues Mol Biol 2021; 43:1489-1501. PMID: 34698113; PMCID: PMC8928959; DOI: 10.3390/cimb43030105.
Abstract
It is of utmost importance to develop a computational method for the accurate prediction of antioxidants, as they play a vital role in the prevention of several diseases caused by oxidative stress. In this correspondence, we present an effective computational methodology based on the notion of deep latent space encoding. A deep neural network classifier fused with an auto-encoder learns class labels in a pruned latent space. This strategy eliminates the need to develop the classifier and the feature selection model separately, allowing a standalone model to effectively harness a discriminating feature space and deliver improved predictions. A thorough analytical study is presented, along with PCA/t-SNE visualizations and PCA-GCNR scores, to show the discriminating power of the proposed method. The proposed method showed a high MCC value of 0.43 and a balanced accuracy of 76.2%, which is superior to the existing models. The model was also evaluated on an independent dataset, on which it outperformed contemporary methods by correctly identifying novel proteins with an accuracy of 95%.
10. Guo S, Feng H, Feng W, Lv G, Chen D, Liu Y, Wu X. Automatic Quantification of Subsurface Defects by Analyzing Laser Ultrasonic Signals Using Convolutional Neural Networks and Wavelet Transform. IEEE Trans Ultrason Ferroelectr Freq Control 2021; 68:3216-3225. PMID: 34106854; DOI: 10.1109/tuffc.2021.3087949.
Abstract
Conventional machine learning algorithms for analyzing ultrasonic signals to detect structural defects require manual identification and extraction of time- or frequency-domain features, which limits their reliability and effectiveness. This work proposes a novel approach combining convolutional neural networks (CNNs) and the wavelet transform to analyze laser-generated ultrasonic signals and accurately measure the width of subsurface defects. The novelty of this work is to convert the laser ultrasonic signals into scalograms (images) via the wavelet transform, which are then used as image input to a pretrained CNN that extracts defect features automatically to quantify defect width, avoiding the inaccuracy induced by manual feature selection. An experimentally validated numerical model that simulates the interaction of laser-generated ultrasonic waves with subsurface defects is first established and then used to generate adequate laser ultrasonic signals for training the CNN model. A total of 3104 signals were obtained from simulation and experiments, with 2480 simulated signals used for training the CNN model and the remaining 620 simulated signals, together with 4 experimental signals, used to verify the performance of the proposed algorithm. The approach achieves a prediction accuracy of 98.5% on the validation set, including a prediction accuracy of 100% on the four experimental signals. This work demonstrates the feasibility and reliability of the proposed method for quantifying the width of subsurface defects, and the method can be extended to the detection of other defect characteristics, such as location and shape.
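The signal-to-scalogram step can be sketched as a simple continuous wavelet transform: correlate the signal with wavelets at several scales and stack the magnitudes into an image for the CNN. The Morlet-like wavelet, scale grid, and kernel length below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def morlet(n, scale, w0=5.0):
    """Sampled real-valued Morlet-like wavelet at a given scale."""
    t = (np.arange(n) - n // 2) / scale
    return np.cos(w0 * t) * np.exp(-t**2 / 2)

def scalogram(signal, scales, kernel_len=64):
    """Stack |signal * wavelet(scale)| over scales into a 2-D image
    that a pretrained image CNN can ingest."""
    rows = [np.abs(np.convolve(signal, morlet(kernel_len, s), mode="same"))
            for s in scales]
    return np.stack(rows)  # shape: (len(scales), len(signal))
```

Each row of the resulting image is the response at one scale, so time-frequency structure of the defect echo becomes spatial structure the CNN can learn from.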