1
Vilimek D, Kubicek J, Golian M, Jaros R, Kahankova R, Hanzlikova P, Barvik D, Krestanova A, Penhaker M, Cerny M, Prokop O, Buzga M. Comparative analysis of wavelet transform filtering systems for noise reduction in ultrasound images. PLoS One 2022; 17:e0270745. PMID: 35797331; PMCID: PMC9262246; DOI: 10.1371/journal.pone.0270745.
Abstract
Wavelet transform (WT) is a commonly used method for noise suppression and feature extraction from biomedical images. The selection of WT system settings significantly affects the efficiency of the denoising procedure. This comparative study analyzed the efficacy of the proposed WT system on 292 real ultrasound images from several areas of interest. The study investigates the performance of the system for different scaling functions of two basic wavelet bases, Daubechies and Symlets, and their efficiency on images artificially corrupted by three kinds of noise. To evaluate our extensive analysis, we used objective metrics, namely the structural similarity index (SSIM), correlation coefficient, mean squared error (MSE), peak signal-to-noise ratio (PSNR), and universal image quality index (Q-index). Moreover, this study includes clinical insights on selected filtration outcomes provided by clinical experts. The results show that the efficiency of the filtration strongly depends on the specific wavelet system setting, the type of ultrasound data, and the noise present. The findings presented may provide a useful guideline for researchers, software developers, and clinical professionals to obtain high-quality images.
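As a rough illustration of the kind of wavelet-threshold denoising and objective metrics (PSNR, MSE, SSIM) this study compares, the sketch below uses PyWavelets and scikit-image on a stand-in test image; the wavelet ('db4'), decomposition level, and universal soft threshold are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of wavelet-threshold denoising with the objective metrics
# named in the abstract (PSNR, MSE, SSIM). Wavelet choice, decomposition
# level, and the universal threshold are illustrative assumptions.
import numpy as np
import pywt
from skimage import data, img_as_float
from skimage.util import random_noise
from skimage.metrics import (peak_signal_noise_ratio,
                             mean_squared_error,
                             structural_similarity)

def wavelet_denoise(image, wavelet="db4", level=3):
    """Soft-threshold the detail coefficients of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    # Estimate the noise level from the finest diagonal detail band (robust MAD).
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    threshold = sigma * np.sqrt(2 * np.log(image.size))  # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(band, threshold, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

reference = img_as_float(data.camera())          # stand-in for an ultrasound frame
noisy = np.clip(random_noise(reference, mode="speckle", var=0.02), 0.0, 1.0)
restored = wavelet_denoise(noisy)[:reference.shape[0], :reference.shape[1]]

for name, img in [("noisy", noisy), ("denoised", restored)]:
    print(name,
          "PSNR=%.2f" % peak_signal_noise_ratio(reference, img, data_range=1.0),
          "MSE=%.5f" % mean_squared_error(reference, img),
          "SSIM=%.3f" % structural_similarity(reference, img, data_range=1.0))
```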
Affiliation(s)
- Dominik Vilimek
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Jan Kubicek
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Milos Golian
- Human Motion Diagnostic Center, Department of Human Movement Studies, University of Ostrava, Ostrava, Czech Republic
- Rene Jaros
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Radana Kahankova
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- * E-mail:
- Pavla Hanzlikova
- Department of Imaging Method, Faculty of Medicine, University of Ostrava, Ostrava, Czech Republic
- Daniel Barvik
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Alice Krestanova
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Marek Penhaker
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Martin Cerny
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
- Marek Buzga
- Human Motion Diagnostic Center, Department of Human Movement Studies, University of Ostrava, Ostrava, Czech Republic
- Department of Physiology and Pathophysiology, Faculty of Medicine, University of Ostrava, Ostrava, Czech Republic
2
The use of deep learning methods in low-dose computed tomography image reconstruction: a systematic review. COMPLEX INTELL SYST 2022. DOI: 10.1007/s40747-022-00724-7.
Abstract
Conventional reconstruction techniques, such as filtered back projection (FBP) and iterative reconstruction (IR), which have been widely used in the image reconstruction process of computed tomography (CT), are not suitable for low-dose CT applications because of the unsatisfactory quality of the reconstructed image and the long reconstruction time. Therefore, as the demand for CT radiation dose reduction continues to increase, the use of artificial intelligence (AI) in image reconstruction has become a trend that attracts more and more attention. This systematic review examined various deep learning methods to determine their characteristics, availability, intended use, and expected outputs concerning low-dose CT image reconstruction. Utilising the methodology of Kitchenham and Charters, we performed a systematic search of the literature from 2016 to 2021 in Springer, Science Direct, arXiv, PubMed, ACM, IEEE, and Scopus. The review showed that algorithms using deep learning technology are superior to traditional IR methods in noise suppression, artifact reduction, and structure preservation, in terms of improving the image quality of low-dose reconstructed images. In conclusion, we provide an overview of deep learning approaches in low-dose CT image reconstruction together with their benefits, limitations, and opportunities for improvement.
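To make the low-dose problem this review starts from concrete, the sketch below reconstructs a simulated sinogram with filtered back projection at a full and a reduced photon count; the Poisson noise model and the photon count are illustrative assumptions, not taken from the review.

```python
# Minimal sketch of filtered back projection (FBP) on a simulated low-dose
# sinogram, illustrating how FBP amplifies noise at reduced dose. The photon
# count used for the Poisson noise model is an illustrative assumption.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale
from skimage.metrics import peak_signal_noise_ratio

phantom = rescale(shepp_logan_phantom(), 0.5)      # 200x200 ground-truth slice
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=angles)

# Simulate a low-dose acquisition: fewer photons mean more Poisson noise.
incident_photons = 5e3
attenuation = sinogram / sinogram.max()
counts = np.random.poisson(incident_photons * np.exp(-attenuation))
low_dose_sinogram = -np.log(np.clip(counts, 1, None) / incident_photons) * sinogram.max()

fbp_full = iradon(sinogram, theta=angles, filter_name="ramp")
fbp_low = iradon(low_dose_sinogram, theta=angles, filter_name="ramp")

print("PSNR, full dose:", peak_signal_noise_ratio(phantom, fbp_full, data_range=1.0))
print("PSNR, low dose :", peak_signal_noise_ratio(phantom, fbp_low, data_range=1.0))
```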
3
Adaptive Image Denoising Method Based on Diffusion Equation and Deep Learning. JOURNAL OF ROBOTICS 2022. DOI: 10.1155/2022/7115551.
Abstract
Effective noise removal while preserving important image details has become a central topic in image denoising research. An adaptive threshold image denoising algorithm based on a fitted diffusion model is proposed. First, the diffusion coefficient in the diffusion equation is improved and a fitted diffusion coefficient is established to overcome the loss of texture detail and the edge degradation caused by excessive diffusion strength. Then, the threshold function is adaptively designed so that the threshold is controlled automatically according to the maximum gray value of the image and the number of iterations, further preserving important details such as edges and texture. A neural network is used for denoising because of its ability to learn the statistical characteristics of images: with the diffusion equation and a convolutional neural network (CNN) as the foundation, the work focuses on the effect of the activation function on network optimization, uses multi-feature extraction in deeper networks to learn richer characteristics of the input image, and studies how to combine the adaptive diffusion scheme with backpropagation-based optimization. The training speed of the model is accelerated and the convergence of the algorithm is improved. Combined with batch normalization and residual learning, an image denoising network based on deep residual learning of a convolutional network is designed with better denoising performance. Finally, the algorithm is compared with other state-of-the-art denoising algorithms. The comparison shows that the improved algorithm restores more detail in denoised images without losing sharpness, and it achieves higher PSNR than the other algorithms at different noise standard deviations. The PSNR of the new algorithm is greatly improved compared with the classical algorithm; it effectively suppresses noise while protecting edge and detail information.
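A minimal sketch of the diffusion part of such a scheme is shown below: Perona-Malik-style diffusion whose edge-stopping threshold depends on the image's maximum gray value and the iteration number. The specific adaptive rule and the toy test image are an illustrative reading of the abstract, not the paper's exact formulation, and the CNN stage is omitted.

```python
# Perona-Malik-style diffusion with an iteration-dependent threshold tied to
# the maximum gray value; the adaptive rule is an illustrative assumption.
import numpy as np

def adaptive_diffusion(image, iterations=20, dt=0.15):
    """Explicit 4-neighbour diffusion (periodic boundaries via np.roll for brevity)."""
    u = image.astype(np.float64).copy()
    for it in range(iterations):
        # Edge-stopping threshold shrinks as iterations proceed.
        k = 0.1 * u.max() / (1.0 + it)

        # Finite-difference gradients towards the four neighbours.
        north = np.roll(u, -1, axis=0) - u
        south = np.roll(u, 1, axis=0) - u
        east = np.roll(u, -1, axis=1) - u
        west = np.roll(u, 1, axis=1) - u

        # Diffusion coefficient: small across strong edges, large in flat regions.
        def g(grad):
            return np.exp(-((grad / k) ** 2))

        u += dt * (g(north) * north + g(south) * south +
                   g(east) * east + g(west) * west)
    return u

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                        # toy image with sharp edges
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
smoothed = adaptive_diffusion(noisy)
print("RMSE before:", np.sqrt(np.mean((noisy - clean) ** 2)))
print("RMSE after :", np.sqrt(np.mean((smoothed - clean) ** 2)))
```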
4
Current and emerging artificial intelligence applications in chest imaging: a pediatric perspective. Pediatr Radiol 2022; 52:2120-2130. PMID: 34471961; PMCID: PMC8409695; DOI: 10.1007/s00247-021-05146-0.
Abstract
Artificial intelligence (AI) applications for chest radiography and chest CT are among the most developed applications in radiology. More than 40 certified AI products are available for chest radiography or chest CT. These AI products cover a wide range of abnormalities, including pneumonia, pneumothorax and lung cancer. Most applications are aimed at detecting disease, complemented by products that characterize or quantify tissue. At present, none of the thoracic AI products is specifically designed for the pediatric population. However, some products developed to detect tuberculosis in adults are also applicable to children. Software is under development to detect early changes of cystic fibrosis on chest CT, which could be an interesting application for pediatric radiology. In this review, we give an overview of current AI products in thoracic radiology and cover recent literature about AI in chest radiography, with a focus on pediatric radiology. We also discuss possible pediatric applications.
5
An Overview of Supervised Machine Learning Methods and Data Analysis for COVID-19 Detection. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:4733167. PMID: 34853669; PMCID: PMC8629644; DOI: 10.1155/2021/4733167.
Abstract
Methods: Our analysis and machine learning algorithm are based on the two most cited clinical datasets in the literature: one from San Raffaele Hospital, Milan, Italy, and the other from Hospital Israelita Albert Einstein, São Paulo, Brazil. The datasets were processed to select the features that most influence the target, and it turned out that almost all of them are blood parameters. Exploratory data analysis (EDA) methods were applied to the datasets, and a comparative study of supervised machine learning models was carried out, after which the support vector machine (SVM) was selected as the model with the best performance. Results: SVM, as the best performer, is used as our proposed supervised machine learning algorithm. An accuracy of 99.29%, sensitivity of 92.79%, and specificity of 100% were obtained with the dataset from Kaggle (https://www.kaggle.com/einsteindata4u/covid19) after optimising the SVM. The same procedure was performed with the dataset taken from San Raffaele Hospital (https://zenodo.org/record/3886927#.YIluB5AzbMV). Once more, the SVM presented the best performance among the machine learning algorithms, with an accuracy of 92.86%, sensitivity of 93.55%, and specificity of 90.91%. Conclusion: The obtained results, when compared with others from the literature based on these same datasets, are superior, leading us to conclude that our proposed solution is reliable for COVID-19 diagnosis.
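The sketch below illustrates the general workflow described above (feature selection, an SVM classifier, and accuracy/sensitivity/specificity from a confusion matrix) using scikit-learn; synthetic data stands in for the hospital datasets, and the feature count, kernel, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: select informative features, fit a support vector machine,
# and report accuracy, sensitivity and specificity. Synthetic data is a
# stand-in for the clinical datasets; parameters are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, accuracy_score

X, y = make_classification(n_samples=600, n_features=30, n_informative=10,
                           weights=[0.85, 0.15], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=10),   # keep the most informative "blood parameters"
    SVC(kernel="rbf", C=10, gamma="scale"),
)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print("accuracy   :", accuracy_score(y_test, y_pred))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
```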
6
Alla Takam C, Tchagna Kouanou A, Samba O, Mih Attia T, Tchiotsop D. Big Data Framework Using Spark Architecture for Dose Optimization Based on Deep Learning in Medical Imaging. ARTIF INTELL 2021. DOI: 10.5772/intechopen.97746.
Abstract
Deep learning and machine learning provide consistent tools and powerful functions for recognition, classification, reconstruction, noise reduction, quantification, and segmentation in biomedical image analysis, and have produced several breakthroughs. Recently, applications of deep learning and machine learning for low-dose optimization in computed tomography have been developed. Because of advances in reconstruction and processing technology, it has become crucial to develop architectures and/or methods based on deep learning algorithms to minimize radiation during computed tomography scan examinations. This chapter extends the work done by Alla et al. in 2020 and explains that work in detail. It introduces deep learning for computed tomography low-dose optimization, shows examples described in the literature, briefly discusses new methods for computed tomography image processing, and provides conclusions. We propose a pipeline for low-dose computed tomography image reconstruction based on the literature. Our proposed pipeline relies on deep learning and big data technology using the Spark framework. We compare it with the pipelines proposed in the literature to demonstrate its efficiency and importance. A big data architecture using computed tomography images for low-dose optimization is proposed. The proposed architecture relies on deep learning and allows us to develop effective and appropriate methods for dose optimization with computed tomography images. The realization of the image denoising pipeline shows that the radiation dose can be reduced and that the recommended pipeline improves the quality of the captured image.
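As a rough sketch of how such a Spark-based architecture could distribute a denoising step over many CT slices, the PySpark snippet below maps a placeholder denoiser over a list of slice files; the directory layout and the denoise_slice() placeholder (which would wrap a trained deep learning model) are hypothetical and not the chapter's actual pipeline.

```python
# Hypothetical PySpark sketch of distributing a denoising step over CT slices.
# The file layout and denoise_slice() placeholder are illustrative assumptions.
import glob
import numpy as np
from pyspark.sql import SparkSession

def denoise_slice(path):
    """Placeholder: load one slice, apply a (pretrained) denoiser, return a statistic."""
    img = np.load(path)              # assumes slices stored as .npy arrays
    denoised = img                   # a real pipeline would call the trained model here
    return path, float(np.std(img - denoised))

spark = SparkSession.builder.appName("ldct-denoising").getOrCreate()
paths = glob.glob("/data/ldct/slices/*.npy")     # hypothetical location

results = (spark.sparkContext
           .parallelize(paths, numSlices=8)      # distribute slices across workers
           .map(denoise_slice)
           .collect())

for path, residual in results[:5]:
    print(path, residual)
spark.stop()
```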
7
The use of deep learning towards dose optimization in low-dose computed tomography: A scoping review. Radiography (Lond) 2021; 28:208-214. PMID: 34325998; DOI: 10.1016/j.radi.2021.07.010.
Abstract
INTRODUCTION Low-dose computed tomography tends to produce lower image quality than normal-dose computed tomography (CT), although it can help to reduce the radiation hazards of CT scanning. Research has shown that artificial intelligence (AI) technologies, especially deep learning, can help enhance the image quality of low-dose CT by denoising images. This scoping review aims to create an overview of how AI technologies, especially deep learning, can be used in dose optimisation for low-dose CT. METHODS Literature searches of ProQuest, PubMed, Cinahl, ScienceDirect, EbscoHost Ebook Collection and Ovid were carried out to find research articles published between the years 2015 and 2020. In addition, a manual search was conducted in SweMed+, SwePub, NORA, Taylor & Francis Online and Medic. RESULTS Following a systematic search process, the review comprised 16 articles. Articles were organised according to the effects of the deep learning networks, e.g. image noise reduction and image restoration. Deep learning can be used in multiple ways to facilitate dose optimisation in low-dose CT. Most articles discuss image noise reduction in low-dose CT. CONCLUSION Deep learning can be used in the optimisation of patients' radiation dose. Nevertheless, image quality is normally lower in low-dose CT (LDCT) than in regular-dose CT scans because of the smaller radiation dose. With the help of deep learning, the image quality can be improved to approach regular-dose CT image quality. IMPLICATIONS TO PRACTICE A lower dose may decrease patients' radiation risk but may affect the image quality of CT scans. Artificial intelligence technologies can be used to improve image quality in low-dose CT scans. Radiologists and radiographers should have proper education and knowledge about the techniques used.
8
Biomedical Image Classification in a Big Data Architecture Using Machine Learning Algorithms. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:9998819. PMID: 34122785; PMCID: PMC8191587; DOI: 10.1155/2021/9998819.
Abstract
In modern-day medicine, medical imaging has undergone immense advancements and can capture many types of biomedical images from patients. To assist medical specialists, these images can be used to train an intelligent system that helps determine the diseases identifiable from them. Classification plays an important role in this regard; it enhances the grouping of these images into categories of diseases and optimizes the next step of a computer-aided diagnosis system. The concept of classification in machine learning deals with the problem of identifying to which set of categories a new observation belongs; the classification is done on the basis of a training set of data containing observations whose category membership is known. The goal of this paper is to survey classification algorithms for biomedical images. The paper then describes how these algorithms can be applied to a big data architecture using the Spark framework. It further proposes a classification workflow based on the algorithms observed to perform best in the literature, Support Vector Machine and deep learning. The algorithm for the feature extraction step of the classification process is presented and can be customized, as can all other steps of the proposed classification workflow.
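A minimal sketch of such a Spark-based classification workflow is shown below: a toy feature extractor (an intensity histogram) followed by a linear SVM trained with Spark MLlib. The feature extractor, the synthetic images, and the hyperparameters are illustrative assumptions, not the paper's exact workflow.

```python
# Minimal sketch of a Spark MLlib classification workflow: per-image feature
# extraction followed by a linear SVM. The histogram features and synthetic
# "images" are illustrative stand-ins.
import numpy as np
from pyspark.sql import SparkSession
from pyspark.ml.linalg import Vectors
from pyspark.ml.classification import LinearSVC

def histogram_features(image, bins=16):
    """Toy feature extraction: a normalised intensity histogram."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0), density=True)
    return Vectors.dense(hist)

spark = SparkSession.builder.appName("biomedical-classification").getOrCreate()

# Synthetic stand-ins for two classes of biomedical images.
rng = np.random.default_rng(0)
rows = [(histogram_features(rng.uniform(0.0, 0.5, (32, 32))), 0.0) for _ in range(50)] + \
       [(histogram_features(rng.uniform(0.5, 1.0, (32, 32))), 1.0) for _ in range(50)]
df = spark.createDataFrame(rows, ["features", "label"])

train, test = df.randomSplit([0.8, 0.2], seed=0)
model = LinearSVC(maxIter=50, regParam=0.01).fit(train)
accuracy = model.transform(test).where("prediction = label").count() / test.count()
print("test accuracy:", accuracy)
spark.stop()
```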
9
Evaluation of Organ Dose and Image Quality Metrics of Pediatric CT Chest-Abdomen-Pelvis (CAP) Examination: An Anthropomorphic Phantom Study. APPLIED SCIENCES-BASEL 2021. DOI: 10.3390/app11052047.
Abstract
The aim of this study is to investigate the impact of CT acquisition parameter settings on organ dose and their influence on image quality metrics in a pediatric phantom during CT examination. The study was performed on a 64-slice multidetector CT (MDCT) scanner, Siemens Definition AS (Siemens Sector Healthcare, Forchheim, Germany), using various CT CAP protocols (P1–P9). The tube potential for protocols P1, P2, and P3 was fixed at 100 kVp, while P4, P5, and P6 were fixed at 80 kVp, with various reference noise values. P7, P8, and P9 were modifications of P1 with changes to slice collimation, pitch factor, and tube current modulation (TCM), respectively. TLD-100 chips were inserted into phantom slabs 7, 9, 10, 12, 13, and 14 to represent the thyroid, lung, liver, stomach, gonads, and skin, respectively. The image quality metrics, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), were obtained from the CT console. The results indicate a potential reduction in absorbed dose of 20% to 50% when tube voltage and tube current are reduced and slice collimation is increased. No significant difference (p > 0.05) was observed between the protocols in terms of image quality metrics.
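For reference, the SNR and CNR reported from the console correspond to the usual ROI-based definitions (mean and standard deviation over regions of interest), sketched below with simulated Hounsfield-unit values; the ROI choices are purely illustrative, since the study read these values from the scanner.

```python
# Short sketch of the usual ROI-based SNR and CNR definitions; the ROI values
# here are simulated and purely illustrative.
import numpy as np

def snr(roi):
    """Signal-to-noise ratio of a region of interest."""
    return roi.mean() / roi.std()

def cnr(roi_a, roi_b, background):
    """Contrast-to-noise ratio between two tissues, normalised by background noise."""
    return abs(roi_a.mean() - roi_b.mean()) / background.std()

rng = np.random.default_rng(0)
liver = rng.normal(60.0, 5.0, (20, 20))       # simulated HU values
fat = rng.normal(-90.0, 5.0, (20, 20))
air = rng.normal(-1000.0, 4.0, (20, 20))

print("SNR(liver):", snr(liver))
print("CNR(liver vs fat):", cnr(liver, fat, air))
```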