1. Kim JY. Performance Evaluation of Ultrasound Images Using Non-Local Means Algorithm with Adaptive Isotropic Search Window for Improved Detection of Salivary Gland Diseases: A Pilot Study. Diagnostics (Basel) 2024; 14:1433. PMID: 39001323; PMCID: PMC11241115; DOI: 10.3390/diagnostics14131433.
Abstract
Speckle noise in ultrasound images (UIs) significantly reduces the accuracy of disease diagnosis. The aim of this study was to model an adaptive non-local means (NLM) algorithm and quantitatively evaluate its feasibility for salivary gland ultrasound imaging. UIs were obtained using an open-source device provided by SonoSkills and FUJIFILM Healthcare Europe. The adaptive NLM algorithm automates optimization by modeling the isotropic search window, eliminating the manual configuration required by conventional NLM methods. The coefficient of variation (COV), contrast-to-noise ratio (CNR), and edge rise distance (ERD) were used as quantitative evaluation parameters. With the adaptive NLM algorithm, UIs of the salivary glands clearly visualized the internal echo shape of a malignant tumor and a calcification line. Compared with the noisy images, COV and CNR improved by factors of approximately 4.62 and 2.15, respectively. Additionally, when the adaptive NLM algorithm was applied to UIs of patients with salivary gland sialolithiasis, the ERD values of the denoised and noisy images were nearly identical, indicating that edge sharpness was preserved. In conclusion, this study demonstrated the applicability of the adaptive NLM algorithm in optimizing search window parameters for salivary gland UIs.
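For readers unfamiliar with the underlying filter, the conventional NLM baseline that this adaptive variant automates can be sketched as below. This is an illustrative fixed-window implementation, not the authors' adaptive algorithm; the patch size, search window size, and smoothing parameter h are arbitrary demonstration choices.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.1):
    """Conventional non-local means with a fixed (non-adaptive) search window.

    Each pixel becomes a weighted average of search-window pixels; weights
    decay exponentially with the mean squared distance between patches."""
    pr, sr = patch // 2, search // 2
    pad = pr + sr
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            ci, cj = i + pad, j + pad
            ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            weights, vals = [], []
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)   # patch dissimilarity
                    weights.append(np.exp(-d2 / h ** 2))
                    vals.append(padded[ni, nj])
            weights = np.asarray(weights)
            out[i, j] = np.dot(weights, vals) / weights.sum()
    return out
```

The adaptive variant in the paper tunes the search window automatically rather than fixing `search` by hand, which is exactly the manual step this baseline exposes.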
Affiliation(s)
- Ji-Youn Kim
- Department of Dental Hygiene, Gachon University, 191, Hambakmoero, Yeonsu-gu, Incheon 21936, Republic of Korea
2. Hussain D, Gu YH. Exploring the Impact of Noise and Image Quality on Deep Learning Performance in DXA Images. Diagnostics (Basel) 2024; 14:1328. PMID: 39001219; PMCID: PMC11240833; DOI: 10.3390/diagnostics14131328.
Abstract
BACKGROUND AND OBJECTIVE Segmentation of the femur in dual-energy X-ray absorptiometry (DXA) images poses challenges due to reduced contrast, noise, bone shape variations, and inconsistent X-ray beam penetration. In this study, we investigate the relationship between noise and certain deep learning (DL) techniques for semantic segmentation of the femur, aiming to enhance segmentation and bone mineral density (BMD) accuracy by incorporating noise reduction methods into DL models. METHODS Convolutional neural network (CNN)-based models were employed to segment femurs in DXA images and to evaluate the effect of noise reduction filters on segmentation accuracy and on BMD calculation. Various noise reduction techniques were integrated into DL-based models to enhance image quality before training. We assessed the performance of the fully convolutional neural network (FCNN) in comparison with noise reduction algorithms and manual segmentation methods. RESULTS The FCNN outperformed noise reduction algorithms in enhancing segmentation accuracy and enabling precise calculation of BMD, achieving a segmentation accuracy of 98.84% and a correlation coefficient of 0.9928 for BMD measurements, indicating its effectiveness in the clinical diagnosis of osteoporosis. CONCLUSIONS Integrating noise reduction techniques into DL-based models significantly improves femur segmentation accuracy in DXA images. The FCNN model, in particular, shows promising results in enhancing BMD calculation and the clinical diagnosis of osteoporosis. These findings highlight the potential of DL techniques in addressing segmentation challenges and improving diagnostic accuracy in medical imaging.
Affiliation(s)
- Dildar Hussain
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul 05006, Republic of Korea
- Yeong Hyeon Gu
- Department of Artificial Intelligence and Data Science, Sejong University, Seoul 05006, Republic of Korea
3. Yao L, Wang J, Wu Z, Du Q, Yang X, Li M, Zheng J. Parallel processing model for low-dose computed tomography image denoising. Vis Comput Ind Biomed Art 2024; 7:14. PMID: 38865022; PMCID: PMC11169366; DOI: 10.1186/s42492-024-00165-8.
Abstract
Low-dose computed tomography (LDCT) has gained increasing attention owing to its crucial role in reducing radiation exposure in patients. However, LDCT-reconstructed images often suffer from significant noise and artifacts, negatively impacting radiologists' ability to diagnose accurately. To address this issue, many studies have focused on denoising LDCT images using deep learning (DL) methods. However, these DL-based denoising methods are hindered by the highly variable feature distribution of LDCT data from different imaging sources, which adversely affects the performance of current denoising models. In this study, we propose a parallel processing model, the multi-encoder deep feature transformation network (MDFTN), designed to enhance the performance of LDCT imaging for multisource data. Unlike traditional network structures, which rely on continual learning to process multitask data, our approach can simultaneously handle LDCT images from various imaging sources within a unified framework. The proposed MDFTN consists of multiple encoders and decoders along with a deep feature transformation module (DFTM). During forward propagation in network training, each encoder extracts diverse features from its respective data source in parallel, and the DFTM compresses these features into a shared feature space. Subsequently, each decoder performs an inverse operation for multisource loss estimation. Through collaborative training, the proposed MDFTN leverages the complementary advantages of multisource data distribution to enhance its adaptability and generalization. Numerous experiments conducted on two public datasets and one local dataset demonstrated that the proposed network model can simultaneously process multisource data while effectively suppressing noise and preserving fine structures. The source code is available at https://github.com/123456789ey/MDFTN.
Affiliation(s)
- Libing Yao
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Jiping Wang
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Zhongyi Wu
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Qiang Du
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Xiaodong Yang
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Ming Li
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Jian Zheng
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China
- Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
4. Liu H, Liu J, Zhou W, Xu B, Yue Z, Xiong D, Yang X. Noise correction in differential phase contrast for improving phase sensitivity. Opt Express 2024; 32:16629-16644. PMID: 38858864; DOI: 10.1364/oe.516623.
Abstract
Differential phase contrast (DPC) imaging relies on computational analysis to extract quantitative phase information from phase gradient images. However, even a modest noise level can introduce errors that propagate through the computational process, degrading the quality of the final phase result and reducing phase sensitivity. Here, we introduce noise-corrected DPC (ncDPC) to enhance phase sensitivity. This approach is based on a theoretical DPC model that effectively accounts for the most relevant camera noise sources and non-uniform illumination in DPC. In particular, the dominant shot noise and readout noise variance can be jointly estimated using frequency analysis and further corrected by the block-matching 3D (BM3D) method. Finally, the denoised images are used for phase retrieval based on the common Tikhonov inversion. Our results, based on both simulated and experimental data, demonstrate that ncDPC outperforms traditional DPC (tDPC), enabling significant improvements in both phase reconstruction quality and phase sensitivity. We further demonstrate the broad applicability of ncDPC across various experimental datasets.
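The joint shot/readout estimation above can be made concrete with a simple regression sketch. This is not the paper's frequency-domain estimator; it assumes repeated frames of a static scene and the common affine camera model var(I) = g·mean(I) + s², where g reflects the shot-noise gain and s² the readout variance.

```python
import numpy as np

def estimate_noise_params(frames):
    """Jointly estimate shot-noise gain g and readout variance s2 under the
    affine model var = g * mean + s2, by regressing per-pixel temporal
    variance on per-pixel temporal mean over repeated frames.

    frames: array of shape (n_frames, H, W) of a static scene."""
    mean = frames.mean(axis=0).ravel()
    var = frames.var(axis=0, ddof=1).ravel()
    g, s2 = np.polyfit(mean, var, 1)  # slope = gain, intercept = readout var
    return g, s2
```

In practice the scene must span a range of intensities so the regression is well conditioned; a flat scene makes the slope unidentifiable.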
5. Alzubaidi M, Shah U, Agus M, Househ M. FetSAM: Advanced Segmentation Techniques for Fetal Head Biometrics in Ultrasound Imagery. IEEE Open J Eng Med Biol 2024; 5:281-295. PMID: 38766538; PMCID: PMC11100952; DOI: 10.1109/ojemb.2024.3382487.
Abstract
Goal: FetSAM is a deep learning model aimed at advancing fetal head ultrasound segmentation and thereby improving prenatal diagnostic precision. Methods: Utilizing a comprehensive dataset, the largest to date for fetal head metrics, FetSAM incorporates prompt-based learning. It distinguishes itself with a dual loss mechanism, combining weighted Dice loss and weighted Lovasz loss, optimized through AdamW and supported by class weight adjustments for better segmentation balance. Performance benchmarks against prominent models such as U-Net, DeepLabV3, and Segformer highlight its efficacy. Results: FetSAM delivers strong segmentation accuracy, demonstrated by a DSC of 0.90117, an HD of 1.86484, and an ASD of 0.46645. Conclusion: FetSAM sets a new benchmark in AI-enhanced prenatal ultrasound analysis, providing a robust, precise tool for clinical applications with its large dataset and segmentation capabilities.
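The DSC reported above is the standard Dice similarity coefficient; a minimal definition for binary masks:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient for binary masks: 2|A ∩ B| / (|A| + |B|).
    eps guards against division by zero when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

The weighted Dice loss used for training is typically 1 minus a per-class, weight-adjusted version of this score.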
Affiliation(s)
- Mahmood Alzubaidi
- College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar
- Uzair Shah
- College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar
- Marco Agus
- College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar
- Mowafa Househ
- College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar
6. Kobayashi D, Hayashi H, Nishigami R, Maeda T, Asahara T, Kanazawa Y, Katsumata A, Kimoto N, Yamamoto S. A blurring correction method suitable to analyze quantitative x-ray images derived from energy-resolving photon counting detector. Phys Med Biol 2024; 69:075023. PMID: 38452379; DOI: 10.1088/1361-6560/ad3119.
Abstract
Objective. The purpose of this study is to propose a novel blurring correction method that enables accurate quantitative analysis of object edges when using energy-resolving photon counting detectors (ERPCDs). Although ERPCDs enable various quantitative analysis techniques, such as the derivation of effective atomic number (Zeff) and bone mineral density values, accurate quantitative information cannot be obtained at object edges in these quantitative images, because image blurring prevents gathering accurate primary x-ray attenuation information. Approach. We developed the following procedure for blurring correction. A 5 × 5 pixel masking region was set as the processing area, and the pixels affected by blurring were extracted from an analysis of the pixel value distribution. The blurred pixel values were then corrected to the proper values estimated by analyzing the minimum and/or maximum values in the mask area. The suitability of our correction method was verified in a simulation study and in an experiment using a prototype ERPCD. Main results. When Zeff images of aluminum objects (Zeff = 13) were analyzed without our correction method, the proper Zeff values could not be derived at the object edge, regardless of whether raw data or data corrected with a conventional edge-enhancement method were used. In contrast, when our correction method was applied, 82% of the pixels affected by blurring were corrected and the proper Zeff values were calculated for those pixels. An investigation of the applicability limits of our method through simulation proved that it works effectively for objects of 4 × 4 pixels or more. Significance. Our method is effective in correcting image blurring when a quantitative image is calculated from multiple images. It should become an in-demand technology for bringing quantitative diagnosis into routine medical examinations.
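A toy rendition of the idea, not the published algorithm: inside each 5 × 5 window, a pixel lying strictly between the window extremes is treated as blur-contaminated and snapped to the nearer extreme. The threshold `tol`, which skips flat regions, is an arbitrary illustrative parameter.

```python
import numpy as np

def snap_edge_pixels(img, tol=0.2):
    """Toy edge de-blurring: inside each 5x5 window, a pixel strictly
    between the window extremes is treated as blur-contaminated and
    snapped to the nearer of the window minimum and maximum."""
    pad = 2
    padded = np.pad(img, pad, mode="edge")
    out = np.asarray(img, dtype=float).copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = padded[i:i + 5, j:j + 5]
            lo, hi = win.min(), win.max()
            v = out[i, j]
            if hi - lo > tol and lo < v < hi:
                out[i, j] = lo if (v - lo) < (hi - v) else hi
    return out
```

Applied to a blurred step edge, intermediate values collapse onto the two plateau values, which is the behavior the paper exploits to recover per-pixel attenuation before computing Zeff.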
Affiliation(s)
- Daiki Kobayashi
- Graduate School of Medical Sciences, Kanazawa University, Ishikawa, 920-0942, Japan
- Hiroaki Hayashi
- College of Medical, Pharmaceutical and Health Sciences, Kanazawa University, Ishikawa, 920-0942, Japan
- Rina Nishigami
- Graduate School of Medical Sciences, Kanazawa University, Ishikawa, 920-0942, Japan
- Tatsuya Maeda
- Graduate School of Medical Sciences, Kanazawa University, Ishikawa, 920-0942, Japan
- Takashi Asahara
- Graduate School of Medical Sciences, Kanazawa University, Ishikawa, 920-0942, Japan
- Yuki Kanazawa
- Graduate School of Biomedical Sciences, Tokushima University, Tokushima, 770-8503, Japan
7. Wang R, Liu X, Tan G. Coupling speckle noise suppression with image classification for deep-learning-aided ultrasound diagnosis. Phys Med Biol 2024; 69:065001. PMID: 38359452; DOI: 10.1088/1361-6560/ad29bb.
Abstract
Objective. During deep-learning-aided (DL-aided) ultrasound (US) diagnosis, US image classification is a foundational task. Due to the existence of serious speckle noise in US images, the performance of DL models may be degraded. Pre-denoising US images before their use in DL models is usually a logical choice. However, our investigation suggests that pre-speckle-denoising is not consistently advantageous. Furthermore, due to the decoupling of speckle denoising from the subsequent DL classification, investing intensive time in parameter tuning is inevitable to attain the optimal denoising parameters for various datasets and DL models. Pre-denoising will also add extra complexity to the classification task and make it no longer end-to-end. Approach. In this work, we propose a multi-scale high-frequency-based feature augmentation (MSHFFA) module that couples feature augmentation and speckle noise suppression with specific DL models, preserving an end-to-end fashion. In MSHFFA, the input US image is first decomposed into multi-scale low-frequency and high-frequency components (LFC and HFC) with the discrete wavelet transform. Then, multi-scale augmentation maps are obtained by computing the correlation between LFC and HFC. Last, the original DL model features are augmented with the multi-scale augmentation maps. Main results. On two public US datasets, all six renowned DL models exhibited enhanced F1-scores compared with their original versions (by 1.31%-8.17% on the POCUS dataset and 0.46%-3.89% on the BLU dataset) after using the MSHFFA module, with only approximately a 1% increase in model parameter count. Significance. The proposed MSHFFA has broad applicability and commendable efficiency and thus can be used to enhance the performance of DL-aided US diagnosis. The codes are available at https://github.com/ResonWang/MSHFFA.
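The LFC/HFC split that MSHFFA builds on can be illustrated with one level of the 2-D Haar wavelet transform (the paper uses a discrete wavelet transform; the Haar basis here is chosen only for brevity):

```python
import numpy as np

def haar_dwt2(img):
    """One level of the 2-D Haar wavelet transform: returns the
    approximation sub-band (low-frequency component, LFC) and the three
    detail sub-bands (high-frequency components, HFC)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0  # row-pair averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0  # row-pair differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0     # LFC (approximation)
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0     # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0     # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0     # diagonal detail
    return ll, (lh, hl, hh)
```

With this averaging normalization, ll + lh + hl + hh exactly recovers the even-indexed samples of the input, so no information is lost in the split; speckle energy concentrates in the detail sub-bands.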
Affiliation(s)
- Ruixin Wang
- College of Computer Science and Software Engineering, Hohai University, Nanjing 210098, People's Republic of China
- Xiaohui Liu
- The First People's Hospital of Kunshan, Affiliated Kunshan Hospital of Jiangsu University, Kunshan 215300, People's Republic of China
- Guoping Tan
- College of Computer Science and Software Engineering, Hohai University, Nanjing 210098, People's Republic of China
8. Abbasian Ardakani A, Mohammadi A, Vogl TJ, Kuzan TY, Acharya UR. AdaRes: A deep learning-based model for ultrasound image denoising: Results of image quality metrics, radiomics, artificial intelligence, and clinical studies. J Clin Ultrasound 2024; 52:131-143. PMID: 37983736; DOI: 10.1002/jcu.23607.
Abstract
PURPOSE The quality of ultrasound images is degraded by speckle and Gaussian noise. This study aims to develop a deep-learning (DL)-based filter for ultrasound image denoising. METHODS A novel DL-based filter using adaptive residual (AdaRes) learning was proposed. Five image quality metrics (IQMs) and 27 radiomics features were used to evaluate the denoising results. The effect of our proposed filter, AdaRes, on four pre-trained convolutional neural network (CNN) classification models and on three radiologists was assessed. RESULTS The AdaRes filter was tested on both natural and ultrasound image databases. IQM results indicate that AdaRes removed noise at three different noise levels with the highest performance. In addition, a radiomics study proved that AdaRes did not distort tissue textures and preserved most radiomics features. AdaRes also improved CNN classification performance in different settings. Finally, AdaRes improved the mean overall performance (AUC) of three radiologists from 0.494 to 0.702 in the classification of benign and malignant lesions. CONCLUSIONS AdaRes filtered out noise on ultrasound images effectively and can be used as an auxiliary preprocessing step in computer-aided diagnosis systems. Radiologists may use it to remove unwanted noise and improve ultrasound image quality before interpretation.
Affiliation(s)
- Ali Abbasian Ardakani
- Department of Radiology Technology, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Afshin Mohammadi
- Department of Radiology, Faculty of Medicine, Urmia University of Medical Science, Urmia, Iran
- Thomas J Vogl
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt am Main, Germany
- Taha Yusuf Kuzan
- Department of Radiology, Sancaktepe Sehit Prof. Dr. Ilhan Varank Training and Research Hospital, Istanbul, Turkey
- U Rajendra Acharya
- School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Queensland, Australia
- Centre for Health Research, University of Southern Queensland, Springfield, Queensland, Australia
9. Ketola JHJ, Inkinen SI, Mäkelä T, Kaasalainen T, Peltonen JI, Kangasniemi M, Volmonen K, Kortesniemi M. Automatic chest computed tomography image noise quantification using deep learning. Phys Med 2024; 117:103186. PMID: 38042062; DOI: 10.1016/j.ejmp.2023.103186.
Abstract
PURPOSE This study aimed to develop a deep learning (DL) method for noise quantification for clinical chest computed tomography (CT) images without the need for repeated scanning or homogeneous tissue regions. METHODS A comprehensive phantom CT dataset (three dose levels, six reconstruction methods, amounting to 9240 slices) was acquired and used to train a convolutional neural network (CNN) to output an estimate of local image noise standard deviations (SD) from a single CT scan input. The CNN model consisting of seven convolutional layers was trained on the phantom image dataset representing a range of scan parameters and was tested with phantom images acquired in a variety of different scan conditions, as well as publicly available chest CT images to produce clinical noise SD maps. RESULTS Noise SD maps predicted by the CNN agreed well with the ground truth both visually and numerically in the phantom dataset (errors of < 5 HU for most scan parameter combinations). In addition, the noise SD estimates obtained from clinical chest CT images were similar to running-average based reference estimates in areas without prominent tissue interfaces. CONCLUSIONS Predicting local noise magnitudes without the need for repeated scans is feasible using DL. Our implementation trained with phantom data was successfully applied to open-source clinical data with heterogeneous tissue borders and textures. We suggest that automatic DL noise mapping from clinical patient images could be used as a tool for objective CT image quality estimation and protocol optimization.
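The repeated-scan reference that such a CNN replaces can be sketched as follows; this is a generic two-scan noise-map estimate, not the authors' phantom pipeline. Subtracting two repeats cancels anatomy, and the local SD of the difference divided by √2 estimates the per-scan noise SD.

```python
import numpy as np

def local_noise_sd(scan_a, scan_b, win=7):
    """Local noise-SD map from two repeated scans: the difference image
    cancels anatomy, and its local SD divided by sqrt(2) estimates the
    per-scan noise standard deviation (assuming independent, equal noise)."""
    diff = np.asarray(scan_a, dtype=float) - np.asarray(scan_b, dtype=float)
    r = win // 2
    padded = np.pad(diff, r, mode="reflect")
    sd = np.zeros(diff.shape)
    for i in range(diff.shape[0]):
        for j in range(diff.shape[1]):
            sd[i, j] = padded[i:i + win, j:j + win].std(ddof=1)
    return sd / np.sqrt(2.0)
```

The appeal of the paper's DL approach is precisely that it predicts such a map from a single scan, with no repeat acquisition.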
Affiliation(s)
- Juuso H J Ketola
- Radiology, HUS Diagnostic Center, University of Helsinki and Helsinki University Hospital, Finland
- Satu I Inkinen
- Radiology, HUS Diagnostic Center, University of Helsinki and Helsinki University Hospital, Finland
- Teemu Mäkelä
- Radiology, HUS Diagnostic Center, University of Helsinki and Helsinki University Hospital, Finland
- Department of Physics, University of Helsinki, P.O. Box 64, FI-00014 Helsinki, Finland
- Touko Kaasalainen
- Radiology, HUS Diagnostic Center, University of Helsinki and Helsinki University Hospital, Finland
- Juha I Peltonen
- Radiology, HUS Diagnostic Center, University of Helsinki and Helsinki University Hospital, Finland
- Marko Kangasniemi
- Radiology, HUS Diagnostic Center, University of Helsinki and Helsinki University Hospital, Finland
- Kirsi Volmonen
- Radiology, HUS Diagnostic Center, University of Helsinki and Helsinki University Hospital, Finland
- Mika Kortesniemi
- Radiology, HUS Diagnostic Center, University of Helsinki and Helsinki University Hospital, Finland
10. Vora N, Polleys CM, Sakellariou F, Georgalis G, Thieu HT, Genega EM, Jahanseir N, Patra A, Miller E, Georgakoudi I. Restoration of metabolic functional metrics from label-free, two-photon human tissue images using multiscale deep-learning-based denoising algorithms. J Biomed Opt 2023; 28:126006. PMID: 38144697; PMCID: PMC10742979; DOI: 10.1117/1.jbo.28.12.126006.
Abstract
Significance: Label-free, two-photon excited fluorescence (TPEF) imaging captures morphological and functional metabolic tissue changes and enables enhanced understanding of numerous diseases. However, noise and other artifacts present in these images severely complicate the extraction of biologically useful information. Aim: We aim to employ deep neural architectures in the synthesis of a multiscale denoising algorithm optimized for restoring metrics of metabolic activity from low-signal-to-noise-ratio (SNR) TPEF images. Approach: TPEF images of reduced nicotinamide adenine dinucleotide (phosphate) (NAD(P)H) and flavoproteins (FAD) from freshly excised human cervical tissues are used to assess the impact of various denoising models, preprocessing methods, and data on metrics of image quality and on the recovery of six metrics of metabolic function relative to ground truth images. Results: Optimized recovery of the redox ratio and mitochondrial organization is achieved using a novel algorithm based on deep denoising in the wavelet transform domain. This algorithm also leads to significant improvements in peak SNR (PSNR) and structural similarity index measure (SSIM) for all images. Interestingly, other models yield even higher PSNR and SSIM improvements, but they are not optimal for recovery of metabolic function metrics. Conclusions: Denoising algorithms can recover diagnostically useful information from low-SNR label-free TPEF images and will be useful for the clinical translation of such imaging.
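The redox ratio recovered above is commonly defined as FAD / (NAD(P)H + FAD); a pixel-wise sketch of that common definition (the exact convention used in a given study may differ):

```python
import numpy as np

def redox_ratio(fad, nadh, eps=1e-12):
    """Pixel-wise optical redox ratio, FAD / (NAD(P)H + FAD), with a small
    epsilon guarding against division by zero in empty pixels."""
    fad = np.asarray(fad, dtype=float)
    nadh = np.asarray(nadh, dtype=float)
    return fad / (fad + nadh + eps)
```

Because the ratio is computed per pixel from two intensity channels, noise in either channel propagates directly into the metric, which is why denoising quality is judged here by metric recovery rather than PSNR alone.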
Affiliation(s)
- Nilay Vora
- Tufts University, Department of Biomedical Engineering, Medford, Massachusetts, United States
- Christopher M. Polleys
- Tufts University, Department of Biomedical Engineering, Medford, Massachusetts, United States
- Georgios Georgalis
- Tufts University, Data Intensive Studies Center, Medford, Massachusetts, United States
- Hong-Thao Thieu
- Tufts University School of Medicine, Tufts Medical Center, Department of Obstetrics and Gynecology, Boston, Massachusetts, United States
- Elizabeth M. Genega
- Tufts University School of Medicine, Tufts Medical Center, Department of Pathology and Laboratory Medicine, Boston, Massachusetts, United States
- Narges Jahanseir
- Tufts University School of Medicine, Tufts Medical Center, Department of Pathology and Laboratory Medicine, Boston, Massachusetts, United States
- Abani Patra
- Tufts University, Data Intensive Studies Center, Medford, Massachusetts, United States
- Tufts University, Department of Mathematics, Medford, Massachusetts, United States
- Eric Miller
- Tufts University, Department of Electrical and Computer Engineering, Medford, Massachusetts, United States
- Tufts University, Tufts Institute for Artificial Intelligence, Medford, Massachusetts, United States
- Irene Georgakoudi
- Tufts University, Department of Biomedical Engineering, Medford, Massachusetts, United States
11. Yuan N, Wang L, Ye C, Deng Z, Zhang J, Zhu Y. Self-supervised structural similarity-based convolutional neural network for cardiac diffusion tensor image denoising. Med Phys 2023; 50:6137-6150. PMID: 36775901; DOI: 10.1002/mp.16301.
Abstract
BACKGROUND Diffusion tensor imaging (DTI) is a promising technique for non-invasively investigating the myocardial fiber structure of the human heart. However, a low signal-to-noise ratio (SNR) remains a major limitation of cardiac DTI, preventing accurate detection of myocardial structure. It is therefore important to remove the effect of noise on diffusion-weighted (DW) images. PURPOSE Although conventional and deep learning-based denoising methods have shown potential to deal effectively with noise in DW images, most of them depend on redundant information or require noise-free images as a gold standard. In addition, existing DW image denoising methods often suffer from over-smoothing. To address these issues, we propose a self-supervised learning model, the structural similarity-based convolutional neural network with edge-weighted loss (SSECNN), to remove noise effectively in cardiac DTI. METHODS Considering that DW images acquired along different diffusion directions have structural similarity, and that the noise in these images is independent and identically distributed, a structural similarity-based matching algorithm is proposed to search for the most similar DW images. Such similar noisy DW image pairs are then used as the input and target of the denoising network SSECNN, which consists of several convolutional and residual blocks. Through self-supervised training with these image pairs, the network can restore clean DW images and retain the correlations between the denoised DW images along different directions. To avoid over-smoothing, we design a novel edge-weighted loss that enables the network to adjust the loss weights adaptively with iterations and thereby improve the detail-preserving ability of the model. To verify the superiority of the proposed method, comparisons with state-of-the-art (SOTA) denoising methods were performed on both synthetic and acquired DTI datasets.
RESULTS Experimental results show that SSECNN can effectively reduce the noise in DW images while preserving detailed texture and edge information, and therefore achieves better performance in DTI reconstruction. For the synthetic dataset, compared with the SOTA method, the root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM) between the denoised and noise-free DW images improved by 6.94%, 1.98%, and 0.76%, respectively, at a noise level of 10%. For the acquired cardiac DTI dataset, SSECNN significantly improved the SNR and contrast-to-noise ratio (CNR) of cardiac DW images and achieved more regular helix angle (HA) and transverse angle (TA) maps. Ablation results validate that using the structural similarity-based method to search for similar DW image pairs yields the smallest loss, and that, with the help of the edge-weighted loss, the denoised DW images and diffusion metric maps preserve more details. CONCLUSIONS The proposed SSECNN method can fully explore the similarity between DW images along different diffusion directions. Using such similarity together with an edge-weighted loss enables us to denoise cardiac DTI effectively in a self-supervised manner. Our method overcomes the redundant-information dependence and over-smoothing problems of SOTA methods.
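The structural similarity-based pairing step can be sketched as follows. This is an illustrative single-window SSIM and a brute-force nearest-sibling search, not the authors' network or their exact matching procedure:

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM computed over whole images; coarse, but enough
    to rank similarity between diffusion-weighted (DW) images."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def most_similar_pair(stack):
    """For each DW image, the index of its most SSIM-similar sibling.
    Such noisy pairs can serve as input/target for self-supervised
    (Noise2Noise-style) denoising, since their noise is independent."""
    best = []
    for i in range(len(stack)):
        scores = [ssim_global(stack[i], stack[j]) if j != i else -np.inf
                  for j in range(len(stack))]
        best.append(int(np.argmax(scores)))
    return best
```

Training a network to map one noisy image to its most similar sibling cannot reward reproducing the noise (which differs between the pair), so the network converges toward the shared clean structure.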
Collapse
Affiliation(s)
- Nannan Yuan
- Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, China
| | - Lihui Wang
- Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, China
| | - Chen Ye
- Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, China
| | - Zeyu Deng
- Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, China
| | - Jian Zhang
- Key Laboratory of Intelligent Medical Image Analysis and Precise Diagnosis of Guizhou Province, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, China
| | - Yuemin Zhu
- Univ Lyon, INSA Lyon, CNRS, Inserm, CREATIS UMR 5220, U1206, Lyon, France
| |
Collapse
|
12
|
Xin L, Zhuo W, Liu H, Xie T. Guided block matching and 4-D transform domain filter projection denoising method for dynamic PET image reconstruction. EJNMMI Phys 2023; 10:59. [PMID: 37747587 PMCID: PMC10519923 DOI: 10.1186/s40658-023-00580-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2023] [Accepted: 09/15/2023] [Indexed: 09/26/2023] Open
Abstract
PURPOSE Dynamic PET is an essential tool in oncology due to its ability to visualize and quantify radiotracer uptake, which has the potential to improve imaging quality. However, image noise caused by a low photon count in dynamic PET is more significant than in static PET. This study aims to develop a novel denoising method, namely the Guided Block Matching and 4-D Transform Domain Filter (GBM4D) projection, to enhance dynamic PET image reconstruction. METHODS The sinogram was first transformed using the Anscombe method, then denoised using a combination of hard thresholding and Wiener filtering. Each denoising step involved guided block matching and grouping, collaborative filtering, and weighted averaging. The guided block matching was performed on accumulated PET sinograms to prevent mismatching due to low photon counts. The performance of the proposed denoising method (GBM4D) was compared to other methods such as wavelet, total variation, non-local means, and BM3D using computer simulations on the Shepp-Logan and digital brain phantoms. The denoising methods were also applied to real patient data for evaluation. RESULTS In all phantom studies, GBM4D outperformed other denoising methods in all time frames based on the structural similarity and peak signal-to-noise ratio. Moreover, GBM4D yielded the lowest root mean square error in the time-activity curve of all tissues and produced the highest image quality when applied to real patient data. CONCLUSION GBM4D demonstrates excellent denoising and edge-preserving capabilities, as validated through qualitative and quantitative assessments of both temporal and spatial denoising performance.
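The Anscombe step named in the METHODS is a standard variance-stabilizing transform for Poisson-distributed counts. A minimal NumPy sketch follows; note it uses the simple algebraic inverse, whereas practical pipelines often use an exact unbiased inverse:

```python
import numpy as np

def anscombe(x):
    # Maps Poisson counts to data with approximately unit Gaussian variance,
    # so Gaussian-noise filters (hard thresholding, Wiener) can be applied.
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y):
    # Simple algebraic inverse of the forward transform
    # (the exact unbiased inverse differs slightly at low counts).
    return (y / 2.0) ** 2 - 3.0 / 8.0
```

After denoising in the transformed domain, the inverse maps the sinogram back to the count domain for reconstruction.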
Collapse
Affiliation(s)
- Lin Xin
- Institute of Radiation Medicine, Fudan University, 2094 Xietu Road, Shanghai, 200032, China
| | - Weihai Zhuo
- Institute of Radiation Medicine, Fudan University, 2094 Xietu Road, Shanghai, 200032, China
| | - Haikuan Liu
- Institute of Radiation Medicine, Fudan University, 2094 Xietu Road, Shanghai, 200032, China
| | - Tianwu Xie
- Institute of Radiation Medicine, Fudan University, 2094 Xietu Road, Shanghai, 200032, China.
| |
Collapse
|
13
|
Mayfield JD, Bailey K, Borkowski AA, Viswanadhan N. Pilot Lightweight Denoising Algorithm for Multiple Sclerosis on Spine MRI. J Digit Imaging 2023; 36:1877-1884. [PMID: 37069452 PMCID: PMC10406747 DOI: 10.1007/s10278-023-00816-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2022] [Revised: 03/11/2023] [Accepted: 03/13/2023] [Indexed: 04/19/2023] Open
Abstract
Multiple sclerosis (MS) is a severely debilitating disease which requires accurate and timely diagnosis. MRI is the primary diagnostic vehicle; however, it is susceptible to noise and artifact, which can limit diagnostic accuracy. A myriad of denoising algorithms have been developed over the years for medical imaging, yet the models continue to become more complex. We developed a lightweight algorithm which exploits the image's inherent noise via dictionary learning to improve image quality without high computational complexity or pretraining, through a process known as orthogonal matching pursuit (OMP). Our algorithm is compared to existing traditional denoising algorithms to evaluate performance on real noise that would commonly be encountered in a clinical setting. Fifty patients with a history of MS who received 1.5 T MRI of the spine between 2018 and 2022 were retrospectively identified in accordance with local IRB policies. Native-resolution 5 mm sagittal images were selected from T2-weighted sequences for evaluation using various denoising techniques, including our proposed OMP denoising algorithm. Peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) were measured. While wavelet denoising demonstrated an expectedly higher PSNR than other models, its SSIM was variable and consistently underperformed its comparators (0.94 ± 0.10). Our pilot OMP denoising algorithm provided superior performance with greater consistency in terms of SSIM (0.99 ± 0.01), with PSNR similar to non-local means filtering (NLM), both of which were superior to the other comparators (OMP 37.6 ± 2.2, NLM 38.0 ± 1.8). The superior performance of our OMP denoising algorithm in comparison to traditional models is promising for clinical utility. Given its individualized and lightweight approach, it may be more easily incorporated into PACS.
It is our hope that this technology will provide improved diagnostic accuracy and workflow optimization for Neurologists and Radiologists, as well as improved patient outcomes.
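As a rough illustration of the sparse-coding step behind OMP denoising (a generic greedy OMP over a fixed dictionary, not the authors' exact pipeline, and with illustrative function names):

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy orthogonal matching pursuit: approximate signal y as a
    sparse combination of dictionary atoms (columns of D, unit norm)."""
    residual = y.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Re-fit all selected atoms jointly (least squares on the support).
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = y - D[:, support] @ sol
    return coef
```

In denoising, image patches are sparse-coded against a dictionary learned from the noisy image itself; reconstructing each patch from its few selected atoms suppresses noise that the dictionary cannot represent.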
Collapse
Affiliation(s)
- John D Mayfield
- USF Health Department of Radiology, 2 Tampa General Circle, STC 6103, 33612, Tampa, FL, USA.
| | - Katie Bailey
- Department of Radiology, James A. Haley VA Medical Center, Tampa, FL, USA
| | - Andrew A Borkowski
- Artificial Intelligence Service, AI Center Lead, USF Morsani College of Medicine, National Artificial Intelligence Institute, James A. Haley Veterans' Hospital, Tampa, FL, USA
| | | |
Collapse
|
14
|
Houhou R, Quansah E, Meyer-Zedler T, Schmitt M, Hoffmann F, Guntinas-Lichius O, Popp J, Bocklitz T. Comparison of denoising tools for the reconstruction of nonlinear multimodal images. BIOMEDICAL OPTICS EXPRESS 2023; 14:3259-3278. [PMID: 37497515 PMCID: PMC10368050 DOI: 10.1364/boe.477384] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/28/2022] [Revised: 04/06/2023] [Accepted: 04/11/2023] [Indexed: 07/28/2023]
Abstract
Biophotonic multimodal imaging techniques provide deep insights into biological samples such as cells or tissues. However, the measurement time increases dramatically when high-resolution multimodal (MM) images are required. To address this challenge, mathematical methods can be used to shorten the acquisition time for such high-quality images. In this research, we compared standard methods, e.g., the median filter and phase retrieval via the Gerchberg-Saxton algorithm, with artificial intelligence (AI) based methods using MM images of head and neck tissues. The AI methods include two approaches: the first is a transfer-learning-based technique that uses the pre-trained network DnCNN; the second is the training of networks using augmented head and neck MM images. In this manner, we compared the Noise2Noise network, the MIRNet network, and our own deep learning network, incSRCNN, which is derived from the super-resolution convolutional neural network and inspired by the Inception network. These methods reconstruct improved images from measured low-quality (LQ) images, acquired in approximately 2 seconds. The evaluation was performed on artificial LQ images generated by degrading high-quality (HQ) images, measured in 8 seconds, with Poisson noise. The results showed the potential of using deep learning on these multimodal images to improve the data quality and reduce the acquisition time. Our proposed network has the advantage of a simple architecture compared with the similarly performing but highly parametrized networks DnCNN, MIRNet, and Noise2Noise.
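The evaluation protocol described above, degrading HQ images with Poisson noise to simulate short acquisitions and scoring the result, can be sketched as follows. This is a generic illustration under assumed parameter names (`photons`, `data_range`), not the paper's exact degradation model:

```python
import numpy as np

def degrade_poisson(hq, photons=50, seed=0):
    """Simulate a low-quality acquisition: scale the HQ image to an
    expected photon count per pixel and draw Poisson-distributed counts."""
    rng = np.random.default_rng(seed)
    hq = np.clip(hq, 0.0, 1.0)
    return rng.poisson(hq * photons) / photons

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB against a reference image."""
    mse = np.mean((ref - img) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range**2 / mse)
```

Lowering `photons` mimics shorter measurement times: the relative shot noise grows, and the PSNR against the HQ reference drops accordingly.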
Collapse
Affiliation(s)
- Rola Houhou
- Institute of Physical Chemistry and Abbe Center of Photonics, Friedrich Schiller University, Helmholtzweg 4, 07743 Jena, Germany
- Leibniz Institute of Photonic Technology (Member of Leibniz Health Technologies), Albert-Einstein-Straße 9, 07745 Jena, Germany
| | - Elsie Quansah
- Institute of Physical Chemistry and Abbe Center of Photonics, Friedrich Schiller University, Helmholtzweg 4, 07743 Jena, Germany
- Leibniz Institute of Photonic Technology (Member of Leibniz Health Technologies), Albert-Einstein-Straße 9, 07745 Jena, Germany
| | - Tobias Meyer-Zedler
- Institute of Physical Chemistry and Abbe Center of Photonics, Friedrich Schiller University, Helmholtzweg 4, 07743 Jena, Germany
- Leibniz Institute of Photonic Technology (Member of Leibniz Health Technologies), Albert-Einstein-Straße 9, 07745 Jena, Germany
| | - Michael Schmitt
- Institute of Physical Chemistry and Abbe Center of Photonics, Friedrich Schiller University, Helmholtzweg 4, 07743 Jena, Germany
| | - Franziska Hoffmann
- Department of Otorhinolaryngology, Institute of Phoniatry/Pedaudiology, Jena University Hospital, Jena, Germany
| | - Orlando Guntinas-Lichius
- Department of Otorhinolaryngology, Institute of Phoniatry/Pedaudiology, Jena University Hospital, Jena, Germany
| | - Jürgen Popp
- Institute of Physical Chemistry and Abbe Center of Photonics, Friedrich Schiller University, Helmholtzweg 4, 07743 Jena, Germany
- Leibniz Institute of Photonic Technology (Member of Leibniz Health Technologies), Albert-Einstein-Straße 9, 07745 Jena, Germany
| | - Thomas Bocklitz
- Institute of Physical Chemistry and Abbe Center of Photonics, Friedrich Schiller University, Helmholtzweg 4, 07743 Jena, Germany
- Leibniz Institute of Photonic Technology (Member of Leibniz Health Technologies), Albert-Einstein-Straße 9, 07745 Jena, Germany
- Institute of Computer Science, Faculty of Mathematics, Physics and Computer Science, University Bayreuth, Universitaetsstraße 30, 95447 Bayreuth, Germany
| |
Collapse
|
15
|
Zhang J, Zhu Y, Yu W, Ma J. Considering Image Information and Self-Similarity: A Compositional Denoising Network. SENSORS (BASEL, SWITZERLAND) 2023; 23:5915. [PMID: 37447765 PMCID: PMC10347252 DOI: 10.3390/s23135915] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/16/2023] [Revised: 06/15/2023] [Accepted: 06/21/2023] [Indexed: 07/15/2023]
Abstract
Recently, convolutional neural networks (CNNs) have been widely used in image denoising, and their performance has been enhanced through residual learning. However, previous research mostly focused on optimizing the network architecture of CNNs, ignoring the limitations of the commonly used residual learning. This paper identifies two of its limitations: the neglect of image information and the lack of effective consideration of image self-similarity. To address them, this paper proposes a compositional denoising network (CDN), which contains two sub-paths, the image information path (IIP) and the noise estimation path (NEP). IIP is trained via an image-to-image method to extract image information. NEP exploits image self-similarity from the perspective of training: this similarity-based training method constrains NEP to output similar estimated noise distributions for different image patches corrupted by the same kind of noise. Finally, image information and noise distribution information are considered jointly for image denoising. Experimental results indicate that CDN outperforms other CNN-based methods in both synthetic and real-world image denoising, achieving state-of-the-art performance.
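For context, the residual learning that this paper builds on trains the network to predict the noise map rather than the clean image. A minimal sketch of that target construction (generic, with illustrative names; not CDN itself):

```python
import numpy as np

def residual_targets(noisy, clean):
    """In residual learning the regression target is the noise itself,
    i.e. noisy - clean, rather than the clean image."""
    return noisy - clean

def denoise_from_residual(noisy, predicted_noise):
    """The denoised image is recovered by subtracting the network's
    predicted noise map from the noisy input."""
    return noisy - predicted_noise
```

CDN's critique is that this formulation, used alone, discards image content and ignores self-similarity; hence its separate image-information path alongside the noise-estimation path.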
Collapse
Affiliation(s)
- Jiahong Zhang
- The State Key Laboratory of Media Convergence and Communication, Communication University of China, Beijing 100024, China;
| | - Yonggui Zhu
- The School of Data Science and Media Intelligence, Communication University of China, Beijing 100024, China;
| | - Wenshu Yu
- School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu 610054, China
| | - Jingning Ma
- The School of Data Science and Media Intelligence, Communication University of China, Beijing 100024, China;
| |
Collapse
|
16
|
Atal DK. Optimal Deep CNN-Based Vectorial Variation Filter for Medical Image Denoising. J Digit Imaging 2023; 36:1216-1236. [PMID: 36650303 PMCID: PMC10287890 DOI: 10.1007/s10278-022-00768-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2022] [Revised: 12/22/2022] [Accepted: 12/23/2022] [Indexed: 01/19/2023] Open
Abstract
Medical imaging has attracted increasing attention due to emerging wireless technologies, the internet, and data storage. These technologies have gained traction in medicine and the medical sciences, facilitating the effective diagnosis and treatment of different diseases. However, medical images are vulnerable to noise, which can make the image unclear and complicate interpretation. Denoising of medical images is therefore imperative for their processing. This paper devises a novel optimal deep convolutional neural network-based vectorial variation (ODVV) filter for denoising medical computed tomography (CT) images and Lena images. The input medical images are fed to a noisy-pixel-map identification module, wherein a deep convolutional neural network (Deep CNN) is adopted to identify noisy pixel maps; Deep CNN training is done with the Adam algorithm. Once noisy pixels are identified, they are passed to the noise removal module, which uses the proposed optimization algorithm, Feedback Artificial Lion (FAL), devised by combining the FAT and Lion algorithms. After noise removal, pixel enhancement is performed using the vectorial total variation norm to obtain the final pixel-enhanced image. The proposed FAL algorithm offered enhanced performance in contrast to other techniques, with the highest peak signal-to-noise ratio (PSNR) of 24.149 dB, the highest second-derivative-like measure of enhancement (SDME) of 32.142 dB, the highest structural similarity index (SSIM) of 0.800, and an edge preservation index (EPI) of 0.9267.
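The total variation norm used in the enhancement step can be illustrated for a single-channel image; the vectorial version referenced in the abstract extends the same idea by coupling the gradient magnitudes across channels. A minimal sketch, with an illustrative function name:

```python
import numpy as np

def total_variation(img):
    """Isotropic total variation of a 2-D image: the sum of per-pixel
    gradient magnitudes, computed with forward differences (replicated
    at the boundary so the output has the same size as the input)."""
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    return float(np.sum(np.sqrt(dx**2 + dy**2)))
```

Minimizing this norm (subject to data fidelity) flattens noisy oscillations while keeping sharp edges, which is why TV terms appear in pixel-enhancement stages like the one described here.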
Collapse
Affiliation(s)
- Dinesh Kumar Atal
- Dept. of Biomedical Engineering, Deenbandhu Chhotu Ram University of Science and Technology, Sonipat, Haryana, 131039, India.
| |
Collapse
|
17
|
Endocrine Tumor Classification via Machine-Learning-Based Elastography: A Systematic Scoping Review. Cancers (Basel) 2023; 15:cancers15030837. [PMID: 36765794 PMCID: PMC9913672 DOI: 10.3390/cancers15030837] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2022] [Revised: 01/26/2023] [Accepted: 01/27/2023] [Indexed: 01/31/2023] Open
Abstract
Elastography complements traditional medical imaging modalities by mapping tissue stiffness to identify tumors in the endocrine system, and machine learning models can further improve diagnostic accuracy and reliability. Our objective in this review was to summarize the applications and performance of machine-learning-based elastography in the classification of endocrine tumors. Two authors independently searched electronic databases, including PubMed, Scopus, Web of Science, IEEE Xplore, CINAHL, and EMBASE. Eleven (n = 11) articles were eligible for the review, of which eight (n = 8) focused on thyroid tumors and three (n = 3) considered pancreatic tumors. In all thyroid studies, the researchers used shear-wave ultrasound elastography, whereas the pancreas researchers applied strain elastography with endoscopy. Traditional machine learning approaches or deep feature extractors were used to extract the predetermined features, followed by classifiers. The applied deep learning approaches included the convolutional neural network (CNN) and the multilayer perceptron (MLP). Some researchers considered mixed or sequential training on B-mode and elastographic ultrasound data, or fused data from different image segmentation techniques in machine learning models. All reviewed methods achieved an accuracy of ≥80%, but only three were ≥90% accurate. The most accurate thyroid classification (94.70%) was achieved by a sequentially trained CNN; the most accurate pancreas classification (98.26%) was achieved using a CNN-long short-term memory (LSTM) model integrating elastography with B-mode and Doppler images.
Collapse
|
18
|
Göreke V. A novel method based on Wiener filter for denoising Poisson noise from medical X-Ray images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104031] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
19
|
Hatt M, Krizsan AK, Rahmim A, Bradshaw TJ, Costa PF, Forgacs A, Seifert R, Zwanenburg A, El Naqa I, Kinahan PE, Tixier F, Jha AK, Visvikis D. Joint EANM/SNMMI guideline on radiomics in nuclear medicine : Jointly supported by the EANM Physics Committee and the SNMMI Physics, Instrumentation and Data Sciences Council. Eur J Nucl Med Mol Imaging 2023; 50:352-375. [PMID: 36326868 PMCID: PMC9816255 DOI: 10.1007/s00259-022-06001-6] [Citation(s) in RCA: 40] [Impact Index Per Article: 40.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2022] [Accepted: 10/09/2022] [Indexed: 11/05/2022]
Abstract
PURPOSE The purpose of this guideline is to provide comprehensive information on best practices for robust radiomics analyses for both hand-crafted and deep learning-based approaches. METHODS In a cooperative effort between the EANM and SNMMI, we agreed upon current best practices and recommendations for relevant aspects of radiomics analyses, including study design, quality assurance, data collection, impact of acquisition and reconstruction, detection and segmentation, feature standardization and implementation, as well as appropriate modelling schemes, model evaluation, and interpretation. We also offer an outlook for future perspectives. CONCLUSION Radiomics is a very quickly evolving field of research. The present guideline focused on established findings as well as recommendations based on the state of the art. Though this guideline recognizes both hand-crafted and deep learning-based radiomics approaches, it primarily focuses on the former as this field is more mature. This guideline will be updated once more studies and results have contributed to improved consensus regarding the application of deep learning methods for radiomics. Although methodological recommendations in the present document are valid for most medical image modalities, we focus here on nuclear medicine, and specific recommendations when necessary are made for PET/CT, PET/MR, and quantitative SPECT.
Collapse
Affiliation(s)
- M Hatt
- LaTIM, INSERM, UMR 1101, Univ Brest, Brest, France
| | | | - A Rahmim
- Departments of Radiology and Physics, University of British Columbia, Vancouver, BC, Canada
| | - T J Bradshaw
- Department of Radiology, University of Wisconsin, Madison, WI, USA
| | - P F Costa
- Department of Nuclear Medicine, West German Cancer Center, University of Duisburg-Essen and German Cancer Consortium (DKTK)-University Hospital Essen, Essen, Germany
| | | | - R Seifert
- Department of Nuclear Medicine, West German Cancer Center, University of Duisburg-Essen and German Cancer Consortium (DKTK)-University Hospital Essen, Essen, Germany.
- Department of Nuclear Medicine, Münster University Hospital, Münster, Germany.
| | - A Zwanenburg
- OncoRay-National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany
- National Center for Tumor Diseases (NCT/UCC), Dresden, Germany
- German Cancer Research Center (DKFZ), Heidelberg, Germany
| | - I El Naqa
- Department of Machine Learning, Moffitt Cancer Center, Tampa, FL, 33626, USA
| | - P E Kinahan
- Imaging Research Laboratory, PET/CT Physics, Department of Radiology, UW Medical Center, University of Washington, Seattle, WA, USA
| | - F Tixier
- LaTIM, INSERM, UMR 1101, Univ Brest, Brest, France
| | - A K Jha
- McKelvey School of Engineering and Mallinckrodt Institute of Radiology, Washington University in St. Louis, Saint Louis, MO, USA
| | - D Visvikis
- LaTIM, INSERM, UMR 1101, Univ Brest, Brest, France
| |
Collapse
|
20
|
Chicco D, Shiradkar R. Ten quick tips for computational analysis of medical images. PLoS Comput Biol 2023; 19:e1010778. [PMID: 36602952 PMCID: PMC9815662 DOI: 10.1371/journal.pcbi.1010778] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023] Open
Abstract
Medical imaging is a great asset for modern medicine, since it allows physicians to spatially interrogate a disease site, enabling precise intervention for diagnosis and treatment, and to observe particular aspects of patients' conditions that would otherwise not be noticeable. Computational analysis of medical images, moreover, can reveal disease patterns and correlations among cohorts of patients with the same disease, thus suggesting common causes or providing useful information for better therapies and cures. Machine learning and deep learning applied to medical images, in particular, have produced new, unprecedented results that can pave the way to advanced frontiers of medical discovery. While computational analysis of medical images has become easier, however, so has the possibility of making mistakes or generating inflated or misleading results, hindering reproducibility and deployment. In this article, we provide ten quick tips for performing computational analysis of medical images while avoiding common mistakes and pitfalls that we have noticed in multiple studies in the past. We believe our ten guidelines, if put into practice, can help the computational medical imaging community perform better scientific research that can ultimately have a positive impact on the lives of patients worldwide.
Collapse
Affiliation(s)
- Davide Chicco
- Institute of Health Policy Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
| | - Rakesh Shiradkar
- Department of Biomedical Engineering, Emory University, Atlanta, Georgia, United States of America
| |
Collapse
|
21
|
Abstract
Applying computational statistics or machine learning methods to data is a key component of many scientific studies, in any field, but alone might not be sufficient to generate robust and reliable outcomes and results. Before applying any discovery method, preprocessing steps are necessary to prepare the data for computational analysis. In this framework, data cleaning and feature engineering are key pillars of any scientific study involving data analysis, and they should be adequately designed and performed from the first phases of the project. We call a "feature" a variable describing a particular trait of a person or an observation, usually recorded as a column in a dataset. Even though they are pivotal, these data cleaning and feature engineering steps are sometimes done poorly or inefficiently, especially by beginners and inexperienced researchers. For this reason, we propose here our quick tips for carrying out these important preprocessing steps correctly, avoiding common mistakes and pitfalls. Although we designed these guidelines with bioinformatics and health informatics scenarios in mind, we believe they can be applied more generally to any scientific area. We therefore target these guidelines at any researcher or practitioner wanting to perform data cleaning or feature engineering. We believe our simple recommendations can help researchers and scholars perform better computational analyses that can lead, in turn, to more solid outcomes and more reliable discoveries.
Collapse
Affiliation(s)
- Davide Chicco
- Institute of Health Policy Management and Evaluation, University of Toronto, Toronto, Ontario, Canada
- * E-mail:
| | - Luca Oneto
- Dipartimento di Informatica Bioingegneria Robotica e Ingegneria dei Sistemi, Università di Genova, Genoa, Italy
- ZenaByte S.r.l., Genoa, Italy
| | - Erica Tavazzi
- Dipartimento di Ingegneria dell’Informazione, Università di Padova, Padua, Italy
| |
Collapse
|
22
|
Endorectal ultrasound radiomics in locally advanced rectal cancer patients: despeckling and radiotherapy response prediction using machine learning. ABDOMINAL RADIOLOGY (NEW YORK) 2022; 47:3645-3659. [PMID: 35951085 DOI: 10.1007/s00261-022-03625-y] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/13/2022] [Revised: 07/13/2022] [Accepted: 07/15/2022] [Indexed: 01/18/2023]
Abstract
PURPOSE The current study aimed to evaluate the association of endorectal ultrasound (EUS) radiomics features under different denoising filters, based on machine learning algorithms, and to predict radiotherapy response in locally advanced rectal cancer (LARC) patients. METHODS The EUS images of forty-three LARC patients were investigated as a predictive biomarker of the treatment response to neoadjuvant chemoradiotherapy (NCRT). For despeckling, the EUS images were preprocessed with traditional filters (bilateral, Wiener, Lee, Frost, median, and wavelet filters). The rectal tumors were delineated by two readers separately, and radiomics features were extracted. The least absolute shrinkage and selection operator (LASSO) was used for feature selection. Classifiers including logistic regression (LR), K-nearest neighbors (KNN), support vector machine (SVM), random forest, naive Bayes, and decision tree were trained using stratified fivefold cross-validation for model development. The area under the receiver operating characteristic curve (AUC), followed by accuracy, precision, sensitivity, and specificity, was used to assess model performance. RESULTS The wavelet filter gave the best results, with mean AUC 0.83, accuracy 77.41%, precision 82.15%, and sensitivity 79.41%. LR and SVM showed the best model performance, with AUCs of 0.71 and 0.76, accuracies of 70.0% and 71.5%, precisions of 75.0% and 73.0%, sensitivities of 69.8% and 80.2%, and specificities of 70.0% and 60.9%, respectively. CONCLUSION This study demonstrated that the EUS-based radiomics model could serve as a pretreatment biomarker for predicting pathologic features of rectal cancer. The wavelet filter and the machine learning methods LR and SVM performed well on the EUS images of rectal cancer.
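The AUC metric central to these results can be computed without any library support via the rank-sum (Mann-Whitney U) identity. A small self-contained sketch, with an illustrative function name:

```python
import numpy as np

def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U identity:
    the probability that a randomly chosen positive case receives a
    higher score than a randomly chosen negative case."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # Compare every positive score against every negative score;
    # ties contribute half a point.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

In a cross-validated pipeline like the one above, this would be evaluated on each held-out fold and the fold AUCs averaged.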
Collapse
|
23
|
Farea Shaaf Z, Mahadi Abdul Jamil M, Ambar R, Abd Wahab MH. Convolutional Neural Network for Denoising Left Ventricle Magnetic Resonance Images. COMPUTATIONAL INTELLIGENCE AND MACHINE LEARNING APPROACHES IN BIOMEDICAL ENGINEERING AND HEALTH CARE SYSTEMS 2022:1-14. [DOI: 10.2174/9781681089553122010004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/02/2023]
Abstract
Medical image processing is critical in disease detection and prediction; for example, it locates lesions and measures an organ's morphological structures. Currently, cardiac magnetic resonance imaging (CMRI) plays an essential role in cardiac motion tracking and in analyzing regional and global heart function with high accuracy and reproducibility. Cardiac MRI datasets are images taken during the heart's cardiac cycles. These datasets require expert labeling to accurately recognize features and train neural networks to predict cardiac disease. Any erroneous prediction caused by image impairment will impact patients' diagnostic decisions. As a result, image preprocessing is used, including enhancement tools such as filtering and denoising. This paper introduces a denoising algorithm that uses a convolutional neural network (CNN) to delineate left ventricle (LV) contours (endocardium and epicardium borders) from MRI images. With only a small amount of training data from the EMIDEC database, this network performs well for MRI image denoising.
Collapse
Affiliation(s)
- Zakarya Farea Shaaf
- Biomedical Engineering Modelling and Simulation Research Group, Department of Electronic Engineering, Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia
| | - Muhammad Mahadi Abdul Jamil
- Biomedical Engineering Modelling and Simulation Research Group, Department of Electronic Engineering, Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia
| | - Radzi Ambar
- Biomedical Engineering Modelling and Simulation Research Group, Department of Electronic Engineering, Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia
| | - Mohd Helmy Abd Wahab
- Biomedical Engineering Modelling and Simulation Research Group, Department of Electronic Engineering, Faculty of Electrical and Electronic Engineering, Universiti Tun Hussein Onn Malaysia, Johor, Malaysia, 86400
| |
Collapse
|
24
|
Sanabria SJ, Pirmoazen AM, Dahl J, Kamaya A, El Kaffas A. Comparative Study of Raw Ultrasound Data Representations in Deep Learning to Classify Hepatic Steatosis. ULTRASOUND IN MEDICINE & BIOLOGY 2022; 48:2060-2078. [PMID: 35914993 DOI: 10.1016/j.ultrasmedbio.2022.05.031] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/14/2021] [Revised: 05/23/2022] [Accepted: 05/24/2022] [Indexed: 06/15/2023]
Abstract
Adiposity accumulation in the liver is an early-stage indicator of non-alcoholic fatty liver disease. Analysis of ultrasound (US) backscatter echoes from liver parenchyma with deep learning (DL) may offer an affordable alternative for hepatic steatosis staging. The aim of this work was to compare DL classification scores for liver steatosis using different data representations constructed from raw US data. Steatosis in N = 31 patients with confirmed or suspected non-alcoholic fatty liver disease was stratified based on fat-fraction cutoff values using magnetic resonance imaging as a reference standard. US radiofrequency (RF) frames (raw data) and clinical B-mode images were acquired. Intermediate image formation stages were modeled from RF data. Power spectrum representations and phase representations were also calculated. Co-registered patches were used to independently train 1-, 2- and 3-D convolutional neural networks (CNNs), and classifications scores were compared with cross-validation. There were 67,800 patches available for 2-D/3-D classification and 1,830,600 patches for 1-D classification. The results were also compared with radiologist B-mode annotations and quantitative ultrasound (QUS) metrics. Patch classification scores (area under the receiver operating characteristic curve [AUROC]) revealed significant reductions along successive stages of the image formation process (p < 0.001). Patient AUROCs were 0.994 for RF data and 0.938 for clinical B-mode images. For all image formation stages, 2-D CNNs revealed higher patch and patient AUROCs than 1-D CNNs. CNNs trained with power spectrum representations converged faster than those trained with RF data. Phase information, which is usually discarded in the image formation process, provided a patient AUROC of 0.988. DL models trained with RF and power spectrum data (AUROC = 0.998) provided higher scores than conventional QUS metrics and multiparametric combinations thereof (AUROC = 0.986). 
Radiologist annotations indicated lower hepatic steatosis classification accuracy (Acc = 0.914) with respect to the magnetic resonance imaging proton density fat fraction reference than DL models (Acc = 0.989). Access to raw ultrasound data combined with artificial intelligence techniques may offer superior opportunities for quantitative tissue diagnostics compared with conventional sonographic images.
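As an aside, the power-spectrum and phase representations mentioned in this abstract can be sketched with NumPy; the frame shape and the log compression below are our illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
rf_frame = rng.standard_normal((64, 256))  # hypothetical RF frame: 64 scan lines x 256 fast-time samples

spectrum = np.fft.rfft(rf_frame, axis=1)   # per-line FFT along fast time
power = np.abs(spectrum) ** 2              # power-spectrum representation
phase = np.angle(spectrum)                 # phase representation (normally discarded in B-mode formation)

log_power = 10 * np.log10(power + 1e-12)   # log compression before feeding a CNN
print(log_power.shape, phase.shape)
```

For a real-valued line of 256 samples, `rfft` yields 129 frequency bins, so both representations here have shape (64, 129).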
Collapse
Affiliation(s)
- Sergio J Sanabria
- Department of Radiology, Stanford University, Stanford, California, USA; Deusto Institute of Technology, University of Deusto/Ikerbasque, Basque Foundation for Science, Bilbao, Spain.
| | - Amir M Pirmoazen
- Department of Radiology, Stanford University, Stanford, California, USA
| | - Jeremy Dahl
- Department of Radiology, Stanford University, Stanford, California, USA
| | - Aya Kamaya
- Department of Radiology, Stanford University, Stanford, California, USA
| | - Ahmed El Kaffas
- Department of Radiology, Stanford University, Stanford, California, USA
| |
Collapse
|
25
|
Ezhei M, Plonka G, Rabbani H. Retinal optical coherence tomography image analysis by a restricted Boltzmann machine. BIOMEDICAL OPTICS EXPRESS 2022; 13:4539-4558. [PMID: 36187262 PMCID: PMC9484437 DOI: 10.1364/boe.458753] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Revised: 06/06/2022] [Accepted: 07/07/2022] [Indexed: 06/16/2023]
Abstract
Optical coherence tomography (OCT) is an emerging imaging technique for ophthalmic disease diagnosis. Two major problems in OCT image analysis are image enhancement and image segmentation. Deep learning methods have achieved excellent performance in image analysis; however, most deep learning-based image analysis models are supervised approaches that need a high volume of training data (e.g., reference clean images for image enhancement and accurately annotated images for segmentation). Acquiring reference clean images for OCT image enhancement, and accurately annotating a high volume of OCT images for segmentation, is hard, so it is difficult to extend these deep learning methods to OCT image analysis. We propose an unsupervised learning-based approach for OCT image enhancement and abnormality segmentation, where the model can be trained without reference images. The image is reconstructed by a Restricted Boltzmann Machine (RBM) by defining and minimizing a target function. For OCT image enhancement, each image is independently learned by the RBM network and is eventually reconstructed. In the reconstruction phase, we use the ReLU function instead of the sigmoid function. Reconstruction of images by the RBM network leads to improved image contrast in comparison with other competitive methods in terms of contrast-to-noise ratio (CNR). For anomaly detection, hyper-reflective foci (HF), one of the first signs in retinal OCTs of patients with diabetic macular edema (DME), are identified based on image reconstruction by the RBM and post-processing that removes HF candidates outside the area between the first and the last retinal layers. Our anomaly detection method achieves a high ability to detect abnormalities.
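The reconstruction step described above, with ReLU replacing the sigmoid in the reconstruction phase, can be sketched as follows; the dimensions, random initialization, and deterministic pass are illustrative assumptions, and the paper's training (minimizing a target function) is not shown:

```python
import numpy as np

rng = np.random.default_rng(1)
n_vis, n_hid = 100, 32
W = 0.01 * rng.standard_normal((n_hid, n_vis))  # untrained weights, for illustration only
b_h = np.zeros(n_hid)
b_v = np.zeros(n_vis)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

v = rng.random(n_vis)                    # flattened noisy OCT patch
h = sigmoid(W @ v + b_h)                 # hidden activation (standard RBM)
v_rec = np.maximum(0.0, W.T @ h + b_v)   # reconstruction with ReLU instead of sigmoid
print(v_rec.shape)
```

Using ReLU in the reconstruction avoids squashing the output into (0, 1), which is the modification the abstract highlights.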
Collapse
Affiliation(s)
- Mansooreh Ezhei
- Medical Image & Signal Processing Research Center, Isfahan Univ. of Medical Sciences, Isfahan, 8174673461, Iran
| | - Gerlind Plonka
- Institute for Numerical and Applied Mathematics, Georg-August-University Göttingen, Göttingen, Germany
| | - Hossein Rabbani
- Medical Image & Signal Processing Research Center, Isfahan Univ. of Medical Sciences, Isfahan, 8174673461, Iran
| |
Collapse
|
26
|
Chen J, Wee L, Dekker A, Bermejo I. Improving reproducibility and performance of radiomics in low-dose CT using cycle GANs. J Appl Clin Med Phys 2022; 23:e13739. [PMID: 35906893 PMCID: PMC9588275 DOI: 10.1002/acm2.13739] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Revised: 05/29/2022] [Accepted: 07/11/2022] [Indexed: 11/09/2022] Open
Abstract
Background As a means to extract biomarkers from medical imaging, radiomics has attracted increased attention from researchers. However, the reproducibility and performance of radiomics in low-dose CT scans are still poor, mostly due to noise. Deep learning generative models can be used to denoise these images and in turn improve radiomics' reproducibility and performance. However, most generative models are trained on paired data, which can be difficult or impossible to collect. Purpose In this article, we investigate the possibility of denoising low-dose CTs using cycle generative adversarial networks (GANs) to improve radiomics reproducibility and performance based on unpaired datasets. Methods and materials Two cycle GANs were trained: (1) from paired data, by simulating low-dose CTs (i.e., introducing noise) from high-dose CTs, and (2) from unpaired real low-dose CTs. To accelerate convergence, a slice-paired training strategy was introduced during GAN training. The trained GANs were applied to three scenarios: (1) improving radiomics reproducibility in simulated low-dose CT images, (2) improving radiomics reproducibility in same-day repeat low-dose CTs (RIDER dataset), and (3) improving radiomics performance in survival prediction. Cycle GAN results were compared with a conditional GAN (CGAN) and an encoder-decoder network (EDN) trained on simulated paired data. Results The cycle GAN trained on simulated data improved the concordance correlation coefficients (CCC) of radiomic features from 0.87 (95%CI, [0.833,0.901]) to 0.93 (95%CI, [0.916,0.949]) on simulated noise CT and from 0.89 (95%CI, [0.881,0.914]) to 0.92 (95%CI, [0.908,0.937]) on the RIDER dataset, as well as improving the area under the receiver operating characteristic curve (AUC) of survival prediction from 0.52 (95%CI, [0.511,0.538]) to 0.59 (95%CI, [0.578,0.602]). The cycle GAN trained on real data increased the CCCs of features in RIDER to 0.95 (95%CI, [0.933,0.961]) and the AUC of survival prediction to 0.58 (95%CI, [0.576,0.596]).
Conclusion The results show that cycle GANs trained on both simulated and real data can improve radiomics’ reproducibility and performance in low‐dose CT and achieve similar results compared to CGANs and EDNs.
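The concordance correlation coefficient used here to quantify feature reproducibility can be computed directly from Lin's definition; the `ccc` helper below is our own minimal sketch, not the study's implementation:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between two feature measurements."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances, as in Lin's definition
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

a = np.array([1.0, 2.0, 3.0, 4.0])
print(ccc(a, a))        # identical repeat measurements -> 1.0
```

Unlike Pearson correlation, CCC penalizes both scale and location shifts, so a constant offset between repeat scans lowers the score even when the features are perfectly correlated.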
Collapse
Affiliation(s)
- Junhua Chen
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, ET, Netherlands
| | - Leonard Wee
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, ET, Netherlands
| | - Andre Dekker
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, ET, Netherlands
| | - Inigo Bermejo
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, ET, Netherlands
| |
Collapse
|
27
|
Cascarano P, Franchini G, Kobler E, Porta F, Sebastiani A. Constrained and unconstrained deep image prior optimization models with automatic regularization. COMPUTATIONAL OPTIMIZATION AND APPLICATIONS 2022; 84:125-149. [PMID: 35909881 PMCID: PMC9326425 DOI: 10.1007/s10589-022-00392-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Accepted: 06/17/2022] [Indexed: 06/15/2023]
Abstract
Deep Image Prior (DIP) is currently among the most efficient unsupervised deep learning based methods for ill-posed inverse problems in imaging. This novel framework relies on the implicit regularization provided by representing images as the output of generative Convolutional Neural Network (CNN) architectures. So far, DIP has been shown to be an effective approach when combined with classical and novel regularizers. Unfortunately, to obtain appropriate solutions, all the models proposed up to now require an accurate estimate of the regularization parameter. To overcome this difficulty, we consider a locally adapted regularized unconstrained model whose local regularization parameters are automatically estimated for additively separable regularizers. Moreover, we propose a novel constrained formulation in analogy to Morozov's discrepancy principle which enables the application of a broader range of regularizers. Both the unconstrained and the constrained models are solved via the proximal gradient descent-ascent method. Numerical results demonstrate the robustness with respect to image content, noise levels and hyperparameters of the proposed models on both denoising and deblurring of simulated as well as real natural and medical images.
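In the usual DIP notation (our symbols, not necessarily the paper's), the unconstrained model with locally adapted parameters and the Morozov-style constrained model contrasted above can be written as:

```latex
% Unconstrained DIP with local regularization parameters \lambda_i
% for an additively separable regularizer R = \sum_i R_i:
\min_{\theta} \; \|A f_{\theta}(z) - y\|_2^2 + \sum_i \lambda_i \, R_i\!\bigl(f_{\theta}(z)\bigr)

% Constrained DIP in analogy to Morozov's discrepancy principle,
% where \delta estimates the noise level:
\min_{\theta} \; R\!\bigl(f_{\theta}(z)\bigr)
\quad \text{s.t.} \quad \|A f_{\theta}(z) - y\|_2 \le \delta
```

Here $f_\theta$ is the generative CNN, $z$ a fixed random input, $A$ the forward operator, and $y$ the corrupted image; both formulations are solved with the proximal gradient descent-ascent method described in the abstract.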
Collapse
Affiliation(s)
| | - Giorgia Franchini
- Department of Physics, Informatics and Mathematics, University of Modena and Reggio Emilia, Modena, Italy
| | - Erich Kobler
- Institute of Computer Graphics, University of Linz, Linz, Austria
| | - Federica Porta
- Department of Physics, Informatics and Mathematics, University of Modena and Reggio Emilia, Modena, Italy
| | | |
Collapse
|
28
|
Vilimek D, Kubicek J, Golian M, Jaros R, Kahankova R, Hanzlikova P, Barvik D, Krestanova A, Penhaker M, Cerny M, Prokop O, Buzga M. Comparative analysis of wavelet transform filtering systems for noise reduction in ultrasound images. PLoS One 2022; 17:e0270745. [PMID: 35797331 PMCID: PMC9262246 DOI: 10.1371/journal.pone.0270745] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2021] [Accepted: 06/16/2022] [Indexed: 11/19/2022] Open
Abstract
Wavelet transform (WT) is a commonly used method for noise suppression and feature extraction from biomedical images. The selection of WT system settings significantly affects the efficiency of the denoising procedure. This comparative study analyzed the efficacy of the proposed WT system on 292 real ultrasound images from several areas of interest. The study investigates the performance of the system for different scaling functions of two basic wavelet bases, Daubechies and Symlets, and their efficiency on images artificially corrupted by three kinds of noise. To evaluate our extensive analysis, we used objective metrics, namely the structural similarity index (SSIM), correlation coefficient, mean squared error (MSE), peak signal-to-noise ratio (PSNR), and universal image quality index (Q-index). Moreover, this study includes clinical insights on selected filtration outcomes provided by clinical experts. The results show that the efficiency of the filtration strongly depends on the specific wavelet system setting, the type of ultrasound data, and the noise present. The findings presented may provide a useful guideline for researchers, software developers, and clinical professionals to obtain high-quality images.
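Two of the objective metrics used in this comparison, MSE and PSNR, can be sketched in a few lines; the 8-bit peak value is our assumption and the study's exact implementation may differ:

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between a reference image and a test image."""
    return float(np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2))

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    m = mse(ref, img)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

ref = np.full((8, 8), 100.0)
noisy = ref + 5.0            # constant error of 5 -> MSE of 25
print(mse(ref, noisy), psnr(ref, noisy))
```

SSIM and the Q-index weigh structure rather than pointwise error, which is why the study reports them alongside these two simpler metrics.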
Collapse
Affiliation(s)
- Dominik Vilimek
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
| | - Jan Kubicek
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
| | - Milos Golian
- Human Motion Diagnostic Center, Department of Human Movement Studies, University of Ostrava, Ostrava, Czech Republic
| | - Rene Jaros
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
| | - Radana Kahankova
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
| | - Pavla Hanzlikova
- Department of Imaging Method, Faculty of Medicine, University of Ostrava, Ostrava, Czech Republic
| | - Daniel Barvik
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
| | - Alice Krestanova
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
| | - Marek Penhaker
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
| | - Martin Cerny
- Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB - Technical University of Ostrava, Ostrava, Czech Republic
| | | | - Marek Buzga
- Human Motion Diagnostic Center, Department of Human Movement Studies, University of Ostrava, Ostrava, Czech Republic
- Deparment of Physiology and Pathophysiology, Faculty of Medicine, University of Ostrava, Ostrava, Czech Republic
| |
Collapse
|
29
|
Xiong T, Ye W. Improved Adaptive Kalman-Median Filter for Line-Scan X-ray Transmission Image. SENSORS 2022; 22:s22134993. [PMID: 35808488 PMCID: PMC9269855 DOI: 10.3390/s22134993] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/21/2022] [Revised: 06/29/2022] [Accepted: 06/29/2022] [Indexed: 12/10/2022]
Abstract
With their wide application in industrial fields, the denoising and/or filtering of line-scan images is becoming more important, as it also affects the quality of subsequent recognition or classification. Based on the application of single-source dual-energy X-ray transmission (DE-XRT) line scanning to in-line material sorting, and on the different horizontal and vertical characteristics of line-scan images, an improved adaptive Kalman-median filter (IAKMF) was proposed for several kinds of noise from an energy-integrating detector. The filter was realized through the determination of the off-line total noise covariance, the covariance distribution coefficient between the process noise and measurement noise, the adaptive covariance scale coefficient, the calculation scanning mode, and a single-line median filter. The experimental results show that the proposed filter has the advantages of simple implementation, good real-time behavior, high precision, small artifacts, convenience, and practicality. It balances the filtering of high-frequency random noise with the retention of low-frequency real signal fluctuation and the preservation of shape features. The filter also has good practical application value and can be improved and extended to other line-scan image filtering scenarios.
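For intuition, the two ingredients named in the filter, a single-line median filter and a scalar Kalman update, can be sketched as follows; the parameters `q` and `r` and the edge handling are our assumptions, not the IAKMF's adaptive covariance scheme:

```python
import numpy as np

def median1d(line, k=3):
    """Single-line median filter with edge replication."""
    pad = k // 2
    padded = np.pad(line, pad, mode="edge")
    return np.array([np.median(padded[i:i + k]) for i in range(len(line))])

def kalman1d(line, q=1e-3, r=1e-1):
    """Scalar Kalman filter along one scan line (process noise q, measurement noise r)."""
    x, p = line[0], 1.0
    out = [x]
    for z in line[1:]:
        p = p + q                      # predict: state variance grows by process noise
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with measurement z
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

line = np.array([10.0, 10.0, 60.0, 10.0, 10.0])   # impulse at index 2
print(median1d(line), kalman1d(line))
```

The median stage removes the impulse entirely, while the Kalman stage only damps it; combining the two, with adaptively tuned covariances, is the essence of a Kalman-median design.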
Collapse
Affiliation(s)
- Tianzhong Xiong
- College of Mechanical & Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China;
- College of Mechanical & Electrical Engineering, Sanjiang University, Nanjing 210012, China
| | - Wenhua Ye
- College of Mechanical & Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China;
| |
Collapse
|
30
|
Cheikh F, Benhassine NE, Sbaa S. Fetal phonocardiogram signals denoising using improved complete ensemble (EMD) with adaptive noise and optimal thresholding of wavelet coefficients. BIOMED ENG-BIOMED TE 2022; 67:237-247. [PMID: 35647890 DOI: 10.1515/bmt-2022-0006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Accepted: 05/17/2022] [Indexed: 11/15/2022]
Abstract
Although fetal phonocardiogram (fPCG) signals have become a good indicator for detecting heart disease, they may be contaminated by various noises that reduce signal quality and compromise the final diagnostic decision. Moreover, noise carries the risk that the heart signal will be misunderstood and misinterpreted. The main objective of this paper is to effectively remove noise from the fPCG signal to make it clinically usable. We therefore propose a novel noise reduction method based on Improved Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (ICEEMDAN), wavelet thresholding, and the Crow Search Algorithm (CSA). This noise reduction method, named ICEEMDAN-DWT-CSA, has three major advantages: (i) better suppression of mode mixing and a minimized number of IMFs, (ii) a choice of wavelet appropriate to the studied signal, as supported by the literature, and (iii) selection of the optimal threshold value. First, the noisy fPCG signal is decomposed into Intrinsic Mode Functions (IMFs) by ICEEMDAN, and each noisy IMF is decomposed by the Discrete Wavelet Transform (DWT). Then, the optimal threshold value is selected using the CSA technique and the thresholding function is applied to the detail coefficients. Second, each denoised IMF is reconstructed by applying the Inverse Discrete Wavelet Transform (IDWT). Finally, all the denoised IMFs are combined to obtain the denoised fPCG signal. The performance of the proposed method has been evaluated by the Signal-to-Noise Ratio (SNR), Mean Square Error (MSE), and the Correlation Coefficient (COR). The experiments gave better results than several standard methods.
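The thresholding applied to the detail coefficients can be sketched as a soft-threshold function; the ICEEMDAN decomposition and the CSA search for the optimal threshold are omitted here, and the fixed threshold value is purely illustrative:

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft-threshold wavelet detail coefficients: shrink magnitudes toward zero by t."""
    c = np.asarray(coeffs, float)
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

detail = np.array([-3.0, -0.5, 0.2, 1.5, 4.0])
print(soft_threshold(detail, 1.0))
```

Coefficients with magnitude below the threshold (presumed noise) are zeroed, and the survivors are shrunk by the threshold; CSA's role in the paper is to search for the threshold `t` that optimizes the denoising objective.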
Collapse
Affiliation(s)
- Fethi Cheikh
- Department of Electrical Engineering, University of Biskra, Biskra, Algeria; Laboratory of LESIA, University of Biskra, Biskra, Algeria
| | - Nasser Edinne Benhassine
- Department of Mathematics and Informatics, Aflou University Center, Aflou, Algeria; Advanced Control Laboratory (LABCAV), University 8 Mai 1945 Guelma, Guelma, Algeria
| | - Salim Sbaa
- Department of Electrical Engineering, University of Biskra, Biskra, Algeria; Laboratory of LESIA, University of Biskra, Biskra, Algeria
| |
Collapse
|
31
|
Geng M, Meng X, Yu J, Zhu L, Jin L, Jiang Z, Qiu B, Li H, Kong H, Yuan J, Yang K, Shan H, Han H, Yang Z, Ren Q, Lu Y. Content-Noise Complementary Learning for Medical Image Denoising. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:407-419. [PMID: 34529565 DOI: 10.1109/tmi.2021.3113365] [Citation(s) in RCA: 29] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Medical imaging denoising faces great challenges, yet is in great demand. With its distinctive characteristics, medical imaging denoising in the image domain requires innovative deep learning strategies. In this study, we propose a simple yet effective strategy, the content-noise complementary learning (CNCL) strategy, in which two deep learning predictors are used to learn the respective content and noise of the image dataset complementarily. A medical image denoising pipeline based on the CNCL strategy is presented, and is implemented as a generative adversarial network, where various representative networks (including U-Net, DnCNN, and SRDenseNet) are investigated as the predictors. The performance of these implemented models has been validated on medical imaging datasets including CT, MR, and PET. The results show that this strategy outperforms state-of-the-art denoising algorithms in terms of visual quality and quantitative metrics, and the strategy demonstrates a robust generalization capability. These findings validate that this simple yet effective strategy demonstrates promising potential for medical image denoising tasks, which could exert a clinical impact in the future. Code is available at: https://github.com/gengmufeng/CNCL-denoising.
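The complementary use of a content predictor and a noise predictor can be illustrated with stand-in predictors; the simple averaging fusion below is our own simplification, since CNCL's actual fusion is learned within the GAN:

```python
import numpy as np

rng = np.random.default_rng(2)
clean = rng.random((16, 16))
noisy = clean + 0.1 * rng.standard_normal((16, 16))

# Stand-ins for the two trained predictors (deep networks in CNCL):
# the content branch estimates the clean image, the noise branch estimates the noise.
content_pred = clean + 0.02 * rng.standard_normal((16, 16))
noise_pred = (noisy - clean) + 0.02 * rng.standard_normal((16, 16))

# Complementary fusion: average the direct content estimate with the
# noise-subtracted estimate of the same image.
denoised = 0.5 * (content_pred + (noisy - noise_pred))
print(np.abs(denoised - clean).mean(), np.abs(noisy - clean).mean())
```

Because the two branches make partially independent errors, their fused estimate has lower error than either the noisy input or a single imperfect branch, which is the intuition behind learning content and noise complementarily.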
Collapse
|
32
|
Detection and Classification of Knee Injuries from MR Images Using the MRNet Dataset with Progressively Operating Deep Learning Methods. MACHINE LEARNING AND KNOWLEDGE EXTRACTION 2021. [DOI: 10.3390/make3040050] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
This study aimed to build progressively operating deep learning models that could detect meniscus injuries, anterior cruciate ligament (ACL) tears, and knee abnormalities in magnetic resonance imaging (MRI). The Stanford Machine Learning Group MRNet dataset was employed in the study, which included MRI image indexes in the coronal, sagittal, and axial axes, each having 1130 training and 120 validation items. The study is divided into three sections. In the first section, suitable images are selected to determine the disease in the image index based on the disturbance under examination; this section also identifies images that have been misclassified, or that are noisy and/or damaged to the degree that they cannot be utilised for diagnosis. The study employed the 50-layer residual network (ResNet50) model in this section. The second part of the study involves locating the region to be focused on, based on the disturbance targeted for diagnosis in the image under examination. A novel model was built in this section by integrating convolutional neural network (CNN) and denoising autoencoder models. The third section is dedicated to diagnosing the disease: a new ResNet50 model, independent of the one used in the first section, is trained to identify disease diagnoses or abnormalities. The methods are referred to as progressively operating deep learning methods since the images that each model selects as output after training are supplied as input to the following model.
Collapse
|
33
|
Chaddad A, Li J, Lu Q, Li Y, Okuwobi IP, Tanougast C, Desrosiers C, Niazi T. Can Autism Be Diagnosed with Artificial Intelligence? A Narrative Review. Diagnostics (Basel) 2021; 11:2032. [PMID: 34829379 PMCID: PMC8618159 DOI: 10.3390/diagnostics11112032] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2021] [Revised: 10/31/2021] [Accepted: 10/31/2021] [Indexed: 11/16/2022] Open
Abstract
Radiomics combined with deep learning models has become popular in computer-aided diagnosis and has outperformed human experts on many clinical tasks. Specifically, radiomic models based on artificial intelligence (AI) use medical data (i.e., images, molecular data, clinical variables, etc.) to predict clinical outcomes such as autism spectrum disorder (ASD). In this review, we summarize and discuss the radiomic techniques used for ASD analysis. Currently, the limited radiomic work on ASD concerns variation in morphological features such as brain thickness, which differs from texture analysis. These techniques are based on imaging shape features that can be used with predictive models for predicting ASD. This review explores the progress of ASD-based radiomics, with a brief description of ASD and of the current non-invasive techniques used to classify between ASD and healthy control (HC) subjects. New radiomic models using deep learning techniques are also described. To consider texture analysis with deep CNNs, further investigations integrating additional validation steps on various MRI sites are suggested.
Collapse
Affiliation(s)
- Ahmad Chaddad
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China; (J.L.); (Q.L.); (Y.L.); (I.P.O.)
- The Laboratory for Imagery, Vision and Artificial Intelligence, École de Technologie Supérieure (ETS), Montreal, QC H3C 1K3, Canada;
| | - Jiali Li
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China; (J.L.); (Q.L.); (Y.L.); (I.P.O.)
| | - Qizong Lu
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China; (J.L.); (Q.L.); (Y.L.); (I.P.O.)
| | - Yujie Li
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China; (J.L.); (Q.L.); (Y.L.); (I.P.O.)
| | - Idowu Paul Okuwobi
- School of Artificial Intelligence, Guilin University of Electronic Technology, Guilin 541004, China; (J.L.); (Q.L.); (Y.L.); (I.P.O.)
| | - Camel Tanougast
- Laboratoire de Conception, Optimisation et Modélisation des Systèmes, University of Lorraine, 57070 Metz, France;
| | - Christian Desrosiers
- The Laboratory for Imagery, Vision and Artificial Intelligence, École de Technologie Supérieure (ETS), Montreal, QC H3C 1K3, Canada;
| | - Tamim Niazi
- Lady Davis Institute for Medical Research, McGill University, Montreal, QC H3T 1E2, Canada;
| |
Collapse
|
34
|
Kulathilake KASH, Abdullah NA, Bandara AMRR, Lai KW. InNetGAN: Inception Network-Based Generative Adversarial Network for Denoising Low-Dose Computed Tomography. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:9975762. [PMID: 34552709 PMCID: PMC8452440 DOI: 10.1155/2021/9975762] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Revised: 08/18/2021] [Accepted: 08/27/2021] [Indexed: 12/24/2022]
Abstract
Low-dose Computed Tomography (LDCT) has gained a great deal of attention in clinical procedures due to its ability to reduce the patient's risk of exposure to X-ray radiation. However, reducing the X-ray dose increases the quantum noise and artifacts in the acquired LDCT images. As a result, it produces visually low-quality LDCT images that adversely affect disease diagnosis and treatment planning in clinical procedures. Deep Learning (DL) has recently become the cutting-edge technology for LDCT denoising due to its high performance and data-driven execution compared with conventional denoising approaches. Although DL-based models perform fairly well in LDCT noise reduction, some noise components are still retained in denoised LDCT images. One reason for this noise retention is the direct transmission of feature maps through the skip connections of contraction-and-extraction-path-based DL models. Therefore, in this study, we propose a Generative Adversarial Network with Inception network modules (InNetGAN) as a solution for filtering the noise transmitted through skip connections and preserving the texture and fine structure of LDCT images. The proposed generator is modeled on the U-net architecture, whose skip connections are modified with three different Inception network modules to filter out the noise in the feature maps passing over them. The quantitative and qualitative experimental results show the performance of the InNetGAN model in reducing noise and preserving the subtle structures and texture details in LDCT images compared with other state-of-the-art denoising algorithms.
Collapse
Affiliation(s)
- K. A. Saneera Hemantha Kulathilake
- Department of Computer System and Technology, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
- Department of Computing, Faculty of Applied Sciences, Rajarata University of Sri Lanka, Mihintale, Sri Lanka
| | - Nor Aniza Abdullah
- Department of Computer System and Technology, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
| | | | - Khin Wee Lai
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
| |
Collapse
|
35
|
Rawat S, Rana K, Kumar V. A novel complex-valued convolutional neural network for medical image denoising. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102859] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
36
|
Ilesanmi AE, Ilesanmi TO. Methods for image denoising using convolutional neural network: a review. COMPLEX INTELL SYST 2021. [DOI: 10.1007/s40747-021-00428-4] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Image denoising faces significant challenges arising from the sources of noise. Specifically, Gaussian, impulse, salt-and-pepper, and speckle noise are complicated sources of noise in imaging. Convolutional neural networks (CNNs) have increasingly received attention in the image denoising task, and several CNN methods for denoising images, evaluated on different datasets, have been studied. In this paper, we offer an elaborate study of the different CNN techniques used in image denoising. Different CNN methods for image denoising are categorized and analyzed, and popular datasets used for evaluating CNN image denoising methods are investigated. Several CNN image denoising papers were selected for review and analysis. Motivations and principles of CNN methods are outlined. Some state-of-the-art CNN image denoising methods are depicted in graphical form, while other methods are elaborately explained. Previous and recent papers on image denoising with CNNs were selected, and potential challenges and directions for future research are fully explicated.
Collapse
|
37
|
Zhang Y, Shao Y, Shen J, Lu Y, Zheng Z, Sidib Y, Yu B. Infrared image impulse noise suppression using tensor robust principal component analysis and truncated total variation. APPLIED OPTICS 2021; 60:4916-4929. [PMID: 34143054 DOI: 10.1364/ao.421081] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/01/2021] [Accepted: 04/25/2021] [Indexed: 06/12/2023]
Abstract
Infrared image denoising is an essential inverse problem that has been widely applied in many fields. However, when suppressing impulse noise, existing methods lead to blurred object details and loss of image information. Moreover, computational efficiency is another challenge for existing methods when processing infrared images with large resolution. An infrared image impulse-noise-suppression method is introduced based on tensor robust principal component analysis. Specifically, we propose a randomized tensor singular-value thresholding algorithm to solve the tensor kernel norm based on the matrix stochastic singular-value decomposition and tensor singular-value threshold. Combined with the image blocking, it can not only ensure the denoising performance but also greatly improve the algorithm's efficiency. Finally, truncated total variation is applied to improve the smoothness of the denoised image. Experimental results indicate that the proposed algorithm outperforms the state-of-the-art methods in computational efficiency, denoising effect, and detail feature preservation.
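A minimal sketch of randomized singular-value thresholding, the core step described above, might look like this; the rank parameter and the QR-based range finder are standard randomized-SVD choices, not necessarily the paper's exact algorithm:

```python
import numpy as np

def randomized_svt(M, tau, k, seed=0):
    """Approximate singular-value thresholding of M using a rank-k randomized range finder."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    Q, _ = np.linalg.qr(M @ rng.standard_normal((n, k)))  # orthonormal basis for an approximate range of M
    U, s, Vt = np.linalg.svd(Q.T @ M, full_matrices=False)  # small SVD on the projected matrix
    s = np.maximum(s - tau, 0.0)                            # soft-threshold the singular values
    return (Q @ U) * s @ Vt                                 # low-rank reconstruction

# Low-rank matrix corrupted by a sparse impulse, as in impulse-noise suppression:
rng = np.random.default_rng(3)
L = np.outer(rng.random(30), rng.random(30))
S = np.zeros((30, 30)); S[5, 7] = 10.0
approx = randomized_svt(L + S, tau=0.5, k=5)
print(approx.shape)
```

The projection onto a small random sketch replaces a full SVD of the large matrix with an SVD of a k-by-n matrix, which is where the claimed efficiency gain on large infrared images comes from; combined with image blocking and truncated total variation, this yields the full pipeline of the paper.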
Collapse
|
38
|
Kulathilake KASH, Abdullah NA, Sabri AQM, Lai KW. A review on Deep Learning approaches for low-dose Computed Tomography restoration. COMPLEX INTELL SYST 2021; 9:2713-2745. [PMID: 34777967 PMCID: PMC8164834 DOI: 10.1007/s40747-021-00405-x] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2020] [Accepted: 05/18/2021] [Indexed: 02/08/2023]
Abstract
Computed Tomography (CT) is a widely used medical imaging modality in clinical medicine because it produces excellent visualizations of fine structural details of the human body. In clinical procedures, it is desirable to acquire CT scans while minimizing the X-ray flux to prevent patients from being exposed to high radiation. However, these Low-Dose CT (LDCT) scanning protocols compromise the signal-to-noise ratio of the CT images because of noise and artifacts over the image space. Thus, various restoration methods have been published over the past 3 decades to produce high-quality CT images from these LDCT images. More recently, as opposed to conventional LDCT restoration methods, Deep Learning (DL)-based LDCT restoration approaches have become common due to their data-driven nature, high performance, and fast execution. Thus, this study aims to elaborate on the role of DL techniques in LDCT restoration and critically review the applications of DL-based approaches for LDCT restoration. To achieve this aim, different aspects of DL-based LDCT restoration applications were analyzed, including DL architectures, performance gains, functional requirements, and the diversity of objective functions. The outcome of the study highlights the existing limitations and future directions for DL-based LDCT restoration. To the best of our knowledge, there have been no previous reviews that specifically address this topic.
Affiliation(s)
- K. A. Saneera Hemantha Kulathilake
- Department of Computer System and Technology, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
- Nor Aniza Abdullah
- Department of Computer System and Technology, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
- Aznul Qalid Md Sabri
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
- Khin Wee Lai
- Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
39
Ilesanmi AE, Idowu OP, Chaumrattanakul U, Makhanov SS. Multiscale hybrid algorithm for pre-processing of ultrasound images. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102396]
40
Kondapalli SH, Chakrabartty S. Sub-Nanowatt Ultrasonic Bio-Telemetry Using B-Scan Imaging. IEEE Open J Eng Med Biol 2021; 2:17-25. [PMID: 33748769 PMCID: PMC7978362 DOI: 10.1109/ojemb.2021.3053174]
Abstract
Goal: The objective of this paper is to investigate whether the use of a B-scan ultrasound imaging system can reduce the energy requirements, and hence the power-dissipation requirements, of supporting wireless bio-telemetry at an implantable device. Methods: B-scan imaging data were acquired using a commercial 256-element linear ultrasound transducer array driven by a commercial echoscope. A water bath served as the transmission medium, and the operation of the implantable device was emulated using a commercial off-the-shelf microcontroller board. The telemetry parameters (e.g., transmission rate and transmission power) were wirelessly controlled using a two-way radio-frequency transceiver. B-scan imaging data were post-processed using a maximum-threshold decoder, and the quality of the ultrasonic telemetry link was quantified in terms of its bit-error rate (BER). Results: Measured results show that a reliable B-scan communication link with an implantable device can be achieved at transmission power levels of 100 pW and for implantation depths greater than 10 cm. Conclusions: We demonstrated that a combination of B-scan imaging and a simple decoding algorithm can significantly reduce the energy-budget requirements for reliable ultrasonic telemetry.
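The maximum-threshold decoding and BER scoring described in this abstract can be sketched as follows; the half-peak slicing rule and the on/off amplitudes are illustrative assumptions, not the paper's measured parameters.

```python
def threshold_decode(samples, threshold=None):
    """Decode on/off-keyed bits from received amplitude samples.

    Assumed maximum-threshold rule (for illustration): slice at half
    the peak amplitude observed in the frame.
    """
    if threshold is None:
        threshold = max(samples) / 2.0
    return [1 if s > threshold else 0 for s in samples]


def bit_error_rate(decoded, reference):
    """Fraction of decoded bits that differ from the transmitted bits."""
    errors = sum(d != r for d, r in zip(decoded, reference))
    return errors / len(reference)


# Clean-channel illustration: amplitude 1.0 encodes '1', 0.1 encodes '0'.
tx = [1, 0, 1, 1, 0, 0, 1, 0]
rx = [1.0 if b == 1 else 0.1 for b in tx]
ber = bit_error_rate(threshold_decode(rx), tx)  # 0.0 on a noiseless link
```

On a noisy link the same BER computation quantifies link quality as amplitudes drift toward the threshold.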
Affiliation(s)
- Sri Harsha Kondapalli
- Department of Electrical and Systems Engineering at Washington University in St. Louis, St. Louis, MO 63130 USA
- Shantanu Chakrabartty
- Department of Electrical and Systems Engineering at Washington University in St. Louis, St. Louis, MO 63130 USA
41
Novel FBP based sparse-view CT reconstruction scheme using self-shaping spatial filter based morphological operations and scaled reprojections. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102323]
42
Talha SMU, Mairaj T, Yousuf WB, Zahed JA. Region-Based Segmentation and Wiener Pilot-Based Novel Amoeba Denoising Scheme for CT Imaging. Scanning 2020; 2020:6172046. [PMID: 33381254 PMCID: PMC7752284 DOI: 10.1155/2020/6172046]
Abstract
Computed tomography (CT) is one of the most common and beneficial medical imaging schemes, but the associated high radiation dose, which is injurious to the patient, is always a concern. Therefore, postprocessing-based enhancement of a CT image reconstructed from a reduced dose is an active research area. Amoeba (spatially variant kernel) filtering, which adapts its kernel shape to the image contents, is a strong candidate scheme for postprocessing of CT images. In the reported research work, amoeba filtering is customized for postprocessing of CT images acquired at a reduced X-ray dose. The proposed scheme modifies both the pilot-image formation and the amoeba-shaping mechanism of the conventional amoeba implementation: it uses a Wiener filter-based pilot image, while region-based segmentation is used for amoeba shaping instead of the conventional amoeba distance-based approach. The merits of the proposed scheme include being better suited to CT images because of the region-based and symmetric nature of human anatomy, image smoothing without compromising edge details, and being adaptive in nature and more robust to noise. The performance of the proposed amoeba scheme is compared with the traditional amoeba kernel in the image-denoising application for CT images using filtered back projection (FBP) on sparse-view projections. The scheme is supported by computer simulations using fan-beam projections of clinically reconstructed and simulated head CT phantoms, and is tested using multiple image quality metrics in the presence of additive projection noise. The scheme's implementation significantly improves the image quality both visually and statistically, providing better contrast and image smoothing without compromising edge details. Promising results indicate the efficacy of the proposed scheme.
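The two ingredients this abstract names (a Wiener-filtered pilot image and region labels that confine the smoothing kernel) can be sketched roughly as below. This is not the authors' algorithm: the quantile binning of the pilot is a simple stand-in for their region-based segmentation, and the same-label windowed mean is a crude stand-in for amoeba shaping.

```python
import numpy as np
from scipy.signal import wiener


def region_guided_denoise(image, kernel=5, n_regions=4, radius=2):
    """Illustrative sketch: Wiener pilot + region-restricted local mean.

    1. Build a pilot image with a Wiener filter.
    2. Quantile-bin the pilot intensities into n_regions labels
       (assumed stand-in for region-based segmentation).
    3. Replace each pixel by the mean of window neighbours sharing its
       label, so smoothing does not cross region boundaries (an
       amoeba-like, spatially variant support).
    """
    image = np.asarray(image, dtype=float)
    pilot = wiener(image, mysize=kernel)
    cuts = np.quantile(pilot, np.linspace(0, 1, n_regions + 1)[1:-1])
    labels = np.digitize(pilot, cuts)
    num = np.zeros_like(image)
    den = np.zeros_like(image)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # np.roll wraps at image borders; acceptable for a sketch.
            same = np.roll(labels, (dy, dx), axis=(0, 1)) == labels
            num += np.where(same, np.roll(image, (dy, dx), axis=(0, 1)), 0.0)
            den += same
    return num / den  # den >= 1: the centre pixel always matches itself
```

Because the support of the mean is masked by the label map, edges between regions are preserved while flat areas are smoothed, which is the behaviour the paper attributes to its region-guided amoeba kernel.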
Affiliation(s)
- Syed Muhammad Umar Talha
- Department of Electrical Engineering, Pakistan Navy Engineering College, National University of Sciences and Technology, H-12 Islamabad, Pakistan
- Department of Telecommunication Engineering, Sir Syed University of Engineering & Technology, Karachi, Pakistan
- Tariq Mairaj
- Department of Electrical Engineering, Pakistan Navy Engineering College, National University of Sciences and Technology, H-12 Islamabad, Pakistan
- Waleed Bin Yousuf
- Department of Electrical Engineering, Pakistan Navy Engineering College, National University of Sciences and Technology, H-12 Islamabad, Pakistan
- Jawwad Ali Zahed
- Department of Electrical Engineering, Pakistan Navy Engineering College, National University of Sciences and Technology, H-12 Islamabad, Pakistan
43
Learning Medical Image Denoising with Deep Dynamic Residual Attention Network. Mathematics 2020. [DOI: 10.3390/math8122192]
Abstract
Image denoising plays a prominent role in medical image analysis. In many cases, it can drastically accelerate the diagnostic process by enhancing the perceptual quality of noisy image samples. However, despite the extensive practicability of medical image denoising, existing denoising methods exhibit deficiencies in addressing the diverse range of noise that appears in multidisciplinary medical images. This study alleviates this challenging denoising task by learning residual noise from a substantial number of data samples. Additionally, the proposed method accelerates the learning process by introducing a novel deep network, whose architecture exploits feature correlation through an attention mechanism and combines it with spatially refined residual features. The experimental results illustrate that the proposed method can outperform existing works by a substantial margin in both quantitative and qualitative comparisons. The proposed method can also handle real-world image noise and can improve the performance of different medical image analysis tasks without producing any visually disturbing artefacts.
44
Image Denoising Using Non-Local Means (NLM) Approach in Magnetic Resonance (MR) Imaging: A Systematic Review. Appl Sci (Basel) 2020. [DOI: 10.3390/app10207028]
Abstract
The non-local means (NLM) noise reduction algorithm is well known as an excellent technique for removing noise from a magnetic resonance (MR) image to improve diagnostic accuracy. In this study, we undertook a systematic review to determine the effectiveness of the NLM noise reduction algorithm in MR imaging. A systematic literature search was conducted across three databases for publications dating from January 2000 to March 2020; of the 82 publications reviewed, 25 were included in this study. The subjects were categorized into four major frameworks and the findings analyzed for each. Research in NLM noise reduction for MR images has been increasing worldwide, although it has declined slightly since 2016. The NLM technique was most frequently applied to brain images acquired with general MR imaging, and most studies combined experiments on real and simulated data. In particular, comparison parameters were frequently used to evaluate the effectiveness of the algorithm on MR images. The ultimate goal is to provide an accurate method for the diagnosis of disease, and our conclusion is that the NLM noise reduction algorithm is a promising means of achieving this goal.
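As a reminder of the core algorithm this review surveys, a minimal pixel-wise NLM implementation might look like the following. It is an educational sketch, not an optimized or reviewed implementation; patch size, search window, and the filtering parameter h are illustrative choices.

```python
import numpy as np


def nlm_denoise(image, patch=3, search=7, h=0.6):
    """Minimal non-local means sketch (unoptimized, for illustration).

    For each pixel, every patch inside the search window is compared to
    the patch around that pixel; more similar patches receive
    exponentially larger weights in the averaged output value.
    """
    img = np.asarray(image, dtype=float)
    pr, sr = patch // 2, search // 2
    pad = np.pad(img, pr + sr, mode="reflect")
    H, W = img.shape
    out = np.zeros_like(img)
    for i in range(H):
        for j in range(W):
            ci, cj = i + pr + sr, j + pr + sr
            ref = pad[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            wsum, acc = 0.0, 0.0
            for di in range(-sr, sr + 1):
                for dj in range(-sr, sr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = pad[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    d2 = np.mean((ref - cand) ** 2)  # patch distance
                    w = np.exp(-d2 / (h * h))        # similarity weight
                    wsum += w
                    acc += w * pad[ni, nj]
            out[i, j] = acc / wsum
    return out
```

The search-window size is exactly the parameter the adaptive NLM variant in the head entry automates; in this sketch it remains a manual argument.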