1. Ahn C, Kim JH. AntiHalluciNet: A Potential Auditing Tool of the Behavior of Deep Learning Denoising Models in Low-Dose Computed Tomography. Diagnostics (Basel) 2023; 14:96. [PMID: 38201404] [PMCID: PMC10795730] [DOI: 10.3390/diagnostics14010096]
Abstract
Gaining the ability to audit the behavior of deep learning (DL) denoising models is of crucial importance to prevent potential hallucinations and adverse clinical consequences. We present a preliminary version of AntiHalluciNet, which is designed to predict spurious structural components embedded in the residual noise from DL denoising models in low-dose CT, and assess its feasibility for auditing the behavior of DL denoising models. We created a paired set of structure-embedded and pure noise images and trained AntiHalluciNet to predict spurious structures in the structure-embedded noise images. The performance of AntiHalluciNet was evaluated using a newly devised residual structure index (RSI), which represents the prediction confidence based on the presence of structural components in the residual noise image. We also evaluated whether AntiHalluciNet could assess the image fidelity of a denoised image using only a noise component, instead of measuring the SSIM, which requires both reference and test images. We then explored the potential of AntiHalluciNet for auditing the behavior of DL denoising models. AntiHalluciNet was applied to three DL denoising models (two pre-trained models, RED-CNN and CTformer, and commercial software, ClariCT.AI [version 1.2.3]), and we assessed whether it could discriminate between the noise purity performances of these models. AntiHalluciNet demonstrated excellent performance in predicting the presence of structural components. The RSI values for the structure-embedded and pure noise images measured using the 50% low-dose dataset were 0.57 ± 31 and 0.02 ± 0.02, respectively, a substantial difference with a p-value < 0.0001.
The AntiHalluciNet-derived RSI could differentiate between the quality of the degraded denoised images, with measurement values of 0.27, 0.41, 0.48, and 0.52 for the 25%, 50%, 75%, and 100% mixing rates of the degradation component, which showed a higher differentiation potential compared with the SSIM values of 0.9603, 0.9579, 0.9490, and 0.9333. The RSI measurements from the residual images of the three DL denoising models showed a distinct distribution, being 0.28 ± 0.06, 0.21 ± 0.06, and 0.15 ± 0.03 for RED-CNN, CTformer, and ClariCT.AI, respectively. AntiHalluciNet has the potential to predict the structural components embedded in the residual noise from DL denoising models in low-dose CT. With AntiHalluciNet, it is feasible to audit the performance and behavior of DL denoising models in clinical environments where only residual noise images are available.
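The abstract contrasts the no-reference RSI with the full-reference SSIM, which needs both a reference and a test image. As a rough illustration of the full-reference side, a single-window SSIM can be sketched in numpy (this simplifies the usual sliding-window version; the images and constants below are illustrative, not the paper's):

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    # Single-window SSIM over the whole image: a simplification of the
    # usual 11x11 sliding-window mean. c1, c2 are the standard
    # stabilisation constants.
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float((2 * mx * my + c1) * (2 * cov + c2)
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                         # stand-in "reference" image
noisy = np.clip(ref + 0.1 * rng.standard_normal(ref.shape), 0.0, 1.0)
```

Identical images score 1.0, and added noise pulls the score below 1.0, which is the behavior the degraded-image comparison above relies on.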
Affiliation(s)
- Chulkyun Ahn
- Department of Transdisciplinary Studies, Program in Biomedical Radiation Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul 08826, Republic of Korea;
- ClariPi Research, ClariPi, Seoul 03088, Republic of Korea
- Jong Hyo Kim
- Department of Transdisciplinary Studies, Program in Biomedical Radiation Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul 08826, Republic of Korea;
- ClariPi Research, ClariPi, Seoul 03088, Republic of Korea
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, Seoul 08826, Republic of Korea
- Department of Radiology, Seoul National University College of Medicine, Seoul 03080, Republic of Korea
- Department of Radiology, Seoul National University Hospital, Seoul 03080, Republic of Korea
- Center for Medical-IT Convergence Technology Research, Advanced Institutes of Convergence Technology, Suwon-si 16229, Republic of Korea
2. Siracusano G, La Corte A, Nucera AG, Gaeta M, Chiappini M, Finocchio G. Effective processing pipeline PACE 2.0 for enhancing chest x-ray contrast and diagnostic interpretability. Sci Rep 2023; 13:22471. [PMID: 38110512] [PMCID: PMC10728198] [DOI: 10.1038/s41598-023-49534-y]
Abstract
Preprocessing is an essential task for the correct analysis of digital medical images. In particular, X-ray images may contain artifacts, low contrast, diffraction effects, or intensity inhomogeneities. We previously developed a procedure named PACE that improves chest X-ray (CXR) images, including support for the clinical evaluation of pneumonia caused by COVID-19. During clinical benchmarking of that tool, certain conditions were found to reduce detail over large bright regions (as in ground-glass opacities and in pleural effusions in bedridden patients), resulting in oversaturated areas. Here, we significantly improve the overall performance of the original approach, including in those specific cases, with PACE2.0. It combines 2D image decomposition, non-local means denoising, gamma correction, and recursive algorithms to improve image quality. The tool was evaluated using four metrics: the contrast improvement index (CII), information entropy (ENT), effective measure of enhancement (EME), and BRISQUE, with average improvements of 35%, 7.5%, 95.6%, and 13%, respectively, over the original radiographs. Additionally, the enhanced images were fed to a pre-trained DenseNet-121 model for transfer learning, increasing classification accuracy from 80% to 94% and recall from 89% to 97%. These improvements can enhance the interpretability of lesion detection in CXRs. PACE2.0 has the potential to become a valuable tool for clinical decision support and could help healthcare professionals detect pneumonia more accurately.
Affiliation(s)
- Giulio Siracusano
- Department of Electric, Electronic and Computer Engineering, University of Catania, Viale Andrea Doria 6, 95125, Catania, Italy.
- Aurelio La Corte
- Department of Electric, Electronic and Computer Engineering, University of Catania, Viale Andrea Doria 6, 95125, Catania, Italy
- Annamaria Giuseppina Nucera
- Unit of Radiology, Department of Advanced Diagnostic-Therapeutic Technologies, "Bianchi-Melacrino-Morelli" Hospital, Reggio Calabria, Via Giuseppe Melacrino, 21, 89124, Reggio Calabria, Italy
- Michele Gaeta
- Department of Biomedical Sciences, Dental and of Morphological and Functional Images, University of Messina, Via Consolare Valeria 1, 98125, Messina, Italy
- Massimo Chiappini
- Istituto Nazionale di Geofisica e Vulcanologia (INGV), Via di Vigna Murata 605, 00143, Rome, Italy.
- Maris Scarl, Via Vigna Murata 606, 00143, Rome, Italy.
- Giovanni Finocchio
- Istituto Nazionale di Geofisica e Vulcanologia (INGV), Via di Vigna Murata 605, 00143, Rome, Italy.
- Department of Mathematical and Computer Sciences, Physical Sciences and Earth Sciences, University of Messina, V.le F. Stagno D'Alcontres 31, 98166, Messina, Italy.
3. Manzano-Patron JP, Moeller S, Andersson JLR, Ugurbil K, Yacoub E, Sotiropoulos SN. Denoising diffusion MRI: considerations and implications for analysis. bioRxiv [Preprint] 2023:2023.07.24.550348. [PMID: 37546835] [PMCID: PMC10402048] [DOI: 10.1101/2023.07.24.550348]
Abstract
Development of diffusion MRI (dMRI) denoising approaches has experienced considerable growth in recent years. As noise inherently reduces accuracy and precision in measurements, its effects have been well characterised, both in terms of increased uncertainty in dMRI-derived features and in terms of biases caused by the noise floor, the smallest measurable signal given the noise level. However, gaps remain in objectively characterising dMRI denoising approaches with respect to both of these effects and in assessing their efficacy. In this work, we reconsider what a denoising method should and should not do, and we accordingly define criteria to characterise performance. We propose a comprehensive set of evaluations, including i) benefits in improving signal quality and reducing noise variance, ii) gains in reducing biases and the noise floor, iii) preservation of spatial resolution, iv) agreement of denoised data against a gold standard, v) gains in downstream parameter estimation (precision and accuracy), and vi) efficacy in enabling noise-prone applications, such as ultra-high-resolution imaging. We further provide newly acquired complex datasets (magnitude and phase) with multiple repeats, sampling different SNR regimes, to highlight performance differences under different scenarios. Without loss of generality, we then apply a number of exemplar patch-based denoising algorithms to these datasets, including Non-Local Means, Marchenko-Pastur PCA (MPPCA) in the magnitude and complex domains, and NORDIC, and compare them with respect to the above criteria and against a gold-standard complex average of multiple repeats. We demonstrate that all tested denoising approaches reduce noise-related variance, but not always the biases from the elevated noise floor. They all induce a spatial resolution penalty, but its extent can vary depending on the method and the implementation.
Some denoising approaches agree with the gold standard more than others and we demonstrate challenges in even defining such a standard. Overall, we show that dMRI denoising performed in the complex domain is advantageous to magnitude domain denoising with respect to all the above criteria.
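The "noise floor" the authors describe arises from taking magnitudes of complex data: zero-mean complex Gaussian noise acquires a positive mean after the modulus, so small signals are biased upward. A small numpy demonstration (the sigma and sample count here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 1.0, 200_000

# True signal is zero; add complex Gaussian noise and take the
# magnitude, as happens with magnitude-reconstructed dMRI data.
noise = rng.normal(0.0, sigma, n) + 1j * rng.normal(0.0, sigma, n)
mag_mean = float(np.abs(noise).mean())

# The magnitude follows a Rayleigh distribution whose mean is
# sigma * sqrt(pi / 2) ~= 1.2533 * sigma, not zero: the noise floor.
expected = sigma * np.sqrt(np.pi / 2)
```

This is why denoising in the complex domain (before the modulus) can reduce the bias, while magnitude-domain denoising mostly reduces only the variance.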
Affiliation(s)
- Steen Moeller
- Center for Magnetic Resonance Research, University of Minnesota, USA
- Kamil Ugurbil
- Center for Magnetic Resonance Research, University of Minnesota, USA
- Essa Yacoub
- Center for Magnetic Resonance Research, University of Minnesota, USA
- Stamatios N Sotiropoulos
- Sir Peter Mansfield Imaging Centre, School of Medicine, University of Nottingham, UK
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, UK
- Nottingham Biomedical Research Centre, Queen's Medical Centre, University of Nottingham, UK
4. Kim CH, Chung MJ, Cha YK, Oh S, Kim KG, Yoo H. The impact of deep learning reconstruction in low dose computed tomography on the evaluation of interstitial lung disease. PLoS One 2023; 18:e0291745. [PMID: 37756357] [PMCID: PMC10529569] [DOI: 10.1371/journal.pone.0291745]
Abstract
To evaluate the effect of deep learning model reconstruction (DLM) on image quality and diagnostic agreement in low-dose computed tomography (LDCT) for interstitial lung disease (ILD), 193 patients who underwent LDCT for suspected ILD were retrospectively reviewed. Datasets were reconstructed using filtered back projection (FBP), adaptive statistical iterative reconstruction Veo (ASiR-V), and DLM. For image quality analysis, the signal, noise, signal-to-noise ratio (SNR), blind/referenceless image spatial quality evaluator (BRISQUE) score, and visual scores were evaluated. In addition, CT patterns of usual interstitial pneumonia (UIP) were classified according to the 2022 idiopathic pulmonary fibrosis (IPF) diagnostic criteria, and the differences between CT images reconstructed with FBP, ASiR-V 30%, and DLM were evaluated. The image noise and BRISQUE scores of the DLM images were lower, and the SNR was higher, than those of the ASiR-V and FBP images (ASiR-V vs. DLM, p < 0.001 and FBP vs. DLM, p < 0.001, respectively). The agreement of the diagnostic categorization of IPF between the three reconstruction methods was almost perfect (κ = 0.992, CI 0.990-0.994). Image quality was improved with DLM compared to ASiR-V and FBP.
Affiliation(s)
- Chu hyun Kim
- Center for Health Promotion, Samsung Medical Center, Seoul, Republic of Korea
- Department of Radiology and AI Research Center, Samsung Medical Center, Sungkyunkwan University, Seoul, Korea
- Myung Jin Chung
- Department of Radiology and AI Research Center, Samsung Medical Center, Sungkyunkwan University, Seoul, Korea
- Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul, Korea
- Yoon Ki Cha
- Department of Radiology and AI Research Center, Samsung Medical Center, Sungkyunkwan University, Seoul, Korea
- Seok Oh
- Gil Medical Center, Department of Biomedical Engineering, Gachon University College of Medicine, Incheon, Korea
- Kwang gi Kim
- Gil Medical Center, Department of Biomedical Engineering, Gachon University College of Medicine, Incheon, Korea
- Hongseok Yoo
- Division of Pulmonary and Critical Care Medicine, Samsung Medical Center, School of Medicine, Sungkyunkwan University, Seoul, South Korea
5. Automatic No-Reference kidney tissue whole slide image quality assessment based on composite fusion models. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104547]
6. Blind image quality assessment of magnetic resonance images with statistics of local intensity extrema. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2022.05.061]
7. Validation of a Saliency Map for Assessing Image Quality in Nuclear Medicine: Experimental Study Outcomes. Radiation 2022. [DOI: 10.3390/radiation2030018]
Abstract
Recently, the use of saliency maps to evaluate the image quality of nuclear medicine images has been reported. However, that study compared only qualitative visual evaluations and did not perform a quantitative assessment. The aim of the present study was to demonstrate the possibility of using saliency maps (calculated from intensity and flicker) to assess nuclear medicine image quality by comparison with evaluators' gaze data obtained from an eye-tracking device. We created 972 positron emission tomography images by changing the position of the hot sphere, the imaging time, and the number of iterations in the iterative reconstructions. Pearson's correlation coefficient between the saliency map calculated from each image and the evaluator's gaze data during image presentation was calculated. A strong correlation (r ≥ 0.94) was observed between the saliency map (intensity) and the evaluator's gaze data. This trend was also observed in images obtained from a clinical device. For short acquisition times, gaze toward the hot sphere position was greater for images with fewer iterations during the iterative reconstruction; however, no differences across iterations were found when the acquisition time increased. Saliency by flicker could be applied to clinical images without preprocessing, although, compared with the gaze image, it increased slowly.
8. Stępień I, Oszust M. A Brief Survey on No-Reference Image Quality Assessment Methods for Magnetic Resonance Images. J Imaging 2022; 8:160. [PMID: 35735959] [PMCID: PMC9224540] [DOI: 10.3390/jimaging8060160]
Abstract
No-reference image quality assessment (NR-IQA) methods automatically and objectively predict the perceptual quality of images without access to a reference image. Therefore, due to the lack of pristine images in most medical image acquisition systems, they play a major role in supporting the examination of resulting images and may affect subsequent treatment. Their usage is particularly important in magnetic resonance imaging (MRI) characterized by long acquisition times and a variety of factors that influence the quality of images. In this work, a survey covering recently introduced NR-IQA methods for the assessment of MR images is presented. First, typical distortions are reviewed and then popular NR methods are characterized, taking into account the way in which they describe MR images and create quality models for prediction. The survey also includes protocols used to evaluate the methods and popular benchmark databases. Finally, emerging challenges are outlined along with an indication of the trends towards creating accurate image prediction models.
Affiliation(s)
- Igor Stępień
- Doctoral School of Engineering and Technical Sciences, Rzeszow University of Technology, al. Powstancow Warszawy 12, 35-959 Rzeszow, Poland;
- Mariusz Oszust
- Department of Computer and Control Engineering, Rzeszow University of Technology, Wincentego Pola 2, 35-959 Rzeszow, Poland
9. Hu Q, Gois FNB, Costa R, Zhang L, Yin L, Magai N, de Albuquerque VHC. Explainable artificial intelligence-based edge fuzzy images for COVID-19 detection and identification. Appl Soft Comput 2022; 123:108966. [PMID: 35582662] [PMCID: PMC9102011] [DOI: 10.1016/j.asoc.2022.108966]
Abstract
The COVID-19 pandemic continues to wreak havoc on the health and well-being of the world's population. Successful screening of infected patients is a critical step in the fight against it, with radiological examination using chest radiography being one of the most important screening methods. For the definitive diagnosis of COVID-19, reverse-transcriptase polymerase chain reaction remains the gold standard, but currently available lab tests may not detect all infected individuals, so new screening methods are required. Motivated by this and by the open-source efforts in this research area, we propose a Multi-Input Transfer Learning COVID-Net fuzzy convolutional neural network to detect COVID-19 instances from torso X-rays. Furthermore, we use an explainability method to investigate several COVID-Net convolutional network forecasts, in an effort not only to gain deeper insights into the critical factors associated with COVID-19 instances, but also to aid clinicians in improving screening. We show that, using transfer learning and pre-trained models, COVID-19 can be detected with a high degree of accuracy. Using X-ray images, we chose four neural networks to predict its probability. Finally, to achieve better results, we considered various methods to verify the techniques proposed here. As a result, we were able to create a model with an AUC of 1.0 and accuracy, precision, and recall of 0.97. The model was quantized for use on Internet of Things devices and maintained an accuracy of 0.95.
Affiliation(s)
- Qinhua Hu
- School of Chemical Engineering and Energy Technology, Dongguan University of Technology, Dongguan 523808, China
- Lijuan Zhang
- DGUT-CNAM Institute, Dongguan University of Technology, Dongguan 523106, China
- Ling Yin
- School of Mechanical Engineering, Dongguan University of Technology, Dongguan 523808, China
- Naercio Magai
- Instituto Superior Técnico (IST), Universidade de Lisboa, Portugal
- Victor Hugo C de Albuquerque
- Graduate Program on Teleinformatics Engineering, Federal University of Ceará, Fortaleza/CE, Brazil
- Graduate Program on Electrical Engineering, Federal University of Ceará, Fortaleza/CE, Brazil
10. Saincher R, Kumar S, Gopalkrishna P, Maithri M, Sherigar P. Comparison of color accuracy and picture quality of digital SLR, point and shoot and mobile cameras used for dental intraoral photography - A pilot study. Heliyon 2022; 8:e09262. [PMID: 35464702] [PMCID: PMC9026587] [DOI: 10.1016/j.heliyon.2022.e09262]
Abstract
The present study aimed to compare the picture quality and color accuracy of three camera types, namely point-and-shoot, DSLR, and mobile cameras, and to determine the most suitable camera for dental photography (intra-orally and for casts). A computer program, NRM (No-Reference matrix BRISQUE), was used to evaluate the quality of the photos taken by the three cameras. Color accuracy was determined by computing the total color difference (ΔE) from the measured L∗a∗b∗ values. A Kruskal-Wallis analysis of the difference in the quality of cast photos showed a statistically significant difference (p < 0.05) between the cameras, and post hoc analysis showed that the NRM value of the point-and-shoot camera (18.93 ± 2.04) was better than that of the mobile phone (20.59 ± 2.65). However, no statistically significant difference was obtained when assessing the picture quality of the intraoral photographs using one-way ANOVA (Fisher's) (p = 0.05). Evaluation of the total color difference (ΔE) showed smaller differences between the DSLR and point-and-shoot cameras than with the mobile camera, and there was no statistically significant difference in ΔE in the participant photographs. The L∗ values of both the cast and participant photographs showed a similar result, with the mobile phone giving a lighter value than the other two cameras; the b∗ value in the participant photos showed a significant difference between the mobile and point-and-shoot cameras. The quality of the point-and-shoot, DSLR, and mobile cameras was equally good for pictures of any external surface, but the mobile camera produced brighter, more yellow images. The quality of intraoral images was similar for the mobile and point-and-shoot cameras, although color accuracy was better with the point-and-shoot and DSLR cameras.
Affiliation(s)
- Rishi Saincher
- Manipal College of Dental Sciences, Manipal, Manipal Academy of Higher Education, Manipal - 576104, Udupi, Karnataka, India
- Santhosh Kumar
- Department of Periodontology, Manipal College of Dental Sciences, Manipal, Manipal Academy of Higher Education, Manipal - 576104, Udupi, Karnataka, India
- Pratibha Gopalkrishna
- Department of Periodontology, Manipal College of Dental Sciences, Manipal, Manipal Academy of Higher Education, Manipal - 576104, Udupi, Karnataka, India
- M Maithri
- Department of Mechatronics, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal - 576104, Udupi, Karnataka, India
- Pradeep Sherigar
- Department of Prosthodontics, Manipal College of Dental Sciences, Manipal, Manipal Academy of Higher Education, Manipal - 576104, Udupi, Karnataka, India
11. Jung GS, Won YH. Simple method of acquiring high-quality light fields based on the chromatic aberration of only one defocused image pair. Opt Express 2021; 29:36417-36429. [PMID: 34809052] [DOI: 10.1364/oe.440835]
Abstract
The direct light-field acquisition method using a lens array requires a complex system and has a low resolution. On the other hand, light fields can also be acquired indirectly, by back-projection of focal stack images without a lens array, providing a resolution as high as the sensor resolution. However, this also requires a bulky optical system to fix the field of view (FOV) across the focal stack, plus an additional device for sensor shifting. Moreover, the reconstructed light field is texture-dependent and of low quality, because either a high-pass filter or a guided filter is used for back-projection. This paper presents a simple light-field acquisition method based on the chromatic aberration of only one defocused image pair. An image with chromatic aberration has a different defocus distribution for each of the R, G, and B channels. Thus, the focal stack can be synthesized, with a structural similarity (SSIM) of 0.96, from only one defocused image pair. This image pair is also used to estimate the depth map by depth-from-defocus (DFD) using chromatic aberration (chromatic DFD), and the resulting depth map is used for high-quality light-field reconstruction. Compared with existing indirect light-field acquisition, the proposed method requires only one pair of defocused images and can clearly reconstruct light-field images, with Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) scores lowered by 17%-38% and Perception-Based Image Quality Evaluator (PIQE) scores lowered by 19%-45%. The defocused image pair is acquired by our customized compact optical system consisting of only three lenses, including a varifocal lens. Image processing and image quality evaluation are all performed in MATLAB.
12. Huang B, Xiao H, Liu W, Zhang Y, Wu H, Wang W, Yang Y, Yang Y, Miller GW, Li T, Cai J. MRI super-resolution via realistic downsampling with adversarial learning. Phys Med Biol 2021; 66. [PMID: 34474407] [DOI: 10.1088/1361-6560/ac232e]
Abstract
Many deep learning (DL) frameworks have demonstrated state-of-the-art performance in the super-resolution (SR) task for magnetic resonance imaging, but most results have been achieved with simulated low-resolution (LR) images rather than LR images from real acquisitions. Due to the limited generalizability of the SR network, enhancement is not guaranteed for real LR images because the training LR images are unrealistic. In this study, we propose a DL-based SR framework with an emphasis on data construction to achieve better performance on real LR MR images. The framework comprises two steps: (a) downsampling training using a generative adversarial network (GAN) to construct more realistic and perfectly matched LR/high-resolution (HR) pairs. The downsampling GAN takes real LR and HR images as input; the generator translates the HR images to LR images, and the discriminator distinguishes the patch-level difference between synthetic and real LR images. (b) SR training using an enhanced deep super-resolution network (EDSR). In controlled experiments, three EDSRs were trained using our proposed method, Gaussian blur, and k-space zero-filling. For the data, liver MR images were obtained from 24 patients using breath-hold serial LR and HR scans (only HR images were used in the conventional methods). The k-space zero-filling group delivered almost zero enhancement on the real LR images, and the Gaussian group produced a considerable number of artifacts. The proposed method exhibited significantly better resolution enhancement and fewer artifacts than the other two networks, outperforming the Gaussian method by an improvement of 0.111 ± 0.016 in the structural similarity index and 2.76 ± 0.98 dB in the peak signal-to-noise ratio. The blind/referenceless image spatial quality evaluator metric of the conventional Gaussian method and the proposed method were 46.6 ± 4.2 and 34.1 ± 2.4, respectively.
Affiliation(s)
- Bangyan Huang
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, People's Republic of China
- Haonan Xiao
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, People's Republic of China
- Weiwei Liu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital and Institute, Peking University Cancer Hospital and Institute, Beijing, People's Republic of China
- Yibao Zhang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital and Institute, Peking University Cancer Hospital and Institute, Beijing, People's Republic of China
- Hao Wu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital and Institute, Peking University Cancer Hospital and Institute, Beijing, People's Republic of China
- Weihu Wang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education/Beijing), Department of Radiation Oncology, Beijing Cancer Hospital and Institute, Peking University Cancer Hospital and Institute, Beijing, People's Republic of China
- Yunhuan Yang
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, People's Republic of China
- Yidong Yang
- Department of Engineering and Applied Physics, University of Science and Technology of China, Hefei, People's Republic of China
- G Wilson Miller
- Department of Radiology and Medical Imaging, The University of Virginia, Charlottesville, VA, United States of America
- Tian Li
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, People's Republic of China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong SAR, People's Republic of China
13. Sharma S, Batta V, Chidambaranathan M, Mathialagan P, Mani G, Kiruthika M, Datta B, Kamineni S, Reddy G, Masilamani S, Vijayan S, Amanatullah DF. Knee Implant Identification by Fine-Tuning Deep Learning Models. Indian J Orthop 2021; 55:1295-1305. [PMID: 34824729] [PMCID: PMC8586384] [DOI: 10.1007/s43465-021-00529-9]
Abstract
BACKGROUND Identification of the implant model from a primary knee arthroplasty during pre-operative planning of revision surgery is a challenging task that adds delay; the inability to identify implants in time directly increases the complexity of surgery. Deep learning for diagnosis in the medical field has shown promising, steadily improving results. This study aims to find an optimal solution to the problem of identifying the make and model of knee arthroplasty prostheses using automated deep learning models. METHODS Deep learning algorithms were used to classify knee arthroplasty implant models. The training, validation, and test sets comprised 1078 radiographs covering a total of 6 knee arthroplasty implant models, with anterior-posterior (AP) and lateral views. Model performance was measured using accuracy, sensitivity, and the area under the receiver-operating characteristic curve (AUC), compared against multiple models trained for an in-depth comparative analysis, with saliency maps for visualization. RESULTS After training for a total of 30 epochs on all 6 models, the best-performing model obtained an accuracy of 96.38%, a sensitivity of 97.2%, and an AUC of 0.985 on an external testing dataset of 162 radiographs. The best-performing model correctly and uniquely identified the implants, which could be visualized using saliency maps. CONCLUSION Deep learning models can differentiate between 6 knee arthroplasty implant models. Saliency maps give a better understanding of which regions the model focuses on while predicting the results.
Affiliation(s)
- Sukkrit Sharma
- Department of Computer Science and Engineering, School of Computing, SRM Institute of Science and Technology, Potheri, Kattankulathur, Chengalpattu District, Tamil Nadu 603203, India
- Vineet Batta
- Department of Orthopaedics, Luton and Dunstable University College London Hospitals NHS Foundation Trust, Luton, UK
- Malathy Chidambaranathan
- Department of Computer Science and Engineering, School of Computing, SRM Institute of Science and Technology, Potheri, Kattankulathur, Chengalpattu District, Tamil Nadu 603203, India
- Prabhakaran Mathialagan
- Department of Computer Science and Engineering, School of Computing, SRM Institute of Science and Technology, Potheri, Kattankulathur, Chengalpattu District, Tamil Nadu 603203, India
- Gayathri Mani
- Department of Computer Science and Engineering, School of Computing, SRM Institute of Science and Technology, Potheri, Kattankulathur, Chengalpattu District, Tamil Nadu 603203, India
- M. Kiruthika
- Department of Orthopaedics, Luton and Dunstable University College London Hospitals NHS Foundation Trust, Luton, UK
- Barun Datta
- Army Research and Referral, New Delhi, India
- Sandeep Vijayan
- Kasturba Medical College, Manipal, Manipal Academy of Higher Education, Udupi, Karnataka, India
14
Buytaert D, Taeymans Y, De Wolf D, Bacher K. Evaluation of a no-reference image quality metric for projection X-ray imaging using a 3D printed patient-specific phantom. Phys Med 2021; 89:29-40. [PMID: 34343764 DOI: 10.1016/j.ejmp.2021.07.011] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 07/06/2021] [Accepted: 07/13/2021] [Indexed: 11/19/2022] Open
Abstract
PURPOSE The feasibility of a no-reference image quality metric was assessed on patient-like images using a patient-specific phantom simulating a frame of a coronary angiogram. METHODS One background and one contrast-filled frame of a coronary angiogram, acquired using a clinical imaging protocol, were selected from a Philips Integris Allura FD (Philips Healthcare, Best, The Netherlands). The background frame's pixels were extruded to a thickness proportional to their grey value. One phantom was 3D printed using a composite 80% bronze filament (max. thickness 5.1 mm); the other was a custom PMMA cast (max. thickness 8.5 cm). A vessel mold was created from the contrast-filled frame and injected with a solution of 320 mg I/ml contrast fluid (75%), water and gelatin. Still X-ray frames of the vessel mold + background phantom + 16 cm PMMA were acquired at different, manually selected exposure settings using a Philips Azurion (Philips Healthcare, Best, The Netherlands) in User Quality Control Mode and were exported as RAW images. The signal-difference-to-noise ratio squared (SDNR²) and a spatial-domain equivalent of the noise equivalent quanta (NEQ_SDE) were calculated, and the Spearman correlation of these parameters with a no-reference perceptual image quality metric (NIQE) was investigated. RESULTS The bronze phantom resembled the original patient frame more closely, with better contrast and less blur than the PMMA phantom. Both phantoms were imaged using an imaging protocol comparable to the one used to acquire the original frame. The bronze phantom was therefore used together with the vessel mold for image quality measurements on the 165 still phantom frames. A strong correlation was noted between NEQ_SDE and NIQE (SROCC = -0.99, p < 0.0005) and between SDNR² and NIQE (SROCC = -0.97, p < 0.0005).
CONCLUSION Using a cost-effective, easy-to-realize patient-specific phantom, we were able to generate patient-like X-ray frames. As a no-reference image quality model, NIQE has the potential to predict physical image quality from patient images.
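The headline numbers above are rank correlations between a physical metric and a perceptual one. As a minimal illustration of that analysis, with synthetic frames rather than the paper's phantom data and a simple monotone stand-in for the (negated) NIQE score, SDNR² can be computed from a vessel and a background region and rank-correlated with quality via scipy:

```python
import numpy as np
from scipy.stats import spearmanr

def sdnr_squared(frame, vessel_mask, bg_mask):
    """SDNR²: squared signal difference between regions over background variance."""
    diff = frame[bg_mask].mean() - frame[vessel_mask].mean()
    return diff ** 2 / frame[bg_mask].var()

# Toy frames at increasing simulated exposure (illustrative data only)
rng = np.random.default_rng(0)
vessel = np.zeros((64, 64), bool)
vessel[28:36, :] = True                        # a horizontal "vessel" band
sdnr, quality = [], []
for dose in [1, 2, 4, 8]:
    frame = rng.normal(100.0, 10.0 / np.sqrt(dose), (64, 64))
    frame[vessel] -= 30.0                      # contrast-filled vessel is darker
    sdnr.append(sdnr_squared(frame, vessel, ~vessel))
    quality.append(-1.0 / dose)                # stand-in perceptual quality score
rho, _ = spearmanr(sdnr, quality)              # rank correlation, as in the paper
```

Higher dose lowers noise variance, so SDNR² rises monotonically with the simulated quality score and the Spearman correlation is perfect on this toy data.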
Affiliation(s)
- Dimitri Buytaert
- Department of Human Structure and Repair, Ghent University, Ghent, Belgium
- Yves Taeymans
- Heart Center, Ghent University Hospital, Ghent, Belgium
- Daniël De Wolf
- Department of Paediatric Cardiology, Ghent University Hospital, Ghent, Belgium
- Klaus Bacher
- Department of Human Structure and Repair, Ghent University, Ghent, Belgium
15
Stępień I, Obuchowicz R, Piórkowski A, Oszust M. Fusion of Deep Convolutional Neural Networks for No-Reference Magnetic Resonance Image Quality Assessment. SENSORS 2021; 21:s21041043. [PMID: 33546412 PMCID: PMC7913522 DOI: 10.3390/s21041043] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/23/2020] [Revised: 01/28/2021] [Accepted: 01/29/2021] [Indexed: 12/02/2022]
Abstract
The quality of magnetic resonance images may influence the diagnosis and subsequent treatment. Therefore, in this paper, a novel no-reference (NR) magnetic resonance image quality assessment (MRIQA) method is proposed. In this approach, deep convolutional neural network architectures are fused and jointly trained to better capture the characteristics of MR images. To improve quality prediction performance, support vector regression (SVR) is then applied to the features generated by the fused networks. Several promising network architectures are introduced, investigated, and experimentally compared with state-of-the-art NR-IQA methods on two representative MRIQA benchmark datasets, one of which is introduced in this work. As the experimental validation reveals, the proposed fusion of networks outperforms related approaches in terms of correlation with the subjective opinions of a large number of experienced radiologists.
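The fusion-plus-SVR stage can be sketched with scikit-learn: concatenate the feature vectors produced by two networks and regress subjective scores with SVR. Everything below is synthetic stand-in data (random vectors in place of deep features, a synthetic score in place of radiologists' opinions); only the fuse-then-regress pattern mirrors the paper.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n = 200
feats_a = rng.normal(size=(n, 8))            # stand-in for features from network A
feats_b = rng.normal(size=(n, 8))            # stand-in for features from network B
mos = feats_a[:, 0] + 0.5 * feats_b[:, 0]    # synthetic "subjective" quality scores

fused = np.hstack([feats_a, feats_b])        # feature-level fusion of both networks
reg = SVR(kernel="rbf").fit(fused[:150], mos[:150])  # train on 150 images
pred = reg.predict(fused[150:])              # predict quality of held-out images
```

In the paper the fused networks are trained jointly before the SVR stage, and performance is reported as correlation between predicted and subjective scores.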
Affiliation(s)
- Igor Stępień
- Doctoral School of Engineering and Technical Sciences at the Rzeszow University of Technology, al. Powstancow Warszawy 12, 35-959 Rzeszow, Poland
- Rafał Obuchowicz
- Department of Diagnostic Imaging, Jagiellonian University Medical College, 19 Kopernika Street, 31-501 Cracow, Poland
- Adam Piórkowski
- Department of Biocybernetics and Biomedical Engineering, AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Cracow, Poland
- Mariusz Oszust
- Department of Computer and Control Engineering, Rzeszow University of Technology, W. Pola 2, 35-959 Rzeszow, Poland
16
Kaushik H, Singh D, Tiwari S, Kaur M, Jeong CW, Nam Y, Attique Khan M. Screening of COVID-19 Patients Using Deep Learning and IoT Framework. COMPUTERS, MATERIALS & CONTINUA 2021; 69:3459-3475. [DOI: 10.32604/cmc.2021.017337] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Accepted: 04/14/2021] [Indexed: 08/25/2024]
17
Ghose S, Datta S, Batta V, Malathy C, M G. Artificial Intelligence based identification of Total Knee Arthroplasty Implants. 2020 3RD INTERNATIONAL CONFERENCE ON INTELLIGENT SUSTAINABLE SYSTEMS (ICISS) 2020. [DOI: 10.1109/iciss49785.2020.9315956] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/27/2023]
18
Risnandar, Prakasa E, Erwin IM, Gojali EA, Herlan, Lestari P. Deep salient wood image-based quality assessment. SN APPLIED SCIENCES 2020. [DOI: 10.1007/s42452-020-2671-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
19
Obuchowicz R, Oszust M, Bielecka M, Bielecki A, Piórkowski A. Magnetic Resonance Image Quality Assessment by Using Non-Maximum Suppression and Entropy Analysis. ENTROPY 2020; 22:e22020220. [PMID: 33285994 PMCID: PMC7516651 DOI: 10.3390/e22020220] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/05/2020] [Revised: 02/04/2020] [Accepted: 02/13/2020] [Indexed: 12/17/2022]
Abstract
An investigation of diseases using magnetic resonance (MR) imaging requires automatic image quality assessment methods able to exclude low-quality scans. Such methods can also be employed for optimization of imaging system parameters or evaluation of image processing algorithms. Therefore, in this paper, a novel blind image quality assessment (BIQA) method for the evaluation of MR images is introduced. It is observed that the result of filtering using non-maximum suppression (NMS) strongly depends on the perceptual quality of the input image. Hence, in the method, the image is first processed by NMS with various levels of acceptable local intensity difference. The quality is then efficiently expressed by the entropy of the sequence of extrema counts obtained with the thresholded NMS. The proposed BIQA approach is compared with ten state-of-the-art techniques on a dataset containing MR images and subjective scores provided by 31 experienced radiologists. The Pearson, Spearman, and Kendall correlation coefficients and the root mean square error for the method on this dataset were 0.6741, 0.3540, 0.2428, and 0.5375, respectively. The extensive experimental evaluation reveals that the introduced measure outperforms related techniques by a large margin, correlating better with human scores.
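The core idea (count extrema surviving NMS at several intensity-difference levels, then take the entropy of the resulting count sequence) can be sketched as below. This is a deliberately simplified reading of the paper's pipeline, using a 4-neighbour comparison and arbitrary threshold levels:

```python
import numpy as np

def extrema_count(img, t):
    """Count pixels exceeding all 4-neighbours by at least t (thresholded NMS)."""
    c = img[1:-1, 1:-1]
    nb = np.stack([img[:-2, 1:-1], img[2:, 1:-1], img[1:-1, :-2], img[1:-1, 2:]])
    return int((c >= nb.max(axis=0) + t).sum())

def nms_entropy(img, thresholds=(1, 2, 4, 8, 16)):
    """Entropy of the extrema-count sequence across thresholds."""
    counts = np.array([extrema_count(img, t) for t in thresholds], float)
    p = counts / counts.sum()      # normalise the counts into a distribution
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
score = nms_entropy(rng.normal(0.0, 10.0, (64, 64)))
```

The entropy is bounded by log2 of the number of threshold levels, so scores from different images are directly comparable.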
Affiliation(s)
- Rafał Obuchowicz
- Department of Diagnostic Imaging, Jagiellonian University Medical College, 19 Kopernika Street, 31-501 Cracow, Poland
- Mariusz Oszust
- Department of Computer and Control Engineering, Rzeszow University of Technology, W. Pola 2, 35-959 Rzeszow, Poland
- Marzena Bielecka
- Faculty of Geology, Geophysics and Environmental Protection, AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Cracow, Poland
- Andrzej Bielecki
- Faculty of Electrical Engineering, Automation, Computer Science and Biomedical Engineering, AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Cracow, Poland
- Adam Piórkowski
- Department of Biocybernetics and Biomedical Engineering, AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Cracow, Poland
20
Oszust M, Piórkowski A, Obuchowicz R. No‐reference image quality assessment of magnetic resonance images with high‐boost filtering and local features. Magn Reson Med 2020; 84:1648-1660. [DOI: 10.1002/mrm.28201] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2019] [Revised: 01/14/2020] [Accepted: 01/16/2020] [Indexed: 12/31/2022]
Affiliation(s)
- Mariusz Oszust
- Department of Computer and Control Engineering, Rzeszów University of Technology, Rzeszów, Poland
- Adam Piórkowski
- Department of Biocybernetics and Biomedical Engineering, AGH University of Science and Technology, Kraków, Poland
- Rafał Obuchowicz
- Department of Diagnostic Imaging, Jagiellonian University Medical College, Kraków, Poland
21
Bielecka M, Bielecki A, Obuchowicz R, Piórkowski A. Universal Measure for Medical Image Quality Evaluation Based on Gradient Approach. LECTURE NOTES IN COMPUTER SCIENCE 2020. [PMCID: PMC7303719 DOI: 10.1007/978-3-030-50423-6_30] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
In this paper, a new universal measure of medical image quality is proposed. The measure is based on analysis of the image using gradient methods: the number of isolated peaks in the examined image, as a function of the threshold value, is the basis of the quality assessment. It turns out that for higher-quality images the curvature of the graph of this function is higher at lower threshold values. On the basis of this observed property, a new no-reference image quality assessment method has been created, and experimental verification confirmed its efficiency. The correlation between the quality ranking produced by an expert and that produced by the proposed method is 0.74, higher than the correlation achieved by the best methods described in the literature. The proposed measure is useful for maximizing image quality while minimizing the time of medical examination.
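The peak-count-versus-threshold curve and its discrete curvature can be made concrete as follows. This is a simplified reading: the paper's gradient-based peak detection is more elaborate, and the threshold grid here is arbitrary.

```python
import numpy as np

def peak_count(img, t):
    """Isolated peaks: pixels above threshold t that strictly dominate their 4-neighbours."""
    c = img[1:-1, 1:-1]
    nb = np.stack([img[:-2, 1:-1], img[2:, 1:-1], img[1:-1, :-2], img[1:-1, 2:]])
    return int(((c > t) & (c > nb.max(axis=0))).sum())

rng = np.random.default_rng(2)
img = rng.random((128, 128))                  # toy image with intensities in [0, 1)
thresholds = np.linspace(0.0, 1.0, 21)
counts = np.array([peak_count(img, t) for t in thresholds], float)
curvature = np.diff(counts, 2)                # discrete second difference of the curve
```

The count is non-increasing in the threshold, and the shape of this decay (its curvature at low thresholds) is what the paper turns into a quality score.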
22
Qiao M, Wang Y, Berendsen FF, van der Geest RJ, Tao Q. Fully automated segmentation of the left atrium, pulmonary veins, and left atrial appendage from magnetic resonance angiography by joint-atlas-optimization. Med Phys 2019; 46:2074-2084. [PMID: 30861147 PMCID: PMC6849806 DOI: 10.1002/mp.13475] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2018] [Revised: 01/17/2019] [Accepted: 01/18/2019] [Indexed: 11/09/2022] Open
Abstract
PURPOSE Atrial fibrillation (AF) originating from the left atrium (LA) and pulmonary veins (PVs) is the most prevalent cardiac electrophysiological disorder. Accurate segmentation and quantification of the LA chamber, PVs, and left atrial appendage (LAA) provide clinically important references for the treatment of AF patients. The purpose of this work is to achieve objective segmentation of the LA chamber, PVs, and LAA in an accurate and fully automated manner. METHODS We proposed a new approach, named joint-atlas-optimization, to segment the LA chamber, PVs, and LAA from magnetic resonance angiography (MRA) images. We formulated the segmentation as a single registration problem between the given image and all N atlas images, instead of N separate registrations between the given image and individual atlas images. A level-set method was applied to refine the atlas-based segmentation. Using the publicly available LA benchmark database, we compared the proposed joint-atlas-optimization approach to the conventional pairwise atlas approach and evaluated segmentation performance in terms of the Dice index and the surface-to-surface (S2S) distance to the manual ground truth. RESULTS The proposed joint-atlas-optimization method showed systematically improved accuracy and robustness over the pairwise atlas approach. The Dice of LA segmentation using joint-atlas-optimization was 0.93 ± 0.04, compared to 0.91 ± 0.04 for the pairwise approach (P < 0.05). The mean S2S distance was 1.52 ± 0.58 mm, compared to 1.83 ± 0.75 mm (P < 0.05). In particular, the method produced significantly improved segmentation accuracy for the LAA and PVs, the small distal parts of the LA geometry that are intrinsically difficult to segment using the conventional pairwise approach. The Dice of PV segmentation was 0.69 ± 0.16, compared to 0.49 ± 0.15 (P < 0.001). The Dice of LAA segmentation was 0.91 ± 0.03, compared to 0.88 ± 0.05 (P < 0.01).
CONCLUSION The proposed joint-atlas-optimization method can segment the complex LA geometry in a fully automated manner. Compared to the conventional pairwise atlas approach, it improves performance on the small distal parts of the LA, such as the PVs and LAA, whose geometrical and quantitative assessment is of clinical interest.
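Of the two evaluation metrics reported above, the Dice index is easy to make concrete on toy masks (the surface-to-surface distance is omitted for brevity; the masks below are illustrative, not LA segmentations):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * float((a & b).sum()) / denom if denom else 1.0

auto = np.zeros((32, 32), bool)
auto[8:24, 8:24] = True           # automated mask: 256 voxels
manual = np.zeros((32, 32), bool)
manual[10:24, 8:24] = True        # manual ground truth: 224 voxels, all inside auto
score = dice(auto, manual)        # 2 * 224 / (256 + 224)
```

A Dice of 1 means perfect overlap and 0 means none, which is why the PV improvement from 0.49 to 0.69 is substantial.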
Affiliation(s)
- Menyun Qiao
- Biomedical Engineering Center, Fudan University, Shanghai, 200433, China
- Yuanyuan Wang
- Biomedical Engineering Center, Fudan University, Shanghai, 200433, China
- Floris F Berendsen
- Department of Radiology, Leiden University Medical Center, Leiden, 2300 RC, The Netherlands
- Rob J van der Geest
- Department of Radiology, Leiden University Medical Center, Leiden, 2300 RC, The Netherlands
- Qian Tao
- Department of Radiology, Leiden University Medical Center, Leiden, 2300 RC, The Netherlands
23
Local Indicators of Spatial Autocorrelation (LISA): Application to Blind Noise-Based Perceptual Quality Metric Index for Magnetic Resonance Images. J Imaging 2019; 5:jimaging5010020. [PMID: 34465703 PMCID: PMC8320873 DOI: 10.3390/jimaging5010020] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2018] [Revised: 12/16/2018] [Accepted: 01/02/2019] [Indexed: 11/16/2022] Open
Abstract
Noise-based quality evaluation of MRI images is highly desired in noise-dominant environments. Current noise-based MRI quality evaluation methods have drawbacks that limit their performance. Traditional full-reference methods such as SNR, and most model-based techniques, cannot provide the perceptual quality metrics required for accurate diagnosis, treatment and monitoring of diseases. Although techniques based on Moran coefficients are perceptual quality metrics, they are full-reference methods and are ineffective in applications where the reference image is not available. Furthermore, their predicted quality scores are difficult to interpret because the quality indices are not standardized. In this paper, we propose a new no-reference perceptual quality evaluation method for grayscale images such as MRI images. Our approach is formulated to mimic how humans perceive an image: it transforms the noise level into a standardized perceptual quality score. Global Moran statistics are combined with local indicators of spatial autocorrelation in the form of local Moran statistics, and the quality score is predicted from a perceptually weighted combination of clustered and random pixels. Performance evaluation, comparative evaluation and validation by human observers show that the proposed method will be a useful tool for evaluating retrospectively acquired MRI images and noise reduction algorithms.
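Global Moran's I, the statistic underlying both the full-reference Moran approaches criticized here and the proposed local-indicator extension, can be sketched on an image grid with rook (4-neighbour) contiguity weights. This is a generic textbook formulation, not the paper's exact implementation:

```python
import numpy as np

def morans_i(img):
    """Global Moran's I over a 2-D grid with rook contiguity weights."""
    z = img.astype(float) - img.mean()
    # sum of z_i * z_j over unordered neighbour pairs, doubled for ordered pairs
    num = 2.0 * ((z[:, :-1] * z[:, 1:]).sum() + (z[:-1, :] * z[1:, :]).sum())
    rows, cols = img.shape
    w_sum = 2.0 * (rows * (cols - 1) + (rows - 1) * cols)  # total weight
    return (img.size / w_sum) * (num / (z ** 2).sum())

grad = np.add.outer(np.arange(16.0), np.arange(16.0))  # smooth, spatially clustered
rng = np.random.default_rng(3)
noise = rng.normal(size=(64, 64))                      # spatially random
```

Spatially clustered images score near +1 while white noise scores near 0, which is what lets noise level be read off the statistic and mapped to a perceptual score.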
24
Yu S, Dai G, Wang Z, Li L, Wei X, Xie Y. A consistency evaluation of signal-to-noise ratio in the quality assessment of human brain magnetic resonance images. BMC Med Imaging 2018; 18:17. [PMID: 29769079 PMCID: PMC5956758 DOI: 10.1186/s12880-018-0256-6] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/26/2017] [Accepted: 04/30/2018] [Indexed: 01/08/2023] Open
Abstract
Background Quality assessment of medical images is closely related to quality assurance, image interpretation and decision making. For magnetic resonance (MR) images, the signal-to-noise ratio (SNR) is routinely used as a quality indicator, but little is known about its consistency across different observers. Methods In total, 192, 88, 76 and 55 brain images are acquired using T2*, T1, T2 and contrast-enhanced T1 (T1C) weighted MR imaging sequences, respectively. For each imaging protocol, the consistency of SNR measurement is verified between and within two observers, with white matter (WM) and cerebral spinal fluid (CSF) alternately used as the tissue region of interest (TOI) for SNR measurement. The procedure is repeated on another day within 30 days. First, overlapped voxels in the TOIs are quantified with the Dice index. Then, test-retest reliability is assessed in terms of the intra-class correlation coefficient (ICC). After that, four models (BIQI, BLIINDS-II, BRISQUE and NIQE), primarily used for the quality assessment of natural images, are borrowed to predict the quality of the MR images, and the correlation between SNR values and the predicted results is analyzed. Results For the same TOI in each MR imaging sequence, less than 6% of voxels overlap between manual delineations. In the quality estimation of MR images, statistical analysis indicates no significant difference between observers (Wilcoxon rank sum test, pw ≥ 0.11; paired-sample t test, pp ≥ 0.26), and good to very good intra- and inter-observer reliability is found (ICC, picc ≥ 0.74). Furthermore, the Pearson correlation coefficient (rp) suggests that SNRwm correlates strongly with BIQI, BLIINDS-II and BRISQUE in T2* (rp ≥ 0.78), BRISQUE and NIQE in T1 (rp ≥ 0.77), BLIINDS-II in T2 (rp ≥ 0.68) and BRISQUE and NIQE in T1C (rp ≥ 0.62) weighted MR images, while SNRcsf correlates strongly with BLIINDS-II in T2* (rp ≥ 0.63) and T2 (rp ≥ 0.64) weighted MR images.
Conclusions The consistency of SNR measurement is validated across observers and MR imaging protocols. When SNR measurement serves as the quality indicator for MR images, BRISQUE and BLIINDS-II can be conditionally used for automated quality estimation of human brain MR images.
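The SNR measurement at issue is conventionally the mean signal in a tissue region divided by the noise standard deviation in a background region; the paper's exact operational definition may differ, so treat this as a generic sketch with a simulated WM TOI:

```python
import numpy as np

rng = np.random.default_rng(4)
img = rng.normal(0.0, 5.0, (64, 64))        # Gaussian noise, sigma = 5
img[20:40, 20:40] += 100.0                  # simulated white-matter signal = 100

wm = np.zeros((64, 64), bool)
wm[20:40, 20:40] = True                     # manually delineated tissue TOI
bg = np.zeros((64, 64), bool)
bg[:10, :10] = True                         # background (air) region

snr = img[wm].mean() / img[bg].std()        # mean tissue signal over noise std
```

Because the TOIs are delineated by hand (with under 6% voxel overlap between observers here), the point of the study is how stable this number is across observers.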
Affiliation(s)
- Shaode Yu
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
- Guangzhe Dai
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, China
- Zhaoyang Wang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang, China
- Leida Li
- School of Information and Control Engineering, China University of Mining and Technology, Xuzhou, China
- Xinhua Wei
- Department of Radiology, Guangzhou First People's Hospital, Guangzhou Medical University, Guangzhou, China; The Second Affiliated Hospital, South China University of Technology, Guangzhou, China
- Yaoqin Xie
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China