1. Shao M, Byrd DW, Mitra J, Behnia F, Lee JH, Iravani A, Sadic M, Chen DL, Wollenweber SD, Abbey CK, Kinahan PE, Ahn S. A deep learning anthropomorphic model observer for a detection task in PET. Med Phys 2024. PMID: 39008812. DOI: 10.1002/mp.17303.
Abstract
BACKGROUND: Lesion detection is one of the most important clinical tasks in positron emission tomography (PET) for oncology. An anthropomorphic model observer (MO) designed to replicate human observers (HOs) in a detection task is an important tool for assessing task-based image quality. The channelized Hotelling observer (CHO) has been the most popular anthropomorphic MO. Recently, deep learning MOs (DLMOs), mostly based on convolutional neural networks (CNNs), have been investigated for various imaging modalities, but there have been few studies of DLMOs for PET.
PURPOSE: To investigate whether DLMOs can predict HOs better than conventional MOs such as the CHO in a two-alternative forced-choice (2AFC) detection task using PET images with real anatomical variability.
METHODS: Two types of DLMOs were implemented: (1) a CNN DLMO, and (2) a CNN-SwinT DLMO that combines CNN and Swin Transformer (SwinT) encoders. Lesion-absent PET images were reconstructed from clinical data, and lesion-present images were reconstructed after adding simulated lesion sinogram data. Lesion-present and lesion-absent PET image pairs were labeled by eight HOs, four radiologists and four image scientists, in a 2AFC detection task. In total, 2268 image pairs were used for training, 324 for validation, and 324 for testing. The CNN DLMO, CNN-SwinT DLMO, CHO with internal noise, and a non-prewhitening matched filter (NPWMF) were compared in the same train-test paradigm. Six quantitative metrics measuring how well an MO predicts HOs, including prediction accuracy, mean squared error (MSE), and correlation coefficients, were calculated in a 9-fold cross-validation experiment.
RESULTS: In terms of the accuracy and MSE metrics, the CNN DLMO and CNN-SwinT DLMO both outperformed the CHO and NPWMF, and the CNN-SwinT DLMO performed best among the MOs evaluated.
CONCLUSIONS: DLMOs can predict HOs more accurately than conventional MOs such as the CHO in PET lesion detection. Combining SwinT and CNN encoders improves DLMO prediction performance compared to using a CNN encoder alone.
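As a concrete reference point, the CHO baseline that the DLMOs are compared against can be sketched in a few lines of NumPy. This toy version is not the paper's implementation: it uses synthetic white-noise backgrounds, a Gaussian signal, and simple radial channels (all assumptions), and scores each 2AFC trial by picking the image with the larger Hotelling statistic:

```python
import numpy as np

rng = np.random.default_rng(0)

def radial_channels(n, n_ch=4):
    """Toy radial spatial channels (a stand-in for the Gabor or
    Laguerre-Gauss channels typically paired with a CHO)."""
    y, x = np.mgrid[:n, :n] - (n - 1) / 2
    r = np.hypot(x, y)
    edges = np.linspace(0, n / 2, n_ch + 1)
    U = np.stack([((r >= lo) & (r < hi)).astype(float).ravel()
                  for lo, hi in zip(edges[:-1], edges[1:])], axis=1)
    return U / np.linalg.norm(U, axis=0)        # columns: one channel each

n, n_img = 16, 400
U = radial_channels(n)

yy, xx = np.mgrid[:n, :n] - (n - 1) / 2
signal = 0.8 * np.exp(-(xx**2 + yy**2) / 8).ravel()   # faint Gaussian lesion

# Independent noise backgrounds for the two alternatives of each trial.
absent  = rng.normal(size=(n_img, n * n))
present = rng.normal(size=(n_img, n * n)) + signal

va, vp = absent @ U, present @ U                 # channel outputs
S = 0.5 * (np.cov(va.T) + np.cov(vp.T))          # pooled channel covariance
w = np.linalg.solve(S, vp.mean(0) - va.mean(0))  # Hotelling template

# 2AFC proportion correct: the observer picks the higher test statistic.
pc = float(np.mean(vp @ w > va @ w))
```

The study's setup differs (real anatomical backgrounds, internal noise added to the CHO), but the template `w = S^-1 (mean_present - mean_absent)` in channel space and the 2AFC scoring rule are the standard ingredients.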
Affiliation(s)
- Muhan Shao, GE HealthCare Technology and Innovation Center, Niskayuna, New York, USA
- Darrin W Byrd, Department of Radiology, University of Washington, Seattle, Washington, USA
- Jhimli Mitra, GE HealthCare Technology and Innovation Center, Niskayuna, New York, USA
- Fatemeh Behnia, Department of Radiology, University of Washington, Seattle, Washington, USA
- Jean H Lee, Department of Radiology, University of Washington, Seattle, Washington, USA
- Amir Iravani, Department of Radiology, University of Washington, Seattle, Washington, USA
- Murat Sadic, Department of Radiology, University of Washington, Seattle, Washington, USA
- Delphine L Chen, Department of Radiology, University of Washington, Seattle, Washington, USA
- Craig K Abbey, Department of Psychological and Brain Sciences, University of California, Santa Barbara, California, USA
- Paul E Kinahan, Department of Radiology, University of Washington, Seattle, Washington, USA
- Sangtae Ahn, GE HealthCare Technology and Innovation Center, Niskayuna, New York, USA
2. Stocker D, Sommer C, Gueng S, Stäuble J, Özden I, Griessinger J, Weyland MS, Lutters G, Scheidegger S. Probabilistic U-Net model observer for the DDC method in CT scan protocol optimization. Phys Med Biol 2024; 69:115026. PMID: 38657639. DOI: 10.1088/1361-6560/ad4302.
Abstract
Optimizing complex imaging procedures in computed tomography, considering both dose and image quality, presents significant challenges amidst rapid technological advances and the adoption of machine learning (ML) methods. A crucial metric in this context is the difference-detail curve (DDC), which relies on human observer studies; these studies are labor-intensive and prone to both inter- and intra-observer variability. To address these issues, an ML-based model observer utilizing the U-Net architecture and a Bayesian methodology is proposed. To train a model observer unaffected by the spatial arrangement of low-contrast objects, the image preprocessing incorporates a Gaussian-process-based noise model. Additionally, gradient-weighted class activation mapping is utilized to gain insight into the model observer's decision-making process. By training on data from a diverse group of observers, well-calibrated probabilistic predictions that quantify observer variability are achieved. Leveraging the principles of Beta regression, the Bayesian methodology is used to derive a model observer performance metric, effectively gauging the model observer's strength in terms of an 'effective number of observers'. Ultimately, this framework enables prediction of the DDC distribution by applying thresholds to the inferred probabilities. (Part of this work was presented at the 56th SSRMP Annual Meeting, November 30 to December 1, 2023, Lucerne, Switzerland.)
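The 'effective number of observers' idea can be illustrated with a moment-matching toy calculation. This is our simplification, not the authors' Bayesian Beta-regression estimator: for k binary raters, the mean vote at probability p has variance p(1-p)/k, so matching a Beta distribution's precision to the observed spread of calibrated predictions yields an effective k:

```python
def effective_observers(p_mean, p_var):
    """Moment-match a Beta(alpha, beta) distribution to calibrated
    detection probabilities.

    A Beta with mean p and precision nu = alpha + beta has variance
    p * (1 - p) / (nu + 1), so nu plays the role of an 'effective
    number of observers' (illustrative simplification only).
    """
    return p_mean * (1 - p_mean) / p_var - 1

# Example: mean predicted detection probability 0.7, variance 0.01.
n_eff = effective_observers(0.7, 0.01)
```

A sharply concentrated predictive distribution (small variance) behaves like the consensus of many raters; a diffuse one like the vote of few.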
Affiliation(s)
- David Stocker, ZHAW School of Engineering, 8401 Winterthur, Switzerland
- Sarah Gueng, ZHAW School of Engineering, 8401 Winterthur, Switzerland
- Jason Stäuble, ZHAW School of Engineering, 8401 Winterthur, Switzerland
- Ismail Özden, Fachstelle Strahlenschutz und Medizinphysik, Kantonsspital Aarau, 5000 Aarau, Switzerland
- Jennifer Griessinger, Fachstelle Strahlenschutz und Medizinphysik, Kantonsspital Aarau, 5000 Aarau, Switzerland
- Gerd Lutters, Fachstelle Strahlenschutz und Medizinphysik, Kantonsspital Aarau, 5000 Aarau, Switzerland
- Stephan Scheidegger, ZHAW School of Engineering, 8401 Winterthur, Switzerland; Fachstelle Strahlenschutz und Medizinphysik, Kantonsspital Aarau, 5000 Aarau, Switzerland
3. Zhou Z, Gong H, Hsieh S, McCollough CH, Yu L. Image quality evaluation in deep-learning-based CT noise reduction using virtual imaging trial methods: Contrast-dependent spatial resolution. Med Phys 2024. PMID: 38555876. DOI: 10.1002/mp.17029.
Abstract
BACKGROUND: Deep-learning-based image reconstruction and noise reduction (DLIR) methods have been increasingly deployed in clinical CT. Accurate image quality assessment of these methods is challenging, since performance measured with physical phantoms may not represent the true performance of DLIR in patients, as DLIR is trained mostly on patient images.
PURPOSE: To develop a patient-data-based virtual imaging trial framework and, as a first application, use it to measure the spatial resolution properties of a DLIR method.
METHODS: The framework consists of five steps: (1) insertion of lesions into projection-domain data using the acquisition geometry of the patient exam to simulate different lesion characteristics; (2) insertion of noise into projection-domain data using a realistic photon statistical model of the CT system to simulate different dose levels; (3) creation of DLIR-processed images from projection or image data; (4) creation of ensembles of DLIR-processed patient images from a large number of noise and lesion realizations; and (5) evaluation of image quality using the ensemble DLIR images. This framework was applied to measure the spatial resolution of a ResNet-based deep convolutional neural network (DCNN) trained on patient images. Cylindrical lesions at different contrast levels (-500, -100, -50, -20, -10 HU) were inserted into the lower right lobe of the liver in a patient case, and multiple dose levels were simulated (50%, 25%, 12.5%). Each lesion and dose condition had 600 noise realizations. Multiple reconstruction and denoising methods were applied to all noise realizations: filtered backprojection (FBP), iterative reconstruction (IR), and the DCNN method at three strength settings (DCNN-weak, DCNN-medium, and DCNN-strong).
The mean lesion signal was calculated by ensemble-averaging all noise realizations for each lesion and dose condition and then subtracting the lesion-absent images from the lesion-present images. Modulation transfer functions (MTFs), both in-plane and along the z-axis, were calculated from the mean lesion signals. The standard deviation of the MTF at each condition was estimated by bootstrapping: randomly sampling, with replacement, the DLIR/FBP/IR images from the ensemble data (600 samples) at each condition. The impact of varying lesion contrast, dose level, and denoising strength was evaluated, and paired t-tests were used to compare the z-axis and in-plane spatial resolution of the five algorithms across the five contrasts and three dose levels.
RESULTS: The in-plane and z-axis spatial resolution degradation of the DCNN became more severe as contrast or radiation dose decreased, or as DCNN denoising strength increased. Compared with FBP, reductions of 59.5% (in-plane) and 4.1% (z-axis) in MTF, in terms of the spatial frequency at 50% MTF, were observed at low contrast (-10 HU) for the DCNN at the highest denoising strength and 25% of routine dose. When the dose was reduced from 50% to 12.5% of routine, the in-plane and z-axis MTFs fell from 92.1% to 76.3% and from 98.9% to 95.5%, respectively, at a contrast of -100 HU, using FBP as the reference. For most contrast and dose conditions, significant differences were found among the five algorithms, with the following ordering of both in-plane and cross-plane spatial resolution: FBP > DCNN-weak > IR > DCNN-medium > DCNN-strong. The spatial resolution differences among algorithms decreased at higher contrast or dose levels.
CONCLUSIONS: A patient-data-based virtual imaging trial framework was developed and applied to measure the spatial resolution properties of a DCNN noise reduction method at different contrast and dose levels using real patient data. As with other nonlinear image reconstruction and post-processing techniques, the evaluated DCNN method degraded in-plane and z-axis spatial resolution at lower contrast levels, lower radiation dose, and higher denoising strength.
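The ensemble-averaging and bootstrap steps described in the methods can be sketched in 1-D. Everything here is synthetic (a Gaussian profile stands in for the reconstructed lesion, white noise for CT noise); the point is the structure: mean lesion signal from paired ensembles, MTF from its spectrum, and a resampling loop for the uncertainty:

```python
import numpy as np

rng = np.random.default_rng(1)

def mtf50(psf_1d):
    """Spatial frequency (cycles/sample) where the MTF first falls to 50%."""
    mtf = np.abs(np.fft.rfft(psf_1d))
    mtf /= mtf[0]
    f = np.fft.rfftfreq(psf_1d.size)
    i = np.nonzero(mtf < 0.5)[0][0]
    # Linear interpolation between the two samples bracketing 50%.
    return np.interp(0.5, [mtf[i], mtf[i - 1]], [f[i], f[i - 1]])

# Toy ensemble: 600 noisy realizations of a 1-D lesion profile.
n, n_real = 64, 600
x = np.arange(n) - n // 2
lesion = np.exp(-x**2 / 2.0)                    # blurred lesion profile
present = lesion + rng.normal(0, 0.5, (n_real, n))
absent  = rng.normal(0, 0.5, (n_real, n))

# Ensemble-averaged lesion signal: mean(present) - mean(absent).
mean_signal = present.mean(0) - absent.mean(0)
f50 = mtf50(mean_signal)

# Bootstrap the MTF50 uncertainty by resampling realizations with replacement.
boots = []
for _ in range(200):
    idx = rng.integers(0, n_real, n_real)
    boots.append(mtf50(present[idx].mean(0) - absent[idx].mean(0)))
f50_sd = float(np.std(boots))
```

In the paper the same logic runs on 2-D/3-D image ensembles per algorithm, contrast, and dose condition; the 1-D version just makes the pipeline visible.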
Affiliation(s)
- Zhongxing Zhou, Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Hao Gong, Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Scott Hsieh, Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Lifeng Yu, Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
4. Zhou Z, Inoue A, McCollough CH, Yu L. Self-trained deep convolutional neural network for noise reduction in CT. J Med Imaging (Bellingham) 2023; 10:044008. PMID: 37636895. PMCID: PMC10449263. DOI: 10.1117/1.jmi.10.4.044008.
Abstract
Purpose: Supervised deep convolutional neural network (CNN)-based methods have been actively used in clinical CT to reduce image noise. Their networks are typically trained on paired high- and low-quality data from a massive number of patient and/or phantom images. This training process is tedious, and a network trained under one condition may not generalize to patient images acquired and reconstructed under different conditions. We propose a self-trained deep CNN (ST_CNN) method for noise reduction in CT that does not rely on pre-existing training datasets.
Approach: ST_CNN training was accomplished through extensive data augmentation in the projection domain, with inference applied to the same data. Specifically, multiple independent noise insertions were applied to the original patient projection data to generate multiple realizations of low-quality projection data. Rotation augmentation was then applied to both the original and low-quality projection data, with the rotation angle applied directly to the projection data so that images could be rotated at arbitrary angles without introducing additional bias. A large number of paired low- and high-quality images from the same patient were then reconstructed to train the ST_CNN model.
Results: No significant difference was found between the ST_CNN and conventional CNN models in peak signal-to-noise ratio or structural similarity index measure. The ST_CNN model outperformed the conventional CNN model in noise texture and homogeneity in liver parenchyma, and gave better subjective visualization of liver lesions. The ST_CNN may slightly sacrifice vessel sharpness compared with the conventional CNN model, but without affecting the visibility of peripheral vessels or the diagnosis of vascular pathology.
Conclusions: The proposed ST_CNN method, trained on the data itself, can achieve image quality comparable to that of conventional deep CNN denoising methods pre-trained on external datasets.
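The projection-domain noise insertion at the heart of ST_CNN training can be sketched generically. This is a textbook low-dose simulation under a simple monoenergetic Poisson model with an assumed incident flux I0, not GE's or the authors' exact noise model:

```python
import numpy as np

rng = np.random.default_rng(2)

def insert_noise(proj, dose_fraction, I0=1e5):
    """Simulate a lower-dose sinogram from line integrals `proj`.

    Under a monoenergetic Poisson model, transmitted counts are
    Poisson(I0 * dose_fraction * exp(-proj)); converting the noisy counts
    back to line integrals yields a realization with the extra variance
    the reduced flux implies.
    """
    counts = I0 * dose_fraction * np.exp(-proj)
    noisy_counts = rng.poisson(counts).astype(float)
    noisy_counts = np.maximum(noisy_counts, 1.0)   # guard against log(0)
    return -np.log(noisy_counts / (I0 * dose_fraction))

proj = np.full((360, 512), 2.0)   # toy sinogram of line integrals
# Three independent low-quality realizations of the same acquisition,
# each pairable with the original for training.
low_dose = [insert_noise(proj, 0.25) for _ in range(3)]
```

Each realization keeps the original (higher-quality) data as its target, which is exactly what makes the scheme self-trained: no external paired dataset is needed.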
Affiliation(s)
- Zhongxing Zhou, Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Akitoshi Inoue, Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
- Lifeng Yu, Department of Radiology, Mayo Clinic, Rochester, Minnesota, USA
5. Viriyasaranon T, Chun JW, Koh YH, Cho JH, Jung MK, Kim SH, Kim HJ, Lee WJ, Choi JH, Woo SM. Annotation-Efficient Deep Learning Model for Pancreatic Cancer Diagnosis and Classification Using CT Images: A Retrospective Diagnostic Study. Cancers (Basel) 2023; 15:3392. PMID: 37444502. DOI: 10.3390/cancers15133392.
Abstract
The aim of this study was to develop a novel deep learning (DL) model for detecting pancreatic cancer (PC) in computed tomography (CT) images without requiring large annotated training datasets. This retrospective diagnostic study used CT images collected between 2004 and 2019 from 4287 patients diagnosed with PC. We proposed a self-supervised learning algorithm, pseudo-lesion segmentation (PS), for PC classification; models were trained with and without PS and validated on randomly divided training and validation sets. We further performed cross-racial external validation using open-access CT images from 361 patients. On internal validation, accuracy and sensitivity for PC classification were 94.3% (92.8-95.4%) and 92.5% (90.0-94.4%) for the convolutional neural network (CNN)-based DL model and 95.7% (94.5-96.7%) and 99.3% (98.4-99.7%) for the transformer-based DL model (both with PS). Applying PS to a small training dataset (a randomly sampled 10%) increased accuracy by 20.5% and sensitivity by 37.0%. On external validation, accuracy and sensitivity were 82.5% (78.3-86.1%) and 81.7% (77.3-85.4%) for the CNN-based model and 87.8% (84.0-90.8%) and 86.5% (82.3-89.8%) for the transformer-based model (both with PS). PS self-supervised learning can increase the performance, reliability, and robustness of DL-based PC classification on unseen, and even small, datasets. The proposed DL model is potentially useful for PC diagnosis.
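The abstract does not spell out the pseudo-lesion generator, but the core idea of PS, synthetic blobs whose segmentation masks serve as free training labels, can be sketched as follows (the ellipse shape, contrast level, and blending rule are all our assumptions, not the authors' recipe):

```python
import numpy as np

rng = np.random.default_rng(3)

def make_pseudo_lesion_pair(image, radius_range=(4, 12), contrast=0.1):
    """Return (augmented image, mask): a random ellipse blended into `image`.

    The mask of the synthetic blob is a segmentation label obtained for
    free, so a network can be pre-trained to find lesion-like structures
    without any human annotation.
    """
    h, w = image.shape
    cy, cx = rng.integers(0, h), rng.integers(0, w)
    ry, rx = rng.uniform(*radius_range, size=2)
    y, x = np.ogrid[:h, :w]
    mask = ((y - cy) / ry) ** 2 + ((x - cx) / rx) ** 2 <= 1.0
    out = image.copy()
    out[mask] += contrast * image.std()   # low-contrast additive blob
    return out, mask.astype(np.float32)

img = rng.normal(size=(128, 128))         # toy CT slice
aug, mask = make_pseudo_lesion_pair(img)
```

A model pre-trained on such (image, mask) pairs can then be fine-tuned for the downstream classification task with far fewer annotated cases.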
Affiliation(s)
- Thanaporn Viriyasaranon, Graduate Program in System Health Science and Engineering, Division of Mechanical and Biomedical Engineering, Ewha Womans University, Seoul 03760, Republic of Korea
- Jung Won Chun, Center for Liver and Pancreatobiliary Cancer, National Cancer Center, Goyang 10408, Republic of Korea
- Young Hwan Koh, Center for Liver and Pancreatobiliary Cancer, National Cancer Center, Goyang 10408, Republic of Korea
- Jae Hee Cho, Department of Internal Medicine, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul 03722, Republic of Korea
- Min Kyu Jung, Department of Internal Medicine, Kyungpook National University Hospital, Daegu 41944, Republic of Korea
- Seong-Hun Kim, Department of Internal Medicine, Research Institute of Clinical Medicine of Jeonbuk National University-Biomedical Research Institute of Jeonbuk National University Hospital, Jeonju 54907, Republic of Korea
- Hyo Jung Kim, Department of Gastroenterology, Korea University Guro Hospital, Seoul 10408, Republic of Korea
- Woo Jin Lee, Center for Liver and Pancreatobiliary Cancer, National Cancer Center, Goyang 10408, Republic of Korea
- Jang-Hwan Choi, Graduate Program in System Health Science and Engineering, Division of Mechanical and Biomedical Engineering, Ewha Womans University, Seoul 03760, Republic of Korea
- Sang Myung Woo, Center for Liver and Pancreatobiliary Cancer, National Cancer Center, Goyang 10408, Republic of Korea
6. Valeri F, Bartolucci M, Cantoni E, Carpi R, Cisbani E, Cupparo I, Doria S, Gori C, Grigioni M, Lasagni L, Marconi A, Mazzoni LN, Miele V, Pradella S, Risaliti G, Sanguineti V, Sona D, Vannucchi L, Taddeucci A. UNet and MobileNet CNN-based model observers for CT protocol optimization: comparative performance evaluation by means of phantom CT images. J Med Imaging (Bellingham) 2023; 10:S11904. PMID: 36895439. PMCID: PMC9989681. DOI: 10.1117/1.jmi.10.s1.s11904.
Abstract
Purpose: The aim of this work is the development and characterization of a model observer (MO) based on convolutional neural networks (CNNs), trained to mimic human observers in detecting and localizing low-contrast objects in CT scans acquired on a reference phantom. The final goal is automatic image quality evaluation and CT protocol optimization to fulfill the ALARA principle.
Approach: Preliminary work collected localization confidence ratings of human observers for signal presence/absence on a dataset of 30,000 CT images acquired on a polymethyl methacrylate (PMMA) phantom containing inserts filled with iodinated contrast media at different concentrations. The collected data were used to generate labels for training the artificial neural networks. We developed and compared two CNN architectures, based on U-Net and MobileNetV2 respectively, adapted to the dual tasks of classification and localization. The CNNs were evaluated by computing the area under the localization ROC curve (LAUC) and accuracy metrics on the test dataset.
Results: The mean absolute percentage error between the LAUC of the human observer and that of the MO was below 5% for the most significant test data subsets. High inter-rater agreement was achieved in terms of the S-statistic and other common statistical indices.
Conclusions: Very good agreement was measured between the human observer and the MO, as well as between the performance of the two algorithms. This work therefore strongly supports the feasibility of employing a CNN MO, combined with a specifically designed phantom, in CT protocol optimization programs.
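The LAUC figure of merit can be computed nonparametrically with the usual LROC rank estimator: a signal-present case counts as a hit only if it was also correctly localized, and mislocalized cases score as misses at every threshold. A minimal sketch (the localization-correctness rule and the rating scale are study-specific assumptions):

```python
import numpy as np

def lauc(conf_absent, conf_present, localized):
    """Area under the localization ROC (LROC) curve via the rank statistic.

    `conf_*` are confidence ratings; `localized` flags whether each
    signal-present case was correctly localized. Mislocalized present
    cases stay in the denominator but never count as wins, which is the
    standard LROC convention.
    """
    conf_present = np.asarray(conf_present, float)
    conf_absent = np.asarray(conf_absent, float)
    hits = conf_present[np.asarray(localized, bool)]
    wins = (hits[:, None] > conf_absent[None, :]).sum()
    ties = (hits[:, None] == conf_absent[None, :]).sum()
    return (wins + 0.5 * ties) / (len(conf_present) * len(conf_absent))

# Three absent and three present cases; the third present case was
# detected but mislocalized, so it can never contribute a win.
a = lauc(conf_absent=[0.1, 0.2, 0.3],
         conf_present=[0.6, 0.7, 0.4],
         localized=[True, True, False])
```

Comparing this quantity between the human raters and the CNN MO, subset by subset, is what the mean absolute percentage error in the results refers to.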
Affiliation(s)
- Federico Valeri, Università degli Studi di Firenze, Dipartimento di Fisica e Astronomia, Florence, Italy; Università degli Studi di Firenze, Scuola di Scienze della Salute Umana, Florence, Italy
- Maurizio Bartolucci, Ospedale S. Stefano, Azienda USL Toscana Centro, SOC Radiodiagnostica, Prato, Italy
- Elena Cantoni, Università degli Studi di Firenze, Dipartimento di Fisica e Astronomia, Florence, Italy
- Roberto Carpi, Ospedale Santa Maria Nuova, Azienda USL Toscana Centro, SOC Radiologia, Florence, Italy
- Evaristo Cisbani, Istituto Superiore di Sanità, Centro Nazionale Tecnologie Innovative in Sanità Pubblica, Rome, Italy
- Ilaria Cupparo, Università degli Studi di Firenze, Dipartimento di Fisica e Astronomia, Florence, Italy; Università degli Studi di Firenze, Scuola di Scienze della Salute Umana, Florence, Italy
- Sandra Doria, Istituto di Chimica dei Composti OrganoMetallici, Consiglio Nazionale delle Ricerche, Florence, Italy; Università degli Studi di Firenze, European Laboratory for Nonlinear Spectroscopy, Florence, Italy
- Cesare Gori, Università degli Studi di Firenze, Dipartimento di Fisica e Astronomia, Florence, Italy
- Mauro Grigioni, Istituto Superiore di Sanità, Centro Nazionale Tecnologie Innovative in Sanità Pubblica, Rome, Italy
- Lorenzo Lasagni, Università degli Studi di Firenze, Dipartimento di Fisica e Astronomia, Florence, Italy; Università degli Studi di Firenze, Scuola di Scienze della Salute Umana, Florence, Italy
- Alessandro Marconi, Università degli Studi di Firenze, Dipartimento di Fisica e Astronomia, Florence, Italy
- Lorenzo Nicola Mazzoni, Ospedale San Jacopo, Azienda USL Toscana Centro, UO Fisica Sanitaria Prato e Pistoia, Pistoia, Italy
- Vittorio Miele, Azienda Ospedaliero-Universitaria Careggi, SOD Radiodiagnostica di Emergenza-Urgenza, Florence, Italy
- Silvia Pradella, Azienda Ospedaliero-Universitaria Careggi, SOD Radiodiagnostica di Emergenza-Urgenza, Florence, Italy
- Guido Risaliti, Università degli Studi di Firenze, Dipartimento di Fisica e Astronomia, Florence, Italy
- Valentina Sanguineti, Istituto Italiano di Tecnologia, Pattern Analysis & Computer Vision, Genoa, Italy
- Diego Sona, Fondazione Bruno Kessler, Data Science for Health Unit, Trento, Italy
- Letizia Vannucchi, Ospedale S. Jacopo, AUSL Toscana Centro, SOC Radiodiagnostica, Pistoia, Italy
- Adriana Taddeucci, Azienda Ospedaliero-Universitaria Careggi, UO Fisica Sanitaria, Florence, Italy; Istituto Nazionale di Fisica Nucleare, Sezione di Firenze, Sesto Fiorentino, Italy
7. Lee W, Cho E, Kim W, Choi H, Beck KS, Yoon HJ, Baek J, Choi JH. No-reference perceptual CT image quality assessment based on a self-supervised learning framework. Mach Learn Sci Technol 2022. DOI: 10.1088/2632-2153/aca87d.
Abstract
Accurate image quality assessment (IQA) is crucial to optimize computed tomography (CT) imaging protocols while keeping the radiation dose as low as reasonably achievable. In the medical domain, IQA is based on how well an image provides a useful and efficient presentation for physicians to make a diagnosis, and IQA results should be consistent with radiologists' opinions on image quality, which are accepted as the gold standard for medical IQA. As such, the goals of medical IQA differ greatly from those of natural-image IQA. In addition, the lack of pristine reference images or radiologists' opinions in a real-time clinical environment makes IQA challenging; no-reference IQA (NR-IQA) is therefore more desirable in clinical settings than full-reference IQA (FR-IQA). Leveraging a self-supervised training strategy for object detection models, detecting virtually inserted objects with geometrically simple forms, we propose a novel NR-IQA method, named deep detector IQA (D2IQA), that automatically computes a quantitative quality score for CT images. Extensive experimental evaluations on clinical and anthropomorphic phantom CT images demonstrate that D2IQA robustly tracks perceptual image quality as it varies with relative dose level. Moreover, in terms of correlation with radiologists' quality scores, D2IQA is marginally superior to other NR-IQA metrics and even competitive with FR-IQA metrics.