1.
Ketola JHJ, Inkinen SI, Mäkelä T, Kaasalainen T, Peltonen JI, Kangasniemi M, Volmonen K, Kortesniemi M. Automatic chest computed tomography image noise quantification using deep learning. Phys Med 2024;117:103186. PMID: 38042062. DOI: 10.1016/j.ejmp.2023.103186. Received 17 Apr 2023; revised 15 Nov 2023; accepted 23 Nov 2023. Open access.
Abstract
PURPOSE: This study aimed to develop a deep learning (DL) method for noise quantification in clinical chest computed tomography (CT) images without the need for repeated scanning or homogeneous tissue regions.
METHODS: A comprehensive phantom CT dataset (three dose levels, six reconstruction methods, 9240 slices in total) was acquired and used to train a convolutional neural network (CNN) to output an estimate of the local image noise standard deviation (SD) from a single CT scan input. The CNN, consisting of seven convolutional layers, was trained on phantom images representing a range of scan parameters and was tested with phantom images acquired under a variety of different scan conditions, as well as with publicly available chest CT images, to produce clinical noise SD maps.
RESULTS: Noise SD maps predicted by the CNN agreed well with the ground truth, both visually and numerically, in the phantom dataset (errors < 5 HU for most scan parameter combinations). In addition, the noise SD estimates obtained from clinical chest CT images were similar to running-average-based reference estimates in areas without prominent tissue interfaces.
CONCLUSIONS: Predicting local noise magnitudes without repeated scans is feasible using DL. Our implementation, trained with phantom data, was successfully applied to open-source clinical data with heterogeneous tissue borders and textures. We suggest that automatic DL noise mapping from clinical patient images could serve as a tool for objective CT image quality estimation and protocol optimization.
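The reference ("ground-truth") noise maps that such a network is trained against are conventionally derived from repeated scans, which is exactly the requirement the method removes at inference time: subtracting two acquisitions of the same object cancels the anatomy, and the local standard deviation of the difference, scaled by 1/sqrt(2), gives a reference SD map. A minimal pure-Python sketch of that reference computation (the 1-D signals and window size are illustrative assumptions, not the paper's settings):

```python
import math

def local_noise_sd(img_a, img_b, window=5):
    """Estimate a local noise-SD profile from two repeated acquisitions.

    Subtracting the two scans cancels the (identical) signal; dividing by
    sqrt(2) corrects for the variance doubling in the difference image.
    A sliding window then gives a local standard deviation at each position.
    1-D signals are used here purely for illustration.
    """
    assert len(img_a) == len(img_b) and window % 2 == 1
    diff = [(a - b) / math.sqrt(2) for a, b in zip(img_a, img_b)]
    half = window // 2
    sd_map = []
    for i in range(len(diff)):
        lo, hi = max(0, i - half), min(len(diff), i + half + 1)
        patch = diff[lo:hi]
        mean = sum(patch) / len(patch)
        var = sum((x - mean) ** 2 for x in patch) / (len(patch) - 1)
        sd_map.append(math.sqrt(var))
    return sd_map
```

Because the signal cancels, any component identical in both scans (here, a constant offset) contributes nothing to the estimated SD; only uncorrelated noise survives.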
Affiliation(s)
- Juuso H J Ketola: Radiology, HUS Diagnostic Center, University of Helsinki and Helsinki University Hospital, Finland
- Satu I Inkinen: Radiology, HUS Diagnostic Center, University of Helsinki and Helsinki University Hospital, Finland
- Teemu Mäkelä: Radiology, HUS Diagnostic Center, University of Helsinki and Helsinki University Hospital, Finland; Department of Physics, University of Helsinki, P.O. Box 64, FI-00014 Helsinki, Finland
- Touko Kaasalainen: Radiology, HUS Diagnostic Center, University of Helsinki and Helsinki University Hospital, Finland
- Juha I Peltonen: Radiology, HUS Diagnostic Center, University of Helsinki and Helsinki University Hospital, Finland
- Marko Kangasniemi: Radiology, HUS Diagnostic Center, University of Helsinki and Helsinki University Hospital, Finland
- Kirsi Volmonen: Radiology, HUS Diagnostic Center, University of Helsinki and Helsinki University Hospital, Finland
- Mika Kortesniemi: Radiology, HUS Diagnostic Center, University of Helsinki and Helsinki University Hospital, Finland
2.
Eftekharian M, Nodehi A, Enayatifar R. ML-DSTnet: a novel hybrid model for breast cancer diagnosis improvement based on image processing using machine learning and Dempster-Shafer theory. Comput Intell Neurosci 2023;2023:7510419. PMID: 37954096. PMCID: PMC10635746. DOI: 10.1155/2023/7510419. Received 26 Aug 2022; revised 18 Nov 2022; accepted 25 Apr 2023.
Abstract
Artificial intelligence has transformed medical detection systems, but it has also introduced new challenges. Breast cancer diagnosis and classification are part of such systems, and early detection widens the range of treatment options. Uncertainty, however, is ever-present for the decision-maker: when the system's parameters cannot be estimated accurately, wrong decisions follow. To address this, we propose a method that reduces the problem's uncertainty using Dempster-Shafer theory, enabling better decisions. Working on the MIAS dataset and combining image processing, machine learning, and Dempster-Shafer mathematical theory, this study aims to improve the diagnosis and classification of benign and malignant masses. We first obtain mass-type classifications from an MLP using texture features and from a CNN. We then combine the outputs of the two classifiers with Dempster-Shafer theory, improving accuracy. The results show that the proposed approach outperforms others on evaluation criteria, with an accuracy of 99.10%, a sensitivity of 98.4%, and a specificity of 100%.
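The fusion step described here, combining the outputs of two classifiers with Dempster-Shafer theory, is Dempster's rule of combination. A self-contained sketch follows; the mass values assigned to the MLP and CNN are invented for illustration, since the abstract does not give the paper's actual basic probability assignments:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    Focal elements are frozensets over the frame of discernment. Mass on
    conflicting (empty-intersection) pairs is discarded, and the rest is
    renormalised by 1 - K, where K is the total conflict.
    """
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully disagree")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical classifier outputs as mass functions over {benign, malignant};
# mass on the whole frame (theta) carries each model's uncertainty.
B, M = frozenset({"benign"}), frozenset({"malignant"})
theta = B | M
mlp = {M: 0.7, B: 0.1, theta: 0.2}   # illustrative numbers only
cnn = {M: 0.6, B: 0.2, theta: 0.2}
fused = combine(mlp, cnn)
```

When the two sources agree, the combined belief in "malignant" exceeds either source's individual belief, which is the mechanism by which the fusion can raise accuracy over both base classifiers.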
Affiliation(s)
- Mohsen Eftekharian: Department of Computer Engineering, Gorgan Branch, Islamic Azad University, Gorgan, Iran
- Ali Nodehi: Department of Computer Engineering, Gorgan Branch, Islamic Azad University, Gorgan, Iran
- Rasul Enayatifar: Department of Computer Engineering, Firoozkooh Branch, Islamic Azad University, Firoozkooh, Iran
3.
Alis D, Kartal MS, Seker ME, Guroz B, Basar Y, Arslan A, Sirolu S, Kurtcan S, Denizoglu N, Tuzun U, Yildirim D, Oksuz I, Karaarslan E. Deep learning for assessing image quality in bi-parametric prostate MRI: a feasibility study. Eur J Radiol 2023;165:110924. PMID: 37354768. DOI: 10.1016/j.ejrad.2023.110924. Received 2 Apr 2023; revised 15 May 2023; accepted 9 Jun 2023.
Abstract
BACKGROUND: Although systems such as Prostate Imaging Quality (PI-QUAL) have been proposed for quality assessment, visual evaluations by human readers remain somewhat inconsistent, particularly among less-experienced readers.
OBJECTIVES: To assess the feasibility of deep learning (DL) for the automated assessment of image quality in bi-parametric prostate MRI scans and to compare its performance with that of less-experienced readers.
METHODS: We used bi-parametric prostate MRI scans from the PI-CAI dataset. Image quality was scored on a 3-point Likert scale (poor, moderate, excellent). Three expert readers established the ground-truth labels for the development (500 scans) and testing (100 scans) sets. We trained a 3D DL model on the development set using probabilistic prostate masks and an ordinal loss function. Four less-experienced readers scored the testing set for performance comparison.
RESULTS: The kappa scores between the DL model and the expert consensus were 0.42 for T2W images and 0.61 for ADC maps, representing moderate and good agreement, respectively. The kappa scores between the less-experienced readers and the expert consensus ranged from 0.39 to 0.56 (fair to moderate) for T2W images and from 0.39 to 0.62 (fair to good) for ADC maps.
CONCLUSIONS: DL can offer performance comparable to that of less-experienced readers when assessing image quality in bi-parametric prostate MRI, making it a viable option for an automated quality-assessment tool. We suggest that DL models trained on more representative datasets, annotated by a larger group of experts, could yield reliable image quality assessment and potentially substitute for or assist visual evaluation by human readers.
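The agreement statistic reported here is Cohen's kappa, chance-corrected agreement between two raters. For a 3-point ordinal scale a weighted variant is often preferred, but the abstract does not say which was used, so the unweighted form below is only a minimal sketch of the idea, not the study's exact implementation:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters over the same items.

    kappa = (p_observed - p_expected) / (1 - p_expected), where p_expected
    is the agreement two independent raters with these marginal label
    frequencies would reach by chance.
    """
    assert len(rater_a) == len(rater_b) > 0
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum(freq_a[l] * freq_b[l] for l in labels) / (n * n)
    return (observed - expected) / (1.0 - expected)
```

Perfect agreement gives 1.0; agreement at chance level gives 0, which is why kappa in the 0.4 to 0.6 range is read as only moderate-to-good despite high raw agreement.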
Affiliation(s)
- Deniz Alis: Acibadem Mehmet Ali Aydinlar University, School of Medicine, Department of Radiology, Istanbul, 34457, Turkey
- Mustafa Ege Seker: Acibadem Mehmet Ali Aydinlar University, School of Medicine, Istanbul, 34752, Turkey
- Batuhan Guroz: Acibadem Mehmet Ali Aydinlar University, School of Medicine, Department of Radiology, Istanbul, 34457, Turkey
- Yeliz Basar: Acibadem Healthcare Group, Department of Radiology, Istanbul, 34457, Turkey
- Aydan Arslan: Umraniye Training and Research Hospital, Department of Radiology, Istanbul, 34764, Turkey
- Sabri Sirolu: Istanbul Sisli Hamidiye Etfal Training and Research Hospital, Department of Radiology, Istanbul, 34396, Turkey
- Serpil Kurtcan: Acibadem Healthcare Group, Department of Radiology, Istanbul, 34457, Turkey
- Nurper Denizoglu: Acibadem Healthcare Group, Department of Radiology, Istanbul, 34457, Turkey
- Umit Tuzun: Neolife Radiology Center, Istanbul, 34340, Turkey
- Duzgun Yildirim: Acibadem Mehmet Ali Aydinlar University, School of Vocational Sciences, Department of Radiology, Istanbul, 34457, Turkey
- Ilkay Oksuz: Istanbul Technical University, Department of Computer Engineering, Istanbul, 34467, Turkey
- Ercan Karaarslan: Cumhuriyet University, School of Medicine, Sivas, 581407, Turkey
4.
Tang L, Hui Y, Yang H, Zhao Y, Tian C. Medical image fusion quality assessment based on conditional generative adversarial network. Front Neurosci 2022;16:986153. PMID: 36033610. PMCID: PMC9400712. DOI: 10.3389/fnins.2022.986153. Received 4 Jul 2022; accepted 13 Jul 2022. Open access.
Abstract
Multimodal medical image fusion (MMIF) has been proven to effectively improve the efficiency of disease diagnosis and treatment. However, few works have explored dedicated evaluation methods for MMIF. This paper proposes a novel quality assessment method for MMIF based on conditional generative adversarial networks. First, with the mean opinion score (MOS) as the guiding condition, feature information from the two source images is extracted separately through a dual-channel encoder-decoder. Features from different levels of the encoder-decoder are hierarchically fed into a self-attention feature block, a fusion strategy that automatically identifies favorable features. A discriminator is then used to improve the fusion objective of the generator. Finally, we calculate the structural similarity index between each generated image and the true image, and the MOS corresponding to the maximum result is taken as the final quality assessment of the fused image. On the established MMIF database, the proposed method achieves state-of-the-art performance among the comparison methods, with excellent agreement with subjective evaluations, indicating that it is effective for the quality assessment of medical fusion images.
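The scoring step above relies on the structural similarity index (SSIM). The single-window (global) variant below shows the core formula; practical implementations, presumably including the paper's, compute it over local sliding windows and average the results. The constants follow the common choice K1 = 0.01, K2 = 0.03 for an 8-bit dynamic range:

```python
def ssim_global(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window structural similarity between two pixel sequences.

    SSIM = ((2*mu_x*mu_y + C1) * (2*cov_xy + C2))
         / ((mu_x^2 + mu_y^2 + C1) * (var_x + var_y + C2))
    Identical inputs score exactly 1.0.
    """
    assert len(x) == len(y) > 1
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx * mx + my * my + c1) * (vx + vy + c2))
```

The luminance, contrast, and structure comparisons are folded into the two factors of the numerator and denominator, which is why SSIM tracks perceived quality better than a plain mean squared error.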
Affiliation(s)
- Lu Tang: School of Medical Imaging, Xuzhou Medical University, Xuzhou, China
- Yu Hui: School of Medical Imaging, Xuzhou Medical University, Xuzhou, China
- Hang Yang: School of Medical Imaging, Xuzhou Medical University, Xuzhou, China
- Yinghong Zhao: School of Medical Imaging, Xuzhou Medical University, Xuzhou, China
- Chuangeng Tian: School of Information and Electrical Engineering, Xuzhou University of Technology, Xuzhou, China
5.
Cheuque C, Querales M, León R, Salas R, Torres R. An efficient multi-level convolutional neural network approach for white blood cells classification. Diagnostics (Basel) 2022;12:248. PMID: 35204339. PMCID: PMC8871319. DOI: 10.3390/diagnostics12020248. Received 30 Nov 2021; revised 17 Dec 2021; accepted 28 Dec 2021. Open access.
Abstract
The evaluation of white blood cells is essential to assess the quality of the human immune system; however, the assessment of the blood smear depends on the pathologist’s expertise. Most machine learning tools make a one-level classification for white blood cell classification. This work presents a two-stage hybrid multi-level scheme that efficiently classifies four cell groups: lymphocytes and monocytes (mononuclear) and segmented neutrophils and eosinophils (polymorphonuclear). At the first level, a Faster R-CNN network is applied for the identification of the region of interest of white blood cells, together with the separation of mononuclear cells from polymorphonuclear cells. Once separated, two parallel convolutional neural networks with the MobileNet structure are used to recognize the subclasses in the second level. The results obtained using Monte Carlo cross-validation show that the proposed model has a performance metric of around 98.4% (accuracy, recall, precision, and F1-score). The proposed model represents a good alternative for computer-aided diagnosis (CAD) tools for supporting the pathologist in the clinical laboratory in assessing white blood cells from blood smear images.
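The two-stage routing described above can be sketched as plain control flow: a first-level model separates mononuclear from polymorphonuclear cells, and one of two specialist models then assigns the subclass. The lambda "models" and the toy feature dictionary below are stand-ins for the paper's Faster R-CNN and MobileNet networks, invented purely to show the dispatch structure:

```python
def classify_cell(cell, level1, mono_model, poly_model):
    """Two-level scheme: level1 picks the cell family, then a specialist
    model resolves the subclass within that family."""
    family = level1(cell)
    if family == "mononuclear":
        return family, mono_model(cell)  # lymphocyte vs monocyte
    return family, poly_model(cell)      # segmented neutrophil vs eosinophil

# Placeholder classifiers keyed on a toy feature dict (illustrative only).
level1 = lambda c: "mononuclear" if c["lobes"] == 1 else "polymorphonuclear"
mono = lambda c: "lymphocyte" if c["size"] < 10 else "monocyte"
poly = lambda c: "neutrophil" if c["granules"] == "fine" else "eosinophil"

result = classify_cell({"lobes": 3, "size": 12, "granules": "fine"},
                       level1, mono, poly)
```

Splitting the four-way decision into a coarse binary stage plus two easy binary stages is what lets each second-level network specialize, which the paper credits for its efficiency.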
Affiliation(s)
- César Cheuque
- Facultad de Ingeniería, Universidad Andres Bello, Viña del Mar 2531015, Chile; (C.C.); (R.L.)
| | - Marvin Querales
- Escuela de Tecnología Médica, Universidad de Valparaíso, Viña del Mar 2540064, Chile;
| | - Roberto León
- Facultad de Ingeniería, Universidad Andres Bello, Viña del Mar 2531015, Chile; (C.C.); (R.L.)
| | - Rodrigo Salas
- Centro de Investigación y Desarrollo en Ingeniería en Salud, Escuela de Ingeniería C. Biomédica, Universidad de Valparaíso, Valparaíso 2362905, Chile;
- Instituto Milenio Intelligent Healthcare Engineering, Valparaíso 2362905, Chile
| | - Romina Torres
- Facultad de Ingeniería, Universidad Andres Bello, Viña del Mar 2531015, Chile; (C.C.); (R.L.)
- Instituto Milenio Intelligent Healthcare Engineering, Valparaíso 2362905, Chile
- Correspondence: ; Tel.: +56-32-2845315
| |
6.
Classification of breast cancer in mammograms with deep learning adding a fifth class. Appl Sci (Basel) 2021. DOI: 10.3390/app112311398.
Abstract
Breast cancer is one of the most prevalent and concerning diseases worldwide, and early detection and diagnosis, achieved through imaging techniques such as mammography, play the leading role against it. Radiologists tend to have a high false-positive rate in mammography diagnoses, with an accuracy of around 82%. Deep learning (DL) techniques have shown promising results in the early detection of breast cancer through computer-aided diagnosis (CAD) systems built on convolutional neural networks (CNNs). This work applies, evaluates, and compares the AlexNet, GoogLeNet, ResNet50, and VGG19 architectures for classifying breast lesions, using transfer learning with fine-tuning and training on regions extracted from the MIAS and INbreast databases. We analyzed 14 classifiers. Most involve the four classes used in previous research, corresponding to benign and malignant microcalcifications and masses; as our main contribution, we added a fifth class for normal mammary parenchyma tissue, which increased correct detection. The architectures were evaluated with a statistical analysis based on the receiver operating characteristic (ROC), the area under the curve (AUC), F1 score, accuracy, precision, sensitivity, and specificity. The best results were obtained with GoogLeNet trained on five classes over a balanced database: an AUC of 99.29%, F1 score of 91.92%, accuracy of 91.92%, precision of 92.15%, sensitivity of 91.70%, and specificity of 97.66%. We conclude that GoogLeNet is a suitable classifier for a CAD system addressing breast cancer.
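The evaluation metrics listed above all derive from confusion-matrix counts; viewed one class versus the rest, they reduce to a few ratios. A minimal sketch (the counts passed in the usage note are invented, not the paper's):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard one-vs-rest classification metrics from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)   # recall / true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "precision": precision,
            "sensitivity": sensitivity, "specificity": specificity, "f1": f1}
```

For example, `binary_metrics(90, 10, 95, 5)` gives an accuracy of 0.925 with a precision of 0.90, illustrating how the metrics diverge once false positives and false negatives are imbalanced.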