1
Ji H, Li J, Zhu X, Fan L, Jiang W, Chen Y. Enhancing assisted diagnostic accuracy in scalp psoriasis: A Multi-Network Fusion Object Detection Framework for dermoscopic pattern diagnosis. Skin Res Technol 2024; 30:e13698. [PMID: 38634154] [PMCID: PMC11024501] [DOI: 10.1111/srt.13698]
Abstract
BACKGROUND Dermoscopy is a common method for diagnosing scalp psoriasis, and several artificial intelligence techniques have been used to assist dermoscopic diagnosis of skin conditions such as nail fungus disease. The convolutional neural network is the most commonly used of these, but it is also the most basic algorithm, and the use of object detection algorithms to assist dermoscopic diagnosis of scalp psoriasis has not been reported. OBJECTIVES To establish a dermoscopic diagnostic framework for scalp psoriasis based on object detection and image enhancement that improves diagnostic efficiency and accuracy. METHODS We analyzed the dermoscopic patterns of scalp psoriasis cases diagnosed at the 72nd Group Army Hospital of the PLA from January 1, 2020 to December 31, 2021, with scalp seborrheic dermatitis cases selected as a control group. Based on the dermoscopic images and major dermoscopic patterns of scalp psoriasis and scalp seborrheic dermatitis, we investigated a multi-network fusion object detection framework combining the object detection technique Faster R-CNN with the image enhancement technique contrast limited adaptive histogram equalization (CLAHE) to assist in diagnosing the two diseases and differentiating their major dermoscopic patterns. The diagnostic performance of the framework was compared with that of dermatologists. RESULTS A total of 1876 dermoscopic images were collected: 1218 of scalp psoriasis and 658 of scalp seborrheic dermatitis. The framework was trained and tested on these images. The test accuracy, specificity, sensitivity, and Youden index for the diagnosis of scalp psoriasis were 91.0%, 89.5%, 91.0%, and 0.805, respectively; for the main dermoscopic patterns of the two diseases, they were 89.9%, 97.7%, 89.9%, and 0.876. Compared with five dermatologists, the fusion framework achieved better diagnostic results. CONCLUSIONS There are some differences in dermoscopic patterns between scalp psoriasis and scalp seborrheic dermatitis, and the proposed multi-network fusion object detection framework achieved higher diagnostic performance for scalp psoriasis than the dermatologists.
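Since CLAHE recurs throughout the entries below, a rough sketch of the idea may help: the image is split into tiles, each tile's histogram is clipped at a limit before equalization, and the clipped excess is redistributed across bins, which bounds how strongly noise can be amplified. The following is a minimal, illustrative NumPy sketch, not any paper's exact implementation; function names are invented for illustration, and a real CLAHE also bilinearly interpolates between neighbouring tile mappings to avoid block seams:

```python
import numpy as np

def clipped_equalize(tile, n_bins=256, clip_limit=0.01):
    """Equalize one tile after clipping its histogram.

    clip_limit is a fraction of the tile's pixel count; clipped excess is
    spread uniformly over all bins, which bounds contrast amplification.
    """
    hist, _ = np.histogram(tile, bins=n_bins, range=(0, n_bins))
    limit = max(1, int(clip_limit * tile.size))
    excess = int(np.maximum(hist - limit, 0).sum())
    hist = np.minimum(hist, limit) + excess // n_bins
    cdf = np.cumsum(hist).astype(np.float64)
    # map the occupied intensity range onto [0, n_bins - 1]
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0) * (n_bins - 1)
    return cdf[tile].astype(np.uint8)

def clahe_sketch(img, tiles=(8, 8), clip_limit=0.01):
    """Tile-by-tile clipped equalization (no inter-tile interpolation)."""
    h, w = img.shape
    th, tw = h // tiles[0], w // tiles[1]
    out = np.empty_like(img)
    for i in range(tiles[0]):
        for j in range(tiles[1]):
            # last row/column of tiles absorbs any remainder pixels
            ys = slice(i * th, h if i == tiles[0] - 1 else (i + 1) * th)
            xs = slice(j * tw, w if j == tiles[1] - 1 else (j + 1) * tw)
            out[ys, xs] = clipped_equalize(img[ys, xs], clip_limit=clip_limit)
    return out
```

In practice a full implementation such as OpenCV's `cv2.createCLAHE` (with `clipLimit` and `tileGridSize` parameters) should be preferred; this sketch only shows the per-tile clipping-then-equalizing step.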
Affiliation(s)
- Honghai Ji
- School of Electronics & Control Engineering, North China University of Technology, Beijing, China
- Jiaqi Li
- School of Electronics & Control Engineering, North China University of Technology, Beijing, China
- Xiaoyang Zhu
- Department of Dermatology, 72nd Group Army Hospital of the PLA, Huzhou, China
- Lingling Fan
- School of Automation, Beijing Information Science and Technology University, Beijing, China
- Weiwei Jiang
- Department of Dermatology, 72nd Group Army Hospital of the PLA, Huzhou, China
- Department of Dermatology, Shanghai Key Laboratory of Medical Mycology, Changzheng Hospital, Naval Medical University, Shanghai, China
- Yang Chen
- Department of Dermatology, 72nd Group Army Hospital of the PLA, Huzhou, China
2
Yu Y, Gao G, Gao X, Zhang Z, He Y, Shi L, Kang Z. A study on the radiomic correlation between CBCT and pCT scans based on modified 3D-RUnet image segmentation. Front Oncol 2024; 14:1301710. [PMID: 38463234] [PMCID: PMC10921553] [DOI: 10.3389/fonc.2024.1301710]
Abstract
Purpose The present study builds on evidence of a potential correlation between cone-beam CT (CBCT) measurements of tumor size and shape and the stage of locally advanced rectal cancer. To investigate this relationship further, the study quantitatively assesses the correlation between positioning CT (pCT) and CBCT radiomics features of these cancers and examines whether CBCT can substitute for pCT. Methods 103 patients diagnosed with locally advanced rectal cancer and undergoing neoadjuvant chemoradiotherapy were selected as participants. Their CBCT and pCT images were divided into a training set and a validation set in a 7:3 ratio. An improved conventional 3D-RUNet (CLA-UNet) deep learning model was trained on the training set and then applied to the validation set, with the DSC, HD95 and ASSD calculated for quantitative evaluation. Radiomics features were then extracted from 30 patients of the test set. Results The modified model achieved an average DSC of 0.792 for pCT and 0.672 for CBCT scans. Of the 1037 features extracted from each patient's CBCT and pCT images, 73 were found to have R values greater than 0.9, including three features related to the staging and prognosis of rectal cancer. Conclusion In this study, we proposed an automatic, fast, and consistent method for rectal cancer GTV segmentation on pCT and CBCT scans. The radiomic findings indicate that CBCT images have significant research value in the field of radiomics.
Affiliation(s)
- Yanjuan Yu
- College of Electronic Engineering, Zhangzhou Institute of Technology, Zhangzhou, Fujian, China
- Guanglu Gao
- Department of Radiation Oncology, the First Affiliated Hospital of Xiamen University, Xiamen, Fujian, China
- Xiang Gao
- Department of Radiation Oncology, the First Affiliated Hospital of Xiamen University, Xiamen, Fujian, China
- Zongkai Zhang
- Department of Radiation Oncology, the First Affiliated Hospital of Xiamen University, Xiamen, Fujian, China
- Yipeng He
- Department of Radiation Oncology, the First Affiliated Hospital of Xiamen University, Xiamen, Fujian, China
- Liwan Shi
- Department of Radiation Oncology, the First Affiliated Hospital of Xiamen University, Xiamen, Fujian, China
- Zheng Kang
- Department of Radiation Oncology, the First Affiliated Hospital of Xiamen University, Xiamen, Fujian, China
3
Prince R, Niu Z, Khan ZY, Emmanuel M, Patrick N. COVID-19 detection from chest X-ray images using CLAHE-YCrCb, LBP, and machine learning algorithms. BMC Bioinformatics 2024; 25:28. [PMID: 38233764] [PMCID: PMC10792799] [DOI: 10.1186/s12859-023-05427-5]
Abstract
BACKGROUND COVID-19 is a contagious respiratory disease that has infected hundreds of millions of people and caused millions of deaths. It is necessary to develop a computer-based tool that is fast, precise, and inexpensive to detect COVID-19 efficiently. Recent studies revealed that machine learning and deep learning models can accurately detect COVID-19 from chest X-ray (CXR) images. However, they exhibit notable limitations, such as large amounts of training data, large feature vectors, enormous numbers of trainable parameters, expensive computational resources (GPUs), and long run-times. RESULTS In this study, we propose a new approach that addresses some of these limitations. First, we use contrast limited adaptive histogram equalization (CLAHE) to enhance the contrast of CXR images; the resulting images are then converted to the YCrCb color space. We estimate reflectance from chrominance using the illumination-reflectance model. Finally, we use a normalized local binary pattern histogram generated from the reflectance (Cr) and YCb as the classification feature vector. Decision tree, naive Bayes, support vector machine, K-nearest neighbor, and logistic regression were used as the classification algorithms. The performance evaluation on the test set indicates that the proposed approach is superior, with accuracy rates of 99.01%, 100%, and 98.46% on three different datasets, respectively. Naive Bayes, a probabilistic machine learning algorithm, emerged as the most resilient. CONCLUSION Our proposed method uses fewer handcrafted features, more affordable computational resources, and less runtime than existing state-of-the-art approaches, so emerging nations where radiologists are in short supply can adopt this prototype. We made both the code and the datasets publicly available for further improvement; see the data and materials availability statement in the declarations section for access.
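The local binary pattern (LBP) features used above can be sketched as follows: each pixel is compared with its eight neighbours, the resulting sign pattern forms an 8-bit code, and the normalized histogram of codes serves as the texture feature vector. This is a generic LBP sketch, not the authors' exact variant (their pipeline applies it to reflectance and YCb channels); all names are illustrative:

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour LBP: each interior pixel becomes an 8-bit code
    built from sign comparisons with its neighbours (borders are skipped)."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                       # centre pixels
    # neighbour offsets, clockwise from the top-left corner
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(shifts):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code

def lbp_histogram(gray):
    """Normalised 256-bin LBP histogram, usable as a feature vector."""
    codes = lbp_image(gray)
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)
```

Library implementations such as scikit-image's `local_binary_pattern` additionally offer rotation-invariant and "uniform" code variants, which are usually preferred in practice because they shrink the feature vector.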
Affiliation(s)
- Rukundo Prince
- Department of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Zhendong Niu
- Department of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Zahid Younas Khan
- Computer Science and Information Technology, University of Azad Jammu and Kashmir, Kashmir, Pakistan
- Masabo Emmanuel
- Software Engineering, African Center of Excellence in Data Science (ACE-DS) and the African Center of Excellence in Internet of Things (ACEIoT), University of Rwanda, Kigali, Rwanda
- Niyishaka Patrick
- Computer and Information Sciences, University of Hyderabad, Hyderabad, India
4
Nithya VP, Mohanasundaram N, Santhosh R. An Early Detection and Classification of Alzheimer's Disease Framework Based on ResNet-50. Curr Med Imaging 2023:CMIR-EPUB-134056. [PMID: 37622561] [DOI: 10.2174/1573405620666230825113344]
Abstract
OBJECTIVE The objective of this study is to develop a more effective early detection system for Alzheimer's disease (AD) using a Deep Residual Network (ResNet) model, addressing the limitations of convolutional layers in conventional Convolutional Neural Networks (CNNs) and applying image preprocessing techniques. METHODS The proposed method uses Contrast Limited Adaptive Histogram Equalization (CLAHE) and Boosted Anisotropic Diffusion Filters (BADF) for equalization and noise removal, and K-means clustering for segmentation. A ResNet-50 model with shortcut links between three residual layers is proposed to extract features more efficiently. ResNet-50 is preferred over other ResNet variants because its intermediate depth strikes a balance between computational efficiency and performance, making it a widely adopted and effective architecture for computer vision tasks; deeper ResNet variants are more prone to overfitting and computational complexity, which can hinder their practical application. The proposed method is evaluated on a dataset of MRI scans of AD patients. RESULTS The proposed method achieved a high accuracy of 95% and a minimum loss of 0.12. While some models showed better accuracy, they were prone to overfitting; in contrast, the suggested ResNet-50-based framework demonstrated superior performance across various metrics, providing a robust and reliable approach to Alzheimer's disease categorization. CONCLUSION The proposed ResNet-50 model with shortcut links between three residual layers, combined with image preprocessing techniques, provides an effective early detection system for AD. The study demonstrates the potential of deep learning and image processing techniques for developing accurate and efficient diagnostic tools for AD, improves on existing approaches to AD classification, and provides a promising framework for future research in this area.
Affiliation(s)
- V P Nithya
- Department of Computer Science and Engineering, Karpagam Academy of Higher Education, Coimbatore, Tamil Nadu, India
- N Mohanasundaram
- Department of Computer Science and Engineering, Faculty of Engineering, Karpagam Academy of Higher Education, Coimbatore, Tamil Nadu, India
- R Santhosh
- Department of Computer Science and Engineering, Faculty of Engineering, Karpagam Academy of Higher Education, Coimbatore, Tamil Nadu, India
5
Sidhu RK, Sachdeva J, Katoch D. Segmentation of retinal blood vessels by a novel hybrid technique: Principal Component Analysis (PCA) and Contrast Limited Adaptive Histogram Equalization (CLAHE). Microvasc Res 2023; 148:104477. [PMID: 36746364] [DOI: 10.1016/j.mvr.2023.104477]
Abstract
Diabetic Retinopathy (DR) is a persistent disease of the eyes that may lead to permanent loss of sight. In this paper, a methodology is proposed to segment the region of interest (ROI), i.e. new blood vessels, in retinal fundus images of DR patients. A database of 50 fundus retinal images of healthy subjects and DR patients was obtained from the Post Graduate Institute of Medical Education and Research (PGIMER), Chandigarh, India, and the experimental setup consists of three sets of experiments for the disease. In the first stage of automated blood vessel segmentation, a gray-scale image is produced from the colored image using Principal Component Analysis (PCA) as a preprocessing step. Contrast enhancement by Contrast Limited Adaptive Histogram Equalization (CLAHE) then highlights the retinal blood vessels in the gray-scale image, unsheathing newly formed retinal blood vessels, whereas PCA preserves their texture and color discrimination in the DR images. Expert ophthalmologists' scrutiny of both internet-repository and real-time data served as the gold standard for further analysis and for shaping the proposed method; the ophthalmologists also confirmed the formation of new blood vessels on the disc region and revealed vessels that are impossible to see with the naked eye. These operations help extract the retinal blood vessels present in both the disc and non-disc regions of the image. The results are compared with state-of-the-art methods such as the watershed transform; the new blood vessels are better segmented by the proposed methodology and were marked by an experienced ophthalmologist for validation. Further, for quantitative analysis, features are extracted from the new blood vessels, as they are crucial for scientific interpretation. The feature values lie within permissible limits: the number of segments varies from 2 to 5 and the length of segments from 49 to 164 pixels. Similarly, the gray level of new blood vessels lies in the normalized range 0.296-0.935, the coefficient of variation in gray level in the range 0.658-10.10, and the distance from the vessel origin in the range 56-82 pixels. Both quantitative and qualitative results show that the proposed methodologies boost ophthalmic and clinical diagnosis. The developed method further handles false detection of vessels near the optic disc boundary, under-segmentation of thin vessels, and detection of pathological anomalies such as exudates, micro-aneurysms, and cotton wool spots. From the numerical analysis, ophthalmologists extracted the number of vessels formed, the length of the new vessels, the observation that the new vessels are less homogeneous than normal vessels, and whether the new vessels lie at the centre of the disc region or towards its edges. These parameters agree with the ophthalmologists' findings on the retinal images, and the automated detection aids monitoring and comprehensive patient assessment. The experimental results show that the proposed method has higher sensitivity, specificity, and accuracy than state-of-the-art methods, at 0.9023, 0.9610, and 0.9921, respectively; similar results are obtained on the PGIMER Chandigarh retinal fundus images, with sensitivity of 0.9234, specificity of 0.9955, and accuracy of 0.9682.
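The PCA-based gray-scale conversion described above can be illustrated with a small sketch: treat each pixel as a 3-vector in RGB space, compute the 3x3 colour covariance of the image, and project every pixel onto the leading eigenvector, so the gray image keeps the direction of maximum colour variance. This is a generic sketch of the idea, not the authors' exact implementation; the function name is invented:

```python
import numpy as np

def pca_grayscale(rgb):
    """Project each RGB pixel onto the first principal component of the
    image's colour distribution, then rescale to [0, 255]."""
    pixels = rgb.reshape(-1, 3).astype(np.float64)
    pixels -= pixels.mean(axis=0)
    # 3x3 covariance of the colour channels, then its leading eigenvector
    cov = pixels.T @ pixels / max(len(pixels) - 1, 1)
    _, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    pc1 = eigvecs[:, -1]
    proj = pixels @ pc1
    proj -= proj.min()
    span = proj.max() if proj.max() > 0 else 1.0
    return (proj / span * 255.0).reshape(rgb.shape[:2]).astype(np.uint8)
```

Unlike a fixed luminance formula, this projection adapts to each image's colour statistics, which is why it can preserve vessel/background discrimination that a plain average of channels would wash out.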
Affiliation(s)
- R K Sidhu
- Department of Electronics and Communication Engineering, Chandigarh University, Mohali, India
- Jainy Sachdeva
- Department of Electrical and Instrumentation Engineering, Thapar Institute of Engineering & Technology, Patiala, India
- D Katoch
- Department of Ophthalmology, Advanced Eye Centre, PGIMER, Chandigarh, India
6
Alshamrani K, Alshamrani HA, Alqahtani FF, Almutairi BS. Enhancement of Mammographic Images Using Histogram-Based Techniques for Their Classification Using CNN. Sensors (Basel) 2022; 23:235. [PMID: 36616832] [PMCID: PMC9824687] [DOI: 10.3390/s23010235]
Abstract
Worldwide, one in eight women will develop breast cancer; men can also develop it, but less frequently. The condition starts with uncontrolled cell division brought on by a change in the genes that regulate cell division and growth, which leads to the development of a nodule or tumour. These tumours can be either benign, posing no health risk, or malignant (cancerous), which puts patients' lives in jeopardy and has the potential to spread. The most common way to diagnose the disease is via mammograms. This kind of examination enables the detection of abnormalities in breast tissue, such as masses and microcalcifications, which are thought to be indicators of the presence of disease. This study aims to determine how histogram-based image enhancement methods affect the classification of mammograms into five groups: benign calcifications, benign masses, malignant calcifications, malignant masses, and healthy tissue, as determined by a CAD system for automatic mammography classification using convolutional neural networks. Both Contrast-Limited Adaptive Histogram Equalization (CLAHE) and Histogram Intensity Windowing (HIW) are used. These procedures modify the mammography histogram by improving the contrast between the image's background, fibrous tissue, dense tissue, and diseased tissue, which includes microcalcifications and masses; the increased contrast makes it easier for the neural networks to distinguish between the various types of tissue, which could raise the proportion of correctly classified images. Using deep convolutional neural networks, a model was developed that classifies the different types of lesions; the model achieved an accuracy of 62% on the mini-MIAS data. The final goal of the project is an updated algorithm that will be incorporated into the CAD system and will enhance the automatic identification and categorization of microcalcifications and masses. This would increase the possibility of early disease identification, which is important because early discovery raises the likelihood of a cure to almost 100%.
Affiliation(s)
- Khalaf Alshamrani
- Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran 6641, Saudi Arabia
- Hassan A. Alshamrani
- Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran 6641, Saudi Arabia
- Fawaz F. Alqahtani
- Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran 6641, Saudi Arabia
- Bander S. Almutairi
- Radiology Department, King Abdulaziz University Hospital, Jeddah 3646, Saudi Arabia
7
Islam MR, Nahiduzzaman M. Complex features extraction with deep learning model for the detection of COVID19 from CT scan images using ensemble based machine learning approach. Expert Syst Appl 2022; 195:116554. [PMID: 35136286] [PMCID: PMC8813716] [DOI: 10.1016/j.eswa.2022.116554]
Abstract
The novel coronavirus disease (COVID-19) is a highly infectious disease that has had a devastating effect on public health in more than 200 countries. Since detection of COVID-19 using reverse transcription-polymerase chain reaction (RT-PCR) is time-consuming and error-prone, Computed Tomography (CT) images offer an alternative means of detection. In this paper, Contrast Limited Adaptive Histogram Equalization (CLAHE) was applied to CT images as a preprocessing step to enhance image quality. We then developed a novel Convolutional Neural Network (CNN) model that extracted 100 prominent features from a total of 2482 CT scan images. These extracted features were fed to various machine learning algorithms: Gaussian Naive Bayes (GNB), Support Vector Machine (SVM), Decision Tree (DT), Logistic Regression (LR), and Random Forest (RF). Finally, we proposed an ensemble model for COVID-19 CT image classification. We also report performance comparisons with state-of-the-art methods: our proposed model outperforms them, achieving accuracy, precision, and recall scores of 99.73%, 99.46%, and 100%, respectively.
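The final ensemble step, combining the probability outputs of several base classifiers, can be sketched as simple (optionally weighted) soft voting. This is a generic illustration of the technique, not the authors' exact ensemble rule; the function name is invented:

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Fuse class-probability matrices from several classifiers.

    Each element of prob_list has shape (n_samples, n_classes). Returns the
    predicted label per sample and the fused probability matrix.
    """
    probs = np.stack(prob_list)          # (n_models, n_samples, n_classes)
    w = np.ones(len(prob_list)) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalise so fused rows stay probabilities
    fused = np.tensordot(w, probs, axes=1)   # weighted average over models
    return fused.argmax(axis=1), fused
```

With `weights=None` this is the plain average-of-probabilities rule; passing unequal weights gives weighted voting, and those weights could in principle be tuned by an outer optimiser (scikit-learn's `VotingClassifier(voting='soft')` packages the same idea).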
Affiliation(s)
- Md Robiul Islam
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Nahiduzzaman
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
8
Nneji GU, Cai J, Deng J, Monday HN, Hossin MA, Nahar S. Identification of Diabetic Retinopathy Using Weighted Fusion Deep Learning Based on Dual-Channel Fundus Scans. Diagnostics (Basel) 2022; 12:540. [PMID: 35204628] [DOI: 10.3390/diagnostics12020540]
Abstract
It is a well-known fact that diabetic retinopathy (DR) is one of the most common causes of visual impairment between the ages of 25 and 74 around the globe. Diabetes is caused by persistently high blood glucose levels, which lead to blood vessel damage and vision loss. Early diagnosis can minimise the risk of proliferative diabetic retinopathy, the advanced stage of the disease that carries a higher risk of severe impairment; it is therefore important to classify DR stages. To this end, this paper presents a weighted fusion deep learning network (WFDLN) to automatically extract features and classify DR stages from fundus scans. The proposed framework aims to address the issue of low image quality and identify retinopathy symptoms in fundus images. Two channels of fundus images, namely the contrast-limited adaptive histogram equalization (CLAHE) fundus images and the contrast-enhanced Canny edge detection (CECED) fundus images, are processed by WFDLN. Fundus-related features of the CLAHE images are extracted by a fine-tuned Inception V3, whereas the features of the CECED fundus images are extracted using a fine-tuned VGG-16. Both channels' outputs are merged in a weighted approach, and softmax classification is used to determine the final recognition result. Experimental results show that the proposed network can identify the DR stages with high accuracy. On the Messidor dataset, the proposed method reports an accuracy of 98.5%, sensitivity of 98.9%, and specificity of 98.0%; on the Kaggle dataset, it reports an accuracy of 98.0%, sensitivity of 98.7%, and specificity of 97.8%. Compared with other models, our proposed network achieves comparable performance.
9
Shamila Ebenezer A, Deepa Kanmani S, Sivakumar M, Jeba Priya S. Effect of image transformation on EfficientNet model for COVID-19 CT image classification. Mater Today Proc 2021; 51:2512-2519. [PMID: 34926175] [PMCID: PMC8666302] [DOI: 10.1016/j.matpr.2021.12.121]
Abstract
The novel coronavirus of 2019 has drastically affected millions of people all around the world and has been a major threat to the human race since its emergence in 2019. Chest CT images are considered one of the indicative sources for diagnosing COVID-19 by most researchers in the community, and several researchers have proposed models for predicting COVID-19 from CT images using artificial-intelligence-based algorithms (Alimadadi et al., 2020 [19]; Srinivasa Rao and Vazquez, 2020 [20]; Vaishya et al., 2020 [21]). EfficientNet is a powerful Convolutional Neural Network model proposed by Tan and Le (2019). The objective of this study is to explore the effect of image enhancement algorithms, namely the Laplace transform, wavelet transforms, adaptive gamma correction, and contrast limited adaptive histogram equalization (CLAHE), on chest CT images for COVID-19 classification using the EfficientNet algorithm. The SARS-COV-2 dataset (Soares et al., 2020) is used in this study. The images were preprocessed and brightness-augmented, the EfficientNet algorithm was implemented, and its performance was evaluated with each of the four image enhancement algorithms. The CLAHE-based EfficientNet model yielded an accuracy of 94.56%, precision of 95%, recall of 91%, and F1 of 93%. This study shows that adding CLAHE image enhancement to the EfficientNet model improves the performance of this powerful Convolutional Neural Network model in classifying CT images for COVID-19.
Affiliation(s)
- A Shamila Ebenezer
- Department of Computer Science and Engineering, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu 641114, India
- S Deepa Kanmani
- Department of Information Technology, Sri Krishna College of Engineering and Technology, Coimbatore, Tamil Nadu 641008, India
- Mahima Sivakumar
- Department of Computer Science and Engineering, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu 641114, India
- S Jeba Priya
- Department of Computer Science and Engineering, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu 641114, India
10
Hanlon KL, Wei G, Braue J, Correa-Selm L, Grichnik JM. Improving dermal level images from reflectance confocal microscopy using wavelet-based transformations and adaptive histogram equalization. Lasers Surg Med 2021; 54:384-391. [PMID: 34633691] [DOI: 10.1002/lsm.23483]
Abstract
OBJECTIVES Reflectance confocal microscopy (RCM) generates scalar image data from serial depths in the skin, allowing in vivo examination of cellular features. The maximum imaging depth of RCM is approximately 250 µm, to the papillary dermis, or upper reticular dermis. Frequently, important diagnostic features are present in the dermis, hence improved visualization of deeper levels is advantageous. METHODS Low contrast and noise in dermal images were improved by employing a combination of wavelet-based transformations and contrast-limited adaptive histogram equalization. RESULTS Preserved details, noise reduction, increased contrast, and feature enhancement were observed in the resulting processed images. CONCLUSIONS Complex and combined wavelet-based enhancement approaches for dermal level images yielded reconstructions of higher quality than less sophisticated histogram-based strategies. Image optimization may improve the diagnostic accuracy of RCM, especially for entities with dermal findings.
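A minimal version of a wavelet-based enhancement step can be sketched with a one-level Haar transform: small detail coefficients are treated as noise and zeroed, the remaining details are amplified as features, and the image is reconstructed. This is a generic illustration of the approach, not the authors' exact RCM pipeline (which combines several wavelet operations with CLAHE); all names and thresholds are illustrative:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition: approximation + 3 detail bands.
    Assumes even height and width."""
    a = img.astype(np.float64)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0    # column-pair averages
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0    # column-pair differences
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    lo = np.empty((2 * h, w)); hi = np.empty((2 * h, w))
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    out = np.empty((2 * h, 2 * w))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out

def wavelet_enhance(img, gain=1.5, noise_floor=2.0):
    """Zero small detail coefficients (noise), amplify the rest (features)."""
    ll, lh, hl, hh = haar2d(img)
    def shrink(d):
        return np.where(np.abs(d) < noise_floor, 0.0, d * gain)
    rec = ihaar2d(ll, shrink(lh), shrink(hl), shrink(hh))
    return np.clip(rec, 0, 255).astype(np.uint8)
```

Libraries such as PyWavelets offer multi-level decompositions and richer wavelet families; the Haar pair is used here only because its forward and inverse transforms fit in a few lines.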
Affiliation(s)
- Katharine L Hanlon
- Department of Cutaneous Oncology, Cleveland Clinic Indian River Hospital, Scully Welsh Cancer Center, Vero Beach, Florida, USA
- Morsani College of Medicine, University of South Florida, Tampa, Florida, USA
- Grace Wei
- Morsani College of Medicine, University of South Florida, Tampa, Florida, USA
- Jonathan Braue
- Department of Cutaneous Oncology, Cleveland Clinic Indian River Hospital, Scully Welsh Cancer Center, Vero Beach, Florida, USA
- Lilia Correa-Selm
- Department of Cutaneous Oncology, Cleveland Clinic Indian River Hospital, Scully Welsh Cancer Center, Vero Beach, Florida, USA
- Morsani College of Medicine, University of South Florida, Tampa, Florida, USA
- James M Grichnik
- Department of Cutaneous Oncology, Cleveland Clinic Indian River Hospital, Scully Welsh Cancer Center, Vero Beach, Florida, USA
- Morsani College of Medicine, University of South Florida, Tampa, Florida, USA
11
Tasci E, Uluturk C, Ugur A. A voting-based ensemble deep learning method focusing on image augmentation and preprocessing variations for tuberculosis detection. Neural Comput Appl 2021; 33:15541-15555. [PMID: 34121816] [PMCID: PMC8182991] [DOI: 10.1007/s00521-021-06177-2]
Abstract
Tuberculosis (TB) is a potentially dangerous infectious disease that mainly affects the lungs worldwide. Detecting and treating TB at an early stage is critical for containing the disease and decreasing the risk of mortality and of transmission to others. Nowadays, chest radiography (CXR), the most common medical imaging technique, is useful for identifying thoracic diseases, and computer-aided detection (CADe) systems are crucial for providing more reliable, efficient, and systematic approaches while accelerating clinicians' decision-making. In this study, we propose a voting- and preprocessing-variation-based ensemble CNN model for TB detection. We utilize 40 different variations of fine-tuned CNN models based on InceptionV3 and Xception, using the CLAHE (contrast-limited adaptive histogram equalization) preprocessing technique and 10 different image transformations for data augmentation. After analyzing all these combination schemes, the three or five best classifier models are selected as base learners for voting. We apply Bayesian-optimization-based weighted voting and the average of probabilities as the combination rule in soft voting on two TB CXR image datasets to obtain better results across various numbers of models. The computational results indicate that the proposed method achieves accuracy rates of 97.500% and 97.699% on the Montgomery and Shenzhen datasets, respectively, outperforming state-of-the-art results on both TB detection datasets.
Affiliation(s)
- Erdal Tasci, Computer Engineering Department, Ege University, Izmir, Turkey
- Caner Uluturk, Computer Engineering Department, Ege University, Izmir, Turkey
- Aybars Ugur, Computer Engineering Department, Ege University, Izmir, Turkey

12
Alwazzan MJ, Ismael MA, Ahmed AN. A Hybrid Algorithm to Enhance Colour Retinal Fundus Images Using a Wiener Filter and CLAHE. J Digit Imaging 2021; 34:750-759. [PMID: 33885992 DOI: 10.1007/s10278-021-00447-0] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2020] [Revised: 05/11/2020] [Accepted: 03/19/2021] [Indexed: 10/21/2022] Open
Abstract
Digital images used in the field of ophthalmology are among the most important inputs for the automatic detection of certain eye diseases. These processes include image enhancement as a primary step to assist optometrists in identifying diseases. Therefore, many algorithms and methods have been developed for enhancing retinal fundus images, which may face challenges that typically accompany enhancement, such as artificial borders and dim lighting that mask image details. To eliminate these problems, a new algorithm is proposed in this paper based on separating colour images into three channels (red, green, and blue). The green channel is passed through a Wiener filter and reinforced using the CLAHE technique before being merged with the original red and blue channels. Reducing noise in the green channel with this approach proves more effective than doing so in the other colour channels. Results from the Contrast Improvement Index (CII) and the linear index of fuzziness test indicate the success of the proposed algorithm, compared with alternative algorithms, in improving the visibility of blood vessels and other details within ten test fundus images selected from the DRIVE database.
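The channel-separation pipeline described here can be sketched with plain NumPy. Two simplifications are assumed and are not the paper's method: a 3x3 mean filter stands in for the Wiener filter, and a global clip-limited equalization stands in for CLAHE (real CLAHE equalizes per tile with bilinear interpolation); the `clip_limit` value is illustrative.

```python
import numpy as np

def clip_limited_equalize(channel, clip_limit=0.02, bins=256):
    """Histogram equalization with a clip limit -- a simplified, global
    stand-in for CLAHE."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, 255))
    hist = hist.astype(float) / channel.size
    excess = np.maximum(hist - clip_limit, 0).sum()      # mass above the limit
    hist = np.minimum(hist, clip_limit) + excess / bins  # redistribute uniformly
    cdf = np.cumsum(hist)
    lut = np.round(255 * cdf / cdf[-1]).astype(np.uint8)
    return lut[channel.astype(np.uint8)]

def enhance_fundus(rgb, clip_limit=0.02):
    """Denoise and equalize the green channel only, then re-merge it with
    the untouched red and blue channels, following the paper's pipeline."""
    out = rgb.copy()
    green = rgb[..., 1].astype(float)
    # 3x3 mean filter as a crude denoising stand-in for the Wiener filter.
    pad = np.pad(green, 1, mode='edge')
    smoothed = sum(pad[i:i + green.shape[0], j:j + green.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0
    out[..., 1] = clip_limited_equalize(np.clip(smoothed, 0, 255), clip_limit)
    return out
```

The key design point carried over from the paper is that only the green channel is filtered and equalized; the red and blue channels pass through unchanged.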
13
Li X, Li T, Zhao H, Dou Y, Pang C. Medical image enhancement in F-shift transformation domain. Health Inf Sci Syst 2019; 7:13. [PMID: 31354951 DOI: 10.1007/s13755-019-0075-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2019] [Accepted: 07/15/2019] [Indexed: 11/26/2022] Open
Abstract
Image enhancement technology plays an important role in the diagnosis and treatment of medical diseases. In this paper, we propose a method to automatically enhance medical images; it can be used to support clinical diagnosis, adjuvant therapy, and the assessment of curative effect. The scheme applies the contrast limited adaptive histogram equalization (CLAHE) method in the F-shift transformation domain. First, we adjust the overall brightness of the underexposed or overexposed image. Second, we apply CLAHE to the low-frequency components obtained by a one-level two-dimensional F-shift transformation (TDFS) of the adjusted image; at this stage, most of the coefficients in the high-frequency components can be set to zero by properly choosing the error bound. We then use the inverse transformation to reconstruct the image, which is further enhanced with CLAHE. Compared to previous work, this approach takes into account not only image enhancement but also data compression. Experimental results and comparisons with state-of-the-art methods show that our proposed method achieves better enhancement performance. Moreover, it has a certain data compression capability.
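The pipeline structure (brightness adjustment, equalize the low-frequency band, zero small high-frequency coefficients, inverse transform) can be sketched as below. Two loud assumptions: a one-level 2D Haar decomposition stands in for the F-shift transformation, and a rank-based global equalization stands in for CLAHE; `brightness_target` and `error_bound` are illustrative parameters, not the paper's.

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar decomposition -- a stand-in for the F-shift
    transform, which likewise yields one low-frequency and several
    high-frequency components. Requires even height and width."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 4.0,   # LL (approximation)
            (a - b + c - d) / 4.0,   # LH
            (a + b - c - d) / 4.0,   # HL
            (a - b - c + d) / 4.0)   # HH

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll - lh + hl - hh
    out[1::2, 0::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def enhance(img, brightness_target=128.0, error_bound=2.0):
    """Brightness-adjust, equalize the LL band, zero small high-frequency
    coefficients (the compressible part), then reconstruct."""
    img = img.astype(float) * (brightness_target / max(img.mean(), 1e-9))
    ll, lh, hl, hh = haar2d(np.clip(img, 0, 255))
    flat = ll.ravel()                       # rank-based global equalization
    ranks = flat.argsort().argsort()
    ll = (255.0 * ranks / (flat.size - 1)).reshape(ll.shape)
    lh, hl, hh = (np.where(np.abs(x) < error_bound, 0.0, x)
                  for x in (lh, hl, hh))
    return np.clip(ihaar2d(ll, lh, hl, hh), 0, 255)
```

The compression claim in the abstract corresponds to the `error_bound` step: coefficients zeroed there cost nothing to store, at the price of a bounded reconstruction error.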
Affiliation(s)
- Xiaoyun Li, Institute of Applied Mathematics, Hebei Academy of Sciences, Shijiazhuang, China; Hebei Authentication Technology Engineering Research Center, Shijiazhuang, China
- Tongliang Li, Institute of Applied Mathematics, Hebei Academy of Sciences, Shijiazhuang, China; Hebei Authentication Technology Engineering Research Center, Shijiazhuang, China
- Huanyu Zhao, Institute of Applied Mathematics, Hebei Academy of Sciences, Shijiazhuang, China; Hebei Authentication Technology Engineering Research Center, Shijiazhuang, China
- Yuwei Dou, Amador Valley High School, 1155 Santa Rita Rd., Pleasanton, CA, USA
- Chaoyi Pang, The School of Computer and Data Engineering, Zhejiang University (NIT), Ningbo, China

14
Abstract
We present a novel technique to distinguish between an original image and its histogram-equalized version. Histogram equalization and superpixel segmentation methods such as SLIC (simple linear iterative clustering) are very popular image processing tools. Based on these two concepts, we introduce a method for determining whether a grayscale image has been histogram equalized. This matters because images can look visually similar even when one has been altered by an enhancement process such as histogram equalization; from visual inspection alone we can merely infer whether an image is dark, bright, or has a small dynamic range. Moreover, we also compare the results of SLIC superpixels with three other superpixel segmentation algorithms, namely quick shift, watersheds, and Felzenszwalb's segmentation algorithm.
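The core observation behind this kind of detector is that global histogram equalization flattens the intensity CDF toward a straight line. The sketch below uses that property directly via a global CDF-deviation heuristic; it is not the authors' SLIC-superpixel-based algorithm, and the `tol` threshold is an illustrative assumption.

```python
import numpy as np

def equalize(gray):
    """Standard global histogram equalization for a uint8 grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = np.cumsum(hist) / gray.size
    return np.round(255 * cdf).astype(np.uint8)[gray]

def looks_equalized(gray, tol=0.05):
    """Heuristic detector: an equalized image has a near-linear intensity
    CDF, so flag images whose CDF stays close to the uniform CDF."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = np.cumsum(hist) / gray.size
    uniform = np.arange(1, 257) / 256.0
    return np.max(np.abs(cdf - uniform)) < tol
```

A low-contrast image concentrates its CDF in a narrow band and is flagged as not equalized; after running `equalize` on it, its CDF becomes near-uniform and the detector flips.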
Affiliation(s)
- Li Yao, School of Computer Science and Engineering, Southeast University, Nanjing, P.R. China; Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing, P.R. China
- Sohail Muhammad, School of Computer Science and Engineering, Southeast University, Nanjing, P.R. China

15
Abstract
Pathological disorders may arise from small changes in retinal blood vessels that can later lead to blindness. Hence, accurate segmentation of blood vessels is a challenging task for pathological analysis. This paper offers an unsupervised recursive method for extracting blood vessels from ophthalmoscope images. First, a vessel-enhanced image is generated with the help of gamma correction and contrast-limited adaptive histogram equalization (CLAHE). Next, the vessels are extracted iteratively by applying an adaptive thresholding technique. Finally, the segmented vessel image is produced by applying a morphological cleaning operation. Evaluations are conducted on the publicly available Digital Retinal Images for Vessel Extraction (DRIVE) and Child Heart and Health Study in England (CHASE_DB1) databases using nine different measurements. The proposed method achieves average accuracies of 0.957 and 0.952 on the DRIVE and CHASE_DB1 databases, respectively.
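The three stages above (enhancement, adaptive thresholding, morphological cleaning) can be sketched in NumPy. Several loud simplifications versus the paper: global equalization stands in for CLAHE, a single local-mean thresholding pass replaces the recursive extraction, and a neighbor-count filter replaces the full morphological cleaning; `gamma`, `k`, `offset`, and `min_neighbors` are illustrative parameters.

```python
import numpy as np

def local_mean(img, k=15):
    """k-by-k box-filter local mean via an integral image (edge-replicated)."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='edge')
    s = np.pad(np.cumsum(np.cumsum(p, axis=0), axis=1), ((1, 0), (1, 0)))
    h, w = img.shape
    win = s[k:k + h, k:k + w] - s[:h, k:k + w] - s[k:k + h, :w] + s[:h, :w]
    return win / (k * k)

def segment_vessels(gray, gamma=0.8, k=15, offset=8.0, min_neighbors=2):
    """Gamma-correct and equalize to enhance vessels, threshold pixels that
    are darker than their local mean, then drop sparsely connected pixels."""
    g = 255.0 * (gray / 255.0) ** gamma                  # gamma correction
    # global equalization as a simplified stand-in for CLAHE
    idx = np.clip(g, 0, 255).astype(np.uint8)
    hist = np.bincount(idx.ravel(), minlength=256)
    e = np.round(255 * np.cumsum(hist) / g.size)[idx]
    vessels = e < (local_mean(e, k) - offset)            # vessels are dark
    # cleaning: keep pixels with enough 8-connected foreground neighbors
    p = np.pad(vessels, 1).astype(int)
    h, w = vessels.shape
    nb = sum(p[1 + di:1 + di + h, 1 + dj:1 + dj + w]
             for di in (-1, 0, 1) for dj in (-1, 0, 1)) - vessels
    return vessels & (nb >= min_neighbors)
```

On a synthetic image with one dark line on a bright background, the line survives thresholding while isolated noise pixels and the poorly connected line endpoints are removed by the cleanup step.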