1
Kim K, Lee JH, Je Oh S, Chung MJ. AI-based computer-aided diagnostic system of chest digital tomography synthesis: Demonstrating comparative advantage with X-ray-based AI systems. Computer Methods and Programs in Biomedicine 2023; 240:107643. [PMID: 37348439] [DOI: 10.1016/j.cmpb.2023.107643]
Abstract
BACKGROUND Compared with chest X-ray (CXR) imaging, which uses a single image projected from the front of the patient, chest digital tomosynthesis (CDTS) imaging can be more advantageous for lung lesion detection because it acquires multiple images projected from multiple angles. Various clinical comparative analyses and verification studies have been reported to demonstrate this, but there are no artificial intelligence (AI)-based comparative studies. Existing AI-based computer-aided detection (CAD) systems for lung lesion diagnosis have been developed mainly from CXR images; a CAD system based on CDTS, which uses multi-angle images of the patient, has not been proposed or verified against its CXR-based counterparts. OBJECTIVE This study develops and tests a CDTS-based AI CAD system for detecting lung lesions and demonstrates its performance improvement over CXR-based AI CAD. METHODS We used multiple (five) projection images as input for the CDTS-based AI model and a single projection image as input for the CXR-based AI model to compare and evaluate performance. Multiple/single projection input images were obtained by virtual projection on the three-dimensional (3D) stack of computed tomography (CT) slices of each patient's lungs, from which the bed area was removed. The five images correspond to projections from the front and from 30° and 60° to the left and right. The frontal projection served as input for the CXR-based AI model, while the CDTS-based AI model used all five projections. The proposed CDTS-based model consists of five AI models, one per projection direction, whose predictions are combined in an ensemble to obtain the final result. Each model used WideResNet-50. To train and evaluate the CXR- and CDTS-based AI models, data from 500 healthy, 206 tuberculosis, and 242 pneumonia cases were used with three-fold cross-validation. RESULTS The proposed CDTS-based AI CAD system yielded sensitivities of 0.782 and 0.785 and accuracies of 0.895 and 0.837 for the binary classification of tuberculosis and pneumonia, respectively, against normal subjects. These results exceed the sensitivities of 0.728 and 0.698 and accuracies of 0.874 and 0.826 obtained by the CXR-based AI CAD, which uses only a single frontal projection. CDTS-based AI CAD improved the sensitivity for tuberculosis and pneumonia by 5.4% and 8.7%, respectively, compared with CXR-based AI CAD, without loss of accuracy. CONCLUSIONS This study demonstrates comparatively that CDTS-based AI CAD can outperform CXR-based AI CAD, suggesting enhanced clinical applicability of CDTS. Our code is available at https://github.com/kskim-phd/CDTS-CAD-P.
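The final fusion step of the five-view ensemble can be illustrated in isolation. The sketch below averages softmax probabilities from five per-view classifiers; the logits are toy values, not outputs of the authors' WideResNet-50 models (see their repository for the actual implementation):

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_predict(per_view_logits):
    """Average the per-view class probabilities and return the final
    class index (0 = normal, 1 = lesion) plus the averaged probabilities."""
    probs = np.stack([softmax(l) for l in per_view_logits])  # (n_views, n_classes)
    mean_probs = probs.mean(axis=0)
    return int(mean_probs.argmax()), mean_probs

# toy logits for five views (front, left/right 30 and 60 degrees), 2 classes
views = [np.array([0.2, 1.1]), np.array([0.5, 0.9]), np.array([1.2, 0.3]),
         np.array([0.1, 1.4]), np.array([0.4, 1.0])]
label, p = ensemble_predict(views)
```

Here four of the five views favor the lesion class, so the averaged probability does too, even though one view disagrees; that robustness to a single bad view is the usual motivation for per-view ensembling.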
Affiliation(s)
- Kyungsu Kim
- Medical AI Research Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul 06351, Republic of Korea; Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul 06351, Republic of Korea
- Ju Hwan Lee
- Department of Health Sciences and Technology, SAIHST, Sungkyunkwan University, Seoul 06351, Republic of Korea
- Seong Je Oh
- Department of Health Sciences and Technology, SAIHST, Sungkyunkwan University, Seoul 06351, Republic of Korea
- Myung Jin Chung
- Medical AI Research Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul 06351, Republic of Korea; Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Seoul 06351, Republic of Korea; Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul 06351, Republic of Korea
2
Arya AD, Verma SS, Chakarabarti P, Chakrabarti T, Elngar AA, Kamali AM, Nami M. A systematic review on machine learning and deep learning techniques in the effective diagnosis of Alzheimer's disease. Brain Inform 2023; 10:17. [PMID: 37450224] [PMCID: PMC10349019] [DOI: 10.1186/s40708-023-00195-7]
Abstract
Alzheimer's disease (AD) is a brain disease in which the patient's condition worsens with time. AD cannot be cured by medication: it is impossible to halt the death of brain cells, although medication can delay the effects of the disease. Because not all patients with mild cognitive impairment (MCI) will develop AD, early diagnosis must accurately determine whether an MCI patient will convert to AD (MCI converter, MCI-C) or not (MCI non-converter, MCI-NC). Two modalities, positron emission tomography (PET) and magnetic resonance imaging (MRI), are used by physicians for the diagnosis of AD. Machine learning and deep learning perform exceptionally well in computer vision, where information must be extracted from high-dimensional data, and researchers use deep learning models in medicine for diagnosis, prognosis, and even to predict the future health of patients under medication. This study is a systematic review of publications that use machine learning and deep learning methods for early classification of normal cognition (NC) and Alzheimer's disease (AD). It details the two most commonly used modalities, PET and MRI, for the identification of AD and evaluates the performance of both modalities with different classifiers.
Affiliation(s)
- Ahmed A. Elngar
- Faculty of Computers and Artificial Intelligence, Beni-Suef University, Beni-Suef, 62511, Egypt
- Ali-Mohammad Kamali
- Department of Neuroscience, School of Advanced Medical Sciences and Technologies, Shiraz University of Medical Sciences, Shiraz, Iran
- Mohammad Nami
- Cognitive Neuropsychology Unit, Department of Social Sciences, Canadian University Dubai, Dubai, UAE
3
Lal S. TC-SegNet: robust deep learning network for fully automatic two-chamber segmentation of two-dimensional echocardiography. Multimedia Tools and Applications 2023:1-19. [PMID: 37362663] [PMCID: PMC10238771] [DOI: 10.1007/s11042-023-15524-5]
Abstract
Heart chamber quantification is an essential clinical task for analyzing heart abnormalities, evaluating heart volume estimated from the endocardial border of the chambers. A precise heart chamber segmentation algorithm for echocardiography is essential for improving the diagnosis of cardiac disease. This paper proposes a robust two-chamber segmentation network (TC-SegNet) for echocardiography that follows a U-Net architecture and effectively incorporates the proposed modified skip connection, Atrous Spatial Pyramid Pooling (ASPP) modules, and squeeze-and-excitation (SE) modules. TC-SegNet is evaluated on the open-source, fully annotated dataset of cardiac acquisitions for multi-structure ultrasound segmentation (CAMUS). The proposed TC-SegNet obtained an average F1-score of 0.91, an average Dice score of 0.9284, and an IoU score of 0.8322, all higher than the reference models used for comparison, and a pixel error (PE) of 1.5109, significantly lower than that of the comparison models. The segmentation results and metrics show that the proposed model outperforms state-of-the-art segmentation methods.
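The squeeze-and-excitation recalibration that TC-SegNet incorporates can be sketched independently of the full network. Below is a minimal NumPy version; the random weights and the reduction ratio of 2 are illustrative assumptions, since the paper's hyperparameters are not reproduced in this listing:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(feature_map, w1, w2):
    """Squeeze-and-excitation: globally pool each channel, pass the channel
    descriptor through a two-layer bottleneck, and rescale the channels by
    the resulting gates in (0, 1)."""
    squeezed = feature_map.mean(axis=(1, 2))   # squeeze: (C, H, W) -> (C,)
    hidden = np.maximum(0.0, w1 @ squeezed)    # excitation, layer 1 (ReLU)
    gates = sigmoid(w2 @ hidden)               # excitation, layer 2 (sigmoid)
    return feature_map * gates[:, None, None]  # channel-wise recalibration

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                        # toy sizes, reduction ratio r
x = rng.normal(size=(C, H, W))
w1 = rng.normal(size=(C // r, C))
w2 = rng.normal(size=(C, C // r))
y = squeeze_excite(x, w1, w2)
```

Because the gates lie strictly between 0 and 1, the block can only attenuate channels, never amplify them; the network learns which channels to keep.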
Affiliation(s)
- Shyam Lal
- Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, Mangaluru, 575025, Karnataka, India
4
Gupta A, Mishra S, Sahu SC, Srinivasarao U, Naik KJ. Application of Convolutional Neural Networks for COVID-19 Detection in X-ray Images Using InceptionV3 and U-Net. New Generation Computing 2023; 41:475-502. [PMID: 37229179] [PMCID: PMC10173914] [DOI: 10.1007/s00354-023-00217-2]
Abstract
COVID-19 expanded across the globe after its initial cases were discovered in December 2019 in Wuhan, China. Because the virus has impacted people's health worldwide, its fast identification is essential for preventing disease spread and reducing mortality rates. The reverse transcription polymerase chain reaction (RT-PCR) test is the leading method for detecting COVID-19, but it has high costs and long turnaround times, so quick and easy-to-use diagnostic instruments are required. Recent studies have linked COVID-19 to characteristic findings in chest X-ray images. The suggested approach includes a pre-processing stage with lung segmentation, removing the surroundings that do not provide information pertinent to the task and may bias the results. The InceptionV3 and U-Net deep learning models used in this work process the X-ray images and classify them as COVID-19 negative or positive. The CNN model was trained using a transfer learning approach. Finally, the findings are analyzed and interpreted through different examples. The obtained COVID-19 detection accuracy is around 99% for the best models.
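The transfer-learning step, freezing a pretrained backbone and training only a new classification head, can be illustrated without the InceptionV3 weights themselves. The sketch below trains a logistic-regression head on random stand-in "frozen features"; the feature dimension, sample count, and labels are all synthetic assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical stand-ins for frozen-backbone embeddings: 200 images x 64 dims
features = rng.normal(size=(200, 64))
labels = (features[:, 0] + 0.5 * features[:, 1] > 0).astype(float)  # toy labels

w = np.zeros(64)
b = 0.0
lr = 0.1
for _ in range(500):                     # train only the new head; backbone stays frozen
    z = features @ w + b
    p = 1.0 / (1.0 + np.exp(-z))         # sigmoid: predicted P(positive)
    grad_z = (p - labels) / len(labels)  # gradient of mean binary cross-entropy w.r.t. z
    w -= lr * (features.T @ grad_z)
    b -= lr * grad_z.sum()

acc = float(((p > 0.5) == labels).mean())
```

In a real pipeline the `features` array would come from the penultimate layer of a pretrained InceptionV3 run over the segmented lung images; only the small head above is updated, which is why transfer learning works with limited medical data.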
Affiliation(s)
- Aman Gupta
- Department of Computer Science and Engineering, National Institute of Technology Raipur, Raipur, Chhattisgarh, India
- Shashank Mishra
- Department of Computer Science and Engineering, National Institute of Technology Raipur, Raipur, Chhattisgarh, India
- Sourav Chandan Sahu
- Department of Computer Science and Engineering, National Institute of Technology Raipur, Raipur, Chhattisgarh, India
- Ulligaddala Srinivasarao
- Department of Computer Science and Engineering, National Institute of Technology Raipur, Raipur, Chhattisgarh, India
- K. Jairam Naik
- Department of Computer Science and Engineering, National Institute of Technology Raipur, Raipur, Chhattisgarh, India
5
Ramtekkar PK, Pandey A, Pawar MK. Accurate detection of brain tumor using optimized feature selection based on deep learning techniques. Multimedia Tools and Applications 2023:1-31. [PMID: 37362641] [PMCID: PMC10126578] [DOI: 10.1007/s11042-023-15239-7]
Abstract
An abnormal growth of cells inside the brain that disturbs its normal functioning is called a brain tumor, and it has claimed many lives. Timely detection and proper treatment are needed to save people from this disease. Finding tumor-affected cells in the human brain is a cumbersome and time-consuming task, and the accuracy and time required to detect brain tumors remain major challenges in image processing. This research paper proposes a novel, accurate, and optimized system to detect brain tumors. The system comprises preprocessing, segmentation, feature extraction, optimization, and detection stages. For preprocessing, the system uses a compound filter composed of Gaussian, mean, and median filters. Threshold and histogram techniques are applied for image segmentation, and the grey level co-occurrence matrix (GLCM) is used for feature extraction. An optimized convolutional neural network (CNN) is applied, using whale optimization and grey wolf optimization for best feature selection, and detection of brain tumors is achieved through a CNN classifier. The system's performance is compared with another modern optimization technique using accuracy, precision, and recall, demonstrating the superiority of this work. The system is implemented in the Python programming language, and its brain tumor detection accuracy has been measured at 98.9%.
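The compound filter can be sketched directly. The abstract names Gaussian, mean, and median components but not their composition order, so the ordering below is an assumption; the example shows the filter suppressing a single impulse:

```python
import numpy as np

def filt(img, reducer):
    """Apply a 3x3 neighbourhood reducer with edge padding."""
    p = np.pad(img, 1, mode='edge')
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = reducer(p[i:i + 3, j:j + 3])
    return out

GAUSS_3X3 = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0

def compound_filter(img):
    """Gaussian -> mean -> median composition (order assumed)."""
    g = filt(img, lambda w: float((w * GAUSS_3X3).sum()))
    m = filt(g, np.mean)
    return filt(m, np.median)

noisy = np.zeros((7, 7))
noisy[3, 3] = 100.0                # a single impulse ("salt" noise)
smooth = compound_filter(noisy)
```

The impulse of height 100 is attenuated to a small bump by the three smoothing passes, which is the behaviour a preprocessing compound filter is chosen for.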
Affiliation(s)
- Praveen Kumar Ramtekkar
- University Institute of Technology, Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal, Madhya Pradesh, India
- Anjana Pandey
- University Institute of Technology, Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal, Madhya Pradesh, India
- Mahesh Kumar Pawar
- University Institute of Technology, Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal, Madhya Pradesh, India
6
Qiu S, Ma J, Ma Z. IRCM-Caps: An X-ray image detection method for COVID-19. The Clinical Respiratory Journal 2023; 17:364-373. [PMID: 36922395] [DOI: 10.1111/crj.13599]
Abstract
OBJECTIVE COVID-19 is ravaging the world, but traditional reverse transcription-polymerase chain reaction (RT-PCR) tests are time-consuming, have a high false-negative rate, and depend on scarce medical equipment. Lung imaging screening methods have therefore been proposed to diagnose COVID-19 because of their fast test speed. Currently, the commonly used convolutional neural network (CNN) models require large datasets, and the accuracy of the basic capsule network for multi-class classification is limited. For this reason, this paper proposes a novel model based on CNN and CapsNet. METHODS The proposed model integrates CNN and CapsNet, and an attention-mechanism module and a multi-branch lightweight module are applied to enhance performance. The contrast-limited adaptive histogram equalization (CLAHE) algorithm is used to preprocess the images and enhance image contrast. The preprocessed images are input into the network for training, with ReLU as the activation function, and the parameters are adjusted to reach the optimum. RESULTS The test dataset includes 1200 X-ray images (400 COVID-19, 400 viral pneumonia, and 400 normal); we replace the CNN with VGG16, InceptionV3, Xception, Inception-ResNet-v2, ResNet50, DenseNet121, and MobileNetV2 and integrate each with CapsNet. Compared with CapsNet, this network improves accuracy, area under the curve (AUC), precision, recall, and F1 score by 6.96%, 7.83%, 9.37%, 10.47%, and 10.38%, respectively. In the binary classification experiment, compared with CapsNet, accuracy, AUC, precision, recall, and F1 score increased by 5.33%, 5.34%, 2.88%, 8.00%, and 5.56%, respectively. CONCLUSION The proposed model embeds the advantages of the traditional convolutional neural network and the capsule network and achieves a good classification effect on small COVID-19 X-ray image datasets.
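A minimal sketch of the contrast-limiting idea behind CLAHE, under stated simplifications: this is a single global mapping, whereas real CLAHE also tiles the image and bilinearly blends the per-tile mappings, and the clip fraction here is an arbitrary choice:

```python
import numpy as np

def clipped_equalize(img, clip_fraction=0.01, bins=256):
    """Simplified, global version of contrast-limited equalization:
    clip the histogram, redistribute the excess uniformly, then map
    intensities through the normalised CDF."""
    hist, edges = np.histogram(img, bins=bins, range=(0, 255))
    limit = max(1, int(clip_fraction * img.size))      # per-bin clip limit
    excess = np.clip(hist - limit, 0, None).sum()
    hist = np.minimum(hist, limit) + excess // bins    # redistribute excess
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255.0
    idx = np.clip(np.digitize(img, edges[:-1]) - 1, 0, bins - 1)
    return cdf[idx]

# toy low-contrast image: two grey levels only 10 apart
low_contrast = np.full((8, 8), 100.0)
low_contrast[::2, ::2] = 110.0
enhanced = clipped_equalize(low_contrast)
```

The two grey levels, originally 10 apart, are pushed far apart by the equalization, which is the contrast boost the paper uses to make X-ray structures easier for the network to learn. In practice one would use OpenCV's `createCLAHE` rather than this sketch.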
Affiliation(s)
- Shuo Qiu
- School of Computer Science and Engineering, North Minzu University, Yinchuan, China
- Jinlin Ma
- School of Computer Science and Engineering, North Minzu University, Yinchuan, China; Key Laboratory of Intelligent Information Processing of Image and Graphics, State Ethnic Affairs Commission, Yinchuan, China
- Ziping Ma
- School of Mathematics and Information Science, North Minzu University, Yinchuan, China
7
Barman U, Pathak C, Mazumder NK. Comparative assessment of Pest damage identification of coconut plant using damage texture and color analysis. Multimedia Tools and Applications 2023; 82:1-23. [PMID: 36712953] [PMCID: PMC9874181] [DOI: 10.1007/s11042-023-14369-2]
Abstract
Coconut cultivation is a promising agricultural activity, but keeping coconut plants pest-free requires cultivators to detect the various forms of pest damage. The processes cultivators currently use to detect pest damage in coconut plants are conventional methods, expert opinion, or laboratory techniques, which are not adequate for identifying coconut damage. In this study, 16 color and texture features are reported for 1265 coconut pest damage images, extracted in the color and grey domains after damage segmentation using a thresholding technique. The Gray Level Co-occurrence Matrix (GLCM) and Gray Level Run Length Matrix (GLRLM) techniques are applied to extract texture features of the damage, and two Artificial Neural Network (ANN) architectures are reported that classify the extracted features into five classes (Eriophyid_Mite, Rhinoceros_Beetle, Red_Palm_Weevil, Rugose_Spiraling_White_fly, and Rugose_in_Mature) with an average testing accuracy of almost 100%. For comparison with other machine learning techniques, Support Vector Machine (SVM), Decision Tree (DT), and Naïve Bayes (NB) classifiers are also introduced for damage identification; the SVM also reports almost 100% accuracy on the fused GLCM and GLRLM features. The results of the ANN and SVM are compared with the DT and NB classifiers using the confusion matrix, precision, recall, and F1-score. The ANN and SVM outperform on all metrics and can serve as base models for further study of coconut pest damage identification using deep learning techniques.
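GLCM feature extraction, which both this paper and entry 5 rely on, is compact enough to show in full. The sketch below builds a normalised co-occurrence matrix for one horizontal offset and derives three classic Haralick-style features; the toy image and the 4-level quantisation are assumptions:

```python
import numpy as np

def glcm(img, levels=4, dx=1, dy=0):
    """Grey Level Co-occurrence Matrix for one pixel offset, normalised
    so that entries are joint probabilities of co-occurring grey levels."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            m[img[i, j], img[i + dy, j + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    """Contrast, energy, and homogeneity from a normalised GLCM."""
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    homogeneity = (p / (1.0 + np.abs(i - j))).sum()
    return contrast, energy, homogeneity

# toy 4-level image with smooth 2x2 blocks
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
P = glcm(img, levels=4)
c, e, hgy = glcm_features(P)
```

For this blocky image most horizontal neighbours share a grey level, so the GLCM mass sits near the diagonal and contrast stays low; a noisier texture would push mass off-diagonal and raise it.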
Affiliation(s)
- Utpal Barman
- Department of CSE, The Assam Kaziranga University, Jorhat, Assam, India
- Nirmal Kumar Mazumder
- Department of Plant Pathology, BN College of Agriculture, AAU, Biswanath Chariali, Assam, India
8
Chen H, Jiang Y, Ko H, Loew M. A teacher-student framework with Fourier Transform augmentation for COVID-19 infection segmentation in CT images. Biomed Signal Process Control 2023; 79:104250. [PMID: 36188130] [PMCID: PMC9510070] [DOI: 10.1016/j.bspc.2022.104250]
Abstract
Automatic segmentation of infected regions in computed tomography (CT) images is necessary for the initial diagnosis of COVID-19. Deep-learning-based methods have the potential to automate this task but require a large amount of data with pixel-level annotations. Training a deep network with annotated lung cancer CT images, which are easier to obtain, can alleviate this problem to some extent. However, this approach may suffer reduced performance on unseen COVID-19 images at test time because of differences in image intensity and object-region distribution between the training and test sets. In this paper, we propose a novel unsupervised method for COVID-19 infection segmentation that learns domain-invariant features from lung cancer and COVID-19 images to improve the generalization of the segmentation network to COVID-19 CT images. First, to address the intensity difference, we propose a novel data augmentation module based on the Fourier Transform, which transfers annotated lung cancer data into the style of COVID-19 images. Second, to reduce the distribution difference, we design a teacher-student network to learn rotation-invariant features for segmentation. The experiments demonstrate that, even without access to annotations of COVID-19 CT images during training, the proposed network achieves state-of-the-art segmentation performance on COVID-19 infection.
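The Fourier-Transform augmentation described here follows the general pattern of amplitude-spectrum swapping. The sketch below is a generic FDA-style version, not the authors' exact module: it keeps the source phase and replaces only the low-frequency amplitudes with those of a target-style image (the band fraction `beta` is an assumption):

```python
import numpy as np

def fourier_style_transfer(source, target, beta=0.1):
    """Swap the low-frequency amplitude spectrum of `source` for that of
    `target` while keeping the source phase, so content is preserved but
    global intensity 'style' follows the target."""
    fs = np.fft.fftshift(np.fft.fft2(source))
    ft = np.fft.fftshift(np.fft.fft2(target))
    amp_s, phase = np.abs(fs), np.angle(fs)
    amp_t = np.abs(ft)
    h, w = source.shape
    ch, cw = h // 2, w // 2                       # DC sits at the centre after fftshift
    bh, bw = max(1, int(beta * h)), max(1, int(beta * w))
    amp_s[ch - bh:ch + bh, cw - bw:cw + bw] = amp_t[ch - bh:ch + bh, cw - bw:cw + bw]
    mixed = amp_s * np.exp(1j * phase)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))

rng = np.random.default_rng(1)
src = rng.random((32, 32))          # stand-in "lung cancer" slice
tgt = rng.random((32, 32)) * 2.0    # brighter stand-in "COVID-19" style
out = fourier_style_transfer(src, tgt)
```

Because the swapped band includes the DC term, the output inherits the target's mean intensity while its spatial structure (phase) stays with the source, which is exactly the intensity-gap repair the paper wants from this augmentation.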
Affiliation(s)
- Han Chen
- School of Electrical Engineering, Korea University, Seoul, South Korea
- Yifan Jiang
- School of Electrical Engineering, Korea University, Seoul, South Korea
- Hanseok Ko
- School of Electrical Engineering, Korea University, Seoul, South Korea
- Murray Loew
- Biomedical Engineering, George Washington University, Washington, D.C., USA
9
Li W, Du L, Liao J, Yin D, Xu X. Classification of COVID-19 images based on transfer learning and feature fusion. The Imaging Science Journal 2022. [DOI: 10.1080/13682199.2022.2151724]
Affiliation(s)
- Wei Li
- School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong, People’s Republic of China
- Artificial Intelligence Key Laboratory of Sichuan Province, Yibin, People’s Republic of China
- Lingyan Du
- School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong, People’s Republic of China
- Artificial Intelligence Key Laboratory of Sichuan Province, Yibin, People’s Republic of China
- Jun Liao
- School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong, People’s Republic of China
- Artificial Intelligence Key Laboratory of Sichuan Province, Yibin, People’s Republic of China
- Dongsheng Yin
- School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong, People’s Republic of China
- Artificial Intelligence Key Laboratory of Sichuan Province, Yibin, People’s Republic of China
- Xiaoru Xu
- School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong, People’s Republic of China
- Artificial Intelligence Key Laboratory of Sichuan Province, Yibin, People’s Republic of China
10
Wang W, Liu S, Xu H, Deng L. COVIDX-LwNet: A Lightweight Network Ensemble Model for the Detection of COVID-19 Based on Chest X-ray Images. Sensors (Basel) 2022; 22:8578. [PMID: 36366277] [PMCID: PMC9655773] [DOI: 10.3390/s22218578]
Abstract
Recently, the COVID-19 pandemic has put great pressure on health systems around the world. One of the most common ways to detect COVID-19 is to use chest X-ray images, which have the advantage of being cheap and fast. However, in the early days of the COVID-19 outbreak, most studies applied pretrained convolutional neural network (CNN) models, with the features produced by the last convolutional layer passed directly into the classification head. In this study, the proposed ensemble model consists of three lightweight networks, Xception, MobileNetV2, and NasNetMobile, as feature extractors; three base classifiers are then obtained by adding a coordinated attention module, an LSTM, and a new classification head to each extractor. The classification results from the three base classifiers are fused by a confidence fusion method. Three publicly available chest X-ray datasets for COVID-19 testing were considered. Ternary (COVID-19, normal, and other pneumonia) and quaternary (COVID-19, normal, bacterial pneumonia, and viral pneumonia) classification was performed on the first two datasets, achieving high accuracy rates of 95.56% and 91.20%, respectively. The third dataset was used to compare the model's performance with other models and to assess its generalization ability across datasets. We performed a thorough ablation study on the first dataset to understand the impact of each proposed component, and we also produced visualizations: these saliency maps not only explain key prediction decisions of the model but also help radiologists locate areas of infection. Through extensive experiments, the results obtained by the proposed method were found to be comparable to state-of-the-art methods.
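The confidence fusion of the three base classifiers is not specified in detail in this listing; one simple, commonly used reading is sketched below, where each sample takes the prediction of the most confident base model (toy probabilities, hypothetical rule):

```python
import numpy as np

def confidence_fusion(prob_list):
    """Per sample, keep the prediction of the base classifier whose
    top-class probability is highest. This is one plausible reading of
    'confidence fusion', not necessarily the paper's exact rule."""
    probs = np.stack(prob_list)            # (n_models, n_samples, n_classes)
    conf = probs.max(axis=-1)              # (n_models, n_samples)
    best = conf.argmax(axis=0)             # most confident model per sample
    return probs[best, np.arange(probs.shape[1])]

# toy outputs from three base classifiers for one image, three classes
p1 = np.array([[0.6, 0.3, 0.1]])
p2 = np.array([[0.2, 0.7, 0.1]])
p3 = np.array([[0.34, 0.33, 0.33]])
fused = confidence_fusion([p1, p2, p3])
```

Unlike plain probability averaging, this rule lets a single decisive classifier override two hesitant ones, which is useful when the three backbones have complementary strengths.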
Affiliation(s)
- Shuxian Liu
- School of Information Science and Engineering, Xinjiang University, Urumqi 830017, China
11
Manoharan SN, Kumar KMVM, Vadivelan N. A Novel CNN-TLSTM Approach for Dengue Disease Identification and Prevention using IoT-Fog Cloud Architecture. Neural Process Lett 2022; 55:1951-1973. [PMID: 36039275] [PMCID: PMC9402409] [DOI: 10.1007/s11063-022-10971-x]
Abstract
One of the mosquito-borne pandemic viral infections is dengue, which is mostly transmitted to humans by female Aedes aegypti or Aedes albopictus mosquitoes. The expansion of dengue is mainly due to factors such as climate change, socioeconomic conditions, viral evolution, and globalization. The unavailability of a specific antiviral therapy or vaccine further increases the risk of dengue spreading. This raises the need for a novel technique that overcomes the complexities associated with dengue prediction, such as low reporting levels, misclassification, and incompatible disease-monitoring frameworks. This paper addresses these problems by integrating the Internet of Things (IoT), fog-cloud computing, and deep learning for efficient dengue monitoring. A compatible disease-monitoring framework is formed via IoT devices, and reports are created and transferred to healthcare facilities via the fog-cloud model. Misdiagnosis error is overcome using a novel hybrid Convolutional Neural Network (CNN) with Tanh Long and Short Term Memory (TLSTM), tuned by the Adaptive Teaching Learning Based Optimization (ATLBO) algorithm. The ATLBO-optimized CNN-TLSTM architecture analyzes dengue-related parameters such as soft bleeding, muscle pain, joint pain, skin rash, fever, water site, carbon dioxide, water-site humidity, and water-site temperature for efficient clinical decision-making and timely diagnosis. Experiments were conducted on a real-time dataset, and performance was validated using various metrics. Compared in terms of accuracy, F-measure, mean square error, and reliability, the proposed method offers superior dengue detection results over existing methods. The ATLBO-optimized hybrid CNN-TLSTM shows an accuracy of 96.9%, a precision of 95.7%, a recall of 96.8%, and an F-measure of 96.2%, which are relatively high compared with existing techniques. The proposed model can identify patients in a given geographical region and prevent disease emergencies through immediate diagnosis and by alerting healthcare officials to offer the stipulated services.
Affiliation(s)
- S. N. Manoharan
- Department of Computer Science & Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, Tamil Nadu, India
- K. M. V. Madan Kumar
- Computer Science and Engineering, Teegala Krishna Reddy Engineering College, Hyderabad, India
- N. Vadivelan
- Computer Science and Engineering, Teegala Krishna Reddy Engineering College, Hyderabad, India
12
Latha D, Bell TB, Sheela CJJ. Red lesion in fundus image with hexagonal pattern feature and two-level segmentation. Multimedia Tools and Applications 2022; 81:26143-26161. [PMID: 35368859] [PMCID: PMC8959564] [DOI: 10.1007/s11042-022-12667-9]
Abstract
Early identification of red lesions is essential in treating diabetic retinopathy to prevent loss of vision. This work proposes a red lesion detection algorithm that uses hexagonal-pattern-based features with two-level segmentation to detect hemorrhages and microaneurysms in fundus images. The proposed scheme first pre-processes the fundus image and then applies two-level segmentation: level 1 eliminates the background, while level 2 eliminates the blood vessels, which would otherwise introduce false positives. A hexagonal-pattern-based feature, which strongly differentiates lesion from non-lesion regions, is extracted from the red lesion candidates. The hexagonal pattern features are then used to train a recurrent neural network, which classifies candidates to eliminate false negatives. The proposed algorithm is evaluated on the ROC challenge, e-ophtha, DiaretDB1, and Messidor datasets using accuracy, recall, precision, F1 score, specificity, and AUC. The scheme provides an average accuracy, recall (sensitivity), precision, F1 score, specificity, and AUC of 95.48%, 84.54%, 97.3%, 90.47%, 86.81%, and 93.43%, respectively.
Affiliation(s)
- D. Latha
- Department of PG Computer Science, Nesamony Memorial Christian College, Marthandam, India
- T. Beula Bell
- Department of Computer Applications, Nesamony Memorial Christian College, Marthandam, India
- C. Jaspin Jeba Sheela
- Department of PG Computer Science, Nesamony Memorial Christian College, Marthandam, India
13
Panahi A, Askari Moghadam R, Akrami M, Madani K. Deep Residual Neural Network for COVID-19 Detection from Chest X-ray Images. SN Computer Science 2022; 3:169. [PMID: 35224513] [PMCID: PMC8860458] [DOI: 10.1007/s42979-022-01067-3]
Abstract
COVID-19 spread quickly throughout the world and became a pandemic. It has had a destructive effect on daily life, public health, and the global economy. It is crucial to identify positive patients as early as possible to limit further spread and to manage affected cases immediately, so demand for rapid diagnostic aids has grown. Recent findings from radiological imaging suggest that such images contain salient information about COVID-19, and advanced artificial intelligence (AI) methods coupled with radiological imaging can support its reliable diagnosis. Since radiography can reveal pneumonia infections, this research presents an accurate, automatic technique based on a deep residual network that analyzes chest X-ray images to monitor COVID-19 and diagnose confirmed patients. Physicians find it significantly challenging to separate COVID-19 from common viral and bacterial pneumonia, as COVID-19 is itself a viral infection. The proposed network is extended to perform detailed diagnostics for two multi-class classification tasks (COVID-19, normal, viral pneumonia; and COVID-19, normal, viral pneumonia, bacterial pneumonia) as well as binary classification. Compared with popular methods on public databases, the proposed algorithm provides an accuracy of 92.1% in classifying COVID-19, normal, viral pneumonia, and bacterial pneumonia cases. It can be applied to support radiologists in verifying their initial assessment.
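The structural idea behind the deep residual network is easy to show in isolation. In the toy block below, dense layers stand in for convolutions and all weights are synthetic; with a zero residual branch the block reduces to the identity map, which is what lets gradients flow through very deep stacks:

```python
import numpy as np

def residual_block(x, w1, w2):
    """y = ReLU(x + F(x)): the identity skip connection lets the block
    fall back to passing x through unchanged when F contributes nothing."""
    h = np.maximum(0.0, x @ w1)           # F's first layer (linear + ReLU stand-in for conv)
    return np.maximum(0.0, x + h @ w2)    # add the skip, then the final activation

rng = np.random.default_rng(0)
x = np.abs(rng.normal(size=(2, 16)))      # positive toy activations
zero1 = np.zeros((16, 16))
zero2 = np.zeros((16, 16))
identity_out = residual_block(x, zero1, zero2)   # F(x) = 0 -> block is the identity
```

Because each block only has to learn a residual correction on top of the identity, stacking dozens of them (as in the paper's backbone) does not degrade training the way plain deep stacks do.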
Affiliation(s)
- Amirhossein Panahi
- Faculty of New Sciences and Technologies, University of Tehran, Tehran, Iran
- Mohammadreza Akrami
- Faculty of New Sciences and Technologies, University of Tehran, Tehran, Iran
- Kurosh Madani
- LISSI Lab, Senart-FB Institute of Technology, University Paris Est-Creteil (UPEC), Lieusaint, France
14
Detection and Prevention of Virus Infection. Advances in Experimental Medicine and Biology 2022; 1368:21-52. [DOI: 10.1007/978-981-16-8969-7_2]