1. Liu X, Li W, Miao S, Liu F, Han K, Bezabih TT. HAMMF: Hierarchical attention-based multi-task and multi-modal fusion model for computer-aided diagnosis of Alzheimer's disease. Comput Biol Med 2024; 176:108564. [PMID: 38744010] [DOI: 10.1016/j.compbiomed.2024.108564]
Abstract
Alzheimer's disease (AD) is a progressive neurodegenerative condition, and early intervention can help slow its progression. However, integrating multi-dimensional information with deep convolutional networks increases the number of model parameters, which affects diagnostic accuracy and efficiency and hinders the clinical deployment of diagnostic models. Multi-modal neuroimaging can offer more precise diagnostic results, while multi-task modeling of classification and regression tasks can enhance the performance and stability of AD diagnosis. This study proposes a Hierarchical Attention-based Multi-task Multi-modal Fusion model (HAMMF) that leverages multi-modal neuroimaging data to concurrently learn an AD classification task, a cognitive-score regression task, and an age regression task using attention-based techniques. First, we preprocess MRI and PET image data to obtain two modalities, each containing distinct information. Next, we incorporate a novel Contextual Hierarchical Attention Module (CHAM) to aggregate multi-modal features. This module employs channel and spatial attention to extract fine-grained pathological features from unimodal image data across various dimensions. Building on these attention mechanisms, a Transformer effectively captures correlated features across the multi-modal inputs. Finally, we adopt multi-task learning to investigate the influence of different variables on diagnosis, with a primary classification task and secondary regression tasks for optimal multi-task prediction performance. Our experiments used MRI and PET images from 720 subjects in the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The results show that the proposed model achieves an overall accuracy of 93.15% for AD/NC recognition, and the visualization results demonstrate its strong pathological feature recognition performance.
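The abstract does not give the internals of CHAM; as a rough, parameter-free sketch of the channel- and spatial-attention idea it builds on (real attention modules use learned MLP/conv gates, which are replaced here with plain sigmoids purely for illustration; the helper names are ours):

```python
import numpy as np

def channel_attention(fmap):
    """Squeeze the spatial dims, then weight each channel by a sigmoid gate.
    fmap: (C, H, W) feature map."""
    squeeze = fmap.mean(axis=(1, 2))           # global average pooling -> (C,)
    gate = 1.0 / (1.0 + np.exp(-squeeze))      # sigmoid stands in for the learned MLP
    return fmap * gate[:, None, None]

def spatial_attention(fmap):
    """Weight each spatial location using channel-wise mean and max maps."""
    avg = fmap.mean(axis=0)                    # (H, W)
    mx = fmap.max(axis=0)                      # (H, W)
    gate = 1.0 / (1.0 + np.exp(-(avg + mx)))   # sigmoid of a fused map
    return fmap * gate[None, :, :]

fmap = np.random.rand(8, 4, 4)
out = spatial_attention(channel_attention(fmap))   # same shape, re-weighted
```

Since both gates lie in (0, 1), the output is an element-wise re-weighting of the input map rather than a change of shape, which is what lets such modules be dropped into an existing backbone.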
Affiliation(s)
- Xiao Liu
- School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Weimin Li
- School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Shang Miao
- School of Computer Engineering and Science, Shanghai University, Shanghai, China
- Fangyu Liu
- Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; University of Chinese Academy of Sciences, Beijing, China; BGI-Shenzhen, Shenzhen, China
- Ke Han
- Medical and Health Center, Liaocheng People's Hospital, Liaocheng, China
- Tsigabu T Bezabih
- School of Computer Engineering and Science, Shanghai University, Shanghai, China
2. Velayudham A, Kumar KM, Priya MSK. Enhancing clinical diagnostics: novel denoising methodology for brain MRI with adaptive masking and modified non-local block. Med Biol Eng Comput 2024. [PMID: 38761289] [DOI: 10.1007/s11517-024-03122-y]
Abstract
Medical image denoising has been a subject of extensive research, with various techniques employed to enhance image quality and facilitate more accurate diagnostics. Denoising methods have produced impressive results but have struggled to strike a balance between noise reduction and edge preservation, which limits their applicability across domains. This paper presents a novel methodology that integrates an adaptive masking strategy, a transformer-based U-Net prior generator, an edge enhancement module, and a modified non-local block (MNLB) for denoising clinical brain MRI images. The adaptive masking strategy preserves vital information through dynamic mask generation, while the prior generator regenerates high-quality prior MRI images by capturing hierarchical features. Finally, these images are fed to the edge enhancement module to strengthen structural information by preserving crucial edge details, and the MNLB produces the denoised output by deriving non-local contextual information. A comprehensive experimental assessment is performed on two datasets, a brain tumor MRI dataset and an Alzheimer's dataset, across diverse metrics and compared with conventional denoising approaches. The proposed methodology achieves a PSNR of 40.965 and an SSIM of 0.938 on the Alzheimer's dataset, and a PSNR of 40.002 and an SSIM of 0.926 on the brain tumor MRI dataset, at a noise level of 50%, revealing its superiority in noise minimization. Furthermore, an analysis of different masking ratios shows that the proposed method achieves a PSNR of 40.965, an SSIM of 0.938, an MAE of 5.847, and an MSE of 3.672 at a masking ratio of 60%. These findings pave the way for advances in clinical image processing, facilitating precise detection of tumors in clinical MRI images.
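The PSNR figures quoted above follow the standard definition, PSNR = 10 log10(MAX^2 / MSE); a minimal NumPy sketch (the `psnr` helper and the synthetic images are ours, not from the paper):

```python
import numpy as np

def psnr(clean, denoised, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")                    # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

clean = np.full((64, 64), 128.0)               # flat synthetic reference
noisy = clean + np.random.normal(0.0, 5.0, clean.shape)
print(round(psnr(clean, noisy), 1))            # roughly 34 dB for sigma = 5
```

Higher PSNR means lower mean squared error against the reference, which is why a value above 40 dB at a 50% noise level is a strong result.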
Affiliation(s)
- A Velayudham
- Department of Computer Science and Engineering, Jansons Institute of Technology, Coimbatore, Tamil Nadu, India
- K Madhan Kumar
- Electronics and Communication Engineering, PET Engineering College, Vallioor, Tamil Nadu, India
- Krishna Priya M S
- Department of Artificial Intelligence and Data Science, Jansons Institute of Technology, Coimbatore, Tamil Nadu, India
3. Fang L, Jiang Y. Dual path parallel hierarchical diagnosis model for intracranial tumors based on multi-feature entropy weight. Comput Biol Med 2024; 173:108353. [PMID: 38520918] [DOI: 10.1016/j.compbiomed.2024.108353]
Abstract
The grading diagnosis of intracranial tumors is a key step in formulating clinical treatment plans and surgical guidelines. To grade intracranial tumors effectively, this paper proposes a dual-path parallel hierarchical model that automatically grades intracranial tumors with high accuracy. In this model, prior features of the solid tumor mass and intratumoral necrosis are extracted. The optimal division of the dataset is then achieved through multi-feature entropy weighting. Multi-modal input is realized by the dual-path structure, and multiple features are superimposed and fused to achieve image grading. The model has been tested on clinical medical images provided by the Second Affiliated Hospital of Dalian Medical University. Experiments show that the proposed model generalizes well, with an accuracy of 0.990, and can be applied to clinical diagnosis with practical application prospects.
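The abstract does not spell out the multi-feature entropy weight computation; a common formulation is the entropy weight method, sketched below in NumPy under the assumption that this is what is meant (the `entropy_weights` helper is ours):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: features that vary more across samples
    (lower normalized entropy) receive larger weights.
    X: (n_samples, n_features), non-negative values."""
    n, m = X.shape
    P = X / X.sum(axis=0, keepdims=True)           # column-normalised proportions
    logs = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    e = -(P * logs).sum(axis=0) / np.log(n)        # per-feature entropy in [0, 1]
    d = 1.0 - e                                    # degree of divergence
    return d / d.sum()                             # weights sum to 1

X = np.array([[1.0, 10.0],
              [1.0, 1.0],
              [1.0, 5.0]])
w = entropy_weights(X)   # constant feature gets weight 0, varying feature weight 1
```

A perfectly constant feature has maximal entropy and therefore zero weight, so the method automatically down-weights uninformative features when dividing or scoring a dataset.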
Affiliation(s)
- Lingling Fang
- School of Computer Science and Artificial Intelligence, Liaoning Normal University, Dalian City, Liaoning Province, China
- Yumeng Jiang
- School of Computer Science and Artificial Intelligence, Liaoning Normal University, Dalian City, Liaoning Province, China
4. Saeed Z, Bouhali O, Ji JX, Hammoud R, Al-Hammadi N, Aouadi S, Torfeh T. Cancerous and Non-Cancerous MRI Classification Using Dual DCNN Approach. Bioengineering (Basel) 2024; 11:410. [PMID: 38790279] [PMCID: PMC11118162] [DOI: 10.3390/bioengineering11050410]
Abstract
Brain cancer is a life-threatening disease requiring close attention. Early and accurate diagnosis using non-invasive medical imaging is critical for successful treatment and patient survival. However, manual diagnosis by expert radiologists is time-consuming and has limitations in processing large datasets efficiently. Therefore, efficient systems capable of analyzing vast amounts of medical data for early tumor detection are urgently needed. Deep learning (DL) with deep convolutional neural networks (DCNNs) has emerged as a promising tool for understanding diseases like brain cancer through medical imaging modalities, especially MRI, which provides detailed soft-tissue contrast for visualizing tumors and organs. DL techniques have become increasingly popular in current research on brain tumor detection. Unlike traditional machine learning methods requiring manual feature extraction, DL models are adept at handling complex data like MRIs and excel in classification tasks, making them well suited for medical image analysis. This study presents a novel Dual DCNN model that can accurately classify cancerous and non-cancerous MRI samples. The Dual DCNN uses two well-performing DL models, InceptionV3 and DenseNet121. Features are extracted from these models by appending a global max pooling layer, and the extracted features are then used to train five added fully connected layers that classify MRI samples as cancerous or non-cancerous. The fully connected layers are retrained to learn the extracted features for better accuracy. The technique achieves accuracy, precision, recall, and F1-scores of 99%, 99%, 98%, and 99%, respectively. Furthermore, this study compares the Dual DCNN's performance against various well-known DL models, including DenseNet121, InceptionV3, ResNet architectures, EfficientNetB2, SqueezeNet, VGG16, AlexNet, and LeNet-5, with different learning rates, and indicates that the proposed approach outperforms these established models.
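The feature-extraction step described above (global max pooling over two backbone outputs, then concatenation) can be sketched in NumPy. The channel widths 2048 and 1024 are the standard final feature widths of InceptionV3 and DenseNet121; the spatial sizes and random values are placeholders for real backbone activations:

```python
import numpy as np

def global_max_pool(fmap):
    """Collapse a (H, W, C) feature map to a C-dim vector by spatial max."""
    return fmap.max(axis=(0, 1))

# stand-ins for the two backbone outputs (real shapes depend on input size)
inception_fmap = np.random.rand(5, 5, 2048)    # InceptionV3-style output
densenet_fmap = np.random.rand(7, 7, 1024)     # DenseNet121-style output

fused = np.concatenate([global_max_pool(inception_fmap),
                        global_max_pool(densenet_fmap)])   # (3072,)
```

The fused 3072-dimensional vector is what the stacked fully connected layers would then be trained on.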
Affiliation(s)
- Zubair Saeed
- Department of Electrical & Computer Engineering, Texas A&M University, College Station, TX 47080, USA
- Department of Electrical & Computer Engineering, Texas A&M University at Qatar, Doha 23874, Qatar
- Othmane Bouhali
- Department of Electrical & Computer Engineering, Texas A&M University at Qatar, Doha 23874, Qatar
- Department of Science & Arts, Texas A&M University at Qatar, Doha 23874, Qatar
- Qatar Center for Quantum Computing, College of Science and Engineering, Hamad Bin Khalifa University, Doha 34110, Qatar
- Jim Xiuquan Ji
- Department of Electrical & Computer Engineering, Texas A&M University, College Station, TX 47080, USA
- Department of Electrical & Computer Engineering, Texas A&M University at Qatar, Doha 23874, Qatar
- Rabih Hammoud
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, Doha 3050, Qatar
- Noora Al-Hammadi
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, Doha 3050, Qatar
- Souha Aouadi
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, Doha 3050, Qatar
- Tarraf Torfeh
- Department of Radiation Oncology, National Center for Cancer Care and Research, Hamad Medical Corporation, Doha 3050, Qatar
5. Bhimavarapu U, Chintalapudi N, Battineni G. Brain Tumor Detection and Categorization with Segmentation of Improved Unsupervised Clustering Approach and Machine Learning Classifier. Bioengineering (Basel) 2024; 11:266. [PMID: 38534540] [DOI: 10.3390/bioengineering11030266]
Abstract
There is no doubt that brain tumors are one of the leading causes of death in the world. A biopsy is considered the most important procedure in cancer diagnosis, but it comes with drawbacks, including low sensitivity, risks during the procedure, and a lengthy wait for results. Early identification provides patients with a better prognosis and reduces treatment costs. Conventional methods of identifying brain tumors rely on the skills of medical professionals, so there is a possibility of human error, and the labor-intensive nature of traditional approaches makes healthcare resources expensive. A variety of imaging methods are available to detect brain tumors, including magnetic resonance imaging (MRI) and computed tomography (CT). Medical imaging research is being advanced by computer-aided diagnostic processes that enable visualization. Automatic tumor segmentation using clustering leads to accurate tumor detection that reduces risk and helps with effective treatment. This study proposes an improved Fuzzy C-Means (FCM) segmentation algorithm for MRI images. To reduce complexity, the most relevant shape, texture, and color features are selected. The improved Extreme Learning Machine classifies the tumors with 98.56% accuracy, 99.14% precision, and 99.25% recall. The proposed classifier consistently demonstrates higher accuracy across all tumor classes than existing models, with improvements ranging from 1.21% to 6.23%, emphasizing its robust performance and its potential for more accurate and reliable brain tumor classification. The improved algorithm achieved accuracy, precision, and recall of 98.47%, 98.59%, and 98.74% on the Figshare dataset and 99.42%, 99.75%, and 99.28% on the Kaggle dataset, respectively, surpassing competing algorithms, particularly in detecting glioma grades. This represents an accuracy improvement of approximately 5.39% on the Figshare dataset and 6.22% on the Kaggle dataset compared to existing models. Despite challenges, including artifacts and computational complexity, the study's commitment to refining the technique and addressing limitations positions the improved FCM model as a noteworthy advancement in precise and efficient brain tumor identification.
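The paper's improvements to Fuzzy C-Means are not detailed in the abstract; for orientation, here is the plain FCM baseline it starts from, on a toy 1-D example (the `fcm` helper is a generic textbook implementation, not the authors' code):

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy C-means: alternate membership and centroid updates.
    X: (n, d) points; returns (centroids, memberships)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0, keepdims=True)              # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=0, keepdims=True)
    return centers, U

# two well-separated 1-D clusters
X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
centers, U = fcm(X)    # centers converge near 0.1 and 5.1
```

Unlike hard k-means, each pixel keeps a graded membership in every cluster, which is what makes FCM attractive for tumor boundaries where tissue classes blend.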
Affiliation(s)
- Usharani Bhimavarapu
- Department of Computer Science and Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram 522302, India
- Nalini Chintalapudi
- Clinical Research Centre, School of Medicinal and Health Products Sciences, University of Camerino, 62032 Camerino, Italy
- Gopi Battineni
- Clinical Research Centre, School of Medicinal and Health Products Sciences, University of Camerino, 62032 Camerino, Italy
6. Alhussan AA, Eid MM, Towfek SK, Khafaga DS. Breast Cancer Classification Depends on the Dynamic Dipper Throated Optimization Algorithm. Biomimetics (Basel) 2023; 8:163. [PMID: 37092415] [PMCID: PMC10123690] [DOI: 10.3390/biomimetics8020163]
Abstract
According to the American Cancer Society, breast cancer is the second leading cause of mortality among women after lung cancer. Women's death rates can be decreased if breast cancer is diagnosed and treated early. Because manual breast cancer diagnosis takes a long time, an automated approach is necessary for early cancer identification. This research proposes a novel framework integrating metaheuristic optimization with deep learning and feature selection for robustly classifying breast cancer from ultrasound images. The proposed methodology consists of the following stages: data augmentation to improve the learning of convolutional neural network (CNN) models, transfer learning using the GoogleNet deep network for feature extraction, selection of the best set of features using a novel optimization algorithm based on a hybrid of the dipper throated and particle swarm optimization algorithms, and classification of the selected features using a CNN optimized with the proposed algorithm. To prove the effectiveness of the proposed approach, a set of experiments was conducted on a breast cancer dataset, freely available on Kaggle, to evaluate the performance of the proposed feature selection method and of the optimized CNN. In addition, statistical tests were conducted to study the stability of the proposed approach and its difference from state-of-the-art approaches. The achieved results confirmed the superiority of the proposed approach, with a classification accuracy of 98.1%, which is better than the other approaches considered in the conducted experiments.
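The hybrid dipper throated/PSO selector itself is the paper's novelty and is not reproducible from the abstract; as a sketch of the generic binary-PSO half of such a feature selector, on a toy fitness function (all names and parameters here are our illustrative assumptions):

```python
import numpy as np

def binary_pso(fitness, n_feats, n_particles=20, iters=40, seed=0):
    """Minimal binary PSO: real-valued velocities, sigmoid-sampled bit masks."""
    rng = np.random.default_rng(seed)
    pos = rng.integers(0, 2, (n_particles, n_feats))
    vel = np.zeros((n_particles, n_feats))
    pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(vel.shape), rng.random(vel.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        # sample each bit with probability sigmoid(velocity)
        pos = (rng.random(vel.shape) < 1.0 / (1.0 + np.exp(-vel))).astype(int)
        vals = np.array([fitness(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest

# toy fitness: reward picking the first three features, penalise subset size
useful = np.array([1, 1, 1, 0, 0, 0, 0, 0])
score = lambda mask: (mask * useful).sum() - 0.1 * mask.sum()
best = binary_pso(score, 8)
```

In the paper the fitness would instead wrap a classifier's validation accuracy on the selected GoogleNet features, and the velocity update would mix in dipper throated moves.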
Affiliation(s)
- Amel Ali Alhussan
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Marwa M. Eid
- Faculty of Artificial Intelligence, Delta University for Science and Technology, Mansoura 35712, Egypt
- S. K. Towfek
- Delta Higher Institute for Engineering and Technology, Mansoura 35111, Egypt
- Computer Science and Intelligent Systems Research Center, Blacksburg, VA 24060, USA
- Doaa Sami Khafaga
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
7. Ghaleb Al-Mekhlafi Z, Mohammed Senan E, Sulaiman Alshudukhi J, Abdulkarem Mohammed B. Hybrid Techniques for Diagnosing Endoscopy Images for Early Detection of Gastrointestinal Disease Based on Fusion Features. Int J Intell Syst 2023. [DOI: 10.1155/2023/8616939]
Abstract
Gastrointestinal (GI) diseases, particularly tumours, are among the most widespread and dangerous diseases and thus need timely health care for early detection to reduce deaths. Endoscopy is an effective technique for diagnosing GI diseases, producing a video containing thousands of frames. However, it is difficult for a gastroenterologist to analyse all the images, and keeping track of all the frames takes a long time. Artificial intelligence systems address this challenge by analysing thousands of images at high speed with effective accuracy. Hence, systems with different methodologies are developed in this work. The first methodology diagnoses endoscopy images of GI diseases using VGG-16 + SVM and DenseNet-121 + SVM. The second methodology uses an artificial neural network (ANN) based on features fused from VGG-16 and DenseNet-121, before and after dimensionality reduction by principal component analysis (PCA). The third methodology uses an ANN based on features fused from VGG-16 and handcrafted features, and from DenseNet-121 and handcrafted features. Here, the handcrafted features combine those of the gray-level co-occurrence matrix (GLCM), discrete wavelet transform (DWT), fuzzy colour histogram (FCH), and local binary pattern (LBP) methods. All systems achieved promising results for diagnosing endoscopy images of the gastroenterology dataset. The ANN reached an accuracy, sensitivity, precision, specificity, and AUC of 98.9%, 98.70%, 98.94%, 99.69%, and 99.51%, respectively, based on the fused VGG-16 and handcrafted features.
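The fusion-plus-PCA step of the second methodology can be illustrated in NumPy; the feature widths (512 for VGG-16-style vectors, 1024 for DenseNet-121-style) and the target dimensionality are illustrative assumptions, and `pca_reduce` is our generic SVD-based helper:

```python
import numpy as np

def pca_reduce(F, k):
    """Project feature vectors onto the top-k principal components via SVD.
    F: (n_samples, n_features)."""
    Fc = F - F.mean(axis=0)                        # centre each feature
    U, S, Vt = np.linalg.svd(Fc, full_matrices=False)
    return Fc @ Vt[:k].T                           # (n_samples, k)

# stand-ins for VGG-16 and DenseNet-121 features, fused by concatenation
rng = np.random.default_rng(0)
vgg_feats = rng.random((100, 512))
dense_feats = rng.random((100, 1024))
fused = np.concatenate([vgg_feats, dense_feats], axis=1)   # (100, 1536)
reduced = pca_reduce(fused, 64)                            # (100, 64)
```

The reduced vectors, rather than the raw 1536-dimensional concatenation, would then be fed to the ANN classifier, which is the usual motivation for the "after PCA" variant.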
Affiliation(s)
- Zeyad Ghaleb Al-Mekhlafi
- Department of Information and Computer Science, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
- Ebrahim Mohammed Senan
- Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Alrazi University, Sana’a, Yemen
- Jalawi Sulaiman Alshudukhi
- Department of Information and Computer Science, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
- Badiea Abdulkarem Mohammed
- Department of Computer Engineering, College of Computer Science and Engineering, University of Ha’il, Ha’il 81481, Saudi Arabia
8. Srivastava J, Prakash J, Srivastava A. Hybrid deep learning algorithm for brain tumour detection. Imaging Sci J 2023. [DOI: 10.1080/13682199.2023.2167624]
Affiliation(s)
- Jyoti Srivastava
- Department of ITCA, Madan Mohan Malaviya University of Technology, Gorakhpur, UP, India
- Jay Prakash
- Department of ITCA, Madan Mohan Malaviya University of Technology, Gorakhpur, UP, India
- Ashish Srivastava
- Department of Computer Engineering & Application, GLA University, Mathura, UP, India
9. CXray-EffDet: Chest Disease Detection and Classification from X-ray Images Using the EfficientDet Model. Diagnostics (Basel) 2023; 13:248. [PMID: 36673057] [PMCID: PMC9857576] [DOI: 10.3390/diagnostics13020248]
Abstract
The competence of machine learning approaches to carry out clinical expertise tasks has recently gained considerable attention, particularly in medical-imaging examination. Chest radiography is among the most frequently used clinical-imaging modalities in the healthcare profession, and it calls for prompt reporting of potential anomalies and disease diagnoses in images. Automated frameworks for recognizing chest abnormalities in X-rays are being introduced in health departments. However, reliable detection and classification of particular illnesses in chest X-ray samples remains complicated because of the complex structure of radiographs, e.g., the large exposure dynamic range. Moreover, the incidence of various image artifacts and extensive inter- and intra-category resemblances further increase the difficulty of chest disease recognition. This study aimed to resolve these problems. We propose a deep learning (DL) approach to the detection of chest abnormalities in the X-ray modality using the EfficientDet (CXray-EffDet) model. More precisely, we employed the EfficientNet-B0-based EfficientDet-D0 model to compute a reliable set of sample features and accomplish the detection and classification task by categorizing eight categories of chest abnormalities in X-ray images. The effective feature computation of the CXray-EffDet model enhances chest abnormality recognition through its high recall rate, and it presents a lightweight and computationally robust approach. A large-scale evaluation of the model on a standard database from the National Institutes of Health (NIH) was conducted to demonstrate the chest disease localization and categorization performance of the CXray-EffDet model. We attained an AUC score of 0.9080, along with an IOU of 0.834, which clearly demonstrates the competence of the introduced model.
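The IOU reported here is the usual intersection-over-union between predicted and ground-truth boxes; a minimal, dependency-free sketch (the `box_iou` helper is ours):

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])     # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))   # 1/7, about 0.143
```

An average IOU of 0.834 thus means the predicted abnormality boxes overlap the annotated regions far more than they miss them, since IOU ranges from 0 (disjoint) to 1 (identical).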
10. MSeg-Net: A Melanoma Mole Segmentation Network Using CornerNet and Fuzzy K-Means Clustering. Comput Math Methods Med 2022; 2022:7502504. [PMID: 36276999] [PMCID: PMC9586776] [DOI: 10.1155/2022/7502504]
Abstract
Melanoma is a dangerous form of skin cancer that is often fatal for patients at an advanced stage. Researchers have attempted to develop automated systems for the timely recognition of this deadly disease. However, reliable and precise identification of melanoma moles is a tedious and complex activity, as there exist huge differences in the mass, structure, and color of skin lesions. Additionally, the presence of noise, blurring, and chrominance changes in the suspected images further increases the complexity of the detection procedure. In the proposed work, we try to overcome the limitations of existing work by presenting a deep learning (DL) model. Descriptively, after the preprocessing step, we utilize an object detection approach, the CornerNet model, to detect melanoma lesions. The localized moles are then passed to the fuzzy K-means (FKM) clustering approach to perform the segmentation task. To assess the segmentation power of the proposed approach, two standard databases, ISIC-2017 and ISIC-2018, are employed. Extensive experimentation demonstrates the robustness of the proposed approach through both numeric and pictorial results. The proposed approach is capable of detecting and segmenting moles of arbitrary shapes and orientations, and it can also handle the presence of noise, blurring, and brightness variations. We attained segmentation accuracy values of 99.32% and 99.63% on the ISIC-2017 and ISIC-2018 databases, respectively, which clearly depicts the effectiveness of our model for melanoma mole segmentation.
11. Albahli S, Nawaz M. DCNet: DenseNet-77-based CornerNet model for the tomato plant leaf disease detection and classification. Front Plant Sci 2022; 13:957961. [PMID: 36160977] [PMCID: PMC9499263] [DOI: 10.3389/fpls.2022.957961]
Abstract
Early recognition of tomato plant leaf diseases is mandatory to improve food yield and save agriculturalists from costly spray procedures. Correct and timely identification of the several tomato plant leaf diseases is a complicated task, as the healthy and affected areas of plant leaves are highly similar. Moreover, light variation, color and brightness changes, and the occurrence of blurring and noise in the images further increase the complexity of the detection process. In this article, we present a robust deep learning approach for tackling the existing issues of tomato plant leaf disease detection and classification. We propose a novel approach, the DenseNet-77-based CornerNet model, for the localization and classification of tomato plant leaf abnormalities. Specifically, we use DenseNet-77 as the backbone network of CornerNet. This assists in computing a more representative set of image features from the suspected samples, which are later categorized into 10 classes by the one-stage detector of the CornerNet model. We evaluated the proposed solution on a standard dataset, PlantVillage, which is challenging in nature as it contains samples with immense brightness alterations, color variations, and leaf images of different dimensions and shapes. We attained an average accuracy of 99.98% over the employed dataset. Several experiments confirm the effectiveness of our approach for the timely recognition of tomato plant leaf diseases, which can assist agriculturalists in replacing manual systems.
Affiliation(s)
- Saleh Albahli
- Department of Information Technology, College of Computer, Qassim University, Buraydah, Saudi Arabia
- Marriam Nawaz
- Department of Computer Science, University of Engineering and Technology–Taxila, Taxila, Pakistan
- Department of Software Engineering, University of Engineering and Technology–Taxila, Taxila, Pakistan
12. Albahli S, Nazir T. AI-CenterNet CXR: An artificial intelligence (AI) enabled system for localization and classification of chest X-ray disease. Front Med (Lausanne) 2022; 9:955765. [PMID: 36111113] [PMCID: PMC9469020] [DOI: 10.3389/fmed.2022.955765]
Abstract
Machine learning techniques have lately attracted considerable attention for their potential to execute expert-level clinical tasks, notably in medical image analysis. Chest radiography is one of the most often utilized diagnostic imaging modalities in medical practice, and it necessitates timely reporting of probable abnormalities and disease diagnoses in the images. Computer-aided solutions for identifying chest illness from chest radiography are being developed in medical imaging research. However, accurate localization and categorization of specific disorders in chest X-ray images is still a challenging problem due to the complex nature of radiographs, the presence of different distortions, high inter-class similarities, and intra-class variations in abnormalities. In this work, we present an Artificial Intelligence (AI)-enabled, fully automated, end-to-end deep learning approach to improve the accuracy of thoracic illness diagnosis. We propose AI-CenterNet CXR, a customized CenterNet model with an improved feature extraction network for the recognition of multi-label chest diseases. The enhanced backbone computes deep keypoints that improve abnormality localization accuracy and, thus, overall disease classification performance. Moreover, the proposed architecture is lightweight and computationally efficient compared with the original CenterNet model. We performed extensive experimentation to validate the effectiveness of the proposed technique on the National Institutes of Health (NIH) Chest X-ray dataset. Our method achieved an overall Area Under the Curve (AUC) of 0.888 and an average IOU of 0.801 in detecting and classifying the eight types of chest abnormalities. Both the qualitative and quantitative findings reveal that the suggested approach outperforms existing methods, indicating its efficacy.
Affiliation(s)
- Saleh Albahli
- Department of Information Technology, College of Computer, Qassim University, Buraydah, Saudi Arabia
- Tahira Nazir
- Faculty of Computing, Riphah International University, Islamabad, Pakistan
13. Huang B, Tan G, Dou H, Cui Z, Song Y, Zhou T. Mutual gain adaptive network for segmenting brain stroke lesions. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109568]
|
14
|
Infrared Small-Target Detection Based on Radiation Characteristics with a Multimodal Feature Fusion Network. REMOTE SENSING 2022. [DOI: 10.3390/rs14153570] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Abstract
Infrared small-target detection has widespread applications in anti-missile warning, precision weapon guidance, infrared stealth and anti-stealth, military reconnaissance, and other national defense fields. However, small targets are easily submerged in background clutter noise and have few pixels and shape features. Furthermore, random target positions and irregular motion can require detection to be carried out over the whole space–time domain, which results in a large amount of computation and makes accuracy and real-time performance difficult to guarantee. Therefore, infrared small-target detection remains a challenging and far-reaching research topic. To solve the above problems, a novel multimodal feature fusion network (MFFN) is proposed, based on morphological characteristics, infrared radiation, and motion characteristics, which compensates for the deficiency of single-modal characterizations of small targets and improves recognition precision. The innovations introduced in the paper address the following three aspects. Firstly, in the morphological domain, we propose a network with a skip-connected feature pyramid network (SCFPN) and a dilated convolutional block attention module integrated with Resblock (DAMR) introduced into the backbone, designed to improve the feature extraction ability for infrared small targets. Secondly, in the radiation characteristic domain, we propose a prediction model of atmospheric transmittance based on deep neural networks (DNNs), which predicts atmospheric transmittance effectively without being limited by the complex environment, improving the measurement accuracy of radiation characteristics. Thirdly, a dilated-convolution-based bidirectional encoder representations from transformers (DC-BERT) structure combined with an attention mechanism is proposed for the feature extraction of radiation and motion characteristics. Finally, experiments on our self-established optoelectronic equipment detection dataset (OEDD) show that our method is superior to eight state-of-the-art algorithms in terms of the accuracy and robustness of infrared small-target detection. The comparative experimental results on four kinds of target sequences indicate that the average recognition rate Pavg is 92.64%, the mean average precision (mAP) is 92.01%, and the F1 score is 90.52%.
|
15
|
Special Issue: “Machine Learning for Computer-Aided Diagnosis in Biomedical Imaging”. Diagnostics (Basel) 2022; 12:diagnostics12061331. [PMID: 35741141 PMCID: PMC9222049 DOI: 10.3390/diagnostics12061331] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Accepted: 05/25/2022] [Indexed: 02/04/2023] Open
|
16
|
Breast Cancer Mammograms Classification Using Deep Neural Network and Entropy-Controlled Whale Optimization Algorithm. Diagnostics (Basel) 2022; 12:diagnostics12020557. [PMID: 35204646 PMCID: PMC8871265 DOI: 10.3390/diagnostics12020557] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2021] [Revised: 01/22/2022] [Accepted: 01/30/2022] [Indexed: 02/04/2023] Open
Abstract
Breast cancer has affected many women worldwide. Many computer-aided diagnosis (CAD) systems have been established for the detection and classification of breast cancer, because inspection of mammogram images by radiologists is a difficult and time-consuming task, but there is still a need to improve existing CAD systems by incorporating new methods and technologies that provide more precise results. This paper aims to investigate ways to prevent the disease as well as to provide new classification methods to reduce the risk of breast cancer in women's lives. The best feature optimization is performed to classify the results accurately, and the CAD system's accuracy is improved by reducing false-positive rates. A Modified Entropy Whale Optimization Algorithm (MEWOA) is proposed, based on fusion, for deep feature extraction and classification. In the proposed method, fine-tuned MobileNetV2 and NASNet-Mobile models are applied for simulation. The features are extracted and optimized, and the optimized features are fused and further optimized using MEWOA. Finally, using the optimized deep features, machine learning classifiers are applied to classify the breast cancer images. Three publicly available datasets are used for feature extraction and classification: INbreast, MIAS, and CBIS-DDSM. The maximum accuracy achieved is 99.7% on INbreast, 99.8% on MIAS, and 93.8% on CBIS-DDSM. Finally, a comparison with other existing methods demonstrates that the proposed algorithm outperforms the other approaches.
|
17
|
Jabeen K, Khan MA, Alhaisoni M, Tariq U, Zhang YD, Hamza A, Mickus A, Damaševičius R. Breast Cancer Classification from Ultrasound Images Using Probability-Based Optimal Deep Learning Feature Fusion. SENSORS 2022; 22:s22030807. [PMID: 35161552 PMCID: PMC8840464 DOI: 10.3390/s22030807] [Citation(s) in RCA: 56] [Impact Index Per Article: 28.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/25/2021] [Revised: 01/12/2022] [Accepted: 01/17/2022] [Indexed: 12/11/2022]
Abstract
After lung cancer, breast cancer is the second leading cause of death in women. If breast cancer is detected early, mortality rates in women can be reduced. Because manual breast cancer diagnosis takes a long time, an automated system is required for early cancer detection. This paper proposes a new framework for breast cancer classification from ultrasound images that employs deep learning and the fusion of the best selected features. The proposed framework is divided into five major steps: (i) data augmentation is performed to increase the size of the original dataset for better learning of Convolutional Neural Network (CNN) models; (ii) a pre-trained DarkNet-53 model is considered and the output layer is modified based on the augmented dataset classes; (iii) the modified model is trained using transfer learning and features are extracted from the global average pooling layer; (iv) the best features are selected using two improved optimization algorithms known as reformed differential evaluation (RDE) and reformed gray wolf (RGW); and (v) the best selected features are fused using a new probability-based serial approach and classified using machine learning algorithms. The experiment was conducted on an augmented Breast Ultrasound Images (BUSI) dataset, and the best accuracy was 99.1%. When compared with recent techniques, the proposed framework outperforms them.
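Step (v)'s probability-based serial fusion is described only at a high level in the abstract. A minimal sketch of one plausible reading, with serial fusion taken as concatenation gated by per-feature probability scores (all names, the gating rule, and the threshold are hypothetical, not the paper's exact method):

```python
def serial_fuse(features_a, features_b, scores_a, scores_b, threshold=0.5):
    """Serially fuse (concatenate) two selected feature vectors, keeping only
    entries whose associated probability score exceeds the threshold.
    Illustrative only; the paper's exact probability-based rule may differ."""
    kept_a = [f for f, p in zip(features_a, scores_a) if p > threshold]
    kept_b = [f for f, p in zip(features_b, scores_b) if p > threshold]
    return kept_a + kept_b  # serial fusion = concatenation

# Two toy feature vectors from the RDE/RGW selection stage with toy scores:
fused = serial_fuse([0.2, 0.9, 0.4], [1.1, 0.3],
                    [0.8, 0.4, 0.7], [0.9, 0.2])
print(fused)  # → [0.2, 0.4, 1.1]
```

The fused vector then feeds the machine learning classifiers mentioned in the abstract.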
Affiliation(s)
- Kiran Jabeen
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan; (K.J.); (M.A.K.); (A.H.)
- Muhammad Attique Khan
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan; (K.J.); (M.A.K.); (A.H.)
- Majed Alhaisoni
- College of Computer Science and Engineering, University of Ha’il, Ha’il 55211, Saudi Arabia;
- Usman Tariq
- College of Computer Engineering and Science, Prince Sattam Bin Abdulaziz University, Al-Kharaj 11942, Saudi Arabia;
- Yu-Dong Zhang
- Department of Informatics, University of Leicester, Leicester LE1 7RH, UK;
- Ameer Hamza
- Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan; (K.J.); (M.A.K.); (A.H.)
- Artūras Mickus
- Department of Applied Informatics, Vytautas Magnus University, LT-44404 Kaunas, Lithuania;
- Robertas Damaševičius
- Department of Applied Informatics, Vytautas Magnus University, LT-44404 Kaunas, Lithuania;
- Correspondence:
|
18
|
Feasibility of Synthetic Computed Tomography Images Generated from Magnetic Resonance Imaging Scans Using Various Deep Learning Methods in the Planning of Radiation Therapy for Prostate Cancer. Cancers (Basel) 2021; 14:cancers14010040. [PMID: 35008204 PMCID: PMC8750723 DOI: 10.3390/cancers14010040] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Revised: 12/17/2021] [Accepted: 12/20/2021] [Indexed: 11/17/2022] Open
Abstract
Simple Summary: MRI-only simulation in radiation therapy (RT) planning has received attention because the CT scan can be omitted. For MRI-only simulation, synthetic CT (sCT) is necessary for dose calculation. Various methodologies have been suggested for generating sCT and, recently, deep learning approaches have been actively investigated. GAN and cycle-consistent GAN (CycGAN) have mainly been tested; however, very few studies have compared the quality of the sCTs generated by these methods or proposed other models for sCT generation. We compared GAN, CycGAN, and reference-guided CycGAN (RgGAN), a new deep learning model. We found that HU conservation for soft tissue was poorest for GAN. All methods could generate sCTs feasible for VMAT planning, with sCTs from RgGAN tending to show the best dosimetric conservation of D98% and D95%.
Abstract: We aimed to evaluate and compare the quality of synthetic computed tomography (sCT) generated by various deep learning methods for volumetric modulated arc therapy (VMAT) planning in prostate cancer. Simulation computed tomography (CT) and T2-weighted simulation magnetic resonance images from 113 patients were used for sCT generation by three deep learning approaches: generative adversarial network (GAN), cycle-consistent GAN (CycGAN), and reference-guided CycGAN (RgGAN), a new model that further adjusts the sCTs generated by CycGAN using available paired images. VMAT plans on the original simulation CT images were recalculated on the sCTs, and the dosimetric differences were evaluated. For soft tissue, a significant difference in mean Hounsfield units (HUs) was observed between the original CT images and only the sCTs from GAN (p = 0.03). The mean relative dose differences for planning target volumes and organs at risk were within 2% among the sCTs from the three deep learning approaches. The differences in the dosimetric parameters D98% and D95% from the original CT were lowest for the sCT from RgGAN. In conclusion, HU conservation for soft tissue was poorest for GAN, and the sCT generated by RgGAN tended to show the best dosimetric conservation of D98% and D95%.
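The D98% and D95% parameters compared above are standard dose-volume-histogram points: the minimum dose received by the hottest 98% or 95% of the target volume. A minimal nearest-rank sketch over toy voxel doses (illustrative only; clinical planning systems use interpolated DVH curves):

```python
def dose_at_volume(doses, volume_pct):
    """D_V%: minimum dose received by the hottest V% of the volume,
    i.e. the (100 - V)th percentile of the voxel dose distribution.
    Simple nearest-rank percentile, not the clinical interpolation."""
    s = sorted(doses)
    k = max(0, min(len(s) - 1, round(len(s) * (100 - volume_pct) / 100)))
    return s[k]

doses = [60 + 0.1 * i for i in range(100)]  # toy voxel doses in Gy
print(dose_at_volume(doses, 98))  # → 60.2
print(dose_at_volume(doses, 95))  # → 60.5
```

Comparing these values between a plan recalculated on an sCT and the original CT plan gives the dosimetric differences reported in the study.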
|
19
|
Improved Text Summarization of News Articles Using GA-HC and PSO-HC. APPLIED SCIENCES-BASEL 2021. [DOI: 10.3390/app112210511] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Automatic Text Summarization (ATS) is gaining attention because a large volume of data is being generated at an exponential rate. Owing to easy global internet availability, a large amount of data is generated by social networking, news, and blog websites. Manual summarization is time-consuming, and it is difficult to read and summarize a large amount of content. Automatic text summarization is the solution to this problem. This study proposes two automatic text summarization models: Genetic Algorithm with Hierarchical Clustering (GA-HC) and Particle Swarm Optimization with Hierarchical Clustering (PSO-HC). The proposed models use a word embedding model with a hierarchical clustering algorithm to group sentences conveying almost the same meaning. Modified GA- and adaptive PSO-based sentence ranking models are proposed for summarizing news text documents. Simulations are conducted and compared with other algorithms under study to evaluate the performance of the proposed methodology. Simulation results validate the superior performance of the proposed methodology.
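The sentence-grouping step described above relies on hierarchical clustering of sentence embeddings. A naive average-linkage sketch over toy two-dimensional "embeddings" (all values illustrative; the paper's word-embedding model and linkage choice may differ):

```python
def cosine(u, v):
    """Cosine similarity between two non-zero vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def agglomerative(vectors, n_clusters):
    """Naive average-linkage agglomerative clustering by cosine similarity.
    Returns a list of clusters, each a list of vector indices."""
    clusters = [[i] for i in range(len(vectors))]
    while len(clusters) > n_clusters:
        best, pair = -2.0, (0, 1)
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Average pairwise similarity between the two clusters.
                sims = [cosine(vectors[a], vectors[b])
                        for a in clusters[i] for b in clusters[j]]
                sim = sum(sims) / len(sims)
                if sim > best:
                    best, pair = sim, (i, j)
        i, j = pair
        clusters[i] += clusters.pop(j)  # merge the most similar pair
    return clusters

# Toy sentence embeddings: 0 and 1 are near-duplicates, 2 is distinct.
vecs = [[1.0, 0.1], [0.9, 0.2], [0.0, 1.0]]
print(sorted(map(sorted, agglomerative(vecs, 2))))  # → [[0, 1], [2]]
```

Each resulting cluster would then contribute its highest-ranked sentence (per the GA or PSO ranking) to the summary.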
|