1
Tang W, Wu N, Xiao Q, Chen S, Gao P, He Y, Feng L. Early detection of cotton verticillium wilt based on root magnetic resonance images. Front Plant Sci 2023; 14:1135718. PMID: 37021317; PMCID: PMC10067745; DOI: 10.3389/fpls.2023.1135718.
Abstract
Verticillium wilt (VW), often called the cancer of cotton, severely reduces cotton yield and quality. Because the root system is the first part of the plant to be infested, root analysis offers a feasible route to detecting VW in the early stages of the disease. In recent years, advances in computing hardware and the emergence of large, high-quality data sets have enabled deep learning to achieve remarkable results in computer vision. Some specialized domains, however, such as the processing of cotton root MRI images, pose particular challenges: a severe imbalance between the cotton root and the background in the segmentation task makes it difficult for existing algorithms to segment the target. This paper proposes two new methods to address these problems, and experiments verified their effectiveness. The new segmentation model improved Dice and mIoU by 46% and 44%, respectively, over the original model, and it also segmented MRI images of rapeseed root cross-sections well, showing good robustness and scalability. The new classification model improved accuracy by 34.9% over the original model, while the recall and F1 scores increased by 59% and 42%, respectively. These results indicate that MRI combined with deep learning has potential for non-destructive early detection of VW in cotton.
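The Dice and mIoU gains reported above are standard overlap metrics for segmentation. As a generic numpy illustration (not the authors' code), both can be computed from binary masks as follows:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """Intersection over Union; mIoU averages this over classes or images."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)
```

Both metrics are sensitive to the class imbalance the abstract describes: with a tiny foreground, a few mislabeled pixels move Dice and IoU sharply, which is why imbalance-aware models improve them so much.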
Affiliation(s)
- Wentan Tang
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Key Laboratory of Spectroscopy Sensing, Ministry of Agriculture and Rural Affairs, Hangzhou, China
- Na Wu
- School of Information and Electronic Engineering, Zhejiang University of Science and Technology, Hangzhou, China
- Qinlin Xiao
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Key Laboratory of Spectroscopy Sensing, Ministry of Agriculture and Rural Affairs, Hangzhou, China
- Sishi Chen
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Key Laboratory of Spectroscopy Sensing, Ministry of Agriculture and Rural Affairs, Hangzhou, China
- Pan Gao
- College of Information Science and Technology, Shihezi University, Shihezi, China
- Yong He
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Key Laboratory of Spectroscopy Sensing, Ministry of Agriculture and Rural Affairs, Hangzhou, China
- Lei Feng
- College of Biosystems Engineering and Food Science, Zhejiang University, Hangzhou, China
- Key Laboratory of Spectroscopy Sensing, Ministry of Agriculture and Rural Affairs, Hangzhou, China
2
Ramzan M, Raza M, Sharif MI, Kadry S. Gastrointestinal Tract Polyp Anomaly Segmentation on Colonoscopy Images Using Graft-U-Net. J Pers Med 2022; 12:jpm12091459. PMID: 36143244; PMCID: PMC9503374; DOI: 10.3390/jpm12091459.
Abstract
Computer-aided polyp segmentation is a crucial task that supports gastroenterologists in examining and resecting anomalous tissue in the gastrointestinal tract. Polyps grow mainly in the mucous membrane of the colorectal area as protrusions of abnormal tissue, and some types, such as adenomas, can progress to cancer; early examination and removal of polyps therefore reduces that risk. Deep learning-based diagnostic systems play a vital role in diagnosing diseases at an early stage. A deep learning method, Graft-U-Net, is proposed to segment polyps in colonoscopy frames. Graft-U-Net is a modified version of UNet comprising three stages: preprocessing, encoder, and decoder. The preprocessing stage improves the contrast of the colonoscopy frames, while the encoder blocks analyze features and the decoder blocks synthesize them. Experiments were conducted on two open-access datasets, Kvasir-SEG and CVC-ClinicDB, both acquired from the large bowel during colonoscopy procedures. The proposed model outperformed existing deep learning models, achieving a mean Dice of 96.61% and a mean Intersection over Union (mIoU) of 82.45% on Kvasir-SEG, and a mean Dice of 89.95% and an mIoU of 81.38% on CVC-ClinicDB.
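The abstract names a contrast-improving preprocessing stage without specifying it. As a hedged sketch of what such a stage can look like (a generic technique, not necessarily the authors' method), plain global histogram equalization of an 8-bit grayscale frame is:

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image.

    Generic contrast-enhancement sketch; the paper's exact
    preprocessing step is not specified in the abstract.
    """
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]           # first non-zero CDF value
    denom = img.size - cdf_min
    if denom == 0:                       # constant image: nothing to stretch
        return img.copy()
    lut = np.round((cdf - cdf_min) / denom * 255).astype(np.uint8)
    return lut[img]                      # remap every pixel through the LUT
```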
Affiliation(s)
- Muhammad Ramzan
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 47040, Pakistan
- Mudassar Raza (corresponding author)
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 47040, Pakistan
- Muhammad Imran Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Islamabad 47040, Pakistan
- Seifedine Kadry
- Department of Applied Data Science, Noroff University College, 4612 Kristiansand, Norway
- Department of Electrical and Computer Engineering, Lebanese American University, Byblos 999095, Lebanon
3
Fati SM, Senan EM, Azar AT. Hybrid and Deep Learning Approach for Early Diagnosis of Lower Gastrointestinal Diseases. Sensors (Basel) 2022; 22:4079. PMID: 35684696; PMCID: PMC9185306; DOI: 10.3390/s22114079.
Abstract
Every year, nearly two million people die as a result of gastrointestinal (GI) disorders, and lower gastrointestinal tract tumors are one of the leading causes of death worldwide. Early detection of the tumor type is therefore of great importance for patient survival. Additionally, removing benign tumors in their early stages has more risks than benefits. Video endoscopy is the essential technology for imaging the GI tract and identifying disorders such as bleeding, ulcers, polyps, and malignant tumors, but a single examination generates some 5000 frames, and reviewing all of them requires extensive analysis and a long time. Artificial intelligence techniques, with their high diagnostic ability, help address these challenges and assist physicians in making accurate diagnostic decisions. In this study, multiple methodologies were developed, organized into four proposed systems, each with more than one diagnostic method. The first system utilizes artificial neural network (ANN) and feed-forward neural network (FFNN) algorithms based on hybrid features extracted by three algorithms: local binary pattern (LBP), gray-level co-occurrence matrix (GLCM), and fuzzy color histogram (FCH). The second system uses the pre-trained CNN models GoogLeNet and AlexNet, extracting deep feature maps and classifying them with high accuracy. The third system uses a hybrid technique of two blocks: CNN models (GoogLeNet and AlexNet) to extract feature maps, followed by a support vector machine (SVM) to classify them. The fourth system uses ANN and FFNN on hybrid features combining the CNN models (GoogLeNet and AlexNet) with the LBP, GLCM, and FCH algorithms. All the proposed systems achieved superior results in diagnosing endoscopic images for early detection of lower gastrointestinal diseases; the FFNN classifier based on the hybrid features extracted by GoogLeNet, LBP, GLCM, and FCH achieved an accuracy of 99.3%, precision of 99.2%, sensitivity of 99%, specificity of 100%, and AUC of 99.87%.
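Of the hand-crafted descriptors listed (LBP, GLCM, FCH), the gray-level co-occurrence matrix is the easiest to sketch. A minimal numpy version for one pixel offset, with the contrast feature derived from it, might look like this (an illustration, not the authors' implementation):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for a single (dx, dy) offset,
    normalized to a joint probability table of shape (levels, levels)."""
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=np.float64)
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1  # count co-occurring pair
    total = m.sum()
    return m / total if total else m

def glcm_contrast(p):
    """Contrast feature: sum over (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())
```

In practice, several offsets and angles are computed and their texture features (contrast, correlation, energy, homogeneity) are concatenated into the feature vector.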
Affiliation(s)
- Suliman Mohamed Fati
- College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Ebrahim Mohammed Senan
- Department of Computer Science & Information Technology, Dr. Babasaheb Ambedkar Marathwada University, Aurangabad 431004, India
- Ahmad Taher Azar
- College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Faculty of Computers and Artificial Intelligence, Benha University, Benha 13518, Egypt
4
5
Polyp Detection from Colorectum Images by Using Attentive YOLOv5. Diagnostics (Basel) 2021; 11:diagnostics11122264. PMID: 34943501; PMCID: PMC8700704; DOI: 10.3390/diagnostics11122264.
Abstract
Background: High-quality colonoscopy is essential to prevent colorectal cancers, and colonoscopy data are stored mainly as images. Artificial intelligence-assisted colonoscopy based on medical images is therefore both a research hotspot and an effective auxiliary means of improving the adenoma detection rate, making it a focus of medical institutions and research departments with important clinical and scientific value. Methods: In this paper, we propose a YOLOv5 model based on a self-attention mechanism for polyp detection. The method treats detection as regression: the entire image is fed to the network, which directly regresses target bounding boxes at multiple positions across the image. During feature extraction, an attention mechanism enhances the contribution of information-rich feature channels and weakens the interference of uninformative ones. Results: The experimental results show that the method accurately identifies polyp images, particularly small polyps and polyps with inconspicuous contrast, and its detection speed is greatly improved over the comparison algorithms. Conclusions: This approach can help reduce missed diagnoses by clinicians during endoscopy and treatment and is of significance to clinical practice.
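The channel-attention idea described (boost informative channels, suppress useless ones) follows the squeeze-and-excitation pattern. A forward-pass-only numpy sketch, with hypothetical weight shapes, is (not the authors' exact YOLOv5 modification):

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel reweighting (forward pass only).

    feat: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) are
    hypothetical bottleneck weights with reduction ratio r.
    """
    c = feat.shape[0]
    squeeze = feat.reshape(c, -1).mean(axis=1)     # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)         # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gate in (0, 1)
    return feat * scale[:, None, None]             # reweight each channel
```

The learned gate pushes uninformative channels toward zero while keeping informative ones near their original magnitude.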
6
Durak S, Bayram B, Bakırman T, Erkut M, Doğan M, Gürtürk M, Akpınar B. Deep neural network approaches for detecting gastric polyps in endoscopic images. Med Biol Eng Comput 2021; 59:1563-1574. PMID: 34259974; DOI: 10.1007/s11517-021-02398-8.
Abstract
Gastrointestinal endoscopy is the primary method used for the diagnosis and treatment of gastric polyps, and the early detection and removal of polyps is vitally important in preventing cancer development. Many studies indicate that a high workload can contribute to misdiagnosing gastric polyps, even for experienced physicians. In this study, we aimed to establish a deep learning-based computer-aided diagnosis system for automatic gastric polyp detection. A private gastric polyp dataset of 2195 endoscopic images and 3031 polyp labels was generated for this purpose from retrospective gastrointestinal endoscopy data of the Karadeniz Technical University, Farabi Hospital. The YOLOv4, CenterNet, EfficientNet, Cross Stage ResNext50-SPP, YOLOv3, YOLOv3-SPP, Single Shot Detection, and Faster Regional CNN deep learning models were implemented and assessed to determine the most efficient model for precancerous gastric polyp detection. The dataset was split 70%/30% for training and testing all the implemented models. YOLOv4 was the most accurate model, with an 87.95% mean average precision. All the deep learning models were also evaluated on a public gastric polyp dataset as test data. The results show that YOLOv4 has significant potential for detecting gastric polyps and can be used effectively in gastrointestinal CAD systems.
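The mean average precision (mAP) reported for YOLOv4 averages per-class average precision (AP). A simplified AP over ranked detections, assuming IoU matching to ground truth has already produced 0/1 labels, can be sketched as:

```python
def average_precision(scores, labels):
    """Average precision from detection confidences and 0/1 match labels.

    Simplified sketch of the AP behind mAP: sort detections by confidence,
    then average the precision observed at each true-positive rank.
    (Real mAP evaluations also handle unmatched ground truths and
    interpolate the precision-recall curve.)
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = 0
    precisions = []
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            tp += 1
            precisions.append(tp / rank)  # precision at this recall point
    n_pos = sum(labels)
    return sum(precisions) / n_pos if n_pos else 0.0
```

mAP is then the mean of this quantity across object classes (and, in some benchmarks, across IoU thresholds).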
Affiliation(s)
- Serdar Durak
- Faculty of Medicine, Department of Gastroenterology, Karadeniz Technical University, Trabzon, Turkey
- Bülent Bayram
- Department of Geoinformatics, Yildiz Technical University, Istanbul, Turkey
- Tolga Bakırman
- Department of Geoinformatics, Yildiz Technical University, Istanbul, Turkey
- Murat Erkut
- Faculty of Medicine, Department of Gastroenterology, Karadeniz Technical University, Trabzon, Turkey
- Metehan Doğan
- Department of Geoinformatics, Yildiz Technical University, Istanbul, Turkey
- Mert Gürtürk
- Department of Geoinformatics, Yildiz Technical University, Istanbul, Turkey
- Burak Akpınar
- Department of Geoinformatics, Yildiz Technical University, Istanbul, Turkey
7
COVID-19 Detection from Chest X-ray Images Using Feature Fusion and Deep Learning. Sensors (Basel) 2021; 21:s21041480. PMID: 33672585; PMCID: PMC8078171; DOI: 10.3390/s21041480.
Abstract
COVID-19, caused by the novel coronavirus, is currently considered among the most dangerous and deadly diseases for the human body. In December 2019, the coronavirus, thought to have originated in Wuhan, China, spread rapidly around the world and has been responsible for a large number of deaths. Earlier detection of COVID-19 through accurate diagnosis, particularly in cases with no obvious symptoms, may decrease the patient death rate. Chest X-ray images are primarily used for the diagnosis of this disease. This research proposes a machine vision approach to detect COVID-19 from chest X-ray images. Features extracted from the X-ray images by the histogram of oriented gradients (HOG) and a convolutional neural network (CNN) were fused to develop the classification model, trained with a CNN (VGGNet). A modified anisotropic diffusion filtering (MADF) technique was employed for better edge preservation and reduced noise, and a watershed segmentation algorithm was used to mark the significant fracture region in the input X-ray images. The testing stage used generalized data for performance evaluation of the model. Cross-validation analysis revealed that a 5-fold strategy could successfully mitigate the overfitting problem. The proposed feature fusion with deep learning achieved satisfactory performance in identifying COVID-19 compared with closely related works, with a testing accuracy of 99.49%, specificity of 95.7%, and sensitivity of 93.65%. Compared with other classification techniques, such as ANN, KNN, and SVM, the CNN used in this study showed better classification performance. K-fold cross-validation demonstrated that the proposed feature fusion technique (98.36%) provided higher accuracy than the individual feature extraction methods HOG (87.34%) or CNN (93.64%).
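The k-fold cross-validation used here partitions the data so that every sample is tested exactly once. A minimal index-splitting sketch (contiguous folds, no shuffling; real pipelines typically shuffle or stratify first) is:

```python
def kfold_indices(n, k):
    """Split indices 0..n-1 into k near-equal contiguous folds;
    yields one (train, test) pair of index lists per fold."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    for i, test in enumerate(folds):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test
```

The reported cross-validated accuracy is then the mean of the per-fold test accuracies.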