1
Kebede SR, Waldamichael FG, Debelee TG, Aleme M, Bedane W, Mezgebu B, Merga ZC. Dual view deep learning for enhanced breast cancer screening using mammography. Sci Rep 2024; 14:3839. PMID: 38360869; PMCID: PMC10869685; DOI: 10.1038/s41598-023-50797-8.
Abstract
Breast cancer has the highest incidence rate among women in Ethiopia compared to other types of cancer. Unfortunately, many cases are detected at a stage where a cure is delayed or no longer possible. To address this issue, mammography-based screening is widely accepted as an effective technique for early detection. However, interpreting mammography images requires radiologists experienced in breast imaging, a resource that is limited in Ethiopia. In this research, we have developed a model to assist radiologists in mass screening for breast abnormalities and in prioritizing patients. Our approach combines an ensemble of EfficientNet-based classifiers with YOLOv5-based suspicious-mass detection to identify abnormalities. The YOLOv5 detector is crucial for explaining the classifier's predictions and for improving sensitivity, particularly when the classifier fails to detect abnormalities. To further enhance the screening process, we have also incorporated an abnormality detection model. The classifier model achieves an F1-score of 0.87 and a sensitivity of 0.82. With the addition of suspicious-mass detection, sensitivity increases to 0.89, at the expense of a slightly lower F1-score of 0.79.
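As a rough illustration of the ensemble-plus-detector idea described above, the sketch below fuses a classifier's verdict with a detector's via an OR-rule, which can only raise sensitivity (at some cost elsewhere, as the reported F1 drop shows). The toy labels, predictions, and the `fuse`/`sensitivity` helpers are invented for illustration, not taken from the paper.

```python
# Hedged sketch: a study is flagged abnormal if EITHER the classifier
# or the suspicious-mass detector raises it. All data here are toy values.

def fuse(classifier_positive, detector_found_mass):
    """OR-rule fusion: abnormal if either model says so."""
    return classifier_positive or detector_found_mass

def sensitivity(predictions, labels):
    """TP / (TP + FN) over binary predictions and ground-truth labels."""
    tp = sum(p and y for p, y in zip(predictions, labels))
    fn = sum((not p) and y for p, y in zip(predictions, labels))
    return tp / (tp + fn)

# Toy cohort: 1 = abnormal. The classifier misses two abnormal cases;
# the detector recovers one of them.
labels    = [1, 1, 1, 0, 0, 1, 1]
clf_preds = [1, 1, 1, 0, 0, 0, 0]
det_preds = [1, 0, 1, 1, 0, 1, 0]

fused = [fuse(c, d) for c, d in zip(clf_preds, det_preds)]
print(sensitivity(clf_preds, labels))  # classifier alone: 0.6
print(sensitivity(fused, labels))      # OR-fusion: 0.8, never lower
```

The OR-rule is why sensitivity cannot decrease: every positive the classifier already catches stays caught, and the detector can only add more.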
Affiliation(s)
- Samuel Rahimeto Kebede
- Research Development Cluster, Ethiopian Artificial Intelligence Institute, Addis Ababa, 40782, Ethiopia
- College of Engineering, Debre Berhan University, Debre Berhan, Ethiopia
- Fraol Gelana Waldamichael
- Research Development Cluster, Ethiopian Artificial Intelligence Institute, Addis Ababa, 40782, Ethiopia
- Taye Girma Debelee
- Research Development Cluster, Ethiopian Artificial Intelligence Institute, Addis Ababa, 40782, Ethiopia
- College of Electrical and Mechanical Engineering, Addis Ababa Science and Technology University, Addis Ababa, 120611, Ethiopia
- Wubalem Bedane
- Radiology, St. Pauli Millenium Medical College, Addis Ababa, Ethiopia
- Bethelhem Mezgebu
- Radiology, St. Pauli Millenium Medical College, Addis Ababa, Ethiopia
2
Debelee TG. Skin Lesion Classification and Detection Using Machine Learning Techniques: A Systematic Review. Diagnostics (Basel) 2023; 13:3147. PMID: 37835889; PMCID: PMC10572538; DOI: 10.3390/diagnostics13193147.
Abstract
Skin lesions are essential for the early detection and management of a number of dermatological disorders. Learning-based methods for skin lesion analysis have drawn much attention lately because of improvements in computer vision and machine learning techniques. This survey presents a review of the most recent methods for skin lesion classification, segmentation, and detection. The significance of skin lesion analysis in healthcare and the difficulties of physical inspection are discussed. State-of-the-art papers targeting skin lesion classification are then covered in depth, with the goal of correctly identifying the type of skin lesion from dermoscopic, macroscopic, and other lesion image formats. The contributions and limitations of the various techniques used in the selected papers, including deep learning architectures and conventional machine learning methods, are examined. The survey then looks into papers focused on skin lesion segmentation and detection techniques that aim to identify the precise borders of skin lesions and classify them accordingly. These techniques facilitate subsequent analyses and allow for precise measurements and quantitative evaluations. Well-known segmentation algorithms are discussed, including deep-learning-based, graph-based, and region-based ones, along with the difficulties, datasets, and evaluation metrics particular to skin lesion segmentation. Throughout the survey, notable datasets, benchmark challenges, and evaluation metrics relevant to skin lesion analysis are highlighted, providing a comprehensive overview of the field. The paper concludes with a summary of the major trends, challenges, and potential future directions in skin lesion classification, segmentation, and detection, aiming to inspire further advancements in this critical domain of dermatological research.
Affiliation(s)
- Taye Girma Debelee
- Ethiopian Artificial Intelligence Institute, Addis Ababa 40782, Ethiopia
- Department of Electrical and Computer Engineering, Addis Ababa Science and Technology University, Addis Ababa 16417, Ethiopia
3
Prakash BV, Kannan AR, Santhiyakumari N, Kumarganesh S, Raja DSS, Hephzipah JJ, MartinSagayam K, Pomplun M, Dang H. Meningioma brain tumor detection and classification using hybrid CNN method and RIDGELET transform. Sci Rep 2023; 13:14522. PMID: 37666922; PMCID: PMC10477173; DOI: 10.1038/s41598-023-41576-6.
Abstract
Detecting meningioma tumors is more challenging than detecting other tumors because of their lower pixel intensity. Modern medical platforms require a fully automated system for meningioma detection. Hence, this study proposes a novel and highly efficient hybrid convolutional neural network (HCNN) classifier to distinguish meningioma brain images from non-meningioma brain images. The HCNN classification technique consists of the Ridgelet transform, feature computation, a classifier module, and a segmentation algorithm. Pixel stability during the decomposition process was improved by the Ridgelet transform, and features were computed from the Ridgelet coefficients. These features were classified using the HCNN approach, and tumor pixels were detected using the segmentation algorithm. The experimental results were analyzed for meningioma tumor images by applying the proposed method to the BRATS 2019 and Nanfang datasets. The proposed HCNN-based meningioma detection system achieved 99.31% sensitivity, 99.37% specificity, and 99.24% segmentation accuracy on the BRATS 2019 dataset, and 99.35% sensitivity, 99.22% specificity, and 99.04% segmentation accuracy on brain magnetic resonance imaging (MRI) in the Nanfang dataset. The proposed system obtained 99.81% classification accuracy, 99.2% sensitivity, 99.7% specificity, and 99.8% segmentation accuracy on the BRATS 2022 dataset. The experimental results of the proposed HCNN algorithm were compared with those of state-of-the-art meningioma detection algorithms.
Affiliation(s)
- B V Prakash
- Faculty of Information Technology, Government College of Engineering, Erode, Tamil Nadu, India
- A Rajiv Kannan
- Faculty of Computer Science and Engineering, K.S.R College of Engineering, Namakkal, India
- N Santhiyakumari
- Department of ECE, Knowledge Institute of Technology, Salem, Tamil Nadu, India
- S Kumarganesh
- Department of ECE, Knowledge Institute of Technology, Salem, Tamil Nadu, India
- D Siva Sundhara Raja
- Faculty of Electronics and Communication Engineering, SACS MAVMM Engineering College, Madurai, Tamil Nadu, India
- J Jasmine Hephzipah
- Faculty of Electronics and Communication Engineering, R.M.K. Engineering College, Kavaraipettai, Tamil Nadu, India
- K MartinSagayam
- Department of ECE, Karunya Institute of Technology and Sciences, Coimbatore, India
- Marc Pomplun
- Department of Computer Science, University of Massachusetts Boston, Boston, MA, USA
- Hien Dang
- Department of Mathematics and Computer Science, Molloy University, Rockville Centre, NY, USA
- Faculty of Computer Science and Engineering, Thuyloi University, Hanoi, Vietnam
4
Lim CC, Ling AHW, Chong YF, Mashor MY, Alshantti K, Aziz ME. Comparative Analysis of Image Processing Techniques for Enhanced MRI Image Quality: 3D Reconstruction and Segmentation Using 3D U-Net Architecture. Diagnostics (Basel) 2023; 13:2377. PMID: 37510120; PMCID: PMC10377862; DOI: 10.3390/diagnostics13142377.
Abstract
Osteosarcoma is a common type of bone tumor, particularly prevalent in children and adolescents between the ages of 5 and 25 who are experiencing growth spurts during puberty. Manual delineation of tumor regions in MRI images can be laborious and time-consuming, and the results may be subjective and difficult to replicate. Therefore, a convolutional neural network (CNN) was developed to automatically segment osteosarcoma cancerous cells in three types of MRI images. The study consisted of five main stages. First, 3692 DICOM-format MRI images were acquired from 46 patients, including T1-weighted, T2-weighted, and T1-weighted with injection of gadolinium (T1W + Gd) images. Contrast stretching and a median filter were applied to enhance image intensity and remove noise, and the pre-processed images were reconstructed into NIfTI-format files for deep learning. The MRI images were then transformed to fit the CNN's input requirements. A 3D U-Net architecture was proposed, with optimized parameters, to build an automatic segmentation model capable of segmenting osteosarcoma from the MRI images. The 3D U-Net segmentation model achieved excellent results, with mean dice similarity coefficients (DSC) of 83.75%, 85.45%, and 87.62% for T1W, T2W, and T1W + Gd images, respectively. However, the study found that the proposed method had some limitations, including poorly defined borders, missing lesion portions, and other confounding factors. In summary, an automatic CNN-based segmentation method has been developed to address the challenge of manually segmenting osteosarcoma cancerous cells in MRI images. While the proposed method showed promise, the study revealed limitations that need to be addressed to improve its efficacy.
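The dice similarity coefficient (DSC) reported above reduces to a simple overlap ratio between the predicted and ground-truth masks. A generic sketch on toy flattened binary masks (not the authors' code):

```python
# DSC = 2|A ∩ B| / (|A| + |B|) for two binary masks of equal length.
# Masks here are toy flat lists standing in for flattened 3-D volumes.

def dice(pred, truth):
    inter = sum(p * t for p, t in zip(pred, truth))  # overlapping voxels
    total = sum(pred) + sum(truth)                   # voxels in each mask
    return 2.0 * inter / total if total else 1.0     # empty-vs-empty -> 1

pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
print(dice(pred, truth))  # 2*2 / (3+3) = 0.666...
```

A DSC of 1.0 means perfect overlap; the study's 83-88% figures mean most, but not all, tumor voxels agree with the manual contour.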
Affiliation(s)
- Chee Chin Lim
- Faculty of Electronic Engineering & Technology, Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia
- Sport Engineering Research Centre (SERC), Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia
- Apple Ho Wei Ling
- Faculty of Electronic Engineering & Technology, Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia
- Yen Fook Chong
- Sport Engineering Research Centre (SERC), Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia
- Mohd Yusoff Mashor
- Faculty of Electronic Engineering & Technology, Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia
- Sport Engineering Research Centre (SERC), Universiti Malaysia Perlis, Arau 02600, Perlis, Malaysia
- Mohd Ezane Aziz
- Department of Radiology, Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
5
Fassbind B, Langenbucher A, Streich A. Automated cornea diagnosis using deep convolutional neural networks based on cornea topography maps. Sci Rep 2023; 13:6566. PMID: 37085580; PMCID: PMC10121572; DOI: 10.1038/s41598-023-33793-w.
Abstract
Cornea topography maps allow ophthalmologists to screen for and diagnose cornea pathologies. We aim to automatically identify any cornea abnormalities based on such cornea topography maps, with a focus on diagnosing keratoconus. To do so, we represent the OCT scans as images and apply convolutional neural networks (CNNs) for the automatic analysis. The model is based on a state-of-the-art ConvNeXt CNN architecture, with weights fine-tuned for the specific application using the cornea scans dataset. A set of 1940 consecutive screening scans from the Saarland University Hospital Clinic for Ophthalmology was annotated and used for model training and validation. All scans were recorded with a CASIA2 anterior-segment optical coherence tomography (OCT) scanner. The proposed model achieves a sensitivity of 98.46% and a specificity of 91.96% when distinguishing between healthy and pathological corneas. Our approach enables the screening of cornea pathologies and the classification of common pathologies such as keratoconus. Furthermore, the approach is independent of the topography scanner and enables the visualization of the scan regions that drive the model's decisions.
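The sensitivity and specificity quoted above are just ratios over a binary confusion matrix. A minimal sketch with invented counts (not the study's raw numbers):

```python
# Sensitivity = TP/(TP+FN): fraction of pathological corneas caught.
# Specificity = TN/(TN+FP): fraction of healthy corneas correctly cleared.
# The counts below are toy values chosen for illustration.

def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# e.g. 128 pathological scans with 126 caught; 112 healthy with 103 cleared
sens, spec = sens_spec(tp=126, fn=2, tn=103, fp=9)
print(round(sens, 4), round(spec, 4))
```

Note that the two metrics trade off: lowering the decision threshold catches more pathology (sensitivity up) at the cost of more false alarms (specificity down).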
Affiliation(s)
- Benjamin Fassbind
- Department of Computer Science, Lucerne University of Applied Sciences and Arts, Rotkreuz/Zug, 6343, Switzerland
- Achim Langenbucher
- Department of Experimental Ophthalmology, Saarland University, Homburg/Saar, 66123, Germany
- Andreas Streich
- Department of Computer Science, Lucerne University of Applied Sciences and Arts, Rotkreuz/Zug, 6343, Switzerland
6
Ali MU, Hussain SJ, Zafar A, Bhutta MR, Lee SW. WBM-DLNets: Wrapper-Based Metaheuristic Deep Learning Networks Feature Optimization for Enhancing Brain Tumor Detection. Bioengineering (Basel) 2023; 10:475. PMID: 37106662; PMCID: PMC10135892; DOI: 10.3390/bioengineering10040475.
Abstract
This study presents wrapper-based metaheuristic deep learning networks (WBM-DLNets) feature optimization algorithms for brain tumor diagnosis using magnetic resonance imaging. Herein, 16 pretrained deep learning networks are used to compute the features. Eight metaheuristic optimization algorithms, namely the marine predator algorithm, atom search optimization algorithm (ASOA), Harris hawks optimization algorithm, butterfly optimization algorithm, whale optimization algorithm, grey wolf optimization algorithm (GWOA), bat algorithm, and firefly algorithm, are used to evaluate the classification performance using a support vector machine (SVM)-based cost function. A selection approach is applied to determine the best deep learning network. Finally, all deep features of the best deep learning networks are concatenated to train the SVM model. The proposed WBM-DLNets approach is validated on an available online dataset. The results reveal that classification accuracy is significantly improved by utilizing the features selected using WBM-DLNets relative to those obtained using the full set of deep features. DenseNet-201-GWOA and EfficientNet-b0-ASOA yield the best results, with a classification accuracy of 95.7%. Additionally, the results of the WBM-DLNets approach are compared with those reported in the literature.
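The core wrapper idea, scoring each candidate feature subset by actually training and evaluating a classifier on it, can be sketched as below. This is a loose stand-in, not the paper's method: a nearest-centroid rule replaces the SVM cost function, the data are synthetic, and with only four features the subset space is enumerated exhaustively instead of explored by a metaheuristic such as GWO or ASO.

```python
import itertools
import random

def accuracy(rows, labels, subset):
    """Resubstitution accuracy of a nearest-centroid rule on `subset`
    (a cheap stand-in for the paper's SVM-based cost function)."""
    cent = {}
    for y in set(labels):
        pts = [r for r, l in zip(rows, labels) if l == y]
        cent[y] = {j: sum(p[j] for p in pts) / len(pts) for j in subset}
    hits = 0
    for r, y in zip(rows, labels):
        pred = min(cent, key=lambda c: sum((r[j] - cent[c][j]) ** 2
                                           for j in subset))
        hits += pred == y
    return hits / len(rows)

random.seed(0)
# Synthetic features: feature 0 carries the class signal, 1-3 are noise.
rows = [[i % 2 + random.gauss(0, 0.1)] +
        [random.gauss(0, 1) for _ in range(3)] for i in range(40)]
labels = [i % 2 for i in range(40)]

# Wrapper search: keep the subset whose trained classifier scores best.
best, best_acc = None, -1.0
for k in range(1, 5):
    for subset in itertools.combinations(range(4), k):
        acc = accuracy(rows, labels, subset)
        if acc > best_acc:
            best, best_acc = subset, acc
print(best, round(best_acc, 3))  # the informative feature is selected
```

The point of the metaheuristics in the paper is precisely to avoid this exhaustive loop when the feature count (thousands of deep features) makes enumeration infeasible.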
Affiliation(s)
- Muhammad Umair Ali
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Shaik Javeed Hussain
- Department of Electrical and Electronics, Global College of Engineering and Technology, Muscat 112, Oman
- Amad Zafar
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Muhammad Raheel Bhutta
- Department of Electrical and Computer Engineering, University of Utah Asia Campus, Incheon 21985, Republic of Korea
- Seung Won Lee
- Department of Precision Medicine, Sungkyunkwan University School of Medicine, Suwon 16419, Republic of Korea
7
Pantic I, Cumic J, Dugalic S, Petroianu GA, Corridon PR. Gray level co-occurrence matrix and wavelet analyses reveal discrete changes in proximal tubule cell nuclei after mild acute kidney injury. Sci Rep 2023; 13:4025. PMID: 36899130; PMCID: PMC10006226; DOI: 10.1038/s41598-023-31205-7.
Abstract
Acute kidney injury (AKI) refers to an abrupt reduction in renal function resulting from numerous conditions. Morbidity, mortality, and treatment costs related to AKI are relatively high. The condition is strongly associated with damage to proximal tubule cells (PTCs), generating distinct patterns of transcriptional and epigenetic alterations that result in structural changes in the nuclei of this epithelium. To date, AKI-related nuclear chromatin redistribution in PTCs is poorly understood, and it is unclear whether changes in PTC chromatin patterns can be detected using conventional microscopy during mild AKI, which can progress to more debilitating forms of injury. In recent years, gray level co-occurrence matrix (GLCM) analysis and the discrete wavelet transform (DWT) have emerged as potentially valuable methods for identifying discrete structural changes in nuclear chromatin architecture that are not visible during a conventional histopathological exam. Here we present findings indicating that GLCM and DWT methods can be successfully used in nephrology to detect subtle nuclear morphological alterations associated with mild tissue injury, demonstrated in rodents by inducing a mild form of AKI through ischemia-reperfusion injury. In this model, mild ischemic AKI was associated with a significant reduction in the local textural homogeneity of PTC nuclei, quantified by GLCM indicators, and an increase in nuclear structural heterogeneity, indirectly assessed with DWT energy coefficients.
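For reference, the GLCM homogeneity descriptor used in such texture analyses can be sketched in a few lines. The pixel offset, number of gray levels, and the two 4x4 "images" below are toy choices for illustration, not the study's settings:

```python
# GLCM for horizontally adjacent pixels (offset (0, 1)), with the
# homogeneity (inverse difference moment) descriptor:
#   sum over (i, j) of P(i, j) / (1 + (i - j)^2)
# Uniform textures score 1.0; textures with large gray-level jumps score low.

def glcm_homogeneity(img, levels):
    glcm = [[0] * levels for _ in range(levels)]
    pairs = 0
    for row in img:
        for a, b in zip(row, row[1:]):   # each horizontal neighbor pair
            glcm[a][b] += 1
            pairs += 1
    return sum(glcm[i][j] / pairs / (1 + (i - j) ** 2)
               for i in range(levels) for j in range(levels))

uniform = [[1, 1, 1, 1]] * 4             # perfectly homogeneous "nucleus"
noisy   = [[0, 3, 0, 3]] * 4             # alternating gray-level extremes
print(glcm_homogeneity(uniform, 4))      # 1.0
print(glcm_homogeneity(noisy, 4))        # 0.1: every pair jumps 3 levels
```

The study's finding corresponds to injured nuclei drifting from the first case toward the second: more frequent large gray-level transitions, hence lower homogeneity.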
Affiliation(s)
- Igor Pantic
- Faculty of Medicine, Department of Medical Physiology, Laboratory for Cellular Physiology, University of Belgrade, Visegradska 26/II, 11129, Belgrade, Serbia
- University of Haifa, 199 Abba Hushi Blvd, Mount Carmel, 3498838, Haifa, Israel
- Department of Pharmacology, College of Medicine and Health Sciences, Khalifa University of Science and Technology, PO Box 127788, Abu Dhabi, UAE
- Jelena Cumic
- Faculty of Medicine, University of Belgrade, University Clinical Center of Serbia, Dr. Koste Todorovica 8, 11129, Belgrade, Serbia
- Stefan Dugalic
- Faculty of Medicine, University of Belgrade, University Clinical Center of Serbia, Dr. Koste Todorovica 8, 11129, Belgrade, Serbia
- Georg A Petroianu
- Department of Pharmacology, College of Medicine and Health Sciences, Khalifa University of Science and Technology, PO Box 127788, Abu Dhabi, UAE
- Peter R Corridon
- Department of Immunology and Physiology, College of Medicine and Health Sciences, Khalifa University of Science and Technology, PO Box 127788, Abu Dhabi, UAE
- Healthcare Engineering Innovation Center, Biomedical Engineering, Khalifa University of Science and Technology, PO Box 127788, Abu Dhabi, UAE
- Center for Biotechnology, Khalifa University of Science and Technology, PO Box 127788, Abu Dhabi, UAE
- Indiana Center for Biological Microscopy, Indiana University School of Medicine, Indianapolis, IN, USA
8
Robustness Fine-Tuning Deep Learning Model for Cancers Diagnosis Based on Histopathology Image Analysis. Diagnostics (Basel) 2023; 13:699. PMID: 36832186; PMCID: PMC9955143; DOI: 10.3390/diagnostics13040699.
Abstract
Histopathology is the most accurate way to diagnose cancer and identify prognostic and therapeutic targets. The likelihood of survival is significantly increased by early cancer detection. Following the enormous success of deep networks, significant attempts have been made to analyze cancer disorders, particularly colon and lung cancers. To this end, this paper examines how well deep networks can diagnose various cancers from histopathology images. This work aims to increase the performance of a deep learning architecture for processing histopathology images by constructing a novel fine-tuned deep network for colon and lung cancers. The adjustments are performed using regularization, batch normalization, and hyperparameter optimization. The suggested fine-tuned model was evaluated using the LC2500 dataset. Our proposed model's average precision, recall, F1-score, specificity, and accuracy were 99.84%, 99.85%, 99.84%, 99.96%, and 99.94%, respectively. The experimental findings reveal that the suggested fine-tuned learning model, based on the pre-trained ResNet101 network, achieves higher results than recent state-of-the-art approaches and other current powerful CNN models.
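The metric family quoted above (precision, recall, F1, specificity) follows directly from confusion-matrix counts; the counts in this sketch are invented for illustration, not the paper's:

```python
# Per-class metrics from a binary confusion matrix:
#   precision = TP/(TP+FP), recall = TP/(TP+FN) (a.k.a. sensitivity),
#   F1 = harmonic mean of precision and recall,
#   specificity = TN/(TN+FP).

def metrics(tp, fp, fn, tn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    specificity = tn / (tn + fp)
    return precision, recall, f1, specificity

# Toy counts: 100 cancerous tiles (95 caught), 200 benign (195 cleared)
p, r, f1, spec = metrics(tp=95, fp=5, fn=5, tn=195)
print(p, r, round(f1, 4), spec)
```

For multi-class settings like colon-versus-lung subtypes, these are typically computed per class and then macro-averaged, which matches the paper's "average" phrasing.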
9
A Comprehensive Survey on the Progress, Process, and Challenges of Lung Cancer Detection and Classification. J Healthc Eng 2022; 2022:5905230. PMID: 36569180; PMCID: PMC9788902; DOI: 10.1155/2022/5905230.
Abstract
Lung cancer is the leading cause of cancer deaths worldwide, and the death rate is rising steadily. Detecting lung cancer early improves the chances of recovery. However, because the number of radiologists is limited and they have been working overtime, the growth in image data makes it hard for them to evaluate images accurately. As a result, many researchers have developed automated ways to predict the growth of cancer cells from medical images quickly and accurately. Much previous work addressed computer-aided detection (CADe) and computer-aided diagnosis (CADx) in computed tomography (CT), magnetic resonance imaging (MRI), and X-ray, with the goal of effectively detecting and segmenting pulmonary nodules and classifying them as malignant or benign. However, no complete, comprehensive review covering all aspects of lung cancer has been done. In this paper, every aspect of lung cancer is discussed in detail, including datasets, image preprocessing, segmentation methods, optimal feature extraction and selection methods, evaluation metrics, and classifiers. Finally, the study examines several lung cancer-related issues and possible solutions.
10
Wei W, Jia G, Wu Z, Wang T, Wang H, Wei K, Cheng C, Liu Z, Zuo C. A multidomain fusion model of radiomics and deep learning to discriminate between PDAC and AIP based on 18F-FDG PET/CT images. Jpn J Radiol 2022; 41:417-427. PMID: 36409398; PMCID: PMC9676903; DOI: 10.1007/s11604-022-01363-1.
Abstract
PURPOSE: To explore a multidomain fusion model of radiomics and deep learning features based on 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) images to distinguish pancreatic ductal adenocarcinoma (PDAC) from autoimmune pancreatitis (AIP), which could effectively improve diagnostic accuracy. MATERIALS AND METHODS: This retrospective study included 48 patients with AIP (mean age, 65 ± 12.0 years; range, 37-90 years) and 64 patients with PDAC (mean age, 66 ± 11.3 years; range, 32-88 years). Three methods of identifying PDAC and AIP from 18F-FDG PET/CT images were compared: the radiomics model (RAD_model), the deep learning model (DL_model), and the multidomain fusion model (MF_model). We also compared the classification results of PET/CT, PET, and CT images in these three models. In addition, we explored the attributes of abstract deep learning features by analyzing their correlation with radiomics features. Five-fold cross-validation was used to compute the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), accuracy (Acc), sensitivity (Sen), and specificity (Spe) to quantitatively evaluate the performance of the different classification models. RESULTS: The multidomain fusion model had the best overall performance compared with the radiomics and deep learning models, with an AUC, accuracy, sensitivity, and specificity of 96.4% (95% CI 95.4-97.3%), 90.1% (95% CI 88.7-91.5%), 87.5% (95% CI 84.3-90.6%), and 93.0% (95% CI 90.3-95.6%), respectively. Our study also showed that the multimodal features of PET/CT were superior to either PET or CT features alone, and that first-order radiomics features provided valuable complementary information for the deep learning model.
CONCLUSION: The preliminary results demonstrate that the proposed multidomain fusion model fully exploits the value of radiomics and deep learning features based on 18F-FDG PET/CT images, providing competitive accuracy for discriminating PDAC from AIP.
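Two of the plumbing steps mentioned in the Methods (joining feature domains into one vector per patient, and splitting patients into five cross-validation folds) can be sketched as follows. The feature values and the contiguous fold scheme are illustrative stand-ins, not the study's data or exact protocol:

```python
# Multidomain fusion at the feature level: per-patient radiomics and deep
# feature vectors are simply concatenated before classification.
radiomics = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]   # toy radiomics features
deep      = [[7.0], [8.0], [9.0]]                  # toy deep features
fused = [r + d for r, d in zip(radiomics, deep)]
print(fused[0])                                    # [0.1, 0.2, 7.0]

def k_fold(n, k=5):
    """Yield (train_idx, test_idx) pairs; each sample is tested once."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        yield train, test
        start += size

folds = list(k_fold(10, 5))
print(len(folds), folds[0][1])   # 5 folds; first held-out test fold
```

Each fold's model is trained on the train indices and scored on the held-out test indices; the reported AUC/Acc/Sen/Spe are then aggregated across the five folds.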
Affiliation(s)
- Wenting Wei
- School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun, 130022, China
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, 88 Keling Road, Suzhou, 215163, China
- Guorong Jia
- Department of Nuclear Medicine, The First Affiliated Hospital of Naval Medical University (Changhai Hospital), 168 Changhai Road, Shanghai, 200433, China
- Zhongyi Wu
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, 88 Keling Road, Suzhou, 215163, China
- Tao Wang
- Department of Nuclear Medicine, The First Affiliated Hospital of Naval Medical University (Changhai Hospital), 168 Changhai Road, Shanghai, 200433, China
- Heng Wang
- School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun, 130022, China
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, 88 Keling Road, Suzhou, 215163, China
- Kezhen Wei
- School of Electronic and Information Engineering, Changchun University of Science and Technology, Changchun, 130022, China
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, 88 Keling Road, Suzhou, 215163, China
- Chao Cheng
- Department of Nuclear Medicine, The First Affiliated Hospital of Naval Medical University (Changhai Hospital), 168 Changhai Road, Shanghai, 200433, China
- Zhaobang Liu
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, 88 Keling Road, Suzhou, 215163, China
- Changjing Zuo
- Department of Nuclear Medicine, The First Affiliated Hospital of Naval Medical University (Changhai Hospital), 168 Changhai Road, Shanghai, 200433, China
11
Lightweight Multireceptive Field CNN for 12-Lead ECG Signal Classification. Comput Intell Neurosci 2022; 2022:8413294. PMID: 35978890; PMCID: PMC9377844; DOI: 10.1155/2022/8413294.
Abstract
The electrical activity produced during the heartbeat is measured and recorded by an ECG. Cardiologists can interpret the ECG machine's signals and determine the heart's health condition and related causes of ECG signal abnormalities. However, a shortage of cardiologists is a challenge in both developing and developed countries. Moreover, a cardiologist's experience matters in the accurate interpretation of the ECG signal, as ECG interpretation is quite tricky even for experienced doctors. Therefore, developing computer-aided ECG interpretation is required for its wide-reaching effect. The 12-lead ECG generates a 1D signal with 12 channels and is among the best-known time-series data. Classical machine learning can support automatic detection, but deep learning is more effective in the classification task. 1D CNNs are widely used for cardiovascular disease detection from ECG datasets. However, adopting a deep learning model designed for computer vision can be problematic because of its massive parameters and the need for many training samples. In many detection tasks, ranging from semantic segmentation of medical images to time-series data classification, multireceptive field CNNs have improved performance. Notably, the nature of the ECG dataset made performance improvement possible by using a multireceptive field CNN (MRF-CNN). Using an MRF-CNN, it is possible to design a model that considers semantic context information within ECG signals at different scales. As a result, this study designed a multireceptive field CNN architecture for ECG classification. We achieved a 0.72 score and 0.93 AUC for 5 superclasses, a 0.46 score and 0.92 AUC for 20 subclasses, and a 0.31 score and 0.92 AUC for all diagnostic classes of the PTB-XL dataset.
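A hedged sketch of the "multireceptive field" idea: the same 1D signal is convolved with kernels of several sizes and the responses are kept side by side, so downstream layers see context at multiple temporal scales. This is a pure-Python valid convolution with a moving-average kernel standing in for learned weights, and a toy ramp standing in for an ECG; it is not the paper's architecture:

```python
# Valid 1-D convolution: output length = len(signal) - len(kernel) + 1.
def conv1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# One branch per receptive field; a real MRF block would learn each
# kernel and pad/concatenate the feature maps along the channel axis.
def multireceptive(signal, kernel_sizes=(3, 5, 7)):
    branches = []
    for k in kernel_sizes:
        kernel = [1.0 / k] * k   # moving-average stand-in for learned weights
        branches.append(conv1d(signal, kernel))
    return branches

ecg = [float(i) for i in range(10)]   # toy stand-in for one ECG lead
branches = multireceptive(ecg)
print([len(b) for b in branches])     # [8, 6, 4]: one map per kernel size
```

Small kernels respond to sharp local morphology (e.g. the QRS complex), while larger ones integrate longer context (e.g. beat-to-beat rhythm), which is the motivation for mixing receptive fields.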
12
Ma CY, Zhou JY, Xu XT, Qin SB, Han MF, Cao XH, Gao YZ, Xu L, Zhou JJ, Zhang W, Jia LC. Clinical evaluation of deep learning-based clinical target volume three-channel auto-segmentation algorithm for adaptive radiotherapy in cervical cancer. BMC Med Imaging 2022; 22:123. PMID: 35810273; PMCID: PMC9271246; DOI: 10.1186/s12880-022-00851-0.
Abstract
Objectives: Accurate contouring of the clinical target volume (CTV) is a key element of radiotherapy in cervical cancer. We validated a novel deep learning (DL)-based auto-segmentation algorithm for CTVs in cervical cancer called the three-channel adaptive auto-segmentation network (TCAS). Methods: A total of 107 cases were collected and contoured by senior radiation oncologists (ROs). Each case consisted of (1) a contrast-enhanced CT scan for positioning, (2) the related CTV, (3) multiple plain CT scans during treatment, and (4) the related CTV. After registration between (1) and (3) for the same patient, the aligned image and CTV were generated. Method 1 is rigid registration and method 2 is deformable registration, with the aligned CTV taken as the result; method 3 is rigid registration plus TCAS and method 4 is deformable registration plus TCAS, with the result generated by the DL-based method. Results: From the 107 cases, 15 pairs were selected as the test set. The dice similarity coefficient (DSC) of method 1 was 0.8155 ± 0.0368 and that of method 2 was 0.8277 ± 0.0315; the DSCs of methods 3 and 4 were 0.8914 ± 0.0294 and 0.8921 ± 0.0231, respectively. The mean surface distance and Hausdorff distance of methods 3 and 4 were markedly better than those of methods 1 and 2. Conclusions: The TCAS achieved accuracy comparable to manual delineation by senior ROs and was significantly better than direct registration.
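The two contour-distance measures quoted in the Results can be sketched on toy 2-D point sets as follows; real implementations operate on the surface points of 3-D contours, but the definitions are the same:

```python
import math

# Mean surface distance averages each point's distance to the nearest
# point of the other contour, in both directions; the Hausdorff distance
# takes the worst such mismatch.

def directed_distances(a, b):
    """For each point in `a`, distance to its nearest neighbor in `b`."""
    return [min(math.dist(p, q) for q in b) for p in a]

def mean_surface_distance(a, b):
    da, db = directed_distances(a, b), directed_distances(b, a)
    return (sum(da) + sum(db)) / (len(da) + len(db))

def hausdorff(a, b):
    return max(max(directed_distances(a, b)),
               max(directed_distances(b, a)))

ref  = [(0, 0), (1, 0), (2, 0)]   # toy reference contour points
test = [(0, 1), (1, 1), (2, 4)]   # toy auto-segmented contour points
print(mean_surface_distance(ref, test))
print(hausdorff(ref, test))       # 4.0: driven by the one outlier point
```

The example shows why both are reported: the mean surface distance stays modest while the Hausdorff distance exposes the single badly placed point, which matters clinically for CTV margins.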
Affiliation(s)
- Chen-Ying Ma
- Department of Radiation Oncology, 1st Affiliated Hospital of Soochow University, No. 188 Shizi Street, Suzhou, 215123, China
- Ju-Ying Zhou
- Department of Radiation Oncology, 1st Affiliated Hospital of Soochow University, No. 188 Shizi Street, Suzhou, 215123, China
- Xiao-Ting Xu
- Department of Radiation Oncology, 1st Affiliated Hospital of Soochow University, No. 188 Shizi Street, Suzhou, 215123, China
- Song-Bing Qin
- Department of Radiation Oncology, 1st Affiliated Hospital of Soochow University, No. 188 Shizi Street, Suzhou, 215123, China
- Miao-Fei Han
- Shanghai United Imaging Healthcare, Co. Ltd., Jiading, 201807, China
- Xiao-Huan Cao
- Shanghai United Imaging Healthcare, Co. Ltd., Jiading, 201807, China
- Yao-Zong Gao
- Shanghai United Imaging Healthcare, Co. Ltd., Jiading, 201807, China
- Lu Xu
- Shanghai United Imaging Healthcare, Co. Ltd., Jiading, 201807, China
- Jing-Jie Zhou
- Shanghai United Imaging Healthcare, Co. Ltd., Jiading, 201807, China
- Wei Zhang
- Shanghai United Imaging Healthcare, Co. Ltd., Jiading, 201807, China
- Le-Cheng Jia
- United Imaging Research Institute of Innovative Medical Equipment, Shenzhen, 518045, China
|
13
|
Abstract
Cereals are a major source of the human diet: they constitute more than two-thirds of the world's food supply and cover more than 56% of its cultivable land. These important food sources are affected by a variety of damaging diseases that cause significant losses in annual production, so early detection of diseases and quantification of their severity have drawn urgent attention from researchers worldwide. One emerging and popular approach to this task is machine learning. In this work, we identify the most common and damaging diseases affecting cereal crop production and review 45 studies from the past five years on the detection and classification of diseases occurring on six cereal crops. In addition, we identify and summarise the publicly available datasets for each cereal crop; the scarcity of such datasets is the main challenge we identified for applying machine learning to cereal crop disease detection. In this survey, we found deep convolutional neural networks trained on hyperspectral data to be the most effective approach for early disease detection, and transfer learning to be the most commonly used training method and the one yielding the best results.
|
14
|
A Hybrid Machine Learning Model Based on Global and Local Learner Algorithms for Diabetes Mellitus Prediction. Journal of Biomimetics, Biomaterials and Biomedical Engineering 2022. [DOI: 10.4028/www.scientific.net/jbbbe.54.65]
Abstract
Health has been a critical concern for living things since long before modern technology existed. The healthcare domain has evolved enormously and now provides ample scope for research; among its most studied areas are diabetes mellitus (DM), breast cancer and brain tumors. DM is a severe chronic disease that affects human health and has a high prevalence throughout the world, so early prediction of DM is important to reduce, or even avoid, its risk. In this study, we propose a DM prediction model based on global and local learner algorithms. The proposed global and local learners stacking (GLLS) model combines prediction algorithms from two largely different but complementary machine learning paradigms, specifically XGBoost and NB as global learners and kNN and SVM (with RBF kernel) as local learners, and aggregates them with a stacking ensemble that uses LR as the meta-learner. The effectiveness of the GLLS model was demonstrated by comparing several performance measures and the results of different contrast experiments. On the UCI Pima Indian diabetes dataset (PIDD), the model achieved 99.5%, 99.5%, 99.5%, 99.1%, and 100% in terms of accuracy, AUC, F1 score, sensitivity, and specificity respectively, better than other research results reported in the literature. Moreover, to further validate the GLLS model's performance, three additional medical datasets (Messidor, WBC, ILPD) were considered, on which the model achieved accuracies of 82.1%, 98.6%, and 89.3% respectively. The experimental results demonstrate the effectiveness and superiority of the proposed GLLS model.
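The GLLS design described above (global and local base learners aggregated by a stacked LR meta-learner) can be sketched with scikit-learn's StackingClassifier. This is an illustrative reconstruction on synthetic data, not the authors' code; XGBoost is swapped for scikit-learn's GradientBoostingClassifier so the example stays self-contained:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for a tabular medical dataset such as PIDD.
X, y = make_classification(n_samples=400, n_features=8, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

base_learners = [
    ("gb", GradientBoostingClassifier(random_state=0)),  # global learner (XGBoost stand-in)
    ("nb", GaussianNB()),                                # global learner
    ("knn", KNeighborsClassifier()),                     # local learner
    ("svm", SVC(kernel="rbf", probability=True)),        # local learner
]
# LR meta-learner combines the base learners' cross-validated predictions.
model = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression())
acc = model.fit(Xtr, ytr).score(Xte, yte)
print(round(acc, 3))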
|
15
|
Waldamichael FG, Debelee TG, Ayano YM. Coffee disease detection using a robust HSV color-based segmentation and transfer learning for use on smartphones. Int J Intell Syst 2021. [DOI: 10.1002/int.22747]
Affiliation(s)
- Taye Girma Debelee
- Research and Development Cluster, Ethiopian Artificial Intelligence Center, Addis Ababa, Ethiopia
- Department of Electrical and Computer Engineering, Addis Ababa Science and Technology University, Addis Ababa, Ethiopia
|
16
|
Rufo DD, Debelee TG, Ibenthal A, Negera WG. Diagnosis of Diabetes Mellitus Using Gradient Boosting Machine (LightGBM). Diagnostics (Basel) 2021; 11:1714. [PMID: 34574055 PMCID: PMC8467876 DOI: 10.3390/diagnostics11091714]
Abstract
Diabetes mellitus (DM) is a severe chronic disease that affects human health and has a high prevalence worldwide. Research has shown that half of the diabetic people in the world are unaware that they have DM, and that its complications are increasing, which presents new research challenges and opportunities. In this paper, we propose a preemptive diagnosis method for DM to assist or complement early recognition of the disease in countries with low densities of medical experts. Diabetes data were collected from the Zewditu Memorial Hospital (ZMHDD) in Addis Ababa, Ethiopia. The Light Gradient Boosting Machine (LightGBM) is one of the most recent successful implementations of the gradient boosting framework using tree-based learning algorithms. It has low computational complexity and is therefore suited to regions with limited computing capacity, such as Ethiopia. Thus, in this study, we apply LightGBM to develop an accurate model for the diagnosis of diabetes. The experimental results show that the prepared diabetes dataset is informative for predicting the condition of diabetes mellitus. With accuracy, AUC, sensitivity, and specificity of 98.1%, 98.1%, 99.9%, and 96.3%, respectively, the LightGBM model outperformed KNN, SVM, NB, Bagging, RF, and XGBoost on the ZMHDD dataset.
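The accuracy, sensitivity and specificity figures quoted above are the usual confusion-matrix ratios. A small self-contained sketch (the toy labels are illustrative, not the ZMHDD data):

```python
def diagnostic_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on positives) and specificity (recall on negatives)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),  # fraction of true diabetics flagged
        "specificity": tn / (tn + fp),  # fraction of healthy cases cleared
    }

y_true = [1, 1, 1, 0, 0, 0, 0, 0]  # hypothetical ground-truth labels
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]  # hypothetical classifier output
print(diagnostic_metrics(y_true, y_pred))
```

For a screening model, the 99.9% sensitivity reported above matters most: it is the probability that an actual diabetic case is not missed.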
Affiliation(s)
- Derara Duba Rufo
- College of Engineering and Technology, Dilla University, Dilla 419, Ethiopia;
- Taye Girma Debelee
- College of Electrical and Mechanical Engineering, Addis Ababa Science and Technology University, Addis Ababa 120611, Ethiopia
- Ethiopian Artificial Intelligence Center, Addis Ababa 40782, Ethiopia
- Achim Ibenthal
- Faculty of Engineering and Health, HAWK University of Applied Sciences and Arts, 37085 Göttingen, Germany
|
17
|
Biratu ES, Schwenker F, Ayano YM, Debelee TG. A Survey of Brain Tumor Segmentation and Classification Algorithms. J Imaging 2021; 7:179. [PMID: 34564105 PMCID: PMC8465364 DOI: 10.3390/jimaging7090179]
Abstract
A brain magnetic resonance imaging (MRI) scan of a single individual consists of several slices across the 3D anatomical view, so manual segmentation of brain tumors from magnetic resonance (MR) images is a challenging and time-consuming task. In addition, automated brain tumor classification from an MRI scan is non-invasive, avoiding biopsy and making the diagnosis process safer. Since the late nineties and the beginning of this millennium, the research community has made a tremendous effort to come up with automatic brain tumor segmentation and classification methods. As a result, there is ample literature in the area on segmentation using region growing, traditional machine learning and deep learning methods. Similarly, much work has been done on classifying brain tumors into their respective histological types, with impressive performance results. Considering state-of-the-art methods and their performance, the purpose of this paper is to provide a comprehensive survey of three recently proposed major families of brain tumor segmentation and classification techniques: region growing, shallow machine learning and deep learning. The works included in this survey also cover technical aspects such as the strengths and weaknesses of different approaches, pre- and post-processing techniques, feature extraction, datasets, and models' performance evaluation metrics.
Affiliation(s)
- Erena Siyoum Biratu
- College of Electrical and Mechanical Engineering, Addis Ababa Science and Technology University, Addis Ababa 120611, Ethiopia; (E.S.B.); (T.G.D.)
- Friedhelm Schwenker
- Institute of Neural Information Processing, Ulm University, 89081 Ulm, Germany
- Taye Girma Debelee
- College of Electrical and Mechanical Engineering, Addis Ababa Science and Technology University, Addis Ababa 120611, Ethiopia
- Ethiopian Artificial Intelligence Center, Addis Ababa 40782, Ethiopia
|
18
|
Dulf EH, Bledea M, Mocan T, Mocan L. Automatic Detection of Colorectal Polyps Using Transfer Learning. Sensors (Basel) 2021; 21:5704. [PMID: 34502594 PMCID: PMC8433882 DOI: 10.3390/s21175704]
Abstract
Colorectal cancer is the second leading cause of cancer death and ranks third worldwide among diagnosed malignant pathologies (1.36 million new cases annually). An increasing diversity of treatment options, together with a growing population, requires novel diagnostic tools. Current diagnosis relies on critical human judgment, but the decision process loses accuracy as the number of modulatory factors involved increases. The proposed computer-aided diagnosis system analyses each colonoscopy and provides predictions that help the clinician make the right decisions. Artificial intelligence is included in the system through both offline and online image processing tools. Aiming to improve the diagnostic process for colon cancer patients, an application was built to allow the easiest and most intuitive interaction between medical staff and the proposed diagnosis system. The developed tool uses two networks. The first, a convolutional neural network, classifies eight classes of tissue with a sensitivity of 98.13% and an F1 score of 98.14%; the second, based on semantic segmentation, identifies malignant areas with a Jaccard index of 75.18%. The results could have a direct impact on personalised medicine, combining clinical knowledge with the computing power of intelligent algorithms.
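The Jaccard index (intersection over union) used to score the segmentation network above can be computed from two binary masks; a minimal numpy sketch with illustrative arrays:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index (IoU) between two binary segmentation masks: |A∩B| / |A∪B|."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

pred = np.array([[1, 1, 0], [1, 0, 0]])   # hypothetical predicted malignant area
truth = np.array([[1, 1, 1], [0, 0, 0]])  # hypothetical ground truth
print(jaccard(pred, truth))  # 2 / 4 = 0.5
```

Jaccard is stricter than Dice for the same pair of masks (IoU = DSC / (2 - DSC)), which is worth remembering when comparing the 75.18% figure here with Dice scores elsewhere in this list.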
Affiliation(s)
- Eva-H. Dulf
- Department of Automation, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, Memorandumului Str. 28, 400014 Cluj-Napoca, Romania;
- Marius Bledea
- Department of Automation, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, Memorandumului Str. 28, 400014 Cluj-Napoca, Romania
- Teodora Mocan
- Department of Physiology, Iuliu Hatieganu University of Medicine and Pharmacy, 400349 Cluj-Napoca, Romania
- Nanomedicine Department, Regional Institute of Gastroenterology and Hepatology, 400000 Cluj-Napoca, Romania
- Lucian Mocan
- Department of Surgery, 3rd Surgery Clinic, Iuliu Hatieganu University of Medicine and Pharmacy, 400349 Cluj-Napoca, Romania
|
19
|
Ali M, Ali R. Multi-Input Dual-Stream Capsule Network for Improved Lung and Colon Cancer Classification. Diagnostics (Basel) 2021; 11:1485. [PMID: 34441419 PMCID: PMC8393706 DOI: 10.3390/diagnostics11081485]
Abstract
Lung and colon cancers are two of the most common causes of death and morbidity in humans, and histopathological diagnosis is one of the most important elements of appropriate treatment. As a result, the main goal of this study is to use a multi-input capsule network and digital histopathology images to build an enhanced computerized diagnosis system for detecting squamous cell carcinomas and adenocarcinomas of the lungs, as well as adenocarcinomas of the colon. The proposed multi-input capsule network uses two convolutional layer blocks: the CLB (Convolutional Layers Block) employs traditional convolutional layers, whereas the SCLB (Separable Convolutional Layers Block) employs separable convolutional layers. The CLB takes unprocessed histopathology images as input, whereas the SCLB takes uniquely pre-processed histopathology images. Because histopathology slide images are typically dominated by red and blue tones, the pre-processing method uses color balancing, gamma correction, image sharpening, and multi-scale fusion as its major steps; all three channels (red, green, and blue) are adequately compensated during the color-balancing phase. The dual-input technique helps the model learn features more effectively. On the benchmark LC25000 dataset, the empirical analysis indicates a significant improvement in classification results: the proposed model provides cutting-edge performance in all classes, with 99.58% overall accuracy for lung and colon abnormalities based on histopathological images.
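The abstract does not specify the color-balancing algorithm; a common choice that matches the description (each channel compensated toward a neutral balance) is gray-world white balancing, sketched here purely as an illustrative assumption:

```python
import numpy as np

def gray_world_balance(img: np.ndarray) -> np.ndarray:
    """Gray-world color balance: scale each RGB channel so its mean
    matches the global mean intensity of the image."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / channel_means  # per-channel correction factor
    return np.clip(img * gain, 0, 255).astype(np.uint8)

# A tiny 2x2 image with the reddish-blue cast typical of histopathology slides.
img = np.array([[[200, 80, 180], [190, 90, 170]],
                [[210, 70, 190], [180, 100, 160]]], dtype=np.uint8)
balanced = gray_world_balance(img)
print(balanced.reshape(-1, 3).mean(axis=0).round(1))
```

After balancing, the three channel means are pulled close together, removing the stain-induced cast before gamma correction and sharpening.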
Affiliation(s)
- Mumtaz Ali
- School of Computer Science, Huazhong University of Science and Technology, Wuhan 430074, China
- Department of Computer Systems Engineering, Sukkur IBA University, Sukkur 65200, Pakistan
- Riaz Ali
- Department of Computer Science, Sukkur IBA University, Sukkur 65200, Pakistan
|
20
|
Zhang Y, Gorriz JM, Dong Z. Deep Learning in Medical Image Analysis. J Imaging 2021; 7:74. [PMID: 34460524 PMCID: PMC8321330 DOI: 10.3390/jimaging7040074]
Abstract
Over recent years, deep learning (DL) has established itself as a powerful tool across a broad spectrum of domains in imaging, e.g. [...].
Affiliation(s)
- Yudong Zhang
- School of Informatics, University of Leicester, Leicester LE1 7RH, UK
- Juan Manuel Gorriz
- Department of Signal Theory, Telematics and Communications, University of Granada, 18071 Granada, Spain
- Zhengchao Dong
- Molecular Imaging and Neuropathology Division, Columbia University and New York State Psychiatric Institute, New York, NY 10032, USA
|
21
|
Alzubaidi L, Zhang J, Humaidi AJ, Al-Dujaili A, Duan Y, Al-Shamma O, Santamaría J, Fadhel MA, Al-Amidie M, Farhan L. Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. Journal of Big Data 2021; 8:53. [PMID: 33816053 PMCID: PMC8010506 DOI: 10.1186/s40537-021-00444-8]
Abstract
In the last few years, the deep learning (DL) computing paradigm has been deemed the gold standard in the machine learning (ML) community. It has gradually become the most widely used computational approach in ML, achieving outstanding results on several complex cognitive tasks and matching or even surpassing human performance. One of the benefits of DL is its ability to learn from massive amounts of data. The field has grown fast in recent years and has been used extensively to address a wide range of traditional applications; more importantly, DL has outperformed well-known ML techniques in many domains, e.g., cybersecurity, natural language processing, bioinformatics, robotics and control, and medical information processing, among many others. Although several works have reviewed the state of the art in DL, each tackles only one aspect of it, leaving readers without an overall picture. In this contribution we therefore take a more holistic approach, aiming to provide a more suitable starting point from which to develop a full understanding of DL. Specifically, this review surveys the most important aspects of DL, including recent enhancements to the field. It outlines the importance of DL and presents the types of DL techniques and networks. It then focuses on convolutional neural networks (CNNs), the most utilized type of DL network, and describes the development of CNN architectures together with their main features, starting with AlexNet and closing with the High-Resolution network (HRNet). It further presents the challenges and suggested solutions to help researchers understand existing research gaps, followed by a list of the major DL applications. Computational tools, including FPGAs, GPUs, and CPUs, are summarized along with a description of their influence on DL. The paper ends with the evolution matrix, benchmark datasets, and a summary and conclusion.
Affiliation(s)
- Laith Alzubaidi
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000 Australia
- AlNidhal Campus, University of Information Technology & Communications, Baghdad, 10001 Iraq
- Jinglan Zhang
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000, Australia
- Amjad J. Humaidi
- Control and Systems Engineering Department, University of Technology, Baghdad, 10001, Iraq
- Ayad Al-Dujaili
- Electrical Engineering Technical College, Middle Technical University, Baghdad, 10001, Iraq
- Ye Duan
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
- Omran Al-Shamma
- AlNidhal Campus, University of Information Technology & Communications, Baghdad, 10001, Iraq
- J. Santamaría
- Department of Computer Science, University of Jaén, 23071 Jaén, Spain
- Mohammed A. Fadhel
- College of Computer Science and Information Technology, University of Sumer, Thi Qar, 64005, Iraq
- Muthana Al-Amidie
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211, USA
- Laith Farhan
- School of Engineering, Manchester Metropolitan University, Manchester, M1 5GD, UK
|
22
|
Studies on the Geometrical Design of Spider Webs for Reinforced Composite Structures. Journal of Composites Science 2021. [DOI: 10.3390/jcs5020057]
Abstract
Spider silk is an astonishingly tough biomaterial that consists almost entirely of large proteins. Studying the secrets behind the high strength of spider webs is very challenging due to their miniature size. Despite this complexity, researchers have long been inspired to mimic nature when developing new products or enhancing existing technologies. Accordingly, the spider web can be taken as a model of optimal fiber orientation for composite materials in critical structural applications. In this study an attempt is made to analyze the geometrical characteristics of the web's building units, its spirals and radials. As a measurement tool, we used a MATLAB algorithm developed to measure the node-to-node spacing of the rings and the angle of orientation of the radials. Spider web image samples were collected randomly from an ecological niche using black-background sample collection tools. The study shows a radial angle of orientation of 12.7 degrees with a 5 mm spacing for the spirals' mesh. The geometrical values extracted from the spider web show moderately skewed statistical data. The study sheds light on using the spider web to develop an optimized fiber-orientation reinforced composite structure for constructing, for instance, shell structures, pressure vessels and fuselage cones for the aviation industry.
|
23
|
Biratu ES, Schwenker F, Debelee TG, Kebede SR, Negera WG, Molla HT. Enhanced Region Growing for Brain Tumor MR Image Segmentation. J Imaging 2021; 7:22. [PMID: 34460621 PMCID: PMC8321280 DOI: 10.3390/jimaging7020022]
Abstract
A brain tumor is one of the foremost reasons for the rise in mortality among children and adults. It is a mass of tissue that propagates outside the control of the normal forces that regulate growth inside the brain, appearing when one type of cell departs from its normal characteristics and grows and multiplies abnormally. This unusual growth of cells within the brain or skull, which can be cancerous or non-cancerous, has caused deaths among adults in developed countries and children in developing countries such as Ethiopia. Studies have shown that region growing algorithms initialize the seed point either manually or semi-manually, which affects the segmentation result. In this paper, we propose an enhanced region-growing algorithm with automatic seed point initialization. The performance of the proposed approach was compared with state-of-the-art deep learning algorithms on the common BRATS2015 dataset. In the proposed approach, we first apply a thresholding technique to strip the skull from each input brain image. The skull-stripped brain image is then divided into 8 blocks, the mean intensity of each block is computed, and the five blocks with the highest mean intensities are selected. Each of these five mean intensities is used separately as a seed point for the region growing algorithm, yielding five different regions of interest (ROIs) for each skull-stripped input brain image. The five ROIs generated this way were evaluated against the ground truth (GT) using the dice similarity score (DSS), intersection over union (IoU), and accuracy (Acc), and the best region of interest was selected as the final ROI. Finally, the final ROI was compared with different state-of-the-art deep learning and region-based segmentation algorithms in terms of DSS.
Our proposed approach was validated in three experimental setups. In the first, 15 randomly selected brain images were used for testing, achieving a DSS of 0.89. In the second and third setups, the approach scored DSS values of 0.90 and 0.80 on 12 randomly selected and 800 brain images, respectively. The average DSS across the three setups was 0.86.
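The seed-selection step above (divide the skull-stripped image into 8 blocks, keep the 5 with the highest mean intensity) can be sketched as follows. The 2x4 block grid is an assumption, since the abstract does not state the layout:

```python
import numpy as np

def top_seed_blocks(image: np.ndarray, k: int = 5, grid=(2, 4)):
    """Split `image` into grid[0]*grid[1] blocks and return the (row, col)
    indices of the k blocks with the highest mean intensity."""
    rows = np.array_split(image, grid[0], axis=0)
    blocks = [np.array_split(r, grid[1], axis=1) for r in rows]
    means = np.array([[b.mean() for b in row] for row in blocks])
    # Sort flattened block means descending and keep the top k.
    flat = np.argsort(means, axis=None)[::-1][:k]
    return [tuple(np.unravel_index(i, means.shape)) for i in flat]

# Hypothetical skull-stripped intensity image stand-in.
rng = np.random.default_rng(0)
img = rng.integers(0, 255, size=(64, 64))
print(top_seed_blocks(img))
```

Each selected block would then supply one seed for a separate region-growing pass, giving the five candidate ROIs described above.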
Affiliation(s)
- Erena Siyoum Biratu
- College of Electrical and Mechanical Engineering, Addis Ababa Science and Technology University, Addis Ababa 120611, Ethiopia;
- Friedhelm Schwenker
- Institute of Neural Information Processing, Ulm University, 89081 Ulm, Germany
- Taye Girma Debelee
- College of Electrical and Mechanical Engineering, Addis Ababa Science and Technology University, Addis Ababa 120611, Ethiopia
- Ethiopian Artificial Intelligence Center, Addis Ababa 40782, Ethiopia
- Samuel Rahimeto Kebede
- Ethiopian Artificial Intelligence Center, Addis Ababa 40782, Ethiopia
- Department of Electrical and Computer Engineering, Debre Berhan University, Debre Berhan 445, Ethiopia
- Hasset Tamirat Molla
- College of Natural and Computational Science, Addis Ababa University, Addis Ababa 1176, Ethiopia
|