1. Feng Z, Zhou R, Xia W, Wang S, Liu Y, Huang Z, Gan H. PDFF-CNN: An attention-guided dynamic multi-orientation feature fusion method for gestational age prediction on imbalanced fetal brain MRI dataset. Med Phys 2024;51:3480-3494. [PMID: 38043088] [DOI: 10.1002/mp.16875]
Abstract
BACKGROUND Fetal brain magnetic resonance imaging (MRI)-based gestational age prediction has been widely used to characterize normal fetal brain development and diagnose congenital brain malformations. PURPOSE The uncertainty of fetal position and external interference leads to variable localization and orientation of the fetal brain. In addition, pregnant women typically receive MRI scans during the fetal anomaly scanning weeks, leading to an imbalanced distribution of fetal brain MRI data. These problems pose great challenges for deep learning-based gestational age prediction from fetal brain MRI. METHODS In this study, a pyramid squeeze attention (PSA)-guided dynamic feature fusion CNN (PDFF-CNN) is proposed to robustly predict gestational age from fetal brain MRI images on an imbalanced dataset. PDFF-CNN contains four components: a transformation module, a feature extraction module, a dynamic feature fusion module, and a balanced mean square error (MSE) loss. The transformation and feature extraction modules use the PSA to learn multiscale and multi-orientation feature representations in a parallel weight-sharing Siamese network. The dynamic feature fusion module automatically learns the weights of the feature vectors generated by the feature extraction module to dynamically fuse multiscale and multi-orientation brain sulci and gyri features. To account for the imbalanced dataset, the balanced MSE loss is used to mitigate the negative impact of the imbalanced data distribution on prediction performance. RESULTS Evaluated on an imbalanced fetal brain MRI dataset of 1327 routine clinical T2-weighted MRI images from 157 subjects, PDFF-CNN achieved promising gestational age prediction performance, with an overall mean absolute error of 0.848 weeks and an $R^2$ of 0.904. Furthermore, the attention activation maps of PDFF-CNN were derived, revealing regional features that contributed to gestational age prediction at each gestational stage. CONCLUSIONS These results suggest that the proposed PDFF-CNN might have broad clinical applicability in guiding treatment interventions and delivery planning, and has the potential to aid prenatal diagnosis.
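The balanced MSE idea above can be illustrated with a minimal inverse-frequency weighted loss. This is a sketch under assumptions, not the paper's formulation: the exact balanced MSE used by PDFF-CNN is not specified in the abstract, and the function name, binning scheme, and weights here are hypothetical.

```python
import numpy as np

def weighted_mse(y_true, y_pred, n_bins=5):
    """Inverse-frequency weighted MSE: samples from rare gestational-age
    bins contribute more, so the loss is not dominated by the
    over-represented mid-gestation scans."""
    bins = np.linspace(y_true.min(), y_true.max() + 1e-9, n_bins + 1)
    idx = np.digitize(y_true, bins[1:-1])           # bin index per sample
    counts = np.bincount(idx, minlength=n_bins).astype(float)
    w = 1.0 / np.maximum(counts[idx], 1.0)          # inverse bin frequency
    w /= w.sum()                                    # weights sum to 1
    return float(np.sum(w * (y_true - y_pred) ** 2))
```

With such a weighting, a one-week error on a scan from an under-represented gestational week contributes more to the loss than the same error in the over-represented mid-gestation range.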
Affiliation(s)
- Ziteng Feng: School of Computer Science, Hubei University of Technology, Wuhan, China
- Ran Zhou: School of Computer Science, Hubei University of Technology, Wuhan, China
- Wei Xia: Wuhan Children's Hospital (Wuhan Maternal and Child Healthcare Hospital), Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Siru Wang: School of Computer Science, Hubei University of Technology, Wuhan, China
- Yang Liu: School of Computer Science, Hubei University of Technology, Wuhan, China
- Zhongwei Huang: School of Computer Science, Hubei University of Technology, Wuhan, China
- Haitao Gan: School of Computer Science, Hubei University of Technology, Wuhan, China
2. Sharkas M, Attallah O. Color-CADx: a deep learning approach for colorectal cancer classification through triple convolutional neural networks and discrete cosine transform. Sci Rep 2024;14:6914. [PMID: 38519513] [PMCID: PMC10959971] [DOI: 10.1038/s41598-024-56820-w]
Abstract
Colorectal cancer (CRC) exhibits a significant death rate that consistently impacts human lives worldwide. Histopathological examination is the standard method for CRC diagnosis; however, it is complicated, time-consuming, and subjective. Computer-aided diagnostic (CAD) systems using digital pathology can help pathologists diagnose CRC faster and more accurately than manual histopathology examination. Deep learning algorithms, especially convolutional neural networks (CNNs), are advocated for the diagnosis of CRC. Nevertheless, most previous CAD systems obtained features from a single CNN, and these features are of high dimension; they also relied on spatial information alone for classification. In this paper, a CAD system called "Color-CADx" is proposed for CRC recognition. Different CNNs, namely ResNet50, DenseNet201, and AlexNet, are used for end-to-end classification at different training-testing ratios. Moreover, features are extracted from these CNNs and reduced using the discrete cosine transform (DCT). DCT also provides a spectral representation, which is then used to select a reduced set of deep features. Furthermore, the DCT coefficients obtained in the previous step are concatenated, and the analysis of variance (ANOVA) feature selection approach is applied to choose significant features. Finally, machine learning classifiers are employed for CRC classification. Two publicly available datasets were investigated: the NCT-CRC-HE-100K dataset and the Kather_texture_2016_image_tiles dataset. The highest achieved accuracy reached 99.3% for the NCT-CRC-HE-100K dataset and 96.8% for the Kather_texture_2016_image_tiles dataset. DCT and ANOVA successfully lowered feature dimensionality, thus reducing complexity. Color-CADx has demonstrated efficacy in terms of accuracy, surpassing the most recent advancements.
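The DCT-reduction and ANOVA-selection steps can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the helper names, the choice of a type-II DCT along the feature axis, and the per-feature one-way ANOVA ranking are all assumptions.

```python
import numpy as np
from scipy.fft import dct
from scipy.stats import f_oneway

def dct_reduce(features, keep=8):
    """Project each deep-feature vector onto its first `keep` DCT
    coefficients; low frequencies carry most of the energy, so
    truncation acts as dimensionality reduction."""
    return dct(features, norm='ortho', axis=1)[:, :keep]

def anova_select(features, labels, k=4):
    """Rank coefficients by one-way ANOVA F-score across classes and
    keep the k most discriminative ones."""
    classes = np.unique(labels)
    scores = np.array([
        f_oneway(*[features[labels == c, j] for c in classes]).statistic
        for j in range(features.shape[1])
    ])
    top = np.argsort(scores)[::-1][:k]
    return features[:, top], top
```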
Affiliation(s)
- Maha Sharkas: Electronics and Communications Engineering Department, College of Engineering and Technology, Arab Academy for Science, Technology, and Maritime Transport, Alexandria, Egypt
- Omneya Attallah: Electronics and Communications Engineering Department, College of Engineering and Technology, Arab Academy for Science, Technology, and Maritime Transport, Alexandria, Egypt; Wearables, Biosensing, and Biosignal Processing Laboratory, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt
3. Khosravi P, Mohammadi S, Zahiri F, Khodarahmi M, Zahiri J. AI-Enhanced Detection of Clinically Relevant Structural and Functional Anomalies in MRI: Traversing the Landscape of Conventional to Explainable Approaches. J Magn Reson Imaging 2024. [PMID: 38243677] [DOI: 10.1002/jmri.29247]
Abstract
Anomaly detection in medical imaging, particularly within the realm of magnetic resonance imaging (MRI), stands as a vital area of research with far-reaching implications across various medical fields. This review meticulously examines the integration of artificial intelligence (AI) in anomaly detection for MR images, spotlighting its transformative impact on medical diagnostics. We delve into the forefront of AI applications in MRI, exploring advanced machine learning (ML) and deep learning (DL) methodologies that are pivotal in enhancing the precision of diagnostic processes. The review provides a detailed analysis of preprocessing, feature extraction, classification, and segmentation techniques, alongside a comprehensive evaluation of commonly used metrics. Further, this paper explores the latest developments in ensemble methods and explainable AI, offering insights into future directions and potential breakthroughs. This review synthesizes current insights, offering a valuable guide for researchers, clinicians, and medical imaging experts. It highlights AI's crucial role in improving the precision and speed of detecting key structural and functional irregularities in MRI. Our exploration of innovative techniques and trends furthers MRI technology development, aiming to refine diagnostics, tailor treatments, and elevate patient care outcomes. LEVEL OF EVIDENCE: 5. TECHNICAL EFFICACY: Stage 1.
Affiliation(s)
- Pegah Khosravi: Department of Biological Sciences, New York City College of Technology, CUNY, New York City, New York, USA; The CUNY Graduate Center, City University of New York, New York City, New York, USA
- Saber Mohammadi: Department of Biological Sciences, New York City College of Technology, CUNY, New York City, New York, USA; Department of Biophysics, Tarbiat Modares University, Tehran, Iran
- Fatemeh Zahiri: Department of Cell and Molecular Sciences, Kharazmi University, Tehran, Iran
- Javad Zahiri: Department of Neuroscience, University of California San Diego, San Diego, California, USA
4. Vahedifard F, Ai HA, Supanich MP, Marathu KK, Liu X, Kocak M, Ansari SM, Akyuz M, Adepoju JO, Adler S, Byrd S. Automatic Ventriculomegaly Detection in Fetal Brain MRI: A Step-by-Step Deep Learning Model for Novel 2D-3D Linear Measurements. Diagnostics (Basel) 2023;13:2355. [PMID: 37510099] [PMCID: PMC10378043] [DOI: 10.3390/diagnostics13142355]
Abstract
In this study, we developed an automated workflow using a deep learning (DL) model to perform linear measurement of the lateral ventricle in fetal brain MRI; cases are subsequently classified as normal or ventriculomegaly, defined as a diameter wider than 10 mm at the level of the thalamus and choroid plexus. To accomplish this, we first trained a UNet-based deep learning model to segment the fetal brain into seven tissue categories using a public dataset (FeTA 2022) of fetal T2-weighted images. An automatic workflow was then developed to perform lateral ventricle measurement at the level of the thalamus and choroid plexus. The test dataset included 22 cases of normal and abnormal T2-weighted fetal brain MRIs. Measurements performed by our AI model were compared with manual measurements by a general radiologist and a neuroradiologist. The AI model correctly classified 95% of fetal brain MRI cases as normal or ventriculomegaly and could measure the lateral ventricle diameter in 95% of cases with less than 1.7 mm of error. The average difference between measurements was 0.90 mm for AI vs. the general radiologist and 0.82 mm for AI vs. the neuroradiologist, comparable to the 0.51 mm difference between the two radiologists. In addition, the AI model enabled the researchers to create 3D-reconstructed images, which represent real anatomy better than 2D images, and it could measure both the right and left ventricles in a single cut, whereas manual measurement requires two. The measurement difference between the general radiologist and the algorithm (p = 0.9827), and between the neuroradiologist and the algorithm (p = 0.2378), was not statistically significant; in contrast, the difference between the general radiologist and the neuroradiologist was statistically significant (p = 0.0043). To the best of our knowledge, this is the first study to perform 2D linear measurement of ventriculomegaly with a 3D model based on an artificial intelligence approach. The paper presents a step-by-step approach for designing an AI model based on several radiological criteria. Overall, this study showed that AI can automatically measure the lateral ventricle in fetal brain MRIs and accurately classify cases as abnormal or normal.
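The 10 mm classification rule is simple to express once a ventricle segmentation exists. The sketch below is hypothetical, not the paper's measurement code: it takes the widest in-plane extent of a binary ventricle mask as a crude stand-in for the anatomically guided linear measurement.

```python
import numpy as np

def ventricle_diameter_mm(mask, spacing_mm):
    """Widest extent of a segmented lateral ventricle (binary mask)
    along the measurement axis, converted to millimetres."""
    cols = np.where(mask.any(axis=0))[0]
    if cols.size == 0:
        return 0.0
    return float((cols.max() - cols.min() + 1) * spacing_mm)

def is_ventriculomegaly(mask, spacing_mm, threshold_mm=10.0):
    """Ventriculomegaly: diameter wider than 10 mm at the level of the
    thalamus and choroid plexus."""
    return ventricle_diameter_mm(mask, spacing_mm) > threshold_mm
```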
Affiliation(s)
- Farzan Vahedifard: Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- H Asher Ai: Division for Diagnostic Medical Physics, Department of Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Mark P Supanich: Division for Diagnostic Medical Physics, Department of Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Kranthi K Marathu: Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Xuchu Liu: Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Mehmet Kocak: Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Shehbaz M Ansari: Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Melih Akyuz: Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Jubril O Adepoju: Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Seth Adler: Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Sharon Byrd: Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
5. Murmu A, Kumar P. A novel Gateaux derivatives with efficient DCNN-Resunet method for segmenting multi-class brain tumor. Med Biol Eng Comput 2023. [PMID: 37338739] [DOI: 10.1007/s11517-023-02824-z]
Abstract
In hospitals and pathology, observing the features and locations of brain tumors in magnetic resonance images (MRI) is a crucial task for assisting medical professionals in both treatment and diagnosis. The multi-class information about a brain tumor is often obtained from the patient's MRI dataset. However, this information may vary in shape and size for various brain tumors, making it difficult to detect their locations in the brain. To resolve these issues, a novel customized Deep Convolution Neural Network (DCNN)-based Residual-Unet (ResUnet) model with Transfer Learning (TL) is proposed for predicting the locations of brain tumors in an MRI dataset. The DCNN model is used to extract features from input images and select the Region Of Interest (ROI), with the TL technique used for faster training. Furthermore, a min-max normalization approach is used to enhance the color intensity values at ROI boundary edges in the brain tumor images. The boundary edges of the brain tumors are then detected using the Gateaux Derivatives (GD) method to identify multi-class brain tumors precisely. The proposed scheme has been validated on two datasets, namely the Brain Tumor and Figshare MRI datasets, for multi-class Brain Tumor Segmentation (BTS). The experimental results were analyzed using evaluation metrics, namely accuracy (99.78 and 99.03), Jaccard Coefficient (93.04 and 94.95), Dice Factor Coefficient (DFC) (92.37 and 91.94), Mean Absolute Error (MAE) (0.0019 and 0.0013), and Mean Squared Error (MSE) (0.0085 and 0.0012). The proposed system outperforms state-of-the-art segmentation models on the MRI brain tumor dataset.
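Two of the ingredients named above, min-max normalization and the Dice Factor Coefficient, are standard and can be sketched directly (the function names are illustrative, not from the paper):

```python
import numpy as np

def min_max_normalize(img):
    """Rescale intensities to [0, 1], as used to enhance contrast at
    ROI boundary edges before edge detection."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img, dtype=float)

def dice_coefficient(pred, target, eps=1e-7):
    """Dice Factor Coefficient: overlap between predicted and reference
    tumor masks; 1.0 means perfect agreement."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```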
Affiliation(s)
- Anita Murmu: Computer Science and Engineering, National Institute of Technology Patna, Ashok Rajpath, Patna, 800005, Bihar, India
- Piyush Kumar: Computer Science and Engineering, National Institute of Technology Patna, Ashok Rajpath, Patna, 800005, Bihar, India
6. Vahedifard F, Adepoju JO, Supanich M, Ai HA, Liu X, Kocak M, Marathu KK, Byrd SE. Review of deep learning and artificial intelligence models in fetal brain magnetic resonance imaging. World J Clin Cases 2023;11:3725-3735. [PMID: 37383127] [PMCID: PMC10294149] [DOI: 10.12998/wjcc.v11.i16.3725]
Abstract
Central nervous system abnormalities in fetuses are relatively common, occurring in 0.1% to 0.2% of live births and in 3% to 6% of stillbirths, so early detection and categorization of fetal brain abnormalities are critical. Manually detecting and segmenting fetal brain magnetic resonance imaging (MRI) can be time-consuming and dependent on interpreter experience. Artificial intelligence (AI) algorithms and machine learning approaches have high potential for assisting in the early detection of these problems, improving the diagnostic process and follow-up procedures. The use of AI and machine learning techniques in fetal brain MRI is the subject of this narrative review. In anatomic fetal brain MRI processing, AI models have been investigated to predict specific landmarks and perform segmentation automatically. All gestational age weeks (17-38 wk) and various AI models (mainly convolutional neural networks and U-Net) have been used, with some models achieving accuracies of 95% or more. AI can help preprocess, post-process, and reconstruct fetal images. It can also be used for gestational age prediction (with one-week accuracy), fetal brain extraction, fetal brain segmentation, and placenta detection. Some fetal brain linear measurements, such as cerebral and bone biparietal diameter, have been suggested. Classification of brain pathology has been studied using diagonal quadratic discriminant analysis, K-nearest neighbor, random forest, naive Bayes, and radial basis function neural network classifiers. Deep learning methods will become more powerful as more large-scale, labeled datasets become available. Shared fetal brain MRI datasets are crucial because few fetal brain images are available. Physicians, particularly neuroradiologists, general radiologists, and perinatologists, should also be aware of AI's role in fetal brain MRI.
Affiliation(s)
- Farzan Vahedifard: Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
- Jubril O Adepoju: Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
- Mark Supanich: Division for Diagnostic Medical Physics, Department of Radiology and Nuclear Medicine, Rush University Medical Center, Chicago, IL 60612, United States
- Hua Asher Ai: Division for Diagnostic Medical Physics, Department of Radiology and Nuclear Medicine, Rush University Medical Center, Chicago, IL 60612, United States
- Xuchu Liu: Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
- Mehmet Kocak: Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
- Kranthi K Marathu: Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
- Sharon E Byrd: Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
7. Hassan AM, Yahya A, Aboshosha A. A framework for classifying breast cancer based on deep features integration and selection. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08341-2]
Abstract
Deep convolutional neural networks (DCNNs) are among the most advanced techniques for classifying images in a range of applications. Breast cancer is one of the most prevalent cancers that cause death in women; for survival rates to increase, early detection and treatment are essential. Deep learning (DL) can help radiologists diagnose and classify breast cancer lesions. This paper proposes a computer-aided system based on DL techniques for automatically classifying breast cancer tumors in histopathological images. Nine DCNN architectures are used in this work, and four schemes are performed in the proposed framework to find the best approach. The first scheme consists of pre-trained DCNNs based on the transfer learning concept. The second scheme performs feature extraction from the DCNN architectures and uses a support vector machine (SVM) classifier for evaluation. The third performs feature integration to show how the integrated deep features may enhance the SVM classifiers' accuracy. Finally, in the fourth scheme, the Chi-square (χ2) feature selection method is applied to reduce the large feature size from the feature integration step. The proposed system achieves promising performance for breast cancer classification, with an accuracy of 99.24%, showing that the proposed tool is suitable to assist radiologists in diagnosing breast cancer tumors.
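The Chi-square feature selection step can be sketched for a non-negative deep-feature matrix. This follows the common convention of comparing per-class feature sums with the sums expected under class independence; the helper names and this exact formulation are assumptions, not the authors' code.

```python
import numpy as np

def chi2_scores(X, y):
    """Chi-square score per feature for a non-negative feature matrix X:
    compares per-class sums of each feature with the sums expected if
    the feature were independent of the class label."""
    classes = np.unique(y)
    class_prob = np.array([(y == c).mean() for c in classes])
    observed = np.array([X[y == c].sum(axis=0) for c in classes])
    expected = class_prob[:, None] * X.sum(axis=0)[None, :]
    return ((observed - expected) ** 2 / expected).sum(axis=0)

def select_k_best(X, y, k):
    """Keep the k features with the largest chi-square scores."""
    top = np.argsort(chi2_scores(X, y))[::-1][:k]
    return X[:, top], top
```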
8. Lalitha R, Krishna Prasad P, Rama Reddy T, Kavitha K, Srinivas R, Ravi Kiran B. Efficient adaptive enhanced AdaBoost based detection of spinal abnormalities by machine learning approaches. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104367]
9. Auto-MyIn: Automatic diagnosis of myocardial infarction via multiple GLCMs, CNNs, and SVMs. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104273]
10. Fet-Net Algorithm for Automatic Detection of Fetal Orientation in Fetal MRI. Bioengineering (Basel) 2023;10:140. [PMID: 36829634] [PMCID: PMC9952178] [DOI: 10.3390/bioengineering10020140]
Abstract
Identifying fetal orientation is essential for determining the mode of delivery and for sequence planning in fetal magnetic resonance imaging (MRI). This manuscript describes a deep learning algorithm named Fet-Net, composed of convolutional neural networks (CNNs), which allows for the automatic detection of fetal orientation from a two-dimensional (2D) MRI slice. The architecture consists of four convolutional layers, which feed into a simple artificial neural network. Compared with eleven other prominent CNNs (different versions of ResNet, VGG, Xception, and Inception), Fet-Net has fewer architectural layers and parameters. From 144 3D MRI datasets indicative of vertex, breech, oblique and transverse fetal orientations, 6120 2D MRI slices were extracted to train, validate and test Fet-Net. Despite its simpler architecture, Fet-Net demonstrated an average accuracy and F1 score of 97.68% and a loss of 0.06828 on the 6120 2D MRI slices during a 5-fold cross-validation experiment. This architecture outperformed all eleven prominent architectures (p < 0.05). An ablation study proved each component's statistical significance and contribution to Fet-Net's performance. Fet-Net demonstrated robustness in classification accuracy even when noise was introduced to the images, outperforming eight of the 11 prominent architectures. Fet-Net's ability to automatically detect fetal orientation can profoundly decrease the time required for fetal MRI acquisition.
11. GabROP: Gabor Wavelets-Based CAD for Retinopathy of Prematurity Diagnosis via Convolutional Neural Networks. Diagnostics (Basel) 2023;13:171. [PMID: 36672981] [PMCID: PMC9857608] [DOI: 10.3390/diagnostics13020171]
Abstract
One of the most serious and dangerous ocular problems in premature infants is retinopathy of prematurity (ROP), a proliferative vascular disease. Ophthalmologists can use automatic computer-assisted diagnostic (CAD) tools to help them make a safe, accurate, and low-cost diagnosis of ROP. All previous CAD tools for ROP diagnosis use the original fundus images; unfortunately, learning a discriminative representation from ROP-related fundus images is difficult. Textural analysis techniques, such as Gabor wavelets (GW), can reveal significant texture information that helps artificial intelligence (AI)-based models improve diagnostic accuracy. In this paper, an effective and automated CAD tool, GabROP, based on GW and multiple deep learning (DL) models is proposed. Initially, GabROP analyzes fundus images using GW and generates several sets of GW images. Next, these sets of images are used to train three convolutional neural network (CNN) models independently; the original fundus images are also used to train these networks. Using the discrete wavelet transform (DWT), texture features retrieved from every CNN trained with the various sets of GW images are combined to create a textural-spectral-temporal representation. Afterward, for each CNN, these features are concatenated with spatial deep features obtained from the original fundus images. Finally, the concatenated features of all three CNNs are fused using the discrete cosine transform (DCT) to reduce the feature dimensionality resulting from the fusion process. The results show that GabROP is accurate and efficient, and its effectiveness is compared with recently developed ROP diagnostic techniques. Due to GabROP's superior performance compared to competing tools, ophthalmologists may be able to identify ROP more reliably and precisely, which could reduce diagnostic effort and examination time.
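A Gabor wavelet filter, the building block of the GW image sets described above, can be constructed directly. This is a standard real-valued Gabor kernel; the parameter defaults are illustrative, not the paper's settings.

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5, psi=0.0):
    """Real-valued Gabor kernel: a sinusoidal carrier under a Gaussian
    envelope, oriented at angle theta. Convolving a fundus image with a
    bank of such kernels at several orientations and wavelengths yields
    texture-emphasizing GW images."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    x_t = x * np.cos(theta) + y * np.sin(theta)     # rotated coordinates
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + gamma**2 * y_t**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lambd + psi)
    return envelope * carrier
```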
12. Attallah O, Aslan MF, Sabanci K. A Framework for Lung and Colon Cancer Diagnosis via Lightweight Deep Learning Models and Transformation Methods. Diagnostics (Basel) 2022;12:2926. [PMID: 36552933] [PMCID: PMC9776637] [DOI: 10.3390/diagnostics12122926]
Abstract
Among the leading causes of mortality and morbidity in people are lung and colon cancers. They may develop concurrently in organs and negatively impact human life. If cancer is not diagnosed in its early stages, there is a great likelihood that it will spread to the two organs. The histopathological detection of such malignancies is one of the most crucial components of effective treatment. Although the process is lengthy and complex, deep learning (DL) techniques have made it feasible to complete it more quickly and accurately, enabling many more patients to be studied in less time and at lower cost. Earlier studies relied on DL models that require great computational ability and resources, and most depended on individual DL models to extract high-dimensional features or to perform diagnoses. In this study, however, a framework based on multiple lightweight DL models is proposed for the early detection of lung and colon cancers. The framework utilizes several transformation methods that perform feature reduction and provide a better representation of the data. In this context, histopathology scans are fed into the ShuffleNet, MobileNet, and SqueezeNet models. The number of deep features acquired from these models is subsequently reduced using principal component analysis (PCA) and the fast Walsh-Hadamard transform (FWHT). Following that, the discrete wavelet transform (DWT) is used to fuse the FWHT-reduced features obtained from the three DL models. Additionally, the three DL models' PCA features are concatenated. Finally, the reduced features resulting from the PCA and FWHT-DWT reduction and fusion processes are fed to four distinct machine learning algorithms, reaching the highest accuracy of 99.6%. The results show that the proposed framework based on lightweight DL models can distinguish lung and colon cancer variants with fewer features and less computational complexity than existing methods. They also show that using transformation methods to reduce features can offer a superior interpretation of the data, thus improving the diagnostic procedure.
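Two of the transformation steps above, the fast Walsh-Hadamard transform and PCA, can be sketched in NumPy. The DWT fusion step is omitted here since it typically requires a wavelet library; the function names and the unnormalized FWHT convention are assumptions, not the authors' code.

```python
import numpy as np

def fwht(v):
    """Fast Walsh-Hadamard transform of a length-2^n vector via the
    in-place butterfly (unnormalized, O(n log n))."""
    v = np.asarray(v, dtype=float).copy()
    h = 1
    while h < len(v):
        for i in range(0, len(v), h * 2):
            a, b = v[i:i + h].copy(), v[i + h:i + 2 * h].copy()
            v[i:i + h], v[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return v

def pca_reduce(X, k):
    """Project rows of X onto the top-k principal components via SVD
    of the mean-centered data."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```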
Affiliation(s)
- Omneya Attallah: Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
- Muhammet Fatih Aslan: Department of Electrical and Electronics Engineering, Karamanoglu Mehmetbey University, 70100 Karaman, Turkey
- Kadir Sabanci: Department of Electrical and Electronics Engineering, Karamanoglu Mehmetbey University, 70100 Karaman, Turkey
13. Verma D, Agrawal S, Iwendi C, Sharma B, Bhatia S, Basheer S. A Novel Framework for Abnormal Risk Classification over Fetal Nuchal Translucency Using Adaptive Stochastic Gradient Descent Algorithm. Diagnostics (Basel) 2022;12:2643. [PMID: 36359487] [PMCID: PMC9689292] [DOI: 10.3390/diagnostics12112643]
Abstract
In most maternity hospitals, an ultrasound scan in the mid-trimester is now a standard element of antenatal care, and more fetal abnormalities are being detected as technology and expertise advance. Fetal anomalies are developmental abnormalities that arise during pregnancy; birth defects and congenital abnormalities are related terms. Fetal abnormalities have been commonly observed in industrialized countries over the previous few decades, with three out of every 1000 pregnant mothers experiencing a fetal anomaly. This research work proposes an Adaptive Stochastic Gradient Descent Algorithm to evaluate the risk of fetal abnormality. Findings suggest that the proposed method can successfully classify the anomalies linked with nuchal translucency thickening. Parameters such as accuracy, recall, precision, and F1-score are analyzed; the accuracy achieved through the suggested technique is 98.642%.
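The abstract does not specify the adaptive update rule, so the sketch below is a generic stand-in: a logistic-regression risk classifier trained with SGD and a simple 1/t-decaying learning rate, on hypothetical nuchal-translucency-style features.

```python
import numpy as np

def train_logistic_sgd(X, y, epochs=200, lr0=0.5):
    """Binary risk classifier trained with stochastic gradient descent.
    The step size decays as training proceeds, a simple form of
    adaptivity; the paper's exact rule is not given in the abstract."""
    rng = np.random.default_rng(0)
    w = np.zeros(X.shape[1]); b = 0.0; t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            lr = lr0 / (1 + 0.01 * t)                    # decaying step size
            p = 1.0 / (1.0 + np.exp(-(X[i] @ w + b)))    # sigmoid output
            g = p - y[i]                                 # log-loss gradient
            w -= lr * g * X[i]; b -= lr * g
    return w, b

def predict(X, w, b):
    """Classify as abnormal risk (1) when predicted probability >= 0.5."""
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) >= 0.5).astype(int)
```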
Affiliation(s)
- Deepti Verma
- Department of Computer Application, SAGE University, Indore 452020, India
- Shweta Agrawal
- Institute of Advance Computing, SAGE University, Indore 452020, India
- Celestine Iwendi
- School of Creative Technologies, University of Bolton, Bolton BL3 5AB, UK
- Bhisham Sharma
- Department of Computer Science & Engineering, School of Engineering and Technology, Chitkara University, Baddi 174103, India
- Surbhi Bhatia
- Department of Information Systems, College of Computer Science and Information Technology, King Faisal University, Al Ahsa 36362, Saudi Arabia
- Shakila Basheer
- Department of Information Systems, College of Computer and Information Science, Princess Nourah Bint Abdulrahman University, P.O. BOX 84428, Riyadh 11671, Saudi Arabia

14
Otjen JP, Moore MM, Romberg EK, Perez FA, Iyer RS. The current and future roles of artificial intelligence in pediatric radiology. Pediatr Radiol 2022; 52:2065-2073. [PMID: 34046708; DOI: 10.1007/s00247-021-05086-9]
Abstract
Artificial intelligence (AI) is a broad and complicated concept that has begun to affect many areas of medicine, perhaps none so much as radiology. While pediatric radiology has been less affected than other radiology subspecialties, there are some well-developed and some nascent applications within the field. This review focuses on the use of AI within pediatric radiology for image interpretation, with descriptive summaries of the literature to date. We highlight common features that enable successful application of the technology, along with some of the limitations that can inhibit the development of this field. We present some ideas for further research in this area and challenges that must be overcome, with an understanding that technology often advances in unpredictable ways.
Affiliation(s)
- Jeffrey P Otjen
- Department of Radiology, Seattle Children's Hospital, University of Washington School of Medicine, 4800 Sand Point Way NE, MA.7.220, Seattle, WA, 98105, USA
- Michael M Moore
- Department of Radiology, Penn State Children's Hospital, Penn State Health System, Hershey, PA, USA
- Erin K Romberg
- Department of Radiology, Seattle Children's Hospital, University of Washington School of Medicine, 4800 Sand Point Way NE, MA.7.220, Seattle, WA, 98105, USA
- Francisco A Perez
- Department of Radiology, Seattle Children's Hospital, University of Washington School of Medicine, 4800 Sand Point Way NE, MA.7.220, Seattle, WA, 98105, USA
- Ramesh S Iyer
- Department of Radiology, Seattle Children's Hospital, University of Washington School of Medicine, 4800 Sand Point Way NE, MA.7.220, Seattle, WA, 98105, USA

15
Lo J, Lim A, Wagner MW, Ertl-Wagner B, Sussman D. Fetal Organ Anomaly Classification Network for Identifying Organ Anomalies in Fetal MRI. Front Artif Intell 2022; 5:832485. [PMID: 35372832; PMCID: PMC8972161; DOI: 10.3389/frai.2022.832485]
Abstract
Rapid development in Magnetic Resonance Imaging (MRI) has played a key role in prenatal diagnosis over the last few years. Deep learning (DL) architectures can facilitate the process of anomaly detection and affected-organ classification, making diagnosis more accurate and observer-independent. We propose a novel DL image classification architecture, Fetal Organ Anomaly Classification Network (FOAC-Net), which uses squeeze-and-excitation (SE) and naïve inception (NI) modules to automatically identify anomalies in fetal organs. This architecture can identify normal fetal anatomy, as well as detect anomalies present in the (1) brain, (2) spinal cord, and (3) heart. In this retrospective study, we included fetal 3-dimensional (3D) SSFP sequences of 36 participants and classified the images on a slice-by-slice basis. FOAC-Net achieved classification accuracies of 85.06%, 85.27%, 89.29%, and 82.20% when predicting brain anomalies, no anomalies (normal), spinal cord anomalies, and heart anomalies, respectively. In a comparison study, FOAC-Net outperformed other state-of-the-art classification architectures in terms of class-average F1-score and accuracy. This work develops a novel classification architecture for identifying the affected organs in fetal MRI.
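The SE recalibration named above follows a standard squeeze-excite pattern: pool each channel to a scalar, pass the pooled vector through a small bottleneck MLP, and rescale each channel by a learned sigmoid gate. The sketch below is generic, with toy layer sizes and weights assumed for illustration, not FOAC-Net's actual parameters:

```python
import math

# Generic squeeze-and-excitation channel reweighting sketch.
# w1 (C -> C//r) and w2 (C//r -> C) are illustrative dense weights.

def squeeze_excite(feature_maps, w1, w2):
    """feature_maps: list of C channels, each a 2-D list (H x W)."""
    # Squeeze: global average pool each channel to one scalar.
    z = [sum(map(sum, ch)) / (len(ch) * len(ch[0])) for ch in feature_maps]
    # Excite: bottleneck MLP, ReLU then sigmoid gates.
    hidden = [max(0.0, sum(w * x for w, x in zip(row, z))) for row in w1]
    gates = [1 / (1 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
             for row in w2]
    # Rescale: multiply every channel by its gate in [0, 1].
    return [[[v * g for v in r] for r in ch]
            for ch, g in zip(feature_maps, gates)]

# Two 2x2 channels, reduction ratio 2 (one hidden unit).
fmaps = [[[1.0, 2.0], [3.0, 4.0]], [[0.0, 0.0], [0.0, 4.0]]]
out = squeeze_excite(fmaps, w1=[[0.5, 0.5]], w2=[[1.0], [-1.0]])
```

The gates let the network emphasize informative channels and suppress the rest, which is the attention effect SE modules contribute.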
Affiliation(s)
- Justin Lo
- Department of Electrical, Computer and Biomedical Engineering, Faculty of Engineering and Architectural Sciences, Ryerson University, Toronto, ON, Canada
- Institute for Biomedical Engineering, Science and Technology (iBEST), a partnership between St. Michael's Hospital and Ryerson University, Toronto, ON, Canada
- Adam Lim
- Department of Electrical, Computer and Biomedical Engineering, Faculty of Engineering and Architectural Sciences, Ryerson University, Toronto, ON, Canada
- Institute for Biomedical Engineering, Science and Technology (iBEST), a partnership between St. Michael's Hospital and Ryerson University, Toronto, ON, Canada
- Matthias W. Wagner
- Division of Neuroradiology, The Hospital for Sick Children, Toronto, ON, Canada
- Birgit Ertl-Wagner
- Division of Neuroradiology, The Hospital for Sick Children, Toronto, ON, Canada
- Department of Medical Imaging, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Dafna Sussman
- Department of Electrical, Computer and Biomedical Engineering, Faculty of Engineering and Architectural Sciences, Ryerson University, Toronto, ON, Canada
- Institute for Biomedical Engineering, Science and Technology (iBEST), a partnership between St. Michael's Hospital and Ryerson University, Toronto, ON, Canada
- Department of Obstetrics and Gynecology, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Correspondence: Dafna Sussman

16
Meshaka R, Gaunt T, Shelmerdine SC. Artificial intelligence applied to fetal MRI: A scoping review of current research. Br J Radiol 2022:20211205. [PMID: 35286139; DOI: 10.1259/bjr.20211205]
Abstract
Artificial intelligence (AI) is defined as the development of computer systems to perform tasks normally requiring human intelligence. A subset of AI, known as machine learning (ML), takes this further by drawing inferences from patterns in data to 'learn' and 'adapt' without explicit instructions, meaning that computer systems can 'evolve' and hopefully improve without necessarily requiring external human influence. The potential of this novel technology has generated great interest in the medical community regarding how it can be applied in healthcare. Within radiology, the focus has mostly been on applications in oncological imaging, although new roles in other subspecialty fields are slowly emerging. In this scoping review, we performed a literature search of the current state of the art and emerging trends for the use of artificial intelligence as applied to fetal magnetic resonance imaging (MRI). Our search yielded several publications covering AI tools for anatomical organ segmentation, improved imaging sequences, and diagnostic applications such as automated biometric fetal measurements and the detection of congenital and acquired abnormalities. We highlight our own perceived gaps in this literature and suggest future avenues for further research. It is our hope that the information presented highlights the varied ways in which novel digital technology could make an impact on future clinical practice with regard to fetal MRI.
Affiliation(s)
- Riwa Meshaka
- Department of Clinical Radiology, Great Ormond Street Hospital for Children NHS Foundation Trust, Great Ormond Street, London, UK
- Trevor Gaunt
- Department of Radiology, University College London Hospitals NHS Foundation Trust, London, UK
- Susan C Shelmerdine
- Department of Clinical Radiology, Great Ormond Street Hospital for Children NHS Foundation Trust, Great Ormond Street, London, UK
- UCL Great Ormond Street Institute of Child Health, Great Ormond Street Hospital for Children, London, UK
- NIHR Great Ormond Street Hospital Biomedical Research Centre, 30 Guilford Street, Bloomsbury, London, UK
- Department of Radiology, St. George's Hospital, Blackshaw Road, London, UK

17
Attallah O. ECG-BiCoNet: An ECG-based pipeline for COVID-19 diagnosis using Bi-Layers of deep features integration. Comput Biol Med 2022; 142:105210. [PMID: 35026574; PMCID: PMC8730786; DOI: 10.1016/j.compbiomed.2022.105210]
Abstract
The accurate and speedy detection of COVID-19 is essential to avert the fast propagation of the virus, relax lockdown constraints, and diminish the burden on health organizations. The methods currently used to diagnose COVID-19 have several limitations, so new techniques need to be investigated to improve diagnosis and overcome these limitations. Given the great benefits of electrocardiogram (ECG) applications, this paper proposes a new pipeline called ECG-BiCoNet to investigate the potential of using ECG data for diagnosing COVID-19. ECG-BiCoNet employs five deep learning models of distinct structural design and extracts two levels of features from two different layers of each model. Features mined from higher layers are fused using the discrete wavelet transform and then integrated with lower-layer features. Afterward, a feature selection approach is applied. Finally, an ensemble classification system merges the predictions of three machine learning classifiers. ECG-BiCoNet supports two classification tasks, binary and multiclass, and achieves a promising COVID-19 diagnostic performance with accuracies of 98.8% and 91.73%, respectively. These results verify that ECG data may be used to diagnose COVID-19, which can help clinicians automate diagnosis and overcome the limitations of manual diagnosis.
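The bi-layer fusion step described above can be sketched in miniature: take a single-level discrete wavelet transform of the higher-layer feature vector and concatenate its approximation coefficients with the lower-layer features. This is an illustrative pure-Python Haar-wavelet sketch, not the paper's implementation (which operates on deep-feature sets from five CNNs):

```python
# Illustrative sketch: DWT-based fusion of two feature levels (Haar assumed).

def haar_approx(x):
    """Single-level Haar DWT approximation coefficients (length halved)."""
    s = 2 ** -0.5
    return [s * (x[i] + x[i + 1]) for i in range(0, len(x) - 1, 2)]

def fuse(high_layer_feats, low_layer_feats):
    # Compress the higher-layer features, then concatenate with lower-layer ones.
    return haar_approx(high_layer_feats) + low_layer_feats

fused = fuse([1.0, 1.0, 2.0, 2.0], [0.3, 0.7])
```

Keeping only the approximation band halves the higher-layer dimension while retaining its low-frequency content, which is the usual motivation for wavelet-based feature fusion.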
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt

18
Pringle C, Kilday JP, Kamaly-Asl I, Stivaros SM. The role of artificial intelligence in paediatric neuroradiology. Pediatr Radiol 2022; 52:2159-2172. [PMID: 35347371; PMCID: PMC9537195; DOI: 10.1007/s00247-022-05322-w]
Abstract
Imaging plays a fundamental role in managing childhood neurologic, neurosurgical and neuro-oncological disease. Applying multi-parametric MRI techniques, such as spectroscopy and diffusion- and perfusion-weighted imaging, to the radiophenotyping of neuroradiologic conditions is becoming increasingly prevalent, particularly with radiogenomic analyses correlating imaging characteristics with molecular biomarkers of disease. However, integration into routine clinical practice remains elusive. With modern multi-parametric MRI now providing additional data beyond anatomy, informing on histology, biology and physiology, such metric-rich information can present as information overload to the treating radiologist and, as such, information relevant to an individual case can become lost. Artificial intelligence techniques are capable of modelling the vast radiologic, biological and clinical datasets that accompany childhood neurologic disease, such that this information can be incorporated into upfront prognostic modelling systems. This review examines machine learning approaches that can be used to underpin such artificial intelligence applications, with exemplars for each machine learning approach from the world literature. Then, within the specific use case of paediatric neuro-oncology, we examine the potential future contribution of such machine learning techniques in offering solutions for patient care in the form of decision-support systems, potentially enabling personalised medicine within this domain of paediatric radiologic practice.
Affiliation(s)
- Catherine Pringle
- Children’s Brain Tumour Research Network (CBTRN), Royal Manchester Children’s Hospital, Manchester, UK
- Division of Informatics, Imaging, and Data Sciences, School of Health Sciences, Faculty of Biology, Medicine, and Health, University of Manchester, Manchester, UK
- John-Paul Kilday
- Children’s Brain Tumour Research Network (CBTRN), Royal Manchester Children’s Hospital, Manchester, UK
- The Centre for Paediatric, Teenage and Young Adult Cancer, Institute of Cancer Sciences, University of Manchester, Manchester, UK
- Ian Kamaly-Asl
- Children’s Brain Tumour Research Network (CBTRN), Royal Manchester Children’s Hospital, Manchester, UK
- The Centre for Paediatric, Teenage and Young Adult Cancer, Institute of Cancer Sciences, University of Manchester, Manchester, UK
- Stavros Michael Stivaros
- Division of Informatics, Imaging, and Data Sciences, School of Health Sciences, Faculty of Biology, Medicine, and Health, University of Manchester, Manchester, UK
- Department of Paediatric Radiology, Royal Manchester Children's Hospital, Central Manchester University Hospitals NHS Foundation Trust, Oxford Road, Manchester, M13 9WL, UK
- The Geoffrey Jefferson Brain Research Centre, Manchester Academic Health Science Centre, University of Manchester, Manchester, UK

19
Attallah O. A computer-aided diagnostic framework for coronavirus diagnosis using texture-based radiomics images. Digit Health 2022; 8:20552076221092543. [PMID: 35433024; PMCID: PMC9005822; DOI: 10.1177/20552076221092543]
Abstract
The accurate and rapid detection of the novel coronavirus infection is very important to prevent the fast spread of the disease and thereby reduce the negative effects that have influenced many industrial sectors, especially healthcare. Artificial intelligence techniques, in particular deep learning, could help in the fast and precise diagnosis of coronavirus from computed tomography images. Most artificial intelligence-based studies used the original computed tomography images to build their models; however, the integration of texture-based radiomics images and deep learning techniques could improve diagnostic accuracy for the novel coronavirus disease. This study proposes a computer-assisted diagnostic framework based on multiple deep learning and texture-based radiomics approaches. It first trains three Residual Network (ResNet) deep learning models with two types of texture-based radiomics images, the discrete wavelet transform and the gray-level covariance matrix, instead of the original computed tomography images. It then fuses the texture-based radiomics deep feature sets extracted from each using the discrete cosine transform, and thereafter further combines the fused features obtained from the three convolutional neural networks. Finally, three support vector machine classifiers are utilized for classification. The proposed method is validated experimentally on the benchmark severe acute respiratory syndrome coronavirus 2 computed tomography image dataset. The accuracies attained indicate that using texture-based radiomics (gray-level covariance matrix, discrete wavelet transform) images for training ResNet-18 (83.22%, 74.9%), ResNet-50 (80.94%, 78.39%), and ResNet-101 (80.54%, 77.99%) is better than using the original computed tomography images (70.34%, 76.51%, and 73.42%, respectively). Furthermore, the sensitivity, specificity, accuracy, precision, and F1-score achieved after the two fusion steps are 99.47%, 99.72%, 99.60%, 99.72%, and 99.60%, which proves that combining the texture-based radiomics deep features obtained from the three ResNets has boosted performance. Thus, fusing multiple texture-based radiomics deep features mined from several convolutional neural networks is better than using only one type of radiomics approach and a single convolutional neural network. This performance allows the framework to be used by radiologists for fast and accurate diagnosis.
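The texture image fed to the networks is built from local gray-level statistics. As a minimal sketch (assuming the common co-occurrence formulation with a horizontal offset of one pixel; the study's exact radiomics settings are not given here), a gray-level matrix can be computed like this:

```python
# Minimal gray-level co-occurrence counting sketch, horizontal offset (0, 1).

def glcm(image, levels):
    """Count co-occurrences of gray levels (i, j) over pixel pairs (x, x+1)."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):   # horizontally adjacent pairs
            m[a][b] += 1
    return m

img = [[0, 0, 1],
       [1, 2, 2],
       [0, 1, 2]]
M = glcm(img, levels=3)
```

Texture descriptors (contrast, homogeneity, energy, and so on) are then derived from the normalized matrix; full radiomics toolkits also aggregate over multiple offsets and angles.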
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt

20
Attallah O. A deep learning-based diagnostic tool for identifying various diseases via facial images. Digit Health 2022; 8:20552076221124432. [PMID: 36105626; PMCID: PMC9465585; DOI: 10.1177/20552076221124432]
Abstract
With the current health crisis caused by the COVID-19 pandemic, patients have become more anxious about infection, so they prefer not to have direct contact with doctors or clinicians. Lately, medical scientists have confirmed that several diseases exhibit corresponding specific features on the face. Recent studies have indicated that computer-aided facial diagnosis can be a promising tool for the automatic diagnosis and screening of diseases from facial images. However, few of these studies used deep learning (DL) techniques. Most of them focused on detecting a single disease, using handcrafted feature extraction methods and conventional machine learning techniques based on individual classifiers trained on small, private datasets of images taken in a controlled environment. This study proposes a novel computer-aided facial diagnosis system called FaceDisNet that uses a new public dataset based on images taken in an unconstrained environment, which could be employed for forthcoming comparisons. It detects both single and multiple diseases. FaceDisNet is constructed by integrating several spatial deep features from convolutional neural networks of various architectures. It does not depend only on spatial features but also extracts spatial-spectral features. FaceDisNet searches for the fused spatial-spectral feature set that has the greatest impact on the classification, employing two feature selection techniques to reduce the large dimension of features resulting from feature fusion. Finally, it builds an ensemble classifier based on stacking to perform classification. FaceDisNet achieved maximum accuracies of 98.57% and 98% after the ensemble classification and feature selection steps for the binary and multiclass classification categories, respectively. These results prove that FaceDisNet is a reliable tool that could be employed to avoid the difficulties and complications of manual diagnosis, and it can help physicians achieve accurate diagnoses without the need for physical contact with patients.
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt

21
Attallah O. DIAROP: Automated Deep Learning-Based Diagnostic Tool for Retinopathy of Prematurity. Diagnostics (Basel) 2021; 11:2034. [PMID: 34829380; PMCID: PMC8620568; DOI: 10.3390/diagnostics11112034]
Abstract
Retinopathy of Prematurity (ROP) affects preterm neonates and can cause blindness. Deep Learning (DL) can assist ophthalmologists in the diagnosis of ROP. This paper proposes an automated and reliable diagnostic tool based on DL techniques, called DIAROP, to support the ophthalmologic diagnosis of ROP. It extracts significant features by first obtaining spatial features from four Convolutional Neural Networks (CNNs) using transfer learning and then applying the Fast Walsh Hadamard Transform (FWHT) to integrate these features. Moreover, DIAROP explores the best-integrated features extracted from the CNNs that influence its diagnostic capability. DIAROP achieved an accuracy of 93.2% and an area under the receiver operating characteristic curve (AUC) of 0.98. Furthermore, DIAROP's performance is compared with recent ROP diagnostic tools. Its promising performance shows that DIAROP may assist the ophthalmologic diagnosis of ROP.
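The FWHT used to integrate the CNN features is a standard butterfly transform over a length-2^k vector; a compact sketch (unnormalized, as commonly defined) follows:

```python
# Fast Walsh-Hadamard transform (unnormalized), in-place butterflies
# over a vector whose length is a power of two.

def fwht(x):
    x = list(x)                      # work on a copy
    h = 1
    while h < len(x):
        for i in range(0, len(x), h * 2):
            for j in range(i, i + h):
                # Butterfly: sum and difference of paired entries.
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

out = fwht([1.0, 0.0, 1.0, 0.0])
```

Applying the transform twice recovers the input scaled by the vector length, which is a handy sanity check; in a fusion pipeline the transformed coefficients of several feature vectors are simply concatenated or truncated.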
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt

22
Intelligent Dermatologist Tool for Classifying Multiple Skin Cancer Subtypes by Incorporating Manifold Radiomics Features Categories. Contrast Media Mol Imaging 2021; 2021:7192016. [PMID: 34621146; PMCID: PMC8457955; DOI: 10.1155/2021/7192016]
Abstract
The rates of skin cancer (SC) are rising every year, and it is becoming a critical health issue worldwide. Early and accurate diagnosis of SC is the key to reducing these rates and improving survivability. However, manual diagnosis is exhausting, complicated, expensive, prone to diagnostic error, and highly dependent on the dermatologist's experience and abilities. Thus, there is a vital need for automated dermatologist tools capable of accurately classifying SC subclasses. Recently, artificial intelligence (AI) techniques, including machine learning (ML) and deep learning (DL), have verified the success of computer-assisted dermatologist tools in the automatic diagnosis and detection of SC diseases. Previous AI-based dermatologist tools are based on features that are either high-level features from DL methods or low-level features from handcrafted operations, and most were constructed for binary classification of SC. This study proposes an intelligent dermatologist tool to accurately diagnose multiple skin lesions automatically. The tool incorporates manifold radiomics feature categories, involving high-level features such as ResNet-50, DenseNet-201, and DarkNet-53 and low-level features including the discrete wavelet transform (DWT) and local binary pattern (LBP). The results prove that merging manifold features of different categories has a high influence on classification accuracy, and they are superior to those obtained by other related AI-based dermatologist tools. Therefore, the proposed tool can be used by dermatologists to help them accurately diagnose the SC subcategory, overcome the limitations of manual diagnosis, and enhance survival rates.
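Of the low-level descriptors named above, LBP is the simplest to illustrate. A minimal sketch of the basic 3x3 variant (the tool's actual radius and neighbor count are not specified here) encodes each pixel by thresholding its eight neighbors against the center value:

```python
# Basic 3x3 local binary pattern code for the center pixel of a patch.

def lbp_code(patch):
    """8-bit LBP code, neighbors read clockwise from the top-left corner."""
    c = patch[1][1]
    neighbors = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                 patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbors):
        if n >= c:                 # neighbor at least as bright -> bit set
            code |= 1 << bit
    return code

code = lbp_code([[6, 5, 2],
                 [7, 4, 1],
                 [9, 3, 0]])
```

A histogram of these codes over an image region serves as the texture feature vector that gets fused with the CNN features.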
23
Single-Input Multi-Output U-Net for Automated 2D Foetal Brain Segmentation of MR Images. J Imaging 2021; 7:200. [PMID: 34677286; PMCID: PMC8536962; DOI: 10.3390/jimaging7100200]
Abstract
In this work, we develop the Single-Input Multi-Output U-Net (SIMOU-Net), a hybrid network for foetal brain segmentation inspired by the original U-Net fused with the holistically nested edge detection (HED) network. The SIMOU-Net is similar to the original U-Net but has a deeper architecture and takes account of the features extracted from each side output. It acts like an ensemble neural network; however, instead of averaging the outputs from several independently trained models, which is computationally expensive, our approach combines outputs from a single network to reduce the variance of predictions and generalization errors. Experimental results using 200 normal foetal brains consisting of over 11,500 2D images produced Dice and Jaccard coefficients of 94.2 ± 5.9% and 88.7 ± 6.9%, respectively. We further tested the proposed network on 54 abnormal cases (over 3500 images) and achieved Dice and Jaccard coefficients of 91.2 ± 6.8% and 85.7 ± 6.6%, respectively.
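The two overlap metrics reported above are defined directly from the intersection of a binary predicted mask with the ground truth; a minimal sketch on flattened pixel lists:

```python
# Dice and Jaccard overlap coefficients for binary segmentation masks.

def dice_jaccard(pred, truth):
    """pred, truth: flat lists of 0/1 pixel labels of equal length."""
    inter = sum(p and t for p, t in zip(pred, truth))
    dice = 2 * inter / (sum(pred) + sum(truth))
    jaccard = inter / (sum(pred) + sum(truth) - inter)
    return dice, jaccard

d, j = dice_jaccard([1, 1, 0, 1], [1, 0, 0, 1])
```

The two are monotonically related (J = D / (2 - D)), so they rank segmentations identically; Dice weights the intersection more heavily and is the more common loss surrogate.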
24
Attallah O. CoMB-Deep: Composite Deep Learning-Based Pipeline for Classifying Childhood Medulloblastoma and Its Classes. Front Neuroinform 2021; 15:663592. [PMID: 34122031; PMCID: PMC8193683; DOI: 10.3389/fninf.2021.663592]
Abstract
Childhood medulloblastoma (MB) is a threatening malignant tumor affecting children all over the globe. It is believed to be the most common pediatric brain tumor causing death. Early and accurate classification of childhood MB and its classes is of great importance to help doctors choose a suitable treatment and observation plan, avoid tumor progression, and lower death rates. The current gold standard for diagnosing MB is the histopathology of biopsy samples. However, manual analysis of such images is complicated, costly, time-consuming, and highly dependent on the expertise and skills of pathologists, which might cause inaccurate results. This study introduces a reliable computer-assisted pipeline called CoMB-Deep to automatically classify MB and its classes with high accuracy from histopathological images. A key challenge of the study is the lack of childhood MB datasets, especially for its four categories (defined by the WHO), and the scarcity of related studies. All relevant works were based on either deep learning (DL) or textural-analysis feature extraction, employed distinct features to accomplish classification, and mostly extracted spatial features only. CoMB-Deep, by contrast, blends the advantages of textural-analysis feature extraction techniques and DL approaches. It consists of a composite of DL techniques. Initially, it extracts deep spatial features from 10 convolutional neural networks (CNNs). It then performs a feature fusion step using the discrete wavelet transform (DWT), a texture analysis method capable of reducing the dimension of the fused features. Next, CoMB-Deep explores the best combination of fused features using two search strategies, enhancing the performance of the classification process, and employs two feature selection techniques on the fused feature sets selected in the previous step. A bi-directional long short-term memory (Bi-LSTM) network, a DL-based approach, is utilized for the classification phase. CoMB-Deep supports two classification categories: a binary category for distinguishing between abnormal and normal cases, and a multi-class category for identifying the subclasses of MB. The results for both classification categories prove that CoMB-Deep is reliable, and indicate that the feature sets selected using both search strategies enhanced the performance of the Bi-LSTM compared to individual spatial deep features. CoMB-Deep is compared to related studies to verify its competitiveness, and this comparison confirmed its robustness and outperformance. Hence, CoMB-Deep can help pathologists perform accurate diagnoses, reduce the misdiagnosis risks that could occur with manual diagnosis, accelerate the classification procedure, and decrease diagnosis costs.
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt

25
Attallah O, Sharkas M. GASTRO-CADx: a three stages framework for diagnosing gastrointestinal diseases. PeerJ Comput Sci 2021; 7:e423. [PMID: 33817058; PMCID: PMC7959662; DOI: 10.7717/peerj-cs.423]
Abstract
Gastrointestinal (GI) diseases are common illnesses that affect the GI tract. Diagnosing these GI diseases is quite expensive, complicated, and challenging. A computer-aided diagnosis (CADx) system based on deep learning (DL) techniques could considerably lower examination costs and increase the speed and quality of diagnosis. Therefore, this article proposes a CADx system called Gastro-CADx to classify several GI diseases using DL techniques. Gastro-CADx involves three progressive stages. Initially, four different CNNs are used as feature extractors to extract spatial features; most related work based on DL approaches extracted spatial features only. In the second stage, the features extracted in the first stage are passed through the discrete wavelet transform (DWT) and the discrete cosine transform (DCT), which are used to extract temporal-frequency and spatial-frequency features; a feature reduction procedure is also performed in this stage. Finally, in the third stage, several combinations of features are fused in a concatenated manner to inspect the effect of feature combination on the output of the CADx and to select the best-fused feature set. Two datasets, referred to as Dataset I and II, are utilized to evaluate the performance of Gastro-CADx. Gastro-CADx achieved accuracies of 97.3% and 99.7% on Dataset I and II, respectively. Comparison with recent related works showed that the proposed approach classifies GI diseases with higher accuracy. Thus, it can be used to reduce medical complications and death rates, in addition to the cost of treatment. It can also help gastroenterologists produce more accurate diagnoses while lowering inspection time.
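The DCT stage doubles as the feature-reduction step: because the DCT compacts most of the signal energy into its leading coefficients, truncating the coefficient list shrinks the feature vector. A minimal 1-D DCT-II sketch (unnormalized; the paper's exact transform settings are assumptions here):

```python
import math

# 1-D DCT-II feature transform with optional truncation for reduction.

def dct2(x, keep=None):
    """Return DCT-II coefficients of x; keep only the first `keep` if given."""
    N = len(x)
    coeffs = [sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                  for n in range(N))
              for k in range(N)]
    return coeffs[:keep] if keep else coeffs

feats = dct2([1.0, 2.0, 3.0, 4.0], keep=2)
```

Coefficient 0 is proportional to the mean of the input, and later coefficients capture progressively higher-frequency variation, which is why dropping the tail loses little information for smooth feature vectors.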
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communication Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
- Maha Sharkas
- Department of Electronics and Communication Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, Egypt
26
Attallah O. MB-AI-His: Histopathological Diagnosis of Pediatric Medulloblastoma and its Subtypes via AI. Diagnostics (Basel) 2021; 11:359. [PMID: 33672752 PMCID: PMC7924641 DOI: 10.3390/diagnostics11020359]
Abstract
Medulloblastoma (MB) is a dangerous malignant pediatric brain tumor that can lead to death; it is considered the most common cancerous pediatric brain tumor. Precise and timely diagnosis of pediatric MB and its four subtypes (defined by the World Health Organization (WHO)) is essential to decide the appropriate follow-up plan and suitable treatments to prevent its progression and reduce mortality rates. Histopathology is the gold-standard modality for the diagnosis of MB and its subtypes, but manual diagnosis by a pathologist is very complicated, needs excessive time, and is subject to the pathologist's expertise and skills, which may lead to variability in the diagnosis or to misdiagnosis. The main purpose of the paper is to propose a time-efficient and reliable computer-aided diagnosis (CADx) system, namely MB-AI-His, for the automatic diagnosis of pediatric MB and its subtypes from histopathological images. The main challenge in this work is the lack of datasets available for the diagnosis of pediatric MB and its four subtypes and the limited related work. Related studies are based on either textural analysis or deep learning (DL) feature extraction methods, and they used individual features to perform the classification task. MB-AI-His, however, combines the benefits of DL techniques and textural-analysis feature extraction methods in a cascaded manner. First, it uses three DL convolutional neural networks (CNNs), namely DenseNet-201, MobileNet, and ResNet-50, to extract spatial DL features. Next, it extracts time-frequency features from the spatial DL features using the discrete wavelet transform (DWT), a textural-analysis method. Finally, MB-AI-His fuses the three spatial-time-frequency feature sets generated from the three CNNs and the DWT using the discrete cosine transform (DCT) and principal component analysis (PCA) to produce a time-efficient CADx system that merges the advantages of different CNN architectures. MB-AI-His has a binary classification level for distinguishing normal from abnormal MB images and a multi-class level for classifying the four subtypes of MB. The results show that MB-AI-His is accurate and reliable at both the binary and multi-class levels. It is also time-efficient, as both the PCA and DCT methods reduced the training execution time. The performance of MB-AI-His was compared with related CADx systems, and the comparison verified that MB-AI-His outperforms them. Therefore, it can support pathologists in the accurate and reliable diagnosis of MB and its subtypes from histopathological images, and it can reduce the time and cost of the diagnosis procedure, which will correspondingly lead to lower death rates.
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria 1029, Egypt
27
Attallah O, Anwar F, Ghanem NM, Ismail MA. Histo-CADx: duo cascaded fusion stages for breast cancer diagnosis from histopathological images. PeerJ Comput Sci 2021; 7:e493. [PMID: 33987459 PMCID: PMC8093954 DOI: 10.7717/peerj-cs.493]
Abstract
Breast cancer (BC) is one of the most common types of cancer affecting females worldwide. It may lead to irreversible complications and even death due to late diagnosis and treatment. Pathological analysis is considered the gold standard for BC detection, but it is a challenging task. Automatic diagnosis of BC could reduce death rates by creating a computer-aided diagnosis (CADx) system capable of accurately identifying BC at an early stage and decreasing the time pathologists spend on examinations. This paper proposes a novel CADx system named Histo-CADx for the automatic diagnosis of BC. Most related studies were based on individual deep learning methods; they did not examine the influence of fusing features from multiple CNNs with handcrafted features, nor did they investigate the best combination of fused features for CADx performance. Histo-CADx is therefore based on two fusion stages. The first stage investigates the impact of fusing several deep learning (DL) techniques with handcrafted feature extraction methods using the auto-encoder DL method, and it searches for a suitable set of fused features that could improve the performance of Histo-CADx. The second stage constructs a multiple classifier system (MCS) that fuses the outputs of three classifiers to further improve accuracy. The performance of Histo-CADx is evaluated on two public datasets, BreakHis and ICIAR 2018. The results on both datasets verify that the two fusion stages successfully improved the accuracy of the CADx compared with CADx systems built on individual features. Furthermore, using the auto-encoder for the fusion process reduced the computation cost of the system. Moreover, the results after the two fusion stages confirm that Histo-CADx is reliable and classifies BC more accurately than other recent studies. Consequently, it can be used by pathologists to help them reach an accurate diagnosis of BC, and it can decrease the time and effort needed by medical experts during the examination.
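The second fusion stage above combines the decisions of three classifiers. The abstract does not state the exact combination rule, so the sketch below assumes plurality voting, one of the most common MCS rules; `majority_vote` is a hypothetical name.

```python
from collections import Counter

def majority_vote(predictions):
    # predictions: one label list per base classifier, aligned by sample;
    # the fused label for each sample is the plurality vote across classifiers
    fused = []
    for labels in zip(*predictions):
        fused.append(Counter(labels).most_common(1)[0][0])
    return fused
```

With three base classifiers a plurality vote corrects any sample on which exactly one classifier errs, which is why an MCS can outperform its individual members when their errors are not strongly correlated.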
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology, and Maritime Transport, Alexandria, Alexandria, Egypt
- Fatma Anwar
- Computer and Systems Engineering Department, Alexandria University, Alexandria, Egypt
- Nagia M. Ghanem
- Computer and Systems Engineering Department, Alexandria University, Alexandria, Egypt
- Mohamed A. Ismail
- Computer and Systems Engineering Department, Alexandria University, Alexandria, Egypt
28
A BCI System Based on Motor Imagery for Assisting People with Motor Deficiencies in the Limbs. Brain Sci 2020; 10:brainsci10110864. [PMID: 33212777 PMCID: PMC7697603 DOI: 10.3390/brainsci10110864]
Abstract
Motor deficiencies constitute a significant problem affecting millions of people worldwide. Such people suffer from impaired daily functioning, which may lead to diminished and incoherent daily routines and a deteriorated quality of life (QoL). Thus, there is an essential need for assistive systems to help those people perform their daily activities and enhance their overall QoL. This study proposes a novel brain-computer interface (BCI) system for assisting people with limb motor disabilities in performing their daily-life activities by using their brain signals to control assistive devices. The extraction of useful features is vital for an efficient BCI system; therefore, the proposed system feeds a hybrid feature set into three machine-learning (ML) classifiers to classify motor imagery (MI) tasks. This hybrid feature-selection (FS) scheme yields a practical, real-time, efficient BCI with low computational cost. We investigate different combinations of channels to select the combination with the highest impact on performance. The results indicate that the highest accuracies, achieved with a support vector machine (SVM) classifier, are 93.46% and 86.0% for the BCI Competition III-IVa dataset and the autocalibration and recurrent adaptation dataset, respectively; these datasets are used to test the performance of the proposed BCI. We also verify the effectiveness of the proposed BCI by comparing its performance with recent studies, showing that the proposed system is accurate and efficient. Future work can apply the proposed system to individuals with limb motor disabilities to assist them and test its capability to improve their QoL. Moreover, forthcoming work can examine the system's performance in controlling assistive devices such as wheelchairs or artificial limbs.
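The abstract mentions a hybrid feature set for MI classification without listing its components. One classic ingredient of MI-BCI feature sets is the log-variance of each (typically band-pass-filtered) EEG channel, which acts as a band-power proxy; the sketch below shows only that single ingredient, with `log_variance_features` as a hypothetical name.

```python
import math

def log_variance_features(trial):
    # trial: one motor-imagery trial as a list of per-channel sample lists;
    # log-variance per channel is a classic band-power proxy in MI-BCI work
    feats = []
    for channel in trial:
        mean = sum(channel) / len(channel)
        var = sum((s - mean) ** 2 for s in channel) / len(channel)
        feats.append(math.log(var + 1e-12))  # epsilon guards against log(0)
    return feats
```

Feature vectors of this kind (one value per channel) are what make channel-combination experiments like those in the study cheap to run: dropping a channel simply drops one feature.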
29
Ragab DA, Attallah O. FUSI-CAD: Coronavirus (COVID-19) diagnosis based on the fusion of CNNs and handcrafted features. PeerJ Comput Sci 2020; 6:e306. [PMID: 33816957 PMCID: PMC7924442 DOI: 10.7717/peerj-cs.306]
Abstract
The precise and rapid diagnosis of coronavirus (COVID-19) at a very early stage helps doctors manage patients under high-workload conditions and prevents the spread of this pandemic virus. Computer-aided diagnosis (CAD) based on artificial intelligence (AI) techniques can be used to distinguish COVID-19 from non-COVID-19 cases in computed tomography (CT) imaging. Furthermore, CAD systems are capable of delivering a fast, accurate COVID-19 diagnosis, which saves time for disease control and provides an efficient alternative to laboratory tests. In this study, a novel AI-based CAD system called FUSI-CAD is proposed. Almost all methods in the literature are based on individual convolutional neural networks (CNNs). The FUSI-CAD system is instead based on the fusion of multiple different CNN architectures with three handcrafted feature sets, including statistical features and textural-analysis features such as the discrete wavelet transform (DWT) and the grey-level co-occurrence matrix (GLCM), which were not previously utilized in coronavirus diagnosis. The SARS-CoV-2 CT-scan dataset is used to test the performance of the proposed FUSI-CAD. The results show that the proposed system accurately differentiates between COVID-19 and non-COVID-19 images, achieving an accuracy of 99%. The system also proved reliable, as the sensitivity, specificity, and precision each reached 99%, and the diagnostic odds ratio (DOR) is ≥ 100. Furthermore, the results are compared with recent related studies based on the same dataset; the comparison verifies the competence of the proposed FUSI-CAD over the other related CAD systems. Thus, the novel FUSI-CAD system can be employed in real diagnostic scenarios to achieve accurate COVID-19 testing and avoid the human misdiagnosis that might arise from fatigue. It can also reduce the time and effort expended by radiologists during the examination process.
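Of the handcrafted features named above, the GLCM is the most self-contained to illustrate. The sketch below builds a normalised co-occurrence matrix for one pixel offset and computes the contrast descriptor, one of the standard Haralick texture measures; the function names are illustrative, not the paper's, and real use would aggregate several offsets and descriptors.

```python
def glcm(image, levels, dx=1, dy=0):
    # grey-level co-occurrence matrix for one pixel offset (dx, dy),
    # normalised so entries are co-occurrence probabilities
    P = [[0.0] * levels for _ in range(levels)]
    H, W = len(image), len(image[0])
    pairs = 0
    for y in range(H - dy):
        for x in range(W - dx):
            P[image[y][x]][image[y + dy][x + dx]] += 1
            pairs += 1
    return [[v / pairs for v in row] for row in P]

def glcm_contrast(P):
    # Haralick contrast: weights co-occurrences by squared grey-level distance
    n = len(P)
    return sum(P[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))
```

Descriptors such as contrast, energy, and homogeneity computed from P are the scalar texture features that get concatenated with CNN features in fusion systems of this kind.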
Affiliation(s)
- Dina A. Ragab
- Electronics and Communications Engineering Department, Arab Academy for Science, Technology, and Maritime Transport (AASTMT), Alexandria, Egypt
- Omneya Attallah
- Electronics and Communications Engineering Department, Arab Academy for Science, Technology, and Maritime Transport (AASTMT), Alexandria, Egypt
30
Machine Learning for Brain Images Classification of Two Language Speakers. Comput Intell Neurosci 2020; 2020:9045456. [PMID: 32587607 PMCID: PMC7294350 DOI: 10.1155/2020/9045456]
Abstract
Image analysis of the brain with machine learning remains relevant for detecting different characteristics of this complex organ. Recent research has observed that there are differences in brain structure, specifically in white matter, when learning and using a second language. This work focuses on understanding the brain through the classification of magnetic resonance images (MRIs) of bilingual and monolingual people who share English as a common language. Artificial neural networks with a single hidden layer were tested, reducing that layer down to two neurons. Nine hundred inputs were used, and the classifier registered a high rate of effectiveness. The training was supervised, which could be improved in a future investigation. This task is usually carried out by a human expert using Tract-Based Spatial Statistics analysis with fractional anisotropy rendered in different colors on a screen. This proposal therefore presents another option for quantitatively analysing this type of phenomenon, contributing to neuroscience by automatically distinguishing bilingual people from monolinguals using machine learning on MRIs. This reinforces what is reported by manual detection and demonstrates how a machine can do it.
31
Cao P, Gao J, Zhang Z. Multi-View Based Multi-Model Learning for MCI Diagnosis. Brain Sci 2020; 10:brainsci10030181. [PMID: 32244855 PMCID: PMC7139974 DOI: 10.3390/brainsci10030181]
Abstract
Mild cognitive impairment (MCI) is the early stage of Alzheimer’s disease (AD). Automatic diagnosis of MCI by magnetic resonance imaging (MRI) images has been the focus of research in recent years. Furthermore, deep learning models based on 2D view and 3D view have been widely used in the diagnosis of MCI. The deep learning architecture can capture anatomical changes in the brain from MRI scans to extract the underlying features of brain disease. In this paper, we propose a multi-view based multi-model (MVMM) learning framework, which effectively combines the local information of 2D images with the global information of 3D images. First, we select some 2D slices from MRI images and extract the features representing 2D local information. Then, we combine them with the features representing 3D global information learned from 3D images to train the MVMM learning framework. We evaluate our model on the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database. The experimental results show that our proposed model can effectively recognize MCI through MRI images (accuracy of 87.50% for MCI/HC and accuracy of 83.18% for MCI/AD).
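The abstract describes combining local features from selected 2D slices with global features from the 3D volume, but not the exact combination rule. One plausible scheme, shown purely as an assumption, is to average the per-slice feature vectors and concatenate the result with the volume-level vector; `mvmm_fuse` is a hypothetical name.

```python
def mvmm_fuse(slice_feature_sets, volume_features):
    # average the per-slice 2D feature vectors (local information), then
    # concatenate with the 3D global feature vector (global information)
    n = len(slice_feature_sets)
    dim = len(slice_feature_sets[0])
    avg_2d = [sum(f[i] for f in slice_feature_sets) / n for i in range(dim)]
    return avg_2d + list(volume_features)
```

Averaging keeps the fused dimensionality independent of how many slices are selected, which is one reason slice-pooling schemes like this are common in 2D/3D hybrid pipelines.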
32
Deep Learning Techniques for Automatic Detection of Embryonic Neurodevelopmental Disorders. Diagnostics (Basel) 2020; 10:diagnostics10010027. [PMID: 31936008 PMCID: PMC7169467 DOI: 10.3390/diagnostics10010027]
Abstract
The increasing rates of neurodevelopmental disorders (NDs) are a growing concern for pregnant women, parents, and clinicians caring for healthy infants and children. NDs can originate during embryonic development for several reasons. Up to three in 1000 pregnant women have embryos with brain defects; hence, early detection of embryonic neurodevelopmental disorders (ENDs) is necessary. Related work on embryonic ND classification is very limited and is based on conventional machine learning (ML) methods for the feature extraction and classification processes. The feature extraction in these methods is handcrafted and has several drawbacks. Deep learning methods have the ability to learn an optimal representation from the raw images without image enhancement, segmentation, or separate feature extraction, leading to an effective classification process. This article proposes a new framework based on deep learning methods for the detection of ENDs. To the best of our knowledge, this is the first study that uses deep learning techniques for detecting ENDs. The framework consists of four stages: transfer learning, deep feature extraction, feature reduction, and classification, and it depends on feature fusion. The results showed that the proposed framework was capable of identifying ENDs from embryonic MRI images of various gestational ages. To verify the efficiency of the proposed framework, the results were compared with related work that used embryonic images, and the performance of the proposed framework was competitive. This means the proposed framework can be successfully used for detecting ENDs.
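The framework's third stage is feature reduction, but the abstract does not name the method. As an illustrative stand-in, the sketch below ranks deep-feature columns by variance and keeps the top k, a simple filter-style reduction; `variance_rank_reduce` is a hypothetical name, not the paper's technique.

```python
def variance_rank_reduce(X, k):
    # X: rows are samples, columns are deep features;
    # keep the k columns with the highest variance across samples
    n = len(X)
    idx_var = []
    for j in range(len(X[0])):
        col = [row[j] for row in X]
        mean = sum(col) / n
        var = sum((v - mean) ** 2 for v in col) / n
        idx_var.append((var, j))
    keep = sorted(j for _, j in sorted(idx_var, reverse=True)[:k])
    return [[row[j] for j in keep] for row in X]
```

Near-constant columns carry little discriminative information, so dropping them shrinks the classifier's input (and training time) with minimal loss, which is the same goal the stage serves in the framework.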