1.
Cao P, Derhaag J, Coonen E, Brunner H, Acharya G, Salumets A, Zamani Esteki M. Generative artificial intelligence to produce high-fidelity blastocyst-stage embryo images. Hum Reprod 2024; 39:1197-1207. PMID: 38600621; PMCID: PMC11145014; DOI: 10.1093/humrep/deae064.
Abstract
STUDY QUESTION Can generative artificial intelligence (AI) models produce high-fidelity images of human blastocysts? SUMMARY ANSWER Generative AI models exhibit the capability to generate high-fidelity human blastocyst images, thereby providing substantial training datasets crucial for the development of robust AI models. WHAT IS KNOWN ALREADY The integration of AI into IVF procedures holds the potential to enhance objectivity and automate embryo selection for transfer. However, the effectiveness of AI is limited by data scarcity and ethical concerns related to patient data privacy. Generative adversarial networks (GAN) have emerged as a promising approach to alleviate data limitations by generating synthetic data that closely approximate real images. STUDY DESIGN, SIZE, DURATION Blastocyst images were included as training data from a public dataset of time-lapse microscopy (TLM) videos (n = 136). A style-based GAN was fine-tuned as the generative model. PARTICIPANTS/MATERIALS, SETTING, METHODS We curated a total of 972 blastocyst images as training data, where frames were captured within the time window of 110-120 h post-insemination at 1-h intervals from TLM videos. We configured the style-based GAN model with data augmentation (AUG) and pretrained weights (Pretrained-T: with translation equivariance; Pretrained-R: with translation and rotation equivariance) to compare their optimization on image synthesis. We then applied quantitative metrics including Fréchet Inception Distance (FID) and Kernel Inception Distance (KID) to assess the quality and fidelity of the generated images. Subsequently, we evaluated qualitative performance by measuring the intelligence behavior of the model through the visual Turing test. To this end, 60 individuals with diverse backgrounds and expertise in clinical embryology and IVF evaluated the quality of synthetic embryo images. 
MAIN RESULTS AND THE ROLE OF CHANCE During the training process, we observed consistent improvement in image quality as measured by FID and KID scores. The Pretrained and AUG + Pretrained models started with remarkably lower FID and KID values than both the Baseline and AUG + Baseline models. After 5000 training iterations, the AUG + Pretrained-R model showed the highest performance of the five evaluated configurations, with FID and KID scores of 15.2 and 0.004, respectively. We then carried out the visual Turing test, in which IVF embryologists, IVF laboratory technicians, and non-experts evaluated the synthetic blastocyst-stage embryo images; the groups obtained similar specificity, with marginal differences in accuracy and sensitivity. LIMITATIONS, REASONS FOR CAUTION In this study, we focused the training data on blastocyst images, as IVF embryos are primarily assessed at the blastocyst stage. However, generating images across the different preimplantation stages would offer further insights into preimplantation embryo development and IVF success. In addition, we resized training images to a resolution of 256 × 256 pixels to moderate the computational cost of training the style-based GAN models. Further research is needed involving a more extensive and diverse dataset covering development from the zygote to the blastocyst stage, e.g. video generation, and improved image resolution, to facilitate the development of comprehensive AI algorithms and to produce higher-quality images. WIDER IMPLICATIONS OF THE FINDINGS Generative AI models hold promising potential for generating high-fidelity human blastocyst images, which enables the development of robust AI models by providing sufficient training datasets while safeguarding patient data privacy.
Additionally, this may help to produce sufficient embryo imaging training data with different (rare) abnormal features, such as embryonic arrest and tripolar cell division, to avoid class imbalance and achieve balanced datasets. Thus, generative models may offer a compelling opportunity to transform embryo selection procedures and substantially enhance IVF outcomes. STUDY FUNDING/COMPETING INTEREST(S) This study was supported by a Horizon 2020 innovation grant (ERIN, grant no. EU952516) and a Horizon Europe grant (NESTOR, grant no. 101120075) of the European Commission to A.S. and M.Z.E., the Estonian Research Council (grant no. PRG1076) to A.S., and the EVA (Erfelijkheid Voortplanting & Aanleg) specialty program (grant no. KP111513) of Maastricht University Medical Centre (MUMC+) to M.Z.E. TRIAL REGISTRATION NUMBER Not applicable.
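The FID metric reported in this entry fits a Gaussian to each set of Inception embeddings and measures the Fréchet distance between the two fits. A minimal sketch follows, with synthetic random vectors standing in for real Inception-v3 features; the dimensions and sample counts are illustrative assumptions, not the study's setup.

```python
import numpy as np
from scipy import linalg

def fid(feats_a, feats_b):
    """Fréchet Inception Distance between two (n_samples, dim) sets of
    feature vectors: squared mean distance plus a covariance term."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    # Matrix square root of the covariance product; discard the small
    # imaginary parts that arise from numerical error.
    covmean = linalg.sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

# Stand-in "real" and "generated" embeddings (not Inception features).
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 8))
fake = rng.normal(0.5, 1.0, size=(500, 8))
print(fid(real, real) < 1e-6)  # identical sets score (numerically) zero
print(fid(real, fake) > 0.1)   # a shifted distribution scores higher
```

Lower FID means the generated distribution is closer to the real one, which is why scores falling during training indicate improving image quality.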
Affiliation(s)
- Ping Cao
- Department of Clinical Genetics, Maastricht University Medical Center+ (MUMC+), Maastricht, The Netherlands
- Department of Genetics and Cell Biology, GROW Research Institute for Oncology and Reproduction, Faculty of Health, Medicine and Life Sciences (FHML), Maastricht University, Maastricht, The Netherlands
- Josien Derhaag
- Department of Reproductive Medicine, Maastricht University Medical Center+ (MUMC+), Maastricht, The Netherlands
- Edith Coonen
- Department of Clinical Genetics, Maastricht University Medical Center+ (MUMC+), Maastricht, The Netherlands
- Department of Reproductive Medicine, Maastricht University Medical Center+ (MUMC+), Maastricht, The Netherlands
- Han Brunner
- Department of Clinical Genetics, Maastricht University Medical Center+ (MUMC+), Maastricht, The Netherlands
- Department of Genetics and Cell Biology, GROW Research Institute for Oncology and Reproduction, Faculty of Health, Medicine and Life Sciences (FHML), Maastricht University, Maastricht, The Netherlands
- Department of Human Genetics, Radboud University Medical Center, Nijmegen, The Netherlands
- Ganesh Acharya
- Division of Obstetrics and Gynecology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, and Karolinska University Hospital, Stockholm, Sweden
- Women’s Health and Perinatology Research Group, Department of Clinical Medicine, UiT—The Arctic University of Norway, Tromsø, Norway
- Andres Salumets
- Division of Obstetrics and Gynecology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, and Karolinska University Hospital, Stockholm, Sweden
- Competence Centre on Health Technologies, Tartu, Estonia
- Department of Obstetrics and Gynecology, Institute of Clinical Medicine, University of Tartu, Tartu, Estonia
- Masoud Zamani Esteki
- Department of Clinical Genetics, Maastricht University Medical Center+ (MUMC+), Maastricht, The Netherlands
- Department of Genetics and Cell Biology, GROW Research Institute for Oncology and Reproduction, Faculty of Health, Medicine and Life Sciences (FHML), Maastricht University, Maastricht, The Netherlands
- Division of Obstetrics and Gynecology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, and Karolinska University Hospital, Stockholm, Sweden
2.
Yousefpour Shahrivar R, Karami F, Karami E. Enhancing Fetal Anomaly Detection in Ultrasonography Images: A Review of Machine Learning-Based Approaches. Biomimetics (Basel) 2023; 8:519. PMID: 37999160; PMCID: PMC10669151; DOI: 10.3390/biomimetics8070519.
Abstract
Fetal development is a critical phase in prenatal care, demanding the timely identification of anomalies in ultrasound images to safeguard the well-being of both the unborn child and the mother. Medical imaging has played a pivotal role in detecting fetal abnormalities and malformations. However, despite significant advances in ultrasound technology, the accurate identification of irregularities in prenatal images continues to pose considerable challenges, often requiring substantial time and expertise from medical professionals. In this review, we survey recent developments in machine learning (ML) methods applied to fetal ultrasound images. Specifically, we focus on a range of ML algorithms employed in the context of fetal ultrasound, encompassing tasks such as image classification, object recognition, and segmentation. We highlight how these innovative approaches can enhance ultrasound-based fetal anomaly detection and provide insights for future research and clinical implementation. Finally, we emphasize directions in which future investigations can contribute to more effective ultrasound-based fetal anomaly detection.
Affiliation(s)
- Ramin Yousefpour Shahrivar
- Department of Biology, College of Convergent Sciences and Technologies, Science and Research Branch, Islamic Azad University, Tehran, 14515-775, Iran
- Fatemeh Karami
- Department of Medical Genetics, Applied Biophotonics Research Center, Science and Research Branch, Islamic Azad University, Tehran, 14515-775, Iran
- Ebrahim Karami
- Department of Engineering and Applied Sciences, Memorial University of Newfoundland, St. John’s, NL A1B 3X5, Canada
3.
Rather IH, Kumar S. Generative adversarial network based synthetic data training model for lightweight convolutional neural networks. Multimed Tools Appl 2023:1-23. PMID: 37362646; PMCID: PMC10199442; DOI: 10.1007/s11042-023-15747-6.
Abstract
Inadequate training data is a significant challenge for deep learning techniques, particularly in applications where data are difficult to obtain and publicly available datasets are uncommon owing to ethical and privacy concerns. Approaches such as data augmentation and transfer learning mitigate this limitation to some extent. However, beyond a certain amount of data augmentation, the quality of the generated data stalls, and transfer learning suffers from the issue of negative transfer. This paper proposes a novel generative adversarial network-based synthetic data training (GAN-ST) model to generate synthetic data for training a lightweight convolutional neural network (CNN). An enhanced generator is proposed to quickly saturate and cover the colour space of the training distribution. The GAN-ST model is based on the Deep Convolutional Generative Adversarial Network (DCGAN) and Conditional Generative Adversarial Network (CGAN) models, each incorporating the enhanced generator. The study evaluates the accuracy of a CNN model on the MNIST and CIFAR-10 datasets using both original and synthetic data. The results revealed an impressive classifier accuracy on the MNIST dataset: 99.38% with GAN-ST-generated synthetic training data, only 0.05% lower than training on the original data. The classifier performance on the CIFAR-10 dataset is also remarkable, achieving an accuracy of 90.23%. The performance of a CNN trained on GAN-ST synthetic data is notable, with improvements of up to 0.66% and 7.06% over single-GAN synthetic data training for the MNIST and CIFAR-10 datasets, respectively. By training two GANs independently, the GAN-ST model covers different parts of the original data distribution, resulting in a more diverse and realistic training set for the classifier. This diverse set of synthetic data, when used to train a CNN, generalizes better to new data, leading to improved classification accuracy.
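The core GAN-ST idea, pooling samples from two independently trained generators so the union covers more of the original distribution than either alone, can be sketched with stand-in samplers. The Gaussian "generators" below are illustrative assumptions, not the paper's trained DCGAN/CGAN models.

```python
import numpy as np

def build_gan_st_training_set(gen_a, gen_b, n_per_gan, seed=0):
    """Pool synthetic samples from two independently trained generators
    and shuffle them into one training set for the classifier."""
    rng = np.random.default_rng(seed)
    xs = np.concatenate([gen_a(n_per_gan, rng), gen_b(n_per_gan, rng)])
    rng.shuffle(xs)  # shuffles rows in place
    return xs

# Hypothetical samplers: each covers a different mode of the data.
gen_a = lambda n, rng: rng.normal(-2.0, 0.5, size=(n, 2))
gen_b = lambda n, rng: rng.normal(+2.0, 0.5, size=(n, 2))

data = build_gan_st_training_set(gen_a, gen_b, 100)
print(data.shape)  # (200, 2)
```

Because each generator concentrates on a different region, the pooled set spans both modes, which is the diversity argument the abstract makes.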
Affiliation(s)
- Ishfaq Hussain Rather
- School of Computer & Systems Sciences, Jawaharlal Nehru University, New Delhi, India
- Sushil Kumar
- School of Computer & Systems Sciences, Jawaharlal Nehru University, New Delhi, India
4.
Esfandiari MA, Fallah Tafti M, Jafarnia Dabanloo N, Yousefirizi F. Detection of the rotator cuff tears using a novel convolutional neural network from magnetic resonance image (MRI). Heliyon 2023; 9:e15804. PMID: 37206038; PMCID: PMC10189183; DOI: 10.1016/j.heliyon.2023.e15804.
Abstract
A rotator cuff tear is a common injury for basketball players, handball players, and other athletes who heavily use their shoulders. This injury can be diagnosed precisely from a magnetic resonance (MR) image. In this paper, a novel deep learning-based framework is proposed to diagnose rotator cuff tears from MRI images of patients suspected of the injury. First, we collected 150 shoulder MRI images, equally divided between rotator cuff tear patients and healthy controls. These images were reviewed by an orthopedic specialist, labeled, and used as input to various configurations of a Convolutional Neural Network (CNN). At this stage, five different convolutional network configurations were examined. In the next step, the configuration with the highest accuracy was used to extract deep features and classify the two classes of rotator cuff tear and healthy. The MRI images were also fed to two lightweight pre-trained CNNs (MobileNetv2 and SqueezeNet) for comparison with the proposed CNN. Finally, the evaluation was performed using 5-fold cross-validation. A dedicated Graphical User Interface (GUI) was also designed in the MATLAB environment for simplicity, allowing testing by detecting the image class. The proposed CNN achieved higher accuracy than the two pre-trained CNNs. The average accuracy, precision, sensitivity, and specificity achieved by the best CNN configuration were 92.67%, 91.13%, 91.75%, and 92.22%, respectively. The deep learning algorithm could accurately rule out significant rotator cuff tears based on shoulder MRI.
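The 5-fold cross-validation used for evaluation can be sketched as an index split; this is a generic sketch, not the authors' MATLAB code, with only the dataset size (150 images) taken from the abstract.

```python
import numpy as np

def kfold_indices(n_samples, k=5, seed=0):
    """Shuffle sample indices and split them into k disjoint folds;
    each fold serves once as the test set while the rest train."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n_samples), k)

folds = kfold_indices(150, k=5)  # 150 images, as in the study
for i, test_idx in enumerate(folds):
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    # train_idx / test_idx would drive one train/evaluate round of the CNN
print([len(f) for f in folds])  # [30, 30, 30, 30, 30]
```

Reported averages over the five folds (accuracy, precision, sensitivity, specificity) come from aggregating the per-fold test results.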
Affiliation(s)
- Mohammad Amin Esfandiari
- Department of Biomedical Engineering, South Tehran Branch, Islamic Azad University, Tehran, Iran
- Mohammad Fallah Tafti
- Department of Biomedical Engineering, South Tehran Branch, Islamic Azad University, Tehran, Iran
- Corresponding author.
- Nader Jafarnia Dabanloo
- Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Fereshteh Yousefirizi
- School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
5.
Mostafa AM, Zakariah M, Aldakheel EA. Brain Tumor Segmentation Using Deep Learning on MRI Images. Diagnostics (Basel) 2023; 13:1562. PMID: 37174953; PMCID: PMC10177460; DOI: 10.3390/diagnostics13091562.
Abstract
Brain tumor (BT) diagnosis is a lengthy process requiring great skill and expertise from radiologists. As the number of patients has grown, so has the amount of data to be processed, making previous techniques both costly and ineffective. Many researchers have examined a range of reliable and fast techniques for identifying and categorizing BTs. Recently, deep learning (DL) methods have gained popularity for creating computer algorithms that can quickly and reliably diagnose or segment BTs; DL permits the use of a pre-trained convolutional neural network (CNN) model to identify BTs in medical images. The MRI images of BTs used here come from the brain tumor segmentation (BraTS) dataset, which was created as a benchmark for developing and evaluating BT segmentation and diagnosis algorithms and contains 335 annotated MRI images. A deep CNN was built to segment BTs using this dataset. To train the model, a categorical cross-entropy loss function and the Adam optimizer were employed. Finally, the model successfully identified and segmented BTs in the dataset, attaining a validation accuracy of 98%.
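The training loss named here, categorical cross-entropy over softmax outputs, can be written in a few lines. This is a generic sketch with toy logits, not the paper's network.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def categorical_cross_entropy(logits, onehot, eps=1e-12):
    """Mean categorical cross-entropy: negative log-probability the model
    assigns to the true class, averaged over the batch."""
    p = softmax(logits)
    return float(-np.mean(np.sum(onehot * np.log(p + eps), axis=1)))

# Two confident, correct predictions give a loss near zero.
logits = np.array([[4.0, 0.0], [0.0, 4.0]])
targets = np.array([[1.0, 0.0], [0.0, 1.0]])
print(categorical_cross_entropy(logits, targets) < 0.1)  # True
```

An optimizer such as Adam then follows the gradient of this loss with respect to the CNN's weights.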
Affiliation(s)
- Almetwally M Mostafa
- Department of Information Systems, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
- Mohammed Zakariah
- Department of Computer Science, College of Computer and Information Science, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
- Eman Abdullah Aldakheel
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
6.
Dudeja T, Dubey SK, Bhatt AK. Ensembled EfficientNetB3 architecture for multi-class classification of tumours in MRI images. Intelligent Decision Technologies 2023. DOI: 10.3233/idt-220150.
Abstract
Healthcare informatics is a major concern in the processing of medical imaging for the diagnosis and treatment of brain tumours worldwide. Timely diagnosis of abnormal structures in brain tumours supports clinical applications, medication, and clinicians in processing and analysing medical imaging. Multi-class image classification of brain tumours faces challenges such as scaling to large datasets, training on image datasets, efficiency, and accuracy. The EfficientNetB3 neural network scales images in three dimensions, resulting in improved accuracy. The proposed framework optimizes an ensembled architecture of EfficientNetB3 with U-Net for MRI images, applying a semantic segmentation model with pre-trained backbone networks. The model captures feature extraction in the U-Net encoder, while the decoder enables pixel-level localization at a definite precision level through an average ensemble of segmentation models. The ensembled pre-trained models provide better training and prediction of abnormal structures in MRI images, along with thresholds for multi-class medical image visualization. The proposed model achieves a mean accuracy of 99.24% on a Kaggle dataset of 3064 images, with a mean Dice score coefficient (DSC) of 0.9124, compared against two state-of-the-art neural models.
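The "average ensemble of segmentation models" at the decoder can be sketched as pixel-wise probability averaging followed by an argmax. The model outputs below are random stand-ins, not EfficientNetB3/U-Net predictions, and the image size and class count are illustrative.

```python
import numpy as np

def ensemble_segmentation(prob_maps):
    """Average pixel-wise class probabilities from several segmentation
    models, then take the argmax label per pixel."""
    stacked = np.stack(prob_maps)      # (n_models, H, W, n_classes)
    mean_probs = stacked.mean(axis=0)  # (H, W, n_classes)
    return mean_probs.argmax(axis=-1)  # (H, W) label map

# Three stand-in models, each emitting per-pixel class probabilities
# over a tiny 4x4 "image" with 3 classes.
rng = np.random.default_rng(1)
maps = [rng.dirichlet(np.ones(3), size=(4, 4)) for _ in range(3)]
labels = ensemble_segmentation(maps)
print(labels.shape)  # (4, 4)
```

Averaging before the argmax smooths out disagreements between individual models, which is the usual motivation for this kind of ensemble.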
Affiliation(s)
- Tina Dudeja
- Department of Computer Science and Engineering, Amity University, Noida, Uttar Pradesh, India
- Sanjay Kumar Dubey
- Department of Computer Science and Engineering, Amity University, Noida, Uttar Pradesh, India
- Ashutosh Kumar Bhatt
- School of Computer Science and Information Technology, Uttarakhand Open University, Haldwani, Uttarakhand, India
7.
Bajić F, Orel O, Habijan M. A Multi-Purpose Shallow Convolutional Neural Network for Chart Images. Sensors (Basel) 2022; 22:7695. PMID: 36298046; PMCID: PMC9612160; DOI: 10.3390/s22207695.
Abstract
Charts are often used for the graphical representation of tabular data. Due to their vast expansion across various fields, it is necessary to develop computer algorithms that can easily retrieve and process information from chart images in a helpful way. Convolutional neural networks (CNNs) have succeeded in various image processing and classification tasks. Nevertheless, successful training of neural networks, in terms of result accuracy and computational requirements, requires careful construction of the network's layers and parameters. We propose a novel Shallow Convolutional Neural Network (SCNN) architecture for chart-type classification and image generation. We validate the proposed network by using it in three different models. The first use case is a traditional SCNN classifier, where the model achieves an average classification accuracy of 97.14%. The second use case consists of two of the previously introduced SCNN-based models in parallel, with the same configuration and shared weights, and with parameters mirrored and updated in both models; this model achieves an average classification accuracy of 100%. The third use case consists of two distinct models, a generator and a discriminator, trained simultaneously in an adversarial process. The generated chart images are visually plausible compared with the originals. Extensive experimental analysis and evaluation are provided for the classification of seven chart classes. The results show that the proposed SCNN is a powerful tool for chart image classification and generation, comparable with Deep Convolutional Neural Networks (DCNNs) but with higher efficiency and reduced computational time and space complexity.
Affiliation(s)
- Filip Bajić
- University Computing Centre, University of Zagreb, 10000 Zagreb, Croatia
- Ognjen Orel
- University Computing Centre, University of Zagreb, 10000 Zagreb, Croatia
- Marija Habijan
- Faculty of Electrical Engineering, Computer Science and Information Technology Osijek, 31000 Osijek, Croatia
8.
Ullah N, Khan MS, Khan JA, Choi A, Anwar MS. A Robust End-to-End Deep Learning-Based Approach for Effective and Reliable BTD Using MR Images. Sensors (Basel) 2022; 22:7575. PMID: 36236674; PMCID: PMC9570935; DOI: 10.3390/s22197575.
Abstract
Detection of a brain tumor in the early stages is critical for clinical practice and survival rates. Brain tumors arise in multiple shapes, sizes, and features, with various treatment options. Manual tumor detection is challenging, time-consuming, and prone to error. Magnetic resonance imaging (MRI) scans are mostly used for tumor detection due to their non-invasive properties, avoiding painful biopsy. MRI scanning of a single patient's brain generates many 3D images from multiple directions, making manual detection of tumors very difficult, error-prone, and time-consuming. Therefore, there is a considerable need for automated diagnostic tools to detect brain tumors accurately. In this research, we present a novel TumorResNet deep learning (DL) model for brain tumor detection, i.e., binary classification. The TumorResNet model employs 20 convolution layers with a leaky ReLU (LReLU) activation function for feature map activation to compute the most distinctive deep features. Finally, three fully connected classification layers are used to classify brain MRI scans as normal or tumorous. The performance of the proposed TumorResNet architecture is evaluated on a standard Kaggle brain tumor MRI dataset for brain tumor detection (BTD), which contains brain tumor and normal MR images. The proposed model achieved an accuracy of 99.33% for BTD. These experimental results, including a cross-dataset setting, validate the superiority of the TumorResNet model over contemporary frameworks. This study offers an automated BTD method that aids the early diagnosis of brain cancers, with a substantial impact on improving treatment options and patient survival.
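The leaky ReLU activation used in TumorResNet is a one-liner; note the negative-side slope is an assumption here (0.01 is a common default), since the abstract does not state the value.

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """Pass positive inputs unchanged; scale negatives by a small slope
    so units keep a nonzero gradient instead of 'dying' as with plain
    ReLU. alpha=0.01 is assumed, not taken from the paper."""
    return np.where(x > 0, x, alpha * x)

print(leaky_relu(np.array([-2.0, 0.0, 3.0])))  # negatives scaled by alpha
```

Keeping a small gradient on the negative side is the usual motivation for choosing LReLU over ReLU in deep feature extractors.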
Affiliation(s)
- Naeem Ullah
- Department of Software Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
- Mohammad Sohail Khan
- Department of Computer Software Engineering, University of Engineering and Technology Mardan, Mardan 23200, Pakistan
- Javed Ali Khan
- Department of Software Engineering, University of Science and Technology Bannu, Bannu 28100, Pakistan
- Ahyoung Choi
- Department of AI Software, Gachon University, Seongnam-si 13120, Korea
9.
Alsubai S, Khan HU, Alqahtani A, Sha M, Abbas S, Mohammad UG. Ensemble deep learning for brain tumor detection. Front Comput Neurosci 2022; 16:1005617. PMID: 36118133; PMCID: PMC9480978; DOI: 10.3389/fncom.2022.1005617.
Abstract
With the quick evolution of medical technology, the era of big data in medicine is fast approaching. The analysis and mining of these data significantly influence the prediction, monitoring, diagnosis, and treatment of tumor disorders. Because of its wide range of traits, low survival rate, and aggressive nature, the brain tumor is regarded as the deadliest and most devastating disease. Misdiagnosed brain tumors lead to inadequate medical treatment, reducing patients' chances of survival. Brain tumor detection is highly challenging due to the difficulty of distinguishing between aberrant and normal tissues. A correct diagnosis makes effective therapy and long-term survival possible for the patient. Despite extensive research, certain limitations remain in detecting brain tumors because of the unusual distribution pattern of the lesions. Finding a region with a small number of lesions can be difficult because small areas tend to look healthy; this directly reduces classification accuracy, and extracting and choosing informative features is challenging. Automatic classification of early-stage brain tumors using deep and machine learning approaches therefore plays a significant role. This paper proposes a hybrid deep learning model, Convolutional Neural Network-Long Short Term Memory (CNN-LSTM), for classifying and predicting brain tumors from Magnetic Resonance Images (MRI). We experiment on an MRI brain image dataset. First, the data are preprocessed efficiently; then, a Convolutional Neural Network (CNN) is applied to extract the significant features from images. The proposed model predicts brain tumors with a classification accuracy of 99.1%, a precision of 98.8%, a recall of 98.9%, and an F1-measure of 99.0%.
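The four figures reported here, accuracy, precision, recall, and F1, all derive from confusion-matrix counts. The counts below are illustrative, not the paper's.

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts:
    true/false positives and true/false negatives of a binary classifier."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Illustrative counts (not from the paper)
acc, prec, rec, f1 = classification_metrics(tp=90, fp=5, fn=5, tn=100)
print(acc, prec, rec, f1)
```

When precision and recall coincide, as with these symmetric counts, F1 equals both, which is a handy sanity check.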
Affiliation(s)
- Shtwai Alsubai
- College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, AlKharj, Saudi Arabia
- Habib Ullah Khan
- Department of Accounting and Information Systems, College of Business and Economics, Qatar University, Doha, Qatar
- Correspondence: Habib Ullah Khan
- Abdullah Alqahtani
- College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, AlKharj, Saudi Arabia
- Mohemmed Sha
- College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, AlKharj, Saudi Arabia
- Sidra Abbas
- Department of Computer Science, COMSATS University, Islamabad, Pakistan
- Uzma Ghulam Mohammad
- Department of Computer Science and Software Engineering, International Islamic University, Islamabad, Pakistan
10.
Orozco Torres JA, Medina Santiago A, Villegas Izaguirre JM, Amador García M, Delgado Hernández A. Hypertension Diagnosis with Backpropagation Neural Networks for Sustainability in Public Health. Sensors (Basel) 2022; 22:5272. PMID: 35890963; PMCID: PMC9316039; DOI: 10.3390/s22145272.
Abstract
This paper presents the development of a multilayer feed-forward neural network for the diagnosis of hypertension, based on a population-based study. Several physiological factors vital to determining the risk of being hypertensive were considered in developing this architecture; a diagnostic system can capture risk patterns that are not easy to determine by conventional means. The results reflect how the social environment in which we live, e.g. economics, stress, smoking, alcoholism, drug addiction, obesity, diabetes, physical inactivity, etc., contributes to health conditions such as hypertension. The neural network-based diagnostic system shows an effectiveness of 90%, generating high expectations for diagnosing the risk of hypertension from the analyzed physiological data.
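A multilayer feed-forward network trained by backpropagation, the technique named in this entry, can be sketched on synthetic data. The features, labels, layer sizes, learning rate, and iteration count below are all illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                    # 4 stand-in risk factors
y = (X.sum(axis=1) > 0).astype(float)[:, None]   # synthetic binary label

# One hidden layer; weights updated by backpropagating the
# binary cross-entropy gradient through the network.
W1 = rng.normal(scale=0.5, size=(4, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 1.0
for _ in range(2000):
    h = sigmoid(X @ W1 + b1)                     # forward pass, hidden layer
    p = sigmoid(h @ W2 + b2)                     # forward pass, output
    grad_logit = (p - y) / len(X)                # dLoss/dlogit for BCE
    grad_h = (grad_logit @ W2.T) * h * (1 - h)   # error backpropagated
    W2 -= lr * (h.T @ grad_logit); b2 -= lr * grad_logit.sum(axis=0)
    W1 -= lr * (X.T @ grad_h);     b1 -= lr * grad_h.sum(axis=0)

accuracy = float(((p > 0.5) == (y > 0.5)).mean())
print(accuracy)
```

The hidden-layer gradient is computed from the output-layer gradient before either weight matrix is updated, which is the defining step of backpropagation.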
Affiliation(s)
- Jorge Antonio Orozco Torres
- TecNM, Campus Tuxtla Gutiérrez, Carretera Panamericana Kilometro 1080, Tuxtla Gutiérrez 29050, Chiapas, Mexico
- Alejandro Medina Santiago
- National Science and Technology Council (Conacyt), Department of Computer Science, National Institute for Astrophysics, Optics and Electronics, San Andrés Cholula 72840, Puebla, Mexico
- José Manuel Villegas Izaguirre
- Facultad de Ciencias de la Ingeniería y Tecnología, Universidad Autónoma de Baja California, Boulevard Universitario #1000, Unidad Valle de las Palmas, Tijuana 21500, Baja California, Mexico
- Center for Technological Research, Development and Innovation, University of Science and Technology Descartes, Tuxtla Gutiérrez 29065, Chiapas, Mexico
- Monica Amador García
- TecNM, Campus RioVerde, Carretera Rioverde-San Ciro Kilometro 4.5, Rioverde 79610, San Luis Potosi, Mexico
- Alberto Delgado Hernández
- Facultad de Ciencias de la Ingeniería y Tecnología, Universidad Autónoma de Baja California, Boulevard Universitario #1000, Unidad Valle de las Palmas, Tijuana 21500, Baja California, Mexico
- Center for Technological Research, Development and Innovation, University of Science and Technology Descartes, Tuxtla Gutiérrez 29065, Chiapas, Mexico
11.
Stefenon SF, Singh G, Yow KC, Cimatti A. Semi-ProtoPNet Deep Neural Network for the Classification of Defective Power Grid Distribution Structures. Sensors (Basel) 2022; 22:4859. PMID: 35808353; PMCID: PMC9269338; DOI: 10.3390/s22134859.
Abstract
Power distribution grids are typically installed outdoors and are exposed to environmental conditions. When contamination accumulates in the structures of the network, shutdowns may be caused by electrical arcs. To improve the reliability of the network, visual inspections of the electrical power system can be carried out; these inspections can be automated using computer vision techniques based on deep neural networks. Based on this need, this paper proposes the Semi-ProtoPNet deep learning model to classify defective structures in power distribution networks. The Semi-ProtoPNet deep neural network does not perform convex optimization of its last dense layer, in order to maintain the impact of the negative reasoning process on image classification. The negative reasoning process rejects the incorrect classes of an input image; for this reason, it is possible to carry out an analysis with a low number of images that have different backgrounds, which is one of the challenges of this type of analysis. Semi-ProtoPNet achieves an accuracy of 97.22%, outperforming VGG-13, VGG-16, VGG-19, ResNet-34, ResNet-50, ResNet-152, DenseNet-121, DenseNet-161, and DenseNet-201, as well as models of the same class such as ProtoPNet, NP-ProtoPNet, Gen-ProtoPNet, and Ps-ProtoPNet.
Affiliation(s)
- Stefano Frizzo Stefenon
- Fondazione Bruno Kessler, Via Sommarive 18, 38123 Trento, Italy
- Department of Mathematics, Informatics and Physical Sciences, University of Udine, Via delle Scienze 206, 33100 Udine, Italy
- Gurmail Singh
- Faculty of Engineering and Applied Science, University of Regina, Wascana Parkway 3737, Regina, SK S4S 0A2, Canada
- Kin-Choong Yow
- Faculty of Engineering and Applied Science, University of Regina, Wascana Parkway 3737, Regina, SK S4S 0A2, Canada