1. Huang L, Ruan S, Xing Y, Feng M. A review of uncertainty quantification in medical image analysis: Probabilistic and non-probabilistic methods. Med Image Anal 2024;97:103223. PMID: 38861770. DOI: 10.1016/j.media.2024.103223.
Abstract
The integration of machine learning models into clinical practice remains limited, despite the many high-performing solutions reported in the literature. A major barrier to widespread adoption is the lack of evidence for the reliability of these models. Recently, uncertainty quantification methods have been proposed as a way to measure that reliability and thereby increase the interpretability and acceptability of model outputs. In this review, we provide a comprehensive overview of the main methods proposed to quantify the uncertainty of machine learning models developed for medical image analysis tasks. Unlike earlier reviews that focused exclusively on probabilistic methods, this review also covers non-probabilistic approaches, giving a more holistic survey of uncertainty quantification research. We summarize and discuss medical applications and their corresponding uncertainty evaluation protocols, with a focus on the specific challenges of uncertainty in medical image analysis, and we highlight potential directions for future work. Overall, this review aims to give researchers from both clinical and technical backgrounds a quick yet in-depth understanding of uncertainty quantification research for machine learning models in medical image analysis.
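As a concrete illustration of the probabilistic family of methods such reviews cover, the ensemble decomposition of predictive uncertainty into aleatoric (data) and epistemic (model) parts can be sketched in a few lines of NumPy. This is a generic sketch, not code from the review; all names are ours:

```python
import numpy as np

def ensemble_uncertainty(member_probs):
    """Split predictive uncertainty for one input across an ensemble.

    member_probs: array of shape (n_members, n_classes); each row is one
    ensemble member's softmax output for the same image.
    Returns (mean_probs, total, aleatoric, epistemic), entropies in nats.
    """
    member_probs = np.asarray(member_probs, dtype=float)
    mean_probs = member_probs.mean(axis=0)
    eps = 1e-12
    # Total uncertainty: entropy of the averaged prediction.
    total = -np.sum(mean_probs * np.log(mean_probs + eps))
    # Aleatoric part: average entropy of each individual member.
    aleatoric = -np.mean(np.sum(member_probs * np.log(member_probs + eps), axis=1))
    # Epistemic part: mutual information = total - aleatoric.
    epistemic = total - aleatoric
    return mean_probs, total, aleatoric, epistemic

# Members that agree -> low epistemic uncertainty.
agree = [[0.9, 0.1], [0.88, 0.12], [0.92, 0.08]]
# Members that disagree -> high epistemic uncertainty (model doesn't know).
disagree = [[0.95, 0.05], [0.5, 0.5], [0.05, 0.95]]
_, _, _, ep_agree = ensemble_uncertainty(agree)
_, _, _, ep_disagree = ensemble_uncertainty(disagree)
```

The same decomposition applies unchanged to Monte Carlo dropout, where the "members" are stochastic forward passes of one network.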
Affiliation(s)
- Ling Huang: Saw Swee Hock School of Public Health, National University of Singapore, Singapore
- Su Ruan: Quantif, LITIS, University of Rouen Normandy, France
- Yucheng Xing: Saw Swee Hock School of Public Health, National University of Singapore, Singapore
- Mengling Feng: Saw Swee Hock School of Public Health, National University of Singapore, Singapore; Institute of Data Science, National University of Singapore, Singapore
2. Borna MR, Sepehri MM, Maleki B. An artificial intelligence algorithm to select most viable embryos considering current process in IVF labs. Front Artif Intell 2024;7:1375474. PMID: 38881952. PMCID: PMC11177761. DOI: 10.3389/frai.2024.1375474.
Abstract
Background: The most common assisted reproductive technology is in-vitro fertilization (IVF). During IVF, embryologists commonly perform a morphological assessment to evaluate embryo quality and choose the best embryo to transfer to the uterus. However, embryo selection through morphological assessment is subjective, and different embryologists reach different conclusions. Furthermore, humans can weigh only a limited number of visual parameters, which contributes to poor IVF success rates. Artificial intelligence (AI)-based embryo selection is objective and can incorporate many parameters, leading to better IVF outcomes. Objectives: This study sought to use AI to (1) predict pregnancy outcomes from embryo images, (2) assess whether using more than one image of the embryo improves pregnancy prediction while remaining compatible with the current process in IVF labs, and (3) compare the results of AI-based methods and expert embryologists in predicting pregnancy. Methods: A dataset of 252 time-lapse videos of embryos from IVF cycles performed between 2017 and 2020 was collected. Frames at 19 ± 1, 43 ± 1, and 67 ± 1 h post-insemination were extracted. Well-known CNN architectures with transfer learning were applied to these images, and the results were compared with an algorithm that uses only the final embryo image and with five experienced embryologists. Results: To predict the pregnancy outcome, we applied five well-known CNN architectures (AlexNet, ResNet18, ResNet34, Inception V3, and DenseNet121). DeepEmbryo, which uses three images, predicts pregnancy better than the single-image algorithm and better than all embryologists. The architectures achieved up to 75.0% accuracy using transfer learning. Conclusion: We have developed DeepEmbryo, an AI-based tool that uses three static images to predict pregnancy. These images can be obtained within the current workflow of almost all IVF labs. AI-based tools have great potential for predicting pregnancy and can serve as practical tools in the future.
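The three-frame idea can be caricatured as late fusion of per-frame CNN embeddings. The sketch below is only illustrative: `cnn_embed` is a deterministic stand-in for a real pretrained backbone (the authors use AlexNet/ResNet/Inception/DenseNet with transfer learning), and all shapes and names are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def cnn_embed(frame):
    """Stand-in for a pretrained CNN backbone with its classification head
    removed. Here: a fixed deterministic projection of the flattened frame,
    a placeholder, not the paper's actual network."""
    flat = frame.reshape(-1)
    # Fixed pseudo-weights so the "embedding" is reproducible.
    proj = np.sin(np.arange(flat.size * 8).reshape(flat.size, 8))
    return flat @ proj

def fuse_frames(frames):
    """Late fusion: embed each of the three time-lapse frames
    (19 h, 43 h, 67 h post-insemination) and concatenate, so a downstream
    classifier sees all three developmental stages at once."""
    return np.concatenate([cnn_embed(f) for f in frames])

frames = [rng.random((8, 8)) for _ in range(3)]  # toy stand-ins for images
fused = fuse_frames(frames)                      # three-frame representation
single = cnn_embed(frames[-1])                   # one-final-image baseline
```

The contrast between `fused` and `single` mirrors the paper's comparison between the three-image model and the final-image-only algorithm.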
Affiliation(s)
- Mahdi-Reza Borna: Department of IT Engineering, Faculty of Industrial and Systems Engineering, Tarbiat Modares University, Tehran, Iran
- Mohammad Mehdi Sepehri: Department of IT Engineering, Faculty of Industrial and Systems Engineering, Tarbiat Modares University, Tehran, Iran
- Behnam Maleki: Infertility Center, Department of Obstetrics and Gynecology, Mazandaran University of Medical Sciences, Sari, Iran; Research and Clinical Center for Infertility, Yazd Reproductive Sciences Institute, Shahid Sadoughi University of Medical Sciences, Yazd, Iran
3. Hermosilla P, Soto R, Vega E, Suazo C, Ponce J. Skin Cancer Detection and Classification Using Neural Network Algorithms: A Systematic Review. Diagnostics (Basel) 2024;14:454. PMID: 38396492. PMCID: PMC10888121. DOI: 10.3390/diagnostics14040454.
Abstract
In recent years, there has been growing interest in the use of computer-assisted technology for the early detection of skin cancer through the analysis of dermatoscopic images. However, the accuracy reported for state-of-the-art approaches depends on several factors, such as the quality of the images and the interpretation of the results by medical experts. This systematic review critically assesses the efficacy and challenges of this research field in order to explain its usability and limitations and to highlight future lines of work for the scientific and clinical community. The analysis covered 45 contemporary studies drawn from databases such as Web of Science and Scopus. Several computer vision techniques related to image and video processing for early skin cancer diagnosis were identified, with a focus on the algorithms employed, the accuracy of the results, and the validation metrics used. The reviewed studies reported significant advances in cancer detection using deep learning and machine learning algorithms. Lastly, this review establishes a foundation for future research, highlighting potential contributions and opportunities to improve the effectiveness of skin cancer detection through machine learning.
Affiliation(s)
- Pamela Hermosilla (affiliation shared with E.V., C.S., and J.P.): Escuela de Ingeniería Informática, Pontificia Universidad Católica de Valparaíso, Avenida Brasil 2241, Valparaíso 2362807, Chile
4. Zeineldin RA, Karar ME, Elshaer Z, Coburger J, Wirtz CR, Burgert O, Mathis-Ullrich F. Explainable hybrid vision transformers and convolutional network for multimodal glioma segmentation in brain MRI. Sci Rep 2024;14:3713. PMID: 38355678. PMCID: PMC10866944. DOI: 10.1038/s41598-024-54186-7.
Abstract
Accurate localization of gliomas, the most common malignant primary brain cancer, and of their sub-regions in multimodal magnetic resonance imaging (MRI) volumes is highly important for interventional procedures. Recently, deep learning models have been widely applied to assist automatic lesion segmentation for neurosurgical interventions. However, these models are often complex "black boxes", which limits their applicability in clinical practice. This article introduces a new hybrid of vision Transformers and convolutional neural networks for accurate and robust glioma segmentation in brain MRI scans. Our proposed method, TransXAI, provides surgeon-understandable heatmaps that make the neural network transparent. TransXAI employs a post-hoc explanation technique that provides a visual interpretation after the brain tumor localization is made, without any network architecture modifications or accuracy trade-offs. Our experiments show that TransXAI achieves competitive performance in extracting both local and global context, in addition to generating explainable saliency maps that help clarify the network's predictions. Furthermore, visualization maps reveal the flow of information through the internal layers of the encoder-decoder network and the contribution of each MRI modality to the final prediction. This explainability can give medical professionals additional information about the tumor segmentation results, help them understand how the deep learning model processes MRI data, and thereby build the trust needed to apply such systems clinically. To facilitate model development and reproducibility, the source code and pre-trained models will be shared at https://github.com/razeineldin/TransXAI .
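Post-hoc explanation of the kind TransXAI describes treats the trained network as a black box and asks how the prediction score changes as each input pixel changes, which is why no architecture modification is needed. A minimal finite-difference version of gradient saliency (our own sketch, not the TransXAI implementation; `tumor_score` is a toy stand-in for a real network's output) looks like this:

```python
import numpy as np

def saliency_map(score_fn, image, eps=1e-4):
    """Post-hoc saliency: finite-difference estimate of d(score)/d(pixel).
    Works with any black-box scoring function, so the model itself is
    left untouched."""
    base = score_fn(image)
    grad = np.zeros_like(image, dtype=float)
    it = np.nditer(image, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        bumped = image.copy()
        bumped[idx] += eps           # perturb one pixel
        grad[idx] = (score_fn(bumped) - base) / eps
    return np.abs(grad)              # magnitude = pixel relevance

# Toy "tumor score" that responds only to the central 2x2 region,
# standing in for a segmentation network's class logit.
def tumor_score(img):
    return float(img[3:5, 3:5].sum())

img = np.zeros((8, 8))
heat = saliency_map(tumor_score, img)  # hot only where the score depends on input
```

Real implementations use backpropagated gradients rather than finite differences, but the interpretation of the resulting heatmap is the same.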
Affiliation(s)
- Ramy A Zeineldin: Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg (FAU), 91052 Erlangen, Germany; Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762 Reutlingen, Germany; Faculty of Electronic Engineering (FEE), Menoufia University, Minuf 32952, Egypt
- Mohamed E Karar: Faculty of Electronic Engineering (FEE), Menoufia University, Minuf 32952, Egypt
- Ziad Elshaer: Department of Neurosurgery, University of Ulm, 89312 Günzburg, Germany
- Jan Coburger: Department of Neurosurgery, University of Ulm, 89312 Günzburg, Germany
- Christian R Wirtz: Department of Neurosurgery, University of Ulm, 89312 Günzburg, Germany
- Oliver Burgert: Research Group Computer Assisted Medicine (CaMed), Reutlingen University, 72762 Reutlingen, Germany
- Franziska Mathis-Ullrich: Department Artificial Intelligence in Biomedical Engineering (AIBE), Friedrich-Alexander-University Erlangen-Nürnberg (FAU), 91052 Erlangen, Germany
5. Dixit S, Kumar A, Srinivasan K, Vincent PMDR, Ramu Krishnan N. Advancing genome editing with artificial intelligence: opportunities, challenges, and future directions. Front Bioeng Biotechnol 2024;11:1335901. PMID: 38260726. PMCID: PMC10800897. DOI: 10.3389/fbioe.2023.1335901.
Abstract
Clustered regularly interspaced short palindromic repeat (CRISPR)-based genome editing (GED) technologies have unlocked exciting possibilities for understanding genes and improving medical treatments. Artificial intelligence (AI), in turn, helps genome editing achieve greater precision, efficiency, and affordability in tackling diseases such as sickle cell anemia and thalassemia. AI models have been used to design guide RNAs (gRNAs) for CRISPR-Cas systems: tools such as DeepCRISPR, CRISTA, and DeepHF can predict optimal gRNAs for a specified target sequence. These predictions take into account multiple factors, including genomic context, Cas protein type, desired mutation type, on-target/off-target scores, potential off-target sites, and the potential impact of editing on gene function and cell phenotype. Such models help optimize genome editing technologies including base, prime, and epigenome editing, advanced techniques that introduce precise and programmable changes to DNA sequences without relying on the homology-directed repair pathway or donor DNA templates. Furthermore, AI combined with genome editing enables personalized, precision-medicine treatments based on genetic profiles: AI can analyze patients' genomic data to identify mutations, variations, and biomarkers associated with diseases such as cancer, diabetes, and Alzheimer's disease. However, several challenges persist, including high costs, off-target editing, suitable delivery methods for CRISPR cargoes, the need to improve editing efficiency, and ensuring safety in clinical applications. This review explores AI's contribution to improving CRISPR-based genome editing technologies, addresses the existing challenges, and discusses potential areas for future research. The integration of AI and genome editing opens up new possibilities for genetics, biomedicine, and healthcare, with significant implications for human health.
Affiliation(s)
- Shriniket Dixit: School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India
- Anant Kumar: School of Bioscience and Technology, Vellore Institute of Technology, Vellore, India
- Kathiravan Srinivasan: School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, India
- P. M. Durai Raj Vincent: School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, India
- Nadesh Ramu Krishnan: School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, India
6. Saeed W, Shahbaz E, Maqsood Q, Ali SW, Mahnoor M. Cutaneous Oncology: Strategies for Melanoma Prevention, Diagnosis, and Therapy. Cancer Control 2024;31:10732748241274978. PMID: 39133519. DOI: 10.1177/10732748241274978.
Abstract
Skin cancer comprises one-third of all diagnosed cancer cases and remains a major health concern. Genetic and environmental factors are the two main risk factors for the development of skin cancer, with ultraviolet radiation being the most common environmental risk factor. Studies have also identified fair complexion, arsenic toxicity, indoor tanning, and family history among the prevailing causes. Prevention and early diagnosis play a crucial role in reducing the frequency of skin cancer and ensuring its effective management. Recent studies have explored minimally invasive or non-invasive diagnostic technologies, together with artificial intelligence, to enable rapid and accurate diagnosis. Treatment ranges from traditional surgical excision to advanced methods such as phototherapy, radiotherapy, immunotherapy, targeted therapy, and combination therapy. Recent work on immunotherapy in particular, with new checkpoint inhibitors and personalized approaches, has enhanced treatment efficacy. Advances in multi-omics, nanotechnology, and artificial intelligence have further deepened our understanding of the mechanisms underlying tumor growth and its interaction with therapy, paving the way for precision oncology. This review highlights recent advances in the understanding and management of skin cancer, provides an overview of existing and emerging diagnostic, prognostic, and therapeutic modalities, and identifies areas that require further research to bridge existing knowledge gaps.
Affiliation(s)
- Wajeeha Saeed: Department of Food Sciences, Faculty of Agricultural Sciences, University of the Punjab, Lahore, Pakistan
- Esha Shahbaz: Department of Food Sciences, Faculty of Agricultural Sciences, University of the Punjab, Lahore, Pakistan
- Quratulain Maqsood: Centre for Applied Molecular Biology, University of the Punjab, Lahore, Pakistan
- Shinawar Waseem Ali: Department of Food Sciences, Faculty of Agricultural Sciences, University of the Punjab, Lahore, Pakistan
- Muhammada Mahnoor: Sehat Medical Complex Lake City, University of Lahore, Lahore, Pakistan
7. Lai W, Kuang M, Wang X, Ghafariasl P, Sabzalian MH, Lee S. Skin cancer diagnosis (SCD) using Artificial Neural Network (ANN) and Improved Gray Wolf Optimization (IGWO). Sci Rep 2023;13:19377. PMID: 37938553. PMCID: PMC10632393. DOI: 10.1038/s41598-023-45039-w.
Abstract
Skin cancer (SC) is one of the most dangerous types of cancer; if not treated in time, it can threaten the patient's life. With early diagnosis, treatment can be applied more effectively and progression of the disease can be prevented. Machine learning (ML) techniques are a useful and efficient tool for skin cancer diagnosis (SCD). Various ML-based methods for automatic SCD have been presented; however, the field still requires optimal and efficient models to increase diagnostic accuracy. This article therefore presents a new SCD method combining optimization techniques with artificial neural networks (ANNs). The proposed method comprises four steps: pre-processing, segmentation, feature extraction, and classification. Image segmentation to identify the lesion region is performed with a Kohonen neural network, and the identified region of interest (ROI) is enhanced using the Greedy Search Algorithm (GSA). A convolutional neural network (CNN) extracts features from the ROIs, and an ANN classifies them, with the number of neurons and the weight vector tuned by the Improved Gray Wolf Optimization (IGWO) algorithm. The improvement uses a probabilistic model to increase the convergence speed of the standard GWO algorithm. Evaluation shows that using IGWO to optimize the structure and weight vector of the ANN increases diagnostic accuracy by at least 5%. Compared with previous methods, the proposed method diagnoses SC on the ISIC-2016 and ISIC-2017 databases with average accuracies of 97.09% and 95.17%, respectively, an improvement of at least 0.5% over other methods.
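The baseline that IGWO improves on is the standard Gray Wolf Optimizer, in which candidate solutions are pulled toward the three current best "wolves" (alpha, beta, delta) while an exploration coefficient decays from 2 to 0. A minimal NumPy version of that baseline, minimizing a test function (our sketch; the paper's probabilistic convergence improvement and the ANN-tuning objective are not reproduced here):

```python
import numpy as np

def gwo(fitness, dim, n_wolves=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Minimal standard Gray Wolf Optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fit = np.array([fitness(x) for x in X])
        leaders = X[np.argsort(fit)[:3]]      # alpha, beta, delta
        a = 2.0 - 2.0 * t / iters             # decays linearly 2 -> 0
        moves = np.zeros_like(X)
        for leader in leaders:
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            A = 2 * a * r1 - a                # exploration/exploitation factor
            C = 2 * r2
            D = np.abs(C * leader - X)        # distance to the leader
            moves += leader - A * D
        X = np.clip(moves / 3.0, lb, ub)      # average pull of the three leaders
    fit = np.array([fitness(x) for x in X])
    return X[np.argmin(fit)], float(fit.min())

sphere = lambda x: float(np.sum(x ** 2))       # toy objective; optimum at 0
best, best_fit = gwo(sphere, dim=5)
```

In the paper's setting, `fitness` would instead score an ANN configuration (neuron counts and weight vector) by its classification error on the training data.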
Affiliation(s)
- Wanqi Lai: The First Clinical Medical School of Guangzhou University of Chinese Medicine, Guangzhou 510405, Guangdong, China
- Meixia Kuang: The First Clinical Medical School of Guangzhou University of Chinese Medicine, Guangzhou 510405, Guangdong, China
- Xiaorou Wang: The First Clinical Medical School of Guangzhou University of Chinese Medicine, Guangzhou 510405, Guangdong, China
- Parviz Ghafariasl: Department of Industrial and Manufacturing Systems Engineering, Kansas State University, Manhattan, KS 66506, USA
- Mohammad Hosein Sabzalian: Department of Mechanical Engineering, Faculty of Engineering, University of Santiago of Chile (USACH), Avenida Libertador Bernardo O'Higgins 3363, 9170022 Santiago, Chile
- Sangkeum Lee: Department of Computer Engineering, Hanbat National University, Daejeon 34158, Korea
8. Seoni S, Jahmunah V, Salvi M, Barua PD, Molinari F, Acharya UR. Application of uncertainty quantification to artificial intelligence in healthcare: A review of last decade (2013-2023). Comput Biol Med 2023;165:107441. PMID: 37683529. DOI: 10.1016/j.compbiomed.2023.107441.
Abstract
Uncertainty estimation in healthcare involves quantifying and understanding the inherent uncertainty or variability associated with medical predictions, diagnoses, and treatment outcomes. In this era of Artificial Intelligence (AI) models, uncertainty estimation becomes vital to ensure safe decision-making in the medical field. Therefore, this review focuses on the application of uncertainty techniques to machine and deep learning models in healthcare. A systematic literature review was conducted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Our analysis revealed that Bayesian methods were the predominant technique for uncertainty quantification in machine learning models, with Fuzzy systems being the second most used approach. Regarding deep learning models, Bayesian methods emerged as the most prevalent approach, finding application in nearly all aspects of medical imaging. Most of the studies reported in this paper focused on medical images, highlighting the prevalent application of uncertainty quantification techniques using deep learning models compared to machine learning models. Interestingly, we observed a scarcity of studies applying uncertainty quantification to physiological signals. Thus, future research on uncertainty quantification should prioritize investigating the application of these techniques to physiological signals. Overall, our review highlights the significance of integrating uncertainty techniques in healthcare applications of machine learning and deep learning models. This can provide valuable insights and practical solutions to manage uncertainty in real-world medical data, ultimately improving the accuracy and reliability of medical diagnoses and treatment recommendations.
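One standard protocol for evaluating the uncertainty estimates such reviews survey is calibration: a model claiming 80% confidence should be right about 80% of the time. The Expected Calibration Error (ECE) quantifies this; the sketch below is our illustration, not code from the review:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by stated confidence, then average the
    |accuracy - confidence| gap per bin, weighted by bin occupancy.

    confidences: predicted probability of the chosen class, in [0, 1].
    correct: 1 if the prediction was right, else 0.
    """
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap   # weight by fraction of samples in bin
    return ece

# Calibrated toy model: 80% confidence, right 8 times out of 10 -> ECE 0.
conf = np.full(10, 0.8)
corr = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0, 0])
good = expected_calibration_error(conf, corr)
# Overconfident model: 99% confidence but only 50% accuracy -> large ECE.
bad = expected_calibration_error(np.full(10, 0.99), np.array([1, 0] * 5))
```

A low ECE means the model's uncertainty can be taken at face value in downstream clinical decisions, which is precisely the property these reviews argue for.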
Affiliation(s)
- Silvia Seoni: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Massimo Salvi: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- Prabal Datta Barua: School of Business (Information System), University of Southern Queensland, Toowoomba, QLD 4350, Australia; Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, NSW 2007, Australia
- Filippo Molinari: Biolab, PolitoBIOMedLab, Department of Electronics and Telecommunications, Politecnico di Torino, Turin, Italy
- U Rajendra Acharya: School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia
9. Sufyan M, Shokat Z, Ashfaq UA. Artificial intelligence in cancer diagnosis and therapy: Current status and future perspective. Comput Biol Med 2023;165:107356. PMID: 37688994. DOI: 10.1016/j.compbiomed.2023.107356.
Abstract
Artificial intelligence (AI) in healthcare plays a pivotal role in combating many fatal diseases, such as skin, breast, and lung cancer. AI uses mathematical, algorithmic principles loosely modeled on the human mind to address complex challenges in the healthcare setting. Cancer is a lethal disease with many etiologies, including numerous genetic and epigenetic mutations, and as a multifactorial disease it is difficult to diagnose at an early stage. Genetic variations and other leading factors could therefore be identified in time through AI and machine learning (ML). AI offers a synergetic approach for mining drug targets, their mechanisms of action, and drug-organism interactions from massive raw data. This approach faces several data-mining challenges, but computational algorithms developed across scientific communities for multi-target drug discovery help overcome the bottlenecks of AI-driven drug-target discovery. AI and ML could become central to the diagnosis, treatment, and evaluation of almost any disease in the near future. In this comprehensive review, we explore the potential of AI and ML when integrated with the biological sciences in the context of cancer research, illuminating the many ways they are being applied to the study of cancer, from diagnosis to individualized treatment. We highlight the prospective role of AI in supporting oncologists and other medical professionals in making informed decisions and improving patient outcomes. Although AI-based medical therapies show great potential, many challenges must be overcome before they can be implemented in clinical practice. We critically assess the current hurdles and provide insights into the future directions of AI-driven approaches, aiming to pave the way for enhanced cancer interventions and improved patient care.
Affiliation(s)
- Muhammad Sufyan: Department of Bioinformatics and Biotechnology, Government College University Faisalabad, Pakistan
- Zeeshan Shokat: Department of Bioinformatics and Biotechnology, Government College University Faisalabad, Pakistan
- Usman Ali Ashfaq: Department of Bioinformatics and Biotechnology, Government College University Faisalabad, Pakistan
10. Sherkatghanad Z, Abdar M, Charlier J, Makarenkov V. Using traditional machine learning and deep learning methods for on- and off-target prediction in CRISPR/Cas9: a review. Brief Bioinform 2023;24:bbad131. PMID: 37080758. DOI: 10.1093/bib/bbad131.
Abstract
CRISPR/Cas9 (Clustered Regularly Interspaced Short Palindromic Repeats and CRISPR-associated protein 9) is a popular and effective two-component technology used for targeted genetic manipulation. It is currently the most versatile and accurate method of gene and genome editing, which benefits from a large variety of practical applications. For example, in biomedicine, it has been used in research related to cancer, virus infections, pathogen detection, and genetic diseases. Current CRISPR/Cas9 research is based on data-driven models for on- and off-target prediction as a cleavage may occur at non-target sequence locations. Nowadays, conventional machine learning and deep learning methods are applied on a regular basis to accurately predict on-target knockout efficacy and off-target profile of given single-guide RNAs (sgRNAs). In this paper, we present an overview and a comparative analysis of traditional machine learning and deep learning models used in CRISPR/Cas9. We highlight the key research challenges and directions associated with target activity prediction. We discuss recent advances in the sgRNA-DNA sequence encoding used in state-of-the-art on- and off-target prediction models. Furthermore, we present the most popular deep learning neural network architectures used in CRISPR/Cas9 prediction models. Finally, we summarize the existing challenges and discuss possible future investigations in the field of on- and off-target prediction. Our paper provides valuable support for academic and industrial researchers interested in the application of machine learning methods in the field of CRISPR/Cas9 genome editing.
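The sgRNA-DNA sequence encoding the review discusses usually starts from a one-hot representation of the nucleotides, which the CNN or RNN predictors then consume. A generic version (our sketch, not any specific tool's exact scheme) can be written as:

```python
import numpy as np

BASES = "ACGT"

def one_hot_sgrna(seq):
    """One-hot encode a nucleotide sequence into a 4 x L matrix:
    one row per base (A, C, G, T), one column per position."""
    idx = {b: i for i, b in enumerate(BASES)}
    mat = np.zeros((4, len(seq)), dtype=np.float32)
    for j, base in enumerate(seq.upper()):
        mat[idx[base], j] = 1.0
    return mat

def encode_pair(sgrna, dna):
    """Encode an sgRNA-DNA pair for off-target models: stack the two
    one-hot matrices into an 8 x L array, so a mismatch shows up as a
    disagreement between the top and bottom halves of a column."""
    return np.vstack([one_hot_sgrna(sgrna), one_hot_sgrna(dna)])

guide = "GACGCATAAAGATGAGACGC"               # an illustrative 20-nt guide
pair = encode_pair(guide, guide[:-1] + "T")  # one mismatch at the 3' end
```

On-target models typically take just the 4 x L guide (plus flanking context); off-target models take the stacked pair so the network can learn position-dependent mismatch tolerance.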
Affiliation(s)
- Zeinab Sherkatghanad: Département d'Informatique, Université du Québec à Montréal, Montréal, QC H2X 3Y7, Canada
- Moloud Abdar: Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, VIC 3216, Australia
- Jeremy Charlier: Département d'Informatique, Université du Québec à Montréal, Montréal, QC H2X 3Y7, Canada
- Vladimir Makarenkov: Département d'Informatique, Université du Québec à Montréal, Montréal, QC H2X 3Y7, Canada
11. Maqsood S, Damaševičius R. Multiclass skin lesion localization and classification using deep learning based features fusion and selection framework for smart healthcare. Neural Netw 2023;160:238-258. PMID: 36701878. DOI: 10.1016/j.neunet.2023.01.022.
Abstract
BACKGROUND The idea of smart healthcare has gradually gained attention as a result of the information technology industry's rapid development. Smart healthcare uses next-generation technologies i.e., artificial intelligence (AI) and Internet of Things (IoT), to intelligently transform current medical methods to make them more efficient, dependable and individualized. One of the most prominent uses of telemedicine and e-health in medical image analysis is teledermatology. Telecommunications technologies are used in this industry to send medical information to professionals. Teledermatology is a useful method for the identification of skin lesions, particularly in rural locations, because the skin is visually perceptible. One of the most recent tools for diagnosing skin cancer is dermoscopy. To classify skin malignancies, numerous computational approaches have been proposed in the literature. However, difficulties still exist i.e., lesions with low contrast, imbalanced datasets, high level of memory complexity, and the extraction of redundant features. METHODS In this work, a unified CAD model is proposed based on a deep learning framework for skin lesion segmentation and classification. In the proposed approach, the source dermoscopic images are initially pre-processed using a contrast enhancement based modified bio-inspired multiple exposure fusion approach. In the second stage, a custom 26-layered convolutional neural network (CNN) architecture is designed to segment the skin lesion regions. In the third stage, four pre-trained CNN models (Xception, ResNet-50, ResNet-101 and VGG16) are modified and trained using transfer learning on the segmented lesion images. In the fourth stage, the deep features vectors are extracted from all the CNN models and fused using the convolutional sparse image decomposition fusion approach. 
In the fifth stage, the univariate measurement and Poisson distribution feature selection approach is used to select the best features for classification. Finally, the selected features are fed to a multi-class support vector machine (MC-SVM) for the final classification. RESULTS The proposed approach was applied to the HAM10000, ISIC2018, ISIC2019, and PH2 datasets and achieved accuracies of 98.57%, 98.62%, 93.47%, and 98.98%, respectively, which are better than those of previous works. CONCLUSION When compared with renowned state-of-the-art methods, experimental results show that the proposed skin lesion detection and classification approach achieved higher performance in both visual and quantitative evaluation, with improved accuracy.
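The fusion and selection stages in pipelines of this kind are, at their core, simple vector operations. A minimal NumPy sketch of the general idea follows, using serial concatenation as a simplified stand-in for the paper's convolutional sparse image decomposition fusion, and highest-variance univariate scoring as a stand-in for its Poisson-distribution-based selection; all dimensions and data below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical deep feature vectors for one image from four backbones
# (e.g. 2048-D from Xception/ResNet-50/ResNet-101, 512-D from VGG16)
feats = [rng.standard_normal(d) for d in (2048, 2048, 2048, 512)]

# Fusion stage (simplified): serial concatenation into one descriptor
fused = np.concatenate(feats)                 # shape: (6656,)

# Selection stage (stand-in): univariate scoring over a hypothetical
# training matrix, keeping the k highest-variance dimensions
train = rng.standard_normal((32, fused.size))
k = 1000
keep = np.sort(np.argsort(train.var(axis=0))[-k:])
selected = fused[keep]                        # compact descriptor for the SVM
print(fused.size, selected.size)
```

The selected vector, not the raw concatenation, is what a downstream classifier such as an MC-SVM would consume; the point of the selection stage is to shed redundant fused dimensions before classification.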
Affiliation(s)
- Sarmad Maqsood
- Department of Software Engineering, Faculty of Informatics Engineering, Kaunas University of Technology, LT-51386 Kaunas, Lithuania.
- Robertas Damaševičius
- Department of Software Engineering, Faculty of Informatics Engineering, Kaunas University of Technology, LT-51386 Kaunas, Lithuania.
12
AI-Powered Diagnosis of Skin Cancer: A Contemporary Review, Open Challenges and Future Research Directions. Cancers (Basel) 2023; 15:cancers15041183. [PMID: 36831525 PMCID: PMC9953963 DOI: 10.3390/cancers15041183] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2022] [Revised: 02/07/2023] [Accepted: 02/08/2023] [Indexed: 02/15/2023] Open
Abstract
Skin cancer remains one of the major healthcare issues across the globe. If diagnosed early, skin cancer can be treated successfully. While early diagnosis is paramount for an effective cure, the current process requires the involvement of skin cancer specialists, which makes it an expensive procedure that is not easily available and affordable in developing countries. This dearth of skin cancer specialists has given rise to the need for automated diagnosis systems. In this context, Artificial Intelligence (AI)-based methods have been proposed. These systems can assist in the early detection of skin cancer and can consequently lower its morbidity and, in turn, the mortality rate associated with it. Machine learning and deep learning are branches of AI that deal with statistical modeling and inference and that progressively learn from data fed into them to predict desired objectives and characteristics. This survey focuses on machine learning and deep learning techniques deployed in the field of skin cancer diagnosis, while maintaining a balance between the two. A comparison is made of widely used datasets and of prevalent review papers discussing automated skin cancer diagnosis. The study also discusses the insights and lessons yielded by prior works. The survey culminates with future directions and scope, which will subsequently help in addressing the challenges faced in automated skin cancer diagnosis.
13
Abdar M, Salari S, Qahremani S, Lam HK, Karray F, Hussain S, Khosravi A, Acharya UR, Makarenkov V, Nahavandi S. UncertaintyFuseNet: Robust uncertainty-aware hierarchical feature fusion model with Ensemble Monte Carlo Dropout for COVID-19 detection. Inf Fusion 2023; 90:364-381. [PMID: 36217534 PMCID: PMC9534540 DOI: 10.1016/j.inffus.2022.09.023] [Citation(s) in RCA: 16] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/22/2022] [Revised: 09/23/2022] [Accepted: 09/25/2022] [Indexed: 05/03/2023]
Abstract
The COVID-19 (Coronavirus disease 2019) pandemic has become a major global threat to human health and well-being. Thus, the development of computer-aided detection (CAD) systems capable of accurately distinguishing COVID-19 from other diseases using chest computed tomography (CT) and X-ray data is of immediate priority. Such automatic systems are usually based on traditional machine learning or deep learning methods. Differently from most of the existing studies, which used either CT scan or X-ray images in COVID-19-case classification, we present a new, simple but efficient deep learning feature fusion model, called UncertaintyFuseNet, which is able to accurately classify large datasets of both of these types of images. We argue that the uncertainty of the model's predictions should be taken into account in the learning process, even though most of the existing studies have overlooked it. We quantify the prediction uncertainty in our feature fusion model using an effective Ensemble Monte Carlo Dropout (EMCD) technique. A comprehensive simulation study was conducted to compare the results of our new model with the existing approaches, evaluating the performance of competing models in terms of Precision, Recall, F-Measure, Accuracy, and ROC curves. The obtained results prove the efficiency of our model, which provided prediction accuracies of 99.08% and 96.35% for the considered CT scan and X-ray datasets, respectively. Moreover, our UncertaintyFuseNet model was generally robust to noise and performed well with previously unseen data. The source code of our implementation is freely available at: https://github.com/moloud1987/UncertaintyFuseNet-for-COVID-19-Classification.
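The core of the Monte Carlo Dropout idea that EMCD ensembles can be sketched without any deep learning framework: keep dropout active at test time, run many stochastic forward passes, and use the spread of the averaged predictions as an uncertainty score. A minimal NumPy illustration on a hypothetical two-layer network (the weights, dropout rate, and sample count below are arbitrary, and the paper's full method additionally ensembles several such models):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, weights, n_samples=50, p_drop=0.5):
    """MC Dropout: keep dropout stochastic at inference time, average
    the softmax outputs over repeated forward passes, and report the
    predictive entropy of the averaged distribution as uncertainty."""
    W1, W2 = weights
    probs = []
    for _ in range(n_samples):
        h = np.maximum(x @ W1, 0.0)            # hidden layer with ReLU
        mask = rng.random(h.shape) >= p_drop   # fresh dropout mask each pass
        h = h * mask / (1.0 - p_drop)          # inverted-dropout scaling
        logits = h @ W2
        e = np.exp(logits - logits.max())      # numerically stable softmax
        probs.append(e / e.sum())
    probs = np.stack(probs)                    # (n_samples, n_classes)
    mean = probs.mean(axis=0)                  # predictive mean
    entropy = float(-(mean * np.log(mean + 1e-12)).sum())
    return mean, entropy

# Hypothetical weights and input (8-D feature, 16 hidden units, 3 classes)
W1 = rng.standard_normal((8, 16))
W2 = rng.standard_normal((16, 3))
x = rng.standard_normal(8)
mean, unc = mc_dropout_predict(x, (W1, W2))
```

A high entropy value flags inputs on which the stochastic passes disagree, which is the signal such models use to defer uncertain cases to a clinician.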
Affiliation(s)
- Moloud Abdar
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, Australia
- Soorena Salari
- Department of Computer Science and Software Engineering, Concordia University, Montreal, Canada
- Sina Qahremani
- Department of Electrical Engineering, Sharif University of Technology, Tehran, Iran
- Hak-Keung Lam
- Centre for Robotics Research, Department of Engineering, King's College London, London, United Kingdom
- Fakhri Karray
- Centre for Pattern Analysis and Machine Intelligence, Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada
- Department of Machine Learning, Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Sadiq Hussain
- System Administrator, Dibrugarh University, Dibrugarh, India
- Abbas Khosravi
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, Australia
- U Rajendra Acharya
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Clementi, Singapore
- Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore
- Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan
- Vladimir Makarenkov
- Department of Computer Science, University of Quebec in Montreal, Montreal, Canada
- Saeid Nahavandi
- Institute for Intelligent Systems Research and Innovation (IISRI), Deakin University, Geelong, Australia
14
Shinde RK, Alam MS, Hossain MB, Md Imtiaz S, Kim J, Padwal AA, Kim N. Squeeze-MNet: Precise Skin Cancer Detection Model for Low Computing IoT Devices Using Transfer Learning. Cancers (Basel) 2022; 15:cancers15010012. [PMID: 36612010 PMCID: PMC9817940 DOI: 10.3390/cancers15010012] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2022] [Revised: 12/15/2022] [Accepted: 12/16/2022] [Indexed: 12/24/2022] Open
Abstract
Cancer remains a deadly disease. We developed a lightweight, accurate, general-purpose deep learning algorithm for skin cancer classification. Squeeze-MNet combines a Squeeze algorithm for digital hair removal during preprocessing with a MobileNet deep learning model with predefined weights. The Squeeze algorithm extracts important features from the image, and the black-hat filter operation removes noise. The MobileNet model (with a dense neural network) was fine-tuned using the International Skin Imaging Collaboration (ISIC) dataset. The proposed model is lightweight; the prototype was tested on a Raspberry Pi 4 Internet of Things device with a NeoPixel 8-bit LED ring, and a medical doctor validated the device. The average precision (AP) for benign and malignant diagnoses was 99.76% and 98.02%, respectively. Using our approach, the required dataset size decreased by 66%. The hair removal algorithm increased the accuracy of skin cancer detection to 99.36% with the ISIC dataset. The area under the receiver operating characteristic curve was 98.9%.
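The black-hat filter used here for hair removal is plain grayscale morphology: the morphological closing of the image (dilation followed by erosion) minus the image itself, which responds strongly to thin dark structures such as hairs on a brighter skin background. A NumPy-only sketch on a synthetic image (the window size and intensities are arbitrary; a real pipeline would typically use a library routine such as OpenCV's `cv2.morphologyEx` with `MORPH_BLACKHAT`):

```python
import numpy as np

def _window_op(img, size, op):
    """Grayscale dilation (op=np.max) or erosion (op=np.min) with a
    square structuring element, using edge padding at the borders."""
    r = size // 2
    padded = np.pad(img, r, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = op(padded[i:i + size, j:j + size])
    return out

def black_hat(img, size=5):
    """Black-hat = morphological closing minus the image; bright output
    pixels mark thin dark structures (e.g. hairs) to be inpainted."""
    closed = _window_op(_window_op(img, size, np.max), size, np.min)
    return closed - img

# Synthetic test image: a flat bright field with one dark 'hair' line
img = np.full((16, 16), 200.0)
img[8, 2:14] = 40.0
response = black_hat(img)
```

Thresholding the response gives a hair mask; the masked pixels are then filled from their surroundings before the image reaches the classifier.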
Affiliation(s)
- Rupali Kiran Shinde
- Department of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- Md. Biddut Hossain
- Department of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- Shariar Md Imtiaz
- Department of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- JoonHyun Kim
- Department of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
- Nam Kim (corresponding author)
- Department of Information and Communication Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
15
Reis HC, Turk V. COVID-DSNet: A novel deep convolutional neural network for detection of coronavirus (SARS-CoV-2) cases from CT and Chest X-Ray images. Artif Intell Med 2022; 134:102427. [PMID: 36462906 PMCID: PMC9574866 DOI: 10.1016/j.artmed.2022.102427] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2022] [Revised: 10/07/2022] [Accepted: 10/13/2022] [Indexed: 12/14/2022]
Abstract
COVID-19 (SARS-CoV-2), which causes acute respiratory syndrome, is a contagious and deadly disease that has devastating effects on society and human life. COVID-19 can cause serious complications, especially in patients with pre-existing chronic health problems such as diabetes, hypertension, lung cancer, and weakened immune systems, and in the elderly. The most critical step in the fight against COVID-19 is the rapid diagnosis of infected patients. Computed Tomography (CT), chest X-ray (CXR), and RT-PCR diagnostic kits are frequently used to diagnose the disease. However, detecting COVID-19 is challenging due to difficulties such as the inadequacy of RT-PCR test kits and false negative (FN) results in the early stages of the disease, the time-consuming examination by specialists of medical images obtained with CT and CXR imaging techniques, and the increasing workload on specialists. Therefore, researchers have suggested searching for new methods of COVID-19 detection. Analysis studies with CT and CXR radiography images have determined that COVID-19-infected patients exhibit characteristic abnormalities. These anomalies are the primary motivation for artificial intelligence researchers to develop COVID-19 detection applications with deep convolutional neural networks. Convolutional neural network-based deep learning algorithms, with their high discrimination capability, can be considered an alternative approach in the disease detection process. This study proposes a deep convolutional neural network, COVID-DSNet, to diagnose typical pneumonia (bacterial, viral) and COVID-19 from CT, CXR, and hybrid CT + CXR images.
In the multi-classification study with the CT dataset, the COVID-DSNet model obtained 97.60% accuracy and 97.60% sensitivity, with sensitivity values of 100%, 96.30%, and 96.58% for detecting typical pneumonia, common pneumonia, and COVID-19, respectively. The proposed model is an economical, practical deep learning network that data scientists can benefit from and develop. Although it is not a definitive solution for disease diagnosis, it may help experts, as it produces successful results in detecting pneumonia and COVID-19.
Affiliation(s)
- Hatice Catal Reis (corresponding author)
- Department of Geomatics Engineering, Gumushane University, Gumushane 2900, Turkey
- Veysel Turk
- Department of Computer Engineering, University of Harran, Sanliurfa, Turkey