1
Odugbemi AI, Nyirenda C, Christoffels A, Egieyeh SA. Artificial intelligence in antidiabetic drug discovery: The advances in QSAR and the prediction of α-glucosidase inhibitors. Comput Struct Biotechnol J 2024; 23:2964-2977. [PMID: 39148608 PMCID: PMC11326494 DOI: 10.1016/j.csbj.2024.07.003] [Received: 04/16/2024] [Revised: 07/03/2024] [Accepted: 07/03/2024] [Indexed: 08/17/2024] Open Access
Abstract
Artificial Intelligence is transforming drug discovery, particularly in the hit identification phase of therapeutic compounds. One tool that has been instrumental in this transformation is Quantitative Structure-Activity Relationship (QSAR) analysis. This computer-aided drug design tool uses machine learning to predict the biological activity of new compounds based on the numerical representation of chemical structures against various biological targets. With diabetes mellitus becoming a significant health challenge in recent times, there is intense research interest in modulating antidiabetic drug targets. α-Glucosidase is an antidiabetic target that has gained attention due to its ability to suppress postprandial hyperglycaemia, a key contributor to diabetic complications. This review explored a detailed approach to developing QSAR models, focusing on strategies for generating input variables (molecular descriptors) and computational approaches ranging from classical machine learning algorithms to modern deep learning algorithms. We also highlighted studies that have used these approaches to develop predictive models for α-glucosidase inhibitors to modulate this critical antidiabetic drug target.
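The pipeline this abstract describes — encode molecules as numerical descriptors, then fit a learner that maps descriptors to activity — can be sketched in a few lines. The descriptor values and pIC50 labels below are invented for illustration, and closed-form ridge regression stands in for whichever algorithm a real QSAR study would select and tune:

```python
import numpy as np

def fit_ridge(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ y)

# Hypothetical descriptor matrix: one row per compound; columns are
# illustrative descriptors (e.g. molecular weight/100, logP, H-bond donors).
X = np.array([
    [1.8, 2.1, 1.0],
    [2.5, 3.0, 2.0],
    [3.1, 1.2, 0.0],
    [2.0, 2.8, 3.0],
    [1.5, 0.9, 1.0],
])
# Hypothetical pIC50 values against alpha-glucosidase (illustrative only).
y = np.array([5.2, 6.1, 4.8, 6.4, 4.9])

w = fit_ridge(X, y, lam=0.1)   # learned descriptor weights
pred = X @ w                   # predicted activities for the training set
print(np.round(pred, 2))
```

A real workflow would generate descriptors with a cheminformatics toolkit, hold out a test set, and validate the model before trusting any prediction.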
Affiliation(s)
- Adeshina I Odugbemi
- South African Medical Research Council Bioinformatics Unit, South African National Bioinformatics Institute, University of the Western Cape, Bellville, Cape Town 7535, South Africa
- School of Pharmacy, University of the Western Cape, Bellville, Cape Town 7535, South Africa
- National Institute for Theoretical and Computational Sciences (NITheCS), South Africa
- Clement Nyirenda
- Department of Computer Science, University of the Western Cape, Cape Town 7535, South Africa
- Alan Christoffels
- South African Medical Research Council Bioinformatics Unit, South African National Bioinformatics Institute, University of the Western Cape, Bellville, Cape Town 7535, South Africa
- Africa Centres for Disease Control and Prevention, African Union, Addis Ababa, Ethiopia
- Samuel A Egieyeh
- School of Pharmacy, University of the Western Cape, Bellville, Cape Town 7535, South Africa
- National Institute for Theoretical and Computational Sciences (NITheCS), South Africa
2
Vittorio S, Lunghini F, Morerio P, Gadioli D, Orlandini S, Silva P, Martinovic J, Pedretti A, Bonanni D, Del Bue A, Palermo G, Vistoli G, Beccari AR. Addressing docking pose selection with structure-based deep learning: Recent advances, challenges and opportunities. Comput Struct Biotechnol J 2024; 23:2141-2151. [PMID: 38827235 PMCID: PMC11141151 DOI: 10.1016/j.csbj.2024.05.024] [Received: 01/23/2024] [Revised: 05/15/2024] [Accepted: 05/15/2024] [Indexed: 06/04/2024] Open Access
Abstract
Molecular docking is a widely used technique in drug discovery to predict the binding mode of a given ligand to its target. However, identifying the near-native binding pose in docking experiments remains a challenging task, as the scoring functions currently employed by docking programs are parametrized to predict binding affinity and therefore often fail to correctly identify the ligand's native binding conformation. Selecting the correct binding mode is crucial for obtaining meaningful results and for efficiently optimizing new hit compounds. Deep learning (DL) algorithms have attracted growing interest in this context for their capability to extract the relevant information directly from the protein-ligand structure. Our review presents recent advances in the development of DL-based pose selection approaches, discussing limitations and possible future directions. Moreover, we compare the performance of some classical scoring functions and DL-based methods with respect to their ability to select the correct binding mode. In this regard, two novel DL-based pose selectors developed by us are presented.
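Independent of which network produces the per-pose scores, the selection step itself reduces to a ranking followed by the usual near-native check (RMSD to the crystallographic pose ≤ 2 Å). A minimal sketch, with invented scores and RMSD values:

```python
# Poses with hypothetical model scores and (known) RMSD to the crystal pose.
poses = [
    {"id": "pose_1", "score": 0.62, "rmsd": 3.4},
    {"id": "pose_2", "score": 0.91, "rmsd": 1.1},
    {"id": "pose_3", "score": 0.40, "rmsd": 5.8},
]

def select_pose(poses):
    """Pick the pose the model scores highest."""
    return max(poses, key=lambda p: p["score"])

def is_near_native(pose, cutoff=2.0):
    """Standard success criterion: RMSD to the native pose <= 2 A."""
    return pose["rmsd"] <= cutoff

best = select_pose(poses)
print(best["id"], is_near_native(best))  # the top-scored pose here is near-native
```

In a benchmark, the fraction of targets whose top-scored pose passes this check is the top-1 success rate used to compare scoring functions and DL selectors.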
Affiliation(s)
- Serena Vittorio
- Dipartimento di Scienze Farmaceutiche, Università degli Studi di Milano, Via Luigi Mangiagalli 25, I-20133 Milano, Italy
- Filippo Lunghini
- EXSCALATE, Dompé Farmaceutici SpA, Via Tommaso de Amicis 95, 80123 Naples, Italy
- Pietro Morerio
- Pattern Analysis and Computer Vision, Fondazione Istituto Italiano di Tecnologia, Via Morego, 30, 16163 Genova, Italy
- Davide Gadioli
- Dipartimento di Elettronica Informazione e Bioingegneria, Politecnico di Milano, Via Ponzio 34/5, I-20133 Milano, Italy
- Sergio Orlandini
- SCAI, SuperComputing Applications and Innovation Department, CINECA, Via dei Tizii 6, Rome 00185, Italy
- Paulo Silva
- IT4Innovations, VSB – Technical University of Ostrava, 17. listopadu 2172/15, 70800 Ostrava-Poruba, Czech Republic
- Jan Martinovic
- IT4Innovations, VSB – Technical University of Ostrava, 17. listopadu 2172/15, 70800 Ostrava-Poruba, Czech Republic
- Alessandro Pedretti
- Dipartimento di Scienze Farmaceutiche, Università degli Studi di Milano, Via Luigi Mangiagalli 25, I-20133 Milano, Italy
- Domenico Bonanni
- Department of Physical and Chemical Sciences, University of L'Aquila, via Vetoio, L'Aquila 67010, Italy
- Alessio Del Bue
- Pattern Analysis and Computer Vision, Fondazione Istituto Italiano di Tecnologia, Via Morego, 30, 16163 Genova, Italy
- Gianluca Palermo
- Dipartimento di Elettronica Informazione e Bioingegneria, Politecnico di Milano, Via Ponzio 34/5, I-20133 Milano, Italy
- Giulio Vistoli
- Dipartimento di Scienze Farmaceutiche, Università degli Studi di Milano, Via Luigi Mangiagalli 25, I-20133 Milano, Italy
- Andrea R. Beccari
- EXSCALATE, Dompé Farmaceutici SpA, Via Tommaso de Amicis 95, 80123 Naples, Italy
3
Ita K, Roshanaei S. Artificial intelligence for skin permeability prediction: deep learning. J Drug Target 2024; 32:334-346. [PMID: 38258521 DOI: 10.1080/1061186X.2024.2309574] [Received: 12/01/2023] [Accepted: 01/07/2024] [Indexed: 01/24/2024]
Abstract
BACKGROUND AND OBJECTIVE Researchers have put significant laboratory time and effort into measuring the permeability coefficient (Kp) of xenobiotics. To develop alternative approaches to this labour-intensive procedure, scientists have employed predictive models to describe the transport of xenobiotics across the skin. Most quantitative structure-permeability relationship (QSPR) models are derived statistically from experimental data. Recently, artificial intelligence-based computational drug delivery has attracted tremendous interest. Deep learning is an umbrella term for machine-learning algorithms built on deep neural networks (DNNs). Distinct network architectures, such as convolutional neural networks (CNNs), feedforward neural networks (FNNs), and recurrent neural networks (RNNs), can be employed for prediction. METHODS In this project, we used a convolutional neural network, a feedforward neural network, and a recurrent neural network to predict skin permeability coefficients from a publicly available database reported by Cheruvu et al. The dataset contains 476 records of 145 chemicals, xenobiotics, and pharmaceuticals, administered to the human epidermis in vitro from aqueous solutions of constant concentration, either saturated in infinite-dose quantities or diluted. All computations were conducted in Python under the Anaconda and JupyterLab environments after importing the required Python, Keras, and TensorFlow modules. RESULTS We used the convolutional, feedforward, and recurrent neural networks to predict log Kp. CONCLUSION This research work shows that deep learning networks can be successfully used to digitally screen and predict the skin permeability of xenobiotics.
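The study names Keras and TensorFlow; as a dependency-light illustration of the same idea, the sketch below trains a one-hidden-layer feedforward network on synthetic descriptor data (all values invented, not the Cheruvu et al. records) using plain NumPy gradient descent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a skin-permeability dataset: 3 descriptors -> log Kp.
X = rng.normal(size=(64, 3))
true_w = np.array([0.8, -0.5, 0.3])
y = X @ true_w + 0.05 * rng.normal(size=64)

# One-hidden-layer feedforward network trained with full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8,));   b2 = 0.0
lr = 0.05
for _ in range(500):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    pred = h @ W2 + b2                  # predicted log Kp
    err = pred - y
    # Backpropagate the mean-squared-error gradient through both layers.
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2)
    gW1 = X.T @ dh / len(y); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(round(mse, 3))
```

A Keras version would express the same model as a `Sequential` stack of `Dense` layers; the point here is only the shape of the computation: descriptors in, a scalar log Kp out, weights fitted by minimizing squared error.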
Affiliation(s)
- Kevin Ita
- College of Pharmacy, Touro University, Vallejo, CA, USA
4
Suzuki H, Kokabu T, Yamada K, Ishikawa Y, Yabu A, Yanagihashi Y, Hyakumachi T, Tachi H, Shimizu T, Endo T, Ohnishi T, Ukeba D, Nagahama K, Takahata M, Sudo H, Iwasaki N. Deep learning-based detection of lumbar spinal canal stenosis using convolutional neural networks. Spine J 2024; 24:2086-2101. [PMID: 38909909 DOI: 10.1016/j.spinee.2024.06.009] [Received: 02/28/2024] [Revised: 06/13/2024] [Accepted: 06/14/2024] [Indexed: 06/25/2024]
Abstract
BACKGROUND CONTEXT Lumbar spinal canal stenosis (LSCS) is the most common spinal degenerative disorder in elderly people and is usually first seen by primary care physicians or orthopedic surgeons who are not spine surgery specialists. Magnetic resonance imaging (MRI) is useful in the diagnosis of LSCS, but the equipment is often unavailable or the images difficult to read. LSCS patients with progressive neurologic deficits have difficulty recovering if surgical treatment is delayed, so early diagnosis and determination of appropriate surgical indications are crucial in the treatment of LSCS. Convolutional neural networks (CNNs), a type of deep learning, offer significant advantages for image recognition and classification, and work well with radiographs, which can be easily taken at any facility. PURPOSE Our purpose was to develop an algorithm to diagnose the presence or absence of LSCS requiring surgery from plain radiographs using CNNs. STUDY DESIGN Retrospective analysis of a consecutive, nonrandomized series of patients at a single institution. PATIENT SAMPLE Data of 150 patients who underwent surgery for LSCS, including degenerative spondylolisthesis, at a single institution from January 2022 to August 2022 were collected. Additionally, 25 patients who underwent surgery at 2 other hospitals were included for extra external validation. OUTCOME MEASURES In annotation 1, the area under the curve (AUC) computed from the receiver operating characteristic (ROC) curve, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy, positive likelihood ratio (PLR), and negative likelihood ratio (NLR) were calculated. In annotation 2, correlation coefficients were used. METHODS Four intervertebral levels from L1/2 to L4/5 were extracted as regions of interest from lateral plain lumbar spine radiographs, yielding a total of 600 images.
Based on the date of surgery, 500 images derived from the first 125 cases were used for internal validation, and 100 images from the subsequent 25 cases were used for external validation. Additionally, 100 images from the other hospitals were used for extra external validation. In annotation 1, binary classification of operative and nonoperative levels was used, and in annotation 2, the spinal canal area measured on axial MRI was labeled as the output layer. For internal validation, the 500 images were divided into 5 datasets on a per-patient basis and 5-fold cross-validation was performed. The 5 trained models were then applied to the external validation sets to assess prediction performance. Grad-CAM was used to visualize the areas with high feature weighting extracted by the CNNs. RESULTS In internal validation, the AUC and accuracy for annotation 1 ranged between 0.85-0.89 and 79-83%, respectively, and the correlation coefficients for annotation 2 ranged between 0.53 and 0.64 (all p<.01). In external validation, the AUC and accuracy for annotation 1 were 0.90 and 82%, respectively, and the correlation coefficient for annotation 2 was 0.69, using the 5 trained CNN models. In the extra external validation, the AUC and accuracy for annotation 1 were 0.89 and 84%, respectively, and the correlation coefficient for annotation 2 was 0.56. Grad-CAM showed high feature density in the intervertebral joints and posterior intervertebral discs. CONCLUSIONS This technology automatically detects LSCS from plain lumbar spine radiographs, making it possible for medical facilities without MRI, or for nonspecialists, to diagnose LSCS, suggesting the possibility of eliminating delays in the diagnosis and treatment of cases of LSCS that require early treatment.
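All of the annotation 1 outcome measures listed above derive from a 2×2 confusion matrix. A minimal helper, run here on invented counts rather than the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    sens = tp / (tp + fn)                 # sensitivity (recall)
    spec = tn / (tn + fp)                 # specificity
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    acc = (tp + tn) / (tp + fp + fn + tn)
    plr = sens / (1 - spec)               # positive likelihood ratio
    nlr = (1 - sens) / spec               # negative likelihood ratio
    return {"sensitivity": sens, "specificity": spec, "PPV": ppv,
            "NPV": npv, "accuracy": acc, "PLR": plr, "NLR": nlr}

# Illustrative counts only: 100 levels, 50 labeled operative.
m = diagnostic_metrics(tp=40, fp=10, fn=10, tn=40)
print({k: round(v, 2) for k, v in m.items()})
```

The AUC additionally requires the model's continuous scores (to sweep the ROC threshold), which is why it is reported separately from these point metrics.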
Affiliation(s)
- Hisataka Suzuki
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan; Department of Orthopaedic Surgery, Eniwa Hospital, 2-1-1 Kogane Chuo, Eniwa, Hokkaido 061-1449, Japan
- Terufumi Kokabu
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan; Department of Orthopaedic Surgery, Eniwa Hospital, 2-1-1 Kogane Chuo, Eniwa, Hokkaido 061-1449, Japan
- Katsuhisa Yamada
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Yoko Ishikawa
- Department of Orthopaedic Surgery, Eniwa Hospital, 2-1-1 Kogane Chuo, Eniwa, Hokkaido 061-1449, Japan
- Akito Yabu
- Department of Orthopaedic Surgery, Eniwa Hospital, 2-1-1 Kogane Chuo, Eniwa, Hokkaido 061-1449, Japan
- Yasushi Yanagihashi
- Department of Orthopaedic Surgery, Eniwa Hospital, 2-1-1 Kogane Chuo, Eniwa, Hokkaido 061-1449, Japan
- Takahiko Hyakumachi
- Department of Orthopaedic Surgery, Eniwa Hospital, 2-1-1 Kogane Chuo, Eniwa, Hokkaido 061-1449, Japan
- Hiroyuki Tachi
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Tomohiro Shimizu
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Tsutomu Endo
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Takashi Ohnishi
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Daisuke Ukeba
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Ken Nagahama
- Department of Orthopaedic Surgery, Sapporo Endoscopic Spine Surgery, N16E16, Sapporo, Hokkaido 065-0016, Japan
- Masahiko Takahata
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Hideki Sudo
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Norimasa Iwasaki
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
5
Kim S, Park H, Park SH. A review of deep learning-based reconstruction methods for accelerated MRI using spatiotemporal and multi-contrast redundancies. Biomed Eng Lett 2024; 14:1221-1242. [PMID: 39465106 PMCID: PMC11502678 DOI: 10.1007/s13534-024-00425-9] [Received: 05/01/2024] [Revised: 08/27/2024] [Accepted: 09/06/2024] [Indexed: 10/29/2024] Open Access
Abstract
Accelerated magnetic resonance imaging (MRI) has played an essential role in reducing data acquisition time. Acceleration is achieved by acquiring fewer data points in k-space, which results in various artifacts in the image domain. Conventional reconstruction methods resolve these artifacts by utilizing multi-coil information, but with limited robustness. Recently, numerous deep learning-based reconstruction methods have been developed, enabling outstanding reconstruction performance at higher acceleration. Advances in hardware and the development of specialized network architectures have produced these achievements. Moreover, MRI signals contain various forms of redundant information, including multi-coil, multi-contrast, and spatiotemporal redundancy. Utilizing this redundant information in combination with deep learning approaches allows not only higher acceleration but also well-preserved details in the reconstructed images. This review therefore introduces the basic concepts of deep learning and conventional accelerated MRI reconstruction methods, followed by a review of recent deep learning-based reconstruction methods that exploit various redundancies. Lastly, it discusses the challenges, limitations, and potential directions of future developments.
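The usual starting point for such methods is retrospectively undersampled k-space and the artifact-laden zero-filled reconstruction that a network then refines. A NumPy sketch on a synthetic phantom (the sampling mask is an illustrative choice, not a specific scheme from the review):

```python
import numpy as np

# Simple square phantom image.
img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0
img[24:40, 24:40] = 0.5

# Fully sampled k-space via 2D FFT (DC shifted to the center).
kspace = np.fft.fftshift(np.fft.fft2(img))

# Retrospective undersampling: keep every other phase-encode line,
# plus a fully sampled low-frequency band around the k-space center.
mask = np.zeros(kspace.shape, dtype=bool)
mask[::2, :] = True
mask[28:36, :] = True
under = np.where(mask, kspace, 0)

# Zero-filled reconstruction: the aliased baseline a DL model would improve.
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(under)))

acceleration = mask.size / mask.sum()   # effective acceleration factor
print(round(float(acceleration), 2))
```

Training pairs for a reconstruction network are built exactly this way: the zero-filled (or aliased) image as input, the fully sampled image as target.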
Affiliation(s)
- Seonghyuk Kim
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- HyunWook Park
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Sung-Hong Park
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, 291 Daehak-ro, Yuseong-gu, Daejeon, 34141 Republic of Korea
6
Ayalon A, Sahel JA, Chhablani J. A journey through the world of vitreous. Surv Ophthalmol 2024; 69:957-966. [PMID: 38885759 DOI: 10.1016/j.survophthal.2024.06.004] [Received: 04/04/2024] [Revised: 06/06/2024] [Accepted: 06/10/2024] [Indexed: 06/20/2024]
Abstract
The vitreous, one of the largest components of the human eye, consists mostly of water. Despite decades of study of the vitreous structure, numerous questions remain unanswered, fueling ongoing active research. We attempt to provide a comprehensive overview of the current understanding of the development, morphology, biochemical composition, and function of the vitreous, emphasizing the impact of vitreous structure and composition on the distribution of drugs. Fast-developing imaging technologies, such as modern optical coherence tomography, have unlocked multiple new approaches, offering the potential for in vivo study of the vitreous structure. They have allowed in vivo analysis of a range of vitreous structures, such as posterior precortical vitreous pockets, the Cloquet canal, the channels that interconnect them, perivascular vitreous fissures, and cisterns. We provide an overview of such imaging techniques, their principles, and some challenges in visualizing vitreous structures. Finally, we explore the potential of combining the latest technologies and machine learning to enhance our understanding of vitreous structures.
Affiliation(s)
- Anfisa Ayalon
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- José-Alain Sahel
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Jay Chhablani
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
7
Khosravi M, Jasemi SK, Hayati P, Javar HA, Izadi S, Izadi Z. Transformative artificial intelligence in gastric cancer: Advancements in diagnostic techniques. Comput Biol Med 2024; 183:109261. [PMID: 39488054 DOI: 10.1016/j.compbiomed.2024.109261] [Received: 06/25/2024] [Revised: 09/30/2024] [Accepted: 10/07/2024] [Indexed: 11/04/2024]
Abstract
Gastric cancer represents a significant global health challenge with elevated incidence and mortality rates, highlighting the need for advancements in diagnostic and therapeutic strategies. This review paper addresses the critical need for a thorough synthesis of the role of artificial intelligence (AI) in the management of gastric cancer. It provides an in-depth analysis of current AI applications, focusing on their contributions to early diagnosis, treatment planning, and outcome prediction. The review identifies key gaps and limitations in the existing literature by examining recent studies and technological developments. It aims to clarify the evolution of AI-driven methods and their impact on enhancing diagnostic accuracy, personalizing treatment strategies, and improving patient outcomes. The paper emphasizes the transformative potential of AI in overcoming the challenges associated with gastric cancer management and proposes future research directions to further harness AI's capabilities. Through this synthesis, the review underscores the importance of integrating AI technologies into clinical practice to revolutionize gastric cancer management.
Affiliation(s)
- Mobina Khosravi
- Student Research Committee, Kermanshah University of Medical Sciences, Kermanshah, Iran; USERN Office, Kermanshah University of Medical Sciences, Kermanshah, Iran.
- Seyedeh Kimia Jasemi
- Student Research Committee, Kermanshah University of Medical Sciences, Kermanshah, Iran; USERN Office, Kermanshah University of Medical Sciences, Kermanshah, Iran.
- Parsa Hayati
- Student Research Committee, Kermanshah University of Medical Sciences, Kermanshah, Iran; USERN Office, Kermanshah University of Medical Sciences, Kermanshah, Iran.
- Hamid Akbari Javar
- Department of Pharmaceutics, Faculty of Pharmacy, Tehran University of Medical Sciences, Tehran, Iran; USERN Office, Kermanshah University of Medical Sciences, Kermanshah, Iran.
- Saadat Izadi
- Department of Computer Engineering and Information Technology, Razi University, Kermanshah, Iran; USERN Office, Kermanshah University of Medical Sciences, Kermanshah, Iran.
- Zhila Izadi
- Pharmaceutical Sciences Research Center, Health Institute, Kermanshah University of Medical Sciences, Kermanshah, Iran; USERN Office, Kermanshah University of Medical Sciences, Kermanshah, Iran.
8
Goessinger EV, Gottfrois P, Mueller AM, Cerminara SE, Navarini AA. Image-Based Artificial Intelligence in Psoriasis Assessment: The Beginning of a New Diagnostic Era? Am J Clin Dermatol 2024; 25:861-872. [PMID: 39259262 PMCID: PMC11511687 DOI: 10.1007/s40257-024-00883-y] [Accepted: 08/02/2024] [Indexed: 09/12/2024] Open Access
Abstract
Psoriasis, a chronic inflammatory skin disease, affects millions of people worldwide. It imposes a significant burden on patients' quality of life and on healthcare systems, creating an urgent need for optimized diagnosis, treatment, and management. In recent years, image-based artificial intelligence (AI) applications have emerged as promising tools to assist physicians by offering improved accuracy and efficiency. In this review, we provide an overview of the current landscape of image-based AI applications in psoriasis. Emphasis is placed on machine learning (ML) algorithms, a key subset of AI, which enable automated pattern recognition for various tasks. Key AI applications in psoriasis include lesion detection and segmentation, differentiation from other skin conditions, subtype identification, automated assessment of area involvement and severity scoring, as well as personalized treatment selection and response prediction. Furthermore, we discuss two commercially available systems that utilize standardized photo documentation, automated segmentation, and semi-automated Psoriasis Area and Severity Index (PASI) calculation for patient assessment and follow-up. Despite the promise of AI in this field, many challenges remain. These include the validation of current models, integration into clinical workflows, the current lack of diversity in training-set data, and the need for standardized imaging protocols. Addressing these issues is crucial for the successful implementation of AI technologies in clinical practice. Overall, we underscore the potential of AI to revolutionize psoriasis management, highlighting both the advancements and the hurdles that need to be overcome. As technology continues to evolve, AI is expected to significantly improve the accuracy, efficiency, and personalization of psoriasis treatment.
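The PASI score that such systems semi-automate combines, for each of four body regions, three 0-4 severity grades (erythema, induration, desquamation) with a 0-6 area score under fixed regional weights. A sketch of the standard formula, with invented example grades:

```python
REGION_WEIGHTS = {"head": 0.1, "upper_limbs": 0.2, "trunk": 0.3, "lower_limbs": 0.4}

def area_score(pct):
    """Map percent of the region involved to the 0-6 PASI area score."""
    if pct == 0:  return 0
    if pct < 10:  return 1
    if pct < 30:  return 2
    if pct < 50:  return 3
    if pct < 70:  return 4
    if pct < 90:  return 5
    return 6

def pasi(scores):
    """scores: region -> (erythema, induration, desquamation, percent area).

    Severity grades are each 0-4; the maximum possible PASI is 72.
    """
    total = 0.0
    for region, weight in REGION_WEIGHTS.items():
        e, i, d, pct = scores[region]
        total += weight * (e + i + d) * area_score(pct)
    return round(total, 1)

# Invented example patient (illustrative grades, not clinical data).
example = {
    "head":        (2, 1, 1, 20),
    "upper_limbs": (2, 2, 1, 15),
    "trunk":       (3, 2, 2, 35),
    "lower_limbs": (2, 2, 2, 40),
}
print(pasi(example))
```

An AI pipeline replaces the manually estimated severity grades and area percentages with model outputs from segmented lesion images, then applies this same arithmetic.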
Affiliation(s)
- Elisabeth V Goessinger
- Department of Dermatology, University Hospital Basel, Basel, Switzerland
- Faculty of Medicine, University of Basel, Basel, Switzerland
- Philippe Gottfrois
- Department of Dermatology, University Hospital Basel, Basel, Switzerland
- Faculty of Medicine, University of Basel, Basel, Switzerland
- Alina M Mueller
- Department of Dermatology, University Hospital Basel, Basel, Switzerland
- Faculty of Medicine, University of Basel, Basel, Switzerland
- Sara E Cerminara
- Department of Dermatology, University Hospital Basel, Basel, Switzerland
- Faculty of Medicine, University of Basel, Basel, Switzerland
- Alexander A Navarini
- Department of Dermatology, University Hospital Basel, Basel, Switzerland
- Faculty of Medicine, University of Basel, Basel, Switzerland
9
Ferrández MC, Golla SSV, Eertink JJ, Wiegers SE, Zwezerijnen GJC, Heymans MW, Lugtenburg PJ, Kurch L, Hüttmann A, Hanoun C, Dührsen U, Barrington SF, Mikhaeel NG, Ceriani L, Zucca E, Czibor S, Györke T, Chamuleau MED, Zijlstra JM, Boellaard R. Validation of an Artificial Intelligence-Based Prediction Model Using 5 External PET/CT Datasets of Diffuse Large B-Cell Lymphoma. J Nucl Med 2024; 65:1802-1807. [PMID: 39362767 DOI: 10.2967/jnumed.124.268191] [Received: 06/06/2024] [Accepted: 09/09/2024] [Indexed: 10/05/2024] Open Access
Abstract
The aim of this study was to validate a previously developed deep learning model in 5 independent clinical trials. The predictive performance of this model was compared with the International Prognostic Index (IPI) and 2 models incorporating radiomic PET/CT features (the clinical PET and PET models). Methods: In total, 1,132 diffuse large B-cell lymphoma patients were included: 296 for training and 836 for external validation. The primary outcome was 2-y time to progression. The deep learning model was trained on maximum-intensity projections from PET/CT scans. The clinical PET model included metabolic tumor volume, maximum distance from the bulkiest lesion to another lesion, SUVpeak, age, and performance status. The PET model included metabolic tumor volume, maximum distance from the bulkiest lesion to another lesion, and SUVpeak. Model performance was assessed using the area under the curve (AUC) and Kaplan-Meier curves. Results: The IPI yielded an AUC of 0.60 on all external data. The deep learning model yielded a significantly higher AUC of 0.66 (P < 0.01). For each individual clinical trial, the model was consistently better than the IPI. Radiomic model AUCs remained higher for all clinical trials. The deep learning and clinical PET models showed equivalent performance (AUC, 0.69; P > 0.05). The PET model yielded the highest AUC of all models (AUC, 0.71; P < 0.05). Conclusion: The deep learning model predicted outcome in all trials with a higher performance than the IPI and better survival curve separation. This model can predict treatment outcome in diffuse large B-cell lymphoma without tumor delineation, but at the cost of lower prognostic performance than with radiomics.
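The AUCs compared above can be read as the probability that a randomly chosen progressor receives a higher risk score than a randomly chosen progression-free patient, which is exactly what the Mann-Whitney formulation computes. The scores below are invented for illustration:

```python
def auc(scores_pos, scores_neg):
    """AUC as P(positive outranks negative), i.e. the Mann-Whitney U statistic
    normalized by the number of positive/negative pairs; ties count half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model risk scores: patients who progressed within 2 years
# versus those who remained progression-free (illustrative values only).
progressed = [0.9, 0.8, 0.75, 0.6]
progression_free = [0.7, 0.5, 0.4, 0.3, 0.2]
print(auc(progressed, progression_free))
```

For large cohorts this O(n·m) pairwise form is usually replaced by a rank-based computation, but the value is identical.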
Affiliation(s)
- Maria C Ferrández
- Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Imaging and Biomarkers, Cancer Center Amsterdam, Amsterdam, The Netherlands
- Sandeep S V Golla
- Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Imaging and Biomarkers, Cancer Center Amsterdam, Amsterdam, The Netherlands
- Jakoba J Eertink
- Imaging and Biomarkers, Cancer Center Amsterdam, Amsterdam, The Netherlands
- Department of Hematology, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Sanne E Wiegers
- Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Imaging and Biomarkers, Cancer Center Amsterdam, Amsterdam, The Netherlands
- Gerben J C Zwezerijnen
- Department of Radiology and Nuclear Medicine, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Imaging and Biomarkers, Cancer Center Amsterdam, Amsterdam, The Netherlands
- Martijn W Heymans
- Department of Epidemiology and Data Science, Amsterdam Public Health Research Institute, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Pieternella J Lugtenburg
- Department of Hematology, Erasmus MC Cancer Institute, University Medical Center Rotterdam, Rotterdam, The Netherlands
- Lars Kurch
- Clinic and Polyclinic for Nuclear Medicine, Department of Nuclear Medicine, University of Leipzig, Leipzig, Germany
- Andreas Hüttmann
- Department of Hematology, West German Cancer Center, University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Christine Hanoun
- Department of Hematology, West German Cancer Center, University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Ulrich Dührsen
- Department of Hematology, West German Cancer Center, University Hospital Essen, University of Duisburg-Essen, Essen, Germany
- Sally F Barrington
- School of Biomedical Engineering and Imaging Sciences, King's College London and Guy's and St Thomas' PET Centre, King's Health Partners, King's College London, London, United Kingdom
- N George Mikhaeel
- Department of Clinical Oncology, Guy's Cancer Centre and School of Cancer and Pharmaceutical Sciences, King's College London, London, United Kingdom
- Luca Ceriani
- Department of Nuclear Medicine and PET/CT Centre, Imaging Institute of Southern Switzerland-EOC, Faculty of Biomedical Sciences, Università della Svizzera Italiana, Lugano, Switzerland
- SAKK Swiss Group for Clinical Cancer Research, Bern, Switzerland
- Emanuele Zucca
- SAKK Swiss Group for Clinical Cancer Research, Bern, Switzerland
- Department of Oncology, Oncology Institute of Southern Switzerland-EOC, Faculty of Biomedical Sciences, Università della Svizzera Italiana, Bellinzona, Switzerland
- Faculty of Biomedical Sciences, Università della Svizzera Italiana, Lugano, Switzerland
- Sándor Czibor
- Department of Nuclear Medicine, Medical Imaging Centre, Semmelweis University, Budapest, Hungary
- Tamás Györke
- Department of Nuclear Medicine, Medical Imaging Centre, Semmelweis University, Budapest, Hungary
- Martine E D Chamuleau
- Imaging and Biomarkers, Cancer Center Amsterdam, Amsterdam, The Netherlands
- Department of Hematology, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
- Josée M Zijlstra
- Imaging and Biomarkers, Cancer Center Amsterdam, Amsterdam, The Netherlands
- Department of Hematology, Cancer Center Amsterdam, Amsterdam UMC, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands
| | | |
Collapse
|
10
|
Yao J, Chu LC, Patlas M. Applications of Artificial Intelligence in Acute Abdominal Imaging. Can Assoc Radiol J 2024; 75:761-770. [PMID: 38715249 DOI: 10.1177/08465371241250197] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/12/2024] Open
Abstract
Artificial intelligence (AI) is a rapidly growing field with significant implications for radiology. Acute abdominal pain is a common clinical presentation that can range from benign conditions to life-threatening emergencies. The critical nature of these situations renders emergent abdominal imaging an ideal candidate for AI applications. CT, radiographs, and ultrasound are the most common modalities for imaging evaluation of these patients. For each modality, numerous studies have assessed the performance of AI models for detecting common pathologies, such as appendicitis, bowel obstruction, and cholecystitis. The capabilities of these models range from simple classification to detailed severity assessment. This narrative review explores the evolution, trends, and challenges in AI applications for evaluating acute abdominal pathologies. We review implementations of AI for non-traumatic and traumatic abdominal pathologies, with discussion of potential clinical impact, challenges, and future directions for the technology.
Collapse
Affiliation(s)
- Jason Yao
- Department of Radiology, McMaster University, Hamilton, ON, Canada
| | - Linda C Chu
- Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - Michael Patlas
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
| |
Collapse
|
11
|
Khan R, Taj S, Ma X, Noor A, Zhu H, Khan J, Khan ZU, Khan SU. Advanced federated ensemble internet of learning approach for cloud based medical healthcare monitoring system. Sci Rep 2024; 14:26068. [PMID: 39478132 PMCID: PMC11526108 DOI: 10.1038/s41598-024-77196-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2024] [Accepted: 10/21/2024] [Indexed: 11/02/2024] Open
Abstract
Medical imaging machines are a valuable tool for monitoring and diagnosing a variety of diseases. However, manual interpretation is error-prone and time-consuming, and centralized models are vulnerable to malicious attacks. Numerous diagnostic algorithms have been developed to improve precision and prevent poisoning attacks by integrating symptoms, test methods, and imaging data. In today's digital world, however, there is a need for a global cloud-based diagnostic artificial intelligence model that diagnoses efficiently, resists poisoning attacks, and can serve multiple purposes. We propose the Healthcare Federated Ensemble Internet of Learning Cloud Doctor System (FDEIoL) model, which integrates different Internet of Things (IoT) devices to provide precise and accurate interpretation free of poisoning-attack problems, thereby facilitating IoT-enabled remote patient monitoring for smart healthcare systems. Furthermore, the FDEIoL system uses a federated ensemble learning strategy to provide an automatic, up-to-date global prediction model built from the local models contributed by medical specialists. This assures biomedical security by safeguarding patient data and preserving the integrity of diagnostic processes. The FDEIoL system uses local-model feature selection to discriminate between malicious and non-malicious local models, and its ensemble strategy uses positive and negative samples to optimize performance on the test dataset, enhancing its capability for remote patient monitoring. The FDEIoL system achieved an exceptional accuracy of 99.24% on the Chest X-ray dataset and 99.0% on the brain-tumor MRI dataset, outperforming centralized models and demonstrating its capacity for precise diagnosis in IoT-enabled healthcare systems.
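The federated aggregation step at the heart of such a system can be sketched in a few lines. The snippet below is a generic, data-size-weighted parameter average (FedAvg-style) over made-up toy inputs, not the paper's full FDEIoL pipeline, which additionally screens out malicious local models before aggregating:

```python
def federated_average(local_models, data_sizes):
    """Build a global model as the data-size-weighted average of the
    local models' parameter vectors (FedAvg-style aggregation)."""
    total = sum(data_sizes)
    dim = len(local_models[0])
    return [sum(n * m[i] for m, n in zip(local_models, data_sizes)) / total
            for i in range(dim)]

# Three hypothetical hospitals' 2-parameter local models; the hospital
# with the most data (300 samples) pulls the global model toward itself.
global_model = federated_average(
    [[0.2, 0.4], [0.6, 0.0], [0.4, 0.2]],
    data_sizes=[100, 300, 100],
)
```

In a real deployment each parameter vector would be a full network's weights and the aggregation would run on the cloud server after each communication round.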
Collapse
Affiliation(s)
- Rahim Khan
- College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
| | - Sher Taj
- Software College, Northeastern University, Shenyang, 110169, China
| | - Xuefei Ma
- College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
| | - Alam Noor
- CISTER Research Center, Porto, Portugal
| | - Haifeng Zhu
- College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
| | - Javed Khan
- Department of Software Engineering, University of Science and Technology, Bannu, KPK, Pakistan
| | - Zahid Ullah Khan
- College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
| | - Sajid Ullah Khan
- Department of Information Systems, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Alkharj, Saudi Arabia
| |
Collapse
|
12
|
Ha HG, Jeung D, Ullah I, Tokuda J, Hong J, Lee H. Target-specified reference-based deep learning network for joint image deblurring and resolution enhancement in surgical zoom lens camera calibration. Comput Biol Med 2024; 183:109309. [PMID: 39442443 DOI: 10.1016/j.compbiomed.2024.109309] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2023] [Revised: 10/07/2024] [Accepted: 10/18/2024] [Indexed: 10/25/2024]
Abstract
BACKGROUND AND OBJECTIVE For the augmented reality of surgical navigation, which overlays a 3D model of the surgical target on an image, accurate camera calibration is imperative. However, when the checkerboard images for calibration are captured with a surgical microscope at high magnification, blur occurs owing to the narrow depth of focus, and blocking artifacts arise around fine edges owing to limited resolution. These artifacts strongly affect the localization of the checkerboard's corner points, resulting in inaccurate calibration and, in turn, a large displacement in the augmented reality overlay. To solve this problem, we propose a novel target-specific deep learning network that simultaneously reduces blur and enhances the spatial resolution of an image for surgical zoom-lens camera calibration. METHODS The proposed network, an end-to-end convolutional deep neural network, is specifically intended for enhancing the checkerboard images used in camera calibration. Through the network's symmetric architecture of encoding and decoding layers, the distinctive spatial features of the encoding layers are transferred and merged with the output of the decoding layers. Additionally, by integrating a multi-frame framework, including subpixel motion estimation and an ideal reference image, with this symmetric architecture, joint image deblurring and resolution enhancement are efficiently achieved. RESULTS Experimental comparisons verified the capability of the proposed method to improve the subjective and objective performance of surgical microscope calibration. Furthermore, the augmented reality overlap ratio, which quantitatively indicates augmented reality accuracy, was higher when calibrating with images enhanced by the proposed method than with previous methods.
CONCLUSIONS These findings suggest that the proposed network provides sharp high-resolution images from blurry low-resolution inputs. Furthermore, it delivers superior camera calibration performance on surgical microscope images, showing its potential for practical surgical navigation.
Collapse
Affiliation(s)
- Ho-Gun Ha
- Division of Intelligent Robot, DGIST, 333 Techno Jungang-daero, Hyeonpung-myeon, Dalseong-gun, Daegu, 42988, Republic of Korea
| | - Deokgi Jeung
- Department of Robotics and Mechatronics Engineering, DGIST, 333 Techno Jungang-daero, Hyeonpung-myeon, Dalseong-gun, Daegu, 42988, Republic of Korea
| | - Ihsan Ullah
- Division of Intelligent Robot, DGIST, 333 Techno Jungang-daero, Hyeonpung-myeon, Dalseong-gun, Daegu, 42988, Republic of Korea
| | - Junichi Tokuda
- Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, 75 Francis Street, Boston, MA, 02115, USA
| | - Jaesung Hong
- Department of Robotics and Mechatronics Engineering, DGIST, 333 Techno Jungang-daero, Hyeonpung-myeon, Dalseong-gun, Daegu, 42988, Republic of Korea
| | - Hyunki Lee
- Division of Intelligent Robot, DGIST, 333 Techno Jungang-daero, Hyeonpung-myeon, Dalseong-gun, Daegu, 42988, Republic of Korea.
| |
Collapse
|
13
|
Patharia P, Sethy PK, Nanthaamornphong A. Advancements and Challenges in the Image-Based Diagnosis of Lung and Colon Cancer: A Comprehensive Review. Cancer Inform 2024; 23:11769351241290608. [PMID: 39483315 PMCID: PMC11526153 DOI: 10.1177/11769351241290608] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2024] [Accepted: 09/25/2024] [Indexed: 11/03/2024] Open
Abstract
Image-based diagnosis has become a crucial tool in the identification and management of various cancers, particularly lung and colon cancer. This review delves into the latest advancements and ongoing challenges in the field, with a focus on deep learning, machine learning, and image processing techniques applied to X-rays, CT scans, and histopathological images. Significant progress has been made in imaging technologies like computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), which, when combined with machine learning and artificial intelligence (AI) methodologies, have greatly enhanced the accuracy of cancer detection and characterization. These advances have enabled early detection, more precise tumor localization, personalized treatment plans, and overall improved patient outcomes. However, despite these improvements, challenges persist. Variability in image interpretation, the lack of standardized diagnostic protocols, unequal access to advanced imaging technologies, and concerns over data privacy and security within AI-based systems remain major obstacles. Furthermore, integrating imaging data with broader clinical information is crucial to achieving a more comprehensive approach to cancer diagnosis and treatment. This review provides valuable insights into the recent developments and challenges in image-based diagnosis for lung and colon cancers, underscoring both the remarkable progress and the hurdles that still need to be overcome to optimize cancer care.
Collapse
Affiliation(s)
- Pragati Patharia
- Department of Electronics and Communication Engineering, Guru Ghasidas Vishwavidyalaya, Bilaspur, Chhattisgarh, India
| | - Prabira Kumar Sethy
- Department of Electronics and Communication Engineering, Guru Ghasidas Vishwavidyalaya, Bilaspur, Chhattisgarh, India
- Department of Electronics, Sambalpur University, Burla, Odisha, India
| | | |
Collapse
|
14
|
Salau AO, Tamiru NK, Abeje BT. Derived Amharic alphabet sign language recognition using machine learning methods. Heliyon 2024; 10:e38265. [PMID: 39386773 PMCID: PMC11462330 DOI: 10.1016/j.heliyon.2024.e38265] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2024] [Revised: 09/19/2024] [Accepted: 09/20/2024] [Indexed: 10/12/2024] Open
Abstract
Hearing-impaired people use sign language to communicate with those without a hearing disability. It is therefore difficult to communicate with hearing-impaired people without the expertise of a signer or knowledge of sign language. As a result, technologies that understand sign language are required to bridge the communication gap between those who have hearing impairments and those who don't. Ethiopian Amharic alphabet sign language (EAMASL) differs from other countries' sign languages because Amharic, spoken in Ethiopia, has a large number of complex alphabets. To date, only a few studies on EAMASL have been conducted in Ethiopia, and previous works covered only the basic and a few derived Amharic alphabet signs. To address this, we propose machine learning techniques, namely a Support Vector Machine (SVM) with Convolutional Neural Network (CNN) features, Histogram of Oriented Gradients (HOG) features, and their hybrid, to recognize the remaining derived Amharic alphabet signs. Because CNN features are robust to rotation and translation of signs, and HOG works well on low-quality data under strong illumination variation and with small quantities of training data, the two were combined for feature extraction. In addition to SVM, CNN (Softmax) was used as a classifier for the normalized hybrid features. The SVM model using CNN, HOG, normalized, and non-normalized hybrid feature vectors achieved accuracies of 89.02%, 95.42%, 97.40%, and 93.61%, respectively, under 10-fold cross-validation. With the normalized hybrid features, the other classifier, CNN (Softmax), produced 93.55% accuracy.
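The fusion-and-normalization step behind the best-performing variant can be illustrated with a minimal sketch: concatenate the CNN and HOG descriptors, then L2-normalize the result so neither descriptor's scale dominates the SVM. The feature values below are placeholders; in the paper they would come from a trained CNN and a HOG extractor:

```python
from math import sqrt

def l2_normalize(vec):
    """Scale a feature vector to unit L2 norm."""
    norm = sqrt(sum(v * v for v in vec)) or 1.0  # guard the all-zero vector
    return [v / norm for v in vec]

def hybrid_features(cnn_feats, hog_feats):
    """Fuse CNN and HOG descriptors by concatenation, then normalize so
    the raw scale of one descriptor cannot swamp the other."""
    return l2_normalize(list(cnn_feats) + list(hog_feats))

# Placeholder descriptor values for one sign image.
fused = hybrid_features([0.8, 0.1, 0.3], [12.0, 5.0])
```

The fused vector would then be fed to the SVM (or to the Softmax classifier) exactly as any single-descriptor feature vector would be.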
Collapse
Affiliation(s)
- Ayodeji Olalekan Salau
- Department of Electrical/Electronics and Computer Engineering, Afe Babalola University, Ado-Ekiti, Nigeria
- Saveetha School of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, Tamil Nadu, India
| | - Nigus Kefyalew Tamiru
- Department of Electrical and Computer Engineering, Debre Markos University, Debre Markos, Ethiopia
| | - Bekalu Tadele Abeje
- Department of Computer Science, Bahir Dar Institute of Technology, Bahir Dar, Amhara, Ethiopia
- Department of Information Technology, Haramaya University, Dire Dawa, Ethiopia
| |
Collapse
|
15
|
Yao J, Wei L, Hao P, Liu Z, Wang P. Application of artificial intelligence model in pathological staging and prognosis of clear cell renal cell carcinoma. Discov Oncol 2024; 15:545. [PMID: 39390246 PMCID: PMC11467134 DOI: 10.1007/s12672-024-01437-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/20/2024] [Accepted: 10/07/2024] [Indexed: 10/12/2024] Open
Abstract
This study aims to develop a deep learning (DL) model based on whole-slide images (WSIs) to predict the pathological stage of clear cell renal cell carcinoma (ccRCC). Histopathological images of 513 ccRCC patients were downloaded from The Cancer Genome Atlas (TCGA) database and randomly divided into training and validation sets in an 8:2 ratio. The CLAM algorithm was used to establish the DL model, and the stability of the model was evaluated in an external validation set. DL features were extracted from the model to construct a prognostic risk model, which was validated in an external dataset. The DL model showed excellent predictive ability, with an area under the curve (AUC) of 0.875 and an average accuracy of 0.809, indicating that it can reliably distinguish ccRCC patients at different stages from histopathological images. In addition, in the prognostic risk model constructed from DL features, the overall survival rate of patients in the high-risk group was significantly lower than that in the low-risk group (P = 0.003), and the AUC values for predicting 1-, 3- and 5-year overall survival were 0.68, 0.69 and 0.69, respectively, indicating high sensitivity and specificity. Results in the validation set were consistent with the above. Therefore, the DL model can accurately predict the pathological stage and prognosis of ccRCC patients, providing a useful reference for clinical diagnosis.
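A survival comparison like the reported P = 0.003 typically rests on dichotomizing the model's risk scores into two groups before a log-rank test. The paper does not state its cutoff, so the median split below is an assumption for illustration only:

```python
from statistics import median

def risk_groups(scores):
    """Split patients into high-/low-risk groups at the median of their
    deep-learning risk scores (a common, but here assumed, cutoff)."""
    cut = median(scores)
    return ["high" if s > cut else "low" for s in scores]

# Four hypothetical patients' DL risk scores.
groups = risk_groups([0.12, 0.91, 0.40, 0.73])
```

Each group's Kaplan-Meier curve would then be compared with a log-rank test to obtain the P value.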
Collapse
Affiliation(s)
- Jing Yao
- Department of Radiology, Tongji Hospital of Tongji University, Shanghai, 200065, China
- Institute of Medical Imaging Artificial Intelligence, Tongji University School of Medicine, Shanghai, 200065, China
| | - Lai Wei
- Department of Radiology, Tongji Hospital of Tongji University, Shanghai, 200065, China
- Institute of Medical Imaging Artificial Intelligence, Tongji University School of Medicine, Shanghai, 200065, China
| | - Peipei Hao
- Department of Radiology, Tongji Hospital of Tongji University, Shanghai, 200065, China
- Institute of Medical Imaging Artificial Intelligence, Tongji University School of Medicine, Shanghai, 200065, China
| | - Zhongliu Liu
- Department of Radiology, Tongji Hospital of Tongji University, Shanghai, 200065, China
- Institute of Medical Imaging Artificial Intelligence, Tongji University School of Medicine, Shanghai, 200065, China
| | - Peijun Wang
- Department of Radiology, Tongji Hospital of Tongji University, Shanghai, 200065, China.
- Institute of Medical Imaging Artificial Intelligence, Tongji University School of Medicine, Shanghai, 200065, China.
| |
Collapse
|
16
|
Singh Y, Quaia E. Feature Reviews for Tomography 2023. Tomography 2024; 10:1605-1607. [PMID: 39453035 PMCID: PMC11511180 DOI: 10.3390/tomography10100118] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2024] [Accepted: 09/26/2024] [Indexed: 10/26/2024] Open
Abstract
In an era of rapid technological progress, this Special Issue aims to provide a comprehensive overview of the state-of-the-art in tomographic imaging [...].
Collapse
Affiliation(s)
- Yashbir Singh
- Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
| | - Emilio Quaia
- Department of Radiology, University of Padova, 35122 Padova, Italy
| |
Collapse
|
17
|
Gu H, Jiang K, Yu F, Wang L, Yang X, Li X, Jiang Y, Lü W, Sun X. Multifunctional Human-Computer Interaction System Based on Deep Learning-Assisted Strain Sensing Array. ACS APPLIED MATERIALS & INTERFACES 2024; 16:54496-54507. [PMID: 39325961 DOI: 10.1021/acsami.4c12758] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/28/2024]
Abstract
Continuous and reliable monitoring of gait is crucial for health monitoring, such as postoperative recovery after bone and joint surgery and early diagnosis of disease. However, existing gait analysis systems are often bulky and require dedicated space for motion-capture equipment, limiting their application in daily life. Here, we develop an intelligent gait monitoring, analysis, and prediction system based on flexible piezoelectric sensors and deep learning neural networks with high sensitivity (241.29 mV/N), quick response (66 ms loading, 87 ms recovery), and excellent stability (R2 = 0.9946). Theoretical simulations and experiments confirm that the sensor provides exceptional signal feedback and can easily acquire accurate gait data when fitted to shoe soles. By integrating high-quality gait data with a custom-built deep learning model, the system can detect and infer human motion states in real time (recognition accuracy reaches 94.7%). To further validate the sensor's application in real life, we constructed a flexible wearable recognition system with a human-computer interaction interface and a simple operation process for long-term, continuous tracking of athletes' gait, potentially aiding personalized health management, early detection of disease, and remote medical care.
Collapse
Affiliation(s)
- Hao Gu
- Key Laboratory of Advanced Structural Materials, Ministry of Education & Advanced Institute of Materials Science, Changchun University of Technology, Changchun 130012, China
- State Key Laboratory of Luminescence and Applications, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
| | - Ke Jiang
- State Key Laboratory of Luminescence and Applications, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
| | - Fei Yu
- Key Laboratory of Advanced Structural Materials, Ministry of Education & Advanced Institute of Materials Science, Changchun University of Technology, Changchun 130012, China
| | - Liying Wang
- Key Laboratory of Advanced Structural Materials, Ministry of Education & Advanced Institute of Materials Science, Changchun University of Technology, Changchun 130012, China
| | - Xijia Yang
- Key Laboratory of Advanced Structural Materials, Ministry of Education & Advanced Institute of Materials Science, Changchun University of Technology, Changchun 130012, China
| | - Xuesong Li
- Key Laboratory of Advanced Structural Materials, Ministry of Education & Advanced Institute of Materials Science, Changchun University of Technology, Changchun 130012, China
| | - Yi Jiang
- School of Science, Changchun Institute of Technology, Changchun 130012, China
| | - Wei Lü
- Key Laboratory of Advanced Structural Materials, Ministry of Education & Advanced Institute of Materials Science, Changchun University of Technology, Changchun 130012, China
- State Key Laboratory of Luminescence and Applications, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
| | - Xiaojuan Sun
- State Key Laboratory of Luminescence and Applications, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
| |
Collapse
|
18
|
Çetin-Kaya Y. Equilibrium Optimization-Based Ensemble CNN Framework for Breast Cancer Multiclass Classification Using Histopathological Image. Diagnostics (Basel) 2024; 14:2253. [PMID: 39410657 PMCID: PMC11475610 DOI: 10.3390/diagnostics14192253] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2024] [Revised: 09/12/2024] [Accepted: 10/08/2024] [Indexed: 10/20/2024] Open
Abstract
Background: Breast cancer is one of the most lethal cancers among women. Early detection and proper treatment reduce mortality rates. Histopathological images provide detailed information for diagnosing and staging breast cancer. Methods: The BreakHis dataset of histopathological images is used in this study. Medical images are prone to problems such as varied textural backgrounds, overlapping cell structures, unbalanced class distribution, and insufficiently labeled data. Added to these, deep learning models' tendency to overfit and to extract insufficient features makes it extremely difficult to obtain a high-performance model on this dataset. In this study, 20 state-of-the-art models are fine-tuned to diagnose eight types of breast cancer. In addition, a comprehensive experimental study over 20 different custom models was conducted to determine the most successful new architecture. As a result, we propose a novel model called MultiHisNet. Results: The most effective new model, which includes a pointwise convolution layer, a residual link, and channel and spatial attention modules, achieved 94.69% accuracy in multi-class breast cancer classification. An ensemble model was created from the best-performing transfer learning and custom models obtained in the study, with model weights determined by an Equilibrium Optimizer. The proposed ensemble model achieved 96.71% accuracy in eight-class breast cancer detection. Conclusions: The results show that the proposed model will support pathologists in successfully diagnosing breast cancer.
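The ensemble's combination step is a weighted soft vote over the member models' class probabilities. In the paper the weight vector is tuned by an Equilibrium Optimizer; the sketch below uses fixed placeholder weights to show the combination step only:

```python
def weighted_soft_vote(prob_sets, weights):
    """Fuse per-model class-probability vectors with a convex weight
    vector and return (predicted class, fused probabilities)."""
    total = sum(weights)
    weights = [w / total for w in weights]  # force weights onto the simplex
    n_classes = len(prob_sets[0])
    fused = [sum(w * p[c] for w, p in zip(weights, prob_sets))
             for c in range(n_classes)]
    return fused.index(max(fused)), fused

# Three hypothetical models scoring one image over three classes; in the
# paper the weights would come from the Equilibrium Optimizer search.
pred, fused = weighted_soft_vote(
    [[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.5, 0.4, 0.1]],
    weights=[0.5, 0.2, 0.3],
)
```

An optimizer searching over `weights` would score each candidate vector by the fused accuracy on a validation split, which is exactly the objective an Equilibrium Optimizer can minimize.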
Collapse
Affiliation(s)
- Yasemin Çetin-Kaya
- Department of Computer Engineering, Faculty of Engineering and Architecture, Tokat Gaziosmanpasa University, Tokat 60250, Turkey
| |
Collapse
|
19
|
Suchal ZA, Ain NU, Mahmud A. Revolutionizing LVH detection using artificial intelligence: the AI heartbeat project. J Hypertens 2024:00004872-990000000-00560. [PMID: 39445588 DOI: 10.1097/hjh.0000000000003885] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2024] [Accepted: 09/08/2024] [Indexed: 10/25/2024]
Abstract
Many studies have shown the utility and promise of artificial intelligence (AI) for the diagnosis of left ventricular hypertrophy (LVH). The aim of the present study was to conduct a meta-analysis comparing the accuracy of AI tools against the electrocardiographic criteria most commonly used to detect LVH in clinical practice, including the Sokolow-Lyon and Cornell criteria. Nine studies meeting the inclusion criteria were selected, comprising 31 657 patients in the testing and 100 271 in the training datasets. Meta-analysis was performed using a hierarchical model, calculating the pooled sensitivity, specificity, and accuracy along with 95% confidence intervals (95% CIs). To ensure that the results were not skewed by one particular study, a sensitivity analysis using the leave-one-out approach was adopted for all three outcomes. AI was associated with greater pooled estimates: accuracy, 80.50 (95% CI: 80.40-80.60); sensitivity, 89.29 (95% CI: 89.25-89.33); and specificity, 93.32 (95% CI: 93.26-93.38). After adjusting for the weight of individual studies on the outcomes, accuracy and specificity were unchanged, but the adjusted pooled sensitivity fell to 53.16 (95% CI: 52.92-53.40). AI demonstrates higher diagnostic accuracy and sensitivity than conventional ECG criteria for LVH detection and holds promise as a reliable and efficient tool for detecting LVH in diverse populations. Further studies are needed to test AI models in hypertensive populations, particularly in low-resource settings.
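The leave-one-out sensitivity analysis can be sketched directly: recompute the pooled estimate with each study dropped in turn and check how far it moves. The per-study values and weights below are made up for demonstration, and the simple weighted average only approximates the paper's hierarchical pooling:

```python
def pooled(estimates, weights):
    """Weighted (inverse-variance style) pooled estimate."""
    return sum(e * w for e, w in zip(estimates, weights)) / sum(weights)

def leave_one_out(estimates, weights):
    """Pooled estimate recomputed with each study dropped in turn."""
    return [pooled(estimates[:i] + estimates[i + 1:],
                   weights[:i] + weights[i + 1:])
            for i in range(len(estimates))]

sens = [92.0, 88.0, 90.0, 95.0, 40.0]  # hypothetical per-study sensitivities (%)
w = [10.0, 8.0, 9.0, 12.0, 30.0]       # hypothetical study weights

overall = pooled(sens, w)              # dragged down by the heavy outlier study
loo = leave_one_out(sens, w)           # dropping the outlier shifts the estimate sharply
```

This is the same logic behind the paper's finding: one heavily weighted study can pull the pooled sensitivity from around 89 down to around 53.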
Collapse
Affiliation(s)
- Zafar Aleem Suchal
- Hypertension Clinic, Shalamar Hospital and Department of Clinical Research, Shalamar Medical & Dental College, Lahore, Pakistan
| | | | | |
Collapse
|
20
|
Cichosz SL, Olesen SS, Jensen MH. Explainable Machine-Learning Models to Predict Weekly Risk of Hyperglycemia, Hypoglycemia, and Glycemic Variability in Patients With Type 1 Diabetes Based on Continuous Glucose Monitoring. J Diabetes Sci Technol 2024:19322968241286907. [PMID: 39377175 DOI: 10.1177/19322968241286907] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/09/2024]
Abstract
BACKGROUND AND OBJECTIVE The aim of this study was to develop and validate explainable prediction models based on continuous glucose monitoring (CGM) and baseline data to identify the week-to-week risk of breaching CGM key metrics (hyperglycemia, hypoglycemia, glycemic variability). With a weekly prediction of CGM key metrics, the patient or health care personnel can take immediate preemptive action. METHODS We analyzed, trained, and internally tested three prediction models (logistic regression, XGBoost, and TabNet) using CGM data from 187 type 1 diabetes patients with long-term CGM monitoring. A binary classification approach, combined with feature engineering applied to the CGM signals, was used to predict hyperglycemia, hypoglycemia, and glycemic variability based on consensus targets (time above range ≥5%, time below range ≥4%, coefficient of variation ≥36%). The models were validated in two independent cohorts comprising 223 additional patients of varying ages. RESULTS A total of 46 593 weeks of CGM data were included in the analysis. For the best model (XGBoost), the area under the receiver operating characteristic curve (ROC-AUC) was 0.90 [95% confidence interval (CI) = 0.89-0.91], 0.89 [95% CI = 0.88-0.90], and 0.80 [95% CI = 0.79-0.81] for predicting hyperglycemia, hypoglycemia, and glycemic variability in the internal validation, respectively. External validation showed good generalizability, with ROC-AUCs of 0.88 to 0.95, 0.84 to 0.89, and 0.80 to 0.82 for the three glycemic outcomes. CONCLUSION Prediction models based on real-world CGM data can be used to predict the risk of unstable glycemic control in the forthcoming week. The models showed good performance in both internal and external validation cohorts.
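The three consensus targets are straightforward to compute from a week of raw CGM readings. A minimal sketch, assuming the standard 70-180 mg/dL consensus range (the abstract states only the percentage thresholds):

```python
from statistics import mean, stdev

def cgm_week_labels(glucose_mgdl, high=180.0, low=70.0):
    """Label one week of CGM readings against the three consensus targets
    (TAR >= 5%, TBR >= 4%, CV >= 36%). The 70/180 mg/dL range boundaries
    are assumed, not taken from the paper."""
    n = len(glucose_mgdl)
    tar = 100.0 * sum(g > high for g in glucose_mgdl) / n  # time above range, %
    tbr = 100.0 * sum(g < low for g in glucose_mgdl) / n   # time below range, %
    cv = 100.0 * stdev(glucose_mgdl) / mean(glucose_mgdl)  # coefficient of variation, %
    return {"hyperglycemia": tar >= 5.0,
            "hypoglycemia": tbr >= 4.0,
            "high_variability": cv >= 36.0,
            "tar": tar, "tbr": tbr, "cv": cv}

# A toy "week" of readings: mostly in range, with some highs and one low.
labels = cgm_week_labels([110, 95, 250, 210, 140, 65, 120, 160, 300, 100])
```

These binary labels are exactly the targets a weekly classifier like the paper's XGBoost model is trained to predict one week ahead.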
Collapse
Affiliation(s)
- Simon Lebech Cichosz
- Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
| | - Søren Schou Olesen
- Department of Clinical Medicine, Faculty of Medicine, Aalborg University Hospital, Aalborg, Denmark
- Mech-Sense, Centre for Pancreatic Diseases, Department of Gastroenterology and Hepatology, Aalborg University Hospital, Aalborg, Denmark
| | - Morten Hasselstrøm Jensen
- Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- Data Science, Novo Nordisk A/S, Søborg, Denmark
| |
Collapse
|
21
|
Li Q, Geng S, Luo H, Wang W, Mo YQ, Luo Q, Wang L, Song GB, Sheng JP, Xu B. Signaling pathways involved in colorectal cancer: pathogenesis and targeted therapy. Signal Transduct Target Ther 2024; 9:266. [PMID: 39370455 PMCID: PMC11456611 DOI: 10.1038/s41392-024-01953-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2024] [Revised: 07/25/2024] [Accepted: 08/16/2024] [Indexed: 10/08/2024] Open
Abstract
Colorectal cancer (CRC) remains one of the leading causes of cancer-related mortality worldwide. Its complexity is influenced by various signal transduction networks that govern cellular proliferation, survival, differentiation, and apoptosis. The pathogenesis of CRC is a testament to the dysregulation of these signaling cascades, which culminates in the malignant transformation of colonic epithelium. This review aims to dissect the foundational signaling mechanisms implicated in CRC, to elucidate the generalized principles underpinning neoplastic evolution and progression. We discuss the molecular hallmarks of CRC, including its genomic, epigenomic and microbial features, to highlight the role of signal transduction in the orchestration of the tumorigenic process. Concurrently, we review the advent of targeted and immune therapies in CRC, assessing their impact on the current clinical landscape. The development of these therapies has been informed by a deepening understanding of oncogenic signaling, leading to the identification of key nodes within these networks that can be exploited pharmacologically. Furthermore, we explore the potential of integrating AI to enhance the precision of therapeutic targeting and patient stratification, emphasizing its role in personalized medicine. In summary, our review captures the dynamic interplay between aberrant signaling in CRC pathogenesis and the concerted efforts to counteract these changes through targeted therapeutic strategies, ultimately aiming to pave the way for improved prognosis and personalized treatment modalities in colorectal cancer.
Affiliation(s)
- Qing Li: The Shapingba Hospital, Chongqing University, Chongqing, China; Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital and School of Medicine, Chongqing University, Chongqing, China; Key Laboratory of Biorheological Science and Technology, Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
- Shan Geng: Central Laboratory, The Affiliated Dazu Hospital of Chongqing Medical University, Chongqing, China
- Hao Luo: Key Laboratory of Biorheological Science and Technology, Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China; Cancer Center, Daping Hospital, Army Medical University, Chongqing, China
- Wei Wang: Chongqing Municipal Health and Health Committee, Chongqing, China
- Ya-Qi Mo: Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital and School of Medicine, Chongqing University, Chongqing, China
- Qing Luo: Key Laboratory of Biorheological Science and Technology, Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
- Lu Wang: Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital and School of Medicine, Chongqing University, Chongqing, China
- Guan-Bin Song: Key Laboratory of Biorheological Science and Technology, Ministry of Education, College of Bioengineering, Chongqing University, Chongqing, China
- Jian-Peng Sheng: College of Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, Nanjing, China
- Bo Xu: Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital and School of Medicine, Chongqing University, Chongqing, China
22
Kim MJ, Chae SG, Bae SJ, Hwang KG. Unsupervised few shot learning architecture for diagnosis of periodontal disease in dental panoramic radiographs. Sci Rep 2024; 14:23237. [PMID: 39369017 PMCID: PMC11455883 DOI: 10.1038/s41598-024-73665-5] [Received: 03/14/2024] [Accepted: 09/19/2024]
Abstract
In the domain of medical imaging, the advent of deep learning has marked a significant progression, particularly in the nuanced area of periodontal disease diagnosis. This study specifically targets the prevalent issue of scarce labeled data in medical imaging. We introduce a novel unsupervised few-shot learning algorithm, meticulously crafted for classifying periodontal diseases using a limited collection of dental panoramic radiographs. Our method leverages a UNet architecture for generating regions of interest (RoI) from radiographs, which are then processed through a Convolutional Variational Autoencoder (CVAE). This approach is pivotal in extracting critical latent features, which are subsequently clustered. This clustering is key to our methodology, enabling the assignment of labels to images indicative of periodontal diseases and thus circumventing the challenges posed by limited datasets. Our validation process, involving a comparative analysis with traditional supervised learning and standard autoencoder-based clustering, demonstrates a marked improvement in both diagnostic accuracy and efficiency. On three real-world validation datasets, our UNet-CVAE architecture achieved up to 14% higher average accuracy than state-of-the-art supervised models, including a vision transformer, when trained with 100 labeled images. This study not only highlights the capability of unsupervised learning in overcoming data limitations but also sets a new benchmark for diagnostic methodologies in medical AI, potentially transforming practices in data-constrained scenarios.
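The labeling pipeline this abstract describes — encoder features clustered into pseudo-labels — can be sketched in a few lines. This is a minimal illustration, with random vectors standing in for the CVAE latent features and a toy k-means, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for CVAE latent features of RoI patches: in the paper these come
# from UNet-cropped radiographs passed through a trained Convolutional
# Variational Autoencoder. Here we simulate two latent groups
# (e.g. "healthy" vs "periodontal disease").
latents = np.vstack([
    rng.normal(0.0, 0.5, size=(40, 8)),
    rng.normal(3.0, 0.5, size=(40, 8)),
])

def kmeans(X, k=2, iters=50):
    """Minimal k-means: cluster latent vectors so cluster IDs can serve
    as pseudo-labels for otherwise unlabeled image patches."""
    idx = np.linspace(0, len(X) - 1, k).astype(int)  # deterministic init
    centers = X[idx].astype(float)
    assign = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = X[assign == j].mean(axis=0)
    return assign

pseudo_labels = kmeans(latents)
```

The cluster assignments then act as labels for downstream supervised training, which is how the method sidesteps the scarcity of expert annotations.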
Affiliation(s)
- Min Joo Kim: Department of Medical and Digital Engineering, Hanyang University, Seoul, 04763, Republic of Korea
- Sun Geu Chae: Department of Industrial Engineering, Hanyang University, Seoul, 04763, Republic of Korea
- Suk Joo Bae: Department of Industrial Engineering, Hanyang University, Seoul, 04763, Republic of Korea
- Kyung-Gyun Hwang: Department of Dentistry, College of Medicine, Hanyang University, Seoul, 04763, Republic of Korea
23
Al-Bashir AK, Al-Bataiha DH, Hafsa M, Al-Abed MA, Kanoun O. Electrical impedance tomography image reconstruction for lung monitoring based on ensemble learning algorithms. Healthc Technol Lett 2024; 11:271-282. [PMID: 39359686 PMCID: PMC11442128 DOI: 10.1049/htl2.12085] [Received: 01/16/2024] [Revised: 04/01/2024] [Accepted: 04/16/2024]
Abstract
Electrical impedance tomography (EIT) is a promising non-invasive imaging technique that reconstructs an image of the electrical conductivity of an anatomic structure from measured boundary voltages. However, the EIT inverse problem underlying image reconstruction is nonlinear and highly ill-posed. Therefore, in this work, a simulated dataset that mimics the human thorax was generated, with boundary voltages computed from given conductivity distributions. To overcome the challenges of image reconstruction, an ensemble learning method was proposed. The ensemble combines several convolutional neural network models: a simple Convolutional Neural Network (CNN) model, AlexNet, AlexNet with a residual block, and a modified AlexNet model. The ensemble weights were selected by an averaging scheme chosen to give the best coefficient of determination (R2 score). The reconstruction quality was quantitatively evaluated by calculating the root mean square error (RMSE), the coefficient of determination (R2 score), and the image correlation coefficient (ICC). The proposed method's best performance was an RMSE of 0.09404, an R2 score of 0.926186, and an ICC of 0.95783, achieved by the ensemble model. The proposed method is promising in that it can reconstruct valuable images for clinical EIT applications and measurements compared to previous studies.
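The averaging-ensemble idea and the reported metrics (RMSE and R2 score) can be sketched as follows. The three "base models" here are noisy stand-in reconstructions, not actual CNN outputs; all values are illustrative:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between two flattened conductivity images."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2_score(y_true, y_pred):
    """Coefficient of determination (R2 score)."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

rng = np.random.default_rng(1)
truth = rng.random(64)  # toy ground-truth conductivity distribution

# Stand-ins for three base reconstructors (e.g. the CNN and AlexNet
# variants in the paper), each with a different error level.
preds = [truth + rng.normal(0.0, s, 64) for s in (0.05, 0.10, 0.20)]

# Simple averaging ensemble; the paper additionally tunes member
# weights to maximize the R2 score.
ensemble = np.mean(preds, axis=0)
member_r2 = [r2_score(truth, p) for p in preds]
ensemble_r2 = r2_score(truth, ensemble)
```

Because the members' errors are independent, the averaged reconstruction scores better than the weakest member, which is the basic motivation for ensembling ill-posed inverse solvers.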
Affiliation(s)
- Areen K Al-Bashir: Biomedical Engineering Department, Jordan University of Science and Technology, Irbid, Jordan
- Duha H Al-Bataiha: Biomedical Engineering Department, Jordan University of Science and Technology, Irbid, Jordan
- Mariem Hafsa: Biomedical Engineering Department, Hashemite University, Zarqa, Jordan
- Olfa Kanoun: Measurement and Sensor Technology, Chemnitz University of Technology, Chemnitz, Germany
24
Koido M, Tomizuka K, Terao C. Fundamentals for predicting transcriptional regulations from DNA sequence patterns. J Hum Genet 2024; 69:499-504. [PMID: 38730006 PMCID: PMC11422166 DOI: 10.1038/s10038-024-01256-3] [Received: 01/18/2024] [Revised: 04/10/2024] [Accepted: 04/25/2024]
Abstract
Cell-type-specific regulatory elements, cataloged through extensive experiments and bioinformatics in large-scale consortiums, have enabled enrichment analyses of genetic associations that primarily utilize positional information of the regulatory elements. These analyses have identified cell types and pathways genetically associated with human complex traits. However, our understanding of detailed allelic effects on these elements' activities and on-off states remains incomplete, hampering the interpretation of human genetic study results. This review introduces machine learning methods to learn sequence-dependent transcriptional regulation mechanisms from DNA sequences for predicting such allelic effects (not associations). We provide a concise history of machine-learning-based approaches, the requirements, and the key computational processes, focusing on primers in machine learning. Convolution and self-attention, pivotal in modern deep-learning models, are explained through geometrical interpretations using dot products. This facilitates understanding of the concept and why these have been used for machine learning for DNA sequences. These will inspire further research in this genetics and genomics field.
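The dot-product view of convolution and self-attention described above can be made concrete on a one-hot DNA sequence. This is an illustrative sketch, not code from the review: the convolution kernel acts as a motif scanner (each output is the dot product of the kernel with a local window), and single-head attention with identity projections mixes positions by pairwise dot-product similarity:

```python
import numpy as np

def conv1d(seq_onehot, kernel):
    """1-D convolution over a one-hot DNA sequence: each output position
    is the dot product of the kernel with a local window (motif scan)."""
    L, _ = seq_onehot.shape          # length x alphabet size (A,C,G,T)
    k = kernel.shape[0]
    return np.array([np.sum(seq_onehot[i:i + k] * kernel)
                     for i in range(L - k + 1)])

def self_attention(X):
    """Single-head self-attention with identity Q/K/V projections:
    pairwise dot products between positions give attention weights."""
    scores = X @ X.T / np.sqrt(X.shape[1])          # dot-product similarity
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ X                              # weighted mixing

# One-hot encode "ACGTAC" and scan for the 2-mer motif "CG".
alphabet = "ACGT"
seq = "ACGTAC"
onehot = np.array([[1.0 if b == a else 0.0 for a in alphabet] for b in seq])
cg_kernel = np.array([[0, 1, 0, 0],    # matches C at window position 0
                      [0, 0, 1, 0]],   # matches G at window position 1
                     dtype=float)
motif_hits = conv1d(onehot, cg_kernel)
attended = self_attention(onehot)
```

The convolution output peaks exactly where the "CG" motif sits, which is the geometric intuition the review builds on: a learned kernel is a template, and its dot product with each window measures template match strength.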
Affiliation(s)
- Masaru Koido: Laboratory of Complex Trait Genomics, Department of Computational Biology and Medical Sciences, Graduate School of Frontier Sciences, The University of Tokyo, Tokyo, Japan; Laboratory for Statistical and Translational Genetics, RIKEN Center for Integrative Medical Sciences, Yokohama, Japan
- Kohei Tomizuka: Laboratory for Statistical and Translational Genetics, RIKEN Center for Integrative Medical Sciences, Yokohama, Japan
- Chikashi Terao: Laboratory for Statistical and Translational Genetics, RIKEN Center for Integrative Medical Sciences, Yokohama, Japan; Clinical Research Center, Shizuoka General Hospital, Shizuoka, Japan; The Department of Applied Genetics, The School of Pharmaceutical Sciences, University of Shizuoka, Shizuoka, Japan
25
Tagami M, Nishio M, Yoshikawa A, Misawa N, Sakai A, Haruna Y, Tomita M, Azumi A, Honda S. Artificial intelligence-based differential diagnosis of orbital MALT lymphoma and IgG4 related ophthalmic disease using hematoxylin-eosin images. Graefes Arch Clin Exp Ophthalmol 2024; 262:3355-3366. [PMID: 38700592 DOI: 10.1007/s00417-024-06501-1] [Received: 11/09/2023] [Revised: 04/12/2024] [Accepted: 04/28/2024]
Abstract
PURPOSE To investigate the possibility of distinguishing between IgG4-related ophthalmic disease (IgG4-ROD) and orbital MALT lymphoma using artificial intelligence (AI) and hematoxylin-eosin (HE) images. METHODS After identifying a total of 127 patients with IgG4-ROD or orbital MALT lymphoma from whom tissue blocks could be procured, we performed histological and molecular genetic analyses, such as gene rearrangement. Pathological HE images were then collected from these patients, and 10 image patches were cut out from the HE image of each patient. A total of 970 image patches from 97 patients were used to train nine different deep learning models, and the 300 image patches from the remaining 30 patients were used to evaluate their diagnostic performance. Area under the curve (AUC) and accuracy (ACC) were used to evaluate the deep learning models. In addition, four ophthalmologists performed the binary classification between IgG4-ROD and orbital MALT lymphoma. RESULTS EVA, a vision-centric foundation model that explores the limits of visual representation, was the best of the nine deep learning models, with ACC = 73.3% and AUC = 0.807. The ACC of the four ophthalmologists ranged from 40% to 60%. CONCLUSIONS It was possible to construct AI software based on deep learning that distinguishes between IgG4-ROD and orbital MALT lymphoma. This AI model may be useful as an initial screening tool to direct further ancillary investigations.
Affiliation(s)
- Mizuki Tagami: Department of Ophthalmology and Visual Sciences, Graduate School of Medicine, Osaka Metropolitan University, 1-5-7 Asahimachi, Abeno-Ku, Osaka-Shi, Osaka, 545-8586, Japan; Ophthalmology Department and Eye Center, Kobe Kaisei Hospital, Kobe, Hyogo, Japan
- Mizuho Nishio: Center for Advanced Medical Engineering Research & Development, Kobe University Graduate School of Medicine, Kobe, Hyogo, Japan
- Atsuko Yoshikawa: Ophthalmology Department and Eye Center, Kobe Kaisei Hospital, Kobe, Hyogo, Japan
- Norihiko Misawa: Department of Ophthalmology and Visual Sciences, Graduate School of Medicine, Osaka Metropolitan University, 1-5-7 Asahimachi, Abeno-Ku, Osaka-Shi, Osaka, 545-8586, Japan
- Atsushi Sakai: Department of Ophthalmology and Visual Sciences, Graduate School of Medicine, Osaka Metropolitan University, 1-5-7 Asahimachi, Abeno-Ku, Osaka-Shi, Osaka, 545-8586, Japan
- Yusuke Haruna: Department of Ophthalmology and Visual Sciences, Graduate School of Medicine, Osaka Metropolitan University, 1-5-7 Asahimachi, Abeno-Ku, Osaka-Shi, Osaka, 545-8586, Japan
- Mami Tomita: Department of Ophthalmology and Visual Sciences, Graduate School of Medicine, Osaka Metropolitan University, 1-5-7 Asahimachi, Abeno-Ku, Osaka-Shi, Osaka, 545-8586, Japan
- Atsushi Azumi: Ophthalmology Department and Eye Center, Kobe Kaisei Hospital, Kobe, Hyogo, Japan
- Shigeru Honda: Department of Ophthalmology and Visual Sciences, Graduate School of Medicine, Osaka Metropolitan University, 1-5-7 Asahimachi, Abeno-Ku, Osaka-Shi, Osaka, 545-8586, Japan
26
Murmu A, Győrffy B. Artificial intelligence methods available for cancer research. Front Med 2024; 18:778-797. [PMID: 39115792 DOI: 10.1007/s11684-024-1085-3] [Received: 01/03/2024] [Accepted: 05/17/2024]
Abstract
Cancer is a heterogeneous and multifaceted disease with a significant global footprint. Despite substantial technological advancements in battling cancer, early diagnosis and selection of effective treatment remain challenging. With the availability of large-scale datasets spanning multiple levels of data, new bioinformatic tools are needed to transform this wealth of information into clinically useful decision-support tools. In this field, artificial intelligence (AI) technologies with their highly diverse applications are rapidly gaining ground. Machine learning methods, such as Bayesian networks, support vector machines, decision trees, random forests, gradient boosting, and K-nearest neighbors, as well as neural network models like deep learning, have proven valuable in predictive, prognostic, and diagnostic studies. Researchers have recently employed large language models to tackle new dimensions of problems. However, leveraging AI in clinical settings will require surpassing significant obstacles; a major issue is the lack of adherence to available reporting guidelines, which obstructs the reproducibility of published studies. In this review, we discuss the applications of AI methods and explore their benefits and limitations. We summarize the available guidelines for AI in healthcare and highlight the potential role and impact of AI models on future directions in cancer research.
Affiliation(s)
- Ankita Murmu: Institute of Molecular Life Sciences, HUN-REN Research Centre for Natural Sciences, Budapest, 1117, Hungary; National Laboratory for Drug Research and Development, Budapest, 1117, Hungary; Department of Bioinformatics, Semmelweis University, Budapest, 1094, Hungary
- Balázs Győrffy: Institute of Molecular Life Sciences, HUN-REN Research Centre for Natural Sciences, Budapest, 1117, Hungary; Department of Bioinformatics, Semmelweis University, Budapest, 1094, Hungary; Department of Biophysics, University of Pecs, Pecs, 7624, Hungary
27
Bourdillon AT. Computer Vision-Radiomics & Pathognomics. Otolaryngol Clin North Am 2024; 57:719-751. [PMID: 38910065 DOI: 10.1016/j.otc.2024.05.003]
Abstract
The role of computer vision in extracting radiographic (radiomics) and histopathologic (pathognomics) features is an extension of the molecular biomarkers that have been foundational to our understanding across the spectrum of head and neck disorders. Within head and neck cancers especially, machine learning and deep learning applications have yielded advances in the characterization of tumor features, nodal features, and various outcomes. This review aims to survey the landscape of radiomic and pathognomic applications and to inform future work addressing the remaining gaps. Novel methodologies will be needed to integrate multidimensional data inputs so that disease features can comprehensively guide prognosis and, ultimately, clinical management.
Affiliation(s)
- Alexandra T Bourdillon: Department of Otolaryngology-Head & Neck Surgery, University of California-San Francisco, San Francisco, CA 94115, USA
28
Maganur PC, Vishwanathaiah S, Mashyakhy M, Abumelha AS, Robaian A, Almohareb T, Almutairi B, Alzahrani KM, Binalrimal S, Marwah N, Khanagar SB, Manoharan V. Development of Artificial Intelligence Models for Tooth Numbering and Detection: A Systematic Review. Int Dent J 2024; 74:917-929. [PMID: 38851931 DOI: 10.1016/j.identj.2024.04.021] [Received: 01/17/2024] [Revised: 04/15/2024] [Accepted: 04/21/2024]
Abstract
Dental radiography is widely used in dental practices and offers a valuable resource for the development of AI technology. Consequently, many researchers have been drawn to explore its application in different areas. The current systematic review was undertaken to critically appraise the development and performance of artificial intelligence (AI) models designed for tooth numbering and detection using dento-maxillofacial radiographic images. To maintain methodological integrity, the authors of this systematic review followed the diagnostic test accuracy criteria outlined in PRISMA-DTA. An electronic search was conducted across databases including PubMed, Scopus, Embase, Cochrane, Web of Science, Google Scholar, and the Saudi Digital Library for articles published from 2018 to 2023. Sixteen articles that met the inclusion/exclusion criteria were subjected to risk-of-bias assessment using QUADAS-2, and certainty of evidence was assessed using the GRADE approach. AI technology has mainly been applied to automated tooth detection and numbering, to detecting teeth in CBCT images, and to identifying dental treatment patterns and approaches. The AI models in the included studies exhibited a precision of up to 99.4% for tooth detection and 98% for tooth numbering. The use of AI as a supplementary diagnostic tool in the field of dental radiology holds great potential.
Affiliation(s)
- Prabhadevi C Maganur: Division of Pediatric Dentistry, Department of Preventive Dental Science, College of Dentistry, Jazan University, Jazan, Saudi Arabia
- Satish Vishwanathaiah: Division of Pediatric Dentistry, Department of Preventive Dental Science, College of Dentistry, Jazan University, Jazan, Saudi Arabia
- Mohammed Mashyakhy: Restorative Dental Science Department, College of Dentistry, Jazan University, Jazan, Saudi Arabia
- Abdulaziz S Abumelha: Division of Endodontics, College of Dentistry, King Khalid University, Abha, Saudi Arabia
- Ali Robaian: Department of Conservative Dental Sciences, College of Dentistry, Prince Sattam Bin Abdulaziz University, Al Kharj, Saudi Arabia
- Thamer Almohareb: Division of Operative Dentistry, Department of Restorative Dental Sciences, College of Dentistry, King Saud University, Riyadh, Saudi Arabia
- Basil Almutairi: Division of Operative Dentistry, Department of Restorative Dental Sciences, College of Dentistry, King Saud University, Riyadh, Saudi Arabia
- Khaled M Alzahrani: Department of Prosthetic Dental Sciences, College of Dentistry, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Sultan Binalrimal: Restorative Department, College of Medicine and Dentistry, Riyadh Elm University, Riyadh, Saudi Arabia
- Nikhil Marwah: Department of Pediatric and Preventive Dentistry, Mahatma Gandhi Dental College and Hospital, Jaipur, Rajasthan, India
- Sanjeev B Khanagar: Preventive Dental Science Department, College of Dentistry, King Saud Bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; King Abdullah International Medical Research Center, National Guard Health Affairs, Riyadh, Saudi Arabia
- Varsha Manoharan: Department of Public Health Dentistry, KVG Dental College and Hospital, Sullia, Karnataka, India
29
Hou A, Luo H, Liu H, Luo L, Ding P. Multi-scale DNA language model improves 6 mA binding sites prediction. Comput Biol Chem 2024; 112:108129. [PMID: 39067351 DOI: 10.1016/j.compbiolchem.2024.108129] [Received: 03/04/2024] [Revised: 06/05/2024] [Accepted: 06/10/2024]
Abstract
DNA methylation at the N6 position of adenine (N6-methyladenine, 6 mA), the attachment of a methyl group to the N6 site of adenine (A) in DNA, is an important epigenetic modification in prokaryotic and eukaryotic genomes. Accurately predicting 6 mA binding sites can provide crucial insights into gene regulation, DNA repair, disease development, and more. Wet-lab experiments are commonly used to analyze 6 mA binding sites, but they are costly and time-consuming. Therefore, various deep learning methods have recently been used to predict 6 mA binding sites. In this study, we develop a framework based on a multi-scale DNA language model named "iDNA6mA-MDL". "iDNA6mA-MDL" integrates multiple kmers and the nucleotide property and frequency method for feature embedding, which can capture the full range of DNA sequence context information. At the prediction stage, it also leverages DNABERT to compensate for the incomplete capture of global DNA information. Experiments show that our framework achieves an average AUC of 0.981 on a classic 6 mA rice gene dataset, surpassing all existing advanced models under fivefold cross-validation. Moreover, "iDNA6mA-MDL" outperforms most popular state-of-the-art methods on another 11 6 mA datasets, demonstrating its effectiveness for 6 mA binding site prediction.
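A minimal sketch of the kind of feature embedding described here — multi-scale k-mer frequencies concatenated with a per-base nucleotide-property code. The 3-bit chemical-property scheme below (ring structure, hydrogen bond, functional group) is a common choice in 6 mA predictors; the exact features of iDNA6mA-MDL may differ:

```python
from itertools import product

def kmer_freqs(seq, k):
    """Frequency of each k-mer; calling with k = 1, 2, 3... gives the
    multi-scale part of the embedding."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = {km: 0 for km in kmers}
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    total = max(len(seq) - k + 1, 1)
    return [counts[km] / total for km in kmers]

# Nucleotide chemical-property encoding (assumed 3-bit scheme):
# (ring structure, hydrogen bond, functional group) per base.
PROPS = {"A": (1, 1, 1), "C": (0, 1, 0), "G": (1, 0, 0), "T": (0, 0, 1)}

def encode(seq, ks=(1, 2)):
    """Concatenate multi-scale k-mer frequencies with per-base properties."""
    feats = []
    for k in ks:
        feats.extend(kmer_freqs(seq, k))
    for base in seq:
        feats.extend(PROPS[base])
    return feats

# A 5-base toy window around a candidate adenine site:
vec = encode("ACGTA")   # 4 (1-mers) + 16 (2-mers) + 5*3 (properties) = 35 dims
```

Vectors like this would then be fed, alongside DNABERT embeddings, to the downstream classifier.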
Affiliation(s)
- Anlin Hou: School of Computer Science, University of South China, Hengyang 421001, China
- Hanyu Luo: School of Computer Science, University of South China, Hengyang 421001, China
- Huan Liu: School of Computer Science, University of South China, Hengyang 421001, China
- Lingyun Luo: School of Computer Science, University of South China, Hengyang 421001, China
- Pingjian Ding: School of Computer Science, University of South China, Hengyang 421001, China
30
Bajaj S, Bala M, Angurala M. A comparative analysis of different augmentations for brain images. Med Biol Eng Comput 2024; 62:3123-3150. [PMID: 38782880 DOI: 10.1007/s11517-024-03127-7] [Received: 10/25/2023] [Accepted: 05/10/2024]
Abstract
Deep learning (DL) requires a large amount of training data to improve performance and prevent overfitting. To overcome these difficulties, we need to increase the size of the training dataset. This can be done by augmentation on a small dataset. The augmentation approaches must enhance the model's performance during the learning period. There are several types of transformations that can be applied to medical images. These transformations can be applied to the entire dataset or to a subset of the data, depending on the desired outcome. In this study, we categorize data augmentation methods into four groups: Absent augmentation, where no modifications are made; basic augmentation, which includes brightness and contrast adjustments; intermediate augmentation, encompassing a wider array of transformations like rotation, flipping, and shifting in addition to brightness and contrast adjustments; and advanced augmentation, where all transformation layers are employed. We plan to conduct a comprehensive analysis to determine which group performs best when applied to brain CT images. This evaluation aims to identify the augmentation group that produces the most favorable results in terms of improving model accuracy, minimizing diagnostic errors, and ensuring the robustness of the model in the context of brain CT image analysis.
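The four augmentation groups can be pictured as increasingly long transform pipelines. A toy sketch on a 2-D grayscale "image" (list of lists); the group compositions and parameter values are illustrative assumptions, not the paper's settings:

```python
def adjust(img, brightness=0.0, contrast=1.0):
    """Pixel-wise brightness/contrast adjustment, clamped to 0-255."""
    return [[min(255, max(0, int((p - 128) * contrast + 128 + brightness)))
             for p in row] for row in img]

def hflip(img):
    """Horizontal flip."""
    return [row[::-1] for row in img]

def rotate90(img):
    """Rotate 90 degrees clockwise."""
    return [list(r) for r in zip(*img[::-1])]

# The four groups from the study, as transform pipelines (illustrative):
GROUPS = {
    "absent": [],                                   # no modification
    "basic": [lambda im: adjust(im, brightness=20, contrast=1.1)],
    "intermediate": [lambda im: adjust(im, brightness=20, contrast=1.1),
                     hflip, rotate90],               # + flip, rotation
    "advanced": [lambda im: adjust(im, brightness=20, contrast=1.1),
                 hflip, rotate90,
                 lambda im: adjust(im, contrast=0.9)],  # all layers
}

def augment(img, group):
    for transform in GROUPS[group]:
        img = transform(img)
    return img

img = [[0, 64], [128, 255]]
aug = augment(img, "intermediate")
```

In practice each pipeline would be applied stochastically per training sample; comparing models trained under each group is the study's experimental design.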
Affiliation(s)
- Shilpa Bajaj: Applied Sciences (Computer Applications), I.K. Gujral Punjab Technical University, Jalandhar, Kapurthala, India
- Manju Bala: Department of Computer Science and Engineering, Khalsa College of Engineering and Technology, Amritsar, India
- Mohit Angurala: Apex Institute of Technology (CSE), Chandigarh University, Gharuan, Mohali, Punjab, India
31
Lee JY, Lee YS, Tae JH, Chang IH, Kim TH, Myung SC, Nguyen TT, Lee JH, Choi J, Kim JH, Kim JW, Choi SY. Selection of Convolutional Neural Network Model for Bladder Tumor Classification of Cystoscopy Images and Comparison with Humans. J Endourol 2024; 38:1036-1043. [PMID: 38877795 DOI: 10.1089/end.2024.0250]
Abstract
Purpose: An investigation of various convolutional neural network (CNN)-based deep learning algorithms was conducted to select the appropriate artificial intelligence (AI) model for calculating the diagnostic performance of bladder tumor classification on cystoscopy images, with the performance of the selected model to be compared against that of medical students and urologists. Methods: A total of 3,731 cystoscopic images that contained 2,191 tumor images were obtained from 543 bladder tumor cases and 219 normal cases were evaluated. A total of 17 CNN models were trained for tumor classification with various hyperparameters. The diagnostic performance of the selected AI model was compared with the results obtained from urologists and medical students by using the receiver operating characteristic (ROC) curve graph and metrics. Results: EfficientNetB0 was selected as the appropriate AI model. In the test results, EfficientNetB0 achieved a balanced accuracy of 81%, sensitivity of 88%, specificity of 74%, and an area under the curve (AUC) of 92%. In contrast, human-derived diagnostic statistics for the test data showed an average balanced accuracy of 75%, sensitivity of 94%, and specificity of 55%. Specifically, urologists had an average balanced accuracy of 91%, sensitivity of 95%, and specificity of 88%, while medical students had an average balanced accuracy of 69%, sensitivity of 94%, and specificity of 44%. Conclusions: Among the various AI models, we suggest that EfficientNetB0 is an appropriate AI classification model for determining the presence of bladder tumors in cystoscopic images. EfficientNetB0 showed the highest performance among several models and showed high accuracy and specificity compared to medical students. This AI technology will be helpful for less experienced urologists or nonurologists in making diagnoses. 
Image-based deep learning classifies bladder cancer using cystoscopy images and shows promise for generalized applications in biomedical image analysis and clinical decision making.
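The headline metrics in this abstract — sensitivity, specificity, and balanced accuracy — follow directly from the confusion matrix. A short self-contained sketch with toy frame-level labels (not the study's data):

```python
def confusion(y_true, y_pred):
    """Confusion-matrix counts for binary labels (1 = tumor, 0 = normal)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def metrics(y_true, y_pred):
    tp, tn, fp, fn = confusion(y_true, y_pred)
    sensitivity = tp / (tp + fn)            # recall on tumor images
    specificity = tn / (tn + fp)            # recall on normal images
    balanced_acc = (sensitivity + specificity) / 2
    return sensitivity, specificity, balanced_acc

# Toy labels: 1 = bladder tumor present, 0 = normal mucosa.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 1]
sens, spec, bacc = metrics(y_true, y_pred)
```

Balanced accuracy averages the two class-wise recalls, which is why it is the fairer headline number when tumor and normal frames are imbalanced — the pattern seen in the study, where medical students reached high sensitivity but low specificity.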
Affiliation(s)
- Yong Seong Lee: Department of Urology, Chung-Ang University Gwangmyeong Hospital, Chung-Ang University College of Medicine, Gyeonggi-do, Korea
- Jong Hyun Tae: Department of Urology, Chung-Ang University Hospital, Chung-Ang University College of Medicine, Seoul, Korea
- In Ho Chang: Department of Urology, Chung-Ang University Hospital, Chung-Ang University College of Medicine, Seoul, Korea
- Tae-Hyoung Kim: Department of Urology, Chung-Ang University Hospital, Chung-Ang University College of Medicine, Seoul, Korea
- Soon Chul Myung: Department of Urology, Chung-Ang University Hospital, Chung-Ang University College of Medicine, Seoul, Korea
- Tuan Thanh Nguyen: Department of Urology, Cho Ray Hospital, University of Medicine and Pharmacy, Ho Chi Minh City, Vietnam
- Joongwon Choi: Department of Urology, Chung-Ang University Gwangmyeong Hospital, Chung-Ang University College of Medicine, Gyeonggi-do, Korea
- Jung Hoon Kim: Department of Urology, Chung-Ang University Gwangmyeong Hospital, Chung-Ang University College of Medicine, Gyeonggi-do, Korea
- Jin Wook Kim: Department of Urology, Chung-Ang University Gwangmyeong Hospital, Chung-Ang University College of Medicine, Gyeonggi-do, Korea
- Se Young Choi: Department of Urology, Chung-Ang University Hospital, Chung-Ang University College of Medicine, Seoul, Korea
32
Kudus K, Wagner M, Ertl-Wagner BB, Khalvati F. Applications of machine learning to MR imaging of pediatric low-grade gliomas. Childs Nerv Syst 2024; 40:3027-3035. [PMID: 38972953 DOI: 10.1007/s00381-024-06522-5] [Received: 04/25/2024] [Accepted: 06/21/2024]
Abstract
INTRODUCTION Machine learning (ML) shows promise for the automation of routine tasks related to the treatment of pediatric low-grade gliomas (pLGG), such as tumor grading, typing, and segmentation. Moreover, it has been shown that ML can identify crucial information in medical images that is otherwise currently unattainable. For example, ML appears to be capable of preoperatively identifying the underlying genetic status of pLGG. METHODS In this chapter, we reviewed, to the best of our knowledge, all published works that have used ML techniques for the imaging-based evaluation of pLGGs. Additionally, we aimed to provide some context on what it will take to go from the exploratory studies we reviewed to clinically deployed models. RESULTS Multiple studies have demonstrated that ML can accurately grade, type, and segment pLGGs and detect their genetic status. We compared the approaches used across studies and observed a high degree of variability in the methodologies. Standardization and cooperation between the numerous groups working on these approaches will be key to accelerating the clinical deployment of these models. CONCLUSION The studies reviewed in this chapter detail the potential for ML techniques to transform the treatment of pLGG. However, challenges remain to be overcome before clinical deployment.
Collapse
Affiliation(s)
- Kareem Kudus: Neurosciences & Mental Health Research Program, The Hospital for Sick Children, Toronto, Canada; Institute of Medical Science, University of Toronto, Toronto, Canada
- Matthias Wagner: Department of Diagnostic & Interventional Radiology, The Hospital for Sick Children, Toronto, Canada; Department of Diagnostic and Interventional Neuroradiology, University Hospital Augsburg, Augsburg, Germany
- Birgit Betina Ertl-Wagner: Neurosciences & Mental Health Research Program, The Hospital for Sick Children, Toronto, Canada; Institute of Medical Science, University of Toronto, Toronto, Canada; Department of Diagnostic & Interventional Radiology, The Hospital for Sick Children, Toronto, Canada; Department of Medical Imaging, University of Toronto, Toronto, Canada
- Farzad Khalvati: Neurosciences & Mental Health Research Program, The Hospital for Sick Children, Toronto, Canada; Institute of Medical Science, University of Toronto, Toronto, Canada; Department of Diagnostic & Interventional Radiology, The Hospital for Sick Children, Toronto, Canada; Department of Medical Imaging, University of Toronto, Toronto, Canada; Department of Computer Science, University of Toronto, Toronto, Canada; Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Canada
33
Mikhail D, Milad D, Antaki F, Hammamji K, Qian CX, Rezende FA, Duval R. The role of artificial intelligence in macular hole management: A scoping review. Surv Ophthalmol 2024:S0039-6257(24)00123-1. [PMID: 39357748] [DOI: 10.1016/j.survophthal.2024.09.003] [Received: 01/31/2024] [Revised: 09/16/2024] [Accepted: 09/23/2024] [Indexed: 10/04/2024]
Abstract
NARRATIVE ABSTRACT We focus on the utility of artificial intelligence (AI) in the management of macular hole (MH). We synthesize 25 studies, comprehensively reporting each AI model's development strategy, validation, tasks, performance, strengths, and limitations. All models analyzed ophthalmic images, and 5 (20%) also analyzed clinical features. Study objectives were categorized by the 3 stages of MH care: diagnosis, identification of MH characteristics, and postoperative prediction of hole closure and vision recovery. Twenty-two (88%) AI models underwent supervised learning, and the models were most often deployed to determine an MH diagnosis. None of the articles applied AI to guiding treatment plans. AI model performance was compared to other algorithms and to human graders. Of the 10 studies comparing AI to human graders (i.e., retinal specialists, general ophthalmologists, and ophthalmology trainees), 5 (50%) reported equivalent or higher performance. Overall, AI analysis of images and clinical characteristics in MH demonstrated high diagnostic and predictive accuracy. Convolutional neural networks comprised the majority of included AI models, including the highest-performing ones. Future research may validate algorithms that propose personalized treatment plans and explore the clinical use of these algorithms.
Affiliation(s)
- David Mikhail: Temerty Faculty of Medicine, University of Toronto, Toronto, Canada; Department of Ophthalmology, University of Montreal, Montreal, Canada
- Daniel Milad: Department of Ophthalmology, University of Montreal, Montreal, Canada; Department of Ophthalmology, Hôpital Maisonneuve-Rosemont, Montreal, Canada; Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Canada
- Fares Antaki: Department of Ophthalmology, University of Montreal, Montreal, Canada; Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Canada
- Karim Hammamji: Department of Ophthalmology, University of Montreal, Montreal, Canada; Department of Ophthalmology, Centre Hospitalier de l'Université de Montréal (CHUM), Montreal, Canada
- Cynthia X Qian: Department of Ophthalmology, University of Montreal, Montreal, Canada; Department of Ophthalmology, Hôpital Maisonneuve-Rosemont, Montreal, Canada
- Flavio A Rezende: Department of Ophthalmology, University of Montreal, Montreal, Canada; Department of Ophthalmology, Hôpital Maisonneuve-Rosemont, Montreal, Canada
- Renaud Duval: Department of Ophthalmology, University of Montreal, Montreal, Canada; Department of Ophthalmology, Hôpital Maisonneuve-Rosemont, Montreal, Canada
34
Anbarasi J, Kumari R, Ganesh M, Agrawal R. Translational Connectomics: overview of machine learning in macroscale Connectomics for clinical insights. BMC Neurol 2024; 24:364. [PMID: 39342171] [PMCID: PMC11438080] [DOI: 10.1186/s12883-024-03864-0] [Received: 05/03/2024] [Accepted: 09/16/2024] [Indexed: 10/01/2024]
Abstract
Connectomics is a neuroscience paradigm focused on noninvasively mapping highly intricate and organized networks of neurons. The advent of neuroimaging has enabled extensive mapping of the brain's functional and structural connectome at the macroscale through modalities such as functional and diffusion MRI. In parallel, the healthcare field has witnessed a surge in the application of machine learning and artificial intelligence for diagnostics, especially in imaging. While reviews covering machine learning and macroscale connectomics exist for specific disorders, none provide an overview that captures their evolving role, especially through the lens of clinical application and translation. The applications include understanding disorders, classification, identifying neuroimaging biomarkers, assessing severity, predicting outcomes and intervention response, identifying potential targets for brain stimulation, evaluating the effects of stimulation interventions on the brain, and connectome mapping in patients before neurosurgery. The covered studies span neurodegenerative, neurodevelopmental, neuropsychiatric, and neurological disorders. Alongside applications, the review provides a brief overview of common ML methods to set context, and it covers limitations of ML studies within connectomics together with strategies to mitigate them.
Affiliation(s)
- Janova Anbarasi: BrainSightAI, No. 677, 1st Floor, 27th Main, 13th Cross, HSR Layout, Sector 1, Adugodi, Bengaluru, Karnataka, 560102, India
- Radha Kumari: BrainSightAI, No. 677, 1st Floor, 27th Main, 13th Cross, HSR Layout, Sector 1, Adugodi, Bengaluru, Karnataka, 560102, India
- Malvika Ganesh: BrainSightAI, No. 677, 1st Floor, 27th Main, 13th Cross, HSR Layout, Sector 1, Adugodi, Bengaluru, Karnataka, 560102, India
- Rimjhim Agrawal: BrainSightAI, No. 677, 1st Floor, 27th Main, 13th Cross, HSR Layout, Sector 1, Adugodi, Bengaluru, Karnataka, 560102, India
35
Shah STH, Shah SAH, Khan II, Imran A, Shah SBH, Mehmood A, Qureshi SA, Raza M, Di Terlizzi A, Cavaglià M, Deriu MA. Data-driven classification and explainable-AI in the field of lung imaging. Front Big Data 2024; 7:1393758. [PMID: 39364222] [PMCID: PMC11446784] [DOI: 10.3389/fdata.2024.1393758] [Received: 02/29/2024] [Accepted: 09/03/2024] [Indexed: 10/05/2024]
Abstract
Detecting lung diseases in medical images can be quite challenging for radiologists. In some cases, even experienced experts may struggle to diagnose chest diseases accurately, leading to potential errors when biomarkers are complex or unseen. This review delves into the datasets and machine learning techniques employed in recent research on lung disease classification, focusing on pneumonia analysis using chest X-ray images. We explore conventional machine learning methods, pretrained deep learning models, customized convolutional neural networks (CNNs), and ensemble methods. A comprehensive comparison of different classification approaches is presented, encompassing data acquisition, preprocessing, feature extraction, and classification using machine vision, machine and deep learning, and explainable AI (XAI). Our analysis highlights the superior performance of transfer learning-based methods using CNNs and ensemble models/features for lung disease classification. In addition, our review offers insights for researchers in other medical domains who utilize radiological images. By providing a thorough overview of various techniques, our work enables the establishment of effective strategies and the identification of suitable methods for a wide range of challenges. Beyond traditional evaluation metrics, researchers now emphasize the importance of XAI techniques in machine and deep learning models and their applications in classification tasks. Incorporating XAI offers a deeper understanding of model decision-making processes, improving trust, transparency, and overall clinical decision-making. Our review serves as a valuable resource for researchers and practitioners seeking to advance lung disease detection using machine learning and XAI, as well as for those working in other domains.
Affiliation(s)
- Syed Taimoor Hussain Shah: PolitoBIOMed Lab, Department of Mechanical and Aerospace Engineering, Politecnico di Torino, Turin, Italy
- Syed Adil Hussain Shah: PolitoBIOMed Lab, Department of Mechanical and Aerospace Engineering, Politecnico di Torino, Turin, Italy; Department of Research and Development (R&D), GPI SpA, Trento, Italy
- Iqra Iqbal Khan: Department of Computer Science, Bahauddin Zakariya University, Multan, Pakistan
- Atif Imran: College of Electrical and Mechanical Engineering, National University of Sciences and Technology, Rawalpindi, Pakistan
- Syed Baqir Hussain Shah: Department of Computer Science, Commission on Science and Technology for Sustainable Development in the South (COMSATS) University Islamabad (CUI), Wah Campus, Wah, Pakistan
- Atif Mehmood: School of Computer Science and Technology, Zhejiang Normal University, Jinhua, China; Zhejiang Institute of Photoelectronics & Zhejiang Institute for Advanced Light Source, Zhejiang Normal University, Jinhua, Zhejiang, China
- Shahzad Ahmad Qureshi: Department of Computer and Information Sciences, Pakistan Institute of Engineering and Applied Sciences (PIEAS), Islamabad, Pakistan
- Mudassar Raza: Department of Computer Science, Namal University Mianwali, Mianwali, Pakistan; Department of Computer Science, Heavy Industries Taxila Education City (HITEC), University of Taxila, Taxila, Pakistan
- Marco Cavaglià: PolitoBIOMed Lab, Department of Mechanical and Aerospace Engineering, Politecnico di Torino, Turin, Italy
- Marco Agostino Deriu: PolitoBIOMed Lab, Department of Mechanical and Aerospace Engineering, Politecnico di Torino, Turin, Italy
36
Gohla G, Estler A, Zerweck L, Knoppik J, Ruff C, Werner S, Nikolaou K, Ernemann U, Afat S, Brendlin A. Deep Learning-Based Denoising Enables High-Quality, Fully Diagnostic Neuroradiological Trauma CT at 25% Radiation Dose. Acad Radiol 2024:S1076-6332(24)00581-6. [PMID: 39294053] [DOI: 10.1016/j.acra.2024.08.018] [Received: 04/19/2024] [Revised: 08/04/2024] [Accepted: 08/09/2024] [Indexed: 09/20/2024]
Abstract
RATIONALE AND OBJECTIVES Traumatic neuroradiological emergencies necessitate rapid and accurate diagnosis, often relying on computed tomography (CT). However, the associated ionizing radiation poses long-term risks. Modern artificial intelligence reconstruction algorithms have shown promise in reducing radiation dose while maintaining image quality. Therefore, we aimed to evaluate the dose reduction capabilities of a deep learning-based denoising (DLD) algorithm in traumatic neuroradiological emergency CT scans. MATERIALS AND METHODS This retrospective single-center study included 100 patients with neuroradiological trauma CT scans. Full-dose (100%) and low-dose (25%) simulated scans were processed using iterative reconstruction (IR2) and DLD. Subjective and objective image quality assessments were performed by four neuroradiologists alongside clinical endpoint analysis. Bayesian sensitivity and specificity were computed with 95% credible intervals. RESULTS Subjective analysis showed superior scores for 100% DLD compared to 100% IR2 and 25% IR2 (p < 0.001). No significant differences were observed between 25% DLD and 100% IR2. Objective analysis revealed no significant CT value differences but higher noise at 25% dose for DLD and IR2 compared to 100% (p < 0.001). DLD exhibited lower noise than IR2 at both dose levels (p < 0.001). Clinical endpoint analysis indicated equivalence to 100% IR2 in fracture detection for all datasets, with sensitivity losses in hemorrhage detection at 25% IR2. DLD (25% and 100%) maintained comparable sensitivity to 100% IR2. All comparisons demonstrated robust specificity. CONCLUSIONS The evaluated algorithm enables high-quality, fully diagnostic CT scans at 25% of the initial radiation dose and improves patient care by reducing unnecessary radiation exposure.
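The Bayesian sensitivity and specificity with 95% credible intervals mentioned in this abstract can be illustrated with a short stdlib-only Monte Carlo sketch over a Beta posterior. This is an assumption-laden illustration (uniform Beta(1, 1) prior, hypothetical counts), not the study's actual analysis code.

```python
import random


def bayes_sens_spec(tp, fn, tn, fp, n_draws=100_000, seed=0):
    """Posterior mean and 95% credible interval for sensitivity and
    specificity under an assumed uniform Beta(1, 1) prior (the paper
    does not state its prior; this is a hypothetical choice)."""
    rng = random.Random(seed)

    def beta_summary(successes, failures):
        # Posterior is Beta(successes + 1, failures + 1); sample it and
        # take the empirical 2.5th and 97.5th percentiles.
        draws = sorted(rng.betavariate(successes + 1, failures + 1)
                       for _ in range(n_draws))
        mean = sum(draws) / n_draws
        return mean, (draws[int(0.025 * n_draws)],
                      draws[int(0.975 * n_draws)])

    return {"sensitivity": beta_summary(tp, fn),
            "specificity": beta_summary(tn, fp)}


# Hypothetical reader counts: 45 of 50 hemorrhages detected,
# 95 of 100 negatives correctly cleared.
result = bayes_sens_spec(tp=45, fn=5, tn=95, fp=5)
```

A narrow credible interval here reflects the amount of data, not reader skill; with only 50 positives the sensitivity interval stays fairly wide.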
Affiliation(s)
- Georg Gohla: Department of Diagnostic and Interventional Neuroradiology, Eberhard Karls-University Tuebingen, D-72076 Tuebingen, Germany
- Arne Estler: Department of Diagnostic and Interventional Neuroradiology, Eberhard Karls-University Tuebingen, D-72076 Tuebingen, Germany
- Leonie Zerweck: Department of Diagnostic and Interventional Neuroradiology, Eberhard Karls-University Tuebingen, D-72076 Tuebingen, Germany
- Jessica Knoppik: Department of Diagnostic and Interventional Neuroradiology, Eberhard Karls-University Tuebingen, D-72076 Tuebingen, Germany
- Christer Ruff: Department of Diagnostic and Interventional Neuroradiology, Eberhard Karls-University Tuebingen, D-72076 Tuebingen, Germany
- Sebastian Werner: Department of Diagnostic and Interventional Radiology, Eberhard Karls-University Tuebingen, D-72076 Tuebingen, Germany
- Konstantin Nikolaou: Department of Diagnostic and Interventional Radiology, Eberhard Karls-University Tuebingen, D-72076 Tuebingen, Germany
- Ulrike Ernemann: Department of Diagnostic and Interventional Neuroradiology, Eberhard Karls-University Tuebingen, D-72076 Tuebingen, Germany
- Saif Afat: Department of Diagnostic and Interventional Radiology, Eberhard Karls-University Tuebingen, D-72076 Tuebingen, Germany
- Andreas Brendlin: Department of Diagnostic and Interventional Radiology, Eberhard Karls-University Tuebingen, D-72076 Tuebingen, Germany
37
Zhao Z, Bakar EBA, Razak NBA, Akhtar MN. Corrosion image classification method based on EfficientNetV2. Heliyon 2024; 10:e36754. [PMID: 39286174] [PMCID: PMC11403497] [DOI: 10.1016/j.heliyon.2024.e36754] [Received: 06/11/2024] [Revised: 07/22/2024] [Accepted: 08/21/2024] [Indexed: 09/19/2024]
Abstract
Corrosion is one of the key factors leading to material failure, and it can occur in facilities and equipment closely tied to people's lives, causing structural damage and thus threatening the safety of people and property. To identify corrosion more effectively across multiple facilities and equipment, this paper uses a binary corrosion classification dataset containing various materials to develop a CNN classification model for better detection and distinction of material corrosion, following a transfer learning and fine-tuning paradigm. The proposed implementation first applies data augmentation to enhance the dataset and trains different sizes of EfficientNetV2, evaluated using the confusion matrix, ROC curve, and Precision, Recall, and F1-score values. To further improve the testing results, the paper examines the impact of using a Global Average Pooling layer versus a Global Max Pooling layer, as well as the number of fine-tuned layers. The results show that the Global Average Pooling layer performs better, and that EfficientNetV2B0 with a fine-tuning rate of 20% and EfficientNetV2S with a fine-tuning rate of 15% achieve the highest testing accuracy of 0.9176, an ROC-AUC value of 0.97, and Precision, Recall, and F1-score values exceeding 0.9. These findings can serve as a reference for other corrosion classification models that use EfficientNetV2.
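The evaluation metrics this abstract relies on (Precision, Recall, F1-score, accuracy from a confusion matrix) are straightforward to compute; the following is a self-contained toy sketch with made-up labels, not the paper's Keras/EfficientNetV2 pipeline.

```python
def binary_classification_metrics(y_true, y_pred):
    """Confusion-matrix counts and derived metrics for binary labels
    (1 = corroded, 0 = clean; the label convention is hypothetical)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1,
            "accuracy": (tp + tn) / len(y_true)}


# Toy example: 6 test images, 3 truly corroded.
metrics = binary_classification_metrics([1, 1, 1, 0, 0, 0],
                                        [1, 1, 0, 0, 0, 1])
```

In the balanced toy case above, precision, recall, and F1 coincide; on imbalanced corrosion datasets they can diverge sharply, which is why the paper reports all three.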
Affiliation(s)
- Ziheng Zhao: School of Aerospace Engineering, Kampus Kejuruteraan, Universiti Sains Malaysia, 14300, Nibong Tebal, Pulau Pinang, Malaysia
- Elmi Bin Abu Bakar: School of Aerospace Engineering, Kampus Kejuruteraan, Universiti Sains Malaysia, 14300, Nibong Tebal, Pulau Pinang, Malaysia
- Norizham Bin Abdul Razak: School of Aerospace Engineering, Kampus Kejuruteraan, Universiti Sains Malaysia, 14300, Nibong Tebal, Pulau Pinang, Malaysia
- Mohammad Nishat Akhtar: School of Aerospace Engineering, Kampus Kejuruteraan, Universiti Sains Malaysia, 14300, Nibong Tebal, Pulau Pinang, Malaysia
38
Shenouda M, Gudmundsson E, Li F, Straus CM, Kindler HL, Dudek AZ, Stinchcombe T, Wang X, Starkey A, Armato SG III. Convolutional Neural Networks for Segmentation of Pleural Mesothelioma: Analysis of Probability Map Thresholds (CALGB 30901, Alliance). J Imaging Inform Med 2024:10.1007/s10278-024-01092-z. [PMID: 39266911] [DOI: 10.1007/s10278-024-01092-z] [Received: 12/04/2023] [Revised: 02/28/2024] [Accepted: 02/29/2024] [Indexed: 09/14/2024]
Abstract
The purpose of this study was to evaluate the impact of probability map threshold on pleural mesothelioma (PM) tumor delineations generated using a convolutional neural network (CNN). One hundred eighty-six CT scans from 48 PM patients were segmented by a VGG16/U-Net CNN. A radiologist modified the contours generated at a 0.5 probability threshold. Percent difference of tumor volume and overlap using the Dice Similarity Coefficient (DSC) were compared between the reference standard provided by the radiologist and CNN outputs for thresholds ranging from 0.001 to 0.9. CNN-derived contours consistently yielded smaller tumor volumes than radiologist contours. Reducing the probability threshold from 0.5 to 0.01 decreased the absolute percent volume difference, on average, from 42.93% to 26.60%. Median and mean DSC ranged from 0.57 to 0.59, with a peak at a threshold of 0.2; no distinct threshold was found for percent volume difference. The CNN exhibited deficiencies with specific disease presentations, such as severe pleural effusion or disease in the pleural fissure. No single output threshold in the CNN probability maps was optimal for both tumor volume and DSC. This study emphasized the importance of considering both figures of merit when evaluating deep learning-based tumor segmentations across probability thresholds: while automated segmentations may yield tumor volumes comparable to the reference standard, the spatial region delineated by the CNN at a specific threshold is equally important.
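The two figures of merit used in this study, Dice similarity coefficient and percent difference in tumor volume, can both be computed from a thresholded probability map. A minimal sketch with toy flat arrays (the study's masks are 3D CT volumes; this is not its pipeline):

```python
def dice_and_volume_diff(prob_map, reference, threshold):
    """Binarize a CNN probability map at `threshold`, then compare it
    with a reference mask.  `prob_map` is a flat list of voxel
    probabilities, `reference` a flat list of 0/1 labels (toy stand-ins
    for 3D volumes).  Returns (DSC, absolute percent volume diff)."""
    pred = [1 if p >= threshold else 0 for p in prob_map]
    inter = sum(p and r for p, r in zip(pred, reference))
    vol_pred, vol_ref = sum(pred), sum(reference)
    dsc = (2 * inter / (vol_pred + vol_ref)
           if vol_pred + vol_ref else 1.0)
    pct = (abs(vol_pred - vol_ref) / vol_ref * 100
           if vol_ref else float("nan"))
    return dsc, pct


# Lowering the threshold grows the predicted volume, which is how the
# study reduced the CNN's systematic under-segmentation.
dsc_05, pct_05 = dice_and_volume_diff([0.9, 0.6, 0.4, 0.1],
                                      [1, 1, 1, 0], threshold=0.5)
dsc_02, pct_02 = dice_and_volume_diff([0.9, 0.6, 0.4, 0.1],
                                      [1, 1, 1, 0], threshold=0.2)
```

The toy example mirrors the paper's observation: the threshold that minimizes volume difference need not be the one that maximizes DSC, so both must be checked.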
Affiliation(s)
- Mena Shenouda: Department of Radiology, The University of Chicago, Chicago, IL, 60637, USA
- Feng Li: Department of Radiology, The University of Chicago, Chicago, IL, 60637, USA
- Hedy L Kindler: Department of Medicine, The University of Chicago, Chicago, IL, 60637, USA
- Arkadiusz Z Dudek: Metro Minnesota Community Oncology Research Consortium, St. Louis Park, MN, 55416, USA
- Xiaofei Wang: Alliance Statistics and Data Management Center, Duke University, Durham, NC, 27710, USA
- Adam Starkey: Department of Radiology, The University of Chicago, Chicago, IL, 60637, USA
- Samuel G Armato III: Department of Radiology, The University of Chicago, Chicago, IL, 60637, USA
39
Mai Z, Shen H, Zhang A, Sun HZ, Zheng L, Guo J, Liu C, Chen Y, Wang C, Ye J, Zhu L, Fu TM, Yang X, Tao S. Convolutional Neural Networks Facilitate Process Understanding of Megacity Ozone Temporal Variability. Environ Sci Technol 2024; 58:15691-15701. [PMID: 38485962] [DOI: 10.1021/acs.est.3c07907] [Indexed: 08/21/2024]
Abstract
Ozone pollution is profoundly modulated by meteorological features such as temperature, air pressure, wind, and humidity. While many studies have developed empirical models to elucidate the effects of meteorology on ozone variability, they predominantly focus on local weather conditions, overlooking the influence of high-altitude and broader regional meteorological patterns. Here, we employ convolutional neural networks (CNNs), a technique typically applied to image recognition, to investigate the influence of three-dimensional spatial variations in meteorological fields on the daily, seasonal, and interannual dynamics of ozone in Shenzhen, a major coastal urban center in China. Our optimized CNN model, covering a 13° × 13° spatial domain, effectively explains over 70% of daily ozone variability, outperforming alternative empirical approaches by 7 to 62%. Model interpretations reveal the crucial roles of 2-m temperature and humidity as primary drivers, contributing 16% and 15% to daily ozone fluctuations, respectively. Regional wind fields account for up to 40% of ozone changes during ozone episodes. The CNNs successfully replicate observed ozone temporal patterns, attributing 5-6 μg·m⁻³ of interannual ozone variability to weather anomalies. Our interpretable CNN framework enables quantitative attribution of historical ozone fluctuations to nonlinear meteorological effects across spatiotemporal scales, offering vital process-based insights for managing megacity air quality amidst changing climate regimes.
Affiliation(s)
- Zelin Mai: Shenzhen Key Laboratory of Precision Measurement and Early Warning Technology for Urban Environmental Health Risks, School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China; Guangdong Provincial Observation and Research Station for Coastal Atmosphere and Climate of the Greater Bay Area, School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China
- Huizhong Shen: Shenzhen Key Laboratory of Precision Measurement and Early Warning Technology for Urban Environmental Health Risks, School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China; Guangdong Provincial Observation and Research Station for Coastal Atmosphere and Climate of the Greater Bay Area, School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China
- Aoxing Zhang: Shenzhen Key Laboratory of Precision Measurement and Early Warning Technology for Urban Environmental Health Risks, School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China; Guangdong Provincial Observation and Research Station for Coastal Atmosphere and Climate of the Greater Bay Area, School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China
- Haitong Zhe Sun: Centre for Atmospheric Science, Yusuf Hamied Department of Chemistry, University of Cambridge, Cambridge CB2 1EW, U.K.; Centre for Sustainable Medicine, Yong Loo Lin School of Medicine, National University of Singapore, Singapore 117609, Republic of Singapore
- Lianming Zheng: Shenzhen Key Laboratory of Precision Measurement and Early Warning Technology for Urban Environmental Health Risks, School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China; Guangdong Provincial Observation and Research Station for Coastal Atmosphere and Climate of the Greater Bay Area, School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China
- Jianfeng Guo: Shenzhen Ecology and Environment Monitoring Centre of Guangdong Province, Shenzhen 518049, China
- Chanfang Liu: Shenzhen Ecology and Environment Monitoring Centre of Guangdong Province, Shenzhen 518049, China
- Yilin Chen: School of Urban Planning and Design, Peking University, Shenzhen Graduate School, Shenzhen 518055, China
- Chen Wang: Shenzhen Key Laboratory of Precision Measurement and Early Warning Technology for Urban Environmental Health Risks, School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China; Guangdong Provincial Observation and Research Station for Coastal Atmosphere and Climate of the Greater Bay Area, School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China
- Jianhuai Ye: Shenzhen Key Laboratory of Precision Measurement and Early Warning Technology for Urban Environmental Health Risks, School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China; Guangdong Provincial Observation and Research Station for Coastal Atmosphere and Climate of the Greater Bay Area, School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China
- Lei Zhu: Shenzhen Key Laboratory of Precision Measurement and Early Warning Technology for Urban Environmental Health Risks, School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China; Guangdong Provincial Observation and Research Station for Coastal Atmosphere and Climate of the Greater Bay Area, School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China
- Tzung-May Fu: Shenzhen Key Laboratory of Precision Measurement and Early Warning Technology for Urban Environmental Health Risks, School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China; Guangdong Provincial Observation and Research Station for Coastal Atmosphere and Climate of the Greater Bay Area, School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China
- Xin Yang: Shenzhen Key Laboratory of Precision Measurement and Early Warning Technology for Urban Environmental Health Risks, School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China; Guangdong Provincial Observation and Research Station for Coastal Atmosphere and Climate of the Greater Bay Area, School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China
- Shu Tao: Shenzhen Key Laboratory of Precision Measurement and Early Warning Technology for Urban Environmental Health Risks, School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China; Guangdong Provincial Observation and Research Station for Coastal Atmosphere and Climate of the Greater Bay Area, School of Environmental Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China; College of Urban and Environmental Sciences, Peking University, Beijing 100871, China; Institute of Carbon Neutrality, Peking University, Beijing 100871, China
40
Yari A, Fasih P, Hosseini Hooshiar M, Goodarzi A, Fattahi SF. Detection and classification of mandibular fractures in panoramic radiography using artificial intelligence. Dentomaxillofac Radiol 2024; 53:363-371. [PMID: 38652576] [PMCID: PMC11358630] [DOI: 10.1093/dmfr/twae018] [Received: 02/26/2024] [Revised: 04/11/2024] [Accepted: 04/19/2024] [Indexed: 04/25/2024]
Abstract
OBJECTIVES This study evaluated the performance of the YOLOv5 deep learning model in detecting different mandibular fracture types in panoramic images. METHODS The dataset of panoramic radiographs with mandibular fractures was divided into training, validation, and testing sets containing 60%, 20%, and 20% of the images, respectively. An equal number of control images without fractures were distributed among the datasets. The YOLOv5 algorithm was trained to detect six mandibular fracture types based on anatomical location: symphysis, body, angle, ramus, condylar neck, and condylar head. Performance metrics of accuracy, precision, sensitivity (recall), specificity, Dice coefficient (F1 score), and area under the curve (AUC) were calculated for each class. RESULTS A total of 498 panoramic images containing 673 fractures were collected. Accuracy was highest in detecting body (96.21%) and symphysis (95.87%) fractures and lowest in angle (90.51%) fractures. The highest and lowest precision values were observed in detecting symphysis (95.45%) and condylar head (63.16%) fractures, respectively. Sensitivity was highest in body (96.67%) fractures and lowest in condylar head (80.00%) and condylar neck (81.25%) fractures. The highest specificities were noted in symphysis (98.96%), body (96.08%), and ramus (96.04%) fractures. The Dice coefficient and AUC were highest in detecting body fractures (0.921 and 0.942, respectively) and lowest in detecting condylar head fractures (0.706 and 0.812, respectively). CONCLUSION The trained algorithm achieved promising results in detecting most fracture types, particularly in the body and symphysis regions, indicating the potential of machine learning as a diagnostic aid for clinicians.
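The per-class metrics reported in this abstract (accuracy, precision, sensitivity, specificity, F1) are one-vs-rest quantities computed per fracture type. A minimal sketch of that derivation with toy class labels (not the study's data or its YOLOv5 evaluation code):

```python
def one_vs_rest_metrics(y_true, y_pred, cls):
    """Accuracy, precision, sensitivity (recall), specificity, and F1
    for one fracture class treated as positive against all others."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == cls and p == cls for t, p in pairs)
    fp = sum(t != cls and p == cls for t, p in pairs)
    fn = sum(t == cls and p != cls for t, p in pairs)
    tn = sum(t != cls and p != cls for t, p in pairs)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    prec = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * prec * sens / (prec + sens) if prec + sens else 0.0
    return {"accuracy": (tp + tn) / len(pairs), "precision": prec,
            "sensitivity": sens, "specificity": spec, "f1": f1}


# Hypothetical labels for four radiographs.
m = one_vs_rest_metrics(
    y_true=["body", "body", "angle", "symphysis"],
    y_pred=["body", "angle", "angle", "symphysis"],
    cls="body")
```

Because every class other than the positive one counts toward the negatives, specificity tends to look high even when a rare class (such as condylar head here) has poor precision, which matches the pattern in the reported numbers.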
Affiliation(s)
- Amir Yari: Department of Oral and Maxillofacial Surgery, School of Dentistry, Kashan University of Medical Sciences, Kashan, 8715973474, Iran
- Paniz Fasih: Department of Prosthodontics, School of Dentistry, Kashan University of Medical Sciences, Kashan, 8715973474, Iran
- Mohammad Hosseini Hooshiar: Department of Periodontics, School of Dentistry, Tehran University of Medical Sciences, Tehran, 1439955991, Iran
- Ali Goodarzi: Department of Oral and Maxillofacial Surgery, School of Dentistry, Shiraz University of Medical Sciences, Shiraz, 7195615878, Iran
- Seyedeh Farnaz Fattahi: Department of Prosthodontics, School of Dentistry, Shiraz University of Medical Sciences, Shiraz, 7195615878, Iran
41
Zhao Y, Coppola A, Karamchandani U, Amiras D, Gupte CM. Artificial intelligence applied to magnetic resonance imaging reliably detects the presence, but not the location, of meniscus tears: a systematic review and meta-analysis. Eur Radiol 2024; 34:5954-5964. [PMID: 38386028] [PMCID: PMC11364796] [DOI: 10.1007/s00330-024-10625-7] [Received: 12/24/2023] [Revised: 12/24/2023] [Accepted: 01/13/2024] [Indexed: 02/23/2024]
Abstract
OBJECTIVES To review and compare the accuracy of convolutional neural networks (CNN) for the diagnosis of meniscal tears in the current literature and analyze the decision-making processes utilized by these CNN algorithms. MATERIALS AND METHODS PubMed, MEDLINE, EMBASE, and Cochrane databases up to December 2022 were searched in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-analysis (PRISMA) statement. Risk of bias was assessed for all identified articles. Predictive performance values, including sensitivity and specificity, were extracted for quantitative analysis. The meta-analysis was divided between AI prediction models identifying the presence of meniscus tears and those identifying the location of meniscus tears. RESULTS Eleven articles were included in the final review, with a total of 13,467 patients and 57,551 images. Heterogeneity was large and statistically significant for the sensitivity of the tear identification analysis (I2 = 79%). A higher level of accuracy was observed in identifying the presence of a meniscal tear than in locating tears in specific regions of the meniscus (AUC, 0.939 vs 0.905). Pooled sensitivity and specificity were 0.87 (95% confidence interval (CI) 0.80-0.91) and 0.89 (95% CI 0.83-0.93) for meniscus tear identification, and 0.88 (95% CI 0.82-0.91) and 0.84 (95% CI 0.81-0.85) for locating the tears. CONCLUSIONS AI prediction models achieved favorable performance in the diagnosis, but not the location, of meniscus tears. Further studies on the clinical utility of deep learning should include standardized reporting, external validation, and full reports of the predictive performance of these models, with a view to localizing tears more accurately. CLINICAL RELEVANCE STATEMENT Meniscus tears are hard to diagnose on knee magnetic resonance images. AI prediction models may play an important role in improving the diagnostic accuracy of clinicians and radiologists.
KEY POINTS • Artificial intelligence (AI) shows great potential for improving the diagnosis of meniscus tears. • Pooled diagnostic performance of AI was better for identifying meniscus tears (sensitivity 87%, specificity 89%) than for locating them (sensitivity 88%, specificity 84%). • AI is good at confirming the diagnosis of meniscus tears, but future work is required to guide the management of the disease.
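Pooled estimates of this kind combine per-study proportions into one summary. The review's exact model is not reproduced here; as an illustrative simplification, a fixed-effect inverse-variance pooling on the logit scale (a common ingredient of such meta-analyses) can be sketched as:

```python
import math

def pool_logit(proportions, sample_sizes):
    """Fixed-effect inverse-variance pooling of proportions
    (e.g. per-study sensitivities) on the logit scale."""
    logits, weights = [], []
    for p, n in zip(proportions, sample_sizes):
        events = p * n
        var = 1.0 / events + 1.0 / (n - events)   # logit-scale variance
        logits.append(math.log(p / (1.0 - p)))
        weights.append(1.0 / var)
    pooled = sum(w * x for w, x in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))        # back-transform

# Hypothetical per-study sensitivities and sample sizes
pooled_sens = pool_logit([0.83, 0.90, 0.87], [120, 250, 400])
```

Published diagnostic-accuracy meta-analyses usually prefer bivariate random-effects models that pool sensitivity and specificity jointly, which this sketch does not attempt.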
Affiliation(s)
- Yi Zhao
- Imperial College London School of Medicine, Exhibition Rd, South Kensington, London, SW7 2BU, UK.
- Andrew Coppola
- Imperial College London School of Medicine, Exhibition Rd, South Kensington, London, SW7 2BU, UK
- Dimitri Amiras
- Imperial College London School of Medicine, Exhibition Rd, South Kensington, London, SW7 2BU, UK
- Imperial College London NHS Trust, London, UK
- Chinmay M Gupte
- Imperial College London School of Medicine, Exhibition Rd, South Kensington, London, SW7 2BU, UK
- Imperial College London NHS Trust, London, UK
42
Fedato Tobias RS, Teodoro AB, Evangelista K, Leite AF, Valladares-Neto J, de Freitas Silva BS, Yamamoto-Silva FP, Almeida FT, Silva MAG. Diagnostic capability of artificial intelligence tools for detecting and classifying odontogenic cysts and tumors: a systematic review and meta-analysis. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 138:414-426. [PMID: 38845306 DOI: 10.1016/j.oooo.2024.03.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2023] [Revised: 03/09/2024] [Accepted: 03/11/2024] [Indexed: 08/23/2024]
Abstract
OBJECTIVE To evaluate the diagnostic capability of artificial intelligence (AI) for detecting and classifying odontogenic cysts and tumors, with special emphasis on odontogenic keratocyst (OKC) and ameloblastoma. STUDY DESIGN Nine electronic databases and the gray literature were examined. Human-based studies using AI algorithms to detect or classify odontogenic cysts and tumors by using panoramic radiographs or CBCT were included. Diagnostic tests were evaluated, and a meta-analysis was performed for classifying OKCs and ameloblastomas. Heterogeneity, risk of bias, and certainty of evidence were evaluated. RESULTS Twelve studies concluded that AI is a promising tool for the detection and/or classification of lesions, producing high diagnostic test values. Three articles assessed the sensitivity of convolutional neural networks in classifying similar lesions using panoramic radiographs, specifically OKC and ameloblastoma. The accuracy was 0.893 (95% CI 0.832-0.954). AI applied to cone beam computed tomography produced superior accuracy based on only 4 studies. The results revealed heterogeneity in the models used, variations in imaging examinations, and discrepancies in the presentation of metrics. CONCLUSION AI tools exhibited a relatively high level of accuracy in detecting and classifying OKC and ameloblastoma. Panoramic radiography appears to be an accurate method for AI-based classification of these lesions, albeit with a low level of certainty. The accuracy of CBCT model data appears to be high and promising, although with limited available data.
Affiliation(s)
- Ana Beatriz Teodoro
- Graduate Program, School of Dentistry, Federal University of Goias, Goiânia, Goiás, Brazil
- Karine Evangelista
- Department of Orthodontics, School of Dentistry, Federal University of Goias, Goiânia, Goiás, Brazil
- André Ferreira Leite
- Oral and Maxillofacial Radiology, Department of Dentistry, Faculty of Health Sciences, Brasília-DF, Brazil
- José Valladares-Neto
- Department of Orthodontics, School of Dentistry, Federal University of Goias, Goiânia, Goiás, Brazil
- Fabiana T Almeida
- Oral and Maxillofacial Radiology, Faculty of Medicine and Dentistry, University of Alberta, Canada
43
Ribeiro E, Cardenas DAC, Dias FM, Krieger JE, Gutierrez MA. Explainable artificial intelligence in deep learning-based detection of aortic elongation on chest X-ray images. Eur Heart J Digit Health 2024; 5:524-534. [PMID: 39318689 PMCID: PMC11417491 DOI: 10.1093/ehjdh/ztae045] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/28/2024] [Revised: 06/03/2024] [Accepted: 06/16/2024] [Indexed: 09/26/2024]
Abstract
Aims Aortic elongation can result from age-related changes, congenital factors, aneurysms, or conditions affecting blood vessel elasticity. It is associated with cardiovascular diseases and severe complications like aortic aneurysms and dissection. We qualitatively and quantitatively assess explainable methods for understanding the decisions of a deep learning model that detects aortic elongation using chest X-ray (CXR) images. Methods and results In this study, we evaluated the performance of deep learning models (DenseNet and EfficientNet) for detecting aortic elongation using transfer learning and fine-tuning techniques with CXR images as input. EfficientNet achieved higher accuracy (86.7% ± 2.1), precision (82.7% ± 2.7), specificity (89.4% ± 1.7), F1 score (82.5% ± 2.9), and area under the receiver operating characteristic curve (92.7% ± 0.6) but lower sensitivity (82.3% ± 3.2) compared with DenseNet. To gain insight into the decision-making process of these models, we employed the gradient-weighted class activation mapping and local interpretable model-agnostic explanations explainability methods, which enabled us to identify the expected location of aortic elongation in CXR images. Additionally, we used the pixel-flipping method to quantitatively assess the model interpretations, providing valuable insight into model behaviour. Conclusion Our study presents a comprehensive strategy for analysing CXR images by integrating aortic elongation detection models with explainable artificial intelligence techniques. By enhancing the interpretability and understanding of the models' decisions, this approach holds promise for aiding clinicians in timely and accurate diagnosis, potentially improving patient outcomes in clinical practice.
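The pixel-flipping evaluation mentioned in this abstract deletes input features in decreasing order of attributed relevance and tracks how quickly the model's output degrades; a faithful explanation yields a steep early drop. A minimal sketch over a flattened image, with a stand-in linear `model` callable (all names hypothetical, not the study's implementation):

```python
def pixel_flipping_curve(image, relevance, model, steps=10):
    """Zero pixels in decreasing order of attributed relevance and
    record the model score after each batch of deletions."""
    order = sorted(range(len(image)), key=lambda i: relevance[i], reverse=True)
    img = list(image)
    scores = [model(img)]
    batch = max(1, len(order) // steps)
    for k in range(0, len(order), batch):
        for i in order[k:k + batch]:
            img[i] = 0.0
        scores.append(model(img))
    return scores

# Stand-in linear "model" and a hypothetical relevance map
weights = [4.0, 3.0, 2.0, 1.0]
model = lambda img: sum(w * x for w, x in zip(weights, img))
curve = pixel_flipping_curve([1.0, 1.0, 1.0, 1.0], weights, model, steps=4)
# curve falls from 10.0 to 0.0 as the most relevant pixels are removed
```

Comparing the curves produced by two attribution methods (e.g. Grad-CAM vs. LIME) gives the kind of quantitative comparison the study describes: the method whose curve falls faster attributed relevance more faithfully.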
Affiliation(s)
- Estela Ribeiro
- Heart Institute (InCor), Clinics Hospital University of Sao Paulo Medical School (HCFMUSP), Av. Dr. Enéas Carvalho de Aguiar, 44 - Cerqueira César, São Paulo, SP 05403-900, Brazil
- University of Sao Paulo Medical School (FMUSP), Av. Dr. Arnaldo, 455 - Cerqueira César, Pacaembu, SP 01246-903, Brazil
- Diego A C Cardenas
- Heart Institute (InCor), Clinics Hospital University of Sao Paulo Medical School (HCFMUSP), Av. Dr. Enéas Carvalho de Aguiar, 44 - Cerqueira César, São Paulo, SP 05403-900, Brazil
- Felipe M Dias
- Heart Institute (InCor), Clinics Hospital University of Sao Paulo Medical School (HCFMUSP), Av. Dr. Enéas Carvalho de Aguiar, 44 - Cerqueira César, São Paulo, SP 05403-900, Brazil
- Polytechnique School, University of Sao Paulo (POLI USP), Av. Prof. Luciano Gualberto, 380 - Butantã, São Paulo, SP 05508-010, Brazil
- Jose E Krieger
- Heart Institute (InCor), Clinics Hospital University of Sao Paulo Medical School (HCFMUSP), Av. Dr. Enéas Carvalho de Aguiar, 44 - Cerqueira César, São Paulo, SP 05403-900, Brazil
- Marco A Gutierrez
- Heart Institute (InCor), Clinics Hospital University of Sao Paulo Medical School (HCFMUSP), Av. Dr. Enéas Carvalho de Aguiar, 44 - Cerqueira César, São Paulo, SP 05403-900, Brazil
- University of Sao Paulo Medical School (FMUSP), Av. Dr. Arnaldo, 455 - Cerqueira César, Pacaembu, SP 01246-903, Brazil
- Polytechnique School, University of Sao Paulo (POLI USP), Av. Prof. Luciano Gualberto, 380 - Butantã, São Paulo, SP 05508-010, Brazil
44
Dashti M, Ghaedsharaf S, Ghasemi S, Zare N, Constantin EF, Fahimipour A, Tajbakhsh N, Ghadimi N. Evaluation of deep learning and convolutional neural network algorithms for mandibular fracture detection using radiographic images: A systematic review and meta-analysis. Imaging Sci Dent 2024; 54:232-239. [PMID: 39371302 PMCID: PMC11450407 DOI: 10.5624/isd.20240038] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2024] [Revised: 05/25/2024] [Accepted: 06/04/2024] [Indexed: 10/08/2024] Open
Abstract
Purpose The use of artificial intelligence (AI) and deep learning algorithms in dentistry, especially for processing radiographic images, has markedly increased. However, detailed information remains limited regarding the accuracy of these algorithms in detecting mandibular fractures. Materials and Methods This meta-analysis was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Specific keywords were generated regarding the accuracy of AI algorithms in detecting mandibular fractures on radiographic images. Then, the PubMed/Medline, Scopus, Embase, and Web of Science databases were searched. The Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool was employed to evaluate potential bias in the selected studies. A pooled analysis of the relevant parameters was conducted using STATA version 17 (StataCorp, College Station, TX, USA), utilizing the metandi command. Results Of the 49 studies reviewed, 5 met the inclusion criteria. All of the selected studies utilized convolutional neural network algorithms, albeit with varying backbone structures, and all evaluated panoramic radiography images. The pooled analysis yielded a sensitivity of 0.971 (95% confidence interval [CI]: 0.881-0.949), a specificity of 0.813 (95% CI: 0.797-0.824), and a diagnostic odds ratio of 7.109 (95% CI: 5.27-8.913). Conclusion This review suggests that deep learning algorithms show potential for detecting mandibular fractures on panoramic radiography images. However, their effectiveness is currently limited by the small size and narrow scope of available datasets. Further research with larger and more diverse datasets is crucial to verify the accuracy of these tools in practical dental settings.
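The diagnostic odds ratio reported in the pooled analysis relates sensitivity and specificity through the positive and negative likelihood ratios; a minimal sketch with illustrative values (the review's own estimate came from the bivariate metandi model, so this simple point-estimate formula will not reproduce it exactly):

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = LR+ / LR-: the odds of a positive test result in
    diseased versus non-diseased cases."""
    positive_lr = sensitivity / (1.0 - specificity)
    negative_lr = (1.0 - sensitivity) / specificity
    return positive_lr / negative_lr

# Illustrative values: sensitivity 0.90 and specificity 0.80 give a DOR of 36
dor = diagnostic_odds_ratio(0.90, 0.80)
```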
Affiliation(s)
- Mahmood Dashti
- Dentofacial Deformities Research Center, Research Institute of Dental Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Sahar Ghaedsharaf
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Shohreh Ghasemi
- Department of Trauma and Craniofacial Reconstruction, Queen Mary College, London, England
- Niusha Zare
- Department of Operative Dentistry, University of Southern California, Los Angeles, CA, USA
- Amir Fahimipour
- Discipline of Oral Surgery, Medicine and Diagnostics, School of Dentistry, Faculty of Medicine and Health, Westmead Centre for Oral Health, The University of Sydney, Sydney, Australia
- Neda Tajbakhsh
- School of Dentistry, Islamic Azad University Tehran, Dental Branch, Tehran, Iran
- Niloofar Ghadimi
- Department of Oral and Maxillofacial Radiology, Dental School, Islamic Azad University of Medical Sciences, Tehran, Iran
45
Ismail IN, Subramaniam PK, Chi Adam KB, Ghazali AB. Application of Artificial Intelligence in Cone-Beam Computed Tomography for Airway Analysis: A Narrative Review. Diagnostics (Basel) 2024; 14:1917. [PMID: 39272702 PMCID: PMC11394605 DOI: 10.3390/diagnostics14171917] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2024] [Revised: 08/25/2024] [Accepted: 08/29/2024] [Indexed: 09/15/2024] Open
Abstract
Cone-beam computed tomography (CBCT) has emerged as a promising tool for the analysis of the upper airway, leveraging its ability to provide three-dimensional information along with minimal radiation exposure, affordability, and widespread accessibility. The integration of artificial intelligence (AI) in CBCT for airway analysis has shown improvements in the accuracy and efficiency of diagnosing and managing airway-related conditions. This review aims to explore the current applications of AI in CBCT for airway analysis, highlighting its components and processes, applications, benefits, challenges, and potential future directions. A comprehensive literature review was conducted, focusing on studies published in the last decade that discuss AI applications in CBCT airway analysis. Many studies reported significant improvements in the segmentation and measurement of airway volumes from CBCT using AI, thereby facilitating accurate diagnosis of airway-related conditions. In addition, these AI models demonstrated high accuracy and consistency in their application for airway analysis through automated segmentation tasks, volume measurement, and 3D reconstruction, which enhanced diagnostic accuracy and allowed predictive treatment outcomes. Despite these advancements, challenges remain in the integration of AI into clinical workflows. Furthermore, variability in AI performance across different populations and imaging settings necessitates further validation studies. Continued research and development are essential to overcome current challenges and fully realize the potential of AI in airway analysis.
Affiliation(s)
- Izzati Nabilah Ismail
- Oral and Maxillofacial Surgery Unit, Department of Oral and Maxillofacial Surgery and Oral Diagnosis, Kulliyyah of Dentistry, International Islamic University, Kuantan 25200, Malaysia
- Pram Kumar Subramaniam
- Oral and Maxillofacial Surgery Unit, Department of Oral and Maxillofacial Surgery and Oral Diagnosis, Kulliyyah of Dentistry, International Islamic University, Kuantan 25200, Malaysia
- Khairul Bariah Chi Adam
- Oral and Maxillofacial Surgery Unit, Department of Oral and Maxillofacial Surgery and Oral Diagnosis, Kulliyyah of Dentistry, International Islamic University, Kuantan 25200, Malaysia
- Ahmad Badruddin Ghazali
- Oral Radiology Unit, Department of Oral and Maxillofacial Surgery and Oral Diagnosis, Kulliyyah of Dentistry, International Islamic University, Kuantan 25200, Malaysia
46
Ong W, Lee A, Tan WC, Fong KTD, Lai DD, Tan YL, Low XZ, Ge S, Makmur A, Ong SJ, Ting YH, Tan JH, Kumar N, Hallinan JTPD. Oncologic Applications of Artificial Intelligence and Deep Learning Methods in CT Spine Imaging-A Systematic Review. Cancers (Basel) 2024; 16:2988. [PMID: 39272846 PMCID: PMC11394591 DOI: 10.3390/cancers16172988] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2024] [Revised: 08/14/2024] [Accepted: 08/26/2024] [Indexed: 09/15/2024] Open
Abstract
In spinal oncology, integrating deep learning with computed tomography (CT) imaging has shown promise in enhancing diagnostic accuracy, treatment planning, and patient outcomes. This systematic review synthesizes evidence on artificial intelligence (AI) applications in CT imaging for spinal tumors. A PRISMA-guided search identified 33 studies: 12 (36.4%) focused on detecting spinal malignancies, 11 (33.3%) on classification, 6 (18.2%) on prognostication, 3 (9.1%) on treatment planning, and 1 (3.0%) on both detection and classification. Of the classification studies, 7 (21.2%) used machine learning to distinguish between benign and malignant lesions, 3 (9.1%) evaluated tumor stage or grade, and 2 (6.1%) employed radiomics for biomarker classification. Prognostic studies included three (9.1%) that predicted complications such as pathological fractures and three (9.1%) that predicted treatment outcomes. AI's potential for improving workflow efficiency, aiding decision-making, and reducing complications is discussed, along with its limitations in generalizability, interpretability, and clinical integration. Future directions for AI in spinal oncology are also explored. In conclusion, while AI technologies in CT imaging are promising, further research is necessary to validate their clinical effectiveness and optimize their integration into routine practice.
Affiliation(s)
- Wilson Ong
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Aric Lee
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Wei Chuan Tan
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Kuan Ting Dominic Fong
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Daoyong David Lai
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Yi Liang Tan
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Xi Zhen Low
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Shuliang Ge
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Andrew Makmur
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Shao Jin Ong
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Yong Han Ting
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Jiong Hao Tan
- National University Spine Institute, Department of Orthopaedic Surgery, National University Health System, 1E, Lower Kent Ridge Road, Singapore 119228, Singapore
- Naresh Kumar
- National University Spine Institute, Department of Orthopaedic Surgery, National University Health System, 1E, Lower Kent Ridge Road, Singapore 119228, Singapore
- James Thomas Patrick Decourcy Hallinan
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
47
Kutbi M. Artificial Intelligence-Based Applications for Bone Fracture Detection Using Medical Images: A Systematic Review. Diagnostics (Basel) 2024; 14:1879. [PMID: 39272664 PMCID: PMC11394268 DOI: 10.3390/diagnostics14171879] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2024] [Revised: 08/19/2024] [Accepted: 08/26/2024] [Indexed: 09/15/2024] Open
Abstract
Artificial intelligence (AI) is making notable advancements in the medical field, particularly in bone fracture detection. This systematic review compiles and assesses existing research on AI applications aimed at identifying bone fractures through medical imaging, encompassing studies from 2010 to 2023. It evaluates the performance of various AI models, such as convolutional neural networks (CNNs), in diagnosing bone fractures, highlighting their superior accuracy, sensitivity, and specificity compared to traditional diagnostic methods. Furthermore, the review explores the integration of advanced imaging techniques like 3D CT and MRI with AI algorithms, which has led to enhanced diagnostic accuracy and improved patient outcomes. The potential of Generative AI and Large Language Models (LLMs), such as OpenAI's GPT, to enhance diagnostic processes through synthetic data generation, comprehensive report creation, and clinical scenario simulation is also discussed. The review underscores the transformative impact of AI on diagnostic workflows and patient care, while also identifying research gaps and suggesting future research directions to enhance data quality, model robustness, and ethical considerations.
Affiliation(s)
- Mohammed Kutbi
- College of Computing and Informatics, Saudi Electronic University, Riyadh 13316, Saudi Arabia
48
Lee YH, Jeon S, Auh QS, Chung EJ. Automatic prediction of obstructive sleep apnea in patients with temporomandibular disorder based on multidata and machine learning. Sci Rep 2024; 14:19362. [PMID: 39169169 PMCID: PMC11339326 DOI: 10.1038/s41598-024-70432-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2024] [Accepted: 08/16/2024] [Indexed: 08/23/2024] Open
Abstract
Obstructive sleep apnea (OSA) is closely associated with the development and chronicity of temporomandibular disorder (TMD). Given the intricate pathophysiology of both OSA and TMD, comprehensive diagnostic approaches are crucial. This study aimed to develop an automatic prediction model utilizing multimodal data to diagnose OSA among TMD patients. We collected a range of multimodal data, including clinical characteristics, portable polysomnography, X-ray, and MRI data, from 55 TMD patients who reported sleep problems. This data was then analyzed using advanced machine learning techniques. Three-dimensional VGG16 and logistic regression models were used to identify significant predictors. Approximately 53% (29 out of 55) of TMD patients had OSA. Performance accuracy was evaluated using logistic regression, multilayer perceptron, and area under the curve (AUC) scores. OSA prediction accuracy in TMD patients was 80.00-91.43%. When MRI data were added to the algorithm, the AUC score increased to 1.00, indicating excellent capability. Only the obstructive apnea index was statistically significant in predicting OSA in TMD patients, with a threshold of 4.25 events/h. The learned features of the convolutional neural network were visualized as a heatmap using a gradient-weighted class activation mapping algorithm, revealing that it focuses on differential anatomical parameters depending on the absence or presence of OSA. In OSA-positive cases, the nasopharynx, oropharynx, uvula, larynx, epiglottis, and brain region were recognized, whereas in OSA-negative cases, the tongue, nose, nasal turbinate, and hyoid bone were recognized. Prediction accuracy and heat map analyses support the plausibility and usefulness of this artificial intelligence-based OSA diagnosis and prediction model in TMD patients, providing a deeper understanding of regions distinguishing between OSA and non-OSA.
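The AUC scores used to evaluate these prediction models can be computed directly from labels and predicted scores via the Mann-Whitney rank formulation, without building an explicit ROC curve; a minimal sketch with hypothetical data (not the study's):

```python
def auc_score(labels, scores):
    """AUC as the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative case (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical OSA labels and model probabilities:
# 5 of 6 positive-negative pairs are ranked correctly, so AUC = 5/6
auc = auc_score([1, 1, 0, 1, 0], [0.9, 0.7, 0.4, 0.45, 0.5])
```

An AUC of 1.00, as reported when MRI data were added, means every positive case outscored every negative case in the evaluation set.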
Affiliation(s)
- Yeon-Hee Lee
- Department of Orofacial Pain and Oral Medicine, Kyung Hee University, Kyung Hee University Dental Hospital, #613 Hoegi-dong, Dongdaemun-gu, Seoul, 02447, Korea.
- Seonggwang Jeon
- Department of Computer Science, Hanyang University, Seoul, 04763, Korea
- Q-Schick Auh
- Department of Orofacial Pain and Oral Medicine, Kyung Hee University, Kyung Hee University Dental Hospital, #613 Hoegi-dong, Dongdaemun-gu, Seoul, 02447, Korea
- Eun-Jae Chung
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University Hospital, Seoul, Korea
49
Choi Y, Ko JS, Park JE, Jeong G, Seo M, Jun Y, Fujita S, Bilgic B. Beyond the Conventional Structural MRI: Clinical Application of Deep Learning Image Reconstruction and Synthetic MRI of the Brain. Invest Radiol 2024:00004424-990000000-00248. [PMID: 39159333 DOI: 10.1097/rli.0000000000001114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/21/2024]
Abstract
Recent technological advancements have revolutionized routine brain magnetic resonance imaging (MRI) sequences, offering enhanced diagnostic capabilities in intracranial disease evaluation. This review explores 2 pivotal breakthrough areas: deep learning reconstruction (DLR) and quantitative MRI techniques beyond conventional structural imaging. DLR using deep neural networks facilitates accelerated imaging with improved signal-to-noise ratio and spatial resolution, enhancing image quality with short scan times. The discussion of DLR focuses on supervised learning as applied in clinical implementations. Quantitative MRI techniques, exemplified by 2D multidynamic multiecho, 3D quantification using interleaved Look-Locker acquisition sequences with T2 preparation pulses, and magnetic resonance fingerprinting, enable precise calculation of brain-tissue parameters and further advance diagnostic accuracy and efficiency. Potential DLR instabilities, as well as limitations in quantification and bias, are also discussed. This review underscores the synergistic potential of DLR and quantitative MRI, offering prospects for improved brain imaging beyond conventional methods.
Affiliation(s)
- Yangsean Choi
- From the Department of Radiology and Research Institute of Radiology, Asan Medical Center, Seoul, Republic of Korea (Y.C., J.S.K., J.E.P.); AIRS Medical LLC, Seoul, Republic of Korea (G.J.); Department of Radiology, Seoul St. Mary's Hospital, The Catholic University of Korea, Seoul, Republic of Korea (M.S.); Department of Radiology, Harvard Medical School, Boston, MA (Y.J., S.F., B.B.); Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA (Y.J., S.F., B.B.); and Harvard/MIT Health Sciences and Technology, Cambridge, MA (B.B.)
50
Al-Obeidat F, Hafez W, Gador M, Ahmed N, Abdeljawad MM, Yadav A, Rashed A. Diagnostic performance of AI-based models versus physicians among patients with hepatocellular carcinoma: a systematic review and meta-analysis. Front Artif Intell 2024; 7:1398205. [PMID: 39224209 PMCID: PMC11368160 DOI: 10.3389/frai.2024.1398205] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2024] [Accepted: 07/26/2024] [Indexed: 09/04/2024] Open
Abstract
Background Hepatocellular carcinoma (HCC) is a common primary liver cancer that requires early diagnosis due to its poor prognosis. Recent advances in artificial intelligence (AI) have facilitated hepatocellular carcinoma detection using multiple AI models; however, their performance is still uncertain. Aim This meta-analysis aimed to compare the diagnostic performance of different AI models with that of clinicians in the detection of hepatocellular carcinoma. Methods We searched the PubMed, Scopus, Cochrane Library, and Web of Science databases for eligible studies. R software was used to synthesize the results. The outcomes of the various studies were aggregated using fixed-effect and random-effects models. Statistical heterogeneity was evaluated using I-squared (I2) and chi-square statistics. Results We included seven studies in our meta-analysis. Both physicians and AI-based models scored an average sensitivity of 93%. Great variation in sensitivity, accuracy, and specificity was observed depending on the model and diagnostic technique used. The region-based convolutional neural network (RCNN) model showed high sensitivity (96%). Physicians had the highest specificity in diagnosing hepatocellular carcinoma (100%); furthermore, models based on convolutional neural networks achieved high sensitivity. Models based on AI-assisted contrast-enhanced ultrasound (CEUS) showed poor accuracy (69.9%) compared to physicians and other models. The leave-one-out sensitivity analysis revealed high heterogeneity among studies, representing true differences among them. Conclusion Models based on Faster R-CNN excel in image classification and data extraction, while both CNN-based models and models combining contrast-enhanced ultrasound (CEUS) with AI had good sensitivity. Although AI models outperform physicians in diagnosing HCC, they should be utilized as supportive tools to help make more accurate and timely decisions.
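The I-squared statistic used here to quantify heterogeneity derives from Cochran's Q; a minimal sketch assuming known per-study effect sizes and variances (hypothetical inputs, not the meta-analysis data):

```python
def heterogeneity(effects, variances):
    """Cochran's Q and the I-squared statistic: the share of total
    variation attributable to between-study differences rather than chance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i2
```

An I2 near 0 suggests the studies estimate a common effect; values above roughly 75% are conventionally read as high heterogeneity, which is what motivates the leave-one-out check described above.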
Affiliation(s)
- Feras Al-Obeidat
- College of Technological Innovation, Zayed University, Abu Dhabi, United Arab Emirates
- Wael Hafez
- NMC Royal Hospital, Khalifa City, United Arab Emirates
- Internal Medicine Department, Medical Research and Clinical Studies Institute, The National Research Centre, Cairo, Egypt
- Muneir Gador
- Internal Medicine Department, Medical Research and Clinical Studies Institute, The National Research Centre, Cairo, Egypt
- Antesh Yadav
- NMC Royal Hospital, Khalifa City, United Arab Emirates
- Asrar Rashed
- NMC Royal Hospital, Khalifa City, United Arab Emirates
- Department of Computer Science, Edinburgh Napier University, Merchiston Campus, Edinburgh, United Kingdom