1
|
Vittorio S, Lunghini F, Morerio P, Gadioli D, Orlandini S, Silva P, Jan Martinovic, Pedretti A, Bonanni D, Del Bue A, Palermo G, Vistoli G, Beccari AR. Addressing docking pose selection with structure-based deep learning: Recent advances, challenges and opportunities. Comput Struct Biotechnol J 2024; 23:2141-2151. [PMID: 38827235 PMCID: PMC11141151 DOI: 10.1016/j.csbj.2024.05.024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2024] [Revised: 05/15/2024] [Accepted: 05/15/2024] [Indexed: 06/04/2024] Open
Abstract
Molecular docking is a widely used technique in drug discovery to predict the binding mode of a given ligand to its target. However, the identification of the near-native binding pose in docking experiments still represents a challenging task as the scoring functions currently employed by docking programs are parametrized to predict the binding affinity, and, therefore, they often fail to correctly identify the ligand native binding conformation. Selecting the correct binding mode is crucial to obtaining meaningful results and to conveniently optimizing new hit compounds. Deep learning (DL) algorithms have been an area of a growing interest in this sense for their capability to extract the relevant information directly from the protein-ligand structure. Our review aims to present the recent advances regarding the development of DL-based pose selection approaches, discussing limitations and possible future directions. Moreover, a comparison between the performances of some classical scoring functions and DL-based methods concerning their ability to select the correct binding mode is reported. In this regard, two novel DL-based pose selectors developed by us are presented.
Collapse
Affiliation(s)
- Serena Vittorio
- Dipartimento di Scienze Farmaceutiche, Università degli Studi di Milano, Via Luigi Mangiagalli 25, I-20133 Milano, Italy
| | - Filippo Lunghini
- EXSCALATE, Dompé Farmaceutici SpA, Via Tommaso de Amicis 95, 80123 Naples, Italy
| | - Pietro Morerio
- Pattern Analysis and Computer Vision, Fondazione Istituto Italiano di Tecnologia, Via Morego, 30, 16163 Genova, Italy
| | - Davide Gadioli
- Dipartimento di Elettronica Informazione e Bioingegneria, Politecnico di Milano, Via Ponzio 34/5, I-20133 Milano, Italy
| | - Sergio Orlandini
- SCAI, SuperComputing Applications and Innovation Department, CINECA, Via dei Tizii 6, Rome 00185, Italy
| | - Paulo Silva
- IT4Innovations, VSB – Technical University of Ostrava, 17. listopadu 2172/15, 70800 Ostrava-Poruba, Czech Republic
| | - Jan Martinovic
- IT4Innovations, VSB – Technical University of Ostrava, 17. listopadu 2172/15, 70800 Ostrava-Poruba, Czech Republic
| | - Alessandro Pedretti
- Dipartimento di Scienze Farmaceutiche, Università degli Studi di Milano, Via Luigi Mangiagalli 25, I-20133 Milano, Italy
| | - Domenico Bonanni
- Department of Physical and Chemical Sciences, University of L′Aquila, via Vetoio, L′Aquila 67010, Italy
| | - Alessio Del Bue
- Pattern Analysis and Computer Vision, Fondazione Istituto Italiano di Tecnologia, Via Morego, 30, 16163 Genova, Italy
| | - Gianluca Palermo
- Dipartimento di Elettronica Informazione e Bioingegneria, Politecnico di Milano, Via Ponzio 34/5, I-20133 Milano, Italy
| | - Giulio Vistoli
- Dipartimento di Scienze Farmaceutiche, Università degli Studi di Milano, Via Luigi Mangiagalli 25, I-20133 Milano, Italy
| | - Andrea R. Beccari
- EXSCALATE, Dompé Farmaceutici SpA, Via Tommaso de Amicis 95, 80123 Naples, Italy
| |
Collapse
|
2
|
Ita K, Roshanaei S. Artificial intelligence for skin permeability prediction: deep learning. J Drug Target 2024; 32:334-346. [PMID: 38258521 DOI: 10.1080/1061186x.2024.2309574] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2023] [Accepted: 01/07/2024] [Indexed: 01/24/2024]
Abstract
BACKGROUND AND OBJECTIVE Researchers have put in significant laboratory time and effort in measuring the permeability coefficient (Kp) of xenobiotics. To develop alternative approaches to this labour-intensive procedure, predictive models have been employed by scientists to describe the transport of xenobiotics across the skin. Most quantitative structure-permeability relationship (QSPR) models are derived statistically from experimental data. Recently, artificial intelligence-based computational drug delivery has attracted tremendous interest. Deep learning is an umbrella term for machine-learning algorithms consisting of deep neural networks (DNNs). Distinct network architectures, like convolutional neural networks (CNNs), feedforward neural networks (FNNs), and recurrent neural networks (RNNs), can be employed for prediction. METHODS In this project, we used a convolutional neural network, feedforward neural network, and recurrent neural network to predict skin permeability coefficients from a publicly available database reported by Cheruvu et al. The dataset contains 476 records of 145 chemicals, xenobiotics, and pharmaceuticals, administered on the human epidermis in vitro from aqueous solutions of constant concentration either saturated in infinite dose quantities or diluted. All the computations were conducted with Python under Anaconda and Jupyterlab environment after importing the required Python, Keras, and Tensorflow modules. RESULTS We used a convolutional neural network, feedforward neural network, and recurrent neural network to predict log kp. CONCLUSION This research work shows that deep learning networks can be successfully used to digitally screen and predict the skin permeability of xenobiotics.
Collapse
Affiliation(s)
- Kevin Ita
- College of Pharmacy, Touro University, Vallejo, CA, USA
| | | |
Collapse
|
3
|
Kudus K, Wagner M, Ertl-Wagner BB, Khalvati F. Applications of machine learning to MR imaging of pediatric low-grade gliomas. Childs Nerv Syst 2024:10.1007/s00381-024-06522-5. [PMID: 38972953 DOI: 10.1007/s00381-024-06522-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/25/2024] [Accepted: 06/21/2024] [Indexed: 07/09/2024]
Abstract
INTRODUCTION Machine learning (ML) shows promise for the automation of routine tasks related to the treatment of pediatric low-grade gliomas (pLGG) such as tumor grading, typing, and segmentation. Moreover, it has been shown that ML can identify crucial information from medical images that is otherwise currently unattainable. For example, ML appears to be capable of preoperatively identifying the underlying genetic status of pLGG. METHODS In this chapter, we reviewed, to the best of our knowledge, all published works that have used ML techniques for the imaging-based evaluation of pLGGs. Additionally, we aimed to provide some context on what it will take to go from the exploratory studies we reviewed to clinically deployed models. RESULTS Multiple studies have demonstrated that ML can accurately grade, type, and segment and detect the genetic status of pLGGs. We compared the approaches used between the different studies and observed a high degree of variability throughout the methodologies. Standardization and cooperation between the numerous groups working on these approaches will be key to accelerating the clinical deployment of these models. CONCLUSION The studies reviewed in this chapter detail the potential for ML techniques to transform the treatment of pLGG. However, there are still challenges that need to be overcome prior to clinical deployment.
Collapse
Affiliation(s)
- Kareem Kudus
- Neurosciences & Mental Health Research Program, The Hospital for Sick Children, Toronto, Canada
- Institute of Medical Science, University of Toronto, Toronto, Canada
| | - Matthias Wagner
- Department of Diagnostic & Interventional Radiology, The Hospital for Sick Children, Toronto, Canada
- Department of Diagnostic and Interventional Neuroradiology, University Hospital Augsburg, Augsburg, Germany
| | - Birgit Betina Ertl-Wagner
- Neurosciences & Mental Health Research Program, The Hospital for Sick Children, Toronto, Canada
- Institute of Medical Science, University of Toronto, Toronto, Canada
- Department of Diagnostic & Interventional Radiology, The Hospital for Sick Children, Toronto, Canada
- Department of Medical Imaging, University of Toronto, Toronto, Canada
| | - Farzad Khalvati
- Neurosciences & Mental Health Research Program, The Hospital for Sick Children, Toronto, Canada.
- Institute of Medical Science, University of Toronto, Toronto, Canada.
- Department of Diagnostic & Interventional Radiology, The Hospital for Sick Children, Toronto, Canada.
- Department of Medical Imaging, University of Toronto, Toronto, Canada.
- Department of Computer Science, University of Toronto, Toronto, Canada.
- Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Canada.
| |
Collapse
|
4
|
Nam K, Lee C, Lee T, Shin M, Kim BH, Park JW. Automated Laryngeal Invasion Detector of Boluses in Videofluoroscopic Swallowing Study Videos Using Action Recognition-Based Networks. Diagnostics (Basel) 2024; 14:1444. [PMID: 39001334 PMCID: PMC11241273 DOI: 10.3390/diagnostics14131444] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2024] [Revised: 07/01/2024] [Accepted: 07/04/2024] [Indexed: 07/16/2024] Open
Abstract
We aimed to develop an automated detector that determines laryngeal invasion during swallowing. Laryngeal invasion, which causes significant clinical problems, is defined as two or more points on the penetration-aspiration scale (PAS). We applied two three-dimensional (3D) stream networks for action recognition in videofluoroscopic swallowing study (VFSS) videos. To detect laryngeal invasion (PAS 2 or higher scores) in VFSS videos, we employed two 3D stream networks for action recognition. To establish the robustness of our model, we compared its performance with those of various current image classification-based architectures. The proposed model achieved an accuracy of 92.10%. Precision, recall, and F1 scores for detecting laryngeal invasion (≥PAS 2) in VFSS videos were 0.9470 each. The accuracy of our model in identifying laryngeal invasion surpassed that of other updated image classification models (60.58% for ResNet101, 60.19% for Swin-Transformer, 63.33% for EfficientNet-B2, and 31.17% for HRNet-W32). Our model is the first automated detector of laryngeal invasion in VFSS videos based on video action recognition networks. Considering its high and balanced performance, it may serve as an effective screening tool before clinicians review VFSS videos, ultimately reducing the burden on clinicians.
Collapse
Affiliation(s)
- Kihwan Nam
- Graduate School of Management of Technology, Korea University, Seoul 02841, Republic of Korea
| | | | - Taeheon Lee
- Department of Physical Medicine and Rehabilitation, Dongguk University Ilsan Hospital, College of Medicine, 27 Dongguk-ro, Ilsandong-gu, Goyang 10326, Republic of Korea
| | - Munseop Shin
- Department of Physical Medicine and Rehabilitation, Dongguk University Ilsan Hospital, College of Medicine, 27 Dongguk-ro, Ilsandong-gu, Goyang 10326, Republic of Korea
| | - Bo Hae Kim
- Department of Otorhinolaryngology-Head and Neck Surgery, Dongguk University Ilsan Hospital, College of Medicine, 27 Dongguk-ro, Ilsandong-gu, Goyang 10326, Republic of Korea
| | - Jin-Woo Park
- Department of Physical Medicine and Rehabilitation, Dongguk University Ilsan Hospital, College of Medicine, 27 Dongguk-ro, Ilsandong-gu, Goyang 10326, Republic of Korea
| |
Collapse
|
5
|
Koo JH, Lee YJ, Kim HJ, Matusik W, Kim DH, Jeong H. Electronic Skin: Opportunities and Challenges in Convergence with Machine Learning. Annu Rev Biomed Eng 2024; 26:331-355. [PMID: 38959390 DOI: 10.1146/annurev-bioeng-103122-032652] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/05/2024]
Abstract
Recent advancements in soft electronic skin (e-skin) have led to the development of human-like devices that reproduce the skin's functions and physical attributes. These devices are being explored for applications in robotic prostheses as well as for collecting biopotentials for disease diagnosis and treatment, as exemplified by biomedical e-skins. More recently, machine learning (ML) has been utilized to enhance device control accuracy and data processing efficiency. The convergence of e-skin technologies with ML is promoting their translation into clinical practice, especially in healthcare. This review highlights the latest developments in ML-reinforced e-skin devices for robotic prostheses and biomedical instrumentations. We first describe technological breakthroughs in state-of-the-art e-skin devices, emphasizing technologies that achieve skin-like properties. We then introduce ML methods adopted for control optimization and pattern recognition, followed by practical applications that converge the two technologies. Lastly, we briefly discuss the challenges this interdisciplinary research encounters in its clinical and industrial transition.
Collapse
Affiliation(s)
- Ja Hoon Koo
- Department of Semiconductor Systems Engineering and Institute of Semiconductor and System IC, Sejong University, Seoul, Republic of Korea
| | - Young Joong Lee
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
| | - Hye Jin Kim
- Center for Nanoparticle Research, Institute for Basic Science, Seoul, Republic of Korea
- School of Chemical and Biological Engineering, Institute of Chemical Processes, Seoul National University, Seoul, Republic of Korea
| | - Wojciech Matusik
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
| | - Dae-Hyeong Kim
- Center for Nanoparticle Research, Institute for Basic Science, Seoul, Republic of Korea
- School of Chemical and Biological Engineering, Institute of Chemical Processes, Seoul National University, Seoul, Republic of Korea
- Department of Materials Science and Engineering, Seoul National University, Seoul, Republic of Korea
- Interdisciplinary Program for Bioengineering, Seoul National University, Seoul, Republic of Korea;
| | - Hyoyoung Jeong
- Department of Electrical and Computer Engineering, University of California, Davis, California, USA;
| |
Collapse
|
6
|
Klein DS, Karmakar S, Jonnalagadda A, Abbey CK, Eckstein MP. Greater benefits of deep learning-based computer-aided detection systems for finding small signals in 3D volumetric medical images. J Med Imaging (Bellingham) 2024; 11:045501. [PMID: 38988989 PMCID: PMC11232702 DOI: 10.1117/1.jmi.11.4.045501] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2024] [Revised: 06/17/2024] [Accepted: 06/20/2024] [Indexed: 07/12/2024] Open
Abstract
Purpose Radiologists are tasked with visually scrutinizing large amounts of data produced by 3D volumetric imaging modalities. Small signals can go unnoticed during the 3D search because they are hard to detect in the visual periphery. Recent advances in machine learning and computer vision have led to effective computer-aided detection (CADe) support systems with the potential to mitigate perceptual errors. Approach Sixteen nonexpert observers searched through digital breast tomosynthesis (DBT) phantoms and single cross-sectional slices of the DBT phantoms. The 3D/2D searches occurred with and without a convolutional neural network (CNN)-based CADe support system. The model provided observers with bounding boxes superimposed on the image stimuli while they looked for a small microcalcification signal and a large mass signal. Eye gaze positions were recorded and correlated with changes in the area under the ROC curve (AUC). Results The CNN-CADe improved the 3D search for the small microcalcification signal ( Δ AUC = 0.098 , p = 0.0002 ) and the 2D search for the large mass signal ( Δ AUC = 0.076 , p = 0.002 ). The CNN-CADe benefit in 3D for the small signal was markedly greater than in 2D ( Δ Δ AUC = 0.066 , p = 0.035 ). Analysis of individual differences suggests that those who explored the least with eye movements benefited the most from the CNN-CADe ( r = - 0.528 , p = 0.036 ). However, for the large signal, the 2D benefit was not significantly greater than the 3D benefit ( Δ Δ AUC = 0.033 , p = 0.133 ). Conclusion The CNN-CADe brings unique performance benefits to the 3D (versus 2D) search of small signals by reducing errors caused by the underexploration of the volumetric data.
Collapse
Affiliation(s)
- Devi S. Klein
- University of California, Department of Psychological and Brain Sciences, Santa Barbara, California, United States
| | - Srijita Karmakar
- University of California, Department of Psychological and Brain Sciences, Santa Barbara, California, United States
| | - Aditya Jonnalagadda
- University of California, Department of Electrical and Computer Engineering, Santa Barbara, California, United States
| | - Craig K. Abbey
- University of California, Department of Psychological and Brain Sciences, Santa Barbara, California, United States
| | - Miguel P. Eckstein
- University of California, Department of Psychological and Brain Sciences, Santa Barbara, California, United States
- University of California, Department of Electrical and Computer Engineering, Santa Barbara, California, United States
- University of California, Department of Computer Science, Santa Barbara, California, United States
| |
Collapse
|
7
|
Trujillo-Acatitla R, Tuxpan-Vargas J, Ovando-Vázquez C, Monterrubio-Martínez E. Marine oil spill detection and segmentation in SAR data with two steps Deep Learning framework. MARINE POLLUTION BULLETIN 2024; 204:116549. [PMID: 38850755 DOI: 10.1016/j.marpolbul.2024.116549] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/20/2024] [Revised: 05/30/2024] [Accepted: 05/31/2024] [Indexed: 06/10/2024]
Abstract
Marine oil spills pose significant ecological and economic threats worldwide, requiring effective decision-making tools. In this study, the optimal parameters, and configurations for Deep Learning models in oil spill classification and segmentation using Sentinel-1 SAR imagery were identified. First, a new Sentinel-1 image dataset was created. Ninety CNN configurations were explored for classification by varying the number of convolutional layers, filters, hidden layers, and neurons in each layer. For segmentation tasks, MLP and U-Net models were evaluated with variations in convolutional layers, filters, and incorporation of IoU and Focal Loss. The results indicated that a CNN model with six layers, 32 filters, and two hidden layers achieved 99 % classification accuracy. For segmentation, the U-Net model with more layers and filters using Focal Loss achieved 99 % accuracy and 96 % IoU. Therefore, a CNN and U-Net framework was proposed that achieves an overall accuracy of 95 % and an IoU of 90 %.
Collapse
Affiliation(s)
- Rubicel Trujillo-Acatitla
- División de Geociencias Aplicadas, Instituto Potosino de Investigación Científica y Tecnológica A.C., Camino a la Presa de San José No. 2055, Colonia Lomas 4ta Sección, San Luis Potosí, San Luis Potosí C.P. 78216, Mexico
| | - José Tuxpan-Vargas
- División de Geociencias Aplicadas, Instituto Potosino de Investigación Científica y Tecnológica A.C., Camino a la Presa de San José No. 2055, Colonia Lomas 4ta Sección, San Luis Potosí, San Luis Potosí C.P. 78216, Mexico; Cátedras-CONAHCyT, Consejo Nacional de Humanidades, Ciencias y Tecnologías, CDMX 03940, Mexico.
| | - Cesaré Ovando-Vázquez
- División de Biología Molecular, Instituto Potosino de Investigación Científica y Tecnológica A.C., Camino a la Presa de San José No. 2055, Colonia Lomas 4ta Sección, San Luis Potosí, San Luis Potosí C.P. 78216, Mexico; Centro Nacional de Supercómputo (CNS), Instituto Potosino de Investigación Científica y Tecnológica A.C., Camino a la Presa de San José No. 2055, Colonia Lomas 4ta Sección, San Luis Potosí, San Luis Potosí C.P. 78216, Mexico; Cátedras-CONAHCyT, Consejo Nacional de Humanidades, Ciencias y Tecnologías, CDMX 03940, Mexico.
| | - Erandi Monterrubio-Martínez
- División de Geociencias Aplicadas, Instituto Potosino de Investigación Científica y Tecnológica A.C., Camino a la Presa de San José No. 2055, Colonia Lomas 4ta Sección, San Luis Potosí, San Luis Potosí C.P. 78216, Mexico
| |
Collapse
|
8
|
Lee JY, Lee YS, Tae JH, Chang IH, Kim TH, Myung SC, Nguyen TT, Lee JH, Choi J, Kim JH, Kim JW, Choi SY. Selection of Convolutional Neural Network Model for Bladder Tumor Classification of Cystoscopy Images and Comparison with Humans. J Endourol 2024. [PMID: 38877795 DOI: 10.1089/end.2024.0250] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/16/2024] Open
Abstract
Purpose: An investigation of various convolutional neural network (CNN)-based deep learning algorithms was conducted to select the appropriate artificial intelligence (AI) model for calculating the diagnostic performance of bladder tumor classification on cystoscopy images, with the performance of the selected model to be compared against that of medical students and urologists. Methods: A total of 3,731 cystoscopic images that contained 2,191 tumor images were obtained from 543 bladder tumor cases and 219 normal cases were evaluated. A total of 17 CNN models were trained for tumor classification with various hyperparameters. The diagnostic performance of the selected AI model was compared with the results obtained from urologists and medical students by using the receiver operating characteristic (ROC) curve graph and metrics. Results: EfficientNetB0 was selected as the appropriate AI model. In the test results, EfficientNetB0 achieved a balanced accuracy of 81%, sensitivity of 88%, specificity of 74%, and an area under the curve (AUC) of 92%. In contrast, human-derived diagnostic statistics for the test data showed an average balanced accuracy of 75%, sensitivity of 94%, and specificity of 55%. Specifically, urologists had an average balanced accuracy of 91%, sensitivity of 95%, and specificity of 88%, while medical students had an average balanced accuracy of 69%, sensitivity of 94%, and specificity of 44%. Conclusions: Among the various AI models, we suggest that EfficientNetB0 is an appropriate AI classification model for determining the presence of bladder tumors in cystoscopic images. EfficientNetB0 showed the highest performance among several models and showed high accuracy and specificity compared to medical students. This AI technology will be helpful for less experienced urologists or nonurologists in making diagnoses. 
Image-based deep learning classifies bladder cancer using cystoscopy images and shows promise for generalized applications in biomedical image analysis and clinical decision making.
Collapse
Affiliation(s)
| | - Yong Seong Lee
- Department of Urology, Chung-Ang University Gwangmyeong Hospital, Chung-Ang University College of Medicine, Gyeonggi-do, Korea
| | - Jong Hyun Tae
- Department of Urology, Chung-Ang University Hospital, Chung-Ang University College of Medicine, Seoul, Korea
| | - In Ho Chang
- Department of Urology, Chung-Ang University Hospital, Chung-Ang University College of Medicine, Seoul, Korea
| | - Tae-Hyoung Kim
- Department of Urology, Chung-Ang University Hospital, Chung-Ang University College of Medicine, Seoul, Korea
| | - Soon Chul Myung
- Department of Urology, Chung-Ang University Hospital, Chung-Ang University College of Medicine, Seoul, Korea
| | - Tuan Thanh Nguyen
- Department of Urology, Cho Ray Hospital, University of Medicine and Pharmacy, Ho Chi Minh City, Vietnam
| | | | - Joongwon Choi
- Department of Urology, Chung-Ang University Gwangmyeong Hospital, Chung-Ang University College of Medicine, Gyeonggi-do, Korea
| | - Jung Hoon Kim
- Department of Urology, Chung-Ang University Gwangmyeong Hospital, Chung-Ang University College of Medicine, Gyeonggi-do, Korea
| | - Jin Wook Kim
- Department of Urology, Chung-Ang University Gwangmyeong Hospital, Chung-Ang University College of Medicine, Gyeonggi-do, Korea
| | - Se Young Choi
- Department of Urology, Chung-Ang University Hospital, Chung-Ang University College of Medicine, Seoul, Korea
| |
Collapse
|
9
|
Parsain T, Tripathi A, Tiwari A. Detection of milk adulteration using coffee ring effect and convolutional neural network. Food Addit Contam Part A Chem Anal Control Expo Risk Assess 2024; 41:730-741. [PMID: 38814700 DOI: 10.1080/19440049.2024.2358518] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2024] [Accepted: 05/15/2024] [Indexed: 05/31/2024]
Abstract
A low-cost and effective method is reported to identify water and synthetic milk adulteration of cow's milk using coffee ring patterns. The cow's milk samples were diluted with tap water (TW), distilled water (DW) and mineral water (MW) and drop cast onto glass slides to observe coffee ring patterns. The area of the ring, total particle area and average particle diameter were extracted from these patterns. For each ring, the ratio of total particle area versus total ring area was calculated. The area ratio, regardless of water adulterants, follows an exponential model with respect to average particle diameter. Unlike TW, the ratio for DW and MW adulterated milk are clustered and classified together with respect to the particle diameter. These results were independent of dilution level and are used for adulterant classification. The ring of milk adulterated using synthetic milk gave multiple concentric rings, flower-like structures, and oil globules throughout the dilution level. An Alexnet model was used to classify water and synthetic milk adulterants in authentic milk. The trained model could achieve 96.7% and 95.8% accuracy for binary and tertiary classification respectively. These results enable us to distinguish synthetic milk from pure milk and segregate DW and MW with respect to TW adulterated milk.
Collapse
Affiliation(s)
- Tapan Parsain
- Department of Physics, Institute of Science, Banaras Hindu University, Varanasi, Uttar Pradesh, India
| | - Ajay Tripathi
- Department of Physics, Sikkim University, Gangtok, Sikkim, India
| | - Archana Tiwari
- Department of Physics, Institute of Science, Banaras Hindu University, Varanasi, Uttar Pradesh, India
| |
Collapse
|
10
|
Mo Y, Li H, Wang D, Liu G. An intrusion detection system based on convolution neural network. PeerJ Comput Sci 2024; 10:e2152. [PMID: 38983193 PMCID: PMC11232621 DOI: 10.7717/peerj-cs.2152] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2024] [Accepted: 06/05/2024] [Indexed: 07/11/2024]
Abstract
With the rapid extensive development of the Internet, users not only enjoy great convenience but also face numerous serious security problems. The increasing frequency of data breaches has made it clear that the network security situation is becoming increasingly urgent. In the realm of cybersecurity, intrusion detection plays a pivotal role in monitoring network attacks. However, the efficacy of existing solutions in detecting such intrusions remains suboptimal, perpetuating the security crisis. To address this challenge, we propose a sparse autoencoder-Bayesian optimization-convolutional neural network (SA-BO-CNN) system based on convolutional neural network (CNN). Firstly, to tackle the issue of data imbalance, we employ the SMOTE resampling function during system construction. Secondly, we enhance the system's feature extraction capabilities by incorporating SA. Finally, we leverage BO in conjunction with CNN to enhance system accuracy. Additionally, a multi-round iteration approach is adopted to further refine detection accuracy. Experimental findings demonstrate an impressive system accuracy of 98.36%. Comparative analyses underscore the superior detection rate of the SA-BO-CNN system.
Collapse
Affiliation(s)
- Yanmeng Mo
- School of Computer, Jiangsu University of Science and Technology, Zhenjiang, China
| | - Huige Li
- School of Computer, Jiangsu University of Science and Technology, Zhenjiang, China
| | - Dongsheng Wang
- School of Computer, Jiangsu University of Science and Technology, Zhenjiang, China
| | - Gaqiong Liu
- School of Computer, Jiangsu University of Science and Technology, Zhenjiang, China
| |
Collapse
|
11
|
Kiran L, Zeb A, Rehman QNU, Rahman T, Shehzad Khan M, Ahmad S, Irfan M, Naeem M, Huda S, Mahmoud H. An enhanced pattern detection and segmentation of brain tumors in MRI images using deep learning technique. Front Comput Neurosci 2024; 18:1418280. [PMID: 38988988 PMCID: PMC11233794 DOI: 10.3389/fncom.2024.1418280] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2024] [Accepted: 05/27/2024] [Indexed: 07/12/2024] Open
Abstract
Neuroscience is a swiftly progressing discipline that aims to unravel the intricate workings of the human brain and mind. Brain tumors, ranging from non-cancerous to malignant forms, pose a significant diagnostic challenge due to the presence of more than 100 distinct types. Effective treatment hinges on the precise detection and segmentation of these tumors early. We introduce a cutting-edge deep-learning approach employing a binary convolutional neural network (BCNN) to address this. This method is employed to segment the 10 most prevalent brain tumor types and is a significant improvement over current models restricted to only segmenting four types. Our methodology begins with acquiring MRI images, followed by a detailed preprocessing stage where images undergo binary conversion using an adaptive thresholding method and morphological operations. This prepares the data for the next step, which is segmentation. The segmentation identifies the tumor type and classifies it according to its grade (Grade I to Grade IV) and differentiates it from healthy brain tissue. We also curated a unique dataset comprising 6,600 brain MRI images specifically for this study. The overall performance achieved by our proposed model is 99.36%. The effectiveness of our model is underscored by its remarkable performance metrics, achieving 99.40% accuracy, 99.32% precision, 99.45% recall, and a 99.28% F-Measure in segmentation tasks.
Collapse
Affiliation(s)
- Lubna Kiran
- Qurtuba University of Science and Information Technology, Peshawar, Pakistan
- Asim Zeb
- Abbottabad University of Science and Technology, Abbottabad, Pakistan
- Taj Rahman
- Qurtuba University of Science and Information Technology, Peshawar, Pakistan
- Shafiq Ahmad
- Department of Industrial Engineering, College of Engineering, King Saud University, Riyadh, Saudi Arabia
- Muhammad Irfan
- Department of Computer Science, Kohat University of Science and Technology, Kohat, Pakistan
- Muhammad Naeem
- Abbottabad University of Science and Technology, Abbottabad, Pakistan
- Shamsul Huda
- School of Information Technology, Deakin University, Burwood, VIC, Australia
- Haitham Mahmoud
- Department of Industrial Engineering, College of Engineering, King Saud University, Riyadh, Saudi Arabia
12
Box J, Schnell E, Rutel I. Binary classification of dead detector elements in flat panel detectors using convolutional neural networks. Biomed Phys Eng Express 2024; 10:045054. [PMID: 38870913 DOI: 10.1088/2057-1976/ad57cd] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2024] [Accepted: 06/13/2024] [Indexed: 06/15/2024]
Abstract
Objective. Medical physicists routinely perform quality assurance on digital detection systems, part of which involves the testing of flat panel detectors. Flat panels may degrade over time as an increasing number of individual detector elements begin to malfunction. The pixels that correspond to these elements are corrected for using information elsewhere in the detector system; however, these corrected elements still constitute a loss in image quality for the system as a whole. These correction methods, as well as the location and number of dead detector elements, are often only available to the vendor of the digital detection system, but not to the medical physicist responsible for the quality assurance of the system. Approach. We greatly expand upon a previous work by providing a novel technique for classifying dead detector elements at single-pixel resolution. We also demonstrate that this technique can be trained on one detector and then tested and validated on another with moderate success, which demonstrates some ability to generalize to different detectors. The technique requires three flat-field, or 'noise', images to be taken to predict the dead detector element maps for the system. Main results. Models using only for-processing pixel data were unable to successfully generalize from one detector to the other. Models preprocessed using the standard deviation across three for-processing images were able to classify dead detector element maps with an F1 score ranging from 0.4527 to 0.8107 and recall ranging from 0.5420 to 0.9303, with better performance, on average, observed using the low-exposure data set. Significance. Many physicists do not have access to the dead detector maps for their diagnostic digital radiography systems. CNNs are capable of predicting the dead detector maps of flat panel detectors with single-pixel resolution. Physicists can implement this tool by acquiring three flat-field images and then inputting them into the model. Model performance saw a marginal increase when trained on the low-exposure set data, as opposed to the high-exposure set data, indicating that high-exposure, low-relative-noise images may not be necessary for optimal performance. Model performance across detectors manufactured by different vendors requires further investigation.
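The preprocessing step the abstract credits with cross-detector generalization, the per-pixel standard deviation across the three flat-field images, can be sketched in numpy. The threshold-based classifier below is only an illustrative stand-in for the paper's CNN, and the tolerance value is an assumption.

```python
import numpy as np

def dead_pixel_features(flats: np.ndarray) -> np.ndarray:
    """Per-pixel standard deviation across three flat-field ('noise')
    images -- the preprocessing the abstract found necessary for
    generalisation. `flats` has shape (3, H, W)."""
    assert flats.shape[0] == 3, "method uses exactly three flat fields"
    return flats.std(axis=0)

def naive_dead_map(flats: np.ndarray, rel_tol: float = 0.1) -> np.ndarray:
    """Illustrative stand-in for the CNN classifier: flag pixels whose
    temporal noise is far below the detector-wide median (corrected
    elements are interpolated, so they show almost no independent noise)."""
    sd = dead_pixel_features(flats)
    return sd < rel_tol * np.median(sd)
```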
Affiliation(s)
- Jon Box
- The University of Oklahoma Health Sciences Center, 940 NE 13th St. Garrison Tower, Suite 3G3210, Oklahoma City, OK 73104, United States of America
- Erich Schnell
- The University of Oklahoma Health Sciences Center, 940 NE 13th St. Garrison Tower, Suite 3G3210, Oklahoma City, OK 73104, United States of America
- Isaac Rutel
- The University of Oklahoma Health Sciences Center, 940 NE 13th St. Garrison Tower, Suite 3G3210, Oklahoma City, OK 73104, United States of America
13
Attallah O. Skin-CAD: Explainable deep learning classification of skin cancer from dermoscopic images by feature selection of dual high-level CNNs features and transfer learning. Comput Biol Med 2024; 178:108798. [PMID: 38925085 DOI: 10.1016/j.compbiomed.2024.108798] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2024] [Revised: 05/30/2024] [Accepted: 06/19/2024] [Indexed: 06/28/2024]
Abstract
Skin cancer (SC) significantly impacts the health of many individuals all over the globe. Hence, it is imperative to promptly identify and diagnose such conditions at their earliest stages using dermoscopic imaging. Computer-aided diagnosis (CAD) methods relying on deep learning techniques, especially convolutional neural networks (CNNs), can effectively address this issue with outstanding outcomes. Nevertheless, such black-box methodologies lead to a deficiency in confidence, as dermatologists are incapable of comprehending and verifying the predictions made by these models. This article presents an advanced explainable artificial intelligence (XAI)-based CAD system named "Skin-CAD", which is utilized for the classification of dermoscopic photographs of SC. The system accurately categorises the photographs into two categories, benign or malignant, and further classifies them into seven subclasses of SC. Skin-CAD employs four CNNs of different topologies and deep layers. It gathers features out of a pair of deep layers of every CNN, particularly the final pooling and fully connected layers, rather than merely depending on attributes from a single deep layer. Skin-CAD applies the principal component analysis (PCA) dimensionality reduction approach to minimise the dimensions of pooling layer features. This also reduces the complexity of the training procedure compared to using deep features from a CNN that has a substantial size. Furthermore, it combines the reduced pooling features with the fully connected features of each CNN. Additionally, Skin-CAD integrates the dual-layer features of the four CNNs instead of entirely depending on the features of a single CNN architecture. In the end, it utilizes a feature selection step to determine the most important deep attributes. This helps to decrease the general size of the feature set and streamline the classification process. Predictions are analysed in more depth using the local interpretable model-agnostic explanations (LIME) approach. This method is used to create visual interpretations that align with an already existing viewpoint and adhere to recommended standards for general clarifications. Two benchmark datasets, the Skin Cancer: Malignant vs. Benign and HAM10000 datasets, are employed to validate the efficiency of Skin-CAD. The maximum accuracy achieved using Skin-CAD is 97.2% for the Skin Cancer: Malignant vs. Benign dataset and 96.5% for the HAM10000 dataset. The findings of Skin-CAD demonstrate its potential to assist professional dermatologists in detecting and classifying SC precisely and quickly.
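The fusion pipeline described in the abstract (PCA on the large pooling-layer features, concatenation with the fully-connected features, then feature selection) can be sketched with scikit-learn. The dimensions, the univariate F-test selector, and the function name below are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

def fuse_dual_layer_features(pool_feats: np.ndarray, fc_feats: np.ndarray,
                             y: np.ndarray, n_pca: int = 32, k_best: int = 40) -> np.ndarray:
    """Sketch of a Skin-CAD-style fusion pipeline: PCA shrinks the
    pooling-layer features, the result is concatenated with the
    fully-connected features, and univariate selection keeps the most
    informative attributes. All dimensions are illustrative."""
    pooled = PCA(n_components=n_pca).fit_transform(pool_feats)   # reduce pooling features
    fused = np.hstack([pooled, fc_feats])                        # dual-layer fusion
    return SelectKBest(f_classif, k=k_best).fit_transform(fused, y)
```

In a full system this would be repeated per CNN and the per-network feature sets concatenated before the final selection step.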
Affiliation(s)
- Omneya Attallah
- Department of Electronics and Communications Engineering, College of Engineering and Technology, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt; Wearables, Biosensing, and Biosignal Processing Laboratory, Arab Academy for Science, Technology and Maritime Transport, Alexandria, 21937, Egypt
14
Yang M, Yang M, Yang L, Wang Z, Ye P, Chen C, Fu L, Xu S. Deep learning for MRI lesion segmentation in rectal cancer. Front Med (Lausanne) 2024; 11:1394262. [PMID: 38983364 PMCID: PMC11231084 DOI: 10.3389/fmed.2024.1394262] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2024] [Accepted: 06/14/2024] [Indexed: 07/11/2024] Open
Abstract
Rectal cancer (RC) is a globally prevalent malignant tumor, presenting significant challenges in its management and treatment. Currently, magnetic resonance imaging (MRI) offers superior soft tissue contrast for RC patients without exposing them to ionizing radiation, making it the most widely used and effective detection method. In early screening, radiologists rely on patients' medical radiology characteristics and their extensive clinical experience for diagnosis. However, diagnostic accuracy may be hindered by factors such as limited expertise, visual fatigue, and image clarity issues, resulting in misdiagnosis or missed diagnosis. Moreover, the distribution of surrounding organs in RC is extensive, with some organs having shapes similar to the tumor but unclear boundaries; these complexities greatly impede doctors' ability to diagnose RC accurately. With recent advancements in artificial intelligence, machine learning techniques like deep learning (DL) have demonstrated immense potential and broad prospects in medical image analysis. The emergence of this approach has significantly enhanced research capabilities in medical image classification, detection, and segmentation, with particular emphasis on medical image segmentation. This review aims to discuss the developmental process of DL segmentation algorithms along with their application progress in lesion segmentation from MRI images of RC, to provide theoretical guidance and support for further advancements in this field.
Affiliation(s)
- Mingwei Yang
- Department of General Surgery, Nanfang Hospital Zengcheng Campus, Guangzhou, Guangdong, China
- Miyang Yang
- Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
- Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
- Lanlan Yang
- Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
- Zhaochu Wang
- Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
- Peiyun Ye
- Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
- Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
- Chujie Chen
- Department of Radiology, Fuzong Teaching Hospital, Fujian University of Traditional Chinese Medicine, Fuzhou, Fujian, China
- Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
- Liyuan Fu
- Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
- Shangwen Xu
- Department of Radiology, 900th Hospital of Joint Logistics Support Force, Fuzhou, Fujian, China
15
Pham TD, Tsunoyama T. Exploring Extravasation in Cancer Patients. Cancers (Basel) 2024; 16:2308. [PMID: 39001371 PMCID: PMC11240416 DOI: 10.3390/cancers16132308] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2024] [Revised: 06/17/2024] [Accepted: 06/21/2024] [Indexed: 07/16/2024] Open
Abstract
Extravasation, the unintended leakage of intravenously administered substances, poses significant challenges in cancer treatment, particularly during chemotherapy and radiotherapy. This comprehensive review explores the pathophysiology, incidence, risk factors, clinical presentation, diagnosis, prevention strategies, management approaches, complications, and long-term effects of extravasation in cancer patients. It also outlines future directions and research opportunities, including identifying gaps in the current knowledge and proposing areas for further investigation in extravasation prevention and management. Emerging technologies and therapies with the potential to improve extravasation prevention and management in both chemotherapy and radiotherapy are highlighted. Such innovations include advanced vein visualization technologies, smart catheters, targeted drug delivery systems, novel topical treatments, and artificial intelligence-based image analysis. By addressing these aspects, this review not only provides healthcare professionals with insights to enhance patient safety and optimize clinical practice but also underscores the importance of ongoing research and innovation in improving outcomes for cancer patients experiencing extravasation events.
Affiliation(s)
- Tuan D. Pham
- Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London E1 2AD, UK
16
Suzuki H, Kokabu T, Yamada K, Ishikawa Y, Yabu A, Yanagihashi Y, Hyakumachi T, Tachi H, Shimizu T, Endo T, Ohnishi T, Ukeba D, Nagahama K, Takahata M, Sudo H, Iwasaki N. Deep learning-based detection of lumbar spinal canal stenosis using convolutional neural networks. Spine J 2024:S1529-9430(24)00299-7. [PMID: 38909909 DOI: 10.1016/j.spinee.2024.06.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/28/2024] [Revised: 06/13/2024] [Accepted: 06/14/2024] [Indexed: 06/25/2024]
Abstract
BACKGROUND CONTEXT Lumbar spinal canal stenosis (LSCS) is the most common spinal degenerative disorder in elderly people and is usually first seen by primary care physicians or orthopedic surgeons who are not spine surgery specialists. Magnetic resonance imaging (MRI) is useful in the diagnosis of LSCS, but the equipment is often not available or the images are difficult to read. LSCS patients with progressive neurologic deficits have difficulty with recovery if surgical treatment is delayed, so early diagnosis and determination of appropriate surgical indications are crucial in the treatment of LSCS. Convolutional neural networks (CNNs), a type of deep learning, offer significant advantages for image recognition and classification, and work well with radiographs, which can be easily taken at any facility. PURPOSE Our purpose was to develop an algorithm to diagnose the presence or absence of LSCS requiring surgery from plain radiographs using CNNs. STUDY DESIGN Retrospective analysis of a consecutive, nonrandomized series of patients at a single institution. PATIENT SAMPLE Data of 150 patients who underwent surgery for LSCS, including degenerative spondylolisthesis, at a single institution from January 2022 to August 2022 were collected. Additionally, 25 patients who underwent surgery at 2 other hospitals were included for extra external validation. OUTCOME MEASURES In annotation 1, the area under the curve (AUC) computed from the receiver operating characteristic (ROC) curve, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy, positive likelihood ratio (PLR), and negative likelihood ratio (NLR) were calculated. In annotation 2, correlation coefficients were used. METHODS Four intervertebral levels, L1/2 to L4/5, were extracted as regions of interest from lateral plain lumbar spine radiographs, yielding a total of 600 images. Based on the date of surgery, 500 images derived from the first 125 cases were used for internal validation, and 100 images from the subsequent 25 cases were used for external validation. Additionally, 100 images from the other hospitals were used for extra external validation. In annotation 1, binary classification of operative and nonoperative levels was used, and in annotation 2, the spinal canal area measured on axial MRI was labeled as the output layer. For internal validation, the 500 images were divided into 5 datasets on a per-patient basis and 5-fold cross-validation was performed. The 5 trained models were then applied to assess external validation prediction performance. Grad-CAM was used to visualize the areas with the highest features extracted by the CNNs. RESULTS In internal validation, the AUC and accuracy for annotation 1 ranged between 0.85-0.89 and 79-83%, respectively, and the correlation coefficients for annotation 2 ranged between 0.53 and 0.64 (all p<.01). In external validation, the AUC and accuracy for annotation 1 were 0.90 and 82%, respectively, and the correlation coefficient for annotation 2 was 0.69, using the 5 trained CNN models. In the extra external validation, the AUC and accuracy for annotation 1 were 0.89 and 84%, respectively, and the correlation coefficient for annotation 2 was 0.56. Grad-CAM showed high feature density in the intervertebral joints and posterior intervertebral discs. CONCLUSIONS This technology automatically detects LSCS from plain lumbar spine radiographs, making it possible for medical facilities without MRI, or for nonspecialists, to diagnose LSCS, suggesting the possibility of eliminating delays in the diagnosis and treatment of LSCS cases that require early treatment.
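The per-patient 5-fold split described in METHODS, in which all four intervertebral-level images from one patient must fall in the same fold, can be sketched with scikit-learn's GroupKFold; the array contents are placeholders.

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# Each patient contributes 4 ROI images (L1/2 to L4/5); splitting must be
# per-patient so levels from one spine never appear in both the training
# and validation folds of the same split.
n_patients, levels = 125, 4
patient_id = np.repeat(np.arange(n_patients), levels)   # 500 image labels
X = np.zeros((n_patients * levels, 1))                  # placeholder images

for train_idx, val_idx in GroupKFold(n_splits=5).split(X, groups=patient_id):
    train_pat = set(patient_id[train_idx])
    val_pat = set(patient_id[val_idx])
    assert train_pat.isdisjoint(val_pat)  # no patient leaks across folds
```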
Affiliation(s)
- Hisataka Suzuki
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan; Department of Orthopaedic Surgery, Eniwa Hospital, 2-1-1 Kogane Chuo, Eniwa, Hokkaido 061-1449, Japan
- Terufumi Kokabu
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan; Department of Orthopaedic Surgery, Eniwa Hospital, 2-1-1 Kogane Chuo, Eniwa, Hokkaido 061-1449, Japan
- Katsuhisa Yamada
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Yoko Ishikawa
- Department of Orthopaedic Surgery, Eniwa Hospital, 2-1-1 Kogane Chuo, Eniwa, Hokkaido 061-1449, Japan
- Akito Yabu
- Department of Orthopaedic Surgery, Eniwa Hospital, 2-1-1 Kogane Chuo, Eniwa, Hokkaido 061-1449, Japan
- Yasushi Yanagihashi
- Department of Orthopaedic Surgery, Eniwa Hospital, 2-1-1 Kogane Chuo, Eniwa, Hokkaido 061-1449, Japan
- Takahiko Hyakumachi
- Department of Orthopaedic Surgery, Eniwa Hospital, 2-1-1 Kogane Chuo, Eniwa, Hokkaido 061-1449, Japan
- Hiroyuki Tachi
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Tomohiro Shimizu
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Tsutomu Endo
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Takashi Ohnishi
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Daisuke Ukeba
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Ken Nagahama
- Department of Orthopaedic Surgery, Sapporo Endoscopic Spine Surgery, N16E16, Sapporo, Hokkaido 065-0016, Japan
- Masahiko Takahata
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Hideki Sudo
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
- Norimasa Iwasaki
- Department of Orthopaedic Surgery, Faculty of Medicine and Graduate School of Medicine, Hokkaido University, N15W7, Sapporo, Hokkaido 060-8638, Japan
17
Bourdillon AT. Computer Vision-Radiomics & Pathognomics. Otolaryngol Clin North Am 2024:S0030-6665(24)00072-0. [PMID: 38910065 DOI: 10.1016/j.otc.2024.05.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/25/2024]
Abstract
The role of computer vision in extracting radiographic (radiomics) and histopathologic (pathognomics) features is an extension of molecular biomarkers that have been foundational to our understanding across the spectrum of head and neck disorders. Especially within head and neck cancers, machine learning and deep learning applications have yielded advances in the characterization of tumor features, nodal features, and various outcomes. This review aims to overview the landscape of radiomic and pathognomic applications, informing future work to address gaps. Novel methodologies will be needed to potentially engineer ways of integrating multidimensional data inputs to examine disease features to guide prognosis comprehensively and ultimately clinical management.
Affiliation(s)
- Alexandra T Bourdillon
- Department of Otolaryngology-Head & Neck Surgery, University of California-San Francisco, San Francisco, CA 94115, USA
18
Liu MH, Chien SY, Wu YL, Sun TH, Huang CS, Hsu KC, Hang LW. EfficientNet-based machine learning architecture for sleep apnea identification in clinical single-lead ECG signal data sets. Biomed Eng Online 2024; 23:57. [PMID: 38902671 PMCID: PMC11188209 DOI: 10.1186/s12938-024-01252-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2023] [Accepted: 06/03/2024] [Indexed: 06/22/2024] Open
Abstract
OBJECTIVE Our objective was to create a machine learning architecture capable of identifying obstructive sleep apnea (OSA) patterns in single-lead electrocardiography (ECG) signals, exhibiting exceptional performance when utilized in clinical data sets. METHODS We conducted our research using a data set consisting of 1656 patients, representing a diverse demographic, from the sleep center of China Medical University Hospital. To detect apnea ECG segments and extract apnea features, we utilized the EfficientNet and some of its layers, respectively. Furthermore, we compared various training and data preprocessing techniques to enhance the model's prediction, such as setting class and sample weights or employing overlapping and regular slicing. Finally, we tested our approach against other literature on the Apnea-ECG database. RESULTS Our research found that the EfficientNet model achieved the best apnea segment detection using overlapping slicing and sample-weight settings, with an AUC of 0.917 and an accuracy of 0.855. For patient screening with AHI > 30, we combined the trained model with XGBoost, leading to an AUC of 0.975 and an accuracy of 0.928. Additional tests using PhysioNet data showed that our model is comparable in performance to existing models regarding its ability to screen OSA levels. CONCLUSIONS Our suggested architecture, coupled with training and preprocessing techniques, showed admirable performance with a diverse demographic dataset, bringing us closer to practical implementation in OSA diagnosis. Trial registration The data for this study were collected retrospectively from the China Medical University Hospital in Taiwan with approval from the institutional review board CMUH109-REC3-018.
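The overlapping versus regular slicing compared in the abstract can be sketched as a simple windowing function over the single-lead ECG; the window and step lengths below are illustrative, not the study's settings.

```python
import numpy as np

def slice_ecg(signal: np.ndarray, win: int, step: int) -> np.ndarray:
    """Cut a 1-D ECG signal into fixed-length segments.
    step < win gives overlapping slicing; step == win gives regular
    (non-overlapping) slicing, the two schemes the study compares."""
    n = (len(signal) - win) // step + 1          # number of full segments
    idx = step * np.arange(n)[:, None] + np.arange(win)
    return signal[idx]                           # shape: (n_segments, win)
```

Overlapping slicing multiplies the number of training segments from the same recording, which is one common motivation for it in apnea-segment detection.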
Affiliation(s)
- Meng-Hsuan Liu
- Artificial Intelligence Center, China Medical University Hospital, No. 2, Yude Rd, North Dist, Taichung, Taiwan
- Shang-Yu Chien
- Artificial Intelligence Center, China Medical University Hospital, No. 2, Yude Rd, North Dist, Taichung, Taiwan
- Ya-Lun Wu
- Artificial Intelligence Center, China Medical University Hospital, No. 2, Yude Rd, North Dist, Taichung, Taiwan
- Ting-Hsuan Sun
- Artificial Intelligence Center, China Medical University Hospital, No. 2, Yude Rd, North Dist, Taichung, Taiwan
- Chun-Sen Huang
- Sleep Medicine Center, Department of Pulmonary and Critical Care Medicine, China Medical University Hospital, No. 2, Yude Rd., North Dist, Taichung, Taiwan
- Kai-Cheng Hsu
- Artificial Intelligence Center, China Medical University Hospital, No. 2, Yude Rd, North Dist, Taichung, Taiwan
- School of Medicine, China Medical University, Taichung, Taiwan
- Neuroscience and Brain Disease Center, China Medical University, Taichung, Taiwan
- Department of Neurology, China Medical University Hospital, Taichung, Taiwan
- Liang-Wen Hang
- Sleep Medicine Center, Department of Pulmonary and Critical Care Medicine, China Medical University Hospital, No. 2, Yude Rd., North Dist, Taichung, Taiwan
- Department of Respiratory Therapy, College of Health Care, China Medical University Hospital, Taichung, Taiwan
19
Li Y, Li W, Zhang X, Lin H, Li D, Li Z. Three-dimensional morphological characterization of blood droplets during the dynamic coagulation process. JOURNAL OF BIOPHOTONICS 2024:e202400116. [PMID: 38887206 DOI: 10.1002/jbio.202400116] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/21/2024] [Revised: 05/01/2024] [Accepted: 05/03/2024] [Indexed: 06/20/2024]
Abstract
In this study, we employed a method integrating optical coherence tomography (OCT) with the U-Net and Visual Geometry Group (VGG)-Net frameworks within a convolutional neural network for quantitative characterization of three-dimensional whole blood during the dynamic coagulation process. The VGG-Net architecture identifies blood droplets across three distinct coagulation stages, including drop, gelation, and coagulation, achieving an accuracy of up to 99%. In addition, the U-Net architecture demonstrated proficiency in effectively segmenting the uncoagulated and coagulated portions of whole blood, as well as the background. Notably, parameters such as the volumes of the uncoagulated and coagulated segments of the whole blood were successfully employed for precise quantification of the coagulation process, which bodes well for future clinical diagnostics and analyses.
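Turning a segmentation into the volume parameters used for quantification can be sketched as voxel counting over the label volume; the label codes and voxel size below are assumptions for illustration, not from the paper.

```python
import numpy as np

def segment_volumes(label_vol: np.ndarray, voxel_mm3: float) -> dict:
    """Convert a segmentation label volume into physical volumes.
    Label codes are assumed here: 0 = background, 1 = uncoagulated,
    2 = coagulated. These volumes are the quantities used to track
    the coagulation process."""
    return {
        "uncoagulated_mm3": float((label_vol == 1).sum()) * voxel_mm3,
        "coagulated_mm3": float((label_vol == 2).sum()) * voxel_mm3,
    }
```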
Affiliation(s)
- Yao Li
- Key Laboratory of Optoelectronic Science and Technology for Medicine, Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, College of Photonic and Electronic Engineering, Fujian Normal University, Fuzhou, Fujian, China
- Wangbiao Li
- Key Laboratory of Optoelectronic Science and Technology for Medicine, Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, College of Photonic and Electronic Engineering, Fujian Normal University, Fuzhou, Fujian, China
- Xiaoman Zhang
- Key Laboratory of Optoelectronic Science and Technology for Medicine, Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, College of Photonic and Electronic Engineering, Fujian Normal University, Fuzhou, Fujian, China
- Hui Lin
- Key Laboratory of Optoelectronic Science and Technology for Medicine, Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, College of Photonic and Electronic Engineering, Fujian Normal University, Fuzhou, Fujian, China
- Dezi Li
- Key Laboratory of Intelligent Control Technology for Wuling-Mountain Ecological Agriculture in Hunan Province, Huaihua University, Huaihua, Hunan, China
- Zhifang Li
- Key Laboratory of Optoelectronic Science and Technology for Medicine, Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, College of Photonic and Electronic Engineering, Fujian Normal University, Fuzhou, Fujian, China
- The Internet of Things and Artificial Intelligence College, Fujian Polytechnic of Information Technology, Fuzhou, Fujian, China
20
Yousef RN, Ata MM, Rashed AEE, Badawy M, Elhosseini MA, Bahgat WM. A Novel Multi-Scaled Deep Convolutional Structure for Punctilious Human Gait Authentication. Biomimetics (Basel) 2024; 9:364. [PMID: 38921244 PMCID: PMC11201791 DOI: 10.3390/biomimetics9060364] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2024] [Revised: 05/15/2024] [Accepted: 05/21/2024] [Indexed: 06/27/2024] Open
Abstract
The need for non-interactive human recognition systems to ensure safe isolation between users and biometric equipment has been exposed by the COVID-19 pandemic. This study introduces a novel Multi-Scaled Deep Convolutional Structure for Punctilious Human Gait Authentication (MSDCS-PHGA). The proposed MSDCS-PHGA involves segmenting, preprocessing, and resizing silhouette images into three scales. Gait features are extracted from these multi-scale images using custom convolutional layers and fused to form an integrated feature set. This multi-scaled deep convolutional approach demonstrates its efficacy in gait recognition by significantly enhancing accuracy. The proposed convolutional neural network (CNN) architecture is assessed using three benchmark datasets: CASIA, OU-ISIR, and OU-MVLP. Moreover, the proposed model is evaluated against other pre-trained models using key performance metrics such as precision, accuracy, sensitivity, specificity, and training time. The results indicate that the proposed deep CNN model outperforms existing models focused on human gait. Notably, it achieves an accuracy of approximately 99.9% for both the CASIA and OU-ISIR datasets and 99.8% for the OU-MVLP dataset while maintaining a minimal training time of around 3 min.
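The multi-scale idea above, extracting features from the same silhouette at three scales and fusing them by concatenation, can be sketched in miniature. Mean pooling below is a toy stand-in for the paper's custom convolutional layers, and the scale factors are illustrative.

```python
import numpy as np

def multi_scale_features(silhouette: np.ndarray, scales=(1, 2, 4)) -> np.ndarray:
    """Toy stand-in for the MSDCS-PHGA scheme: the same silhouette is
    rescaled three times, features are extracted per scale, and the
    per-scale vectors are fused into one integrated feature set.
    Here 'feature extraction' is simple column-wise mean pooling."""
    feats = []
    for s in scales:
        resized = silhouette[::s, ::s]          # crude 1/s downsampling
        feats.append(resized.mean(axis=0))      # per-scale feature vector
    return np.concatenate(feats)                # fused multi-scale features
```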
Affiliation(s)
- Reem N. Yousef
- Delta Higher Institute for Engineering and Technology, Mansoura 35681, Egypt
- Mohamed Maher Ata
- School of Computational Sciences and Artificial Intelligence (CSAI), Zewail City of Science and Technology, October Gardens, 6th of October City, Giza 12578, Egypt
- Department of Communications and Electronics Engineering, MISR Higher Institute for Engineering and Technology, Mansoura 35516, Egypt
- Amr E. Eldin Rashed
- Department of Computer Engineering, College of Computers and Information Technology, Taif University, Taif P.O. Box 11099, Saudi Arabia
- Mahmoud Badawy
- Department of Computer Science and Informatics, Taibah University, Medina 42353, Saudi Arabia
- Department of Computers and Control Systems Engineering, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
- Mostafa A. Elhosseini
- College of Computer Science and Engineering, Taibah University, Yanbu 46421, Saudi Arabia
- Department of Computers and Control Systems Engineering, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
- Waleed M. Bahgat
- Department of Computer Science and Informatics, Taibah University, Medina 42353, Saudi Arabia
- Information Technology Department, Faculty of Computers and Information, Mansoura University, El Mansoura 35516, Egypt
21
Jaradat JH, Nashwan AJ. Revolutionizing disease diagnosis and management: Open-access magnetic resonance imaging datasets a challenge for artificial intelligence driven liver iron quantification. World J Clin Cases 2024; 12:2921-2924. [PMID: 38898864 PMCID: PMC11185379 DOI: 10.12998/wjcc.v12.i17.2921] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/26/2024] [Revised: 04/04/2024] [Accepted: 04/18/2024] [Indexed: 06/04/2024] Open
Abstract
Artificial intelligence (AI), particularly machine learning (ML) and deep learning (DL) techniques such as convolutional neural networks (CNNs), has emerged as a transformative technology with vast potential in healthcare. Body iron load is usually assessed using minimally invasive blood tests (serum ferritin, serum iron, and serum transferrin). Serum ferritin is widely used to assess body iron and guide medical management; however, it is an acute-phase reactant protein and can therefore yield misleading results in the setting of inflammation and acute illness. Magnetic resonance imaging is a non-invasive technique that can be used to assess liver iron, and ML and DL algorithms can enhance the detection of subtle changes. However, a lack of open-access datasets may delay the advancement of medical research in this field. In this letter, we highlight the importance of standardized datasets for advancing AI and CNNs in medical imaging. Despite the current limitations, embracing AI and CNNs holds promise for revolutionizing disease diagnosis and treatment.
Affiliation(s)
- Jaber H Jaradat
- Faculty of Medicine, Mutah University, Al-Karak 61101, Jordan

22
Ayalon A, Sahel JA, Chhablani J. A journey through the world of vitreous. Surv Ophthalmol 2024:S0039-6257(24)00070-5. [PMID: 38885759 DOI: 10.1016/j.survophthal.2024.06.004] [Received: 04/04/2024] [Revised: 06/06/2024] [Accepted: 06/10/2024] [Indexed: 06/20/2024]
Abstract
The vitreous, one of the largest components of the human eye, consists mostly of water. Despite decades of study, numerous questions about the vitreous structure remain unanswered, fueling ongoing active research. We attempt to provide a comprehensive overview of the current understanding of the development, morphology, biochemical composition, and function of the vitreous, emphasizing the impact of vitreous structure and composition on the distribution of drugs. Fast-developing imaging technologies, such as modern optical coherence tomography, have unlocked multiple new approaches and offer the potential for in vivo study of the vitreous. They have enabled in vivo analysis of a range of vitreous structures, such as posterior precortical vitreous pockets, the Cloquet canal, the channels that interconnect them, perivascular vitreous fissures, and cisterns. We provide an overview of these imaging techniques, their principles, and some challenges in visualizing vitreous structures. Finally, we explore the potential of combining the latest technologies and machine learning to enhance our understanding of vitreous structures.
Affiliation(s)
- Anfisa Ayalon
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania
- José-Alain Sahel
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania
- Jay Chhablani
- Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania

23
Irastorza-Valera L, Soria-Gómez E, Benitez JM, Montáns FJ, Saucedo-Mora L. Review of the Brain's Behaviour after Injury and Disease for Its Application in an Agent-Based Model (ABM). Biomimetics (Basel) 2024; 9:362. [PMID: 38921242 PMCID: PMC11202129 DOI: 10.3390/biomimetics9060362] [Received: 05/06/2024] [Revised: 05/28/2024] [Accepted: 06/05/2024] [Indexed: 06/27/2024]
Abstract
The brain is the most complex organ in the human body and, as such, its study entails great methodological and theoretical challenges. Nonetheless, there is a remarkable number of studies on the consequences of pathological conditions for its development and functioning. This bibliographic review mainly covers findings related to changes in the physical distribution of neurons and their connections (the connectome), both structural and functional, as well as their modelling approaches. It does not intend to offer an extensive description of all conditions affecting the brain; rather, it presents the most common ones. Thus, we highlight the need for accurate brain modelling that can subsequently be used to understand brain function and be applied to diagnose, track, and simulate treatments for the most prevalent pathologies affecting the brain.
Affiliation(s)
- Luis Irastorza-Valera
- E.T.S. de Ingeniería Aeronáutica y del Espacio, Universidad Politécnica de Madrid, Pza. Cardenal Cisneros 3, 28040 Madrid, Spain
- PIMM Laboratory, ENSAM–Arts et Métiers ParisTech, 151 Bd de l’Hôpital, 75013 Paris, France
- Edgar Soria-Gómez
- Achúcarro Basque Center for Neuroscience, Barrio Sarriena, s/n, 48940 Leioa, Spain
- Ikerbasque, Basque Foundation for Science, Plaza Euskadi, 5, 48009 Bilbao, Spain
- Department of Neurosciences, University of the Basque Country UPV/EHU, Barrio Sarriena, s/n, 48940 Leioa, Spain
- José María Benitez
- E.T.S. de Ingeniería Aeronáutica y del Espacio, Universidad Politécnica de Madrid, Pza. Cardenal Cisneros 3, 28040 Madrid, Spain
- Francisco J. Montáns
- E.T.S. de Ingeniería Aeronáutica y del Espacio, Universidad Politécnica de Madrid, Pza. Cardenal Cisneros 3, 28040 Madrid, Spain
- Department of Mechanical and Aerospace Engineering, Herbert Wertheim College of Engineering, University of Florida, Gainesville, FL 32611, USA
- Luis Saucedo-Mora
- E.T.S. de Ingeniería Aeronáutica y del Espacio, Universidad Politécnica de Madrid, Pza. Cardenal Cisneros 3, 28040 Madrid, Spain
- Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PJ, UK
- Department of Nuclear Science and Engineering, Massachusetts Institute of Technology (MIT), 77 Massachusetts Ave, Cambridge, MA 02139, USA

24
Cao Y, Xu B, Li B, Fu H. Advanced Design of Soft Robots with Artificial Intelligence. NANO-MICRO LETTERS 2024; 16:214. [PMID: 38869734 DOI: 10.1007/s40820-024-01423-3] [Received: 01/31/2024] [Accepted: 04/22/2024] [Indexed: 06/14/2024]
Affiliation(s)
- Ying Cao
- Nanotechnology Center, School of Fashion and Textiles, The Hong Kong Polytechnic University, Hong Kong, 999077, People's Republic of China
- Bingang Xu
- Nanotechnology Center, School of Fashion and Textiles, The Hong Kong Polytechnic University, Hong Kong, 999077, People's Republic of China
- Bin Li
- Bioinspired Engineering and Biomechanics Center, Xi'an Jiaotong University, Xi'an, 710049, People's Republic of China
- Hong Fu
- Department of Mathematics and Information Technology, The Education University of Hong Kong, Hong Kong, 999077, People's Republic of China

25
Vinhas M, Leitão PM, Raimundo BS, Gil N, Vaz PD, Luis-Ferreira F. AI Applied to Volatile Organic Compound (VOC) Profiles from Exhaled Breath Air for Early Detection of Lung Cancer. Cancers (Basel) 2024; 16:2200. [PMID: 38927906 PMCID: PMC11201396 DOI: 10.3390/cancers16122200] [Received: 05/10/2024] [Revised: 06/05/2024] [Accepted: 06/10/2024] [Indexed: 06/28/2024]
Abstract
Volatile organic compound (VOC) analysis is an increasingly meaningful, non-invasive approach to the early detection of various cancers, including lung cancer. Traditional cancer detection techniques such as biopsies, imaging, and blood tests, though effective, often involve invasive procedures or are costly, time-consuming, and painful. Recent technological advances have made VOC detection a promising non-invasive and comfortable alternative. VOCs are organic chemicals with a high vapor pressure at room temperature, making them readily detectable in breath, urine, and skin. The present study leverages artificial intelligence (AI) and machine learning algorithms to enhance classification accuracy and efficiency in detecting lung cancer through VOC analysis of exhaled breath. Unlike studies that primarily focus on identifying specific compounds, this study takes an agnostic approach: it maximizes detection efficiency by analyzing overall compositional profiles and their differences across patient groups rather than identifying individual compounds. The results reported here support the potential of AI-driven techniques to revolutionize early cancer detection methodologies and their eventual implementation in a clinical setting.
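As a minimal illustration of such a compound-agnostic approach, the sketch below classifies a normalized breath profile by its nearest class centroid. The data and the classifier are invented stand-ins for demonstration; they are not the study's actual models or measurements.

```python
import numpy as np

def normalize(profile):
    """L1-normalize a VOC intensity profile so only composition matters."""
    p = np.asarray(profile, dtype=float)
    return p / p.sum()

def nearest_centroid(train_x, train_y, sample):
    """Classify a breath profile by its closest class centroid --
    a compound-agnostic baseline over whole compositional profiles."""
    x = np.array([normalize(p) for p in train_x])
    labels = sorted(set(train_y))
    cents = {c: x[[i for i, y in enumerate(train_y) if y == c]].mean(axis=0)
             for c in labels}
    s = normalize(sample)
    return min(labels, key=lambda c: np.linalg.norm(s - cents[c]))

# four toy 4-channel profiles: healthy-like vs cancer-like compositions
healthy = [[5, 1, 1, 3], [6, 1, 1, 2]]
cancer = [[1, 4, 4, 1], [2, 5, 3, 0.5]]
X, y = healthy + cancer, ["healthy"] * 2 + ["cancer"] * 2
print(nearest_centroid(X, y, [1, 5, 4, 1]))  # prints "cancer"
```

Any real system would replace the centroid rule with a trained model, but the key point survives: the input is the whole normalized profile, not a panel of named compounds.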
Affiliation(s)
- Manuel Vinhas
- Departamento de Engenharia Electrotécnica e de Computadores, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, 2829-516 Monte da Caparica, Portugal
- Pedro M. Leitão
- Unidade de Pulmão, Centro Clínico Champalimaud, Fundação Champalimaud, Av. Brasília, 1400-038 Lisbon, Portugal
- Bernardo S. Raimundo
- Unidade de Pulmão, Centro Clínico Champalimaud, Fundação Champalimaud, Av. Brasília, 1400-038 Lisbon, Portugal
- Nuno Gil
- Unidade de Pulmão, Centro Clínico Champalimaud, Fundação Champalimaud, Av. Brasília, 1400-038 Lisbon, Portugal
- Pedro D. Vaz
- Unidade de Pulmão, Centro Clínico Champalimaud, Fundação Champalimaud, Av. Brasília, 1400-038 Lisbon, Portugal
- Fernando Luis-Ferreira
- Departamento de Engenharia Electrotécnica e de Computadores, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, 2829-516 Monte da Caparica, Portugal

26
Wyciślik Ł, Wylężek P, Momot A. The Improved Biometric Identification of Keystroke Dynamics Based on Deep Learning Approaches. SENSORS (BASEL, SWITZERLAND) 2024; 24:3763. [PMID: 38931547 PMCID: PMC11207587 DOI: 10.3390/s24123763] [Received: 04/26/2024] [Revised: 06/05/2024] [Accepted: 06/07/2024] [Indexed: 06/28/2024]
Abstract
In an era marked by escalating concerns about digital security, biometric identification methods have gained paramount importance. Despite the increasing adoption of biometric techniques, keystroke dynamics analysis remains a less explored yet promising avenue. This study highlights the untapped potential of keystroke dynamics, emphasizing its non-intrusive nature and distinctiveness. While keystroke dynamics analysis has not achieved widespread usage, ongoing research indicates its viability as a reliable biometric identifier. This research builds upon the existing foundation by proposing an innovative deep-learning methodology for keystroke dynamics-based identification. Leveraging open research datasets, our approach surpasses previously reported results, showcasing the effectiveness of deep learning in extracting intricate patterns from typing behaviors. This article contributes to the advancement of biometric identification, shedding light on the untapped potential of keystroke dynamics and demonstrating the efficacy of deep learning in enhancing the precision and reliability of identification systems.
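Keystroke-dynamics systems typically derive timing features (key hold times and inter-key flight times) from raw press/release events before any network sees them. Below is a minimal sketch of that feature extraction with invented sample data; the study's actual deep-learning pipeline is not reproduced here.

```python
from typing import List, Tuple

def keystroke_features(events: List[Tuple[str, float, float]]) -> List[float]:
    """Turn (key, press_time, release_time) events into the classic
    keystroke-dynamics features: per-key hold times followed by
    inter-key flight times (release of one key to press of the next)."""
    holds = [up - down for _, down, up in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return [round(v, 3) for v in holds + flights]

# typing "abc": (key, press, release) timestamps in seconds
sample = [("a", 0.00, 0.09), ("b", 0.15, 0.22), ("c", 0.31, 0.40)]
print(keystroke_features(sample))  # [0.09, 0.07, 0.09, 0.06, 0.09]
```

A fixed-length vector of such features (or a padded sequence of them) is what a deep model would consume for identification.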
Affiliation(s)
- Łukasz Wyciślik
- Department of Applied Informatics, Faculty of Automatic Control, Electronics and Computer Sciences, Silesian University of Technology, 44-100 Gliwice, Poland
- Alina Momot
- Department of Applied Informatics, Faculty of Automatic Control, Electronics and Computer Sciences, Silesian University of Technology, 44-100 Gliwice, Poland

27
He S, Sillah M, Cole AR, Uboveja A, Aird KM, Chen YC, Gong YN. D-MAINS: A Deep-Learning Model for the Label-Free Detection of Mitosis, Apoptosis, Interphase, Necrosis, and Senescence in Cancer Cells. Cells 2024; 13:1004. [PMID: 38920634 PMCID: PMC11205186 DOI: 10.3390/cells13121004] [Received: 05/01/2024] [Revised: 06/01/2024] [Accepted: 06/04/2024] [Indexed: 06/27/2024]
Abstract
BACKGROUND: Identifying cells engaged in fundamental cellular processes, such as proliferation or living/death statuses, is pivotal across numerous research fields. However, prevailing methods relying on molecular biomarkers are constrained by high costs, limited specificity, protracted sample preparation, and reliance on fluorescence imaging.
METHODS: Based on cellular morphology in phase contrast images, we developed a deep-learning model named Detector of Mitosis, Apoptosis, Interphase, Necrosis, and Senescence (D-MAINS).
RESULTS: D-MAINS utilizes machine learning and image processing techniques, enabling swift and label-free categorization of cell death, division, and senescence at a single-cell resolution. It achieved an accuracy of 96.4 ± 0.5% and was validated with established molecular biomarkers. D-MAINS underwent rigorous testing under varied conditions not initially present in the training dataset. It demonstrated proficiency across diverse scenarios, encompassing additional cell lines, drug treatments, and distinct microscopes with different objective lenses and magnifications, affirming the robustness and adaptability of D-MAINS across multiple experimental setups.
CONCLUSIONS: D-MAINS showcases the feasibility of a low-cost, rapid, and label-free methodology for distinguishing various cellular states. Its versatility makes it a promising tool applicable across a broad spectrum of biomedical research contexts, particularly in cell death and oncology studies.
Affiliation(s)
- Sarah He
- Department of Biological Sciences, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, USA
- Hillman Cancer Center, UPMC, 5115 Center Avenue, Pittsburgh, PA 15232, USA
- Muhammed Sillah
- Hillman Cancer Center, UPMC, 5115 Center Avenue, Pittsburgh, PA 15232, USA
- Department of Immunology, University of Pittsburgh School of Medicine, 3420 Forbes Avenue, Pittsburgh, PA 15260, USA
- Aidan R. Cole
- Hillman Cancer Center, UPMC, 5115 Center Avenue, Pittsburgh, PA 15232, USA
- Department of Pharmacology & Chemical Biology, University of Pittsburgh School of Medicine, 3420 Forbes Avenue, Pittsburgh, PA 15260, USA
- Apoorva Uboveja
- Hillman Cancer Center, UPMC, 5115 Center Avenue, Pittsburgh, PA 15232, USA
- Department of Pharmacology & Chemical Biology, University of Pittsburgh School of Medicine, 3420 Forbes Avenue, Pittsburgh, PA 15260, USA
- Katherine M. Aird
- Hillman Cancer Center, UPMC, 5115 Center Avenue, Pittsburgh, PA 15232, USA
- Department of Pharmacology & Chemical Biology, University of Pittsburgh School of Medicine, 3420 Forbes Avenue, Pittsburgh, PA 15260, USA
- Yu-Chih Chen
- Hillman Cancer Center, UPMC, 5115 Center Avenue, Pittsburgh, PA 15232, USA
- Department of Computational and Systems Biology, University of Pittsburgh School of Medicine, 3420 Forbes Avenue, Pittsburgh, PA 15260, USA
- Department of Bioengineering, Swanson School of Engineering, University of Pittsburgh, 3700 O’Hara Street, Pittsburgh, PA 15260, USA
- CMU-Pitt Ph.D. Program in Computational Biology, University of Pittsburgh, 3420 Forbes Avenue, Pittsburgh, PA 15260, USA
- Yi-Nan Gong
- Hillman Cancer Center, UPMC, 5115 Center Avenue, Pittsburgh, PA 15232, USA
- Department of Immunology, University of Pittsburgh School of Medicine, 3420 Forbes Avenue, Pittsburgh, PA 15260, USA

28
Das A, Dorafshan S, Kaabouch N. Autonomous Image-Based Corrosion Detection in Steel Structures Using Deep Learning. SENSORS (BASEL, SWITZERLAND) 2024; 24:3630. [PMID: 38894421 PMCID: PMC11175235 DOI: 10.3390/s24113630] [Received: 05/10/2024] [Revised: 05/27/2024] [Accepted: 05/29/2024] [Indexed: 06/21/2024]
Abstract
Steel structures are susceptible to corrosion due to their exposure to the environment. Currently used non-destructive techniques require inspector involvement, and inaccessibility of the defective part may leave corrosion unnoticed, allowing it to propagate and cause catastrophic structural failure over time. Autonomous corrosion detection is essential for mitigating these problems. This study investigated which type of encoder-decoder neural network and which training strategy work best for automating the segmentation of corroded pixels in visual images. Models using pre-trained DenseNet121 and EfficientNetB7 backbones yielded 96.78% and 98.5% average pixel-level accuracy, respectively. The deeper EfficientNetB7 performed the worst, with only 33% true positives, 58% less than ResNet34 and the original UNet. ResNet34 successfully classified the corroded pixels, with 2.98% false positives, whereas the original UNet predicted 8.24% of the non-corroded pixels as corroded when tested on a set of images held out from the investigated training dataset. Deep networks were found to be better suited to transfer learning than to full training, and the small dataset could be one reason for the performance degradation. Both the fully trained conventional UNet and the ResNet34 model were also tested on external images of different steel structures with different colors and types of corrosion, with the ResNet34 backbone outperforming the conventional UNet.
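The backbone comparison above rests on pixel-level confusion-matrix arithmetic: counting true/false positives of a predicted binary corrosion mask against a ground-truth mask. A minimal sketch of those metrics follows; the masks and names are hypothetical, not the study's data.

```python
import numpy as np

def pixel_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Pixel-level accuracy and true/false-positive rates for a binary
    corrosion mask (1 = corroded, 0 = background)."""
    tp = np.sum((pred == 1) & (truth == 1))   # corroded, correctly flagged
    tn = np.sum((pred == 0) & (truth == 0))   # clean, correctly passed
    fp = np.sum((pred == 1) & (truth == 0))   # clean, wrongly flagged
    fn = np.sum((pred == 0) & (truth == 1))   # corroded, missed
    return {
        "accuracy": (tp + tn) / pred.size,
        "true_positive_rate": tp / max(tp + fn, 1),
        "false_positive_rate": fp / max(fp + tn, 1),
    }

truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 1                 # a 4x4 corroded patch
pred = truth.copy()
pred[0, 0] = 1                      # one false positive
m = pixel_metrics(pred, truth)
print(m["accuracy"])  # 0.984375 (63 of 64 pixels correct)
```

The same three numbers underlie the reported 96.78%/98.5% accuracies and the 2.98% vs. 8.24% false-positive comparison between ResNet34 and the original UNet.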
Affiliation(s)
- Amrita Das
- Department of Civil Engineering, College of Engineering & Mines, University of North Dakota, Grand Forks, ND 58202, USA
- Sattar Dorafshan
- Department of Civil Engineering, College of Engineering & Mines, University of North Dakota, Grand Forks, ND 58202, USA
- Naima Kaabouch
- Department of Electrical Engineering, School of Electrical Engineering & Computer Science, University of North Dakota, Grand Forks, ND 58202, USA

29
Perez-Lopez R, Ghaffari Laleh N, Mahmood F, Kather JN. A guide to artificial intelligence for cancer researchers. Nat Rev Cancer 2024; 24:427-441. [PMID: 38755439 DOI: 10.1038/s41568-024-00694-7] [Accepted: 04/09/2024] [Indexed: 05/18/2024]
Abstract
Artificial intelligence (AI) has been commoditized. It has evolved from a specialty resource to a readily accessible tool for cancer researchers. AI-based tools can boost research productivity in daily workflows, but can also extract hidden information from existing data, thereby enabling new scientific discoveries. Building a basic literacy in these tools is useful for every cancer researcher. Researchers with a traditional biological science focus can use AI-based tools through off-the-shelf software, whereas those who are more computationally inclined can develop their own AI-based software pipelines. In this article, we provide a practical guide for non-computational cancer researchers to understand how AI-based tools can benefit them. We convey general principles of AI for applications in image analysis, natural language processing and drug discovery. In addition, we give examples of how non-computational researchers can get started on the journey to productively use AI in their own work.
Affiliation(s)
- Raquel Perez-Lopez
- Radiomics Group, Vall d'Hebron Institute of Oncology, Vall d'Hebron Barcelona Hospital Campus, Barcelona, Spain
- Narmin Ghaffari Laleh
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
- Faisal Mahmood
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Harvard Data Science Initiative, Harvard University, Cambridge, MA, USA
- Jakob Nikolas Kather
- Else Kroener Fresenius Center for Digital Health, Technical University Dresden, Dresden, Germany
- Department of Medicine I, University Hospital Dresden, Dresden, Germany
- Medical Oncology, National Center for Tumour Diseases (NCT), University Hospital Heidelberg, Heidelberg, Germany

30
Kim CA, An HR, Yoo J, Lee YM, Sung TY, Kim WG, Song DE. Morphometric Analysis of Lateral Cervical Lymph Node Metastasis in Papillary Thyroid Carcinoma Using Digital Pathology. Endocr Pathol 2024; 35:113-121. [PMID: 38064165 DOI: 10.1007/s12022-023-09790-0] [Accepted: 11/02/2023] [Indexed: 06/14/2024]
Abstract
Digital pathology uses digitized images for cancer research. We aimed to assess morphometric parameters using digital pathology for predicting recurrence in patients with papillary thyroid carcinoma (PTC) and lateral cervical lymph node (LN) metastasis. We analyzed 316 PTC patients and assessed the longest diameter and largest area of metastatic focus in LNs using a whole slide imaging scanner. In digital pathology assessment, the longest diameters and largest areas of metastatic foci in LNs were positively correlated with traditional optically measured diameters (R = 0.928 and R2 = 0.727, p < 0.001 and p < 0.001, respectively). The optimal cutoff diameter was 8.0 mm in both traditional microscopic (p = 0.009) and digital pathology (p = 0.016) evaluations, with significant differences in progression-free survival (PFS) observed at this cutoff (p = 0.006 and p = 0.002, respectively). The predictive area's cutoff was 35.6 mm2 (p = 0.005), which significantly affected PFS (p = 0.015). Using an 8.0-mm cutoff in traditional microscopic evaluation and a 35.6-mm2 cutoff in digital pathology showed comparable predictive results using the proportion of variation explained (PVE) methods (2.6% vs. 2.4%). Excluding cases with predominant cystic changes in LNs, the largest metastatic areas by digital pathology had the highest PVE at 3.9%. Furthermore, high volume of LN metastasis (p = 0.001), extranodal extension (p = 0.047), and high ratio of metastatic LNs (p = 0.006) were associated with poor prognosis. Both traditional microscopic and digital pathology evaluations effectively measured the longest diameter of metastatic foci in LNs. Moreover, digital pathology offers limited advantages in predicting PFS of patients with lateral cervical LN metastasis of PTC, especially those without predominant cystic changes in LNs.
Affiliation(s)
- Chae A Kim
- Division of Endocrinology and Metabolism, Department of Internal Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Hyeong Rok An
- Department of Pathology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jungmin Yoo
- Division of Endocrinology and Metabolism, Department of Internal Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Yu-Mi Lee
- Department of Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Tae-Yon Sung
- Department of Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Won Gu Kim
- Division of Endocrinology and Metabolism, Department of Internal Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Dong Eun Song
- Department of Pathology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea

31
Barlow J, Sragi Z, Rivera-Rivera G, Al-Awady A, Daşdöğen Ü, Courey MS, Kirke DN. The Use of Deep Learning Software in the Detection of Voice Disorders: A Systematic Review. Otolaryngol Head Neck Surg 2024; 170:1531-1543. [PMID: 38168017 DOI: 10.1002/ohn.636] [Received: 07/07/2023] [Revised: 11/30/2023] [Accepted: 12/07/2023] [Indexed: 01/05/2024]
Abstract
OBJECTIVE: To summarize the use of deep learning in the detection of voice disorders using acoustic and laryngoscopic input, compare specific neural networks in terms of accuracy, and assess their effectiveness compared with expert clinical visual examination.
DATA SOURCES: Embase, MEDLINE, and Cochrane Central.
REVIEW METHODS: Databases were screened through November 11, 2023 for relevant studies. The inclusion criteria required studies to utilize a specified deep learning method, use laryngoscopy or acoustic input, and measure accuracy of binary classification between healthy patients and those with voice disorders.
RESULTS: Thirty-four studies met the inclusion criteria: 18 focusing on voice analysis, 15 on imaging analysis, and 1 on both. Across the 18 acoustic studies, 21 programs were used for identification of organic and functional voice disorders. These included 10 convolutional neural networks (CNNs), 6 multilayer perceptrons (MLPs), and 5 other neural networks. The binary classification systems yielded a mean accuracy of 89.0% overall, including 93.7% for MLP programs and 84.5% for CNNs. Among the 15 imaging analysis studies, a total of 23 programs were utilized, resulting in a mean accuracy of 91.3%. Specifically, the 20 CNNs achieved a mean accuracy of 92.6% compared with 83.0% for the 3 MLPs.
CONCLUSION: Deep learning models were shown to be highly accurate in the detection of voice pathology, with CNNs most effective for assessing laryngoscopy images and MLPs most effective for assessing acoustic input. While deep learning methods outperformed expert clinical exam in limited comparisons, further studies integrating external validation are necessary.
Affiliation(s)
- Joshua Barlow
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York City, New York, USA
- Zara Sragi
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York City, New York, USA
- Gabriel Rivera-Rivera
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York City, New York, USA
- Abdurrahman Al-Awady
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York City, New York, USA
- Ümit Daşdöğen
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York City, New York, USA
- Mark S Courey
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York City, New York, USA
- Diana N Kirke
- Department of Otolaryngology-Head and Neck Surgery, Icahn School of Medicine at Mount Sinai, New York City, New York, USA

32
Zhao L, Hao R, Chai Z, Fu W, Yang W, Li C, Liu Q, Jiang Y. DeepOCR: A multi-species deep-learning framework for accurate identification of open chromatin regions in livestock. Comput Biol Chem 2024; 110:108077. [PMID: 38691895 DOI: 10.1016/j.compbiolchem.2024.108077] [Received: 01/11/2024] [Revised: 03/27/2024] [Accepted: 04/16/2024] [Indexed: 05/03/2024]
Abstract
A wealth of experimental evidence has suggested that open chromatin regions (OCRs) are involved in many critical biological activities, such as DNA replication, enhancer activity, and gene transcription. Accurately identifying OCRs in livestock species can provide critical insights into the distribution and characteristics of OCRs for disease treatment in livestock, thereby improving animal welfare. However, most current machine-learning methods for OCR prediction were originally designed for humans and a limited number of model organisms, and thus their performance on non-model organisms, specifically livestock, is often unsatisfactory. To bridge this gap, we propose DeepOCR, a lightweight depth-separable residual network model for predicting OCRs in livestock, including chicken, cattle, and sheep. DeepOCR integrates a single convolution layer and two improved residual blocks to extract and learn important features from the input DNA sequences. A fully connected layer further processes the extracted features and improves the robustness of the entire network. Our benchmarking experiments demonstrated the superior prediction performance of DeepOCR compared with state-of-the-art approaches on testing datasets for the three species. The source code of DeepOCR is freely available for academic purposes at https://github.com/jasonzhao371/DeepOCR/. We anticipate that DeepOCR will serve as a practical and reliable computational tool for OCR-related studies in livestock species.
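Sequence models of this kind typically one-hot encode DNA and scan it with learned motif filters in their first convolution layer. The sketch below shows that front end only, as a plain NumPy forward pass with random filters; it illustrates the standard encoding and convolution, not DeepOCR's actual implementation or weights.

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    """Encode a DNA sequence as a (length, 4) one-hot matrix."""
    mat = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        mat[i, BASES.index(base)] = 1.0
    return mat

def conv1d(x: np.ndarray, kernels: np.ndarray) -> np.ndarray:
    """Valid 1D convolution over the sequence axis: x is (L, 4),
    kernels is (n_filters, k, 4); returns (L - k + 1, n_filters)."""
    n, k, _ = kernels.shape
    out = np.empty((x.shape[0] - k + 1, n))
    for i in range(out.shape[0]):
        window = x[i:i + k]  # (k, 4) slice of the sequence
        out[i] = np.tensordot(kernels, window, axes=([1, 2], [0, 1]))
    return out

x = one_hot("ACGTACGT")
rng = np.random.default_rng(0)
feat = conv1d(x, rng.standard_normal((8, 5, 4)))  # 8 motif detectors, width 5
print(feat.shape)  # (4, 8): one activation per position per filter
```

In the published model, feature maps like `feat` would pass through the residual blocks and the fully connected layer to produce an OCR/non-OCR score.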
Affiliation(s)
- Liangwei Zhao
- College of Information Engineering, Northwest A&F University, Yangling 712100, China
- Ran Hao
- College of Information Engineering, Northwest A&F University, Yangling 712100, China
- Ziyi Chai
- College of Information Engineering, Northwest A&F University, Yangling 712100, China
- Weiwei Fu
- College of Pastoral Agriculture Science and Technology, Lanzhou University, Lanzhou, Gansu 730020, China
- Wei Yang
- National Clinical Research Center for Infectious Diseases, Shenzhen Third People's Hospital, Shenzhen 518112, China
- Chen Li
- Monash Biomedicine Discovery Institute and Department of Biochemistry and Molecular Biology, Monash University, Melbourne, VIC 3800, Australia
- Quanzhong Liu
- College of Information Engineering, Northwest A&F University, Yangling 712100, China
- Yu Jiang
- Key Laboratory of Animal Genetics, Breeding and Reproduction of Shaanxi Province, College of Animal Science and Technology, Northwest A&F University, Yangling 712100, China
- Key Laboratory of Livestock Biology, Northwest A&F University, Yangling, Shaanxi 712100, China

33
Taciuc IA, Dumitru M, Vrinceanu D, Gherghe M, Manole F, Marinescu A, Serboiu C, Neagos A, Costache A. Applications and challenges of neural networks in otolaryngology (Review). Biomed Rep 2024; 20:92. [PMID: 38765859 PMCID: PMC11099604 DOI: 10.3892/br.2024.1781] [Received: 01/28/2024] [Accepted: 04/05/2024] [Indexed: 05/22/2024]
Abstract
Artificial Intelligence (AI) has become a topic of interest that is frequently debated in all research fields. The medical field is no exception, and several questions remain unanswered; when and how this field can benefit from AI support in daily routines are the most frequently asked. The present review aims to present the types of neural networks (NNs) available for development, discussing their advantages, disadvantages and how they can be applied practically. In addition, the present review summarizes how NNs (combined with various other features) have already been applied in studies in the ear, nose and throat research field, from assisting diagnosis to treatment management. Although the answer to this question regarding AI remains elusive, understanding the basics and types of applicable NNs can lead to future studies possibly using more than one type of NN. This approach may bypass the current limitations in accuracy and relevance of information generated by AI. The reviewed studies, the majority of which used convolutional NNs, obtained accuracies varying from 70 to 98%, with a number of studies having trained the AI on a limited number of cases (<100 patients). The lack of standardization in AI protocols for research negatively affects data homogeneity and the transparency of databases.
Collapse
Affiliation(s)
- Iulian-Alexandru Taciuc
- Department of Pathology, ‘Carol Davila’ University of Medicine and Pharmacy, 020021 Bucharest, Romania
| | - Mihai Dumitru
- Department of ENT, ‘Carol Davila’ University of Medicine and Pharmacy, 050751 Bucharest, Romania
| | - Daniela Vrinceanu
- Department of ENT, ‘Carol Davila’ University of Medicine and Pharmacy, 050751 Bucharest, Romania
| | - Mirela Gherghe
- Department of Nuclear Medicine, ‘Carol Davila’ University of Medicine and Pharmacy, 022328 Bucharest, Romania
| | - Felicia Manole
- Department of ENT, Faculty of Medicine University of Oradea, 410073 Oradea, Romania
| | - Andreea Marinescu
- Department of Radiology and Medical Imaging ‘Carol Davila’ University of Medicine and Pharmacy, 050096 Bucharest, Romania
| | - Crenguta Serboiu
- Department of Cell Biology, Molecular and Histology, ‘Carol Davila’ University of Medicine and Pharmacy, 050096 Bucharest, Romania
| | - Adriana Neagos
- Department of ENT, ‘George Emil Palade’ University of Medicine, Pharmacy, Science, and Technology of Targu Mures, 540142 Mures, Romania
| | - Adrian Costache
- Department of Pathology, ‘Carol Davila’ University of Medicine and Pharmacy, 020021 Bucharest, Romania
| |
Collapse
|
34
|
Ross T, Tanna R, Lilaonitkul W, Mehta N. Deep Learning for Automated Image Segmentation of the Middle Ear: A Scoping Review. Otolaryngol Head Neck Surg 2024; 170:1544-1554. [PMID: 38667630 DOI: 10.1002/ohn.758] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2024] [Revised: 02/28/2024] [Accepted: 03/15/2024] [Indexed: 05/31/2024]
Abstract
OBJECTIVE Convolutional neural networks (CNNs) have revolutionized medical image segmentation in recent years. This scoping review aimed to carry out a comprehensive review of the literature describing automated image segmentation of the middle ear from computed tomography (CT) scans using CNNs. DATA SOURCES A comprehensive literature search, generated jointly with a medical librarian, was performed on Medline, Embase, Scopus, Web of Science, and Cochrane, using Medical Subject Heading terms and keywords. Databases were searched from inception to July 2023. Reference lists of included papers were also screened. REVIEW METHODS Ten studies were included for analysis, comprising a total of 866 scans used in model training/testing. Thirteen different architectures were described to perform automated segmentation. The best Dice similarity coefficient (DSC) for the entire ossicular chain was 0.87 using ResNet. The highest DSC for any structure was 0.93 for the incus using 3D-V-Net. The most difficult structure to segment was the stapes, with a highest DSC of 0.84 using 3D-V-Net. CONCLUSIONS Numerous CNN architectures have demonstrated good performance in segmenting the middle ear. To overcome some of the difficulties in segmenting the stapes, we recommend the development of an architecture trained on cone beam CTs to provide improved spatial resolution to assist with delineating the smallest ossicle. IMPLICATIONS FOR PRACTICE This has clinical applications for preoperative planning, diagnosis, and simulation.
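The Dice similarity coefficient used to compare these architectures reduces to a few lines of Python. This is a minimal illustration on flat binary masks, with made-up mask values, not code from any reviewed study:

```python
def dice(pred, truth):
    """Dice similarity coefficient between two flat binary masks."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

pred  = [1, 1, 0, 1, 0, 0]   # hypothetical predicted segmentation
truth = [1, 0, 0, 1, 1, 0]   # hypothetical ground-truth segmentation
score = dice(pred, truth)    # 2*2 / (3+3), i.e. about 0.667
```

A DSC of 1.0 means perfect overlap, so the 0.84-0.93 values reported above indicate close but imperfect agreement with manual segmentation.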
Collapse
Affiliation(s)
- Talisa Ross
- Department of Ear, Nose and Throat Surgery, Charing Cross Hospital, Imperial College Healthcare NHS Trust, London, UK
- evidENT Team, Ear Institute, University College London, London, UK
| | - Ravina Tanna
- Department of Ear, Nose and Throat Surgery, Great Ormond Street Hospital, London, UK
| | | | - Nishchay Mehta
- evidENT Team, Ear Institute, University College London, London, UK
- Department of Ear, Nose and Throat Surgery, Royal National Ear Nose and Throat Hospital, London, UK
| |
Collapse
|
35
|
Kaheni H, Shiran MB, Kamrava SK, Zare-Sadeghi A. Intra and inter-regional functional connectivity of the human brain due to Task-Evoked fMRI Data classification through CNN & LSTM. J Neuroradiol 2024; 51:101188. [PMID: 38408721 DOI: 10.1016/j.neurad.2024.02.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2023] [Revised: 01/27/2024] [Accepted: 02/21/2024] [Indexed: 02/28/2024]
Abstract
BACKGROUND AND PURPOSE Olfaction is an early marker of neurodegenerative disease, and normal olfactory function is essential given the importance of olfaction in human life. Olfactory function is commonly assessed by psychophysical evaluation, which is patient-reported, so results rely on the patient's answers and collaboration. However, methodological difficulties in the psychophysical evaluation of olfactory-related cerebral areas have limited assessment of olfactory function in the human brain. MATERIALS AND METHODS The current study utilized clustering approaches to assess olfactory function in fMRI data and used brain activity to parcellate the brain into regions with homogeneous properties. A deep neural network architecture based on ResNet convolutional neural networks (CNNs) and Long Short-Term Memory (LSTM) was designed to classify healthy subjects and subjects with olfactory disorders. RESULTS The fMRI result obtained by the k-means unsupervised machine learning model was within the expected outcome and similar to that found with the CONN toolbox in detecting active areas. There was no significant difference between the group mean and individual subjects. The proposed CRNN deep learning model classified the fMRI data into healthy and olfactory-disorder groups with an accuracy of 97%. CONCLUSIONS The k-means unsupervised algorithm can detect the active regions in the brain and analyze olfactory function. The classification results show that the CNN-LSTM architecture using ResNet provides the best accuracy score on olfactory fMRI data. To our knowledge, this is the first such detailed attempt conducted on olfactory fMRI data.
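The k-means step used to parcellate activity into homogeneous clusters can be sketched with plain Lloyd's algorithm on 1-D data. This is an illustration of the technique only, not the study's implementation, and the data values are invented:

```python
def kmeans_1d(values, centers, iters=10):
    """Lloyd's algorithm on 1-D data: assign each point to the nearest
    center, then recompute each center as its cluster mean."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for v in values:
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two well-separated groups of "activity" values converge to their means.
centers = kmeans_1d([1.0, 2.0, 10.0, 11.0], centers=[0.0, 5.0])
```

In the fMRI setting each point would be a voxel's activity profile rather than a single number, but the assign-then-average loop is identical.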
Collapse
Affiliation(s)
- Haniyeh Kaheni
- Finetech in Medicine Research Center, Department of Medical Physics, School of Medicine, Iran University of Medical Sciences (IUMS), Tehran, Iran
| | - Mohammad Bagher Shiran
- Finetech in Medicine Research Center, Department of Medical Physics, School of Medicine, Iran University of Medical Sciences (IUMS), Tehran, Iran
| | - Seyed Kamran Kamrava
- ENT and Head and Neck Research Center and Department, The Five Senses Health Institute, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
| | - Arash Zare-Sadeghi
- Finetech in Medicine Research Center, Department of Medical Physics, School of Medicine, Iran University of Medical Sciences (IUMS), Tehran, Iran.
| |
Collapse
|
36
|
Borna MR, Sepehri MM, Shadpour P, Khaleghi Mehr F. Enhancing bladder cancer diagnosis through transitional cell carcinoma polyp detection and segmentation: an artificial intelligence powered deep learning solution. Front Artif Intell 2024; 7:1406806. [PMID: 38873177 PMCID: PMC11169928 DOI: 10.3389/frai.2024.1406806] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2024] [Accepted: 05/08/2024] [Indexed: 06/15/2024] Open
Abstract
Background Bladder cancer, specifically transitional cell carcinoma (TCC) polyps, presents a significant healthcare challenge worldwide. Accurate segmentation of TCC polyps in cystoscopy images is crucial for early diagnosis and urgent treatment. Deep learning models have shown promise in addressing this challenge. Methods We evaluated deep learning architectures, including Unetplusplus_vgg19, Unet_vgg11, and FPN_resnet34, trained on a dataset of annotated cystoscopy images of low quality. Results The models showed promise, with Unetplusplus_vgg19 and FPN_resnet34 exhibiting precisions of 55.40% and 57.41%, respectively, suitable for clinical application without modifying existing treatment workflows. Conclusion Deep learning models demonstrate potential in TCC polyp segmentation, even when trained on lower-quality images, suggesting their viability in improving timely bladder cancer diagnosis without impacting current clinical processes.
Collapse
Affiliation(s)
- Mahdi-Reza Borna
- Department of IT Engineering, Faculty of Industrial and Systems Engineering, Tarbiat Modares University, Tehran, Iran
| | - Mohammad Mehdi Sepehri
- Department of IT Engineering, Faculty of Industrial and Systems Engineering, Tarbiat Modares University, Tehran, Iran
| | - Pejman Shadpour
- Hasheminejad Kidney Center (HKC), Iran University of Medical Sciences, Tehran, Iran
| | | |
Collapse
|
37
|
Hou G, Li R, Tian M, Ding J, Zhang X, Yang B, Chen C, Huang R, Yin Y. Improving Efficiency: Automatic Intelligent Weighing System as a Replacement for Manual Pig Weighing. Animals (Basel) 2024; 14:1614. [PMID: 38891661 PMCID: PMC11171250 DOI: 10.3390/ani14111614] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2024] [Revised: 05/27/2024] [Accepted: 05/27/2024] [Indexed: 06/21/2024] Open
Abstract
To verify the accuracy of AIWS, we weighed 106 pen-housed growing-finishing pigs using both the manual and AIWS methods. Accuracy was evaluated based on the values of MAE, MAPE, and RMSE. In the growth experiment, manual weighing was conducted every two weeks and AIWS-predicted weight data were recorded daily, followed by fitting the growth curves. The results showed that MAE, MAPE, and RMSE values for 60 to 120 kg pigs were 3.48 kg, 3.71%, and 4.43 kg, respectively. The correlation coefficient r between the AIWS and manual methods was 0.9410, and R2 was 0.8854; the two methods were very significantly correlated (p < 0.001). In growth curve fitting, the AIWS method had lower AIC and BIC values than the manual method. The Logistic model fitted to AIWS data was the best-fit model. The age and body weight at the inflection point of the best-fit model were 164.46 d and 93.45 kg, respectively. The maximum growth rate was 831.66 g/d. In summary, AIWS can accurately predict pigs' body weights in actual production and has a better fitting effect on the growth curves of growing-finishing pigs. This study suggested that it is feasible for AIWS to replace manual weighing for measuring the weight of 50 to 120 kg live pigs in large-scale farming.
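The three error metrics used to evaluate AIWS are straightforward to compute. A minimal sketch follows; the weight values are invented for illustration and are not data from the study:

```python
import math

def mae(pred, true):
    """Mean absolute error."""
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

def mape(pred, true):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(p - t) / t for p, t in zip(pred, true)) / len(true)

def rmse(pred, true):
    """Root mean squared error; penalizes large misses more than MAE."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true))

predicted = [62.0, 81.5, 99.0]   # hypothetical AIWS weights (kg)
measured  = [60.0, 80.0, 100.0]  # hypothetical scale weights (kg)
```

Because RMSE squares the residuals, it always sits at or above MAE, which is why the study reports both (4.43 kg vs 3.48 kg).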
Collapse
Affiliation(s)
- Gaifeng Hou
- CAS Key Laboratory of Agro-Ecological Processes in Subtropical Region, Hunan Provincial Key Laboratory of Animal Nutritional Physiology and Metabolic Process, Hunan Research Center of Livestock and Poultry Sciences, South Central Experimental Station of Animal Nutrition and Feed Science in the Ministry of Agriculture, National Engineering Laboratory for Poultry Breeding Pollution Control and Resource Technology, Institute of Subtropical Agriculture, Chinese Academy of Sciences, Changsha 410125, China; (G.H.); (R.L.); (M.T.); (J.D.)
| | - Rui Li
- CAS Key Laboratory of Agro-Ecological Processes in Subtropical Region, Hunan Provincial Key Laboratory of Animal Nutritional Physiology and Metabolic Process, Hunan Research Center of Livestock and Poultry Sciences, South Central Experimental Station of Animal Nutrition and Feed Science in the Ministry of Agriculture, National Engineering Laboratory for Poultry Breeding Pollution Control and Resource Technology, Institute of Subtropical Agriculture, Chinese Academy of Sciences, Changsha 410125, China; (G.H.); (R.L.); (M.T.); (J.D.)
| | - Mingzhou Tian
- CAS Key Laboratory of Agro-Ecological Processes in Subtropical Region, Hunan Provincial Key Laboratory of Animal Nutritional Physiology and Metabolic Process, Hunan Research Center of Livestock and Poultry Sciences, South Central Experimental Station of Animal Nutrition and Feed Science in the Ministry of Agriculture, National Engineering Laboratory for Poultry Breeding Pollution Control and Resource Technology, Institute of Subtropical Agriculture, Chinese Academy of Sciences, Changsha 410125, China; (G.H.); (R.L.); (M.T.); (J.D.)
| | - Jing Ding
- CAS Key Laboratory of Agro-Ecological Processes in Subtropical Region, Hunan Provincial Key Laboratory of Animal Nutritional Physiology and Metabolic Process, Hunan Research Center of Livestock and Poultry Sciences, South Central Experimental Station of Animal Nutrition and Feed Science in the Ministry of Agriculture, National Engineering Laboratory for Poultry Breeding Pollution Control and Resource Technology, Institute of Subtropical Agriculture, Chinese Academy of Sciences, Changsha 410125, China; (G.H.); (R.L.); (M.T.); (J.D.)
| | - Xingfu Zhang
- College of Computer Science and Technology, Heilongjiang Institute of Technology, Harbin 150050, China;
- Beijing Focused Loong Technology Co., Ltd., Beijing 100086, China
| | - Bin Yang
- Key Laboratory of Visual Perception and Artificial Intelligence of Hunan Province, College of Electrical and Information Engineering, Hunan University, Changsha 410082, China;
| | - Chunyu Chen
- College of Information and Communication, Harbin Engineering University, Harbin 150001, China;
| | - Ruilin Huang
- CAS Key Laboratory of Agro-Ecological Processes in Subtropical Region, Hunan Provincial Key Laboratory of Animal Nutritional Physiology and Metabolic Process, Hunan Research Center of Livestock and Poultry Sciences, South Central Experimental Station of Animal Nutrition and Feed Science in the Ministry of Agriculture, National Engineering Laboratory for Poultry Breeding Pollution Control and Resource Technology, Institute of Subtropical Agriculture, Chinese Academy of Sciences, Changsha 410125, China; (G.H.); (R.L.); (M.T.); (J.D.)
| | - Yulong Yin
- CAS Key Laboratory of Agro-Ecological Processes in Subtropical Region, Hunan Provincial Key Laboratory of Animal Nutritional Physiology and Metabolic Process, Hunan Research Center of Livestock and Poultry Sciences, South Central Experimental Station of Animal Nutrition and Feed Science in the Ministry of Agriculture, National Engineering Laboratory for Poultry Breeding Pollution Control and Resource Technology, Institute of Subtropical Agriculture, Chinese Academy of Sciences, Changsha 410125, China; (G.H.); (R.L.); (M.T.); (J.D.)
| |
Collapse
|
38
|
Bajaj S, Bala M, Angurala M. A comparative analysis of different augmentations for brain images. Med Biol Eng Comput 2024:10.1007/s11517-024-03127-7. [PMID: 38782880 DOI: 10.1007/s11517-024-03127-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2023] [Accepted: 05/10/2024] [Indexed: 05/25/2024]
Abstract
Deep learning (DL) requires a large amount of training data to improve performance and prevent overfitting. To overcome this limitation, we need to increase the size of the training dataset, which can be done by augmentation of a small dataset. The augmentation approaches must enhance the model's performance during the learning period. There are several types of transformations that can be applied to medical images. These transformations can be applied to the entire dataset or to a subset of the data, depending on the desired outcome. In this study, we categorize data augmentation methods into four groups: absent augmentation, where no modifications are made; basic augmentation, which includes brightness and contrast adjustments; intermediate augmentation, encompassing a wider array of transformations like rotation, flipping, and shifting in addition to brightness and contrast adjustments; and advanced augmentation, where all transformation layers are employed. We conduct a comprehensive analysis to determine which group performs best when applied to brain CT images. This evaluation aims to identify the augmentation group that produces the most favorable results in terms of improving model accuracy, minimizing diagnostic errors, and ensuring the robustness of the model in the context of brain CT image analysis.
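The "basic" and "intermediate" transformations described above reduce to simple pixel arithmetic. A hedged sketch on a toy grayscale image follows; the pixel values and parameter choices are illustrative only:

```python
def adjust_brightness(img, delta):
    """Basic augmentation: shift every pixel by delta, clipped to [0, 255]."""
    return [[min(255, max(0, p + delta)) for p in row] for row in img]

def adjust_contrast(img, factor):
    """Basic augmentation: scale pixel values around the mid-grey value 128."""
    return [[min(255, max(0, round(128 + (p - 128) * factor))) for p in row]
            for row in img]

def hflip(img):
    """Intermediate augmentation: horizontal flip."""
    return [row[::-1] for row in img]

img = [[10, 200],
       [120, 130]]   # a 2x2 toy "scan"
```

Each transform yields a new training sample with the same label, which is how a small dataset is stretched without collecting more scans.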
Collapse
Affiliation(s)
- Shilpa Bajaj
- Applied Sciences (Computer Applications), I.K. Gujral Punjab Technical University, Jalandhar, Kapurthala, India.
| | - Manju Bala
- Department of Computer Science and Engineering, Khalsa College of Engineering and Technology, Amritsar, India
| | - Mohit Angurala
- Apex Institute of Technology (CSE), Chandigarh University, Gharuan, Mohali, Punjab, India
| |
Collapse
|
39
|
Zayed SO, Abd-Rabou RYM, Abdelhameed GM, Abdelhamid Y, Khairy K, Abulnoor BA, Ibrahim SH, Khaled H. The innovation of AI-based software in oral diseases: clinical-histopathological correlation diagnostic accuracy primary study. BMC Oral Health 2024; 24:598. [PMID: 38778322 PMCID: PMC11112957 DOI: 10.1186/s12903-024-04347-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2024] [Accepted: 05/08/2024] [Indexed: 05/25/2024] Open
Abstract
BACKGROUND Machine learning (ML) through artificial intelligence (AI) could help clinicians and oral pathologists address diagnostic problems in the field of potentially malignant lesions, oral cancer, periodontal diseases, salivary gland disease, oral infections, immune-mediated disease, and others. AI can detect micro-features beyond the human eye and provide solutions in critical diagnostic cases. OBJECTIVE The objective of this study was to develop software, with all the required training data, to act as an AI-based program for diagnosing oral diseases. Our research question was: can we develop computer-aided software for accurate diagnosis of oral diseases based on clinical and histopathological data inputs? METHOD The study sample included clinical images, patient symptoms, radiographic images, histopathological images and texts for the oral diseases of interest (premalignant lesions, oral cancer, salivary gland neoplasms, immune-mediated oral mucosal lesions, oral reactive lesions); a total of 28 diseases were enrolled, retrieved from the archives of the oral maxillofacial pathology department. In total, 11,200 texts and 3,000 images were used (2,800 images for training the program, 100 images as test data, and 100 cases for calculating accuracy, sensitivity and specificity). RESULTS The correct diagnosis rates for group 1 (software users), group 2 (microscope users) and group 3 (hybrid) were 87%, 90.6% and 95%, respectively. Inter-observer reliability was assessed by calculating Cronbach's alpha and the interclass correlation coefficient, which revealed values of 0.934, 0.712 and 0.703 for groups 1, 2 and 3, respectively. All groups showed acceptable reliability, especially group 1 using the Diagnosis Oral Diseases Software (DODS), which revealed a higher reliability value than the other groups. However, the accuracy, sensitivity and specificity of the software were lower than those of oral pathologists (master's degree). CONCLUSION The correct diagnosis rate of DODS was comparable to that of oral pathologists using standard microscopic examination. The DODS program could be utilized as a diagnostic guidance tool with high reliability and accuracy.
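Accuracy, sensitivity and specificity as reported here derive directly from confusion-matrix counts. A minimal sketch with made-up counts (not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # true-positive rate: diseased cases caught
    specificity = tn / (tn + fp)   # true-negative rate: healthy cases cleared
    return accuracy, sensitivity, specificity

# Hypothetical counts for 100 evaluation cases.
acc, sens, spec = diagnostic_metrics(tp=45, fp=5, tn=42, fn=8)
```

Reporting all three matters because a classifier can score high accuracy while missing many diseased cases if the dataset is imbalanced.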
Collapse
Affiliation(s)
- Shaimaa O Zayed
- Department of Oral maxillofacial Pathology, Faculty of Dentistry, Cairo University, Cairo, Egypt
- Department of Oral Pathology, Misr University for Science and Technology, P. O. Box 77, Giza, Egypt
| | - Rawan Y M Abd-Rabou
- Faculty of Oral Medicine & Dental Surgery, Misr University for Science and Technology, P. O. Box 77, Giza, Egypt
| | | | - Youssef Abdelhamid
- Philosophy & Interactive Media Minors, New York University, Abu Dhabi, United Arab Emirates
| | | | - Bassam A Abulnoor
- Fixed Prosthodontics, Faculty of Dentistry, Ain Shams University, Cairo, Egypt
| | | | - Heba Khaled
- Lecturer of Oral Maxillofacial Pathology, Faculty of Dentistry, Cairo University, Cairo, Egypt
| |
Collapse
|
40
|
Klüner LV, Chan K, Antoniades C. Using artificial intelligence to study atherosclerosis from computed tomography imaging: A state-of-the-art review of the current literature. Atherosclerosis 2024:117580. [PMID: 38852022 DOI: 10.1016/j.atherosclerosis.2024.117580] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/11/2023] [Revised: 05/03/2024] [Accepted: 05/14/2024] [Indexed: 06/10/2024]
Abstract
With the enormous progress in the field of cardiovascular imaging in recent years, computed tomography (CT) has become readily available to phenotype atherosclerotic coronary artery disease. New analytical methods using artificial intelligence (AI) enable the analysis of complex phenotypic information of atherosclerotic plaques. In particular, deep learning-based approaches using convolutional neural networks (CNNs) facilitate tasks such as lesion detection, segmentation, and classification. New radiotranscriptomic techniques even capture underlying bio-histochemical processes through higher-order structural analysis of voxels on CT images. In the near future, the international large-scale Oxford Risk Factors And Non-invasive Imaging (ORFAN) study will provide a powerful platform for testing and validating prognostic AI-based models. The goal is the transition of these new approaches from research settings into a clinical workflow. In this review, we present an overview of existing AI-based techniques with a focus on imaging biomarkers to determine the degree of coronary inflammation, coronary plaques, and the associated risk. Further, the current limitations of AI-based approaches, as well as priorities for addressing these challenges, will be discussed. This will pave the way for an AI-enabled risk assessment tool to detect vulnerable atherosclerotic plaques and to guide treatment strategies for patients.
Collapse
Affiliation(s)
- Laura Valentina Klüner
- Acute Multidisciplinary Imaging and Interventional Centre, Division of Cardiovascular Medicine, Radcliffe Department of Medicine, Oxford NIHR Biomedical Research Centre, University of Oxford, United Kingdom
| | - Kenneth Chan
- Acute Multidisciplinary Imaging and Interventional Centre, Division of Cardiovascular Medicine, Radcliffe Department of Medicine, Oxford NIHR Biomedical Research Centre, University of Oxford, United Kingdom
| | - Charalambos Antoniades
- Acute Multidisciplinary Imaging and Interventional Centre, Division of Cardiovascular Medicine, Radcliffe Department of Medicine, Oxford NIHR Biomedical Research Centre, University of Oxford, United Kingdom.
| |
Collapse
|
41
|
Jovanovic L, Damaševičius R, Matic R, Kabiljo M, Simic V, Kunjadic G, Antonijevic M, Zivkovic M, Bacanin N. Detecting Parkinson's disease from shoe-mounted accelerometer sensors using convolutional neural networks optimized with modified metaheuristics. PeerJ Comput Sci 2024; 10:e2031. [PMID: 38855236 PMCID: PMC11157549 DOI: 10.7717/peerj-cs.2031] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2024] [Accepted: 04/09/2024] [Indexed: 06/11/2024]
Abstract
Neurodegenerative conditions significantly impact patient quality of life. Many conditions do not have a cure, but with appropriate and timely treatment the advance of the disease could be diminished. However, many patients only seek a diagnosis once the condition progresses to a point at which quality of life is significantly impacted. Effective non-invasive and readily accessible methods for early diagnosis can considerably enhance the quality of life of patients affected by neurodegenerative conditions. This work explores the potential of convolutional neural networks (CNNs) for detecting gait freezing associated with Parkinson's disease. Sensor data collected from wearable gyroscopes located at the sole of the patient's shoe record walking patterns. These patterns are further analyzed using convolutional networks to accurately detect abnormal walking patterns. The suggested method is assessed on a public real-world dataset collected from patients affected by Parkinson's as well as individuals from a control group. To improve the accuracy of the classification, an altered variant of the recent crayfish optimization algorithm is introduced and compared to contemporary optimization metaheuristics. Our findings reveal that the modified algorithm (MSCHO) significantly outperforms other methods in accuracy, demonstrated by low error rates and high Cohen's Kappa, precision, sensitivity, and F1-measures across three datasets. These results suggest the potential of CNNs, combined with advanced optimization techniques, for early, non-invasive diagnosis of neurodegenerative conditions, offering a path to improve patient quality of life.
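Cohen's Kappa, one of the metrics reported here, corrects raw agreement for what would be expected by chance. A minimal sketch for binary labels (an illustration of the statistic, not the study's code; the label sequences are invented):

```python
def cohens_kappa(y1, y2):
    """Cohen's Kappa for two binary (0/1) label sequences."""
    n = len(y1)
    po = sum(a == b for a, b in zip(y1, y2)) / n   # observed agreement
    p1, p2 = sum(y1) / n, sum(y2) / n              # positive-label rates
    pe = p1 * p2 + (1 - p1) * (1 - p2)             # chance agreement
    return (po - pe) / (1 - pe)

kappa = cohens_kappa([1, 1, 0, 0, 1, 0], [1, 1, 0, 0, 0, 1])
```

A Kappa of 0 means no better than chance and 1 means perfect agreement, which is why it is a stricter summary than raw accuracy on imbalanced gait datasets.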
Collapse
Affiliation(s)
- Luka Jovanovic
- Faculty of Technical Sciences, Singidunum University, Belgrade, Serbia
| | | | - Rade Matic
- Department for Information Systems and Technologies, Belgrade Academy for Business and Arts Applied Studies, Belgrade, Serbia
| | - Milos Kabiljo
- Department for Information Systems and Technologies, Belgrade Academy for Business and Arts Applied Studies, Belgrade, Serbia
| | - Vladimir Simic
- Faculty of Transport and Traffic Engineering, University of Belgrade, Belgrade, Serbia
- College of Engineering, Department of Industrial Engineering and Management, Yuan Ze University, Taoyuan City, Taiwan
| | - Goran Kunjadic
- Higher Colleges of Technology, Abu Dhabi, United Arab Emirates
| | - Milos Antonijevic
- Faculty of Informatics and Computing, Singidunum University, Belgrade, Serbia
| | - Miodrag Zivkovic
- Faculty of Informatics and Computing, Singidunum University, Belgrade, Serbia
| | - Nebojsa Bacanin
- Faculty of Informatics and Computing, Singidunum University, Belgrade, Serbia
- MEU Research Unit, Middle East University, Amman, Jordan
| |
Collapse
|
42
|
Shobayo O, Saatchi R, Ramlakhan S. Convolutional Neural Network to Classify Infrared Thermal Images of Fractured Wrists in Pediatrics. Healthcare (Basel) 2024; 12:994. [PMID: 38786405 PMCID: PMC11121475 DOI: 10.3390/healthcare12100994] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2024] [Revised: 05/02/2024] [Accepted: 05/10/2024] [Indexed: 05/25/2024] Open
Abstract
Convolutional neural network (CNN) models were devised and evaluated to classify infrared thermal (IRT) images of pediatric wrist fractures. The images were recorded from 19 participants with a wrist fracture and 21 without a fracture (sprain). The injury diagnosis was by X-ray radiography. For each participant, 299 IRT images of their wrists were recorded, generating 11,960 images (40 participants × 299 images). For each image, the wrist region of interest (ROI) was selected and fast Fourier transformed (FFT) to obtain a magnitude frequency spectrum. The spectrum was resized to 100 × 100 pixels from its center, as this region represented the main frequency components. Image augmentations of rotation, translation and shearing were applied to the 11,960 magnitude frequency spectra to assist with CNN generalization during training. The CNN had 34 layers associated with convolution, batch normalization, rectified linear unit, maximum pooling, SoftMax and classification operations. The ratio of images for training and testing was 70:30. The effects of augmentation and dropout on CNN performance were explored. Wrist fracture identification sensitivity and accuracy of 88% and 76%, respectively, were achieved. The CNN model was able to identify wrist fractures; however, a larger sample size would improve accuracy.
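The FFT-magnitude step can be illustrated with a direct DFT in pure Python. This is a didactic 1-D sketch under the assumption that only the magnitude spectrum matters; a real pipeline would apply an optimized 2-D FFT to the whole ROI:

```python
import cmath

def dft_magnitude(signal):
    """Magnitude spectrum via a direct discrete Fourier transform."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# A pure 2-cycle wave concentrates all its energy in two frequency bins.
spectrum = dft_magnitude([1.0, 0.0, -1.0, 0.0])
```

Feeding magnitude spectra rather than raw pixels to the CNN makes the input invariant to where the thermal pattern sits within the ROI, since translation only changes the phase, which the magnitude discards.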
Collapse
Affiliation(s)
- Olamilekan Shobayo
- Department of Computing, Sheffield Hallam University, Sheffield S1 2NU, UK;
| | - Reza Saatchi
- Department of Engineering and Mathematics, Sheffield Hallam University, Sheffield S1 1WB, UK
| | - Shammi Ramlakhan
- Emergency Department, Sheffield Children’s Hospital NHS Foundation Trust, Sheffield S10 2TH, UK;
| |
Collapse
|
43
|
Koido M, Tomizuka K, Terao C. Fundamentals for predicting transcriptional regulations from DNA sequence patterns. J Hum Genet 2024:10.1038/s10038-024-01256-3. [PMID: 38730006 DOI: 10.1038/s10038-024-01256-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2024] [Revised: 04/10/2024] [Accepted: 04/25/2024] [Indexed: 05/12/2024]
Abstract
Cell-type-specific regulatory elements, cataloged through extensive experiments and bioinformatics in large-scale consortia, have enabled enrichment analyses of genetic associations that primarily utilize positional information of the regulatory elements. These analyses have identified cell types and pathways genetically associated with human complex traits. However, our understanding of detailed allelic effects on these elements' activities and on-off states remains incomplete, hampering the interpretation of human genetic study results. This review introduces machine learning methods that learn sequence-dependent transcriptional regulation mechanisms from DNA sequences to predict such allelic effects (not associations). We provide a concise history of machine-learning-based approaches, the requirements, and the key computational processes, focusing on the fundamentals of machine learning. Convolution and self-attention, pivotal in modern deep-learning models, are explained through geometrical interpretations using dot products. This facilitates understanding of the concepts and why they have been used in machine learning for DNA sequences. These insights will inspire further research in this genetics and genomics field.
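The dot-product view of self-attention described here can be made concrete in a few lines. This sketch uses identity Q/K/V projections so the geometry is visible; it is an illustration of the mechanism, not a trained genomics model:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(tokens):
    """Scaled dot-product self-attention with identity Q/K/V projections.
    Each output is a convex combination of all token vectors, weighted by
    how strongly the query token's dot product aligns with each key."""
    d = len(tokens[0])
    out = []
    for q in tokens:
        scores = softmax([sum(a * b for a, b in zip(q, k)) / math.sqrt(d)
                          for k in tokens])
        out.append([sum(w * v[i] for w, v in zip(scores, tokens))
                    for i in range(d)])
    return out

# Two orthogonal one-hot "nucleotide embeddings": each attends mostly to itself.
out = self_attention([[1.0, 0.0], [0.0, 1.0]])
```

The dot product is exactly the geometric similarity the review describes: parallel vectors score high and attend to each other, orthogonal ones do not, and convolution applies the same dot-product idea over a sliding window instead of over all pairs.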
Collapse
Affiliation(s)
- Masaru Koido
- Laboratory of Complex Trait Genomics, Department of Computational Biology and Medical Sciences, Graduate School of Frontier Sciences, The University of Tokyo, Tokyo, Japan.
- Laboratory for Statistical and Translational Genetics, RIKEN Center for Integrative Medical Sciences, Yokohama, Japan.
| | - Kohei Tomizuka
- Laboratory for Statistical and Translational Genetics, RIKEN Center for Integrative Medical Sciences, Yokohama, Japan
| | - Chikashi Terao
- Laboratory for Statistical and Translational Genetics, RIKEN Center for Integrative Medical Sciences, Yokohama, Japan.
- Clinical Research Center, Shizuoka General Hospital, Shizuoka, Japan.
- The Department of Applied Genetics, The School of Pharmaceutical Sciences, University of Shizuoka, Shizuoka, Japan.
| |
Collapse
|
44
|
Yao J, Chu LC, Patlas M. Applications of Artificial Intelligence in Acute Abdominal Imaging. Can Assoc Radiol J 2024:8465371241250197. [PMID: 38715249 DOI: 10.1177/08465371241250197] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/12/2024] Open
Abstract
Artificial intelligence (AI) is a rapidly growing field with significant implications for radiology. Acute abdominal pain is a common clinical presentation that can range from benign conditions to life-threatening emergencies. The critical nature of these situations renders emergent abdominal imaging an ideal candidate for AI applications. CT, radiographs, and ultrasound are the most common modalities for imaging evaluation of these patients. For each modality, numerous studies have assessed the performance of AI models for detecting common pathologies, such as appendicitis, bowel obstruction, and cholecystitis. The capabilities of these models range from simple classification to detailed severity assessment. This narrative review explores the evolution, trends, and challenges in AI applications for evaluating acute abdominal pathologies. We review implementations of AI for non-traumatic and traumatic abdominal pathologies, with discussion of potential clinical impact, challenges, and future directions for the technology.
Collapse
Affiliation(s)
- Jason Yao
- Department of Radiology, McMaster University, Hamilton, ON, Canada
| | - Linda C Chu
- Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
| | - Michael Patlas
- Department of Medical Imaging, University of Toronto, Toronto, ON, Canada
| |
Collapse
|
45
|
Ishikawa T, Takeo Y, Sakurai K, Yoshinaga K, Furuya N, Inubushi Y, Tono K, Joti Y, Yabashi M, Kimura T, Yoshimi K. Sub-photon accuracy noise reduction of a single shot coherent diffraction pattern with an atomic model trained autoencoder. OPTICS EXPRESS 2024; 32:18301-18316. [PMID: 38858990 DOI: 10.1364/oe.523999] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/18/2024] [Accepted: 04/17/2024] [Indexed: 06/12/2024]
Abstract
Single-shot imaging with femtosecond X-ray lasers is a powerful measurement technique that can achieve both high spatial and temporal resolution. However, its accuracy has been severely limited by the difficulty of applying conventional noise-reduction processing. This study uses deep learning to validate noise reduction techniques, with autoencoders serving as the learning model. Focusing on the diffraction patterns of nanoparticles, we simulated a large dataset treating the nanoparticles as composed of many independent atoms. Three neural network architectures were investigated: a plain neural network, a convolutional neural network, and a U-net, with the U-net showing superior performance in noise reduction and sub-photon reproduction. We also extended our models to apply to diffraction patterns of particle shapes different from those in the simulated data. We then applied the U-net model to a coherent diffractive imaging study, wherein a nanoparticle in a microfluidic device is exposed to a single X-ray free-electron laser pulse. After noise reduction, the reconstructed nanoparticle image improved significantly even though the nanoparticle shape was different from the training data, highlighting the importance of transfer learning.
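The training-data setup described here (diffraction patterns of nanoparticles built from many independent atoms, degraded by photon-counting noise) can be approximated as below. This is our illustrative reading of the approach, not the study's code; the function names, array sizes, and photon budget are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def ideal_pattern(atom_xy, size=64):
    """Coherent diffraction intensity of independent point atoms:
    the squared modulus of the summed complex scattering amplitudes."""
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    amp = np.zeros((size, size), dtype=complex)
    for (x, y) in atom_xy:
        amp += np.exp(-2j * np.pi * (fx * x + fy * y))
    return np.abs(amp) ** 2

def noisy_pattern(intensity, photons=1e3, rng=rng):
    """Scale to a total photon budget and sample Poisson counts,
    mimicking single-shot photon-counting noise."""
    scale = photons / intensity.sum()
    return rng.poisson(intensity * scale)

atoms = rng.uniform(0, 32, size=(50, 2))   # 50 random atom positions
clean = ideal_pattern(atoms)               # target for the autoencoder
noisy = noisy_pattern(clean, photons=500)  # input: sub-photon regime
print(noisy.mean())                        # well below 1 photon per pixel
```

Pairs of `noisy` inputs and `clean` targets generated this way would form the supervised dataset on which a denoising autoencoder (or U-net) is trained.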
Collapse
|
46
|
Ju J, Zhang Q, Guan Z, Shen X, Shen Z, Xu P. NTSM: a non-salient target segmentation model for oral mucosal diseases. BMC Oral Health 2024; 24:521. [PMID: 38698377 DOI: 10.1186/s12903-024-04193-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2024] [Accepted: 03/27/2024] [Indexed: 05/05/2024] Open
Abstract
BACKGROUND Oral mucosal lesions resemble the surrounding normal tissue, i.e., they have many non-salient features, which poses a challenge for accurate lesion segmentation. Additionally, high-precision large models generate too many parameters, which puts pressure on storage and makes deployment on portable devices difficult. METHODS To address these issues, we design a non-salient target segmentation model (NTSM) to improve segmentation performance while reducing the number of parameters. The NTSM includes a difference association (DA) module and multiple feature hierarchy pyramid attention (FHPA) modules. The DA module enhances feature differences at different levels to learn local context information and extend the segmentation mask to potentially similar areas. It also learns logical semantic relationship information through different receptive fields to determine the actual lesions and further elevates the segmentation performance of non-salient lesions. The FHPA module extracts pathological information from different views by performing the Hadamard product attention (HPA) operation on input features, which reduces the number of parameters. RESULTS The experimental results on the oral mucosal diseases (OMD) dataset and international skin imaging collaboration (ISIC) dataset demonstrate that our model outperforms existing state-of-the-art methods. Compared with the nnU-Net backbone, our model has 43.20% fewer parameters while still achieving a 3.14% increase in the Dice score. CONCLUSIONS Our model has high segmentation accuracy on non-salient areas of oral mucosal diseases and can effectively reduce resource consumption.
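As a rough illustration of the parameter-saving idea behind Hadamard product attention (re-weighting features by an element-wise product with a small learned gate, rather than computing a dense dot-product attention map), one might write the following. The shapes and names are our illustrative assumptions, not the NTSM implementation:

```python
import numpy as np

def hpa(x, w_gate):
    """x: (C, H, W) feature map; w_gate: (C, 1, 1) learned per-channel
    parameters. The gate has C parameters instead of the O(HW x HW)
    weights a dense attention map over spatial positions would need."""
    gate = 1.0 / (1.0 + np.exp(-w_gate))   # sigmoid -> values in (0, 1)
    return x * gate                        # Hadamard (element-wise) product

x = np.ones((8, 4, 4))      # dummy feature map with 8 channels
w = np.zeros((8, 1, 1))     # sigmoid(0) = 0.5, so every feature is halved
out = hpa(x, w)
print(out[0, 0, 0])         # 0.5
```

The output keeps the input's shape, so such a gate can be dropped between layers of an encoder-decoder without altering the rest of the architecture.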
Collapse
Affiliation(s)
- Jianguo Ju
- School of Information Science and Technology, Northwest University, No.1, Xuefu Road, Xi'an, 710119, Shaanxi, China
| | - Qian Zhang
- School of Information Science and Technology, Northwest University, No.1, Xuefu Road, Xi'an, 710119, Shaanxi, China
| | - Ziyu Guan
- School of Information Science and Technology, Northwest University, No.1, Xuefu Road, Xi'an, 710119, Shaanxi, China
| | - Xuemin Shen
- Department of Oral Mucosal Diseases, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, No.639, Manufacturing Bureau Road, Huangpu, Shanghai, 200011, China
| | - Zhengyu Shen
- Department of Dermatology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, No.639, Manufacturing Bureau Road, Huangpu, Shanghai, 200011, China.
| | - Pengfei Xu
- School of Information Science and Technology, Northwest University, No.1, Xuefu Road, Xi'an, 710119, Shaanxi, China
| |
Collapse
|
47
|
Aljohani A, Aburasain RY. A hybrid framework for glaucoma detection through federated machine learning and deep learning models. BMC Med Inform Decis Mak 2024; 24:115. [PMID: 38698412 PMCID: PMC11064392 DOI: 10.1186/s12911-024-02518-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2024] [Accepted: 04/19/2024] [Indexed: 05/05/2024] Open
Abstract
BACKGROUND Glaucoma, the second leading cause of global blindness, demands timely detection due to its asymptomatic progression. This paper introduces an advanced computerized system that integrates machine learning (ML), convolutional neural networks (CNNs), and image processing for accurate glaucoma detection using medical imaging data, surpassing prior research efforts. METHOD We developed a hybrid glaucoma detection framework using CNNs (ResNet50, VGG-16) and Random Forest. The models analyze pre-processed retinal images independently, and post-processing rules combine their predictions for an overall glaucoma assessment. RESULT The hybrid framework achieves a significant 95.41% accuracy, with precision and recall at 99.37% and 88.37%, respectively. The F1 score, balancing precision and recall, reaches a commendable 93.52%. These results highlight the robustness and effectiveness of the hybrid framework in accurate glaucoma diagnosis. CONCLUSION In summary, our research presents an innovative hybrid framework combining CNNs and traditional ML models for glaucoma detection. Using ResNet50, VGG-16, and Random Forest in an ensemble approach yields remarkable accuracy, precision, recall, and F1 score. These results showcase the methodology's potential to enhance glaucoma diagnosis, emphasizing its promising role in early detection and preventing irreversible vision loss. The integration of ML and DNNs in medical imaging analysis suggests a valuable path for future advancements in ophthalmic healthcare.
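The abstract says post-processing rules combine the three models' predictions but does not specify the rule; one common choice, shown here purely as a sketch, is a majority vote over thresholded probabilities. The stand-in probability vectors and the 0.5 threshold are our assumptions:

```python
import numpy as np

def ensemble_vote(p_resnet, p_vgg, p_rf, threshold=0.5):
    """Combine per-sample glaucoma probabilities from three models
    (stand-ins for ResNet50, VGG-16, and Random Forest) by majority
    vote over thresholded predictions."""
    votes = np.stack([p_resnet, p_vgg, p_rf]) >= threshold
    return votes.sum(axis=0) >= 2      # positive if at least 2 of 3 agree

# Hypothetical probabilities for three retinal images:
p1 = np.array([0.9, 0.2, 0.6])
p2 = np.array([0.8, 0.4, 0.3])
p3 = np.array([0.3, 0.1, 0.7])
print(ensemble_vote(p1, p2, p3))       # [ True False  True]
```

A vote like this lets a strong CNN prediction be overruled only when both other models disagree, which is one way an ensemble can trade a little recall for higher precision.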
Collapse
Affiliation(s)
- Abeer Aljohani
- Department of Computer Science, Applied College, Taibah University, Medina, 42353, Kingdom of Saudi Arabia.
| | - Rua Y Aburasain
- Department of Computer Science, College of Engineering and Computer Science, Jazan University, Jazan, 45142, Kingdom of Saudi Arabia
| |
Collapse
|
48
|
Zhang W, Tang Z, Shao H, Sun C, He X, Zhang J, Wang T, Yang X, Wang Y, Bin Y, Zhao L, Zhang S, Liang D, Wang J, Zhong D, Li Q. Intelligent classification of cardiotocography based on a support vector machine and convolutional neural network: Multiscene research. Int J Gynaecol Obstet 2024; 165:737-745. [PMID: 38009598 DOI: 10.1002/ijgo.15236] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2023] [Revised: 09/20/2023] [Accepted: 10/24/2023] [Indexed: 11/29/2023]
Abstract
OBJECTIVE To propose a computerized system utilizing multiscene analysis based on a support vector machine (SVM) and convolutional neural network (CNN) to assess cardiotocography (CTG) intelligently. METHODS We retrospectively collected 2542 CTG records of singleton pregnancies delivered at the maternity ward of the First Affiliated Hospital of Xi'an Jiaotong University from October 10, 2020, to August 7, 2021. CTG records were divided into five categories (baseline, variability, acceleration, deceleration, and normality). Apart from the category of normality, the four categories of abnormal data correspond to four scenes. Each scene was divided into training and testing sets at 9:1 or 7:3. We used three computer algorithms (dynamic threshold, SVM, and CNN) to learn and optimize the system. Accuracy, sensitivity, and specificity were used to evaluate performance. RESULTS The global accuracy, sensitivity, and specificity of the system were 93.88%, 93.06%, and 94.33%, respectively. In acceleration and deceleration scenes, when the convolution kernel was 3, the test data set reached the highest performance. CONCLUSION The multiscene research model using SVM and CNN is a potential effective tool to assist obstetricians in classifying CTG intelligently.
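The three evaluation metrics reported here have standard definitions over a binary confusion matrix; the following sketch shows those definitions only, not the paper's pipeline:

```python
import numpy as np

def metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity from binary labels."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
    }

# Toy labels (1 = abnormal trace, 0 = normal):
y_true = np.array([1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1])
print(metrics(y_true, y_pred))
```

Reporting all three together, as the study does, matters because in imbalanced screening data a high accuracy alone can mask poor sensitivity.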
Collapse
Affiliation(s)
- Wen Zhang
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, China
| | - Zixiang Tang
- Wuhan Second Ship Design and Research Institute, Wuhan, Hubei, China
| | - Huikai Shao
- School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, China
| | - Chao Sun
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, China
| | - Xin He
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, China
| | - Jiahui Zhang
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, China
| | - Tiantian Wang
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, China
| | - Xiaowei Yang
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, China
| | - Yiran Wang
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, China
| | - Yadi Bin
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, China
| | - Lanbo Zhao
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, China
| | - Siyi Zhang
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, China
| | - Dongxin Liang
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, China
| | - Jianliu Wang
- Department of Obstetrics and Gynecology, Peking University People's Hospital, Beijing, China
| | - Dexing Zhong
- School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, China
- Pazhou Lab, Guangzhou, China
| | - Qiling Li
- Department of Obstetrics and Gynecology, The First Affiliated Hospital of Xi'an Jiaotong University, Xi'an, Shaanxi, China
| |
Collapse
|
49
|
Cantor MC, Welk AA, Creutzinger KC, Woodrum Setser MM, Costa JHC, Renaud DL. The development and validation of a milk feeding behavior alert from automated feeder data to classify calves at risk for a diarrhea bout: A diagnostic accuracy study. J Dairy Sci 2024; 107:3140-3156. [PMID: 37949402 DOI: 10.3168/jds.2023-23635] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2023] [Accepted: 10/29/2023] [Indexed: 11/12/2023]
Abstract
The objective of this diagnostic accuracy study was to develop and validate an alert to identify calves at risk for a diarrhea bout using milk feeding behavior data (behavior) from automated milk feeders (AMF). We enrolled Holstein calves (n = 259) as a convenience sample from 2 facilities that were health scored daily preweaning and offered either 10 or 15 L/d of milk replacer. For alert development, 132 calves were enrolled, and the ability of milk intake, drinking speed, and rewarded visits collected from AMF to identify calves at risk for diarrhea was tested. Alerts that had high diagnostic accuracy in the alert development phase were validated using a holdout validation strategy on 127 different calves from the same facilities (all offered 15 L/d) for -3 to 1 d relative to diarrhea diagnosis. We enrolled calves that were either healthy or had a first diarrheal bout (loose feces ≥2 d or watery feces ≥1 d). Relative change and rolling dividends for each milk feeding behavior were calculated for each calf from the previous 2 d. Logistic regression models and receiver operating characteristic (ROC) curves were used to assess the diagnostic ability of relative change and rolling dividend behaviors (relative to alert day) to classify calves at risk for a diarrhea bout from -2 to 0 d relative to diagnosis. To maximize sensitivity (Se), alert thresholds were based on the ROC optimal classification cutoff. Diagnostic accuracy was met when the alert had a moderate area under the ROC curve (≥0.70), high accuracy (Acc; ≥0.80), high Se (≥0.80), and very high precision (Pre; ≥0.85). For alert development, deviations in rolling dividend milk intake with drinking speed had the best performance (10 L/d: ROC area under the curve [AUC] = 0.79, threshold ≤0.70; 15 L/d: ROC AUC = 0.82, threshold ≤0.60). Our diagnostic criteria were met only in calves offered 15 L/d (10 L/d: Se 75%, Acc 72%, Pre 92%, specificity [Sp] 55% vs. 15 L/d: Se 91%, Acc 91%, Pre 89%, Sp 73%).
For holdout validation, rolling dividend milk intake with drinking speed met diagnostic criteria for one facility (threshold ≤0.60, Se 86%, Acc 82%, Pre 94%, Sp 50%). However, no milk feeding behavior alerts met diagnostic criteria for the second facility due to poor Se (relative change milk intake -0.36 threshold, Se 71%, Acc 70%, and Pre 97%). We suggest that changes in milk feeding behavior may indicate diarrhea bouts in dairy calves. Future research should validate this alert in commercial settings; furthermore, software updates, support, and new analytics might be required for on-farm application to implement these types of alerts.
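The "ROC optimal classification cutoff" used for the alert thresholds can be sketched as a sweep over candidate cutoffs on a behavioral index, keeping the one that maximizes Youden's J = sensitivity + specificity - 1. Youden's J is a common choice for this cutoff, but the study's exact criterion may differ; the index values and labels below are invented for illustration:

```python
import numpy as np

def optimal_cutoff(score, sick):
    """Return the threshold on `score` maximizing Youden's J.
    A calf is flagged when its score falls at or below the cutoff
    (e.g., a drop in the rolling milk-intake ratio)."""
    best_j, best_t = -1.0, None
    for t in np.unique(score):
        pred = score <= t                  # low relative intake -> alert
        se = np.mean(pred[sick == 1])      # sensitivity among sick calves
        sp = np.mean(~pred[sick == 0])     # specificity among healthy calves
        j = se + sp - 1.0
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

# Hypothetical rolling intake ratios (1.0 = no change from prior days):
score = np.array([0.5, 0.55, 0.6, 0.9, 0.95, 1.0])
sick = np.array([1, 1, 1, 0, 0, 1])
t, j = optimal_cutoff(score, sick)
print(t, j)                                # 0.6 0.75
```

On this toy data the chosen cutoff (≤0.60) resembles the thresholds reported above, flagging calves whose intake fell to 60% or less of their recent baseline.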
Collapse
Affiliation(s)
- M C Cantor
- Department of Animal Science, The Pennsylvania State University, University Park, PA 16803; Department of Population Medicine, University of Guelph, Guelph, ON, Canada N1G 2W1.
| | - A A Welk
- Department of Population Medicine, University of Guelph, Guelph, ON, Canada N1G 2W1
| | - K C Creutzinger
- Department of Animal and Food Science, University of Wisconsin-River Falls, River Falls, WI 54022
| | - M M Woodrum Setser
- Department of Animal and Food Sciences, University of Kentucky, Lexington, KY 40546
| | - J H C Costa
- Department of Veterinary and Animal Sciences, University of Vermont, Burlington, VT 05405
| | - D L Renaud
- Department of Population Medicine, University of Guelph, Guelph, ON, Canada N1G 2W1
| |
Collapse
|
50
|
Hoon Yun B, Yu HY, Kim H, Myoung S, Yeo N, Choi J, Sook Chun H, Kim H, Ahn S. Geographical discrimination of Asian red pepper powders using 1H NMR spectroscopy and deep learning-based convolution neural networks. Food Chem 2024; 439:138082. [PMID: 38070234 DOI: 10.1016/j.foodchem.2023.138082] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2023] [Revised: 11/24/2023] [Accepted: 11/24/2023] [Indexed: 01/10/2024]
Abstract
This study investigated an innovative approach to discriminate the geographical origins of Asian red pepper powders by analyzing one-dimensional 1H NMR spectra through a deep learning-based convolution neural network (CNN). 1H NMR spectra were collected from 300 samples originating from China, Korea, and Vietnam and used as input data. Principal component analysis-linear discriminant analysis and support vector machine models were employed for comparison. Bayesian optimization was used for hyperparameter optimization, and cross-validation was performed to prevent overfitting. As a result, all three models discriminated the origins of the test samples with over 95 % accuracy. Specifically, the CNN models achieved a 100 % accuracy rate. Gradient-weighted class activation mapping analysis verified that the CNN models recognized the origins of the samples based on variations in metabolite distributions. This research demonstrated the potential of deep learning-based classification of 1H NMR spectra as an accurate and reliable approach for determining the geographical origins of various foods.
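Feeding a 1H NMR spectrum to a 1D CNN typically starts with binning it into a fixed-length, normalized vector; this sketch shows that common preprocessing step under our own assumptions (bin count, ppm range, and normalization are illustrative, not the paper's settings):

```python
import numpy as np

def bin_spectrum(ppm, intensity, n_bins=256, lo=0.0, hi=10.0):
    """Sum spectral intensity into n_bins equal-width ppm bins and
    scale to unit norm, giving a fixed-length 1D CNN input vector."""
    edges = np.linspace(lo, hi, n_bins + 1)
    idx = np.clip(np.digitize(ppm, edges) - 1, 0, n_bins - 1)
    binned = np.bincount(idx, weights=intensity, minlength=n_bins)
    return binned / (np.linalg.norm(binned) + 1e-12)

# Synthetic spectrum with a single narrow peak at 3.2 ppm:
ppm = np.linspace(0, 10, 4096)
intensity = np.exp(-((ppm - 3.2) ** 2) / 0.001)
x = bin_spectrum(ppm, intensity)
print(x.shape, int(np.argmax(x)))   # (256,) with the peak near bin 81
```

Because every sample maps to the same 256-bin grid, spectra from different instruments or acquisitions become directly comparable CNN inputs, and saliency methods such as Grad-CAM can then point back to the ppm regions (metabolites) driving a classification.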
Collapse
Affiliation(s)
- Byung Hoon Yun
- Department of Chemistry, Chung-Ang University, Seoul 06974, South Korea.
| | - Hyo-Yeon Yu
- Department of Chemistry, Chung-Ang University, Seoul 06974, South Korea.
| | - Hyeongmin Kim
- Department of Chemistry, Chung-Ang University, Seoul 06974, South Korea.
| | - Sangki Myoung
- Department of Chemistry, Chung-Ang University, Seoul 06974, South Korea.
| | - Neulhwi Yeo
- Department of Chemistry, Chung-Ang University, Seoul 06974, South Korea.
| | - Jongwon Choi
- Department of Advanced Imaging, Chung-Ang University, Seoul 06974, South Korea.
| | - Hyang Sook Chun
- Department of Food Science & Technology, Chung-Ang University, Anseong 17546, South Korea.
| | - Hyeonjin Kim
- Department of Medical Sciences, Seoul National University, Seoul 03080, South Korea; Department of Radiology, Seoul National University Hospital, Seoul 03080, South Korea.
| | - Sangdoo Ahn
- Department of Chemistry, Chung-Ang University, Seoul 06974, South Korea.
| |
Collapse
|