151
Pannipulath Venugopal V, Babu Saheer L, Maktabdar Oghaz M. COVID-19 lateral flow test image classification using deep CNN and StyleGAN2. Front Artif Intell 2024;6:1235204. PMID: 38348096; PMCID: PMC10860423; DOI: 10.3389/frai.2023.1235204. Received 06/05/2023; accepted 12/28/2023.
Abstract
Introduction Artificial intelligence (AI) in healthcare can enhance clinical workflows and diagnoses, particularly in large-scale operations like COVID-19 mass testing. This study presents a deep Convolutional Neural Network (CNN) model for automated classification of COVID-19 rapid antigen test device (RATD) images. Methods To address the absence of a RATD image dataset, we crowdsourced 900 real-world images focusing on positive and negative cases. Rigorous data augmentation and StyleGAN2-ADA-generated simulated images were used to overcome dataset limitations and class imbalances. Results The best CNN model achieved a 93% validation accuracy. Test accuracies were 88% for simulated datasets and 82% for real datasets. Augmenting training with simulated images did not significantly improve performance on real-world test images but did enhance performance on simulated test images. Discussion The findings highlight the potential of the developed model to expedite COVID-19 testing and facilitate large-scale testing and tracking systems. The study also underscores the challenges in designing and developing such models, emphasizing the importance of addressing dataset limitations and class imbalances. Conclusion This research contributes to the deployment of large-scale testing and tracking systems, offering insights into the potential applications of AI in mitigating outbreaks similar to COVID-19. Future work could focus on refining the model and exploring its adaptability to other healthcare scenarios.
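The class imbalance mentioned in the Methods is what motivated the StyleGAN2-ADA augmentation. As a much simpler point of comparison (a sketch, not the paper's method), a minority class can be rebalanced by naive oversampling with replacement; all data below are hypothetical:

```python
import random

def oversample_minority(samples, labels, seed=42):
    """Naively rebalance a labelled dataset by resampling each
    under-represented class with replacement until all classes
    match the size of the largest one."""
    rng = random.Random(seed)
    by_class = {}
    for s, y in zip(samples, labels):
        by_class.setdefault(y, []).append(s)
    target = max(len(v) for v in by_class.values())
    balanced = []
    for y, items in by_class.items():
        extra = [rng.choice(items) for _ in range(target - len(items))]
        balanced.extend((s, y) for s in items + extra)
    rng.shuffle(balanced)
    return balanced

# 6 negatives, 2 positives -> balanced to 6 and 6
data = oversample_minority(list("abcdefgh"), [0] * 6 + [1] * 2)
counts = {y: sum(1 for _, lab in data if lab == y) for y in (0, 1)}
```

Real augmentation (flips, crops, GAN-generated images) adds variety rather than exact duplicates, which is why the authors went further than this.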
Affiliation(s)
- Lakshmi Babu Saheer
- School of Computing and Information Science, Anglia Ruskin University, Cambridge, United Kingdom
152
Mostafa F, Chen M. Computational models for predicting liver toxicity in the deep learning era. Frontiers in Toxicology 2024;5:1340860. PMID: 38312894; PMCID: PMC10834666; DOI: 10.3389/ftox.2023.1340860. Received 11/19/2023; accepted 12/22/2023.
Abstract
Drug-induced liver injury (DILI) is a severe adverse reaction caused by drugs that may result in acute liver failure and even death. Many efforts have centered on mitigating the risks associated with potential DILI in humans. Among these, quantitative structure-activity relationship (QSAR) modelling has proven to be a valuable tool for early-stage hepatotoxicity screening; its advantages include requiring no physical substances and delivering results rapidly. Deep learning (DL) has made rapid advancements in recent years and has been used for developing QSAR models. This review discusses the use of DL in predicting DILI, focusing on the development of QSAR models that employ extensive chemical structure datasets alongside their corresponding DILI outcomes. We undertake a comprehensive evaluation of various DL methods, comparing them with traditional machine learning (ML) approaches, and explore the strengths and limitations of DL techniques with regard to interpretability, scalability, and generalization. Overall, our review underscores the potential of DL methodologies to enhance DILI prediction and provides insights into future avenues for developing predictive models to mitigate DILI risk in humans.
Affiliation(s)
- Fahad Mostafa
- Department of Mathematics and Statistics, Texas Tech University, Lubbock, TX, United States
- Division of Bioinformatics and Biostatistics, National Center for Toxicological Research, U.S. Food and Drug Administration, Jefferson, AR, United States
- Minjun Chen
- Division of Bioinformatics and Biostatistics, National Center for Toxicological Research, U.S. Food and Drug Administration, Jefferson, AR, United States
153
Raut P, Baldini G, Schöneck M, Caldeira L. Using a generative adversarial network to generate synthetic MRI images for multi-class automatic segmentation of brain tumors. Frontiers in Radiology 2024;3:1336902. PMID: 38304344; PMCID: PMC10830800; DOI: 10.3389/fradi.2023.1336902. Received 11/11/2023; accepted 12/28/2023.
Abstract
Challenging tasks such as lesion segmentation, classification, and analysis for the assessment of disease progression can be automated using deep learning (DL)-based algorithms. DL techniques such as 3D convolutional neural networks are trained on heterogeneous volumetric imaging data such as MRI, CT, and PET, among others. However, DL-based methods are usually only applicable when the desired number of inputs is present; in the absence of one of the required inputs, the method cannot be used. By implementing a generative adversarial network (GAN), we aim to apply multi-label automatic segmentation of brain tumors to synthetic images when not all inputs are present. The implemented GAN is based on the Pix2Pix architecture and has been extended to a 3D framework named Pix2PixNIfTI. For this study, 1,251 patients of the BraTS2021 dataset, comprising T1w, T2w, T1CE, and FLAIR sequences with corresponding multi-label segmentations, were used to train the Pix2PixNIfTI model to generate synthetic MRI images of all the image contrasts. The segmentation model, DeepMedic, was trained with five-fold cross-validation for brain tumor segmentation and tested using the original inputs as the gold standard. The trained segmentation models were then applied to synthetic images replacing a missing input, in combination with the other original images, to assess the efficacy of the generated images for multi-class segmentation. For multi-class segmentation using synthetic data or fewer inputs, the Dice scores were significantly reduced but remained in a similar range for the whole tumor compared with the original image segmentation (e.g., mean Dice for synthetic T2w prediction: NC, 0.74 ± 0.30; ED, 0.81 ± 0.15; CET, 0.84 ± 0.21; WT, 0.90 ± 0.08). Standard paired t-tests with multiple comparison correction were performed to assess the differences between all regions (p < 0.05). The study concludes that the use of Pix2PixNIfTI allows brain tumors to be segmented when one input image is missing.
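The significance testing described above (paired t-tests with multiple comparison correction) can be sketched directly; the Dice values here are invented for illustration, and the correction shown is Bonferroni, which is an assumption since the abstract does not name the correction used:

```python
import math

def paired_t_statistic(a, b):
    """Paired t statistic: t = mean(d) / (sd(d) / sqrt(n)), with d = a - b."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

def bonferroni(p_values, alpha=0.05):
    """Reject H0 for each comparison at the Bonferroni-adjusted level alpha/m."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# hypothetical per-case Dice: original inputs vs. one synthetic input
orig = [0.90, 0.88, 0.92, 0.91, 0.89]
synth = [0.85, 0.84, 0.88, 0.86, 0.83]
t = paired_t_statistic(orig, synth)
```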
Affiliation(s)
- P. Raut
- Department of Pediatric Pulmonology, Erasmus Medical Center, Rotterdam, Netherlands
- Department of Radiology & Nuclear Medicine, Erasmus Medical Center, Rotterdam, Netherlands
- Institute for Diagnostic and Interventional Radiology, University Hospital Cologne, Cologne, Germany
- G. Baldini
- Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen, Essen, Germany
- M. Schöneck
- Institute for Diagnostic and Interventional Radiology, University Hospital Cologne, Cologne, Germany
- L. Caldeira
- Institute for Diagnostic and Interventional Radiology, University Hospital Cologne, Cologne, Germany
154
Qattous H, Azzeh M, Ibrahim R, Abed Al-Ghafer I, Al Sorkhy M, Alkhateeb A. PaCMAP-embedded convolutional neural network for multi-omics data integration. Heliyon 2024;10:e23195. PMID: 38163104; PMCID: PMC10756978; DOI: 10.1016/j.heliyon.2023.e23195. Received 07/05/2023; revised 11/22/2023; accepted 11/29/2023.
Abstract
Aims Multi-omics data integration has emerged as a prominent avenue within the healthcare industry, presenting substantial potential for enhancing predictive models. The main motivation behind this study stems from the need to advance prognostic methodologies in cancer diagnosis, an area where precision is pivotal for effective clinical decision-making. In this context, the present study introduces a methodology that integrates copy number alteration (CNA), DNA methylation, and gene expression data. Methods The three omics layers were merged into a two-dimensional (2D) map using the PaCMAP dimensionality reduction technique. Using an RGB coloring scheme, a visual representation of the integration was produced from the values of the three omics of each sample. The colored 2D maps were then fed into a convolutional neural network (CNN) to predict the Gleason score. Results Our proposed model outperforms the state-of-the-art i-SOM-GSN model by integrating multi-omics data with the CNN architecture, achieving an accuracy of 98.89% and an AUC of 0.9996. Conclusion This study demonstrates the effectiveness of multi-omics data integration in predicting health outcomes. The proposed methodology, combining PaCMAP for dimensionality reduction, RGB coloring for visualization, and a CNN for prediction, offers a comprehensive framework for integrating heterogeneous omics data and improving predictive accuracy. These findings contribute to the advancement of personalized medicine and have the potential to aid clinical decision-making for prostate cancer patients.
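The omics-to-pixel encoding can be sketched as follows; the exact normalization used by the authors is not given in the abstract, so min-max scaling per omics layer is an assumption:

```python
def minmax(values):
    """Scale a list of numbers to [0, 1]; constant lists map to 0."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def omics_to_rgb(cna, methylation, expression):
    """Encode three per-sample omics values into one 8-bit RGB pixel
    each, after min-max scaling every omics layer across samples."""
    channels = [minmax(cna), minmax(methylation), minmax(expression)]
    return [tuple(int(round(255 * ch[i])) for ch in channels)
            for i in range(len(cna))]

# three hypothetical samples, one value per omics layer each
pixels = omics_to_rgb([0.0, 1.0, 2.0], [10, 20, 30], [5, 5, 10])
```

In the paper each sample contributes many features per omics layer, which PaCMAP first projects to 2D; this sketch only shows how three scaled values become one colored pixel.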
Affiliation(s)
- Hazem Qattous
- Software Engineering Department, Princess Sumaya University for Technology, Amman P.O. Box 1438, Jordan
- Mohammad Azzeh
- Data Science Department, Princess Sumaya University for Technology, Amman P.O. Box 1438, Jordan
- Rahmeh Ibrahim
- Computer Science Department, Princess Sumaya University for Technology, Amman P.O. Box 1438, Jordan
- Ibrahim Abed Al-Ghafer
- Data Science Department, Princess Sumaya University for Technology, Amman P.O. Box 1438, Jordan
- Mohammad Al Sorkhy
- Heritage College of Osteopathic Medicine, Ohio University, Cleveland, OH 44122, USA
- Abedalrhman Alkhateeb
- Computer Science Department, Lakehead University, 955 Oliver Rd, Thunder Bay, ON P7B 5E1, Canada
155
Mohammed A, Corzo G. Spatiotemporal convolutional long short-term memory for regional streamflow predictions. Journal of Environmental Management 2024;350:119585. PMID: 38016234; DOI: 10.1016/j.jenvman.2023.119585. Received 06/14/2023; revised 10/05/2023; accepted 11/06/2023.
Abstract
Rainfall-runoff (RR) modelling is a challenging task in hydrology, especially at the regional scale. This work presents an approach to simultaneously predict daily streamflow in 86 catchments across the US using a sequential CNN-LSTM deep learning architecture. The model effectively incorporates both spatial and temporal information, leveraging the CNN to encode spatial patterns and the LSTM to learn their temporal relations. For training, a year-long spatially distributed input with precipitation, maximum temperature, and minimum temperature for each day was used to predict one-day streamflow. The trained CNN-LSTM model was further fine-tuned for three local sub-clusters of the 86 stations to assess the significance of fine-tuning for model performance. After fine-tuning, the CNN-LSTM model exhibited strong predictive capabilities with a median Nash-Sutcliffe efficiency (NSE) of 0.62 over the test period; remarkably, 65% of the 86 stations achieved NSE values greater than 0.6. The model was also compared with different deep learning models trained in a similar setup (CNN, LSTM, ANN), as well as an LSTM model trained individually for each station using local data. The CNN-LSTM model outperformed all of the regionally trained models and achieved performance comparable to the local LSTM models. Fine-tuning improved the performance of all models during the test period. The results highlight the potential of the CNN-LSTM approach for regional RR modelling by effectively capturing the complex spatiotemporal patterns inherent in the RR process.
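The Nash-Sutcliffe efficiency used to evaluate the models has a simple closed form; a minimal implementation with toy streamflow values:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - sum((o - s)^2) / sum((o - obar)^2).
    1.0 is a perfect fit; 0.0 means the model is no better than always
    predicting the observed mean; negative values are worse than that."""
    obar = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - obar) ** 2 for o in observed)
    return 1.0 - num / den

obs = [3.0, 5.0, 9.0, 7.0, 6.0]          # toy daily streamflow
perfect = nse(obs, obs)                   # perfect prediction
mean_pred = nse(obs, [6.0] * 5)           # always predict the mean
```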
Affiliation(s)
- Abdalla Mohammed
- Hydroinformatics Department, IHE Delft Institute for Water Education, Westvest 7, 2611 AX, Delft, Netherlands; School of Geography and the Environment, University of Oxford, Oxford, UK
- Gerald Corzo
- Hydroinformatics Department, IHE Delft Institute for Water Education, Westvest 7, 2611 AX, Delft, Netherlands
156
Park JA, Kim D, Yang S, Kang JH, Kim JE, Huh KH, Lee SS, Yi WJ, Heo MS. Automatic detection of posterior superior alveolar artery in dental cone-beam CT images using a deeply supervised multi-scale 3D network. Dentomaxillofac Radiol 2024;53:22-31. PMID: 38214942; PMCID: PMC11003607; DOI: 10.1093/dmfr/twad002. Received 07/24/2023; revised 09/15/2023; accepted 10/18/2023.
Abstract
OBJECTIVES This study aimed to develop a robust and accurate deep learning network for detecting the posterior superior alveolar artery (PSAA) in dental cone-beam CT (CBCT) images, focusing on precise localization of the centre pixel as a critical centreline pixel. METHODS PSAA locations were manually labelled on dental CBCT data from 150 subjects. The left maxillary sinus images were horizontally flipped, yielding 300 datasets in total. Six deep learning networks were trained: 3D U-Net, deeply supervised 3D U-Net (3D U-Net DS), multi-scale deeply supervised 3D U-Net (3D U-Net MSDS), 3D Attention U-Net, 3D V-Net, and 3D Dense U-Net. Performance in predicting the centre pixel of the PSAA was assessed using mean absolute error (MAE), mean radial error (MRE), and successful detection rate (SDR). RESULTS The 3D U-Net MSDS achieved the best prediction performance among the tested networks, with an MAE of 0.696 ± 1.552 mm and an MRE of 1.101 ± 2.270 mm; the 3D U-Net showed the lowest performance. The 3D U-Net MSDS demonstrated an SDR of 95% within a 2 mm MAE, significantly higher than the other networks, which achieved detection rates of over 80%. CONCLUSIONS This study presents a robust deep learning network for accurate PSAA detection in dental CBCT images, emphasizing precise centre pixel localization. The method achieves high accuracy in locating small vessels, such as the PSAA, and has the potential to enhance detection accuracy and efficiency, thus impacting oral and maxillofacial surgery planning and decision-making.
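The reported metrics reduce to Euclidean distances between predicted and labelled centre points; a minimal sketch with hypothetical coordinates in millimetres:

```python
import math

def radial_error(pred, true):
    """Euclidean distance between a predicted and a labelled 3D point;
    the MRE is the mean of this value over all cases."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)))

def detection_rate(preds, trues, tol_mm=2.0):
    """Fraction of cases whose radial error is within tol_mm (the SDR)."""
    errs = [radial_error(p, t) for p, t in zip(preds, trues)]
    return sum(e <= tol_mm for e in errs) / len(errs)

# three hypothetical cases; true centre at the origin for simplicity
preds = [(1.0, 0.0, 0.0), (0.0, 3.0, 0.0), (0.0, 0.0, 1.5)]
trues = [(0.0, 0.0, 0.0)] * 3
rate = detection_rate(preds, trues)  # errors 1.0, 3.0, 1.5 -> 2 of 3 within 2 mm
```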
Affiliation(s)
- Jae-An Park
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- DaEl Kim
- Interdisciplinary Program in Bioengineering, Graduate School of Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea
- Su Yang
- Department of Applied Bioengineering, Graduate School of Convergence Science and Technology, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea
- Ju-Hee Kang
- Department of Oral and Maxillofacial Radiology, Seoul National University Dental Hospital, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Jo-Eun Kim
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Kyung-Hoe Huh
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Sam-Sun Lee
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Won-Jin Yi
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
- Min-Suk Heo
- Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, 101 Daehak-ro, Jongno-gu, Seoul, 03080, South Korea
157
Sadr S, Rokhshad R, Daghighi Y, Golkar M, Tolooie Kheybari F, Gorjinejad F, Mataji Kojori A, Rahimirad P, Shobeiri P, Mahdian M, Mohammad-Rahimi H. Deep learning for tooth identification and numbering on dental radiography: a systematic review and meta-analysis. Dentomaxillofac Radiol 2024;53:5-21. PMID: 38183164; PMCID: PMC11003608; DOI: 10.1093/dmfr/twad001. Received 09/07/2023; revised 10/03/2023; accepted 10/05/2023.
Abstract
OBJECTIVES Improved tools based on deep learning can be used to accurately number and identify teeth. This study aims to review the use of deep learning in tooth numbering and identification. METHODS An electronic search was performed through October 2023 on PubMed, Scopus, Cochrane, Google Scholar, IEEE, arXiv, and medRxiv. Studies that used deep learning models with segmentation, object detection, or classification tasks for teeth identification and numbering on human dental radiographs were included. For risk of bias assessment, included studies were critically analysed using the quality assessment of diagnostic accuracy studies (QUADAS-2) tool. MetaDiSc and STATA 17 (StataCorp LP, College Station, TX, USA) were used to generate plots for the meta-analysis, and pooled diagnostic odds ratios (DORs) were calculated. RESULTS The initial search yielded 1618 studies, of which 29 were eligible based on the inclusion criteria. Five studies were found to have low bias across all domains of the QUADAS-2 tool. Deep learning has been reported to have an accuracy range of 81.8%-99% in tooth identification and numbering and a precision range of 84.5%-99.94%. Furthermore, sensitivity was reported as 82.7%-98% and F1-scores ranged from 87% to 98%; sensitivity was also reported as 75.5%-98%, with specificity of 79.9%-99%. Only 6 studies found the deep learning model to be less than 90% accurate. For the pooled data set, the average DOR was 1612, sensitivity was 89%, specificity was 99%, and the area under the curve was 96%. CONCLUSION Deep learning models can successfully detect, identify, and number teeth on dental radiographs. Deep learning-powered tooth numbering systems can enhance complex automated processes, such as accurately reporting which teeth have caries, thus aiding clinicians in making informed decisions during clinical practice.
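The pooled diagnostic odds ratio reported above folds sensitivity and specificity into a single figure of discriminability; a minimal sketch with hypothetical 2 x 2 counts (not the pooled data of this review):

```python
def diagnostic_odds_ratio(tp, fp, fn, tn):
    """DOR = (TP/FN) / (FP/TN) = (TP * TN) / (FP * FN); values far
    above 1 mean the test separates positives from negatives well."""
    return (tp * tn) / (fp * fn)

def dor_from_rates(sensitivity, specificity):
    """Equivalent form in terms of sensitivity and specificity."""
    return (sensitivity / (1 - sensitivity)) / ((1 - specificity) / specificity)

# hypothetical confusion counts for one tooth-detection study
dor = diagnostic_odds_ratio(tp=90, fp=1, fn=10, tn=99)  # 891.0
```

Both forms agree: sensitivity 90/(90+10) = 0.9 and specificity 99/(99+1) = 0.99 give the same odds ratio.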
Affiliation(s)
- Soroush Sadr
- Department of Endodontics, School of Dentistry, Hamadan University of Medical Sciences, Hamadan 6517838636, Iran
- Rata Rokhshad
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin 10117, Germany
- Section of Endocrinology, Nutrition, and Diabetes, Department of Medicine, Boston University Medical Center, Boston, MA 02118, United States
- Yasaman Daghighi
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran 1983963113, Iran
- Mohsen Golkar
- Department of Oral and Maxillofacial Surgery, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran 4188794755, Iran
- Fateme Tolooie Kheybari
- Faculty of Dentistry, Tabriz Medical Sciences, Islamic Azad University, Tabriz 5166/15731, Iran
- Fatemeh Gorjinejad
- Faculty of Dentistry, Dental School of Islamic Azad University of Medical Sciences, Tehran 19395/1495, Iran
- Atousa Mataji Kojori
- Faculty of Dentistry, Dental School of Islamic Azad University of Medical Sciences, Tehran 19395/1495, Iran
- Parisa Rahimirad
- Student Research Committee, School of Dentistry, Guilan University of Medical Sciences, Rasht 4188794755, Iran
- Parnian Shobeiri
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, United States
- Mina Mahdian
- Department of Prosthodontics and Digital Technology, Stony Brook University School of Dental Medicine, New York, NY 11794, United States
- Hossein Mohammad-Rahimi
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin 10117, Germany
158
Xi Y, Chong H, Zhou Y, Zhu F, Yao Y, Wang G. Convolutional neural network for brachial plexus segmentation at the interscalene level. BMC Anesthesiol 2024;24:17. PMID: 38191333; PMCID: PMC10773123; DOI: 10.1186/s12871-024-02402-2. Received 09/10/2023; accepted 01/03/2024.
Abstract
BACKGROUND Regional anesthesia with ultrasound-guided brachial plexus block is widely used for patients undergoing shoulder and upper limb surgery, but needle misplacement can result in complications. The purpose of this study was to develop and validate a convolutional neural network (CNN) model for segmentation of the brachial plexus at the interscalene level. METHODS This prospective study included patients who underwent ultrasound-guided brachial plexus block in the Anesthesiology Department of Beijing Jishuitan Hospital between October 2019 and June 2022. A U-Net semantic segmentation model was developed to train the CNN to identify brachial plexus features in the ultrasound images. The degree of overlap between the predicted segmentation and the ground truth segmentation (manually drawn by experienced clinicians) was evaluated by calculating the Dice index and the Jaccard index. RESULTS The final analysis included 502 images from 127 patients aged 41 ± 14 years (72 men, 56.7%). The mean Dice index was 0.748 ± 0.190, extremely close to the threshold of 0.75 for good overlap between the predicted and ground truth segmentations. The Jaccard index was 0.630 ± 0.213, exceeding the threshold of 0.5 for good overlap. CONCLUSION The CNN performed well at segmenting the brachial plexus at the interscalene level. Further development could allow the CNN to be used for real-time identification of the brachial plexus during interscalene block administration. CLINICAL TRIAL REGISTRATION The trial was registered prior to patient enrollment at the Chinese Clinical Trial Registry (ChiCTR2200055591, https://www.chictr.org.cn/). The date of trial registration and patient enrollment was 14/01/2022.
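The Dice and Jaccard indices used here have simple set-overlap definitions; a minimal sketch over toy binary masks:

```python
def dice_and_jaccard(pred, truth):
    """Dice = 2|A∩B| / (|A| + |B|); Jaccard = |A∩B| / |A∪B|.
    pred and truth are flat binary masks of equal length."""
    inter = sum(p and t for p, t in zip(pred, truth))
    psum, tsum = sum(pred), sum(truth)
    dice = 2 * inter / (psum + tsum)
    jaccard = inter / (psum + tsum - inter)
    return dice, jaccard

# toy 6-pixel masks: 2 pixels agree, 1 false positive, 1 false negative
pred  = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 1, 0, 0]
dice, jaccard = dice_and_jaccard(pred, truth)  # 2/3 and 1/2
```

For any single pair of masks the two are linked by Jaccard = Dice / (2 - Dice); the study's mean Dice and mean Jaccard do not obey this identity exactly because they are averaged per image.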
Affiliation(s)
- Yang Xi
- Department of Pain Management, Beijing Jishuitan Hospital, Capital Medical University, Beijing, 100035, China
- Hao Chong
- Department of Pain Management, Beijing Jishuitan Hospital, Capital Medical University, Beijing, 100035, China
- Yan Zhou
- Department of Pain Management, Beijing Jishuitan Hospital, Capital Medical University, Beijing, 100035, China
- Feng Zhu
- Department of Anesthesiology, Beijing Jishuitan Hospital, Capital Medical University, Beijing, 100035, China
- Yuhang Yao
- Beijing AMIT Medical Science and Technology Co., Ltd., Beijing, 100000, China
- Geng Wang
- Department of Anesthesiology, Beijing Jishuitan Hospital, Capital Medical University, Beijing, 100035, China
159
Krokos G, Kotwal T, Malaih A, Barrington S, Jackson P, Hicks RJ, Marsden PK, Fischer BM. Evaluation of manual and automated approaches for segmentation and extraction of quantitative indices from [18F]FDG PET-CT images. Biomed Phys Eng Express 2024;10:025007. PMID: 38100790; PMCID: PMC10767880; DOI: 10.1088/2057-1976/ad160e. Received 09/08/2023; revised 11/28/2023; accepted 12/15/2023.
Abstract
Utilisation of whole organ volumes to extract anatomical and functional information from computed tomography (CT) and positron emission tomography (PET) images may provide key information for the treatment and follow-up of cancer patients. However, manual organ segmentation is laborious and time-consuming. In this study, a CT-based deep learning method and a multi-atlas method were evaluated for segmenting the liver and spleen on CT images to extract quantitative tracer information from Fluorine-18 fluorodeoxyglucose ([18F]FDG) PET images of 50 patients with advanced Hodgkin lymphoma (HL). Manual segmentation was used as the reference method. The two automatic methods were also compared with a manually defined volume of interest (VOI) within the organ, a technique commonly performed in clinical settings. Both automatic methods provided accurate CT segmentations, with the deep learning method outperforming the multi-atlas method: Dice coefficients of 0.93 ± 0.03 (mean ± standard deviation) in the liver and 0.87 ± 0.17 in the spleen, compared with 0.87 ± 0.05 (liver) and 0.78 ± 0.11 (spleen) for the multi-atlas. Similarly, for the mean standardized uptake value (SUVmean), mean relative errors across patients of -3.2% for the liver and -3.4% for the spleen were found using the deep learning regions, while the corresponding errors for the multi-atlas method were -4.7% and -9.2%, respectively. For the maximum SUV (SUVmax), both methods resulted in overestimations of more than 20% due to the extension of organ boundaries into neighbouring, high-uptake regions. The conservative VOI method, which did not extend into neighbouring tissues, provided a more accurate SUVmax estimate. In conclusion, the automatic methods, and particularly the deep learning method, could be used to rapidly extract the SUVmean within the liver and spleen. However, activity from neighbouring organs and lesions can lead to large biases in SUVmax, and the current practice of manually defining a volume of interest in the organ should be considered instead.
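The SUV behaviour described above, robust organ means but maxima inflated when the segmentation spills into hot neighbouring tissue, can be illustrated with toy voxel values:

```python
def suv_stats(voxels, mask):
    """SUVmean and SUVmax over the voxels selected by a binary mask."""
    inside = [v for v, m in zip(voxels, mask) if m]
    return sum(inside) / len(inside), max(inside)

def relative_error(measured, reference):
    """Signed relative error in percent, as used for the SUV comparison."""
    return 100.0 * (measured - reference) / reference

# hypothetical SUVs; the 9.0 voxel is a hot lesion next to the organ
voxels = [2.0, 3.0, 4.0, 9.0, 1.0]
organ_mask = [1, 1, 1, 0, 0]               # correct organ segmentation
mean_suv, max_suv = suv_stats(voxels, organ_mask)
# a segmentation whose boundary leaks one voxel into the hot lesion
_, leaky_max = suv_stats(voxels, [1, 1, 1, 1, 0])
```

A single leaked voxel barely moves the mean but replaces SUVmax entirely, which is the bias the authors describe.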
Affiliation(s)
- Georgios Krokos
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Tejas Kotwal
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Afnan Malaih
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Sally Barrington
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Rodney J Hicks
- Department of Medicine, St Vincent’s Hospital Medical School, The University of Melbourne, Australia
- Paul K Marsden
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Barbara Malene Fischer
- School of Biomedical Engineering and Imaging Sciences, King’s College London, London, United Kingdom
- Department of Clinical Physiology and Nuclear Medicine, Rigshospitalet, Copenhagen, Denmark
- Department of Clinical Medicine, University of Copenhagen, Denmark
160
Dashti M, Londono J, Ghasemi S, Tabatabaei S, Hashemi S, Baghaei K, Palma PJ, Khurshid Z. Evaluation of accuracy of deep learning and conventional neural network algorithms in detection of dental implant type using intraoral radiographic images: A systematic review and meta-analysis. J Prosthet Dent 2024:S0022-3913(23)00812-0. PMID: 38176985; DOI: 10.1016/j.prosdent.2023.11.030. Received 08/16/2023; revised 11/23/2023; accepted 11/28/2023.
Abstract
STATEMENT OF PROBLEM With the growing importance of implant brand detection in clinical practice, the accuracy of machine learning algorithms in implant brand detection has become a subject of research interest. Recent studies have shown promising results for the use of machine learning in implant brand detection. However, despite these promising findings, a comprehensive evaluation of the accuracy of machine learning in implant brand detection is needed. PURPOSE The purpose of this systematic review and meta-analysis was to assess the accuracy, sensitivity, and specificity of deep learning algorithms in implant brand detection using 2-dimensional images such as periapical or panoramic radiographs. MATERIAL AND METHODS Electronic searches were conducted in the PubMed, Embase, Scopus, Scopus Secondary, and Web of Science databases. Studies that met the inclusion criteria were assessed for quality using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. Meta-analyses were performed using a random-effects model to estimate the pooled performance measures and 95% confidence intervals (CIs) using STATA v.17. RESULTS Thirteen studies were selected for the systematic review, and 3 were used in the meta-analysis. The meta-analysis found that the overall accuracy of CNN algorithms in detecting dental implants in radiographic images was 95.63%, with a sensitivity of 94.55% and a specificity of 97.91%. The highest reported accuracy was 99.08% for the CNN Multitask ResNet152 algorithm, and sensitivity and specificity were 100.00% and 98.70%, respectively, for the deep CNN (Neuro-T version 2.0.1) algorithm with the Straumann SLActive BLT implant brand. All studies had a low risk of bias. CONCLUSIONS The highest accuracy and sensitivity were reported in studies using the CNN Multitask ResNet152 and deep CNN (Neuro-T version 2.0.1) algorithms.
Collapse
Affiliation(s)
- Mahmood Dashti
- Researcher, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
| | - Jimmy Londono
- Professor and Director of the Prosthodontics Residency Program and the Ronald Goldstein Center for Esthetics and Implant Dentistry, The Dental College of Georgia at Augusta University, Augusta, GA
| | - Shohreh Ghasemi
- Graduate Student, MSc of Trauma and Craniofacial Reconstrution, Faculty of Medicine and Dentistry, Queen Mary College, London, England
| | | | - Sara Hashemi
- Graduate student, Faculty of Dental Medicine and Oral Health Sciences, McGill University, Montreal, Quebec, Canada
| | - Kimia Baghaei
- Researcher, Dental Students' Research Committee, School of Dentistry, Isfahan University of Medical Sciences, Isfahan, Iran
| | - Paulo J Palma
- Researcher, Center for Innovation and Research in Oral Sciences (CIROS), Faculty of Medicine, University of Coimbra, Coimbra, Portugal; and Professor, Institute of Endodontics, Faculty of Medicine, University of Coimbra, Coimbra, Portugal.
| | - Zohaib Khurshid
- Lecturer, Prosthodontics, Department of Prosthodontics and Dental Implantology, King Faisal University, Al-Ahsa, Saudi Arabia; and Professor, Center of Excellence for Regenerative Dentistry, Department of Anatomy, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand
161
Habibdoust A, Seifaddini M, Tatar M, Araz OM, Wilson FA. Predicting COVID-19 new cases in California with Google Trends data and a machine learning approach. Inform Health Soc Care 2024; 49:56-72. [PMID: 38353707] [DOI: 10.1080/17538157.2024.2315246] [Indexed: 04/10/2024]
Abstract
BACKGROUND Google Trends data can be a valuable source of information for health-related issues such as predicting infectious disease trends. OBJECTIVES To evaluate the accuracy of predicting COVID-19 new cases in California using Google Trends data, we develop and use a group method of data handling (GMDH)-type neural network model and compare its performance with a long short-term memory (LSTM) model. METHODS We predicted COVID-19 new cases using Google query data over three periods. Our first period covered March 1, 2020, to July 31, 2020, including the first peak of infection. We also estimated a model from October 1, 2020, to January 7, 2021, including the second wave of COVID-19 and avoiding possible biases from public interest in searching about the new pandemic. In addition, we extended our forecasting period from May 20, 2020, to January 31, 2021, to cover an extended period of time. RESULTS Our findings show that Google relative search volume (RSV) can be used to accurately predict new COVID-19 cases. We find that among our Google relative search volume terms, "Fever," "COVID Testing," "Signs of COVID," "COVID Treatment," and "Shortness of Breath" increase model predictive accuracy. CONCLUSIONS Our findings highlight the value of using data sources providing near real-time data, e.g., Google Trends, to detect trends in COVID-19 cases, in order to supplement and extend existing epidemiological models.
Affiliation(s)
- Amir Habibdoust
- Institute for Data Science and Informatics, University of Missouri, Columbia, Missouri, USA
- Moosa Tatar
- Department of Pharmaceutical Health Outcomes and Policy, University of Houston College of Pharmacy, Houston, Texas, USA
- Ozgur M Araz
- College of Business, University of Nebraska- Lincoln, Lincoln, Nebraska, USA
- Fernando A Wilson
- Matheson Center for Health Care Studies, University of Utah, Salt Lake City, Utah, USA
- Department of Population Health Sciences, University of Utah, Salt Lake City, Utah, USA
- Department of Economics, University of Utah, Salt Lake City, Utah, USA
162
Ueda T, Yamashita K, Kawazoe R, Sayawaki Y, Morisawa Y, Kamezaki R, Ikeda R, Shiraishi S, Uchiyama Y, Ito S. Feasibility of direct brain 18F-fluorodeoxyglucose-positron emission tomography attenuation and high-resolution correction methods using deep learning. Asia Ocean J Nucl Med Biol 2024; 12:108-119. [PMID: 39050241] [PMCID: PMC11263769] [DOI: 10.22038/aojnmb.2024.74875.1522] [Received: 09/12/2023] [Revised: 11/24/2023] [Accepted: 01/11/2024] [Indexed: 07/27/2024]
Abstract
Objectives To develop the following three attenuation correction (AC) methods for brain 18F-fluorodeoxyglucose-positron emission tomography (PET), using deep learning, and to ascertain their precision levels: (i) indirect method; (ii) direct method; and (iii) direct and high-resolution correction (direct+HRC) method. Methods We included 53 patients who underwent cranial magnetic resonance imaging (MRI) and computed tomography (CT) and 27 patients who underwent cranial MRI, CT, and PET. After fusion of the magnetic resonance, CT, and PET images, resampling was performed to standardize the field of view and matrix size and prepare the data set. In the indirect method, synthetic CT (SCT) images were generated, whereas in the direct and direct+HRC methods, a U-net structure was used to generate AC images. In the indirect method, attenuation correction was performed using SCT images generated from MRI findings using U-net instead of CT images. In the direct and direct+HRC methods, AC images were generated directly from non-AC images using U-net, followed by image evaluation. The precision levels of AC images generated using the indirect and direct methods were compared based on the normalized mean squared error (NMSE) and structural similarity (SSIM). Results Visual inspection revealed no difference between the AC images prepared using CT-based attenuation correction and those prepared using the three methods. The NMSE increased in the order indirect, direct, and direct+HRC methods, with values of 0.281×10-3, 4.62×10-3, and 12.7×10-3, respectively. Moreover, the SSIM of the direct+HRC method was 0.975. Conclusion The direct+HRC method enables accurate attenuation without CT exposure and high-resolution correction without dedicated correction programs.
Affiliation(s)
- Tomohiro Ueda
- Graduate School of Health Sciences, Kumamoto University, Japan
- Retsu Kawazoe
- Graduate School of Health Sciences, Kumamoto University, Japan
- Yuta Sayawaki
- Graduate School of Health Sciences, Kumamoto University, Japan
- Ryosuke Kamezaki
- Department of Central Radiology, Kumamoto University Hospital, Japan
- Ryuji Ikeda
- Department of Central Radiology, Kumamoto University Hospital, Japan
- Shinya Shiraishi
- Department of Diagnostic Radiology, Faculty of Life Sciences, Kumamoto University, Japan
- Yoshikazu Uchiyama
- Department of Information and Communication Technology, Faculty of Engineering, University of Miyazaki, Japan
- Shigeki Ito
- Department of Medical Radiation Sciences, Faculty of Life Sciences, Kumamoto University, Japan
163
Mendes F, Mascarenhas M, Ribeiro T, Afonso J, Cardoso P, Martins M, Cardoso H, Andrade P, Ferreira JPS, Mascarenhas Saraiva M, Macedo G. Artificial Intelligence and Panendoscopy-Automatic Detection of Clinically Relevant Lesions in Multibrand Device-Assisted Enteroscopy. Cancers (Basel) 2024; 16:208. [PMID: 38201634] [PMCID: PMC10778030] [DOI: 10.3390/cancers16010208] [Received: 12/05/2023] [Revised: 12/27/2023] [Accepted: 12/28/2023] [Indexed: 01/12/2024]
Abstract
Device-assisted enteroscopy (DAE) is capable of evaluating the entire gastrointestinal tract, identifying multiple lesions. Nevertheless, DAE's diagnostic yield is suboptimal. Convolutional neural networks (CNN) are multi-layer architecture artificial intelligence models suitable for image analysis, but there is a lack of studies about their application in DAE. Our group aimed to develop a multidevice CNN for panendoscopic detection of clinically relevant lesions during DAE. In total, 338 exams performed in two specialized centers were retrospectively evaluated, with 152 single-balloon enteroscopies (Fujifilm®, Porto, Portugal), 172 double-balloon enteroscopies (Olympus®, Porto, Portugal) and 14 motorized spiral enteroscopies (Olympus®, Porto, Portugal); then, 40,655 images were divided in a training dataset (90% of the images, n = 36,599) and testing dataset (10% of the images, n = 4066) used to evaluate the model. The CNN's output was compared to an expert consensus classification. The model was evaluated by its sensitivity, specificity, positive (PPV) and negative predictive values (NPV), accuracy and area under the precision recall curve (AUC-PR). The CNN had an 88.9% sensitivity, 98.9% specificity, 95.8% PPV, 97.1% NPV, 96.8% accuracy and an AUC-PR of 0.97. Our group developed the first multidevice CNN for panendoscopic detection of clinically relevant lesions during DAE. The development of accurate deep learning models is of utmost importance for increasing the diagnostic yield of DAE-based panendoscopy.
Affiliation(s)
- Francisco Mendes
- Alameda Professor Hernâni Monteiro, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (F.M.); (T.R.); (P.C.); (M.M.); (P.A.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4050-345 Porto, Portugal
- Miguel Mascarenhas
- Alameda Professor Hernâni Monteiro, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (F.M.); (T.R.); (P.C.); (M.M.); (P.A.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4050-345 Porto, Portugal
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Tiago Ribeiro
- Alameda Professor Hernâni Monteiro, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (F.M.); (T.R.); (P.C.); (M.M.); (P.A.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4050-345 Porto, Portugal
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- João Afonso
- Alameda Professor Hernâni Monteiro, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (F.M.); (T.R.); (P.C.); (M.M.); (P.A.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4050-345 Porto, Portugal
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Pedro Cardoso
- Alameda Professor Hernâni Monteiro, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (F.M.); (T.R.); (P.C.); (M.M.); (P.A.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4050-345 Porto, Portugal
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Miguel Martins
- Alameda Professor Hernâni Monteiro, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (F.M.); (T.R.); (P.C.); (M.M.); (P.A.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4050-345 Porto, Portugal
- Hélder Cardoso
- Alameda Professor Hernâni Monteiro, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (F.M.); (T.R.); (P.C.); (M.M.); (P.A.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4050-345 Porto, Portugal
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Patrícia Andrade
- Alameda Professor Hernâni Monteiro, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (F.M.); (T.R.); (P.C.); (M.M.); (P.A.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4050-345 Porto, Portugal
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- João P. S. Ferreira
- Department of Mechanical Engineering, Faculty of Engineering, University of Porto, Rua Dr. Roberto Frias, 4200-465 Porto, Portugal;
- DigestAID—Digestive Artificial Intelligence Development, R. Alfredo Allen n°. 455/461, 4200-135 Porto, Portugal
- Guilherme Macedo
- Alameda Professor Hernâni Monteiro, Department of Gastroenterology, São João University Hospital, 4200-427 Porto, Portugal; (F.M.); (T.R.); (P.C.); (M.M.); (P.A.); (G.M.)
- WGO Gastroenterology and Hepatology Training Center, 4050-345 Porto, Portugal
- Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
164
Ashrafi A, Teimouri K, Aghazadeh F, Shayanfar A. Neural Network Models for Predicting Solubility and Metabolism Class of Drugs in the Biopharmaceutics Drug Disposition Classification System (BDDCS). Eur J Drug Metab Pharmacokinet 2024; 49:1-6. [PMID: 37864650] [DOI: 10.1007/s13318-023-00861-5] [Accepted: 09/26/2023] [Indexed: 10/23/2023]
Abstract
BACKGROUND AND OBJECTIVE The biopharmaceutics drug disposition classification system (BDDCS) categorizes drugs into four classes on the basis of their solubility and metabolism. This framework allows for the study of the pharmacokinetics of transporters and enzymatic metabolization on biopharmaceuticals, as well as drug-drug interactions in the body. The objective of the present study was to develop computational models, using neural networks together with structural parameters and physicochemical properties, to estimate the class of a drug in the BDDCS system. METHODS In this study, deep learning methods were utilized to explore the potential of artificial and convolutional neural networks (ANNs and CNNs) in predicting the BDDCS class of 721 substances. The structural parameters and physicochemical properties [Abraham solvation parameters, octanol-water partition coefficient (log P), distribution coefficient over the pH range 1-7.5 (log D), number of rotatable bonds, hydrogen bond acceptor count, and hydrogen bond donor count] were calculated with various software. These compounds were then split into a training set consisting of 602 molecules and a test set of 119 compounds to validate the models. RESULTS The results of this study showed that neural network models using applied parameters of the drug, i.e., log D and Abraham solvation parameters, are able to predict the class of solubility and metabolism in the BDDCS system with good accuracy. CONCLUSIONS Neural network models are well equipped to deal with the relations between the structural parameters and physicochemical properties of drugs and BDDCS classes. In addition, log D is a more suitable parameter compared with log P in predicting BDDCS class.
Affiliation(s)
- Aryan Ashrafi
- Faculty of Electrical and Computer Engineering, University of Tabriz, Tabriz, Iran
- Kiarash Teimouri
- Student Research Committee, Tabriz University of Medical Sciences, Tabriz, Iran
- Farnaz Aghazadeh
- Biotechnology Research Center, Tabriz University of Medical Sciences, Tabriz, Iran
- Ali Shayanfar
- Pharmaceutical Analysis Research Center, Tabriz University of Medical Sciences, Tabriz, Iran.
- Faculty of Pharmacy, Tabriz University of Medical Sciences, Golgasht St., Tabriz, 5166614766, East Azerbaijan, Iran.
165
Mese I, Altintas Mese C, Demirsoy U, Anik Y. Innovative advances in pediatric radiology: computed tomography reconstruction techniques, photon-counting detector computed tomography, and beyond. Pediatr Radiol 2024; 54:1-11. [PMID: 38041712] [DOI: 10.1007/s00247-023-05823-2] [Received: 10/06/2023] [Revised: 11/17/2023] [Accepted: 11/20/2023] [Indexed: 12/03/2023]
Abstract
In pediatric radiology, balancing diagnostic accuracy with reduced radiation exposure is paramount due to the heightened vulnerability of younger patients to radiation. Technological advancements in computed tomography (CT) reconstruction techniques, especially model-based iterative reconstruction and deep learning image reconstruction, have enabled significant reductions in radiation doses without compromising image quality. Deep learning image reconstruction, powered by deep learning algorithms, has demonstrated superiority over traditional techniques like filtered back projection, providing enhanced image quality, especially in pediatric head and cardiac CT scans. Photon-counting detector CT has emerged as another groundbreaking technology, allowing for high-resolution images while substantially reducing radiation doses, proving highly beneficial for pediatric patients requiring frequent imaging. Furthermore, cloud-based dose tracking software focuses on monitoring radiation exposure, ensuring adherence to safety standards. However, the deployment of these technologies presents challenges, including the need for large datasets, computational demands, and potential data privacy issues. This article provides a comprehensive exploration of these technological advancements, their clinical implications, and the ongoing efforts to enhance pediatric radiology's safety and effectiveness.
Affiliation(s)
- Ismail Mese
- Department of Radiology, Health Sciences University, Erenkoy Mental Health and Neurology Training and Research Hospital, 19 Mayis, Sinan Ercan Cd. No:23, Kadikoy, Istanbul, 34736, Turkey.
- Ceren Altintas Mese
- Department of Pediatrics, Haydarpasa Numune Training and Research Hospital, Istanbul, Turkey
- Ugur Demirsoy
- Department of Pediatric Oncology, Faculty of Medicine, Kocaeli University, Kocaeli, Turkey
- Yonca Anik
- Department of Pediatric Radiology, Faculty of Medicine, Kocaeli University, Kocaeli, Turkey
166
Jeon SM, Kim S, Lee KC. Deep Learning-based Assessment of Facial Asymmetry Using U-Net Deep Convolutional Neural Network Algorithm. J Craniofac Surg 2024; 35:133-136. [PMID: 37973054] [DOI: 10.1097/scs.0000000000009862] [Received: 08/21/2023] [Accepted: 10/09/2023] [Indexed: 11/19/2023]
Abstract
OBJECTIVES This study aimed to evaluate the diagnostic performance of a deep convolutional neural network (DCNN)-based computer-assisted diagnosis (CAD) system to detect facial asymmetry on posteroanterior (PA) cephalograms and compare the results of the DCNN with those made by the orthodontist. MATERIALS AND METHODS PA cephalograms of 1020 patients with orthodontics were used to train the DCNN-based CAD systems for autoassessment of facial asymmetry, the degree of menton deviation, and the coordinates of its regarding landmarks. Twenty-five PA cephalograms were used to test the performance of the DCNN in analyzing facial asymmetry. The diagnostic performance of the DCNN-based CAD system was assessed using independent t -tests and Bland-Altman plots. RESULTS Comparison between the DCNN-based CAD system and conventional analysis confirmed no significant differences. Bland-Altman plots showed good agreement for all the measurements. CONCLUSIONS The DCNN-based CAD system might offer a clinically acceptable diagnostic evaluation of facial asymmetry on PA cephalograms.
Affiliation(s)
- Seojeong Kim
- Korea Electronics Technology Institute, Seongnam, Korea
- Kyungmin Clara Lee
- Department of Orthodontics, School of Dentistry, Chonnam National University, Gwangju, Korea
167
Kanda T, Wakiya T, Ishido K, Kimura N, Nagase H, Yoshida E, Nakagawa J, Matsuzaka M, Niioka T, Sasaki Y, Hakamada K. Noninvasive Computed Tomography-Based Deep Learning Model Predicts In Vitro Chemosensitivity Assay Results in Pancreatic Cancer. Pancreas 2024; 53:e55-e61. [PMID: 38019604] [DOI: 10.1097/mpa.0000000000002270] [Indexed: 12/01/2023]
Abstract
OBJECTIVES We aimed to predict in vitro chemosensitivity assay results from computed tomography (CT) images by applying deep learning (DL) to optimize chemotherapy for pancreatic ductal adenocarcinoma (PDAC). MATERIALS AND METHODS Preoperative enhanced abdominal CT images and the histoculture drug response assay (HDRA) results were collected from 33 PDAC patients undergoing surgery. Deep learning was performed using CT images of both the HDRA-positive and HDRA-negative groups. We trimmed small patches from the entire tumor area. We established various prediction labels for HDRA results with 5-fluorouracil (FU), gemcitabine (GEM), and paclitaxel (PTX). We built a predictive model using a residual convolutional neural network and used 3-fold cross-validation. RESULTS Of the 33 patients, effective response to FU, GEM, and PTX by HDRA was observed in 19 (57.6%), 11 (33.3%), and 23 (88.5%) patients, respectively. The average accuracy and the area under the receiver operating characteristic curve (AUC) of the model for predicting the effective response to FU were 93.4% and 0.979, respectively. In the prediction of GEM, the models demonstrated high accuracy (92.8%) and AUC (0.969). Likewise, the model for predicting response to PTX had a high performance (accuracy, 95.9%; AUC, 0.979). CONCLUSIONS Our CT patch-based DL model exhibited high predictive performance in projecting HDRA results. Our study suggests that the DL approach could possibly provide a noninvasive means for the optimization of chemotherapy.
Affiliation(s)
- Taishu Kanda
- From the Department of Gastroenterological Surgery, Hirosaki University Graduate School of Medicine, Hirosaki City
- Taiichi Wakiya
- From the Department of Gastroenterological Surgery, Hirosaki University Graduate School of Medicine, Hirosaki City
- Keinosuke Ishido
- From the Department of Gastroenterological Surgery, Hirosaki University Graduate School of Medicine, Hirosaki City
- Norihisa Kimura
- From the Department of Gastroenterological Surgery, Hirosaki University Graduate School of Medicine, Hirosaki City
- Hayato Nagase
- From the Department of Gastroenterological Surgery, Hirosaki University Graduate School of Medicine, Hirosaki City
- Eri Yoshida
- From the Department of Gastroenterological Surgery, Hirosaki University Graduate School of Medicine, Hirosaki City
- Yoshihiro Sasaki
- Medical Informatics, Hirosaki University Hospital, Hirosaki, Japan
- Kenichi Hakamada
- From the Department of Gastroenterological Surgery, Hirosaki University Graduate School of Medicine, Hirosaki City
168
Kondamuri SR, Thadikemalla VSG, Suryanarayana G, Karthik C, Reddy VS, Sahithi VB, Anitha Y, Yogitha V, Valli PR. Chest CT Image based Lung Disease Classification - A Review. Curr Med Imaging 2024; 20:1-14. [PMID: 38389342] [DOI: 10.2174/0115734056248176230923143105] [Received: 03/01/2023] [Revised: 07/22/2023] [Accepted: 08/22/2023] [Indexed: 02/24/2024]
Abstract
Computed tomography (CT) scans are widely used to diagnose lung conditions due to their ability to provide a detailed overview of the body's respiratory system. Despite its popularity, visual examination of CT scan images can lead to misinterpretations that impede a timely diagnosis. Utilizing technology to evaluate images for disease detection is also a challenge. As a result, there is a significant demand for more advanced systems that can accurately classify lung diseases from CT scan images. In this work, we provide an extensive analysis of different approaches and their performances that can help young researchers to build more advanced systems. First, we briefly introduce diagnosis and treatment procedures for various lung diseases. Then, a brief description of existing methods used for the classification of lung diseases is presented. Later, an overview of the general procedures for lung disease classification using machine learning (ML) is provided. Furthermore, an overview of recent progress in ML-based classification of lung diseases is provided. Finally, existing challenges in ML techniques are presented. It is concluded that deep learning techniques have revolutionized the early identification of lung disorders. We expect that this work will equip medical professionals with the awareness they require in order to recognize and classify certain medical disorders.
Affiliation(s)
- Shri Ramtej Kondamuri
- Department of ECE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, 520007, India
- Gunnam Suryanarayana
- Department of ECE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, 520007, India
- Chandran Karthik
- Department of Robotics and Automation, Jyothi Engineering College, Thrissur, Kerala 679531, India
- Vanga Siva Reddy
- Department of ECE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, 520007, India
- V Bhuvana Sahithi
- Department of ECE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, 520007, India
- Y Anitha
- Department of ECE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, 520007, India
- V Yogitha
- Department of ECE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, 520007, India
- P Reshma Valli
- Department of ECE, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, 520007, India
169
Tomimatsu T, Yamashita K, Sakata T, Kamezaki R, Ikeda R, Shiraishi S, Uchiyama Y, Ito S. Development of an automated region-of-interest-setting method based on a deep neural network for brain perfusion single photon emission computed tomography quantification methods. Asia Ocean J Nucl Med Biol 2024; 12:120-130. [PMID: 39050240] [PMCID: PMC11263778] [DOI: 10.22038/aojnmb.2024.75375.1528] [Received: 10/11/2023] [Revised: 11/29/2023] [Accepted: 01/20/2024] [Indexed: 07/27/2024]
Abstract
Objectives A simple noninvasive microsphere (SIMS) method using 123I-IMP and an improved brain uptake ratio (IBUR) method using 99mTc-ECD for the quantitative measurement of regional cerebral blood flow have been recently reported. The input functions of these methods were determined from the administered dose, which was obtained by analyzing the time-activity curve of the pulmonary artery (PA) for the SIMS method and of the ascending aorta (AAo) for the IBUR method on dynamic chest images. If the PA and AAo regions of interest (ROIs) can be determined using deep convolutional neural networks (DCNNs) for segmentation, the accuracy of these ROI-setting methods can be improved through simple analytical operations to ensure repeatability and reproducibility. The purpose of this study was to develop new PA- and AAo-ROI setting methods using a DCNN (DCNN-ROI method). Methods A U-Net architecture based on convolutional neural networks was used to determine the PA and AAo candidate regions. Images of 290 patients who underwent 123I-IMP RI-angiography and 108 patients who underwent 99mTc-ECD RI-angiography were used. The PA- and AAo-ROI results for the DCNN-ROI method were compared to those obtained using manual methods. The counts for the input function on the PA- and AAo-ROIs were determined by integrating the area under the curve (AUC) counts of the time-activity curves of the PA- and AAo-ROIs, respectively. The effectiveness of the DCNN-ROI method was elucidated through a comparison of the integrated AUC counts of the DCNN ROIs and the manual ROIs. Results The coincidence ratio between the locations of the PA- and AAo-ROIs obtained using the DCNN method and those obtained using the manual method was 100%. Strong correlations were observed between the AUC counts obtained using the DCNN and manual methods. Conclusion New ROI-setting programs were developed using a DCNN to determine the input functions for the SIMS and IBUR methods. The accuracy of these methods is comparable to that of the manual method.
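The integration step described in this abstract — turning a region's time-activity curve into AUC counts for the input function — can be sketched with the trapezoidal rule. The sample times and counts below are invented for illustration, not data from the study:

```python
import numpy as np

# Hypothetical time-activity curve: sample times (s) and ROI counts.
# Illustrative values only, not patient data from the study.
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
counts = np.array([0.0, 120.0, 340.0, 280.0, 150.0, 60.0])

# Trapezoidal-rule integration of the curve yields the AUC counts
# used as the input function.
auc_counts = np.sum((counts[1:] + counts[:-1]) / 2.0 * np.diff(t))
```

The same one-liner applies to any monotone time grid; uneven sampling intervals are handled by `np.diff(t)`.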
Affiliation(s)
- Taeko Tomimatsu
- Graduate School of Health Sciences, Kumamoto University, Japan
- Takumi Sakata
- Graduate School of Health Sciences, Kumamoto University, Japan
- Ryosuke Kamezaki
- Department of Central Radiology, Kumamoto University Hospital, Japan
- Ryuji Ikeda
- Department of Central Radiology, Kumamoto University Hospital, Japan
- Shinya Shiraishi
- Department of Diagnostic Radiology, Faculty of Life Sciences, Kumamoto University, Japan
- Yoshikazu Uchiyama
- Department of Information and Communication Technology, Faculty of Engineering, University of Miyazaki, Japan
- Shigeki Ito
- Department of Medical Radiation Sciences, Faculty of Life Science, Kumamoto University, Japan
170
Neijzen D, Lunter G. Unsupervised learning for medical data: A review of probabilistic factorization methods. Stat Med 2023; 42:5541-5554. [PMID: 37850249] [DOI: 10.1002/sim.9924] [Received: 06/19/2023] [Accepted: 09/13/2023] [Indexed: 10/19/2023]
Abstract
We review popular unsupervised learning methods for the analysis of high-dimensional data encountered in, for example, genomics, medical imaging, cohort studies, and biobanks. We show that four commonly used methods, principal component analysis, K-means clustering, nonnegative matrix factorization, and latent Dirichlet allocation, can be written as probabilistic models underpinned by a low-rank matrix factorization. In addition to highlighting their similarities, this formulation clarifies the various assumptions and restrictions of each approach, which eases identifying the appropriate method for specific applications for applied medical researchers. We also touch upon the most important aspects of inference and model selection for the application of these methods to health data.
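As a toy illustration of the low-rank matrix factorization view this review takes (a sketch on random data, not code from the paper), one of the four methods it covers — nonnegative matrix factorization — can be fit with the classic Lee-Seung multiplicative updates:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((20, 10))            # toy nonnegative data matrix

k = 3                               # latent rank of the factorization
W = rng.random((20, k)) + 1e-3      # nonnegative factor matrices
H = rng.random((k, 10)) + 1e-3

# Lee-Seung multiplicative updates minimizing ||X - W H||_F^2;
# multiplying by nonnegative ratios keeps W and H nonnegative.
for _ in range(300):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-9)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-9)

rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

The same `X ≈ W H` template underlies PCA, K-means, and LDA as well; what differs is the probabilistic model and the constraints placed on the factors.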
Affiliation(s)
- Dorien Neijzen
- Department of Epidemiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Gerton Lunter
- Department of Epidemiology, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
- Weatherall Institute of Molecular Medicine, Oxford University, Oxford, UK
171
Hossain MM, Hossain MM, Arefin MB, Akhtar F, Blake J. Combining State-of-the-Art Pre-Trained Deep Learning Models: A Noble Approach for Skin Cancer Detection Using Max Voting Ensemble. Diagnostics (Basel) 2023; 14:89. [PMID: 38201399] [PMCID: PMC10795598] [DOI: 10.3390/diagnostics14010089] [Received: 10/03/2023] [Revised: 12/21/2023] [Accepted: 12/22/2023] [Indexed: 01/12/2024]
Abstract
Skin cancer poses a significant healthcare challenge, requiring precise and prompt diagnosis for effective treatment. While recent advances in deep learning have dramatically improved medical image analysis, including skin cancer classification, ensemble methods offer a pathway for further enhancing diagnostic accuracy. This study introduces an approach employing the Max Voting Ensemble Technique for robust skin cancer classification on the ISIC 2018: Task 1-2 dataset. We incorporate a range of cutting-edge, pre-trained deep neural networks, including MobileNetV2, AlexNet, VGG16, ResNet50, DenseNet201, DenseNet121, InceptionV3, ResNet50V2, InceptionResNetV2, and Xception. These models have been extensively trained on skin cancer datasets, achieving individual accuracies ranging from 77.20% to 91.90%. Our method leverages the synergistic capabilities of these models by combining their complementary features to further elevate classification performance. In our approach, input images undergo preprocessing for model compatibility, and the ensemble integrates the pre-trained models with their architectures and weights preserved. For each skin lesion image under examination, every model produces a prediction; these are then aggregated using the max voting ensemble technique to yield the final classification, with the majority-voted class serving as the conclusive prediction. Through comprehensive testing on a diverse dataset, our ensemble outperformed the individual models, attaining an accuracy of 93.18% and an AUC score of 0.9320, thus demonstrating superior diagnostic reliability and accuracy. We also evaluated the proposed method on the HAM10000 dataset to ensure its generalizability. Our ensemble method delivers a robust, reliable, and effective tool for the classification of skin cancer. By harnessing the power of advanced deep neural networks, we aim to assist healthcare professionals in achieving timely and accurate diagnoses, ultimately reducing mortality rates and enhancing patient outcomes.
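The max-voting step described above can be sketched in plain NumPy. This is a minimal illustration of the technique, not the authors' implementation; the `max_vote` helper and the toy predictions are hypothetical.

```python
import numpy as np

def max_vote(predictions: np.ndarray) -> np.ndarray:
    """Majority ("max") vote over hard class predictions.

    predictions: integer array of shape (n_models, n_samples), one row of
    predicted class labels per model. Ties resolve to the lowest class
    index (the np.bincount/argmax convention).
    """
    n_classes = int(predictions.max()) + 1
    # Count votes per class for every sample, then pick the most-voted class.
    counts = np.stack(
        [np.bincount(predictions[:, i], minlength=n_classes)
         for i in range(predictions.shape[1])],
        axis=1)                       # shape (n_classes, n_samples)
    return counts.argmax(axis=0)

# Three hypothetical classifiers voting on four lesion images
# (0 = benign, 1 = malignant):
preds = np.array([[1, 0, 1, 0],
                  [1, 1, 1, 0],
                  [0, 1, 1, 0]])
print(max_vote(preds))  # [1 1 1 0]
```

Because the vote operates on hard labels, the ensemble needs no retraining: each pre-trained model keeps its architecture and weights, and only the final labels are combined.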
Collapse
Affiliation(s)
- Md. Mamun Hossain
- Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
| | - Md. Moazzem Hossain
- Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
| | - Most. Binoee Arefin
- Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
| | - Fahima Akhtar
- Department of Computer Science and Engineering, Bangladesh Army University of Science and Technology, Saidpur 5310, Bangladesh
| | - John Blake
- School of Computer Science and Engineering, University of Aizu, Aizuwakamatsu 965-8580, Japan
| |
Collapse
|
172
|
Fernandes JRN, Teles AS, Fernandes TRS, Lima LDB, Balhara S, Gupta N, Teixeira S. Artificial Intelligence on Diagnostic Aid of Leprosy: A Systematic Literature Review. J Clin Med 2023; 13:180. [PMID: 38202187 PMCID: PMC10779723 DOI: 10.3390/jcm13010180] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2023] [Revised: 12/20/2023] [Accepted: 12/25/2023] [Indexed: 01/12/2024] Open
Abstract
Leprosy is a neglected tropical disease that can cause physical injury and mental disability. Diagnosis is primarily clinical but can be inconclusive due to the absence of initial symptoms and similarity to other dermatological diseases. Artificial intelligence (AI) techniques have been used in dermatology, assisting clinical procedures and diagnostics. In particular, AI-supported solutions have been proposed in the literature to aid in the diagnosis of leprosy, and this Systematic Literature Review (SLR) aims to characterize the state of the art. This SLR followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework and was conducted in the following databases: ACM Digital Library, IEEE Digital Library, ISI Web of Science, Scopus, and PubMed. Potentially relevant research articles were retrieved, and the researchers applied criteria to select the studies, assess their quality, and perform the data extraction process. In total, 1659 studies were retrieved, of which 21 were included in the review after selection. Most of the studies used images of skin lesions, classical machine learning algorithms, and multi-class classification tasks to develop models to diagnose dermatological diseases. Most of the reviewed articles did not target leprosy as the study's primary objective but rather the classification of different skin diseases (among them, leprosy). Although AI-supported leprosy diagnosis is constantly evolving, research in this area is still at an early stage, and further studies are required to make AI solutions mature enough to be translated into clinical practice. Expanding research efforts on leprosy diagnosis, coupled with the advocacy of open science in leveraging AI for diagnostic support, can yield robust and influential outcomes.
Collapse
Affiliation(s)
- Jacks Renan Neves Fernandes
- PhD Program in Biotechnology—Northeast Biotechnology Network, Federal University of Piauí, Teresina 64049-550, Brazil;
| | - Ariel Soares Teles
- Postgraduate Program in Biotechnology, Parnaíba Delta Federal University, Parnaíba 64202-020, Brazil; (T.R.S.F.); (L.D.B.L.); (S.T.)
- Federal Institute of Maranhão, Araioses 65570-000, Brazil
| | - Thayaná Ribeiro Silva Fernandes
- Postgraduate Program in Biotechnology, Parnaíba Delta Federal University, Parnaíba 64202-020, Brazil; (T.R.S.F.); (L.D.B.L.); (S.T.)
| | - Lucas Daniel Batista Lima
- Postgraduate Program in Biotechnology, Parnaíba Delta Federal University, Parnaíba 64202-020, Brazil; (T.R.S.F.); (L.D.B.L.); (S.T.)
| | - Surjeet Balhara
- Department of Electronics & Communication Engineering, Bharati Vidyapeeth’s College of Engineering, New Delhi 110063, India;
| | - Nishu Gupta
- Department of Electronic Systems, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, 2815 Gjøvik, Norway;
| | - Silmar Teixeira
- Postgraduate Program in Biotechnology, Parnaíba Delta Federal University, Parnaíba 64202-020, Brazil; (T.R.S.F.); (L.D.B.L.); (S.T.)
| |
Collapse
|
173
|
Kido A, Himoto Y, Kurata Y, Minamiguchi S, Nakamoto Y. Preoperative Imaging Evaluation of Endometrial Cancer in FIGO 2023. J Magn Reson Imaging 2023. [PMID: 38146775 DOI: 10.1002/jmri.29161] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Revised: 11/07/2023] [Accepted: 11/09/2023] [Indexed: 12/27/2023] Open
Abstract
The staging of endometrial cancer is based on the International Federation of Gynecology and Obstetrics (FIGO) staging system, which relies on the examination of surgical specimens and was revised in 2023, 14 years after its last revision in 2009. Molecular and histological classification has been incorporated into the new FIGO system, reflecting the biological behavior and prognosis of endometrial cancer. Nonetheless, the basic role of imaging modalities, including ultrasound, computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography, in the preoperative assessment of tumor extension is unchanged, as are the evaluation points in CT and MRI, apart from several points of local tumor extension. The field of radiology has also undergone remarkable advancement through the rapid progress of computational technology. The application of deep learning reconstruction techniques brings the benefits of shorter acquisition times or higher image quality. Radiomics, which extracts various quantitative features from images, is also expected to have potential for the quantitative prediction of risk factors such as histological type and lymphovascular space invasion, which is newly included in the new FIGO system. This article reviews preoperative imaging diagnosis under the new FIGO system and recent advances in imaging analysis and their clinical contributions in endometrial cancer. EVIDENCE LEVEL: 4 TECHNICAL EFFICACY: Stage 3.
Collapse
Affiliation(s)
- Aki Kido
- Department Radiology, Toyama University Hospital, Toyama, Japan
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Hospital, Kyoto, Japan
| | - Yuki Himoto
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Hospital, Kyoto, Japan
| | - Yasuhisa Kurata
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Hospital, Kyoto, Japan
| | | | - Yuji Nakamoto
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Hospital, Kyoto, Japan
| |
Collapse
|
174
|
Saikia MJ, Kuanar S, Mahapatra D, Faghani S. Multi-Modal Ensemble Deep Learning in Head and Neck Cancer HPV Sub-Typing. Bioengineering (Basel) 2023; 11:13. [PMID: 38247890 PMCID: PMC11154466 DOI: 10.3390/bioengineering11010013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2023] [Revised: 12/14/2023] [Accepted: 12/21/2023] [Indexed: 01/23/2024] Open
Abstract
Oropharyngeal Squamous Cell Carcinoma (OPSCC) is a common, heterogeneous form of head and neck cancer. Infection with human papillomavirus (HPV) has been identified as a major risk factor for OPSCC. Therefore, differentiating HPV-positive from HPV-negative cases in OPSCC patients is an essential diagnostic factor influencing future treatment decisions. In this study, we investigated the accuracy of a deep learning-based method for image interpretation that automatically detects the HPV status of OPSCC in routinely acquired Computed Tomography (CT) and Positron Emission Tomography (PET) images. We introduce a 3D CNN-based multi-modal feature fusion architecture for HPV status prediction in primary tumor lesions. The architecture is composed of an ensemble of CNN networks and merges image features in a softmax classification layer. The pipeline separately learns the intensity, contrast variation, shape, texture heterogeneity, and metabolic assessment from CT and PET tumor volume regions and fuses those multi-modal features for final HPV status classification. The precision, recall, and AUC scores of the proposed method are computed, and the results are compared with other existing models. The experimental results demonstrate that the multi-modal ensemble model with soft voting outperformed single-modality PET/CT, with an AUC of 0.76 and an F1 score of 0.746 on the publicly available TCGA and MAASTRO datasets. On the MAASTRO dataset, our model achieved an AUC score of 0.74 over primary tumor volumes of interest (VOIs). In the future, more extensive cohort validation may further improve diagnostic accuracy and allow this approach to provide a preliminary assessment before biopsy.
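Unlike hard max voting, soft voting averages the class probabilities produced by each branch before taking the argmax. The sketch below illustrates that fusion step only, under the assumption of two probability-emitting branches; the `soft_vote` helper and the toy numbers are ours, not the paper's code.

```python
import numpy as np

def soft_vote(prob_list, weights=None):
    """Soft-voting fusion: (optionally weighted) average of per-branch class
    probabilities, then argmax for the final label.

    prob_list: list of (n_samples, n_classes) probability arrays, e.g. one
    from a CT branch and one from a PET branch.
    """
    fused = np.average(np.stack(prob_list), axis=0, weights=weights)
    return fused.argmax(axis=1), fused

# Hypothetical HPV-status probabilities for two tumors
# (columns: HPV-negative, HPV-positive):
ct_probs = np.array([[0.7, 0.3], [0.3, 0.7]])
pet_probs = np.array([[0.4, 0.6], [0.2, 0.8]])
labels, fused = soft_vote([ct_probs, pet_probs])
print(labels)  # [0 1]
```

Soft voting retains each branch's confidence, so a modality that is very sure can outweigh one that is near-chance, which hard voting cannot express.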
Collapse
Affiliation(s)
- Manob Jyoti Saikia
- Electrical Engineering, University of North Florida, Jacksonville, FL 32224, USA
| | - Shiba Kuanar
- Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA; (S.K.); (S.F.)
| | - Dwarikanath Mahapatra
- Inception Institute of Artificial Intelligence, Abu Dhabi 127788, United Arab Emirates;
| | - Shahriar Faghani
- Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA; (S.K.); (S.F.)
| |
Collapse
|
175
|
Jekateryńczuk G, Piotrowski Z. A Survey of Sound Source Localization and Detection Methods and Their Applications. SENSORS (BASEL, SWITZERLAND) 2023; 24:68. [PMID: 38202930 PMCID: PMC10781166 DOI: 10.3390/s24010068] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/07/2023] [Revised: 12/19/2023] [Accepted: 12/20/2023] [Indexed: 01/12/2024]
Abstract
This study is a survey of sound source localization and detection methods. The study provides a detailed classification of the methods used in the fields of science mentioned above. It classifies sound source localization systems based on criteria found in the literature. Moreover, an analysis of classic methods based on the propagation model and methods based on machine learning and deep learning techniques has been carried out. Attention has been paid to providing the most detailed information on the possibility of using physical phenomena, mathematical relationships, and artificial intelligence to determine sound source localization. Additionally, the article underscores the significance of these methods within both military and civil contexts. The study culminates with a discussion of forthcoming trends in the realms of acoustic detection and localization. The primary objective of this research is to serve as a valuable resource for selecting the most suitable approach within this domain.
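Among the classic propagation-model methods the survey covers, a standard building block is estimating the time difference of arrival (TDOA) between two microphones with the generalized cross-correlation phase transform (GCC-PHAT). The sketch below is a textbook illustration, not taken from the survey; variable names and the toy signal are ours.

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """TDOA estimate via GCC-PHAT: whiten the cross-power spectrum so only
    phase (i.e. delay) information remains, then locate the correlation peak."""
    n = len(sig) + len(ref)
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    R /= np.abs(R) + 1e-12                  # the phase transform (PHAT)
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    # Re-center so index max_shift corresponds to zero lag.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = int(np.argmax(np.abs(cc))) - max_shift
    return shift / fs                       # delay in seconds

# A hypothetical two-microphone capture: mic 2 hears the same source
# 5 samples later than mic 1.
fs = 8000
rng = np.random.default_rng(0)
mic1 = rng.standard_normal(2048)
mic2 = np.concatenate((np.zeros(5), mic1[:-5]))
print(gcc_phat(mic2, mic1, fs) * fs)  # ≈ 5.0 samples of delay
```

Given TDOAs from several microphone pairs and the speed of sound, the source position is then recovered geometrically (e.g. by least-squares intersection of the resulting hyperbolae), which is where the propagation model enters.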
Collapse
|
176
|
Singh D, Mittal N, Verma S, Singh A, Siddiqui MH. Applications of some advanced sequencing, analytical, and computational approaches in medicinal plant research: a review. Mol Biol Rep 2023; 51:23. [PMID: 38117315 DOI: 10.1007/s11033-023-09057-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2023] [Accepted: 11/27/2023] [Indexed: 12/21/2023]
Abstract
Medicinal plants, which have long been employed as natural medicines, are an abundant source of potentially active chemicals. Exploring the genes responsible for producing these compounds has given new insights into medicinal plant research. Previously, the authentication of medicinal plants was done via DNA marker sequencing. With the advancement of sequencing technology, several new techniques like next-generation sequencing, single-molecule sequencing, and fourth-generation sequencing have emerged. These techniques cemented the role of molecular approaches for medicinal plants because all the genes involved in the biosynthesis of medicinal compound(s) can be identified through RNA-seq analysis. In several research insights, transcriptome data have also been used for the identification of biosynthesis pathways. miRNAs in several medicinal plants, their role in biosynthesis pathways, and their regulation of disease-causing genes have also been identified. Several research articles have also found in silico studies to be effective in identifying the inhibitory effect of medicinal plant-based compounds against viral genes. The use of advanced analytical methods like spectroscopy and chromatography in metabolite profiling of secondary metabolites has also been reported in several recent research findings. Further advancement in molecular and analytical methods will give new insight into studying traditionally important medicinal plants that are still unexplored.
Collapse
Affiliation(s)
- Dhananjay Singh
- Department of Biosciences, Integral University, Lucknow, Uttar Pradesh, 226026, India
| | - Nishu Mittal
- Institute of Biosciences and Technology, Shri Ramswaroop Memorial University, Barabanki, Uttar Pradesh, 225003, India
| | - Swati Verma
- College of Horticulture and Forestry Thunag, Dr. Y. S. Parmar University of Horticulture and Forestry, Nauni, Solan, Himachal Pradesh, 173230, India
| | - Anjali Singh
- Institute of Biosciences and Technology, Shri Ramswaroop Memorial University, Barabanki, Uttar Pradesh, 225003, India
| | | |
Collapse
|
177
|
Wei X, Cheng S, Chen R, Wang Z, Li Y. ANN deformation prediction model for deep foundation pit with considering the influence of rainfall. Sci Rep 2023; 13:22664. [PMID: 38114655 PMCID: PMC10730717 DOI: 10.1038/s41598-023-49579-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2023] [Accepted: 12/09/2023] [Indexed: 12/21/2023] Open
Abstract
Deep foundation pits involving complex soil-water-structure interactions are often at a high risk of failure under heavy rainfall, and predicted deformation is an important index for early risk warning. In this study, an ANN model is proposed based on the Wavelet Transform (WT), the Copula method, a Convolutional Neural Network (CNN), and a Long Short-Term Memory (LSTM) neural network. The total deformation is first decomposed into low- and high-frequency components with the WT. The CNN and LSTM are then used to predict the two components with rolling training and prediction. The input variables of the CNN and LSTM were determined and optimized based on Copula correlation analysis between the two components and different random variables, especially the rainfall. Finally, the predicted total deformation is obtained by adding the two predicted components. A deep foundation pit in Chengdu, China was taken as a case study, whose horizontal deformation curves at different measuring points show three types of development trend: unstable, less stable, and stable. The proposed ANN model predicts the deformations of the different development types with high accuracy from only a few input variables and can prompt risk warnings well in advance.
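The decomposition idea, splitting a monitored series into a slow trend (modeled by one network) and fast fluctuations (modeled by another), can be illustrated with a one-level Haar wavelet split in NumPy. This toy stand-in is not the paper's wavelet transform; the `haar_split` helper and the four-point series are ours.

```python
import numpy as np

def haar_split(x):
    """One-level Haar wavelet split of an even-length series into a
    low-frequency (approximation) and a high-frequency (detail) component,
    both upsampled back to the original length so that low + high == x."""
    x = np.asarray(x, dtype=float)
    pairs = x.reshape(-1, 2)
    approx = pairs.mean(axis=1)                   # low-frequency trend
    detail = (pairs[:, 0] - pairs[:, 1]) / 2.0    # high-frequency fluctuation
    low = np.repeat(approx, 2)
    high = np.stack([detail, -detail], axis=1).ravel()
    return low, high

# A toy deformation series (mm) measured at four time steps:
x = np.array([1.0, 3.0, 2.0, 6.0])
low, high = haar_split(x)
print(low + high)  # [1. 3. 2. 6.] -- the components sum back to the original
```

Because the two components sum exactly back to the original signal, separate predictions for each component can simply be added to obtain the predicted total deformation, as in the study's final step.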
Collapse
Affiliation(s)
- Xing Wei
- Department of Geotechnical Engineering, School of Civil Engineering, Southwest Jiaotong University, Chengdu, 610031, China.
| | - Shitao Cheng
- Sichuan Vocational and Technical College of Communications, Chengdu, 611130, China
| | - Rui Chen
- Department of Geotechnical Engineering, School of Civil Engineering, Southwest Jiaotong University, Chengdu, 610031, China
| | - Zijian Wang
- Sichuan Vocational and Technical College of Communications, Chengdu, 611130, China
| | - Yanjun Li
- Department of Geotechnical Engineering, School of Civil Engineering, Southwest Jiaotong University, Chengdu, 610031, China
| |
Collapse
|
178
|
Alhakbani N, Alghamdi M, Al-Nafjan A. Design and Development of an Imitation Detection System for Human Action Recognition Using Deep Learning. SENSORS (BASEL, SWITZERLAND) 2023; 23:9889. [PMID: 38139734 PMCID: PMC10747182 DOI: 10.3390/s23249889] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/11/2023] [Revised: 11/22/2023] [Accepted: 12/14/2023] [Indexed: 12/24/2023]
Abstract
Human action recognition (HAR) is a rapidly growing field with numerous applications in various domains. HAR involves the development of algorithms and techniques to automatically identify and classify human actions from video data. Accurate recognition of human actions has significant implications in fields such as surveillance and sports analysis and in the health care domain. This paper presents a study on the design and development of an imitation detection system using an HAR algorithm based on deep learning. This study explores the use of deep learning models, such as a single-frame convolutional neural network (CNN) and pretrained VGG-16, for the accurate classification of human actions. The proposed models were evaluated using a benchmark dataset, KTH. The performance of these models was compared with that of classical classifiers, including K-Nearest Neighbors, Support Vector Machine, and Random Forest. The results showed that the VGG-16 model achieved higher accuracy than the single-frame CNN, with a 98% accuracy rate.
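A single-frame CNN scores each video frame independently, so some aggregation rule is needed to produce one action label per clip. One common choice, shown here as a hedged sketch (the paper does not specify its aggregation; the `clip_label` helper and toy scores are ours), is to average the per-frame class probabilities.

```python
import numpy as np

def clip_label(frame_probs):
    """Aggregate per-frame class probabilities from a single-frame CNN into
    one clip-level prediction by averaging over frames, then taking argmax."""
    return int(np.asarray(frame_probs).mean(axis=0).argmax())

# Four frames scored over three hypothetical KTH-style classes
# (0 = walking, 1 = boxing, 2 = hand-waving):
probs = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.8, 0.1],
                  [0.4, 0.4, 0.2]])
print(clip_label(probs))  # 1
```

Averaging smooths out individual misclassified frames, which is one reason frame-level accuracy and clip-level accuracy can differ for the same backbone.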
Collapse
Affiliation(s)
- Noura Alhakbani
- Information Technology Department, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia; (N.A.); (M.A.)
| | - Maha Alghamdi
- Information Technology Department, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia; (N.A.); (M.A.)
| | - Abeer Al-Nafjan
- Computer Science Department, College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University, Riyadh 11432, Saudi Arabia
| |
Collapse
|
179
|
Smith CM, Weathers AL, Lewis SL. An overview of clinical machine learning applications in neurology. J Neurol Sci 2023; 455:122799. [PMID: 37979413 DOI: 10.1016/j.jns.2023.122799] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2023] [Revised: 10/26/2023] [Accepted: 11/12/2023] [Indexed: 11/20/2023]
Abstract
Machine learning techniques for clinical applications are evolving, and the potential impact this will have on clinical neurology is important to recognize. By providing a broad overview on this growing paradigm of clinical tools, this article aims to help healthcare professionals in neurology prepare to navigate both the opportunities and challenges brought on through continued advancements in machine learning. This narrative review first elaborates on how machine learning models are organized and implemented. Machine learning tools are then classified by clinical application, with examples of uses within neurology described in more detail. Finally, this article addresses limitations and considerations regarding clinical machine learning applications in neurology.
Collapse
Affiliation(s)
- Colin M Smith
- Lehigh Valley Fleming Neuroscience Institute, 1250 S Cedar Crest Blvd., Allentown, PA 18103, USA
| | - Allison L Weathers
- Cleveland Clinic Information Technology Division, 9500 Euclid Ave. Cleveland, OH 44195, USA
| | - Steven L Lewis
- Lehigh Valley Fleming Neuroscience Institute, 1250 S Cedar Crest Blvd., Allentown, PA 18103, USA.
| |
Collapse
|
180
|
Bolocan VO, Secareanu M, Sava E, Medar C, Manolescu LSC, Cătălin Rașcu AȘ, Costache MG, Radavoi GD, Dobran RA, Jinga V. Convolutional Neural Network Model for Segmentation and Classification of Clear Cell Renal Cell Carcinoma Based on Multiphase CT Images. J Imaging 2023; 9:280. [PMID: 38132698 PMCID: PMC10743786 DOI: 10.3390/jimaging9120280] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2023] [Revised: 12/08/2023] [Accepted: 12/12/2023] [Indexed: 12/23/2023] Open
Abstract
(1) Background: Computed tomography (CT) imaging challenges in diagnosing renal cell carcinoma (RCC) include distinguishing malignant from benign tissues and determining the likely subtype. The goal is to show the algorithm's ability to improve renal cell carcinoma identification and treatment, improving patient outcomes. (2) Methods: This study uses a Convolutional Neural Network built with the European DeepHealth toolkit's European Computer Vision Library (ECVL) and European Distributed Deep Learning Library (EDDL). Image segmentation utilized a U-Net architecture, and classification a ResNet101. The model's clinical efficiency was assessed using kidney and tumor Dice scores and the quality of the renal cell carcinoma classification. (3) Results: The raw dataset contains 457 healthy right kidneys, 456 healthy left kidneys, 76 pathological right kidneys, and 84 pathological left kidneys. Preparing the raw data for analysis was crucial to algorithm implementation. The proposed model achieved a kidney segmentation Dice score of 0.84 and a mean tumor segmentation Dice score of 0.675, and its renal cell carcinoma classification accuracy was 0.885. (4) Conclusion and key findings: The present study analyzed data from both healthy patients and patients with renal disease, with a particular emphasis on data processing. The method achieved a kidney segmentation score of 0.84 and a mean Dice score of 0.675 for tumor segmentation, and it performed well in classifying renal cell carcinoma with an accuracy of 0.885, results that indicate the technique has the potential to improve the diagnosis of kidney pathology.
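The Dice score used to evaluate the kidney and tumor segmentations is a standard overlap measure between a predicted mask and the ground truth. A minimal NumPy sketch (the `dice_score` helper and the toy 2x2 masks are ours, purely for illustration):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity of two binary masks: 2*|A intersect B| / (|A| + |B|).
    eps guards against division by zero when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy 2x2 masks: the prediction covers one extra pixel beyond ground truth.
pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
print(round(dice_score(pred, truth), 3))  # 0.667
```

A Dice score of 1.0 means perfect overlap; the reported 0.84 (kidney) versus 0.675 (tumor) gap reflects how much harder small, irregular tumor regions are to delineate than whole organs.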
Collapse
Affiliation(s)
- Vlad-Octavian Bolocan
- Department of Fundamental Sciences, Faculty of Midwifery and Nursing, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania; (V.-O.B.); (C.M.); (M.G.C.)
- Department of Clinical Laboratory of Radiology and Medical Imaging, Clinical Hospital “Prof. Dr. Theodor Burghele”, 050664 Bucharest, Romania; (M.S.); (E.S.)
| | - Mihaela Secareanu
- Department of Clinical Laboratory of Radiology and Medical Imaging, Clinical Hospital “Prof. Dr. Theodor Burghele”, 050664 Bucharest, Romania; (M.S.); (E.S.)
| | - Elena Sava
- Department of Clinical Laboratory of Radiology and Medical Imaging, Clinical Hospital “Prof. Dr. Theodor Burghele”, 050664 Bucharest, Romania; (M.S.); (E.S.)
| | - Cosmin Medar
- Department of Fundamental Sciences, Faculty of Midwifery and Nursing, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania; (V.-O.B.); (C.M.); (M.G.C.)
- Department of Clinical Laboratory of Radiology and Medical Imaging, Clinical Hospital “Prof. Dr. Theodor Burghele”, 050664 Bucharest, Romania; (M.S.); (E.S.)
| | - Loredana Sabina Cornelia Manolescu
- Department of Fundamental Sciences, Faculty of Midwifery and Nursing, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania; (V.-O.B.); (C.M.); (M.G.C.)
| | - Alexandru-Ștefan Cătălin Rașcu
- Department of Urology, Clinical Hospital “Prof. Dr. Theodor Burghele”, Faculty of Medicine, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania; (A.-Ș.C.R.); (G.D.R.); (V.J.)
- Department of Urology, Clinical Hospital “Prof. Dr. Theodor Burghele”, 050664 Bucharest, Romania
| | - Maria Glencora Costache
- Department of Fundamental Sciences, Faculty of Midwifery and Nursing, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania; (V.-O.B.); (C.M.); (M.G.C.)
| | - George Daniel Radavoi
- Department of Urology, Clinical Hospital “Prof. Dr. Theodor Burghele”, Faculty of Medicine, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania; (A.-Ș.C.R.); (G.D.R.); (V.J.)
- Department of Urology, Clinical Hospital “Prof. Dr. Theodor Burghele”, 050664 Bucharest, Romania
| | | | - Viorel Jinga
- Department of Urology, Clinical Hospital “Prof. Dr. Theodor Burghele”, Faculty of Medicine, University of Medicine and Pharmacy “Carol Davila”, 050474 Bucharest, Romania; (A.-Ș.C.R.); (G.D.R.); (V.J.)
- Department of Urology, Clinical Hospital “Prof. Dr. Theodor Burghele”, 050664 Bucharest, Romania
- Medical Sciences Section, Academy of Romanian Scientists, 050085 Bucharest, Romania
| |
Collapse
|
181
|
Endalie D, Haile G, Taye W. Deep learning-based idiomatic expression recognition for the Amharic language. PLoS One 2023; 18:e0295339. [PMID: 38096324 PMCID: PMC10720994 DOI: 10.1371/journal.pone.0295339] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2023] [Accepted: 11/20/2023] [Indexed: 12/17/2023] Open
Abstract
Idiomatic expressions are built into all languages and are common in ordinary conversation. Idioms are difficult to understand because their meaning cannot be deduced directly from the constituent words. Previous studies reported that idiomatic expressions affect many natural language processing tasks in the Amharic language. However, most natural language processing models used with Amharic, such as machine translation, semantic analysis, sentiment analysis, information retrieval, question answering, and next-word prediction, do not consider idiomatic expressions. As a result, in this paper we propose a convolutional neural network (CNN) with a FastText embedding model for detecting idioms in Amharic text. We collected 1700 idiomatic and 1600 non-idiomatic expressions from Amharic books to test the proposed model's performance, and the proposed model is evaluated using this dataset. We employed an 80/10/10 split to train, validate, and test the proposed idiom recognition model. The proposed model's learning accuracy across the training dataset is 98%, and the model achieves 80% accuracy on the testing dataset. We compared the proposed model to machine learning models like K-Nearest Neighbors (KNN), Support Vector Machine (SVM), and Random Forest classifiers. According to the experimental results, the proposed model produces promising results.
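At the heart of a text CNN over FastText embeddings is a 1-D convolution across the token sequence followed by ReLU and global max-pooling. The sketch below shows only that core operation in NumPy (not the authors' model; the `conv_maxpool` helper and toy embeddings are ours):

```python
import numpy as np

def conv_maxpool(embeddings, filters):
    """Core text-CNN operation: slide each filter over the token-embedding
    sequence, apply ReLU, then global max-pool to one feature per filter.

    embeddings: (seq_len, emb_dim) token vectors (e.g. from FastText);
    filters: list of (width, emb_dim) convolution kernels.
    """
    seq_len = embeddings.shape[0]
    feats = []
    for w in filters:
        width = w.shape[0]
        acts = [float(np.sum(embeddings[i:i + width] * w))
                for i in range(seq_len - width + 1)]
        feats.append(max(0.0, max(acts)))     # ReLU + global max-pooling
    return np.array(feats)

# Three toy 3-dimensional token embeddings and one width-2 filter:
emb = np.eye(3)
feats = conv_maxpool(emb, [np.ones((2, 3))])
print(feats)  # [2.]
```

Max-pooling keeps only each filter's strongest response, so a filter can act as a detector for a characteristic idiom-like n-gram pattern wherever it occurs in the sentence; the pooled feature vector then feeds a small classifier head.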
Collapse
Affiliation(s)
- Demeke Endalie
- Faculty of Computing and Informatics, Jimma Institute of Technology, Jimma, Ethiopia
| | - Getamesay Haile
- Faculty of Computing and Informatics, Jimma Institute of Technology, Jimma, Ethiopia
| | - Wondmagegn Taye
- Faculty of Civil and Environmental Engineering, Jimma Institute of Technology, Jimma, Ethiopia
| |
Collapse
|
182
|
Khan SH, Alahmadi TJ, Alsahfi T, Alsadhan AA, Mazroa AA, Alkahtani HK, Albanyan A, Sakr HA. COVID-19 infection analysis framework using novel boosted CNNs and radiological images. Sci Rep 2023; 13:21837. [PMID: 38071373 PMCID: PMC10710448 DOI: 10.1038/s41598-023-49218-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2023] [Accepted: 12/05/2023] [Indexed: 12/18/2023] Open
Abstract
COVID-19, a novel pathogen that emerged in late 2019, has the potential to cause pneumonia with unique variants upon infection. Hence, the development of efficient diagnostic systems is crucial for accurately identifying infected patients and effectively mitigating the spread of the disease. However, such systems face several challenges because of the limited availability of labeled data, distortion and complexity in image representation, and variations in contrast and texture. Therefore, a novel two-phase analysis framework has been developed to scrutinize the subtle irregularities associated with COVID-19 contamination. In the first phase, a new Convolutional Neural Network-based STM-BRNet is developed, which integrates Split-Transform-Merge (STM) blocks and feature map enrichment (FME) techniques. The STM block captures boundary and region-specific features essential for detecting COVID-19-infectious CT slices, and by incorporating the FME and Transfer Learning (TL) concepts into the STM blocks, multiple enhanced channels are generated to effectively capture minute variations in illumination and texture specific to COVID-19-infected images. In addition, residual multipath learning is used to improve the learning capacity of STM-BRNet and to progressively increase the feature representation through high-level boosting with TL. In the second phase of the analysis, the COVID-19 CT scans are processed using the newly developed SA-CB-BRSeg segmentation CNN to accurately delineate infection in the images. The SA-CB-BRSeg method combines smooth and heterogeneous operations in both the encoder and decoder, structured to effectively capture COVID-19 patterns including region homogeneity, texture variation, and borders.
Furthermore, the SA-CB-BRSeg model incorporates the novel concept of CB in the decoder, where additional channels are combined using TL to enhance the learning of low-contrast regions. The developed STM-BRNet and SA-CB-BRSeg models achieve impressive results, with an accuracy of 98.01%, recall of 98.12%, F-score of 98.11%, Dice similarity of 96.396%, and IOU of 98.85%. The proposed framework will alleviate the workload and enhance the radiologist's decision-making capacity in identifying COVID-19-infected regions and evaluating the severity stages of the disease.
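The Dice similarity and IOU figures quoted above measure the same mask overlap on different scales: for any pair of masks, Dice = 2*IoU / (1 + IoU). A minimal IoU sketch (the `iou` helper and the toy masks are ours):

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-union (Jaccard index) of two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union

# Toy 2x2 masks: one overlapping pixel, two pixels in the union.
pred = np.array([[1, 1], [0, 0]])
truth = np.array([[1, 0], [0, 0]])
j = iou(pred, truth)
print(j, 2 * j / (1 + j))  # IoU 0.5 corresponds to a Dice of 2/3
```

Because Dice is always greater than or equal to IoU for a non-trivial overlap, reporting both (as the abstract does) lets readers cross-check segmentation quality on either scale.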
Collapse
Affiliation(s)
- Saddam Hussain Khan
- Department of Computer Systems Engineering, University of Engineering and Applied Science, Swat, 19060, Pakistan
| | - Tahani Jaser Alahmadi
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia.
| | - Tariq Alsahfi
- Department of Information Systems and Technology, College of Computer Science and Engineering, University of Jeddah, Jeddah, Saudi Arabia
| | - Abeer Abdullah Alsadhan
- Computer Science Department, Applied College, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia.
| | - Alanoud Al Mazroa
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia
| | - Hend Khalid Alkahtani
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, 11671, Riyadh, Saudi Arabia
| | - Abdullah Albanyan
- College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
| | - Hesham A Sakr
- Nile Higher Institute for Engineering and Technology, Mansoura, Egypt
| |
Collapse
|
183
|
Knoedler L, Alfertshofer M, Simon S, Prantl L, Kehrer A, Hoch CC, Knoedler S, Lamby P. Diagnosing lagophthalmos using artificial intelligence. Sci Rep 2023; 13:21657. [PMID: 38066112 PMCID: PMC10709577 DOI: 10.1038/s41598-023-49006-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2023] [Accepted: 12/02/2023] [Indexed: 12/18/2023] Open
Abstract
Lagophthalmos is the incomplete closure of the eyelids, posing a risk of corneal ulceration and blindness. It is a common symptom of various pathologies. We aimed to program a convolutional neural network (CNN) to automate lagophthalmos diagnosis. From June 2019 to May 2021, prospective data acquisition was performed on 30 patients seen at the Department of Plastic, Hand, and Reconstructive Surgery at the University Hospital Regensburg, Germany (IRB reference number: 20-2081-101). In addition, comparative data were gathered from 10 healthy participants as a control group. The training set comprised 826 images, while the validation and testing sets consisted of 91 patient images each. Validation accuracy was 97.8% over 64 epochs. The model was trained for 17.3 min. For training and validation, average losses of 0.304 and 0.358 and final losses of 0.276 and 0.157 were noted, respectively. Testing accuracy was 93.41% with a loss of 0.221. This study proposes a novel application for rapid and reliable lagophthalmos diagnosis. Our CNN-based approach combines effective anti-overfitting strategies, short training times, and high accuracy. Ultimately, this tool carries high translational potential to streamline the physician's workflow and improve overall care for patients with lagophthalmos.
Affiliation(s)
- Leonard Knoedler: Department of Plastic, Hand and Reconstructive Surgery, University Hospital Regensburg, Franz-Josef-Strauss-Allee 11, 93053, Regensburg, Germany
- Michael Alfertshofer: Division of Hand, Plastic and Aesthetic Surgery, Ludwig-Maximilians-University Munich, Munich, Germany
- Lukas Prantl: Department of Plastic, Hand and Reconstructive Surgery, University Hospital Regensburg, Franz-Josef-Strauss-Allee 11, 93053, Regensburg, Germany
- Andreas Kehrer: Department of Plastic, Hand and Reconstructive Surgery, University Hospital Regensburg, Franz-Josef-Strauss-Allee 11, 93053, Regensburg, Germany
- Cosima C Hoch: Department of Otolaryngology, Head and Neck Surgery, School of Medicine, Technical University of Munich (TUM), 81675, Munich, Germany
- Samuel Knoedler: Department of Plastic, Hand and Reconstructive Surgery, University Hospital Regensburg, Franz-Josef-Strauss-Allee 11, 93053, Regensburg, Germany
- Philipp Lamby: Department of Plastic, Hand and Reconstructive Surgery, University Hospital Regensburg, Franz-Josef-Strauss-Allee 11, 93053, Regensburg, Germany
184
Wang M, Jiang H. PST-Radiomics: a PET/CT lymphoma classification method based on pseudo spatial-temporal radiomic features and structured atrous recurrent convolutional neural network. Phys Med Biol 2023; 68:235014. PMID: 37956448; DOI: 10.1088/1361-6560/ad0c0f.
Abstract
Objective. Existing radiomic methods tend to treat each isolated tumor as an inseparable whole when extracting radiomic features. In doing so, they may discard critical intra-tumor metabolic heterogeneity (ITMH) information that contributes to triggering tumor subtypes. To improve lymphoma classification performance, we propose a pseudo spatial-temporal radiomic method (PST-Radiomics) based on positron emission tomography/computed tomography (PET/CT). Approach. Specifically, to enable exploitation of ITMH, we first present a multi-threshold gross tumor volume sequence (GTVS). Next, we extract 1D radiomic features based on PET images and each volume in the GTVS and create a pseudo spatial-temporal feature sequence (PSTFS) tightly interwoven with ITMH. Then, we reshape the PSTFS into 2D pseudo spatial-temporal feature maps (PSTFM), whose columns are elements of the PSTFS. Finally, to learn from the PSTFM in an end-to-end manner, we build a lightweight pseudo spatial-temporal radiomic network (PSTR-Net), in which a structured atrous recurrent convolutional neural network serves as the PET branch to better exploit the strong local dependencies in the PSTFM, and a residual convolutional neural network is used as the CT branch to exploit conventional radiomic features extracted from CT volumes. Main results. We validate PST-Radiomics on a PET/CT lymphoma subtype classification task. Experimental results quantitatively demonstrate the superiority of PST-Radiomics over existing radiomic methods. Significance. Feature map visualization shows that our method performs complex feature selection while extracting hierarchical feature maps, which qualitatively demonstrates its superiority.
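The multi-threshold gross tumor volume sequence (GTVS) described above can be illustrated with a toy sketch (hypothetical thresholds and SUV values, not the authors' code): binarizing the PET map at each successive threshold yields a nested sequence of shrinking tumor masks, which is what encodes the intra-tumor heterogeneity.

```python
def gtv_sequence(suv_map, thresholds):
    """Binarize a PET SUV map at each threshold, yielding a nested
    sequence of gross-tumor-volume masks (higher threshold -> smaller mask)."""
    return [
        [[1 if v >= t else 0 for v in row] for row in suv_map]
        for t in sorted(thresholds)
    ]

# Toy 3x3 SUV map with arbitrary illustrative values.
suv = [[0.5, 2.0, 4.5],
       [1.0, 3.5, 5.0],
       [0.2, 1.5, 2.5]]
masks = gtv_sequence(suv, [1.0, 2.5, 4.0])
volumes = [sum(sum(row) for row in m) for m in masks]
print(volumes)  # mask volumes shrink as the threshold rises
```

Features extracted from each mask in turn form the pseudo "temporal" axis of the PSTFS.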
Affiliation(s)
- Meng Wang: Software College, Northeastern University, Shenyang 110819, People's Republic of China
- Huiyan Jiang: Software College, Northeastern University, Shenyang 110819, People's Republic of China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang 110819, People's Republic of China
185
Tagami M, Nishio M, Katsuyama-Yoshikawa A, Misawa N, Sakai A, Haruna Y, Azumi A, Honda S. Machine Learning Model with Texture Analysis for Automatic Classification of Histopathological Images of Ocular Adnexal Mucosa-associated Lymphoid Tissue Lymphoma of Two Different Origins. Curr Eye Res 2023; 48:1195-1202. PMID: 37566457; DOI: 10.1080/02713683.2023.2246696.
Abstract
PURPOSE The purpose of this study was to develop artificial intelligence algorithms that can distinguish between orbital and conjunctival mucosa-associated lymphoid tissue (MALT) lymphomas in pathological images. METHODS Tissue blocks with residual MALT lymphoma, together with data from histological and flow cytometric studies and molecular genetic analyses such as gene rearrangement, were procured for 129 patients treated between April 2008 and April 2020. We collected pathological hematoxylin and eosin-stained (HE) images of lymphoma from these patients and cropped 10 image patches at a resolution of 2048 × 2048 from the pathological images of each patient. A total of 990 images from 99 patients were used to create and evaluate machine-learning models. Each image patch, at three magnifications (×4, ×20, and ×40), underwent texture analysis to extract features, and seven machine-learning algorithms were then applied to the results to create models. Cross-validation on a patient-by-patient basis was used to create and evaluate the models, and 300 images from the remaining 30 cases were then used to evaluate the average accuracy rate. RESULTS Ten-fold cross-validation identified the support vector machine with a linear kernel as the best algorithm for discriminating between conjunctival and orbital MALT lymphomas, with an average accuracy under cross-validation of 85%. Among the ×4, ×20, and ×40 magnifications, the ×20 HE images were the most accurate in distinguishing orbital from conjunctival MALT lymphomas. CONCLUSION Artificial intelligence algorithms can successfully distinguish between HE images of orbital and conjunctival MALT lymphomas.
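The patient-by-patient cross-validation used above keeps all image patches from one patient in the same fold, so no patient appears in both training and validation. A minimal framework-free sketch of that grouped splitting (hypothetical patient IDs; the study's actual tooling is not specified here):

```python
def patient_folds(patient_ids, k):
    """Split unique patients (not individual patches) into k folds,
    then map each fold back to the indices of that patient's patches."""
    unique = sorted(set(patient_ids))
    patient_groups = [unique[i::k] for i in range(k)]
    return [
        [i for i, p in enumerate(patient_ids) if p in set(group)]
        for group in patient_groups
    ]

# 6 patches from 3 patients, 3 folds: each fold holds one whole patient,
# so patches from the same patient never straddle train and validation.
ids = ["A", "A", "B", "B", "C", "C"]
folds = patient_folds(ids, 3)
print(folds)
```

Libraries such as scikit-learn provide the same behavior via grouped cross-validators.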
Affiliation(s)
- Mizuki Tagami: Department of Ophthalmology and Visual Sciences, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan; Ophthalmology Department and Eye Center, Kobe Kaisei Hospital, Kobe, Japan
- Mizuho Nishio: Department of Radiology, Kobe University Graduate School of Medicine, Kobe, Japan
- Norihiko Misawa: Department of Ophthalmology and Visual Sciences, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Atsushi Sakai: Department of Ophthalmology and Visual Sciences, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Yusuke Haruna: Department of Ophthalmology and Visual Sciences, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Atsushi Azumi: Ophthalmology Department and Eye Center, Kobe Kaisei Hospital, Kobe, Japan
- Shigeru Honda: Department of Ophthalmology and Visual Sciences, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
186
Parvatikar PP, Patil S, Khaparkhuntikar K, Patil S, Singh PK, Sahana R, Kulkarni RV, Raghu AV. Artificial intelligence: Machine learning approach for screening large database and drug discovery. Antiviral Res 2023; 220:105740. PMID: 37935248; DOI: 10.1016/j.antiviral.2023.105740.
Abstract
Recent drug discovery research faces many difficulties, including the need to develop new drugs during disease outbreaks and drug resistance caused by rapidly accumulating mutations. Virtual screening is the most widely used method in computer-aided drug discovery, with a prominent ability to screen drug targets against large molecular databases. Recently, a number of web servers have been developed for quickly screening publicly accessible chemical databases. In a nutshell, deep learning algorithms and artificial neural networks have modernised the field. Machine learning and deep learning algorithms have been used across drug discovery processes, including peptide synthesis, structure-based virtual screening, ligand-based virtual screening, toxicity prediction, drug monitoring and release, pharmacophore modelling, quantitative structure-activity relationship modelling, drug repositioning, polypharmacology, and physicochemical activity. Although a wide variety of data-driven AI/ML tools is now available, the majority have so far been developed in the context of non-communicable diseases such as cancer, and a number of obstacles have prevented their translation to the discovery of treatments for infectious diseases. This review discusses various aspects of AI and ML in the virtual screening of large databases. With an emphasis on antivirals as well as other diseases, it offers a perspective on the advantages, drawbacks, and hazards of AI/ML techniques in the search for innovative treatments.
Affiliation(s)
- Prachi P Parvatikar: Department of Biotechnology, Allied Health Science, BLDE (Deemed-to-be University), Vijayapur 586103, Karnataka, India
- Sudha Patil: Department of Pharmaceutics, BLDEA's SSM College of Pharmacy and Research Centre, Vijayapur 586103, Karnataka, India
- Kedar Khaparkhuntikar: Department of Pharmaceutics, National Institute of Pharmaceutical Education and Research (NIPER), Hyderabad, Telangana, 500037, India
- Shruti Patil: Department of Biotechnology, Allied Health Science, BLDE (Deemed-to-be University), Vijayapur 586103, Karnataka, India
- Pankaj K Singh: Department of Pharmaceutics, National Institute of Pharmaceutical Education and Research (NIPER), Hyderabad, Telangana, 500037, India
- R Sahana: Department of Computer Science and Engineering, RV Institute of Technology and Management, 560076, Bengaluru, India
- Raghavendra V Kulkarni: Department of Biotechnology, Allied Health Science, BLDE (Deemed-to-be University), Vijayapur 586103, Karnataka, India; Department of Pharmaceutics, BLDEA's SSM College of Pharmacy and Research Centre, Vijayapur 586103, Karnataka, India
- Anjanapura V Raghu: Department of Science and Technology, BLDE (Deemed-to-be University), Vijayapur 586103, Karnataka, India
187
Yousif MAA, Ozturk M. Deep Learning-Based Classification of Epileptic Electroencephalography Signals Using a Concentrated Time-Frequency Approach. Int J Neural Syst 2023; 33:2350064. PMID: 37830300; DOI: 10.1142/s0129065723500648.
Abstract
ConceFT (concentration of frequency and time) is a new time-frequency (TF) analysis method that combines the multitaper technique with the synchrosqueezing transform (SST). This combination produces highly concentrated TF representations with approximately perfect time and frequency resolution. In this paper, we aim to show the TF representation performance and robustness of ConceFT by using it for the classification of epileptic electroencephalography (EEG) signals. We therefore present a signal classification algorithm that uses TF images obtained with ConceFT to feed a transfer learning structure. Epilepsy is a common neurological disorder from which millions of people suffer worldwide. Patients' daily lives are difficult because of the unpredictable timing of seizures. EEG signals monitoring the electrical activity of the brain can be used to detect approaching seizures and make it possible to warn the patient before the attack. GoogLeNet, a well-known deep learning model, was chosen to classify the TF images. Classification performance is directly related to the TF representation accuracy of ConceFT. The proposed method was tested in various classification scenarios and obtained accuracies between 95.83% and 99.58% for two- and three-class scenarios. These high accuracies show that ConceFT is a successful and promising TF analysis method for non-stationary biomedical signals.
Affiliation(s)
- Mosab A A Yousif: Department of Biomedical Engineering, Institute of Graduate Studies, Istanbul University-Cerrahpasa, Istanbul, Turkey; Department of Electronics Engineering, University of Gezira, Wad-Madani, Sudan
- Mahmut Ozturk: Department of Electrical and Electronics Engineering, Engineering Faculty, Istanbul University-Cerrahpasa, Istanbul, Turkey
188
Teoh YX, Othmani A, Lai KW, Goh SL, Usman J. Stratifying knee osteoarthritis features through multitask deep hybrid learning: Data from the osteoarthritis initiative. Comput Methods Programs Biomed 2023; 242:107807. PMID: 37778138; DOI: 10.1016/j.cmpb.2023.107807.
Abstract
BACKGROUND AND OBJECTIVE Knee osteoarthritis (OA) is a debilitating musculoskeletal disorder that causes functional disability. Automatic knee OA diagnosis has great potential to enable timely and early intervention, which can potentially reverse the degenerative process of knee OA. Yet it is a tedious task, given the heterogeneity of the disorder. Most proposed techniques demonstrate a single OA diagnostic task, widely based on the Kellgren-Lawrence (KL) standard, a composite score of only a few imaging features (i.e. osteophytes, joint space narrowing, and subchondral bone changes); thus only one key disease pattern is tackled. The KL standard fails to represent the disease pattern of individual OA features, particularly osteophytes, joint-space narrowing, and pain intensity, which play a fundamental role in OA manifestation. In this study, we aim to develop a multitask model using convolutional neural network (CNN) feature extractors and machine learning classifiers to detect nine important OA features from plain radiography: KL grade, knee osteophytes (medial femoral: OSFM, medial tibial: OSTM, lateral femoral: OSFL, and lateral tibial: OSTL), joint-space narrowing (medial: JSM, and lateral: JSL), and patient-reported pain intensity. METHODS We propose a new feature extraction method that replaces the fully-connected layer with a global average pooling (GAP) layer. A comparative analysis was conducted on the efficacy of 16 different CNN feature extractors and three machine learning classifiers. RESULTS Experimental results revealed the potential of CNN feature extractors for multitask diagnosis. The optimal model consisted of a VGG16-GAP feature extractor and a KNN classifier. This model not only outperformed the other tested models, it also outperformed state-of-the-art methods, with higher balanced accuracy, higher Cohen's kappa, higher F1, and lower mean squared error (MSE) in predicting seven OA features.
CONCLUSIONS The proposed model demonstrates prediction of pain, as well as of eight OA-related bony features, from plain radiographs. Future work should focus on exploring additional potential radiological manifestations of OA and their relation to therapeutic interventions.
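Replacing the fully-connected layer with global average pooling (GAP), as the methods above describe, reduces each H×W feature map to a single per-channel mean, so the classifier sees one value per channel. A framework-free sketch of that pooling step (illustrative values only, not the authors' implementation):

```python
def global_average_pool(feature_maps):
    """Collapse a list of HxW feature maps (one per channel) to a single
    mean per channel, as a GAP layer does in place of a dense head."""
    return [
        sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))
        for fmap in feature_maps
    ]

# Two 2x2 channel maps -> a 2-element feature vector for the classifier.
channels = [[[1.0, 3.0], [5.0, 7.0]],
            [[2.0, 2.0], [2.0, 2.0]]]
print(global_average_pool(channels))  # [4.0, 2.0]
```

The resulting per-channel vector is what a downstream classifier such as KNN would consume.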
Affiliation(s)
- Yun Xin Teoh: Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, 50603, Malaysia; LISSI, Université Paris-Est Créteil, Vitry sur Seine, 94400, France
- Alice Othmani: LISSI, Université Paris-Est Créteil, Vitry sur Seine, 94400, France
- Khin Wee Lai: Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, 50603, Malaysia
- Siew Li Goh: Sports Medicine Unit, Faculty of Medicine, Universiti Malaya, Kuala Lumpur, 50603, Malaysia; Centre for Epidemiology and Evidence-Based Practice, Faculty of Medicine, Universiti Malaya, Kuala Lumpur, 50603, Malaysia
- Juliana Usman: Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, 50603, Malaysia
189
Kim MJ, Martin CA, Kim J, Jablonski MM. Computational methods in glaucoma research: Current status and future outlook. Mol Aspects Med 2023; 94:101222. PMID: 37925783; PMCID: PMC10842846; DOI: 10.1016/j.mam.2023.101222.
Abstract
Advancements in computational techniques have transformed glaucoma research, providing a deeper understanding of genetics, disease mechanisms, and potential therapeutic targets. Systems genetics integrates genomic and clinical data, aiding in identifying drug targets, comprehending disease mechanisms, and personalizing treatment strategies for glaucoma. Molecular dynamics (MD) simulations offer valuable molecular-level insights into the behavior of glaucoma-related biomolecules and their drug interactions, guiding experimental studies and drug discovery efforts. Artificial intelligence (AI) technologies hold promise for revolutionizing glaucoma research by enhancing disease diagnosis, target identification, and drug candidate selection. Generalized protocols for systems genetics, MD simulations, and AI model development are included as a guide for glaucoma researchers. These computational methods are not separate, however, but work harmoniously together to discover novel ways to combat glaucoma. With ongoing research and progress in genomics technologies, MD simulations, and AI methodologies, computational methods are projected to become an integral part of glaucoma research in the future.
Affiliation(s)
- Minjae J Kim: Department of Ophthalmology, The Hamilton Eye Institute, The University of Tennessee Health Science Center, Memphis, TN, 38163, USA
- Cole A Martin: Department of Ophthalmology, The Hamilton Eye Institute, The University of Tennessee Health Science Center, Memphis, TN, 38163, USA
- Jinhwa Kim: Graduate School of Artificial Intelligence, Graduate School of Metaverse, Department of Management Information Systems, Sogang University, 1 Shinsoo-Dong, Mapo-Gu, Seoul, South Korea
- Monica M Jablonski: Department of Ophthalmology, The Hamilton Eye Institute, The University of Tennessee Health Science Center, Memphis, TN, 38163, USA
190
Sadr S, Mohammad-Rahimi H, Ghorbanimehr MS, Rokhshad R, Abbasi Z, Soltani P, Moaddabi A, Shahab S, Rohban MH. Deep learning for tooth identification and enumeration in panoramic radiographs. Dent Res J (Isfahan) 2023; 20:116. PMID: 38169618; PMCID: PMC10758389.
Abstract
Background Dentists begin diagnosis by identifying and enumerating teeth. Panoramic radiographs are widely used for tooth identification due to their large field of view and low exposure dose. Automatic numbering of teeth in panoramic radiographs can help clinicians avoid errors. Deep learning has emerged as a promising tool for automating such tasks. Our goal was to evaluate the accuracy of a two-step deep learning method for tooth identification and enumeration in panoramic radiographs. Materials and Methods In this retrospective observational study, 1007 panoramic radiographs were labeled by three experienced dentists. Labeling involved drawing bounding boxes in two distinct ways: one for teeth and one for quadrants. All images were preprocessed using the contrast-limited adaptive histogram equalization (CLAHE) method. First, panoramic images were fed to a quadrant detection model, and the outputs of this model were provided to the tooth numbering models. A faster region-based convolutional neural network model was used in each step. Results Average precision (AP) was calculated at different intersection-over-union (IoU) thresholds. The AP50 of quadrant detection and tooth enumeration was 100% and 95%, respectively. Conclusion We obtained promising results with a high AP using our two-step deep learning framework for automatic tooth enumeration on panoramic radiographs. Further research should be conducted on diverse datasets and in real-life situations.
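The AP50 metric reported above counts a detected box as correct when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal IoU sketch for axis-aligned (x1, y1, x2, y2) boxes (toy coordinates, not the study's data):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap width/height are clamped at zero when the boxes don't intersect.
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# A predicted tooth box shifted half a box-width from the ground truth.
pred, truth = (0, 0, 10, 10), (5, 0, 15, 10)
score = iou(pred, truth)
print(score, score >= 0.5)  # this detection falls below the AP50 threshold
```

AP50 then averages precision over recall levels using this 0.5 cutoff to decide matches.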
Affiliation(s)
- Soroush Sadr: Department of Endodontics, School of Dentistry, Hamadan University of Medical Sciences, Hamadan, Iran
- Hossein Mohammad-Rahimi: Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Germany; Department of Computer Engineering, Sharif University of Technology, Tehran, Iran
- Rata Rokhshad: Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Germany; Department of Medicine, Section of Endocrinology, Nutrition, and Diabetes, Boston University Medical Center, Boston, MA, USA
- Zahra Abbasi: Department of Oral Health Sciences, Faculty of Dentistry, University of British Columbia, Vancouver, Canada
- Parisa Soltani: Department of Oral and Maxillofacial Radiology, Dental Implants Research Center, School of Dentistry, Dental Research Institute, Isfahan University of Medical Sciences, Isfahan, Iran
- Amirhossein Moaddabi: Department of Oral and Maxillofacial Surgery, Dental Research Center, School of Dentistry, Mazandaran University of Medical Sciences, Sari, Iran
- Shahriar Shahab: Department of Oral and Maxillofacial Radiology, School of Dentistry, Shahed University of Medical Sciences, Tehran, Iran
191
Babaeipour R, Ouriadov A, Fox MS. Deep Learning Approaches for Quantifying Ventilation Defects in Hyperpolarized Gas Magnetic Resonance Imaging of the Lung: A Review. Bioengineering (Basel) 2023; 10:1349. PMID: 38135940; PMCID: PMC10740978; DOI: 10.3390/bioengineering10121349.
Abstract
This paper provides an in-depth overview of deep neural networks and their application to the segmentation and analysis of lung magnetic resonance imaging (MRI) scans, focusing specifically on hyperpolarized gas MRI and the quantification of lung ventilation defects. Five distinct studies are examined, each leveraging unique deep learning architectures and data augmentation techniques to optimize model performance. These studies encompass a range of approaches, including 3D convolutional neural networks, cascaded U-Net models, generative adversarial networks, and nnU-Net for hyperpolarized gas MRI segmentation. The findings highlight the potential of deep learning methods in the segmentation and analysis of lung MRI scans and emphasize the need for consensus on lung ventilation segmentation methods.
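A common endpoint of the segmentation pipelines reviewed above is a ventilation defect percentage: unventilated voxels as a fraction of the segmented lung. A toy sketch with made-up flat binary masks standing in for segmentation outputs (the naming and formula here are a generic assumption, not taken from any one reviewed study):

```python
def ventilation_defect_percent(lung_mask, defect_mask):
    """Defect voxels as a percentage of lung voxels, both flat binary masks.
    Only defects inside the lung mask are counted."""
    lung = sum(lung_mask)
    defect = sum(d for l, d in zip(lung_mask, defect_mask) if l)
    return 100.0 * defect / lung if lung else 0.0

lung   = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]  # 8 lung voxels
defect = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]  # 2 of them unventilated
print(ventilation_defect_percent(lung, defect))  # 25.0
```

In practice both masks come from the deep-learning segmentations the review compares.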
Affiliation(s)
- Ramtin Babaeipour: School of Biomedical Engineering, Faculty of Engineering, The University of Western Ontario, London, ON N6A 3K7, Canada
- Alexei Ouriadov: School of Biomedical Engineering, Faculty of Engineering, The University of Western Ontario, London, ON N6A 3K7, Canada; Department of Physics and Astronomy, The University of Western Ontario, London, ON N6A 3K7, Canada; Lawson Health Research Institute, London, ON N6C 2R5, Canada
- Matthew S. Fox: Department of Physics and Astronomy, The University of Western Ontario, London, ON N6A 3K7, Canada; Lawson Health Research Institute, London, ON N6C 2R5, Canada
192
Kourounis G, Elmahmudi AA, Thomson B, Hunter J, Ugail H, Wilson C. Computer image analysis with artificial intelligence: a practical introduction to convolutional neural networks for medical professionals. Postgrad Med J 2023; 99:1287-1294. PMID: 37794609; PMCID: PMC10658730; DOI: 10.1093/postmj/qgad095.
Abstract
Artificial intelligence tools, particularly convolutional neural networks (CNNs), are transforming healthcare by enhancing predictive, diagnostic, and decision-making capabilities. This review provides an accessible and practical explanation of CNNs for clinicians and highlights their relevance in medical image analysis. CNNs have shown themselves to be exceptionally useful in computer vision, a field that enables machines to 'see' and interpret visual data. Understanding how these models work can help clinicians leverage their full potential, especially as artificial intelligence continues to evolve and integrate into healthcare. CNNs have already demonstrated their efficacy in diverse medical fields, including radiology, histopathology, and medical photography. In radiology, CNNs have been used to automate the assessment of conditions such as pneumonia, pulmonary embolism, and rectal cancer. In histopathology, CNNs have been used to assess and classify colorectal polyps, gastric epithelial tumours, as well as assist in the assessment of multiple malignancies. In medical photography, CNNs have been used to assess retinal diseases and skin conditions, and to detect gastric and colorectal polyps during endoscopic procedures. In surgical laparoscopy, they may provide intraoperative assistance to surgeons, helping interpret surgical anatomy and demonstrate safe dissection zones. The integration of CNNs into medical image analysis promises to enhance diagnostic accuracy, streamline workflow efficiency, and expand access to expert-level image analysis, contributing to the ultimate goal of delivering further improvements in patient and healthcare outcomes.
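Since the review above is a practical introduction to CNNs, the core operation is worth seeing concretely: a convolutional layer slides a small kernel over the image and sums elementwise products at each position, so a hand-crafted kernel can act as a simple edge detector. A bare-bones "valid" 2D convolution (illustrative arrays, not taken from the paper):

```python
def conv2d(image, kernel):
    """'Valid' 2D convolution (no padding): slide the kernel over the image
    and take the sum of elementwise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [
        [
            sum(image[i + u][j + v] * kernel[u][v]
                for u in range(kh) for v in range(kw))
            for j in range(out_w)
        ]
        for i in range(out_h)
    ]

# A vertical-edge kernel: responds where intensity jumps from left to right.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(image, kernel))  # the middle column lights up at the edge
```

In a trained CNN the kernel weights are learned rather than hand-set, and many such kernels stack into the feature hierarchies the review describes.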
Affiliation(s)
- Georgios Kourounis: NIHR Blood and Transplant Research Unit, Newcastle University and Cambridge University, Newcastle upon Tyne, NE1 7RU, United Kingdom; Institute of Transplantation, The Freeman Hospital, Newcastle upon Tyne, NE7 7DN, United Kingdom
- Ali Ahmed Elmahmudi: Faculty of Engineering and Informatics, Bradford University, Bradford, BD7 1DP, United Kingdom
- Brian Thomson: Faculty of Engineering and Informatics, Bradford University, Bradford, BD7 1DP, United Kingdom
- James Hunter: Nuffield Department of Surgical Sciences, University of Oxford, Oxford, OX3 9DU, United Kingdom
- Hassan Ugail: Faculty of Engineering and Informatics, Bradford University, Bradford, BD7 1DP, United Kingdom
- Colin Wilson: NIHR Blood and Transplant Research Unit, Newcastle University and Cambridge University, Newcastle upon Tyne, NE1 7RU, United Kingdom; Institute of Transplantation, The Freeman Hospital, Newcastle upon Tyne, NE7 7DN, United Kingdom
193
Mwangi J, Kamau PM, Thuku RC, Lai R. Design methods for antimicrobial peptides with improved performance. Zool Res 2023; 44:1095-1114. PMID: 37914524; PMCID: PMC10802102; DOI: 10.24272/j.issn.2095-8137.2023.246.
Abstract
The recalcitrance of pathogens to traditional antibiotics has made treating and eradicating bacterial infections more difficult. In this regard, developing new antimicrobial agents to combat antibiotic-resistant strains has become a top priority. Antimicrobial peptides (AMPs), a ubiquitous class of naturally occurring compounds with broad-spectrum antipathogenic activity, hold significant promise as an effective solution to the current antimicrobial resistance (AMR) crisis. Several AMPs have been identified and evaluated for their therapeutic application, with many already in the drug development pipeline. Their distinct properties, such as high target specificity, potency, and ability to bypass microbial resistance mechanisms, make AMPs a promising alternative to traditional antibiotics. Nonetheless, several challenges, such as high toxicity, lability to proteolytic degradation, low stability, poor pharmacokinetics, and high production costs, continue to hamper their clinical applicability. Therefore, recent research has focused on optimizing the properties of AMPs to improve their performance. By understanding the physicochemical properties of AMPs that correspond to their activity, such as amphipathicity, hydrophobicity, structural conformation, amino acid distribution, and composition, researchers can design AMPs with desired and improved performance. In this review, we highlight some of the key strategies used to optimize the performance of AMPs, including rational design and de novo synthesis. We also discuss the growing role of predictive computational tools, utilizing artificial intelligence and machine learning, in the design and synthesis of highly efficacious lead drug candidates.
Affiliation(s)
- James Mwangi: Key Laboratory of Bioactive Peptides of Yunnan Province, Engineering Laboratory of Peptides of Chinese Academy of Sciences, KIZ-CUHK Joint Laboratory of Bioresources and Molecular Research in Common Diseases, National Resource Centre for Non-Human Primates, Kunming Primate Research Centre, National Research Facility for Phenotypic & Genetic Analysis of Model Animals (Primate Facility), Sino-African Joint Research Centre, New Cornerstone Science Institute, Kunming Institute of Zoology, Chinese Academy of Sciences, Kunming, Yunnan 650107, China; Kunming College of Life Science, University of Chinese Academy of Sciences, Kunming, Yunnan 650204, China
- Peter Muiruri Kamau: Key Laboratory of Bioactive Peptides of Yunnan Province, Engineering Laboratory of Peptides of Chinese Academy of Sciences, KIZ-CUHK Joint Laboratory of Bioresources and Molecular Research in Common Diseases, National Resource Centre for Non-Human Primates, Kunming Primate Research Centre, National Research Facility for Phenotypic & Genetic Analysis of Model Animals (Primate Facility), Sino-African Joint Research Centre, New Cornerstone Science Institute, Kunming Institute of Zoology, Chinese Academy of Sciences, Kunming, Yunnan 650107, China; Kunming College of Life Science, University of Chinese Academy of Sciences, Kunming, Yunnan 650204, China
- Rebecca Caroline Thuku: Key Laboratory of Bioactive Peptides of Yunnan Province, Engineering Laboratory of Peptides of Chinese Academy of Sciences, KIZ-CUHK Joint Laboratory of Bioresources and Molecular Research in Common Diseases, National Resource Centre for Non-Human Primates, Kunming Primate Research Centre, National Research Facility for Phenotypic & Genetic Analysis of Model Animals (Primate Facility), Sino-African Joint Research Centre, New Cornerstone Science Institute, Kunming Institute of Zoology, Chinese Academy of Sciences, Kunming, Yunnan 650107, China; Kunming College of Life Science, University of Chinese Academy of Sciences, Kunming, Yunnan 650204, China
- Ren Lai: Key Laboratory of Bioactive Peptides of Yunnan Province, Engineering Laboratory of Peptides of Chinese Academy of Sciences, KIZ-CUHK Joint Laboratory of Bioresources and Molecular Research in Common Diseases, National Resource Centre for Non-Human Primates, Kunming Primate Research Centre, National Research Facility for Phenotypic & Genetic Analysis of Model Animals (Primate Facility), Sino-African Joint Research Centre, New Cornerstone Science Institute, Kunming Institute of Zoology, Chinese Academy of Sciences, Kunming, Yunnan 650107, China
- Centre for Evolution and Conservation Biology, Southern Marine Science and Engineering Guangdong Laboratory, Guangzhou, Guangdong 511458, China. E-mail:
| |
|
194
|
Zhang X, Yang K, Lu Q, Wu J, Yu L, Lin Y. Predicting carbon futures prices based on a new hybrid machine learning: Comparative study of carbon prices in different periods. J Environ Manage 2023; 346:118962. [PMID: 37714085 DOI: 10.1016/j.jenvman.2023.118962] [Received: 07/08/2023] [Revised: 08/25/2023] [Accepted: 09/08/2023] [Indexed: 09/17/2023]
Abstract
Accurate prediction of carbon prices is of great significance for national energy security and climate policy. This paper proposes a new hybrid forecasting model combining variational mode decomposition, a convolutional neural network, bidirectional long short-term memory, and a multi-layer perceptron (VMD-CNN-BILSTM-MLP) to predict EUA carbon futures prices over the five-year periods before and after the introduction of emission reduction policies. First, the parameters of the VMD model are determined by a genetic algorithm (GA), and carbon futures prices are decomposed into subsequences of different frequencies. The MLP model is then applied to predict the highest-frequency subsequence, and the CNN-BILSTM model predicts the remaining subsequences. Finally, the predictions for the subsequences are summed to obtain the output of the full model. Predictive performance is assessed using the root mean squared error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), coefficient of determination (R2), and the modified Diebold-Mariano test (MDM). In both periods, the proposed model predicts better than the comparison models, and prediction of carbon futures prices is slightly better in the first five-year period than in the second. Overall, the experiments on predicting carbon futures prices in the two periods, on varying the dataset split, and on predicting the whole sample all confirm that the hybrid model proposed in this paper has good predictive performance.
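The decomposition-ensemble pattern described above (decompose the series, predict each subsequence separately, sum the predictions) can be illustrated without VMD or neural networks. The sketch below substitutes a moving-average decomposition and naive per-component forecasters, so every choice here is an illustrative assumption, not the paper's actual model.

```python
def moving_average(series, window):
    """Centered trend estimate; edges use a shrunken window."""
    n = len(series)
    half = window // 2
    return [
        sum(series[max(0, i - half):min(n, i + half + 1)])
        / (min(n, i + half + 1) - max(0, i - half))
        for i in range(n)
    ]

def decompose(series, window=5):
    """Split a series into a low-frequency trend and a high-frequency residual."""
    trend = moving_average(series, window)
    residual = [x - t for x, t in zip(series, trend)]
    return trend, residual

def forecast_next(series, window=5):
    """Forecast each component separately, then sum the component forecasts:
    persistence for the smooth trend, the component mean for the residual."""
    trend, residual = decompose(series, window)
    trend_pred = trend[-1]                       # naive / persistence forecast
    resid_pred = sum(residual) / len(residual)   # mean forecast
    return trend_pred + resid_pred

prices = [80.1, 81.3, 79.8, 82.0, 83.5, 82.9, 84.2, 85.0]  # toy price series
print(round(forecast_next(prices), 3))
```

In the paper's pipeline the decomposition is VMD (tuned by a GA) and the per-component forecasters are an MLP and CNN-BILSTM networks; the additive recombination step is the same.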
Affiliation(s)
- Xi Zhang
- School of Business, Chengdu University of Technology, Chengdu, 610059, China.
| | - Kailing Yang
- School of Business, Chengdu University of Technology, Chengdu, 610059, China.
| | - Qin Lu
- School of Business, Sichuan University, Chengdu, 610065, China.
| | - Jingyu Wu
- School of Business, Chengdu University of Technology, Chengdu, 610059, China.
| | - Liang Yu
- School of Business, Chengdu University of Technology, Chengdu, 610059, China.
| | - Yu Lin
- School of Business, Chengdu University of Technology, Chengdu, 610059, China.
| |
|
195
|
Groves I, Holmshaw J, Furley D, Manning E, Chinnaiya K, Towers M, Evans BD, Placzek M, Fletcher AG. Accurate staging of chick embryonic tissues via deep learning of salient features. Development 2023; 150:dev202068. [PMID: 37830145 PMCID: PMC10690058 DOI: 10.1242/dev.202068] [Received: 06/07/2023] [Accepted: 10/05/2023] [Indexed: 10/14/2023]
Abstract
Recent work shows that the developmental potential of progenitor cells in the HH10 chick brain changes rapidly, accompanied by subtle changes in morphology. This demands increased temporal resolution for studies of the brain at this stage, necessitating precise and unbiased staging. Here, we investigated whether we could train a deep convolutional neural network to sub-stage HH10 chick brains using a small dataset of 151 expertly labelled images. By augmenting our images with biologically informed transformations and data-driven preprocessing steps, we successfully trained a classifier to sub-stage HH10 brains to 87.1% test accuracy. To determine whether our classifier could be applied more generally, we re-trained it using a set of 269 images of randomised control and experimental chick wings, and obtained similarly high test accuracy (86.1%). Saliency analyses revealed that biologically relevant features are used for classification. Our strategy enables the training of image classifiers for various applications in developmental biology with limited microscopy data.
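Augmenting a small image dataset with label-preserving transformations, as described above, can be sketched in plain Python. The eight dihedral symmetries below stand in for the paper's biologically informed transformations, which are not specified here; they are a generic illustration of how one image becomes several training examples.

```python
def rot90(img):
    """Rotate a 2D image (list of rows) 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*img)][::-1]

def hflip(img):
    """Mirror the image left-right."""
    return [row[::-1] for row in img]

def augment(img):
    """Return the eight symmetries (4 rotations x optional flip) of an image.
    For real micrographs one keeps only the transformations that preserve
    biological plausibility, e.g. flips and small rotations."""
    out = []
    current = img
    for _ in range(4):
        out.append(current)
        out.append(hflip(current))
        current = rot90(current)
    return out

img = [[1, 2], [3, 4]]  # toy 2x2 "image"
views = augment(img)
print(len(views))  # 8 augmented views per input image
```

Each augmented view keeps the original label, so a dataset of 151 labelled images can be expanded several-fold before training, which is what makes classifiers trainable on such small microscopy datasets.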
Affiliation(s)
- Ian Groves
- School of Mathematics and Statistics, University of Sheffield, Hicks Building, Hounsfield Road, Sheffield S3 7RH, UK
- School of Biosciences, University of Sheffield, Firth Court, Western Bank, Sheffield S10 2TN, UK
| | - Jacob Holmshaw
- School of Mathematics and Statistics, University of Sheffield, Hicks Building, Hounsfield Road, Sheffield S3 7RH, UK
| | - David Furley
- School of Mathematics and Statistics, University of Sheffield, Hicks Building, Hounsfield Road, Sheffield S3 7RH, UK
- School of Biosciences, University of Sheffield, Firth Court, Western Bank, Sheffield S10 2TN, UK
| | - Elizabeth Manning
- School of Biosciences, University of Sheffield, Firth Court, Western Bank, Sheffield S10 2TN, UK
| | - Kavitha Chinnaiya
- School of Biosciences, University of Sheffield, Firth Court, Western Bank, Sheffield S10 2TN, UK
| | - Matthew Towers
- School of Biosciences, University of Sheffield, Firth Court, Western Bank, Sheffield S10 2TN, UK
| | - Benjamin D. Evans
- Department of Informatics, School of Engineering and Informatics, University of Sussex, Falmer, Brighton BN1 9RH, UK
| | - Marysia Placzek
- School of Biosciences, University of Sheffield, Firth Court, Western Bank, Sheffield S10 2TN, UK
| | - Alexander G. Fletcher
- School of Mathematics and Statistics, University of Sheffield, Hicks Building, Hounsfield Road, Sheffield S3 7RH, UK
| |
|
196
|
Raza A, Uddin J, Almuhaimeed A, Akbar S, Zou Q, Ahmad A. AIPs-SnTCN: Predicting Anti-Inflammatory Peptides Using fastText and Transformer Encoder-Based Hybrid Word Embedding with Self-Normalized Temporal Convolutional Networks. J Chem Inf Model 2023; 63:6537-6554. [PMID: 37905969 DOI: 10.1021/acs.jcim.3c01563] [Indexed: 11/02/2023]
Abstract
Inflammation is a protective biological response to harmful stimuli, such as infection, damaged cells, toxic chemicals, or tissue injury. Its purpose is to eradicate pathogenic micro-organisms or irritants and facilitate tissue repair. Prolonged inflammation can result in chronic inflammatory diseases. However, wet-laboratory-based treatments are costly and time-consuming and may have adverse side effects on normal cells. In the past decade, peptide therapeutics have gained significant attention due to their high specificity in targeting affected cells without harming healthy cells. Motivated by the significance of peptide-based therapies, we developed a highly discriminative prediction model, AIPs-SnTCN, to accurately predict anti-inflammatory peptides. The peptide samples are encoded using word-embedding techniques, namely skip-gram and attention-based bidirectional encoder representations from transformers (BERT). The conjoint triad feature (CTF) additionally captures structure-based cluster-profile features. A fused vector of word-embedding and sequential features is formed to compensate for the limitations of single encoding methods. Support vector machine-based recursive feature elimination (SVM-RFE) is applied to choose the ranking-based optimal feature space. The optimized feature space is trained using an improved self-normalized temporal convolutional network (SnTCN). The AIPs-SnTCN model achieved a predictive accuracy of 95.86% and an AUC of 0.97 on the training samples. On the alternate training dataset, our model obtained an accuracy of 92.04% and an AUC of 0.96. The proposed AIPs-SnTCN model outperformed existing models, with an approximately 19% higher accuracy and an approximately 14% higher AUC value. The reliability and efficacy of the AIPs-SnTCN model make it a valuable tool for scientists and may play a beneficial role in pharmaceutical design and academic research.
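The conjoint triad feature (CTF) mentioned above has a standard construction: amino acids are mapped to seven classes and every run of three consecutive classes is counted, giving a 343-dimensional vector. A minimal sketch, assuming the commonly used seven-group classification of Shen et al. (the paper may use a variant):

```python
# Seven amino-acid classes based on dipole and side-chain volume (Shen et al.)
GROUPS = {
    **{aa: 0 for aa in "AGV"}, **{aa: 1 for aa in "ILFP"},
    **{aa: 2 for aa in "YMTS"}, **{aa: 3 for aa in "HNQW"},
    **{aa: 4 for aa in "RK"}, **{aa: 5 for aa in "DE"}, "C": 6,
}

def conjoint_triad(seq: str) -> list:
    """343-dimensional count vector of consecutive class triads."""
    counts = [0] * (7 ** 3)
    for i in range(len(seq) - 2):
        a, b, c = (GROUPS[x] for x in seq[i:i + 3])
        counts[a * 49 + b * 7 + c] += 1  # base-7 index of the triad
    return counts

vec = conjoint_triad("GIGKFLHSAKKFGKAFVGEIMNS")  # example peptide
print(len(vec), sum(vec))
```

In the paper this fixed-length CTF vector is concatenated with the learned skip-gram/BERT embeddings before SVM-RFE feature selection, which is why a sequence of any length maps to a vector of the same dimensionality.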
Affiliation(s)
- Ali Raza
- Department of Physical and Numerical Sciences, Qurtuba University of Science and Information Technology, Peshawar, Khyber Pakhtunkhwa 25124, Pakistan
- Department of Computer Science, MY University, Islamabad 45750, Pakistan
| | - Jamal Uddin
- Department of Physical and Numerical Sciences, Qurtuba University of Science and Information Technology, Peshawar, Khyber Pakhtunkhwa 25124, Pakistan
| | - Abdullah Almuhaimeed
- Digital Health Institute, King Abdulaziz City for Science and Technology, Riyadh 11442, Saudi Arabia
| | - Shahid Akbar
- Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu 610054, China
- Department of Computer Science, Abdul Wali Khan University Mardan, Mardan, Khyber Pakhtunkhwa 23200, Pakistan
| | - Quan Zou
- Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu 610054, China
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou 324000, PR China
| | - Ashfaq Ahmad
- Department of Computer Science, MY University, Islamabad 45750, Pakistan
| |
|
197
|
Lee S, Jeon U, Lee JH, Kang S, Kim H, Lee J, Chung MJ, Cha HS. Artificial intelligence for the detection of sacroiliitis on magnetic resonance imaging in patients with axial spondyloarthritis. Front Immunol 2023; 14:1278247. [PMID: 38022576 PMCID: PMC10676202 DOI: 10.3389/fimmu.2023.1278247] [Received: 08/16/2023] [Accepted: 10/25/2023] [Indexed: 12/01/2023]
Abstract
Background Magnetic resonance imaging (MRI) is important for the early detection of axial spondyloarthritis (axSpA). We developed an artificial intelligence (AI) model for detecting sacroiliitis on MRI in patients with axSpA. Methods This study included MRI examinations of patients who underwent semi-coronal MRI scans of the sacroiliac joints with short tau inversion recovery (STIR) sequences for chronic back pain between January 2010 and December 2021. Sacroiliitis was defined as a positive MRI finding according to the ASAS classification criteria for axSpA. We developed a two-stage framework. First, a Faster R-CNN network extracted regions of interest (ROIs) to localize the sacroiliac joints. Maximum intensity projection (MIP) of three consecutive slices was used to mimic the reading of two adjacent slices. Second, a VGG-19 network determined the presence of sacroiliitis in the localized ROIs. We augmented the positive dataset six-fold. Sacroiliitis classification performance was measured using sensitivity, specificity, and the area under the receiver operating characteristic curve (AUROC). The prediction models were evaluated using three-round three-fold cross-validation. Results A total of 296 participants with 4,746 MRI slices were included in the study. Sacroiliitis was identified in 864 MRI slices from 119 participants. The mean sensitivity, specificity, and AUROC for the detection of sacroiliitis were 0.725 (95% CI, 0.705-0.745), 0.936 (95% CI, 0.924-0.947), and 0.830 (95% CI, 0.792-0.868), respectively, at the image level, and 0.947 (95% CI, 0.912-0.982), 0.691 (95% CI, 0.603-0.779), and 0.816 (95% CI, 0.776-0.856), respectively, at the patient level. In the original model, without MIP and dataset augmentation, the mean sensitivity, specificity, and AUROC were 0.517 (95% CI, 0.493-0.780), 0.944 (95% CI, 0.933-0.955), and 0.731 (95% CI, 0.681-0.780), respectively, at the image level, and 0.806 (95% CI, 0.729-0.883), 0.617 (95% CI, 0.523-0.711), and 0.711 (95% CI, 0.660-0.763), respectively, at the patient level. Performance was improved by the MIP technique and data augmentation. Conclusion An AI model compatible with the ASAS criteria for axSpA was developed for the detection of sacroiliitis on MRI, with the potential to aid the application of MRI in a wider clinical setting.
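The maximum intensity projection (MIP) step described above reduces to a per-pixel maximum across consecutive slices. A minimal sketch on nested lists, purely illustrative:

```python
def mip(slices):
    """Maximum intensity projection: per-pixel maximum over a stack of
    consecutive slices, mimicking the reading of adjacent MRI slices."""
    rows, cols = len(slices[0]), len(slices[0][0])
    return [
        [max(s[r][c] for s in slices) for c in range(cols)]
        for r in range(rows)
    ]

# Three toy 2x2 "slices"; bright bone-marrow-edema-like pixels survive the max
s1 = [[0, 5], [2, 1]]
s2 = [[3, 1], [4, 0]]
s3 = [[1, 2], [0, 6]]
print(mip([s1, s2, s3]))  # [[3, 5], [4, 6]]
```

Because a bright lesion visible in any one of the three slices survives into the projection, the classifier sees information from neighbouring slices in a single image, which is the stated motivation for the MIP step.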
Affiliation(s)
- Seulkee Lee
- Department of Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
| | - Uju Jeon
- Medical AI Research Center, Samsung Medical Center, Seoul, Republic of Korea
| | - Ji Hyun Lee
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
| | - Seonyoung Kang
- Department of Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
| | - Hyungjin Kim
- Department of Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
| | - Jaejoon Lee
- Department of Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
| | - Myung Jin Chung
- Medical AI Research Center, Samsung Medical Center, Seoul, Republic of Korea
- Department of Data Convergence and Future Medicine, Sungkyunkwan University School of Medicine, Suwon, Republic of Korea
| | - Hoon-Suk Cha
- Department of Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea
| |
|
198
|
Scattarella F, Diacono D, Monaco A, Amoroso N, Bellantuono L, Massaro G, Pepe FV, Tangaro S, Bellotti R, D'Angelo M. Deep learning approach for denoising low-SNR correlation plenoptic images. Sci Rep 2023; 13:19645. [PMID: 37950034 PMCID: PMC10638444 DOI: 10.1038/s41598-023-46765-x] [Received: 06/22/2023] [Accepted: 11/04/2023] [Indexed: 11/12/2023]
Abstract
Correlation Plenoptic Imaging (CPI) is a novel volumetric imaging technique that uses two sensors and the spatio-temporal correlations of light to detect both the spatial distribution and the direction of light. This approach to plenoptic imaging enables refocusing and 3D imaging with significant enhancement of both resolution and depth of field. However, CPI is generally slower than conventional approaches because sufficient statistics must be acquired to measure correlations with an acceptable signal-to-noise ratio (SNR). We address this issue with a deep learning application that improves image quality under undersampled frame statistics. We employ a set of experimental images reconstructed by a standard CPI architecture at three different sampling ratios, and use it to feed a CNN model, pre-trained via transfer learning, based on a U-Net architecture with a VGG-19 network as the encoder. We find that our model reaches a Structural Similarity (SSIM) index value close to 1 both on the test sample (SSIM = [Formula: see text]) and in 5-fold cross-validation (SSIM = [Formula: see text]); the results also outperform classic denoising methods, in particular for images with lower SNR. The proposed work represents the first application of artificial intelligence in the field of CPI and demonstrates its high potential: speeding up acquisition by a factor of 20 over the fastest CPI demonstrated so far, potentially enabling the recording of 200 volumetric images per second. These results open the way to scanning-free, real-time volumetric imaging at video rate, which is expected to have a substantial impact in various application scenarios, from monitoring neuronal activity to machine vision and security.
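The SSIM index used above for evaluation has a closed form. The sketch below computes the single-window (global) variant over flattened pixel lists; in practice SSIM is usually averaged over local windows, so this is an illustration of the metric rather than the paper's exact implementation.

```python
def ssim_global(x, y, dynamic_range=255.0):
    """Single-window SSIM between two equal-length flat pixel lists:
    ((2*mu_x*mu_y + c1)(2*cov + c2)) / ((mu_x^2 + mu_y^2 + c1)(var_x + var_y + c2))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1 = (0.01 * dynamic_range) ** 2  # standard stabilizing constants
    c2 = (0.03 * dynamic_range) ** 2
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

clean = [10.0, 200.0, 55.0, 120.0]   # toy "ground truth" pixels
noisy = [12.0, 190.0, 60.0, 118.0]   # toy "denoised" pixels
print(round(ssim_global(clean, clean), 6))  # identical images give 1.0
print(round(ssim_global(clean, noisy), 4))
```

An SSIM close to 1, as reported for the denoised CPI reconstructions, means the denoised output nearly matches the fully sampled reference in luminance, contrast, and structure.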
Affiliation(s)
- Francesco Scattarella
- Dipartimento Interateneo di Fisica M. Merlin, Università degli Studi di Bari Aldo Moro, 70125, Bari, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Bari, 70125, Bari, Italy
| | - Domenico Diacono
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Bari, 70125, Bari, Italy
| | - Alfonso Monaco
- Dipartimento Interateneo di Fisica M. Merlin, Università degli Studi di Bari Aldo Moro, 70125, Bari, Italy.
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Bari, 70125, Bari, Italy.
| | - Nicola Amoroso
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Bari, 70125, Bari, Italy
- Dipartimento di Farmacia - Scienze del Farmaco, Università degli Studi di Bari Aldo Moro, 70125, Bari, Italy
| | - Loredana Bellantuono
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Bari, 70125, Bari, Italy
- Dipartimento di Biomedicina Traslazionale e Neuroscienze (DiBraiN), Università degli Studi di Bari Aldo Moro, 70124, Bari, Italy
| | - Gianlorenzo Massaro
- Dipartimento Interateneo di Fisica M. Merlin, Università degli Studi di Bari Aldo Moro, 70125, Bari, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Bari, 70125, Bari, Italy
| | - Francesco V Pepe
- Dipartimento Interateneo di Fisica M. Merlin, Università degli Studi di Bari Aldo Moro, 70125, Bari, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Bari, 70125, Bari, Italy
| | - Sabina Tangaro
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Bari, 70125, Bari, Italy
- Dipartimento di Scienze del Suolo, della Pianta e degli Alimenti, Università degli Studi di Bari Aldo Moro, 70125, Bari, Italy
| | - Roberto Bellotti
- Dipartimento Interateneo di Fisica M. Merlin, Università degli Studi di Bari Aldo Moro, 70125, Bari, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Bari, 70125, Bari, Italy
| | - Milena D'Angelo
- Dipartimento Interateneo di Fisica M. Merlin, Università degli Studi di Bari Aldo Moro, 70125, Bari, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Bari, 70125, Bari, Italy
| |
|
199
|
Phatak S, Chakraborty S, Goel P. Computer vision detects inflammatory arthritis in standardized smartphone photographs in an Indian patient cohort. Front Med (Lausanne) 2023; 10:1280462. [PMID: 38020147 PMCID: PMC10666644 DOI: 10.3389/fmed.2023.1280462] [Received: 08/20/2023] [Accepted: 10/12/2023] [Indexed: 12/01/2023]
Abstract
Introduction Computer vision extracts meaning from pixelated images and holds promise for automating various clinical tasks. Convolutional neural networks (CNNs), a class of deep learning networks used in this field, have shown promise in analyzing X-ray images and joint photographs. We studied the performance of a CNN in detecting inflammation in three hand joints on standardized smartphone photographs and compared it to a rheumatologist's diagnosis. Methods We enrolled 100 consecutive patients with inflammatory arthritis of less than 2 years' onset, excluding those with deformities. Each patient was examined by a rheumatologist, and the presence of synovitis in each joint was recorded. Hand photographs were taken in a standardized manner, anonymized, and cropped to include the joints of interest. A ResNet-101 backbone modified for a two-class output (inflamed or not) was used for training. We also tested a hue-augmented dataset. We report accuracy, sensitivity, and specificity for three joints, the wrist, the index finger proximal interphalangeal (IFPIP), and the middle finger proximal interphalangeal (MFPIP), taking the rheumatologist's opinion as the gold standard. Results The cohort consisted of 100 individuals, of whom 22 were men, with a mean age of 49.7 (SD 12.9) years. The majority of the cohort (n = 68, 68%) had rheumatoid arthritis. The wrist (125/200, 62.5%), MFPIP (94/200, 47%), and IFPIP (83/200, 41.5%) were the three most commonly inflamed joints. The CNN achieved its highest accuracy, sensitivity, and specificity in detecting synovitis in the MFPIP (83, 77, and 88%, respectively), followed by the IFPIP (74, 74, and 75%, respectively) and the wrist (62, 90, and 21%, respectively). Discussion We have demonstrated that computer vision can detect inflammation in three hand joints with reasonable accuracy on standardized photographs despite a small dataset. Feature engineering was not required, and the CNN worked despite diversity in clinical diagnoses. Larger datasets are likely to improve accuracy and help explain the basis of classification. These data suggest a potential use of computer vision in the screening and follow-up of inflammatory arthritis.
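The hue augmentation tested above can be sketched with the standard library's colorsys module; the pixel format (nested lists of RGB tuples in [0, 1]) and the jitter amount below are illustrative assumptions, not the study's implementation.

```python
import colorsys

def shift_hue(rgb_pixel, dh):
    """Rotate the hue of one (r, g, b) pixel (values in [0, 1]) by dh in [0, 1)."""
    h, l, s = colorsys.rgb_to_hls(*rgb_pixel)
    return colorsys.hls_to_rgb((h + dh) % 1.0, l, s)

def hue_augment(image, dh):
    """Apply the same hue rotation to every pixel of an image (nested lists),
    producing a colour-shifted copy for training-set augmentation."""
    return [[shift_hue(px, dh) for px in row] for row in image]

img = [[(0.8, 0.2, 0.2), (0.2, 0.8, 0.2)]]  # a red and a green pixel
shifted = hue_augment(img, 1 / 3)            # rotate hue by 120 degrees
print(shifted)
```

Hue jitter of this kind simulates variation in skin tone and lighting across smartphones, which is one plausible reason a hue-augmented dataset was tried for a diverse patient cohort.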
Affiliation(s)
| | | | - Pranay Goel
- Indian Institute of Science, Education and Research, Pune, India
| |
|
200
|
Yoon JS, Yon CJ, Lee D, Lee JJ, Kang CH, Kang SB, Lee NK, Chang CB. Assessment of a novel deep learning-based software developed for automatic feature extraction and grading of radiographic knee osteoarthritis. BMC Musculoskelet Disord 2023; 24:869. [PMID: 37940935 PMCID: PMC10631128 DOI: 10.1186/s12891-023-06951-4] [Received: 03/22/2023] [Accepted: 10/10/2023] [Indexed: 11/10/2023]
Abstract
BACKGROUND The Kellgren-Lawrence (KL) grading system is the most widely used method for classifying the severity of knee osteoarthritis (OA). However, owing to ambiguity in its terminology, the KL system has shown inferior inter- and intra-observer reliability. For more reliable evaluation, we recently developed novel deep learning (DL) software, MediAI-OA, that extracts individual radiographic features of knee OA and grades OA severity based on the KL system. METHODS This research used data from the Osteoarthritis Initiative for training and validation of MediAI-OA: 44,193 radiographs were used as training data and 810 radiographs as validation data. The AI model was developed to automatically quantify the degree of joint space narrowing (JSN) of the medial and lateral tibiofemoral joints, to detect osteophytes in four regions of the knee joint (medial distal femur, lateral distal femur, medial proximal tibia, and lateral proximal tibia), to classify the KL grade, and to present the results of these three OA features together. The model was tested using a test dataset of 400 cases, and the results were compared to the ground truth. The accuracy of JSN quantification and osteophyte detection was evaluated. KL grade classification performance was evaluated by precision, recall, F1 score, accuracy, and Cohen's kappa coefficient. In addition, we defined KL grade 2 or higher as clinically significant OA and measured the accuracy of OA diagnosis. RESULTS The mean squared error of JSN rate quantification was 0.067, and the average osteophyte detection accuracy of MediAI-OA was 0.84. The accuracy of KL grading was 0.83, and the kappa coefficient between the AI model and the ground truth was 0.768, demonstrating substantial consistency. The OA diagnosis accuracy of the software was 0.92. CONCLUSIONS The novel DL software MediAI-OA demonstrated performance comparable to that of experienced orthopedic surgeons and radiologists in analyzing knee OA features, KL grading, and OA diagnosis. Therefore, reliable KL grading can be performed and the burden on radiologists reduced by using MediAI-OA.
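Cohen's kappa, used above to compare the model's KL grades with the ground truth, corrects raw agreement for the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). A minimal sketch with hypothetical grade labels:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement beyond chance between two label sequences:
    p_o is observed agreement; p_e is the chance agreement implied
    by each rater's marginal label frequencies."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical AI-assigned vs. ground-truth KL grades (0-4), for illustration
ai    = [0, 1, 2, 2, 3, 4, 2, 1, 0, 3]
truth = [0, 1, 2, 3, 3, 4, 2, 1, 1, 3]
print(round(cohens_kappa(ai, truth), 3))
```

By the common interpretation scale, kappa between 0.61 and 0.80 indicates substantial agreement, which is why the reported 0.768 is described as substantial consistency.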
Affiliation(s)
- Ji Soo Yoon
- Department of Orthopaedic Surgery, Seoul National University Bundang Hospital, Seongnam-Si, Republic of Korea
| | - Chang-Jin Yon
- Department of Orthopaedic Surgery, Keimyung University Dongsan Hospital, Daegu, Republic of Korea
| | | | | | - Chang Ho Kang
- Department of Radiology, Korea University Anam Hospital, Seoul, Republic of Korea
| | - Seung-Baik Kang
- Department of Orthopaedic Surgery, SMG-SNU Boramae Medical Center, Seoul, Republic of Korea
- Department of Orthopaedic Surgery, Seoul National University College of Medicine, Seoul, Republic of Korea
| | - Na-Kyoung Lee
- Department of Orthopaedic Surgery, Seoul National University Bundang Hospital, Seongnam-Si, Republic of Korea.
| | - Chong Bum Chang
- Department of Orthopaedic Surgery, Seoul National University Bundang Hospital, Seongnam-Si, Republic of Korea
- Department of Orthopaedic Surgery, Seoul National University College of Medicine, Seoul, Republic of Korea
| |
|