51. Carter D, Bykhovsky D, Hasky A, Mamistvalov I, Zimmer Y, Ram E, Hoffer O. Convolutional neural network deep learning model accurately detects rectal cancer in endoanal ultrasounds. Tech Coloproctol 2024; 28:44. [PMID: 38561492] [PMCID: PMC10984882] [DOI: 10.1007/s10151-024-02917-3]
Abstract
BACKGROUND Imaging is vital for assessing rectal cancer, with endoanal ultrasound (EAUS) being highly accurate in large tertiary medical centers. However, EAUS accuracy drops outside such settings, possibly due to varied examiner experience and fewer examinations. This underscores the need for an AI-based system to enhance accuracy in non-specialized centers. This study aimed to develop and validate deep learning (DL) models to differentiate rectal cancer in standard EAUS images. METHODS A transfer learning approach with fine-tuned DL architectures was employed, utilizing a dataset of 294 images. The performance of DL models was assessed through a tenfold cross-validation. RESULTS The DL diagnostics model exhibited a sensitivity and accuracy of 0.78 each. In the identification phase, the automatic diagnostic platform achieved an area under the curve performance of 0.85 for diagnosing rectal cancer. CONCLUSIONS This research demonstrates the potential of DL models in enhancing rectal cancer detection during EAUS, especially in settings with lower examiner experience. The achieved sensitivity and accuracy suggest the viability of incorporating AI support for improved diagnostic outcomes in non-specialized medical centers.
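The tenfold cross-validation used to assess the DL models above can be illustrated with a minimal, dependency-free sketch (a generic illustration, not the authors' code; the function names are ours):

```python
import random

def kfold_indices(n_samples, k=10, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validation_splits(n_samples, k=10, seed=0):
    """Yield (train_indices, test_indices) pairs, one per fold."""
    folds = kfold_indices(n_samples, k, seed)
    for i, test in enumerate(folds):
        train = [j for f_i, f in enumerate(folds) if f_i != i for j in f]
        yield train, test

# With 294 images (the dataset size reported in the abstract),
# tenfold CV evaluates every image exactly once as test data.
splits = list(cross_validation_splits(294, k=10))
```

Each of the ten folds serves once as the held-out test set while the remaining nine are used for fine-tuning.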
Affiliation(s)
- D Carter
- Department of Gastroenterology, Chaim Sheba Medical Center, Ramat Gan, Israel.
- Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel.
- D Bykhovsky
- Electrical and Electronics Engineering Department, Shamoon College of Engineering, Beer-Sheba, Israel
- A Hasky
- School of Electrical Engineering, Afeka College of Engineering, Tel Aviv, Israel
- I Mamistvalov
- School of Electrical Engineering, Afeka College of Engineering, Tel Aviv, Israel
- Y Zimmer
- School of Medical Engineering, Afeka College of Engineering, Tel Aviv, Israel
- E Ram
- Department of Gastroenterology, Chaim Sheba Medical Center, Ramat Gan, Israel
- Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- O Hoffer
- School of Electrical Engineering, Afeka College of Engineering, Tel Aviv, Israel
52. Bottomly D, McWeeney S. Just how transformative will AI/ML be for immuno-oncology? J Immunother Cancer 2024; 12:e007841. [PMID: 38531545] [DOI: 10.1136/jitc-2023-007841]
Abstract
Immuno-oncology involves the study of approaches which harness the patient's immune system to fight malignancies. Immuno-oncology, like every other biomedical and clinical research field as well as clinical operations, is in the midst of a technological revolution that vastly increases the amount of available data. Recent advances in artificial intelligence and machine learning (AI/ML) have received much attention for their potential to harness available data to improve insights and outcomes in many areas, including immuno-oncology. In this review, we discuss important aspects to consider when evaluating the potential impact of AI/ML applications in the clinic. We highlight four clinical/biomedical challenges relevant to immuno-oncology and how they may be addressed by the latest advancements in AI/ML. These challenges include (1) efficiency in clinical workflows, (2) curation of high-quality image data, (3) finding, extracting and synthesizing text knowledge, and (4) addressing small cohort sizes in immunotherapeutic evaluation. Finally, we outline how advancements in reinforcement and federated learning, as well as the development of best practices for ethical and unbiased data generation, are likely to drive future innovations.
Affiliation(s)
- Daniel Bottomly
- Knight Cancer Institute, Oregon Health and Science University, Portland, Oregon, USA
- Shannon McWeeney
- Knight Cancer Institute, Oregon Health and Science University, Portland, Oregon, USA
53. Alzubaidi L, Salhi A, Fadhel MA, Bai J, Hollman F, Italia K, Pareyon R, Albahri AS, Ouyang C, Santamaría J, Cutbush K, Gupta A, Abbosh A, Gu Y. Trustworthy deep learning framework for the detection of abnormalities in X-ray shoulder images. PLoS One 2024; 19:e0299545. [PMID: 38466693] [PMCID: PMC10927121] [DOI: 10.1371/journal.pone.0299545]
Abstract
Musculoskeletal conditions affect an estimated 1.7 billion people worldwide, causing intense pain and disability. These conditions lead to 30 million emergency room visits yearly, and the numbers are only increasing. However, diagnosing musculoskeletal issues can be challenging, especially in emergencies where quick decisions are necessary. Deep learning (DL) has shown promise in various medical applications. However, previous methods for detecting shoulder abnormalities on X-ray images performed poorly and lacked transparency, owing to limited training data and insufficiently representative features. This often resulted in overfitting, poor generalisation, and potential bias in decision-making. To address these issues, a new trustworthy DL framework has been proposed to detect shoulder abnormalities (such as fractures, deformities, and arthritis) using X-ray images. The framework consists of two parts: same-domain transfer learning (TL) to mitigate the mismatch with ImageNet, and feature fusion to reduce error rates and improve trust in the final result. Same-domain TL involves training pre-trained models on a large number of labelled X-ray images from various body parts and fine-tuning them on the target dataset of shoulder X-ray images. Feature fusion combines the features extracted by seven DL models to train several machine learning (ML) classifiers. The proposed framework achieved an excellent accuracy of 99.2%, F1-score of 99.2%, and Cohen's kappa of 98.5%. Furthermore, the results were validated using three visualisation tools: gradient-weighted class activation mapping (Grad-CAM), activation visualisation, and local interpretable model-agnostic explanations (LIME). The proposed framework outperformed previous DL methods and three orthopaedic surgeons invited to classify the test set, who obtained an average accuracy of 79.1%. The proposed framework has proven effective and robust, improving generalisation and increasing trust in the final results.
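The serial feature-fusion step described above reduces, at its simplest, to concatenating the feature vectors produced by the individual extractors before classification. A minimal sketch (toy vectors standing in for real extractor outputs; not the authors' implementation):

```python
def fuse_features(per_model_features):
    """Serially concatenate feature vectors produced by several extractors
    into one fused vector for a downstream classifier."""
    fused = []
    for vec in per_model_features:
        fused.extend(vec)
    return fused

# Toy example: three hypothetical extractors, each yielding a short vector.
features = [[0.1, 0.9], [0.4, 0.2, 0.7], [1.0]]
fused = fuse_features(features)  # one vector of length 2 + 3 + 1 = 6
```

The fused vector would then be fed to an ML classifier; in the paper's setting, seven DL backbones would each contribute one vector per image.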
Affiliation(s)
- Laith Alzubaidi
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD, Australia
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- Centre for Data Science, Queensland University of Technology, Brisbane, QLD, Australia
- Akunah Medical Technology Pty Ltd Company, Brisbane, QLD, Australia
- Asma Salhi
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- Akunah Medical Technology Pty Ltd Company, Brisbane, QLD, Australia
- Jinshuai Bai
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD, Australia
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- Freek Hollman
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- Kristine Italia
- Akunah Medical Technology Pty Ltd Company, Brisbane, QLD, Australia
- Roberto Pareyon
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- A. S. Albahri
- Technical College, Imam Ja’afar Al-Sadiq University, Baghdad, Iraq
- Chun Ouyang
- School of Information Systems, Queensland University of Technology, Brisbane, QLD, Australia
- Jose Santamaría
- Department of Computer Science, University of Jaén, Jaén, Spain
- Kenneth Cutbush
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- School of Medicine, The University of Queensland, Brisbane, QLD, Australia
- Ashish Gupta
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- Akunah Medical Technology Pty Ltd Company, Brisbane, QLD, Australia
- Greenslopes Private Hospital, Brisbane, QLD, Australia
- Amin Abbosh
- School of Information Technology and Electrical Engineering, Brisbane, QLD, Australia
- Yuantong Gu
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD, Australia
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
54. Shou Q, Zhao C, Shao X, Herting MM, Wang DJ. High Resolution Multi-delay Arterial Spin Labeling with Transformer based Denoising for Pediatric Perfusion MRI. medRxiv [Preprint] 2024:2024.03.04.24303727. [PMID: 38496517] [PMCID: PMC10942515] [DOI: 10.1101/2024.03.04.24303727]
Abstract
Multi-delay arterial spin labeling (MDASL) can quantitatively measure cerebral blood flow (CBF) and arterial transit time (ATT), which is particularly suitable for pediatric perfusion imaging. Here we present a high resolution (iso-2mm) MDASL protocol and performed test-retest scans on 21 typically developing children aged 8 to 17 years. We further proposed a Transformer-based deep learning (DL) model with k-space weighted image average (KWIA) denoised images as reference for training the model. The performance of the model was evaluated by the SNR of perfusion images, as well as the SNR, bias and repeatability of the fitted CBF and ATT maps. The proposed method was compared to several benchmark methods including KWIA, joint denoising and reconstruction with total generalized variation (TGV) regularization, as well as directly applying a pretrained Transformer model on a larger dataset. The results show that the proposed Transformer model with KWIA reference can effectively denoise multi-delay ASL images, not only improving the SNR for perfusion images of each delay, but also improving the SNR for the fitted CBF and ATT maps. The proposed method also improved test-retest repeatability of whole-brain perfusion measurements. This may facilitate the use of MDASL in neurodevelopmental studies to characterize typical and aberrant brain development.
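KWIA denoising averages neighbouring time frames with a weighting scheme. The abstract gives no implementation details, so the following is a deliberately simplified image-domain weighted moving average (KWIA proper applies the weighting in k-space, preserving the centre of k-space per frame to retain temporal fidelity); the frame values are made up for illustration:

```python
def weighted_moving_average(frames, weights):
    """Average each frame with its neighbours using symmetric weights.
    Image-domain simplification of frame averaging; edges are clamped."""
    assert len(weights) % 2 == 1, "weights must be centred on the current frame"
    half = len(weights) // 2
    total = sum(weights)
    out = []
    for i in range(len(frames)):
        acc = 0.0
        for k, w in enumerate(weights, start=-half):
            j = min(max(i + k, 0), len(frames) - 1)  # clamp at the edges
            acc += w * frames[j]
        out.append(acc / total)
    return out

# Five noisy "frames" (one scalar per post-labeling delay, for illustration).
smoothed = weighted_moving_average([1.0, 3.0, 2.0, 4.0, 3.0], [1, 2, 1])
```

Averaging across delays trades temporal independence for SNR, which is why the paper uses the KWIA output only as a training reference for the Transformer denoiser rather than as the final image.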
Affiliation(s)
- Qinyang Shou
- University of Southern California, Los Angeles, California 90033 USA
- Chenyang Zhao
- University of Southern California, Los Angeles, California 90033 USA
- Xingfeng Shao
- University of Southern California, Los Angeles, California 90033 USA
- Megan M Herting
- University of Southern California, Los Angeles, California 90033 USA
- Danny JJ Wang
- University of Southern California, Los Angeles, California 90033 USA
55. Shao X, Ge X, Gao J, Niu R, Shi Y, Shao X, Jiang Z, Li R, Wang Y. Transfer learning-based PET/CT three-dimensional convolutional neural network fusion of image and clinical information for prediction of EGFR mutation in lung adenocarcinoma. BMC Med Imaging 2024; 24:54. [PMID: 38438844] [PMCID: PMC10913633] [DOI: 10.1186/s12880-024-01232-5]
Abstract
BACKGROUND To introduce a three-dimensional convolutional neural network (3D CNN) leveraging transfer learning for fusing PET/CT images and clinical data to predict EGFR mutation status in lung adenocarcinoma (LADC). METHODS Retrospective data from 516 LADC patients, encompassing preoperative PET/CT images, clinical information, and EGFR mutation status, were divided into training (n = 404) and test sets (n = 112). Several deep learning models were developed utilizing transfer learning, including CT-only and PET-only models. A dual-stream model fusing PET and CT and a three-stream transfer learning model (TS_TL) integrating clinical data were also developed. Image preprocessing included semi-automatic segmentation, resampling, and image cropping. Given the class imbalance, model performance was evaluated using ROC curves and AUC values. RESULTS The TS_TL model demonstrated promising performance in predicting EGFR mutation status, with an AUC of 0.883 (95%CI = 0.849-0.917) in the training set and 0.730 (95%CI = 0.629-0.830) in the independent test set. Particularly in advanced LADC, the model achieved an AUC of 0.871 (95%CI = 0.823-0.919) in the training set and 0.760 (95%CI = 0.638-0.881) in the test set. The model identified distinct activation areas in solid or subsolid lesions associated with wild and mutant types. Additionally, the patterns captured by the model were significantly altered by effective tyrosine kinase inhibitor treatment, leading to notable changes in predicted mutation probabilities. CONCLUSION A PET/CT deep learning model can act as a tool for predicting EGFR mutation in LADC. Additionally, it offers clinicians insights for treatment decisions through evaluations both before and after treatment.
Affiliation(s)
- Xiaonan Shao
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China.
- Institute of Clinical Translation of Nuclear Medicine and Molecular Imaging, Soochow University, Changzhou, 213003, China.
- Xinyu Ge
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China
- Institute of Clinical Translation of Nuclear Medicine and Molecular Imaging, Soochow University, Changzhou, 213003, China
- Jianxiong Gao
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China
- Institute of Clinical Translation of Nuclear Medicine and Molecular Imaging, Soochow University, Changzhou, 213003, China
- Rong Niu
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China
- Institute of Clinical Translation of Nuclear Medicine and Molecular Imaging, Soochow University, Changzhou, 213003, China
- Yunmei Shi
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China
- Institute of Clinical Translation of Nuclear Medicine and Molecular Imaging, Soochow University, Changzhou, 213003, China
- Xiaoliang Shao
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China
- Institute of Clinical Translation of Nuclear Medicine and Molecular Imaging, Soochow University, Changzhou, 213003, China
- Zhenxing Jiang
- Department of Radiology, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China
- Renyuan Li
- Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou, 310009, China
- Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, 310058, China
- Yuetao Wang
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China.
- Institute of Clinical Translation of Nuclear Medicine and Molecular Imaging, Soochow University, Changzhou, 213003, China.
56. Adeoye J, Su YX. Leveraging artificial intelligence for perioperative cancer risk assessment of oral potentially malignant disorders. Int J Surg 2024; 110:1677-1686. [PMID: 38051932] [PMCID: PMC10942172] [DOI: 10.1097/js9.0000000000000979]
Abstract
Oral potentially malignant disorders (OPMDs) are mucosal conditions with an inherent disposition to develop into oral squamous cell carcinoma. Surgical management is the preferred strategy for preventing malignant transformation in OPMDs, and surgical approaches to treatment include conventional scalpel excision, laser surgery, cryotherapy, and photodynamic therapy. However, since not all patients with OPMDs will develop oral squamous cell carcinoma in their lifetime, patients need to be stratified according to their risk of malignant transformation so that surgical intervention can be directed toward those at highest risk. Artificial intelligence (AI) has the potential to integrate the disparate factors influencing malignant transformation into more robust, precise, and personalized cancer risk stratification of OPMD patients than current methods allow, to determine the need for surgical resection, excision, or re-excision. Therefore, this article overviews existing AI models and tools, presents a clinical implementation pathway, and discusses necessary refinements to aid the clinical application of AI-based platforms for cancer risk stratification of OPMDs in surgical practice.
Affiliation(s)
| | - Yu-Xiong Su
- Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, University of Hong Kong, Hong Kong SAR, People’s Republic of China
| |
57. Vorwerk P, Kelleter J, Müller S, Krause U. Classification in Early Fire Detection Using Multi-Sensor Nodes: A Transfer Learning Approach. Sensors (Basel) 2024; 24:1428. [PMID: 38474964] [DOI: 10.3390/s24051428]
Abstract
Effective early fire detection is crucial for preventing damage to people and buildings, especially in fire-prone historic structures. However, because fire events occur infrequently over a building's lifespan, real-world data for training models are often sparse. In this study, we applied feature representation transfer and instance transfer in the context of early fire detection using multi-sensor nodes. The goal was to investigate whether training data from a small-scale setup (source domain) can be used to identify various incipient fire scenarios in their early stages within a full-scale test room (target domain). In a first step, we employed Linear Discriminant Analysis (LDA) to create a new feature space based solely on the source domain data and predicted four different fire types (smoldering wood, smoldering cotton, smoldering cable, and candle fire) in the target domain with a classification rate of up to 69% and a Cohen's kappa of 0.58. Notably, lower classification performance was observed for sensor node positions close to the wall in the full-scale test room. In a second experiment, we applied the TrAdaBoost algorithm, a common instance transfer technique, to adapt the model to the target domain, assuming that sparse information from the target domain is available. Between 1% and 30% of the data at individual sensor node positions in the target domain was used for boosting to adapt the model. We found that this additional boosting improved the classification performance (average classification rate of 73% and average Cohen's kappa of 0.63). However, excessively boosting the data could lead to overfitting to a specific sensor node position in the target domain, reducing overall classification performance.
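The instance transfer described above hinges on TrAdaBoost's asymmetric reweighting: misclassified source-domain instances are down-weighted, while misclassified target-domain instances are up-weighted. A minimal sketch of one reweighting step, following the standard formulation of the algorithm (Dai et al., 2007); the toy weights, errors, and parameter values are ours:

```python
import math

def tradaboost_weight_update(weights, errors, n_source, epsilon_t, n_iters):
    """One TrAdaBoost reweighting step (illustrative sketch).
    weights   -- current instance weights, source instances listed first
    errors    -- per-instance absolute prediction error in [0, 1]
    epsilon_t -- weighted error of the learner on the target instances
    n_iters   -- total number of boosting iterations"""
    beta_src = 1.0 / (1.0 + math.sqrt(2.0 * math.log(n_source) / n_iters))
    beta_tgt = epsilon_t / (1.0 - epsilon_t)
    new_w = []
    for i, (w, e) in enumerate(zip(weights, errors)):
        if i < n_source:
            new_w.append(w * beta_src ** e)     # misclassified source: down-weight
        else:
            new_w.append(w * beta_tgt ** (-e))  # misclassified target: up-weight
    return new_w

# Two source and two target instances; instances 0 and 2 were misclassified.
w = tradaboost_weight_update([1.0, 1.0, 1.0, 1.0], [1.0, 0.0, 1.0, 0.0],
                             n_source=2, epsilon_t=0.25, n_iters=10)
```

Correctly classified instances keep their weight; the misclassified source instance loses influence while the misclassified target instance gains it, which is what gradually adapts the ensemble toward the target domain.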
Affiliation(s)
- Pascal Vorwerk
- Faculty of Process- and Systems Engineering, Institute of Apparatus and Environmental Technology, Otto von Guericke University of Magdeburg, Universitätsplatz 2, 39106 Magdeburg, Germany
- Jörg Kelleter
- GTE Industrieelektronik GmbH, Helmholtzstr. 21, 38-40, 41747 Viersen, Germany
- Steffen Müller
- GTE Industrieelektronik GmbH, Helmholtzstr. 21, 38-40, 41747 Viersen, Germany
- Ulrich Krause
- Faculty of Process- and Systems Engineering, Institute of Apparatus and Environmental Technology, Otto von Guericke University of Magdeburg, Universitätsplatz 2, 39106 Magdeburg, Germany
58. Ullah MS, Khan MA, Masood A, Mzoughi O, Saidani O, Alturki N. Brain tumor classification from MRI scans: a framework of hybrid deep learning model with Bayesian optimization and quantum theory-based marine predator algorithm. Front Oncol 2024; 14:1335740. [PMID: 38390266] [PMCID: PMC10882068] [DOI: 10.3389/fonc.2024.1335740]
Abstract
Brain tumor classification is one of the most difficult tasks in clinical diagnosis and treatment in medical image analysis. Errors that occur during the brain tumor diagnosis process can shorten a patient's life. Nevertheless, most currently used techniques concentrate on extracting and selecting deep features while ignoring certain features that have particular significance and relevance to the classification problem. One important area of research is the deep learning-based categorization of brain tumors using brain magnetic resonance imaging (MRI). This paper proposes an automated deep learning model and an optimal information fusion framework for classifying brain tumors from MRI images. The dataset used in this work was imbalanced, a key challenge for training the selected networks: imbalance in the training dataset biases classifier performance in favor of the majority class. We designed a sparse autoencoder network to generate new images that resolve the imbalance problem. After that, two pretrained neural networks were modified and their hyperparameters initialized using Bayesian optimization, which was later utilized for the training process. Deep features were then extracted from the global average pooling layer. Because the extracted features contain some irrelevant information, we proposed an improved Quantum Theory-based Marine Predator Optimization algorithm (QTbMPA). The proposed QTbMPA selects the best features of both networks and fuses them using a serial-based approach. The fused feature set is passed to neural network classifiers for the final classification. Tested on an augmented Figshare dataset, the proposed framework achieved an improved accuracy of 99.80%, a sensitivity of 99.83%, a false negative rate of 0.17%, and a precision of 99.83%. A comparison and ablation study confirm the accuracy improvement of the proposed framework.
Affiliation(s)
- Anum Masood
- Department of Physics, Norwegian University of Science and Technology, Trondheim, Norway
- Olfa Mzoughi
- Department of Computer Science, College of Computer Engineering and Sciences, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia
- Oumaima Saidani
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Nazik Alturki
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
59. Fan L, Gong X, Zheng C, Li J. Data pyramid structure for optimizing EUS-based GISTs diagnosis in multi-center analysis with missing label. Comput Biol Med 2024; 169:107897. [PMID: 38171262] [DOI: 10.1016/j.compbiomed.2023.107897]
Abstract
This study introduces the Data Pyramid Structure (DPS) to address data sparsity and missing labels in medical image analysis. The DPS optimizes multi-task learning and enables sustainable expansion of multi-center data analysis. Specifically, it facilitates attribute prediction and malignant tumor diagnosis by implementing a segmentation and aggregation strategy on data with absent attribute labels. To leverage multi-center data, we propose the Unified Ensemble Learning Framework (UELF) and the Unified Federated Learning Framework (UFLF), which incorporate strategies for data transfer and incremental learning in scenarios with missing labels. The proposed method was evaluated on a challenging EUS patient dataset from five centers, achieving promising diagnostic performance. The average accuracy was 0.984 with an AUC of 0.927 for multi-center analysis, surpassing state-of-the-art approaches. The interpretability of the predictions further highlights the potential clinical relevance of our method.
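The abstract does not detail the UFLF aggregation rule. A common baseline for combining models trained at separate centers without sharing raw data is FedAvg-style weighted parameter averaging, sketched here with hypothetical per-center parameters (a generic illustration, not the paper's implementation):

```python
def federated_average(client_params, client_sizes):
    """FedAvg-style aggregation: average client model parameters,
    weighted by each client's local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_params[0])
    avg = [0.0] * n_params
    for params, size in zip(client_params, client_sizes):
        for j, p in enumerate(params):
            avg[j] += p * (size / total)  # larger centers contribute more
    return avg

# Three hypothetical centers with flattened two-parameter models.
global_params = federated_average(
    [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
    client_sizes=[100, 100, 200],
)
```

In a federated round, each center would train locally, send only its parameters, receive the aggregate back, and repeat, so patient images never leave the originating hospital.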
Affiliation(s)
- Lin Fan
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, Sichuan 611756, China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, China; National Engineering Laboratory of Integrated Transportation Big Data Application Technology, China
- Xun Gong
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, Sichuan 611756, China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, China; National Engineering Laboratory of Integrated Transportation Big Data Application Technology, China.
- Cenyang Zheng
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu, Sichuan 611756, China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, China; National Engineering Laboratory of Integrated Transportation Big Data Application Technology, China
- Jiao Li
- Department of Gastroenterology, The Third People's Hospital of Chengdu, Affiliated Hospital of Southwest Jiaotong University, Chengdu 610031, China
60. Klontzas ME, Vassalou EE, Spanakis K, Meurer F, Woertler K, Zibis A, Marias K, Karantanas AH. Deep learning enables the differentiation between early and late stages of hip avascular necrosis. Eur Radiol 2024; 34:1179-1186. [PMID: 37581656] [PMCID: PMC10853078] [DOI: 10.1007/s00330-023-10104-5]
Abstract
OBJECTIVES To develop a deep learning methodology that distinguishes early from late stages of avascular necrosis of the hip (AVN) to inform treatment decisions. METHODS Three convolutional neural networks (CNNs), VGG-16, Inception-ResnetV2, and InceptionV3, were trained with transfer learning (ImageNet) and fine-tuned on a retrospectively collected cohort of (n = 104) MRI examinations of AVN patients to differentiate between early (ARCO 1-2) and late (ARCO 3-4) stages. A consensus CNN ensemble decision was recorded as the agreement of at least two CNNs. CNN and ensemble performance was benchmarked on an independent cohort of 49 patients from another country and compared to the performance of two MSK radiologists. CNN performance was expressed as areas under the curve (AUC) with the respective 95% confidence intervals (CIs), together with precision, recall, and f1-scores. AUCs were compared with DeLong's test. RESULTS On internal testing, Inception-ResnetV2 achieved the highest individual performance with an AUC of 99.7% (95%CI 99-100%), followed by InceptionV3 and VGG-16 with AUCs of 99.3% (95%CI 98.4-100%) and 97.3% (95%CI 95.5-99.2%) respectively. The CNN ensemble achieved the same AUC as Inception-ResnetV2. On external validation, model performance dropped, with VGG-16 achieving the highest individual AUC of 78.9% (95%CI 51.6-79.6%). The best external performance was achieved by the model ensemble with an AUC of 85.5% (95%CI 72.2-93.9%). No significant difference was found between the CNN ensemble and the expert MSK radiologists (p = 0.22 and 0.092, respectively). CONCLUSION An externally validated CNN ensemble accurately distinguishes between the early and late stages of AVN and has comparable performance to expert MSK radiologists.
CLINICAL RELEVANCE STATEMENT This paper introduces the use of deep learning for the differentiation between early and late avascular necrosis of the hip, assisting in a complex clinical decision that can determine the choice between conservative and surgical treatment. KEY POINTS • A convolutional neural network ensemble achieved excellent performance in distinguishing between early and late avascular necrosis. • The performance of the deep learning method was similar to the performance of expert readers.
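The consensus rule described in the abstract (agreement of at least two of the three CNNs) is a simple majority vote over binary predictions. A minimal sketch (illustrative; label 1 stands for late-stage AVN here by assumption):

```python
def ensemble_consensus(predictions):
    """Consensus decision over three binary CNN outputs: return the
    positive class (1) when at least two of the three models agree on it."""
    assert len(predictions) == 3, "rule is defined for a three-model ensemble"
    return 1 if sum(predictions) >= 2 else 0

# Example: two of three models predict the positive class.
decision = ensemble_consensus([1, 0, 1])
```

Majority voting of this kind often smooths out individual-model failure modes, which is consistent with the ensemble outperforming each single CNN on the external cohort.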
Affiliation(s)
- Michail E Klontzas
- Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece
- Department of Medical Imaging, University Hospital of Heraklion, 71110, Voutes, Crete, Greece
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece
- Advanced Hybrid Imaging Systems, Institute of Computer Science, Foundation for Research and Technology (FORTH), Nikolaou Plastira 100, 70013, Heraklion, Crete, Greece
| | - Evangelia E Vassalou
- Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece
| | - Konstantinos Spanakis
- Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece
| | - Felix Meurer
- Musculoskeletal Radiology Section, TUM School of Medicine, Technical University of Munich, Ismaninger Str 22, 81675, Munich, Germany
| | - Klaus Woertler
- Musculoskeletal Radiology Section, TUM School of Medicine, Technical University of Munich, Ismaninger Str 22, 81675, Munich, Germany
| | - Aristeidis Zibis
- Department of Anatomy, Medical School, University of Thessaly, Neofytou 9 St., 41223, Larissa, Greece
| | - Kostas Marias
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece
- Department of Electrical & Computer Engineering, Hellenic Mediterranean University, Heraklion, Crete, Greece
| | - Apostolos H Karantanas
- Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece.
- Department of Medical Imaging, University Hospital of Heraklion, 71110, Voutes, Crete, Greece.
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece.
- Advanced Hybrid Imaging Systems, Institute of Computer Science, Foundation for Research and Technology (FORTH), Nikolaou Plastira 100, 70013, Heraklion, Crete, Greece.
61
Führer F, Gruber A, Diedam H, Göller AH, Menz S, Schneckener S. A deep neural network: mechanistic hybrid model to predict pharmacokinetics in rat. J Comput Aided Mol Des 2024; 38:7. [PMID: 38294570 DOI: 10.1007/s10822-023-00547-9] [Received: 10/13/2023] [Accepted: 12/21/2023] [Indexed: 02/01/2024]
Abstract
An important aspect in the development of small molecules as drugs or agrochemicals is their systemic availability after intravenous and oral administration. Predicting systemic availability from the chemical structure of a potential candidate is highly desirable, as it allows drug or agrochemical development to focus on compounds with a favorable kinetic profile. However, such predictions are challenging, as availability results from a complex interplay between molecular properties, biology, and physiology, and training data are rare. In this work we improve the hybrid model developed earlier (Schneckener in J Chem Inf Model 59:4893-4905, 2019). We reduce the median fold change error for the total oral exposure from 2.85 to 2.35 and for intravenous administration from 1.95 to 1.62. This is achieved by training on a larger data set and improving both the neural network architecture and the parametrization of the mechanistic model. Further, we extend our approach to predict additional endpoints and to handle different covariates, such as sex and dosage form. In contrast to a pure machine learning model, our model is able to predict new endpoints on which it has not been trained. We demonstrate this feature by predicting the exposure over the first 24 h, while the model has only been trained on the total exposure.
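The "median fold change error" quoted above (2.85 to 2.35 oral, 1.95 to 1.62 intravenous) is, under the usual definition assumed here, the median over compounds of the larger of predicted/observed and observed/predicted exposure, so 1.0 would be a perfect prediction. A sketch with made-up values:

```python
# Assumed definition of median fold change error (not spelled out in the
# abstract): per compound, the fold error is max(pred/obs, obs/pred), and the
# reported value is the median over compounds. Exposure values are invented.
import statistics

def median_fold_change_error(predicted, observed):
    """Median over compounds of max(pred/obs, obs/pred); 1.0 is perfect."""
    folds = [max(p / o, o / p) for p, o in zip(predicted, observed)]
    return statistics.median(folds)

pred = [2.0, 10.0, 0.5, 4.0]  # hypothetical predicted exposures (a.u.)
obs = [1.0, 5.0, 1.0, 4.0]    # matching hypothetical observed exposures
print(median_fold_change_error(pred, obs))  # -> 2.0
```

The max() makes the metric symmetric: a 2-fold over-prediction and a 2-fold under-prediction both count as a fold error of 2.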
Affiliation(s)
- Florian Führer
- Engineering & Technology, Applied Mathematics, Bayer AG, 51368, Leverkusen, Germany.
| | - Andrea Gruber
- Pharmaceuticals, R&D, Preclinical Modeling & Simulation, Bayer AG, 13353, Berlin, Germany
| | - Holger Diedam
- Crop Science, Product Supply, SC Simulation & Analysis, Bayer AG, 40789, Monheim, Germany
| | - Andreas H Göller
- Pharmaceuticals, R&D, Molecular Design, Bayer AG, 42096, Wuppertal, Germany
| | - Stephan Menz
- Pharmaceuticals, R&D, Preclinical Modeling & Simulation, Bayer AG, 13353, Berlin, Germany
62
Falou O, Sannachi L, Haque M, Czarnota GJ, Kolios MC. Transfer learning of pre-treatment quantitative ultrasound multi-parametric images for the prediction of breast cancer response to neoadjuvant chemotherapy. Sci Rep 2024; 14:2340. [PMID: 38282158 PMCID: PMC10822849 DOI: 10.1038/s41598-024-52858-y] [Received: 08/30/2023] [Accepted: 01/24/2024] [Indexed: 01/30/2024]
Abstract
Locally advanced breast cancer (LABC) is a severe type of cancer with a poor prognosis, despite advancements in therapy. As the disease is often inoperable, current guidelines suggest upfront aggressive neoadjuvant chemotherapy (NAC). Complete pathological response to chemotherapy is linked to improved survival, but conventional clinical assessments like physical exams, mammography, and imaging are limited in detecting early response. Early detection of tissue response can improve complete pathological response and patient survival while reducing exposure to ineffective and potentially harmful treatments. A rapid, cost-effective modality without the need for exogenous contrast agents would be valuable for evaluating neoadjuvant therapy response. Conventional ultrasound provides information about tissue echogenicity, but image comparisons are difficult due to instrument-dependent settings and imaging parameters. Quantitative ultrasound (QUS) overcomes this by using normalized power spectra to calculate quantitative metrics. This study used a novel transfer learning-based approach to predict LABC response to neoadjuvant chemotherapy using QUS imaging at pre-treatment. Using data from 174 patients, QUS parametric images of breast tumors with margins were generated. The ground truth response to therapy for each patient was based on standard clinical and pathological criteria. The Residual Network (ResNet) deep learning architecture was used to extract features from the parametric QUS maps. This was followed by SelectKBest and Synthetic Minority Oversampling (SMOTE) techniques for feature selection and data balancing, respectively. The Support Vector Machine (SVM) algorithm was employed to classify patients into two distinct categories: nonresponders (NR) and responders (RR). 
Evaluation results on an unseen test set demonstrate that the transfer learning-based approach using spectral slope parametric maps had the best performance in the identification of nonresponders, with precision, recall, F1-score, and balanced accuracy of 100, 71, 83, and 86%, respectively. The transfer learning-based approach has many advantages over conventional deep learning methods, since it reduces the need for large image datasets for training and shortens the training time. The results of this study demonstrate the potential of transfer learning in predicting LABC response to neoadjuvant chemotherapy before the start of treatment using quantitative ultrasound imaging. Prediction of NAC response before treatment can help clinicians avoid ineffectual treatment regimens and customize therapy for individual patients.
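The classification pipeline summarized above (CNN feature extraction, then SelectKBest, SMOTE, and an SVM) can be sketched end to end. The sketch below substitutes simple numpy stand-ins for each stage: a class-mean separation score in place of SelectKBest's statistical test, pairwise interpolation in place of the SMOTE library routine, and it stops before the SVM. Shapes and data are illustrative only, not the study's QUS features.

```python
# Stand-in sketch of: feature selection -> SMOTE-style oversampling, on
# invented "extracted features" for an imbalanced two-class cohort.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 8))      # 20 patients x 8 CNN-extracted features
y = np.array([0] * 15 + [1] * 5)  # imbalanced responder / nonresponder labels

# 1) Keep the k features whose class means differ the most (SelectKBest-like).
k = 4
score = np.abs(X[y == 0].mean(axis=0) - X[y == 1].mean(axis=0))
keep = np.argsort(score)[-k:]
Xk = X[:, keep]

# 2) SMOTE-style balancing: synthesize minority samples by interpolating
#    between random pairs of minority-class points.
minority = Xk[y == 1]
need = (y == 0).sum() - (y == 1).sum()
pairs = rng.integers(0, len(minority), size=(need, 2))
t = rng.random((need, 1))
synthetic = minority[pairs[:, 0]] + t * (minority[pairs[:, 1]] - minority[pairs[:, 0]])

X_bal = np.vstack([Xk, synthetic])
y_bal = np.concatenate([y, np.ones(need, dtype=int)])
print(X_bal.shape, np.bincount(y_bal))  # -> (30, 4) [15 15]
```

The balanced `(X_bal, y_bal)` would then be fed to a binary classifier such as an SVM, as in the study.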
Affiliation(s)
- Omar Falou
- Department of Physics, Toronto Metropolitan University, Toronto, ON, Canada.
- Institute for Biomedical Engineering, Science and Technology (iBEST), Keenan Research Centre for Biomedical Science, St. Michael's Hospital, Toronto, ON, Canada.
| | - Lakshmanan Sannachi
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, ON, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada
- Physical Sciences, Sunnybrook Research Institute, Toronto, ON, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
| | - Maeashah Haque
- Department of Physics, Toronto Metropolitan University, Toronto, ON, Canada
| | - Gregory J Czarnota
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, ON, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada
- Physical Sciences, Sunnybrook Research Institute, Toronto, ON, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
| | - Michael C Kolios
- Department of Physics, Toronto Metropolitan University, Toronto, ON, Canada
- Institute for Biomedical Engineering, Science and Technology (iBEST), Keenan Research Centre for Biomedical Science, St. Michael's Hospital, Toronto, ON, Canada
63
Malik M, Chong B, Fernandez J, Shim V, Kasabov NK, Wang A. Stroke Lesion Segmentation and Deep Learning: A Comprehensive Review. Bioengineering (Basel) 2024; 11:86. [PMID: 38247963 PMCID: PMC10813717 DOI: 10.3390/bioengineering11010086] [Received: 12/18/2023] [Revised: 01/05/2024] [Accepted: 01/15/2024] [Indexed: 01/23/2024]
Abstract
Stroke is a medical condition that affects around 15 million people annually. Patients and their families can face severe financial and emotional challenges, as it can cause motor, speech, cognitive, and emotional impairments. Stroke lesion segmentation identifies the stroke lesion visually while providing useful anatomical information. Though different computer-aided software packages are available for manual segmentation, state-of-the-art deep learning makes the job much easier. This review paper explores the different deep-learning-based lesion segmentation models and the impact of different pre-processing techniques on their performance. It aims to provide a comprehensive overview of the state-of-the-art models, to guide future research, and to contribute to the development of more robust and effective stroke lesion segmentation models.
Affiliation(s)
- Mishaim Malik
- Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand; (M.M.); (B.C.); (N.K.K.)
| | - Benjamin Chong
- Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand; (M.M.); (B.C.); (N.K.K.)
- Faculty of Medical and Health Sciences, The University of Auckland, Auckland 1010, New Zealand
- Centre for Brain Research, The University of Auckland, Auckland 1010, New Zealand
| | - Justin Fernandez
- Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand; (M.M.); (B.C.); (N.K.K.)
- Centre for Brain Research, The University of Auckland, Auckland 1010, New Zealand
- Mātai Medical Research Institute, Gisborne 4010, New Zealand
| | - Vickie Shim
- Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand; (M.M.); (B.C.); (N.K.K.)
- Mātai Medical Research Institute, Gisborne 4010, New Zealand
| | - Nikola Kirilov Kasabov
- Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand; (M.M.); (B.C.); (N.K.K.)
- Knowledge Engineering and Discovery Research Innovation, School of Engineering, Computer and Mathematical Sciences, Auckland University of Technology, Auckland 1010, New Zealand
- Institute for Information and Communication Technologies, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
- Knowledge Engineering Consulting Ltd., Auckland 1071, New Zealand
| | - Alan Wang
- Auckland Bioengineering Institute, The University of Auckland, Auckland 1010, New Zealand; (M.M.); (B.C.); (N.K.K.)
- Faculty of Medical and Health Sciences, The University of Auckland, Auckland 1010, New Zealand
- Centre for Brain Research, The University of Auckland, Auckland 1010, New Zealand
- Mātai Medical Research Institute, Gisborne 4010, New Zealand
- Medical Imaging Research Centre, The University of Auckland, Auckland 1010, New Zealand
- Centre for Co-Created Ageing Research, The University of Auckland, Auckland 1010, New Zealand
64
Montin E, Deniz CM, Kijowski R, Youm T, Lattanzi R. The impact of data augmentation and transfer learning on the performance of deep learning models for the segmentation of the hip on 3D magnetic resonance images. INFORMATICS IN MEDICINE UNLOCKED 2024; 45:101444. [PMID: 39119151 PMCID: PMC11308385 DOI: 10.1016/j.imu.2023.101444] [Indexed: 08/10/2024]
Abstract
Different pathologies of the hip are characterized by the abnormal shape of the bony structures of the joint, namely the femur and the acetabulum. Three-dimensional (3D) models of the hip can be used for diagnosis, biomechanical simulation, and planning of surgical treatments. These models can be generated by building 3D surfaces of the joint's structures segmented on magnetic resonance (MR) images. Deep learning can avoid time-consuming manual segmentations, but its performance depends on the amount and quality of the available training data. Data augmentation and transfer learning are two approaches used when only a limited number of datasets is available. In particular, data augmentation can be used to artificially increase the size and diversity of the training datasets, whereas transfer learning can be used to build the desired model on top of a model previously trained with similar data. This study investigates the effect of data augmentation and transfer learning on the performance of deep learning for the automatic segmentation of the femur and acetabulum on 3D MR images of patients diagnosed with femoroacetabular impingement. Transfer learning was applied starting from a model trained for the segmentation of the bony structures of the shoulder joint, which bears some resemblance to the hip joint. Our results suggest that data augmentation is more effective than transfer learning, yielding Dice similarity coefficients, compared to ground-truth manual segmentations, of 0.84 and 0.89 for the acetabulum and femur, respectively, whereas the Dice coefficient was 0.78 and 0.88 for the model based on transfer learning. The accuracy for the two anatomical regions was 0.95 and 0.97 when using data augmentation, and 0.87 and 0.96 when using transfer learning. Data augmentation can improve the performance of deep learning models by increasing the diversity of the training dataset and making the models more robust to noise and variations in image quality. 
The proposed segmentation model could be combined with radiomic analysis for the automatic evaluation of hip pathologies.
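The Dice similarity coefficient used above to score the femur and acetabulum segmentations is the standard overlap measure for binary masks. A minimal sketch on toy masks, not MRI data:

```python
# Dice similarity coefficient for binary segmentation masks:
# Dice = 2 * |A intersect B| / (|A| + |B|). Toy 2x3 masks for illustration.
import numpy as np

def dice(pred, truth):
    """Dice overlap of two binary masks; 1.0 means identical masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks count as a perfect match.
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

a = np.array([[1, 1, 0], [0, 1, 0]])  # toy predicted mask
b = np.array([[1, 0, 0], [0, 1, 1]])  # toy ground-truth mask
print(dice(a, b))  # 2*2 / (3+3) = 0.666...
```

Because the denominator is the sum of both mask sizes, Dice penalizes over- and under-segmentation symmetrically, which is why it is preferred over plain pixel accuracy for small structures.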
Affiliation(s)
- Eros Montin
- Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY, USA
- Center for Advanced Imaging Innovation and Research (CAIR), Department of Radiology, New York University Grossman School of Medicine, New York, NY, USA
| | - Cem M. Deniz
- Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY, USA
- Center for Advanced Imaging Innovation and Research (CAIR), Department of Radiology, New York University Grossman School of Medicine, New York, NY, USA
| | - Richard Kijowski
- Department of Radiology, New York University Grossman School of Medicine, New York, NY, USA
| | - Thomas Youm
- Department of Orthopedic Surgery, New York University Grossman School of Medicine, New York, NY, USA
| | - Riccardo Lattanzi
- Bernard and Irene Schwartz Center for Biomedical Imaging, Department of Radiology, New York University Grossman School of Medicine, New York, NY, USA
- Center for Advanced Imaging Innovation and Research (CAIR), Department of Radiology, New York University Grossman School of Medicine, New York, NY, USA
65
Sadeghi A, Sadeghi M, Sharifpour A, Fakhar M, Zakariaei Z, Sadeghi M, Rokni M, Zakariaei A, Banimostafavi ES, Hajati F. Potential diagnostic application of a novel deep learning-based approach for COVID-19. Sci Rep 2024; 14:280. [PMID: 38167985 PMCID: PMC10762017 DOI: 10.1038/s41598-023-50742-9] [Received: 07/25/2023] [Accepted: 12/24/2023] [Indexed: 01/05/2024]
Abstract
COVID-19 is a highly communicable respiratory illness caused by the novel coronavirus SARS-CoV-2, which has had a significant impact on global public health and the economy. Detecting COVID-19 patients during a pandemic with limited medical facilities can be challenging, resulting in errors and further complications. Therefore, this study aims to develop deep learning models to facilitate automated diagnosis of COVID-19 from CT scan records of patients. The study also introduced COVID-MAH-CT, a new dataset that contains 4442 CT scan images from 133 COVID-19 patients, as well as 133 CT scan 3D volumes. We proposed and evaluated six different transfer learning models for slide-level analysis that are responsible for detecting COVID-19 in multi-slice spiral CT. Additionally, multi-head attention squeeze and excitation residual (MASERes) neural network, a novel 3D deep model was developed for patient-level analysis, which analyzes all the CT slides of a given patient as a whole and can accurately diagnose COVID-19. The codes and dataset developed in this study are available at https://github.com/alrzsdgh/COVID . The proposed transfer learning models for slide-level analysis were able to detect COVID-19 CT slides with an accuracy of more than 99%, while MASERes was able to detect COVID-19 patients from 3D CT volumes with an accuracy of 100%. These achievements demonstrate that the proposed models in this study can be useful for automatically detecting COVID-19 in both slide-level and patient-level from patients' CT scan records, and can be applied for real-world utilization, particularly in diagnosing COVID-19 cases in areas with limited medical facilities.
Affiliation(s)
- Alireza Sadeghi
- Intelligent Mobile Robot Lab (IMRL), Department of Mechatronics Engineering, Faculty of New Sciences and Technologies, University of Tehran, Tehran, Iran
| | - Mahdieh Sadeghi
- Student Research Committee, Mazandaran University of Medical Sciences, Sari, Iran
| | - Ali Sharifpour
- Pulmonary and Critical Care Division, Imam Khomeini Hospital, Mazandaran University of Medical Sciences, Sari, Iran
| | - Mahdi Fakhar
- Iranian National Registry Center for Lophomoniasis and Toxoplasmosis, Imam Khomeini Hospital, Mazandaran University of Medical Sciences, P.O Box: 48166-33131, Sari, Iran.
| | - Zakaria Zakariaei
- Toxicology and Forensic Medicine Division, Mazandaran Registry Center for Opioids Poisoning, Anti-microbial Resistance Research Center, Imam Khomeini Hospital, Mazandaran University of Medical Sciences, P.O box: 48166-33131, Sari, Iran.
| | - Mohammadreza Sadeghi
- Student Research Committee, Mazandaran University of Medical Sciences, Sari, Iran
| | - Mojtaba Rokni
- Department of Radiology, Qaemshahr Razi Hospital, Mazandaran University of Medical Sciences, Sari, Iran
| | - Atousa Zakariaei
- MSC in Civil Engineering, European University of Lefke, Nicosia, Cyprus
| | - Elham Sadat Banimostafavi
- Department of Radiology, Imam Khomeini Hospital, Mazandaran University of Medical Sciences, Sari, Iran
| | - Farshid Hajati
- Intelligent Technology Innovation Lab (ITIL) Group, Institute for Sustainable Industries and Liveable Cities, Victoria University, Footscray, Australia
66
Rubab S, Khan MA, Hamza A, Albarakati HM, Saidani O, Alshardan A, Alasiry A, Marzougui M, Nam Y. A Novel Network-Level Fusion Architecture of Proposed Self-Attention and Vision Transformer Models for Land Use and Land Cover Classification From Remote Sensing Images. IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING 2024; 17:13135-13148. [DOI: 10.1109/jstars.2024.3426950] [Indexed: 08/25/2024]
Affiliation(s)
- Saddaf Rubab
- Department of Computer Engineering, College of Computing and Informatics, University of Sharjah, Sharjah, UAE
| | | | - Ameer Hamza
- Department of Computer Science, HITEC University, Taxila, Pakistan
| | - Hussain Mobarak Albarakati
- Department of Computer and Network Engineering, College of Computing, Umm Al-Qura University, Makkah, Saudi Arabia
| | - Oumaima Saidani
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
| | - Amal Alshardan
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
| | - Areej Alasiry
- College of Computer Science, King Khalid University, Abha, Saudi Arabia
| | - Mehrez Marzougui
- College of Computer Science, King Khalid University, Abha, Saudi Arabia
| | - Yunyoung Nam
- Department of ICT Convergence, Soonchunhyang University, Asan, South Korea
67
Su X, Liu W, Jiang S, Gao X, Chu Y, Ma L. Deep learning-based anatomical position recognition for gastroscopic examination. Technol Health Care 2024; 32:39-48. [PMID: 38669495 PMCID: PMC11191429 DOI: 10.3233/thc-248004] [Indexed: 04/28/2024]
Abstract
BACKGROUND The gastroscopic examination is a preferred method for the detection of upper gastrointestinal lesions. However, gastroscopic examination places high demands on doctors, especially regarding the strict position and quantity of the archived images. These requirements are challenging for the education and training of junior doctors. OBJECTIVE The purpose of this study is to use deep learning to develop automatic position recognition technology for gastroscopic examination. METHODS A total of 17182 gastroscopic images in eight anatomical position categories are collected. The convolutional neural network model MogaNet is used to identify all the anatomical positions of the stomach for gastroscopic examination. The performance of four models is evaluated by sensitivity, precision, and F1-score. RESULTS The average sensitivity of the method proposed is 0.963, which is 0.074, 0.066 and 0.065 higher than ResNet, GoogleNet and SqueezeNet, respectively. The average precision of the method proposed is 0.964, which is 0.072, 0.067 and 0.068 higher than ResNet, GoogleNet, and SqueezeNet, respectively. The average F1-score of the method proposed is 0.964, which is 0.074, 0.067 and 0.067 higher than ResNet, GoogleNet, and SqueezeNet, respectively. The results of the t-test show that the method proposed is significantly different from the other methods (p< 0.05). CONCLUSION The method proposed exhibits the best performance for anatomical position recognition, and it can help junior doctors quickly meet the requirements for completeness of gastroscopic examination and for the number and position of archived images.
Affiliation(s)
- Xiufeng Su
- Weihai Municipal Hospital, Cheeloo College of Medicine, Shandong University, Weihai, Shandong, China
| | - Weiyu Liu
- Weihai Municipal Hospital, Cheeloo College of Medicine, Shandong University, Weihai, Shandong, China
| | - Suyi Jiang
- Weihai Municipal Hospital, Cheeloo College of Medicine, Shandong University, Weihai, Shandong, China
| | - Xiaozhong Gao
- Weihai Municipal Hospital, Cheeloo College of Medicine, Shandong University, Weihai, Shandong, China
| | - Yanliu Chu
- Weihai Municipal Hospital, Cheeloo College of Medicine, Shandong University, Weihai, Shandong, China
| | - Liyong Ma
- School of Information Science and Engineering, Harbin Institute of Technology, Weihai, Shandong, China
68
Tao S, Tian Z, Bai L, Xu Y, Kuang C, Liu X. Phase retrieval for X-ray differential phase contrast radiography with knowledge transfer learning from virtual differential absorption model. Comput Biol Med 2024; 168:107711. [PMID: 37995534 DOI: 10.1016/j.compbiomed.2023.107711] [Received: 07/03/2023] [Revised: 10/31/2023] [Accepted: 11/15/2023] [Indexed: 11/25/2023]
Abstract
Grating-based X-ray phase contrast radiography and computed tomography (CT) are promising modalities for future medical applications. However, the ill-posed phase retrieval problem in X-ray phase contrast imaging has hindered its use for quantitative analysis in biomedical imaging. Deep learning has proven to be an effective tool for image retrieval. However, in a practical grating-based X-ray phase contrast imaging system, acquiring the ground truth of phase to form image pairs is challenging, which poses a great obstacle to using deep learning methods. Transfer learning is widely used to address this problem through knowledge inheritance from similar tasks. In the present research, we propose a virtual differential absorption model and generate a training dataset with differential absorption images and absorption images. The knowledge learned from the training is transferred to phase retrieval with transfer learning techniques. Numerical simulations and experiments both demonstrate its feasibility. Image quality of the retrieved phase radiographs and phase CT slices is improved when compared with representative phase retrieval methods. We conclude that this method is helpful in both X-ray 2D and 3D imaging and may find its applications in X-ray phase contrast radiography and X-ray phase CT.
Affiliation(s)
- Siwei Tao
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, China
| | - Zonghan Tian
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, China
| | - Ling Bai
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, China
| | - Yueshu Xu
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, China; State Key Laboratory of Extreme Photonics and Instrumentation, ZJU-Hangzhou Global Scientific and Technological Innovation Center, Zhejiang University, Hangzhou, 315100, China
| | - Cuifang Kuang
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, China; State Key Laboratory of Extreme Photonics and Instrumentation, ZJU-Hangzhou Global Scientific and Technological Innovation Center, Zhejiang University, Hangzhou, 315100, China; Collaborative Innovation Center of Extreme Optics, Shanxi University, Taiyuan, 030006, China.
| | - Xu Liu
- State Key Laboratory of Extreme Photonics and Instrumentation, College of Optical Science and Engineering, Zhejiang University, Hangzhou, 310027, China; State Key Laboratory of Extreme Photonics and Instrumentation, ZJU-Hangzhou Global Scientific and Technological Innovation Center, Zhejiang University, Hangzhou, 315100, China; Ningbo Research Institute, Zhejiang University, Ningbo, 315100, China.
69
Zeng W, Xiao ZY. Few-shot learning based on deep learning: A survey. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2024; 21:679-711. [PMID: 38303439 DOI: 10.3934/mbe.2024029] [Indexed: 02/03/2024]
Abstract
In recent years, with the development of science and technology, increasingly powerful computing devices have become available. On this foundation, deep learning (DL) technology has achieved many successes in multiple fields. The success of deep learning also relies on the support of large-scale datasets, which provide models with a wide variety of images; the rich information in these images helps a model learn more about the various categories, improving its classification performance and generalization ability. However, in real application scenarios, many tasks cannot collect a large number of images, or enough images, for model training, which restricts the performance of the trained model. How to train a high-performance model with limited samples therefore becomes key. To address this problem, the few-shot learning (FSL) strategy was proposed, which aims to obtain a model with strong performance from a small amount of data; FSL is therefore advantageous in real-world tasks where large training sets cannot be obtained. In this review, we introduce DL-based FSL methods for image classification, which fall into four categories: methods based on data augmentation, metric learning, meta-learning, and the addition of other tasks. First, we introduce classic and advanced FSL methods by category. Second, we introduce datasets commonly used to benchmark FSL methods and report the performance of classic and advanced FSL methods on two common datasets. Finally, we discuss current challenges and future prospects in this field.
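Of the four method families surveyed above, the metric-learning family can be illustrated compactly in a prototypical-network style: average the few labeled support embeddings per class into prototypes, then classify a query by its nearest prototype. The class names and embeddings below are made up for illustration:

```python
# Prototypical-network-style few-shot classification sketch. In a real FSL
# system the embeddings would come from a trained encoder; here they are
# hand-written 2-D vectors with invented class names.
import numpy as np

support = {  # a few labeled embeddings per class (the "shots")
    "cat": np.array([[1.0, 0.1], [0.9, 0.0]]),
    "dog": np.array([[0.0, 1.0], [0.1, 0.9]]),
}
# One prototype per class: the mean of its support embeddings.
prototypes = {c: e.mean(axis=0) for c, e in support.items()}

def classify(query):
    """Assign the query embedding to the class of the nearest prototype."""
    return min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))

print(classify(np.array([0.8, 0.2])))  # -> cat
```

Because only the prototypes need to be recomputed for new classes, such methods handle novel categories with a handful of examples and no retraining of the encoder.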
Affiliation(s)
- Wu Zeng
- Engineering Training Center, Putian University, Putian 351100, China
| | - Zheng-Ying Xiao
- Engineering Training Center, Putian University, Putian 351100, China
70
Kahaki S, Hagemann IS, Cha KH, Trindade C, Petrick N, Kostelecky N, Borden LE, Atwi D, Fung KM, Chen W. End-to-end deep learning method for predicting hormonal treatment response in women with atypical endometrial hyperplasia or endometrial cancer. J Med Imaging (Bellingham) 2024; 11:017502. [PMID: 38370423 PMCID: PMC10868592 DOI: 10.1117/1.jmi.11.1.017502] [Received: 06/01/2023] [Revised: 12/17/2023] [Accepted: 01/16/2024] [Indexed: 02/20/2024]
Abstract
Purpose Endometrial cancer (EC) is the most common gynecologic malignancy in the United States, and atypical endometrial hyperplasia (AEH) is considered a high-risk precursor to EC. Hormone therapies and hysterectomy are practical treatment options for AEH and early-stage EC. Some patients prefer hormone therapies for reasons such as fertility preservation or being poor surgical candidates. However, accurate prediction of an individual patient's response to hormonal treatment would allow for personalized and potentially improved recommendations for these conditions. This study aims to explore the feasibility of using deep learning models on whole slide images (WSI) of endometrial tissue samples to predict the patient's response to hormonal treatment. Approach We curated a clinical WSI dataset of 112 patients from two clinical sites. An expert pathologist annotated these images by outlining AEH/EC regions. We developed an end-to-end machine learning model with mixed supervision. The model is based on image patches extracted from pathologist-annotated AEH/EC regions. Either an unsupervised deep learning architecture (Autoencoder or ResNet50), or non-deep learning (radiomics feature extraction) is used to embed the images into a low-dimensional space, followed by fully connected layers for binary prediction, which was trained with binary responder/non-responder labels established by pathologists. We used stratified sampling to partition the dataset into a development set and a test set for internal validation of the performance of our models. Results The autoencoder model yielded an AUROC of 0.80 with 95% CI [0.63, 0.95] on the independent test set for the task of predicting a patient with AEH/EC as a responder vs non-responder to hormonal treatment. Conclusions These findings demonstrate the potential of using mixed supervised machine learning models on WSIs for predicting the response to hormonal treatment in AEH/EC patients.
Affiliation(s)
- Seyed Kahaki
  - U.S. Food and Drug Administration (FDA), Center for Devices and Radiological Health, Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Laboratories, Silver Spring, Maryland, United States
- Ian S. Hagemann
  - Washington University School of Medicine, Department of Pathology and Immunology, St. Louis, Missouri, United States
  - Washington University School of Medicine, Department of Obstetrics and Gynecology, St. Louis, Missouri, United States
- Kenny H. Cha
  - U.S. Food and Drug Administration (FDA), Center for Devices and Radiological Health, Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Laboratories, Silver Spring, Maryland, United States
- Christopher Trindade
  - U.S. Food and Drug Administration (FDA), Division of Molecular Genetics and Pathology, Silver Spring, Maryland, United States
- Nicholas Petrick
  - U.S. Food and Drug Administration (FDA), Center for Devices and Radiological Health, Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Laboratories, Silver Spring, Maryland, United States
- Nicolas Kostelecky
  - Washington University School of Medicine, Department of Pathology and Immunology, St. Louis, Missouri, United States
  - Northwestern University Feinberg School of Medicine, Department of Pathology, Chicago, Illinois, United States
- Lindsay E. Borden
  - University of Oklahoma Health Sciences Center, Department of Obstetrics and Gynecology, Oklahoma City, Oklahoma, United States
  - University of Oklahoma Health Sciences Center, Department of Pathology, Oklahoma City, Oklahoma, United States
- Doaa Atwi
  - University of Oklahoma Health Sciences Center, Department of Pathology, Oklahoma City, Oklahoma, United States
- Kar-Ming Fung
  - University of Oklahoma Health Sciences Center, Department of Pathology, Oklahoma City, Oklahoma, United States
  - University of Oklahoma Health Sciences Center, Stephenson Cancer Center, Oklahoma City, Oklahoma, United States
- Weijie Chen
  - U.S. Food and Drug Administration (FDA), Center for Devices and Radiological Health, Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Laboratories, Silver Spring, Maryland, United States
71
Lee GP, Kim YJ, Park DK, Kim YJ, Han SK, Kim KG. Gastro-BaseNet: A Specialized Pre-Trained Model for Enhanced Gastroscopic Data Classification and Diagnosis of Gastric Cancer and Ulcer. Diagnostics (Basel) 2023; 14:75. [PMID: 38201385 PMCID: PMC10795822 DOI: 10.3390/diagnostics14010075]
Abstract
Most gastric disease prediction models have been built on models pre-trained with natural image data, such as ImageNet, which lack knowledge of the medical domain. This study proposes Gastro-BaseNet, a classification model trained on gastroscopic image data of abnormal gastric lesions. To evaluate performance, we compared transfer learning based on two pre-trained models (Gastro-BaseNet and ImageNet) and two training methods (freeze and fine-tune modes). Effectiveness was verified in terms of classification at the image level and the patient level, as well as lesion localization performance. Gastro-BaseNet demonstrated superior transfer learning performance compared to random weight initialization. When developing a model for predicting the diagnosis of gastric cancer and gastric ulcers, the transfer-learned model based on Gastro-BaseNet outperformed that based on ImageNet. Furthermore, performance was highest when fine-tuning all layers in the fine-tune mode. The Gastro-BaseNet-based model also showed higher localization performance, confirming accurate detection and classification of lesions at specific locations. This study represents a notable advancement in the development of image analysis models within the medical field, improving diagnostic predictive accuracy and aiding more informed clinical decisions in gastrointestinal endoscopy.
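The freeze vs. fine-tune distinction compared above can be sketched with a toy parameter registry; the layer names and roles here are illustrative placeholders, not the actual Gastro-BaseNet architecture.

```python
def set_transfer_mode(layers, mode):
    """Toggle which layers receive gradient updates during transfer learning.
    'freeze': only the new classification head is trained;
    'fine-tune': every layer (backbone and head) is trained."""
    for layer in layers:
        if mode == "freeze":
            layer["trainable"] = layer["role"] == "head"
        elif mode == "fine-tune":
            layer["trainable"] = True
        else:
            raise ValueError(f"unknown mode: {mode}")
    return layers

# hypothetical model: 3 pre-trained backbone layers + 1 new head layer
backbone = [{"name": f"conv{i}", "role": "backbone", "trainable": True} for i in range(3)]
model = backbone + [{"name": "fc", "role": "head", "trainable": True}]

frozen = set_transfer_mode([dict(l) for l in model], "freeze")
tuned = set_transfer_mode([dict(l) for l in model], "fine-tune")
```

In a real framework this corresponds to setting `requires_grad` (PyTorch) or `trainable` (Keras) per layer; the study's best result came from the fine-tune mode, i.e. all flags enabled.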
Affiliation(s)
- Gi Pyo Lee
  - Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences and Technology (GAIHST), Gachon University, Incheon 21565, Republic of Korea
- Young Jae Kim
  - Department of Biomedical Engineering, Gachon University Gil Medical Center, College of Medicine, Gachon University, Incheon 21565, Republic of Korea
- Dong Kyun Park
  - Division of Gastroenterology, Department of Internal Medicine, Gachon University Gil Medical Center, College of Medicine, Gachon University, Incheon 21565, Republic of Korea
- Yoon Jae Kim
  - Division of Gastroenterology, Department of Internal Medicine, Gachon University Gil Medical Center, College of Medicine, Gachon University, Incheon 21565, Republic of Korea
- Su Kyeong Han
  - Health IT Research Center, Gachon University Gil Medical Center, Incheon 21565, Republic of Korea
- Kwang Gi Kim
  - Department of Biomedical Engineering, Gachon University Gil Medical Center, College of Medicine, Gachon University, Incheon 21565, Republic of Korea
72
Wibawa MS, Zhou JY, Wang R, Huang YY, Zhan Z, Chen X, Lv X, Young LS, Rajpoot N. AI-Based Risk Score from Tumour-Infiltrating Lymphocyte Predicts Locoregional-Free Survival in Nasopharyngeal Carcinoma. Cancers (Basel) 2023; 15:5789. [PMID: 38136336 PMCID: PMC10742296 DOI: 10.3390/cancers15245789]
Abstract
BACKGROUND Locoregional recurrence of nasopharyngeal carcinoma (NPC) occurs in 10% to 50% of cases following primary treatment. However, the current main prognostic markers for NPC, stage and plasma Epstein-Barr virus DNA, are not sensitive to locoregional recurrence. METHODS We gathered 385 whole-slide images (WSIs) of haematoxylin and eosin (H&E)-stained NPC sections (n = 367 cases), collected from Sun Yat-sen University Cancer Centre. We developed a deep learning algorithm to detect tumour nuclei and lymphocyte nuclei in WSIs, followed by density-based clustering to quantify the tumour-infiltrating lymphocytes (TILs) into 12 scores. A Random Survival Forest model was then trained on the TIL scores to generate a risk score. RESULTS Based on Kaplan-Meier analysis, the proposed method stratified low- and high-risk NPC cases in a validation set with a statistically significant difference in locoregional recurrence-free survival (p < 0.001). The same was found for distant metastasis-free survival (p < 0.001), progression-free survival (p < 0.001), and regional recurrence-free survival (p < 0.05). Furthermore, in both univariate analysis (HR: 1.58, CI: 1.13-2.19, p < 0.05) and multivariate analysis (HR: 1.59, CI: 1.11-2.28, p < 0.05), our method demonstrated strong prognostic value for locoregional recurrence. CONCLUSION The proposed novel digital markers could potentially be utilised to assist treatment decisions in cases of NPC.
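The Kaplan-Meier analysis used to stratify risk groups rests on the product-limit estimator; a minimal sketch (assuming distinct event times, which avoids tie handling) with made-up follow-up data:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate S(t) from follow-up times and
    event indicators (1 = event, e.g. locoregional recurrence; 0 = censored).
    Assumes all times are distinct, so each step handles one subject."""
    at_risk = len(times)
    order = sorted(range(len(times)), key=lambda i: times[i])
    curve, s = [], 1.0
    for i in order:
        if events[i]:
            s *= (at_risk - 1) / at_risk  # survival drops only at event times
        curve.append((times[i], s))       # censored subjects leave the risk set
        at_risk -= 1
    return curve

# hypothetical follow-up (months) and event flags for four patients
curve = kaplan_meier([2, 3, 5, 8], [1, 0, 1, 1])
```

Plotting one such curve per risk group and comparing them (e.g. with a log-rank test) is what yields the p-values quoted in the abstract.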
Affiliation(s)
- Made Satria Wibawa
  - Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
- Jia-Yu Zhou
  - State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
  - Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Ruoyu Wang
  - Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
- Ying-Ying Huang
  - State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
  - Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Zejiang Zhan
  - State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
  - Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Xi Chen
  - State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
  - Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Xing Lv
  - State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangdong Provincial Clinical Research Center for Cancer, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
  - Department of Nasopharyngeal Carcinoma, Sun Yat-Sen University Cancer Center, Guangzhou 510060, China
- Lawrence S. Young
  - Warwick Medical School, University of Warwick, Coventry CV4 7AL, UK
- Nasir Rajpoot
  - Tissue Image Analytics Centre, Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
  - The Alan Turing Institute, London NW1 2DB, UK
73
Moses O, Qureshi M, King ICC. Letter to the Editor regarding 'Development of a deep learning-based tool to assist wound classification'. J Plast Reconstr Aesthet Surg 2023; 87:215-216. [PMID: 37913620 DOI: 10.1016/j.bjps.2023.10.089]
Affiliation(s)
- Onyedi Moses
  - Brighton and Sussex Medical School, Falmer, Brighton BN1 9RY, United Kingdom
- Mehreen Qureshi
  - Department of Plastic Surgery, Royal Sussex County Hospital, Brighton BN2 5BE, United Kingdom
  - Department of Plastic Surgery, Queen Victoria Hospital, East Grinstead RH19 3DZ, United Kingdom
- Ian C C King
  - Brighton and Sussex Medical School, Falmer, Brighton BN1 9RY, United Kingdom
  - Department of Plastic Surgery, Royal Sussex County Hospital, Brighton BN2 5BE, United Kingdom
  - Department of Plastic Surgery, Queen Victoria Hospital, East Grinstead RH19 3DZ, United Kingdom
74
Soulier T, Colliot O, Ayache N, Rohaut B. How will tomorrow's algorithms fuse multimodal data? The example of the neuroprognosis in Intensive Care. Anaesth Crit Care Pain Med 2023; 42:101301. [PMID: 37709200 DOI: 10.1016/j.accpm.2023.101301]
Affiliation(s)
- Théodore Soulier
  - Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inserm, AP-HP, Hôpital de la Pitié Salpêtrière, F-75013, Paris, France
- Olivier Colliot
  - Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié Salpêtrière, F-75013, Paris, France
- Benjamin Rohaut
  - Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inserm, AP-HP, Hôpital de la Pitié Salpêtrière, F-75013, Paris, France
  - Department of Neurology, Groupe Hospitalier Pitié-Salpêtrière, AP-HP, Paris, France
75
Mudeng V, Farid MN, Ayana G, Choe SW. Domain and Histopathology Adaptations-Based Classification for Malignancy Grading System. Am J Pathol 2023; 193:2080-2098. [PMID: 37673327 DOI: 10.1016/j.ajpath.2023.07.007]
Abstract
Accurate proliferation rate quantification can be used to devise an appropriate treatment for breast cancer. Pathologists use breast tissue biopsy glass slides stained with hematoxylin and eosin to obtain grading information. However, this manual evaluation can be costly and inconsistent because diagnosis depends on the facility and on the pathologists' insight and experience. A convolutional neural network can act as a computer-based observer to improve clinicians' capacity in grading breast cancer. This study therefore proposes a novel scheme for automatic breast cancer malignancy grading from invasive ductal carcinoma. The proposed classifiers implement multistage transfer learning incorporating domain and histopathologic transformations. Domain adaptation using pretrained models, such as InceptionResNetV2, InceptionV3, NASNet-Large, ResNet50, ResNet101, VGG19, and Xception, was applied to classify the ×40 magnification BreaKHis data set into eight classes. Subsequently, InceptionV3 and Xception, which contain the domain and histopathology pretrained weights, were determined to be the best for this study and were used to categorize the Databiox database into grades 1, 2, or 3. To provide a comprehensive report, this study offers a patchless automated grading system for both magnification-dependent and magnification-independent classification. With an overall accuracy (mean ± SD) of 90.17% ± 3.08% to 97.67% ± 1.09% and an F1 score of 0.9013 to 0.9760 for magnification-dependent classification, the classifiers achieved outstanding performance. The proposed approach could be used for breast cancer grading systems in clinical settings.
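The "mean ± SD" accuracy format used above summarizes per-fold results from cross-validation; a small sketch with illustrative fold accuracies (not the study's actual folds):

```python
import statistics

def summarize_folds(accuracies):
    """Summarize cross-validation performance as (mean, sample SD),
    the format behind results such as 90.17% +/- 3.08%."""
    mean = statistics.mean(accuracies)
    sd = statistics.stdev(accuracies)  # sample (n-1) standard deviation
    return mean, sd

folds = [0.88, 0.91, 0.93, 0.89, 0.90]  # illustrative fold accuracies
m, s = summarize_folds(folds)
```

Reporting the spread across folds, not just the mean, is what lets readers judge how stable each classifier is across data splits.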
Affiliation(s)
- Vicky Mudeng
  - Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea
  - Department of Electrical Engineering, Institut Teknologi Kalimantan, Balikpapan, Indonesia
- Mifta Nur Farid
  - Department of Electrical Engineering, Institut Teknologi Kalimantan, Balikpapan, Indonesia
- Gelan Ayana
  - Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea
- Se-Woon Choe
  - Department of Medical IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea
  - Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi, Republic of Korea
76
Jaiswal M, Sharma M, Khandnor P, Goyal A, Belokar R, Harit S, Sood T, Goyal K, Dua P. Deep Learning Models for Classification of Deciduous and Permanent Teeth From Digital Panoramic Images. Cureus 2023; 15:e49937. [PMID: 38179345 PMCID: PMC10765069 DOI: 10.7759/cureus.49937]
Abstract
INTRODUCTION Dental radiographs are essential to the diagnostic process in dentistry. They serve various purposes, including determining age, analyzing patterns of tooth eruption and shedding, and treatment planning and prognosis. The emergence of digital radiography has piqued interest in using artificial intelligence systems to assist and guide dental professionals. These technologies streamline decision-making by enabling entity classification and localization tasks. With the integration of artificial intelligence algorithms tailored for pediatric dentistry and the use of automated tools, there is an optimistic outlook on improving diagnostic capabilities while reducing stress and fatigue among clinicians. METHODOLOGY The dataset comprised 620 panoramic radiographs (mixed dentition: 314, permanent dentition: 306) from patients aged 4-16 years. The classification of deciduous and permanent teeth involved training CNN-based models with different architectures, such as ResNet, AlexNet, and EfficientNet. A 70:15:15 ratio was used for training, validation, and testing, respectively. RESULT AND CONCLUSION Among the proposed models, EfficientNetB0 and EfficientNetB3 exhibited superior performance. Both achieved accuracy, precision, recall, and F1 scores of 98% in classifying teeth as deciduous or permanent, implying that these models were highly accurate in identifying patterns and features within the evaluation dataset.
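The four evaluation metrics reported above are all derived from the confusion-matrix counts; a minimal sketch with toy labels (the example data is hypothetical):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Accuracy, precision, recall and F1 for a binary task such as
    deciduous (0) vs. permanent (1) tooth classification."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = classification_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```

When, as here, all four metrics land at the same value (98%), the confusion matrix is nearly symmetric, i.e. false positives and false negatives are both rare and roughly balanced.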
Affiliation(s)
- Manoj Jaiswal
  - Pedodontics and Preventive Dentistry, Postgraduate Institute of Medical Education and Research, Chandigarh, IND
- Megha Sharma
  - Computer Science and Engineering, Punjab Engineering College, Chandigarh, IND
- Padmavati Khandnor
  - Computer Science and Engineering, Punjab Engineering College, Chandigarh, IND
- Ashima Goyal
  - Pedodontics and Preventive Dentistry, Postgraduate Institute of Medical Education and Research, Chandigarh, IND
- Rajendra Belokar
  - Production and Industrial Engineering, Punjab Engineering College, Chandigarh, IND
- Sandeep Harit
  - Computer Science and Engineering, Punjab Engineering College, Chandigarh, IND
- Tamanna Sood
  - Computer Science and Engineering, Punjab Engineering College, Chandigarh, IND
- Kanav Goyal
  - Mechanical Engineering, Punjab Engineering College, Chandigarh, IND
- Pallavi Dua
  - Computer Science and Engineering, Punjab Engineering College, Chandigarh, IND
77
Keles E, Bagci U. The past, current, and future of neonatal intensive care units with artificial intelligence: a systematic review. NPJ Digit Med 2023; 6:220. [PMID: 38012349 PMCID: PMC10682088 DOI: 10.1038/s41746-023-00941-5]
Abstract
Machine learning and deep learning are two subsets of artificial intelligence that involve teaching computers to learn and make decisions from data. Most recent developments in artificial intelligence come from deep learning, which has proven revolutionary in almost all fields, from computer vision to the health sciences, and has significantly changed conventional clinical practice in medicine. Although some sub-fields of medicine, such as pediatrics, have been relatively slow to receive the benefits of deep learning, related research in pediatrics has now accumulated to a significant level. Hence, in this paper, we review recently developed machine learning and deep learning-based solutions for neonatology applications. Following the PRISMA 2020 guidelines, we systematically evaluate the roles of both classical machine learning and deep learning in neonatology applications, define the methodologies, including algorithmic developments, and describe the remaining challenges in the assessment of neonatal diseases. To date, the primary areas of focus in neonatology regarding AI applications have included survival analysis, neuroimaging, analysis of vital parameters and biosignals, and diagnosis of retinopathy of prematurity. We categorically summarize 106 research articles from 1996 to 2022 and discuss their pros and cons. We also discuss possible directions for new AI models and the future of neonatology with the rising power of AI, suggesting roadmaps for the integration of AI into neonatal intensive care units.
Affiliation(s)
- Elif Keles
  - Northwestern University, Feinberg School of Medicine, Department of Radiology, Chicago, IL, USA
- Ulas Bagci
  - Northwestern University, Feinberg School of Medicine, Department of Radiology, Chicago, IL, USA
  - Northwestern University, Department of Biomedical Engineering, Chicago, IL, USA
  - Department of Electrical and Computer Engineering, Chicago, IL, USA
78
Gao R, Luo G, Ding R, Yang B, Sun H. A Lightweight Deep Learning Framework for Automatic MRI Data Sorting and Artifacts Detection. J Med Syst 2023; 47:124. [PMID: 37999807 DOI: 10.1007/s10916-023-02017-z]
Abstract
The purpose of this study was to develop a lightweight and easily deployable deep learning system for fully automated content-based brain MRI sorting and artifact detection. 22,092 MRI volumes from 4,076 patients between 2017 and 2021 were involved in this retrospective study. The dataset mainly contains four common contrasts (T1-weighted (T1w), contrast-enhanced T1-weighted (T1c), T2-weighted (T2w), and fluid-attenuated inversion recovery (FLAIR)) in three orientations (axial, coronal, and sagittal), plus magnetic resonance angiography (MRA), as well as three typical artifacts (motion, aliasing, and metal artifacts). In the proposed architecture, a pre-trained EfficientNetB0 with the fully connected layers removed was used as the feature extractor, and a multilayer perceptron (MLP) module with four hidden layers was used as the classifier. Precision, recall, F1 score, accuracy, the number of trainable parameters, and floating-point operations (FLOPs) were calculated to evaluate the proposed model, which was also compared with four existing CNN-based models in terms of classification performance and model size. The overall precision, recall, F1 score, and accuracy of the proposed model were 0.983, 0.926, 0.950, and 0.991, respectively, outperforming the other four CNN-based models, and the number of trainable parameters and FLOPs were the smallest among the investigated models. The proposed model can accurately sort head MRI scans and identify artifacts with minimal computational resources, and can be used as a tool to support big medical imaging data research and facilitate large-scale database management.
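The "number of trainable parameters" figure for the MLP classifier head can be sketched by a simple count over layer sizes; the hidden-layer widths below are hypothetical, not the ones used in the paper (only the 1280-d EfficientNetB0 feature size is standard).

```python
def mlp_param_count(layer_sizes):
    """Trainable parameters of a fully connected (MLP) classifier:
    each layer contributes in_features * out_features weights
    plus out_features biases."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# hypothetical head: EfficientNetB0's 1280-d features -> 4 hidden layers -> 16 classes
sizes = [1280, 256, 128, 64, 32, 16]
n_params = mlp_param_count(sizes)
```

Counts like this, summed over backbone and head, are what the model-size comparison in the study is based on; FLOPs are estimated analogously from layer shapes.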
Affiliation(s)
- Ronghui Gao
  - Department of Radiology, West China Hospital of Sichuan University, Chengdu, Sichuan, China
- Guoting Luo
  - Department of Radiology, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, Sichuan, China
- Renxin Ding
  - IT Center, West China Hospital of Sichuan University, Chengdu, Sichuan, China
- Bo Yang
  - IT Center, West China Hospital of Sichuan University, Chengdu, Sichuan, China
- Huaiqiang Sun
  - Department of Radiology, West China Hospital of Sichuan University, Chengdu, Sichuan, China
  - Huaxi MR Research Center, Department of Radiology, West China Hospital of Sichuan University, Chengdu 610041, Sichuan, China
79
Zelger P, Brunner A, Zelger B, Willenbacher E, Unterberger SH, Stalder R, Huck CW, Willenbacher W, Pallua JD. Deep learning analysis of mid-infrared microscopic imaging data for the diagnosis and classification of human lymphomas. J Biophotonics 2023; 16:e202300015. [PMID: 37578837 DOI: 10.1002/jbio.202300015]
Abstract
This study presents an alternative analytical workflow that combines mid-infrared (MIR) microscopic imaging and deep learning to diagnose human lymphoma and differentiate between small and large cell lymphoma. We show that using a deep learning approach to analyze MIR hyperspectral data obtained from benign and malignant lymph node pathology results in high classification accuracy, learning on the spectral region of 3900 to 850 cm-1. The accuracy is above 95% for every pair of malignant lymphoid tissue and still above 90% for the binary distinction between benign and malignant lymphoid tissue. These results demonstrate that preliminary diagnosis and subtyping of human lymphoma could be streamlined by applying a deep learning approach to analyze MIR spectroscopic data.
Affiliation(s)
- P Zelger
  - University Hospital of Hearing, Voice and Speech Disorders, Medical University of Innsbruck, Innsbruck, Austria
- A Brunner
  - Institute of Pathology, Neuropathology and Molecular Pathology, Medical University of Innsbruck, Innsbruck, Austria
- B Zelger
  - Institute of Pathology, Neuropathology and Molecular Pathology, Medical University of Innsbruck, Innsbruck, Austria
- E Willenbacher
  - University Hospital of Internal Medicine V, Hematology & Oncology, Medical University of Innsbruck, Innsbruck, Austria
- S H Unterberger
  - Institute of Material-Technology, Leopold-Franzens University Innsbruck, Innsbruck, Austria
- R Stalder
  - Institute of Mineralogy and Petrography, Leopold-Franzens University Innsbruck, Innsbruck, Austria
- C W Huck
  - Institute of Analytical Chemistry and Radiochemistry, Innsbruck, Austria
- W Willenbacher
  - University Hospital of Internal Medicine V, Hematology & Oncology, Medical University of Innsbruck, Innsbruck, Austria
  - Oncotyrol, Centre for Personalized Cancer Medicine, Innsbruck, Austria
- J D Pallua
  - University Hospital for Orthopedics and Traumatology, Medical University of Innsbruck, Innsbruck, Austria
80
Choi J, Marwaha JS. Clinical prediction tool pitfalls and considerations: Data and algorithms. Surgery 2023; 174:1270-1272. [PMID: 37709646 DOI: 10.1016/j.surg.2023.08.009]
Abstract
In recent years, many surgical prediction models have been developed and published to augment surgeon decision-making, predict postoperative patient trajectories, and more. Collectively underlying all of these models is a wide variety of data sources and algorithms. Each data set and algorithm has its unique strengths, weaknesses, and type of prediction task for which it is best suited. The purpose of this piece is to highlight important characteristics of common data sources and algorithms used in surgical prediction model development so that future researchers interested in developing models of their own may be able to critically evaluate them and select the optimal ones for their study.
Affiliation(s)
- Jeff Choi
  - Department of Surgery, Stanford University, Stanford, CA. https://www.twitter.com/JeffChoi01
- Jayson S Marwaha
  - Department of Surgery, Georgetown University Medical Center, Washington, DC.
81
Breto AL, Cullison K, Zacharaki EI, Wallaengen V, Maziero D, Jones K, Valderrama A, de la Fuente MI, Meshman J, Azzam GA, Ford JC, Stoyanova R, Mellon EA. A Deep Learning Approach for Automatic Segmentation during Daily MRI-Linac Radiotherapy of Glioblastoma. Cancers (Basel) 2023; 15:5241. [PMID: 37958415 PMCID: PMC10647471 DOI: 10.3390/cancers15215241]
Abstract
Glioblastoma changes during chemoradiotherapy are inferred from high-field MRI before and after treatment but are rarely investigated during radiotherapy. The purpose of this study was to develop a deep learning network to automatically segment glioblastoma tumors on daily treatment set-up scans from the first glioblastoma patients treated on an MRI-linac. Glioblastoma patients were prospectively imaged daily during chemoradiotherapy on a 0.35T MRI-linac. Tumor and edema (together, the tumor lesion) and the resection cavity were manually segmented on these daily MRIs to track their kinetics throughout treatment. An automatic segmentation deep learning network was built using a convolutional neural network. A nine-fold cross-validation schema was used, with an 80:10:10 split for training, validation, and testing. Thirty-six glioblastoma patients were imaged pre-treatment and 30 times during radiotherapy (n = 31 volumes, total of 930 MRIs). The average tumor lesion and resection cavity volumes were 94.56 ± 64.68 cc and 72.44 ± 35.08 cc, respectively. The average Dice similarity coefficient between manual and automatic segmentation for the tumor lesion and resection cavity across all patients was 0.67 and 0.84, respectively. This is the first brain lesion segmentation network developed for MRI-linac, and it performed comparably to the only other published network for auto-segmentation of post-operative glioblastoma lesions. Segmented volumes can be utilized for adaptive radiotherapy and propagated across multiple MRI contrasts to create a prognostic model for glioblastoma based on multiparametric MRI.
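The Dice similarity coefficient used to evaluate the segmentations is a simple overlap measure between binary masks; a minimal sketch over flattened toy masks:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    2|A intersect B| / (|A| + |B|). 1.0 = perfect overlap, 0.0 = disjoint.
    Masks are flattened sequences of 0/1 voxel labels."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0  # two empty masks agree trivially

# toy example: manual vs. automatic mask over four voxels
score = dice_coefficient([1, 1, 1, 0], [1, 1, 0, 0])
```

Dice weights agreement by structure size, which is why the larger, smoother resection cavity (0.84) scores higher here than the irregular tumor lesion (0.67).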
Affiliation(s)
- Adrian L. Breto
  - Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Kaylie Cullison
  - Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Evangelia I. Zacharaki
  - Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Veronica Wallaengen
  - Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Danilo Maziero
  - Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
  - Department of Radiation Medicine & Applied Sciences, UC San Diego Health, La Jolla, CA 92093, USA
- Kolton Jones
  - Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
  - West Physics, Atlanta, GA 30339, USA
- Alessandro Valderrama
  - Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Macarena I. de la Fuente
  - Department of Neurology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Jessica Meshman
  - Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Gregory A. Azzam
  - Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- John C. Ford
  - Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Radka Stoyanova
  - Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
- Eric A. Mellon
  - Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 33136, USA
Collapse
|
82
|
Cheng PC, Chiang HHK. Diagnosis of Salivary Gland Tumors Using Transfer Learning with Fine-Tuning and Gradual Unfreezing. Diagnostics (Basel) 2023; 13:3333. [PMID: 37958229 PMCID: PMC10648910 DOI: 10.3390/diagnostics13213333] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2023] [Revised: 10/25/2023] [Accepted: 10/27/2023] [Indexed: 11/15/2023] Open
Abstract
Ultrasound is the primary tool for evaluating salivary gland tumors (SGTs); however, tumor diagnosis currently relies on subjective features. This study aimed to establish an objective ultrasound diagnostic method using deep learning. We collected 446 benign and 223 malignant SGT ultrasound images in the training/validation set and 119 benign and 44 malignant SGT ultrasound images in the testing set. We trained convolutional neural network (CNN) models from scratch and employed transfer learning (TL) with fine-tuning and gradual unfreezing to classify malignant and benign SGTs. The diagnostic performances of these models were compared. By utilizing the pretrained ResNet50V2 with fine-tuning and gradual unfreezing, we achieved a 5-fold average validation accuracy of 0.920. The diagnostic performance on the testing set demonstrated an accuracy of 89.0%, a sensitivity of 81.8%, a specificity of 91.6%, a positive predictive value of 78.3%, and a negative predictive value of 93.2%. This performance surpasses that of other models in our study. The corresponding Grad-CAM visualizations were also presented to provide explanations for the diagnosis. This study presents an effective and objective ultrasound method for distinguishing between malignant and benign SGTs, which could assist in preoperative evaluation.
Collapse
Affiliation(s)
- Ping-Chia Cheng
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei 11221, Taiwan;
- Department of Otolaryngology Head and Neck Surgery, Far Eastern Memorial Hospital, New Taipei City 22060, Taiwan
- Department of Communication Engineering, Asia Eastern University of Science and Technology, New Taipei City 22060, Taiwan
| | - Hui-Hua Kenny Chiang
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei 11221, Taiwan;
| |
Collapse
|
83
|
Alshahrani H, Sharma G, Anand V, Gupta S, Sulaiman A, Elmagzoub MA, Reshan MSA, Shaikh A, Azar AT. An Intelligent Attention-Based Transfer Learning Model for Accurate Differentiation of Bone Marrow Stains to Diagnose Hematological Disorder. Life (Basel) 2023; 13:2091. [PMID: 37895472 PMCID: PMC10607952 DOI: 10.3390/life13102091] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2023] [Revised: 10/17/2023] [Accepted: 10/19/2023] [Indexed: 10/29/2023] Open
Abstract
Bone marrow (BM) is an essential part of the hematopoietic system, which generates all of the body's blood cells and maintains the body's overall health and immune system. The classification of bone marrow cells is pivotal in both clinical and research settings because many hematological diseases, such as leukemia, myelodysplastic syndromes, and anemias, are diagnosed based on specific abnormalities in the number, type, or morphology of bone marrow cells. This creates a need for a robust deep-learning algorithm that can classify bone marrow cells reliably. This study proposes a framework for categorizing bone marrow cells into seven classes. In the proposed framework, five transfer learning models-DenseNet121, EfficientNetB5, ResNet50, Xception, and MobileNetV2-are applied to the bone marrow dataset to classify the cells into seven classes. The best-performing DenseNet121 model was fine-tuned by adding one batch-normalization layer, one dropout layer, and two dense layers. The proposed fine-tuned DenseNet121 model was optimized using several optimizers, such as AdaGrad, AdaDelta, Adamax, RMSprop, and SGD, along with different batch sizes of 16, 32, 64, and 128. The fine-tuned DenseNet121 model was integrated with an attention mechanism to improve its performance by allowing the model to focus on the most relevant features or regions of the image, which can be particularly beneficial in medical imaging, where certain regions may carry critical diagnostic information. The proposed fine-tuned and attention-integrated DenseNet121 achieved the highest accuracy, with a training success rate of 99.97% and a testing success rate of 97.01%. The key hyperparameters, such as batch size, number of epochs, and different optimizers, were all considered in optimizing these pre-trained models to select the best model. This study will help medical research to classify BM cells effectively and support the early diagnosis of diseases such as leukemia.
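The attention mechanism described above can be thought of as learning scores that reweight image regions before pooling. A self-contained sketch of softmax attention pooling over toy feature vectors (an illustration of the idea, not the paper's architecture):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw attention scores."""
    m = max(scores)
    exp = [math.exp(s - m) for s in scores]
    z = sum(exp)
    return [e / z for e in exp]

def attention_pool(features, scores):
    """Pool several region feature vectors into one, weighting each
    region by its softmax-normalised attention score."""
    weights = softmax(scores)
    dim = len(features[0])
    pooled = [sum(w * f[d] for w, f in zip(weights, features))
              for d in range(dim)]
    return pooled, weights

# Region 0 gets a higher score, so it dominates the pooled vector.
pooled, weights = attention_pool([[1.0, 0.0], [0.0, 1.0]], scores=[2.0, 0.0])
```

Regions with low diagnostic relevance receive small weights and contribute little to the pooled representation fed to the classifier.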
Collapse
Affiliation(s)
- Hani Alshahrani
- Department of Computer Science, College of Computer Science and Information Systems, Najran University, Najran 66462, Saudi Arabia; (H.A.); (A.S.)
| | - Gunjan Sharma
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India; (G.S.); (V.A.); (S.G.)
| | - Vatsala Anand
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India; (G.S.); (V.A.); (S.G.)
| | - Sheifali Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura 140401, India; (G.S.); (V.A.); (S.G.)
| | - Adel Sulaiman
- Department of Computer Science, College of Computer Science and Information Systems, Najran University, Najran 66462, Saudi Arabia; (H.A.); (A.S.)
| | - M. A. Elmagzoub
- Department of Network and Communication Engineering, College of Computer Science and Information Systems, Najran University, Najran 61441, Saudi Arabia;
| | - Mana Saleh Al Reshan
- Department of Information Systems, College of Computer Science and Information Systems, Najran University, Najran 66462, Saudi Arabia; (M.S.A.R.); (A.S.)
| | - Asadullah Shaikh
- Department of Information Systems, College of Computer Science and Information Systems, Najran University, Najran 66462, Saudi Arabia; (M.S.A.R.); (A.S.)
| | - Ahmad Taher Azar
- College of Computer and Information Sciences, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Automated Systems and Soft Computing Lab (ASSCL), Prince Sultan University, Riyadh 11586, Saudi Arabia
| |
Collapse
|
84
|
Park JH, Moon HS, Jung HI, Hwang J, Choi YH, Kim JE. Deep learning and clustering approaches for dental implant size classification based on periapical radiographs. Sci Rep 2023; 13:16856. [PMID: 37803022 PMCID: PMC10558577 DOI: 10.1038/s41598-023-42385-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Accepted: 09/09/2023] [Indexed: 10/08/2023] Open
Abstract
This study investigated two artificial intelligence (AI) methods for automatically classifying dental implant diameter and length based on periapical radiographs. The first method, deep learning (DL), utilized the pre-trained VGG16 model, adjusting the degree of fine-tuning to analyze image data obtained from periapical radiographs. The second method, clustering analysis, analyzed an implant-specific feature vector derived from the coordinates of three key points of the dental implant using the k-means++ algorithm, adjusting the weight of the feature vector. Both the DL and clustering models classified dental implant size into nine groups. The performance metrics of the AI models were accuracy, sensitivity, specificity, F1-score, positive predictive value, negative predictive value, and area under the receiver operating characteristic curve (AUC-ROC). The final DL model yielded performances above 0.994, 0.950, 0.994, 0.974, 0.952, 0.994, and 0.975, respectively, and the final clustering model yielded performances above 0.983, 0.900, 0.988, 0.923, 0.909, 0.988, and 0.947, respectively. When comparing the AI models before tuning with the final AI models, statistically significant performance improvements based on AUC-ROC were observed in six out of nine groups for the DL models and four out of nine groups for the clustering models. Both AI models showed reliable classification performance. For clinical application, the AI models require validation on multicenter data.
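The k-means++ algorithm mentioned above differs from plain k-means in its seeding step: later centres are drawn with probability proportional to their squared distance from the nearest centre already chosen. A minimal sketch on toy 2-D points (the study clustered weighted key-point feature vectors):

```python
import random

def kmeans_pp_init(points, k, seed=0):
    """k-means++ seeding: the first centre is chosen uniformly at random;
    each further centre is drawn with probability proportional to its
    squared distance from the nearest centre chosen so far."""
    rng = random.Random(seed)

    def d2(p, c):
        return sum((a - b) ** 2 for a, b in zip(p, c))

    centres = [list(rng.choice(points))]
    while len(centres) < k:
        dists = [min(d2(p, c) for c in centres) for p in points]
        total = sum(dists)
        if total == 0:  # all points already coincide with a centre
            break
        r = rng.uniform(0.0, total)
        acc = 0.0
        for p, d in zip(points, dists):
            acc += d
            if acc >= r and d > 0:
                centres.append(list(p))
                break
    return centres

# Two well-separated clusters: seeding tends to pick one centre from each.
points = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
centres = kmeans_pp_init(points, k=2, seed=1)
```

After seeding, the usual Lloyd assignment/update iterations would refine the centres; the paper additionally weights the feature-vector coordinates before clustering.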
Collapse
Affiliation(s)
- Ji-Hyun Park
- Department of Prosthodontics, Yonsei University College of Dentistry, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Korea
| | - Hong Seok Moon
- Department of Prosthodontics, Yonsei University College of Dentistry, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Korea
| | - Hoi-In Jung
- Department of Preventive Dentistry and Public Oral Health, Yonsei University College of Dentistry, Seoul, 03722, Korea
| | - JaeJoon Hwang
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Dental Research Institute, Pusan National University, Busan, 50612, Korea
| | - Yoon-Ho Choi
- School of Computer Science and Engineering, Pusan National University, Busan, 46241, Korea
| | - Jong-Eun Kim
- Department of Prosthodontics, Yonsei University College of Dentistry, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Korea.
| |
Collapse
|
85
|
Ma T, Wang H, Ye Z. Artificial intelligence applications in computed tomography in gastric cancer: a narrative review. Transl Cancer Res 2023; 12:2379-2392. [PMID: 37859746 PMCID: PMC10583011 DOI: 10.21037/tcr-23-201] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2023] [Accepted: 08/01/2023] [Indexed: 10/21/2023]
Abstract
Background and Objective Artificial intelligence (AI) is a revolutionary technique which is deeply impacting and reshaping clinical practice in oncology. This review aims to summarize the current status of the clinical application of AI-based computed tomography (CT) for gastric cancer (GC), focusing on diagnosis, genetic status detection and risk prediction of metastasis, prognosis and treatment efficacy. The challenges and prospects for future research will also be discussed. Methods We searched the PubMed/MEDLINE database to identify clinical studies published between 1990 and November 2022 that investigated AI applications in CT in GC. The major findings of the verified studies were summarized. Key Content and Findings AI applications in CT images have attracted considerable attention in various fields such as diagnosis, prediction of metastasis risk, survival, and treatment response. These emerging techniques have shown a high potential to outperform clinicians in diagnostic accuracy and time-saving. Conclusions AI-powered tools showed great potential to increase diagnostic accuracy and reduce radiologists' workload. However, the goal of AI is not to replace human ability but to help oncologists make decisions in their practice. Therefore, radiologists should play a predominant role in AI applications and decide the best ways to integrate these complementary techniques within clinical practice.
Collapse
Affiliation(s)
- Tingting Ma
- Department of Radiology, Tianjin Cancer Hospital Airport Hospital, Tianjin, China
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- National Clinical Research Center for Cancer, Tianjin, China
- Tianjin’s Clinical Research Center for Cancer, Tianjin, China
- The Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
| | - Hua Wang
- Department of Radiology, Tianjin Cancer Hospital Airport Hospital, Tianjin, China
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- National Clinical Research Center for Cancer, Tianjin, China
- Tianjin’s Clinical Research Center for Cancer, Tianjin, China
- The Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
| | - Zhaoxiang Ye
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- National Clinical Research Center for Cancer, Tianjin, China
- Tianjin’s Clinical Research Center for Cancer, Tianjin, China
- The Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
| |
Collapse
|
86
|
Wanjiku RN, Nderu L, Kimwele M. Improved transfer learning using textural features conflation and dynamically fine-tuned layers. PeerJ Comput Sci 2023; 9:e1601. [PMID: 37810335 PMCID: PMC10557498 DOI: 10.7717/peerj-cs.1601] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2023] [Accepted: 08/29/2023] [Indexed: 10/10/2023]
Abstract
Transfer learning involves using previously learnt knowledge of a model task in addressing another task. However, this process works well only when the tasks are closely related. It is, therefore, important to select data points that are closely relevant to the previous task and to fine-tune the suitable pre-trained model's layers for effective transfer. This work selects the target-dataset samples with the least divergent textural features and the pre-trained model's most suitable layers, minimising the knowledge lost during the transfer learning process. This study extends previous works on selecting data points with good textural features and dynamically selected layers using divergence measures by combining them into one model pipeline. Five pre-trained models are used: ResNet50, DenseNet169, InceptionV3, VGG16 and MobileNetV2, on nine datasets: CIFAR-10, CIFAR-100, MNIST, Fashion-MNIST, Stanford Dogs, Caltech 256, ISIC 2016, ChestX-ray8 and MIT Indoor Scenes. Experimental results show that data points with lower textural feature divergence and layers with more positive weights give better accuracy than other data points and layers. The data points with lower divergence give an average improvement of 3.54% to 6.75%, while the layers improve accuracy by 2.42% to 13.04% on the CIFAR-100 dataset. Combining the two methods gives an extra accuracy improvement of 1.56%. This combined approach shows that data points with lower divergence from the source dataset samples can lead to a better adaptation for the target task. The results also demonstrate that selecting layers with more positive weights reduces instances of trial and error in selecting fine-tuning layers for pre-trained models.
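Selecting target samples by textural-feature divergence, as described above, can be sketched with a KL divergence between normalised texture histograms (toy histograms; the paper's exact divergence measure and features may differ):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(P || Q) between two normalised texture histograms.
    eps guards against log(0) for empty bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q))

def least_divergent(source_hist, target_hists, n):
    """Indices of the n target samples whose histograms diverge least
    from the source histogram (candidates for effective transfer)."""
    order = sorted(range(len(target_hists)),
                   key=lambda i: kl_divergence(target_hists[i], source_hist))
    return order[:n]

source = [0.5, 0.5]
targets = [[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]]
picked = least_divergent(source, targets, n=1)
```

The sample whose histogram matches the source distribution exactly is ranked first, mirroring the paper's finding that lower-divergence data points adapt better to the target task.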
Collapse
Affiliation(s)
| | - Lawrence Nderu
- Computing, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya
| | - Michael Kimwele
- Computing, Jomo Kenyatta University of Agriculture and Technology, Nairobi, Kenya
| |
Collapse
|
87
|
Canales-Fiscal MR, Tamez-Peña JG. Hybrid morphological-convolutional neural networks for computer-aided diagnosis. Front Artif Intell 2023; 6:1253183. [PMID: 37795497 PMCID: PMC10546173 DOI: 10.3389/frai.2023.1253183] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2023] [Accepted: 08/30/2023] [Indexed: 10/06/2023] Open
Abstract
Training deep Convolutional Neural Networks (CNNs) presents challenges in terms of memory requirements and computational resources, often resulting in issues such as model overfitting and lack of generalization. These challenges can only be mitigated by using an excessive number of training images. However, medical image datasets commonly suffer from data scarcity due to the complexities involved in their acquisition, preparation, and curation. To address this issue, we propose a compact and hybrid machine learning architecture based on the Morphological and Convolutional Neural Network (MCNN), followed by a Random Forest classifier. Unlike deep CNN architectures, the MCNN was specifically designed to achieve effective performance with medical image datasets limited to a few hundred samples. It incorporates various morphological operations into a single layer and uses independent neural networks to extract information from each signal channel. The final classification is obtained by utilizing a Random Forest classifier on the outputs of the last neural network layer. We compare the classification performance of our proposed method with three popular deep CNN architectures (ResNet-18, ShuffleNet-V2, and MobileNet-V2) using two training approaches: full training and transfer learning. The evaluation was conducted on two distinct medical image datasets: the ISIC dataset for melanoma classification and the ORIGA dataset for glaucoma classification. Results demonstrate that the MCNN method exhibits reliable performance in melanoma classification, achieving an AUC of 0.94 (95% CI: 0.91 to 0.97), outperforming the popular CNN architectures. For the glaucoma dataset, the MCNN achieved an AUC of 0.65 (95% CI: 0.53 to 0.74), which was similar to the performance of the popular CNN architectures. This study contributes to the understanding of mathematical morphology in shallow neural networks for medical image classification and highlights the potential of hybrid architectures in effectively learning from medical image datasets that are limited by a small number of case samples.
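The morphological layer described above builds on classical operations such as erosion and dilation. A plain-Python sketch of these primitives on a small binary image (the MCNN learns such operations from data rather than fixing them, so this is only an illustration of the underlying math):

```python
def _window_op(img, size, op):
    """Apply op (min or max) over a size x size window at every pixel,
    clamping the window at the image borders."""
    h, w, r = len(img), len(img[0]), size // 2
    return [[op(img[j][i]
                for j in range(max(0, y - r), min(h, y + r + 1))
                for i in range(max(0, x - r), min(w, x + r + 1)))
             for x in range(w)]
            for y in range(h)]

def erode(img, size=3):
    """Erosion with a size x size square structuring element."""
    return _window_op(img, size, min)

def dilate(img, size=3):
    """Dilation with a size x size square structuring element."""
    return _window_op(img, size, max)

def opening(img, size=3):
    """Opening = erosion then dilation; removes bright structures
    smaller than the structuring element."""
    return dilate(erode(img, size), size)

# A 3x3 square survives opening with a 3x3 element; a lone speck does not.
img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
```

Chaining such operations gives shape-aware, parameter-light feature extractors, which is why a morphological layer can work with only a few hundred training samples.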
Collapse
|
88
|
Huisman M, Hannink G. The AI Generalization Gap: One Size Does Not Fit All. Radiol Artif Intell 2023; 5:e230246. [PMID: 37795134 PMCID: PMC10546357 DOI: 10.1148/ryai.230246] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2023] [Revised: 07/12/2023] [Accepted: 08/10/2023] [Indexed: 10/06/2023]
Affiliation(s)
- Merel Huisman
- From Radboudumc, Oudwijk 49, Nijmegen, Utrecht 6500, the Netherlands
| | - Gerjon Hannink
- From Radboudumc, Oudwijk 49, Nijmegen, Utrecht 6500, the Netherlands
| |
Collapse
|
89
|
Kamdje Wabo G, Prasser F, Gierend K, Siegel F, Ganslandt T. Data Quality- and Utility-Compliant Anonymization of Common Data Model-Harmonized Electronic Health Record Data: Protocol for a Scoping Review. JMIR Res Protoc 2023; 12:e46471. [PMID: 37566443 PMCID: PMC10457704 DOI: 10.2196/46471] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2023] [Revised: 05/31/2023] [Accepted: 06/28/2023] [Indexed: 08/12/2023] Open
Abstract
BACKGROUND The anonymization of Common Data Model (CDM)-converted EHR data is essential to ensure data privacy in the use of harmonized health care data. However, applying data anonymization techniques can significantly affect many properties of the resulting data sets and thus bias research results. Few studies have reviewed these applications with a reflection of approaches to manage data utility and quality concerns in the context of CDM-formatted health care data. OBJECTIVE Our intended scoping review aims to identify and describe (1) how formal anonymization methods are carried out with CDM-converted health care data, (2) how data quality and utility concerns are considered, and (3) how the various CDMs differ in terms of their suitability for recording anonymized data. METHODS The planned scoping review is based on the framework of Arksey and O'Malley; following it, only articles published in English will be included. The retrieval of literature items will be based on a search string combining keywords related to data anonymization, CDM standards, and data quality assessment. The proposed literature search query will be validated by a librarian, accompanied by manual searches to include further informal sources. Eligible articles will first undergo a deduplication step, followed by the screening of titles. Second, a full-text reading will allow the 2 reviewers involved to reach the final decision about article selection, while a domain expert will support the resolution of citation selection conflicts. Additionally, key information will be extracted, categorized, summarized, and analyzed using a proposed template in an iterative process. Tabular and graphical analyses will be addressed in alignment with the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) checklist. We also performed tentative searches on Web of Science to estimate the feasibility of reaching eligible articles. RESULTS Tentative searches on Web of Science resulted in 507 nonduplicated matches, suggesting the availability of potentially relevant articles. Further analysis and selection steps will allow us to derive a final literature set. Completion of this scoping review study is expected by the end of the fourth quarter of 2023. CONCLUSIONS Outlining the approaches for applying formal anonymization methods to CDM-formatted health care data, while taking data quality and utility concerns into account, should provide useful insights into existing approaches and future research directions based on identified gaps. This protocol describes a schedule for performing a scoping review, which should support the conduction of follow-up investigations. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) PRR1-10.2196/46471.
Collapse
Affiliation(s)
- Gaetan Kamdje Wabo
- Department of Biomedical Informatics, Center for Preventive Medicine and Digital Health Baden-Württemberg, Mannheim Medical Faculty of the University of Heidelberg, Mannheim, Germany
| | - Fabian Prasser
- Berlin Institute of Health at Charité, Universitätsmedizin Berlin, Berlin, Germany
| | - Kerstin Gierend
- Department of Biomedical Informatics, Center for Preventive Medicine and Digital Health Baden-Württemberg, Mannheim Medical Faculty of the University of Heidelberg, Mannheim, Germany
| | - Fabian Siegel
- Department of Biomedical Informatics, Center for Preventive Medicine and Digital Health Baden-Württemberg, Mannheim Medical Faculty of the University of Heidelberg, Mannheim, Germany
- Department of Urology and Urosurgery, University Medical Center Mannheim, Mannheim Medical Faculty of the University of Heidelberg, Mannheim, Germany
| | - Thomas Ganslandt
- Chair of Medical Informatics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
| |
Collapse
|
90
|
Dasari Y, Duffin J, Sayin ES, Levine HT, Poublanc J, Para AE, Mikulis DJ, Fisher JA, Sobczyk O, Khamesee MB. Convolutional Neural Networks to Assess Steno-Occlusive Disease Using Cerebrovascular Reactivity. Healthcare (Basel) 2023; 11:2231. [PMID: 37628429 PMCID: PMC10454585 DOI: 10.3390/healthcare11162231] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2023] [Revised: 07/31/2023] [Accepted: 08/05/2023] [Indexed: 08/27/2023] Open
Abstract
Cerebrovascular reactivity (CVR) is a provocative test used with blood oxygenation level-dependent (BOLD) magnetic resonance imaging (MRI) studies, in which a vasoactive stimulus is applied and the corresponding changes in cerebral blood flow (CBF) are measured. The most common clinical application is the assessment of cerebral perfusion insufficiency in patients with steno-occlusive disease (SOD). Globally, millions of people suffer from cerebrovascular diseases, and SOD is the most common cause of ischemic stroke. Therefore, CVR analyses can play a vital role in early diagnosis and guiding clinical treatment. This study develops a convolutional neural network (CNN)-based clinical decision support system to facilitate the screening of SOD patients by discriminating between healthy and unhealthy CVR maps. The networks were trained on a confidential CVR dataset with two classes: 68 healthy control subjects and 163 SOD patients. This original dataset was distributed in a ratio of 80%-10%-10% for training, validation, and testing, respectively, and image augmentations were applied to the training and validation sets. Additionally, some popular pre-trained networks were imported and customized for the objective classification task to conduct transfer learning experiments. Results indicate that a customized CNN with a double-stacked convolution layer architecture produces the best results, consistent with expert clinical readings.
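An 80%-10%-10% distribution like the one above is typically drawn per class, so the healthy/SOD ratio is preserved in each split. A minimal sketch with synthetic scan IDs mirroring the cohort sizes (68 + 163 = 231 subjects; not the confidential dataset, and the paper may have split differently):

```python
import random

def stratified_split(items, labels, fractions=(0.8, 0.1, 0.1), seed=0):
    """Split items into train/val/test while preserving the class ratio
    within each part."""
    rng = random.Random(seed)
    by_class = {}
    for item, lab in zip(items, labels):
        by_class.setdefault(lab, []).append(item)
    train, val, test = [], [], []
    for group in by_class.values():
        rng.shuffle(group)
        n_train = round(fractions[0] * len(group))
        n_val = round(fractions[1] * len(group))
        train += group[:n_train]
        val += group[n_train:n_train + n_val]
        test += group[n_train + n_val:]  # remainder goes to test
    return train, val, test

items = [f"scan_{i}" for i in range(231)]
labels = ["healthy"] * 68 + ["sod"] * 163
train, val, test = stratified_split(items, labels)
print(len(train), len(val), len(test))  # 184 23 24
```

Stratifying matters here because the classes are imbalanced (roughly 1:2.4); a naive random split could leave a validation or test set with too few healthy controls.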
Collapse
Affiliation(s)
- Yashesh Dasari
- Department of Mechanical and Mechatronics Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada;
| | - James Duffin
- Department of Physiology, University of Toronto, Toronto, ON M5S 1A8, Canada
- Department of Anesthesia and Pain Management, University Health Network, Toronto, ON M5G 2C4, Canada
| | - Ece Su Sayin
- Department of Physiology, University of Toronto, Toronto, ON M5S 1A8, Canada
- Department of Anesthesia and Pain Management, University Health Network, Toronto, ON M5G 2C4, Canada
| | - Harrison T. Levine
- Department of Physiology, University of Toronto, Toronto, ON M5S 1A8, Canada
- Department of Anesthesia and Pain Management, University Health Network, Toronto, ON M5G 2C4, Canada
| | - Julien Poublanc
- Joint Department of Medical Imaging and the Functional Neuroimaging Laboratory, University Health Network, Toronto, ON M5G 2C4, Canada
| | - Andrea E. Para
- Joint Department of Medical Imaging and the Functional Neuroimaging Laboratory, University Health Network, Toronto, ON M5G 2C4, Canada
| | - David J. Mikulis
- Joint Department of Medical Imaging and the Functional Neuroimaging Laboratory, University Health Network, Toronto, ON M5G 2C4, Canada
- Institute of Medical Sciences, University of Toronto, Toronto, ON M5S 1A8, Canada
| | - Joseph A. Fisher
- Department of Physiology, University of Toronto, Toronto, ON M5S 1A8, Canada
- Department of Anesthesia and Pain Management, University Health Network, Toronto, ON M5G 2C4, Canada
- Institute of Medical Sciences, University of Toronto, Toronto, ON M5S 1A8, Canada
| | - Olivia Sobczyk
- Department of Anesthesia and Pain Management, University Health Network, Toronto, ON M5G 2C4, Canada
- Joint Department of Medical Imaging and the Functional Neuroimaging Laboratory, University Health Network, Toronto, ON M5G 2C4, Canada
| | - Mir Behrad Khamesee
- Department of Mechanical and Mechatronics Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada;
| |
Collapse
|
91
|
Jin T, Pan S, Li X, Chen S. Metadata and Image Features Co-Aware Personalized Federated Learning for Smart Healthcare. IEEE J Biomed Health Inform 2023; 27:4110-4119. [PMID: 37220032 DOI: 10.1109/jbhi.2023.3279096] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Recently, artificial intelligence has been widely used in intelligent disease diagnosis and has achieved great success. However, most existing works rely mainly on the extraction of image features and ignore patients' clinical text information, which can fundamentally limit diagnostic accuracy. In this paper, we propose a metadata and image features co-aware personalized federated learning scheme for smart healthcare. Specifically, we construct an intelligent diagnosis model through which users can obtain fast and accurate diagnosis services. Meanwhile, a personalized federated learning scheme is designed to utilize the knowledge learned from other edge nodes with larger contributions and to customize high-quality personalized classification models for each edge node. Subsequently, a Naïve Bayes classifier is devised for classifying patient metadata. The image and metadata diagnosis results are then jointly aggregated with different weights to improve the accuracy of intelligent diagnosis. Finally, the simulation results illustrate that, compared with existing methods, our proposed algorithm achieves better classification accuracy, reaching about 97.16% on the PAD-UFES-20 dataset.
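The weighted aggregation of image and metadata predictions described above can be sketched as a convex combination of the two branches' class probabilities (the 0.7/0.3 weights and class names are illustrative, not the paper's values):

```python
def fuse_predictions(p_image, p_meta, w_image=0.7):
    """Combine per-class probabilities from the image model and the
    Naive Bayes metadata model by a weighted average; return the
    arg-max class and the fused distribution."""
    fused = {c: w_image * p_image[c] + (1.0 - w_image) * p_meta[c]
             for c in p_image}
    return max(fused, key=fused.get), fused

# The image branch favours "benign", but the confident metadata branch
# tips the fused decision to "malignant".
label, fused = fuse_predictions({"benign": 0.6, "malignant": 0.4},
                                {"benign": 0.2, "malignant": 0.8})
```

Because each branch outputs a normalised distribution, the fused result is also a valid distribution, and the weight controls how much the metadata branch can override the image branch.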
Collapse
|
92
|
Harris C, Okorie U, Makrogiannis S. Spatially localized sparse approximations of deep features for breast mass characterization. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2023; 20:15859-15882. [PMID: 37919992 PMCID: PMC10949936 DOI: 10.3934/mbe.2023706] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/04/2023]
Abstract
We propose a deep feature-based sparse approximation classification technique for classification of breast masses into benign and malignant categories in film-screen mammograms. This is a significant application as breast cancer is a leading cause of death in the modern world and improvements in diagnosis may help to decrease rates of mortality for large populations. While deep learning techniques have produced remarkable results in the field of computer-aided diagnosis of breast cancer, there are several aspects of this field that remain under-studied. In this work, we investigate the applicability of deep-feature-generated dictionaries to sparse approximation-based classification. To this end we construct dictionaries from deep features and compute sparse approximations of Regions Of Interest (ROIs) of breast masses for classification. Furthermore, we propose block and patch decomposition methods to construct overcomplete dictionaries suitable for sparse coding. The effectiveness of our deep feature spatially localized ensemble sparse analysis (DF-SLESA) technique is evaluated on a merged dataset of mass ROIs from the CBIS-DDSM and MIAS datasets. Experimental results indicate that dictionaries of deep features yield more discriminative sparse approximations of mass characteristics than dictionaries of imaging patterns and dictionaries learned by unsupervised machine learning techniques such as K-SVD. Of note is that the proposed block and patch decomposition strategies may help to simplify the sparse coding problem and to find tractable solutions. The proposed technique achieves competitive performances with state-of-the-art techniques for benign/malignant breast mass classification, using 10-fold cross-validation in merged datasets of film-screen mammograms.
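The patch decomposition mentioned above carves an ROI into small blocks, each of which becomes one flattened atom of an overcomplete dictionary. A minimal sketch (the patch size and ROI are illustrative; the paper builds its dictionaries from deep features rather than raw pixels):

```python
def patch_decompose(roi, patch=4):
    """Split a 2-D region of interest into non-overlapping patch x patch
    blocks and flatten each block into one column (dictionary atom)."""
    h, w = len(roi), len(roi[0])
    atoms = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            atoms.append([roi[y + j][x + i]
                          for j in range(patch)
                          for i in range(patch)])
    return atoms

# An 8x8 ROI yields 4 non-overlapping 4x4 patches of 16 pixels each.
roi = [[8 * y + x for x in range(8)] for y in range(8)]
atoms = patch_decompose(roi, patch=4)
print(len(atoms), len(atoms[0]))  # 4 16
```

Working patch-by-patch keeps each sparse coding subproblem small, which is the tractability benefit the abstract alludes to.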
Collapse
Affiliation(s)
- Chelsea Harris
- Division of Physics, Engineering, Mathematics, and Computer Science, Delaware State University, 1200 N DuPont Hwy, Dover, DE 19901, USA
- Uchenna Okorie
- Division of Physics, Engineering, Mathematics, and Computer Science, Delaware State University, 1200 N DuPont Hwy, Dover, DE 19901, USA
- Sokratis Makrogiannis
- Division of Physics, Engineering, Mathematics, and Computer Science, Delaware State University, 1200 N DuPont Hwy, Dover, DE 19901, USA
93
Cobo M, Pérez-Rojas F, Gutiérrez-Rodríguez C, Heredia I, Maragaño-Lizama P, Yung-Manriquez F, Lloret Iglesias L, Vega JA. Novel deep learning method for coronary artery tortuosity detection through coronary angiography. Sci Rep 2023; 13:11137. [PMID: 37429940 PMCID: PMC10333289 DOI: 10.1038/s41598-023-37868-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/17/2022] [Accepted: 06/28/2023] [Indexed: 07/12/2023] Open
Abstract
Coronary artery tortuosity usually goes undetected in patients undergoing coronary angiography. Detecting this condition requires a longer examination by the specialist. Yet, detailed knowledge of the morphology of coronary arteries is essential for planning any interventional treatment, such as stenting. We aimed to analyze coronary artery tortuosity in coronary angiography with artificial intelligence techniques to develop an algorithm capable of automatically detecting this condition in patients. This work uses deep learning techniques, in particular, convolutional neural networks, to classify patients into tortuous or non-tortuous based on their coronary angiography. The developed model was trained both on left (Spider) and right (45°/0°) coronary angiographies following a fivefold cross-validation procedure. A total of 658 coronary angiographies were included. Experimental results demonstrated satisfactory performance of our image-based tortuosity detection system, with a test accuracy of (87 ± 6)%. The deep learning model had a mean area under the curve of 0.96 ± 0.03 over the test sets. The sensitivity, specificity, positive predictive values, and negative predictive values of the model for detecting coronary artery tortuosity were (87 ± 10)%, (88 ± 10)%, (89 ± 8)%, and (88 ± 9)%, respectively. Deep learning convolutional neural networks were found to have sensitivity and specificity comparable to independent experts' radiological visual examination for detecting coronary artery tortuosity for a conservative threshold of 0.5. These findings have promising applications in the field of cardiology and medical imaging.
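The four headline metrics reported above all derive from one confusion matrix. As a reminder of the definitions (the counts below are made up for illustration, not the study's data):

```python
# Hypothetical confusion-matrix counts for a tortuous / non-tortuous
# classifier (illustrative numbers only, not the paper's data).
tp, fn = 87, 13   # tortuous angiographies classified correctly / missed
tn, fp = 88, 12   # non-tortuous angiographies classified correctly / flagged

sensitivity = tp / (tp + fn)   # recall on tortuous cases
specificity = tn / (tn + fp)   # recall on non-tortuous cases
ppv = tp / (tp + fp)           # positive predictive value (precision)
npv = tn / (tn + fn)           # negative predictive value
```

With these counts, sensitivity is 0.87 and specificity 0.88; PPV and NPV additionally depend on how many positives the model emits, which is why all four are reported separately.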
Affiliation(s)
- Miriam Cobo
- Advanced Computing Research Group, Institute of Physics of Cantabria (IFCA), CSIC - UC, 39005, Santander, Cantabria, Spain.
- Francisco Pérez-Rojas
- Grupo de Investigación MEXPA, Facultad de Ciencias de la Salud, Universidad Autónoma de Chile, Talca, Chile
- Grupo de Investigación SINPOS, Departamento de Morfología y Biología Celular, Universidad de Oviedo, 3306, Oviedo, Principality of Asturias, Spain
- Ignacio Heredia
- Advanced Computing Research Group, Institute of Physics of Cantabria (IFCA), CSIC - UC, 39005, Santander, Cantabria, Spain
- Lara Lloret Iglesias
- Advanced Computing Research Group, Institute of Physics of Cantabria (IFCA), CSIC - UC, 39005, Santander, Cantabria, Spain
- José A Vega
- Grupo de Investigación MEXPA, Facultad de Ciencias de la Salud, Universidad Autónoma de Chile, Talca, Chile
- Grupo de Investigación SINPOS, Departamento de Morfología y Biología Celular, Universidad de Oviedo, 3306, Oviedo, Principality of Asturias, Spain
94
Lee M. Recent Advances in Deep Learning for Protein-Protein Interaction Analysis: A Comprehensive Review. Molecules 2023; 28:5169. [PMID: 37446831 DOI: 10.3390/molecules28135169] [Citation(s) in RCA: 12] [Impact Index Per Article: 12.0] [Received: 05/30/2023] [Revised: 06/30/2023] [Accepted: 06/30/2023] [Indexed: 07/15/2023] Open
Abstract
Deep learning, a potent branch of artificial intelligence, is steadily leaving its transformative imprint across multiple disciplines. Within computational biology, it is expediting progress in the understanding of Protein-Protein Interactions (PPIs), key components governing a wide array of biological functionalities. Hence, an in-depth exploration of PPIs is crucial for decoding the intricate biological system dynamics and unveiling potential avenues for therapeutic interventions. As the deployment of deep learning techniques in PPI analysis proliferates at an accelerated pace, there exists an immediate demand for an exhaustive review that encapsulates and critically assesses these novel developments. Addressing this requirement, this review offers a detailed analysis of the literature from 2021 to 2023, highlighting the cutting-edge deep learning methodologies harnessed for PPI analysis. Thus, this review stands as a crucial reference for researchers in the discipline, presenting an overview of the recent studies in the field. This consolidation helps elucidate the dynamic paradigm of PPI analysis, the evolution of deep learning techniques, and their interdependent dynamics. This scrutiny is expected to serve as a vital aid for researchers, both well-established and newcomers, assisting them in maneuvering the rapidly shifting terrain of deep learning applications in PPI analysis.
Affiliation(s)
- Minhyeok Lee
- School of Electrical and Electronics Engineering, Chung-Ang University, Seoul 06974, Republic of Korea
95
Hansun S, Argha A, Alinejad-Rokny H, Liaw ST, Celler BG, Marks GB. Revisiting Transfer Learning Method for Tuberculosis Diagnosis. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083096 DOI: 10.1109/embc40787.2023.10340441] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 12/18/2023]
Abstract
Transfer learning (TL) has been proven to be a good strategy for solving domain-specific problems in many deep learning (DL) applications. Typically, in TL, a pre-trained DL model is used as a feature extractor and the extracted features are then fed to a newly trained classifier as the model head. In this study, we propose a new ensemble approach to transfer learning that uses multiple neural network classifiers at once in the model head. We compared the classification results of the proposed ensemble approach with the direct approach of several popular models, namely VGG-16, ResNet-50, and MobileNet, on two publicly available tuberculosis datasets, i.e., the Montgomery County (MC) and Shenzhen (SZ) datasets. Moreover, we also compared the results when a fully pre-trained DL model was used for feature extraction versus the cases in which the features were obtained from a middle layer of the pre-trained DL model. Several metrics derived from confusion matrix results were used, namely the accuracy (ACC), sensitivity (SNS), specificity (SPC), precision (PRC), and F1-score. We concluded that the proposed ensemble approach outperformed the direct approach. The best result, an accuracy of 91.2698% on the MC dataset, was achieved by ResNet-50 when the features were extracted from a middle layer. Clinical Relevance- The proposed ensemble approach could increase detection accuracy by 7-8% for the Montgomery County dataset and 4-5% for the Shenzhen dataset.
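The backbone-versus-head split this abstract describes — a frozen pre-trained feature extractor feeding a newly trained, here ensembled, classifier head — can be sketched with stand-ins. Everything below is an assumption for illustration, not the authors' setup: a fixed random projection plays the pre-trained CNN, perceptrons play the neural-network heads, and the data are toy 2-D points.

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen "pre-trained" feature extractor: a fixed random projection + ReLU
# stands in for a truncated CNN backbone such as ResNet-50.
W_frozen = rng.normal(size=(64, 2))
def extract(x):
    return np.maximum(W_frozen @ x, 0.0)

# Toy two-class data (well separated so the sketch trains reliably).
X = np.vstack([rng.normal(loc=[-3, 0], size=(50, 2)),
               rng.normal(loc=[+3, 0], size=(50, 2))])
y = np.array([0] * 50 + [1] * 50)
F = np.array([extract(x) for x in X])   # features; the backbone is never updated

def train_perceptron(F, y, epochs=30, seed=0):
    """One classifier head trained on the frozen features."""
    r = np.random.default_rng(seed)
    w, b = r.normal(scale=0.01, size=F.shape[1]), 0.0
    for _ in range(epochs):
        for f, t in zip(F, y):
            pred = 1 if f @ w + b > 0 else 0
            w, b = w + (t - pred) * f, b + (t - pred)
    return w, b

# Ensemble head: several differently seeded classifiers, majority-voted.
heads = [train_perceptron(F, y, seed=s) for s in range(3)]
def predict(x):
    votes = sum(1 if extract(x) @ w + b > 0 else 0 for w, b in heads)
    return int(votes >= 2)
```

Only the heads are trained; the extractor's weights stay fixed, which is what makes the approach cheap on small datasets like MC and SZ.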
96
Rydzewski NR, Helzer KT, Bootsma M, Shi Y, Bakhtiar H, Sjöström M, Zhao SG. Machine Learning & Molecular Radiation Tumor Biomarkers. Semin Radiat Oncol 2023; 33:243-251. [PMID: 37331779 PMCID: PMC10287033 DOI: 10.1016/j.semradonc.2023.03.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 06/20/2023]
Abstract
Developing radiation tumor biomarkers that can guide personalized radiotherapy clinical decision making is a critical goal in the effort towards precision cancer medicine. High-throughput molecular assays paired with modern computational techniques have the potential to identify individual tumor-specific signatures and create tools that can help understand heterogeneous patient outcomes in response to radiotherapy, allowing clinicians to fully benefit from the technological advances in molecular profiling and computational biology, including machine learning. However, the increasingly complex nature of the data generated from high-throughput and "omics" assays requires careful selection of analytical strategies. Furthermore, the power of modern machine learning techniques to detect subtle data patterns comes with special considerations to ensure that the results are generalizable. Herein, we review the computational framework of tumor biomarker development and describe commonly used machine learning approaches and how they are applied for radiation biomarker development using molecular data, as well as challenges and emerging research trends.
Affiliation(s)
- Nicholas R Rydzewski
- Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD; Department of Human Oncology, University of Wisconsin, Madison, WI
- Kyle T Helzer
- Department of Human Oncology, University of Wisconsin, Madison, WI
- Matthew Bootsma
- Department of Human Oncology, University of Wisconsin, Madison, WI
- Yue Shi
- Department of Human Oncology, University of Wisconsin, Madison, WI
- Hamza Bakhtiar
- Department of Human Oncology, University of Wisconsin, Madison, WI
- Martin Sjöström
- Department of Radiation Oncology, University of California San Francisco, San Francisco, CA
- Shuang G Zhao
- Department of Human Oncology, University of Wisconsin, Madison, WI; Carbone Cancer Center, University of Wisconsin, Madison, WI; William S. Middleton Memorial Veterans Hospital, Madison, WI.
97
Kim T, Moon NH, Goh TS, Jung ID. Detection of incomplete atypical femoral fracture on anteroposterior radiographs via explainable artificial intelligence. Sci Rep 2023; 13:10415. [PMID: 37369833 PMCID: PMC10300092 DOI: 10.1038/s41598-023-37560-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/05/2022] [Accepted: 06/23/2023] [Indexed: 06/29/2023] Open
Abstract
One of the key aspects of the diagnosis and treatment of atypical femoral fractures is the early detection of incomplete fractures and the prevention of their progression to complete fractures. However, an incomplete atypical femoral fracture can be misdiagnosed as a normal lesion by both primary care physicians and orthopedic surgeons; expert consultation is needed for accurate diagnosis. To overcome this limitation, we developed a transfer learning-based ensemble model to detect and localize fractures. A total of 1050 radiographs, including 100 incomplete fractures, were preprocessed by applying a Sobel filter. Six models (EfficientNet B5, B6, B7, DenseNet 121, MobileNet V1, and V2) were selected for transfer learning. We then composed two ensemble models; the first was based on the three models having the highest accuracy, and the second was based on the five models having the highest accuracy. The ensemble of the three most accurate models achieved the highest area under the curve (AUC), at 0.998. This study demonstrates that an ensemble of transfer-learning-based models can accurately classify and detect fractures, even in an imbalanced dataset. This artificial intelligence (AI)-assisted diagnostic application could support decision-making and reduce the workload of clinicians with its high speed and accuracy.
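The Sobel preprocessing step mentioned above replaces each radiograph with an edge-emphasizing gradient-magnitude map before the networks see it. A minimal dependency-light version, applied here to a toy step-edge image rather than a radiograph (the input is assumed to be a plain 2-D grayscale array):

```python
import numpy as np

# Sobel kernels for horizontal and vertical intensity gradients.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
KY = KX.T

def sobel_magnitude(img):
    """Gradient-magnitude map of a 2-D grayscale image (edges emphasized),
    using 'valid' windows, so the output is 2 pixels smaller per axis."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx = np.sum(KX * patch)      # horizontal gradient
            gy = np.sum(KY * patch)      # vertical gradient
            out[i, j] = np.hypot(gx, gy)
    return out

# A toy image with a vertical step edge between columns 3 and 4.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_magnitude(img)   # nonzero only along the step edge
```

On a radiograph the same map suppresses flat soft-tissue regions and highlights cortical bone contours, which is presumably why it helps fracture classifiers; that rationale is not spelled out in the abstract.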
Affiliation(s)
- Taekyeong Kim
- Department of Mechanical Engineering, Ulsan National Institute of Science and Technology, Ulsan, 44919, Republic of Korea
- Nam Hoon Moon
- Department of Orthopaedic Surgery, Biomedical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Busan, 49241, Republic of Korea
- Tae Sik Goh
- Department of Orthopaedic Surgery, Biomedical Research Institute, Pusan National University Hospital, Pusan National University School of Medicine, Busan, 49241, Republic of Korea
- Im Doo Jung
- Department of Mechanical Engineering, Ulsan National Institute of Science and Technology, Ulsan, 44919, Republic of Korea.
98
Wang M, Sushil M, Miao BY, Butte AJ. Bottom-up and top-down paradigms of artificial intelligence research approaches to healthcare data science using growing real-world big data. J Am Med Inform Assoc 2023; 30:1323-1332. [PMID: 37187158 PMCID: PMC10280344 DOI: 10.1093/jamia/ocad085] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Received: 03/22/2023] [Revised: 04/03/2023] [Accepted: 05/04/2023] [Indexed: 05/17/2023] Open
Abstract
OBJECTIVES As real-world electronic health record (EHR) data continue to grow exponentially, novel methodologies involving artificial intelligence (AI) are becoming increasingly applied to enable efficient data-driven learning and, ultimately, to advance healthcare. Our objective is to provide readers with an understanding of evolving computational methods and help in deciding on methods to pursue. TARGET AUDIENCE The sheer diversity of existing methods presents a challenge for health scientists who are beginning to apply computational methods to their research. Therefore, this tutorial is aimed at scientists working with EHR data who are early entrants into the field of applying AI methodologies. SCOPE This manuscript describes the diverse and growing AI research approaches in healthcare data science and categorizes them into two distinct paradigms, bottom-up and top-down, to provide health scientists venturing into AI research with an understanding of the evolving computational methods and to help them decide which methods to pursue through the lens of real-world healthcare data.
Affiliation(s)
- Michelle Wang
- Bakar Computational Health Sciences Institute, University of California, San Francisco, San Francisco, California, USA
- Madhumita Sushil
- Bakar Computational Health Sciences Institute, University of California, San Francisco, San Francisco, California, USA
- Brenda Y Miao
- Bakar Computational Health Sciences Institute, University of California, San Francisco, San Francisco, California, USA
- Atul J Butte
- Bakar Computational Health Sciences Institute, University of California, San Francisco, San Francisco, California, USA
- Department of Pediatrics, University of California, San Francisco, San Francisco, California, USA
99
Eshraghi MA, Ayatollahi A, Shokouhi SB. COV-MobNets: a mobile networks ensemble model for diagnosis of COVID-19 based on chest X-ray images. BMC Med Imaging 2023; 23:83. [PMID: 37322450 PMCID: PMC10273540 DOI: 10.1186/s12880-023-01039-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/14/2023] [Accepted: 06/01/2023] [Indexed: 06/17/2023] Open
Abstract
BACKGROUND The medical profession is facing an excessive workload, which has led to the development of various Computer-Aided Diagnosis (CAD) systems as well as Mobile-Aid Diagnosis (MAD) systems. These technologies enhance the speed and accuracy of diagnoses, particularly in areas with limited resources or remote regions during the pandemic. The primary purpose of this research is to predict and diagnose COVID-19 infection from chest X-ray images by developing a mobile-friendly deep learning framework, which has the potential for deployment in portable devices such as mobile phones or tablets, especially in situations where the workload of radiology specialists may be high. Moreover, this could improve the accuracy and transparency of population screening to assist radiologists during the pandemic. METHODS In this study, the Mobile Networks ensemble model called COV-MobNets is proposed to classify positive COVID-19 X-ray images from negative ones and can have an assistant role in diagnosing COVID-19. The proposed model is an ensemble model, combining two lightweight and mobile-friendly models: MobileViT, based on a transformer structure, and MobileNetV3, based on a Convolutional Neural Network. Hence, COV-MobNets can extract the features of chest X-ray images in two different ways to achieve better and more accurate results. In addition, data augmentation techniques were applied to the dataset to avoid overfitting during the training process. The COVIDx-CXR-3 benchmark dataset was used for training and evaluation. RESULTS The classification accuracy of the improved MobileViT and MobileNetV3 models on the test set reached 92.5% and 97%, respectively, while the accuracy of the proposed model (COV-MobNets) reached 97.75%. The sensitivity and specificity of the proposed model reached 98.5% and 97%, respectively. Experimental comparison shows that the results are more accurate and balanced than those of other methods.
CONCLUSION The proposed method can distinguish between positive and negative COVID-19 cases more accurately and quickly. The proposed method demonstrates that utilizing two automatic feature extractors with different structures as an overall framework for COVID-19 diagnosis can lead to improved performance, enhanced accuracy, and better generalization to new or unseen data. As a result, the proposed framework in this study can be used as an effective method for computer-aided diagnosis and mobile-aided diagnosis of COVID-19. The code is publicly available at https://github.com/MAmirEshraghi/COV-MobNets.
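Combining a transformer-based and a CNN-based classifier typically comes down to fusing their predicted probabilities. A minimal soft-voting sketch of that fusion step (the probabilities are made up, and averaging is one common rule; the paper's exact combination scheme is not stated in the abstract):

```python
# Per-image P(COVID-positive) from two hypothetical models (illustrative
# numbers only, not COV-MobNets outputs).
p_vit = [0.91, 0.20, 0.65, 0.08]   # transformer-based model
p_cnn = [0.85, 0.35, 0.40, 0.12]   # CNN-based model

# Soft voting: average the probabilities, then threshold at 0.5.
p_ensemble = [(a + b) / 2 for a, b in zip(p_vit, p_cnn)]
labels = [int(p >= 0.5) for p in p_ensemble]
```

Averaging lets one model's confident, correct score outvote the other's borderline error (the third image above flips positive only in the ensemble), which is the "better and more accurate results" mechanism the abstract alludes to.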
Affiliation(s)
- Mohammad Amir Eshraghi
- School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
- Ahmad Ayatollahi
- School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
100
Petäinen L, Väyrynen JP, Ruusuvuori P, Pölönen I, Äyrämö S, Kuopio T. Domain-specific transfer learning in the automated scoring of tumor-stroma ratio from histopathological images of colorectal cancer. PLoS One 2023; 18:e0286270. [PMID: 37235626 DOI: 10.1371/journal.pone.0286270] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/03/2023] [Accepted: 05/11/2023] [Indexed: 05/28/2023] Open
Abstract
Tumor-stroma ratio (TSR) is a prognostic factor for many types of solid tumors. In this study, we propose a method for automated estimation of TSR from histopathological images of colorectal cancer. The method is based on convolutional neural networks, which were trained to classify colorectal cancer tissue in hematoxylin-eosin stained samples into three classes: stroma, tumor, and other. The models were trained using a data set that consists of 1343 whole slide images. Three different training setups were applied with a transfer learning approach using domain-specific data, i.e., an external colorectal cancer histopathological data set. The three most accurate models were chosen as classifiers, TSR values were predicted, and the results were compared to a visual TSR estimation made by a pathologist. The results suggest that classification accuracy does not improve when domain-specific data are used in the pre-training of the convolutional neural network models in the task at hand. Classification accuracy for stroma, tumor, and other reached 96.1% on an independent test set. Among the three classes, the best model achieved the highest accuracy (99.3%) for the tumor class. When TSR was predicted with the best model, the correlation between the predicted values and values estimated by an experienced pathologist was 0.57. Further research is needed to study associations between computationally predicted TSR values and other clinicopathological factors of colorectal cancer and the overall survival of the patients.
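Once tiles are classified into stroma, tumor, and other, turning the counts into a TSR estimate is a one-line ratio. A sketch under one common convention, with non-informative "other" tiles excluded (the counts are made up, and TSR definitions vary between studies, so this is an illustration rather than the paper's exact formula):

```python
# Per-tile labels produced by a tumor/stroma/other classifier
# (made-up counts for one hypothetical slide).
tiles = ["tumor"] * 430 + ["stroma"] * 570 + ["other"] * 200

n_tumor = tiles.count("tumor")
n_stroma = tiles.count("stroma")

# Stroma share of the informative (tumor + stroma) tissue; "other" tiles
# such as background, mucus, or necrosis are excluded from the denominator.
tsr = n_stroma / (n_tumor + n_stroma)
```

With these counts TSR is 0.57; in practice the per-slide value would then be compared against a prognostic cutoff (often around 50% stroma) or correlated with a pathologist's visual estimate, as done in the study.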
Affiliation(s)
- Liisa Petäinen
- Faculty of Information Technology, University of Jyväskylä, Jyväskylä, Finland
- Juha P Väyrynen
- Cancer and Translational Medicine Research Unit, Medical Research Center, Oulu University Hospital, and University of Oulu, Oulu, Finland
- Pekka Ruusuvuori
- Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Cancer Research Unit, Institute of Biomedicine, University of Turku, Turku, Finland
- FICAN West Cancer Centre, Turku University Hospital, Turku, Finland
- Ilkka Pölönen
- Faculty of Information Technology, University of Jyväskylä, Jyväskylä, Finland
- Sami Äyrämö
- Faculty of Information Technology, University of Jyväskylä, Jyväskylä, Finland
- Teijo Kuopio
- Department of Education and Research, Hospital Nova of Central Finland, Jyväskylä, Finland
- Department of Biological and Environmental Science, University of Jyväskylä, Jyväskylä, Finland
- Department of Pathology, Hospital Nova of Central Finland, Jyväskylä, Finland