51
Håkansson S, Tuci M, Bolliger M, Curt A, Jutzeler CR, Brüningk SC. Data-driven prediction of spinal cord injury recovery: An exploration of current status and future perspectives. Exp Neurol 2024; 380:114913. [PMID: 39097073 DOI: 10.1016/j.expneurol.2024.114913]
Abstract
Spinal Cord Injury (SCI) presents a significant challenge in rehabilitation medicine, with recovery outcomes varying widely among individuals. Machine learning (ML) is a promising approach to enhance the prediction of recovery trajectories, but its integration into clinical practice requires a thorough understanding of its efficacy and applicability. We systematically reviewed the current literature on data-driven models of SCI recovery prediction. The included studies were evaluated based on a range of criteria assessing the approach, implementation, input data preferences, and the clinical outcomes they aimed to forecast. We observe a tendency to utilize routinely acquired data, such as International Standards for Neurological Classification of SCI (ISNCSCI), imaging, and demographics, for the prediction of functional outcomes derived from the Spinal Cord Independence Measure (SCIM) III and Functional Independence Measure (FIM) scores, with a focus on motor ability. Although there has been increasing interest in data-driven studies over time, traditional machine learning architectures, such as linear regression and tree-based approaches, have remained the overwhelmingly popular choices for implementation. This implies ample opportunities for exploring architectures addressing the challenges of predicting SCI recovery, including techniques for learning from limited longitudinal data, improving generalizability, and enhancing reproducibility. We conclude with a perspective, highlighting possible future directions for data-driven SCI recovery prediction and drawing parallels to other application fields in terms of diverse data types (imaging, tabular, sequential, multimodal), data challenges (limited, missing, longitudinal data), and algorithmic needs (causal inference, robustness).
Affiliation(s)
- Samuel Håkansson
- ETH Zürich, Department of Health Sciences and Technology (D-HEST), Zürich, Switzerland; Swiss Institute of Bioinformatics (SIB), Lausanne, Switzerland
- Miklovana Tuci
- ETH Zürich, Department of Health Sciences and Technology (D-HEST), Zürich, Switzerland; Spinal Cord Injury Center, University Hospital Balgrist, University of Zürich, Switzerland
- Marc Bolliger
- Spinal Cord Injury Center, University Hospital Balgrist, University of Zürich, Switzerland
- Armin Curt
- Spinal Cord Injury Center, University Hospital Balgrist, University of Zürich, Switzerland
- Catherine R Jutzeler
- ETH Zürich, Department of Health Sciences and Technology (D-HEST), Zürich, Switzerland; Swiss Institute of Bioinformatics (SIB), Lausanne, Switzerland
- Sarah C Brüningk
- ETH Zürich, Department of Health Sciences and Technology (D-HEST), Zürich, Switzerland; Swiss Institute of Bioinformatics (SIB), Lausanne, Switzerland

52
Wang J, Tong J, Li J, Cao C, Wang S, Bi T, Zhu P, Shi L, Deng Y, Ma T, Hou J, Cui X. Using the GoogLeNet deep-learning model to distinguish between benign and malignant breast masses based on conventional ultrasound: a systematic review and meta-analysis. Quant Imaging Med Surg 2024; 14:7111-7127. [PMID: 39429606 PMCID: PMC11485374 DOI: 10.21037/qims-24-679]
Abstract
Background Breast cancer is one of the most common malignancies in women worldwide, and early and accurate diagnosis is crucial for improving treatment outcomes. Conventional ultrasound (CUS) is a widely used screening method for breast cancer; however, the subjective nature of interpreting the results can lead to diagnostic errors. The current study sought to estimate the effectiveness of using a GoogLeNet deep-learning convolutional neural network (CNN) model to identify benign and malignant breast masses based on CUS. Methods A literature search was conducted of the Embase, PubMed, Web of Science, Wanfang, China National Knowledge Infrastructure (CNKI), and other databases to retrieve studies related to GoogLeNet deep-learning CUS-based models published before July 15, 2023. The diagnostic performance of the GoogLeNet models was evaluated using several metrics, including pooled sensitivity (PSEN), pooled specificity (PSPE), the positive likelihood ratio (PLR), the negative likelihood ratio (NLR), the diagnostic odds ratio (DOR), and the area under the curve (AUC). The quality of the included studies was evaluated using the Quality Assessment of Diagnostic Accuracy Studies Scale (QUADAS). Two authors independently conducted the literature search and assessed the eligibility of the included studies. Results All 12 studies that used pathological findings as the gold standard were included in the meta-analysis. The pooled sensitivity and specificity were 0.85 [95% confidence interval (CI): 0.80-0.89] and 0.86 (95% CI: 0.78-0.92), respectively. The PLR and NLR were 6.2 (95% CI: 3.9-9.9) and 0.17 (95% CI: 0.12-0.23), respectively. The DOR was 37.06 (95% CI: 20.78-66.10). The AUC was 0.92 (95% CI: 0.89-0.94). No obvious publication bias was detected. Conclusions The GoogLeNet deep-learning model, which uses a CNN, achieved good diagnostic results in distinguishing between benign and malignant breast masses in CUS-based images.
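As a quick check on how these pooled metrics relate to one another, the sketch below derives PLR, NLR, and DOR from the pooled sensitivity and specificity. Because the review pools each metric with its own meta-analytic model, the back-calculated values only approximate the reported ones.

```python
# Relationships between the diagnostic metrics pooled in this meta-analysis.
def likelihood_ratios(sensitivity: float, specificity: float):
    """Return (positive LR, negative LR, diagnostic odds ratio)."""
    plr = sensitivity / (1.0 - specificity)   # how much a positive test raises the odds
    nlr = (1.0 - sensitivity) / specificity   # how much a negative test lowers the odds
    dor = plr / nlr                           # overall discriminatory power
    return plr, nlr, dor

plr, nlr, dor = likelihood_ratios(0.85, 0.86)  # pooled estimates from the review
print(f"PLR={plr:.1f}, NLR={nlr:.2f}, DOR={dor:.1f}")
# ≈ PLR 6.1, NLR 0.17, DOR 34.8 — close to the reported 6.2 / 0.17 / 37.06
```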
Affiliation(s)
- Jinli Wang
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Jin Tong
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Jun Li
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Chunli Cao
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Sirui Wang
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Tianyu Bi
- School of Business Administration, Lanzhou University of Finance and Economics, Lanzhou, China
- Peishan Zhu
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Linan Shi
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Yaqian Deng
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Ting Ma
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Jixue Hou
- Department of Ultrasound, the First Affiliated Hospital of Medical College, Shihezi University, Shihezi, China
- Xinwu Cui
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China

53
Kudus K, Wagner M, Ertl-Wagner BB, Khalvati F. Applications of machine learning to MR imaging of pediatric low-grade gliomas. Childs Nerv Syst 2024; 40:3027-3035. [PMID: 38972953 DOI: 10.1007/s00381-024-06522-5]
Abstract
INTRODUCTION Machine learning (ML) shows promise for the automation of routine tasks related to the treatment of pediatric low-grade gliomas (pLGG) such as tumor grading, typing, and segmentation. Moreover, it has been shown that ML can identify crucial information from medical images that is otherwise currently unattainable. For example, ML appears to be capable of preoperatively identifying the underlying genetic status of pLGG. METHODS In this chapter, we reviewed, to the best of our knowledge, all published works that have used ML techniques for the imaging-based evaluation of pLGGs. Additionally, we aimed to provide some context on what it will take to go from the exploratory studies we reviewed to clinically deployed models. RESULTS Multiple studies have demonstrated that ML can accurately grade, type, and segment pLGGs, as well as detect their genetic status. We compared the approaches used between the different studies and observed a high degree of variability throughout the methodologies. Standardization and cooperation between the numerous groups working on these approaches will be key to accelerating the clinical deployment of these models. CONCLUSION The studies reviewed in this chapter detail the potential for ML techniques to transform the treatment of pLGG. However, there are still challenges that need to be overcome prior to clinical deployment.
Affiliation(s)
- Kareem Kudus
- Neurosciences & Mental Health Research Program, The Hospital for Sick Children, Toronto, Canada
- Institute of Medical Science, University of Toronto, Toronto, Canada
- Matthias Wagner
- Department of Diagnostic & Interventional Radiology, The Hospital for Sick Children, Toronto, Canada
- Department of Diagnostic and Interventional Neuroradiology, University Hospital Augsburg, Augsburg, Germany
- Birgit Betina Ertl-Wagner
- Neurosciences & Mental Health Research Program, The Hospital for Sick Children, Toronto, Canada
- Institute of Medical Science, University of Toronto, Toronto, Canada
- Department of Diagnostic & Interventional Radiology, The Hospital for Sick Children, Toronto, Canada
- Department of Medical Imaging, University of Toronto, Toronto, Canada
- Farzad Khalvati
- Neurosciences & Mental Health Research Program, The Hospital for Sick Children, Toronto, Canada
- Institute of Medical Science, University of Toronto, Toronto, Canada
- Department of Diagnostic & Interventional Radiology, The Hospital for Sick Children, Toronto, Canada
- Department of Medical Imaging, University of Toronto, Toronto, Canada
- Department of Computer Science, University of Toronto, Toronto, Canada
- Department of Mechanical and Industrial Engineering, University of Toronto, Toronto, Canada

54
Sood T, Khandnor P, Bhatia R. Enhancing pap smear image classification: integrating transfer learning and attention mechanisms for improved detection of cervical abnormalities. Biomed Phys Eng Express 2024; 10:065031. [PMID: 39377445 DOI: 10.1088/2057-1976/ad7bc0]
Abstract
Cervical cancer remains a major global health challenge, accounting for significant morbidity and mortality among women. Early detection through screening, such as Pap smear tests, is crucial for effective treatment and improved patient outcomes. However, traditional manual analysis of Pap smear images is labor-intensive, subject to human error, and requires extensive expertise. To address these challenges, automated approaches using deep learning techniques have been increasingly explored, offering the potential for enhanced diagnostic accuracy and efficiency. This research focuses on improving cervical cancer detection from Pap smear images using advanced deep-learning techniques. Specifically, we aim to enhance classification performance by leveraging Transfer Learning (TL) combined with an attention mechanism, supplemented by effective preprocessing techniques. Our preprocessing pipeline includes image normalization, resizing, and the application of Histogram of Oriented Gradients (HOG), all of which contribute to better feature extraction and improved model performance. The dataset used in this study is the Mendeley Liquid-Based Cytology (LBC) dataset, which provides a comprehensive collection of cervical cytology images annotated by expert cytopathologists. Initial experiments with the ResNet model on raw data yielded an accuracy of 63.95%. However, by applying our preprocessing techniques and integrating an attention mechanism, the accuracy of the ResNet model increased dramatically to 96.74%. Further, the Xception model, known for its superior feature extraction capabilities, achieved the best performance with an accuracy of 98.95%, along with high precision (0.97), recall (0.99), and F1-Score (0.98) on preprocessed data with an attention mechanism. These results underscore the effectiveness of combining preprocessing techniques, TL, and attention mechanisms to significantly enhance the performance of automated cervical cancer detection systems. Our findings demonstrate the potential of these advanced techniques to provide reliable, accurate, and efficient diagnostic tools, which could greatly benefit clinical practice and improve patient outcomes in cervical cancer screening.
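The preprocessing pipeline (normalization, resizing, HOG) and the attention mechanism described above can be sketched as follows. The HOG parameters and the squeeze-and-excitation form of the attention block are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
import torch
import torch.nn as nn
from skimage import exposure, transform
from skimage.feature import hog

def preprocess_pap_smear(image: np.ndarray, size=(224, 224)) -> np.ndarray:
    """Resize, min-max normalize, and render a HOG image of a grayscale slide."""
    img = transform.resize(image, size, anti_aliasing=True)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)
    _, hog_image = hog(img, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2), visualize=True)
    return exposure.rescale_intensity(hog_image, out_range=(0, 1))

class SEAttention(nn.Module):
    """Squeeze-and-excitation channel attention, one common attention choice."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))      # squeeze: global average pooling
        return x * w[:, :, None, None]       # excite: reweight feature channels
```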
Affiliation(s)
- Tamanna Sood
- CSE Department, Punjab Engineering College (Deemed to be University), Chandigarh, India
- Padmavati Khandnor
- CSE Department, Punjab Engineering College (Deemed to be University), Chandigarh, India
- Rajesh Bhatia
- CSE Department, Punjab Engineering College (Deemed to be University), Chandigarh, India

55
Gautam P, Singh M. 3-1-3 Weight averaging technique-based performance evaluation of deep neural networks for Alzheimer's disease detection using structural MRI. Biomed Phys Eng Express 2024; 10:065027. [PMID: 39178890 DOI: 10.1088/2057-1976/ad72f7]
Abstract
Alzheimer's disease (AD) is a progressive neurological disorder. It is identified by the gradual shrinkage of the brain and the loss of brain cells. This leads to cognitive decline and impaired social functioning, making it a major contributor to dementia. While there are no treatments to reverse AD's progression, spotting the disease's onset can have a significant impact in the medical field. Deep learning (DL) has revolutionized medical image classification by automating feature engineering, removing the requirement for human experts in feature extraction. DL-based solutions are highly accurate but demand a lot of training data, which poses a common challenge. Transfer learning (TL) has gained attention for its knack for handling limited data and expediting model training. This study uses TL to classify AD using T1-weighted 3D Magnetic Resonance Imaging (MRI) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Four modified pre-trained deep neural networks (DNN), VGG16, MobileNet, DenseNet121, and NASNetMobile, are trained and evaluated on the ADNI dataset. The 3-1-3 weight averaging technique and fine-tuning improve the performance of the classification models. The evaluated accuracies for AD classification are VGG16: 98.75%; MobileNet: 97.5%; DenseNet: 97.5%; and NASNetMobile: 96.25%. The receiver operating characteristic (ROC), precision-recall (PR), and Kolmogorov-Smirnov (KS) statistic plots validate the effectiveness of the modified pre-trained model. Modified VGG16 excels with area under the curve (AUC) values of 0.99 for ROC and 0.998 for PR curves. The proposed approach shows effective AD classification by achieving high accuracy using the 3-1-3 weight averaging technique and fine-tuning.
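The abstract does not spell out the 3-1-3 scheme itself, so the sketch below shows generic checkpoint weight averaging (an element-wise mean of saved state_dicts, as in stochastic weight averaging); the checkpoint paths and the two-class VGG16 head are placeholders.

```python
import torch
from torchvision.models import vgg16

def average_checkpoints(paths):
    """Average the state_dicts stored at `paths` into one set of weights."""
    avg = None
    for p in paths:
        sd = torch.load(p, map_location="cpu")
        if avg is None:
            avg = {k: v.clone().float() for k, v in sd.items()}
        else:
            for k in avg:
                avg[k] += sd[k].float()
    return {k: v / len(paths) for k, v in avg.items()}

model = vgg16(num_classes=2)  # assumed AD-vs-control head
model.load_state_dict(average_checkpoints(["ep1.pt", "ep2.pt", "ep3.pt"]))  # placeholder paths
```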
Affiliation(s)
- Priyanka Gautam
- ECE Department, Dr B R Ambedkar National Institute of Technology, Jalandhar, Punjab, India
- Manjeet Singh
- ECE Department, Dr B R Ambedkar National Institute of Technology, Jalandhar, Punjab, India

56
Lee A, Ong W, Makmur A, Ting YH, Tan WC, Lim SWD, Low XZ, Tan JJH, Kumar N, Hallinan JTPD. Applications of Artificial Intelligence and Machine Learning in Spine MRI. Bioengineering (Basel) 2024; 11:894. [PMID: 39329636 PMCID: PMC11428307 DOI: 10.3390/bioengineering11090894]
Abstract
Diagnostic imaging, particularly MRI, plays a key role in the evaluation of many spine pathologies. Recent progress in artificial intelligence and its subset, machine learning, has led to many applications within spine MRI, which we sought to examine in this review. A literature search of the major databases (PubMed, MEDLINE, Web of Science, ClinicalTrials.gov) was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The search yielded 1226 results, of which 50 studies were selected for inclusion. Key data from these studies were extracted. Studies were categorized thematically into the following: Image Acquisition and Processing, Segmentation, Diagnosis and Treatment Planning, and Patient Selection and Prognostication. Gaps in the literature and the proposed areas of future research are discussed. Current research demonstrates the ability of artificial intelligence to improve various aspects of this field, from image acquisition to analysis and clinical care. We also acknowledge the limitations of current technology. Future work will require collaborative efforts in order to fully exploit new technologies while addressing the practical challenges of generalizability and implementation. In particular, the use of foundation models and large language models in spine MRI is a promising area, warranting further research. Studies assessing model performance in real-world clinical settings will also help uncover unintended consequences and maximize the benefits for patient care.
Affiliation(s)
- Aric Lee
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Wilson Ong
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Andrew Makmur
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Yong Han Ting
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Wei Chuan Tan
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Shi Wei Desmond Lim
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Xi Zhen Low
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore
- Jonathan Jiong Hao Tan
- National University Spine Institute, Department of Orthopaedic Surgery, National University Health System, 1E Lower Kent Ridge Road, Singapore 119228, Singapore
- Naresh Kumar
- National University Spine Institute, Department of Orthopaedic Surgery, National University Health System, 1E Lower Kent Ridge Road, Singapore 119228, Singapore
- James T P D Hallinan
- Department of Diagnostic Imaging, National University Hospital, 5 Lower Kent Ridge Rd, Singapore 119074, Singapore
- Department of Diagnostic Radiology, Yong Loo Lin School of Medicine, National University of Singapore, 10 Medical Drive, Singapore 117597, Singapore

57
Zhang Y, Zhao J, Li Z, Yang M, Ye Z. Preoperative prediction of renal fibrous capsule invasion in clear cell renal cell carcinoma using CT-based radiomics model. Br J Radiol 2024; 97:1557-1567. [PMID: 38897659 PMCID: PMC11332665 DOI: 10.1093/bjr/tqae122]
Abstract
OBJECTIVES To develop radiomics-based classifiers for the preoperative prediction of fibrous capsule invasion in renal cell carcinoma (RCC) patients using CT images. METHODS In this study, clear cell RCC (ccRCC) patients who underwent both preoperative abdominal contrast-enhanced CT and nephrectomy surgery at our hospital were analysed. Using transfer learning, we used a base model obtained from the Kidney Tumour Segmentation challenge dataset to semi-automatically segment kidney and tumours from corticomedullary phase (CMP) CT images. The Dice similarity coefficient (DSC) was measured to evaluate the performance of the segmentation models. Ten machine learning classifiers were compared in our study. Performance of the models was assessed by their accuracy, precision, recall, and area under the receiver operating characteristic curve (AUC). The reporting and methodological quality of our study was assessed by the CLEAR checklist and METRICS score. RESULTS This retrospective study enrolled 163 ccRCC patients. The semiautomatic segmentation model using CMP CT images obtained DSCs of 0.98 in the training cohort and 0.96 in the test cohort for kidney segmentation, and DSCs of 0.94 and 0.86 for tumour segmentation in the training and test sets, respectively. For the preoperative prediction of renal capsule invasion, AdaBoost had the best performance in batch 1, with accuracy, precision, recall, and F1-score equal to 0.8571, 0.8333, 0.9091, and 0.8696, respectively; the same classifier was also the most suitable for this classification in batch 2. The AUCs of AdaBoost for batch 1 and batch 2 were 0.83 (95% CI: 0.68-0.98) and 0.74 (95% CI: 0.51-0.97), respectively. Nine common significant features for classification, including morphological and texture features, were found across the 2 independent batch datasets. CONCLUSIONS The CT-based radiomics classifiers performed well for the preoperative prediction of fibrous capsule invasion in ccRCC. ADVANCES IN KNOWLEDGE Noninvasive prediction of renal fibrous capsule invasion in RCC is difficult with abdominal CT images before surgery. A machine learning classifier integrated with radiomics features shows promising potential to assist in selecting surgical treatment options for RCC patients.
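A minimal sketch of the classification stage follows, with a placeholder feature matrix standing in for the paper's extracted radiomics features; the split ratio and hyperparameters are illustrative, not the study's settings.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

X = np.random.rand(163, 100)      # placeholder: 163 patients, 100 radiomics features
y = np.random.randint(0, 2, 163)  # placeholder: fibrous capsule invasion labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

pred = clf.predict(X_te)
score = clf.predict_proba(X_te)[:, 1]  # probability of capsule invasion
print(accuracy_score(y_te, pred), precision_score(y_te, pred),
      recall_score(y_te, pred), f1_score(y_te, pred), roc_auc_score(y_te, score))
```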
Affiliation(s)
- Yaodan Zhang
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- National Clinical Research Center for Cancer, Tianjin, China
- Tianjin’s Clinical Research Center for Cancer, Tianjin, China
- Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Jinkun Zhao
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- National Clinical Research Center for Cancer, Tianjin, China
- Tianjin’s Clinical Research Center for Cancer, Tianjin, China
- Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Zhijun Li
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- National Clinical Research Center for Cancer, Tianjin, China
- Tianjin’s Clinical Research Center for Cancer, Tianjin, China
- Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Meng Yang
- National Clinical Research Center for Cancer, Tianjin, China
- Tianjin’s Clinical Research Center for Cancer, Tianjin, China
- Key Laboratory of Cancer Prevention and Therapy, Tianjin, China
- Tianjin Cancer Institute, Tianjin, China
- Key Laboratory of Molecular Cancer Epidemiology of Tianjin, Tianjin, China
- Tianjin Medical University, Tianjin, China
- Zhaoxiang Ye
- Department of Radiology, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- National Clinical Research Center for Cancer, Tianjin, China
- Tianjin’s Clinical Research Center for Cancer, Tianjin, China
- Key Laboratory of Cancer Prevention and Therapy, Tianjin, China

58
Huynh BN, Groendahl AR, Tomic O, Liland KH, Knudtsen IS, Hoebers F, van Elmpt W, Dale E, Malinen E, Futsaether CM. Deep learning with uncertainty estimation for automatic tumor segmentation in PET/CT of head and neck cancers: impact of model complexity, image processing and augmentation. Biomed Phys Eng Express 2024; 10:055038. [PMID: 39127060 DOI: 10.1088/2057-1976/ad6dcd]
Abstract
Objective. Target volumes for radiotherapy are usually contoured manually, which can be time-consuming and prone to inter- and intra-observer variability. Automatic contouring by convolutional neural networks (CNN) can be fast and consistent but may produce unrealistic contours or miss relevant structures. We evaluate approaches for increasing the quality and assessing the uncertainty of CNN-generated contours of head and neck cancers with PET/CT as input. Approach. Two patient cohorts with head and neck squamous cell carcinoma and baseline 18F-fluorodeoxyglucose positron emission tomography and computed tomography images (FDG-PET/CT) were collected retrospectively from two centers. The union of manual contours of the gross primary tumor and involved nodes was used to train CNN models for generating automatic contours. The impact of image preprocessing, image augmentation, transfer learning and CNN complexity, architecture, and dimension (2D or 3D) on model performance and generalizability across centers was evaluated. A Monte Carlo dropout technique was used to quantify and visualize the uncertainty of the automatic contours. Main results. CNN models provided contours with good overlap with the manually contoured ground truth (median Dice Similarity Coefficient: 0.75-0.77), consistent with reported inter-observer variations and previous auto-contouring studies. Image augmentation and model dimension, rather than model complexity, architecture, or advanced image preprocessing, had the largest impact on model performance and cross-center generalizability. Transfer learning on a limited number of patients from a separate center increased model generalizability without decreasing model performance on the original training cohort. High model uncertainty was associated with false positive and false negative voxels as well as low Dice coefficients. Significance. High-quality automatic contours can be obtained using deep learning architectures that are not overly complex. Uncertainty estimation of the predicted contours shows potential for highlighting regions of the contour requiring manual revision or flagging segmentations requiring manual inspection and intervention.
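The Monte Carlo dropout technique mentioned above can be sketched as follows: dropout layers stay active at inference, several stochastic forward passes are collected, and the voxel-wise variance serves as the uncertainty map. The sigmoid output and the number of passes are assumptions, and `model` stands for any segmentation network containing dropout layers.

```python
import torch

def mc_dropout_predict(model: torch.nn.Module, x: torch.Tensor, T: int = 20):
    """Return (mean probability map, voxel-wise variance) over T stochastic passes."""
    model.eval()
    for m in model.modules():  # re-enable only the dropout layers at test time
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d, torch.nn.Dropout3d)):
            m.train()
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(x)) for _ in range(T)])
    return probs.mean(dim=0), probs.var(dim=0)  # mean contour, uncertainty map
```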
Affiliation(s)
- Bao Ngoc Huynh
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Aurora Rosvoll Groendahl
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Section of Oncology, Vestre Viken Hospital Trust, Drammen, Norway
- Oliver Tomic
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Kristian Hovde Liland
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Ingerid Skjei Knudtsen
- Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, Norway
- Frank Hoebers
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Reproduction, Maastricht, Netherlands
- Wouter van Elmpt
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Reproduction, Maastricht, Netherlands
- Einar Dale
- Department of Oncology, Oslo University Hospital, Oslo, Norway
- Eirik Malinen
- Department of Medical Physics, Oslo University Hospital, Oslo, Norway
- Department of Physics, University of Oslo, Oslo, Norway

59
Adachi M, Fujioka T, Ishiba T, Nara M, Maruya S, Hayashi K, Kumaki Y, Yamaga E, Katsuta L, Hao D, Hartman M, Mengling F, Oda G, Kubota K, Tateishi U. AI Use in Mammography for Diagnosing Metachronous Contralateral Breast Cancer. J Imaging 2024; 10:211. [PMID: 39330431 PMCID: PMC11432939 DOI: 10.3390/jimaging10090211]
Abstract
Although several studies have been conducted on artificial intelligence (AI) use in mammography (MG), there is still a paucity of research on the diagnosis of metachronous bilateral breast cancer (BC), which is typically more challenging to diagnose. This study aimed to determine whether AI could enhance BC detection, achieving earlier or more accurate diagnoses than radiologists in cases of metachronous contralateral BC. We included patients who underwent unilateral BC surgery and subsequently developed contralateral BC. This retrospective study evaluated the capability of the AI-supported MG diagnostic system FxMammo™ (FathomX Pte Ltd., Singapore) to diagnose BC more accurately or earlier than radiologists' assessments; this evaluation was supplemented by reviewing the MG readings made by radiologists. Out of 1101 patients who underwent surgery, 10 who had initially undergone a partial mastectomy and later developed contralateral BC were analyzed. The AI system identified malignancies in six cases (60%), while radiologists identified five cases (50%). Notably, two cases (20%) were diagnosed solely by the AI system. Additionally, for these cases, the AI system had identified malignancies a year before the conventional diagnosis. This study highlights the AI system's effectiveness in diagnosing metachronous contralateral BC via MG. In some cases, the AI system diagnosed cancer earlier than radiological assessments.
Affiliation(s)
- Mio Adachi
- Department of Breast Surgery, Tokyo Medical and Dental University Hospital, Tokyo 113-8510, Japan
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University Hospital, Tokyo 113-8510, Japan
- Toshiyuki Ishiba
- Department of Breast Surgery, Tokyo Medical and Dental University Hospital, Tokyo 113-8510, Japan
- Miyako Nara
- Ohtsuka Breast Care Clinic, Tokyo 121-0813, Japan
- Sakiko Maruya
- Department of Breast Surgery, Tokyo Medical and Dental University Hospital, Tokyo 113-8510, Japan
- Kumiko Hayashi
- Department of Breast Surgery, Tokyo Medical and Dental University Hospital, Tokyo 113-8510, Japan
- Yuichi Kumaki
- Department of Breast Surgery, Tokyo Medical and Dental University Hospital, Tokyo 113-8510, Japan
- Emi Yamaga
- Department of Diagnostic Radiology, Tokyo Medical and Dental University Hospital, Tokyo 113-8510, Japan
- Leona Katsuta
- Department of Diagnostic Radiology, Tokyo Medical and Dental University Hospital, Tokyo 113-8510, Japan
- Du Hao
- Saw Swee Hock School of Public Health, National University of Singapore, National University Health System, Singapore 119074, Singapore
- Mikael Hartman
- Saw Swee Hock School of Public Health, National University of Singapore, National University Health System, Singapore 119074, Singapore
- Department of Surgery, National University Hospital, National University Health System, Singapore 119074, Singapore
- Institute of Data Science, National University of Singapore, Singapore 117597, Singapore
- Feng Mengling
- Saw Swee Hock School of Public Health, National University of Singapore, National University Health System, Singapore 119074, Singapore
- Institute of Data Science, National University of Singapore, Singapore 117597, Singapore
- Goshi Oda
- Department of Breast Surgery, Tokyo Medical and Dental University Hospital, Tokyo 113-8510, Japan
- Kazunori Kubota
- Department of Radiology, Dokkyo Medical University Saitama Medical Center, Saitama 343-8555, Japan
- Ukihide Tateishi
- Department of Diagnostic Radiology, Tokyo Medical and Dental University Hospital, Tokyo 113-8510, Japan

60
Kalupahana D, Kahatapitiya NS, Silva BN, Kim J, Jeon M, Wijenayake U, Wijesinghe RE. Dense Convolutional Neural Network-Based Deep Learning Pipeline for Pre-Identification of Circular Leaf Spot Disease of Diospyros kaki Leaves Using Optical Coherence Tomography. Sensors (Basel) 2024; 24:5398. [PMID: 39205092 PMCID: PMC11359294 DOI: 10.3390/s24165398]
Abstract
Circular leaf spot (CLS) disease poses a significant threat to persimmon cultivation, leading to substantial harvest reductions. Existing visual and destructive inspection methods suffer from subjectivity, limited accuracy, and considerable time consumption. This study presents an automated method for pre-identification of the disease through a deep learning (DL)-based pipeline integrated with optical coherence tomography (OCT), thereby addressing the highlighted issues with the existing methods. The investigation yielded promising outcomes by employing transfer learning with pre-trained DL models, specifically DenseNet-121 and VGG-16. The DenseNet-121 model excels in differentiating among three stages of CLS disease (healthy (H), apparently healthy (healthy-infected, HI), and infected (I)). The model achieved precision values of 0.7823 for class-H, 0.9005 for class-HI, and 0.7027 for class-I, supported by recall values of 0.8953 for class-HI and 0.8387 for class-I. Moreover, the performance of CLS detection was enhanced by a supplemental quality inspection model utilizing VGG-16, which attained an accuracy of 98.99% in discriminating between low-detail and high-detail images. In addition, this study employed a combination of LAMP and A-scan for the dataset labeling process, significantly enhancing the accuracy of the models. Overall, this study underscores the potential of DL techniques integrated with OCT to enhance disease identification processes in agricultural settings, particularly in persimmon cultivation, by offering efficient and objective pre-identification of CLS and enabling early intervention and management strategies.
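A minimal sketch of the transfer-learning setup described above, assuming an ImageNet-pretrained DenseNet-121 with a new three-class head (H/HI/I). Freezing the entire backbone is an illustrative choice, not necessarily the authors' fine-tuning depth.

```python
import torch.nn as nn
from torchvision.models import densenet121, DenseNet121_Weights

model = densenet121(weights=DenseNet121_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False  # freeze the pretrained backbone
model.classifier = nn.Linear(model.classifier.in_features, 3)  # trainable H/HI/I head
```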
Affiliation(s)
- Deshan Kalupahana
- Department of Computer Engineering, Faculty of Engineering, University of Sri Jayewardenepura, Nugegoda 10250, Sri Lanka
- Nipun Shantha Kahatapitiya
- Department of Computer Engineering, Faculty of Engineering, University of Sri Jayewardenepura, Nugegoda 10250, Sri Lanka
- Bhagya Nathali Silva
- Department of Information Technology, Faculty of Computing, Sri Lanka Institute of Information Technology, Malabe 10115, Sri Lanka
- Center for Excellence in Informatics, Electronics & Transmission (CIET), Sri Lanka Institute of Information Technology, Malabe 10115, Sri Lanka
- Jeehyun Kim
- School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, 80 Daehak-ro, Buk-gu, Daegu 41566, Republic of Korea
- Mansik Jeon
- School of Electronic and Electrical Engineering, College of IT Engineering, Kyungpook National University, 80 Daehak-ro, Buk-gu, Daegu 41566, Republic of Korea
- Udaya Wijenayake
- Department of Computer Engineering, Faculty of Engineering, University of Sri Jayewardenepura, Nugegoda 10250, Sri Lanka
- Ruchire Eranga Wijesinghe
- Center for Excellence in Informatics, Electronics & Transmission (CIET), Sri Lanka Institute of Information Technology, Malabe 10115, Sri Lanka
- Department of Electrical and Electronic Engineering, Faculty of Engineering, Sri Lanka Institute of Information Technology, Malabe 10115, Sri Lanka

61
Tian C, Xi Y, Ma Y, Chen C, Wu C, Ru K, Li W, Zhao M. Harnessing Deep Learning for Accurate Pathological Assessment of Brain Tumor Cell Types. J Imaging Inform Med 2024. [PMID: 39150595 DOI: 10.1007/s10278-024-01107-9]
Abstract
Primary diffuse central nervous system large B-cell lymphoma (CNS-pDLBCL) and high-grade glioma (HGG) often present similarly, clinically and on imaging, making differentiation challenging. This similarity can complicate pathologists' diagnostic efforts, yet accurately distinguishing between these conditions is crucial for guiding treatment decisions. This study leverages a deep learning model to classify brain tumor pathology images, addressing the common issue of limited medical imaging data. Instead of training a convolutional neural network (CNN) from scratch, we employ a pre-trained network for extracting deep features, which are then used by a support vector machine (SVM) for classification. Our evaluation shows that the Resnet50 (TL + SVM) model achieves a 97.4% accuracy, based on tenfold cross-validation on the test set. These results highlight the synergy between deep learning and traditional diagnostics, potentially setting a new standard for accuracy and efficiency in the pathological diagnosis of brain tumors.
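The TL + SVM pipeline described above can be sketched as follows: a pretrained ResNet50 with its classification head removed serves as a fixed feature extractor, and an SVM classifies the resulting 2048-dimensional features. Data handling and the SVM kernel are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights
from sklearn.svm import SVC

backbone = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()  # expose the 2048-d penultimate features
backbone.eval()

@torch.no_grad()
def extract_features(images: torch.Tensor):
    """images: (N, 3, 224, 224) preprocessed patches → (N, 2048) feature array."""
    return backbone(images).numpy()

# Usage sketch (training data assumed prepared elsewhere):
# X_train = extract_features(train_images)
# svm = SVC(kernel="rbf").fit(X_train, y_train)
```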
Affiliation(s)
- Chongxuan Tian
- School of Control Science and Engineering, Shandong University, Jinan, Shandong, 250061, China
- Yue Xi
- Shandong Provincial Hospital affiliated to Shandong First Medical University, Jinan, Shandong, China
- Yuting Ma
- Shandong Provincial Hospital affiliated to Shandong First Medical University, Jinan, Shandong, China
- Cai Chen
- Shandong Institute of Advanced Technology, Chinese Academy of Sciences, Jinan, Shandong, China
- Cong Wu
- Shandong Provincial Hospital affiliated to Shandong First Medical University, Jinan, Shandong, China
- Kun Ru
- Department of Pathology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, Shandong, China
- Wei Li
- School of Control Science and Engineering, Shandong University, Jinan, Shandong, 250061, China
- Miaoqing Zhao
- Department of Pathology, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, Shandong, China

62
Wen J, An Y, Shao L, Yin L, Peng Z, Liu Y, Tian J, Du Y. Dual-channel end-to-end network with prior knowledge embedding for improving spatial resolution of magnetic particle imaging. Comput Biol Med 2024; 178:108783. [PMID: 38909446 DOI: 10.1016/j.compbiomed.2024.108783]
Abstract
Magnetic particle imaging (MPI) is an emerging non-invasive medical imaging tomography technology based on magnetic particles, with excellent imaging depth penetration, high sensitivity and contrast. Spatial resolution and signal-to-noise ratio (SNR) are key performance metrics for evaluating MPI, which are directly influenced by the gradient of the selection field (SF). Increasing the SF gradient can improve the spatial resolution of MPI, but will lead to a decrease in SNR. Deep learning (DL) methods may enable obtaining high-resolution images from low-resolution images to improve the MPI resolution under low gradient conditions. However, existing DL methods overlook the physical procedures contributing to the blurring of MPI images, resulting in low interpretability and hindering breakthroughs in resolution. To address this issue, we propose a dual-channel end-to-end network with prior knowledge embedding for MPI (DENPK-MPI) to effectively establish a latent mapping between low-gradient and high-gradient images, thus improving MPI resolution without compromising SNR. By seamlessly integrating the MPI point spread function (PSF) with the DL paradigm, DENPK-MPI leads to a significant improvement in spatial resolution performance. Simulation, phantom, and in vivo MPI experiments have collectively confirmed that our method can improve the resolution of low-gradient MPI images without sacrificing SNR, resulting in a decrease in full width at half maximum of 14.8%-23.8%, and image reconstruction accuracy 18.2%-27.3% higher than other DL methods. In conclusion, we propose a DL method that incorporates MPI prior knowledge, improving the spatial resolution of MPI without compromising SNR and offering improved potential for biomedical applications.
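The resolution gain above is quantified via full width at half maximum (FWHM). A minimal sketch of that measurement on a 1D profile through a point source follows, assuming the profile peaks well inside the array; the Gaussian test profile is illustrative.

```python
import numpy as np

def fwhm(profile: np.ndarray, spacing: float = 1.0) -> float:
    """FWHM of a single-peaked 1D profile, with linear interpolation at the crossings."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    left, right = above[0], above[-1]
    # interpolate the exact half-maximum crossing on each flank
    l = left - (profile[left] - half) / (profile[left] - profile[left - 1])
    r = right + (profile[right] - half) / (profile[right] - profile[right + 1])
    return (r - l) * spacing

x = np.linspace(-10, 10, 201)
print(fwhm(np.exp(-x**2 / 2), spacing=0.1))  # Gaussian, sigma=1 → FWHM ≈ 2.355
```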
Affiliation(s)
- Jiaxuan Wen
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, China; School of Artificial Intelligence, The University of Chinese Academy of Sciences, Beijing, China
- Yu An
- School of Engineering Medicine, Beihang University, Beijing, China; The Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China
- Lizhi Shao
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, China; School of Artificial Intelligence, The University of Chinese Academy of Sciences, Beijing, China
- Lin Yin
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, China; School of Artificial Intelligence, The University of Chinese Academy of Sciences, Beijing, China
- Zhengyao Peng
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, China; School of Artificial Intelligence, The University of Chinese Academy of Sciences, Beijing, China
- Yanjun Liu
- School of Engineering Medicine, Beihang University, Beijing, China; The Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China
- Jie Tian
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, China; School of Engineering Medicine, Beihang University, Beijing, China; The Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology, Beijing, China
- Yang Du
- CAS Key Laboratory of Molecular Imaging, Institute of Automation, Beijing, China; School of Artificial Intelligence, The University of Chinese Academy of Sciences, Beijing, China

63
Badkul A, Vamsi I, Sudha R. Comparative study of DCNN and image processing based classification of chest X-rays for identification of COVID-19 patients using fine-tuning. J Med Eng Technol 2024; 48:213-222. [PMID: 39648993 DOI: 10.1080/03091902.2024.2438158]
Abstract
The conventional detection of COVID-19 by evaluating CT scan images is tiresome and often suffers from high inter-observer variability and uncertainty. This work proposes the automatic detection and classification of COVID-19 by analysing chest X-ray images (CXR) with deep convolutional neural network (DCNN) models through a fine-tuning and pre-training approach. CXR images pertaining to four health scenarios, namely, healthy, COVID-19, bacterial pneumonia and viral pneumonia, are considered and subjected to data augmentation. Two types of input datasets are prepared: dataset I contains the original image dataset categorised under four classes, whereas for input dataset II the original CXR images are subjected to image pre-processing via the Contrast Limited Adaptive Histogram Equalisation (CLAHE) algorithm and the Blackhat Morphological Operation (BMO). Both datasets are supplied as input to various DCNN models such as DenseNet, MobileNet, ResNet, VGG16, and Xception for achieving multi-class classification. It is observed that the classification accuracies are improved, and the classification errors are reduced with the image pre-processing. Overall, the VGG16 model resulted in better classification accuracies and reduced classification errors while accomplishing multi-class classification. Thus, the proposed work would assist clinical diagnosis and reduce the workload of the front-line healthcare workforce and medical professionals.
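The two preprocessing steps named above, CLAHE and the blackhat morphological operation, can be sketched with OpenCV as below. The clip limit, tile grid, kernel size, and the way the two outputs are combined are illustrative assumptions, since the abstract does not specify them.

```python
import cv2
import numpy as np

def preprocess_cxr(gray: np.ndarray) -> np.ndarray:
    """Apply CLAHE, then a blackhat morphological operation, to a uint8 CXR."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)  # local contrast enhancement
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    blackhat = cv2.morphologyEx(enhanced, cv2.MORPH_BLACKHAT, kernel)  # dark fine structures
    return cv2.add(enhanced, blackhat)  # assumed combination of the two outputs

# img = cv2.imread("cxr.png", cv2.IMREAD_GRAYSCALE); out = preprocess_cxr(img)
```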
Affiliation(s)
- Amitesh Badkul
- Department of Electrical and Electronics, Birla Institute of Technology and Science-Pilani, Hyderabad, India
- Inturi Vamsi
- Mechanical Engineering Department, Chaitanya Bharathi Institute of Technology (A), Hyderabad, India
- Radhika Sudha
- Department of Electrical and Electronics, Birla Institute of Technology and Science-Pilani, Hyderabad, India

64
Liu W, Wang D, Liu L, Zhou Z. Assessing the Influence of B-US, CDFI, SE, and Patient Age on Predicting Molecular Subtypes in Breast Lesions Using Deep Learning Algorithms. J Ultrasound Med 2024; 43:1375-1388. [PMID: 38581195 DOI: 10.1002/jum.16460]
Abstract
OBJECTIVES Our study aims to investigate the impact of B-mode ultrasound (B-US) imaging, color Doppler flow imaging (CDFI), strain elastography (SE), and patient age on the prediction of molecular subtypes in breast lesions. METHODS In total, 2272 multimodal ultrasound images were collected from 198 patients. The ResNet-18 network was employed to predict four molecular subtypes from the B-US imaging, CDFI, and SE of patients of different ages. All the images were split into training and testing datasets at a ratio of 80%:20%. The predictive performance on the testing dataset was evaluated through 5 metrics: mean accuracy, precision, recall, F1-score, and confusion matrix. RESULTS Based on B-US imaging, the test mean accuracy is 74.50%, the precision is 74.84%, the recall is 72.48%, and the F1-score is 0.73. By combining B-US imaging with CDFI, the results increased to 85.41%, 85.03%, 85.05%, and 0.84, respectively. With the integration of B-US imaging and SE, the results were 75.64%, 74.69%, 73.86%, and 0.74, respectively. Using images from patients under 40 years old, the results were 90.48%, 90.88%, 88.47%, and 0.89. For images from patients aged 40 years and above, the results were 81.96%, 83.12%, 80.5%, and 0.81, respectively. CONCLUSION Multimodal ultrasound imaging can be used to accurately predict the molecular subtypes of breast lesions. In addition to B-US imaging, CDFI, rather than SE, contributes further to improved predictive performance. The predictive performance is notably better for patients under 40 years old compared with those who are 40 years old and above.
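One simple way to realize the B-US + CDFI combination is to stack the two modalities as input channels of ResNet-18, as sketched below; whether the authors fused at the input or at a later stage is not stated in the abstract.

```python
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(num_classes=4)  # four molecular subtypes
# widen the stem to accept two grayscale modalities instead of RGB
model.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)
# input tensor: (batch, 2, H, W) with channel 0 = B-US, channel 1 = CDFI
```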
Affiliation(s)
- Weiyong Liu
- Department of Ultrasound, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Dongyue Wang
- School of Management, Hefei University of Technology, Hefei, China
- Key Laboratory of Process Optimization and Intelligent Decision-Making, Ministry of Education, Hefei, China
- Ministry of Education Engineering Research Center for Intelligent Decision-Making & Information System Technologies, Hefei, China
- Le Liu
- Department of Ultrasound, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Zhiguo Zhou
- Reliable Intelligence and Medical Innovation Laboratory (RIMI Lab), Department of Biostatistics & Data Science, University of Kansas Medical Center, Kansas City, Kansas, USA

65
Capurro N, Pastore VP, Touijer L, Odone F, Cozzani E, Gasparini G, Parodi A. A deep learning approach to direct immunofluorescence pattern recognition in autoimmune bullous diseases. Br J Dermatol 2024; 191:261-266. [PMID: 38581445 DOI: 10.1093/bjd/ljae142]
Abstract
BACKGROUND Artificial intelligence (AI) is reshaping healthcare, using machine and deep learning (DL) to enhance disease management. Dermatology has seen improved diagnostics, particularly in skin cancer detection, through the integration of AI. However, the potential of AI in automating immunofluorescence imaging for autoimmune bullous skin diseases (AIBDs) remains untapped. While direct immunofluorescence (DIF) supports diagnosis, its manual interpretation can hinder efficiency. The use of DL to classify DIF patterns automatically, including the intercellular pattern (ICP) and linear pattern (LP), holds promise for improving the diagnosis of AIBDs. OBJECTIVES To develop AI algorithms for automated classification of AIBD DIF patterns, such as ICP and LP, in order to enhance diagnostic accuracy, streamline disease management and improve patient outcomes through DL-driven immunofluorescence interpretation. METHODS We collected immunofluorescence images from skin biopsies of patients suspected of having an AIBD between January 2022 and January 2024. Skin tissue was obtained via a 5-mm punch biopsy, prepared for DIF. Experienced dermatologists classified the images as ICP, LP or negative. To evaluate our DL approach, we divided the images into training (n = 436) and test sets (n = 93). We employed transfer learning with pretrained deep neural networks and conducted fivefold cross-validation to assess model performance. Our dataset's class imbalance was addressed using weighted loss and data augmentation strategies. The models were trained for 50 epochs using PyTorch, with an image size of 224 × 224 pixels for both the convolutional neural networks (CNNs) and the Swin Transformer. RESULTS Our study compared six CNNs and the Swin Transformer for AIBD image classification, with the Swin Transformer achieving the highest average validation accuracy (98.5%). On a separate test set, the best model attained an accuracy of 94.6%, demonstrating 95.3% sensitivity and 97.5% specificity across AIBD classes. Visualization with Grad-CAM (class activation mapping) highlighted the model's reliance on characteristic patterns for accurate classification. CONCLUSIONS The study highlighted the accuracy of CNNs in identifying DIF features. This approach aids automated analysis and reporting, offering reproducibility, speed, data handling and cost-efficiency. Integrating DL into skin immunofluorescence promises precise diagnostics and streamlined reporting in this branch of dermatology.
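The weighted loss and augmentation strategies mentioned above can be sketched as follows; the class counts and transform choices are placeholders, not the study's actual values.

```python
import torch
import torch.nn as nn
from torchvision import transforms

counts = torch.tensor([150.0, 200.0, 86.0])     # ICP / LP / negative (placeholder counts)
weights = counts.sum() / (len(counts) * counts)  # inverse-frequency class weights
criterion = nn.CrossEntropyLoss(weight=weights)  # weighted loss for the imbalance

augment = transforms.Compose([                   # simple augmentation pipeline
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
```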
Affiliation(s)
- Niccolò Capurro
- Section of Dermatology, Department of Health Sciences, University of Genoa, Genoa, Italy
- Emanuele Cozzani
- Section of Dermatology, Department of Health Sciences, University of Genoa, Genoa, Italy
- Giulia Gasparini
- Section of Dermatology, Department of Health Sciences, University of Genoa, Genoa, Italy
- Aurora Parodi
- Section of Dermatology, Department of Health Sciences, University of Genoa, Genoa, Italy

66
Doolan BJ, Thomas BR. Bursting the bubble on diagnostics: artificial intelligence in autoimmune bullous disease. Br J Dermatol 2024; 191:160-161. [PMID: 38736238 DOI: 10.1093/bjd/ljae197]
Affiliation(s)
- Brent J Doolan
- St John's Institute of Dermatology, School of Basic and Medical Biosciences, King's College London, London, UK
- St John's Institute of Dermatology, Guy's and St Thomas' Hospital, London, UK
- Bjorn R Thomas
- St John's Institute of Dermatology, Guy's and St Thomas' Hospital, London, UK

67
Choopong P, Kusakunniran W. Selection of pre-trained weights for transfer learning in automated cytomegalovirus retinitis classification. Sci Rep 2024; 14:15899. [PMID: 38987446 PMCID: PMC11237151 DOI: 10.1038/s41598-024-67121-7]
Abstract
Cytomegalovirus retinitis (CMVR) is a significant cause of vision loss. Regular screening is crucial but challenging in resource-limited settings. A convolutional neural network is a state-of-the-art deep learning technique to generate automatic diagnoses from retinal images. However, there are limited numbers of CMVR images to train the model properly. Transfer learning (TL) is a strategy to train a model with a scarce dataset. This study explores the efficacy of TL with different pre-trained weights for automated CMVR classification using retinal images. We utilised a dataset of 955 retinal images (524 CMVR and 431 normal) from Siriraj Hospital, Mahidol University, collected between 2005 and 2015. Images were processed using Kowa VX-10i or VX-20 fundus cameras and augmented for training. We employed DenseNet121 as a backbone model, comparing the performance of TL with weights pre-trained on the ImageNet, APTOS2019, and CheXNet datasets. The models were evaluated based on accuracy, loss, and other performance metrics, with the depth of fine-tuning varied across different pre-trained weights. The study found that TL significantly enhances model performance in CMVR classification. The best results were achieved with weights sequentially transferred from ImageNet to the APTOS2019 dataset before application to our CMVR dataset. This approach yielded the highest mean accuracy (0.99) and lowest mean loss (0.04), outperforming other methods. The class activation heatmaps provided insights into the model's decision-making process. The model with APTOS2019 pre-trained weights offered the best explanation, highlighting pathologic lesions in a manner resembling human interpretation. Our findings demonstrate the potential of sequential TL in improving the accuracy and efficiency of CMVR diagnosis, particularly in settings with limited data availability. They highlight the importance of domain-specific pre-training in medical image classification. This approach streamlines the diagnostic process and paves the way for broader applications in automated medical image analysis, offering a scalable solution for early disease detection.
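A minimal sketch of the sequential transfer-learning recipe that performed best (ImageNet → APTOS2019 → CMVR), with placeholder training calls; the five-class APTOS head and two-class CMVR head are assumptions consistent with those datasets.

```python
import torch.nn as nn
from torchvision.models import densenet121, DenseNet121_Weights

# Stage 1: start from ImageNet-pretrained weights
model = densenet121(weights=DenseNet121_Weights.IMAGENET1K_V1)

# Stage 2: fine-tune on the intermediate retinal dataset (APTOS2019, 5 DR grades)
model.classifier = nn.Linear(model.classifier.in_features, 5)
# train(model, aptos_loader)  # placeholder for an ordinary training loop

# Stage 3: fine-tune on the target CMVR data (CMVR vs. normal)
model.classifier = nn.Linear(model.classifier.in_features, 2)
# train(model, cmvr_loader)   # placeholder for an ordinary training loop
```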
Collapse
Affiliation(s)
- Pitipol Choopong
- Department of Ophthalmology, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok, Thailand
- Faculty of Information and Communication Technology, Mahidol University, Nakhon Pathom, Thailand
| | - Worapan Kusakunniran
- Faculty of Information and Communication Technology, Mahidol University, Nakhon Pathom, Thailand.
68
Islam MM, Rifat HR, Shahid MSB, Akhter A, Uddin MA. Utilizing Deep Feature Fusion for Automatic Leukemia Classification: An Internet of Medical Things-Enabled Deep Learning Framework. SENSORS (BASEL, SWITZERLAND) 2024; 24:4420. [PMID: 39001200 PMCID: PMC11244606 DOI: 10.3390/s24134420] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/24/2024] [Revised: 06/30/2024] [Accepted: 07/05/2024] [Indexed: 07/16/2024]
Abstract
Acute lymphoblastic leukemia, commonly referred to as ALL, is a type of cancer that can affect both the blood and the bone marrow. The process of diagnosis is a difficult one since it often calls for specialist testing, such as blood tests, bone marrow aspiration, and biopsy, all of which are highly time-consuming and expensive. It is essential to obtain an early diagnosis of ALL in order to start therapy in a timely and suitable manner. In recent medical diagnostics, substantial progress has been achieved through the integration of artificial intelligence (AI) and Internet of Things (IoT) devices. Our proposal introduces a new AI-based Internet of Medical Things (IoMT) framework designed to automatically identify leukemia from peripheral blood smear (PBS) images. In this study, we present a novel deep learning-based fusion model to detect ALL types of leukemia. The system seamlessly delivers the diagnostic reports to the centralized database, inclusive of patient-specific devices. After collecting blood samples from the hospital, the PBS images are transmitted to the cloud server through a WiFi-enabled microscopic device. In the cloud server, a new fusion model that is capable of classifying ALL from PBS images is configured. The fusion model is trained using a dataset including 6512 original and segmented images from 89 individuals. Two input channels are used for the purpose of feature extraction in the fusion model. These channels include both the original and the segmented images. VGG16 is responsible for extracting features from the original images, whereas DenseNet-121 is responsible for extracting features from the segmented images. The two output features are merged together, and dense layers are used for the categorization of leukemia. The fusion model that has been suggested obtains an accuracy of 99.89%, a precision of 99.80%, and a recall of 99.72%, which places it in an excellent position for the categorization of leukemia. The proposed model outperformed several state-of-the-art Convolutional Neural Network (CNN) models in terms of performance. Consequently, this proposed model has the potential to save lives and effort. For a more comprehensive simulation of the entire methodology, a web application (Beta Version) has been developed in this study. This application is designed to determine the presence or absence of leukemia in individuals. The findings of this study hold significant potential for application in biomedical research, particularly in enhancing the accuracy of computer-aided leukemia detection.
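The two-branch fusion idea described above (one backbone per input channel, concatenated features, dense classification layers) can be sketched in a few lines of PyTorch. The head width, dropout, and input sizes below are illustrative assumptions rather than the published configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class FusionNet(nn.Module):
    """VGG16 on original images, DenseNet-121 on segmented images;
    pooled features are concatenated and classified by dense layers."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.branch_original = models.vgg16(weights="IMAGENET1K_V1").features        # 512-ch maps
        self.branch_segmented = models.densenet121(weights="IMAGENET1K_V1").features  # 1024-ch maps
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Sequential(
            nn.Linear(512 + 1024, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x_original, x_segmented):
        f1 = self.pool(self.branch_original(x_original)).flatten(1)
        f2 = self.pool(self.branch_segmented(x_segmented)).flatten(1)
        return self.head(torch.cat([f1, f2], dim=1))

logits = FusionNet()(torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224))
```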
Affiliation(s)
- Md Manowarul Islam
- Department of Computer Science and Engineering, Jagannath University, Dhaka 1100, Bangladesh
| | - Habibur Rahman Rifat
- Department of Computer Science and Engineering, Jagannath University, Dhaka 1100, Bangladesh
| | - Md Shamim Bin Shahid
- Department of Computer Science and Engineering, Jagannath University, Dhaka 1100, Bangladesh
| | - Arnisha Akhter
- Department of Computer Science and Engineering, Jagannath University, Dhaka 1100, Bangladesh
| | - Md Ashraf Uddin
- School of Info Technology, Deakin University, Burwood, VIC 3125, Australia
69
Chattopadhyay T, Ozarkar SS, Buwa K, Joshy NA, Komandur D, Naik J, Thomopoulos SI, Ver Steeg G, Ambite JL, Thompson PM. Comparison of deep learning architectures for predicting amyloid positivity in Alzheimer's disease, mild cognitive impairment, and healthy aging, from T1-weighted brain structural MRI. Front Neurosci 2024; 18:1387196. [PMID: 39015378 PMCID: PMC11250587 DOI: 10.3389/fnins.2024.1387196] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2024] [Accepted: 06/14/2024] [Indexed: 07/18/2024] Open
Abstract
Abnormal β-amyloid (Aβ) accumulation in the brain is an early indicator of Alzheimer's disease (AD) and is typically assessed through invasive procedures such as PET (positron emission tomography) or CSF (cerebrospinal fluid) assays. As new anti-Alzheimer's treatments can now successfully target amyloid pathology, there is a growing interest in predicting Aβ positivity (Aβ+) from less invasive, more widely available types of brain scans, such as T1-weighted (T1w) MRI. Here we compare multiple approaches to infer Aβ + from standard anatomical MRI: (1) classical machine learning algorithms, including logistic regression, XGBoost, and shallow artificial neural networks, (2) deep learning models based on 2D and 3D convolutional neural networks (CNNs), (3) a hybrid ANN-CNN, combining the strengths of shallow and deep neural networks, (4) transfer learning models based on CNNs, and (5) 3D Vision Transformers. All models were trained on paired MRI/PET data from 1,847 elderly participants (mean age: 75.1 yrs. ± 7.6SD; 863 females/984 males; 661 healthy controls, 889 with mild cognitive impairment (MCI), and 297 with Dementia), scanned as part of the Alzheimer's Disease Neuroimaging Initiative. We evaluated each model's balanced accuracy and F1 scores. While further tests on more diverse data are warranted, deep learning models trained on standard MRI showed promise for estimating Aβ + status, at least in people with MCI. This may offer a potential screening option before resorting to more invasive procedures.
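As a point of reference for the "classical machine learning" arm of such comparisons, the sketch below fits logistic regression and XGBoost on a synthetic tabular feature matrix; the feature set, sizes, and labels are placeholders, not ADNI data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1847, 128))   # e.g., regional volumes/thicknesses from T1w MRI
y = rng.integers(0, 2, size=1847)  # amyloid-positive vs. amyloid-negative

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for model in (LogisticRegression(max_iter=1000),
              XGBClassifier(n_estimators=200, eval_metric="logloss")):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(type(model).__name__,
          "balanced acc:", round(balanced_accuracy_score(y_te, pred), 3),
          "F1:", round(f1_score(y_te, pred), 3))
```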
Affiliation(s)
- Tamoghna Chattopadhyay
- Imaging Genetics Center, Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Marina del Rey, CA, United States
| | - Saket S. Ozarkar
- Imaging Genetics Center, Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Marina del Rey, CA, United States
| | - Ketaki Buwa
- Imaging Genetics Center, Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Marina del Rey, CA, United States
| | - Neha Ann Joshy
- Imaging Genetics Center, Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Marina del Rey, CA, United States
| | - Dheeraj Komandur
- Imaging Genetics Center, Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Marina del Rey, CA, United States
| | - Jayati Naik
- Imaging Genetics Center, Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Marina del Rey, CA, United States
| | - Sophia I. Thomopoulos
- Imaging Genetics Center, Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Marina del Rey, CA, United States
| | | | - Jose Luis Ambite
- Information Sciences Institute, University of Southern California, Marina del Rey, CA, United States
| | - Paul M. Thompson
- Imaging Genetics Center, Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Marina del Rey, CA, United States
70
Manokaran J, Mittal R, Ukwatta E. Pulmonary nodule detection in low dose computed tomography using a medical-to-medical transfer learning approach. J Med Imaging (Bellingham) 2024; 11:044502. [PMID: 38988991 PMCID: PMC11232701 DOI: 10.1117/1.jmi.11.4.044502] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2024] [Revised: 06/20/2024] [Accepted: 06/24/2024] [Indexed: 07/12/2024] Open
Abstract
Purpose Lung cancer is the second most common cancer and the leading cause of cancer death globally. Low dose computed tomography (LDCT) is the recommended imaging screening tool for the early detection of lung cancer. A fully automated computer-aided detection method for LDCT will greatly improve the existing clinical workflow. Most of the existing methods for lung nodule detection are designed for high-dose CTs (HDCTs), and those methods cannot be directly applied to LDCTs due to domain shifts and the inferior quality of LDCT images. In this work, we describe a semi-automated transfer learning-based approach for the early detection of lung nodules using LDCTs. Approach In this work, we developed an algorithm based on the object detection model, you only look once (YOLO), to detect lung nodules. The YOLO model was first trained on CTs, and the pre-trained weights were used as initial weights during the retraining of the model on LDCTs using a medical-to-medical transfer learning approach. The dataset for this study was from a screening trial consisting of LDCTs acquired from 50 biopsy-confirmed lung cancer patients obtained over 3 consecutive years (T1, T2, and T3). About 60 lung cancer patients' HDCTs were obtained from a public dataset. The developed model was evaluated on a hold-out test set comprising 15 patient cases (93 slices with cancerous nodules) using precision, specificity, recall, and F1-score. The evaluation metrics were reported patient-wise on a per-year basis and averaged for 3 years. For comparative analysis, the proposed detection model was trained using pre-trained weights from the COCO dataset as the initial weights. A paired t-test and chi-squared test with an alpha value of 0.05 were used for statistical significance testing. Results The results were reported by comparing the proposed model developed using HDCT pre-trained weights with COCO pre-trained weights. The former approach versus the latter approach obtained a precision of 0.982 versus 0.93 in detecting cancerous nodules, specificity of 0.923 versus 0.849 in identifying slices with no cancerous nodules, recall of 0.87 versus 0.886, and F1-score of 0.924 versus 0.903. As the nodule progressed, the former approach achieved a precision of 1, specificity of 0.92, and sensitivity of 0.930. The statistical analysis performed in the comparative study resulted in a p-value of 0.0054 for precision and a p-value of 0.00034 for specificity. Conclusions In this study, a semi-automated method was developed to detect lung nodules in LDCTs using HDCT pre-trained weights as the initial weights and retraining the model. Further, the results were compared by replacing the HDCT pre-trained weights in the above approach with COCO pre-trained weights. The proposed method may identify early lung nodules during the screening program, reduce overdiagnosis and follow-ups due to misdiagnosis in LDCTs, enable earlier treatment in affected patients, and lower the mortality rate.
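The medical-to-medical weight hand-off described above amounts to two training runs with chained initializations. A hedged sketch using the ultralytics package follows; the YOLO variant, dataset YAML files, and weight paths (the package's default run directory) are assumptions, not the authors' configuration.

```python
from ultralytics import YOLO

# Stage 1: train a detector on high-dose CT slices from generic weights.
hdct_model = YOLO("yolov8n.pt")
hdct_model.train(data="hdct_nodules.yaml", epochs=100, imgsz=640)

# Stage 2: initialize the LDCT model from the HDCT-trained weights
# instead of COCO weights, then retrain on the low-dose domain.
ldct_model = YOLO("runs/detect/train/weights/best.pt")
ldct_model.train(data="ldct_nodules.yaml", epochs=100, imgsz=640)

# Evaluate per-slice detection on the held-out LDCT test split.
metrics = ldct_model.val(data="ldct_nodules.yaml", split="test")
```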
Affiliation(s)
- Jenita Manokaran
- University of Guelph, School of Engineering, Guelph, Ontario, Canada
| | - Richa Mittal
- Guelph General Hospital, Guelph, Ontario, Canada
| | - Eranga Ukwatta
- University of Guelph, School of Engineering, Guelph, Ontario, Canada
71
Zsidai B, Kaarre J, Narup E, Hamrin Senorski E, Pareek A, Grassi A, Ley C, Longo UG, Herbst E, Hirschmann MT, Kopf S, Seil R, Tischer T, Samuelsson K, Feldt R. A practical guide to the implementation of artificial intelligence in orthopaedic research-Part 2: A technical introduction. J Exp Orthop 2024; 11:e12025. [PMID: 38715910 PMCID: PMC11076014 DOI: 10.1002/jeo2.12025] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/13/2023] [Revised: 01/31/2024] [Accepted: 03/21/2024] [Indexed: 12/26/2024] Open
Abstract
UNLABELLED Recent advances in artificial intelligence (AI) present a broad range of possibilities in medical research. However, orthopaedic researchers aiming to participate in research projects implementing AI-based techniques require a sound understanding of the technical fundamentals of this rapidly developing field. Initial sections of this technical primer provide an overview of the general and the more detailed taxonomy of AI methods. Researchers are presented with the technical basics of the most frequently performed machine learning (ML) tasks, such as classification, regression, clustering and dimensionality reduction. Additionally, the spectrum of supervision in ML including the domains of supervised, unsupervised, semisupervised and self-supervised learning will be explored. Recent advances in neural networks (NNs) and deep learning (DL) architectures have rendered them essential tools for the analysis of complex medical data, which warrants a rudimentary technical introduction to orthopaedic researchers. Furthermore, the capability of natural language processing (NLP) to interpret patterns in human language is discussed and may offer several potential applications in medical text classification, patient sentiment analysis and clinical decision support. The technical discussion concludes with the transformative potential of generative AI and large language models (LLMs) on AI research. Consequently, this second article of the series aims to equip orthopaedic researchers with the fundamental technical knowledge required to engage in interdisciplinary collaboration in AI-driven orthopaedic research. LEVEL OF EVIDENCE Level IV.
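For readers who want the four core ML task families mentioned above in runnable form, a toy scikit-learn illustration is given below; the synthetic data and model choices are didactic stand-ins, not material from the article.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression, LogisticRegression

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

clf = LogisticRegression(max_iter=500).fit(X, y)   # classification (supervised)
reg = LinearRegression().fit(X[:, 1:], X[:, 0])    # regression (supervised)
clusters = KMeans(n_clusters=2, n_init=10,
                  random_state=0).fit_predict(X)   # clustering (unsupervised)
X_2d = PCA(n_components=2).fit_transform(X)        # dimensionality reduction

print(clf.score(X, y), reg.score(X[:, 1:], X[:, 0]), X_2d.shape)
```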
Affiliation(s)
- Bálint Zsidai
- Sahlgrenska Sports Medicine Center, Gothenburg, Sweden
- Department of Orthopaedics, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
| | - Janina Kaarre
- Sahlgrenska Sports Medicine Center, Gothenburg, Sweden
- Department of Orthopaedics, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Orthopaedic Surgery, UPMC Freddie Fu Sports Medicine Center, University of Pittsburgh, Pittsburgh, USA
| | - Eric Narup
- Sahlgrenska Sports Medicine Center, Gothenburg, Sweden
- Department of Orthopaedics, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
| | - Eric Hamrin Senorski
- Sahlgrenska Sports Medicine Center, Gothenburg, Sweden
- Department of Health and Rehabilitation, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Sportrehab Sports Medicine Clinic, Gothenburg, Sweden
| | - Ayoosh Pareek
- Sports and Shoulder Service, Hospital for Special Surgery, New York, New York, USA
| | - Alberto Grassi
- Department of Orthopaedics, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- IIa Clinica Ortopedica e Traumatologica, IRCCS Istituto Ortopedico Rizzoli, Bologna, Italy
| | - Christophe Ley
- Department of Mathematics, University of Luxembourg, Esch-sur-Alzette, Luxembourg
| | - Umile Giuseppe Longo
- Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy
- Research Unit of Orthopaedic and Trauma Surgery, Department of Medicine and Surgery, Università Campus Bio-Medico di Roma, Rome, Italy
| | - Elmar Herbst
- Department of Trauma, Hand and Reconstructive Surgery, University Hospital Münster, Münster, Germany
| | - Michael T. Hirschmann
- Department of Orthopedic Surgery and Traumatology, Head Knee Surgery and DKF Head of Research, Kantonsspital Baselland, Bruderholz, Switzerland
| | - Sebastian Kopf
- Center of Orthopaedics and Traumatology, University Hospital Brandenburg a.d.H., Brandenburg Medical School Theodor Fontane, Brandenburg a.d.H., Germany
- Faculty of Health Sciences Brandenburg, Brandenburg Medical School Theodor Fontane, Brandenburg a.d.H., Germany
| | - Romain Seil
- Department of Orthopaedic Surgery, Centre Hospitalier de Luxembourg - Clinique d'Eich, Luxembourg, Luxembourg
- Luxembourg Institute of Research in Orthopaedics, Sports Medicine and Science (LIROMS), Luxembourg, Luxembourg
- Luxembourg Institute of Health, Human Motion, Orthopaedics, Sports Medicine and Digital Methods (HOSD), Luxembourg, Luxembourg
| | - Thomas Tischer
- Clinic for Orthopaedics and Trauma Surgery, Erlangen, Germany
| | - Kristian Samuelsson
- Sahlgrenska Sports Medicine Center, Gothenburg, Sweden
- Department of Orthopaedics, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Department of Orthopaedics, Sahlgrenska University Hospital, Mölndal, Sweden
| | - Robert Feldt
- Department of Computer Science and Engineering, Chalmers University of Technology, Gothenburg, Sweden
72
Vezakis IA, Georgas K, Fotiadis D, Matsopoulos GK. EffiSegNet: Gastrointestinal Polyp Segmentation through a Pre-Trained EfficientNet-based Network with a Simplified Decoder. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2024; 2024:1-4. [PMID: 40039472 DOI: 10.1109/embc53108.2024.10782015] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 03/06/2025]
Abstract
This work introduces EffiSegNet, a novel segmentation framework leveraging transfer learning with a pre-trained Convolutional Neural Network (CNN) classifier as its backbone. Deviating from traditional architectures with a symmetric U-shape, EffiSegNet simplifies the decoder and utilizes full-scale feature fusion to minimize computational cost and the number of parameters. We evaluated our model on the gastrointestinal polyp segmentation task using the publicly available Kvasir-SEG dataset, achieving state-of-the-art results. Specifically, the EffiSegNet-B4 network variant achieved an F1 score of 0.9552, mean Dice (mDice) 0.9483, mean Intersection over Union (mIoU) 0.9056, Precision 0.9679, and Recall 0.9429 with a pre-trained backbone - to the best of our knowledge, the highest reported scores in the literature for this dataset. Additional training from scratch also demonstrated exceptional performance compared to previous work, achieving an F1 score of 0.9286, mDice 0.9207, mIoU 0.8668, Precision 0.9311 and Recall 0.9262. These results underscore the importance of a well-designed encoder in image segmentation networks and the effectiveness of transfer learning approaches.
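The core design choice here, reusing a pre-trained classifier's multi-scale features and fusing them at full resolution through a very small decoder, can be sketched as follows. This is a simplified stand-in, not the published EffiSegNet: the EfficientNet-B0 backbone, 32-channel projections, and additive fusion are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import efficientnet_b0, EfficientNet_B0_Weights

class TinyFusionSeg(nn.Module):
    CH = [32, 16, 24, 40, 80, 112, 192, 320, 1280]  # efficientnet_b0 stage widths

    def __init__(self, num_classes=1):
        super().__init__()
        self.encoder = efficientnet_b0(
            weights=EfficientNet_B0_Weights.IMAGENET1K_V1).features
        self.proj = nn.ModuleList(nn.Conv2d(c, 32, 1) for c in self.CH)
        self.head = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        size = x.shape[-2:]
        fused = 0
        for stage, proj in zip(self.encoder, self.proj):
            x = stage(x)  # next encoder scale
            fused = fused + F.interpolate(proj(x), size=size,
                                          mode="bilinear", align_corners=False)
        return self.head(fused)  # full-resolution mask logits

mask_logits = TinyFusionSeg()(torch.randn(1, 3, 224, 224))
```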
73
Armato SG, Katz SI, Frauenfelder T, Jayasekera G, Catino A, Blyth KG, Theodoro T, Rousset P, Nackaerts K, Opitz I. Imaging in pleural Mesothelioma: A review of the 16th International Conference of the International Mesothelioma Interest Group. Lung Cancer 2024; 193:107832. [PMID: 38875938 DOI: 10.1016/j.lungcan.2024.107832] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2024] [Revised: 05/21/2024] [Accepted: 05/27/2024] [Indexed: 06/16/2024]
Abstract
Imaging continues to gain a greater role in the assessment and clinical management of patients with mesothelioma. This communication summarizes the oral presentations from the imaging session at the 2023 International Conference of the International Mesothelioma Interest Group (iMig), which was held in Lille, France from June 26 to 28, 2023. Topics at this session included an overview of best practices for clinical imaging of mesothelioma as reported by an iMig consensus panel, emerging imaging techniques for surgical planning, radiologic assessment of malignant pleural effusion, a radiomics-based transfer learning model to predict patient response to treatment, automated assessment of early contrast enhancement, and tumor thickness for response assessment in peritoneal mesothelioma.
Affiliation(s)
- Samuel G Armato
- Department of Radiology, The University of Chicago, Chicago, IL, USA.
| | - Sharyn I Katz
- Department of Radiology, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
| | - Thomas Frauenfelder
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Zurich, Switzerland
| | - Geeshath Jayasekera
- Glasgow Pleural Disease Unit, Queen Elizabeth University Hospital, Glasgow, UK and School of Cancer Sciences, University of Glasgow, UK
| | - Annamaria Catino
- Medical Thoracic Oncology Unit, IRCCS Istituto Tumori "Giovanni Paolo II," Bari, Italy
| | - Kevin G Blyth
- Cancer Research UK Scotland Centre, Glasgow, UK and Glasgow Pleural Disease Unit, Queen Elizabeth University Hospital, Glasgow, UK and School of Cancer Sciences, University of Glasgow, UK
| | - Taylla Theodoro
- Institute of Computing, University of Campinas, Campinas, Brazil and Cancer Research UK Scotland Centre, Glasgow, UK
| | - Pascal Rousset
- Department of Radiology, Lyon Sud University Hospital, Hospices Civils de Lyon, Lyon 1 University, Pierre-Bénite, France
| | - Kristiaan Nackaerts
- Department of Pulmonology/Respiratory Oncology, KU Leuven, University Hospitals Leuven, Leuven, Belgium
| | - Isabelle Opitz
- Department of Thoracic Surgery, University Hospital Zurich, Zurich, Switzerland
74
Cadrin-Chênevert A. Navigating Clinical Variability: Transfer Learning's Impact on Imaging Model Performance. Radiol Artif Intell 2024; 6:e240263. [PMID: 38900033 PMCID: PMC11294946 DOI: 10.1148/ryai.240263] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2024] [Revised: 05/04/2024] [Accepted: 05/09/2024] [Indexed: 06/21/2024]
Affiliation(s)
- Alexandre Cadrin-Chênevert
- From the CISSS Lanaudière-Medical Imaging, 200 Louis-Vadeboncoeur, Saint-Charles-Borromee, QC, Canada J6E 6J2
75
Ong J, Jang KJ, Baek SJ, Hu D, Lin V, Jang S, Thaler A, Sabbagh N, Saeed A, Kwon M, Kim JH, Lee S, Han YS, Zhao M, Sokolsky O, Lee I, Al-Aswad LA. Development of oculomics artificial intelligence for cardiovascular risk factors: A case study in fundus oculomics for HbA1c assessment and clinically relevant considerations for clinicians. Asia Pac J Ophthalmol (Phila) 2024; 13:100095. [PMID: 39209216 DOI: 10.1016/j.apjo.2024.100095] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2024] [Revised: 08/19/2024] [Accepted: 08/19/2024] [Indexed: 09/04/2024] Open
Abstract
Artificial Intelligence (AI) is transforming healthcare, notably in ophthalmology, where its ability to interpret images and data can significantly enhance disease diagnosis and patient care. Recent developments in oculomics, the integration of ophthalmic features to develop biomarkers for systemic diseases, have demonstrated the potential of rapid, non-invasive screening methods, leading to earlier detection and improved healthcare quality, particularly in underserved areas. However, the widespread adoption of such AI-based technologies faces challenges primarily related to the trustworthiness of the system. We demonstrate the potential of, and the considerations needed to develop, trustworthy AI in oculomics through a pilot study for HbA1c assessment using an AI-based approach. We then discuss the challenges, considerations, and solutions that have previously been developed for powerful AI technologies in healthcare and apply these considerations to the oculomics pilot study. Building upon the observations in the study, we highlight the challenges and opportunities for advancing trustworthy AI in oculomics. Ultimately, oculomics is a powerful, emerging technology in ophthalmology, and understanding how to optimize transparency prior to clinical adoption is of utmost importance.
Affiliation(s)
- Joshua Ong
- Department of Ophthalmology and Visual Sciences, University of Michigan Kellogg Eye Center, Ann Arbor, MI, United States
| | - Kuk Jin Jang
- School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
| | - Seung Ju Baek
- Department of AI Convergence Engineering, Gyeongsang National University, Republic of Korea
| | - Dongyin Hu
- School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
| | - Vivian Lin
- School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
| | - Sooyong Jang
- School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
| | - Alexandra Thaler
- Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
| | - Nouran Sabbagh
- Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States
| | - Almiqdad Saeed
- Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States; St John Eye Hospital-Jerusalem, Department of Ophthalmology, Israel
| | - Minwook Kwon
- Department of AI Convergence Engineering, Gyeongsang National University, Republic of Korea
| | - Jin Hyun Kim
- Department of Intelligence and Communication Engineering, Gyeongsang National University, Republic of Korea
| | - Seongjin Lee
- Department of AI Convergence Engineering, Gyeongsang National University, Republic of Korea
| | - Yong Seop Han
- Department of Ophthalmology, Gyeongsang National University College of Medicine, Institute of Health Sciences, Republic of Korea
| | - Mingmin Zhao
- School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
| | - Oleg Sokolsky
- School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
| | - Insup Lee
- School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States.
| | - Lama A Al-Aswad
- School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States; Department of Ophthalmology, Scheie Eye Institute, University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, United States.
76
Schäfer R, Nicke T, Höfener H, Lange A, Merhof D, Feuerhake F, Schulz V, Lotz J, Kiessling F. Overcoming data scarcity in biomedical imaging with a foundational multi-task model. NATURE COMPUTATIONAL SCIENCE 2024; 4:495-509. [PMID: 39030386 PMCID: PMC11288886 DOI: 10.1038/s43588-024-00662-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/21/2023] [Accepted: 06/17/2024] [Indexed: 07/21/2024]
Abstract
Foundational models, pretrained on a large scale, have demonstrated substantial success across non-medical domains. However, training these models typically requires large, comprehensive datasets, which contrasts with the smaller and more specialized datasets common in biomedical imaging. Here we propose a multi-task learning strategy that decouples the number of training tasks from memory requirements. We trained a universal biomedical pretrained model (UMedPT) on a multi-task database including tomographic, microscopic and X-ray images, with various labeling strategies such as classification, segmentation and object detection. The UMedPT foundational model outperformed ImageNet pretraining and previous state-of-the-art models. For classification tasks related to the pretraining database, it maintained its performance with only 1% of the original training data and without fine-tuning. For out-of-domain tasks it required only 50% of the original training data. In an external independent validation, imaging features extracted using UMedPT proved to set a new standard for cross-center transferability.
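The decoupling of task count from memory can be pictured as a shared encoder with per-task heads, where only one task's batch is resident at a time. The sketch below is an illustrative assumption about that training pattern, not the UMedPT code; the encoder, task names, and head sizes are hypothetical.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())
heads = nn.ModuleDict({            # hypothetical task heads
    "xray_classification": nn.Linear(32, 5),
    "microscopy_classification": nn.Linear(32, 3),
})
opt = torch.optim.Adam(list(encoder.parameters()) + list(heads.parameters()))
loss_fn = nn.CrossEntropyLoss()

def training_step(task, images, labels):
    """One gradient step for one task; memory stays roughly constant
    because tasks are visited sequentially rather than batched together."""
    opt.zero_grad()
    loss = loss_fn(heads[task](encoder(images)), labels)
    loss.backward()
    opt.step()
    return loss.item()

training_step("xray_classification",
              torch.randn(8, 1, 64, 64), torch.randint(0, 5, (8,)))
```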
Affiliation(s)
- Raphael Schäfer
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
| | - Till Nicke
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
| | - Henning Höfener
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
| | - Annkristin Lange
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
| | - Dorit Merhof
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Institute of Image Analysis and Computer Vision, Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany
| | - Friedrich Feuerhake
- Institute for Pathology, Hannover Medical School, Hanover, Germany
- Institute for Neuropathology, Medical Center, University of Freiburg, Freiburg, Germany
| | - Volkmar Schulz
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Institute for Experimental Molecular Imaging, RWTH Aachen University, Aachen, Germany
| | - Johannes Lotz
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany.
| | - Fabian Kiessling
- Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany.
- Institute for Experimental Molecular Imaging, RWTH Aachen University, Aachen, Germany.
77
Kanyal A, Mazumder B, Calhoun VD, Preda A, Turner J, Ford J, Ye DH. Multi-modal deep learning from imaging genomic data for schizophrenia classification. Front Psychiatry 2024; 15:1384842. [PMID: 39006822 PMCID: PMC11239396 DOI: 10.3389/fpsyt.2024.1384842] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/11/2024] [Accepted: 05/23/2024] [Indexed: 07/16/2024] Open
Abstract
Background Schizophrenia (SZ) is a psychiatric condition that adversely affects an individual's cognitive, emotional, and behavioral aspects. The etiology of SZ, although extensively studied, remains unclear, as multiple factors come together to contribute toward its development. There is a consistent body of evidence documenting the presence of structural and functional deviations in the brains of individuals with SZ. Moreover, the hereditary aspect of SZ is supported by the significant involvement of genomic markers. Therefore, the need arises to investigate SZ from a multi-modal perspective and develop approaches for improved detection. Methods Our proposed method employed a deep learning framework combining features from structural magnetic resonance imaging (sMRI), functional magnetic resonance imaging (fMRI), and genetic markers such as single nucleotide polymorphisms (SNP). For sMRI, we used a pre-trained DenseNet to extract the morphological features. To identify the most relevant functional connections in fMRI and SNPs linked to SZ, we applied a 1-dimensional convolutional neural network (CNN) followed by layerwise relevance propagation (LRP). Finally, we concatenated these features across modalities and fed them to an extreme gradient boosting (XGBoost) tree-based classifier to distinguish SZ from healthy controls (HC). Results Experimental evaluation on a clinical dataset demonstrated that, compared to the outcomes obtained from each modality individually, our proposed multi-modal approach classified SZ individuals from HC with an improved accuracy of 79.01%. Conclusion We proposed a deep learning-based framework that selects multi-modal (sMRI, fMRI and genetic) features efficiently and fuses them to obtain improved classification scores. Additionally, by using Explainable AI (XAI), we were able to pinpoint and validate significant functional network connections and SNPs that contributed the most toward SZ classification, providing the necessary interpretation behind our findings.
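The late-fusion step, concatenating per-modality feature vectors and feeding them to a tree-based classifier, is easy to sketch. The feature dimensions and data below are synthetic placeholders; only the concatenate-then-XGBoost pattern mirrors the described pipeline.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 300
f_smri = rng.normal(size=(n, 64))   # e.g., DenseNet morphological features
f_fmri = rng.normal(size=(n, 128))  # e.g., 1D-CNN functional connectivity features
f_snp = rng.normal(size=(n, 32))    # e.g., 1D-CNN SNP features
y = rng.integers(0, 2, size=n)      # SZ vs. healthy control

X = np.concatenate([f_smri, f_fmri, f_snp], axis=1)
clf = XGBClassifier(n_estimators=300, max_depth=4, eval_metric="logloss")
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```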
Affiliation(s)
- Ayush Kanyal
- Department of Computer Science, Georgia State University, Atlanta, GA, United States
| | - Badhan Mazumder
- Department of Computer Science, Georgia State University, Atlanta, GA, United States
| | - Vince D Calhoun
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Atlanta, GA, United States
| | - Adrian Preda
- Department of Psychiatry and Human Behavior, University of California Irvine, Irvine, CA, United States
| | - Jessica Turner
- Department of Psychiatry and Behavioral Health, The Ohio State University, Columbus, OH, United States
| | - Judith Ford
- Department of Psychiatry, University of California, San Francisco, San Francisco, CA, United States
| | - Dong Hye Ye
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Atlanta, GA, United States
78
Chang J, Hatfield B. Advancements in computer vision and pathology: Unraveling the potential of artificial intelligence for precision diagnosis and beyond. Adv Cancer Res 2024; 161:431-478. [PMID: 39032956 DOI: 10.1016/bs.acr.2024.05.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/23/2024]
Abstract
The integration of computer vision into pathology through slide digitalization represents a transformative leap in the field's evolution. Traditional pathology methods, while reliable, are often time-consuming and susceptible to intra- and interobserver variability. In contrast, computer vision, empowered by artificial intelligence (AI) and machine learning (ML), promises revolutionary changes, offering consistent, reproducible, and objective results with ever-increasing speed and scalability. The applications of advanced algorithms and deep learning architectures like CNNs and U-Nets augment pathologists' diagnostic capabilities, opening new frontiers in automated image analysis. As these technologies mature and integrate into digital pathology workflows, they are poised to provide deeper insights into disease processes, quantify and standardize biomarkers, enhance patient outcomes, and automate routine tasks, reducing pathologists' workload. However, this transformative force calls for cross-disciplinary collaboration between pathologists, computer scientists, and industry innovators to drive research and development. While acknowledging its potential, this chapter addresses the limitations of AI in pathology, encompassing technical, practical, and ethical considerations during development and implementation.
Affiliation(s)
- Justin Chang
- Virginia Commonwealth University Health System, Richmond, VA, United States
| | - Bryce Hatfield
- Virginia Commonwealth University Health System, Richmond, VA, United States.
79
Srikrishna M, Seo W, Zettergren A, Kern S, Cantré D, Gessler F, Sotoudeh H, Seidlitz J, Bernstock JD, Wahlund LO, Westman E, Skoog I, Virhammar J, Fällmar D, Schöll M. Assessing CT-based Volumetric Analysis via Transfer Learning with MRI and Manual Labels for Idiopathic Normal Pressure Hydrocephalus. MEDRXIV : THE PREPRINT SERVER FOR HEALTH SCIENCES 2024:2024.06.23.24309144. [PMID: 38978640 PMCID: PMC11230337 DOI: 10.1101/2024.06.23.24309144] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/10/2024]
Abstract
Background Brain computed tomography (CT) is an accessible and commonly utilized technique for assessing brain structure. In cases of idiopathic normal pressure hydrocephalus (iNPH), the presence of ventriculomegaly is often neuroradiologically evaluated by visual rating and manual measurement of each image. Previously, we developed and tested a deep learning model that utilizes transfer learning from magnetic resonance imaging (MRI) for CT-based intracranial tissue segmentation. Accordingly, herein we aimed to enhance the segmentation of ventricular cerebrospinal fluid (VCSF) in brain CT scans and assess the performance of automated brain CT volumetrics in iNPH patient diagnostics. Methods The development of the model used a two-stage approach. Initially, a 2D U-Net model was trained to predict VCSF segmentations from CT scans, using paired MR-VCSF labels from healthy controls. This model was subsequently refined by incorporating manually segmented lateral CT-VCSF labels from iNPH patients, building on the features learned from the initial U-Net model. The training dataset included 734 CT datasets from healthy controls paired with T1-weighted MRI scans from the Gothenburg H70 Birth Cohort Studies and 62 CT scans from iNPH patients at Uppsala University Hospital. To validate the model's performance across diverse patient populations, external clinical images including scans of 11 iNPH patients from the Universitätsmedizin Rostock, Germany, and 30 iNPH patients from the University of Alabama at Birmingham, United States were used. Further, we obtained three CT-based volumetric measures (CTVMs) related to iNPH. Results Our analyses demonstrated strong volumetric correlations (ϱ=0.91, p<0.001) between automatically and manually derived CT-VCSF measurements in iNPH patients. The CTVMs exhibited high accuracy in differentiating iNPH patients from controls in external clinical datasets with an AUC of 0.97 and in the Uppsala University Hospital datasets with an AUC of 0.99. Discussion CTVMs derived through deep learning show potential for assessing and quantifying morphological features in hydrocephalus. Critically, these measures performed comparably to gold-standard neuroradiology assessments in distinguishing iNPH from healthy controls, even in the presence of intraventricular shunt catheters. Accordingly, such an approach may serve to improve the radiological evaluation of iNPH diagnosis/monitoring (i.e., treatment responses). Since CT is much more widely available than MRI, our results have considerable clinical impact.
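The two-stage idea, pre-train a 2D U-Net on CT slices paired with MR-derived VCSF labels and then continue training on manually labeled iNPH cases, can be outlined as below. The segmentation_models_pytorch U-Net and Dice loss are stand-ins chosen for brevity; the authors' exact architecture and training recipe may differ.

```python
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(encoder_name="resnet34", in_channels=1, classes=1)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = smp.losses.DiceLoss(mode="binary")

def train_epoch(loader):
    for ct_slice, vcsf_mask in loader:
        opt.zero_grad()
        loss_fn(model(ct_slice), vcsf_mask).backward()
        opt.step()

# Stage 1: healthy-control CT slices with MR-derived VCSF labels.
# train_epoch(mr_label_loader)
# Stage 2: refine the same network on manually segmented iNPH lateral ventricles.
# train_epoch(inph_label_loader)
```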
Affiliation(s)
- Meera Srikrishna
- Wallenberg Centre for Molecular and Translational Medicine, University of Gothenburg, Gothenburg, Sweden
- Department of Psychiatry and Neurochemistry, Institute of Physiology and Neuroscience, University of Gothenburg, Gothenburg, Sweden
| | - Woosung Seo
- Department of Surgical Sciences, Neuroradiology, Uppsala University, Uppsala, Sweden
| | - Anna Zettergren
- Neuropsychiatric Epidemiology, Institute of Neuroscience and Physiology, Sahlgrenska Academy, Centre for Ageing and Health (AgeCap), University of Gothenburg, Gothenburg, Sweden
| | - Silke Kern
- Neuropsychiatric Epidemiology, Institute of Neuroscience and Physiology, Sahlgrenska Academy, Centre for Ageing and Health (AgeCap), University of Gothenburg, Gothenburg, Sweden
- Department of Psychiatry and Neurochemistry, Institute of Neuroscience and Physiology, Sahlgrenska Academy, University of Gothenburg, Mölndal, Sweden
| | - Daniel Cantré
- Institute of Diagnostic and Interventional Radiology, Pediatric Radiology and Neuroradiology, University Medical Center Rostock, Rostock, Germany
| | - Florian Gessler
- Department of Neurosurgery, University Medicine of Rostock, 18057 Rostock, Germany
| | - Houman Sotoudeh
- Department of Neuroradiology, University of Alabama, Birmingham, AL, United States
| | - Jakob Seidlitz
- Lifespan Brain Institute, The Children’s Hospital of Philadelphia and Penn Medicine, Philadelphia, PA, USA
- Institute for Translational Medicine and Therapeutics, University of Pennsylvania, Philadelphia, PA, USA
- Department of Psychiatry, University of Pennsylvania, Philadelphia, United States
- Department of Child and Adolescent Psychiatry and Behavioral Science, The Children’s Hospital of Philadelphia, Philadelphia, United States
| | - Joshua D. Bernstock
- Department of Neurosurgery, Brigham and Women’s Hospital, Harvard Medical School, Boston, Massachusetts
- David H. Koch Institute for Integrative Cancer Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
| | - Lars-Olof Wahlund
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
| | - Eric Westman
- Division of Clinical Geriatrics, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden
| | - Ingmar Skoog
- Neuropsychiatric Epidemiology, Institute of Neuroscience and Physiology, Sahlgrenska Academy, Centre for Ageing and Health (AgeCap), University of Gothenburg, Gothenburg, Sweden
| | - Johan Virhammar
- Department of Medical Sciences, Neurology, Uppsala University, Uppsala, Sweden
| | - David Fällmar
- Department of Surgical Sciences, Neuroradiology, Uppsala University, Uppsala, Sweden
| | - Michael Schöll
- Wallenberg Centre for Molecular and Translational Medicine, University of Gothenburg, Gothenburg, Sweden
- Department of Psychiatry and Neurochemistry, Institute of Physiology and Neuroscience, University of Gothenburg, Gothenburg, Sweden
- Dementia Research Centre, Queen Square Institute of Neurology, University College London, London, UK
- Department of Psychiatry, Cognition and Aging Psychiatry, Sahlgrenska University Hospital, Mölndal, Sweden
80
Jain S, Li X, Xu M. Knowledge Transfer from Macro-world to Micro-world: Enhancing 3D Cryo-ET Classification through Fine-Tuning Video-based Deep Models. Bioinformatics 2024; 40:btae368. [PMID: 38889274 PMCID: PMC11269433 DOI: 10.1093/bioinformatics/btae368] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2024] [Revised: 04/30/2024] [Accepted: 06/11/2024] [Indexed: 06/20/2024] Open
Abstract
MOTIVATION Deep learning models have achieved remarkable success in a wide range of natural-world tasks, such as vision, language, and speech recognition. These accomplishments are largely attributed to the availability of open-source large-scale datasets. More importantly, pre-trained foundational models exhibit a surprising degree of transferability to downstream tasks, enabling efficient learning even with limited training examples. However, the application of such natural-domain models to the domain of tiny Cryo-Electron Tomography (Cryo-ET) images has been a relatively unexplored frontier. This research is motivated by the intuition that 3D Cryo-ET voxel data can be conceptually viewed as a sequence of progressively evolving video frames. RESULTS Leveraging the above insight, we propose a novel approach that involves the utilization of 3D models pre-trained on large-scale video datasets to enhance Cryo-ET subtomogram classification. Our experiments, conducted on both simulated and real Cryo-ET datasets, reveal compelling results. The use of video initialization not only demonstrates improvements in classification accuracy but also substantially reduces training costs. Further analyses provide additional evidence of the value of video initialization in enhancing subtomogram feature extraction. Additionally, we observe that video initialization yields similar positive effects when applied to medical 3D classification tasks, underscoring the potential of cross-domain knowledge transfer from video-based models to advance the state-of-the-art in a wide range of biological and medical data types. AVAILABILITY AND IMPLEMENTATION https://github.com/xulabs/aitom.
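In practice, treating a subtomogram as a short "video" lets one initialize a 3D network from video-pretrained weights. A hedged sketch with torchvision's Kinetics-pretrained R3D-18 is shown below; the class count and the channel replication for single-channel volumes are assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18, R3D_18_Weights

model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)  # video pre-training
model.fc = nn.Linear(model.fc.in_features, 10)  # e.g., 10 macromolecule classes

# A single-channel volume (D, H, W) is replicated to 3 channels so it
# matches the video model's expected (N, C, T, H, W) input layout.
volume = torch.randn(2, 1, 32, 64, 64).repeat(1, 3, 1, 1, 1)
logits = model(volume)
```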
Affiliation(s)
- Sabhay Jain
- Electrical Engineering Department, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, 208016, India
| | - Xingjian Li
- Ray and Stephanie Lane Computational Biology Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, 15213, United States
| | - Min Xu
- Ray and Stephanie Lane Computational Biology Department, Carnegie Mellon University, Pittsburgh, Pennsylvania, 15213, United States
81
Zou C, Ji H, Cui J, Qian B, Chen YC, Zhang Q, He S, Sui Y, Bai Y, Zhong Y, Zhang X, Ni T, Che Z. Preliminary study on AI-assisted diagnosis of bone remodeling in chronic maxillary sinusitis. BMC Med Imaging 2024; 24:140. [PMID: 38858631 PMCID: PMC11165780 DOI: 10.1186/s12880-024-01316-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2023] [Accepted: 05/30/2024] [Indexed: 06/12/2024] Open
Abstract
OBJECTIVE To construct a deep learning convolutional neural network (CNN) model and a machine learning support vector machine (SVM) model for detecting bone remodeling in chronic maxillary sinusitis (CMS) from CT image data, in order to improve the accuracy of image diagnosis. METHODS Maxillary sinus CT data of 1000 samples from 500 patients treated in our hospital between January 2018 and December 2021 were collected. The first part was the establishment and testing of a chronic maxillary sinusitis detection model using 461 images. The second part was the establishment and testing of a detection model for chronic maxillary sinusitis with bone remodeling using 802 images. The sensitivity, specificity, accuracy, and area under the curve (AUC) value of each test set were recorded. RESULTS For the test set of 93 CMS samples, the sensitivity, specificity, and accuracy were 0.9796, 0.8636, and 0.9247, respectively, with an AUC of 0.94. For the test set of 161 samples of CMS with bone remodeling, the sensitivity, specificity, and accuracy were 0.7353, 0.9685, and 0.9193, respectively, with an AUC of 0.89. CONCLUSION It is feasible to use artificial intelligence methods such as deep learning and machine learning to automatically identify CMS and bone remodeling in MSCT images of the paranasal sinuses, which can help standardize imaging diagnosis and meet the needs of clinical application.
Affiliation(s)
- Caiyun Zou
- Department of Radiology, Nanjing Tongren Hospital, School of Medicine, Southeast University, No. 2007, Ji Yin Avenue, Jiang Ning District, Nanjing, 211102, PR China
| | - Hongbo Ji
- Department of Radiology, Nanjing Tongren Hospital, School of Medicine, Southeast University, No. 2007, Ji Yin Avenue, Jiang Ning District, Nanjing, 211102, PR China
| | - Jie Cui
- Department of Radiology, Nanjing Tongren Hospital, School of Medicine, Southeast University, No. 2007, Ji Yin Avenue, Jiang Ning District, Nanjing, 211102, PR China
| | - Bo Qian
- Department of Radiology, Nanjing Tongren Hospital, School of Medicine, Southeast University, No. 2007, Ji Yin Avenue, Jiang Ning District, Nanjing, 211102, PR China
| | - Yu-Chen Chen
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, Nanjing, PR China
| | - Qingxiang Zhang
- Department of Otolaryngology Head and Neck Surgery, Nanjing Tongren Hospital, School of Medicine, Southeast University, Nanjing, PR China
| | - Shuangba He
- Department of Otolaryngology Head and Neck Surgery, Nanjing Tongren Hospital, School of Medicine, Southeast University, Nanjing, PR China
| | - Yang Sui
- School of Statistics and Management, Shanghai University of Finance and Economics, Shanghai, PR China
| | - Yang Bai
- School of Statistics and Management, Shanghai University of Finance and Economics, Shanghai, PR China
| | - Yeming Zhong
- Department of Radiology, Nanjing Tongren Hospital, School of Medicine, Southeast University, No. 2007, Ji Yin Avenue, Jiang Ning District, Nanjing, 211102, PR China
| | - Xu Zhang
- Department of Radiology, Nanjing Tongren Hospital, School of Medicine, Southeast University, No. 2007, Ji Yin Avenue, Jiang Ning District, Nanjing, 211102, PR China
| | - Ting Ni
- Department of Radiology, Nanjing Tongren Hospital, School of Medicine, Southeast University, No. 2007, Ji Yin Avenue, Jiang Ning District, Nanjing, 211102, PR China
| | - Zigang Che
- Department of Radiology, Nanjing Tongren Hospital, School of Medicine, Southeast University, No. 2007, Ji Yin Avenue, Jiang Ning District, Nanjing, 211102, PR China.
82
Fang K, Zheng X, Lin X, Dai Z. A comprehensive approach for osteoporosis detection through chest CT analysis and bone turnover markers: harnessing radiomics and deep learning techniques. Front Endocrinol (Lausanne) 2024; 15:1296047. [PMID: 38894742 PMCID: PMC11183288 DOI: 10.3389/fendo.2024.1296047] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/18/2023] [Accepted: 05/22/2024] [Indexed: 06/21/2024] Open
Abstract
Purpose The main objective of this study is to assess the possibility of using radiomics, deep learning, and transfer learning methods for the analysis of chest CT scans. An additional aim is to combine these techniques with bone turnover markers to identify and screen for osteoporosis in patients. Method A total of 488 patients who had undergone chest CT and bone turnover marker testing, and had known bone mineral density, were included in this study. ITK-SNAP software was used to delineate regions of interest, while radiomics features were extracted using Python. Multiple 2D and 3D deep learning models were trained to identify these regions of interest. The effectiveness of these techniques in screening for osteoporosis in patients was compared. Result Clinical models based on gender, age, and β-cross achieved an accuracy of 0.698 and an AUC of 0.665. Radiomics models, which utilized 14 selected radiomics features, achieved a maximum accuracy of 0.750 and an AUC of 0.739. The test group yielded promising results: the 2D Deep Learning model achieved an accuracy of 0.812 and an AUC of 0.855, while the 3D Deep Learning model performed even better with an accuracy of 0.854 and an AUC of 0.906. Similarly, the 2D Transfer Learning model achieved an accuracy of 0.854 and an AUC of 0.880, whereas the 3D Transfer Learning model exhibited an accuracy of 0.740 and an AUC of 0.737. Overall, the application of 3D deep learning and 2D transfer learning techniques on chest CT scans showed excellent screening performance in the context of osteoporosis. Conclusion Bone turnover markers may not be necessary for osteoporosis screening, as 3D deep learning and 2D transfer learning techniques utilizing chest CT scans proved to be equally effective alternatives.
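The radiomics arm of such a pipeline typically extracts handcrafted features from an ITK-SNAP region of interest. A minimal pyradiomics sketch is given below; the file names are placeholders, and the 14 features selected in the study are not reproduced here.

```python
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")  # intensity statistics
extractor.enableFeatureClassByName("glcm")        # texture features

# image.nrrd: the chest CT volume; mask.nrrd: the delineated ROI.
features = extractor.execute("image.nrrd", "mask.nrrd")
numeric = {k: v for k, v in features.items() if k.startswith("original_")}
print(len(numeric), "radiomics features extracted")
```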
Affiliation(s)
- Kaibin Fang
- Department of Orthopaedic Surgery, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
| | - Xiaoling Zheng
- Aviation College, Liming Vocational University, Quanzhou, China
| | - Xiaocong Lin
- Department of Orthopaedic Surgery, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
| | - Zhangsheng Dai
- Department of Orthopaedic Surgery, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
83
Vidhani FR, Woo JJ, Zhang YB, Olsen RJ, Ramkumar PN. Automating Linear and Angular Measurements for the Hip and Knee After Computed Tomography: Validation of a Three-Stage Deep Learning and Computer Vision-Based Pipeline for Pathoanatomic Assessment. Arthroplast Today 2024; 27:101394. [PMID: 39071819 PMCID: PMC11282415 DOI: 10.1016/j.artd.2024.101394] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/01/2023] [Revised: 01/17/2024] [Accepted: 04/01/2024] [Indexed: 07/30/2024] Open
Abstract
Background Variability in the bony morphology of pathologic hips/knees is a challenge in automating preoperative computed tomography (CT) scan measurements. With the increasing prevalence of CT for advanced preoperative planning, processing this data represents a critical bottleneck in presurgical planning, research, and development. The purpose of this study was to demonstrate a reproducible and scalable methodology for analyzing CT-based anatomy to process hip and knee anatomy for perioperative planning and execution. Methods One hundred patients with preoperative CT scans undergoing total knee arthroplasty for osteoarthritis were processed. A two-step deep learning pipeline of classification and segmentation models was developed that identifies landmark images and then generates contour representations. We utilized an open-source computer vision library to compute measurements. Classification models were assessed by accuracy, precision, and recall. Segmentation models were evaluated using dice and mean Intersection over Union (IOU) metrics. Contour measurements were compared against manual measurements to validate posterior condylar axis angle, sulcus angle, trochlear groove-tibial tuberosity distance, acetabular anteversion, and femoral version. Results Classifiers identified landmark images with accuracy of 0.91 and 0.88 for hip and knee models, respectively. Segmentation models demonstrated mean IOU scores above 0.95 with the highest dice coefficient of 0.957 [0.954-0.961] (UNet3+) and the highest mean IOU of 0.965 [0.961-0.969] (Attention U-Net). There were no statistically significant differences for the measurements taken automatically vs manually (P > 0.05). Average time for the pipeline to preprocess (48.65 +/- 4.41 sec), classify/retrieve landmark images (8.36 +/- 3.40 sec), segment images (<1 sec), and obtain measurements was 2.58 (+/- 1.92) minutes. Conclusions A fully automated three-stage deep learning and computer vision-based pipeline of classification and segmentation models accurately localized, segmented, and measured landmark hip and knee images for patients undergoing total knee arthroplasty. Incorporation of clinical parameters, like patient-reported outcome measures and instability risk, will be important considerations alongside anatomic parameters.
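Once a structure has been segmented, the measurement stage reduces to contour geometry. The toy OpenCV snippet below shows how linear and angular quantities fall out of a fitted contour; the mask and the specific measurements are illustrative, not the study's pipeline.

```python
import cv2
import numpy as np

mask = np.zeros((256, 256), dtype=np.uint8)
cv2.ellipse(mask, (128, 128), (80, 40), 30, 0, 360, 255, -1)  # stand-in segmentation

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
(cx, cy), (w, h), angle = cv2.fitEllipse(contours[0])
print(f"centroid=({cx:.1f}, {cy:.1f})  axes=({w:.1f}, {h:.1f})  angle={angle:.1f} deg")
```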
Affiliation(s)
- Faizaan R. Vidhani
- Brown University/The Warren Alpert School of Brown University, Providence, RI, USA
| | - Joshua J. Woo
- Brown University/The Warren Alpert School of Brown University, Providence, RI, USA
| | - Yibin B. Zhang
- Harvard Medical School/Brigham and Women’s, Boston, MA, USA
| | - Reena J. Olsen
- Sports Medicine Institute, Hospital for Special Surgery, New York, NY, USA
84
Saha A, Ganie SM, Dutta Pramanik PK, Yadav RK, Mallik S, Zhao Z. Correction: VER-Net: a hybrid transfer learning model for lung cancer detection using CT scan images. BMC Med Imaging 2024; 24:128. [PMID: 38822231 PMCID: PMC11140995 DOI: 10.1186/s12880-024-01315-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/02/2024] Open
Affiliation(s)
- Anindita Saha
- Department of Computing Science and Engineering, IFTM University, Moradabad, Uttar Pradesh, India
| | - Shahid Mohammad Ganie
- AI Research Centre, Department of Analytics, School of Business, Woxsen University, Hyderabad, Telangana, 502345, India
| | - Pijush Kanti Dutta Pramanik
- School of Computer Applications and Technology, Galgotias University, Greater Noida, Uttar Pradesh, 203201, India.
| | - Rakesh Kumar Yadav
- Department of Computer Science & Engineering, MSOET, Maharishi University of Information Technology, Lucknow, Uttar Pradesh, India
| | - Saurav Mallik
- Department of Environmental Health, Harvard T. H. Chan School of Public Health, Boston, MA, USA
| | - Zhongming Zhao
- Center for Precision Health, McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, 77030, USA.
85
Saha A, Ganie SM, Pramanik PKD, Yadav RK, Mallik S, Zhao Z. VER-Net: a hybrid transfer learning model for lung cancer detection using CT scan images. BMC Med Imaging 2024; 24:120. [PMID: 38789925 PMCID: PMC11127393 DOI: 10.1186/s12880-024-01238-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2023] [Accepted: 03/05/2024] [Indexed: 05/26/2024] Open
Abstract
BACKGROUND Lung cancer is the second most common cancer worldwide, with over two million new cases per year. Early identification would allow healthcare practitioners to manage it more effectively. The advancement of computer-aided detection systems has significantly impacted clinical analysis and decision-making on human disease, and machine learning and deep learning techniques are being applied successfully toward this end. Owing to its several advantages, transfer learning has become popular for disease detection based on image data. METHODS In this work, we build a novel transfer learning model (VER-Net) by stacking three different transfer learning models to detect lung cancer using lung CT scan images. The model is trained to map CT scan images to four lung cancer classes. Various measures, such as image preprocessing, data augmentation, and hyperparameter tuning, were taken to improve the efficacy of VER-Net. All the models were trained and evaluated on a multiclass chest CT image dataset. RESULTS The experimental results confirm that VER-Net outperformed the eight other transfer learning models it was compared with. VER-Net scored 91%, 92%, 91%, and 91.3% for accuracy, precision, recall, and F1-score, respectively. Compared to the state of the art, VER-Net has better accuracy. CONCLUSION VER-Net is not only effective for lung cancer detection but may also be useful for other diseases for which CT scan images are available.
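A hedged sketch of the stacking pattern the abstract describes, in Keras: three pretrained backbones extract features in parallel, and a dense head maps the fused features to the four lung cancer classes. The abstract does not name the backbones here, so VGG16, EfficientNetB0, and ResNet50 (and the head sizes) are assumptions, and per-backbone input preprocessing is omitted for brevity:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, EfficientNetB0, ResNet50

inputs = layers.Input(shape=(224, 224, 3))
branches = []
for Backbone in (VGG16, EfficientNetB0, ResNet50):
    base = Backbone(include_top=False, weights="imagenet", pooling="avg")
    base.trainable = False                       # freeze for transfer learning
    branches.append(base(inputs))                # one pooled feature vector each

fused = layers.Concatenate()(branches)           # stack the three feature vectors
x = layers.Dense(256, activation="relu")(fused)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(4, activation="softmax")(x)   # four lung cancer classes

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```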
Affiliation(s)
- Anindita Saha
- Department of Computing Science and Engineering, IFTM University, Moradabad, Uttar Pradesh, India
| | - Shahid Mohammad Ganie
- AI Research Centre, Department of Analytics, School of Business, Woxsen University, Hyderabad, Telangana, 502345, India
| | - Pijush Kanti Dutta Pramanik
- School of Computer Applications and Technology, Galgotias University, Greater Noida, Uttar Pradesh, 203201, India.
| | - Rakesh Kumar Yadav
- Department of Computer Science & Engineering, MSOET, Maharishi University of Information Technology, Lucknow, Uttar Pradesh, India
| | - Saurav Mallik
- Department of Environmental Health, Harvard T. H. Chan School of Public Health, Boston, MA, USA
| | - Zhongming Zhao
- Center for Precision Health, McWilliams School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, TX, 77030, USA.
| |
|
86
|
Sheng H, Ma L, Samson JF, Liu D. BarlowTwins-CXR: enhancing chest X-ray abnormality localization in heterogeneous data with cross-domain self-supervised learning. BMC Med Inform Decis Mak 2024; 24:126. [PMID: 38755563 PMCID: PMC11097466 DOI: 10.1186/s12911-024-02529-9]
Abstract
BACKGROUND Chest X-ray-based abnormality localization, essential in diagnosing various diseases, faces significant clinical challenges due to complex interpretations and the growing workload of radiologists. While recent advances in deep learning offer promising solutions, domain inconsistency in cross-domain transfer learning remains a critical issue that hampers the efficiency and accuracy of diagnostic processes. This study aims to address the domain inconsistency problem and improve the automatic abnormality localization performance of heterogeneous chest X-ray image analysis by developing a self-supervised learning strategy called "BarlowTwins-CXR". METHODS We utilized two publicly available datasets: the NIH Chest X-ray Dataset and VinDr-CXR. The BarlowTwins-CXR approach used a two-stage training process. Initially, self-supervised pre-training was performed using an adjusted Barlow Twins algorithm on the NIH dataset with a ResNet50 backbone pre-trained on ImageNet. This was followed by supervised fine-tuning on the VinDr-CXR dataset using Faster R-CNN with a Feature Pyramid Network (FPN). The study employed mean Average Precision (mAP) at an Intersection over Union (IoU) of 50% and Area Under the Curve (AUC) for performance evaluation. RESULTS Our experiments showed a significant improvement in model performance with BarlowTwins-CXR. The approach achieved a 3% increase in mAP50 accuracy compared to traditional ImageNet-pretrained models. In addition, the Ablation-CAM method revealed enhanced precision in localizing chest abnormalities. The study involved 112,120 images from the NIH dataset and 18,000 images from the VinDr-CXR dataset, providing robust training and testing samples. CONCLUSION BarlowTwins-CXR significantly enhances the efficiency and accuracy of chest X-ray-based abnormality localization, outperforming traditional transfer learning methods and effectively overcoming domain inconsistency in cross-domain scenarios. Our results demonstrate the potential of self-supervised learning to improve the generalizability of models in medical settings with limited amounts of heterogeneous data. This approach can be instrumental in aiding radiologists, particularly in high-workload environments, and offers a promising direction for future AI-driven healthcare solutions.
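For orientation, a hedged sketch of the standard Barlow Twins objective that drives the self-supervised pre-training stage: the cross-correlation matrix of two augmented views' embeddings is pushed toward the identity. The trade-off weight and tensor shapes are illustrative:

```python
import tensorflow as tf

def barlow_twins_loss(z1: tf.Tensor, z2: tf.Tensor, lambda_: float = 5e-3) -> tf.Tensor:
    n = tf.cast(tf.shape(z1)[0], z1.dtype)
    # Standardize each embedding dimension across the batch.
    z1 = (z1 - tf.reduce_mean(z1, axis=0)) / (tf.math.reduce_std(z1, axis=0) + 1e-6)
    z2 = (z2 - tf.reduce_mean(z2, axis=0)) / (tf.math.reduce_std(z2, axis=0) + 1e-6)
    c = tf.matmul(z1, z2, transpose_a=True) / n      # D x D cross-correlation matrix
    diag = tf.linalg.diag_part(c)
    on_diag = tf.reduce_sum(tf.square(diag - 1.0))   # pull diagonal toward 1
    off_diag = tf.reduce_sum(tf.square(c)) - tf.reduce_sum(tf.square(diag))
    return on_diag + lambda_ * off_diag              # decorrelate off-diagonal terms

# z1, z2 would be projector outputs for two augmentations of the same X-rays.
z1 = tf.random.normal((8, 128)); z2 = tf.random.normal((8, 128))
print(barlow_twins_loss(z1, z2))
```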
Affiliation(s)
- Haoyue Sheng
- Département d'informatique et de recherche opérationnelle, Université de Montréal, 2920 chemin de la Tour, Montréal, H3T 1J4, QC, Canada.
- Mila - Quebec AI Institute, 6666 Rue Saint-Urbain, Montréal, H2S 3H1, QC, Canada.
- Direction des ressources informationnelles, CIUSSS du Centre-Sud-de-l'Île-de-Montréal, 400 Blvd. De Maisonneuve Ouest, Montréal, H3A 1L4, QC, Canada.
| | - Linrui Ma
- Département d'informatique et de recherche opérationnelle, Université de Montréal, 2920 chemin de la Tour, Montréal, H3T 1J4, QC, Canada
- Mila - Quebec AI Institute, 6666 Rue Saint-Urbain, Montréal, H2S 3H1, QC, Canada
| | - Jean-François Samson
- Direction des ressources informationnelles, CIUSSS du Centre-Sud-de-l'Île-de-Montréal, 400 Blvd. De Maisonneuve Ouest, Montréal, H3A 1L4, QC, Canada
| | - Dianbo Liu
- Mila - Quebec AI Institute, 6666 Rue Saint-Urbain, Montréal, H2S 3H1, QC, Canada
- School of Medicine and College of Design and Engineering, National University of Singapore, 21 Lower Kent Ridge Rd, Singapore, 119077, SG, Singapore
| |
|
87
|
Magboo VPC, Magboo MSA. SPECT-MPI for Coronary Artery Disease: A Deep Learning Approach. Acta Medica Philippina 2024; 58:67-75. [PMID: 38812768 PMCID: PMC11132284 DOI: 10.47895/amp.vi0.7582]
Abstract
Background Worldwide, coronary artery disease (CAD) is a leading cause of mortality and morbidity and remains a top health priority in many countries. A non-invasive imaging modality for the diagnosis of CAD, single photon emission computed tomography-myocardial perfusion imaging (SPECT-MPI), is usually requested by cardiologists as it displays the radiotracer distribution in the heart, reflecting myocardial perfusion. SPECT-MPI is interpreted visually by a nuclear medicine physician; the reading depends largely on the physician's clinical experience and shows significant inter-observer variability. Objective The aim of the study was to apply a deep learning approach to the classification of SPECT-MPI for perfusion abnormalities using convolutional neural networks (CNN). Methods A publicly available anonymized SPECT-MPI dataset from a machine learning repository (https://www.kaggle.com/selcankaplan/spect-mpi) was used in this study, involving 192 patients who underwent stress-test-rest Tc99m MPI. An exploratory approach to CNN hyperparameter selection was used to search for the optimum neural network model, with particular focus on various dropouts (0.2, 0.5, 0.7), batch sizes (8, 16, 32, 64), and numbers of dense nodes (32, 64, 128, 256). The base CNN model was also compared with pre-trained CNNs commonly used for medical images, such as VGG16, InceptionV3, DenseNet121, and ResNet50. All simulation experiments were performed in Kaggle using TensorFlow 2.6.0, Keras 2.6.0, and Python 3.7.10. Results The best-performing base CNN model, with 0.7 dropout, batch size 8, and 32 dense nodes, generated the highest normalized Matthews correlation coefficient of 0.909 and obtained 93.75% accuracy, 96.00% sensitivity, 96.00% precision, and 96.00% F1-score. It also obtained higher classification performance than the pre-trained architectures. Conclusions The results suggest that deep learning approaches using CNN models can be deployed by nuclear medicine physicians to further augment their decision-making in the interpretation of SPECT-MPI tests. These CNN models can also serve as a dependable and valid second opinion and decision-support tool, as well as teaching or learning material for less-experienced physicians, particularly those still in training. This highlights the clinical utility of deep learning approaches through CNN models in the practice of nuclear cardiology.
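A hedged sketch of the best-reported configuration (dropout 0.7, 32 dense nodes, batch size 8) in Keras; the convolutional trunk is not specified in the abstract, so the two-block trunk and input size below are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 1)),             # SPECT-MPI slice (assumed size)
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),           # 32 dense nodes from the search
    layers.Dropout(0.7),                           # best-performing dropout rate
    layers.Dense(1, activation="sigmoid"),         # normal vs. abnormal perfusion
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, batch_size=8, epochs=50, validation_split=0.2)
```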
Affiliation(s)
- Vincent Peter C Magboo
- Department of Physical Sciences and Mathematics, College of Arts and Sciences, University of the Philippines Manila
| | - Ma Sheila A Magboo
- Department of Physical Sciences and Mathematics, College of Arts and Sciences, University of the Philippines Manila
| |
|
88
|
Chiu YJ. Automated medication verification system (AMVS): System based on edge detection and CNN classification drug on embedded systems. Heliyon 2024; 10:e30486. [PMID: 38742071 PMCID: PMC11089321 DOI: 10.1016/j.heliyon.2024.e30486]
Abstract
A novel automated medication verification system (AMVS) aims to address the limitations of manual medication verification among healthcare professionals with high workloads, thereby reducing medication errors in hospitals. The manual medication verification process is time-consuming and prone to errors, especially in healthcare settings with high workloads; the proposed system streamlines and automates this process, enhancing efficiency and reducing medication errors. The system employs deep learning models to swiftly and accurately classify multiple medications within a single image without requiring manual labeling during model construction. It comprises edge detection and classification stages to verify medication types. Unlike previous studies conducted in open spaces, our study takes place in a closed space to minimize the impact of optical changes on image capture. During the experimental process, the system identifies each drug within the image individually using an edge detection method and utilizes a classification model to determine each drug type. Our research has successfully developed a fully automated drug recognition system, achieving an accuracy of over 95% in identifying drug types and conducting segmentation analyses. Specifically, the system demonstrates an accuracy of approximately 96% for drug sets containing fewer than ten types and 93% for those with ten types. This verification system builds an image classification model quickly and holds promising potential for assisting nursing staff during medication verification, thereby reducing the likelihood of medication errors and alleviating the burden on nursing staff.
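A hedged sketch of the two-stage idea (edge detection to isolate each pill, then per-crop classification); the Canny thresholds, minimum area, and crop size are illustrative assumptions, and `classifier` stands in for the trained CNN:

```python
import cv2
import numpy as np

def extract_drug_crops(image_bgr: np.ndarray, min_area: int = 500):
    """Isolate candidate pills via Canny edges and external contours."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    crops = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h >= min_area:                    # skip speckle-sized contours
            crops.append(cv2.resize(image_bgr[y:y+h, x:x+w], (64, 64)))
    return crops

# Each crop would then be classified individually, e.g.:
# probs = classifier.predict(np.stack(crops) / 255.0)   # one label per pill
```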
Affiliation(s)
- Yen-Jung Chiu
- Department of Biomedical Engineering, Ming Chuan University, Taoyuan, 333, Taiwan
| |
|
89
|
Lee MR, Kao MH, Hsieh YC, Sun M, Tang KT, Wang JY, Ho CC, Shih JY, Yu CJ. Cross-site validation of lung cancer diagnosis by electronic nose with deep learning: a multicenter prospective study. Respir Res 2024; 25:203. [PMID: 38730430 PMCID: PMC11084132 DOI: 10.1186/s12931-024-02840-z]
Abstract
BACKGROUND Although the electronic nose (eNose) has been intensively investigated for diagnosing lung cancer, cross-site validation remains a major obstacle, and no such studies have yet been performed. METHODS Patients with lung cancer, as well as healthy control and diseased control groups, were prospectively recruited from two referral centers between 2019 and 2022. Deep learning models for detecting lung cancer from eNose breathprints were developed using a training cohort from one site and then tested on the cohort from the other site. Semi-Supervised Domain-Generalized (Semi-DG) Augmentation (SDA) and Noise-Shift Augmentation (NSA) methods, with or without fine-tuning, were applied to improve performance. RESULTS In this study, 231 participants were enrolled, comprising a training/validation cohort of 168 individuals (90 with lung cancer, 16 healthy controls, and 62 diseased controls) and a test cohort of 63 individuals (28 with lung cancer, 10 healthy controls, and 25 diseased controls). The model had satisfactory results in the validation cohort from the same hospital, while directly applying the trained model to the test cohort yielded suboptimal results (AUC: 0.61, 95% CI: 0.47-0.76). Performance improved after applying data augmentation methods in the training cohort (SDA, AUC: 0.89 [0.81-0.97]; NSA, AUC: 0.90 [0.89-1.00]) and improved further after fine-tuning (SDA plus fine-tuning, AUC: 0.95 [0.89-1.00]; NSA plus fine-tuning, AUC: 0.95 [0.90-1.00]). CONCLUSION Our study revealed that deep learning models developed for eNose breathprints can achieve cross-site validation with data augmentation and fine-tuning. Accordingly, eNose breathprints emerge as a convenient, non-invasive, and potentially generalizable solution for lung cancer detection. CLINICAL TRIAL REGISTRATION This study is not a clinical trial and was therefore not registered.
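A hedged sketch of a noise-shift style augmentation for breathprint sequences; the paper's exact SDA/NSA formulations are not given in the abstract, so the Gaussian noise and per-channel baseline shift below are generic stand-ins:

```python
import numpy as np

def noise_shift_augment(breathprint: np.ndarray, noise_sd: float = 0.01,
                        shift_sd: float = 0.05, rng=None) -> np.ndarray:
    """breathprint: (time, channels) array of eNose sensor readings."""
    if rng is None:
        rng = np.random.default_rng()
    noise = rng.normal(0.0, noise_sd, size=breathprint.shape)         # sample jitter
    shift = rng.normal(0.0, shift_sd, size=(1, breathprint.shape[1])) # baseline offset
    return breathprint + noise + shift

# Toy batch: four breathprints of 200 time points over 32 sensor channels.
augmented = [noise_shift_augment(x) for x in np.random.rand(4, 200, 32)]
```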
Affiliation(s)
- Meng-Rui Lee
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Department of Internal Medicine, National Taiwan University Hospital Hsin-Chu Branch, Hsin-Chu, Taiwan
| | - Mu-Hsiang Kao
- Department of Electrical Engineering, National Tsing Hua University, No. 101, Sec. 2, Kuang-Fu Road, Hsinchu, 30013, Taiwan
| | - Ya-Chu Hsieh
- Department of Electrical Engineering, National Tsing Hua University, No. 101, Sec. 2, Kuang-Fu Road, Hsinchu, 30013, Taiwan
| | - Min Sun
- Department of Electrical Engineering, National Tsing Hua University, No. 101, Sec. 2, Kuang-Fu Road, Hsinchu, 30013, Taiwan.
| | - Kea-Tiong Tang
- Department of Electrical Engineering, National Tsing Hua University, No. 101, Sec. 2, Kuang-Fu Road, Hsinchu, 30013, Taiwan.
| | - Jann-Yuan Wang
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
| | - Chao-Chi Ho
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
| | - Jin-Yuan Shih
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
| | - Chong-Jen Yu
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Department of Internal Medicine, National Taiwan University Hospital Hsin-Chu Branch, Hsin-Chu, Taiwan
| |
|
90
|
Cheng CT, Ooyang CH, Liao CH, Kang SC. Applications of deep learning in trauma radiology: A narrative review. Biomed J 2024; 48:100743. [PMID: 38679199 PMCID: PMC11751421 DOI: 10.1016/j.bj.2024.100743]
Abstract
Diagnostic imaging is essential in modern trauma care for initial evaluation and for identifying injuries that require intervention. Deep learning (DL) has become mainstream in medical image analysis and has shown promising efficacy for classification, segmentation, and lesion detection. This narrative review provides the fundamental concepts for developing DL algorithms in trauma imaging and presents an overview of current progress in each modality. DL has been applied to detect free fluid on Focused Assessment with Sonography for Trauma (FAST), to detect traumatic findings on chest and pelvic X-rays and on computed tomography (CT) scans, to identify intracranial hemorrhage on head CT, to detect vertebral fractures, and to identify injuries to organs such as the spleen, liver, and lungs on abdominal and chest CT. Future directions involve expanding dataset size and diversity through federated learning, enhancing model explainability and transparency to build clinician trust, and integrating multimodal data to provide more meaningful insights into traumatic injuries. Though some commercial artificial intelligence products are Food and Drug Administration-approved for clinical use in the trauma field, adoption remains limited, highlighting the need for multi-disciplinary teams to engineer practical, real-world solutions. Overall, DL shows immense potential to improve the efficiency and accuracy of trauma imaging, but thoughtful development and validation are critical to ensure these technologies positively impact patient care.
Affiliation(s)
- Chi-Tung Cheng
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Taoyuan, Taiwan; School of Medicine, Chang Gung University, Taoyuan, Taiwan
| | - Chun-Hsiang Ooyang
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Taoyuan, Taiwan
| | - Chien-Hung Liao
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Taoyuan, Taiwan
| | - Shih-Ching Kang
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital, Linkou, Taoyuan, Taiwan.
| |
|
91
|
Gu C, Lee M. Deep Transfer Learning Using Real-World Image Features for Medical Image Classification, with a Case Study on Pneumonia X-ray Images. Bioengineering (Basel) 2024; 11:406. [PMID: 38671827 PMCID: PMC11048359 DOI: 10.3390/bioengineering11040406]
Abstract
Deep learning has profoundly influenced various domains, particularly medical image analysis. Traditional transfer learning approaches in this field rely on models pretrained on domain-specific medical datasets, which limits their generalizability and accessibility. In this study, we propose a novel framework called real-world feature transfer learning, which utilizes backbone models initially trained on large-scale general-purpose datasets such as ImageNet. We evaluate the effectiveness and robustness of this approach compared to models trained from scratch, focusing on the task of classifying pneumonia in X-ray images. Our experiments, which included converting grayscale images to RGB format, demonstrate that real-world feature transfer learning consistently outperforms conventional training approaches across various performance metrics. This advancement has the potential to accelerate deep learning applications in medical imaging by leveraging the rich feature representations learned from general-purpose pretrained models. The proposed methodology overcomes the limitations of domain-specific pretrained models, thereby enabling accelerated innovation in medical diagnostics and healthcare. From a mathematical perspective, we formalize the concept of real-world feature transfer learning and provide a rigorous formulation of the problem. Our experimental results provide empirical evidence supporting the effectiveness of this approach, laying the foundation for further theoretical analysis and exploration. This work contributes to the broader understanding of feature transferability across domains and has significant implications for the development of accurate and efficient models for medical image analysis, even in resource-constrained settings.
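A hedged sketch of the core recipe (replicating grayscale X-rays to three channels so an ImageNet-pretrained backbone can be reused); the ResNet50 backbone and head sizes are assumptions, and backbone-specific input preprocessing is omitted for brevity:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(224, 224, 1))
rgb = layers.Concatenate()([inputs, inputs, inputs])   # grayscale -> 3 channels
base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      pooling="avg")
base.trainable = False                                 # keep real-world features fixed
x = layers.Dense(64, activation="relu")(base(rgb))
outputs = layers.Dense(1, activation="sigmoid")(x)     # pneumonia vs. normal

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```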
Affiliation(s)
- Chanhoe Gu
- Department of Intelligent Semiconductor Engineering, Chung-Ang University, Seoul 06974, Republic of Korea;
| | - Minhyeok Lee
- Department of Intelligent Semiconductor Engineering, Chung-Ang University, Seoul 06974, Republic of Korea;
- School of Electrical and Electronics Engineering, Chung-Ang University, Seoul 06974, Republic of Korea
| |
|
92
|
Turon G, Njoroge M, Mulubwa M, Duran-Frigola M, Chibale K. AI can help to tailor drugs for Africa - but Africans should lead the way. Nature 2024; 628:265-267. [PMID: 38594395 DOI: 10.1038/d41586-024-01001-y]
|
93
|
Carter D, Bykhovsky D, Hasky A, Mamistvalov I, Zimmer Y, Ram E, Hoffer O. Convolutional neural network deep learning model accurately detects rectal cancer in endoanal ultrasounds. Tech Coloproctol 2024; 28:44. [PMID: 38561492 PMCID: PMC10984882 DOI: 10.1007/s10151-024-02917-3]
Abstract
BACKGROUND Imaging is vital for assessing rectal cancer, and endoanal ultrasound (EAUS) is highly accurate in large tertiary medical centers. However, EAUS accuracy drops outside such settings, possibly due to varied examiner experience and lower examination volumes. This underscores the need for an AI-based system to enhance accuracy in non-specialized centers. This study aimed to develop and validate deep learning (DL) models to differentiate rectal cancer in standard EAUS images. METHODS A transfer learning approach with fine-tuned DL architectures was employed, utilizing a dataset of 294 images. The performance of the DL models was assessed through tenfold cross-validation. RESULTS The DL diagnostic model exhibited a sensitivity and an accuracy of 0.78 each. In the identification phase, the automatic diagnostic platform achieved an area under the curve of 0.85 for diagnosing rectal cancer. CONCLUSIONS This research demonstrates the potential of DL models to enhance rectal cancer detection during EAUS, especially in settings with less examiner experience. The achieved sensitivity and accuracy suggest the viability of incorporating AI support for improved diagnostic outcomes in non-specialized medical centers.
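A hedged sketch of a stratified tenfold cross-validation loop of the kind described; `build_model` is a hypothetical factory for whichever fine-tuned architecture is being assessed, assumed to return a compiled Keras-style model:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_validate(images: np.ndarray, labels: np.ndarray, build_model, folds: int = 10):
    """Return mean and std of fold accuracies over a stratified k-fold split."""
    skf = StratifiedKFold(n_splits=folds, shuffle=True, random_state=42)
    scores = []
    for train_idx, test_idx in skf.split(images, labels):
        model = build_model()                          # fresh weights per fold
        model.fit(images[train_idx], labels[train_idx], epochs=10, verbose=0)
        _, acc = model.evaluate(images[test_idx], labels[test_idx], verbose=0)
        scores.append(acc)
    return float(np.mean(scores)), float(np.std(scores))
```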
Affiliation(s)
- D Carter
- Department of Gastroenterology, Chaim Sheba Medical Center, Ramat Gan, Israel.
- Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel.
| | - D Bykhovsky
- Electrical and Electronics Engineering Department, Shamoon College of Engineering, Beer-Sheba, Israel
| | - A Hasky
- School of Electrical Engineering, Afeka College of Engineering, Tel Aviv, Israel
| | - I Mamistvalov
- School of Electrical Engineering, Afeka College of Engineering, Tel Aviv, Israel
| | - Y Zimmer
- School of Medical Engineering, Afeka College of Engineering, Tel Aviv, Israel
| | - E Ram
- Department of Gastroenterology, Chaim Sheba Medical Center, Ramat Gan, Israel
- Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
| | - O Hoffer
- School of Electrical Engineering, Afeka College of Engineering, Tel Aviv, Israel
| |
|
94
|
Bottomly D, McWeeney S. Just how transformative will AI/ML be for immuno-oncology? J Immunother Cancer 2024; 12:e007841. [PMID: 38531545 DOI: 10.1136/jitc-2023-007841]
Abstract
Immuno-oncology involves the study of approaches that harness the patient's immune system to fight malignancies. Immuno-oncology, like every other biomedical and clinical research field as well as clinical operations, is in the midst of technological revolutions that vastly increase the amount of available data. Recent advances in artificial intelligence and machine learning (AI/ML) have received much attention for their potential to harness available data to improve insights and outcomes in many areas, including immuno-oncology. In this review, we discuss important aspects to consider when evaluating the potential impact of AI/ML applications in the clinic. We highlight four clinical/biomedical challenges relevant to immuno-oncology and how they may be addressed by the latest advancements in AI/ML: (1) improving efficiency in clinical workflows, (2) curating high-quality image data, (3) finding, extracting, and synthesizing text knowledge, and (4) addressing small cohort sizes in immunotherapeutic evaluation cohorts. Finally, we outline how advancements in reinforcement and federated learning, as well as the development of best practices for ethical and unbiased data generation, are likely to drive future innovations.
Affiliation(s)
- Daniel Bottomly
- Knight Cancer Institute, Oregon Health and Science University, Portland, Oregon, USA
| | - Shannon McWeeney
- Knight Cancer Institute, Oregon Health and Science University, Portland, Oregon, USA
| |
|
95
|
Alzubaidi L, Salhi A, A.Fadhel M, Bai J, Hollman F, Italia K, Pareyon R, Albahri AS, Ouyang C, Santamaría J, Cutbush K, Gupta A, Abbosh A, Gu Y. Trustworthy deep learning framework for the detection of abnormalities in X-ray shoulder images. PLoS One 2024; 19:e0299545. [PMID: 38466693 PMCID: PMC10927121 DOI: 10.1371/journal.pone.0299545]
Abstract
Musculoskeletal conditions affect an estimated 1.7 billion people worldwide, causing intense pain and disability. These conditions lead to 30 million emergency room visits yearly, and the numbers are only increasing. However, diagnosing musculoskeletal issues can be challenging, especially in emergencies where quick decisions are necessary. Deep learning (DL) has shown promise in various medical applications. However, previous methods for detecting shoulder abnormalities on X-ray images performed poorly and lacked transparency, owing to limited training data and inadequate feature representation; this often resulted in overfitting, poor generalisation, and potential bias in decision-making. To address these issues, a new trustworthy DL framework has been proposed to detect shoulder abnormalities (such as fractures, deformities, and arthritis) using X-ray images. The framework consists of two parts: same-domain transfer learning (TL) to mitigate ImageNet mismatch, and feature fusion to reduce error rates and improve trust in the final result. Same-domain TL involves training pre-trained models on a large number of labelled X-ray images from various body parts and fine-tuning them on the target dataset of shoulder X-ray images. Feature fusion combines the features extracted by seven DL models to train several ML classifiers. The proposed framework achieved an excellent accuracy rate of 99.2%, an F1-score of 99.2%, and a Cohen's kappa of 98.5%. Furthermore, the accuracy of the results was validated using three visualisation tools: gradient-weighted class activation mapping (Grad-CAM), activation visualisation, and local interpretable model-agnostic explanations (LIME). The proposed framework outperformed previous DL methods as well as three orthopaedic surgeons invited to classify the test set, who obtained an average accuracy of 79.1%. The proposed framework has proven effective and robust, improving generalisation and increasing trust in the final results.
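A hedged sketch of the feature-fusion stage: pooled features from several pretrained CNNs are concatenated and passed to a classical ML classifier. Two backbones stand in for the paper's seven, and the classifier choice is an assumption:

```python
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression  # one of "several ML classifiers"

extractors = [
    tf.keras.applications.ResNet50(include_top=False, weights="imagenet", pooling="avg"),
    tf.keras.applications.DenseNet121(include_top=False, weights="imagenet", pooling="avg"),
]

def fused_features(batch: np.ndarray) -> np.ndarray:
    """batch: (N, 224, 224, 3) preprocessed X-ray images -> concatenated features."""
    return np.concatenate([m.predict(batch, verbose=0) for m in extractors], axis=1)

# Usage sketch, given preprocessed arrays x_train/x_test and labels y_train:
# clf = LogisticRegression(max_iter=1000).fit(fused_features(x_train), y_train)
# y_pred = clf.predict(fused_features(x_test))
```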
Affiliation(s)
- Laith Alzubaidi
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD, Australia
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- Centre for Data Science, Queensland University of Technology, Brisbane, QLD, Australia
- Akunah Medical Technology Pty Ltd Company, Brisbane, QLD, Australia
| | - Asma Salhi
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- Akunah Medical Technology Pty Ltd Company, Brisbane, QLD, Australia
| | | | - Jinshuai Bai
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD, Australia
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
| | - Freek Hollman
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
| | - Kristine Italia
- Akunah Medical Technology Pty Ltd Company, Brisbane, QLD, Australia
| | - Roberto Pareyon
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
| | - A. S. Albahri
- Technical College, Imam Ja’afar Al-Sadiq University, Baghdad, Iraq
| | - Chun Ouyang
- School of Information Systems, Queensland University of Technology, Brisbane, QLD, Australia
| | - Jose Santamaría
- Department of Computer Science, University of Jaén, Jaén, Spain
| | - Kenneth Cutbush
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- School of Medicine, The University of Queensland, Brisbane, QLD, Australia
| | - Ashish Gupta
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
- Akunah Medical Technology Pty Ltd Company, Brisbane, QLD, Australia
- Greenslopes Private Hospital, Brisbane, QLD, Australia
| | - Amin Abbosh
- School of Information Technology and Electrical Engineering, Brisbane, QLD, Australia
| | - Yuantong Gu
- School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD, Australia
- Queensland Unit for Advanced Shoulder Research (QUASR)/ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD, Australia
| |
|
96
|
Patel V, Patel K, Goel P, Shah M. Classification of Gastrointestinal Diseases from Endoscopic Images Using Convolutional Neural Network with Transfer Learning. 2024 5th International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV) 2024:504-508. [DOI: 10.1109/icicv62344.2024.00085]
Affiliation(s)
- Vandan Patel
- Devang Patel Institute of Advance Technology and Research, Charotar University of Science and Technology (CHARUSAT), Computer Science & Engineering Department, India
| | - Kirtan Patel
- Devang Patel Institute of Advance Technology and Research, Charotar University of Science and Technology (CHARUSAT), Computer Science & Engineering Department, India
| | - Parth Goel
- Devang Patel Institute of Advance Technology and Research, Charotar University of Science and Technology (CHARUSAT), Computer Science & Engineering Department, India
| | - Milind Shah
- Devang Patel Institute of Advance Technology and Research, Charotar University of Science and Technology (CHARUSAT), Computer Engineering Department, India
| |
|
97
|
Shou Q, Zhao C, Shao X, Herting MM, Wang DJ. High Resolution Multi-delay Arterial Spin Labeling with Transformer based Denoising for Pediatric Perfusion MRI. medRxiv [Preprint] 2024:2024.03.04.24303727. [PMID: 38496517 PMCID: PMC10942515 DOI: 10.1101/2024.03.04.24303727]
Abstract
Multi-delay arterial spin labeling (MDASL) can quantitatively measure cerebral blood flow (CBF) and arterial transit time (ATT), which makes it particularly suitable for pediatric perfusion imaging. Here we present a high-resolution (2 mm isotropic) MDASL protocol and performed test-retest scans on 21 typically developing children aged 8 to 17 years. We further propose a Transformer-based deep learning (DL) model that uses k-space weighted image average (KWIA) denoised images as the reference for training. The performance of the model was evaluated by the SNR of the perfusion images, as well as the SNR, bias, and repeatability of the fitted CBF and ATT maps. The proposed method was compared to several benchmark methods, including KWIA, joint denoising and reconstruction with total generalized variation (TGV) regularization, and directly applying a Transformer model pretrained on a larger dataset. The results show that the proposed Transformer model with a KWIA reference can effectively denoise multi-delay ASL images, improving the SNR not only of the perfusion images at each delay but also of the fitted CBF and ATT maps. The proposed method also improved the test-retest repeatability of whole-brain perfusion measurements. This may facilitate the use of MDASL in neurodevelopmental studies to characterize typical and aberrant brain development.
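A rough, hedged sketch of the KWIA idea used to build the training references: each delay keeps its own low-frequency k-space center while the noisier high frequencies are averaged across delays. The center fraction and the simple rectangular mask are illustrative assumptions, not the paper's exact weighting:

```python
import numpy as np

def kwia_reference(delays: np.ndarray, center_frac: float = 0.25) -> np.ndarray:
    """delays: (n_delays, H, W) ASL images for one slice -> denoised magnitudes."""
    k = np.fft.fftshift(np.fft.fft2(delays, axes=(-2, -1)), axes=(-2, -1))
    k_avg = k.mean(axis=0, keepdims=True)          # k-space averaged across delays
    H, W = delays.shape[-2:]
    cy, cx = int(H * center_frac / 2), int(W * center_frac / 2)
    mask = np.zeros((H, W), dtype=bool)
    mask[H//2 - cy:H//2 + cy, W//2 - cx:W//2 + cx] = True   # low-frequency center
    k_out = np.where(mask, k, k_avg)               # own center, shared periphery
    img = np.fft.ifft2(np.fft.ifftshift(k_out, axes=(-2, -1)), axes=(-2, -1))
    return np.abs(img)
```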
Affiliation(s)
- Qinyang Shou
- University of Southern California, Los Angeles, California 90033 USA
| | - Chenyang Zhao
- University of Southern California, Los Angeles, California 90033 USA
| | - Xingfeng Shao
- University of Southern California, Los Angeles, California 90033 USA
| | - Megan M Herting
- University of Southern California, Los Angeles, California 90033 USA
| | - Danny Jj Wang
- University of Southern California, Los Angeles, California 90033 USA
| |
|
98
|
Shao X, Ge X, Gao J, Niu R, Shi Y, Shao X, Jiang Z, Li R, Wang Y. Transfer learning-based PET/CT three-dimensional convolutional neural network fusion of image and clinical information for prediction of EGFR mutation in lung adenocarcinoma. BMC Med Imaging 2024; 24:54. [PMID: 38438844 PMCID: PMC10913633 DOI: 10.1186/s12880-024-01232-5]
Abstract
BACKGROUND To introduce a three-dimensional convolutional neural network (3D CNN) leveraging transfer learning to fuse PET/CT images and clinical data for predicting EGFR mutation status in lung adenocarcinoma (LADC). METHODS Retrospective data from 516 LADC patients, encompassing preoperative PET/CT images, clinical information, and EGFR mutation status, were divided into training (n = 404) and test (n = 112) sets. Several deep learning models were developed utilizing transfer learning, including CT-only and PET-only models. A dual-stream model fusing PET and CT and a three-stream transfer learning model (TS_TL) integrating clinical data were also developed. Image preprocessing included semi-automatic segmentation, resampling, and image cropping. Considering the impact of class imbalance, the performance of the model was evaluated using ROC curves and AUC values. RESULTS The TS_TL model demonstrated promising performance in predicting EGFR mutation status, with an AUC of 0.883 (95% CI: 0.849-0.917) in the training set and 0.730 (95% CI: 0.629-0.830) in the independent test set. Particularly in advanced LADC, the model achieved an AUC of 0.871 (95% CI: 0.823-0.919) in the training set and 0.760 (95% CI: 0.638-0.881) in the test set. The model identified distinct activation areas in solid or subsolid lesions associated with wild-type and mutant EGFR. Additionally, the patterns captured by the model were significantly altered by effective tyrosine kinase inhibitor treatment, leading to notable changes in predicted mutation probabilities. CONCLUSION A PET/CT deep learning model can serve as a tool for predicting EGFR mutation in LADC. Additionally, it offers clinicians insights for treatment decisions through evaluations both before and after treatment.
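A hedged sketch of a three-stream fusion architecture of the kind described (3D CNN streams for PET and CT plus a dense stream for clinical variables, fused before a sigmoid output for mutation status); all shapes and layer widths are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def volume_stream(name: str):
    """One 3D CNN stream for a cropped PET or CT volume (assumed 64^3 voxels)."""
    inp = layers.Input(shape=(64, 64, 64, 1), name=name)
    x = layers.Conv3D(16, 3, activation="relu")(inp)
    x = layers.MaxPooling3D()(x)
    x = layers.Conv3D(32, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling3D()(x)
    return inp, x

pet_in, pet_feat = volume_stream("pet")
ct_in, ct_feat = volume_stream("ct")
clin_in = layers.Input(shape=(8,), name="clinical")   # e.g., age, sex, stage (assumed)
clin_feat = layers.Dense(16, activation="relu")(clin_in)

fused = layers.Concatenate()([pet_feat, ct_feat, clin_feat])
out = layers.Dense(1, activation="sigmoid")(layers.Dense(64, activation="relu")(fused))

model = Model([pet_in, ct_in, clin_in], out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])       # AUC, matching the evaluation
```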
Affiliation(s)
- Xiaonan Shao
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China.
- Institute of Clinical Translation of Nuclear Medicine and Molecular Imaging, Soochow University, Changzhou, 213003, China.
| | - Xinyu Ge
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China
- Institute of Clinical Translation of Nuclear Medicine and Molecular Imaging, Soochow University, Changzhou, 213003, China
| | - Jianxiong Gao
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China
- Institute of Clinical Translation of Nuclear Medicine and Molecular Imaging, Soochow University, Changzhou, 213003, China
| | - Rong Niu
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China
- Institute of Clinical Translation of Nuclear Medicine and Molecular Imaging, Soochow University, Changzhou, 213003, China
| | - Yunmei Shi
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China
- Institute of Clinical Translation of Nuclear Medicine and Molecular Imaging, Soochow University, Changzhou, 213003, China
| | - Xiaoliang Shao
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China
- Institute of Clinical Translation of Nuclear Medicine and Molecular Imaging, Soochow University, Changzhou, 213003, China
| | - Zhenxing Jiang
- Department of Radiology, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China
| | - Renyuan Li
- Interdisciplinary Institute of Neuroscience and Technology, School of Medicine, Zhejiang University, Hangzhou, 310009, China
- Sir Run Run Shaw Hospital, School of Medicine, Zhejiang University, Hangzhou, 310058, China
| | - Yuetao Wang
- Department of Nuclear Medicine, The Third Affiliated Hospital of Soochow University, Changzhou, 213003, China.
- Institute of Clinical Translation of Nuclear Medicine and Molecular Imaging, Soochow University, Changzhou, 213003, China.
| |
|
99
|
Adeoye J, Su YX. Leveraging artificial intelligence for perioperative cancer risk assessment of oral potentially malignant disorders. Int J Surg 2024; 110:1677-1686. [PMID: 38051932 PMCID: PMC10942172 DOI: 10.1097/js9.0000000000000979]
Abstract
Oral potentially malignant disorders (OPMDs) are mucosal conditions with an inherent disposition to develop into oral squamous cell carcinoma. Surgical management is the preferred strategy to prevent malignant transformation in OPMDs, and surgical approaches to treatment include conventional scalpel excision, laser surgery, cryotherapy, and photodynamic therapy. However, since not all patients with OPMDs will develop oral squamous cell carcinoma in their lifetime, there is a need to stratify patients according to their risk of malignant transformation to streamline surgical intervention for those at highest risk. Artificial intelligence (AI) has the potential to integrate the disparate factors influencing malignant transformation and thereby deliver more robust, precise, and personalized cancer risk stratification of OPMD patients than current methods for determining the need for surgical resection, excision, or re-excision. Therefore, this article overviews existing AI models and tools, presents a clinical implementation pathway, and discusses the refinements necessary to aid the clinical application of AI-based platforms for cancer risk stratification of OPMDs in surgical practice.
Affiliation(s)
| | - Yu-Xiong Su
- Division of Oral and Maxillofacial Surgery, Faculty of Dentistry, University of Hong Kong, Hong Kong SAR, People’s Republic of China
| |
|
100
|
Vorwerk P, Kelleter J, Müller S, Krause U. Classification in Early Fire Detection Using Multi-Sensor Nodes - A Transfer Learning Approach. Sensors (Basel) 2024; 24:1428. [PMID: 38474964 DOI: 10.3390/s24051428]
Abstract
Effective early fire detection is crucial for preventing damage to people and buildings, especially in fire-prone historic structures. However, because fire events occur infrequently throughout a building's lifespan, real-world data for training models are often sparse. In this study, we applied feature representation transfer and instance transfer in the context of early fire detection using multi-sensor nodes. The goal was to investigate whether training data from a small-scale setup (source domain) can be used to identify various incipient fire scenarios in their early stages within a full-scale test room (target domain). In a first step, we employed Linear Discriminant Analysis (LDA) to create a new feature space based solely on the source domain data and predicted four different fire types (smoldering wood, smoldering cotton, smoldering cable, and candle fire) in the target domain with a classification rate of up to 69% and a Cohen's kappa of 0.58. Notably, lower classification performance was observed for sensor node positions close to the wall in the full-scale test room. In a second experiment, we applied the TrAdaBoost algorithm as a common instance transfer technique to adapt the model to the target domain, assuming that sparse information from the target domain is available. Boosting with between 1% and 30% of the data from individual sensor node positions in the target domain was used to adapt the model. We found that this additional boosting improved classification performance (average classification rate of 73% and average Cohen's kappa of 0.63). However, excessively boosting the data could lead to overfitting to a specific sensor node position in the target domain, reducing the overall classification performance.
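A hedged sketch of the feature representation transfer step: LDA is fitted on source-domain data only, and target-domain samples are then classified in the resulting discriminant space. The toy arrays stand in for features already extracted from the real sensor signals:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Toy stand-ins: 12-dimensional features, four fire-type classes.
X_src, y_src = rng.normal(size=(400, 12)), rng.integers(0, 4, 400)   # small-scale setup
X_tgt, y_tgt = rng.normal(size=(100, 12)), rng.integers(0, 4, 100)   # full-scale room

# Fit the discriminant space on source data only (4 classes -> at most 3 components).
lda = LinearDiscriminantAnalysis(n_components=3).fit(X_src, y_src)
accuracy = lda.score(X_tgt, y_tgt)        # classify target-domain fires in that space
print(f"target-domain classification rate: {accuracy:.2f}")
```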
Affiliation(s)
- Pascal Vorwerk
- Faculty of Process- and Systems Engineering, Institute of Apparatus and Environmental Technology, Otto von Guericke University of Magdeburg, Universitätsplatz 2, 39106 Magdeburg, Germany
| | - Jörg Kelleter
- GTE Industrieelektronik GmbH, Helmholtzstr. 21, 38-40, 41747 Viersen, Germany
| | - Steffen Müller
- GTE Industrieelektronik GmbH, Helmholtzstr. 21, 38-40, 41747 Viersen, Germany
| | - Ulrich Krause
- Faculty of Process- and Systems Engineering, Institute of Apparatus and Environmental Technology, Otto von Guericke University of Magdeburg, Universitätsplatz 2, 39106 Magdeburg, Germany
| |
|