1.
Peng H, Lin S, King D, Su YH, Abuzeid WM, Bly RA, Moe KS, Hannaford B. Reducing annotating load: Active learning with synthetic images in surgical instrument segmentation. Med Image Anal 2024; 97:103246. [PMID: 38943835] [DOI: 10.1016/j.media.2024.103246]
Abstract
Accurate instrument segmentation in the endoscopic view of minimally invasive surgery is challenging due to complex instruments and environments. Deep learning techniques have shown competitive performance in recent years, but they usually require a large amount of labeled data for accurate prediction, which imposes a substantial annotation workload. To alleviate this workload, we propose an active learning-based framework that generates synthetic images for efficient neural network training. In each active learning iteration, a small number of informative unlabeled images are queried by active learning and manually labeled. Synthetic images are then generated from these selected images: instruments and backgrounds are cropped out and randomly combined, with blending and fusion near the boundary. The proposed method thus leverages the advantages of both active learning and synthetic images. Its effectiveness is validated on two sinus surgery datasets and one intra-abdominal surgery dataset. The results indicate a considerable performance improvement, especially when the annotated dataset is small. All code is open-sourced at: https://github.com/HaonanPeng/active_syn_generator.
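The crop-and-combine generation step can be sketched in a few lines. This is an illustrative simplification (the function name, the box-blur feathering, and single-channel float images are assumptions here), not the authors' open-sourced implementation:

```python
import numpy as np

def composite(instrument, inst_mask, background, blend_px=1):
    """Paste a masked instrument onto a background, feathering the
    binary mask near its boundary so the seam is blended, not hard."""
    # Soften the mask by averaging each pixel with its 4-neighbourhood
    # `blend_px` times (a crude box blur standing in for real feathering).
    alpha = inst_mask.astype(float)
    for _ in range(blend_px):
        padded = np.pad(alpha, 1, mode="edge")
        alpha = (padded[1:-1, 1:-1] + padded[:-2, 1:-1] + padded[2:, 1:-1]
                 + padded[1:-1, :-2] + padded[1:-1, 2:]) / 5.0
    # Alpha-blend: 1 inside the instrument, 0 far away, fractional at the seam.
    return alpha * instrument + (1.0 - alpha) * background
```

With `blend_px=0` the paste is a hard cutout; increasing it widens the blended seam around the instrument boundary.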
Affiliation(s)
- Haonan Peng
- University of Washington, 185 E Stevens Way NE AE100R, Seattle, WA 98195, USA
- Shan Lin
- University of California San Diego, 9500 Gilman Dr, La Jolla, CA 92093, USA
- Daniel King
- University of Washington, 185 E Stevens Way NE AE100R, Seattle, WA 98195, USA
- Yun-Hsuan Su
- Mount Holyoke College, 50 College St, South Hadley, MA 01075, USA
- Waleed M Abuzeid
- University of Washington, 185 E Stevens Way NE AE100R, Seattle, WA 98195, USA
- Randall A Bly
- University of Washington, 185 E Stevens Way NE AE100R, Seattle, WA 98195, USA
- Kris S Moe
- University of Washington, 185 E Stevens Way NE AE100R, Seattle, WA 98195, USA
- Blake Hannaford
- University of Washington, 185 E Stevens Way NE AE100R, Seattle, WA 98195, USA
2.
Li Y, Liang Z, Li Y, Cao Y, Zhang H, Dong B. Machine learning value in the diagnosis of vertebral fractures: A systematic review and meta-analysis. Eur J Radiol 2024; 181:111714. [PMID: 39241305] [DOI: 10.1016/j.ejrad.2024.111714]
Abstract
PURPOSE: To evaluate the diagnostic accuracy of machine learning (ML) in detecting vertebral fractures across varying fracture classifications, patient populations, and imaging approaches.
METHODS: A systematic review and meta-analysis were conducted by searching PubMed, Embase, Cochrane Library, and Web of Science up to December 31, 2023, for studies using ML for vertebral fracture diagnosis. Risk of bias was assessed with QUADAS-2, and a bivariate mixed-effects model was used for the meta-analysis. Meta-analyses were performed for five task types: detection of vertebral fractures, detection of osteoporotic vertebral fractures, differentiation of benign from malignant fractures, differentiation of acute from chronic fractures, and prediction of vertebral fractures. Subgroup analyses compared model families (conventional ML vs. deep learning, DL) and input modalities (CT, X-ray, MRI, and clinical features).
RESULTS: Eighty-one studies were included. For detecting vertebral fractures, ML showed a sensitivity of 0.91 and specificity of 0.95, with DL (SROC 0.98) and CT (SROC 0.98) performing best. For osteoporotic fractures, sensitivity was 0.93 and specificity 0.96, with DL (SROC 0.99) and X-ray (SROC 0.99) performing better. For differentiating benign from malignant fractures, sensitivity was 0.92 and specificity 0.93, with DL (SROC 0.96) and MRI (SROC 0.97) performing best. For differentiating acute from chronic fractures, sensitivity was 0.92 and specificity 0.93, with conventional ML (SROC 0.96) and CT (SROC 0.97) performing best. For predicting vertebral fractures, sensitivity was 0.76 and specificity 0.87, with conventional ML (SROC 0.80) and clinical features (SROC 0.86) performing better.
CONCLUSIONS: ML, especially DL applied to CT, MRI, and X-ray, shows high diagnostic accuracy for vertebral fractures and also effectively predicts osteoporotic vertebral fractures, aiding tailored prevention strategies. Further research and validation are required to confirm clinical efficacy.
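The pooled sensitivity and specificity headlined above reduce to simple ratios of confusion-matrix counts (the bivariate mixed-effects pooling itself is more involved). A minimal sketch, with hypothetical counts chosen to reproduce the 0.91/0.95 figures:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # fraction of true fractures detected
    specificity = tn / (tn + fp)   # fraction of fracture-free cases correctly cleared
    return sensitivity, specificity
```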
Affiliation(s)
- Yue Li
- Pain Ward of Rehabilitation Department, Honghui Hospital, Xi'an Jiaotong University, Xi'an, Shaanxi Province 710054, PR China
- Zhuang Liang
- Pain Ward of Rehabilitation Department, Honghui Hospital, Xi'an Jiaotong University, Xi'an, Shaanxi Province 710054, PR China
- Yingchun Li
- Pain Ward of Rehabilitation Department, Honghui Hospital, Xi'an Jiaotong University, Xi'an, Shaanxi Province 710054, PR China
- Yang Cao
- Pain Ward of Rehabilitation Department, Honghui Hospital, Xi'an Jiaotong University, Xi'an, Shaanxi Province 710054, PR China
- Hui Zhang
- Pain Ward of Rehabilitation Department, Honghui Hospital, Xi'an Jiaotong University, Xi'an, Shaanxi Province 710054, PR China
- Bo Dong
- Pain Ward of Rehabilitation Department, Honghui Hospital, Xi'an Jiaotong University, Xi'an, Shaanxi Province 710054, PR China
3.
Hagen F, Vorberg L, Thamm F, Ditt H, Maier A, Brendel JM, Ghibes P, Bongers MN, Krumm P, Nikolaou K, Horger M. Improved detection of small pulmonary embolism on unenhanced computed tomography using an artificial intelligence-based algorithm - a single centre retrospective study. Int J Cardiovasc Imaging 2024. [PMID: 39196450] [DOI: 10.1007/s10554-024-03222-8]
Abstract
To preliminarily verify the feasibility of a deep-learning (DL) artificial intelligence (AI) model for localizing pulmonary embolism (PE) on unenhanced chest CT, using pulmonary artery CT angiography (CTA) as the reference standard. In this monocentric retrospective study, we reviewed 99 oncological patients (median age 64 years, range 28-92; 39.4% female) who underwent unenhanced and contrast-enhanced chest CT in one session between January 2020 and October 2022 and were incidentally diagnosed with PE. Findings on the unenhanced images were correlated with the contrast-enhanced images, which served as the gold standard for central, segmental, and subsegmental PE. The algorithm was trained and tested on the 99 unenhanced chest CT datasets; candidate boxes output by the model were post-processed by checking whether each predicted box intersects the patient's lung segmentation at any position. The AI-based algorithm achieved an overall sensitivity of 54.5% for central, 81.9% for segmental, and 80.0% for subsegmental PE when taking n = 20 candidate boxes into account. With only one candidate box, the detection rate was 18.1% for central, 34.7% for segmental, and 0.0% for subsegmental PE. The median clot volume differed significantly between the three subgroups: 846.5 mm³ (IQR: 591.1-964.8) for central, 201.3 mm³ (IQR: 98.3-390.9) for segmental, and 110.6 mm³ (IQR: 94.3-128.0) for subsegmental PE (p < 0.05). The algorithm showed high sensitivity for PE, in particular in segmental and subsegmental locations, and may help decide whether a second, contrast-enhanced CT is necessary.
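The lung-intersection post-processing of candidate boxes could look like the following sketch; the box format, function name, and end-exclusive convention are assumptions for illustration, not the study's code:

```python
import numpy as np

def boxes_in_lung(boxes, lung_mask):
    """Keep only candidate boxes that overlap the lung segmentation.
    Each box is (row0, col0, row1, col1), end-exclusive pixel indices."""
    kept = []
    for r0, c0, r1, c1 in boxes:
        if lung_mask[r0:r1, c0:c1].any():   # any lung pixel inside the box?
            kept.append((r0, c0, r1, c1))
    return kept
```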
Affiliation(s)
- Florian Hagen
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University, Hoppe-Seyler-Str. 3, 72076 Tübingen, Germany
- Linda Vorberg
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Computed Tomography, Siemens Healthineers AG, Forchheim, Germany
- Florian Thamm
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Hendrik Ditt
- Computed Tomography, Siemens Healthineers AG, Forchheim, Germany
- Andreas Maier
- Pattern Recognition Lab, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
- Jan Michael Brendel
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University, Hoppe-Seyler-Str. 3, 72076 Tübingen, Germany
- Patrick Ghibes
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University, Hoppe-Seyler-Str. 3, 72076 Tübingen, Germany
- Malte Niklas Bongers
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University, Hoppe-Seyler-Str. 3, 72076 Tübingen, Germany
- Patrick Krumm
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University, Hoppe-Seyler-Str. 3, 72076 Tübingen, Germany
- Konstantin Nikolaou
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University, Hoppe-Seyler-Str. 3, 72076 Tübingen, Germany
- Marius Horger
- Department of Diagnostic and Interventional Radiology, Eberhard-Karls-University, Hoppe-Seyler-Str. 3, 72076 Tübingen, Germany
4.
Kidder BL. Advanced image generation for cancer using diffusion models. Biol Methods Protoc 2024; 9:bpae062. [PMID: 39258159] [PMCID: PMC11387006] [DOI: 10.1093/biomethods/bpae062]
Abstract
Deep neural networks have significantly advanced the field of medical image analysis, yet their full potential is often limited by relatively small dataset sizes. Generative modeling, particularly through diffusion models, has unlocked remarkable capabilities in synthesizing photorealistic images, thereby broadening the scope of their application in medical imaging. This study specifically investigates the use of diffusion models to generate high-quality brain MRI scans, including those depicting low-grade gliomas, as well as contrast-enhanced spectral mammography (CESM) and chest and lung X-ray images. By leveraging the DreamBooth platform, we have successfully trained stable diffusion models utilizing text prompts alongside class and instance images to generate diverse medical images. This approach not only preserves patient anonymity but also substantially mitigates the risk of patient re-identification during data exchange for research purposes. To evaluate the quality of our synthesized images, we used the Fréchet inception distance metric, demonstrating high fidelity between the synthesized and real images. Our application of diffusion models effectively captures oncology-specific attributes across different imaging modalities, establishing a robust framework that integrates artificial intelligence in the generation of oncological medical imagery.
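The Fréchet inception distance used above compares Gaussians fitted to feature embeddings of real and synthesized images. The one-dimensional special case below shows the structure of the formula; the full metric applies it to the means and covariance matrices of Inception features, which requires a matrix square root:

```python
import math

def frechet_gaussian_1d(mu1, var1, mu2, var2):
    """Fréchet distance between two 1-D Gaussians:
    (mu1 - mu2)^2 + var1 + var2 - 2*sqrt(var1 * var2).
    FID is the multivariate analogue on Inception-feature statistics."""
    return (mu1 - mu2) ** 2 + var1 + var2 - 2.0 * math.sqrt(var1 * var2)
```

Identical distributions give distance 0; lower values indicate synthesized images whose feature statistics are closer to the real ones.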
Affiliation(s)
- Benjamin L Kidder
- Department of Oncology, Wayne State University School of Medicine, Detroit, MI 48201, USA
- Karmanos Cancer Institute, Wayne State University School of Medicine, Detroit, MI 48201, USA
5.
Lee YH, Jeon S, Won JH, Auh QS, Noh YK. Automatic detection and visualization of temporomandibular joint effusion with deep neural network. Sci Rep 2024; 14:18865. [PMID: 39143180] [PMCID: PMC11324909] [DOI: 10.1038/s41598-024-69848-9]
Abstract
This study investigated deep learning-based automatic detection of temporomandibular joint (TMJ) effusion on magnetic resonance imaging (MRI) in patients with temporomandibular disorder, and whether diagnostic accuracy improved when patients' clinical information was provided in addition to MR images. Sagittal MR images of 2948 TMJs were collected from 1017 women and 457 men (mean age 37.19 ± 18.64 years). The TMJ effusion diagnostic performance of three convolutional neural network schemes (from scratch, fine-tuning, and freeze) was compared with that of human experts based on areas under the curve (AUCs) and diagnostic accuracy. The fine-tuning model with proton density (PD) images showed acceptable prediction performance (AUC = 0.7895), while the from-scratch (AUC = 0.6193) and freeze (AUC = 0.6149) models performed worse (p < 0.05). The fine-tuning model had excellent specificity compared with the human experts (87.25% vs. 58.17%), whereas the experts were superior in sensitivity (80.00% vs. 57.43%) (all p < 0.001). In gradient-weighted class activation mapping (Grad-CAM) visualizations, the fine-tuning scheme focused more on effusion than on other TMJ structures and showed higher sparsity than the from-scratch scheme (82.40% vs. 49.83%, p < 0.05); the highlighted regions agreed with clinically important features of the TMJ area, particularly around the articular disc. Combining fine-tuning models on PD and T2-weighted images did not improve diagnostic performance over PD alone (p < 0.05). AUCs varied across subgroups by age (0.7083-0.8375) and sex (male: 0.7576, female: 0.7083). The prediction accuracy of the ensemble model exceeded that of the human experts when all data were used (74.21% vs. 67.71%, p < 0.05). A deep neural network (DNN) was also developed to process multimodal data combining MRI and patient clinical data; among four age groups analyzed with the DNN, the 41-60 age group performed best (AUC = 0.8258). The fine-tuning model and DNN were optimal for judging TMJ effusion and may help reduce missed diagnoses and augment human diagnostic performance. Assistive automated diagnostic methods have the potential to increase clinicians' diagnostic accuracy.
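Grad-CAM, used above for visualization, weights each activation map by its spatially averaged gradient and keeps only the positive part. A framework-free toy version on raw arrays (the real method hooks into the gradients of a trained CNN's final convolutional layer):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Toy Grad-CAM. activations, gradients: (channels, H, W) arrays.
    Weight each feature map by its globally averaged gradient, sum over
    channels, then clip negatives (ReLU) to keep positive evidence only."""
    weights = gradients.mean(axis=(1, 2))              # one weight per channel
    cam = np.tensordot(weights, activations, axes=1)   # weighted sum over channels
    return np.maximum(cam, 0.0)
```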
Affiliation(s)
- Yeon-Hee Lee
- Department of Orofacial Pain and Oral Medicine, Kyung Hee University Dental Hospital, Kyung Hee University School of Dentistry, #613 Hoegi-Dong, Dongdaemun-gu, Seoul 02447, Korea
- Seonggwang Jeon
- Department of Computer Science, Hanyang University, Seoul 04763, Korea
- Jong-Hyun Won
- Department of Computer Science, Hanyang University, Seoul 04763, Korea
- Q-Schick Auh
- Department of Orofacial Pain and Oral Medicine, Kyung Hee University Dental Hospital, Kyung Hee University School of Dentistry, #613 Hoegi-Dong, Dongdaemun-gu, Seoul 02447, Korea
- Yung-Kyun Noh
- Department of Computer Science, Hanyang University, Seoul 04763, Korea
- School of Computational Sciences, Korea Institute for Advanced Study (KIAS), Seoul 02455, Korea
6.
Cesarelli G, Ponsiglione AM, Sansone M, Amato F, Donisi L, Ricciardi C. Machine Learning for Biomedical Applications. Bioengineering (Basel) 2024; 11:790. [PMID: 39199748] [PMCID: PMC11351950] [DOI: 10.3390/bioengineering11080790]
Abstract
Machine learning (ML) is a field of artificial intelligence that uses algorithms capable of extracting knowledge directly from data that could support decisions in multiple fields of engineering [...].
Affiliation(s)
- Giuseppe Cesarelli
- Department of Electrical Engineering and Information Technology, University of Naples Federico II, Via Claudio 21, 80125 Naples, Italy
- Alfonso Maria Ponsiglione
- Department of Electrical Engineering and Information Technology, University of Naples Federico II, Via Claudio 21, 80125 Naples, Italy
- Mario Sansone
- Department of Electrical Engineering and Information Technology, University of Naples Federico II, Via Claudio 21, 80125 Naples, Italy
- Francesco Amato
- Department of Electrical Engineering and Information Technology, University of Naples Federico II, Via Claudio 21, 80125 Naples, Italy
- Leandro Donisi
- Department of Advanced Medical and Surgical Sciences, University of Campania Luigi Vanvitelli, Via De Crecchio 7, 80138 Naples, Italy
- Carlo Ricciardi
- Department of Electrical Engineering and Information Technology, University of Naples Federico II, Via Claudio 21, 80125 Naples, Italy
7.
Xu S, Peng H, Yang L, Zhong W, Gao X, Song J. An Automatic Grading System for Orthodontically Induced External Root Resorption Based on Deep Convolutional Neural Network. J Imaging Inform Med 2024; 37:1800-1811. [PMID: 38393620] [PMCID: PMC11300848] [DOI: 10.1007/s10278-024-01045-6]
Abstract
Orthodontically induced external root resorption (OIERR) is a common complication of orthodontic treatments. Accurate OIERR grading is crucial for clinical intervention. This study aimed to evaluate six deep convolutional neural networks (CNNs) for performing OIERR grading on tooth slices to construct an automatic grading system for OIERR. A total of 2146 tooth slices of different OIERR grades were collected and preprocessed. Six pre-trained CNNs (EfficientNet-B1, EfficientNet-B2, EfficientNet-B3, EfficientNet-B4, EfficientNet-B5, and MobileNet-V3) were trained and validated on the pre-processed images based on four different cross-validation methods. The performances of the CNNs on a test set were evaluated and compared with those of orthodontists. The gradient-weighted class activation mapping (Grad-CAM) technique was used to explore the area of maximum impact on the model decisions in the tooth slices. The six CNN models performed remarkably well in OIERR grading, with a mean accuracy of 0.92, surpassing that of the orthodontists (mean accuracy of 0.82). EfficientNet-B4 trained with fivefold cross-validation emerged as the final OIERR grading system, with a high accuracy of 0.94. Grad-CAM revealed that the apical region had the greatest effect on the OIERR grading system. The six CNNs demonstrated excellent OIERR grading and outperformed orthodontists. The proposed OIERR grading system holds potential as a reliable diagnostic support for orthodontists in clinical practice.
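The fivefold cross-validation used to select the final grading model partitions the slices so each fold is validated exactly once. A minimal index-splitting sketch (the interleaved fold assignment is an arbitrary choice for illustration; in practice the split would also be stratified by OIERR grade):

```python
def kfold_indices(n_samples, k=5):
    """Split sample indices into k folds; each fold serves once as the
    validation set while the remaining k-1 folds form the training set."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    splits = []
    for i, val in enumerate(folds):
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        splits.append((sorted(train), val))
    return splits
```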
Affiliation(s)
- Shuxi Xu
- College of Stomatology, Chongqing Medical University, Chongqing 401147, China
- Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing 401147, China
- Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing 401147, China
- Houli Peng
- College of Stomatology, Chongqing Medical University, Chongqing 401147, China
- Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing 401147, China
- Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing 401147, China
- Lanxin Yang
- College of Stomatology, Chongqing Medical University, Chongqing 401147, China
- Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing 401147, China
- Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing 401147, China
- Wenjie Zhong
- College of Stomatology, Chongqing Medical University, Chongqing 401147, China
- Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing 401147, China
- Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing 401147, China
- Xiang Gao
- College of Stomatology, Chongqing Medical University, Chongqing 401147, China
- Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing 401147, China
- Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing 401147, China
- Jinlin Song
- College of Stomatology, Chongqing Medical University, Chongqing 401147, China
- Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing 401147, China
- Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing 401147, China
8.
Gupta S, Dubey AK, Singh R, Kalra MK, Abraham A, Kumari V, Laird JR, Al-Maini M, Gupta N, Singh I, Viskovic K, Saba L, Suri JS. Four Transformer-Based Deep Learning Classifiers Embedded with an Attention U-Net-Based Lung Segmenter and Layer-Wise Relevance Propagation-Based Heatmaps for COVID-19 X-ray Scans. Diagnostics (Basel) 2024; 14:1534. [PMID: 39061671] [PMCID: PMC11275579] [DOI: 10.3390/diagnostics14141534]
Abstract
Background: Accurate diagnosis of lung disease is crucial for proper treatment. Convolutional neural networks (CNNs) have advanced medical image processing, but challenges remain in their explainability and reliability. This study combines U-Net with attention and Vision Transformers (ViTs) to enhance lung disease segmentation and classification, hypothesizing that Attention U-Net will improve segmentation accuracy and that ViTs will improve classification performance, with explainability methods shedding light on model decision-making to aid clinical acceptance.
Methodology: A comparative approach was used to evaluate deep learning models for segmenting and classifying lung diseases on chest X-rays. The Attention U-Net model was used for segmentation, and four CNN and four ViT architectures were investigated for classification. Gradient-weighted Class Activation Mapping plus plus (Grad-CAM++) and Layer-wise Relevance Propagation (LRP) provide explainability by identifying the regions that influence model decisions.
Results: ViTs were outstanding at identifying lung disorders. Attention U-Net obtained a Dice coefficient of 98.54% and a Jaccard index of 97.12%. ViTs outperformed CNNs in classification by 9.26%, reaching an accuracy of 98.52% with MobileViT, and accuracy increased by 8.3% when moving from raw-image to segmented-image classification. Grad-CAM++ and LRP provided insights into the models' decision-making processes.
Conclusions: Integrating Attention U-Net and ViTs benefits the analysis of lung diseases in clinical settings. Emphasizing explainability clarifies deep learning decisions, strengthening confidence in AI solutions and potentially improving clinical acceptance for better healthcare outcomes.
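The Dice coefficient and Jaccard index reported for Attention U-Net are standard overlap ratios between the predicted and ground-truth binary masks. A minimal sketch:

```python
import numpy as np

def dice_jaccard(pred, target):
    """Overlap metrics for binary segmentation masks.
    Dice = 2|A∩B| / (|A| + |B|); Jaccard (IoU) = |A∩B| / |A∪B|."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum())
    return dice, inter / union
```

Dice is always at least as large as Jaccard, which is why the paper's Dice (98.54%) exceeds its Jaccard (97.12%) on the same masks.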
Affiliation(s)
- Siddharth Gupta
- Department of Computer Science and Engineering, Bharati Vidyapeeth's College of Engineering, New Delhi 110063, India
- Arun K. Dubey
- Department of Information Technology, Bharati Vidyapeeth's College of Engineering, New Delhi 110063, India
- Rajesh Singh
- Department of Research and Innovation, Uttaranchal Institute of Technology, Uttaranchal University, Dehradun 248007, India
- Mannudeep K. Kalra
- Department of Radiology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA 02114, USA
- Ajith Abraham
- Department of Computer Science, Bennett University, Greater Noida 201310, India
- Vandana Kumari
- School of Computer Science and Engineering, Galgotias University, Greater Noida 201310, India
- John R. Laird
- Heart and Vascular Institute, Adventist Health St. Helena, St. Helena, CA 94574, USA
- Mustafa Al-Maini
- Allergy, Clinical Immunology and Rheumatology Institute, Toronto, ON M5G 1N8, Canada
- Neha Gupta
- Department of Information Technology, Bharati Vidyapeeth's College of Engineering, New Delhi 110063, India
- Inder Singh
- Stroke Diagnostics and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA
- Klaudija Viskovic
- Department of Radiology and Ultrasound, University Hospital for Infectious Diseases, 10000 Zagreb, Croatia
- Luca Saba
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09100 Cagliari, Italy
- Jasjit S. Suri
- Department of ECE, Idaho State University, Pocatello, ID 83209, USA
- Stroke Diagnostics and Monitoring Division, AtheroPoint™, Roseville, CA 95661, USA
- Department of Computer Engineering, Graphic Era (Deemed to be University), Dehradun 248002, India
- Department of Computer Science & Engineering, Symbiosis Institute of Technology, Nagpur Campus 440008, Symbiosis International (Deemed University), Pune 412115, India
9.
Calazans MAA, Ferreira FABS, Santos FAN, Madeiro F, Lima JB. Machine Learning and Graph Signal Processing Applied to Healthcare: A Review. Bioengineering (Basel) 2024; 11:671. [PMID: 39061753] [PMCID: PMC11273494] [DOI: 10.3390/bioengineering11070671]
Abstract
Signal processing is widely used to interpret signals in everyday applications. For time-varying signals, one possibility is to model them on graphs, which extends classical methods to non-Euclidean domains. Machine learning techniques, in turn, are widely used for pattern recognition across many tasks, including the health sciences. The objective of this work is to identify and analyze the literature on machine learning applied to graph signal processing in the health sciences. A search of four databases (Science Direct, IEEE Xplore, ACM, and MDPI) using predefined search strings identified papers within the scope of this review; 45 papers were ultimately included, the earliest published in 2015, indicating an emerging area. Among the gaps found is the need for better clinical interpretability of the reported results, that is, not restricting findings or conclusions to performance metrics alone. Possible research directions include the use of new transforms and the release of new public datasets on which models can be trained.
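Graph signal processing rests on the graph Fourier transform: a signal defined on the nodes is decomposed over the eigenvectors of the graph Laplacian, with eigenvalues playing the role of frequencies. A minimal sketch for an undirected graph:

```python
import numpy as np

def graph_fourier(adjacency, signal):
    """Graph Fourier transform: project a node signal onto the
    eigenvectors of the combinatorial Laplacian L = D - A."""
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    # The Laplacian of an undirected graph is symmetric -> eigh applies,
    # returning eigenvalues in ascending order (low "frequencies" first).
    eigvals, eigvecs = np.linalg.eigh(laplacian)
    return eigvals, eigvecs.T @ signal   # spectral coefficients
```

A constant signal has energy only in the zero-eigenvalue (lowest-frequency) component, mirroring the DC term of the classical Fourier transform.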
Affiliation(s)
- Felipe A. B. S. Ferreira
- Unidade Acadêmica do Cabo de Santo Agostinho, Universidade Federal Rural de Pernambuco, Cabo de Santo Agostinho 54518-430, Brazil
- Fernando A. N. Santos
- Institute for Advanced Studies, Universiteit van Amsterdam, 1012 WP Amsterdam, The Netherlands
- Francisco Madeiro
- Escola Politécnica de Pernambuco, Universidade de Pernambuco, Recife 50720-001, Brazil
- Juliano B. Lima
- Centro de Tecnologia e Geociências, Universidade Federal de Pernambuco, Recife 50670-901, Brazil
10.
VanDecker WA. The Integrative Sport of Cardiac Imaging and Clinical Cardiology: Machine Augmentation and an Evolving Odyssey. JACC Cardiovasc Imaging 2024; 17:792-794. [PMID: 38613557] [DOI: 10.1016/j.jcmg.2024.02.012]
Affiliation(s)
- William A VanDecker
- Lewis Katz School of Medicine at Temple University, Philadelphia, Pennsylvania, USA
11.
Guan H, Yap PT, Bozoki A, Liu M. Federated learning for medical image analysis: A survey. Pattern Recognit 2024; 151:110424. [PMID: 38559674] [PMCID: PMC10976951] [DOI: 10.1016/j.patcog.2024.110424]
Abstract
Machine learning in medical imaging often faces a fundamental dilemma, namely, the small sample size problem. Many recent studies suggest using multi-domain data pooled from different acquisition sites/centers to improve statistical power. However, medical images from different sites cannot be easily shared to build large datasets for model training due to privacy protection reasons. As a promising solution, federated learning, which enables collaborative training of machine learning models based on data from different sites without cross-site data sharing, has attracted considerable attention recently. In this paper, we conduct a comprehensive survey of the recent development of federated learning methods in medical image analysis. We have systematically gathered research papers on federated learning and its applications in medical image analysis published between 2017 and 2023. Our search and compilation were conducted using databases from IEEE Xplore, ACM Digital Library, Science Direct, Springer Link, Web of Science, Google Scholar, and PubMed. In this survey, we first introduce the background of federated learning for dealing with privacy protection and collaborative learning issues. We then present a comprehensive review of recent advances in federated learning methods for medical image analysis. Specifically, existing methods are categorized based on three critical aspects of a federated learning system, including client end, server end, and communication techniques. In each category, we summarize the existing federated learning methods according to specific research problems in medical image analysis and also provide insights into the motivations of different approaches. In addition, we provide a review of existing benchmark medical imaging datasets and software platforms for current federated learning research. We also conduct an experimental study to empirically evaluate typical federated learning methods for medical image analysis. This survey can help to better understand the current research status, challenges, and potential research opportunities in this promising research field.
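The federated setting described above is typified by FedAvg-style aggregation: the server combines client model parameters weighted by local dataset size, without ever seeing the raw images. A minimal sketch of the server step only (client-side local training omitted):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: aggregate client parameter vectors,
    each weighted by the size of that client's local dataset."""
    total = sum(client_sizes)
    agg = np.zeros_like(client_weights[0], dtype=float)
    for w, n in zip(client_weights, client_sizes):
        agg += (n / total) * w   # larger sites contribute proportionally more
    return agg
```

In a full round, the server broadcasts `agg` back to the clients, each client trains locally for a few epochs, and the cycle repeats.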
Affiliation(s)
- Hao Guan
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Andrea Bozoki
- Department of Neurology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
12
|
Costa ED, Gaêta-Araujo H, Carneiro JA, Zancan BAG, Baranauskas JA, Macedo AA, Tirapelli C. Development of a dental digital data set for research in artificial intelligence: the importance of labeling performed by radiologists. Oral Surg Oral Med Oral Pathol Oral Radiol 2024; 138:205-213. [PMID: 38632036 DOI: 10.1016/j.oooo.2023.12.006] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2023] [Revised: 11/12/2023] [Accepted: 12/07/2023] [Indexed: 04/19/2024]
Abstract
OBJECTIVE The aim of this study was to present the development of a database (dataset) of panoramic radiographs. STUDY DESIGN Three radiologists labeled an image set of 936 panoramic radiographs. Labeling included tooth numbering (covering both present and missing teeth) and annotation of dental conditions (e.g., caries, dental restoration, residual root, endodontic treatment, implant, fixed prosthesis, incisal wear). The annotation process was performed in Picture Archiving and Communication System software customized for the study purposes, using a small bounding box to delimit each tooth and its radiographic diagnoses and a large bounding box to simultaneously delimit the 2 dental arches (maxilla and mandible). A JSON file was generated for each annotation. RESULTS The database encompassed 23,619 annotations; the radiologists disagreed on 0.7% of them. CONCLUSIONS This work aimed to inform researchers about the importance of the labeling process, in addition to providing the scientific community with a bank of labeled images for implementing artificial intelligence systems in clinical practice.
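A tooth-level record of the kind described (tooth number, condition labels, a small per-tooth bounding box, and a large both-arches box, serialized to JSON) might look like the following. The field names and values are purely illustrative assumptions; the study's actual schema is not specified in the abstract.

```python
import json

# Hypothetical structure for one annotation record; every field name here
# is an illustrative assumption, not the study's published schema.
annotation = {
    "image_id": "pan_0001",
    "tooth_number": 36,  # FDI notation
    "conditions": ["caries", "dental restoration"],
    "tooth_bbox": {"x": 412, "y": 230, "w": 58, "h": 96},     # small box: one tooth
    "arches_bbox": {"x": 40, "y": 120, "w": 1840, "h": 620},  # large box: both arches
    "annotator": "radiologist_1",
}

payload = json.dumps(annotation, indent=2)  # one JSON file per annotation
record = json.loads(payload)                # round-trips losslessly
print(record["tooth_number"])
```

Keeping one small self-describing file per annotation, as the study does, makes inter-rater disagreement checks a simple record-by-record comparison.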
Affiliation(s)
- Eliana Dantas Costa: Department of Dental Materials and Prosthodontics, School of Dentistry of Ribeirão Preto, University of São Paulo, Ribeirão Preto, São Paulo, Brazil
- Hugo Gaêta-Araujo: Department of Stomatology, Public Health and Forensic Dentistry, Division of Oral Radiology, School of Dentistry of Ribeirão Preto, University of São Paulo, Ribeirão Preto, São Paulo, Brazil
- José Andery Carneiro: Department of Computing and Mathematics, University of São Paulo, Ribeirão Preto, São Paulo, Brazil
- José Augusto Baranauskas: Department of Computing and Mathematics, University of São Paulo, Ribeirão Preto, São Paulo, Brazil
- Alessandra Alaniz Macedo: Department of Computing and Mathematics, University of São Paulo, Ribeirão Preto, São Paulo, Brazil
- Camila Tirapelli: Department of Dental Materials and Prosthodontics, School of Dentistry of Ribeirão Preto, University of São Paulo, Ribeirão Preto, São Paulo, Brazil

13
Rizk PA, Gonzalez MR, Galoaa BM, Girgis AG, Van Der Linden L, Chang CY, Lozano-Calderon SA. Machine Learning-Assisted Decision Making in Orthopaedic Oncology. JBJS Rev 2024; 12:01874474-202407000-00005. [PMID: 38991098 DOI: 10.2106/jbjs.rvw.24.00057] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 07/13/2024]
Abstract
» Artificial intelligence is an umbrella term for computational calculations that are designed to mimic human intelligence and problem-solving capabilities, although in the future, this may become an incomplete definition. Machine learning (ML) encompasses the development of algorithms or predictive models that generate outputs without explicit instructions, assisting in clinical predictions based on large data sets. Deep learning is a subset of ML that utilizes layers of networks that use various inter-relational connections to define and generalize data.
» ML algorithms can enhance radiomics techniques for improved image evaluation and diagnosis. While ML shows promise with the advent of radiomics, there are still obstacles to overcome.
» Several calculators leveraging ML algorithms have been developed to predict survival in primary sarcomas and metastatic bone disease utilizing patient-specific data. While these models often report exceptionally accurate performance, it is crucial to evaluate their robustness using standardized guidelines.
» While increased computing power suggests continuous improvement of ML algorithms, these advancements must be balanced against challenges such as diversifying data, addressing ethical concerns, and enhancing model interpretability.
Affiliation(s)
- Paul A Rizk: Division of Orthopaedic Oncology, Department of Orthopaedic Surgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Marcos R Gonzalez: Division of Orthopaedic Oncology, Department of Orthopaedic Surgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Bishoy M Galoaa: Interdisciplinary Science & Engineering Complex (ISEC), Northeastern University, Boston, Massachusetts
- Andrew G Girgis: Boston University Chobanian & Avedisian School of Medicine, Boston, Massachusetts
- Lotte Van Der Linden: Division of Orthopaedic Oncology, Department of Orthopaedic Surgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Connie Y Chang: Musculoskeletal Imaging and Intervention, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts
- Santiago A Lozano-Calderon: Division of Orthopaedic Oncology, Department of Orthopaedic Surgery, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts

14
Al-Kadi OS, Al-Emaryeen R, Al-Nahhas S, Almallahi I, Braik R, Mahafza W. Empowering brain cancer diagnosis: harnessing artificial intelligence for advanced imaging insights. Rev Neurosci 2024; 35:399-419. [PMID: 38291768 DOI: 10.1515/revneuro-2023-0115] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2023] [Accepted: 12/10/2023] [Indexed: 02/01/2024]
Abstract
Artificial intelligence (AI) is increasingly being used in the medical field, specifically for brain cancer imaging. In this review, we explore how AI-powered medical imaging can impact the diagnosis, prognosis, and treatment of brain cancer. We discuss various AI techniques, including deep learning and causality learning, and their relevance. Additionally, we examine current applications that provide practical solutions for detecting, classifying, segmenting, and registering brain tumors. Although challenges such as data quality, availability, interpretability, transparency, and ethics persist, we emphasise the enormous potential of intelligent applications in standardising procedures and enhancing personalised treatment, leading to improved patient outcomes. Innovative AI solutions have the power to revolutionise neuro-oncology by enhancing the quality of routine clinical practice.
Affiliation(s)
- Omar S Al-Kadi: King Abdullah II School for Information Technology, University of Jordan, Amman, 11942, Jordan
- Roa'a Al-Emaryeen: King Abdullah II School for Information Technology, University of Jordan, Amman, 11942, Jordan
- Sara Al-Nahhas: King Abdullah II School for Information Technology, University of Jordan, Amman, 11942, Jordan
- Isra'a Almallahi: Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
- Ruba Braik: Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
- Waleed Mahafza: Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan

15
Ru J, Zhu Z, Shi J. Spatial and geometric learning for classification of breast tumors from multi-center ultrasound images: a hybrid learning approach. BMC Med Imaging 2024; 24:133. [PMID: 38840240 PMCID: PMC11155188 DOI: 10.1186/s12880-024-01307-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2024] [Accepted: 05/27/2024] [Indexed: 06/07/2024] Open
Abstract
BACKGROUND Breast cancer is the most common cancer among women, and ultrasound is a common tool for early screening. Deep learning is now applied as an auxiliary tool, providing predictive results that help doctors decide whether further examination or treatment is needed. This study aimed to develop a hybrid learning approach for breast ultrasound classification by extracting more potential features from local and multi-center ultrasound data. METHODS We proposed a hybrid learning approach to classify breast tumors as benign or malignant. Three multi-center datasets (BUSI, BUS, OASBUD) were used to pretrain a model by federated learning, and the model was then fine-tuned locally on each dataset. The proposed model consisted of a convolutional neural network (CNN) and a graph neural network (GNN), aiming to extract features from images at a spatial level and from graphs at a geometric level. The input images are small and free from pixel-level labels, and the input graphs are generated automatically in an unsupervised manner, which reduces labeling labor and memory costs. RESULTS The classification AUROC of our proposed method is 0.911, 0.871, and 0.767 for BUSI, BUS, and OASBUD, with balanced accuracies of 87.6%, 85.2%, and 61.4%, respectively. The results show that our method outperforms conventional methods. CONCLUSIONS Our hybrid approach can learn inter-center features across multi-center data as well as intra-center features of local data. It shows potential in aiding doctors with early-stage breast tumor classification in ultrasound.
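The two metrics reported above have simple definitions worth keeping in mind: AUROC is the probability that a randomly chosen positive is scored above a randomly chosen negative, and balanced accuracy is the mean of sensitivity and specificity. A minimal numpy sketch with hypothetical scores (not the paper's data):

```python
import numpy as np

def auroc(y_true, scores):
    """Rank-based AUROC: P(random positive scored above random negative),
    with ties counted as 0.5."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity and specificity; robust to class imbalance."""
    sens = np.mean(y_pred[y_true == 1] == 1)
    spec = np.mean(y_pred[y_true == 0] == 0)
    return (sens + spec) / 2

y = np.array([0, 0, 0, 1, 1])          # toy labels (benign=0, malignant=1)
s = np.array([0.1, 0.4, 0.35, 0.8, 0.7])  # toy model scores
print(auroc(y, s))                               # 1.0: positives outrank negatives
print(balanced_accuracy(y, (s > 0.5).astype(int)))  # 1.0 at this threshold
```

Balanced accuracy matters here because screening datasets are typically imbalanced, so plain accuracy would overstate performance on the majority class.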
Affiliation(s)
- Jintao Ru: Department of Medical Engineering, Shaoxing Hospital of Traditional Chinese Medicine, Shaoxing, Zhejiang, People's Republic of China
- Zili Zhu: Department of Radiology, The First Affiliated Hospital of Ningbo University, Ningbo, Zhejiang, People's Republic of China
- Jialin Shi: Rehabilitation Medicine Institute, Zhejiang Rehabilitation Medical Center, Hangzhou, Zhejiang, People's Republic of China

16
Li Y, Shao Y, Wang J, Liu Y, Yang Y, Wang Z, Xi Q. Machine learning based on functional and structural connectivity in mild cognitive impairment. Magn Reson Imaging 2024; 109:10-17. [PMID: 38408690 DOI: 10.1016/j.mri.2024.02.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2023] [Revised: 02/21/2024] [Accepted: 02/21/2024] [Indexed: 02/28/2024]
Abstract
OBJECTIVE Alzheimer's disease (AD) is a chronic, degenerative neurological disorder characterized by progressive cognitive decline and mental behavioral abnormalities. Mild cognitive impairment (MCI) is regarded as a transitional stage in the progression from normal elderly individuals to patients with AD. While studies have identified abnormalities in brain connectivity in patients with MCI, including functional and structural connectivity, accurately identifying patients with MCI in clinical screening remains challenging. We hypothesized that utilizing machine learning (ML) based on both functional and structural connectivity could yield meaningful results in distinguishing between patients with MCI and normal elderly individuals, so as to provide valuable information for early diagnosis and precise evaluation of patients with MCI. METHODS Following clinical criteria, we recruited 32 patients with MCI for the patient group, and 32 normal elderly individuals for the control group. All subjects underwent examinations for resting-state functional magnetic resonance imaging (rs-fMRI) and diffusion tensor imaging (DTI). Subsequently, significant functional and structural connectivity features were selected and combined with a support vector machine for classification of the patient and control groups. RESULTS We observed significantly different functional connectivity in the frontal lobe and putamen between the MCI group and normal controls. The results based on functional connectivity features demonstrated a classification accuracy of 71.88% and an area under the curve (AUC) value of 0.78. In terms of structural connectivity, we found that decreased fractional anisotropy in patients with MCI was significantly associated with Montreal Cognitive Assessment scores, specifically in regions such as the precuneus and cingulate gyrus. The classification results using the structural connectivity feature yielded an accuracy of 92.19% and an AUC value of 0.99. 
Lastly, combining functional and structural connectivity features resulted in a classification accuracy and AUC value of 93.75% and 0.99, respectively. CONCLUSIONS In this study, we demonstrated a high classification performance, underscoring the potential of both brain functional and structural connectivity in distinguishing patients with MCI from normal elderly individuals. Furthermore, the integration of functional connectivity and structural connectivity features indicated that utilizing rs-fMRI and DTI could enhance the accuracy and specificity of identifying patients with MCI compared with relying on a single neuroimaging technique.
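The fusion step described above (concatenating functional and structural connectivity features before a support vector machine) can be sketched generically. This is a self-contained linear SVM trained by sub-gradient descent on synthetic features; the feature dimensions, labels, and training details are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.05, epochs=300):
    """Minimal linear SVM: full-batch sub-gradient descent on the regularized
    hinge loss. Labels y must be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        mask = y * (X @ w + b) < 1  # samples violating the margin
        if mask.any():
            w -= lr * (lam * w - (y[mask, None] * X[mask]).mean(axis=0))
            b += lr * y[mask].mean()
        else:
            w -= lr * lam * w
    return w, b

rng = np.random.default_rng(1)
n = 60
func_feat = rng.normal(size=(n, 5))      # stand-in "functional connectivity" features
struct_feat = rng.normal(size=(n, 5))    # stand-in "structural connectivity" features
X = np.hstack([func_feat, struct_feat])  # feature-level fusion by concatenation
y = np.where(X[:, 0] + X[:, 7] > 0, 1, -1)  # hypothetical rule spanning both blocks

w, b = train_linear_svm(X, y)
acc = float(np.mean(np.sign(X @ w + b) == y))
print(acc)
```

Because the synthetic decision rule depends on one column from each feature block, neither block alone separates the classes as well as the concatenation, mirroring the paper's finding that combined features outperform either modality alone.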
Affiliation(s)
- Yan Li: Department of Radiology, Shanghai East Hospital, Tongji University School of Medicine, 150 Jimo Road, Pudong New Area, Shanghai 200120, China
- Yongjia Shao: Department of Radiology, Shanghai East Hospital, Tongji University School of Medicine, 150 Jimo Road, Pudong New Area, Shanghai 200120, China
- Junlang Wang: Department of Radiology, Shanghai East Hospital, Tongji University School of Medicine, 150 Jimo Road, Pudong New Area, Shanghai 200120, China; Department of Radiology, Daping Hospital, Army Medical University, No. 10 Changjiang Branch Road, Yuzhong District, Chongqing 400042, China
- Yu Liu: School of Computer Science and Technology, Donghua University, No. 2999 North Renmin Road, Songjiang Area, Shanghai 200000, China
- Yuhan Yang: Department of Radiology, Shanghai East Hospital, Tongji University School of Medicine, 150 Jimo Road, Pudong New Area, Shanghai 200120, China
- Zijian Wang: School of Computer Science and Technology, Donghua University, No. 2999 North Renmin Road, Songjiang Area, Shanghai 200000, China
- Qian Xi: Department of Radiology, Shanghai East Hospital, Tongji University School of Medicine, 150 Jimo Road, Pudong New Area, Shanghai 200120, China

17
Zhou L, Ji Q, Peng H, Chen F, Zheng Y, Jiao Z, Gong J, Li W. Automatic image segmentation and online survival prediction model of medulloblastoma based on machine learning. Eur Radiol 2024; 34:3644-3655. [PMID: 37994966 DOI: 10.1007/s00330-023-10316-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2022] [Revised: 08/19/2023] [Accepted: 08/26/2023] [Indexed: 11/24/2023]
Abstract
OBJECTIVES To develop a dynamic nomogram containing a radiomics signature and clinical features for estimating the overall survival (OS) of patients with medulloblastoma (MB), and to design an automatic image segmentation model to reduce labor and time costs. METHODS Data from 217 MB patients seen over the past 4 years were collected and separated into a training set and a test set. Intraclass correlation coefficient (ICC), random survival forest (RSF), and least absolute shrinkage and selection operator (LASSO) regression methods were employed to select variables in the training set. Univariate and multivariate Cox proportional hazards models, as well as Kaplan-Meier analysis, were utilized to determine the relationship among the radiomics signature, clinical features, and overall survival. A dynamic nomogram was developed. Additionally, a 3D U-Net deep learning model was used to train the automatic tumor delineation model. RESULTS Higher Rad-scores were significantly associated with worse OS in both the training and validation sets (p < 0.001 and p = 0.047, respectively). The Cox model combining clinical and radiomics signatures ([IBS = 0.079], [C-index = 0.747, SE = 0.045]) outperformed either the radiomics signature alone ([IBS = 0.081], [C-index = 0.738, SE = 0.041]) or clinical features alone ([IBS = 0.085], [C-index = 0.565, SE = 0.041]). The deep learning-based tumor segmentation model achieved Dice coefficients of 0.8372, 0.8017, and 0.7673 on the training, validation, and test sets, respectively. CONCLUSIONS A combination of radiomics features and clinical characteristics enhances the accuracy of OS prediction in medulloblastoma patients. Additionally, an MRI-based automatic segmentation model reduces labor and time costs.
CLINICAL RELEVANCE STATEMENT A survival prognosis model based on radiomics and clinical characteristics could improve the accuracy of prognosis estimation for medulloblastoma patients, and an MRI-based automatic tumor segmentation model could reduce time costs. KEY POINTS
• A model that combines radiomics and clinical features can predict the survival prognosis of patients with medulloblastoma.
• The online nomogram and automatic image segmentation model can help doctors better judge the prognosis of medulloblastoma and save working time.
• The developed AI system can help doctors judge the prognosis of diseases and promote the development of precision medicine.
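The Dice coefficient used to score the segmentation model measures the overlap between a predicted mask and the reference mask: 2|A∩B| / (|A| + |B|). A minimal sketch on toy binary masks (illustrative only):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice similarity coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)  # eps avoids 0/0

pred = np.zeros((4, 4), dtype=int)
target = np.zeros((4, 4), dtype=int)
pred[1:3, 1:3] = 1    # predicted tumor: 4 voxels
target[1:3, 1:4] = 1  # reference tumor: 6 voxels, 4 shared with the prediction
print(dice(pred, target))  # 2*4 / (4+6) = 0.8
```

Dice rewards overlap relative to the sizes of both masks, so it is less forgiving than plain voxel accuracy when the tumor occupies a small fraction of the volume.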
Affiliation(s)
- Lili Zhou: Cancer Center, Beijing Tiantan Hospital, Capital Medical University, No. 119, Nansihuan West Road, Fengtai District, Beijing, 100070, China
- Qiang Ji: Cancer Center, Beijing Tiantan Hospital, Capital Medical University, No. 119, Nansihuan West Road, Fengtai District, Beijing, 100070, China
- Hong Peng: Department of Radiology, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, No. 639 Zhizaoju Road, Huangpu District, Shanghai, 200011, China
- Feng Chen: Cancer Center, Beijing Tiantan Hospital, Capital Medical University, No. 119, Nansihuan West Road, Fengtai District, Beijing, 100070, China
- Yi Zheng: Cancer Center, Beijing Tiantan Hospital, Capital Medical University, No. 119, Nansihuan West Road, Fengtai District, Beijing, 100070, China
- Jian Gong: Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, No. 119, Nansihuan West Road, Fengtai District, Beijing, 100070, China
- Wenbin Li: Cancer Center, Beijing Tiantan Hospital, Capital Medical University, No. 119, Nansihuan West Road, Fengtai District, Beijing, 100070, China

18
Thribhuvan Reddy D, Grewal I, García Pinzon LF, Latchireddy B, Goraya S, Ali Alansari B, Gadwal A. The Role of Artificial Intelligence in Healthcare: Enhancing Coronary Computed Tomography Angiography for Coronary Artery Disease Management. Cureus 2024; 16:e61523. [PMID: 38957241 PMCID: PMC11218716 DOI: 10.7759/cureus.61523] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/02/2024] [Indexed: 07/04/2024] Open
Abstract
This review explores the potential of artificial intelligence (AI) in coronary CT angiography (CCTA), a key tool for diagnosing coronary artery disease (CAD). Because CAD remains a major cause of death worldwide, effective and accurate diagnostic methods are required to identify and manage the condition. CCTA is a noninvasive alternative for diagnosing CAD, but it produces a large amount of imaging data that must be analyzed. We discuss how incorporating AI into CCTA can enhance its diagnostic accuracy and operational efficiency. Using AI technologies such as machine learning (ML) and deep learning (DL), CCTA image analysis can be automated and significantly refined, enabling plaque characterization, assessment of stenosis severity, and more accurate risk stratification than traditional methods. By automating routine tasks, AI-driven CCTA can considerably reduce radiologists' workload and, more importantly, allow them to devote more time and expertise to complex cases, improving overall patient care. However, the field of AI in CCTA is not without its challenges, which include data protection, algorithm transparency, and the need for standardization. Despite these obstacles, integrating AI into CCTA holds great promise for managing CAD, yielding better clinical outcomes and more efficient healthcare delivery. Future research on AI algorithms for CCTA, on the ethical use of AI, and on overcoming the technical and clinical barriers to widespread adoption will hopefully pave the way for profound AI-driven transformations in healthcare.
Affiliation(s)
- Inayat Grewal: Department of Medicine, Government Medical College and Hospital, Chandigarh, IND
- Simran Goraya: Department of Medicine, Kharkiv National Medical University, Kharkiv, UKR
- Aishwarya Gadwal: Department of Radiodiagnosis, St. John's Medical College and Hospital, Bengaluru, IND

19
Mayer R, Turkbey B, Simone CB. Autonomous Tumor Signature Extraction Applied to Spatially Registered Bi-Parametric MRI to Predict Prostate Tumor Aggressiveness: A Pilot Study. Cancers (Basel) 2024; 16:1822. [PMID: 38791901 PMCID: PMC11120057 DOI: 10.3390/cancers16101822] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2024] [Revised: 05/07/2024] [Accepted: 05/09/2024] [Indexed: 05/26/2024] Open
Abstract
BACKGROUND Accurate, reliable, non-invasive assessment of patients diagnosed with prostate cancer is essential for proper disease management. Quantitative assessment of multi-parametric MRI, such as through artificial intelligence or spectral/statistical approaches, can provide a non-invasive, objective determination of prostate tumor aggressiveness without side effects, without the potential for poor sampling inherent in needle biopsy, and without the overdiagnosis associated with prostate serum antigen measurements. To simplify and expedite prostate tumor evaluation, this study examined the efficacy of autonomously extracting tumor spectral signatures for spectral/statistical algorithms applied to spatially registered bi-parametric MRI. METHODS Spatially registered hypercubes were digitally constructed by resizing, translating, and cropping the image sequences (Apparent Diffusion Coefficient (ADC), High B-value, T2) from 42 consecutive patients in the bi-parametric MRI PI-CAI dataset. Candidate prostate cancer blobs were identified by thresholding a normalized composite of the registered set that maximizes the High B-value signal while minimizing the ADC and T2 signals, so that tumors appear "green" in the color composite. Clinically significant blobs were selected based on size, average normalized green value, sliding-window statistics within a blob, and position within the hypercube. The center of mass and maximized sliding-window statistics within the blobs identified voxels associated with tumor signatures. We used correlation coefficients (R) and p-values to evaluate the linear regression fits of the z-score and SCR (with processed covariance matrix) to tumor aggressiveness, as well as the Area Under the Curve (AUC) for Receiver Operating Characteristic (ROC) curves from logistic probability fits to clinically significant prostate cancer.
RESULTS The highest R (R > 0.45), highest AUC (>0.90), and lowest p-values (<0.01) were achieved using the z-score and modified registration applied to the covariance matrix, with tumor signatures selected from the "greenest" parts of the selected blob. CONCLUSIONS The first autonomous tumor signature extraction applied to spatially registered bi-parametric MRI shows promise for determining prostate tumor aggressiveness.
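A z-score-style spectral detector of the general kind described scores each voxel along a tumor signature direction, whitened by the scene covariance, so that background voxels score near zero with roughly unit variance. The sketch below implants a hypothetical 3-band signature into a synthetic "hypercube" and flags voxels above 3 sigmas; the signature values, cube size, and threshold are all illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 3-band "hypercube" standing in for registered (ADC, High B-value, T2).
h, w, bands = 32, 32, 3
cube = rng.normal(0.0, 1.0, size=(h, w, bands))
signature = np.array([-3.0, 4.0, -3.0])  # hypothetical tumor: low ADC/T2, high B-value
cube[10:14, 10:14] += signature          # implant a 4x4 "tumor" blob

X = cube.reshape(-1, bands)
mu = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))

# Matched-filter statistic per voxel, normalized so background scores have
# roughly unit variance (a "z-score" along the signature direction).
num = (X - mu) @ cov_inv @ signature
score = (num / np.sqrt(signature @ cov_inv @ signature)).reshape(h, w)

detected = score > 3.0  # flag voxels more than ~3 "sigmas" along the signature
print(detected[10:14, 10:14].mean())  # fraction of implanted tumor voxels flagged
```

Whitening by the inverse covariance is what lets a single scalar threshold separate the blob from correlated background, which is the core idea behind covariance-based spectral/statistical detection.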
Affiliation(s)
- Rulon Mayer: Department of Radiation Oncology, University of Pennsylvania, Philadelphia, PA 19104, USA; OncoScore, Garrett Park, MD 20896, USA
- Baris Turkbey: Molecular Imaging Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892, USA

20
Nadarzynski T, Knights N, Husbands D, Graham CA, Llewellyn CD, Buchanan T, Montgomery I, Ridge D. Achieving health equity through conversational AI: A roadmap for design and implementation of inclusive chatbots in healthcare. PLOS DIGITAL HEALTH 2024; 3:e0000492. [PMID: 38696359 PMCID: PMC11065243 DOI: 10.1371/journal.pdig.0000492] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/11/2023] [Accepted: 03/25/2024] [Indexed: 05/04/2024]
Abstract
BACKGROUND The rapid evolution of conversational and generative artificial intelligence (AI) has led to the increased deployment of AI tools in healthcare settings. While these conversational AI tools promise efficiency and expanded access to healthcare services, there are growing ethical, practical, and inclusivity concerns. This study aimed to identify activities which reduce bias in conversational AI and make its design and implementation more equitable. METHODS A qualitative research approach was employed to develop an analytical framework based on content analysis of 17 guidelines about AI use in clinical settings. A stakeholder consultation was subsequently conducted with a total of 33 ethnically diverse community members, AI designers, industry experts and relevant health professionals to further develop a roadmap for the equitable design and implementation of conversational AI in healthcare. Framework analysis was conducted on the interview data. RESULTS A 10-stage roadmap was developed outlining activities relevant to the equitable design and implementation phases of conversational AI: 1) Conception and planning, 2) Diversity and collaboration, 3) Preliminary research, 4) Co-production, 5) Safety measures, 6) Preliminary testing, 7) Healthcare integration, 8) Service evaluation and auditing, 9) Maintenance, and 10) Termination. DISCUSSION We make specific recommendations for increasing the equity of conversational AI as part of healthcare services. These emphasise the importance of a collaborative approach and the involvement of patient groups in navigating the rapid evolution of conversational AI technologies. Further research must assess the impact of the recommended activities on chatbots' fairness and their ability to reduce health inequalities.
Affiliation(s)
- Tom Nadarzynski: School of Social Sciences, University of Westminster, London, United Kingdom
- Nicky Knights: School of Social Sciences, University of Westminster, London, United Kingdom
- Deborah Husbands: School of Social Sciences, University of Westminster, London, United Kingdom
- Cynthia A. Graham: Kinsey Institute and Department of Gender Studies, Indiana University, Bloomington, United States of America
- Carrie D. Llewellyn: Brighton and Sussex Medical School, University of Sussex, Brighton, United Kingdom
- Tom Buchanan: School of Social Sciences, University of Westminster, London, United Kingdom
- Damien Ridge: School of Social Sciences, University of Westminster, London, United Kingdom

21
Chakraborty S, Pradhan B. Editorial for the Special Issue "Machine Learning in Computer Vision and Image Sensing: Theory and Applications". SENSORS (BASEL, SWITZERLAND) 2024; 24:2874. [PMID: 38732978 PMCID: PMC11086158 DOI: 10.3390/s24092874] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/01/2024] [Accepted: 03/25/2024] [Indexed: 05/13/2024]
Abstract
Machine learning (ML) models have experienced remarkable growth in their application for multimodal data analysis over the past decade [...].
Affiliation(s)
- Subrata Chakraborty: School of Science and Technology, University of New England, Armidale, NSW 2351, Australia; Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Civil and Environmental Engineering, Faculty of Engineering & IT, University of Technology Sydney, Sydney, NSW 2007, Australia; Griffith Business School, Griffith University, Nathan, QLD 4111, Australia
- Biswajeet Pradhan: Centre for Advanced Modelling and Geospatial Information Systems (CAMGIS), School of Civil and Environmental Engineering, Faculty of Engineering & IT, University of Technology Sydney, Sydney, NSW 2007, Australia; Earth Observation Centre, Institute of Climate Change, Universiti Kebangsaan Malaysia (UKM), Bangi 43600, Selangor, Malaysia

22
Huang W, Ong WC, Wong MKF, Ng EYK, Koh T, Chandramouli C, Ng CT, Hummel Y, Huang F, Lam CSP, Tromp J. Applying the UTAUT2 framework to patients' attitudes toward healthcare task shifting with artificial intelligence. BMC Health Serv Res 2024; 24:455. [PMID: 38605373 PMCID: PMC11007870 DOI: 10.1186/s12913-024-10861-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2023] [Accepted: 03/13/2024] [Indexed: 04/13/2024] Open
Abstract
BACKGROUND Increasing patient loads, healthcare inflation and an ageing population have put pressure on the healthcare system. Artificial intelligence and machine learning innovations can aid in task shifting to help healthcare systems remain efficient and cost-effective. To understand patients' acceptance of such task shifting with the aid of AI, this study adapted the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2), examining performance and effort expectancy, facilitating conditions, social influence, hedonic motivation and behavioural intention. METHODS This was a cross-sectional study conducted between September 2021 and June 2022 at the National Heart Centre, Singapore. One hundred patients, aged ≥ 21 years with at least one heart failure symptom (pedal oedema, New York Heart Association II-III effort limitation, orthopnoea, breathlessness), who presented to the cardiac imaging laboratory for a physician-ordered clinical echocardiogram, underwent both an echocardiogram performed by skilled sonographers and an echocardiogram performed by a novice guided by AI technologies. They were then given a survey assessing the above-mentioned constructs using the UTAUT2 framework. RESULTS Significant, direct and positive effects of all constructs on the behavioural intention to accept the AI-novice combination were found, with facilitating conditions, hedonic motivation and performance expectancy being the top 3 constructs. Analysis of the moderating variables (age, gender and education level) found no impact on behavioural intention. CONCLUSIONS These results are important for stakeholders and changemakers such as policymakers, governments, physicians and insurance companies as they design adoption strategies to ensure successful patient engagement, by focusing on the factors affecting facilitating conditions, hedonic motivation and performance expectancy for AI technologies used in healthcare task shifting.
Affiliation(s)
- Weiting Huang: National Heart Centre Singapore, 5 Hospital Drive, Singapore, 169609, Singapore; Duke-NUS Medical School, Singapore, Singapore
- Wen Chong Ong: National Healthcare Group Polyclinics, Singapore, Singapore
- Mark Kei Fong Wong: School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore, Singapore
- Eddie Yin Kwee Ng: School of Mechanical and Aerospace Engineering, Nanyang Technological University, Singapore, Singapore
- Tracy Koh: National Heart Centre Singapore, 5 Hospital Drive, Singapore, 169609, Singapore; Duke-NUS Medical School, Singapore, Singapore
- Chanchal Chandramouli: National Heart Centre Singapore, 5 Hospital Drive, Singapore, 169609, Singapore; Duke-NUS Medical School, Singapore, Singapore
- Choon Ta Ng: National Heart Centre Singapore, 5 Hospital Drive, Singapore, 169609, Singapore; Duke-NUS Medical School, Singapore, Singapore
- Carolyn Su Ping Lam: National Heart Centre Singapore, 5 Hospital Drive, Singapore, 169609, Singapore; Duke-NUS Medical School, Singapore, Singapore; Us2.ai, Singapore, Singapore
- Jasper Tromp: Duke-NUS Medical School, Singapore, Singapore; Saw Swee Hock School of Public Health, National University of Singapore, National University Health System Singapore, Singapore, Singapore
23
Sherwani MK, Gopalakrishnan S. A systematic literature review: deep learning techniques for synthetic medical image generation and their applications in radiotherapy. FRONTIERS IN RADIOLOGY 2024; 4:1385742. [PMID: 38601888 PMCID: PMC11004271 DOI: 10.3389/fradi.2024.1385742] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/13/2024] [Accepted: 03/11/2024] [Indexed: 04/12/2024]
Abstract
The aim of this systematic review is to determine whether deep learning (DL) algorithms can provide a clinically feasible alternative to classic algorithms for synthetic computed tomography (sCT). The following categories are presented in this study:
- MR-based treatment planning and synthetic CT generation techniques.
- Generation of synthetic CT images based on cone-beam CT images.
- Low-dose CT to high-dose CT generation.
- Attenuation correction for PET images.
To perform appropriate database searches, we reviewed journal articles published between January 2018 and June 2023. Current methodology, study strategies, and results with relevant clinical applications were analyzed as we outlined the state of the art of deep learning-based approaches to inter-modality and intra-modality image synthesis, contrasting the reviewed methodologies with traditional research approaches. The key contributions of each category were highlighted, specific challenges were identified, and accomplishments were summarized. As a final step, the statistics of all the cited works were analyzed from various aspects, revealing that DL-based sCT has achieved considerable popularity while also showing the potential of this technology. To assess the clinical readiness of the presented methods, we examined the current status of DL-based sCT generation.
Affiliation(s)
- Moiz Khan Sherwani: Section for Evolutionary Hologenomics, Globe Institute, University of Copenhagen, Copenhagen, Denmark
24
Hill DLG. AI in imaging: the regulatory landscape. Br J Radiol 2024; 97:483-491. [PMID: 38366148 PMCID: PMC11027239 DOI: 10.1093/bjr/tqae002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2023] [Revised: 12/03/2023] [Accepted: 12/26/2023] [Indexed: 02/18/2024] Open
Abstract
Artificial intelligence (AI) methods have been applied to medical imaging for several decades, but in the last few years the number of publications and the number of AI-enabled medical devices coming to market have increased significantly. While some AI-enabled approaches are proving very valuable, systematic reviews of the AI imaging field identify significant weaknesses in a substantial proportion of the literature. Medical device regulators have recently become more proactive in publishing guidance documents and recognizing standards that require the development and validation of AI-enabled medical devices to be more rigorous than for traditional "rule-based" software. In particular, developers are required to better identify and mitigate risks (such as bias) that arise in AI-enabled devices, and to validate the devices in a realistic clinical setting to ensure their output is clinically meaningful. While this evolving regulatory landscape means that developers will take longer to bring novel AI-based medical imaging devices to market, such additional rigour is necessary to address existing weaknesses in the field and ensure that patients and healthcare professionals can trust AI-enabled devices. The academic community would also benefit from taking this regulatory framework into account, to improve the quality of the literature and make it easier for academically developed AI tools to make the transition to medical devices that impact healthcare.
25
Qian Y, Alhaskawi A, Dong Y, Ni J, Abdalbary S, Lu H. Transforming medicine: artificial intelligence integration in the peripheral nervous system. Front Neurol 2024; 15:1332048. [PMID: 38419700 PMCID: PMC10899496 DOI: 10.3389/fneur.2024.1332048] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2023] [Accepted: 02/01/2024] [Indexed: 03/02/2024] Open
Abstract
In recent years, artificial intelligence (AI) has undergone remarkable advancements, exerting a significant influence across a multitude of fields. One area that has particularly garnered attention and witnessed substantial progress is its integration into the realm of the nervous system. This article provides a comprehensive examination of AI's applications within the peripheral nervous system, with a specific focus on AI-enhanced diagnostics for peripheral nervous system disorders, AI-driven pain management, advancements in neuroprosthetics, and the development of neural network models. By illuminating these facets, we unveil the burgeoning opportunities for revolutionary medical interventions and the enhancement of human capabilities, thus paving the way for a future in which AI becomes an integral component of our nervous system's interface.
Affiliation(s)
- Yue Qian: Rehabilitation Center, Hangzhou Wuyunshan Hospital (Hangzhou Institute of Health Promotion), Hangzhou, China
- Ahmad Alhaskawi: Department of Orthopedics, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Yanzhao Dong: Department of Orthopedics, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Juemin Ni: Rehabilitation Center, Hangzhou Wuyunshan Hospital (Hangzhou Institute of Health Promotion), Hangzhou, China
- Sahar Abdalbary: Department of Orthopedic Physical Therapy, Faculty of Physical Therapy, Nahda University in Beni Suef, Beni Suef, Egypt
- Hui Lu: Department of Orthopedics, The First Affiliated Hospital, Zhejiang University, Hangzhou, China; Alibaba-Zhejiang University Joint Research Center of Future Digital Healthcare, Zhejiang University, Hangzhou, China
26
Mahootiha M, Qadir HA, Bergsland J, Balasingham I. Multimodal deep learning for personalized renal cell carcinoma prognosis: Integrating CT imaging and clinical data. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2024; 244:107978. [PMID: 38113804 DOI: 10.1016/j.cmpb.2023.107978] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/01/2023] [Revised: 12/05/2023] [Accepted: 12/12/2023] [Indexed: 12/21/2023]
Abstract
BACKGROUND AND OBJECTIVE Renal cell carcinoma represents a significant global health challenge with a low survival rate. The aim of this research was to devise a comprehensive deep-learning model capable of predicting survival probabilities in patients with renal cell carcinoma by integrating CT imaging and clinical data, addressing the limitations observed in prior studies and facilitating the identification of patients requiring urgent treatment. METHODS The proposed framework comprises three modules: a 3D image feature extractor, clinical variable selection, and survival prediction. Based on a 3D CNN architecture, the feature extractor module predicts, from CT images, the ISUP grade of renal cell carcinoma tumors, which is linked to mortality rates. Clinical variables are systematically selected using the Spearman score and random forest importance score as criteria. A deep learning-based network, trained with a discrete LogisticHazard-based loss, performs the survival prediction. Nine distinct experiments are performed, with the number of clinical variables determined by different thresholds of the Spearman and importance scores. RESULTS Our findings demonstrate that the proposed strategy surpasses the current literature on renal cancer prognosis based on CT scans and clinical factors. The best-performing experiment yielded a concordance index of 0.84 and an area under the curve of 0.8 on the test cohort, which suggests strong predictive power. CONCLUSIONS The multimodal deep-learning approach developed in this study shows promising results in estimating survival probabilities for renal cell carcinoma patients using CT imaging and clinical data. This may help identify patients who require urgent treatment, potentially improving patient outcomes. The code created for this project is publicly available on GitHub.
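The Spearman-score variable selection described in this abstract can be illustrated with a small sketch. This is not the authors' code: the variable names, the 0.3 threshold, and the omission of the random forest importance criterion are illustrative assumptions.

```python
def rank(xs):
    """Rank values from 1..n, averaging ranks over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tied group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)  # assumes non-constant inputs

def spearman(a, b):
    """Spearman rho = Pearson correlation of the ranks."""
    return pearson(rank(a), rank(b))

def select_variables(features, outcome, threshold=0.3):
    """Keep variables whose |Spearman rho| with the outcome meets the threshold."""
    return [name for name, vals in features.items()
            if abs(spearman(vals, outcome)) >= threshold]
```

In the paper, different thresholds on the Spearman and importance scores produce the nine experimental variable sets; the sketch above only shows the correlation-based half of that filter.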
Affiliation(s)
- Maryamalsadat Mahootiha: The Intervention Centre, Oslo University Hospital, Oslo, 0372, Norway; Faculty of Medicine, University of Oslo, Oslo, 0372, Norway
- Hemin Ali Qadir: The Intervention Centre, Oslo University Hospital, Oslo, 0372, Norway
- Jacob Bergsland: The Intervention Centre, Oslo University Hospital, Oslo, 0372, Norway
- Ilangko Balasingham: The Intervention Centre, Oslo University Hospital, Oslo, 0372, Norway; Department of Electronic Systems, Norwegian University of Science and Technology, Trondheim, Norway
27
Aamir A, Iqbal A, Jawed F, Ashfaque F, Hafsa H, Anas Z, Oduoye MO, Basit A, Ahmed S, Abdul Rauf S, Khan M, Mansoor T. Exploring the current and prospective role of artificial intelligence in disease diagnosis. Ann Med Surg (Lond) 2024; 86:943-949. [PMID: 38333305 PMCID: PMC10849462 DOI: 10.1097/ms9.0000000000001700] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2023] [Accepted: 12/28/2023] [Indexed: 02/10/2024] Open
Abstract
Artificial intelligence (AI) refers to the simulation of human intelligence processes by machines, especially computer systems, providing assistance in a variety of patient care and health systems. The aim of this review is to contribute valuable insights to the ongoing discourse on the transformative potential of AI in healthcare, providing a nuanced understanding of its current applications, future possibilities, and associated challenges. The authors conducted a literature search on the current role of AI in disease diagnosis and its possible future applications using PubMed, Google Scholar, and ResearchGate, covering the past 10 years. Our investigation revealed that AI, encompassing machine-learning and deep-learning techniques, has become integral to healthcare, facilitating immediate access to evidence-based guidelines, the latest medical literature, and tools for generating differential diagnoses. However, our research also acknowledges the limitations of current AI methodologies in disease diagnosis and explores uncertainties and obstacles associated with the complete integration of AI into clinical practice. This review highlights the critical significance of integrating AI into the medical healthcare framework and examines the evolutionary trajectory of healthcare-oriented AI from its inception, delving into the current state of development and projecting the extent of future reliance on AI. Central to this study is the exploration of how the strategic integration of AI can accelerate the diagnostic process, heighten diagnostic accuracy, and enhance overall operational efficiency, while relieving the burdens faced by healthcare practitioners.
Affiliation(s)
- Ali Aamir: Department of Medicine, Dow University of Health Sciences
- Arham Iqbal: Department of Medicine, Dow International Medical College, Karachi, Pakistan
- Fareeha Jawed: Department of Medicine, Dow University of Health Sciences
- Faiza Ashfaque: Department of Medicine, Dow University of Health Sciences
- Hafiza Hafsa: Department of Medicine, Dow University of Health Sciences
- Zahra Anas: Department of Medicine, Dow University of Health Sciences
- Malik Olatunde Oduoye: Department of Research, Medical Research Circle, Bukavu, Democratic Republic of Congo
- Abdul Basit: Department of Medicine, Dow University of Health Sciences
- Shaheer Ahmed: Department of Medicine, Dow University of Health Sciences
- Mushkbar Khan: Liaquat National Hospital and Medical College, Pakistan
28
Babu M, Lautman Z, Lin X, Sobota MHB, Snyder MP. Wearable Devices: Implications for Precision Medicine and the Future of Health Care. Annu Rev Med 2024; 75:401-415. [PMID: 37983384 DOI: 10.1146/annurev-med-052422-020437] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2023]
Abstract
Wearable devices are integrated analytical units equipped with sensitive physical, chemical, and biological sensors capable of noninvasive and continuous monitoring of vital physiological parameters. Recent advances in disciplines including electronics, computation, and material science have resulted in affordable and highly sensitive wearable devices that are routinely used for tracking and managing health and well-being. Combined with longitudinal monitoring of physiological parameters, wearables are poised to transform the early detection, diagnosis, and treatment/management of a range of clinical conditions. Smartwatches are the most commonly used wearable devices and have already demonstrated valuable biomedical potential in detecting clinical conditions such as arrhythmias, Lyme disease, inflammation, and, more recently, COVID-19 infection. Despite significant clinical promise shown in research settings, there remain major hurdles in translating the medical uses of wearables to the clinic. There is a clear need for more effective collaboration among stakeholders, including users, data scientists, clinicians, payers, and governments, to improve device security, user privacy, data standardization, regulatory approval, and clinical validity. This review examines the potential of wearables to offer affordable and reliable measures of physiological status that are on par with FDA-approved specialized medical devices. We briefly examine studies where wearables proved critical for the early detection of acute and chronic clinical conditions with a particular focus on cardiovascular disease, viral infections, and mental health. Finally, we discuss current obstacles to the clinical implementation of wearables and provide perspectives on their potential to deliver increasingly personalized proactive health care across a wide variety of conditions.
Affiliation(s)
- Mohan Babu: Department of Genetics, Stanford University School of Medicine, Stanford, California, USA
- Ziv Lautman: Department of Genetics, Stanford University School of Medicine, Stanford, California, USA; Department of Bioengineering, Stanford University School of Medicine, Stanford, California, USA
- Xiangping Lin: Department of Genetics, Stanford University School of Medicine, Stanford, California, USA
- Milan H B Sobota: Department of Genetics, Stanford University School of Medicine, Stanford, California, USA
- Michael P Snyder: Department of Genetics, Stanford University School of Medicine, Stanford, California, USA
29
Bekheet M, Sallah M, Alghamdi NS, Rusu-Both R, Elgarayhi A, Elmogy M. Cardiac Fibrosis Automated Diagnosis Based on FibrosisNet Network Using CMR Ischemic Cardiomyopathy. Diagnostics (Basel) 2024; 14:255. [PMID: 38337771 PMCID: PMC10855193 DOI: 10.3390/diagnostics14030255] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2023] [Revised: 01/18/2024] [Accepted: 01/19/2024] [Indexed: 02/12/2024] Open
Abstract
Ischemic heart disease is one of the most prevalent causes of death; it can be treated more effectively, with fewer fatalities, if identified early. Heart muscle fibrosis affects the diastolic and systolic function of the heart and is linked to unfavorable cardiovascular outcomes. Myocardial scarring, a hallmark of ischemic heart disease, can be accurately identified on cardiac magnetic resonance (CMR) imaging to recognize fibrosis. Over the past few decades, numerous MRI-based methods have been employed to identify and categorize cardiac fibrosis. Developing these approaches is essential and has significant medical benefits, because they increase the therapeutic advantages and the likelihood that patients will survive. Advances in deep learning (DL) networks contribute to the early and accurate diagnosis of heart muscle fibrosis. This study introduces a new deep network, FibrosisNet, which detects and classifies fibrosis when present. It comprises 17 series layers designed to achieve the fibrosis detection target. The introduced classification system is trained and evaluated for best performance. In addition, deep transfer learning is applied to several well-known convolutional neural networks to explore alternative fibrosis detection architectures. On CMR scans of ischemic cardiomyopathy, the FibrosisNet architecture achieves an accuracy of 96.05%, a sensitivity of 97.56%, and an F1-score of 96.54%. The experimental results show that FibrosisNet outperforms current state-of-the-art methods and other advanced CNN approaches.
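The accuracy, sensitivity, and F1-score figures quoted above follow the standard binary-classification definitions; a minimal sketch (not the paper's implementation) of how they are computed from labels and predictions:

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall), and F1-score for binary labels
    (1 = fibrosis present, 0 = absent)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return accuracy, sensitivity, f1
```

A high sensitivity, as reported for FibrosisNet, means few fibrosis-positive scans are missed, which is the clinically critical error mode here.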
Affiliation(s)
- Mohamed Bekheet: Applied Mathematical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt; Radiography and Medical Imaging Department, Faculty of Applied Health Sciences Technology, Sphinx University, New Assiut 71515, Egypt
- Mohammed Sallah: Department of Physics, College of Sciences, University of Bisha, P.O. Box 344, Bisha 61922, Saudi Arabia
- Norah S. Alghamdi: Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Roxana Rusu-Both: Automation Department, Faculty of Automation and Computer Science, Technical University of Cluj-Napoca, 400027 Cluj-Napoca, Romania
- Ahmed Elgarayhi: Applied Mathematical Physics Research Group, Physics Department, Faculty of Science, Mansoura University, Mansoura 35516, Egypt
- Mohammed Elmogy: Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
30
Cau R, Pisu F, Suri JS, Montisci R, Gatti M, Mannelli L, Gong X, Saba L. Artificial Intelligence in the Differential Diagnosis of Cardiomyopathy Phenotypes. Diagnostics (Basel) 2024; 14:156. [PMID: 38248033 PMCID: PMC11154548 DOI: 10.3390/diagnostics14020156] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2023] [Revised: 01/03/2024] [Accepted: 01/08/2024] [Indexed: 01/23/2024] Open
Abstract
Artificial intelligence (AI) is rapidly being applied to the medical field, especially in the cardiovascular domain. AI approaches have demonstrated their applicability in the detection, diagnosis, and management of several cardiovascular diseases, enhancing disease stratification and typing. Cardiomyopathies are a leading cause of heart failure and life-threatening ventricular arrhythmias. Identifying the etiologies is fundamental for the management and diagnostic pathway of these heart muscle diseases, requiring the integration of various data, including personal and family history, clinical examination, electrocardiography, and laboratory investigations, as well as multimodality imaging, making the clinical diagnosis challenging. In this scenario, AI has demonstrated its capability to capture subtle connections from a multitude of multiparametric datasets, enabling the discovery of hidden relationships in data and handling more complex tasks than traditional methods. This review aims to present a comprehensive overview of the main concepts related to AI and its subset. Additionally, we review the existing literature on AI-based models in the differential diagnosis of cardiomyopathy phenotypes, and we finally examine the advantages and limitations of these AI approaches.
Affiliation(s)
- Riccardo Cau: Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.) di Cagliari-Polo di Monserrato, s.s. 554 Monserrato, 09045 Cagliari, Italy
- Francesco Pisu: Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.) di Cagliari-Polo di Monserrato, s.s. 554 Monserrato, 09045 Cagliari, Italy
- Jasjit S. Suri: Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, CA 95661, USA
- Roberta Montisci: Department of Cardiology, Azienda Ospedaliero Universitaria (A.O.U.) di Cagliari-Polo di Monserrato, s.s. 554 Monserrato, 09045 Cagliari, Italy
- Marco Gatti: Department of Radiology, Università degli Studi di Torino, 10129 Turin, Italy
- Xiangyang Gong: Radiology Department, Zhejiang Provincial People’s Hospital, Affiliated People’s Hospital, Hangzhou Medical College, Hangzhou 310014, China
- Luca Saba: Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.) di Cagliari-Polo di Monserrato, s.s. 554 Monserrato, 09045 Cagliari, Italy
31
Sheikhi M, Sina S, Karimipourfard M. Deep-learned generation of renal dual-energy CT from a single-energy scan. Clin Radiol 2024; 79:e17-e25. [PMID: 37923626 DOI: 10.1016/j.crad.2023.09.021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2023] [Revised: 09/14/2023] [Accepted: 09/24/2023] [Indexed: 11/07/2023]
Abstract
AIM To investigate the role of deep learning (DL) in generating dual-energy computed tomography (DECT) images from single-energy images for precise diagnosis of kidney stone type. MATERIALS AND METHODS DECT of 23 patients was acquired, and the stone types were investigated based on the DECT software suggestions. The data were divided into two paired groups: 120 kVp input with 80 kVp target, and 120 kVp input with 135 kVp target. A p2p-UNet-GAN was used to generate the different-energy images based on common CT protocols. RESULTS The images generated by the generative adversarial network (GAN) were evaluated using the SSIM, PSNR, and MSE metrics, with values estimated at 0.85-0.95, 28-32, and 0.85-0.89, respectively. The attenuation ratios of test patient images were estimated and compared with real patient reports. The network achieved high accuracy in stone region localisation and produced accurate stone type predictions. CONCLUSION This study presents a useful DL-based method to reduce patient radiation dose and facilitate the prediction of urinary stone types using single-energy CT imaging.
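The MSE and PSNR metrics used above to score the generated images have closed-form definitions; a minimal pure-Python sketch (SSIM, which requires local windowed statistics, is omitted for brevity, and `max_val` is an assumed peak pixel intensity):

```python
import math

def mse(a, b):
    """Mean squared error between two equal-size images given as flat pixel lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the target image."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * math.log10(max_val ** 2 / m)
```

A PSNR in the 28-32 dB range, as reported, indicates that the synthesized 80 kVp and 135 kVp images are close but not pixel-identical to the acquired DECT targets.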
Affiliation(s)
- M Sheikhi: Nuclear Engineering Department, School of Mechanical Engineering, Shiraz University, Shiraz, Iran; Abu Ali Sina Hospital, Shiraz, Iran
- S Sina: Nuclear Engineering Department, School of Mechanical Engineering, Shiraz University, Shiraz, Iran; Radiation Research Center, Shiraz University, Shiraz, Iran
- M Karimipourfard: Nuclear Engineering Department, School of Mechanical Engineering, Shiraz University, Shiraz, Iran
32
Shi J, Bendig D, Vollmar HC, Rasche P. Mapping the Bibliometrics Landscape of AI in Medicine: Methodological Study. J Med Internet Res 2023; 25:e45815. [PMID: 38064255 PMCID: PMC10746970 DOI: 10.2196/45815] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2023] [Revised: 08/16/2023] [Accepted: 09/30/2023] [Indexed: 12/18/2023] Open
Abstract
BACKGROUND Artificial intelligence (AI), conceived in the 1950s, has permeated numerous industries, intensifying in tandem with advancements in computing power. Despite the widespread adoption of AI, its integration into medicine trails other sectors. However, medical AI research has experienced substantial growth, attracting considerable attention from researchers and practitioners. OBJECTIVE In the absence of an existing framework, this study aims to outline the current landscape of medical AI research and provide insights into its future developments by examining all AI-related studies within PubMed over the past 2 decades. We also propose potential data acquisition and analysis methods, developed using Python (version 3.11) and to be executed in Spyder IDE (version 5.4.3), for future analogous research. METHODS Our dual-pronged approach involved (1) retrieving publication metadata related to AI from PubMed (spanning 2000-2022) via Python, including titles, abstracts, authors, journals, country, and publishing years, followed by keyword frequency analysis, and (2) classifying relevant topics using latent Dirichlet allocation, an unsupervised machine learning approach, and defining the research scope of AI in medicine. In the absence of a universal medical AI taxonomy, we used an AI dictionary based on the European Commission Joint Research Centre AI Watch report, which emphasizes 8 domains: reasoning, planning, learning, perception, communication, integration and interaction, service, and AI ethics and philosophy. RESULTS From 2000 to 2022, a comprehensive analysis of 307,701 AI-related publications from PubMed highlighted a 36-fold increase. The United States emerged as a clear frontrunner, producing 68,502 of these articles. Despite its substantial contribution in terms of volume, China lagged in terms of citation impact. Among the specific AI domains categorized by the Joint Research Centre AI Watch report, the learning domain emerged as dominant. Our classification analysis traced the nuanced research trajectories across each domain, revealing the multifaceted and evolving nature of AI's application in the realm of medicine. CONCLUSIONS The research topics have evolved as the volume of AI studies increases annually. Machine learning remains central to medical AI research, with deep learning expected to maintain its fundamental role. Empowered by predictive algorithms, pattern recognition, and imaging analysis capabilities, the future of AI research in medicine is anticipated to concentrate on medical diagnosis, robotic intervention, and disease management. Our topic modeling outcomes provide a clear insight into the focus of AI research in medicine over the past decades and lay the groundwork for predicting future directions. The domains that have attracted considerable research attention, primarily the learning domain, will continue to shape the trajectory of AI in medicine. Given the observed growing interest, the domain of AI ethics and philosophy also stands out as a prospective area of increased focus.
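The keyword frequency analysis step of this bibliometric pipeline can be illustrated with a small sketch over titles or abstracts; the stopword list and tokenization rule below are illustrative assumptions, not the authors' code:

```python
import re
from collections import Counter

# Illustrative stopword list; a real pipeline would use a fuller one.
STOPWORDS = {"the", "of", "and", "in", "a", "an", "to", "for", "on", "with", "based"}

def keyword_frequencies(texts):
    """Count non-stopword word tokens across a collection of titles/abstracts."""
    counts = Counter()
    for text in texts:
        for token in re.findall(r"[a-z]+", text.lower()):
            if token not in STOPWORDS:
                counts[token] += 1
    return counts
```

Calling `keyword_frequencies(titles).most_common(20)` on the retrieved PubMed metadata would yield the kind of term-frequency ranking that precedes the latent Dirichlet allocation topic modeling step.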
Affiliation(s)
- Jin Shi: Institute for Entrepreneurship, University of Münster, Münster, Germany
- David Bendig: Institute for Entrepreneurship, University of Münster, Münster, Germany
- Peter Rasche: Department of Healthcare, University of Applied Science - Hochschule Niederrhein, Krefeld, Germany
33
Chekmeyan M, Baccei SJ, Garwood ER. Cross-Check QA: A Quality Assurance Workflow to Prevent Missed Diagnoses by Alerting Inadvertent Discordance Between the Radiologist and Artificial Intelligence in the Interpretation of High-Acuity CT Scans. J Am Coll Radiol 2023; 20:1225-1230. [PMID: 37423347 DOI: 10.1016/j.jacr.2023.06.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2023] [Revised: 06/02/2023] [Accepted: 06/09/2023] [Indexed: 07/11/2023]
Abstract
PURPOSE The aim of this study was to implement and evaluate a quality assurance (QA) workflow that leverages natural language processing to rapidly resolve inadvertent discordance between radiologists and an artificial intelligence (AI) decision support system (DSS) in the interpretation of high-acuity CT studies when the radiologist does not engage with AI DSS output. METHODS All consecutive high-acuity adult CT examinations performed in a health system between March 1, 2020, and September 20, 2022, were interpreted alongside an AI DSS (Aidoc) for intracranial hemorrhage, cervical spine fracture, and pulmonary embolus. CT studies were flagged for this QA workflow if they met three criteria: (1) negative results by radiologist report, (2) a high probability of positive results by the AI DSS, and (3) unviewed AI DSS output. In these cases, an automated e-mail notification was sent to our quality team. If discordance was confirmed on secondary review-an initially missed diagnosis-addendum and communication documentation was performed. RESULTS Of 111,674 high-acuity CT examinations interpreted alongside the AI DSS over this 2.5-year time period, the frequency of missed diagnoses (intracranial hemorrhage, pulmonary embolus, and cervical spine fracture) uncovered by this workflow was 0.02% (n = 26). Of 12,412 CT studies prioritized as depicting positive findings by the AI DSS, 0.4% (n = 46) were discordant, unengaged, and flagged for QA. Among these discordant cases, 57% (26 of 46) were determined to be true positives. Addendum and communication documentation was performed within 24 hours of the initial report signing in 85% of these cases. CONCLUSIONS Inadvertent discordance between radiologists and the AI DSS occurred in a small number of cases. This QA workflow leveraged natural language processing to rapidly detect, notify, and resolve these discrepancies and prevent potential missed diagnoses.
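The three flagging criteria of this QA workflow (negative report, high-probability AI positive, unviewed AI output) can be expressed as a simple filter; the field names and probability threshold below are illustrative assumptions, not the deployed system's schema:

```python
def flag_for_qa(studies, prob_threshold=0.9):
    """Return IDs of studies meeting all three QA criteria: the radiologist
    report is negative, the AI assigned a high probability of a positive
    finding, and the AI output was never opened by the reader."""
    return [s["id"] for s in studies
            if not s["report_positive"]
            and s["ai_probability"] >= prob_threshold
            and not s["ai_output_viewed"]]
```

In the published workflow, each flagged study triggers an automated e-mail to the quality team for secondary review rather than an automatic report change, keeping the radiologist in the loop.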
Affiliation(s)
| | - Steven J Baccei
- Professor, Vice-Chair, Quality, Safety, and Process Improvement, and Interim Co-CMO, UMass Memorial Medical Center and Department of Radiology, UMass Chan Medical School, Worcester, Massachusetts
| | - Elisabeth R Garwood
- Assistant Professor and Director of Radiology AI and Clinical Innovation, Department of Radiology, UMass Chan Medical School, Worcester, Massachusetts
34
Li M, Jiang Y, Zhang Y, Zhu H. Medical image analysis using deep learning algorithms. Front Public Health 2023; 11:1273253. [PMID: 38026291 PMCID: PMC10662291 DOI: 10.3389/fpubh.2023.1273253] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2023] [Accepted: 10/05/2023] [Indexed: 12/01/2023] Open
Abstract
In the field of medical image analysis within deep learning (DL), the importance of employing advanced DL techniques cannot be overstated. DL has achieved impressive results in various areas, making it particularly noteworthy for medical image analysis in healthcare. The integration of DL with medical image analysis enables real-time analysis of vast and intricate datasets, yielding insights that significantly enhance healthcare outcomes and operational efficiency in the industry. This extensive review of existing literature conducts a thorough examination of the most recent DL approaches designed to address the difficulties faced in medical healthcare, particularly focusing on the use of DL algorithms in medical image analysis. Grouping all the investigated papers into five categories by technique, we assessed them against several critical parameters. Through a systematic categorization of state-of-the-art DL techniques, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Generative Adversarial Networks (GANs), Long Short-term Memory (LSTM) models, and hybrid models, this study explores their underlying principles, advantages, limitations, methodologies, simulation environments, and datasets. Based on our results, Python was the most frequent programming language used for implementing the proposed methods in the investigated papers. Notably, the majority of the scrutinized papers were published in 2021, underscoring the contemporaneous nature of the research. Moreover, this review accentuates the forefront advancements in DL techniques and their practical applications within the realm of medical image analysis, while simultaneously addressing the challenges that hinder the widespread implementation of DL in image analysis within the medical healthcare domains.
These insights motivate future studies aimed at the progressive advancement of image analysis in medical healthcare research. The evaluation metrics employed across the reviewed articles span a broad spectrum, including accuracy, sensitivity, specificity, F-score, robustness, computational complexity, and generalizability.
Affiliation(s)
- Mengfang Li
- The First Affiliated Hospital of Wenzhou Medical University, Wenzhou, China
| | - Yuanyuan Jiang
- Department of Cardiovascular Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Yanzhou Zhang
- Department of Cardiovascular Medicine, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Haisheng Zhu
- Department of Cardiovascular Medicine, Wencheng People’s Hospital, Wencheng, China
35
Hossain MSA, Gul S, Chowdhury MEH, Khan MS, Sumon MSI, Bhuiyan EH, Khandakar A, Hossain M, Sadique A, Al-Hashimi I, Ayari MA, Mahmud S, Alqahtani A. Deep Learning Framework for Liver Segmentation from T1-Weighted MRI Images. SENSORS (BASEL, SWITZERLAND) 2023; 23:8890. [PMID: 37960589 PMCID: PMC10650219 DOI: 10.3390/s23218890] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/23/2023] [Revised: 08/08/2023] [Accepted: 08/15/2023] [Indexed: 11/15/2023]
Abstract
The human liver exhibits variable characteristics and anatomical information, which is often ambiguous in radiological images. Machine learning can be of great assistance in automatically segmenting the liver in radiological images, which can be further processed for computer-aided diagnosis. Magnetic resonance imaging (MRI) is preferred by clinicians for liver pathology diagnosis over volumetric abdominal computerized tomography (CT) scans due to its superior representation of soft tissues. The convenience of Hounsfield unit (HU) based preprocessing in CT scans is not available in MRI, making automatic segmentation challenging for MR images. This study investigates multiple state-of-the-art segmentation networks for liver segmentation from volumetric MRI images. Here, T1-weighted (in-phase) scans are investigated using expert-labeled liver masks from a public dataset of 20 patients (647 MR slices) from the Combined Healthy Abdominal Organ Segmentation grand challenge (CHAOS). T1-weighted images were chosen because they demonstrate brighter fat content, thus providing enhanced images for the segmentation task. Twenty-four different state-of-the-art segmentation networks with varying depths of dense, residual, and inception encoder and decoder backbones were investigated for the task. A novel cascaded network is proposed to segment axial liver slices. The proposed framework outperforms existing approaches reported in the literature for the liver segmentation task (on the same test set) with a Dice similarity coefficient (DSC) score and intersection over union (IoU) of 95.15% and 92.10%, respectively.
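For reference, the two reported metrics, DSC and IoU, can be computed from binary segmentation masks as follows (a generic sketch of the standard definitions, not the paper's code; assumes non-empty masks):

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, target: np.ndarray):
    """Dice similarity coefficient and intersection over union for two
    binary segmentation masks of the same shape (assumed non-empty)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum())  # 2|A∩B| / (|A| + |B|)
    iou = inter / union                               # |A∩B| / |A∪B|
    return dice, iou
```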
Affiliation(s)
- Md. Sakib Abrar Hossain
- NSU Genome Research Institute (NGRI), North South University, Dhaka 1229, Bangladesh
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
| | - Sidra Gul
- Department of Computer Systems Engineering, University of Engineering and Technology Peshawar, Peshawar 25000, Pakistan
- Artificial Intelligence in Healthcare, IIPL, National Center of Artificial Intelligence, Peshawar 25000, Pakistan
| | | | | | | | - Enamul Haque Bhuiyan
- Center for Magnetic Resonance Research, University of Illinois Chicago, Chicago, IL 60607, USA
| | - Amith Khandakar
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
| | - Maqsud Hossain
- NSU Genome Research Institute (NGRI), North South University, Dhaka 1229, Bangladesh
| | - Abdus Sadique
- NSU Genome Research Institute (NGRI), North South University, Dhaka 1229, Bangladesh
| | | | | | - Sakib Mahmud
- Department of Electrical Engineering, Qatar University, Doha 2713, Qatar
| | - Abdulrahman Alqahtani
- Department of Medical Equipment Technology, College of Applied, Medical Science, Majmaah University, Majmaah City 11952, Saudi Arabia
- Department of Biomedical Technology, College of Applied Medical Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
36
Cui R, Wang L, Lin L, Li J, Lu R, Liu S, Liu B, Gu Y, Zhang H, Shang Q, Chen L, Tian D. Deep Learning in Barrett's Esophagus Diagnosis: Current Status and Future Directions. Bioengineering (Basel) 2023; 10:1239. [PMID: 38002363 PMCID: PMC10669008 DOI: 10.3390/bioengineering10111239] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Revised: 10/13/2023] [Accepted: 10/16/2023] [Indexed: 11/26/2023] Open
Abstract
Barrett's esophagus (BE) represents a pre-malignant condition characterized by abnormal cellular proliferation in the distal esophagus. A timely and accurate diagnosis of BE is imperative to prevent its progression to esophageal adenocarcinoma, a malignancy associated with a significantly reduced survival rate. In this digital age, deep learning (DL) has emerged as a powerful tool for medical image analysis and diagnostic applications, showcasing vast potential across various medical disciplines. In this comprehensive review, we meticulously assess 33 primary studies employing varied DL techniques, predominantly featuring convolutional neural networks (CNNs), for the diagnosis and understanding of BE. Our primary focus revolves around evaluating the current applications of DL in BE diagnosis, encompassing tasks such as image segmentation and classification, as well as their potential impact and implications in real-world clinical settings. While the applications of DL in BE diagnosis exhibit promising results, they are not without challenges, such as dataset issues and the "black box" nature of models. We discuss these challenges in the concluding section. Essentially, while DL holds tremendous potential to revolutionize BE diagnosis, addressing these challenges is paramount to harnessing its full capacity and ensuring its widespread application in clinical practice.
Affiliation(s)
- Ruichen Cui
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
| | - Lei Wang
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
- West China School of Nursing, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
| | - Lin Lin
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
- West China School of Nursing, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
| | - Jie Li
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
- West China School of Nursing, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China
| | - Runda Lu
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
| | - Shixiang Liu
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
| | - Bowei Liu
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
| | - Yimin Gu
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
| | - Hanlu Zhang
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
| | - Qixin Shang
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
| | - Longqi Chen
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
| | - Dong Tian
- Department of Thoracic Surgery, West China Hospital, Sichuan University, 37 Guoxue Alley, Chengdu 610041, China; (R.C.); (L.W.); (L.L.); (J.L.); (R.L.); (S.L.); (B.L.); (Y.G.); (H.Z.); (Q.S.)
37
Sahu A, Das PK, Meher S. Recent advancements in machine learning and deep learning-based breast cancer detection using mammograms. Phys Med 2023; 114:103138. [PMID: 37914431 DOI: 10.1016/j.ejmp.2023.103138] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/11/2023] [Revised: 07/22/2023] [Accepted: 09/14/2023] [Indexed: 11/03/2023] Open
Abstract
OBJECTIVE Mammogram-based automatic breast cancer detection has a primary role in accurate cancer diagnosis and treatment planning to save valuable lives. Mammography is a basic yet efficient test for screening breast cancer. Very few comprehensive surveys have been presented that briefly analyze methods for detecting breast cancer with mammograms. In this article, our objective is to give an overview of recent advancements in machine learning (ML) and deep learning (DL)-based breast cancer detection systems. METHODS We give a structured framework to categorize mammogram-based breast cancer detection techniques. Several publicly available mammogram databases and different performance measures are also mentioned. RESULTS After deliberate investigation, we find that most of the works classify breast tumors either as normal-abnormal or malignant-benign rather than classifying them into three classes. Furthermore, DL-based features are more significant than hand-crafted features. However, transfer learning is preferred over others as it yields better performance in small datasets, unlike classical DL techniques. SIGNIFICANCE AND CONCLUSION In this article, we have attempted to present recent advancements in artificial intelligence (AI)-based breast cancer detection systems. Furthermore, a number of challenging issues and possible research directions are mentioned, which will help researchers explore further scopes of research in this field.
Affiliation(s)
- Adyasha Sahu
- Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, Odisha, 769008, India.
| | - Pradeep Kumar Das
- School of Electronics Engineering (SENSE), VIT Vellore, Tamil Nadu, 632014, India.
| | - Sukadev Meher
- Department of Electronics and Communication Engineering, National Institute of Technology, Rourkela, Odisha, 769008, India.
38
Wang R, Chen LC, Moukheiber L, Seastedt KP, Moukheiber M, Moukheiber D, Zaiman Z, Moukheiber S, Litchman T, Trivedi H, Steinberg R, Gichoya JW, Kuo PC, Celi LA. Enabling chronic obstructive pulmonary disease diagnosis through chest X-rays: A multi-site and multi-modality study. Int J Med Inform 2023; 178:105211. [PMID: 37690225 DOI: 10.1016/j.ijmedinf.2023.105211] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2023] [Revised: 07/23/2023] [Accepted: 09/01/2023] [Indexed: 09/12/2023]
Abstract
PURPOSE Chronic obstructive pulmonary disease (COPD) is one of the most common chronic illnesses in the world. Unfortunately, COPD is difficult to diagnose early, when interventions can still alter the disease course, and it is often underdiagnosed or diagnosed too late for effective treatment. Currently, spirometry is the gold standard for diagnosing COPD, but it can be challenging to obtain, especially in resource-poor countries. Chest X-rays (CXRs), however, are readily available and may have potential as a screening tool to identify patients with COPD who should undergo further testing or intervention. In this study, we used three CXR datasets alongside their respective electronic health records (EHR) to develop and externally validate our models. METHOD To leverage the performance of convolutional neural network models, we proposed two fusion schemes: (1) model-level fusion, using bootstrap aggregating (bagging) to combine the predictions of two models, and (2) data-level fusion, using CXR image data from different institutions, or multi-modal data combining CXR images and EHR data, for model training. Fairness analysis was then performed to evaluate the models across different demographic groups. RESULTS Our results demonstrate that deep learning (DL) models can detect COPD using CXRs with an area under the curve of over 0.75, which could facilitate patient screening for COPD, especially in low-resource regions where CXRs are more accessible than spirometry. CONCLUSIONS By using a ubiquitous test, future research could build on this work to detect COPD early in patients who would not otherwise have been diagnosed or treated, altering the course of this highly morbid disease.
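The model-level fusion scheme, aggregating the outputs of two models in bagging fashion, reduces to averaging predicted probabilities; a minimal illustrative sketch (not the study's code, and the probability values are hypothetical):

```python
import numpy as np

def fuse_predictions(probs_a: np.ndarray, probs_b: np.ndarray) -> np.ndarray:
    """Model-level fusion: average per-image COPD probabilities
    predicted by two independently trained models."""
    return (probs_a + probs_b) / 2.0

# Hypothetical per-image probabilities from two CNNs
p1 = np.array([0.9, 0.2, 0.6])
p2 = np.array([0.7, 0.4, 0.8])
fused = fuse_predictions(p1, p2)  # approximately [0.8, 0.3, 0.7]
```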
Affiliation(s)
- Ryan Wang
- Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan
| | - Li-Ching Chen
- Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan
| | - Lama Moukheiber
- Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Kenneth P Seastedt
- Department of Surgery, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
| | - Mira Moukheiber
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Dana Moukheiber
- Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Zachary Zaiman
- Department of Computer Science, Emory University, Atlanta, GA, USA
| | - Sulaiman Moukheiber
- Department of Computer Science, Worcester Polytechnic Institute, Worcester, MA, USA
| | - Tess Litchman
- Department of Medicine, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
| | - Hari Trivedi
- Department of Radiology, Emory University, Atlanta, GA, USA
| | | | - Judy W Gichoya
- Department of Radiology, Emory University, Atlanta, GA, USA
| | - Po-Chih Kuo
- Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan.
| | - Leo A Celi
- Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA; Division of Pulmonary Critical Care and Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA; Department of Biostatistics, Harvard T.H. Chan School of Public Health, Boston, MA, USA
39
Adler RS. Musculoskeletal ultrasound: a technical and historical perspective. J Ultrason 2023; 23:e172-e187. [PMID: 38020513 PMCID: PMC10668930 DOI: 10.15557/jou.2023.0027] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2023] [Accepted: 08/21/2023] [Indexed: 12/01/2023] Open
Abstract
During the past four decades, musculoskeletal ultrasound has become popular as an imaging modality due to its low cost, accessibility, and lack of ionizing radiation. The development of ultrasound technology was possible in large part due to concomitant advances in both solid-state electronics and signal processing. The invention of the transistor and digital computer in the late 1940s was integral in its development. Moore's prediction that the number of transistors on a chip would grow exponentially, resulting in progressive miniaturization in chip design and therefore increased computational power, added to these capabilities. The development of musculoskeletal ultrasound has paralleled technical advances in diagnostic ultrasound. The appearance of a large variety of transducer capabilities and rapid image processing, along with the ability to assess vascularity and tissue properties, has expanded and continues to expand the role of musculoskeletal ultrasound. It should also be noted that these developments have in large part been due to a number of individuals who had the insight to see the potential applications of this developing technology to a host of relevant clinical musculoskeletal problems. Exquisite high-resolution images of both deep and small superficial musculoskeletal anatomy, assessment of vascularity on a capillary level, and tissue mechanical properties can be obtained. Ultrasound has also been recognized as the method of choice to perform a large variety of interventional procedures. A brief review of these technical developments, the timeline over which these improvements occurred, and the impact on musculoskeletal ultrasound is presented below.
Affiliation(s)
- Ronald Steven Adler
- Department of Radiology, New York University, Grossman School of Medicine, Langone Orthopedic Center, New York, USA
40
Kocak B, Yardimci AH, Yuzkan S, Keles A, Altun O, Bulut E, Bayrak ON, Okumus AA. Transparency in Artificial Intelligence Research: a Systematic Review of Availability Items Related to Open Science in Radiology and Nuclear Medicine. Acad Radiol 2023; 30:2254-2266. [PMID: 36526532 DOI: 10.1016/j.acra.2022.11.030] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2022] [Revised: 11/21/2022] [Accepted: 11/22/2022] [Indexed: 12/15/2022]
Abstract
RATIONALE AND OBJECTIVES Reproducibility of artificial intelligence (AI) research has become a growing concern. One of the fundamental reasons is the lack of transparency in data, code, and model. In this work, we aimed to systematically review the radiology and nuclear medicine papers on AI in terms of transparency and open science. MATERIALS AND METHODS A systematic literature search was performed in PubMed to identify original research studies on AI. The search was restricted to studies published in Q1 and Q2 journals that are also indexed on the Web of Science. A random sampling of the literature was performed. Besides six baseline study characteristics, a total of five availability items were evaluated. Two groups of independent readers, eight readers in total, participated in the study. Inter-rater agreement was analyzed. Disagreements were resolved with consensus. RESULTS Following eligibility criteria, we included a final set of 194 papers. The raw data were available in about one-fifth of the papers (34/194; 18%). However, the authors made their private data available only in one paper (1/161; 1%). About one-tenth of the papers made their pre-modeling (25/194; 13%), modeling (28/194; 14%), or post-modeling files (15/194; 8%) available. Most of the papers (189/194; 97%) did not attempt to create a ready-to-use system for real-world usage. Data origin, use of deep learning, and external validation had statistically significantly different distributions. The use of private data alone was negatively associated with the availability of at least one item (p<0.001). CONCLUSION Overall rates of availability for items were poor, leaving room for substantial improvement.
Affiliation(s)
- Burak Kocak
- Department of Radiology, University of Health Sciences, Basaksehir Cam and Sakura City Hospital, Basaksehir, 34480, Istanbul, Turkey.
| | - Aytul Hande Yardimci
- Department of Radiology, University of Health Sciences, Basaksehir Cam and Sakura City Hospital, Basaksehir, 34480, Istanbul, Turkey
| | - Sabahattin Yuzkan
- Department of Radiology, University of Health Sciences, Basaksehir Cam and Sakura City Hospital, Basaksehir, 34480, Istanbul, Turkey
| | - Ali Keles
- Department of Radiology, University of Health Sciences, Basaksehir Cam and Sakura City Hospital, Basaksehir, 34480, Istanbul, Turkey
| | - Omer Altun
- Department of Radiology, University of Health Sciences, Basaksehir Cam and Sakura City Hospital, Basaksehir, 34480, Istanbul, Turkey
| | - Elif Bulut
- Department of Radiology, University of Health Sciences, Basaksehir Cam and Sakura City Hospital, Basaksehir, 34480, Istanbul, Turkey
| | - Osman Nuri Bayrak
- Department of Radiology, University of Health Sciences, Basaksehir Cam and Sakura City Hospital, Basaksehir, 34480, Istanbul, Turkey
| | - Ahmet Arda Okumus
- Department of Radiology, University of Health Sciences, Basaksehir Cam and Sakura City Hospital, Basaksehir, 34480, Istanbul, Turkey
41
Mandal PK, Mahto RV. Deep Multi-Branch CNN Architecture for Early Alzheimer's Detection from Brain MRIs. SENSORS (BASEL, SWITZERLAND) 2023; 23:8192. [PMID: 37837027 PMCID: PMC10574860 DOI: 10.3390/s23198192] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/12/2023] [Revised: 09/25/2023] [Accepted: 09/26/2023] [Indexed: 10/15/2023]
Abstract
Alzheimer's disease (AD) is a neurodegenerative disease that can cause dementia and result in a severe reduction in brain function, inhibiting simple tasks, especially if no preventative care is taken. Over 1 in 9 Americans suffer from AD-induced dementia, and unpaid care for people with AD-related dementia is valued at USD 271.6 billion. Hence, various approaches have been developed for early AD diagnosis to prevent its further progression. In this paper, we first review other approaches that could be used for the early detection of AD. We then give an overview of our dataset and propose a deep convolutional neural network (CNN) architecture consisting of 7,866,819 parameters. This model comprises three convolutional branches of different lengths, each using different kernel sizes. This model can predict whether a patient is non-demented, mildly demented, or moderately demented with a 99.05% three-class accuracy. In summary, the deep CNN model demonstrated exceptional accuracy in the early diagnosis of AD, offering a significant advancement in the field and the potential to improve patient care.
Affiliation(s)
- Paul K. Mandal
- Department of Computer Science, University of Texas, Austin, TX 78712, USA
| | - Rakeshkumar V. Mahto
- Department of Electrical and Computer Engineering, California State University, Fullerton, CA 92831, USA;
42
Li Y, Dong B, Yuan P. The diagnostic value of machine learning for the classification of malignant bone tumor: a systematic evaluation and meta-analysis. Front Oncol 2023; 13:1207175. [PMID: 37746301 PMCID: PMC10513372 DOI: 10.3389/fonc.2023.1207175] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2023] [Accepted: 08/23/2023] [Indexed: 09/26/2023] Open
Abstract
Background Malignant bone tumors are a type of cancer with varying malignancy and prognosis. Accurate diagnosis and classification are crucial for treatment and prognosis assessment. Machine learning has been introduced for early differential diagnosis of malignant bone tumors, but its performance is controversial. This systematic review and meta-analysis aims to explore the diagnostic value of machine learning for malignant bone tumors. Methods PubMed, Embase, Cochrane Library, and Web of Science were searched for literature on machine learning in the differential diagnosis of malignant bone tumors up to October 31, 2022. The risk of bias assessment was conducted using QUADAS-2. A bivariate mixed-effects model was used for meta-analysis, with subgroup analyses by machine learning methods and modeling approaches. Results Thirty-one publications with 382,371 patients were included, 141,315 of whom had malignant bone tumors. Meta-analysis results showed machine learning sensitivity and specificity of 0.87 [95% CI: 0.81,0.91] and 0.91 [95% CI: 0.86,0.94] in the training set, and 0.83 [95% CI: 0.74,0.89] and 0.87 [95% CI: 0.79,0.92] in the validation set. Subgroup analysis revealed MRI-based radiomics was the most common approach, with sensitivity and specificity of 0.85 [95% CI: 0.74,0.91] and 0.87 [95% CI: 0.81,0.91] in the training set, and 0.79 [95% CI: 0.70,0.86] and 0.79 [95% CI: 0.70,0.86] in the validation set. Convolutional neural networks were the most common model type, with sensitivity and specificity of 0.86 [95% CI: 0.72,0.94] and 0.92 [95% CI: 0.82,0.97] in the training set, and 0.87 [95% CI: 0.51,0.98] and 0.87 [95% CI: 0.69,0.96] in the validation set. Conclusion Machine learning is mainly applied in radiomics for diagnosing malignant bone tumors, showing desirable diagnostic performance.
Machine learning can be an early adjunctive diagnostic method but requires further research and validation to determine its practical efficiency and clinical application prospects. Systematic review registration https://www.crd.york.ac.uk/prospero/, identifier CRD42023387057.
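The pooled sensitivity and specificity values quoted in this abstract follow the standard confusion-matrix definitions, sketched below with hypothetical counts:

```python
def sensitivity_specificity(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return sens, spec

# Hypothetical validation-set counts on the scale of the pooled estimates
sens, spec = sensitivity_specificity(tp=83, fp=13, fn=17, tn=87)
# sens = 0.83, spec = 0.87
```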
Affiliation(s)
| | - Bo Dong
- Department of Orthopedics, Xi’an Honghui Hospital, Xi’an Jiaotong University, Xi’an Shaanxi, China
43
Hajianfar G, Kalayinia S, Hosseinzadeh M, Samanian S, Maleki M, Sossi V, Rahmim A, Salmanpour MR. Prediction of Parkinson's disease pathogenic variants using hybrid Machine learning systems and radiomic features. Phys Med 2023; 113:102647. [PMID: 37579523 DOI: 10.1016/j.ejmp.2023.102647] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Revised: 05/08/2023] [Accepted: 07/29/2023] [Indexed: 08/16/2023] Open
Abstract
PURPOSE In Parkinson's disease (PD), 5-10% of cases are of genetic origin, with mutations identified in several genes such as leucine-rich repeat kinase 2 (LRRK2) and glucocerebrosidase (GBA). We aim to predict these two gene mutations using hybrid machine learning systems (HMLS), via imaging and non-imaging data, with the long-term goal of predicting conversion to active disease. METHODS We studied 264 and 129 patients with known LRRK2 and GBA mutation status from the PPMI database. Each dataset includes 513 features, such as clinical features (CFs), conventional imaging features (CIFs), and radiomic features (RFs) extracted from DAT-SPECT images. Features, normalized by Z-score, were univariately analyzed for statistical significance by the t-test and chi-square test, adjusted by Benjamini-Hochberg correction. Multiple HMLSs, including 11 feature extraction (FEA) or 10 feature selection algorithms (FSA) linked with 21 classifiers, were utilized. We also employed Ensemble Voting (EV) to classify the genes. RESULTS For prediction of LRRK2 mutation status, a number of HMLSs resulted in accuracies of 0.98 ± 0.02 and 1.00 in 5-fold cross-validation (80% of the total data points) and external testing (the remaining 20%), respectively. For predicting GBA mutation status, multiple HMLSs resulted in high accuracies of 0.90 ± 0.08 and 0.96 in 5-fold cross-validation and external testing, respectively. We additionally showed that SPECT-based RFs added value to the prediction of GBA mutation status. CONCLUSION We demonstrated that combining medical information with SPECT-based imaging features, and optimal utilization of HMLS, can produce excellent prediction of mutation status in PD patients.
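The preprocessing described here, Z-score normalization followed by univariate testing with Benjamini-Hochberg correction, can be sketched generically as follows (an illustration of the standard procedures, not the authors' code):

```python
import numpy as np

def z_score(x: np.ndarray) -> np.ndarray:
    """Normalize each feature column to zero mean and unit variance."""
    return (x - x.mean(axis=0)) / x.std(axis=0)

def benjamini_hochberg(p_values, alpha=0.05):
    """Boolean mask of features significant after BH false-discovery control:
    reject the k smallest p-values, where k is the largest rank with
    p_(k) <= alpha * k / m."""
    p = np.asarray(p_values)
    m = len(p)
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    passed = p[order] <= thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask
```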
Affiliation(s)
- Ghasem Hajianfar
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran; Technological Virtual Collaboration (TECVICO Corp.), Vancouver, BC, Canada
- Samira Kalayinia
- Cardiogenetic Research Center, Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Mahdi Hosseinzadeh
- Technological Virtual Collaboration (TECVICO Corp.), Vancouver, BC, Canada; Department of Electrical and Computer Engineering, Tarbiat Modares University, Tehran, Iran
- Sara Samanian
- Firoozgar Hospital Medical Genetics Laboratory, Iran University of Medical Sciences, Tehran, Iran
- Majid Maleki
- Rajaie Cardiovascular Medical and Research Center, Iran University of Medical Sciences, Tehran, Iran
- Vesna Sossi
- Department of Physics and Astronomy, University of British Columbia, Vancouver, BC, Canada
- Arman Rahmim
- Department of Physics and Astronomy, University of British Columbia, Vancouver, BC, Canada; Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada
- Mohammad R Salmanpour
- Technological Virtual Collaboration (TECVICO Corp.), Vancouver, BC, Canada; Department of Integrative Oncology, BC Cancer Research Institute, Vancouver, BC, Canada.
44
Najjar R. Redefining Radiology: A Review of Artificial Intelligence Integration in Medical Imaging. Diagnostics (Basel) 2023; 13:2760. [PMID: 37685300 PMCID: PMC10487271 DOI: 10.3390/diagnostics13172760] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2023] [Revised: 08/01/2023] [Accepted: 08/10/2023] [Indexed: 09/10/2023] Open
Abstract
This comprehensive review unfolds a detailed narrative of Artificial Intelligence (AI) making its foray into radiology, a move that is catalysing transformational shifts in the healthcare landscape. It traces the evolution of radiology, from the initial discovery of X-rays to the application of machine learning and deep learning in modern medical image analysis. The primary focus of this review is to shed light on AI applications in radiology, elucidating their seminal roles in image segmentation, computer-aided diagnosis, predictive analytics, and workflow optimisation. A spotlight is cast on the profound impact of AI on diagnostic processes, personalised medicine, and clinical workflows, with empirical evidence derived from a series of case studies across multiple medical disciplines. However, the integration of AI in radiology is not devoid of challenges. The review ventures into the labyrinth of obstacles inherent to AI-driven radiology: data quality, the 'black box' enigma, infrastructural and technical complexities, as well as ethical implications. Peering into the future, the review contends that the road ahead for AI in radiology is paved with promising opportunities. It advocates for continuous research, embracing avant-garde imaging technologies, and fostering robust collaborations between radiologists and AI developers. The conclusion underlines the role of AI as a catalyst for change in radiology, a stance that is firmly rooted in sustained innovation, dynamic partnerships, and a steadfast commitment to ethical responsibility.
Affiliation(s)
- Reabal Najjar
- Canberra Health Services, Australian Capital Territory 2605, Australia
45
Reddaway J, Richardson PE, Bevan RJ, Stoneman J, Palombo M. Microglial morphometric analysis: so many options, so little consistency. Front Neuroinform 2023; 17:1211188. [PMID: 37637472 PMCID: PMC10448193 DOI: 10.3389/fninf.2023.1211188] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2023] [Accepted: 07/05/2023] [Indexed: 08/29/2023] Open
Abstract
Quantification of microglial activation through morphometric analysis has long been a staple of the neuroimmunologist's toolkit. Microglial morphological phenomics can be conducted through either manual classification or constructing a digital skeleton and extracting morphometric data from it. Multiple open-access and paid software packages are available to generate these skeletons via semi-automated and/or fully automated methods with varying degrees of accuracy. Despite advancements in methods to generate morphometrics (quantitative measures of cellular morphology), there has been limited development of tools to analyze the datasets they generate, in particular those containing parameters from tens of thousands of cells analyzed by fully automated pipelines. In this review, we compare and critique the approaches using cluster analysis and machine learning-driven predictive algorithms that have been developed to tackle these large datasets, and propose improvements for these methods. In particular, we highlight the need for a commitment to open science from groups developing these classifiers. Furthermore, we call attention to a need for communication between those with a strong software engineering/computer science background and neuroimmunologists to produce effective analytical tools with simplified operability if we are to see their widespread adoption by the glia biology community.
Affiliation(s)
- Jack Reddaway
- Division of Neuroscience, School of Biosciences, Cardiff University, Cardiff, United Kingdom
- Hodge Centre for Neuropsychiatric Immunology, Neuroscience and Mental Health Innovation Institute (NMHII), Cardiff University, Cardiff, United Kingdom
- Ryan J. Bevan
- UK Dementia Research Institute, Cardiff University, Cardiff, United Kingdom
- Jessica Stoneman
- Division of Neuroscience, School of Biosciences, Cardiff University, Cardiff, United Kingdom
- Marco Palombo
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff, United Kingdom
- School of Computer Science and Informatics, Cardiff University, Cardiff, United Kingdom
46
Ramesh KK, Xu KM, Trivedi AG, Huang V, Sharghi VK, Kleinberg LR, Mellon EA, Shu HKG, Shim H, Weinberg BD. A Fully Automated Post-Surgical Brain Tumor Segmentation Model for Radiation Treatment Planning and Longitudinal Tracking. Cancers (Basel) 2023; 15:3956. [PMID: 37568773 PMCID: PMC10417353 DOI: 10.3390/cancers15153956] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2023] [Revised: 07/26/2023] [Accepted: 08/02/2023] [Indexed: 08/13/2023] Open
Abstract
Glioblastoma (GBM) has a poor survival rate even with aggressive surgery, concomitant radiation therapy (RT), and adjuvant chemotherapy. Standard-of-care RT involves irradiating a lower dose to the hyperintense lesion on T2-weighted fluid-attenuated inversion recovery MRI (T2w/FLAIR) and a higher dose to the enhancing tumor on contrast-enhanced, T1-weighted MRI (CE-T1w). While there have been several attempts to segment pre-surgical brain tumors, there have been minimal efforts to segment post-surgical tumors, which are complicated by a resection cavity and postoperative blood products; tools are therefore needed to assist physicians in generating treatment contours and assessing treated patients at follow-up. This report is one of the first to train and test multiple deep learning models for post-surgical brain tumor segmentation for RT planning and longitudinal tracking. Post-surgical FLAIR and CE-T1w MRIs, as well as their corresponding RT targets (GTV1 and GTV2, respectively), from 225 GBM patients treated with standard RT were used to train multiple deep learning models, including Unet, ResUnet, Swin-Unet, 3D Unet, and Swin-UNETR. These models were tested on an independent dataset of 30 GBM patients, with the Dice metric used to evaluate segmentation accuracy. Finally, the best-performing segmentation model was integrated into our longitudinal tracking web application to assign automated structured reporting scores using percent-change cutoffs of lesion volume. The 3D Unet was our best-performing model, with mean Dice scores of 0.72 for GTV1 and 0.73 for GTV2 (standard deviation 0.17 for both) in the test dataset. We have successfully developed a lightweight post-surgical segmentation model for RT planning and longitudinal tracking.
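The Dice metric used above to score the GTV segmentations is straightforward to compute; a minimal sketch on flat binary masks (real pipelines operate on 3D volumes, but the formula is the same):

```python
# Dice similarity coefficient: 2*|A ∩ B| / (|A| + |B|), the overlap
# metric used to evaluate segmentation accuracy. Masks here are flat
# lists of 0/1 values for illustration.

def dice(mask_a, mask_b):
    """Dice score of two equal-length binary masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks overlap perfectly.
    return 2.0 * inter / total if total else 1.0

pred  = [1, 1, 1, 0, 0, 0]  # predicted target volume
truth = [0, 1, 1, 1, 0, 0]  # physician-drawn contour
print(dice(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```

A score of 1.0 means perfect overlap with the physician's contour; the 0.72-0.73 means reported above indicate substantial but imperfect agreement.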
Affiliation(s)
- Karthik K. Ramesh
- Department of Radiation Oncology, Emory University School of Medicine, Atlanta, GA 30322, USA; (K.K.R.); (A.G.T.); (V.H.)
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA 30332, USA
- Karen M. Xu
- Department of Radiation Oncology, Emory University School of Medicine, Atlanta, GA 30322, USA; (K.K.R.); (A.G.T.); (V.H.)
- Anuradha G. Trivedi
- Department of Radiation Oncology, Emory University School of Medicine, Atlanta, GA 30322, USA; (K.K.R.); (A.G.T.); (V.H.)
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA 30332, USA
- Vicki Huang
- Department of Radiation Oncology, Emory University School of Medicine, Atlanta, GA 30322, USA; (K.K.R.); (A.G.T.); (V.H.)
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA 30332, USA
- Lawrence R. Kleinberg
- Department of Radiation Oncology, Johns Hopkins University, Baltimore, MD 21218, USA
- Eric A. Mellon
- Department of Radiation Oncology, Sylvester Comprehensive Cancer Center, Miller School of Medicine, University of Miami, Miami, FL 45056, USA
- Hui-Kuo G. Shu
- Department of Radiation Oncology, Emory University School of Medicine, Atlanta, GA 30322, USA; (K.K.R.); (A.G.T.); (V.H.)
- Winship Cancer Institute, Emory University School of Medicine, Atlanta, GA 30322, USA
- Hyunsuk Shim
- Department of Radiation Oncology, Emory University School of Medicine, Atlanta, GA 30322, USA; (K.K.R.); (A.G.T.); (V.H.)
- Department of Biomedical Engineering, Emory University and Georgia Institute of Technology, Atlanta, GA 30332, USA
- Winship Cancer Institute, Emory University School of Medicine, Atlanta, GA 30322, USA
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30322, USA
- Brent D. Weinberg
- Winship Cancer Institute, Emory University School of Medicine, Atlanta, GA 30322, USA
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA 30322, USA
47
Shishido T, Ono Y, Kumazawa I, Iwai I, Suziki K. Artificial intelligence model substantially improves stratum corneum moisture content prediction from visible-light skin images and skin feature factors. Skin Res Technol 2023; 29:e13414. [PMID: 37632180 PMCID: PMC10363786 DOI: 10.1111/srt.13414] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2023] [Accepted: 07/05/2023] [Indexed: 08/27/2023]
Abstract
BACKGROUND Appropriate skin treatment and care warrant an accurate prediction of skin moisture. However, current diagnostic tools are costly and time-consuming. Stratum corneum moisture content has been measured with moisture content meters or from a near-infrared image. OBJECTIVE Here, we establish an artificial intelligence (AI) alternative to conventional skin moisture content measurements. METHODS Skin feature factors positively or negatively correlated with skin moisture content were created and selected using the PolynomialFeatures(3) transformer of scikit-learn. Then, an integrated AI model using, as inputs, a visible-light skin image and the skin feature factors was trained with 914 skin images, the corresponding skin feature factors, and the corresponding skin moisture contents. RESULTS A regression-type AI model using only a visible-light skin image did not achieve sufficient accuracy. To improve the prediction of skin moisture content, we searched for new features through feature engineering ("creation of new factors") correlated with the moisture content from various combinations of the existing skin features, and found that factors created by combining the brown spot count, the pore count, and/or the visually assessed skin roughness give significant correlation coefficients. An integrated AI deep-learning model using a visible-light skin image and these factors then yielded significantly improved skin moisture content prediction. CONCLUSION Skin moisture content interacts with the brown spot count, the pore count, and/or the visually assessed skin roughness, so that better inference of stratum corneum moisture content can be provided using a common visible-light skin photograph and skin feature factors.
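scikit-learn's PolynomialFeatures(3), cited above, expands the input features into all monomials up to degree 3, which is where the combined factors come from. A stdlib sketch of that expansion (the two input values are hypothetical, not the study's actual features):

```python
# Hand-rolled equivalent of the monomial expansion performed by
# sklearn.preprocessing.PolynomialFeatures(degree=3), without the bias
# column. Illustrates how candidate "skin feature factors" are created
# by multiplying existing features together.
from itertools import combinations_with_replacement
from math import prod

def polynomial_features(values, degree=3):
    """All products of the inputs taken 1..degree at a time (no bias term)."""
    feats = []
    for d in range(1, degree + 1):
        for combo in combinations_with_replacement(range(len(values)), d):
            feats.append(prod(values[i] for i in combo))
    return feats

# e.g. brown-spot count and pore count as two hypothetical inputs:
x = [2, 3]
print(polynomial_features(x))  # → [2, 3, 4, 6, 9, 8, 12, 18, 27]
```

For two inputs this yields x1, x2, x1², x1x2, x2², x1³, x1²x2, x1x2², x2³; each product is a candidate factor whose correlation with moisture content can then be tested.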
Affiliation(s)
- Tomoyuki Shishido
- Department of Information and Communications Engineering, Biomedical AI Research Unit, Tokyo Institute of Technology, Tokyo, Japan
- Shishido & Associates, Tokyo, Japan
- Itsuo Kumazawa
- Department of Information and Communications Engineering, Laboratory for Future Interdisciplinary Research of Science and Technology, Tokyo Institute of Technology, Tokyo, Japan
- Ichiro Iwai
- Saticine Medical Research Institute, Tokyo, Japan
- Kenji Suziki
- Department of Information and Communications Engineering, Biomedical AI Research Unit, Tokyo Institute of Technology, Tokyo, Japan
48
Jeyaraman M, Balaji S, Jeyaraman N, Yadav S. Unraveling the Ethical Enigma: Artificial Intelligence in Healthcare. Cureus 2023; 15:e43262. [PMID: 37692617 PMCID: PMC10492220 DOI: 10.7759/cureus.43262] [Citation(s) in RCA: 11] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/10/2023] [Indexed: 09/12/2023] Open
Abstract
The integration of artificial intelligence (AI) into healthcare promises groundbreaking advancements in patient care, revolutionizing clinical diagnosis, predictive medicine, and decision-making. This transformative technology uses machine learning, natural language processing, and large language models (LLMs) to process and reason like human intelligence. OpenAI's ChatGPT, a sophisticated LLM, holds immense potential in medical practice, research, and education. However, as AI in healthcare gains momentum, it brings forth profound ethical challenges that demand careful consideration. This comprehensive review explores key ethical concerns in the domain, including privacy, transparency, trust, responsibility, bias, and data quality. Protecting patient privacy in data-driven healthcare is crucial, with potential implications for psychological well-being and data sharing. Strategies like homomorphic encryption (HE) and secure multiparty computation (SMPC) are vital to preserving confidentiality. Transparency and trustworthiness of AI systems are essential, particularly in high-risk decision-making scenarios. Explainable AI (XAI) emerges as a critical aspect, ensuring a clear understanding of AI-generated predictions. Cybersecurity becomes a pressing concern as AI's complexity creates vulnerabilities for potential breaches. Determining responsibility in AI-driven outcomes raises important questions, with debates on AI's moral agency and human accountability. Shifting from data ownership to data stewardship enables responsible data management in compliance with regulations. Addressing bias in healthcare data is crucial to avoid AI-driven inequities. Biases present in data collection and algorithm development can perpetuate healthcare disparities. A public-health approach is advocated to address inequalities and promote diversity in AI research and the workforce. 
Maintaining data quality is imperative in AI applications, with convolutional neural networks showing promise in multi-input/mixed data models, offering a comprehensive patient perspective. In this ever-evolving landscape, it is imperative to adopt a multidimensional approach involving policymakers, developers, healthcare practitioners, and patients to mitigate ethical concerns. By understanding and addressing these challenges, we can harness the full potential of AI in healthcare while ensuring ethical and equitable outcomes.
Affiliation(s)
- Madhan Jeyaraman
- Orthopedics, ACS Medical College and Hospital, Dr. MGR Educational and Research Institute, Chennai, IND
- Sangeetha Balaji
- Orthopedics, Government Medical College, Omandurar Government Estate, Chennai, IND
- Naveen Jeyaraman
- Orthopedics, ACS Medical College and Hospital, Dr. MGR Educational and Research Institute, Chennai, IND
- Sankalp Yadav
- Medicine, Shri Madan Lal Khurana Chest Clinic, New Delhi, IND
49
Xiong S, Dong W, Deng Z, Jiang M, Li S, Hu B, Liu X, Chen L, Xu S, Fan B, Fu B. Value of the application of computed tomography-based radiomics for preoperative prediction of unfavorable pathology in initial bladder cancer. Cancer Med 2023; 12:15868-15880. [PMID: 37434436 PMCID: PMC10469743 DOI: 10.1002/cam4.6225] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2023] [Revised: 05/15/2023] [Accepted: 06/01/2023] [Indexed: 07/13/2023] Open
Abstract
OBJECTIVES To construct and validate unfavorable pathology (UFP) prediction models for patients with a first diagnosis of bladder cancer (initial BLCA) and to compare the comprehensive predictive performance of these models. MATERIALS AND METHODS A total of 105 patients with initial BLCA were included and randomly enrolled into the training and testing cohorts in a 7:3 ratio. The clinical model was constructed using independent UFP risk factors determined by multivariate logistic regression (LR) analysis in the training cohort. Radiomics features were extracted from manually segmented regions of interest in computed tomography (CT) images. The optimal CT-based radiomics features for predicting UFP were determined by the optimal feature filter and the least absolute shrinkage and selection operator (LASSO) algorithm. The radiomics model, consisting of the optimal features, was constructed with the best-performing of six machine learning classifiers. The clinic-radiomics model combined the clinical and radiomics models via LR. The area under the curve (AUC), accuracy, sensitivity, specificity, positive and negative predictive values, calibration curves and decision curve analysis were used to evaluate the predictive performance of the models. RESULTS Patients in the UFP group had a significantly older age (69.61 vs. 63.93 years, p = 0.034), larger tumor size (45.7% vs. 11.1%, p = 0.002) and higher neutrophil-to-lymphocyte ratio (NLR; 2.76 vs. 2.33, p = 0.017) than the favorable pathology group in the training cohort. Tumor size (OR, 6.02; 95% CI, 1.50-24.10; p = 0.011) and NLR (OR, 1.50; 95% CI, 1.05-2.16; p = 0.026) were identified as independent predictive factors for UFP, and the clinical model was constructed using these factors. The LR classifier with the best AUC (0.817 in the testing cohort) was used to construct the radiomics model based on the optimal radiomics features. Finally, the clinic-radiomics model was developed by combining the clinical and radiomics models using LR.
After comparison, the clinic-radiomics model had the best comprehensive predictive efficacy (accuracy = 0.750, AUC = 0.817 in the testing cohort) and clinical net benefit among the UFP-prediction models, while the clinical model (accuracy = 0.625, AUC = 0.742 in the testing cohort) was the worst. CONCLUSION Our study demonstrates that the clinic-radiomics model exhibits the best predictive efficacy and clinical net benefit for predicting UFP in initial BLCA compared with the clinical and radiomics models. The integration of radiomics features significantly improves the comprehensive performance of the clinical model.
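The reported odds ratios can be read back as logistic-regression coefficients via beta = ln(OR). A sketch of how such a clinical model would score a patient; the intercept here is a made-up placeholder (the paper does not report one), so the printed probability is illustrative only:

```python
# Scoring a patient with a two-factor logistic model built from the
# reported odds ratios (tumor size OR 6.02, NLR OR 1.50). The intercept
# is a hypothetical value for illustration, not from the study.
import math

def predict_ufp_probability(large_tumor, nlr, intercept=-2.0):
    """Probability of unfavorable pathology from a 0/1 tumor-size flag and NLR."""
    beta_size = math.log(6.02)  # coefficient implied by the tumor-size OR
    beta_nlr = math.log(1.50)   # coefficient implied by the per-unit NLR OR
    logit = intercept + beta_size * large_tumor + beta_nlr * nlr
    return 1.0 / (1.0 + math.exp(-logit))

# A hypothetical patient with a large tumor and the UFP-group mean NLR of 2.76:
print(round(predict_ufp_probability(1, 2.76), 3))
```

The point of the sketch is the OR-to-coefficient relationship: a large tumor multiplies the odds of UFP by 6.02, and each unit of NLR multiplies them by 1.50, regardless of the intercept chosen.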
Affiliation(s)
- Situ Xiong
- Department of Urology, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Jiangxi Institute of Urology, Nanchang, China
- Wentao Dong
- Department of Radiology, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, China
- Zhikang Deng
- Department of Nuclear Medicine, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, China
- Ming Jiang
- Department of Urology, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Jiangxi Institute of Urology, Nanchang, China
- Sheng Li
- Department of Urology, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Jiangxi Institute of Urology, Nanchang, China
- Bing Hu
- Department of Urology, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Jiangxi Institute of Urology, Nanchang, China
- Xiaoqiang Liu
- Department of Urology, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Jiangxi Institute of Urology, Nanchang, China
- Luyao Chen
- Department of Urology, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Jiangxi Institute of Urology, Nanchang, China
- Songhui Xu
- Department of Urology, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Jiangxi Institute of Urology, Nanchang, China
- Bing Fan
- Department of Radiology, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, China
- Bin Fu
- Department of Urology, The First Affiliated Hospital of Nanchang University, Nanchang, China
- Jiangxi Institute of Urology, Nanchang, China
50
Yu KL, Tseng YS, Yang HC, Liu CJ, Kuo PC, Lee MR, Huang CT, Kuo LC, Wang JY, Ho CC, Shih JY, Yu CJ. Deep learning with test-time augmentation for radial endobronchial ultrasound image differentiation: a multicentre verification study. BMJ Open Respir Res 2023; 10:e001602. [PMID: 37532473 PMCID: PMC10401203 DOI: 10.1136/bmjresp-2022-001602] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2022] [Accepted: 06/23/2023] [Indexed: 08/04/2023] Open
Abstract
PURPOSE Despite the importance of radial endobronchial ultrasound (rEBUS) in transbronchial biopsy, researchers have yet to apply artificial intelligence to the analysis of rEBUS images. MATERIALS AND METHODS This study developed a convolutional neural network (CNN) to differentiate between malignant and benign tumours in rEBUS images. rEBUS images were retrospectively collected from medical centres in Taiwan: 769 images from National Taiwan University Hospital Hsin-Chu Branch, Hsinchu Hospital, for model training (615 images) and internal validation (154 images), as well as 300 images from National Taiwan University Hospital (NTUH-TPE) and 92 images from National Taiwan University Hospital Hsin-Chu Branch, Biomedical Park Hospital (NTUH-BIO), for external validation. Further assessments of the model were performed using image augmentation in the training phase and test-time augmentation (TTA). RESULTS Using the internal validation dataset, the results were as follows: area under the curve (AUC) (0.88 (95% CI 0.83 to 0.92)), sensitivity (0.80 (95% CI 0.73 to 0.88)), specificity (0.75 (95% CI 0.66 to 0.83)). Using the NTUH-TPE external validation dataset, the results were as follows: AUC (0.76 (95% CI 0.71 to 0.80)), sensitivity (0.58 (95% CI 0.50 to 0.65)), specificity (0.92 (95% CI 0.88 to 0.97)). Using the NTUH-BIO external validation dataset, the results were as follows: AUC (0.72 (95% CI 0.64 to 0.82)), sensitivity (0.71 (95% CI 0.55 to 0.86)), specificity (0.76 (95% CI 0.64 to 0.87)). After fine-tuning, the AUC values for the external validation cohorts were as follows: NTUH-TPE (0.78) and NTUH-BIO (0.82). Our findings also demonstrated the feasibility of the model in differentiating between lung cancer subtypes, as indicated by the following AUC values: adenocarcinoma (0.70; 95% CI 0.64 to 0.76), squamous cell carcinoma (0.64; 95% CI 0.54 to 0.74) and small cell lung cancer (0.52; 95% CI 0.32 to 0.72).
CONCLUSIONS Our results demonstrate the feasibility of the proposed CNN-based algorithm in differentiating between malignant and benign lesions in rEBUS images.
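Test-time augmentation, as evaluated above, averages a model's predictions over several augmented copies of the input. A toy sketch with horizontal flip as the only augmentation and a stand-in scoring function (not the paper's CNN):

```python
# Test-time augmentation (TTA): run the model on the original image and
# on augmented copies, then average the scores. The "model" below is a
# stand-in (mean pixel intensity) purely to make the sketch runnable.

def flip_horizontal(image):
    """Mirror a 2D image (list of rows) left-to-right."""
    return [row[::-1] for row in image]

def tta_predict(model, image, augmentations):
    """Average the model's score over the identity plus each augmentation."""
    views = [image] + [aug(image) for aug in augmentations]
    scores = [model(v) for v in views]
    return sum(scores) / len(scores)

# Stand-in model: mean pixel intensity as a fake malignancy score.
toy_model = lambda img: sum(map(sum, img)) / (len(img) * len(img[0]))

image = [[1, 9], [4, 6]]
print(tta_predict(toy_model, image, [flip_horizontal]))  # → 5.0
```

In practice the augmentation list would include the flips, rotations, and intensity shifts used during training, and averaging over them tends to smooth out prediction variance on borderline lesions.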
Affiliation(s)
- Kai-Lun Yu
- Department of Internal Medicine, National Taiwan University Hospital Hsin-Chu Branch, Hsinchu, Taiwan
- Graduate Institute of Clinical Medicine, National Taiwan University College of Medicine, Taipei, Taiwan
- Yi-Shiuan Tseng
- Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan
- Han-Ching Yang
- Department of Internal Medicine, National Taiwan University Hospital Hsin-Chu Branch, Hsinchu, Taiwan
- Chia-Jung Liu
- Department of Internal Medicine, National Taiwan University Hospital Hsin-Chu Branch, Hsinchu, Taiwan
- Po-Chih Kuo
- Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan
- Meng-Rui Lee
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Chun-Ta Huang
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Lu-Cheng Kuo
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Jann-Yuan Wang
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Chao-Chi Ho
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Jin-Yuan Shih
- Graduate Institute of Clinical Medicine, National Taiwan University College of Medicine, Taipei, Taiwan
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan
- Chong-Jen Yu
- Department of Internal Medicine, National Taiwan University Hospital Hsin-Chu Branch, Hsinchu, Taiwan
- Department of Internal Medicine, National Taiwan University Hospital, Taipei, Taiwan