1
Yang X, Gao C, Sun N, Qin X, Liu X, Zhang C. An interpretable clinical ultrasound-radiomics combined model for diagnosis of stage I cervical cancer. Front Oncol 2024; 14:1353780. [PMID: 38846980] [PMCID: PMC11153703] [DOI: 10.3389/fonc.2024.1353780]
Abstract
Objective: The purpose of this retrospective study was to establish a combined model based on ultrasound (US) radiomics and clinical factors to identify patients with stage I cervical cancer (CC) before surgery.
Materials and methods: A total of 209 CC patients with cervical lesions detected by transvaginal sonography (TVS) at the First Affiliated Hospital of Anhui Medical University were retrospectively reviewed. Patients were divided into a training set (n = 146) and an internal validation set (n = 63), and 52 CC patients from Anhui Provincial Maternity and Child Health Hospital and Nanchong Central Hospital served as the external validation set. Independent clinical predictors were selected by univariate and multivariate logistic regression analyses. US-radiomics features were extracted from US images. After the most significant features were selected by univariate analysis, Spearman's correlation analysis, and the least absolute shrinkage and selection operator (LASSO) algorithm, six machine learning (ML) algorithms were used to build the radiomics model. The abilities of the clinical, US-radiomics, and combined clinical US-radiomics models to diagnose stage I CC were then compared. Finally, the Shapley additive explanations (SHAP) method was used to explain the contribution of each feature.
Results: Long diameter of the cervical lesion (L) and squamous cell carcinoma-associated antigen (SCCa) were independent clinical predictors of stage I CC. The eXtreme Gradient Boosting (XGBoost) model performed best among the six ML radiomics models, with area under the curve (AUC) values of 0.778, 0.751, and 0.751 in the training, internal validation, and external validation sets, respectively. Among the final three models, the combined model based on clinical features and the rad-score showed good discriminative power, with AUC values of 0.837, 0.828, and 0.839 in the training, internal validation, and external validation sets, respectively. Decision curve analysis validated the clinical utility of the combined nomogram, and the SHAP algorithm illustrates the contribution of each feature in the combined model.
Conclusion: We established an interpretable combined model to predict stage I CC. This non-invasive prediction method may be used for the preoperative identification of patients with stage I CC.
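The LASSO-driven feature selection step described in this abstract is a common radiomics pattern. The following is a minimal, hedged sketch using scikit-learn on synthetic data; the feature matrix, sample size, and regularization strength C=0.1 are placeholders, not the study's actual data or settings, and an L1-penalized logistic regression stands in for the LASSO step on a binary endpoint:

```python
# Illustrative LASSO-style feature selection for a binary radiomics endpoint.
# Synthetic stand-in data, not the study's ultrasound features.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=209, n_features=50, n_informative=8,
                           random_state=0)
X = StandardScaler().fit_transform(X)  # L1 penalties are scale-sensitive

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1,
                           random_state=0)
selector = SelectFromModel(lasso).fit(X, y)
X_selected = selector.transform(X)
print(X_selected.shape[1], "of", X.shape[1], "features retained")
```

A gradient-boosting classifier (e.g., XGBoost) and a SHAP explainer would then be fit on the retained columns, as the abstract outlines.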
Affiliation(s)
- Xianyue Yang
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China
- Chuanfen Gao
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China
- Nian Sun
- Department of Ultrasound, Anhui Provincial Maternity and Child Health Hospital, Hefei, Anhui, China
- Xiachuan Qin
- Department of Ultrasound, Nanchong Central Hospital (Beijing Anzhen Hospital Nanchong Hospital), The Second Clinical Medical College, North Sichuan Medical College (University), Nanchong, Sichuan, China
- Xiaoling Liu
- Department of Ultrasound, Nanchong Central Hospital (Beijing Anzhen Hospital Nanchong Hospital), The Second Clinical Medical College, North Sichuan Medical College (University), Nanchong, Sichuan, China
- Chaoxue Zhang
- Department of Ultrasound, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China
2
Wennmann M, Rotkopf LT, Bauer F, Hielscher T, Kächele J, Mai EK, Weinhold N, Raab MS, Goldschmidt H, Weber TF, Schlemmer HP, Delorme S, Maier-Hein K, Neher P. Reproducible Radiomics Features from Multi-MRI-Scanner Test-Retest-Study: Influence on Performance and Generalizability of Models. J Magn Reson Imaging 2024. [PMID: 38733369] [DOI: 10.1002/jmri.29442]
Abstract
BACKGROUND: Radiomics models trained on data from one center typically show a decline in performance when applied to data from external centers, hindering their introduction into large-scale clinical practice. Current expert recommendations suggest using only reproducible radiomics features isolated by multiscanner test-retest experiments, which might help overcome the problem of limited generalizability to external data.
PURPOSE: To evaluate the influence of using only a subset of robust radiomics features, defined in a prior in vivo multi-MRI-scanner test-retest study, on the performance and generalizability of radiomics models.
STUDY TYPE: Retrospective.
POPULATION: Patients with monoclonal plasma cell disorders: training set (117 MRIs from center 1), internal test set (42 MRIs from center 1), external test set (143 MRIs from centers 2-8).
FIELD STRENGTH/SEQUENCE: 1.5 T and 3.0 T; T1-weighted turbo spin echo.
ASSESSMENT: The task for the radiomics models was to predict plasma cell infiltration, determined by bone marrow biopsy, noninvasively from MRI. Radiomics machine learning models, including a linear regressor, support vector regressor (SVR), and random forest regressor (RFR), were trained on data from center 1 using either all radiomics features or only the reproducible features. Models were tested on an internal (center 1) and a multicentric external (centers 2-8) data set.
STATISTICAL TESTS: Pearson correlation coefficient r and mean absolute error (MAE) between predicted and actual plasma cell infiltration; Fisher's z-transformation, Wilcoxon signed-rank test, Wilcoxon rank-sum test; significance level P < 0.05.
RESULTS: When using only reproducible features rather than all features, the performance of the SVR on the external test set improved significantly (r = 0.43 vs. r = 0.18; MAE = 22.6 vs. MAE = 28.2). For the RFR, performance on the external test set deteriorated when using only reproducible instead of all radiomics features (r = 0.33 vs. r = 0.44, P = 0.29; MAE = 21.9 vs. MAE = 20.5, P = 0.10).
CONCLUSION: Using only reproducible radiomics features improved the external performance of some, but not all, machine learning models, and did not automatically improve the external performance of the overall best radiomics model.
LEVEL OF EVIDENCE: 3. TECHNICAL EFFICACY: Stage 2.
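The reproducibility filter evaluated above can be sketched as follows. This is an illustrative reconstruction on synthetic "scan/rescan" data with an arbitrary r > 0.9 retention threshold; the study derived its robust-feature subset from a dedicated multi-scanner test-retest experiment, not from this heuristic:

```python
# Keep only features with high test-retest correlation, then train an SVR.
# All data, noise levels, the target, and the 0.9 cutoff are synthetic/illustrative.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_patients, n_features = 117, 30
scan1 = rng.normal(size=(n_patients, n_features))
# Rescan with feature-dependent measurement noise
noise_scale = np.linspace(0.05, 2.0, n_features)
scan2 = scan1 + rng.normal(size=scan1.shape) * noise_scale

# Per-feature test-retest Pearson correlation
r = np.array([np.corrcoef(scan1[:, j], scan2[:, j])[0, 1]
              for j in range(n_features)])
reproducible = r > 0.9  # retain only robust features

y = scan1[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=n_patients)  # toy target
model = SVR().fit(scan1[:, reproducible], y)
print(int(reproducible.sum()), "of", n_features, "features kept")
```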
Affiliation(s)
- Markus Wennmann
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Diagnostic and Interventional Radiology, University Hospital Heidelberg, Heidelberg, Germany
- Lukas T Rotkopf
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Fabian Bauer
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Thomas Hielscher
- Division of Biostatistics, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Jessica Kächele
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- German Cancer Consortium (DKTK), Partner Site Heidelberg, Heidelberg, Germany
- Elias K Mai
- Heidelberg Myeloma Center, Department of Medicine, University Hospital Heidelberg, Heidelberg, Germany
- Niels Weinhold
- Heidelberg Myeloma Center, Department of Medicine, University Hospital Heidelberg, Heidelberg, Germany
- Marc-Steffen Raab
- Heidelberg Myeloma Center, Department of Medicine, University Hospital Heidelberg, Heidelberg, Germany
- Hartmut Goldschmidt
- Heidelberg Myeloma Center, Department of Medicine, University Hospital Heidelberg, Heidelberg, Germany
- National Center for Tumor Diseases (NCT) Heidelberg, Heidelberg, Germany
- Tim F Weber
- Diagnostic and Interventional Radiology, University Hospital Heidelberg, Heidelberg, Germany
- Heinz-Peter Schlemmer
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- National Center for Tumor Diseases (NCT) Heidelberg, Heidelberg, Germany
- Stefan Delorme
- Division of Radiology, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- Klaus Maier-Hein
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- German Cancer Consortium (DKTK), Partner Site Heidelberg, Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, University Hospital Heidelberg, Heidelberg, Germany
- Peter Neher
- Division of Medical Image Computing, German Cancer Research Center (DKFZ) Heidelberg, Heidelberg, Germany
- German Cancer Consortium (DKTK), Partner Site Heidelberg, Heidelberg, Germany
- Pattern Analysis and Learning Group, Department of Radiation Oncology, University Hospital Heidelberg, Heidelberg, Germany
3
İnce O, Önder H, Gençtürk M, Cebeci H, Golzarian J, Young S. Machine Learning Models in Prediction of Treatment Response After Chemoembolization with MRI Clinicoradiomics Features. Cardiovasc Intervent Radiol 2023; 46:1732-1742. [PMID: 37884802] [DOI: 10.1007/s00270-023-03574-z]
Abstract
PURPOSE: To evaluate the ability of machine learning models, created with radiomics and clinicoradiomics features, to predict local response after transarterial chemoembolization (TACE).
MATERIALS AND METHODS: 188 treatment-naïve patients (150 responders, 38 non-responders) with HCC who underwent TACE were included in this retrospective study. Laboratory, clinical, and procedural information was recorded. Local response was evaluated with European Association for the Study of the Liver criteria at 3 months. Radiomics features were extracted from pretreatment pre-contrast T1-weighted (T1WI) and late arterial-phase contrast-enhanced T1-weighted (CE-T1) MRI images. After data augmentation, data were split into training and test sets (70/30). Intra-class correlation and Pearson's correlation coefficients were analyzed, followed by a sequential feature selection (SFS) algorithm for feature selection. Support vector machine (SVM) models were trained with radiomics and clinicoradiomics features of T1WI, CE-T1, and the combination of both datasets, respectively. Performance metrics were calculated on the test sets, and model performances were compared with DeLong's test.
RESULTS: 1128 features were extracted. The SFS algorithm selected 18, 12, and 24 features in the T1WI, CE-T1, and combined datasets, respectively, along with 8 clinical features. The area under the curve of the SVM models was 0.86 and 0.88 for T1WI, 0.76 and 0.71 for CE-T1, and 0.82 and 0.91 for the combined dataset, with and without clinical features, respectively. The only significant change was observed after inclusion of clinical features in the combined dataset (p = 0.001). Higher WBC and neutrophil levels were significantly associated with lower treatment response in univariate analysis (p = 0.02 for both).
CONCLUSION: Machine learning models created with clinical and MRI radiomics features may have promise in predicting local response after TACE.
LEVEL OF EVIDENCE: Level 4, case-control study.
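Wrapper-style sequential feature selection around an SVM, as summarized above, can be sketched with scikit-learn's SequentialFeatureSelector. Data are synthetic, and the target of 8 features is illustrative (the study's SFS selected 18, 12, and 24 features per dataset):

```python
# Forward sequential feature selection with an SVM wrapper (synthetic data).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

X, y = make_classification(n_samples=188, n_features=20, n_informative=5,
                           random_state=0)

svm = SVC(kernel="rbf", random_state=0)
# Greedily add the feature that most improves 3-fold CV accuracy, 8 times.
sfs = SequentialFeatureSelector(svm, n_features_to_select=8,
                                direction="forward", cv=3).fit(X, y)
X_sel = sfs.transform(X)
print("selected feature indices:", sfs.get_support(indices=True))
```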
Affiliation(s)
- Okan İnce
- Department of Radiology, Medical School, University of Minnesota, 420 Delaware Street S.E., Minneapolis, MN, 55455, USA
- Hakan Önder
- Department of Radiology, Health Sciences University, Prof. Dr. Cemil TASCIOGLU City Hospital, Istanbul, Turkey
- Mehmet Gençtürk
- Department of Radiology, Medical School, University of Minnesota, 420 Delaware Street S.E., Minneapolis, MN, 55455, USA
- Hakan Cebeci
- Department of Radiology, Medical School, University of Minnesota, 420 Delaware Street S.E., Minneapolis, MN, 55455, USA
- Jafar Golzarian
- Department of Radiology, Medical School, University of Minnesota, 420 Delaware Street S.E., Minneapolis, MN, 55455, USA
- Shamar Young
- Department of Radiology, College of Medicine, University of Arizona, 1501 N. Campbell Avenue, Tucson, AZ, 85724, USA
4
Wang P, Luo S, Cheng S, Gong M, Zhang J, Liang R, Ma W, Li Y, Liu Y. Construction and validation of infection risk model for patients with external ventricular drainage: a multicenter retrospective study. Acta Neurochir (Wien) 2023; 165:3255-3266. [PMID: 37697007] [DOI: 10.1007/s00701-023-05771-8]
Abstract
PURPOSE: External ventricular drainage (EVD) is a life-saving neurosurgical procedure whose most concerning complication is EVD-related infection (ERI). We aimed to construct and validate an ERI risk model and establish a nomographic chart.
METHODS: We retrospectively analyzed adult EVD patients in four medical centers and split the data into a training and a validation set. We selected features via univariate logistic regression and trained the ERI risk model using multivariate logistic regression. We further evaluated the model's discrimination, calibration, and clinical usefulness, with internal and external validation to assess reproducibility and generalizability. We finally visualized the model as a nomogram and created an online calculator (dynamic nomogram).
RESULTS: The study enrolled 439 EVD patients, of whom 75 (17.1%) had ERI. Diabetes, drainage duration, site leakage, and other infections were independent risk factors used to fit the ERI risk model. The area under the receiver operating characteristic curve (AUC) and the Brier score of the model were 0.758 and 0.118, with similar values on internal validation. On external validation, discrimination declined moderately (AUC = 0.720), but the Brier score was 0.114, suggesting no degradation in overall performance. Spiegelhalter's Z-test indicated adequate calibration on both internal and external validation (P = 0.464 vs. P = 0.612). The model was transformed into a nomogram, and an online calculator was built, available at https://wang-cdutcm.shinyapps.io/DynNomapp/
CONCLUSIONS: The present study developed an infection risk model for EVD patients, which is freely accessible and may serve as a simple decision tool in the clinic.
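The discrimination (AUC) and calibration (Brier score) metrics reported above can be reproduced in outline as follows. The data are synthetic, with a roughly 17% event rate mirroring the study's ERI prevalence; the four generic features are placeholders for the study's actual predictors (diabetes, drainage duration, site leakage, other infections):

```python
# Multivariable logistic risk model with AUC and Brier score (synthetic data).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=439, n_features=4, n_informative=3,
                           n_redundant=0, weights=[0.83], random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
p = model.predict_proba(X_va)[:, 1]
auc = roc_auc_score(y_va, p)        # discrimination
brier = brier_score_loss(y_va, p)   # calibration / overall accuracy
print(f"AUC = {auc:.3f}, Brier = {brier:.3f}")
```

A lower Brier score indicates better-calibrated probabilities; 0.25 is the worst score attainable by a constant 0.5 prediction.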
Affiliation(s)
- Peng Wang
- Department of Neurosurgery, West China Hospital, Sichuan University, Chengdu, Sichuan, China
- Department of Neurosurgery, Chengdu Fifth People's Hospital/Affiliated Fifth People's Hospital of Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Shuang Luo
- Department of Neurosurgery, Chengdu Fifth People's Hospital/Affiliated Fifth People's Hospital of Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Shuwen Cheng
- Department of Neurosurgery, Chengdu Fifth People's Hospital/Affiliated Fifth People's Hospital of Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Min Gong
- Department of Neurosurgery, Chengdu Fifth People's Hospital/Affiliated Fifth People's Hospital of Chengdu University of Traditional Chinese Medicine, Chengdu, Sichuan, China
- Jie Zhang
- Department of Neurosurgery, The First Affiliated Hospital of Chengdu Medical College, Chengdu, Sichuan, China
- Ruofei Liang
- Department of Neurosurgery, Affiliated Hospital of North Sichuan Medical College, Nanchong, Sichuan, China
- Weichao Ma
- Department of Neurosurgery, Sichuan Cancer Hospital, Chengdu, Sichuan, China
- Yaxin Li
- West China Fourth Hospital/West China School of Public Health, Sichuan University, Chengdu, Sichuan, China
- Yanhui Liu
- Department of Neurosurgery, West China Hospital, Sichuan University, Chengdu, Sichuan, China
5
Orrù G, De Marchi B, Sartori G, Gemignani A, Scarpazza C, Monaro M, Mazza C, Roma P. Machine learning item selection for short scale construction: A proof-of-concept using the SIMS. Clin Neuropsychol 2023; 37:1371-1388. [PMID: 36017966] [DOI: 10.1080/13854046.2022.2114548]
Abstract
Objective: This proof-of-concept paper provides evidence to support machine learning (ML) as a valid alternative to traditional psychometric techniques in the development of short forms of longer parent psychological tests. ML comprises a variety of feature selection techniques that can be efficiently applied to identify the set of items that best replicates the characteristics of the original test.
Methods: In the present study, we integrated a dataset of 329 participants from published and unpublished datasets used in previous research on the Structured Inventory of Malingered Symptomatology (SIMS) to develop a short version of the scale. The SIMS is a multi-axial self-report questionnaire and a highly efficient psychometric measure of symptom validity, which is frequently applied in forensic settings.
Results: State-of-the-art ML item selection techniques achieved a 72% reduction in length while capturing 92% of the variance of the original SIMS. The new SIMS short form consists of 21 items.
Conclusions: The results suggest that the proposed ML-based item selection technique represents a promising alternative to standard psychometric correlation-based methods (i.e., item selection, item response theory), especially when selection techniques (e.g., wrapper methods) are employed that evaluate global, rather than local, item value.
Affiliation(s)
- Graziella Orrù
- Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
- Barbara De Marchi
- Department of Neuroscience and Rehabilitation, University of Ferrara, Ferrara, Italy
- Giuseppe Sartori
- Department of General Psychology, University of Padua, Padua, Italy
- Angelo Gemignani
- Department of Surgical, Medical, Molecular & Critical Area Pathology, University of Pisa, Pisa, Italy
- Merylin Monaro
- Department of General Psychology, University of Padua, Padua, Italy
- Cristina Mazza
- Department of Neuroscience, Imaging and Clinical Sciences, G. d'Annunzio University of Chieti-Pescara, Chieti, Italy
- Paolo Roma
- Department of Human Neuroscience, Sapienza University of Rome, Rome, Italy
6
Park SH, Sul AR, Ko Y, Jang HY, Lee JG. Radiologist's Guide to Evaluating Publications of Clinical Research on AI: How We Do It. Radiology 2023; 308:e230288. [PMID: 37750772] [DOI: 10.1148/radiol.230288]
Abstract
Literacy in research studies of artificial intelligence (AI) has become an important skill for radiologists; it is required to properly assess the validity, reproducibility, and clinical applicability of AI studies. However, AI studies are generally perceived as more difficult for clinician readers to evaluate than traditional clinical research studies. This special report, intended as an effective, concise guide for readers, aims to assist clinical radiologists in critically evaluating different types of clinical research articles involving AI. It is not intended to be a comprehensive checklist, a methodological summary for complete clinical evaluation of AI, or a reporting guideline. Ten key items for readers to check are described, regarding study purpose, function and clinical context of AI, training data, data preprocessing, AI modeling techniques, test data, AI performance, helpfulness and value of AI, interpretability of AI, and code sharing. The important aspects of each item are explained for readers to consider when reading publications on AI clinical research. Evaluating each item can help radiologists assess the validity, reproducibility, and clinical applicability of clinical research articles involving AI.
Affiliation(s)
- Seong Ho Park, Ah-Ram Sul, Yousun Ko, Hye Young Jang, June-Goo Lee
- From the Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88 Olympic-ro 43-gil, Songpa-gu, Seoul 05505, South Korea (S.H.P., Y.K., H.Y.J.); Division of Healthcare Research Outcomes Research, National Evidence-based Healthcare Collaborating Agency, Seoul, South Korea (A.R.S.); and Biomedical Engineering Research Center, Asan Institute for Life Sciences, University of Ulsan College of Medicine, Seoul, South Korea (J.G.L.)
7
Wehkamp K, Krawczak M, Schreiber S. The Quality and Utility of Artificial Intelligence in Patient Care. Dtsch Arztebl Int 2023; 120:463-469. [PMID: 37218054] [PMCID: PMC10487679] [DOI: 10.3238/arztebl.m2023.0124]
Abstract
BACKGROUND: Artificial intelligence (AI) is increasingly being used in patient care. In the future, physicians will need to understand not only the basic functioning of AI applications, but also their quality, utility, and risks.
METHODS: This article is based on a selective review of the literature on the principles, quality, limitations, and benefits of AI applications in patient care, along with examples of individual applications.
RESULTS: The number of AI applications in patient care is rising, with more than 500 approvals in the United States to date. Their quality and utility rest on a number of interdependent factors, including the real-life setting, the type and amount of data collected, the choice of variables used by the application, the algorithms used, and the goal and implementation of each application. Bias (which may be hidden) and errors can arise at all of these levels. Any evaluation of the quality and utility of an AI application must, therefore, be conducted according to the scientific principles of evidence-based medicine, a requirement that is often hampered by a lack of transparency.
CONCLUSION: AI has the potential to improve patient care while meeting the challenge of dealing with an ever-increasing surfeit of information and data in medicine with limited human resources. The limitations and risks of AI applications require critical and responsible consideration. This can best be achieved through a combination of scientific.
Affiliation(s)
- Kai Wehkamp
- Department of Internal Medicine I, University Medical Center Schleswig-Holstein, Campus Lübeck, Kiel, Germany
- Department for Medical Management, MSH Medical School Hamburg, Hamburg, Germany
- Michael Krawczak
- Institute of Medical Informatics and Statistics, Christian-Albrechts-University of Kiel, University Medical Center Schleswig-Holstein Campus Kiel, Germany
- Stefan Schreiber
- Department of Internal Medicine I, University Medical Center Schleswig-Holstein, Campus Lübeck, Kiel, Germany
- Institute of Clinical Molecular Biology, Christian-Albrechts-University of Kiel, University Medical Center Schleswig-Holstein Campus Kiel, Germany
8
Chen YC, Li YT, Kuo PC, Cheng SJ, Chung YH, Kuo DP, Chen CY. Automatic segmentation and radiomic texture analysis for osteoporosis screening using chest low-dose computed tomography. Eur Radiol 2023; 33:5097-5106. [PMID: 36719495] [DOI: 10.1007/s00330-023-09421-6]
Abstract
OBJECTIVE: This study developed a diagnostic tool combining machine learning (ML) segmentation and radiomic texture analysis (RTA) for bone density screening using chest low-dose computed tomography (LDCT).
METHODS: A total of 197 patients who underwent LDCT followed by dual-energy X-ray absorptiometry were analyzed. First, an autosegmentation model was trained on LDCT to delineate the thoracic vertebral body (VB). Second, a two-level classifier was developed using radiomic features extracted from VBs for hierarchical pairwise classification of each patient's bone status: all patients were initially classified as either normal or abnormal, and those with abnormal bone density were then subdivided into an osteopenia group and an osteoporosis group. Classifier performance was evaluated through fivefold cross-validation.
RESULTS: The automated VB segmentation model achieved a Sørensen-Dice coefficient of 0.87 ± 0.01. The areas under the receiver operating characteristic curve for the two-level classifier were 0.96 ± 0.01 for detecting abnormal bone density (accuracy = 0.91 ± 0.02; sensitivity = 0.93 ± 0.03; specificity = 0.89 ± 0.03) and 0.98 ± 0.01 for distinguishing osteoporosis (accuracy = 0.94 ± 0.02; sensitivity = 0.95 ± 0.03; specificity = 0.93 ± 0.03). Testing prediction accuracies for the first- and second-level classifiers were 0.92 ± 0.04 and 0.94 ± 0.05, respectively, and the overall testing prediction accuracy was 0.90 ± 0.05.
CONCLUSION: The combination of ML segmentation and RTA for automated bone density prediction from LDCT scans is a feasible approach that could be valuable for osteoporosis screening during lung cancer screening.
KEY POINTS:
• This study developed an automatic diagnostic tool combining machine learning-based segmentation and radiomic texture analysis for bone density screening using chest low-dose computed tomography.
• The developed method enables opportunistic screening without quantitative computed tomography or a dedicated phantom.
• The developed method could be integrated into the current clinical workflow and used as an adjunct for opportunistic screening or for patients who are ineligible for screening with dual-energy X-ray absorptiometry.
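The Sørensen-Dice coefficient used above to score the vertebral-body segmentation has a one-line definition, 2|A∩B| / (|A| + |B|); a minimal NumPy implementation on toy binary masks:

```python
# Sørensen-Dice coefficient for binary segmentation masks (toy example).
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Return 2|A∩B| / (|A| + |B|) for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

truth = np.zeros((10, 10), dtype=bool)
truth[2:8, 2:8] = True                 # 6x6 = 36 pixels
pred = np.zeros((10, 10), dtype=bool)
pred[3:9, 3:9] = True                  # same size, shifted by one pixel
print(f"Dice = {dice(pred, truth):.3f}")  # overlap 5x5 = 25 -> 50/72 ≈ 0.694
```

By convention the empty-vs-empty case returns 1.0, since two empty masks agree perfectly.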
Affiliation(s)
- Yung-Chieh Chen
- Translational Imaging Research Center, Taipei Medical University Hospital, Taipei, Taiwan
- Department of Medical Imaging, Taipei Medical University Hospital, Taipei, Taiwan
- Yi-Tien Li
- Translational Imaging Research Center, Taipei Medical University Hospital, Taipei, Taiwan
- Neuroscience Research Center, Taipei Medical University, Taipei, Taiwan
- Po-Chih Kuo
- Department of Computer Science, National Tsing Hua University, Hsinchu, Taiwan
- Sho-Jen Cheng
- Translational Imaging Research Center, Taipei Medical University Hospital, Taipei, Taiwan
- Department of Medical Imaging, Taipei Medical University Hospital, Taipei, Taiwan
- Yi-Hsiang Chung
- Department of Medical Imaging, Taipei Medical University Hospital, Taipei, Taiwan
- Duen-Pang Kuo
- Translational Imaging Research Center, Taipei Medical University Hospital, Taipei, Taiwan
- Department of Medical Imaging, Taipei Medical University Hospital, Taipei, Taiwan
- Cheng-Yu Chen
- Translational Imaging Research Center, Taipei Medical University Hospital, Taipei, Taiwan
- Department of Medical Imaging, Taipei Medical University Hospital, Taipei, Taiwan
- Research Center for Artificial Intelligence in Medicine, Taipei Medical University, Taipei, Taiwan
- Department of Radiology, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
- Department of Radiology, National Defense Medical Center, Taipei, Taiwan
9
Kocak B, Baessler B, Bakas S, Cuocolo R, Fedorov A, Maier-Hein L, Mercaldo N, Müller H, Orlhac F, Pinto Dos Santos D, Stanzione A, Ugga L, Zwanenburg A. CheckList for EvaluAtion of Radiomics research (CLEAR): a step-by-step reporting guideline for authors and reviewers endorsed by ESR and EuSoMII. Insights Imaging 2023; 14:75. [PMID: 37142815] [PMCID: PMC10160267] [DOI: 10.1186/s13244-023-01415-8]
Abstract
Even though radiomics can hold great potential for supporting clinical decision-making, its current use is mostly limited to academic research, without applications in routine clinical practice. The workflow of radiomics is complex due to several methodological steps and nuances, which often leads to inadequate reporting and evaluation, and poor reproducibility. Available reporting guidelines and checklists for artificial intelligence and predictive modeling include relevant good practices, but they are not tailored to radiomic research. There is a clear need for a complete radiomics checklist for study planning, manuscript writing, and evaluation during the review process to facilitate the repeatability and reproducibility of studies. We here present a documentation standard for radiomic research that can guide authors and reviewers. Our motivation is to improve the quality and reliability and, in turn, the reproducibility of radiomic research. We name the checklist CLEAR (CheckList for EvaluAtion of Radiomics research), to convey the idea of being more transparent. With its 58 items, the CLEAR checklist should be considered a standardization tool providing the minimum requirements for presenting clinical radiomics research. In addition to a dynamic online version of the checklist, a public repository has also been set up to allow the radiomics community to comment on the checklist items and adapt the checklist for future versions. Prepared and revised by an international group of experts using a modified Delphi method, we hope the CLEAR checklist will serve well as a single and complete scientific documentation tool for authors and reviewers to improve the radiomics literature.
Affiliation(s)
- Burak Kocak
  - Department of Radiology, University of Health Sciences, Basaksehir Cam and Sakura City Hospital, Basaksehir, Istanbul, 34480, Turkey
- Bettina Baessler
  - Institute of Diagnostic and Interventional Radiology, University Hospital Würzburg, Würzburg, Germany
- Spyridon Bakas
  - Center for Artificial Intelligence for Integrated Diagnostics (AI2D) & Center for Biomedical Image Computing & Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, USA
  - Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
  - Department of Pathology and Laboratory Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Renato Cuocolo
  - Department of Medicine, Surgery, and Dentistry, University of Salerno, Baronissi, Italy
- Andrey Fedorov
  - Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Lena Maier-Hein
  - Division of Intelligent Medical Systems, German Cancer Research Center, Heidelberg, Germany
  - National Center for Tumor Diseases (NCT), Heidelberg, Germany
- Nathaniel Mercaldo
  - Institute for Technology Assessment, Massachusetts General Hospital, Boston, MA, USA
  - Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
- Henning Müller
  - University of Applied Sciences of Western Switzerland (HES-SO Valais), Valais, Switzerland
  - Department of Radiology and Medical Informatics, University of Geneva (UniGe), Geneva, Switzerland
- Fanny Orlhac
  - Laboratoire d'Imagerie Translationnelle en Oncologie (LITO)-U1288, Institut Curie, Inserm, Université PSL, Orsay, France
- Daniel Pinto Dos Santos
  - Department of Radiology, University Hospital of Cologne, Cologne, Germany
  - Institute for Diagnostic and Interventional Radiology, Goethe-University Frankfurt Am Main, Frankfurt, Germany
- Arnaldo Stanzione
  - Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples, Italy
- Lorenzo Ugga
  - Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples, Italy
- Alex Zwanenburg
  - OncoRay-National Center for Radiation Research in Oncology, Faculty of Medicine and University Hospital Carl Gustav Carus, Technische Universität Dresden, Helmholtz-Zentrum Dresden-Rossendorf, Dresden, Germany
  - National Center for Tumor Diseases (NCT), Partner Site Dresden, Dresden, Germany
  - German Cancer Research Center (DKFZ), Heidelberg, Germany
10
İnce O, Önder H, Gençtürk M, Cebeci H, Golzarian J, Young S. Prediction of Response of Hepatocellular Carcinoma to Radioembolization: Machine Learning Using Preprocedural Clinical Factors and MR Imaging Radiomics. J Vasc Interv Radiol 2023; 34:235-243.e3. [PMID: 36384224] [DOI: 10.1016/j.jvir.2022.11.004]
Abstract
PURPOSE To create and evaluate the ability of machine learning-based models with clinicoradiomic features to predict radiologic response after transarterial radioembolization (TARE). MATERIALS AND METHODS A total of 82 treatment-naïve patients (65 responders and 17 nonresponders; median age: 65 years; interquartile range: 11) who underwent selective TARE were included. Treatment responses were evaluated using the European Association for the Study of the Liver criteria at 3-month follow-up. Laboratory, clinical, and procedural information was collected. Radiomic features were extracted from pretreatment contrast-enhanced T1-weighted magnetic resonance images obtained within 3 months before TARE. Feature selection consisted of intraclass correlation, followed by Pearson correlation analysis and, finally, a sequential feature selection algorithm. Support vector machine, logistic regression, random forest, and LightGBM models were created with both clinicoradiomic features and clinical features alone. Performance metrics were calculated with a nested 5-fold cross-validation technique. The performances of the models were compared by Wilcoxon signed-rank and Friedman tests. RESULTS In total, 1,128 features were extracted. The feature selection process resulted in 12 features (8 radiomic and 4 clinical) being included in the final analysis. The area under the receiver operating characteristic curve values from the support vector machine, logistic regression, random forest, and LightGBM models were 0.94, 0.94, 0.88, and 0.92 with clinicoradiomic features and 0.82, 0.83, 0.82, and 0.83 with clinical features alone, respectively. All models exhibited significantly higher performances when radiomic features were included (P = .028, .028, .043, and .028, respectively). CONCLUSIONS Based on clinical and imaging-based information available before treatment, machine learning-based clinicoradiomic models demonstrated potential to predict response to TARE.
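The validation scheme described in this abstract (feature selection nested inside cross-validation) can be illustrated with a minimal scikit-learn sketch. This is not the authors' code: the data are synthetic stand-ins, and the sequential selector and sample sizes are illustrative assumptions only. The key point is that the selector is fitted inside the pipeline, so feature selection is re-run on every outer training fold and never sees the held-out fold.

```python
# Illustrative sketch (synthetic data, not the authors' code): nested 5-fold
# cross-validation around a sequential-feature-selection + SVM pipeline.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 82 synthetic "patients" with an imbalanced outcome, mirroring the cohort size.
X, y = make_classification(n_samples=82, n_features=30, n_informative=6,
                           weights=[0.8, 0.2], random_state=0)

# Inner step: sequential feature selection is part of the pipeline, so it is
# refitted on each outer training fold (no information leakage into the test fold).
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SequentialFeatureSelector(
        SVC(kernel="linear"), n_features_to_select=8, cv=3)),
    ("clf", SVC(kernel="linear")),
])

# Outer loop: stratified 5-fold CV gives a less optimistic AUC estimate.
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(pipe, X, y, cv=outer, scoring="roc_auc")
print(f"nested-CV AUC: {aucs.mean():.2f} +/- {aucs.std():.2f}")
```

Running selection outside the outer loop (on all 82 cases at once) would leak test information into the chosen feature set and inflate the reported AUC.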
Affiliation(s)
- Okan İnce, Mehmet Gençtürk, Hakan Cebeci, Jafar Golzarian
  - Department of Radiology, Medical School, University of Minnesota, Minneapolis, Minnesota
- Hakan Önder
  - Department of Radiology, Prof. Dr. Cemil Taşcıoğlu City Hospital, Health Sciences University, Istanbul, Turkey
- Shamar Young
  - Department of Radiology, College of Medicine, University of Arizona, Tucson, Arizona
11
Kotsyfakis S, Iliaki-Giannakoudaki E, Anagnostopoulos A, Papadokostaki E, Giannakoudakis K, Goumenakis M, Kotsyfakis M. The application of machine learning to imaging in hematological oncology: A scoping review. Front Oncol 2022; 12:1080988. [PMID: 36605438] [PMCID: PMC9808781] [DOI: 10.3389/fonc.2022.1080988]
Abstract
Background Here, we conducted a scoping review to (i) establish which machine learning (ML) methods have been applied to hematological malignancy imaging; (ii) establish how ML is being applied to hematological cancer radiology; and (iii) identify addressable research gaps. Methods The review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews guidelines. The inclusion criteria were (i) pediatric and adult patients with suspected or confirmed hematological malignancy undergoing imaging (population); (ii) any study using ML techniques to derive models using radiological images to apply to the clinical management of these patients (concept); and (iii) original research articles conducted in any setting globally (context). Quality Assessment of Diagnostic Accuracy Studies 2 criteria were used to assess diagnostic and segmentation studies, while the Newcastle-Ottawa scale was used to assess the quality of observational studies. Results Of 53 eligible studies, 33 applied diverse ML techniques to diagnose hematological malignancies or to differentiate them from other diseases, especially discriminating gliomas from primary central nervous system lymphomas (n=18); 11 applied ML to segmentation tasks, while 9 applied ML to prognostication or predicting therapeutic responses, especially for diffuse large B-cell lymphoma. All studies reported discrimination statistics, but no study calculated calibration statistics. Every diagnostic/segmentation study had a high risk of bias due to their case-control design; many studies failed to provide adequate details of the reference standard; and only a few studies used independent validation.
Conclusion To deliver validated ML-based models to radiologists managing hematological malignancies, future studies should (i) adhere to standardized, high-quality reporting guidelines such as the Checklist for Artificial Intelligence in Medical Imaging; (ii) validate models in independent cohorts; (iii) standardize volume segmentation methods for segmentation tasks; (iv) establish comprehensive prospective studies that include different tumor grades, comparisons with radiologists, optimal imaging modalities, sequences, and planes; (v) include side-by-side comparisons of different methods; and (vi) include low- and middle-income countries in multicentric studies to enhance generalizability and reduce inequity.
Affiliation(s)
- Michail Kotsyfakis
  - Biology Center of the Czech Academy of Sciences, Budweis (Ceske Budejovice), Czechia
  - Correspondence: Michail Kotsyfakis
12
Joslyn S, Alexander K. Evaluating artificial intelligence algorithms for use in veterinary radiology. Vet Radiol Ultrasound 2022; 63 Suppl 1:871-879. [PMID: 36514228] [DOI: 10.1111/vru.13159]
Abstract
Artificial intelligence is increasingly being used for applications in veterinary radiology, including detection of abnormalities and automated measurements. Unlike human radiology, there is no formal regulation or validation of AI algorithms for veterinary medicine and both general practitioner and specialist veterinarians must rely on their own judgment when deciding whether or not to incorporate AI algorithms to aid their clinical decision-making. The benefits and challenges to developing clinically useful and diagnostically accurate AI algorithms are discussed. Considerations for the development of AI research projects are also addressed. A framework is suggested to help veterinarians, in both research and clinical practice contexts, assess AI algorithms for veterinary radiology.
Affiliation(s)
- Steve Joslyn
  - ACVR/ECVDI AI Education and Development Committee, Vedi, Perth, Western Australia, Australia
- Kate Alexander
  - ACVR/ECVDI AI Education and Development Committee, DMV Veterinary Center, Lachine, Quebec, Canada
13
Harvey J, Reijnders RA, Cavill R, Duits A, Köhler S, Eijssen L, Rutten BPF, Shireby G, Torkamani A, Creese B, Leentjens AFG, Lunnon K, Pishva E. Machine learning-based prediction of cognitive outcomes in de novo Parkinson's disease. NPJ Parkinsons Dis 2022; 8:150. [PMID: 36344548] [PMCID: PMC9640625] [DOI: 10.1038/s41531-022-00409-5]
Abstract
Cognitive impairment is a debilitating symptom in Parkinson's disease (PD). We aimed to establish an accurate multivariate machine learning (ML) model to predict cognitive outcome in newly diagnosed PD cases from the Parkinson's Progression Markers Initiative (PPMI). Annual cognitive assessments over an 8-year time span were used to define two cognitive outcomes: (i) cognitive impairment and (ii) dementia conversion. Selected baseline variables were organized into three subsets of clinical, biofluid, and genetic/epigenetic measures and tested using four different ML algorithms. Irrespective of the ML algorithm used, the models consisting of the clinical variables performed best and predicted the cognitive impairment outcome better than dementia conversion. We observed a marginal improvement in prediction performance when clinical, biofluid, and epigenetic/genetic variables were all included in one model. Several cerebrospinal fluid measures and an epigenetic marker showed high predictive weighting in multiple models when included alongside clinical variables.
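The subset-comparison design in this abstract (the same algorithms fitted on different blocks of baseline variables, then compared) can be sketched as follows. This is a hypothetical illustration, not the authors' pipeline: the data, the column split standing in for the clinical versus combined variable blocks, and the two example algorithms are all assumptions.

```python
# Hypothetical sketch: fit the same ML algorithms on different variable
# subsets and compare cross-validated AUCs. Synthetic data throughout.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=40, n_informative=10,
                           random_state=0)

# The first 15 columns play the role of the clinical block; the full
# matrix plays the role of the combined clinical + biofluid + genetic set.
subsets = {"clinical": slice(0, 15), "combined": slice(0, 40)}
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

# Cross-validated AUC for every (subset, algorithm) pair.
results = {
    (s_name, m_name): cross_val_score(model, X[:, cols], y,
                                      cv=5, scoring="roc_auc").mean()
    for s_name, cols in subsets.items()
    for m_name, model in models.items()
}
```

Holding the algorithm fixed while swapping the feature block, as above, is what lets a study attribute performance differences to the variables rather than to the learner.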
Affiliation(s)
- Joshua Harvey, Gemma Shireby, Byron Creese, Katie Lunnon
  - Medical School, Faculty of Health and Life Sciences, University of Exeter, Exeter, UK
- Rick A Reijnders, Sebastian Köhler, Bart P F Rutten, Albert F G Leentjens
  - Department of Psychiatry and Neuropsychology, School for Mental Health and Neuroscience (MHeNs), Maastricht University, Maastricht, The Netherlands
- Rachel Cavill
  - Department of Advanced Computing Sciences, FSE, Maastricht University, Maastricht, The Netherlands
- Annelien Duits
  - Department of Psychiatry and Neuropsychology, School for Mental Health and Neuroscience (MHeNs), Maastricht University, Maastricht, The Netherlands
  - Department of Medical Psychology, Radboud University Medical Center, Nijmegen, The Netherlands
- Lars Eijssen
  - Department of Psychiatry and Neuropsychology, School for Mental Health and Neuroscience (MHeNs), Maastricht University, Maastricht, The Netherlands
  - Department of Bioinformatics-BiGCaT, School of Nutrition and Translational Research in Metabolism (NUTRIM), Maastricht University, Maastricht, The Netherlands
- Ali Torkamani
  - Department of Integrative Structural and Computational Biology, Scripps Research, La Jolla, CA, 92037, USA
- Ehsan Pishva
  - Medical School, Faculty of Health and Life Sciences, University of Exeter, Exeter, UK
  - Department of Psychiatry and Neuropsychology, School for Mental Health and Neuroscience (MHeNs), Maastricht University, Maastricht, The Netherlands
14
Rouzrokh P, Khosravi B, Faghani S, Moassefi M, Vera Garcia DV, Singh Y, Zhang K, Conte GM, Erickson BJ. Mitigating Bias in Radiology Machine Learning: 1. Data Handling. Radiol Artif Intell 2022; 4:e210290. [PMID: 36204544] [PMCID: PMC9533091] [DOI: 10.1148/ryai.210290]
Abstract
Minimizing bias is critical to the adoption and implementation of machine learning (ML) in clinical practice. Systematic mathematical biases produce consistent and reproducible differences between the observed and expected performance of ML systems, resulting in suboptimal performance. Such biases can be traced back to various phases of ML development: data handling, model development, and performance evaluation. This report presents 12 suboptimal practices in the data handling of an ML study, explains how those practices can lead to biases, and describes what may be done to mitigate them. The authors employ a deliberately simplified framework that splits ML data handling into four steps: data collection, data investigation, data splitting, and feature engineering. Examples from the available research literature are provided. A Google Colaboratory Jupyter notebook includes code examples to demonstrate the suboptimal practices and steps to prevent them. Keywords: Data Handling, Bias, Machine Learning, Deep Learning, Convolutional Neural Network (CNN), Computer-aided Diagnosis (CAD) © RSNA, 2022.
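One classic data-splitting pitfall in the report's scope is leakage from splitting at the image level when one patient contributes several images. A minimal sketch of the safeguard, using scikit-learn's group-aware splitter on synthetic data (the variable names and sizes are illustrative, not from the report's notebook):

```python
# Minimal sketch: split at the patient level with GroupShuffleSplit so that
# images from the same patient never land in both training and test sets.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
n_images = 100
patient_id = rng.integers(0, 25, size=n_images)  # several images per patient
X = rng.normal(size=(n_images, 5))               # stand-in image features
y = rng.integers(0, 2, size=n_images)            # stand-in labels

splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_id))

# No patient contributes images to both sides of the split.
overlap = set(patient_id[train_idx]) & set(patient_id[test_idx])
assert not overlap
```

A plain `train_test_split` over the 100 images would almost certainly place some patients on both sides, letting patient-specific signal inflate test performance.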
15
Wang P, Cheng S, Li Y, Liu L, Liu J, Zhao Q, Luo S. Prediction of Lumbar Drainage-Related Meningitis Based on Supervised Machine Learning Algorithms. Front Public Health 2022; 10:910479. [PMID: 35836985] [PMCID: PMC9273930] [DOI: 10.3389/fpubh.2022.910479]
Abstract
Background Lumbar drainage is widely used in the clinic; however, the ability to forecast lumbar drainage-related meningitis (LDRM) is limited. We aimed to establish prediction models using supervised machine learning (ML) algorithms. Methods We utilized a cohort of 273 eligible lumbar drainage cases. Data were preprocessed and split into training and testing sets. Optimal hyper-parameters were obtained by 10-fold cross-validation and grid search. The support vector machine (SVM), random forest (RF), and artificial neural network (ANN) were adopted for model training. The area under the receiver operating characteristic curve (AUROC) and precision-recall curve (AUPRC), true positive rate (TPR), true negative rate (TNR), specificity, sensitivity, accuracy, and kappa coefficient were used for model evaluation. All trained models were internally validated. The importance of features was also analyzed. Results In the training set, all models had an AUROC exceeding 0.8. The SVM and RF models had an AUPRC of more than 0.6, but the ANN model had an unexpectedly low AUPRC (0.380). The RF and ANN models revealed similar TPRs, whereas the ANN model had a higher TNR and demonstrated better specificity, sensitivity, accuracy, and kappa coefficient. In the testing set, most performance indicators of the established models decreased. However, the RF and SVM models maintained adequate AUROC (0.828 vs. 0.719) and AUPRC (0.413 vs. 0.520), and the RF model also had a better TPR, specificity, sensitivity, accuracy, and kappa coefficient. Site leakage showed the largest mean decrease in accuracy. Conclusions The RF and SVM models could predict LDRM; the RF model showed the best performance, and site leakage was the most meaningful predictor.
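The tuning-then-evaluation workflow described here (grid search with 10-fold cross-validation, then AUROC/AUPRC on a held-out testing set) maps directly onto scikit-learn. A minimal sketch with synthetic data; the cohort size, parameter grid, and SVM focus are illustrative assumptions, not the authors' configuration:

```python
# Illustrative sketch: hyper-parameter tuning by grid search with 10-fold CV,
# then AUROC and AUPRC on a held-out testing set. Synthetic data.
from sklearn.datasets import make_classification
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# 273 synthetic cases with an imbalanced outcome, mirroring the cohort size.
X, y = make_classification(n_samples=273, n_features=12, weights=[0.8, 0.2],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)

# Grid search over SVM hyper-parameters, scored by AUROC with 10-fold CV
# on the training set only.
pipe = make_pipeline(StandardScaler(), SVC())
grid = GridSearchCV(pipe,
                    {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]},
                    cv=10, scoring="roc_auc")
grid.fit(X_tr, y_tr)

# Evaluate the tuned model once on the untouched testing set.
scores = grid.decision_function(X_te)
auroc = roc_auc_score(y_te, scores)
auprc = average_precision_score(y_te, scores)  # precision-recall analogue
```

Reporting AUPRC alongside AUROC, as the abstract does, matters for imbalanced outcomes like meningitis, where AUROC alone can look deceptively strong.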
Affiliation(s)
- Peng Wang, Shuwen Cheng, Li Liu, Jia Liu, Qiang Zhao, Shuang Luo
  - Department of Neurosurgery, Cancer Prevention and Treatment Institute of Chengdu, Chengdu Fifth People's Hospital (The Second Clinical Medical College, Affiliated Fifth People's Hospital of Chengdu University of Traditional Chinese Medicine), Chengdu, China
- Yaxin Li
  - West China Fourth Hospital/West China School of Public Health, Sichuan University, Chengdu, China
- Correspondence: Shuang Luo
16
Moskowitz CS, Welch ML, Jacobs MA, Kurland BF, Simpson AL. Radiomic Analysis: Study Design, Statistical Analysis, and Other Bias Mitigation Strategies. Radiology 2022; 304:265-273. [PMID: 35579522] [PMCID: PMC9340236] [DOI: 10.1148/radiol.211597]
Abstract
Rapid advances in automated methods for extracting large numbers of quantitative features from medical images have led to tremendous growth of publications reporting on radiomic analyses. Translation of these research studies into clinical practice can be hindered by biases introduced during the design, analysis, or reporting of the studies. Herein, the authors review biases, sources of variability, and pitfalls that frequently arise in radiomic research, with an emphasis on study design and statistical analysis considerations. Drawing on existing work in the statistical, radiologic, and machine learning literature, approaches for avoiding these pitfalls are described.
Affiliation(s)
- Chaya S Moskowitz, Mattea L Welch, Michael A Jacobs, Brenda F Kurland, Amber L Simpson
  - From the Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center, 485 Lexington Ave, 2nd Floor, New York, NY 10017 (C.S.M.); Cancer Digital Intelligence Program, University Health Network, Toronto, ON, Canada (M.L.W.); The Russell H. Morgan Department of Radiology and Radiological Science and Sidney Kimmel Comprehensive Cancer Center, The Johns Hopkins School of Medicine, Baltimore, Md (M.A.J.); ERT, Pittsburgh, Pa (B.F.K.); and School of Computing, Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, Canada (A.L.S.)
17
Marti-Bonmati L, Koh DM, Riklund K, Bobowicz M, Roussakis Y, Vilanova JC, Fütterer JJ, Rimola J, Mallol P, Ribas G, Miguel A, Tsiknakis M, Lekadir K, Tsakou G. Considerations for artificial intelligence clinical impact in oncologic imaging: an AI4HI position paper. Insights Imaging 2022; 13:89. [PMID: 35536446] [PMCID: PMC9091068] [DOI: 10.1186/s13244-022-01220-9]
Abstract
To achieve clinical impact in daily oncological practice, emerging AI-based cancer imaging research needs a clearly defined medical focus, AI methods, and outcomes to be estimated. AI-supported cancer imaging should predict major relevant clinical endpoints, aiming to extract associations and draw inferences in a fair, robust, and trustworthy way. AI-assisted solutions as medical devices, developed using multicenter heterogeneous datasets, should be targeted to have an impact on the clinical care pathway. When designing an AI-based research study in oncologic imaging, ensuring clinical impact requires careful consideration of key aspects, including target population selection, sample size definition, use of standards and common data elements, balanced dataset splitting, appropriate validation methodology, adequate ground truth, and careful selection of clinical endpoints. Endpoints may be pathology hallmarks, disease behavior, treatment response, or patient prognosis. Ethical, safety, and privacy considerations are also mandatory before clinical validation is performed. The Artificial Intelligence for Health Imaging (AI4HI) Clinical Working Group discusses and presents in this paper some indicative Machine Learning (ML)-enabled decision-support solutions currently under research in the AI4HI projects, as well as the main considerations and requirements that AI solutions should meet from a clinical perspective before adoption into clinical practice. If effectively designed, implemented, and validated, cancer imaging AI-supported tools will have the potential to revolutionize the field of precision medicine in oncology.
Affiliation(s)
- Luis Marti-Bonmati, Pedro Mallol, Gloria Ribas, Ana Miguel
  - Radiology Department and Biomedical Imaging Research Group (GIBI230), La Fe Polytechnics and University Hospital and Health Research Institute, Valencia, Spain
- Dow-Mu Koh
  - Department of Radiology, Royal Marsden Hospital and Division of Radiotherapy and Imaging, Institute of Cancer Research, London, UK
  - Department of Radiology, The Royal Marsden NHS Trust, London, UK
- Katrine Riklund
  - Department of Radiation Sciences, Diagnostic Radiology, Umeå University, 901 85, Umeå, Sweden
- Maciej Bobowicz
  - 2nd Department of Radiology, Medical University of Gdansk, 17 Smoluchowskiego Str, 80-214, Gdansk, Poland
- Yiannis Roussakis
  - Department of Medical Physics, German Oncology Center, 4108, Limassol, Cyprus
- Joan C Vilanova
  - Department of Radiology, Clínica Girona, Institute of Diagnostic Imaging (IDI)-Girona, Faculty of Medicine, University of Girona, Girona, Spain
- Jurgen J Fütterer
  - Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Jordi Rimola
  - CIBERehd, Barcelona Clinic Liver Cancer (BCLC) Group, Department of Radiology, Hospital Clínic, University of Barcelona, Barcelona, Spain
- Manolis Tsiknakis
  - Foundation for Research and Technology Hellas, Institute of Computer Science, Computational Biomedicine Lab (CBML), FORTH-ICS Heraklion, Crete, Greece
- Karim Lekadir
  - Departament de Matemàtiques and Informàtica, Artificial Intelligence in Medicine Lab (BCN-AIM), Universitat de Barcelona, Barcelona, Spain
- Gianna Tsakou
  - Maggioli S.P.A., Research and Development Lab, Athens, Greece
18
Fan C, Sun K, Min X, Cai W, Lv W, Ma X, Li Y, Chen C, Zhao P, Qiao J, Lu J, Guo Y, Xia L. Discriminating malignant from benign testicular masses using machine-learning based radiomics signature of apparent diffusion coefficient maps: Comparing with conventional mean and minimum ADC values. Eur J Radiol 2022; 148:110158. [DOI: 10.1016/j.ejrad.2022.110158]
19
Xv Y, Lv F, Guo H, Zhou X, Tan H, Xiao M, Zheng Y. Machine learning-based CT radiomics approach for predicting WHO/ISUP nuclear grade of clear cell renal cell carcinoma: an exploratory and comparative study. Insights Imaging 2021; 12:170. [PMID: 34800179] [PMCID: PMC8605949] [DOI: 10.1186/s13244-021-01107-1]
Abstract
Purpose To investigate the predictive performance of machine learning-based CT radiomics for differentiating between low- and high-nuclear-grade clear cell renal cell carcinomas (CCRCCs). Methods This retrospective study enrolled 406 patients with pathologically confirmed low- and high-nuclear-grade CCRCCs according to the WHO/ISUP grading system, who were divided into training and testing cohorts. Radiomics features were extracted from nephrographic-phase CT images using PyRadiomics. A support vector machine (SVM) combined with each of three feature selection algorithms, least absolute shrinkage and selection operator (LASSO), recursive feature elimination (RFE), and ReliefF, was evaluated to determine the most suitable classification model. Clinicoradiological, radiomics, and combined models were constructed using the radiological and clinical characteristics with significant differences between the groups, the selected radiomics features, and a combination of both, respectively. Model performance was evaluated by receiver operating characteristic (ROC) curve, calibration curve, and decision curve analyses. Results The SVM-ReliefF algorithm outperformed SVM-LASSO and SVM-RFE in distinguishing low- from high-grade CCRCCs. The combined model showed better prediction performance than the clinicoradiological and radiomics models (p < 0.05, DeLong test) and achieved the highest efficacy, with area under the ROC curve (AUC) values of 0.887 (95% confidence interval [CI] 0.798–0.952), 0.859 (95% CI 0.748–0.935), and 0.828 (95% CI 0.731–0.929) in the training, validation, and testing cohorts, respectively. The calibration and decision curves also indicated the favorable performance of the combined model. Conclusion A combined model incorporating radiomics features and clinicoradiological characteristics can better predict the WHO/ISUP nuclear grade of CCRCC preoperatively, thus providing an effective and noninvasive assessment.
Supplementary Information The online version contains supplementary material available at 10.1186/s13244-021-01107-1.
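As a rough illustration of the kind of pipeline this abstract describes, the sketch below pairs a linear-kernel SVM with recursive feature elimination (RFE) in scikit-learn. The data are synthetic stand-ins, and the split ratio and the choice of 20 retained features are assumptions for the example (ReliefF is not in scikit-learn, so only the SVM-RFE variant is shown); this is not the authors' code.

```python
# Sketch of an SVM + RFE radiomics pipeline on synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic "radiomics" matrix: 406 patients x 100 features, binary grade.
X, y = make_classification(n_samples=406, n_features=100, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# A linear-kernel SVM exposes coef_, which RFE uses to rank and prune
# features; here we (arbitrarily) keep 20 of the 100.
model = make_pipeline(
    StandardScaler(),
    RFE(SVC(kernel="linear"), n_features_to_select=20),
    SVC(kernel="linear", probability=True),
)
model.fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"test AUC: {auc:.3f}")
```

The same skeleton works for the SVM-LASSO variant by swapping the RFE step for `SelectFromModel` with an L1-penalised estimator.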
Affiliation(s)
- Yingjie Xv: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, No. 1 Youyi Road, Chongqing, 400016, Yuzhong, China; Department of Urology, The First Affiliated Hospital of Chongqing Medical University, No. 1 Youyi Road, Chongqing, 400016, Yuzhong, China
- Fajin Lv: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, No. 1 Youyi Road, Chongqing, 400016, Yuzhong, China
- Haoming Guo: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, No. 1 Youyi Road, Chongqing, 400016, Yuzhong, China
- Xiang Zhou: Department of Urology, The First Affiliated Hospital of Chongqing Medical University, No. 1 Youyi Road, Chongqing, 400016, Yuzhong, China
- Hao Tan: Department of Urology, The First Affiliated Hospital of Chongqing Medical University, No. 1 Youyi Road, Chongqing, 400016, Yuzhong, China
- Mingzhao Xiao: Department of Urology, The First Affiliated Hospital of Chongqing Medical University, No. 1 Youyi Road, Chongqing, 400016, Yuzhong, China
- Yineng Zheng: Department of Radiology, The First Affiliated Hospital of Chongqing Medical University, No. 1 Youyi Road, Chongqing, 400016, Yuzhong, China
20
Molnár V, Molnár A, Lakner Z, Tárnoki DL, Tárnoki ÁD, Jokkel Z, Szabó H, Dienes A, Angyal E, Németh F, Kunos L, Tamás L. Examination of the diaphragm in obstructive sleep apnea using ultrasound imaging. Sleep Breath 2021; 26:1333-1339. [PMID: 34478056] [PMCID: PMC9418095] [DOI: 10.1007/s11325-021-02472-3]
Abstract
PURPOSE The aim of this study was to analyze the effect of obstructive sleep apnea (OSA) on the ultrasound (US) features of the diaphragm and to determine if diaphragmatic US may be a useful screening tool for patients with possible OSA. METHODS Patients complaining of snoring were prospectively enrolled for overnight polygraphy using the ApneaLink Air device. Thickness and motion of the diaphragm during tidal and deep inspiration were measured. Logistic regression was used to assess parameters of the diaphragm associated with OSA. RESULTS Of 100 patients, 64 were defined as having OSA. Thicknesses of the left and right hemidiaphragms were significantly different between OSA and control groups. Using a combination of diaphragmatic dimensions, diaphragm dilation, age, sex, and BMI, we developed an algorithm that predicted the presence of OSA with 91% sensitivity and 81% specificity. CONCLUSION A combination of anthropometric measurements, demographic factors, and US imaging may be useful for screening patients for possible OSA. These findings need to be confirmed in larger sample sizes in different clinical settings.
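The abstract reports a logistic-regression-based screening algorithm evaluated by sensitivity and specificity. The sketch below shows that evaluation pattern on synthetic data; the four predictors are stand-ins for the study's variables (diaphragm dimensions, age, sex, BMI), and nothing here reproduces the authors' model.

```python
# Logistic-regression screening sketch with sensitivity/specificity readout.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 100
# Stand-in predictors (e.g. diaphragm thickness, age, sex, BMI).
X = rng.normal(size=(n, 4))
# Synthetic outcome loosely driven by the first two predictors.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

# Fixing labels=[0, 1] guarantees a 2x2 confusion matrix.
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te),
                                  labels=[0, 1]).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```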
Affiliation(s)
- Viktória Molnár: Department of Otolaryngology and Head and Neck Surgery, Semmelweis University, Szigony u. 36, H-1083, Budapest, Hungary
- András Molnár: Department of Otolaryngology and Head and Neck Surgery, Semmelweis University, Szigony u. 36, H-1083, Budapest, Hungary
- Zoltán Lakner: Faculty of Food Science, Szent István University, Budapest, Hungary
- Zsófia Jokkel: Medical Imaging Centre, Semmelweis University, Budapest, Hungary
- Helga Szabó: Medical Imaging Centre, Semmelweis University, Budapest, Hungary
- András Dienes: Medical Imaging Centre, Semmelweis University, Budapest, Hungary
- Emese Angyal: Department of Otolaryngology and Head and Neck Surgery, Semmelweis University, Szigony u. 36, H-1083, Budapest, Hungary
- Fruzsina Németh: Department of Otolaryngology and Head and Neck Surgery, Semmelweis University, Szigony u. 36, H-1083, Budapest, Hungary
- László Tamás: Department of Otolaryngology and Head and Neck Surgery, Semmelweis University, Szigony u. 36, H-1083, Budapest, Hungary
21
Shelmerdine SC, Arthurs OJ, Denniston A, Sebire NJ. Review of study reporting guidelines for clinical studies using artificial intelligence in healthcare. BMJ Health Care Inform 2021; 28:bmjhci-2021-100385. [PMID: 34426417] [PMCID: PMC8383863] [DOI: 10.1136/bmjhci-2021-100385]
Abstract
High-quality research is essential in guiding evidence-based care and should be reported in a way that is reproducible and transparent and that, where appropriate, provides sufficient detail for inclusion in future meta-analyses. Reporting guidelines for various study designs have been widely used for clinical (and preclinical) studies, consisting of checklists with a minimum set of points for inclusion. With the recent rise in the volume of research using artificial intelligence (AI), additional factors need to be evaluated that do not neatly conform to traditional reporting guidelines (eg, details relating to technical algorithm development). In this review, reporting guidelines are highlighted to promote awareness of the essential content required for studies evaluating AI interventions in healthcare. These include published and in-progress extensions to well-known reporting guidelines such as Standard Protocol Items: Recommendations for Interventional Trials-AI (study protocols), Consolidated Standards of Reporting Trials-AI (randomised controlled trials), Standards for Reporting of Diagnostic Accuracy Studies-AI (diagnostic accuracy studies) and Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis-AI (prediction model studies). Additionally, there are a number of guidelines that consider AI for health interventions more generally (eg, Checklist for Artificial Intelligence in Medical Imaging (CLAIM), minimum information (MI)-CLAIM, MI for Medical AI Reporting) or address a specific element such as the ‘learning curve’ (Developmental and Exploratory Clinical Investigation of Decision-AI). Economic evaluation of AI health interventions is not currently addressed and may benefit from extension of an existing guideline.
In the face of a rapid influx of studies of AI health interventions, reporting guidelines help ensure that investigators and those appraising studies consider the well-recognised elements of good study design and reporting while also adequately addressing the new challenges posed by AI-specific elements.
Affiliation(s)
- Owen J Arthurs: Radiology, Great Ormond Street Hospital NHS Foundation Trust, London, UK
- Alastair Denniston: Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK
- Neil J Sebire: Digital Research, Informatics and Virtual Environments Unit (DRIVE), London, UK
22
Wood DA, Kafiabadi S, Al Busaidi A, Guilhem EL, Lynch J, Townend MK, Montvila A, Kiik M, Siddiqui J, Gadapa N, Benger MD, Mazumder A, Barker G, Ourselin S, Cole JH, Booth TC. Deep learning to automate the labelling of head MRI datasets for computer vision applications. Eur Radiol 2021; 32:725-736. [PMID: 34286375] [PMCID: PMC8660736] [DOI: 10.1007/s00330-021-08132-0]
Abstract
Objectives The purpose of this study was to build a deep learning model to derive labels from neuroradiology reports and assign these to the corresponding examinations, overcoming a bottleneck to computer vision model development. Methods Reference-standard labels were generated by a team of neuroradiologists for model training and evaluation. Three thousand examinations were labelled for the presence or absence of any abnormality by manually scrutinising the corresponding radiology reports (‘reference-standard report labels’); a subset of these examinations (n = 250) were assigned ‘reference-standard image labels’ by interrogating the actual images. Separately, 2000 reports were labelled for the presence or absence of 7 specialised categories of abnormality (acute stroke, mass, atrophy, vascular abnormality, small vessel disease, white matter inflammation, encephalomalacia), with a subset of these examinations (n = 700) also assigned reference-standard image labels. A deep learning model was trained using labelled reports and validated in two ways: comparing predicted labels to (i) reference-standard report labels and (ii) reference-standard image labels. The area under the receiver operating characteristic curve (AUC-ROC) was used to quantify model performance. Accuracy, sensitivity, specificity, and F1 score were also calculated. Results Accurate classification (AUC-ROC > 0.95) was achieved for all categories when tested against reference-standard report labels. A drop in performance (ΔAUC-ROC > 0.02) was seen for three categories (atrophy, encephalomalacia, vascular) when tested against reference-standard image labels, highlighting discrepancies in the original reports. Once trained, the model assigned labels to 121,556 examinations in under 30 min. Conclusions Our model accurately classifies head MRI examinations, enabling automated dataset labelling for downstream computer vision applications. 
Key Points
• Deep learning is poised to revolutionise image recognition tasks in radiology; however, a barrier to clinical adoption is the difficulty of obtaining large labelled datasets for model training.
• We demonstrate a deep learning model which can derive labels from neuroradiology reports and assign these to the corresponding examinations at scale, facilitating the development of downstream computer vision models.
• We rigorously tested our model by comparing labels predicted on the basis of neuroradiology reports with two sets of reference-standard labels: (1) labels derived by manually scrutinising each radiology report and (2) labels derived by interrogating the actual images.
Supplementary Information The online version contains supplementary material available at 10.1007/s00330-021-08132-0.
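The study trains a deep model on report text; as a much simpler stand-in, the sketch below classifies toy "reports" as normal versus abnormal with TF-IDF plus logistic regression and scores the result with AUC-ROC, the metric used in the abstract. The report snippets and labels are invented for illustration only.

```python
# Toy report-labelling baseline: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline

reports = [
    "acute infarct in the left MCA territory",
    "large enhancing mass with surrounding oedema",
    "generalised volume loss in keeping with atrophy",
    "no acute intracranial abnormality",
    "normal study, no abnormality detected",
    "unremarkable appearances of the brain",
] * 10  # repeated so the toy model has something to fit
labels = [1, 1, 1, 0, 0, 0] * 10  # 1 = abnormal, 0 = normal

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(reports, labels)
scores = clf.predict_proba(reports)[:, 1]  # P(abnormal) per report
print(f"AUC-ROC on the toy set: {roc_auc_score(labels, scores):.2f}")
```

Once such a classifier is trained on report text, its predicted labels can be attached to the corresponding imaging examinations, which is the labelling-at-scale idea the paper pursues with a far stronger deep model.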
Affiliation(s)
- David A Wood: School of Biomedical Engineering & Imaging Sciences, Kings College London, Rayne Institute, 4th Floor, Lambeth Wing, London, SE1 7EH, UK
- Sina Kafiabadi: Department of Neuroradiology, Ruskin Wing, King's College Hospital NHS Foundation Trust, London, SE5 9RS, UK
- Aisha Al Busaidi: Department of Neuroradiology, Ruskin Wing, King's College Hospital NHS Foundation Trust, London, SE5 9RS, UK
- Emily L Guilhem: Department of Neuroradiology, Ruskin Wing, King's College Hospital NHS Foundation Trust, London, SE5 9RS, UK
- Jeremy Lynch: Department of Neuroradiology, Ruskin Wing, King's College Hospital NHS Foundation Trust, London, SE5 9RS, UK
- Antanas Montvila: Department of Neuroradiology, Ruskin Wing, King's College Hospital NHS Foundation Trust, London, SE5 9RS, UK; Hospital of Lithuanian University of Health Sciences, Kaunas Clinics, Kaunas, Lithuania
- Martin Kiik: School of Biomedical Engineering & Imaging Sciences, Kings College London, Rayne Institute, 4th Floor, Lambeth Wing, London, SE1 7EH, UK
- Juveria Siddiqui: Department of Neuroradiology, Ruskin Wing, King's College Hospital NHS Foundation Trust, London, SE5 9RS, UK
- Naveen Gadapa: Department of Neurology, Ruskin Wing, King's College Hospital NHS Foundation Trust, London, SE5 9RS, UK
- Matthew D Benger: Department of Neuroradiology, Ruskin Wing, King's College Hospital NHS Foundation Trust, London, SE5 9RS, UK
- Asif Mazumder: Guy's and St Thomas' NHS Foundation Trust, Westminster Bridge Road, London, SE1 7EH, UK
- Gareth Barker: Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, SE5 8AF, UK
- Sebastian Ourselin: School of Biomedical Engineering & Imaging Sciences, Kings College London, Rayne Institute, 4th Floor, Lambeth Wing, London, SE1 7EH, UK
- James H Cole: Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, SE5 8AF, UK; Centre for Medical Image Computing, Department of Computer Science, University College London, London, WC1V 6LJ, UK; Dementia Research Centre, University College London, London, WC1N 3BG, UK
- Thomas C Booth: School of Biomedical Engineering & Imaging Sciences, Kings College London, Rayne Institute, 4th Floor, Lambeth Wing, London, SE1 7EH, UK; Department of Neuroradiology, Ruskin Wing, King's College Hospital NHS Foundation Trust, London, SE5 9RS, UK
23
Song E, Ang L, Park JY, Jun EY, Kim KH, Jun J, Park S, Lee MS. A scoping review on biomedical journal peer review guides for reviewers. PLoS One 2021; 16:e0251440. [PMID: 34014958] [PMCID: PMC8136639] [DOI: 10.1371/journal.pone.0251440]
Abstract
Background Peer review is widely used in academic fields to assess a manuscript’s significance and to improve its quality for publication. This scoping review assesses existing peer review guidelines and/or checklists intended for reviewers of biomedical journals and provides an overview of those guidelines. Methods PubMed, Embase, and Allied and Complementary Medicine (AMED) databases were searched for review guidelines from inception until February 19, 2021, with no restriction on date or article type. In addition to the database search, websites of journal publishers and non-publishers were hand-searched. Results Of 14,633 database publication records and 24 website records, 65 publications and 14 websites met the inclusion criteria (78 records in total). From the included records, a total of 1,811 checklist items were identified. Items related to Methods, Results, and Discussion were the most frequently addressed in reviewer guidelines. Conclusion This review identified existing literature on peer review guidelines and provided an overview of the current state of peer review guides. Review guidelines varied across journals and publishers, which calls for more research to determine the need for uniform review standards for transparent and standardized peer review. Protocol registration The protocol for this study has been registered at Research Registry (www.researchregistry.com): reviewregistry881.
Affiliation(s)
- Eunhye Song: Global Strategy Division, Korea Institute of Oriental Medicine, Daejeon, Korea
- Lin Ang: Clinical Medicine Division, Korea Institute of Oriental Medicine, Daejeon, Korea; Korean Convergence Medicine, University of Science and Technology, Daejeon, Korea
- Ji-Yeun Park: College of Korean Medicine, Daejeon University, Daejeon, Korea
- Eun-Young Jun: Department of Nursing, Daejeon University, Daejeon, Korea
- Kyeong Han Kim: Department of Preventive Medicine, College of Korean Medicine, Woosuk University, Jeonju, Republic of Korea
- Jihee Jun: Clinical Medicine Division, Korea Institute of Oriental Medicine, Daejeon, Korea
- Sunju Park: Department of Preventive Medicine, College of Korean Medicine, Daejeon University, Daejeon, Korea. * E-mail: (SP); (MSL)
- Myeong Soo Lee: Clinical Medicine Division, Korea Institute of Oriental Medicine, Daejeon, Korea; Korean Convergence Medicine, University of Science and Technology, Daejeon, Korea. * E-mail: (SP); (MSL)
24
Ciuman RR. Understanding Human Body Maintenance, Protection, and Modification: Antibodies, Genetics, Stem Cells and Connected Artificial Intelligence Applications—Where Are We? Health (London) 2021. [DOI: 10.4236/health.2021.137059]
25
Sollini M, Bartoli F, Marciano A, Zanca R, Slart RHJA, Erba PA. Artificial intelligence and hybrid imaging: the best match for personalized medicine in oncology. Eur J Hybrid Imaging 2020; 4:24. [PMID: 34191197] [PMCID: PMC8218106] [DOI: 10.1186/s41824-020-00094-8]
Abstract
Artificial intelligence (AI) refers to a field of computer science aimed at performing tasks that typically require human intelligence. Currently, AI is recognized in the broader technology radar as one of the five key technologies that stand out for their wide-ranging applications and impact on communities, companies, business, and the value-chain framework alike. However, AI in medical imaging is at an early phase of development, and there are still hurdles to overcome related to reliability, user confidence, and adoption. The present narrative review aims to provide an overview of AI-based approaches (distributed learning, statistical learning, computer-aided diagnosis and detection systems, fully automated image analysis tools, natural language processing) in oncological hybrid medical imaging with respect to clinical tasks (detection, contouring and segmentation, prediction of histology and tumor stage, prediction of mutational status and molecular therapy targets, prediction of treatment response, and outcome). AI-based approaches are briefly described according to their purpose, and lung cancer, one of the malignancies most extensively studied with hybrid medical imaging, is used as an illustrative scenario. Finally, we discuss clinical challenges and open issues, including ethics, validation strategies, effective data-sharing methods, regulatory hurdles, educational resources, and strategies to facilitate interaction among different stakeholders. Some of the major changes in medical imaging will come from the application of AI to workflows and protocols, eventually resulting in improved patient management and quality of life; overall, several time-consuming tasks could be automated.
Machine learning algorithms and neural networks will permit sophisticated analyses, resulting not only in major improvements in disease characterization through imaging but also in the integration of multi-omics data (i.e., derived from pathology, genomics, proteomics, and demographics) for multi-dimensional disease profiling. Nevertheless, to accelerate the transition from theory to practice, a sustainable development plan is necessary, one that considers the multi-dimensional interactions between professionals, technology, industry, markets, policy, culture, and civil society, directed by a mindset that allows talent to thrive.
Affiliation(s)
- Martina Sollini: Department of Biomedical Sciences, Humanitas University, Pieve Emanuele (Milan), Italy; Humanitas Clinical and Research Center, Rozzano (Milan), Italy
- Francesco Bartoli: Regional Center of Nuclear Medicine, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Andrea Marciano: Regional Center of Nuclear Medicine, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Roberta Zanca: Regional Center of Nuclear Medicine, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy
- Riemer H J A Slart: University Medical Center Groningen, Medical Imaging Center, University of Groningen, Groningen, The Netherlands; Faculty of Science and Technology, Biomedical Photonic Imaging, University of Twente, Enschede, The Netherlands
- Paola A Erba: Regional Center of Nuclear Medicine, Department of Translational Research and New Technologies in Medicine and Surgery, University of Pisa, Pisa, Italy; University Medical Center Groningen, Medical Imaging Center, University of Groningen, Groningen, The Netherlands