451. Quality of Multicenter Studies Using MRI Radiomics for Diagnosing Clinically Significant Prostate Cancer: A Systematic Review. Life (Basel) 2022; 12:946. [PMID: 35888036] [PMCID: PMC9324573] [DOI: 10.3390/life12070946]
Abstract
Background: Reproducibility and generalization are major challenges for clinically significant prostate cancer (PCa) modeling using MRI radiomics. Multicenter data seem indispensable for addressing these challenges, but the quality of such studies is currently unknown. The aim of this study was to systematically review the quality of multicenter studies on MRI radiomics for diagnosing clinically significant PCa. Methods: This systematic review followed the 2020 Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist. Multicenter studies investigating the value of MRI radiomics for the diagnosis of clinically significant PCa were included. Quality was assessed using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) and the radiomics quality score (RQS). CLAIM consists of 42 equally weighted items covering different elements of good-practice AI in medical imaging; RQS awards up to 36 points over 16 items related to good-practice radiomics. Final CLAIM and RQS scores were percentage-based, allowing a total quality score defined as the average of the two. Results: Four studies were included. The average total CLAIM score was 74.6% and the average RQS was 52.8%, giving an average total quality score (CLAIM + RQS) of 63.7%. Conclusions: Very few multicenter radiomics PCa classification studies have been performed, and the existing ones are of poor to average quality. The number of good multicenter studies might increase through data sharing, preferably prospective, and closer attention to documenting reproducibility and clinical utility.
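The scoring arithmetic described above is simple enough to show directly. The following minimal sketch (my illustration, not code from the review) converts raw CLAIM and RQS scores to percentages and averages them; the reported 74.6% and 52.8% averages reproduce the 63.7% total quality score.

```python
# Minimal sketch (my illustration, not code from the review): percentage-based
# CLAIM and RQS scores and their average, the "total quality score".

def percentage_score(points_awarded: float, points_possible: float) -> float:
    """Convert a raw checklist score to a percentage."""
    return 100.0 * points_awarded / points_possible

def total_quality_score(claim_items_met: int, rqs_points: float) -> float:
    """Average of CLAIM (42 equally weighted items) and RQS (scored out of 36).
    Note: RQS can in reality go below zero; clamping at zero is an assumption."""
    claim_pct = percentage_score(claim_items_met, 42)
    rqs_pct = percentage_score(max(rqs_points, 0), 36)
    return (claim_pct + rqs_pct) / 2

# The averages reported in the review reproduce the total quality score:
print(round((74.6 + 52.8) / 2, 1))  # 63.7
```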
452. Cejudo Grano de Oro JE, Koch PJ, Krois J, Garcia Cantu Ros A, Patel J, Meyer-Lueckel H, Schwendicke F. Hyperparameter Tuning and Automatic Image Augmentation for Deep Learning-Based Angle Classification on Intraoral Photographs-A Retrospective Study. Diagnostics (Basel) 2022; 12:1526. [PMID: 35885432] [PMCID: PMC9319779] [DOI: 10.3390/diagnostics12071526]
Abstract
We aimed to assess the effects of hyperparameter tuning and automatic image augmentation on deep learning-based classification of orthodontic photographs along the Angle classes. Our dataset consisted of 605 images of Angle class I, 1038 of class II, and 408 of class III. We trained ResNet architectures for classification using different combinations of learning rate and batch size. For the best combination, we compared the performance of models trained with and without automatic augmentation using 10-fold cross-validation. We used GradCAM to increase explainability; it provides heat maps of the salient areas relevant for the classification. The best combination of hyperparameters yielded a model with an accuracy of 0.63-0.64, F1-score of 0.61-0.62, sensitivity of 0.59-0.65, and specificity of 0.80-0.81. For all metrics, there was an ideal corridor of batch size and learning rate combinations, with smaller learning rates associated with higher classification performance. Overall, performance was highest for learning rates of around 1-3 × 10⁻⁶ and a batch size of eight. Automatic augmentation further improved all metrics by 5-10%. Misclassifications were most common between Angle classes I and II. GradCAM showed that the models also employed features relevant for human classification. The choice of hyperparameters drastically affected the performance of deep learning models in orthodontics, and automatic image augmentation resulted in further improvements. Our models managed to classify the dental sagittal occlusion along Angle classes based on digital intraoral photos.
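As a rough illustration of the search procedure described above (not the authors' code; the grids and the stub evaluation function are assumptions), a grid over learning rate and batch size with cross-validated scoring might look like this:

```python
# Illustrative sketch (assumptions, not the authors' code): searching a grid of
# learning rates and batch sizes for a ResNet classifier, as described above.
from itertools import product

import numpy as np

learning_rates = [1e-6, 3e-6, 1e-5, 3e-5, 1e-4]   # assumed grid
batch_sizes = [8, 16, 32, 64]                      # assumed grid

def cross_validated_f1(lr: float, batch_size: int, n_folds: int = 10) -> float:
    """Placeholder for training a ResNet with these hyperparameters and
    returning the mean F1-score over n_folds cross-validation folds."""
    rng = np.random.default_rng(hash((lr, batch_size)) % 2**32)
    return float(rng.uniform(0.4, 0.65))  # stand-in for a real training run

results = {
    (lr, bs): cross_validated_f1(lr, bs)
    for lr, bs in product(learning_rates, batch_sizes)
}
best = max(results, key=results.get)
print(f"best lr={best[0]:g}, batch size={best[1]}, F1={results[best]:.2f}")
```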
Affiliation(s)
- José Eduardo Cejudo Grano de Oro
  - Department of Oral Diagnostics, Digital Health and Health Services Research, Charité Center for Oral Health Sciences CC3, Charité–Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin), Aßmannshauser Straße 4-6, 14197 Berlin, Germany
- Petra Julia Koch
  - Department of Orthodontics and Dentofacial Orthopedics, Charité Center for Oral Health Sciences CC3, Charité–Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin), Aßmannshauser Straße 4-6, 14197 Berlin, Germany
- Joachim Krois
  - Department of Oral Diagnostics, Digital Health and Health Services Research, Charité Center for Oral Health Sciences CC3, Charité–Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin), Aßmannshauser Straße 4-6, 14197 Berlin, Germany
- Anselmo Garcia Cantu Ros
  - Department of Oral Diagnostics, Digital Health and Health Services Research, Charité Center for Oral Health Sciences CC3, Charité–Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin), Aßmannshauser Straße 4-6, 14197 Berlin, Germany
- Jay Patel
  - Health Informatics, Department of Health Services Administrations and Policy, Temple University College of Public Health, Philadelphia, PA 19122, USA
- Hendrik Meyer-Lueckel
  - Department of Restorative Preventive and Pediatric Dentistry, zmk Bern, University of Bern, 3012 Bern, Switzerland
- Falk Schwendicke
  - Department of Oral Diagnostics, Digital Health and Health Services Research, Charité Center for Oral Health Sciences CC3, Charité–Universitätsmedizin Berlin (Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin), Aßmannshauser Straße 4-6, 14197 Berlin, Germany
453. Saw SN, Ng KH. Current challenges of implementing artificial intelligence in medical imaging. Phys Med 2022; 100:12-17. [PMID: 35714523] [DOI: 10.1016/j.ejmp.2022.06.003]
Abstract
The idea of using artificial intelligence (AI) in medical practice has gained vast interest due to its potential to revolutionise healthcare systems. However, few AI algorithms are actually utilised, owing to uncertainties in these systems and a long list of ethical and legal concerns. This paper provides an overview of current AI challenges in medical imaging, with the ultimate aim of fostering better and more effective communication among stakeholders and encouraging AI technology development. We identify four main challenges in implementing AI in medical imaging, illustrated by the consequences that have followed when they were not mitigated. The first is the creation of robust AI algorithms that are fair, trustworthy and transparent. The second is data governance: best practices in data sharing must be established to promote trust and protect patient privacy. Third, stakeholders such as governments, technology companies and hospital management should reach a consensus on trustworthy AI policies; the fourth challenge is establishing regulatory frameworks that support, encourage and spur innovation in digital AI healthcare technology. Lastly, we discuss the efforts of organisations such as the World Health Organisation (WHO), American College of Radiology (ACR), European Society of Radiology (ESR) and Radiological Society of North America (RSNA), which are already actively pursuing ethical development of AI. These efforts should eventually overcome the hurdles, making the deployment of AI-driven healthcare applications in clinical practice a reality and leading to better healthcare services and outcomes.
Affiliation(s)
- Shier Nee Saw
  - Department of Artificial Intelligence, Faculty of Computer Science and Information Technology, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
- Kwan Hoong Ng
  - Department of Biomedical Imaging, Universiti Malaya, 50603 Kuala Lumpur, Malaysia
  - Department of Medical Imaging and Radiological Sciences, College of Health Sciences, Kaohsiung Medical University, Kaohsiung, Taiwan
454. Klontzas ME, Karantanas AH. Research in Musculoskeletal Radiology: Setting Goals and Strategic Directions. Semin Musculoskelet Radiol 2022; 26:354-358. [PMID: 35654100] [DOI: 10.1055/s-0042-1748319]
Abstract
The future of musculoskeletal (MSK) radiology is being built on research developments in the field. Over the past decade, MSK imaging research has been dominated by advancements in molecular imaging biomarkers, artificial intelligence, radiomics, and novel high-resolution equipment. Adequate preparation of trainees and specialists will ensure that current and future leaders can embrace and critically appraise technological developments, stay up to date on clinical developments such as the use of artificial tissues, define research directions, and actively participate in and lead multidisciplinary research. This review presents an overview of the current MSK research landscape and proposes tangible future goals and strategic directions that will fortify the future of MSK radiology.
Affiliation(s)
- Michail E Klontzas
  - Department of Medical Imaging, University Hospital of Heraklion, Crete, Greece
  - Advanced Hybrid Imaging Systems, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece
  - Department of Radiology, School of Medicine, University of Crete, Heraklion, Greece
- Apostolos H Karantanas
  - Department of Medical Imaging, University Hospital of Heraklion, Crete, Greece
  - Advanced Hybrid Imaging Systems, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece
  - Department of Radiology, School of Medicine, University of Crete, Heraklion, Greece
455. Shelmerdine SC, White RD, Liu H, Arthurs OJ, Sebire NJ. Artificial intelligence for radiological paediatric fracture assessment: a systematic review. Insights Imaging 2022; 13:94. [PMID: 35657439] [PMCID: PMC9166920] [DOI: 10.1186/s13244-022-01234-3]
Abstract
BACKGROUND The majority of research and commercial efforts have focussed on the use of artificial intelligence (AI) for fracture detection in adults, despite the greater long-term clinical and medicolegal implications of missed fractures in children. The objective of this study was to assess the available literature regarding the diagnostic performance of AI tools for paediatric fracture assessment on imaging and, where available, how this compares with the performance of human readers. MATERIALS AND METHODS MEDLINE, Embase and Cochrane Library databases were queried for studies published between 1 January 2011 and 2021 using terms related to 'fracture', 'artificial intelligence', 'imaging' and 'children'. Risk of bias was assessed using a modified QUADAS-2 tool. Descriptive statistics for diagnostic accuracies were collated. RESULTS Nine eligible articles from 362 publications were included, with most (8/9) evaluating fracture detection on radiographs and the elbow being the most common body part. Nearly all articles used data derived from a single institution and used deep learning methodology, with only a few (2/9) performing external validation. Accuracy rates generated by AI ranged from 88.8% to 97.9%. In two of the three articles where AI performance was compared with human readers, sensitivity rates for AI were marginally higher, but this was not statistically significant. CONCLUSIONS Wide heterogeneity in the literature and limited information on algorithm performance on external datasets make it difficult to understand how such tools may generalise to a wider paediatric population. Further research using a multicentric dataset with real-world evaluation would help to better understand the impact of these tools.
Affiliation(s)
- Susan C. Shelmerdine
  - Department of Clinical Radiology, Great Ormond Street Hospital for Children, London, UK
  - Great Ormond Street Hospital for Children, UCL Great Ormond Street Institute of Child Health, London, UK
  - Great Ormond Street Hospital NIHR Biomedical Research Centre, London, UK
  - Department of Clinical Radiology, St. George's Hospital, London, UK
- Richard D. White
  - Department of Radiology, University Hospital of Wales, Cardiff, UK
- Hantao Liu
  - School of Computer Science and Informatics, Cardiff University, Cardiff, UK
- Owen J. Arthurs
  - Department of Clinical Radiology, Great Ormond Street Hospital for Children, London, UK
  - Great Ormond Street Hospital for Children, UCL Great Ormond Street Institute of Child Health, London, UK
  - Great Ormond Street Hospital NIHR Biomedical Research Centre, London, UK
- Neil J. Sebire
  - Department of Clinical Radiology, Great Ormond Street Hospital for Children, London, UK
  - Great Ormond Street Hospital for Children, UCL Great Ormond Street Institute of Child Health, London, UK
  - Great Ormond Street Hospital NIHR Biomedical Research Centre, London, UK
456. Schellenberg M, Dreher KK, Holzwarth N, Isensee F, Reinke A, Schreck N, Seitel A, Tizabi MD, Maier-Hein L, Gröhl J. Semantic segmentation of multispectral photoacoustic images using deep learning. Photoacoustics 2022; 26:100341. [PMID: 35371919] [PMCID: PMC8968659] [DOI: 10.1016/j.pacs.2022.100341]
Abstract
Photoacoustic (PA) imaging has the potential to revolutionize functional medical imaging in healthcare due to the valuable information on tissue physiology contained in multispectral photoacoustic measurements. Clinical translation of the technology requires conversion of the high-dimensional acquired data into clinically relevant and interpretable information. In this work, we present a deep learning-based approach to semantic segmentation of multispectral photoacoustic images to facilitate image interpretability. Manually annotated photoacoustic and ultrasound imaging data are used as reference and enable the training of a deep learning-based segmentation algorithm in a supervised manner. Based on a validation study with experimentally acquired data from 16 healthy human volunteers, we show that automatic tissue segmentation can be used to create powerful analyses and visualizations of multispectral photoacoustic images. Due to the intuitive representation of high-dimensional information, such a preprocessing algorithm could be a valuable means to facilitate the clinical translation of photoacoustic imaging.
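A minimal sketch of the general idea follows, with the wavelength count, class count and the tiny stand-in network all assumed rather than taken from the paper (the authors used a U-Net-style model); the point is only that the spectral dimension becomes the input channels of a 2D segmentation network:

```python
# Minimal sketch (assumed architecture, not the authors' model): semantic
# segmentation of multispectral photoacoustic images, treating the spectral
# dimension as input channels so a 2D network maps wavelengths -> tissue classes.
import torch
import torch.nn as nn

N_WAVELENGTHS = 16   # assumed number of acquisition wavelengths
N_CLASSES = 6        # assumed number of tissue classes

model = nn.Sequential(                           # stand-in for a U-Net
    nn.Conv2d(N_WAVELENGTHS, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, N_CLASSES, 1),                 # per-pixel class logits
)

x = torch.randn(1, N_WAVELENGTHS, 128, 128)      # batch of multispectral images
logits = model(x)                                # (1, N_CLASSES, 128, 128)
segmentation = logits.argmax(dim=1)              # per-pixel tissue label map
print(segmentation.shape)                        # torch.Size([1, 128, 128])
```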
Affiliation(s)
- Melanie Schellenberg
  - Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
  - Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
  - HIDSS4Health - Helmholtz Information and Data Science School for Health, Heidelberg, Germany
- Kris K. Dreher
  - Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
  - Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
- Niklas Holzwarth
  - Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Fabian Isensee
  - HI Applied Computer Vision Lab, Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Annika Reinke
  - Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
  - Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
  - HI Applied Computer Vision Lab, Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Nicholas Schreck
  - Division of Biostatistics, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Alexander Seitel
  - Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Minu D. Tizabi
  - Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
- Lena Maier-Hein
  - Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
  - Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
  - HIDSS4Health - Helmholtz Information and Data Science School for Health, Heidelberg, Germany
  - HI Applied Computer Vision Lab, Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany
  - Medical Faculty, Heidelberg University, Heidelberg, Germany
- Janek Gröhl
  - Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), Heidelberg, Germany
457. Dana J, Venkatasamy A, Saviano A, Lupberger J, Hoshida Y, Vilgrain V, Nahon P, Reinhold C, Gallix B, Baumert TF. Conventional and artificial intelligence-based imaging for biomarker discovery in chronic liver disease. Hepatol Int 2022; 16:509-522. [PMID: 35138551] [PMCID: PMC9177703] [DOI: 10.1007/s12072-022-10303-0]
Abstract
Chronic liver diseases, resulting from chronic injuries of various causes, lead to cirrhosis with life-threatening complications including liver failure, portal hypertension, and hepatocellular carcinoma. A key unmet medical need is robust non-invasive biomarkers to predict patient outcome, stratify patients by risk of disease progression, and monitor response to emerging therapies. Quantitative imaging biomarkers have already been developed, for instance, liver elastography for staging fibrosis or proton density fat fraction on magnetic resonance imaging for liver steatosis. Yet major improvements in image acquisition and analysis are still required to accurately characterize the liver parenchyma, monitor its changes, and predict adverse evolution across disease progression. Artificial intelligence has the potential to augment the exploitation of massive multi-parametric data to extract valuable information and achieve precision medicine. Machine learning algorithms have been developed to assess non-invasively certain histological characteristics of chronic liver diseases, including fibrosis and steatosis. Although still at an early stage of development, artificial intelligence-based imaging biomarkers provide novel opportunities to predict the risk of progression from early-stage chronic liver disease toward cirrhosis-related complications, with the ultimate perspective of precision medicine. This review provides an overview of emerging quantitative imaging techniques and the application of artificial intelligence for biomarker discovery in chronic liver disease.
Affiliation(s)
- Jérémy Dana
  - Institut de Recherche sur les Maladies Virales et Hépatiques, Institut National de la Santé et de la Recherche Médicale (Inserm), U1110, 3 Rue Koeberlé, 67000 Strasbourg, France
  - Institut Hospitalo-Universitaire (IHU), Strasbourg, France
  - Université de Strasbourg, Strasbourg, France
  - Department of Diagnostic Radiology, McGill University, Montreal, Canada
- Aïna Venkatasamy
  - Institut Hospitalo-Universitaire (IHU), Strasbourg, France
  - Streinth Lab (Stress Response and Innovative Therapies), Inserm UMR_S 1113 IRFAC, Interface Recherche Fondamentale et Appliquée à la Cancérologie, 3 Avenue Moliere, Strasbourg, France
  - Department of Radiology Medical Physics, Faculty of Medicine, Medical Center-University of Freiburg, University of Freiburg, Killianstrasse 5a, 79106 Freiburg, Germany
- Antonio Saviano
  - Institut de Recherche sur les Maladies Virales et Hépatiques, Institut National de la Santé et de la Recherche Médicale (Inserm), U1110, 3 Rue Koeberlé, 67000 Strasbourg, France
  - Université de Strasbourg, Strasbourg, France
  - Pôle Hépato-Digestif, Hôpitaux Universitaires de Strasbourg, Strasbourg, France
- Joachim Lupberger
  - Institut de Recherche sur les Maladies Virales et Hépatiques, Institut National de la Santé et de la Recherche Médicale (Inserm), U1110, 3 Rue Koeberlé, 67000 Strasbourg, France
  - Université de Strasbourg, Strasbourg, France
- Yujin Hoshida
  - Liver Tumor Translational Research Program, Division of Digestive and Liver Diseases, Department of Internal Medicine, Simmons Comprehensive Cancer Center, University of Texas Southwestern Medical Center, Dallas, USA
- Valérie Vilgrain
  - Radiology Department, Hôpital Beaujon, Université de Paris, CRI, INSERM 1149, AP-HP.Nord, Paris, France
- Pierre Nahon
  - Liver Unit, Assistance Publique-Hôpitaux de Paris (AP-HP), Hôpitaux Universitaires Paris Seine Saint-Denis, Bobigny, France
  - Université Sorbonne Paris Nord, 93000 Bobigny, France
  - Inserm, UMR-1138 "Functional Genomics of Solid Tumors", Paris, France
- Caroline Reinhold
  - Department of Diagnostic Radiology, McGill University, Montreal, Canada
  - Augmented Intelligence and Precision Health Laboratory, Research Institute of McGill University Health Centre, Montreal, Canada
  - Montreal Imaging Experts Inc., Montreal, Canada
- Benoit Gallix
  - Institut Hospitalo-Universitaire (IHU), Strasbourg, France
  - Université de Strasbourg, Strasbourg, France
  - Department of Diagnostic Radiology, McGill University, Montreal, Canada
- Thomas F Baumert
  - Institut de Recherche sur les Maladies Virales et Hépatiques, Institut National de la Santé et de la Recherche Médicale (Inserm), U1110, 3 Rue Koeberlé, 67000 Strasbourg, France
  - Université de Strasbourg, Strasbourg, France
  - Pôle Hépato-Digestif, Hôpitaux Universitaires de Strasbourg, Strasbourg, France
458. Predictive values of AI-based triage model in suboptimal CT pulmonary angiography. Clin Imaging 2022; 86:25-30. [DOI: 10.1016/j.clinimag.2022.03.011]
459. Nan Y, Del Ser J, Walsh S, Schönlieb C, Roberts M, Selby I, Howard K, Owen J, Neville J, Guiot J, Ernst B, Pastor A, Alberich-Bayarri A, Menzel MI, Walsh S, Vos W, Flerin N, Charbonnier JP, van Rikxoort E, Chatterjee A, Woodruff H, Lambin P, Cerdá-Alberich L, Martí-Bonmatí L, Herrera F, Yang G. Data harmonisation for information fusion in digital healthcare: A state-of-the-art systematic review, meta-analysis and future research directions. Inf Fusion 2022; 82:99-122. [PMID: 35664012] [PMCID: PMC8878813] [DOI: 10.1016/j.inffus.2022.01.001]
Abstract
Removing the bias and variance of multicentre data has always been a challenge in large-scale digital healthcare studies, which require the ability to integrate clinical features extracted from data acquired by different scanners and protocols to improve stability and robustness. Previous studies have described various computational approaches to fuse single-modality multicentre datasets. However, these surveys rarely focused on evaluation metrics and lacked a checklist for computational data harmonisation studies. In this systematic review, we summarise the computational data harmonisation approaches for multi-modality data in the digital healthcare field, including harmonisation strategies and evaluation metrics based on different theories. In addition, we propose a comprehensive checklist that summarises common practices for data harmonisation studies, to guide researchers in reporting their findings more effectively. Finally, we propose flowcharts for methodology and metric selection and survey the limitations of different methods to inform future research.
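As a concrete, deliberately simple illustration of what harmonisation means here (my example, not from the review), the sketch below removes per-centre location and scale from one feature; published pipelines typically use dedicated ComBat-style implementations with empirical-Bayes shrinkage instead:

```python
# Illustrative sketch (my example, not the review's code): a simple
# location-scale harmonisation of one radiomic feature across centres,
# in the spirit of the ComBat-style methods surveyed in this field.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "centre": ["A"] * 50 + ["B"] * 50,
    # centre B acquires with a different scanner: shifted mean, larger spread
    "feature": np.concatenate([rng.normal(0, 1, 50), rng.normal(2, 3, 50)]),
})

# Remove per-centre location and scale, then restore the pooled ones
grouped = df.groupby("centre")["feature"]
z = (df["feature"] - grouped.transform("mean")) / grouped.transform("std")
df["feature_harmonised"] = z * df["feature"].std() + df["feature"].mean()

print(df.groupby("centre")["feature_harmonised"].agg(["mean", "std"]))
```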
Affiliation(s)
- Yang Nan
  - National Heart and Lung Institute, Imperial College London, London, UK
- Javier Del Ser
  - Department of Communications Engineering, University of the Basque Country UPV/EHU, Bilbao 48013, Spain
  - TECNALIA, Basque Research and Technology Alliance (BRTA), Derio 48160, Spain
- Simon Walsh
  - National Heart and Lung Institute, Imperial College London, London, UK
- Carola Schönlieb
  - Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, UK
- Michael Roberts
  - Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge, UK
  - Oncology R&D, AstraZeneca, Cambridge, UK
- Ian Selby
  - Department of Radiology, University of Cambridge, Cambridge, UK
- Kit Howard
  - Clinical Data Interchange Standards Consortium, Austin, TX, United States of America
- John Owen
  - Clinical Data Interchange Standards Consortium, Austin, TX, United States of America
- Jon Neville
  - Clinical Data Interchange Standards Consortium, Austin, TX, United States of America
- Julien Guiot
  - University Hospital of Liège (CHU Liège), Respiratory Medicine Department, Liège, Belgium
  - University of Liège, Department of Clinical Sciences, Pneumology-Allergology, Liège, Belgium
- Benoit Ernst
  - University Hospital of Liège (CHU Liège), Respiratory Medicine Department, Liège, Belgium
  - University of Liège, Department of Clinical Sciences, Pneumology-Allergology, Liège, Belgium
- Marion I. Menzel
  - Technische Hochschule Ingolstadt, Ingolstadt, Germany
  - GE Healthcare GmbH, Munich, Germany
- Sean Walsh
  - Radiomics (Oncoradiomics SA), Liège, Belgium
- Wim Vos
  - Radiomics (Oncoradiomics SA), Liège, Belgium
- Nina Flerin
  - Radiomics (Oncoradiomics SA), Liège, Belgium
- Avishek Chatterjee
  - Department of Precision Medicine, Maastricht University, Maastricht, The Netherlands
- Henry Woodruff
  - Department of Precision Medicine, Maastricht University, Maastricht, The Netherlands
- Philippe Lambin
  - Department of Precision Medicine, Maastricht University, Maastricht, The Netherlands
- Leonor Cerdá-Alberich
  - Medical Imaging Department, Hospital Universitari i Politècnic La Fe, Valencia, Spain
- Luis Martí-Bonmatí
  - Medical Imaging Department, Hospital Universitari i Politècnic La Fe, Valencia, Spain
- Francisco Herrera
  - Department of Computer Sciences and Artificial Intelligence, Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, Spain
  - Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Guang Yang
  - National Heart and Lung Institute, Imperial College London, London, UK
  - Cardiovascular Research Centre, Royal Brompton Hospital, London, UK
  - School of Biomedical Engineering & Imaging Sciences, King's College London, London, UK
460. Kouli O, Hassane A, Badran D, Kouli T, Hossain-Ibrahim K, Steele JD. Automated brain tumour identification using magnetic resonance imaging: a systematic review and meta-analysis. Neurooncol Adv 2022; 4:vdac081. [PMID: 35769411] [PMCID: PMC9234754] [DOI: 10.1093/noajnl/vdac081]
Abstract
Background Automated brain tumor identification facilitates diagnosis and treatment planning. We evaluate the performance of traditional machine learning (TML) and deep learning (DL) in brain tumor detection and segmentation, using MRI. Methods A systematic literature search from January 2000 to May 8, 2021 was conducted. Study quality was assessed using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). Detection meta-analysis was performed using a unified hierarchical model. Segmentation studies were evaluated using a random effects model. Sensitivity analysis was performed for externally validated studies. Results Of 224 studies included in the systematic review, 46 segmentation and 38 detection studies were eligible for meta-analysis. In detection, DL achieved a lower false positive rate compared to TML; 0.018 (95% CI, 0.011 to 0.028) and 0.048 (0.032 to 0.072) (P < .001), respectively. In segmentation, DL had a higher dice similarity coefficient (DSC), particularly for tumor core (TC); 0.80 (0.77 to 0.83) and 0.63 (0.56 to 0.71) (P < .001), persisting on sensitivity analysis. Both manual and automated whole tumor (WT) segmentation had “good” (DSC ≥ 0.70) performance. Manual TC segmentation was superior to automated; 0.78 (0.69 to 0.86) and 0.64 (0.53 to 0.74) (P = .014), respectively. Only 30% of studies reported external validation. Conclusions The comparable performance of automated to manual WT segmentation supports its integration into clinical practice. However, manual outperformance for sub-compartmental segmentation highlights the need for further development of automated methods in this area. Compared to TML, DL provided superior performance for detection and sub-compartmental segmentation. Improvements in the quality and design of studies, including external validation, are required for the interpretability and generalizability of automated models.
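For readers unfamiliar with the random-effects machinery applied to the segmentation studies, the sketch below pools hypothetical per-study DSC estimates with a standard DerSimonian-Laird estimator; the numbers are invented, and the review's unified hierarchical detection model is more involved:

```python
# Worked sketch (my example, not the authors' analysis): pooling dice similarity
# coefficients (DSC) across studies with a DerSimonian-Laird random-effects model.
import numpy as np

dsc = np.array([0.78, 0.82, 0.75, 0.85, 0.80])        # hypothetical study DSCs
var = np.array([0.001, 0.002, 0.0015, 0.001, 0.002])  # hypothetical variances

w_fixed = 1 / var
mu_fixed = np.sum(w_fixed * dsc) / np.sum(w_fixed)

# Between-study heterogeneity (DerSimonian-Laird estimator)
q = np.sum(w_fixed * (dsc - mu_fixed) ** 2)
df = len(dsc) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

w_random = 1 / (var + tau2)
mu_random = np.sum(w_random * dsc) / np.sum(w_random)
se = np.sqrt(1 / np.sum(w_random))
print(f"pooled DSC = {mu_random:.3f} (95% CI {mu_random - 1.96*se:.3f}"
      f" to {mu_random + 1.96*se:.3f})")
```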
Affiliation(s)
- Omar Kouli
  - School of Medicine, University of Dundee, Dundee, UK
  - NHS Greater Glasgow and Clyde, Dundee, UK
- Tasnim Kouli
  - School of Medicine, University of Dundee, Dundee, UK
- J Douglas Steele
  - Division of Imaging Science and Technology, School of Medicine, University of Dundee, UK
461. Machine Learning for the Detection and Segmentation of Benign Tumors of the Central Nervous System: A Systematic Review. Cancers (Basel) 2022; 14:2676. [PMID: 35681655] [PMCID: PMC9179850] [DOI: 10.3390/cancers14112676]
Abstract
Simple Summary Machine learning in radiology of the central nervous system has seen many interesting publications in the past few years. Since the focus has largely been on malignant tumors such as brain metastases and high-grade gliomas, we conducted a systematic review on benign tumors to summarize what has been published and where there might be gaps in the research. We found several studies that report good results, but the descriptions of methodologies could be improved to enable better comparisons and assessment of biases. Abstract Objectives: To summarize the available literature on using machine learning (ML) for the detection and segmentation of benign tumors of the central nervous system (CNS) and to assess the adherence of published ML/diagnostic accuracy studies to best practice. Methods: The MEDLINE database was searched for the use of ML in patients with any benign tumor of the CNS, and the records were screened according to PRISMA guidelines. Results: Eleven retrospective studies focusing on meningioma (n = 4), vestibular schwannoma (n = 4), pituitary adenoma (n = 2) and spinal schwannoma (n = 1) were included. The majority of studies attempted segmentation. Links to repositories containing code were provided in two manuscripts, and no manuscripts shared imaging data. Only one study used an external test set, which raises the question as to whether some of the good performances that have been reported were caused by overfitting and may not generalize to data from other institutions. Conclusions: Using ML for detecting and segmenting benign brain tumors is still in its infancy. Stronger adherence to ML best practices could facilitate easier comparisons between studies and contribute to the development of models that are more likely to one day be used in clinical practice.
462. Merkaj S, Bahar RC, Zeevi T, Lin M, Ikuta I, Bousabarah K, Cassinelli Petersen GI, Staib L, Payabvash S, Mongan JT, Cha S, Aboian MS. Machine Learning Tools for Image-Based Glioma Grading and the Quality of Their Reporting: Challenges and Opportunities. Cancers (Basel) 2022; 14:2623. [PMID: 35681603] [PMCID: PMC9179416] [DOI: 10.3390/cancers14112623]
Abstract
Technological innovation has enabled the development of machine learning (ML) tools that aim to improve the practice of radiologists. In the last decade, ML applications to neuro-oncology have expanded significantly, with the pre-operative prediction of glioma grade using medical imaging as a specific area of interest. We introduce the subject of ML models for glioma grade prediction by remarking upon the models reported in the literature as well as by describing their characteristic developmental workflow and widely used classifier algorithms. The challenges facing these models, including data sources, external validation, and glioma grade classification methods, are highlighted. We also discuss the quality of how these models are reported, explore the present and future of reporting guidelines and risk of bias tools, and provide suggestions for the reporting of prospective works. Finally, this review offers insights into next steps that the field of ML glioma grade prediction can take to facilitate clinical implementation.
Affiliation(s)
- Sara Merkaj
  - Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, P.O. Box 208042, New Haven, CT 06520, USA
  - Department of Neurosurgery, University of Ulm, Albert-Einstein-Allee 23, 89081 Ulm, Germany
- Ryan C. Bahar
  - Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, P.O. Box 208042, New Haven, CT 06520, USA
- Tal Zeevi
  - Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, P.O. Box 208042, New Haven, CT 06520, USA
- MingDe Lin
  - Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, P.O. Box 208042, New Haven, CT 06520, USA
  - Visage Imaging, Inc., 12625 High Bluff Dr, Suite 205, San Diego, CA 92130, USA
- Ichiro Ikuta
  - Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, P.O. Box 208042, New Haven, CT 06520, USA
- Gabriel I. Cassinelli Petersen
  - Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, P.O. Box 208042, New Haven, CT 06520, USA
- Lawrence Staib
  - Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, P.O. Box 208042, New Haven, CT 06520, USA
- Seyedmehdi Payabvash
  - Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, P.O. Box 208042, New Haven, CT 06520, USA
- John T. Mongan
  - Department of Radiology and Biomedical Imaging, University of California San Francisco, 505 Parnassus Ave., San Francisco, CA 94143, USA
- Soonmee Cha
  - Department of Radiology and Biomedical Imaging, University of California San Francisco, 505 Parnassus Ave., San Francisco, CA 94143, USA
- Mariam S. Aboian
  - Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, P.O. Box 208042, New Haven, CT 06520, USA
  - Correspondence: Tel.: +650-285-7577
463. Meningioma Radiomics: At the Nexus of Imaging, Pathology and Biomolecular Characterization. Cancers (Basel) 2022; 14:2605. [PMID: 35681585] [PMCID: PMC9179263] [DOI: 10.3390/cancers14112605]
Abstract
Simple Summary Meningiomas are typically benign, common extra-axial tumors of the central nervous system. Routine clinical assessment by radiologists presents some limitations regarding long-term patient outcome prediction and risk stratification. Given the exponential growth of interest in radiomics and artificial intelligence in medical imaging, numerous studies have evaluated the potential of these tools in the setting of meningioma imaging. These were aimed at the development of reliable and reproducible models based on quantitative data. Although several limitations have yet to be overcome for their routine use in clinical practice, their innovative potential is evident. In this review, we present a wide-ranging overview of radiomics and artificial intelligence applications in meningioma imaging. Abstract Meningiomas are the most common extra-axial tumors of the central nervous system (CNS). Even though recurrence is uncommon after surgery and most meningiomas are benign, an aggressive behavior may still be exhibited in some cases. Although the diagnosis can be made by radiologists, typically with magnetic resonance imaging, qualitative analysis has some limitations in regard to outcome prediction and risk stratification. The acquisition of this information could help the referring clinician in the decision-making process and selection of the appropriate treatment. Following the increased attention and potential of radiomics and artificial intelligence in the healthcare domain, including oncological imaging, researchers have investigated their use over the years to overcome the current limitations of imaging. The aim of these new tools is the replacement of subjective and, therefore, potentially variable medical image analysis by more objective quantitative data, using computational algorithms. Although radiomics has not yet fully entered clinical practice, its potential for the detection, diagnostic, and prognostic characterization of tumors is evident. In this review, we present a wide-ranging overview of radiomics and artificial intelligence applications in meningioma imaging.
464. Rohrer C, Krois J, Patel J, Meyer-Lueckel H, Rodrigues JA, Schwendicke F. Segmentation of Dental Restorations on Panoramic Radiographs Using Deep Learning. Diagnostics (Basel) 2022; 12:1316. [PMID: 35741125] [PMCID: PMC9221749] [DOI: 10.3390/diagnostics12061316]
Abstract
Convolutional Neural Networks (CNNs) such as U-Net have been widely used for medical image segmentation. Dental restorations are prominent features of dental radiographs. Applying U-Net to the panoramic image is challenging, as the shape, size and frequency of different restoration types vary. We hypothesized that models trained on smaller, equally spaced rectangular image crops (tiles) of the panoramic radiograph would outperform models trained on the full image. A total of 1781 panoramic radiographs were annotated pixelwise for fillings, crowns, and root canal fillings by dental experts. We used different numbers of tiles for our experiments. Five-times-repeated three-fold cross-validation was used for model evaluation. Training with more tiles improved model performance and accelerated convergence. The F1-score for the full panoramic image was 0.7, compared with 0.83, 0.92 and 0.95 for 6, 10 and 20 tiles, respectively. For root canal fillings, which are small, cone-shaped features that appear less frequently on the radiographs, the performance improvement was even higher (+294%). Training on tiles and pooling the results thereafter improved pixelwise classification performance and reduced the time to model convergence for segmenting dental restorations. Segmentation of panoramic radiographs is biased towards more frequent and extended classes; tiling may help to overcome this bias and increase accuracy.
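The tiling idea is easy to make concrete. A minimal sketch follows (my illustration, not the authors' pipeline; the 2 × 5 layout for the 10-tile setting is an assumption, as is everything else not stated in the abstract):

```python
# Minimal sketch (my illustration): cutting a panoramic radiograph into equally
# spaced rectangular tiles for training; per-tile predictions would later be
# pooled back into a full-size segmentation mask.
import numpy as np

def make_tiles(image: np.ndarray, n_rows: int, n_cols: int) -> list[np.ndarray]:
    """Split a 2D image into n_rows * n_cols equally sized rectangular tiles."""
    h, w = image.shape
    th, tw = h // n_rows, w // n_cols
    return [
        image[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
        for r in range(n_rows)
        for c in range(n_cols)
    ]

panoramic = np.zeros((1024, 2048), dtype=np.float32)  # dummy radiograph
tiles = make_tiles(panoramic, n_rows=2, n_cols=5)     # 10 tiles; layout assumed
print(len(tiles), tiles[0].shape)                     # 10 (512, 409)
```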
Affiliation(s)
- Csaba Rohrer
  - Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin Berlin, 10117 Berlin, Germany
- Joachim Krois
  - Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin Berlin, 10117 Berlin, Germany
  - ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, 1202 Geneva, Switzerland
- Jay Patel
  - Informatics, Department of Health Services Administrations and Policy, Temple University College of Public Health, Philadelphia, PA 19140, USA
- Jonas Almeida Rodrigues
  - Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin Berlin, 10117 Berlin, Germany
  - Surgery and Orthopedics, UFRGS, Porto Alegre 90040-060, Brazil
- Falk Schwendicke
  - Oral Diagnostics, Digital Health and Health Services Research, Charité–Universitätsmedizin Berlin, 10117 Berlin, Germany
  - ITU/WHO Focus Group on AI for Health, Topic Group Dental Diagnostics and Digital Dentistry, 1202 Geneva, Switzerland
465. Validation of a machine learning software tool for automated large vessel occlusion detection in patients with suspected acute stroke. Neuroradiology 2022; 64:2245-2255. [PMID: 35606655] [DOI: 10.1007/s00234-022-02978-x]
Abstract
PURPOSE CT angiography (CTA) is the imaging standard for large vessel occlusion (LVO) detection in patients with acute ischemic stroke. StrokeSENS LVO is an automated tool that utilizes a machine learning algorithm to identify anterior large vessel occlusions (LVO) on CTA. The aim of this study was to test the algorithm's performance in LVO detection in an independent dataset. METHODS A total of 400 studies (217 LVO, 183 other/no occlusion) read by expert consensus were used for retrospective analysis. LVO was defined as intracranial internal carotid artery (ICA) occlusion or M1 middle cerebral artery (MCA) occlusion. Software performance in detecting anterior LVO was evaluated using receiver operating characteristic (ROC) analysis, reporting area under the curve (AUC), sensitivity, and specificity. Subgroup analyses evaluated whether performance differed between the M1 MCA and ICA occlusion sites, and across data stratified by patient age, sex, and CTA acquisition characteristics (slice thickness, kilovoltage tube peak, and scanner manufacturer). RESULTS AUC, sensitivity, and specificity were 0.939, 0.894, and 0.874, respectively, in the full cohort; 0.927, 0.857, and 0.874 in the ICA occlusion cohort; and 0.945, 0.914, and 0.874 in the M1 MCA occlusion cohort. Performance did not differ significantly by patient age, sex, or CTA acquisition characteristics. CONCLUSION The StrokeSENS LVO machine learning algorithm detects anterior LVO with high accuracy from a range of scans in a large dataset.
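For orientation, evaluating such a classifier against an expert-consensus reference reduces to standard ROC machinery. A hedged sketch with synthetic scores follows (not the vendor's evaluation code; the 0.5 operating threshold is an assumption):

```python
# Illustrative sketch (my example): computing AUC, sensitivity and specificity
# for a binary LVO classifier from per-study scores and reference labels.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, 400)              # 0 = no/other occlusion, 1 = LVO
scores = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, 400), 0, 1)  # dummy model

auc = roc_auc_score(y_true, scores)
y_pred = (scores >= 0.5).astype(int)          # assumed operating threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.3f} sensitivity={sensitivity:.3f} specificity={specificity:.3f}")
```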
466. Bhandari A, Marwah R, Smith J, Nguyen D, Bhatti A, Lim CP, Lasocki A. Machine learning imaging applications in the differentiation of true tumour progression from treatment-related effects in brain tumours: A systematic review and meta-analysis. J Med Imaging Radiat Oncol 2022; 66:781-797. [PMID: 35599360] [PMCID: PMC9545346] [DOI: 10.1111/1754-9485.13436]
Abstract
Introduction Chemotherapy and radiotherapy can produce treatment-related effects, which may mimic tumour progression. Advances in Artificial Intelligence (AI) offer the potential to provide a more consistent approach to diagnosis with improved accuracy. The aim of this study was to determine the efficacy of machine learning models in differentiating treatment-related effects (TRE), consisting of pseudoprogression (PsP) and radiation necrosis (RN), from true tumour progression (TTP). Methods The systematic review was conducted in accordance with PRISMA-DTA guidelines. Searches were performed on the PubMed, Scopus, Embase, Medline (Ovid) and ProQuest databases. Quality was assessed according to the PROBAST and CLAIM criteria. Twenty-five original full-text journal articles were eligible for inclusion. Results For gliomas: PsP versus TTP (16 studies, highest AUC = 0.98), RN versus TTP (4 studies, highest AUC = 0.9988) and TRE versus TTP (3 studies, highest AUC = 0.94). For metastases: RN versus TTP (2 studies, highest AUC = 0.81). A meta-analysis was performed on 9 studies in the glioma PsP versus TTP group using STATA and reported a high sensitivity of 95.2% (95% CI: 86.6-98.4%) and specificity of 82.4% (95% CI: 67.0-91.6%). Conclusion TRE can be distinguished from TTP with good performance using machine learning-based imaging models. Issues remain with the quality of articles and the integration of models into clinical practice. Future studies should focus on the external validation of models and utilize standardized criteria such as CLAIM to allow for consistency in reporting.
Affiliation(s)
- Abhishta Bhandari
  - Townsville University Hospital, Townsville, Queensland, Australia
  - College of Medicine and Dentistry, James Cook University, Townsville, Queensland, Australia
- Ravi Marwah
  - Townsville University Hospital, Townsville, Queensland, Australia
- Justin Smith
  - Townsville University Hospital, Townsville, Queensland, Australia
  - College of Medicine and Dentistry, James Cook University, Townsville, Queensland, Australia
- Duy Nguyen
  - Institute for Intelligent Systems Research and Innovation, Deakin University, Melbourne, Victoria, Australia
- Asim Bhatti
  - Department of Cancer Imaging, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia
- Chee Peng Lim
  - Institute for Intelligent Systems Research and Innovation, Deakin University, Melbourne, Victoria, Australia
- Arian Lasocki
  - Department of Cancer Imaging, Peter MacCallum Cancer Centre, Melbourne, Victoria, Australia
  - Sir Peter MacCallum Department of Oncology, The University of Melbourne, Melbourne, Victoria, Australia
467. Wang C, Shao J, Xu X, Yi L, Wang G, Bai C, Guo J, He Y, Zhang L, Yi Z, Li W. DeepLN: A Multi-Task AI Tool to Predict the Imaging Characteristics, Malignancy and Pathological Subtypes in CT-Detected Pulmonary Nodules. Front Oncol 2022; 12:683792. [PMID: 35646699] [PMCID: PMC9130467] [DOI: 10.3389/fonc.2022.683792]
Abstract
Objectives Distinguishing malignant pulmonary nodules from benign ones on computed tomography (CT) images can be time-consuming but is significant in routine clinical management. The advent of artificial intelligence (AI) has provided an opportunity to improve the accuracy of cancer risk prediction. Methods A total of 8950 detected pulmonary nodules with complete pathological results were retrospectively enrolled. The different radiological manifestations were identified mainly as various nodule densities and morphological features. These nodules were classified into benign and malignant groups, both of which were subdivided into finer specific pathological types. We propose a deep convolutional neural network for the assessment of lung nodules, named DeepLN, to identify the radiological features and predict the pathological subtypes of pulmonary nodules. Results In terms of density, the areas under the receiver operating characteristic curves (AUCs) of DeepLN were 0.9707 (95% confidence interval, CI: 0.9645-0.9765), 0.7789 (95% CI: 0.7569-0.7995), and 0.8950 (95% CI: 0.8822-0.9088) for pure ground-glass opacity (pGGO), mixed ground-glass opacity (mGGO) and solid nodules, respectively. For the morphological features, the AUCs were 0.8347 (95% CI: 0.8193-0.8499) and 0.9074 (95% CI: 0.8834-0.9314) for spiculation and lung cavity, respectively. For the identification of malignant nodules, the DeepLN algorithm achieved an AUC of 0.8503 (95% CI: 0.8319-0.8681) in the test set. For predicting the pathological subtypes in the test set, the multi-task AUCs were 0.8841 (95% CI: 0.8567-0.9083) for benign tumors, 0.8265 (95% CI: 0.8004-0.8499) for inflammation, and 0.8022 (95% CI: 0.7616-0.8445) for other benign lesions, while in the malignant group the AUCs were 0.8675 (95% CI: 0.8525-0.8813) for lung adenocarcinoma (LUAD), 0.8792 (95% CI: 0.8640-0.8950) for squamous cell carcinoma (LUSC), and 0.7404 (95% CI: 0.7031-0.7782) for other malignant lesions. Conclusions DeepLN, based on a deep learning algorithm, showed competitive performance in predicting the imaging characteristics, malignancy and pathological subtypes from non-invasive CT images, and thus has great potential for use in the routine clinical workflow.
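The multi-task design implied by the abstract, one shared encoder feeding separate prediction heads, can be sketched as follows (the architecture, feature width and class counts are assumptions, not the released DeepLN model):

```python
# Schematic sketch (architecture assumed from the abstract, not DeepLN itself):
# a shared CNN backbone with separate heads for nodule density, morphology,
# malignancy and pathological subtype - one output (and loss) per task.
import torch
import torch.nn as nn

class MultiTaskNoduleNet(nn.Module):
    def __init__(self, n_features: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(    # stand-in for a deep CNN encoder
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(16 * 4 * 4, n_features), nn.ReLU(),
        )
        self.density = nn.Linear(n_features, 3)     # pGGO / mGGO / solid
        self.morphology = nn.Linear(n_features, 2)  # spiculation, cavity (multi-label)
        self.malignancy = nn.Linear(n_features, 1)  # benign vs malignant
        self.subtype = nn.Linear(n_features, 6)     # assumed six pathological subtypes

    def forward(self, x):
        h = self.backbone(x)
        return {
            "density": self.density(h),
            "morphology": self.morphology(h),
            "malignancy": self.malignancy(h),
            "subtype": self.subtype(h),
        }

out = MultiTaskNoduleNet()(torch.randn(2, 1, 64, 64))  # batch of nodule patches
print({k: tuple(v.shape) for k, v in out.items()})
```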
Affiliation(s)
- Chengdi Wang
  - Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, West China Hospital, West China School of Medicine, Sichuan University, Chengdu, China
- Jun Shao
  - Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, West China Hospital, West China School of Medicine, Sichuan University, Chengdu, China
- Xiuyuan Xu
  - Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Le Yi
  - Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Gang Wang
  - Precision Medicine Center, West China Hospital, Sichuan University, Chengdu, China
- Congchen Bai
  - Department of Medical Informatics, West China Hospital, Sichuan University, Chengdu, China
- Jixiang Guo
  - Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Yanqi He
  - Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, West China Hospital, West China School of Medicine, Sichuan University, Chengdu, China
- Lei Zhang
  - Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Zhang Yi
  - Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Weimin Li
  - Department of Respiratory and Critical Care Medicine, Med-X Center for Manufacturing, West China Hospital, West China School of Medicine, Sichuan University, Chengdu, China
468. Gong B, Soyer P, McInnes MDF, Patlas MN. Elements of a Good Radiology Artificial Intelligence Paper. Can Assoc Radiol J 2022; 74:231-233. [PMID: 35535439] [DOI: 10.1177/08465371221101195]
Affiliation(s)
- Bo Gong
  - Department of Radiology, University of British Columbia, Vancouver, BC, Canada
- Philippe Soyer
  - Department of Body and Interventional Imaging, Hôpital Cochin, Université Paris Centre, Paris, France
- Matthew D. F. McInnes
  - Department of Radiology, University of Ottawa, Ottawa, ON, Canada
  - Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
469. An endorectal ultrasound-based radiomics signature for preoperative prediction of lymphovascular invasion of rectal cancer. BMC Med Imaging 2022; 22:84. [PMID: 35538520] [PMCID: PMC9087958] [DOI: 10.1186/s12880-022-00813-6]
Abstract
Objective To investigate whether radiomics based on ultrasound images can predict lymphovascular invasion (LVI) of rectal cancer (RC) before surgery. Methods A total of 203 patients with RC were enrolled retrospectively and divided into a training set (143 patients) and a validation set (60 patients). We extracted radiomic features from the largest grayscale ultrasound image of the RC lesion. The intraclass correlation coefficient (ICC) was applied to test the repeatability of the radiomic features. The least absolute shrinkage and selection operator (LASSO) was used to reduce the data dimension and select significant features. Logistic regression (LR) analysis was applied to establish the radiomics model. The receiver operating characteristic (ROC) curve, calibration curve, and decision curve analysis (DCA) were used to evaluate the comprehensive performance of the model. Results Among the 203 patients, 33 (16.3%) were LVI positive and 170 (83.7%) were LVI negative. A total of 5350 (90.1%) radiomic features had ICC values of ≥ 0.75 and were subsequently subjected to hypothesis testing and LASSO regression dimension-reduction analysis. Finally, 15 selected features were used to construct the radiomics model. The area under the curve (AUC) was 0.849 for the training set and 0.781 for the validation set. The calibration curve indicated that the radiomics model had good calibration, and DCA demonstrated that the model had clinical benefits. Conclusion The proposed endorectal ultrasound-based radiomics model has the potential to predict LVI preoperatively in RC.
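The LASSO-then-logistic-regression workflow described in the Methods is a common radiomics pattern. A self-contained sketch on synthetic data follows (my example; the authors' feature set, ICC filtering and hyperparameters are not reproduced):

```python
# Minimal sketch (my example, not the authors' pipeline): LASSO-based feature
# selection followed by logistic regression for LVI prediction, on dummy data.
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(203, 500))                  # 203 patients, 500 features
y = (X[:, :5].sum(axis=1) + rng.normal(size=203) > 0).astype(int)  # dummy LVI

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=60, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_train)
lasso = LassoCV(cv=5, random_state=0).fit(scaler.transform(X_train), y_train)
selected = np.flatnonzero(lasso.coef_)           # features surviving LASSO
clf = LogisticRegression(max_iter=1000).fit(
    scaler.transform(X_train)[:, selected], y_train)

auc = roc_auc_score(y_test, clf.predict_proba(
    scaler.transform(X_test)[:, selected])[:, 1])
print(f"{selected.size} features selected, validation AUC = {auc:.3f}")
```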
Collapse
|
470
|
Yu AC, Mohajer B, Eng J. External Validation of Deep Learning Algorithms for Radiologic Diagnosis: A Systematic Review. Radiol Artif Intell 2022; 4:e210064. [PMID: 35652114 DOI: 10.1148/ryai.210064] [Citation(s) in RCA: 126] [Impact Index Per Article: 42.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2021] [Revised: 03/09/2022] [Accepted: 04/12/2022] [Indexed: 01/17/2023]
Abstract
Purpose To assess generalizability of published deep learning (DL) algorithms for radiologic diagnosis. Materials and Methods In this systematic review, the PubMed database was searched for peer-reviewed studies of DL algorithms for image-based radiologic diagnosis that included external validation, published from January 1, 2015, through April 1, 2021. Studies using nonimaging features or incorporating non-DL methods for feature extraction or classification were excluded. Two reviewers independently evaluated studies for inclusion, and any discrepancies were resolved by consensus. Internal and external performance measures and pertinent study characteristics were extracted, and relationships among these data were examined using nonparametric statistics. Results Eighty-three studies reporting 86 algorithms were included. The vast majority (70 of 86, 81%) reported at least some decrease in external performance compared with internal performance, with nearly half (42 of 86, 49%) reporting at least a modest decrease (≥0.05 on the unit scale) and nearly a quarter (21 of 86, 24%) reporting a substantial decrease (≥0.10 on the unit scale). No study characteristics were found to be associated with the difference between internal and external performance. Conclusion Among published external validation studies of DL algorithms for image-based radiologic diagnosis, the vast majority demonstrated diminished algorithm performance on the external dataset, with some reporting a substantial performance decrease. Keywords: Meta-Analysis, Computer Applications-Detection/Diagnosis, Neural Networks, Computer Applications-General (Informatics), Epidemiology, Technology Assessment, Diagnosis, Informatics. Supplemental material is available for this article. © RSNA, 2022.
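The unit-scale thresholds used above (a decrease of ≥0.05 counted as modest, ≥0.10 as substantial) amount to a simple tally over paired internal/external results. A sketch with made-up AUC pairs:

```python
# Tally performance-decrease categories over (internal, external) AUC pairs.
# The pairs below are invented for demonstration only.
pairs = [(0.95, 0.93), (0.91, 0.84), (0.88, 0.76)]

decreases = [internal - external for internal, external in pairs]
any_decrease = sum(d > 0 for d in decreases)
modest = sum(d >= 0.05 for d in decreases)       # "modest" threshold
substantial = sum(d >= 0.10 for d in decreases)  # "substantial" threshold
print(any_decrease, modest, substantial)          # -> 3 2 1
```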
Collapse
Affiliation(s)
- Alice C Yu
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 1800 Orleans St, Baltimore, MD 21287
| | - Bahram Mohajer
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 1800 Orleans St, Baltimore, MD 21287
| | - John Eng
- Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, 1800 Orleans St, Baltimore, MD 21287
| |
Collapse
|
471
|
Shamrat FMJM, Azam S, Karim A, Islam R, Tasnim Z, Ghosh P, De Boer F. LungNet22: A Fine-Tuned Model for Multiclass Classification and Prediction of Lung Disease Using X-ray Images. J Pers Med 2022; 12:jpm12050680. [PMID: 35629103 PMCID: PMC9143659 DOI: 10.3390/jpm12050680] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Revised: 04/01/2022] [Accepted: 04/20/2022] [Indexed: 12/29/2022] Open
Abstract
In recent years, lung disease has increased manyfold, causing millions of casualties annually. To combat the crisis, an efficient, reliable, and affordable lung disease diagnosis technique has become indispensable. In this study, a multiclass classification of lung disease from frontal chest X-ray imaging using a fine-tuned CNN model is proposed. The classification covers 10 classes: nine lung disease classes, namely COVID-19, Effusion, Tuberculosis, Pneumonia, Lung Opacity, Mass, Nodule, Pneumothorax, and Pulmonary Fibrosis, along with the Normal class. The dataset is a collective dataset gathered from multiple sources. After pre-processing and balancing the dataset with eight augmentation techniques, a total of 80,000 X-ray images were fed to the model for classification purposes. Initially, eight pre-trained CNN models, AlexNet, GoogLeNet, InceptionV3, MobileNetV2, VGG16, ResNet50, DenseNet121, and EfficientNetB7, were employed on the dataset. Among these, VGG16 achieved the highest accuracy at 92.95%. To further improve the classification accuracy, LungNet22 was constructed upon the primary structure of the VGG16 model. An ablation study was used to determine the different hyper-parameters. Using the Adam optimizer, the proposed model achieved a commendable accuracy of 98.89%. To verify the performance of the model, several performance metrics, including the ROC curve and the AUC values, were computed as well.
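Building a classifier on the primary structure of a pre-trained VGG16, as described above, typically means freezing the convolutional base and adding a new classification head. A minimal Keras sketch under that assumption (the actual LungNet22 layers and hyper-parameters are not reproduced here):

```python
# Sketch of fine-tuning a pre-trained VGG16 for 10-class chest X-ray
# classification. Head layers and learning rate are illustrative choices.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional base for initial training

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),  # 10 classes incl. Normal
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```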
Collapse
Affiliation(s)
- F. M. Javed Mehedi Shamrat
- Department of Software Engineering, Daffodil International University, Dhaka 1207, Bangladesh; (F.M.J.M.S.); (Z.T.)
| | - Sami Azam
- College of Engineering, IT and Environment, Charles Darwin University, Casuarina, NT 0909, Australia; (A.K.); (F.D.B.)
- Correspondence:
| | - Asif Karim
- College of Engineering, IT and Environment, Charles Darwin University, Casuarina, NT 0909, Australia; (A.K.); (F.D.B.)
| | - Rakibul Islam
- Department of Computer Science and Engineering, Daffodil International University, Dhaka 1207, Bangladesh;
| | - Zarrin Tasnim
- Department of Software Engineering, Daffodil International University, Dhaka 1207, Bangladesh; (F.M.J.M.S.); (Z.T.)
| | - Pronab Ghosh
- Department of Computer Science (CS), Lakehead University, 955 Oliver Rd, Thunder Bay, ON P7B 5E1, Canada;
| | - Friso De Boer
- College of Engineering, IT and Environment, Charles Darwin University, Casuarina, NT 0909, Australia; (A.K.); (F.D.B.)
| |
Collapse
|
472
|
The impact of radiomics for human papillomavirus status prediction in oropharyngeal cancer: systematic review and radiomics quality score assessment. Neuroradiology 2022; 64:1639-1647. [PMID: 35459957 PMCID: PMC9271107 DOI: 10.1007/s00234-022-02959-0] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Accepted: 04/07/2022] [Indexed: 11/19/2022]
Abstract
Purpose
Human papillomavirus (HPV) status assessment is crucial for decision making in oropharyngeal cancer patients. In recent years, several articles have investigated the possible role of radiomics in distinguishing HPV-positive from HPV-negative neoplasms. The aim of this review was to perform a systematic quality assessment of the radiomic studies published on this topic. Methods Radiomics studies on HPV status prediction in oropharyngeal cancer patients were selected. The Radiomics Quality Score (RQS) was assessed by three readers to evaluate their methodological quality. In addition, possible correlations between RQS% and journal type, year of publication, impact factor, and journal rank were investigated. Results After the literature search, 19 articles were selected; their median RQS was 33% (range 0–42%). Overall, 16/19 studies included a well-documented imaging protocol, 13/19 demonstrated phenotypic differences, and all made a comparison with the current gold standard. No study included a public protocol, phantom study, or imaging at multiple time points. More than half (13/19) included feature selection, and only 2 incorporated non-radiomic features. The mean RQS was significantly higher in clinical journals. Conclusion Radiomics has been proposed for oropharyngeal cancer HPV status assessment, with promising results. However, these are supported by investigations of low methodological quality. Further studies with higher methodological quality, appropriate standardization, and greater attention to validation are necessary prior to clinical adoption. Supplementary Information The online version contains supplementary material available at 10.1007/s00234-022-02959-0.
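The percentage scores reported above follow from normalising the raw RQS points against the 36-point maximum; for example:

```python
# Convert raw RQS points (maximum 36) to the percentage scale used above.
def rqs_percent(raw_points: int, max_points: int = 36) -> float:
    return 100.0 * raw_points / max_points

print(f"{rqs_percent(12):.1f}%")  # a raw score of 12/36 -> 33.3%
```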
Collapse
|
473
|
Bahar RC, Merkaj S, Cassinelli Petersen GI, Tillmanns N, Subramanian H, Brim WR, Zeevi T, Staib L, Kazarian E, Lin M, Bousabarah K, Huttner AJ, Pala A, Payabvash S, Ivanidze J, Cui J, Malhotra A, Aboian MS. Machine Learning Models for Classifying High- and Low-Grade Gliomas: A Systematic Review and Quality of Reporting Analysis. Front Oncol 2022; 12:856231. [PMID: 35530302 PMCID: PMC9076130 DOI: 10.3389/fonc.2022.856231] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2022] [Accepted: 03/25/2022] [Indexed: 12/11/2022] Open
Abstract
Objectives To systematically review, assess the reporting quality of, and discuss improvement opportunities for studies describing machine learning (ML) models for glioma grade prediction. Methods This study followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses of Diagnostic Test Accuracy (PRISMA-DTA) statement. A systematic search was performed in September 2020, and repeated in January 2021, on four databases: Embase, Medline, CENTRAL, and Web of Science Core Collection. Publications were screened in Covidence, and reporting quality was measured against the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement. Descriptive statistics were calculated using GraphPad Prism 9. Results The search identified 11,727 candidate articles; 1,135 articles underwent full-text review, and 85 were included in the analysis. Of these, 67 (79%) were published between 2018 and 2021. The mean prediction accuracy of the best performing model in each study was 0.89 ± 0.09. The most common algorithm for conventional machine learning studies was the Support Vector Machine (mean accuracy: 0.90 ± 0.07) and for deep learning studies was the Convolutional Neural Network (mean accuracy: 0.91 ± 0.10). Only one study used both a large training dataset (n > 200) and external validation (accuracy: 0.72) for its model. The mean adherence rate to TRIPOD was 44.5% ± 11.1%, with poor reporting adherence for model performance (0%), abstracts (0%), and titles (0%). Conclusions The application of ML to glioma grade prediction has grown substantially, with ML model studies reporting high predictive accuracies but lacking essential metrics and characteristics for assessing model performance. Several domains, including generalizability and reproducibility, warrant further attention to enable translation into clinical practice. Systematic Review Registration PROSPERO, identifier CRD42020209938.
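The TRIPOD adherence rate quoted above is the share of applicable checklist items a study reports; a trivial sketch (the item counts are illustrative):

```python
# Adherence rate: reported items as a percentage of applicable TRIPOD items.
def tripod_adherence(items_reported: int, items_applicable: int) -> float:
    return 100.0 * items_reported / items_applicable

print(f"{tripod_adherence(16, 36):.1f}%")  # e.g., 16 of 36 items -> 44.4%
```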
Collapse
Affiliation(s)
- Ryan C. Bahar
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, United States
| | - Sara Merkaj
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, United States
- Department of Neurosurgery, University of Ulm, Ulm, Germany
| | | | - Niklas Tillmanns
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, United States
| | - Harry Subramanian
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, United States
| | - Waverly Rose Brim
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, United States
| | - Tal Zeevi
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, United States
| | - Lawrence Staib
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, United States
| | - Eve Kazarian
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, United States
| | - MingDe Lin
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, United States
- Visage Imaging, Inc., San Diego, CA, United States
| | | | - Anita J. Huttner
- Department of Pathology, Yale-New Haven Hospital, Yale School of Medicine, New Haven, CT, United States
| | - Andrej Pala
- Department of Neurosurgery, University of Ulm, Ulm, Germany
| | - Seyedmehdi Payabvash
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, United States
| | - Jana Ivanidze
- Department of Radiology, Weill Cornell Medicine, New York, NY, United States
| | - Jin Cui
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, United States
| | - Ajay Malhotra
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, United States
| | - Mariam S. Aboian
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, United States
- *Correspondence: Mariam S. Aboian,
| |
Collapse
|
474
|
Wilder-Smith AJ, Yang S, Weikert T, Bremerich J, Haaf P, Segeroth M, Ebert LC, Sauter A, Sexauer R. Automated Detection, Segmentation, and Classification of Pericardial Effusions on Chest CT Using a Deep Convolutional Neural Network. Diagnostics (Basel) 2022; 12:1045. [PMID: 35626201 PMCID: PMC9139725 DOI: 10.3390/diagnostics12051045] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Revised: 04/09/2022] [Accepted: 04/19/2022] [Indexed: 01/15/2023] Open
Abstract
Pericardial effusions (PEFs) are often missed on computed tomography (CT), which particularly affects the outcome of patients presenting with hemodynamic compromise. An automatic PEF detection, segmentation, and classification tool would expedite and improve CT-based PEF diagnosis. Using the radiology report (01/2016-01/2021), 258 CTs with PEF (206 with simple PEF, 52 with hemopericardium) and 258 CTs without PEF (134 with contrast, 124 non-enhanced) were identified. PEFs were manually 3D-segmented. A deep convolutional neural network (nnU-Net) was trained on 316 cases and separately tested on the remaining 200 cases and on 22 external post-mortem CTs. Inter-reader variability was tested on 40 CTs. PEF classification utilized the median Hounsfield unit from each prediction. The sensitivity and specificity for PEF detection were 97% (95% CI 91.48-99.38%) and 100.00% (95% CI 96.38-100.00%), and 89.74% and 83.61% for diagnosing hemopericardium (AUC 0.944, 95% CI 0.904-0.984). Model performance (Dice coefficient: 0.75 ± 0.01) was non-inferior to inter-reader agreement (0.69 ± 0.02) and was affected neither by contrast administration nor by alternative chest pathology (p > 0.05). External dataset testing yielded similar results. Our model reliably detects, segments, and classifies PEF on CT in a complex dataset, potentially serving as an alert tool whilst enhancing report quality. The model and corresponding datasets are publicly available.
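The classification step described above (taking the median Hounsfield unit within each predicted effusion mask) can be sketched as follows; the HU threshold here is an assumed illustrative value, not the paper's:

```python
# Classify a predicted effusion by the median HU inside its segmentation mask.
import numpy as np

def classify_effusion(ct_volume: np.ndarray, mask: np.ndarray,
                      hu_threshold: float = 30.0) -> str:
    """ct_volume: HU values; mask: boolean PEF segmentation (non-empty)."""
    median_hu = np.median(ct_volume[mask])
    return "hemopericardium" if median_hu >= hu_threshold else "simple effusion"
```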
Collapse
Affiliation(s)
- Adrian Jonathan Wilder-Smith
- Division of Research and Analytical Services, University Hospital Basel, 4031 Basel, Switzerland; (A.J.W.-S.); (S.Y.); (T.W.); (M.S.); (A.S.)
- Department of Radiology, University Hospital Basel, University of Basel, 4031 Basel, Switzerland;
| | - Shan Yang
- Division of Research and Analytical Services, University Hospital Basel, 4031 Basel, Switzerland; (A.J.W.-S.); (S.Y.); (T.W.); (M.S.); (A.S.)
| | - Thomas Weikert
- Division of Research and Analytical Services, University Hospital Basel, 4031 Basel, Switzerland; (A.J.W.-S.); (S.Y.); (T.W.); (M.S.); (A.S.)
- Department of Radiology, University Hospital Basel, University of Basel, 4031 Basel, Switzerland;
| | - Jens Bremerich
- Department of Radiology, University Hospital Basel, University of Basel, 4031 Basel, Switzerland;
| | - Philip Haaf
- Department of Cardiology, University Hospital Basel, University of Basel, 4031 Basel, Switzerland;
| | - Martin Segeroth
- Division of Research and Analytical Services, University Hospital Basel, 4031 Basel, Switzerland; (A.J.W.-S.); (S.Y.); (T.W.); (M.S.); (A.S.)
| | - Lars C. Ebert
- 3D Center Zurich, Institute of Forensic Medicine, University of Zürich, 8057 Zürich, Switzerland;
| | - Alexander Sauter
- Division of Research and Analytical Services, University Hospital Basel, 4031 Basel, Switzerland; (A.J.W.-S.); (S.Y.); (T.W.); (M.S.); (A.S.)
- Department of Radiology, University Hospital Basel, University of Basel, 4031 Basel, Switzerland;
| | - Raphael Sexauer
- Division of Research and Analytical Services, University Hospital Basel, 4031 Basel, Switzerland; (A.J.W.-S.); (S.Y.); (T.W.); (M.S.); (A.S.)
- Department of Radiology, University Hospital Basel, University of Basel, 4031 Basel, Switzerland;
| |
Collapse
|
475
|
|
476
|
Jemioło P, Storman D, Orzechowski P. Artificial Intelligence for COVID-19 Detection in Medical Imaging-Diagnostic Measures and Wasting-A Systematic Umbrella Review. J Clin Med 2022; 11:2054. [PMID: 35407664 PMCID: PMC9000039 DOI: 10.3390/jcm11072054] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Revised: 03/24/2022] [Accepted: 03/26/2022] [Indexed: 01/08/2023] Open
Abstract
The COVID-19 pandemic has sparked a barrage of primary research and reviews. We investigated the publishing process, the waste of time and resources, and the methodological quality of reviews on artificial intelligence techniques to diagnose COVID-19 in medical images. We searched nine databases from inception until 1 September 2020. Two independent reviewers performed all steps of identification, extraction, and methodological credibility assessment of records. Out of 725 records, 22 reviews analysing 165 primary studies met the inclusion criteria. This review covers 174,277 participants in total, including 19,170 diagnosed with COVID-19. The methodological credibility of all eligible studies was rated as critically low: 95% of papers had significant flaws in reporting quality. On average, 7.24 (range: 0-45) new papers were included in each subsequent review, and 14% of studies did not take any new paper into consideration. Almost three-quarters of the studies included fewer than 10% of available studies. More than half of the reviews did not comment on the previously published reviews at all. Much wasted time and many resources could be saved by referring to previous reviews and following methodological guidelines. Such information chaos is alarming. It is high time to draw conclusions from what we experienced and prepare for future pandemics.
Collapse
Affiliation(s)
- Paweł Jemioło
- AGH University of Science and Technology, Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, al. A. Mickiewicza 30, 30-059 Krakow, Poland;
| | - Dawid Storman
- Chair of Epidemiology and Preventive Medicine, Department of Hygiene and Dietetics, Jagiellonian University Medical College, ul. M. Kopernika 7, 31-034 Krakow, Poland;
| | - Patryk Orzechowski
- AGH University of Science and Technology, Faculty of Electrical Engineering, Automatics, Computer Science and Biomedical Engineering, al. A. Mickiewicza 30, 30-059 Krakow, Poland;
- Institute for Biomedical Informatics, University of Pennsylvania, 3700 Hamilton Walk, Philadelphia, PA 19104, USA
| |
Collapse
|
477
|
Automated prediction of the neoadjuvant chemotherapy response in osteosarcoma with deep learning and an MRI-based radiomics nomogram. Eur Radiol 2022; 32:6196-6206. [PMID: 35364712 DOI: 10.1007/s00330-022-08735-1] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2021] [Revised: 02/22/2022] [Accepted: 03/05/2022] [Indexed: 01/06/2023]
Abstract
OBJECTIVES To implement a pipeline to automatically segment the ROI and to use a nomogram integrating the MRI-based radiomics score and clinical variables to predict responses to neoadjuvant chemotherapy (NAC) in osteosarcoma patients. METHODS A total of 144 osteosarcoma patients treated with NAC were separated into training (n = 101) and test (n = 43) groups. After normalisation, ROIs on the preoperative MRI were segmented by a deep learning segmentation model trained with nnU-Net, using two independent manual segmentations as labels. Radiomics features were extracted using the automatically segmented ROIs. Feature selection was performed in the training dataset by five-fold cross-validation. The clinical, radiomics, and clinical-radiomics models were built using multiple machine learning methods with the same training dataset and validated with the same test dataset. The segmentation model was evaluated by the Dice coefficient. AUC and decision curve analysis (DCA) were employed to illustrate the model performance and clinical utility. RESULTS 36/144 (25.0%) patients were pathological good responders (pGRs) to NAC, while 108/144 (75.0%) were non-pGRs. The segmentation model achieved a Dice coefficient of 0.869 on the test dataset. The clinical and radiomics models reached AUCs of 0.636 (95% confidence interval [CI], 0.427-0.860) and 0.759 (95% CI, 0.589-0.937), respectively, in the test dataset. The clinical-radiomics nomogram demonstrated good discrimination, with an AUC of 0.793 (95% CI, 0.610-0.975) and an accuracy of 79.1%. The DCA suggested the clinical utility of the nomogram. CONCLUSION The automatic nomogram could be applied to aid radiologists in identifying pGRs to NAC. KEY POINTS • The nnU-Net trained with manual labels enables automatic ROI delineation of osteosarcoma. • A pipeline using automatic lesion segmentation followed by a radiomics classifier could aid the evaluation of the NAC response of osteosarcoma. • A predictive nomogram composed of clinical variables and an MRI-based radiomics score provides support for individualised treatment planning.
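The clinical-radiomics combination behind such a nomogram is, in essence, a logistic regression over the radiomics score and clinical variables; a minimal sketch with placeholder inputs (the clinical variables named in comments are assumptions):

```python
# Sketch of a clinical-radiomics model: logistic regression over the
# radiomics score plus clinical covariates, which a nomogram visualises.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
rad_score = rng.normal(size=(144, 1))    # radiomics score per patient
clinical = rng.normal(size=(144, 2))     # e.g., age, lab value (assumed)
X = np.hstack([rad_score, clinical])
y = rng.integers(0, 2, size=144)         # pGR vs non-pGR (placeholder)

nomogram_model = LogisticRegression().fit(X, y)
print(nomogram_model.coef_)              # weights behind the nomogram axes
```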
Collapse
|
478
|
Bedrikovetski S, Seow W, Kroon HM, Traeger L, Moore JW, Sammour T. Artificial intelligence for body composition and sarcopenia evaluation on computed tomography: A systematic review and meta-analysis. Eur J Radiol 2022; 149:110218. [DOI: 10.1016/j.ejrad.2022.110218] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2021] [Revised: 12/30/2021] [Accepted: 02/10/2022] [Indexed: 12/13/2022]
|
479
|
Bhalla D, Ramachandran A, Rangarajan K, Dhanakshirur R, Banerjee S, Arora C. Basic Principles of AI Simplified for a Medical Practitioner: Pearls and Pitfalls in Evaluating AI Algorithms. Curr Probl Diagn Radiol 2022; 52:47-55. [DOI: 10.1067/j.cpradiol.2022.04.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2022] [Revised: 03/14/2022] [Accepted: 04/18/2022] [Indexed: 11/22/2022]
|
480
|
Cassinelli Petersen GI, Shatalov J, Verma T, Brim WR, Subramanian H, Brackett A, Bahar RC, Merkaj S, Zeevi T, Staib LH, Cui J, Omuro A, Bronen RA, Malhotra A, Aboian MS. Machine Learning in Differentiating Gliomas from Primary CNS Lymphomas: A Systematic Review, Reporting Quality, and Risk of Bias Assessment. AJNR Am J Neuroradiol 2022; 43:526-533. [PMID: 35361577 PMCID: PMC8993193 DOI: 10.3174/ajnr.a7473] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2021] [Accepted: 01/31/2022] [Indexed: 12/12/2022]
Abstract
BACKGROUND Differentiating gliomas and primary CNS lymphoma represents a diagnostic challenge with important therapeutic ramifications. Biopsy is the preferred method of diagnosis, while MR imaging in conjunction with machine learning has shown promising results in differentiating these tumors. PURPOSE Our aim was to evaluate the quality of reporting and risk of bias in these studies, and to assess the data sets with which the machine learning classification algorithms were developed, the algorithms themselves, and their performance. DATA SOURCES Ovid EMBASE, Ovid MEDLINE, Cochrane Central Register of Controlled Trials, and the Web of Science Core Collection were searched according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. STUDY SELECTION From 11,727 studies, 23 peer-reviewed studies used machine learning to differentiate primary CNS lymphoma from gliomas in 2276 patients. DATA ANALYSIS Characteristics of data sets and machine learning algorithms were extracted. A meta-analysis on a subset of studies was performed. Reporting quality and risk of bias were assessed using the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement and the Prediction Model Study Risk Of Bias Assessment Tool. DATA SYNTHESIS The highest area under the receiver operating characteristic curve (0.961) and accuracy (91.2%) in external validation were achieved by logistic regression and support vector machine models using conventional radiomic features. A meta-analysis of machine learning classifiers using these features yielded a mean area under the receiver operating characteristic curve of 0.944 (95% CI, 0.898-0.99). The median TRIPOD score was 51.7%. The risk of bias was high for 16 studies. LIMITATIONS The exclusion of abstracts decreased the sensitivity in evaluating all published studies. The meta-analysis had high heterogeneity. CONCLUSIONS Machine learning-based methods of differentiating primary CNS lymphoma from gliomas have shown great potential, but most studies lack large, balanced data sets and external validation. Assessment of the studies identified multiple deficiencies in reporting quality and risk of bias. These factors reduce the generalizability and reproducibility of the findings.
Collapse
Affiliation(s)
- G I Cassinelli Petersen
- From the Department of Radiology and Biomedical Imaging (G.I.C.P., T.V., H.S., R.C.B., S.M., T.Z., L.H.S., J.C., R.A.B., A.M., M.S.A.)
- Universitätsmedizin Göttingen (G.I.C.P.), Göttingen, Germany
| | - J Shatalov
- University of Richmond (J.S.), Richmond, Virginia
| | - T Verma
- From the Department of Radiology and Biomedical Imaging (G.I.C.P., T.V., H.S., R.C.B., S.M., T.Z., L.H.S., J.C., R.A.B., A.M., M.S.A.)
- New York University (T.V.), New York, New York
| | - W R Brim
- Whiting School of Engineering (W.R.B.), Johns Hopkins University, Baltimore, Maryland
| | - H Subramanian
- From the Department of Radiology and Biomedical Imaging (G.I.C.P., T.V., H.S., R.C.B., S.M., T.Z., L.H.S., J.C., R.A.B., A.M., M.S.A.)
| | | | - R C Bahar
- From the Department of Radiology and Biomedical Imaging (G.I.C.P., T.V., H.S., R.C.B., S.M., T.Z., L.H.S., J.C., R.A.B., A.M., M.S.A.)
| | - S Merkaj
- From the Department of Radiology and Biomedical Imaging (G.I.C.P., T.V., H.S., R.C.B., S.M., T.Z., L.H.S., J.C., R.A.B., A.M., M.S.A.)
| | - T Zeevi
- From the Department of Radiology and Biomedical Imaging (G.I.C.P., T.V., H.S., R.C.B., S.M., T.Z., L.H.S., J.C., R.A.B., A.M., M.S.A.)
| | - L H Staib
- From the Department of Radiology and Biomedical Imaging (G.I.C.P., T.V., H.S., R.C.B., S.M., T.Z., L.H.S., J.C., R.A.B., A.M., M.S.A.)
| | - J Cui
- From the Department of Radiology and Biomedical Imaging (G.I.C.P., T.V., H.S., R.C.B., S.M., T.Z., L.H.S., J.C., R.A.B., A.M., M.S.A.)
| | - A Omuro
- Department of Neurology (A.O.), Yale School of Medicine, New Haven, Connecticut
| | - R A Bronen
- From the Department of Radiology and Biomedical Imaging (G.I.C.P., T.V., H.S., R.C.B., S.M., T.Z., L.H.S., J.C., R.A.B., A.M., M.S.A.)
| | - A Malhotra
- From the Department of Radiology and Biomedical Imaging (G.I.C.P., T.V., H.S., R.C.B., S.M., T.Z., L.H.S., J.C., R.A.B., A.M., M.S.A.)
| | - M S Aboian
- From the Department of Radiology and Biomedical Imaging (G.I.C.P., T.V., H.S., R.C.B., S.M., T.Z., L.H.S., J.C., R.A.B., A.M., M.S.A.)
| |
Collapse
|
481
|
Cohen JF, McInnes MDF. Deep Learning Algorithms to Detect Fractures: Systematic Review Shows Promising Results but Many Limitations. Radiology 2022; 304:63-64. [PMID: 35348385 DOI: 10.1148/radiol.212966] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Affiliation(s)
- Jérémie F Cohen
- From the Department of Pediatrics and Inserm UMR 1153 (Centre of Research in Epidemiology and Statistics), Necker-Enfants Malades Hospital, Assistance Publique-Hôpitaux de Paris, Université de Paris, 149 rue de Sèvres 75015 Paris, France (J.F.C.); and University of Ottawa Department of Radiology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada (M.D.F.M.)
| | - Matthew D F McInnes
- From the Department of Pediatrics and Inserm UMR 1153 (Centre of Research in Epidemiology and Statistics), Necker-Enfants Malades Hospital, Assistance Publique-Hôpitaux de Paris, Université de Paris, 149 rue de Sèvres 75015 Paris, France (J.F.C.); and University of Ottawa Department of Radiology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada (M.D.F.M.)
| |
Collapse
|
482
|
Sushentsev N, Moreira Da Silva N, Yeung M, Barrett T, Sala E, Roberts M, Rundo L. Comparative performance of fully-automated and semi-automated artificial intelligence methods for the detection of clinically significant prostate cancer on MRI: a systematic review. Insights Imaging 2022; 13:59. [PMID: 35347462 PMCID: PMC8960511 DOI: 10.1186/s13244-022-01199-3] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Accepted: 02/24/2022] [Indexed: 12/12/2022] Open
Abstract
OBJECTIVES We systematically reviewed the current literature evaluating the ability of fully-automated deep learning (DL) and semi-automated traditional machine learning (TML) MRI-based artificial intelligence (AI) methods to differentiate clinically significant prostate cancer (csPCa) from indolent PCa (iPCa) and benign conditions. METHODS We performed a computerised bibliographic search of studies indexed in MEDLINE/PubMed, arXiv, medRxiv, and bioRxiv between 1 January 2016 and 31 July 2021. Two reviewers performed the title/abstract and full-text screening. The remaining papers were screened by four reviewers using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) for DL studies and the Radiomics Quality Score (RQS) for TML studies. Papers that fulfilled the pre-defined screening requirements underwent full CLAIM/RQS evaluation alongside the risk of bias assessment using QUADAS-2, both conducted by the same four reviewers. Standard measures of discrimination were extracted for the developed predictive models. RESULTS 17/28 papers (five DL and twelve TML) passed the quality screening and were subject to a full CLAIM/RQS/QUADAS-2 assessment, which revealed substantial study heterogeneity that precluded quantitative analysis as part of this review. The mean RQS of the TML papers was 11/36, and a total of five papers had a high risk of bias. AUCs of DL and TML papers with a low risk of bias ranged from 0.80 to 0.89 and from 0.75 to 0.88, respectively. CONCLUSION We observed comparable performance of the two classes of AI methods and identified a number of common methodological limitations and biases that future studies will need to address to ensure the generalisability of the developed models.
Collapse
Affiliation(s)
- Nikita Sushentsev
- Department of Radiology, University of Cambridge School of Clinical Medicine, Addenbrooke's Hospital and University of Cambridge, Cambridge Biomedical Campus, Box 218, Cambridge, CB2 0QQ, UK.
| | | | - Michael Yeung
- Department of Radiology, University of Cambridge School of Clinical Medicine, Addenbrooke's Hospital and University of Cambridge, Cambridge Biomedical Campus, Box 218, Cambridge, CB2 0QQ, UK
| | - Tristan Barrett
- Department of Radiology, University of Cambridge School of Clinical Medicine, Addenbrooke's Hospital and University of Cambridge, Cambridge Biomedical Campus, Box 218, Cambridge, CB2 0QQ, UK
| | - Evis Sala
- Department of Radiology, University of Cambridge School of Clinical Medicine, Addenbrooke's Hospital and University of Cambridge, Cambridge Biomedical Campus, Box 218, Cambridge, CB2 0QQ, UK
- Lucida Medical Ltd, Biomedical Innovation Hub, University of Cambridge, Cambridge, UK
- Cancer Research UK Cambridge Centre, University of Cambridge, Cambridge, UK
| | - Michael Roberts
- Department of Applied Mathematics and Theoretical Physics, The Cambridge Mathematics of Information in Healthcare Hub, University of Cambridge, Cambridge, UK
- Oncology R&D, AstraZeneca, Cambridge, UK
| | - Leonardo Rundo
- Department of Radiology, University of Cambridge School of Clinical Medicine, Addenbrooke's Hospital and University of Cambridge, Cambridge Biomedical Campus, Box 218, Cambridge, CB2 0QQ, UK
- Lucida Medical Ltd, Biomedical Innovation Hub, University of Cambridge, Cambridge, UK
- Department of Information and Electrical Engineering and Applied Mathematics (DIEM), University of Salerno, Fisciano, SA, Italy
| |
Collapse
|
483
|
Mohammad-Rahimi H, Motamadian SR, Nadimi M, Hassanzadeh-Samani S, Minabi MAS, Mahmoudinia E, Lee VY, Rohban MH. Deep learning for the classification of cervical maturation degree and pubertal growth spurts: A pilot study. Korean J Orthod 2022; 52:112-122. [PMID: 35321950 PMCID: PMC8964471 DOI: 10.4041/kjod.2022.52.2.112] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Revised: 09/25/2021] [Accepted: 10/01/2021] [Indexed: 11/10/2022] Open
Abstract
Objective This study aimed to present and evaluate a new deep learning model for determining cervical vertebral maturation (CVM) degree and growth spurts by analyzing lateral cephalometric radiographs. Methods The study sample included 890 cephalograms. The images were classified into six cervical stages independently by two orthodontists. The images were also categorized into three classes on the basis of the growth spurt: pre-pubertal, growth spurt, and post-pubertal. Subsequently, the samples were fed to a transfer learning model implemented using the Python programming language and the PyTorch library. In the last step, the test set of cephalograms was randomly coded and provided to two new orthodontists in order to compare their diagnoses with the artificial intelligence (AI) model's performance using weighted kappa and Cohen's kappa statistical analyses. Results The model's validation and test accuracies for the six-class CVM diagnosis were 62.63% and 61.62%, respectively. Moreover, its validation and test accuracies for the three-class classification were 75.76% and 82.83%, respectively. Furthermore, substantial agreements were observed between the two orthodontists, as well as between one of them and the AI model. Conclusions The newly developed AI model had reasonable accuracy in detecting the CVM stage and high reliability in detecting the pubertal stage. However, its accuracy was still less than that of human observers. With further improvements in data quality, this model should be able to provide practical assistance to practicing dentists in the future.
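The abstract names Python and PyTorch explicitly; a minimal transfer-learning sketch in that spirit, assuming a recent torchvision (the backbone, hyper-parameters, and six-class head shown here are illustrative assumptions, not the authors' exact setup):

```python
# Sketch of transfer learning for six-class CVM staging in PyTorch.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 6)  # six CVM stages

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Inter-rater agreement of the kind reported above can be checked with, e.g.:
# from sklearn.metrics import cohen_kappa_score
# cohen_kappa_score(rater_a, rater_b, weights="quadratic")
```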
Collapse
Affiliation(s)
- Hossein Mohammad-Rahimi
- Department of Computer Engineering, Sharif University of Technology, Tehran, Iran
- Topic Group Dental Diagnostics and Digital Dentistry, ITU/WHO Focus Group AI on Health, Berlin, Germany
| | - Saeed Reza Motamadian
- Dentofacial Deformities Research Center, Research Institute of Dental Sciences & Department of Orthodontics, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Mohadeseh Nadimi
- Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences (TUMS), Tehran, Iran
| | - Sahel Hassanzadeh-Samani
- Dentofacial Deformities Research Center, Research Institute of Dental Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Mohammad A S Minabi
- Department of Computer Engineering, Sirjan University of Technology, Kerman, Iran
| | - Erfan Mahmoudinia
- Department of Computer Engineering, Sharif University of Technology, Tehran, Iran
| | | | | |
Collapse
|
484
|
Yonezawa H, Ueda D, Yamamoto A, Kageyama K, Walston SL, Nota T, Murai K, Ogawa S, Sohgawa E, Jogo A, Kabata D, Miki Y. Mask-less Two-dimensional Digital Subtraction Angiography Generation Model for Abdominal Vasculature using Deep Learning. J Vasc Interv Radiol 2022; 33:845-851.e8. [PMID: 35311665 DOI: 10.1016/j.jvir.2022.03.010] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2021] [Revised: 01/27/2022] [Accepted: 03/09/2022] [Indexed: 11/25/2022] Open
Abstract
PURPOSE To develop a deep learning model to generate synthetic, two-dimensional subtraction angiography images free of artifacts from native abdominal angiograms. MATERIALS AND METHODS In this retrospective study, two-dimensional digital subtraction angiograms (2D-DSA) and native angiograms were consecutively collected from July 2019 to March 2020. Images were divided into a motion-free set (training, validation, and motion-free test datasets) and a set containing motion artifacts (motion-artifacts test dataset). A total of 3185, 393, 383, and 345 images from 87 patients (mean age, 71 ± 10 years; 64 men, 23 women) were included in the training, validation, motion-free test, and motion-artifacts test datasets, respectively. Native angiogram and 2D-DSA image pairs were used to train and validate an image-to-image translation model to generate synthetic deep learning-based subtraction angiography (DLSA) images. DLSA images were quantitatively evaluated by peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) on the motion-free dataset and were qualitatively evaluated by radiologists' visual assessments with a numerical rating scale on the motion-artifacts dataset. RESULTS The DLSA images showed a mean PSNR (± standard deviation) of 43.05 ± 3.65 dB and a mean SSIM of 0.98 ± 0.01, indicating high agreement with the original 2D-DSA images in the motion-free dataset. Qualitative visual evaluation by radiologists on the motion-artifacts dataset showed that DLSA images contained fewer motion artifacts than 2D-DSA images. Additionally, DLSA images scored similarly to or higher than 2D-DSA images for vascular visualization and clinical usefulness. CONCLUSION The developed deep learning model could generate synthetic, motion-free subtraction images from abdominal angiograms with imaging characteristics similar to those of 2D-DSA images.
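The PSNR and SSIM agreement metrics reported above can be computed with scikit-image; in this sketch, synthetic arrays stand in for a 2D-DSA frame and its DLSA counterpart:

```python
# Sketch of PSNR/SSIM agreement between a reference frame and a generated one.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
reference = rng.random((512, 512)).astype(np.float32)   # stand-in 2D-DSA frame
generated = np.clip(
    reference + 0.01 * rng.standard_normal((512, 512)).astype(np.float32), 0, 1)

psnr = peak_signal_noise_ratio(reference, generated, data_range=1.0)
ssim = structural_similarity(reference, generated, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```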
Collapse
Affiliation(s)
- Hiroki Yonezawa
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
| | - Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan.
| | - Akira Yamamoto
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
| | - Ken Kageyama
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
| | - Shannon Leigh Walston
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
| | - Takehito Nota
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
| | - Kazuki Murai
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
| | - Satoyuki Ogawa
- Department of Radiology, Osaka Saiseikai Nakatsu Hospital, 2-10-39, Shibata, Kita-ku, Osaka 530-0012, Japan
| | - Etsuji Sohgawa
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
| | - Atsushi Jogo
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
| | - Daijiro Kabata
- Department of Medical Statistics, Graduate School of Medicine, Osaka City University, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
| | - Yukio Miki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
| |
Collapse
|
485
|
Machine Learning Applications for Differentiation of Glioma from Brain Metastasis-A Systematic Review. Cancers (Basel) 2022; 14:cancers14061369. [PMID: 35326526 PMCID: PMC8946855 DOI: 10.3390/cancers14061369] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Revised: 02/22/2022] [Accepted: 03/01/2022] [Indexed: 12/19/2022] Open
Abstract
Simple Summary We present a systematic review of published reports on machine learning (ML) applications for the differentiation of gliomas from brain metastases by summarizing study characteristics, strengths, and pitfalls. Based on these findings, we present recommendations for future research in this field. Abstract Glioma and brain metastasis can be difficult to distinguish on conventional magnetic resonance imaging (MRI) due to the similarity of imaging features in specific clinical circumstances. Multiple studies have investigated the use of machine learning (ML) models for non-invasive differentiation of glioma from brain metastasis. Many of the studies report promising classification results; however, to date, none has been implemented into clinical practice. After a screening of 12,470 studies, we included 29 eligible studies in our systematic review. From each study, we aggregated data on model design, development, and best classifiers, as well as quality of reporting according to the TRIPOD statement. In a subset of eligible studies, we conducted a meta-analysis of the reported AUC. It was found that data predominantly originated from single-center institutions (n = 25/29) and only two studies performed external validation. The median TRIPOD adherence was 0.48, indicating insufficient quality of reporting among the surveyed studies. Our findings illustrate that, despite promising classification results, reliable model assessment is limited by poor reporting of study design and a lack of algorithm validation and generalizability. Therefore, adherence to quality guidelines and validation on outside datasets are critical for the clinical translation of ML for the differentiation of glioma and brain metastasis.
Collapse
|
486
|
Weaver CGW, Basmadjian RB, Williamson T, McBrien K, Sajobi T, Boyne D, Yusuf M, Ronksley PE. Reporting of Model Performance and Statistical Methods in Studies That Use Machine Learning to Develop Clinical Prediction Models: Protocol for a Systematic Review. JMIR Res Protoc 2022; 11:e30956. [PMID: 35238322 PMCID: PMC8931652 DOI: 10.2196/30956] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2021] [Revised: 12/09/2021] [Accepted: 12/31/2021] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND With the growing excitement of the potential benefits of using machine learning and artificial intelligence in medicine, the number of published clinical prediction models that use these approaches has increased. However, there is evidence (albeit limited) that suggests that the reporting of machine learning-specific aspects in these studies is poor. Further, there are no reviews assessing the reporting quality or broadly accepted reporting guidelines for these aspects. OBJECTIVE This paper presents the protocol for a systematic review that will assess the reporting quality of machine learning-specific aspects in studies that use machine learning to develop clinical prediction models. METHODS We will include studies that use a supervised machine learning algorithm to develop a prediction model for use in clinical practice (ie, for diagnosis or prognosis of a condition or identification of candidates for health care interventions). We will search MEDLINE for studies published in 2019, pseudorandomly sort the records, and screen until we obtain 100 studies that meet our inclusion criteria. We will assess reporting quality with a novel checklist developed in parallel with this review, which includes content derived from existing reporting guidelines, textbooks, and consultations with experts. The checklist will cover 4 key areas where the reporting of machine learning studies is unique: modelling steps (order and data used for each step), model performance (eg, reporting the performance of each model compared), statistical methods (eg, describing the tuning approach), and presentation of models (eg, specifying the predictors that contributed to the final model). RESULTS We completed data analysis in August 2021 and are writing the manuscript. We expect to submit the results to a peer-reviewed journal in early 2022. CONCLUSIONS This review will contribute to more standardized and complete reporting in the field by identifying areas where reporting is poor and can be improved. TRIAL REGISTRATION PROSPERO International Prospective Register of Systematic Reviews CRD42020206167; https://www.crd.york.ac.uk/PROSPERO/display_record.php?RecordID=206167. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID) RR1-10.2196/30956.
Collapse
Affiliation(s)
- Colin George Wyllie Weaver
- Department of Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
| | - Robert B Basmadjian
- Department of Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
| | - Tyler Williamson
- Department of Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
| | - Kerry McBrien
- Department of Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
- Department of Family Medicine, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
| | - Tolu Sajobi
- Department of Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
| | - Devon Boyne
- Department of Oncology, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
| | - Mohamed Yusuf
- Faculty of Science & Engineering, Manchester Metropolitan University, Manchester, United Kingdom
| | - Paul Everett Ronksley
- Department of Community Health Sciences, Cumming School of Medicine, University of Calgary, Calgary, AB, Canada
| |
Collapse
|
487
|
Gillman AG, Lunardo F, Prinable J, Belous G, Nicolson A, Min H, Terhorst A, Dowling JA. Automated COVID-19 diagnosis and prognosis with medical imaging and who is publishing: a systematic review. Phys Eng Sci Med 2022; 45:13-29. [PMID: 34919204 PMCID: PMC8678975 DOI: 10.1007/s13246-021-01093-0] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2021] [Accepted: 12/13/2021] [Indexed: 12/31/2022]
Abstract
OBJECTIVES To conduct a systematic survey of published techniques for automated diagnosis and prognosis of COVID-19 using medical imaging, assessing the validity of reported performance and investigating the proposed clinical use-case; and to conduct a scoping review of the authors publishing such work. METHODS The Scopus database was queried, and studies were screened for article type and for minimum source-normalized impact per paper and citations, before manual relevance assessment and a bias assessment derived from a subset of the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). The number of failures of the full CLAIM was adopted as a surrogate for risk of bias. Methodological and performance measurements were collected from each technique. Each study was assessed by one author. Comparisons were evaluated for significance with a two-sided independent t-test. FINDINGS Of 1002 studies identified, 390 remained after screening and 81 after relevance and bias exclusion. The exclusion rate for bias was 71%, indicative of a high level of bias in the field. The mean number of CLAIM failures per study was 8.3 ± 3.9 [1,17] (mean ± standard deviation [min,max]). 58% of methods performed diagnosis versus 31% prognosis. Of the diagnostic methods, 38% differentiated COVID-19 from healthy controls. For diagnostic techniques, the area under the receiver operating curve (AUC) was 0.924 ± 0.074 [0.810,0.991] and accuracy was 91.7% ± 6.4 [79.0,99.0]. For prognostic techniques, AUC was 0.836 ± 0.126 [0.605,0.980] and accuracy was 78.4% ± 9.4 [62.5,98.0]. CLAIM failures did not correlate with performance, providing confidence that the highest results were not driven by biased papers. Deep learning techniques reported higher AUC (p < 0.05) and accuracy (p < 0.05), but no difference in CLAIM failures was identified. INTERPRETATION A majority of papers focus on the less clinically impactful diagnosis task, contrasted with prognosis, with a significant portion performing the clinically unnecessary task of differentiating COVID-19 from healthy controls. Authors should consider the clinical scenario in which their work would be deployed when developing techniques. Nevertheless, studies report superb performance in a potentially impactful application. Future work is warranted in translating techniques into clinical tools.
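The significance testing named in the methods (a two-sided independent t-test between groups of studies) is available in SciPy; a sketch with placeholder AUC values:

```python
# Two-sided independent t-test comparing a metric between two study groups.
# The sample values are placeholders for demonstration only.
from scipy import stats

deep_learning_aucs = [0.95, 0.93, 0.91, 0.96, 0.90]
other_aucs = [0.88, 0.85, 0.90, 0.84, 0.87]

t_stat, p_value = stats.ttest_ind(deep_learning_aucs, other_aucs)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # significant if p < 0.05
```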
Collapse
Affiliation(s)
- Ashley G Gillman
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia.
| | - Febrio Lunardo
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
- College of Science and Engineering, James Cook University, Australian Tropical Science Innovation Precinct, Townsville, QLD, 4814, Australia
| | - Joseph Prinable
- ACRF Image X Institute, University of Sydney, Level 2, Biomedical Building (C81), 1 Central Ave, Australian Technology Park, Eveleigh, Sydney, NSW, 2015, Australia
| | - Gregg Belous
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
| | - Aaron Nicolson
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
| | - Hang Min
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
| | - Andrew Terhorst
- Data61, Commonwealth Scientific and Industrial Research Organisation, College Road, Sandy Bay, Hobart, TAS, 7005, Australia
| | - Jason A Dowling
- Australian e-Health Research Centre, Commonwealth Scientific and Industrial Research Organisation, Surgical Treatment and Rehabilitation Service, 296 Herston Road, Brisbane, QLD, 4029, Australia
| |
Collapse
|
488
|
Goel A, Shih G, Riyahi S, Jeph S, Dev H, Hu R, Romano D, Teichman K, Blumenfeld JD, Barash I, Chicos I, Rennert H, Prince MR. Deployed Deep Learning Kidney Segmentation for Polycystic Kidney Disease MRI. Radiol Artif Intell 2022; 4:e210205. [PMID: 35391774 PMCID: PMC8980881 DOI: 10.1148/ryai.210205] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Revised: 01/20/2022] [Accepted: 01/25/2022] [Indexed: 12/18/2022]
Abstract
This study develops, validates, and deploys deep learning for automated total kidney volume (TKV) measurement (a marker of disease severity) on T2-weighted MRI studies of autosomal dominant polycystic kidney disease (ADPKD). The model was based on the U-Net architecture with an EfficientNet encoder, developed using 213 abdominal MRI studies in 129 patients with ADPKD. Patients were randomly divided into 70% training, 15% validation, and 15% test sets for model development. Model performance was assessed using Dice similarity coefficient (DSC) and Bland-Altman analysis. External validation in 20 patients from outside institutions demonstrated a DSC of 0.98 (IQR, 0.97-0.99) and a Bland-Altman difference of 2.6% (95% CI: 1.0%, 4.1%). Prospective validation in 53 patients demonstrated a DSC of 0.97 (IQR, 0.94-0.98) and a Bland-Altman difference of 3.6% (95% CI: 2.0%, 5.2%). Last, the efficiency of model-assisted annotation was evaluated on the first 50% of prospective cases (n = 28), with a 51% mean reduction in contouring time (P < .001), from 1724 seconds (95% CI: 1373, 2075) to 723 seconds (95% CI: 555, 892). In conclusion, our deployed artificial intelligence pipeline accurately performs automated segmentation for TKV estimation of polycystic kidneys and reduces expert contouring time. Keywords: Convolutional Neural Network (CNN), Segmentation, Kidney. ClinicalTrials.gov identification no.: NCT00792155. Supplemental material is available for this article. © RSNA, 2022.
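Two of the quantities reported above, the Dice similarity coefficient and the total kidney volume derived from a segmentation mask, reduce to short NumPy routines; a sketch with assumed mask inputs:

```python
# Sketch of Dice overlap between two masks and TKV from a mask plus spacing.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient; assumes at least one non-empty mask."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def tkv_ml(mask: np.ndarray, voxel_spacing_mm: tuple) -> float:
    """Total kidney volume in mL from a binary mask and voxel spacing (mm)."""
    voxel_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0  # mm^3 -> mL
```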
Collapse
Affiliation(s)
- Akshay Goel
- From the Departments of Radiology (A.G., G.S., S.R., S.J., H.D., R.H., D.R., K.T., M.R.P.), Internal Medicine (J.D.B., I.B., I.C.), and Pathology and Laboratory Medicine (H.R.), Weill Cornell Medicine, 525 E 68th St, New York, NY 10021
| | - George Shih
- From the Departments of Radiology (A.G., G.S., S.R., S.J., H.D., R.H., D.R., K.T., M.R.P.), Internal Medicine (J.D.B., I.B., I.C.), and Pathology and Laboratory Medicine (H.R.), Weill Cornell Medicine, 525 E 68th St, New York, NY 10021
| | - Sadjad Riyahi
- From the Departments of Radiology (A.G., G.S., S.R., S.J., H.D., R.H., D.R., K.T., M.R.P.), Internal Medicine (J.D.B., I.B., I.C.), and Pathology and Laboratory Medicine (H.R.), Weill Cornell Medicine, 525 E 68th St, New York, NY 10021
| | - Sunil Jeph
- From the Departments of Radiology (A.G., G.S., S.R., S.J., H.D., R.H., D.R., K.T., M.R.P.), Internal Medicine (J.D.B., I.B., I.C.), and Pathology and Laboratory Medicine (H.R.), Weill Cornell Medicine, 525 E 68th St, New York, NY 10021
| | - Hreedi Dev
- From the Departments of Radiology (A.G., G.S., S.R., S.J., H.D., R.H., D.R., K.T., M.R.P.), Internal Medicine (J.D.B., I.B., I.C.), and Pathology and Laboratory Medicine (H.R.), Weill Cornell Medicine, 525 E 68th St, New York, NY 10021
| | - Rejoice Hu
- From the Departments of Radiology (A.G., G.S., S.R., S.J., H.D., R.H., D.R., K.T., M.R.P.), Internal Medicine (J.D.B., I.B., I.C.), and Pathology and Laboratory Medicine (H.R.), Weill Cornell Medicine, 525 E 68th St, New York, NY 10021
| | - Dominick Romano
- From the Departments of Radiology (A.G., G.S., S.R., S.J., H.D., R.H., D.R., K.T., M.R.P.), Internal Medicine (J.D.B., I.B., I.C.), and Pathology and Laboratory Medicine (H.R.), Weill Cornell Medicine, 525 E 68th St, New York, NY 10021
| | - Kurt Teichman
- From the Departments of Radiology (A.G., G.S., S.R., S.J., H.D., R.H., D.R., K.T., M.R.P.), Internal Medicine (J.D.B., I.B., I.C.), and Pathology and Laboratory Medicine (H.R.), Weill Cornell Medicine, 525 E 68th St, New York, NY 10021
| | - Jon D. Blumenfeld
- From the Departments of Radiology (A.G., G.S., S.R., S.J., H.D., R.H., D.R., K.T., M.R.P.), Internal Medicine (J.D.B., I.B., I.C.), and Pathology and Laboratory Medicine (H.R.), Weill Cornell Medicine, 525 E 68th St, New York, NY 10021
| | - Irina Barash
- From the Departments of Radiology (A.G., G.S., S.R., S.J., H.D., R.H., D.R., K.T., M.R.P.), Internal Medicine (J.D.B., I.B., I.C.), and Pathology and Laboratory Medicine (H.R.), Weill Cornell Medicine, 525 E 68th St, New York, NY 10021
| | - Ines Chicos
- From the Departments of Radiology (A.G., G.S., S.R., S.J., H.D., R.H., D.R., K.T., M.R.P.), Internal Medicine (J.D.B., I.B., I.C.), and Pathology and Laboratory Medicine (H.R.), Weill Cornell Medicine, 525 E 68th St, New York, NY 10021
| | - Hanna Rennert
- From the Departments of Radiology (A.G., G.S., S.R., S.J., H.D., R.H., D.R., K.T., M.R.P.), Internal Medicine (J.D.B., I.B., I.C.), and Pathology and Laboratory Medicine (H.R.), Weill Cornell Medicine, 525 E 68th St, New York, NY 10021
| | - Martin R. Prince
- From the Departments of Radiology (A.G., G.S., S.R., S.J., H.D., R.H., D.R., K.T., M.R.P.), Internal Medicine (J.D.B., I.B., I.C.), and Pathology and Laboratory Medicine (H.R.), Weill Cornell Medicine, 525 E 68th St, New York, NY 10021
| |
|
489
|
Dikici E, Nguyen XV, Bigelow M, Prevedello LM. Augmented Networks for Faster Brain Metastases Detection in T1-Weighted Contrast-Enhanced 3D MRI. Comput Med Imaging Graph 2022; 98:102059. [DOI: 10.1016/j.compmedimag.2022.102059] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Revised: 01/21/2022] [Accepted: 03/17/2022] [Indexed: 10/18/2022]
|
490
|
Garbin C, Marques O. Assessing Methods and Tools to Improve Reporting, Increase Transparency, and Reduce Failures in Machine Learning Applications in Health Care. Radiol Artif Intell 2022; 4:e210127. [PMID: 35391771 DOI: 10.1148/ryai.210127] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2021] [Revised: 12/17/2021] [Accepted: 01/05/2022] [Indexed: 11/11/2022]
Abstract
Artificial intelligence applications for health care have come a long way. Despite the remarkable progress, there are several examples of unfulfilled promises and outright failures. There is still a struggle to translate successful research into successful real-world applications. Machine learning (ML) products diverge from traditional software products in fundamental ways. In particular, the main component of an ML solution is not a specific piece of code that is written for a specific purpose; rather, it is a generic piece of code, a model, customized by a training process driven by hyperparameters and a dataset. Datasets are usually large, and models are opaque. Therefore, datasets and models cannot be inspected in the same, direct way as traditional software products. Other methods are needed to detect failures in ML products. This report investigates recent advancements that promote auditing, supported by transparency, as a mechanism to detect potential failures in ML products for health care applications. It reviews practices that apply to the early stages of the ML lifecycle, when datasets and models are created; these stages are unique to ML products. Concretely, this report demonstrates how two recently proposed checklists, datasheets for datasets and model cards, can be adopted to increase the transparency of crucial stages of the ML lifecycle, using ChestX-ray8 and CheXNet as examples. The adoption of checklists to document the strengths, limitations, and applications of datasets and models in a structured format leads to increased transparency, allowing early detection of potential problems and opportunities for improvement. Keywords: Artificial Intelligence, Machine Learning, Lifecycle, Auditing, Transparency, Failures, Datasheets, Datasets, Model Cards Supplemental material is available for this article. © RSNA, 2022.
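Model cards are structured documents summarizing a model's details, intended use, data, metrics, and limitations. The sketch below shows one way such a card could be represented programmatically; the field names are an illustrative subset of the sections proposed in the model-cards literature, not a standardized schema, and every value (including the metric) is a placeholder rather than a reported CheXNet figure:

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    # Illustrative subset of model-card sections; names and values are
    # assumptions for this sketch, not a fixed standard.
    model_details: str
    intended_use: str
    training_data: str
    evaluation_data: str
    metrics: dict = field(default_factory=dict)
    limitations: str = ""

card = ModelCard(
    model_details="DenseNet-121 chest radiograph classifier (CheXNet-style).",
    intended_use="Research triage support; not a standalone diagnostic device.",
    training_data="ChestX-ray8 frontal radiographs; labels mined from reports (noisy).",
    evaluation_data="Held-out, patient-level split from the same institution.",
    metrics={"pneumonia_AUC": 0.76},  # placeholder value
    limitations="Single-institution data; label noise; untested on pediatric images.",
)
print(card.intended_use)
```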
Affiliation(s)
- Christian Garbin, Oge Marques
- College of Engineering & Computer Science, Florida Atlantic University, 777 Glades Rd, EE441, Boca Raton, FL 33431-0991
|
491
|
Ortiz A, Trivedi A, Desbiens J, Blazes M, Robinson C, Gupta S, Dodhia R, Bhatraju PK, Liles WC, Lee A, Ferres JML. Effective deep learning approaches for predicting COVID-19 outcomes from chest computed tomography volumes. Sci Rep 2022; 12:1716. [PMID: 35110593 PMCID: PMC8810911 DOI: 10.1038/s41598-022-05532-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2021] [Accepted: 01/13/2022] [Indexed: 12/23/2022] Open
Abstract
The rapid evolution of the novel coronavirus disease (COVID-19) pandemic has resulted in an urgent need for effective clinical tools to reduce transmission and manage severe illness. Numerous teams are quickly developing artificial intelligence approaches to these problems, including using deep learning to predict COVID-19 diagnosis and prognosis from chest computed tomography (CT) imaging data. In this work, we assess the value of aggregated chest CT data for COVID-19 prognosis compared to clinical metadata alone. We develop a novel patient-level algorithm that aggregates the chest CT volume into a 2D representation that can be easily integrated with clinical metadata, distinguishing chest CT volumes of COVID-19 pneumonia from those of healthy participants and participants with other viral pneumonia. Furthermore, we present a multitask model for joint segmentation of the different classes of pulmonary lesions present in COVID-19 infected lungs that can outperform individual segmentation models for each task. We directly compare this multitask segmentation approach to combining feature-agnostic volumetric CT classification feature maps with clinical metadata for predicting mortality. We show that the combination of features derived from the chest CT volumes improves the AUC to 0.80 from the 0.52 obtained by using patients' clinical data alone. These approaches enable the automated extraction of clinically relevant features from chest CT volumes for risk stratification of COVID-19 patients.
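The central comparison (AUC of 0.80 for combined features versus 0.52 for clinical data alone) follows the common pattern of concatenating image-derived features with clinical variables before fitting a classifier. A minimal sketch of that pattern on synthetic data with scikit-learn, not the authors' model or data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
clinical = rng.normal(size=(n, 3))        # e.g., age, sex, comorbidity score (simulated)
ct_features = rng.normal(size=(n, 16))    # pooled embedding from a CT model (simulated)
# Simulated outcome that depends mostly on the imaging features
y = (0.8 * ct_features[:, 0] + 0.2 * clinical[:, 0] + rng.normal(size=n) > 0).astype(int)

for name, X in {"clinical only": clinical,
                "clinical + CT features": np.hstack([clinical, ct_features])}.items():
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    print(name, round(roc_auc_score(yte, clf.predict_proba(Xte)[:, 1]), 3))
```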
Affiliation(s)
- Anthony Ortiz, Rahul Dodhia - AI for Good Research Lab, Microsoft, Seattle, WA, USA
- Marian Blazes, Aaron Lee - Department of Ophthalmology, University of Washington, Seattle, WA, USA
- Sunil Gupta - Intelligent Retinal Imaging Systems, Pensacola, FL, USA
- Pavan K Bhatraju, W Conrad Liles - Department of Medicine and Sepsis Center of Research Excellence, University of Washington (SCORE-UW), Seattle, WA, USA
|
492
|
Nam D, Chapiro J, Paradis V, Seraphin TP, Kather JN. Artificial intelligence in liver diseases: improving diagnostics, prognostics and response prediction. JHEP REPORTS : INNOVATION IN HEPATOLOGY 2022; 4:100443. [PMID: 35243281 PMCID: PMC8867112 DOI: 10.1016/j.jhepr.2022.100443] [Citation(s) in RCA: 66] [Impact Index Per Article: 22.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/13/2021] [Revised: 12/26/2021] [Accepted: 01/11/2022] [Indexed: 12/18/2022]
Abstract
Clinical routine in hepatology involves the diagnosis and treatment of a wide spectrum of metabolic, infectious, autoimmune and neoplastic diseases. Clinicians integrate qualitative and quantitative information from multiple data sources to make a diagnosis, prognosticate the disease course, and recommend a treatment. In the last 5 years, advances in artificial intelligence (AI), particularly in deep learning, have made it possible to extract clinically relevant information from complex and diverse clinical datasets. In particular, histopathology and radiology image data contain diagnostic, prognostic and predictive information which AI can extract. Ultimately, such AI systems could be implemented in clinical routine as decision support tools. However, in the context of hepatology, this requires further large-scale clinical validation and regulatory approval. Herein, we summarise the state of the art in AI in hepatology with a particular focus on histopathology and radiology data. We present a roadmap for the further development of novel biomarkers in hepatology and outline critical obstacles which need to be overcome.
|
493
|
Estai M, Tennant M, Gebauer D, Brostek A, Vignarajan J, Mehdizadeh M, Saha S. Deep learning for automated detection and numbering of permanent teeth on panoramic images. Dentomaxillofac Radiol 2022; 51:20210296. [PMID: 34644152 PMCID: PMC8802702 DOI: 10.1259/dmfr.20210296] [Citation(s) in RCA: 33] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2021] [Revised: 09/23/2021] [Accepted: 09/23/2021] [Indexed: 02/03/2023] Open
Abstract
OBJECTIVE This study aimed to evaluate an automated detection system to detect and classify permanent teeth on orthopantomogram (OPG) images using convolutional neural networks (CNNs). METHODS In total, 591 digital OPGs were collected from patients older than 18 years. Three qualified dentists performed individual teeth labelling on images to generate the ground truth annotations. A three-step procedure, relying upon CNNs, was proposed for automated detection and classification of teeth. Firstly, U-Net, a type of CNN, performed preliminary segmentation of tooth regions, detecting regions of interest (ROIs) on panoramic images. Secondly, Faster R-CNN, an advanced object detection architecture, identified each tooth within the ROI determined by the U-Net. Thirdly, a VGG-16 architecture classified each tooth into 32 categories, and a tooth number was assigned. A total of 17,135 teeth cropped from 591 radiographs were used to train and validate the tooth detection and tooth numbering modules. 90% of OPG images were used for training, and the remaining 10% were used for validation. 10-fold cross-validation was performed to measure performance. The intersection over union (IoU), F1 score, precision, and recall (i.e. sensitivity) were used as metrics to evaluate the performance of the resultant CNNs. RESULTS The ROI detection module had an IoU of 0.70. The tooth detection module achieved a recall of 0.99 and a precision of 0.99. The tooth numbering module had a recall, precision, and F1 score of 0.98. CONCLUSION The resultant automated method achieved high performance for automated tooth detection and numbering from OPG images. Deep learning can be helpful in the automatic filing of dental charts in general dentistry and forensic medicine.
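A rough sketch of how the second and third stages of such a pipeline could be wired together in PyTorch/torchvision follows. The Faster R-CNN backbone (ResNet-50 FPN), input sizes, and variable names are assumptions for illustration; the paper does not tie its method to this specific implementation, and the randomly initialized models below produce meaningless outputs until trained:

```python
import torch
import torchvision

# Stage 2 of the pipeline: tooth detection with Faster R-CNN on a ROI crop.
# (Stage 1, U-Net ROI segmentation, and stage 3, VGG-16 tooth numbering,
# plug in before and after this step in the same hand-off fashion.)
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=2)  # background vs tooth; untrained here
detector.eval()

roi = torch.rand(3, 512, 1024)  # hypothetical ROI crop from the U-Net stage
with torch.no_grad():
    detections = detector([roi])[0]  # dict with 'boxes', 'labels', 'scores'
print(detections["boxes"].shape)

# Stage 3: a VGG-16 head reconfigured for 32 tooth categories.
classifier = torchvision.models.vgg16(weights=None)
classifier.classifier[6] = torch.nn.Linear(4096, 32)
```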
Affiliation(s)
- Marc Tennant - School of Human Sciences, The University of Western Australia, Crawley, Australia
- Andrew Brostek - The UWA Dental School, The University of Western Australia, Crawley, Australia
- Sajib Saha - The Australian e-Health Research Centre, CSIRO, Floreat, Australia
|
494
|
Umer F, Habib S. Critical Analysis of Artificial Intelligence in Endodontics: A Scoping Review. J Endod 2022; 48:152-160. [PMID: 34838523 DOI: 10.1016/j.joen.2021.11.007] [Citation(s) in RCA: 29] [Impact Index Per Article: 9.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Revised: 11/17/2021] [Accepted: 11/17/2021] [Indexed: 02/07/2023]
Abstract
INTRODUCTION Artificial intelligence (AI) comprises computational models that mimic the human brain to perform various diagnostic tasks in clinical practice. The aim of this scoping review was to systematically analyze the AI algorithms and models used in endodontics and identify the source quality and type of evidence. METHODS A literature search was conducted in October 2020 to identify the relevant English-language literature in the 4 major health sciences databases, ie, MEDLINE, Dentistry & Oral Science, CINAHL Plus, and Cochrane Library. Our review questions were the following: What are the different AI algorithms and models used in endodontics? What datasets are being used? What types of performance metrics were reported? And what diagnostic performance measures were used? The quality of the included studies was evaluated with a modified Quality Assessment of Diagnostic Accuracy Studies (QUADAS) risk-of-bias tool. RESULTS Out of 300 studies, 12 articles met our inclusion criteria and were subjected to final analysis. Among the included studies, 6 focused on periapical pathology, and 3 investigated vertical root fractures. Most studies (n = 10) used neural networks, among which convolutional neural networks were most common. The datasets studied were mostly radiographs. Out of the 12 studies, only 3 achieved a high score according to the modified QUADAS tool. CONCLUSIONS AI models had acceptable performance, ie, accuracy >90%, in executing various diagnostic tasks. The scientific reporting of AI-related research is irregular. The endodontic community needs to implement recommended guidelines to address the weaknesses in the current planning and reporting of AI-related research and improve its scientific rigor.
Affiliation(s)
- Fahad Umer - Department of Surgery, Aga Khan University Hospital, Karachi, Pakistan
- Saqib Habib - Operative Dentistry and Endodontics, Department of Surgery, Aga Khan University Hospital, Karachi, Pakistan
|
495
|
Soffer S, Morgenthau AS, Shimon O, Barash Y, Konen E, Glicksberg BS, Klang E. Artificial Intelligence for Interstitial Lung Disease Analysis on Chest Computed Tomography: A Systematic Review. Acad Radiol 2022; 29 Suppl 2:S226-S235. [PMID: 34219012 DOI: 10.1016/j.acra.2021.05.014] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Revised: 05/10/2021] [Accepted: 05/11/2021] [Indexed: 12/22/2022]
Abstract
RATIONALE AND OBJECTIVES High-resolution computed tomography (HRCT) is paramount in the assessment of interstitial lung disease (ILD). Yet, HRCT interpretation of ILDs may be hampered by inter- and intra-observer variability. Recently, artificial intelligence (AI) has revolutionized medical image analysis. This technology has the potential to advance patient care in ILD. We aimed to systematically evaluate the application of AI for the analysis of ILD on HRCT. MATERIALS AND METHODS We searched the MEDLINE/PubMed databases for original publications on deep learning for ILD analysis on chest CT. The search included studies published up to March 1, 2021. The risk of bias evaluation included the tailored Quality Assessment of Diagnostic Accuracy Studies tool and the modified Joanna Briggs Institute Critical Appraisal checklist. RESULTS Data were extracted from 19 retrospective studies. Deep learning techniques included detection, segmentation, and classification of ILD on HRCT. Most studies focused on the classification of ILD into different morphological patterns. Accuracies of 78%-91% were achieved. Two studies demonstrated near-expert performance for the diagnosis of idiopathic pulmonary fibrosis (IPF). The Quality Assessment of Diagnostic Accuracy Studies tool identified a high risk of bias in 15 of 19 studies (78.9%). CONCLUSION AI has the potential to contribute to the radiologic diagnosis and classification of ILD. However, the accuracy performance is still not satisfactory, and research is limited by a small number of retrospective studies. Hence, the existing published data may not be sufficiently reliable. Only well-designed prospective controlled studies can accurately assess the value of existing AI tools for ILD evaluation.
|
496
|
Abstract
Artificial intelligence (AI) has illuminated a clear path towards an evolving health-care system replete with enhanced precision and computing capabilities. Medical imaging analysis can be strengthened by machine learning as the multidimensional data generated by imaging naturally lends itself to hierarchical classification. In this Review, we describe the role of machine intelligence in image-based endocrine cancer diagnostics. We first provide a brief overview of AI and consider its intuitive incorporation into the clinical workflow. We then discuss how AI can be applied for the characterization of adrenal, pancreatic, pituitary and thyroid masses in order to support clinicians in their diagnostic interpretations. This Review also puts forth a number of key evaluation criteria for machine learning in medicine that physicians can use in their appraisals of these algorithms. We identify mitigation strategies to address ongoing challenges around data availability and model interpretability in the context of endocrine cancer diagnosis. Finally, we delve into frontiers in systems integration for AI, discussing automated pipelines and evolving computing platforms that leverage distributed, decentralized and quantum techniques.
Affiliation(s)
- Ihab R Kamel, Harrison X Bai - Department of Imaging & Imaging Sciences, Johns Hopkins University School of Medicine, Baltimore, MD, USA
|
497
|
Booth TC, Grzeda M, Chelliah A, Roman A, Al Busaidi A, Dragos C, Shuaib H, Luis A, Mirchandani A, Alparslan B, Mansoor N, Lavrador J, Vergani F, Ashkan K, Modat M, Ourselin S. Imaging Biomarkers of Glioblastoma Treatment Response: A Systematic Review and Meta-Analysis of Recent Machine Learning Studies. Front Oncol 2022; 12:799662. [PMID: 35174084 PMCID: PMC8842649 DOI: 10.3389/fonc.2022.799662] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2021] [Accepted: 01/03/2022] [Indexed: 12/21/2022] Open
Abstract
OBJECTIVE Monitoring biomarkers using machine learning (ML) may determine glioblastoma treatment response. We systematically reviewed the quality and performance accuracy of recently published studies. METHODS Following Preferred Reporting Items for Systematic Reviews and Meta-Analysis: Diagnostic Test Accuracy, we extracted articles from MEDLINE, EMBASE and Cochrane Register between 09/2018-01/2021. Included study participants were adults with glioblastoma having undergone standard treatment (maximal resection, radiotherapy with concomitant and adjuvant temozolomide), and follow-up imaging to determine treatment response status (specifically, distinguishing progression/recurrence from progression/recurrence mimics, the target condition). Using Quality Assessment of Diagnostic Accuracy Studies Two/Checklist for Artificial Intelligence in Medical Imaging, we assessed bias risk and applicability concerns. We determined test set performance accuracy (sensitivity, specificity, precision, F1-score, balanced accuracy). We used a bivariate random-effect model to determine pooled sensitivity, specificity, area-under the receiver operator characteristic curve (ROC-AUC). Pooled measures of balanced accuracy, positive/negative likelihood ratios (PLR/NLR) and diagnostic odds ratio (DOR) were calculated. PROSPERO registered (CRD42021261965). RESULTS Eighteen studies were included (1335/384 patients for training/testing respectively). Small patient numbers, high bias risk, applicability concerns (particularly confounding in the reference standard and patient selection), and a low level of evidence allow only limited conclusions from these studies. Ten studies (10/18, 56%) included in meta-analysis gave 0.769 (0.649-0.858) sensitivity [pooled (95% CI)]; 0.648 (0.532-0.749) specificity; 0.706 (0.623-0.779) balanced accuracy; 2.220 (1.560-3.140) PLR; 0.366 (0.213-0.572) NLR; 6.670 (2.800-13.500) DOR; 0.765 ROC-AUC. CONCLUSION ML models using MRI features to distinguish between progression and mimics appear to demonstrate good diagnostic performance. However, study quality and design require improvement.
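For readers checking the pooled figures: balanced accuracy, the likelihood ratios, and the diagnostic odds ratio all derive from sensitivity and specificity. The sketch below computes them from the pooled point estimates; because the review pools via a bivariate random-effects model, these closed-form values only approximate the reported pooled PLR/NLR/DOR:

```python
def diagnostic_summary(sens: float, spec: float) -> dict:
    """Derive likelihood ratios and diagnostic odds ratio from sensitivity/specificity."""
    plr = sens / (1 - spec)      # positive likelihood ratio
    nlr = (1 - sens) / spec      # negative likelihood ratio
    return {
        "balanced_accuracy": (sens + spec) / 2,
        "PLR": plr,
        "NLR": nlr,
        "DOR": plr / nlr,        # diagnostic odds ratio
    }

print(diagnostic_summary(sens=0.769, spec=0.648))
# balanced_accuracy ~0.709, PLR ~2.18, NLR ~0.356, DOR ~6.13 -- close to, but
# not identical with, the properly pooled 0.706 / 2.22 / 0.366 / 6.67 above.
```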
Affiliation(s)
- Thomas C. Booth - School of Biomedical Engineering & Imaging Sciences, King’s College London, St. Thomas’ Hospital, London, United Kingdom; Department of Neuroradiology, King’s College Hospital National Health Service Foundation Trust, London, United Kingdom
- Mariusz Grzeda - School of Biomedical Engineering & Imaging Sciences, King’s College London, St. Thomas’ Hospital, London, United Kingdom
- Alysha Chelliah - School of Biomedical Engineering & Imaging Sciences, King’s College London, St. Thomas’ Hospital, London, United Kingdom
- Andrei Roman - Department of Radiology, Guy’s & St. Thomas’ National Health Service Foundation Trust, London, United Kingdom; Department of Radiology, The Oncology Institute “Prof. Dr. Ion Chiricuţă” Cluj-Napoca, Cluj-Napoca, Romania
- Ayisha Al Busaidi - Department of Neuroradiology, King’s College Hospital National Health Service Foundation Trust, London, United Kingdom
- Carmen Dragos - Department of Radiology, Buckinghamshire Healthcare National Health Service Trust, Amersham, United Kingdom
- Haris Shuaib - Department of Medical Physics, Guy’s & St. Thomas’ National Health Service Foundation Trust, London, United Kingdom; Institute of Psychiatry, Psychology & Neuroscience, King’s College London, London, United Kingdom
- Aysha Luis - Department of Neuroradiology, King’s College Hospital National Health Service Foundation Trust, London, United Kingdom
- Ayesha Mirchandani - Department of Radiology, Cambridge University Hospitals National Health Service Foundation Trust, Cambridge, United Kingdom
- Burcu Alparslan - Department of Neuroradiology, King’s College Hospital National Health Service Foundation Trust, London, United Kingdom; Department of Radiology, Kocaeli University, İzmit, Turkey
- Nina Mansoor - Department of Neuroradiology, King’s College Hospital National Health Service Foundation Trust, London, United Kingdom
- Jose Lavrador - Department of Neurosurgery, King’s College Hospital National Health Service Foundation Trust, London, United Kingdom
- Francesco Vergani - Department of Neurosurgery, King’s College Hospital National Health Service Foundation Trust, London, United Kingdom
- Keyoumars Ashkan - Department of Neurosurgery, King’s College Hospital National Health Service Foundation Trust, London, United Kingdom
- Marc Modat - School of Biomedical Engineering & Imaging Sciences, King’s College London, St. Thomas’ Hospital, London, United Kingdom
- Sebastien Ourselin - School of Biomedical Engineering & Imaging Sciences, King’s College London, St. Thomas’ Hospital, London, United Kingdom
|
498
|
Ren J, Li Y, Yang JJ, Zhao J, Xiang Y, Xia C, Cao Y, Chen B, Guan H, Qi YF, Tang W, Chen K, He YL, Jin ZY, Xue HD. MRI-based radiomics analysis improves preoperative diagnostic performance for the depth of stromal invasion in patients with early stage cervical cancer. Insights Imaging 2022; 13:17. [PMID: 35092505 PMCID: PMC8800977 DOI: 10.1186/s13244-022-01156-0] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2021] [Accepted: 12/31/2021] [Indexed: 11/10/2022] Open
Abstract
Background The depth of cervical stromal invasion is one of the important prognostic factors affecting decision-making for early stage cervical cancer (CC). This study aimed to develop and validate a T2-weighted imaging (T2WI)-based radiomics model and explore independent risk factors (factors with statistical significance in both univariate and multivariate analyses) of middle or deep stromal invasion in early stage CC. Methods Between March 2017 and March 2021, a total of 234 International Federation of Gynecology and Obstetrics IB1-IIA1 CC patients were enrolled and randomly divided into a training cohort (n = 188) and a validation cohort (n = 46). The radiomics features of each patient were extracted from preoperative sagittal T2WI, and key features were selected. After independent risk factors were identified, a combined model and nomogram incorporating radiomics signature and independent risk factors were developed. Diagnostic accuracy of radiologists was also evaluated. Results The maximal tumor diameter (MTD) on magnetic resonance imaging was identified as an independent risk factor. In the validation cohort, the radiomics model, MTD, and combined model showed areas under the curve of 0.879, 0.844, and 0.886. The radiomics model and combined model showed the same sensitivity and specificity of 87.9% and 84.6%, which were better than radiologists (sensitivity, senior = 75.7%, junior = 63.6%; specificity, senior = 69.2%, junior = 53.8%) and MTD (sensitivity = 69.7%, specificity = 76.9%). Conclusion MRI-based radiomics analysis outperformed radiologists for the preoperative diagnosis of middle or deep stromal invasion in early stage CC, and the probability can be individually evaluated by a nomogram.
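Radiomics feature extraction of the kind described here is commonly performed with the open-source pyradiomics package. A minimal sketch, assuming a sagittal T2WI volume and a tumor mask in NIfTI format; the file names are hypothetical and this is not the authors' exact configuration:

```python
# pip install pyradiomics
from radiomics import featureextractor

# Default settings; a study would typically load a parameter file instead
extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableAllFeatures()

# Hypothetical file names: image volume and corresponding tumor mask
features = extractor.execute("t2w_sagittal.nii.gz", "cervix_tumor_mask.nii.gz")

# Drop pyradiomics' diagnostic entries, keeping only numeric feature values
numeric = {k: v for k, v in features.items() if not k.startswith("diagnostics_")}
print(f"{len(numeric)} candidate features extracted")
```

Key-feature selection (e.g., with LASSO) and nomogram construction would follow on the resulting feature table.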
|
499
|
Automated grading of enlarged perivascular spaces in clinical imaging data of an acute stroke cohort using an interpretable, 3D deep learning framework. Sci Rep 2022; 12:788. [PMID: 35039524 PMCID: PMC8764081 DOI: 10.1038/s41598-021-04287-4] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Accepted: 12/13/2021] [Indexed: 01/10/2023] Open
Abstract
Enlarged perivascular spaces (EPVS), specifically in stroke patients, have been shown to strongly correlate with other measures of small vessel disease and cognitive impairment at 1 year follow-up. Typical grading of EPVS is often challenging and time consuming and is usually based on a subjective visual rating scale. The purpose of the current study was to develop an interpretable, 3D neural network for grading EPVS severity at the level of the basal ganglia using clinical-grade imaging in a heterogeneous acute stroke cohort, in the context of total cerebral small vessel disease (CSVD) burden. T2-weighted images from a retrospective cohort of 262 acute stroke patients, collected in 2015 from 5 regional medical centers, were used for analyses. Patients were given a label of 0 for none-to-mild EPVS (< 10) and 1 for moderate-to-severe EPVS (≥ 10). A three-dimensional residual network of 152 layers (3D-ResNet-152) was created to predict EPVS severity and 3D gradient class activation mapping (3DGradCAM) was used for visual interpretation of results. Our model achieved an accuracy of 0.897 and an area under the curve of 0.879 on a hold-out test set of 15% of the total cohort (n = 39). 3DGradCAM showed areas of focus that were in physiologically valid locations, including other prevalent areas for EPVS. These maps also suggested that the distribution of class activation values is indicative of the confidence in the model's decision. Potential clinical implications of our results include: (1) support for the feasibility of automated EPVS scoring using clinical-grade neuroimaging data, potentially alleviating rater subjectivity and improving the confidence of visual rating scales, and (2) demonstration that explainable models are critical for clinical translation.
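The Grad-CAM recipe generalizes directly to 3D: pool the gradients of the class score over the spatial dimensions, weight the last convolutional feature maps by those pooled gradients, and apply a ReLU. A toy PyTorch sketch on a small 3D CNN (not the 152-layer residual network used in the study) illustrates the hook-based mechanics; the architecture and input shape are invented for the example:

```python
import torch
import torch.nn as nn

class Tiny3DNet(nn.Module):
    """Toy volumetric classifier standing in for 3D-ResNet-152."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, 2))

    def forward(self, x):
        return self.head(self.features(x))

model = Tiny3DNet().eval()
acts, grads = {}, {}
target = model.features[2]  # last conv layer
target.register_forward_hook(lambda m, i, o: acts.update(v=o))
target.register_full_backward_hook(lambda m, gi, go: grads.update(v=go[0]))

x = torch.rand(1, 1, 16, 64, 64)  # (batch, channel, depth, height, width)
score = model(x)[0, 1]            # logit of the "moderate-to-severe EPVS" class
score.backward()

weights = grads["v"].mean(dim=(2, 3, 4), keepdim=True)  # pooled gradients per channel
cam = torch.relu((weights * acts["v"]).sum(dim=1))      # (1, D, H, W) saliency volume
print(cam.shape)
```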
|
500
|
Benzakoun J, Deslys MA, Legrand L, Hmeydia G, Turc G, Hassen WB, Charron S, Debacker C, Naggara O, Baron JC, Thirion B, Oppenheim C. Synthetic FLAIR as a Substitute for FLAIR Sequence in Acute Ischemic Stroke. Radiology 2022; 303:153-159. [PMID: 35014901 DOI: 10.1148/radiol.211394] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Background In acute ischemic stroke (AIS), fluid-attenuated inversion recovery (FLAIR) is used for treatment decisions when onset time is unknown. Synthetic FLAIR could be generated with deep learning from information embedded in diffusion-weighted imaging (DWI) and could replace the acquired FLAIR sequence (real FLAIR) and shorten MRI duration. Purpose To compare performance of synthetic and real FLAIR for DWI-FLAIR mismatch estimation and identification of patients presenting within 4.5 hours from symptom onset. Materials and Methods In this retrospective study, all pretreatment and early follow-up (<48 hours after symptom onset) MRI data sets including DWI (b = 0-1000 sec/mm2) and FLAIR sequences obtained in consecutive patients with AIS referred for reperfusion therapies between January 2002 and May 2019 were included. On the training set (80%), a generative adversarial network was trained to produce synthetic FLAIR with DWI as input. On the test set (20%), synthetic FLAIR was computed without real FLAIR knowledge. The DWI-FLAIR mismatch was evaluated on both FLAIR data sets by four independent readers. Interobserver reproducibility and DWI-FLAIR mismatch concordance between synthetic and real FLAIR were evaluated with κ statistics. Sensitivity and specificity for identification of AIS within 4.5 hours were compared in patients with known onset time by using the McNemar test. Results The study included 1416 MRI scans (861 patients; median age, 71 years [interquartile range, 57-81 years]; 375 men), yielding 1134 and 282 scans for training and test sets, respectively. Regarding DWI-FLAIR mismatch, interobserver reproducibility was substantial for real and synthetic FLAIR (κ = 0.80 [95% CI: 0.74, 0.87] and 0.80 [95% CI: 0.74, 0.87], respectively). After consensus, concordance between real and synthetic FLAIR was almost perfect (κ = 0.88; 95% CI: 0.82, 0.93). Diagnostic value for identifying AIS within 4.5 hours did not differ between real and synthetic FLAIR (sensitivity: 107 of 131 [82%] vs 111 of 131 [85%], P = .2; specificity: 96 of 104 [92%] vs 96 of 104 [92%], respectively, P > .99). Conclusion Synthetic fluid-attenuated inversion recovery (FLAIR) had diagnostic performances similar to real FLAIR in depicting diffusion-weighted imaging-FLAIR mismatch and in helping to identify early acute ischemic stroke, and it may accelerate MRI protocols. © RSNA, 2022 Online supplemental material is available for this article. See also the editorial by Carroll and Hurley in this issue.
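The two statistics used here for reader agreement and paired diagnostic comparison, Cohen's κ and the McNemar test, are available off the shelf. A small sketch on simulated reads (all data below are randomly generated, not the study's):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(1)
# Hypothetical binary mismatch ratings on real vs synthetic FLAIR, 131 patients
real = rng.integers(0, 2, size=131)
synthetic = np.where(rng.random(131) < 0.9, real, 1 - real)  # mostly concordant
print("kappa:", round(cohen_kappa_score(real, synthetic), 3))

# McNemar test on paired correct/incorrect calls against a (simulated) truth
truth = rng.integers(0, 2, size=131)
ok_real, ok_syn = real == truth, synthetic == truth
table = [[np.sum(ok_real & ok_syn), np.sum(ok_real & ~ok_syn)],
         [np.sum(~ok_real & ok_syn), np.sum(~ok_real & ~ok_syn)]]
print("McNemar p-value:", mcnemar(table, exact=True).pvalue)
```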
Affiliation(s)
- Joseph Benzakoun, Marc-Antoine Deslys, Laurence Legrand, Ghazi Hmeydia, Guillaume Turc, Wagih Ben Hassen, Sylvain Charron, Clément Debacker, Olivier Naggara, Jean-Claude Baron, Bertrand Thirion, Catherine Oppenheim
- From the Departments of Neuroradiology (J.B., L.L., G.H., W.B.H., O.N., C.O.) and Neurology (G.T., J.C.B.), GHU Paris Psychiatrie et Neurosciences, Site Sainte-Anne, 1 rue Cabanis, 75014 Paris, France; INSERM U1266, Paris, France (J.B., M.A.D., L.L., G.T., W.B.H., S.C., C.D., O.N., J.C.B., C.O.); Université de Paris, FHU Neurovasc, Paris, France (J.B., L.L., G.T., W.B.H., S.C., C.D., O.N., J.C.B., C.O.); and PARIETAL Team, INRIA, Saclay, France (B.T.)
|