501
Haymond S, Master SR. How Can We Ensure Reproducibility and Clinical Translation of Machine Learning Applications in Laboratory Medicine? Clin Chem 2022; 68:392-395. [PMID: 35019992] [PMCID: PMC8894866] [DOI: 10.1093/clinchem/hvab272]
Affiliation(s)
- Shannon Haymond
- Ann & Robert H. Lurie Children’s Hospital of Chicago, Chicago, IL, USA; Department of Pathology, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Stephen R Master
- Department of Pathology and Laboratory Medicine, Children’s Hospital of Philadelphia, Philadelphia, PA, USA; Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
502
Empirical Study on Classifiers for Earlier Prediction of COVID-19 Infection Cure and Death Rate in the Indian States. Healthcare (Basel) 2022; 10:85. [PMID: 35052249] [PMCID: PMC8775063] [DOI: 10.3390/healthcare10010085]
Abstract
Machine learning methods can play a key role in predicting the spread of respiratory infection, helping mine data to better estimate and predict COVID-19 infection status. A fine-tuned ensemble classification approach for predicting the death and cure rates of infected patients is proposed for different states of India. The proposed classification model is applied to a recent COVID-19 dataset for India, and various state-of-the-art classifiers are evaluated against the proposed model. The classifiers forecast patients’ infection status in different regions to support planning of resources and response care systems. Appropriate classification of the output class based on the extracted input features is essential for accurate classifier results. Experiments show that the proposed hybrid model reached a maximum F1-score of 94%, compared with ensembles and other classifiers such as support vector machines, decision trees, and Gaussian naïve Bayes, on a dataset of 5004 instances under 10-fold cross-validation. The results demonstrate the feasibility of automated prediction of COVID-19 cure and death rates in the Indian states.
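As an illustrative sketch only (not the authors' code, and with synthetic stand-in data rather than the Indian COVID-19 dataset), an ensemble of the classifier families named in the abstract, evaluated by F1 score under 10-fold cross-validation, could look like this in scikit-learn:

```python
# Illustrative sketch -- not the paper's implementation. Synthetic data
# stands in for the (unavailable) Indian-state COVID-19 dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True, random_state=0)),  # probability=True enables soft voting
        ("tree", DecisionTreeClassifier(random_state=0)),
        ("gnb", GaussianNB()),
    ],
    voting="soft",  # average the members' predicted class probabilities
)

# 10-fold cross-validated F1 score, mirroring the evaluation protocol described
scores = cross_val_score(ensemble, X, y, cv=10, scoring="f1")
print(round(scores.mean(), 3))
```

The paper's "fine-tuned" hybrid would additionally involve hyperparameter search over the member models, which is omitted here for brevity.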
503
Badr FF, Jadu FM. Performance of artificial intelligence using oral and maxillofacial CBCT images: A systematic review and meta-analysis. Niger J Clin Pract 2022; 25:1918-1927. [DOI: 10.4103/njcp.njcp_394_22]
504
Hickman SE, Woitek R, Le EPV, Im YR, Mouritsen Luxhøj C, Aviles-Rivero AI, Baxter GC, MacKay JW, Gilbert FJ. Machine Learning for Workflow Applications in Screening Mammography: Systematic Review and Meta-Analysis. Radiology 2022; 302:88-104. [PMID: 34665034] [PMCID: PMC8717814] [DOI: 10.1148/radiol.2021210391]
Abstract
Background Advances in computer processing and improvements in data availability have led to the development of machine learning (ML) techniques for mammographic imaging. Purpose To evaluate the reported performance of stand-alone ML applications for screening mammography workflow. Materials and Methods Ovid Embase, Ovid Medline, Cochrane Central Register of Controlled Trials, Scopus, and Web of Science literature databases were searched for relevant studies published from January 2012 to September 2020. The study was registered with the PROSPERO International Prospective Register of Systematic Reviews (protocol no. CRD42019156016). Stand-alone technology was defined as an ML algorithm that can be used independently of a human reader. Studies were quality assessed using the Quality Assessment of Diagnostic Accuracy Studies 2 and the Prediction Model Risk of Bias Assessment Tool, and reporting was evaluated using the Checklist for Artificial Intelligence in Medical Imaging. A primary meta-analysis included the top-performing algorithm and corresponding reader performance, from which pooled summary estimates for the area under the receiver operating characteristic curve (AUC) were calculated using a bivariate model. Results Fourteen articles were included, which detailed 15 studies for stand-alone detection (n = 8) and triage (n = 7). Triage studies reported that 17%-91% of normal mammograms identified could be read by adapted screening, while "missing" an estimated 0%-7% of cancers. In total, an estimated 185 252 cases from three countries with more than 39 readers were included in the primary meta-analysis. The pooled sensitivity, specificity, and AUC were 75.4% (95% CI: 65.6, 83.2; P = .11), 90.6% (95% CI: 82.9, 95.0; P = .40), and 0.89 (95% CI: 0.84, 0.98), respectively, for algorithms, and 73.0% (95% CI: 60.7, 82.6), 88.6% (95% CI: 72.4, 95.8), and 0.85 (95% CI: 0.78, 0.97), respectively, for readers.
Conclusion Machine learning (ML) algorithms that demonstrate a stand-alone application in mammographic screening workflows achieve or even exceed human reader detection performance and improve efficiency. However, this evidence is from a small number of retrospective studies. Therefore, further rigorous independent external prospective testing of ML algorithms to assess performance at preassigned thresholds is required to support these claims. ©RSNA, 2021 Online supplemental material is available for this article. See also the editorial by Whitman and Moseley in this issue.
Affiliation(s)
- Sarah E. Hickman
- From the Department of Radiology (S.E.H., R.W., G.C.B., J.W.M., F.J.G.) and Department of Medicine (E.P.V.L., Y.R.I., C.M.L.), University of Cambridge School of Clinical Medicine, Box 218, Cambridge Biomedical Campus, Cambridge, CB2 0QQ, England; Department of Radiology, Addenbrooke's Hospital, Cambridge University Hospitals National Health Service Foundation Trust, Cambridge, England (R.W., F.J.G.); Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria (R.W.); Department of Pure Mathematics and Mathematical Statistics, University of Cambridge, Cambridge, England (A.I.A.R.); and Norwich Medical School, University of East Anglia, Norwich, England (J.W.M.)
- Ramona Woitek
- Elizabeth Phuong Vi Le
- Yu Ri Im
- Carina Mouritsen Luxhøj
- Angelica I. Aviles-Rivero
- Gabrielle C. Baxter
- James W. MacKay
- Fiona J. Gilbert
505
Sollee J, Tang L, Igiraneza AB, Xiao B, Bai HX, Yang L. Artificial Intelligence for Medical Image Analysis in Epilepsy. Epilepsy Res 2022; 182:106861. [DOI: 10.1016/j.eplepsyres.2022.106861]
506
Daneshjou R, Barata C, Betz-Stablein B, Celebi ME, Codella N, Combalia M, Guitera P, Gutman D, Halpern A, Helba B, Kittler H, Kose K, Liopyris K, Malvehy J, Seog HS, Soyer HP, Tkaczyk ER, Tschandl P, Rotemberg V. Checklist for Evaluation of Image-Based Artificial Intelligence Reports in Dermatology: CLEAR Derm Consensus Guidelines From the International Skin Imaging Collaboration Artificial Intelligence Working Group. JAMA Dermatol 2022; 158:90-96. [PMID: 34851366] [PMCID: PMC9845064] [DOI: 10.1001/jamadermatol.2021.4915]
Abstract
IMPORTANCE The use of artificial intelligence (AI) is accelerating in all aspects of medicine and has the potential to transform clinical care and dermatology workflows. However, to develop image-based algorithms for dermatology applications, comprehensive criteria establishing development and performance evaluation standards are required to ensure product fairness, reliability, and safety. OBJECTIVE To consolidate limited existing literature with expert opinion to guide developers and reviewers of dermatology AI. EVIDENCE REVIEW In this consensus statement, the 19 members of the International Skin Imaging Collaboration AI working group volunteered to provide a consensus statement. A systematic PubMed search was performed of English-language articles published between December 1, 2008, and August 24, 2021, for "artificial intelligence" and "reporting guidelines," as well as other pertinent studies identified by the expert panel. Factors that were viewed as critical to AI development and performance evaluation were included and underwent 2 rounds of electronic discussion to achieve consensus. FINDINGS A checklist of items was developed that outlines best practices of image-based AI development and assessment in dermatology. CONCLUSIONS AND RELEVANCE Clinically effective AI needs to be fair, reliable, and safe; this checklist of best practices will help both developers and reviewers achieve this goal.
Affiliation(s)
- Roxana Daneshjou
- Stanford Department of Dermatology, Stanford School of Medicine, Redwood City, CA, USA; Stanford Department of Biomedical Data Science, Stanford School of Medicine, Stanford, CA, USA
- Catarina Barata
- Institute for Systems and Robotics, Instituto Superior Tecnico, Lisboa, Portugal
- Brigid Betz-Stablein
- The University of Queensland Diamantina Institute, The University of Queensland, Dermatology Research Centre, Brisbane, Australia
- M. Emre Celebi
- Department of Computer Science and Engineering, University of Central Arkansas, Conway, Arkansas, USA
- Marc Combalia
- Melanoma Unit, Dermatology Department, Hospital Clínic Barcelona, Universitat de Barcelona, IDIBAPS, Barcelona, Spain
- Pascale Guitera
- Melanoma Institute Australia, the University of Sydney, Camperdown, Australia; Sydney Melanoma Diagnostic Centre, Royal Prince Alfred Hospital, Camperdown, Australia
- David Gutman
- Department of Biomedical Informatics, Emory University School of Medicine, Atlanta, GA, USA
- Allan Halpern
- Dermatology Service, Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Harald Kittler
- Department of Dermatology, Medical University of Vienna, Vienna, Austria
- Kivanc Kose
- Dermatology Service, Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Josep Malvehy
- Melanoma Unit, Dermatology Department, Hospital Clínic Barcelona, Universitat de Barcelona, IDIBAPS, Barcelona, Spain
- Han Seung Seog
- Department of Dermatology, I Dermatology Clinic, Seoul, Korea; IDerma, Inc., Seoul, Korea
- H. Peter Soyer
- The University of Queensland Diamantina Institute, The University of Queensland, Dermatology Research Centre, Brisbane, Australia
- Eric R Tkaczyk
- Dermatology Service and Research Service, Tennessee Valley Healthcare System, Department of Veterans Affairs, Nashville, TN, USA; Vanderbilt Dermatology Translational Research Clinic, Department of Dermatology, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Philipp Tschandl
- Department of Dermatology, Medical University of Vienna, Vienna, Austria
- Veronica Rotemberg
- Dermatology Service, Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, NY, USA
507
De Souza LT, Silva Filho WE, Santana Lima B, Silva T, Takeshita W. Artificial intelligence in oral radiology: A checklist proposal. J Oral Maxillofac Radiol 2022. [DOI: 10.4103/jomr.jomr_21_22]
508
Rangarajan K. Artificial intelligence in practice: Lessons from the pandemic. Cancer Research, Statistics, and Treatment 2022. [DOI: 10.4103/crst.crst_128_22]
509
Tillmanns N, Lum AE, Cassinelli G, Merkaj S, Verma T, Zeevi T, Staib L, Subramanian H, Bahar RC, Brim W, Lost J, Jekel L, Brackett A, Payabvash S, Ikuta I, Lin M, Bousabarah K, Johnson MH, Cui J, Malhotra A, Omuro A, Turowski B, Aboian MS. Identifying clinically applicable machine learning algorithms for glioma segmentation: recent advances and discoveries. Neurooncol Adv 2022; 4:vdac093. [PMID: 36071926] [PMCID: PMC9446682] [DOI: 10.1093/noajnl/vdac093]
Abstract
Background While there are innumerable machine learning (ML) research algorithms for segmentation of gliomas, there is yet to be a US FDA-cleared product. The aim of this study is to explore the systemic limitations of research algorithms that have prevented translation from concept to product through a review of the current research literature. Methods We performed a systematic literature review across 4 databases. Of 11 727 articles, 58 met the inclusion criteria and were used for data extraction and screening using TRIPOD. Results We found that while many articles on ML-based glioma segmentation report high accuracy, substantial limitations in their methods and results sections make the work difficult to reproduce and translate into clinical practice. Conclusions We also identified that more than a third of the articles used the same publicly available BraTS and TCIA datasets, which account for the majority of patient data on which ML algorithms were trained; this leads to limited generalizability and potential for overfitting and bias.
Affiliation(s)
- Niklas Tillmanns
- Brain Tumor Research Group, Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut, USA
- University Dusseldorf, Medical Faculty, Department of Diagnostic and Interventional Radiology, Dusseldorf, Germany
- Avery E Lum, Gabriel Cassinelli, Sara Merkaj, Tej Verma, Tal Zeevi, Lawrence Staib, Harry Subramanian, Ryan C Bahar, Waverly Brim, Jan Lost, Leon Jekel, Sam Payabvash, Ichiro Ikuta, Michele H Johnson, Ajay Malhotra, Mariam S Aboian
- Brain Tumor Research Group, Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut, USA
- Alexandria Brackett
- Harvey Cushing/John Hay Whitney Medical Library, Yale University, New Haven, Connecticut, USA
- MingDe Lin
- Brain Tumor Research Group, Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, Connecticut, USA
- Visage Imaging, Inc., San Diego, California, USA
- Jin Cui
- Department of Pathology, Boston Children’s Hospital, Boston, Massachusetts, USA
- Antonio Omuro
- Department of Neurology and Yale Cancer Center, Yale School of Medicine, New Haven, Connecticut, USA
- Bernd Turowski
- University Dusseldorf, Medical Faculty, Department of Diagnostic and Interventional Radiology, Dusseldorf, Germany
510
Model for ASsessing the value of Artificial Intelligence in medical imaging (MAS-AI). Int J Technol Assess Health Care 2022; 38:e74. [DOI: 10.1017/s0266462322000551]
Abstract
Objectives
Artificial intelligence (AI) is seen as a major disrupting force in the future healthcare system, but how to assess the value of AI technologies is still unclear. A multidisciplinary group of experts and patients therefore developed a Model for ASsessing the value of AI (MAS-AI) in medical imaging. Medical imaging was chosen because of the maturity of AI in this area, ensuring a robust evidence-based model.
Methods
MAS-AI was developed in three phases. First, we reviewed the literature on existing guides, evaluations, and assessments of the value of AI in medical imaging. Next, we interviewed leading AI researchers in Denmark. The third phase consisted of two workshops in which decision makers, patient organizations, and researchers discussed crucial topics for evaluating AI; the multidisciplinary team revised the model between workshops according to the comments received.
Results
The MAS-AI guideline consists of two steps covering nine domains, plus five process factors supporting the assessment. Step 1 covers a description of the patients, how the AI model was developed, and initial ethical and legal considerations. In step 2, a multidisciplinary assessment of the outcomes of the AI application is performed for the five remaining domains: safety, clinical aspects, economics, organizational aspects, and patient aspects.
Conclusions
We have developed a health technology assessment-based framework to support the introduction of AI technologies into healthcare in medical imaging. A structured process and tool are essential to ensure informed and valid decisions regarding the adoption of AI. MAS-AI can help support decision making and provide greater transparency for all parties.
511
Germain P, Vardazaryan A, Padoy N, Labani A, Roy C, Schindler TH, El Ghannudi S. Deep Learning Supplants Visual Analysis by Experienced Operators for the Diagnosis of Cardiac Amyloidosis by Cine-CMR. Diagnostics (Basel) 2021; 12:69. [PMID: 35054236] [PMCID: PMC8774777] [DOI: 10.3390/diagnostics12010069]
Abstract
Background: Diagnosing cardiac amyloidosis (CA) from cine-CMR (cardiac magnetic resonance) alone is not reliable. In this study, we tested whether a convolutional neural network (CNN) could outperform the visual diagnosis of experienced operators. Method: 119 patients with cardiac amyloidosis and 122 patients with left ventricular hypertrophy (LVH) of other origins were retrospectively selected. Diastolic and systolic cine-CMR images were preprocessed and labeled. A dual-input visual geometry group (VGG) model was used for binary image classification. All images belonging to the same patient were distributed in the same set. Accuracy and area under the curve (AUC) were calculated per frame and per patient from a 40% held-out test set. Results were compared to a visual analysis assessed by three experienced operators. Results: Frame-based comparisons between humans and the CNN provided an accuracy of 0.605 vs. 0.746 (p < 0.0008) and an AUC of 0.630 vs. 0.824 (p < 0.0001). Patient-based comparisons provided an accuracy of 0.660 vs. 0.825 (p < 0.008) and an AUC of 0.727 vs. 0.895 (p < 0.002). Conclusion: Based on cine-CMR images alone, a CNN is able to discriminate cardiac amyloidosis from LVH of other origins better than experienced human operators (15 to 20 points more in absolute value for accuracy and AUC), demonstrating a unique capability to identify what the eyes cannot see through classical radiological analysis.
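For illustration only (random stand-in predictions, not the study's model or data), the frame-based versus patient-based evaluation can be sketched as follows; averaging a patient's frame probabilities into a single score is one plausible aggregation, which the abstract does not specify:

```python
# Illustrative sketch of frame-level vs. patient-level AUC; synthetic data only.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Stand-in data: 40 patients x 10 cine frames each, with per-frame probabilities.
patients = np.repeat(np.arange(40), 10)            # patient id for every frame
labels = np.repeat(rng.integers(0, 2, 40), 10)     # one diagnosis label per patient
frame_probs = rng.random(400) * 0.5 + labels * 0.3  # frame scores weakly track the label

# Frame-based AUC: every frame is scored independently.
frame_auc = roc_auc_score(labels, frame_probs)

# Patient-based AUC: average each patient's frame probabilities into one score.
patient_probs = np.array([frame_probs[patients == p].mean() for p in np.unique(patients)])
patient_labels = labels[::10]
patient_auc = roc_auc_score(patient_labels, patient_probs)

print(frame_auc, patient_auc)
```

Keeping all of a patient's frames in the same split, as the study did, prevents near-duplicate frames from leaking between training and test sets.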
Affiliation(s)
- Philippe Germain
- Department of Radiology, Nouvel Hopital Civil, University Hospital, 67000 Strasbourg, France
- Armine Vardazaryan
- ICube, University of Strasbourg, CNRS, 67000 Strasbourg, France
- IHU (Institut Hopitalo-Universitaire), 67000 Strasbourg, France
- Nicolas Padoy
- ICube, University of Strasbourg, CNRS, 67000 Strasbourg, France
- IHU (Institut Hopitalo-Universitaire), 67000 Strasbourg, France
- Aissam Labani
- Department of Radiology, Nouvel Hopital Civil, University Hospital, 67000 Strasbourg, France
- Catherine Roy
- Department of Radiology, Nouvel Hopital Civil, University Hospital, 67000 Strasbourg, France
- Thomas Hellmut Schindler
- Mallinckrodt Institute of Radiology, Division of Nuclear Medicine, Washington University School of Medicine, Saint Louis, MO 63110, USA
- Soraya El Ghannudi
- Department of Radiology, Nouvel Hopital Civil, University Hospital, 67000 Strasbourg, France
- Department of Nuclear Medicine, Nouvel Hopital Civil, University Hospital, 67000 Strasbourg, France
512
Linguraru MG, Maier-Hein L, Summers RM, Kahn CE. RSNA-MICCAI Panel Discussion: 2. Leveraging the Full Potential of AI-Radiologists and Data Scientists Working Together. Radiol Artif Intell 2021; 3:e210248. [PMID: 34870225] [DOI: 10.1148/ryai.2021210248]
Abstract
In March 2021, the Radiological Society of North America hosted a virtual panel discussion with members of the Medical Image Computing and Computer Assisted Intervention Society. Both organizations share a vision to develop radiologic and medical imaging techniques through advanced quantitative imaging biomarkers and artificial intelligence. The panel addressed how radiologists and data scientists can collaborate to advance the science of AI in radiology. Keywords: Adults and Pediatrics, Segmentation, Feature Detection, Quantification, Diagnosis/Classification, Prognosis/Classification © RSNA, 2021.
Affiliation(s)
- Marius George Linguraru
- Sheikh Zayed Institute for Pediatric Surgical Innovation, Children's National Hospital, Washington, DC (M.G.L.); Department of Computer Assisted Medical Interventions, German Cancer Research Centre, Heidelberg, Germany (L.M.H.); Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (R.M.S.); and Department of Radiology, University of Pennsylvania, 3400 Spruce St, 1 Silverstein, Philadelphia, PA 19104 (C.E.K.)
- Lena Maier-Hein
- Ronald M Summers
- Charles E Kahn
513
|
Ueda D, Yamamoto A, Ehara S, Iwata S, Abo K, Walston SL, Matsumoto T, Shimazaki A, Yoshiyama M, Miki Y. Artificial intelligence-based detection of aortic stenosis from chest radiographs. EUROPEAN HEART JOURNAL. DIGITAL HEALTH 2021; 3:20-28. [PMID: 36713993 PMCID: PMC9707887 DOI: 10.1093/ehjdh/ztab102] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/13/2021] [Revised: 10/16/2021] [Accepted: 11/30/2021] [Indexed: 02/01/2023]
Abstract
Aims We aimed to develop models to detect aortic stenosis (AS) from chest radiographs, one of the most basic imaging tests, with artificial intelligence. Methods and results We used 10 433 retrospectively collected digital chest radiographs from 5638 patients to train, validate, and test three deep learning models. Chest radiographs were collected from patients who had also undergone echocardiography at a single institution between July 2016 and May 2019. These were labelled from the corresponding echocardiography assessments as AS-positive or AS-negative. The radiographs were separated on a patient basis into training [8327 images from 4512 patients, mean age 65 ± 15 (standard deviation) years], validation (1041 images from 563 patients, mean age 65 ± 14 years), and test (1065 images from 563 patients, mean age 65 ± 14 years) datasets. The soft voting-based ensemble of the three developed models had the best overall performance for predicting AS with an area under the receiver operating characteristic curve, sensitivity, specificity, accuracy, positive predictive value, and negative predictive value of 0.83 (95% confidence interval 0.77-0.88), 0.78 (0.67-0.86), 0.71 (0.68-0.73), 0.71 (0.68-0.74), 0.18 (0.14-0.23), and 0.97 (0.96-0.98), respectively, in the validation dataset and 0.83 (0.78-0.88), 0.83 (0.74-0.90), 0.69 (0.66-0.72), 0.71 (0.68-0.73), 0.23 (0.19-0.28), and 0.97 (0.96-0.98), respectively, in the test dataset. Conclusion Deep learning models using chest radiographs have the potential to differentiate between radiographs of patients with and without AS. Lay Summary We created artificial intelligence (AI) models using deep learning to identify aortic stenosis (AS) from chest radiographs. Three AI models were developed and evaluated with 10 433 retrospectively collected radiographs and labelled from echocardiography reports.
The ensemble AI model could detect AS in a test dataset with an area under the receiver operating characteristic curve of 0.83 (95% confidence interval 0.78-0.88). Since chest radiography is a cost-effective and widely available imaging test, our model can provide an additive resource for the detection of AS.
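The soft-voting ensemble described above has a simple mechanic: each model emits a per-image probability of AS, the ensemble averages those probabilities, and a threshold yields the final label. A minimal sketch follows; the probabilities and threshold below are invented for illustration, and the paper's actual base models are deep networks.

```python
# Soft voting: average each model's predicted probability, then threshold.
def soft_vote(prob_lists, threshold=0.5):
    """prob_lists: one list of per-sample probabilities per model."""
    n_models = len(prob_lists)
    n_samples = len(prob_lists[0])
    mean_probs = [sum(p[i] for p in prob_lists) / n_models
                  for i in range(n_samples)]
    return mean_probs, [int(p >= threshold) for p in mean_probs]

# Three hypothetical models scoring four radiographs:
model_a = [0.9, 0.2, 0.6, 0.1]
model_b = [0.8, 0.4, 0.3, 0.2]
model_c = [0.7, 0.3, 0.9, 0.3]

mean_probs, labels = soft_vote([model_a, model_b, model_c])
print(labels)  # [1, 0, 1, 0]
```

Soft voting (averaging probabilities) is generally preferred over hard voting (majority of labels) when the base models output calibrated probabilities, since it preserves each model's confidence.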
Collapse
Affiliation(s)
- Daiju Ueda
- Corresponding author. Tel: +81 6 6645 3831, Fax: +81 6 6646 6655,
| | - Akira Yamamoto
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
| | - Shoichi Ehara
- Department of Cardiovascular Medicine, Graduate School of Medicine, Osaka City University, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
| | - Shinichi Iwata
- Department of Cardiovascular Medicine, Graduate School of Medicine, Osaka City University, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
| | - Koji Abo
- Central Clinical Laboratory, Osaka City University Hospital, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
| | - Shannon L Walston
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
| | - Toshimasa Matsumoto
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
| | - Akitoshi Shimazaki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
| | - Minoru Yoshiyama
- Department of Cardiovascular Medicine, Graduate School of Medicine, Osaka City University, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
| | - Yukio Miki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka City University, 1-4-3 Asahi-machi, Abeno-ku, Osaka 545-8585, Japan
| |
Collapse
|
514
|
Daugaard Jørgensen M, Antulov R, Hess S, Lysdahlgaard S. Convolutional neural network performance compared to radiologists in detecting intracranial hemorrhage from brain computed tomography: A systematic review and meta-analysis. Eur J Radiol 2021; 146:110073. [PMID: 34847397 DOI: 10.1016/j.ejrad.2021.110073] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Revised: 10/01/2021] [Accepted: 11/22/2021] [Indexed: 12/23/2022]
Abstract
PURPOSE To compare the diagnostic accuracy of convolutional neural networks (CNN) with radiologists as the reference standard in the diagnosis of intracranial hemorrhages (ICH) with non-contrast computed tomography of the cerebrum (NCTC). METHODS PubMed, Embase, Scopus, and Web of Science were searched for the period from 1 January 2012 to 20 July 2020; eligible studies included patients with and without ICH as the target condition undergoing NCTC, used deep learning algorithms based on CNNs, and had radiologists' reports as the minimum reference standard. Pooled sensitivities, specificities and a summary receiver operating characteristics curve (SROC) were employed for meta-analysis. RESULTS 5,119 records were identified through database searching. Title screening left 47 studies for full-text assessment and 6 studies for meta-analysis. Comparing CNN performance to the reference standard in the retrospective studies yielded a pooled sensitivity of 96.00% (95% CI: 93.00% to 97.00%), pooled specificity of 97.00% (95% CI: 90.00% to 99.00%) and SROC of 98.00% (95% CI: 97.00% to 99.00%); combining the retrospective studies with studies using external datasets yielded a pooled sensitivity of 95.00% (95% CI: 91.00% to 97.00%), pooled specificity of 96.00% (95% CI: 91.00% to 98.00%) and a pooled SROC of 98.00% (95% CI: 97.00% to 99.00%). CONCLUSION This review found the diagnostic performance of CNNs to be equivalent to that of radiologists for retrospective studies. Pooling out-of-sample external validation studies with retrospective studies found CNN performance to be slightly worse. There is a critical need for studies with a robust reference standard and external dataset validation.
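For readers unfamiliar with how pooled sensitivity and specificity figures like those above are produced, here is a minimal sketch of one standard approach: fixed-effect inverse-variance pooling on the logit scale. The study counts below are invented, and the review's actual meta-analytic model may differ (diagnostic meta-analyses often use bivariate random-effects models instead).

```python
import math

def pooled_logit(successes_totals):
    """Fixed-effect pooled proportion from (events, total) pairs,
    inverse-variance weighted on the logit scale."""
    num, den = 0.0, 0.0
    for k, n in successes_totals:
        p = k / n
        logit = math.log(p / (1 - p))
        var = 1.0 / (n * p * (1 - p))   # approximate variance of the logit
        w = 1.0 / var
        num += w * logit
        den += w
    pooled = num / den
    return 1.0 / (1.0 + math.exp(-pooled))  # back-transform to a proportion

# Three hypothetical studies: (true positives, total with the condition),
# giving a pooled sensitivity.
studies = [(95, 100), (180, 190), (45, 50)]
print(round(pooled_logit(studies), 3))
```

The logit transform keeps the pooled estimate inside (0, 1); larger studies, whose logit estimates have smaller variance, receive proportionally more weight.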
Collapse
Affiliation(s)
| | - Ronald Antulov
- Department of Radiology and Nuclear Medicine, Hospital of South West Jutland, University Hospital of Southern Denmark, Esbjerg, Denmark; Department of Regional Health Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark
| | - Søren Hess
- Department of Radiology and Nuclear Medicine, Hospital of South West Jutland, University Hospital of Southern Denmark, Esbjerg, Denmark; Department of Regional Health Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark
| | - Simon Lysdahlgaard
- Department of Radiology and Nuclear Medicine, Hospital of South West Jutland, University Hospital of Southern Denmark, Esbjerg, Denmark; Department of Regional Health Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark.
| |
Collapse
|
515
|
Hardacre CJ, Robertshaw JA, Barratt SL, Adams HL, MacKenzie Ross RV, Robinson GRE, Suntharalingam J, Pauling JD, Rodrigues JCL. Diagnostic test accuracy of artificial intelligence analysis of cross-sectional imaging in pulmonary hypertension: a systematic literature review. Br J Radiol 2021; 94:20210332. [PMID: 34541861 PMCID: PMC8631018 DOI: 10.1259/bjr.20210332] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Revised: 07/28/2021] [Accepted: 08/31/2021] [Indexed: 11/05/2022] Open
Abstract
OBJECTIVES To undertake the first systematic review examining the performance of artificial intelligence (AI) applied to cross-sectional imaging for the diagnosis of acquired pulmonary arterial hypertension (PAH). METHODS Searches of Medline, Embase and Web of Science were undertaken on 1 July 2020. Original publications studying AI applied to cross-sectional imaging for the diagnosis of acquired PAH in adults were identified through two-staged double-blinded review. Study quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies and Checklist for Artificial Intelligence in Medicine frameworks. Narrative synthesis was undertaken following Synthesis Without Meta-Analysis guidelines. This review received no funding and was registered in the International Prospective Register of Systematic Reviews (ID:CRD42020196295). RESULTS Searches returned 476 citations. Three retrospective observational studies, published between 2016 and 2020, were selected for data-extraction. Two methods applied to cardiac-MRI demonstrated high diagnostic accuracy, with the best model achieving AUC=0.90 (95% CI: 0.85-0.93), 89% sensitivity and 81% specificity. Stronger results were achieved using cardiac-MRI for classification of idiopathic PAH, achieving AUC=0.97 (95% CI: 0.89-1.0), 96% sensitivity and 87% specificity. One study reporting CT-based AI demonstrated lower accuracy, with 64.6% sensitivity and 97.0% specificity. CONCLUSIONS Automated methods for identifying PAH on cardiac-MRI are emerging with high diagnostic accuracy. AI applied to cross-sectional imaging may provide non-invasive support to reduce diagnostic delay in PAH. This would be helped by stronger solutions in other modalities. ADVANCES IN KNOWLEDGE There is a significant shortage of research in this important area. Early detection of PAH would be supported by further research advances on the promising emerging technologies identified.
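The AUC values reported here (and throughout this section) have a simple probabilistic reading: the area under the ROC curve equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one (the Mann-Whitney statistic). A minimal sketch, with invented scores:

```python
def auc(pos_scores, neg_scores):
    """AUC as the fraction of positive/negative pairs ranked correctly,
    counting ties as half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

positives = [0.9, 0.8, 0.6]
negatives = [0.7, 0.4, 0.3, 0.2]
print(auc(positives, negatives))  # 11 of 12 pairs correct
```

An AUC of 0.90, as in the best cardiac-MRI model above, therefore means a 90% chance that the model ranks a PAH case above a non-PAH case.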
Collapse
Affiliation(s)
| | | | - Shaney L Barratt
- Department of Respiratory Medicine, North Bristol NHS Trust, Bristol, UK
| | | | | | | | | | | | | |
Collapse
|
517
|
Wang Q, Li C, Zhang J, Hu X, Fan Y, Ma K, Sparrelid E, Brismar TB. Radiomics Models for Predicting Microvascular Invasion in Hepatocellular Carcinoma: A Systematic Review and Radiomics Quality Score Assessment. Cancers (Basel) 2021; 13:5864. [PMID: 34831018 PMCID: PMC8616379 DOI: 10.3390/cancers13225864] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2021] [Revised: 11/13/2021] [Accepted: 11/17/2021] [Indexed: 12/12/2022] Open
Abstract
Preoperative prediction of microvascular invasion (MVI) is of importance in hepatocellular carcinoma (HCC) patient treatment management. Plenty of radiomics models for MVI prediction have been proposed. This study aimed to elucidate the role of radiomics models in the prediction of MVI and to evaluate their methodological quality. The methodological quality was assessed by the Radiomics Quality Score (RQS), and the risk of bias was evaluated by the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2). Twenty-two studies using CT, MRI, or PET/CT for MVI prediction were included. All were retrospective studies, and only two had an external validation cohort. The AUC values of the prediction models ranged from 0.69 to 0.94 in the test cohort. Substantial methodological heterogeneity existed, and the methodological quality was low, with an average RQS score of 10 (28% of the total). Most studies demonstrated a low or unclear risk of bias in the domains of QUADAS-2. In conclusion, a radiomics model could be an accurate and effective tool for MVI prediction in HCC patients, although the methodological quality has so far been insufficient. Future prospective studies with an external validation cohort in accordance with a standardized radiomics workflow are expected to supply a reliable model that translates into clinical utilization.
Collapse
Affiliation(s)
- Qiang Wang
- Division of Medical Imaging and Technology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, 14186 Stockholm, Sweden;
- Division of Radiology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Karolinska University Hospital, 14186 Stockholm, Sweden
| | - Changfeng Li
- Institute of Hepatobiliary Surgery, Southwest Hospital, Army Medical University (Third Military Medical University), Chongqing 400038, China; (C.L.); (K.M.)
| | - Jiaxing Zhang
- Department of Pharmacy, Guizhou Provincial People’s Hospital, Guiyang 550002, China;
| | - Xiaojun Hu
- Hepatobiliary Surgery, The Fifth Affiliated Hospital, Southern Medical University, Guangzhou 510999, China;
- The Second School of Clinical Medicine, Southern Medical University, Guangzhou 510515, China;
| | - Yingfang Fan
- The Second School of Clinical Medicine, Southern Medical University, Guangzhou 510515, China;
- Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou 510280, China
| | - Kuansheng Ma
- Institute of Hepatobiliary Surgery, Southwest Hospital, Army Medical University (Third Military Medical University), Chongqing 400038, China; (C.L.); (K.M.)
| | - Ernesto Sparrelid
- Division of Surgery, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Karolinska University Hospital, 14186 Stockholm, Sweden;
| | - Torkel B. Brismar
- Division of Medical Imaging and Technology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, 14186 Stockholm, Sweden;
- Division of Radiology, Department of Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Karolinska University Hospital, 14186 Stockholm, Sweden
| |
Collapse
|
518
|
Cho J, Kim YJ, Sunwoo L, Lee GP, Nguyen TQ, Cho SJ, Baik SH, Bae YJ, Choi BS, Jung C, Sohn CH, Han JH, Kim CY, Kim KG, Kim JH. Deep Learning-Based Computer-Aided Detection System for Automated Treatment Response Assessment of Brain Metastases on 3D MRI. Front Oncol 2021; 11:739639. [PMID: 34778056 PMCID: PMC8579083 DOI: 10.3389/fonc.2021.739639] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2021] [Accepted: 09/30/2021] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Although accurate treatment response assessment for brain metastases (BMs) is crucial, it is highly labor intensive. This retrospective study aimed to develop a computer-aided detection (CAD) system for automated BM detection and treatment response evaluation using deep learning. METHODS We included 214 consecutive MRI examinations of 147 patients with BM obtained between January 2015 and August 2016. These were divided into the training (174 MR images from 127 patients) and test datasets according to temporal separation (temporal test set #1; 40 MR images from 20 patients). For external validation, 24 patients with BM and 11 patients without BM from other institutions were included (geographic test set). In addition, we included 12 MRIs from BM patients obtained between August 2017 and March 2020 (temporal test set #2). Detection sensitivity, dice similarity coefficient (DSC) for segmentation, and agreements in one-dimensional and volumetric Response Assessment in Neuro-Oncology Brain Metastases (RANO-BM) criteria between CAD and radiologists were assessed. RESULTS In the temporal test set #1, the sensitivity was 75.1% (95% confidence interval [CI]: 69.6%, 79.9%), mean DSC was 0.69 ± 0.22, and false-positive (FP) rate per scan was 0.8 for BM ≥ 5 mm. Agreements in the RANO-BM criteria were moderate (κ, 0.52) and substantial (κ, 0.68) for one-dimensional and volumetric, respectively. In the geographic test set, sensitivity was 87.7% (95% CI: 77.2%, 94.5%), mean DSC was 0.68 ± 0.20, and FP rate per scan was 1.9 for BM ≥ 5 mm. In the temporal test set #2, sensitivity was 94.7% (95% CI: 74.0%, 99.9%), mean DSC was 0.82 ± 0.20, and FP per scan was 0.5 (6/12) for BM ≥ 5 mm. CONCLUSIONS Our CAD showed potential for automated treatment response assessment of BM ≥ 5 mm.
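The per-lesion metrics reported above can be made concrete. The Dice similarity coefficient (DSC) measures voxel-level overlap between predicted and reference segmentations, while sensitivity and false positives per scan summarize lesion-level detection. The toy voxel sets and counts below are illustrative only, not the study's data.

```python
def dice(pred, ref):
    """DSC = 2|A ∩ B| / (|A| + |B|) for segmentations given as
    sets of voxel indices."""
    if not pred and not ref:
        return 1.0  # both empty: perfect agreement by convention
    return 2 * len(pred & ref) / (len(pred) + len(ref))

def detection_stats(detected_lesions, total_lesions, false_positives, n_scans):
    """Lesion-level sensitivity and false positives per scan."""
    return detected_lesions / total_lesions, false_positives / n_scans

pred = {(1, 1, 1), (1, 1, 2), (1, 2, 1)}   # predicted voxels (z, y, x)
ref  = {(1, 1, 1), (1, 1, 2), (2, 2, 2)}   # reference voxels
print(dice(pred, ref))                     # 2*2 / (3+3)
print(detection_stats(30, 40, 32, 40))     # (0.75, 0.8)
```

Note that DSC and detection sensitivity answer different questions: a model can find every lesion (high sensitivity) while delineating each one poorly (low DSC), or vice versa, which is why studies like this one report both.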
Collapse
Affiliation(s)
- Jungheum Cho
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
| | - Young Jae Kim
- Department of Biomedical Engineering, Gachon University Gil Medical Center, Incheon, South Korea
| | - Leonard Sunwoo
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
- Center for Artificial Intelligence in Healthcare, Seoul National University Bundang Hospital, Seongnam, South Korea
| | - Gi Pyo Lee
- Department of Biomedical Engineering, Gachon University Gil Medical Center, Incheon, South Korea
| | - Toan Quang Nguyen
- Department of Radiology, Vietnam National Cancer Hospital, Hanoi, Vietnam
| | - Se Jin Cho
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
| | - Sung Hyun Baik
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
| | - Yun Jung Bae
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
| | - Byung Se Choi
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
| | - Cheolkyu Jung
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
| | - Chul-Ho Sohn
- Department of Radiology, Seoul National University Hospital, Seoul, South Korea
| | - Jung-Ho Han
- Department of Neurosurgery, Seoul National University Bundang Hospital, Seongnam, South Korea
| | - Chae-Yong Kim
- Department of Neurosurgery, Seoul National University Bundang Hospital, Seongnam, South Korea
| | - Kwang Gi Kim
- Department of Biomedical Engineering, Gachon University Gil Medical Center, Incheon, South Korea
| | - Jae Hyoung Kim
- Department of Radiology, Seoul National University Bundang Hospital, Seongnam, South Korea
| |
Collapse
|
519
|
Jayachandran Preetha C, Meredig H, Brugnara G, Mahmutoglu MA, Foltyn M, Isensee F, Kessler T, Pflüger I, Schell M, Neuberger U, Petersen J, Wick A, Heiland S, Debus J, Platten M, Idbaih A, Brandes AA, Winkler F, van den Bent MJ, Nabors B, Stupp R, Maier-Hein KH, Gorlia T, Tonn JC, Weller M, Wick W, Bendszus M, Vollmuth P. Deep-learning-based synthesis of post-contrast T1-weighted MRI for tumour response assessment in neuro-oncology: a multicentre, retrospective cohort study. LANCET DIGITAL HEALTH 2021; 3:e784-e794. [PMID: 34688602 DOI: 10.1016/s2589-7500(21)00205-3] [Citation(s) in RCA: 47] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/28/2021] [Revised: 07/14/2021] [Accepted: 08/10/2021] [Indexed: 12/30/2022]
Abstract
BACKGROUND Gadolinium-based contrast agents (GBCAs) are widely used to enhance tissue contrast during MRI scans and play a crucial role in the management of patients with cancer. However, studies have shown gadolinium deposition in the brain after repeated GBCA administration with yet unknown clinical significance. We aimed to assess the feasibility and diagnostic value of synthetic post-contrast T1-weighted MRI generated from pre-contrast MRI sequences through deep convolutional neural networks (dCNN) for tumour response assessment in neuro-oncology. METHODS In this multicentre, retrospective cohort study, we used MRI examinations to train and validate a dCNN for synthesising post-contrast T1-weighted sequences from pre-contrast T1-weighted, T2-weighted, and fluid-attenuated inversion recovery sequences. We used MRI scans with availability of these sequences from 775 patients with glioblastoma treated at Heidelberg University Hospital, Heidelberg, Germany (775 MRI examinations); 260 patients who participated in the phase 2 CORE trial (1083 MRI examinations, 59 institutions); and 505 patients who participated in the phase 3 CENTRIC trial (3147 MRI examinations, 149 institutions). Separate training runs to rank the importance of individual sequences and (for a subset) diffusion-weighted imaging were conducted. Independent testing was performed on MRI data from the phase 2 and phase 3 EORTC-26101 trial (521 patients, 1924 MRI examinations, 32 institutions). The similarity between synthetic and true contrast enhancement on post-contrast T1-weighted MRI was quantified using the structural similarity index measure (SSIM). Automated tumour segmentation and volumetric tumour response assessment based on synthetic versus true post-contrast T1-weighted sequences was performed in the EORTC-26101 trial and agreement was assessed with Kaplan-Meier plots. 
FINDINGS The median SSIM score for predicting contrast enhancement on synthetic post-contrast T1-weighted sequences in the EORTC-26101 test set was 0·818 (95% CI 0·817-0·820). Segmentation of the contrast-enhancing tumour from synthetic post-contrast T1-weighted sequences yielded a median tumour volume of 6·31 cm3 (5·60 to 7·14), thereby underestimating the true tumour volume by a median of -0·48 cm3 (-0·37 to -0·76) with the concordance correlation coefficient suggesting a strong linear association between tumour volumes derived from synthetic versus true post-contrast T1-weighted sequences (0·782, 0·751-0·807, p<0·0001). Volumetric tumour response assessment in the EORTC-26101 trial showed a median time to progression of 4·2 months (95% CI 4·1-5·2) with synthetic post-contrast T1-weighted and 4·3 months (4·1-5·5) with true post-contrast T1-weighted sequences (p=0·33). The strength of the association between the time to progression as a surrogate endpoint for predicting the patients' overall survival in the EORTC-26101 cohort was similar when derived from synthetic post-contrast T1-weighted sequences (hazard ratio of 1·749, 95% CI 1·282-2·387, p=0·0004) and model C-index (0·667, 0·622-0·708) versus true post-contrast T1-weighted MRI (1·799, 95% CI 1·314-2·464, p=0·0003) and model C-index (0·673, 95% CI 0·626-0·711). INTERPRETATION Generating synthetic post-contrast T1-weighted MRI from pre-contrast MRI using dCNN is feasible and quantification of the contrast-enhancing tumour burden from synthetic post-contrast T1-weighted MRI allows assessment of the patient's response to treatment with no significant difference by comparison with true post-contrast T1-weighted sequences with administration of GBCAs. This finding could guide the application of dCNN in radiology to potentially reduce the necessity of GBCA administration. FUNDING Deutsche Forschungsgemeinschaft.
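The SSIM quoted above compares luminance, contrast, and structure between two images. Real evaluations (and presumably this study's) compute SSIM over local windows and average the map; as a hedged sketch, the single global SSIM of two equally sized images, given here as flat intensity lists with invented values, is:

```python
def global_ssim(x, y, data_range=1.0):
    """Global SSIM of two equal-length intensity lists.
    Standard constants C1 = (0.01 L)^2, C2 = (0.03 L)^2 for dynamic range L."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = [0.1, 0.5, 0.9, 0.3]
print(global_ssim(img, img))  # identical images give exactly 1.0
```

SSIM ranges up to 1.0 for identical images, so the median of 0.818 above indicates close but not perfect structural agreement between synthetic and true post-contrast sequences.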
Collapse
Affiliation(s)
| | - Hagen Meredig
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Gianluca Brugnara
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Mustafa A Mahmutoglu
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Martha Foltyn
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Fabian Isensee
- Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
| | - Tobias Kessler
- Neurology Clinic, Heidelberg University Hospital, Heidelberg, Germany; Clinical Cooperation Unit Neurooncology, German Cancer Research Center, Heidelberg, Germany
| | - Irada Pflüger
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Marianne Schell
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Ulf Neuberger
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Jens Petersen
- Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
| | - Antje Wick
- Neurology Clinic, Heidelberg University Hospital, Heidelberg, Germany
| | - Sabine Heiland
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Jürgen Debus
- Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany; Heidelberg Institute of Radiation Oncology, Heidelberg, Germany; Heidelberg Ion-Beam Therapy Center, Heidelberg, Germany
| | - Michael Platten
- Clinical Cooperation Unit Neuroimmunology and Brain Tumor Immunology, German Cancer Research Center, Heidelberg, Germany; Department of Neurology, Medical Faculty Mannheim, University of Heidelberg, Mannheim, Germany
| | - Ahmed Idbaih
- Sorbonne Université, Inserm, Institut du Cerveau, Assistance Publique-Hôpitaux de Paris, Hôpitaux Universitaires La Pitié Salpêtrière-Charles Foix, Service de Neurologie 2-Mazarin, Paris, France
| | - Alba A Brandes
- Department of Medical Oncology, Azienda USL of Bologna, Bologna, Italy
| | - Frank Winkler
- Neurology Clinic, Heidelberg University Hospital, Heidelberg, Germany; Clinical Cooperation Unit Neurooncology, German Cancer Research Center, Heidelberg, Germany
| | | | - Burt Nabors
- Department of Neurology and O'Neal Comprehensive Cancer Center, Division of Neuro-Oncology, University of Alabama at Birmingham, Birmingham, AL, USA
| | - Roger Stupp
- Malnati Brain Tumor Institute of the Lurie Comprehensive Cancer Center, Department of Neurological Surgery and Department of Neurology, Northwestern Medicine and Northwestern University, Chicago, IL, USA
| | - Klaus H Maier-Hein
- Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg, Germany; Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
| | - Thierry Gorlia
- European Organisation for Research and Treatment of Cancer, Brussels, Belgium
| | | | - Michael Weller
- Department of Neurology, University Hospital and University of Zurich, Zurich, Switzerland
| | - Wolfgang Wick
- Neurology Clinic, Heidelberg University Hospital, Heidelberg, Germany; Clinical Cooperation Unit Neurooncology, German Cancer Research Center, Heidelberg, Germany
| | - Martin Bendszus
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
| | - Philipp Vollmuth
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany.
| |
Collapse
|
520
|
Jamtheim Gustafsson C, Lempart M, Swärd J, Persson E, Nyholm T, Thellenberg Karlsson C, Scherman J. Deep learning-based classification and structure name standardization for organ at risk and target delineations in prostate cancer radiotherapy. J Appl Clin Med Phys 2021; 22:51-63. [PMID: 34623738 PMCID: PMC8664152 DOI: 10.1002/acm2.13446] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2021] [Revised: 09/16/2021] [Accepted: 09/24/2021] [Indexed: 11/12/2022] Open
Abstract
Radiotherapy (RT) datasets can suffer from variations in annotation of organ at risk (OAR) and target structures. Annotation standards exist, but their description for prostate targets is limited. This restricts the use of such data for supervised machine learning purposes, as it requires properly annotated data. The aim of this work was to develop a modality-independent deep learning (DL) model for automatic classification and annotation of prostate RT DICOM structures. Delineated prostate OARs, support and target structures (gross tumor volume [GTV]/clinical target volume [CTV]/planning target volume [PTV]), with or without separate vesicles and/or lymph nodes, were extracted as binary masks from 1854 patients. An image-modality-independent 2D InceptionResNetV2 classification network was trained with varying amounts of training data using four image input channels. Channels 1-3 consisted of orthogonal 2D projections from each individual binary structure. The fourth channel contained a summation of the other available binary structure masks. Structure classification performance was assessed in independent CT (n = 200 patients) and magnetic resonance imaging (MRI) (n = 40 patients) test datasets and an external CT dataset (n = 99 patients) from another clinic. A weighted classification accuracy of 99.4% was achieved during training. The unweighted classification accuracy and the weighted average F1 score among different structures were 98.8% and 98.4% in the CT test dataset and 98.6% and 98.5% in the MRI test dataset, respectively. The external CT dataset yielded the corresponding results 98.4% and 98.7% when analyzed for trained structures only, and 79.6% and 75.2% for the full dataset. Most misclassifications in the external CT dataset occurred because multiple CTVs and PTVs were fused together, which was not represented in the training data.
Our proposed DL-based method for automated renaming and standardization of prostate radiotherapy annotations shows great potential. Clinic specific contouring standards however need to be represented in the training data for successful use. Source code is available at https://github.com/jamtheim/DicomRTStructRenamerPublic.
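The four-channel input encoding described above can be illustrated with a toy volume: each of the first three channels is an orthogonal maximum-intensity projection of a 3D binary structure mask. This is a hedged sketch with a made-up 2x2x2 volume; the authors' linked repository is the authoritative implementation.

```python
def project(mask, axis):
    """Max-intensity projection of a binary volume (nested z/y/x lists)
    along axis 0, 1, or 2."""
    nz, ny, nx = len(mask), len(mask[0]), len(mask[0][0])
    if axis == 0:  # collapse z -> (y, x) image
        return [[max(mask[z][y][x] for z in range(nz))
                 for x in range(nx)] for y in range(ny)]
    if axis == 1:  # collapse y -> (z, x) image
        return [[max(mask[z][y][x] for y in range(ny))
                 for x in range(nx)] for z in range(nz)]
    # collapse x -> (z, y) image
    return [[max(mask[z][y][x] for x in range(nx))
             for y in range(ny)] for z in range(nz)]

# 2x2x2 volume with a single "on" voxel at (z=1, y=0, x=1):
vol = [[[0, 0], [0, 0]],
       [[0, 1], [0, 0]]]
channels = [project(vol, axis) for axis in (0, 1, 2)]
print(channels[0])  # projection along z: [[0, 1], [0, 0]]
```

Projecting each structure to three fixed 2D views lets a standard 2D classifier such as InceptionResNetV2 consume 3D shape information without the cost of a full 3D network.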
Collapse
Affiliation(s)
- Christian Jamtheim Gustafsson
- Department of Hematology Oncology and Radiation Physics, Skåne University Hospital, Lund, Sweden.,Department of Translational Sciences, Medical Radiation Physics, Lund University, Malmö, Sweden
| | - Michael Lempart
- Department of Hematology Oncology and Radiation Physics, Skåne University Hospital, Lund, Sweden.,Department of Translational Sciences, Medical Radiation Physics, Lund University, Malmö, Sweden
| | - Johan Swärd
- Centre for Mathematical Sciences, Mathematical Statistics, Lund University, Lund, Sweden
| | - Emilia Persson
- Department of Hematology Oncology and Radiation Physics, Skåne University Hospital, Lund, Sweden.,Department of Translational Sciences, Medical Radiation Physics, Lund University, Malmö, Sweden
| | - Tufve Nyholm
- Department of Radiation Sciences, Radiation Physics, Umeå University, Umeå, Sweden
| | | | - Jonas Scherman
- Department of Hematology Oncology and Radiation Physics, Skåne University Hospital, Lund, Sweden
521
Ghodrati V, Rivenson Y, Prosper A, de Haan K, Ali F, Yoshida T, Bedayat A, Nguyen KL, Finn JP, Hu P. Automatic segmentation of peripheral arteries and veins in ferumoxytol-enhanced MR angiography. Magn Reson Med 2021; 87:984-998. [PMID: 34611937 DOI: 10.1002/mrm.29026] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2020] [Revised: 09/03/2021] [Accepted: 09/09/2021] [Indexed: 11/10/2022]
Abstract
PURPOSE To automate the segmentation of the peripheral arteries and veins in the lower extremities based on ferumoxytol-enhanced MR angiography (FE-MRA). METHODS Our automated pipeline has 2 sequential stages. In the first stage, we used a 3D U-Net with local attention gates, trained on a combination of the Focal Tversky loss and a region mutual loss under a deep supervision mechanism, to segment the vasculature from the high-resolution FE-MRA datasets. In the second stage, we used time-resolved images to separate the arteries from the veins. Because the ultimate segmentation quality of the arteries and veins relies on the performance of the first stage, we thoroughly evaluated the different aspects of the segmentation network and compared its performance in blood vessel segmentation with currently accepted state-of-the-art networks, including Volumetric-Net, DeepVesselNet-FCN, and Uception. RESULTS We achieved a competitive F1 = 0.8087 and recall = 0.8410 for blood vessel segmentation, compared with F1 = (0.7604, 0.7573, 0.7651) and recall = (0.7791, 0.7570, 0.7774) obtained with Volumetric-Net, DeepVesselNet-FCN, and Uception, respectively. For the artery and vein separation stage, we achieved F1 = 0.8274/0.7863 (arteries/veins) in the calf region, the most challenging region in peripheral artery and vein segmentation. CONCLUSION Our pipeline is capable of fully automatic vessel segmentation based on FE-MRA without the need for human interaction in <4 min. This method improves upon manual segmentation by radiologists, which routinely takes several hours.
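The first-stage training objective combines a Focal Tversky loss with a region mutual loss; the Tversky part can be sketched as below. The parameter values are common illustrative defaults, not those used in the paper, and a NumPy version is shown in place of a framework-specific differentiable implementation.

```python
import numpy as np

def focal_tversky_loss(y_true, y_pred, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss on flattened probability maps.
    alpha and beta weight false negatives and false positives, which helps
    with thin, class-imbalanced structures like vessels; gamma < 1 focuses
    training on harder, lower-overlap examples. Values here are illustrative."""
    y_true = np.asarray(y_true, dtype=np.float64).reshape(-1)
    y_pred = np.asarray(y_pred, dtype=np.float64).reshape(-1)
    tp = np.sum(y_true * y_pred)                 # soft true positives
    fn = np.sum(y_true * (1.0 - y_pred))         # soft false negatives
    fp = np.sum((1.0 - y_true) * y_pred)         # soft false positives
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1.0 - tversky) ** gamma
```

A perfect prediction drives the Tversky index to 1 and the loss to 0; heavily missed foreground pushes the loss toward 1.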
Affiliation(s)
- Vahid Ghodrati
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA.,Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, California, USA
| | - Yair Rivenson
- Electrical and Computer Engineering Department, University of California, Los Angeles, California, USA
| | - Ashley Prosper
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
| | - Kevin de Haan
- Electrical and Computer Engineering Department, University of California, Los Angeles, California, USA
| | - Fadil Ali
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA.,Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, California, USA
| | - Takegawa Yoshida
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
| | - Arash Bedayat
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
| | - Kim-Lien Nguyen
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA.,Department of Medicine (Cardiology), David Geffen School of Medicine, University of California, Los Angeles, California, USA
| | - J Paul Finn
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
| | - Peng Hu
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA.,Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, California, USA
522
Shur JD, Doran SJ, Kumar S, Ap Dafydd D, Downey K, O'Connor JPB, Papanikolaou N, Messiou C, Koh DM, Orton MR. Radiomics in Oncology: A Practical Guide. Radiographics 2021; 41:1717-1732. [PMID: 34597235 PMCID: PMC8501897 DOI: 10.1148/rg.2021210037] [Citation(s) in RCA: 166] [Impact Index Per Article: 41.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023]
Abstract
Radiomics refers to the extraction of mineable data from medical imaging and has been applied within oncology to improve diagnosis, prognostication, and clinical decision support, with the goal of delivering precision medicine. The authors provide a practical approach for successfully implementing a radiomic workflow from planning and conceptualization through manuscript writing. Applications in oncology typically are either classification tasks that involve computing the probability of a sample belonging to a category, such as benign versus malignant, or prediction of clinical events with a time-to-event analysis, such as overall survival. The radiomic workflow is multidisciplinary, involving radiologists and data and imaging scientists, and follows a stepwise process involving tumor segmentation, image preprocessing, feature extraction, model development, and validation. Images are curated and processed before segmentation, which can be performed on tumors, tumor subregions, or peritumoral zones. Extracted features typically describe the distribution of signal intensities and spatial relationship of pixels within a region of interest. To improve model performance and reduce overfitting, redundant and nonreproducible features are removed. Validation is essential to estimate model performance in new data and can be performed iteratively on samples of the dataset (cross-validation) or on a separate hold-out dataset by using internal or external data. A variety of noncommercial and commercial radiomic software applications can be used. Guidelines and artificial intelligence checklists are useful when planning and writing up radiomic studies. Although interest in the field continues to grow, radiologists should be familiar with potential pitfalls to ensure that meaningful conclusions can be drawn. Online supplemental material is available for this article. Published under a CC BY 4.0 license.
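The feature-extraction step ("distribution of signal intensities... within a region of interest") can be illustrated with a few toy first-order features. The bin count and feature set here are arbitrary choices for illustration; real studies use standardized, validated radiomics software with far larger feature sets.

```python
import numpy as np

def first_order_features(image, mask):
    """Toy first-order radiomic features: statistics of the signal-intensity
    distribution inside a region of interest defined by a binary mask."""
    roi = image[mask > 0].astype(np.float64)
    hist, _ = np.histogram(roi, bins=32)       # 32 bins is an arbitrary choice
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return {
        "mean": float(roi.mean()),
        "std": float(roi.std()),
        "skewness": float(((roi - roi.mean()) ** 3).mean()
                          / (roi.std() ** 3 + 1e-12)),
        "entropy": float(entropy),
    }
```

Features like these are then screened for redundancy and reproducibility before model development, as the workflow above describes.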
Affiliation(s)
- Joshua D Shur
- From the Department of Radiology, Royal Marsden Hospital NHS Foundation Trust, Sutton, England (J.D.S., D.a.D., K.D., N. P., C.M., D.M.K.); Institute of Cancer Research, 15 Cotswold Road, Sutton SM2 5NG, England (S.J.D., S.K., J.P.B.O., N. P., C.M., D.M.K., M.R.O.); and Computational Clinical Imaging Group, Champalimaud Foundation, Centre for the Unknown, Lisbon, Portugal (N.P.)
| | - Simon J Doran
- From the Department of Radiology, Royal Marsden Hospital NHS Foundation Trust, Sutton, England (J.D.S., D.a.D., K.D., N. P., C.M., D.M.K.); Institute of Cancer Research, 15 Cotswold Road, Sutton SM2 5NG, England (S.J.D., S.K., J.P.B.O., N. P., C.M., D.M.K., M.R.O.); and Computational Clinical Imaging Group, Champalimaud Foundation, Centre for the Unknown, Lisbon, Portugal (N.P.)
| | - Santosh Kumar
- From the Department of Radiology, Royal Marsden Hospital NHS Foundation Trust, Sutton, England (J.D.S., D.a.D., K.D., N. P., C.M., D.M.K.); Institute of Cancer Research, 15 Cotswold Road, Sutton SM2 5NG, England (S.J.D., S.K., J.P.B.O., N. P., C.M., D.M.K., M.R.O.); and Computational Clinical Imaging Group, Champalimaud Foundation, Centre for the Unknown, Lisbon, Portugal (N.P.)
| | - Derfel Ap Dafydd
- From the Department of Radiology, Royal Marsden Hospital NHS Foundation Trust, Sutton, England (J.D.S., D.a.D., K.D., N. P., C.M., D.M.K.); Institute of Cancer Research, 15 Cotswold Road, Sutton SM2 5NG, England (S.J.D., S.K., J.P.B.O., N. P., C.M., D.M.K., M.R.O.); and Computational Clinical Imaging Group, Champalimaud Foundation, Centre for the Unknown, Lisbon, Portugal (N.P.)
| | - Kate Downey
- From the Department of Radiology, Royal Marsden Hospital NHS Foundation Trust, Sutton, England (J.D.S., D.a.D., K.D., N. P., C.M., D.M.K.); Institute of Cancer Research, 15 Cotswold Road, Sutton SM2 5NG, England (S.J.D., S.K., J.P.B.O., N. P., C.M., D.M.K., M.R.O.); and Computational Clinical Imaging Group, Champalimaud Foundation, Centre for the Unknown, Lisbon, Portugal (N.P.)
| | - James P B O'Connor
- From the Department of Radiology, Royal Marsden Hospital NHS Foundation Trust, Sutton, England (J.D.S., D.a.D., K.D., N. P., C.M., D.M.K.); Institute of Cancer Research, 15 Cotswold Road, Sutton SM2 5NG, England (S.J.D., S.K., J.P.B.O., N. P., C.M., D.M.K., M.R.O.); and Computational Clinical Imaging Group, Champalimaud Foundation, Centre for the Unknown, Lisbon, Portugal (N.P.)
| | - Nikolaos Papanikolaou
- From the Department of Radiology, Royal Marsden Hospital NHS Foundation Trust, Sutton, England (J.D.S., D.a.D., K.D., N. P., C.M., D.M.K.); Institute of Cancer Research, 15 Cotswold Road, Sutton SM2 5NG, England (S.J.D., S.K., J.P.B.O., N. P., C.M., D.M.K., M.R.O.); and Computational Clinical Imaging Group, Champalimaud Foundation, Centre for the Unknown, Lisbon, Portugal (N.P.)
| | - Christina Messiou
- From the Department of Radiology, Royal Marsden Hospital NHS Foundation Trust, Sutton, England (J.D.S., D.a.D., K.D., N. P., C.M., D.M.K.); Institute of Cancer Research, 15 Cotswold Road, Sutton SM2 5NG, England (S.J.D., S.K., J.P.B.O., N. P., C.M., D.M.K., M.R.O.); and Computational Clinical Imaging Group, Champalimaud Foundation, Centre for the Unknown, Lisbon, Portugal (N.P.)
| | - Dow-Mu Koh
- From the Department of Radiology, Royal Marsden Hospital NHS Foundation Trust, Sutton, England (J.D.S., D.a.D., K.D., N. P., C.M., D.M.K.); Institute of Cancer Research, 15 Cotswold Road, Sutton SM2 5NG, England (S.J.D., S.K., J.P.B.O., N. P., C.M., D.M.K., M.R.O.); and Computational Clinical Imaging Group, Champalimaud Foundation, Centre for the Unknown, Lisbon, Portugal (N.P.)
| | - Matthew R Orton
- From the Department of Radiology, Royal Marsden Hospital NHS Foundation Trust, Sutton, England (J.D.S., D.a.D., K.D., N. P., C.M., D.M.K.); Institute of Cancer Research, 15 Cotswold Road, Sutton SM2 5NG, England (S.J.D., S.K., J.P.B.O., N. P., C.M., D.M.K., M.R.O.); and Computational Clinical Imaging Group, Champalimaud Foundation, Centre for the Unknown, Lisbon, Portugal (N.P.)
523
Heil BJ, Hoffman MM, Markowetz F, Lee SI, Greene CS, Hicks SC. Reproducibility standards for machine learning in the life sciences. Nat Methods 2021; 18:1132-1135. [PMID: 34462593 PMCID: PMC9131851 DOI: 10.1038/s41592-021-01256-7] [Citation(s) in RCA: 71] [Impact Index Per Article: 17.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
To make machine learning analyses in the life sciences more computationally reproducible, we propose standards based on data, model, and code publication, programming best practices, and workflow automation. By meeting these standards, the community of researchers applying machine learning methods in the life sciences can ensure that their analyses are worthy of trust.
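A minimal sketch of what such standards can look like in code: recording a fixed seed, the runtime version, and a content hash of the input data so a rerun can be checked against the original. The field names and scope are illustrative; the proposed standards go further (pinned package versions, archived code and models, automated workflows).

```python
import hashlib
import platform
import random

def reproducibility_record(data_bytes, seed=42):
    """Minimal provenance metadata for a reproducible ML analysis.
    Logs the random seed, the interpreter version, and a SHA-256 hash of
    the input data so that reruns can verify they start from the same state."""
    random.seed(seed)  # fix stochastic steps so reruns match
    return {
        "seed": seed,
        "python": platform.python_version(),
        "data_sha256": hashlib.sha256(data_bytes).hexdigest(),
    }
```

Publishing such a record alongside the data, model, and code is one concrete way to make an analysis independently verifiable.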
Affiliation(s)
- Benjamin J Heil
- Genomics and Computational Biology Graduate Group, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
| | - Michael M Hoffman
- Princess Margaret Cancer Centre, University Health Network, Toronto, Ontario, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Vector Institute, Toronto, Ontario, Canada
| | - Florian Markowetz
- Cancer Research UK Cambridge Institute, University of Cambridge, Cambridge, UK
| | - Su-In Lee
- Paul G. Allen School of Computer Science and Engineering, University of Washington, Seattle, WA, USA
| | - Casey S Greene
- Department of Biochemistry and Molecular Genetics, University of Colorado School of Medicine, Aurora, CO, USA.
- Center for Health AI, University of Colorado School of Medicine, Aurora, CO, USA.
| | - Stephanie C Hicks
- Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA.
524
O'Shea RJ, Sharkey AR, Cook GJR, Goh V. Systematic review of research design and reporting of imaging studies applying convolutional neural networks for radiological cancer diagnosis. Eur Radiol 2021; 31:7969-7983. [PMID: 33860829 PMCID: PMC8452579 DOI: 10.1007/s00330-021-07881-2] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2020] [Revised: 02/24/2021] [Accepted: 03/12/2021] [Indexed: 11/05/2022]
Abstract
OBJECTIVES To perform a systematic review of design and reporting of imaging studies applying convolutional neural network models for radiological cancer diagnosis. METHODS A comprehensive search of PUBMED, EMBASE, MEDLINE and SCOPUS was performed for published studies applying convolutional neural network models to radiological cancer diagnosis from January 1, 2016, to August 1, 2020. Two independent reviewers measured compliance with the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). Compliance was defined as the proportion of applicable CLAIM items satisfied. RESULTS One hundred eighty-six of 655 screened studies were included. Many studies did not meet the criteria for current design and reporting guidelines. Twenty-seven percent of studies documented eligibility criteria for their data (50/186, 95% CI 21-34%), 31% reported demographics for their study population (58/186, 95% CI 25-39%) and 49% of studies assessed model performance on test data partitions (91/186, 95% CI 42-57%). Median CLAIM compliance was 0.40 (IQR 0.33-0.49). Compliance correlated positively with publication year (ρ = 0.15, p = .04) and journal H-index (ρ = 0.27, p < .001). Clinical journals demonstrated higher mean compliance than technical journals (0.44 vs. 0.37, p < .001). CONCLUSIONS Our findings highlight opportunities for improved design and reporting of convolutional neural network research for radiological cancer diagnosis. KEY POINTS • Imaging studies applying convolutional neural networks (CNNs) for cancer diagnosis frequently omit key clinical information including eligibility criteria and population demographics. • Fewer than half of imaging studies assessed model performance on explicitly unobserved test data partitions. • Design and reporting standards have improved in CNN research for radiological cancer diagnosis, though many opportunities remain for further progress.
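The reported interval "50/186, 95% CI 21-34%" is consistent with a Wilson score interval for a binomial proportion (whether the authors used exactly this method is an assumption); a minimal implementation:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a proportion, the kind of
    interval behind compliance figures such as '50/186, 95% CI 21-34%'.
    z = 1.96 gives the conventional 95% level."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half
```

For 50/186 this gives approximately 21.0-33.7%, matching the reported 21-34% after rounding.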
Affiliation(s)
- Robert J O'Shea
- Cancer Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, 5th floor, Becket House, 1 Lambeth Palace Road, London, SE1 7EU, UK.
| | - Amy Rose Sharkey
- Cancer Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, 5th floor, Becket House, 1 Lambeth Palace Road, London, SE1 7EU, UK
- Department of Radiology, Guy's & St Thomas' NHS Foundation Trust, London, UK
| | - Gary J R Cook
- Cancer Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, 5th floor, Becket House, 1 Lambeth Palace Road, London, SE1 7EU, UK
- King's College London & Guy's and St. Thomas' PET Centre, London, UK
| | - Vicky Goh
- Cancer Imaging, School of Biomedical Engineering and Imaging Sciences, King's College London, 5th floor, Becket House, 1 Lambeth Palace Road, London, SE1 7EU, UK
- Department of Radiology, Guy's & St Thomas' NHS Foundation Trust, London, UK
525
Orlhac F, Nioche C, Klyuzhin I, Rahmim A, Buvat I. Radiomics in PET Imaging: A Practical Guide for Newcomers. PET Clin 2021; 16:597-612. [PMID: 34537132 DOI: 10.1016/j.cpet.2021.06.007] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
Abstract
Radiomics has undergone considerable development in recent years. In PET imaging, very promising results concerning the ability of handcrafted features to predict the biological characteristics of lesions and to assess patient prognosis or response to treatment have been reported in the literature. This article presents a checklist for designing a reliable radiomic study, gives an overview of the steps of the pipeline, and outlines approaches for data harmonization. Tips are provided for critical reading of the content of articles. The advantages and limitations of handcrafted radiomics compared with deep-learning approaches for the characterization of PET images are also discussed.
Affiliation(s)
- Fanny Orlhac
- Institut Curie Centre de Recherche, Centre Universitaire, Bat 101B, Rue Henri Becquerel, CS 90030, 91401 Orsay Cedex, France.
| | - Christophe Nioche
- Institut Curie Centre de Recherche, Centre Universitaire, Bat 101B, Rue Henri Becquerel, CS 90030, 91401 Orsay Cedex, France
| | - Ivan Klyuzhin
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, BC V5Z 1L3, Canada; Department of Radiology, University of British Columbia, 675 West 10th Avenue, Vancouver, BC V5Z 1L3, Canada
| | - Arman Rahmim
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, BC V5Z 1L3, Canada; Department of Radiology, University of British Columbia, 675 West 10th Avenue, Vancouver, BC V5Z 1L3, Canada
| | - Irène Buvat
- Institut Curie Centre de Recherche, Centre Universitaire, Bat 101B, Rue Henri Becquerel, CS 90030, 91401 Orsay Cedex, France
526
El Naqa I, Boone JM, Benedict SH, Goodsitt MM, Chan HP, Drukker K, Hadjiiski L, Ruan D, Sahiner B. AI in medical physics: guidelines for publication. Med Phys 2021; 48:4711-4714. [PMID: 34545957 DOI: 10.1002/mp.15170] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Revised: 08/10/2021] [Accepted: 08/10/2021] [Indexed: 12/16/2022] Open
Abstract
The Abstract is intended to provide a concise summary of the study and its scientific findings. For AI/ML applications in medical physics, a problem statement and rationale for utilizing these algorithms are necessary while highlighting the novelty of the approach. A brief numerical description of how the data are partitioned into subsets for training of the AI/ML algorithm, validation (including tuning of parameters), and independent testing of algorithm performance is required. This is to be followed by a summary of the results and statistical metrics that quantify the performance of the AI/ML algorithm.
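The "brief numerical description" of data partitioning that these guidelines require can be produced by a deterministic patient-level split such as the sketch below; the 70/15/15 fractions and the fixed seed are illustrative assumptions, not values from the guidelines.

```python
import random

def partition(ids, frac_train=0.7, frac_val=0.15, seed=0):
    """Deterministic train/validation/test partition at patient level.
    Shuffling with a fixed seed makes the reported subset counts reproducible;
    the remainder after train and validation becomes the independent test set."""
    ids = list(ids)
    rng = random.Random(seed)
    rng.shuffle(ids)
    n_train = int(len(ids) * frac_train)
    n_val = int(len(ids) * frac_val)
    return (ids[:n_train],                      # training
            ids[n_train:n_train + n_val],       # validation / tuning
            ids[n_train + n_val:])              # independent testing
```

Reporting the three resulting counts (e.g., 70/15/15 patients per 100) is exactly the kind of numerical summary the guidelines ask an abstract to include.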
Affiliation(s)
- Issam El Naqa
- Machine Learning & Radiation Oncology, Moffitt Cancer Center, 12902 Magnolia Drive, Tampa, FL, 33612, USA
| | - John M Boone
- Department of Radiology, University of California Davis Health, Sacramento, CA, 95817, USA
| | - Stanley H Benedict
- Radiation Oncology, University of California Davis Health, Sacramento, CA, 95817, USA
| | - Mitchell M Goodsitt
- Department of Radiology, University of Michigan, 1500 E Medical Center Dr, Ann Arbor, MI, 48109, USA
| | - Heang-Ping Chan
- Department of Radiology, University of Michigan, 1500 E Medical Center Dr, Ann Arbor, MI, 48109, USA
| | - Karen Drukker
- Department of Radiology, University of Chicago, 5841 S. Maryland Ave, Chicago, IL, 60637, USA
| | - Lubomir Hadjiiski
- Department of Radiology, University of Michigan, 1500 E Medical Center Dr, Ann Arbor, MI, 48109, USA
| | - Dan Ruan
- Radiation Oncology, University of California Los Angeles School of Medicine, 200 UCLA Medical Plaza, Los Angeles, CA, 90095, USA
| | - Berkman Sahiner
- Food and Drug Administration, 10903 New Hampshire Ave., Silver Spring, MD, 20993, USA
527
Wagner MW, Namdar K, Biswas A, Monah S, Khalvati F, Ertl-Wagner BB. Radiomics, machine learning, and artificial intelligence-what the neuroradiologist needs to know. Neuroradiology 2021; 63:1957-1967. [PMID: 34537858 PMCID: PMC8449698 DOI: 10.1007/s00234-021-02813-9] [Citation(s) in RCA: 69] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2021] [Accepted: 09/09/2021] [Indexed: 01/04/2023]
Abstract
PURPOSE Artificial intelligence (AI) is playing an ever-increasing role in neuroradiology. METHODS When designing AI-based research in neuroradiology and appraising the literature, it is important to understand the fundamental principles of AI. Training, validation, and test datasets must be defined and set apart a priori. External validation and testing datasets are preferable, when feasible. The specific type of learning process (supervised vs. unsupervised) and the machine learning model also require definition. Deep learning (DL) is an AI-based approach that is modelled on the structure of neurons of the brain; convolutional neural networks (CNN) are a commonly used example in neuroradiology. RESULTS Radiomics is a frequently used approach in which a multitude of imaging features are extracted from a region of interest and subsequently reduced and selected to convey diagnostic or prognostic information. Deep radiomics uses CNNs to directly extract features and obviate the need for predefined features. CONCLUSION Common limitations and pitfalls in AI-based research in neuroradiology are limited sample sizes ("small-n-large-p problem"), selection bias, as well as overfitting and underfitting.
Affiliation(s)
- Matthias W Wagner
- Division of Neuroradiology, The Hospital for Sick Children, Toronto, Canada
- Department of Medical Imaging, University of Toronto, 555 University Ave, Toronto, ON, M5G 1X8, Canada
| | - Khashayar Namdar
- Neurosciences and Mental Health Program, SickKids Research Institute, Toronto, Canada
| | - Asthik Biswas
- Division of Neuroradiology, The Hospital for Sick Children, Toronto, Canada
- Department of Medical Imaging, University of Toronto, 555 University Ave, Toronto, ON, M5G 1X8, Canada
| | - Suranna Monah
- Division of Neuroradiology, The Hospital for Sick Children, Toronto, Canada
| | - Farzad Khalvati
- Neurosciences and Mental Health Program, SickKids Research Institute, Toronto, Canada
- Department of Medical Imaging, University of Toronto, 555 University Ave, Toronto, ON, M5G 1X8, Canada
| | - Birgit B Ertl-Wagner
- Division of Neuroradiology, The Hospital for Sick Children, Toronto, Canada.
- Department of Medical Imaging, University of Toronto, 555 University Ave, Toronto, ON, M5G 1X8, Canada.
528
Yousefirizi F, Jha AK, Brosch-Lenz J, Saboury B, Rahmim A. Toward High-Throughput Artificial Intelligence-Based Segmentation in Oncological PET Imaging. PET Clin 2021; 16:577-596. [PMID: 34537131 DOI: 10.1016/j.cpet.2021.06.001] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
Artificial intelligence (AI) techniques for image-based segmentation have garnered much attention in recent years. Convolutional neural networks have shown impressive results and potential toward fully automated segmentation in medical imaging, and particularly PET imaging. To cope with the limited access to annotated data needed in supervised AI methods, given tedious and prone-to-error manual delineations, semi-supervised and unsupervised AI techniques have also been explored for segmentation of tumors or normal organs in single- and bimodality scans. This work reviews existing AI techniques for segmentation tasks and the evaluation criteria for translational AI-based segmentation efforts toward routine adoption in clinical workflows.
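Evaluation of automated segmentations against reference delineations, as discussed in this review, commonly uses the Dice similarity coefficient. A minimal version is sketched below; the convention that two empty masks score 1.0 is one of several possible edge-case choices, not a statement from the article.

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|pred| + |truth|), ranging from 0 (no overlap)
    to 1 (perfect overlap)."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 1.0 if total == 0 else 2.0 * inter / total
```

Translational evaluation typically complements Dice with surface-distance and task-based metrics, since a single overlap number can hide clinically relevant boundary errors.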
Affiliation(s)
- Fereshteh Yousefirizi
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada.
| | - Abhinav K Jha
- Department of Biomedical Engineering, Washington University in St. Louis, St Louis, MO 63130, USA; Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, MO 63110, USA
| | - Julia Brosch-Lenz
- Department of Integrative Oncology, BC Cancer Research Institute, 675 West 10th Avenue, Vancouver, British Columbia V5Z 1L3, Canada
| | - Babak Saboury
- Department of Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, 9000 Rockville Pike, Bethesda, MD 20892, USA; Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD, USA; Department of Radiology, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
| | - Arman Rahmim
- Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada; Department of Physics, University of British Columbia, Vancouver, British Columbia, Canada; BC Cancer, BC Cancer Research Institute, 675 West 10th Avenue, Office 6-112, Vancouver, British Columbia V5Z 1L3, Canada
529
Radiomics and Machine Learning Can Differentiate Transient Osteoporosis from Avascular Necrosis of the Hip. Diagnostics (Basel) 2021; 11:diagnostics11091686. [PMID: 34574027 PMCID: PMC8468167 DOI: 10.3390/diagnostics11091686] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2021] [Revised: 09/12/2021] [Accepted: 09/14/2021] [Indexed: 02/07/2023] Open
Abstract
Differentiation between transient osteoporosis of the hip (TOH) and avascular necrosis (AVN) is a longstanding challenge in musculoskeletal radiology. The purpose of this study was to use MRI-based radiomics and machine learning (ML) to accurately differentiate between the two entities. A total of 109 hips with TOH and 104 hips with AVN were retrospectively included. Femoral heads and necks were segmented, and radiomics features were extracted. Three ML classifiers (XGboost, CatBoost, and SVM) using 38 relevant radiomics features were trained on 70% of the dataset and validated on the remaining 30%. ML performance was compared with that of two musculoskeletal (MSK) radiologists, a general radiologist, and two radiology residents. XGboost achieved the best performance among the ML models, with an area under the curve (AUC) of 93.7% (95% CI 87.7-99.8%). The MSK radiologists achieved AUCs of 90.6% (95% CI 86.7-94.5%) and 88.3% (95% CI 84-92.7%), respectively, similar to the residents. The general radiologist achieved an AUC of 84.5% (95% CI 80-89%), significantly lower than that of XGboost (p = 0.017). In conclusion, radiomics-based ML achieved performance similar to MSK radiologists and significantly higher than general radiologists in differentiating between TOH and AVN.
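The AUC values compared above can be computed with the Mann-Whitney formulation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. How the study computed its AUCs and confidence intervals is not stated, so this is a generic sketch, not the study's method.

```python
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as the Mann-Whitney U statistic normalized by the number of
    positive-negative pairs: wins count fully, ties count half."""
    pos = np.asarray(scores_pos, dtype=float)
    neg = np.asarray(scores_neg, dtype=float)
    wins = (pos[:, None] > neg[None, :]).sum()   # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()  # ties contribute 0.5 each
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

An AUC of 1.0 means every diseased case outscored every control; 0.5 is chance-level discrimination.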
530
Melo MCR, Maasch JRMA, de la Fuente-Nunez C. Accelerating antibiotic discovery through artificial intelligence. Commun Biol 2021; 4:1050. [PMID: 34504303 PMCID: PMC8429579 DOI: 10.1038/s42003-021-02586-0] [Citation(s) in RCA: 70] [Impact Index Per Article: 17.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Accepted: 07/16/2021] [Indexed: 02/07/2023] Open
Abstract
By targeting invasive organisms, antibiotics insert themselves into the ancient struggle of the host-pathogen evolutionary arms race. As pathogens evolve tactics for evading antibiotics, therapies decline in efficacy and must be replaced, distinguishing antibiotics from most other forms of drug development. Together with a slow and expensive antibiotic development pipeline, the proliferation of drug-resistant pathogens drives urgent interest in computational methods that promise to expedite candidate discovery. Strides in artificial intelligence (AI) have encouraged its application to multiple dimensions of computer-aided drug design, with increasing application to antibiotic discovery. This review describes AI-facilitated advances in the discovery of both small molecule antibiotics and antimicrobial peptides. Beyond the essential prediction of antimicrobial activity, emphasis is also given to antimicrobial compound representation, determination of drug-likeness traits, antimicrobial resistance, and de novo molecular design. Given the urgency of the antimicrobial resistance crisis, we analyze uptake of open science best practices in AI-driven antibiotic discovery and argue for openness and reproducibility as a means of accelerating preclinical research. Finally, trends in the literature and areas for future inquiry are discussed, as artificially intelligent enhancements to drug discovery at large offer many opportunities for future applications in antibiotic development.
Affiliation(s)
- Marcelo C R Melo
- Machine Biology Group, Departments of Psychiatry and Microbiology, Institute for Biomedical Informatics, Institute for Translational Medicine and Therapeutics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Departments of Bioengineering and Chemical and Biomolecular Engineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA
- Penn Institute for Computational Science, University of Pennsylvania, Philadelphia, PA, USA
| | - Jacqueline R M A Maasch
- Machine Biology Group, Departments of Psychiatry and Microbiology, Institute for Biomedical Informatics, Institute for Translational Medicine and Therapeutics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Departments of Bioengineering and Chemical and Biomolecular Engineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA
- Penn Institute for Computational Science, University of Pennsylvania, Philadelphia, PA, USA
- Department of Computer and Information Science, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA
| | - Cesar de la Fuente-Nunez
- Machine Biology Group, Departments of Psychiatry and Microbiology, Institute for Biomedical Informatics, Institute for Translational Medicine and Therapeutics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA.
- Departments of Bioengineering and Chemical and Biomolecular Engineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, USA.
- Penn Institute for Computational Science, University of Pennsylvania, Philadelphia, PA, USA.
531
Parkinson C, Matthams C, Foley K, Spezi E. Artificial intelligence in radiation oncology: A review of its current status and potential application for the radiotherapy workforce. Radiography (Lond) 2021; 27 Suppl 1:S63-S68. [PMID: 34493445 DOI: 10.1016/j.radi.2021.07.012] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2021] [Revised: 07/05/2021] [Accepted: 07/20/2021] [Indexed: 12/15/2022]
Abstract
OBJECTIVE Radiation oncology is a continually evolving speciality. With the development of new imaging modalities and advanced image processing techniques, there is an increasing amount of data available to practitioners. In this narrative review, Artificial Intelligence (AI) is used to refer to machine learning, and its potential, along with current problems in the field of radiation oncology, is considered from a technical position. KEY FINDINGS AI has the potential to harness the availability of data for improving patient outcomes, reducing toxicity, and easing clinical burdens. However, problems are apparent, including the complexity of the data required, undefined core outcomes and limited generalisability. CONCLUSION This review highlights considerations for the radiotherapy workforce, particularly therapeutic radiographers, as there will be an increasing requirement for their familiarity with AI due to their unique position as the interface between imaging technology and patients. IMPLICATIONS FOR PRACTICE Collaboration between AI experts and the radiotherapy workforce is required to overcome current issues before clinical adoption. The development of educational resources and standardised reporting of AI studies may help facilitate this.
Affiliation(s)
- C Parkinson
- School of Engineering, Cardiff University, UK.
- E Spezi
- School of Engineering, Cardiff University, UK
532
Olthof AW, van Ooijen PMA, Cornelissen LJ. Deep Learning-Based Natural Language Processing in Radiology: The Impact of Report Complexity, Disease Prevalence, Dataset Size, and Algorithm Type on Model Performance. J Med Syst 2021; 45:91. [PMID: 34480231 PMCID: PMC8416876 DOI: 10.1007/s10916-021-01761-4] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Accepted: 08/04/2021] [Indexed: 12/12/2022]
Abstract
In radiology, natural language processing (NLP) allows the extraction of valuable information from radiology reports. It can be used for various downstream tasks such as quality improvement, epidemiological research, and monitoring guideline adherence. Class imbalance, variation in dataset size, variation in report complexity, and algorithm type all influence NLP performance but have not yet been systematically and interrelatedly evaluated. In this study, we investigate the effect of these factors on the performance of four types [a fully connected neural network (Dense), a long short-term memory recurrent neural network (LSTM), a convolutional neural network (CNN), and a Bidirectional Encoder Representations from Transformers (BERT)] of deep learning-based NLP. Two datasets consisting of radiologist-annotated reports of both trauma radiographs (n = 2469) and chest radiographs and computed tomography (CT) studies (n = 2255) were split into training sets (80%) and testing sets (20%). The training data were used to train all four model types in 84 experiments (Fracture-data) and 45 experiments (Chest-data) with variation in size and prevalence. The performance was evaluated in terms of sensitivity, specificity, positive predictive value, negative predictive value, area under the curve, and F score. After the NLP of radiology reports, all four model architectures demonstrated high performance with metrics up to > 0.90. CNN, LSTM, and Dense were outperformed by the BERT algorithm because of its stable results despite variation in training size and prevalence. Awareness of variation in prevalence is warranted because it impacts sensitivity and specificity in opposite directions.
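The performance metrics named in the abstract above (sensitivity, specificity, positive and negative predictive value, and the F score) all derive from the four confusion-matrix counts. As a minimal illustrative sketch (the counts below are invented for the example, not taken from the study):

```python
def report_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute common binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # recall: fraction of true positives found
    specificity = tn / (tn + fp)   # fraction of true negatives found
    ppv = tp / (tp + fp)           # positive predictive value (precision)
    npv = tn / (tn + fn)           # negative predictive value
    f_score = 2 * ppv * sensitivity / (ppv + sensitivity)  # harmonic mean
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "f": f_score}

# Hypothetical counts for a report classifier evaluated on a 500-report test set.
m = report_metrics(tp=90, fp=10, fn=10, tn=390)
```

The opposite-direction effect of prevalence mentioned in the abstract follows directly from these formulas: changing the positive/negative class mix shifts the denominators of PPV and NPV even when sensitivity and specificity stay fixed.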
Affiliation(s)
- A W Olthof
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Hanzeplein 1, Groningen, The Netherlands. .,Treant Health Care Group, Department of Radiology, Dr G.H. Amshoffweg 1, Hoogeveen, The Netherlands. .,Hospital Group Twente (ZGT), Department of Radiology, Almelo, The Netherlands.
- P M A van Ooijen
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Hanzeplein 1, Groningen, The Netherlands.,Data Science Center in Health (DASH), University of Groningen, University Medical Center Groningen, Machine Learning Lab, L.J, Zielstraweg 2, Groningen, The Netherlands
- L J Cornelissen
- Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Hanzeplein 1, Groningen, The Netherlands.,COSMONiO Imaging BV, L.J, Zielstraweg 2, Groningen, The Netherlands
533
Bitencourt A, Daimiel Naranjo I, Lo Gullo R, Rossi Saccarelli C, Pinker K. AI-enhanced breast imaging: Where are we and where are we heading? Eur J Radiol 2021; 142:109882. [PMID: 34392105 PMCID: PMC8387447 DOI: 10.1016/j.ejrad.2021.109882] [Citation(s) in RCA: 37] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Revised: 07/15/2021] [Accepted: 07/26/2021] [Indexed: 12/22/2022]
Abstract
Significant advances in imaging analysis and the development of high-throughput methods that can extract and correlate multiple imaging parameters with different clinical outcomes have led to a new direction in medical research. Radiomics and artificial intelligence (AI) studies are rapidly evolving and have many potential applications in breast imaging, such as breast cancer risk prediction, lesion detection and classification, radiogenomics, and prediction of treatment response and clinical outcomes. AI has been applied to different breast imaging modalities, including mammography, ultrasound, and magnetic resonance imaging, in different clinical scenarios. The application of AI tools in breast imaging has an unprecedented opportunity to better derive clinical value from imaging data and reshape the way we care for our patients. The aim of this study is to review the current knowledge and future applications of AI-enhanced breast imaging in clinical practice.
Affiliation(s)
- Almir Bitencourt
- Department of Imaging, A.C.Camargo Cancer Center, Sao Paulo, SP, Brazil; Dasa, Sao Paulo, SP, Brazil
- Isaac Daimiel Naranjo
- Department of Radiology, Breast Imaging Service, Guy's and St. Thomas' NHS Trust, Great Maze Pond, London, UK
- Roberto Lo Gullo
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Katja Pinker
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY, USA.
534
Korot E, Gonçalves MB, Khan SM, Struyven R, Wagner SK, Keane PA. Clinician-driven artificial intelligence in ophthalmology: resources enabling democratization. Curr Opin Ophthalmol 2021; 32:445-451. [PMID: 34265784 DOI: 10.1097/icu.0000000000000785] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
PURPOSE OF REVIEW This article aims to discuss the current state of resources enabling the democratization of artificial intelligence (AI) in ophthalmology. RECENT FINDINGS Open datasets, efficient labeling techniques, code-free automated machine learning (AutoML) and cloud-based platforms for deployment are resources that enable clinicians with scarce resources to drive their own AI projects. SUMMARY Clinicians are the use-case experts who are best suited to drive AI projects tackling patient-relevant outcome measures. Taken together, open datasets, efficient labeling techniques, code-free AutoML and cloud platforms break the barriers for clinician-driven AI. As AI becomes increasingly democratized through such tools, clinicians and patients stand to benefit greatly.
Affiliation(s)
- Edward Korot
- Stanford University Byers Eye Institute, Palo Alto, California, USA
- Moorfields Eye Hospital, London, UK
- Mariana B Gonçalves
- Moorfields Eye Hospital, London, UK
- Federal University of São Paulo (UNIFESP)
- Vision Institute (IPEPO), Sao Paulo, Brazil
- Robbert Struyven
- Moorfields Eye Hospital, London, UK
- University College London, London, UK
535
536
Stanzione A, Verde F, Romeo V, Boccadifuoco F, Mainenti PP, Maurea S. Radiomics and machine learning applications in rectal cancer: Current update and future perspectives. World J Gastroenterol 2021; 27:5306-5321. [PMID: 34539134 PMCID: PMC8409167 DOI: 10.3748/wjg.v27.i32.5306] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 03/13/2021] [Accepted: 07/22/2021] [Indexed: 02/06/2023] Open
Abstract
The high incidence of rectal cancer in both sexes makes it one of the most common tumors, with significant morbidity and mortality rates. To define the best treatment option and optimize patient outcome, several rectal cancer biological variables must be evaluated. Currently, medical imaging plays a crucial role in the characterization of this disease, and it often requires a multimodal approach. Magnetic resonance imaging is the first-choice imaging modality for local staging and restaging and can be used to detect high-risk prognostic factors. Computed tomography is widely adopted for the detection of distant metastases. However, conventional imaging has recognized limitations, and many rectal cancer characteristics remain assessable only after surgery and histopathology evaluation. There is a growing interest in artificial intelligence applications in medicine, and imaging is by no means an exception. The introduction of radiomics, which permits the extraction of quantitative features that reflect tumor heterogeneity, allows the mining of data in medical images and has paved the way for the identification of potential new imaging biomarkers. To manage such a huge amount of data, the use of machine learning algorithms has been proposed. Indeed, without prior explicit programming, they can be employed to build prediction models to support clinical decision making. In this review, current applications and future perspectives of artificial intelligence in medical imaging of rectal cancer are presented, with an imaging modality-based approach and a keen eye on unsolved issues. The results are promising, but the road ahead for translation in clinical practice is rather long.
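As a toy illustration of the quantitative features radiomics extracts, the sketch below computes a few first-order statistics from the voxel intensities of a region of interest. The function name and the 32-bin discretization are arbitrary choices for this example, not the pipeline of any study cited here:

```python
import numpy as np

def first_order_features(roi: np.ndarray, bins: int = 32) -> dict:
    """A few first-order radiomic features from ROI voxel intensities.
    Intensity entropy is one simple proxy for tumor heterogeneity."""
    vals = roi.ravel().astype(float)
    hist, _ = np.histogram(vals, bins=bins)
    p = hist / hist.sum()          # discretized intensity distribution
    p = p[p > 0]                   # drop empty bins before taking logs
    return {
        "mean": float(vals.mean()),
        "std": float(vals.std()),
        "skewness": float(((vals - vals.mean()) ** 3).mean() / vals.std() ** 3),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

feats = first_order_features(np.array([[0.0, 1.0], [2.0, 3.0]]))
```

Real radiomics pipelines add shape and texture (e.g., gray-level co-occurrence) features on top of such first-order statistics.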
Affiliation(s)
- Arnaldo Stanzione
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples 80131, Italy
- Francesco Verde
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples 80131, Italy
- Valeria Romeo
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples 80131, Italy
- Francesca Boccadifuoco
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples 80131, Italy
- Pier Paolo Mainenti
- Institute of Biostructures and Bioimaging, National Council of Research, Napoli 80131, Italy
- Simone Maurea
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Naples 80131, Italy
537
Shelmerdine SC, Arthurs OJ, Denniston A, Sebire NJ. Review of study reporting guidelines for clinical studies using artificial intelligence in healthcare. BMJ Health Care Inform 2021; 28:bmjhci-2021-100385. [PMID: 34426417 PMCID: PMC8383863 DOI: 10.1136/bmjhci-2021-100385] [Citation(s) in RCA: 27] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2021] [Accepted: 08/09/2021] [Indexed: 02/07/2023] Open
Abstract
High-quality research is essential in guiding evidence-based care, and should be reported in a way that is reproducible, transparent and where appropriate, provide sufficient detail for inclusion in future meta-analyses. Reporting guidelines for various study designs have been widely used for clinical (and preclinical) studies, consisting of checklists with a minimum set of points for inclusion. With the recent rise in volume of research using artificial intelligence (AI), additional factors need to be evaluated, which do not neatly conform to traditional reporting guidelines (eg, details relating to technical algorithm development). In this review, reporting guidelines are highlighted to promote awareness of essential content required for studies evaluating AI interventions in healthcare. These include published and in progress extensions to well-known reporting guidelines such as Standard Protocol Items: Recommendations for Interventional Trials-AI (study protocols), Consolidated Standards of Reporting Trials-AI (randomised controlled trials), Standards for Reporting of Diagnostic Accuracy Studies-AI (diagnostic accuracy studies) and Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis-AI (prediction model studies). Additionally there are a number of guidelines that consider AI for health interventions more generally (eg, Checklist for Artificial Intelligence in Medical Imaging (CLAIM), minimum information (MI)-CLAIM, MI for Medical AI Reporting) or address a specific element such as the ‘learning curve’ (Developmental and Exploratory Clinical Investigation of Decision-AI). Economic evaluation of AI health interventions is not currently addressed, and may benefit from extension to an existing guideline. 
In the face of a rapid influx of studies of AI health interventions, reporting guidelines help ensure that investigators and those appraising studies consider both the well-recognised elements of good study design and reporting, while also adequately addressing new challenges posed by AI-specific elements.
Affiliation(s)
- Owen J Arthurs
- Radiology, Great Ormond Street Hospital NHS Foundation Trust, London, UK
- Alastair Denniston
- Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK
- Neil J Sebire
- Digital Research, Informatics and Virtual Environments Unit (DRIVE), London, UK
538
Gitto S, Cuocolo R, Emili I, Tofanelli L, Chianca V, Albano D, Messina C, Imbriaco M, Sconfienza LM. Effects of Interobserver Variability on 2D and 3D CT- and MRI-Based Texture Feature Reproducibility of Cartilaginous Bone Tumors. J Digit Imaging 2021; 34:820-832. [PMID: 34405298 PMCID: PMC8455795 DOI: 10.1007/s10278-021-00498-3] [Citation(s) in RCA: 33] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2020] [Revised: 05/27/2021] [Accepted: 07/19/2021] [Indexed: 12/13/2022] Open
Abstract
This study aims to investigate the influence of interobserver manual segmentation variability on the reproducibility of 2D and 3D unenhanced computed tomography (CT)- and magnetic resonance imaging (MRI)-based texture analysis. Thirty patients with cartilaginous bone tumors (10 enchondromas, 10 atypical cartilaginous tumors, 10 chondrosarcomas) were retrospectively included. Three radiologists independently performed manual contour-focused segmentation on unenhanced CT and T1-weighted and T2-weighted MRI by drawing both a 2D region of interest (ROI) on the slice showing the largest tumor area and a 3D ROI including the whole tumor volume. Additionally, a marginal erosion was applied to both 2D and 3D segmentations to evaluate the influence of segmentation margins. A total of 783 and 1132 features were extracted from original and filtered 2D and 3D images, respectively. Intraclass correlation coefficient ≥ 0.75 defined feature stability. In 2D vs. 3D contour-focused segmentation, the rates of stable features were 74.71% vs. 86.57% (p < 0.001), 77.14% vs. 80.04% (p = 0.142), and 95.66% vs. 94.97% (p = 0.554) for CT and T1-weighted and T2-weighted images, respectively. Margin shrinkage did not improve 2D (p = 0.343) and performed worse than 3D (p < 0.001) contour-focused segmentation in terms of feature stability. In 2D vs. 3D contour-focused segmentation, matching stable features derived from CT and MRI were 65.8% vs. 68.7% (p = 0.191), and those derived from T1-weighted and T2-weighted images were 76.0% vs. 78.2% (p = 0.285). 2D and 3D radiomic features of cartilaginous bone tumors extracted from unenhanced CT and MRI are reproducible, although some degree of interobserver segmentation variability highlights the need for reliability analysis in future studies.
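The stability criterion used above (intraclass correlation coefficient ≥ 0.75 across readers) can be made concrete with a small sketch. Assuming the common two-way random effects, absolute agreement, single-rater form ICC(2,1) — the abstract does not state which ICC variant was used — a plain NumPy implementation looks like:

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.
    `ratings` is an (n subjects) x (k raters) array of feature values."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    ssr = k * ((row_means - grand) ** 2).sum()   # between-subject sum of squares
    ssc = n * ((col_means - grand) ** 2).sum()   # between-rater sum of squares
    sse = ((ratings - grand) ** 2).sum() - ssr - ssc  # residual sum of squares
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Three hypothetical raters segmenting five lesions: perfect agreement vs.
# one rater with a systematic offset.
perfect = np.tile(np.array([[1.0], [2.0], [3.0], [4.0], [5.0]]), (1, 3))
shifted = perfect.copy()
shifted[:, 2] += 2.0
```

A feature would then be called "stable" when `icc2_1(values) >= 0.75`, mirroring the cutoff in the study.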
Affiliation(s)
- Salvatore Gitto
- Dipartimento Di Scienze Biomediche Per La Salute, Università Degli Studi Di Milano, Via Luigi Mangiagalli 31, 20133, Milan, Italy.
- Renato Cuocolo
- Dipartimento Di Medicina Clinica E Chirurgia, Università Degli Studi Di Napoli "Federico II", Naples, Italy.,Laboratory of Augmented Reality for Health Monitoring (ARHeMLab), Dipartimento Di Ingegneria Elettrica E Delle Tecnologie Dell'Informazione, Università Degli Studi Di Napoli "Federico II", Naples, Italy
- Ilaria Emili
- Unità di Radiodiagnostica, Presidio CTO, ASST Pini-CTO, Milan, Italy
- Laura Tofanelli
- Dipartimento di Radiologia Diagnostica ed Interventistica, Università degli Studi di Milano, Ospedale San Paolo, Milan, Italy
- Vito Chianca
- Ospedale Evangelico Betania, Naples, Italy.,Clinica Di Radiologia, Istituto Imaging Della Svizzera Italiana - Ente Ospedaliero Cantonale, Lugano, Switzerland
- Domenico Albano
- IRCCS Istituto Ortopedico Galeazzi, Milan, Italy.,Sezione Di Scienze Radiologiche, Dipartimento Di Biomedicina, Neuroscienze E Diagnostica Avanzata, Università Degli Studi Di Palermo, Palermo, Italy
- Massimo Imbriaco
- Dipartimento Di Scienze Biomediche Avanzate, Università Degli Studi Di Napoli "Federico II", Naples, Italy
- Luca Maria Sconfienza
- Dipartimento Di Scienze Biomediche Per La Salute, Università Degli Studi Di Milano, Via Luigi Mangiagalli 31, 20133, Milan, Italy.,IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
539
Artificial intelligence for the next generation of precision oncology. NPJ Precis Oncol 2021; 5:79. [PMID: 34408248 PMCID: PMC8373978 DOI: 10.1038/s41698-021-00216-w] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Accepted: 07/21/2021] [Indexed: 12/14/2022] Open
540
Li K, Zhang R, Cai W. Deep learning convolutional neural network (DLCNN): unleashing the potential of 18F-FDG PET/CT in lymphoma. AMERICAN JOURNAL OF NUCLEAR MEDICINE AND MOLECULAR IMAGING 2021; 11:327-331. [PMID: 34513286 PMCID: PMC8414402] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Accepted: 06/07/2021] [Indexed: 06/13/2023]
Abstract
This perspective briefly reviewed the applications of 18F-FDG PET/CT in the clinical management of lymphoma and the need for lesion segmentation in those applications. It discussed the limitations of existing segmentation technologies and the great potential of using deep learning convolutional neural network (DLCNN) to accomplish automatic lymphoma segmentation and characterizations. Finally, the authors shared perspectives on the technical challenges that need to be addressed to fully unleash the potential of DLCNN and 18F-FDG PET/CT in the diagnosis, prognosis, and treatment of lymphoma.
Affiliation(s)
- Ke Li
- Department of Medical Physics, University of Wisconsin-Madison1111 Highland Avenue, Madison, WI, USA
- Department of Radiology, University of Wisconsin-Madison600 Highland Avenue, Madison, WI, USA
- University of Wisconsin Carbone Cancer Center600 Highland Avenue, Madison, WI, USA
- Ran Zhang
- Department of Medical Physics, University of Wisconsin-Madison1111 Highland Avenue, Madison, WI, USA
- Weibo Cai
- Department of Medical Physics, University of Wisconsin-Madison1111 Highland Avenue, Madison, WI, USA
- Department of Radiology, University of Wisconsin-Madison600 Highland Avenue, Madison, WI, USA
- University of Wisconsin Carbone Cancer Center600 Highland Avenue, Madison, WI, USA
541
Si L, Zhong J, Huo J, Xuan K, Zhuang Z, Hu Y, Wang Q, Zhang H, Yao W. Deep learning in knee imaging: a systematic review utilizing a Checklist for Artificial Intelligence in Medical Imaging (CLAIM). Eur Radiol 2021; 32:1353-1361. [PMID: 34347157 DOI: 10.1007/s00330-021-08190-4] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2021] [Revised: 04/15/2021] [Accepted: 05/07/2021] [Indexed: 12/11/2022]
Abstract
PURPOSE Our purposes were (1) to explore the methodologic quality of studies on deep learning in knee imaging using the CLAIM criteria and (2) to offer our vision for the development of CLAIM to assure high-quality reports about the application of AI to medical imaging of the knee joint. MATERIALS AND METHODS A systematic review based on the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) was conducted from January 1, 2015, to June 1, 2020, using the PubMed, EMBASE, and Web of Science databases. A total of 36 articles discussing deep learning applications in knee joint imaging were identified, divided by imaging modality, and characterized by imaging task, data source, algorithm type, and outcome metrics. RESULTS The 36 identified studies were divided into X-ray (44.44%) and MRI (55.56%) studies. The mean CLAIM score of the 36 studies was 27.94 (standard deviation, 4.26), which was 66.53% of the ideal score of 42.00. The CLAIM items achieved good average inter-rater agreement (ICC 0.815, 95% CI 0.660-0.902). In total, 32 studies performed internal cross-validation on the data set, while only 4 studies conducted external validation of the data set. CONCLUSIONS The overall scientific quality of deep learning studies in knee imaging is insufficient; however, deep learning remains a promising technology for diagnostic or predictive purposes. Improvements in study design, validation, and open science need to be made to demonstrate the generalizability of findings and to achieve clinical applications. Widespread application, a pre-trained scoring procedure, and modification of CLAIM in response to clinical needs are necessary in the future. KEY POINTS • Few deep learning studies have been conducted in knee imaging; they achieved a mean CLAIM score of 27.94, 66.53% of the ideal score of 42.00, commonly due to unvalidated results, retrospective study designs, and the absence of a clear, detailed definition of the CLAIM items.
• A previously trained data extraction instrument allowed moderate inter-rater agreement to be reached in the application of the CLAIM, while CLAIM still needs improvement in its scoring items and result reporting to become a widely adopted tool in reviews of deep learning studies.
Affiliation(s)
- Liping Si
- Department of Imaging, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, No. 1111 Xianxia Road, Changning District, Shanghai, 200336, China
- Jingyu Zhong
- Department of Imaging, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, No. 1111 Xianxia Road, Changning District, Shanghai, 200336, China
- Jiayu Huo
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Huashan Road #1954, Shanghai, 200030, China
- Kai Xuan
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Huashan Road #1954, Shanghai, 200030, China
- Zixu Zhuang
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Huashan Road #1954, Shanghai, 200030, China
- Yangfan Hu
- Department of Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, 200233, China
- Qian Wang
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Huashan Road #1954, Shanghai, 200030, China.
- Huan Zhang
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Weiwu Yao
- Department of Imaging, Tongren Hospital, Shanghai Jiao Tong University School of Medicine, No. 1111 Xianxia Road, Changning District, Shanghai, 200336, China.
542
Sabottke CF, Spieler BM, Moawad AW, Elsayes KM. Artificial Intelligence in Imaging of Chronic Liver Diseases: Current Update and Future Perspectives. Magn Reson Imaging Clin N Am 2021; 29:451-463. [PMID: 34243929 DOI: 10.1016/j.mric.2021.05.011] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Here we review artificial intelligence (AI) models which aim to assess various aspects of chronic liver disease. Despite the clinical importance of hepatocellular carcinoma in the setting of chronic liver disease, we focus this review on AI models which are not lesion-specific and instead review models developed for liver parenchyma segmentation, evaluation of portal circulation, assessment of hepatic fibrosis, and identification of hepatic steatosis. Optimization of these models offers the opportunity to potentially reduce the need for invasive procedures such as catheterization to measure hepatic venous pressure gradient or biopsy to assess fibrosis and steatosis. We compare the performance of these AI models amongst themselves as well as to radiomics approaches and alternate modality assessments. We conclude that these models show promising performance and merit larger-scale evaluation.
Affiliation(s)
- Carl F Sabottke
- Department of Medical Imaging, University of Arizona College of Medicine, 1501 N. Campbell, P.O. Box 245067, Tucson, AZ 85724-5067, USA.
- Bradley M Spieler
- Department of Radiology, Louisiana State University Health Sciences Center, 1542 Tulane Avenue, Rm 343, New Orleans, LA 70112, USA
- Ahmed W Moawad
- Department of Imaging Physics, The University of Texas, MD Anderson Cancer Center, Unit 1472, P.O. Box 301402, Houston, TX 77230-1402, USA
- Khaled M Elsayes
- Department of Abdominal Imaging, The University of Texas, MD Anderson Cancer Center, 1400 Pressler St, Houston, TX 77030, USA
543
Chaudhari AS, Sandino CM, Cole EK, Larson DB, Gold GE, Vasanawala SS, Lungren MP, Hargreaves BA, Langlotz CP. Prospective Deployment of Deep Learning in MRI: A Framework for Important Considerations, Challenges, and Recommendations for Best Practices. J Magn Reson Imaging 2021; 54:357-371. [PMID: 32830874 PMCID: PMC8639049 DOI: 10.1002/jmri.27331] [Citation(s) in RCA: 44] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2020] [Revised: 07/27/2020] [Accepted: 07/31/2020] [Indexed: 12/16/2022] Open
Abstract
Artificial intelligence algorithms based on principles of deep learning (DL) have made a large impact on the acquisition, reconstruction, and interpretation of MRI data. Despite the large number of retrospective studies using DL, there are fewer applications of DL in the clinic on a routine basis. To address this large translational gap, we review the recent publications to determine three major use cases that DL can have in MRI, namely, that of model-free image synthesis, model-based image reconstruction, and image or pixel-level classification. For each of these three areas, we provide a framework for important considerations that consist of appropriate model training paradigms, evaluation of model robustness, downstream clinical utility, opportunities for future advances, as well as recommendations for best current practices. We draw inspiration for this framework from advances in computer vision in natural imaging as well as additional healthcare fields. We further emphasize the need for reproducibility of research studies through the sharing of datasets and software. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY STAGE: 2.
Affiliation(s)
- Christopher M Sandino
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Elizabeth K Cole
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- David B Larson
- Department of Radiology, Stanford University, Stanford, California, USA
- Garry E Gold
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Orthopaedic Surgery, Stanford University, Stanford, California, USA
- Department of Bioengineering, Stanford University, Stanford, California, USA
- Matthew P Lungren
- Department of Radiology, Stanford University, Stanford, California, USA
- Brian A Hargreaves
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Department of Biomedical Informatics, Stanford University, Stanford, California, USA
- Curtis P Langlotz
- Department of Radiology, Stanford University, Stanford, California, USA
- Department of Biomedical Informatics, Stanford University, Stanford, California, USA
544
Dournes G, Hall CS, Willmering MM, Brody AS, Macey J, Bui S, Denis-De-Senneville B, Berger P, Laurent F, Benlala I, Woods JC. Artificial intelligence in CT for quantifying lung changes in the era of CFTR modulators. Eur Respir J 2021; 59:13993003.00844-2021. [PMID: 34266943 DOI: 10.1183/13993003.00844-2021] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Accepted: 07/02/2021] [Indexed: 11/05/2022]
Abstract
RATIONALE Chest computed tomography (CT) remains the imaging standard for demonstrating cystic fibrosis airway structural disease in vivo. However, visual scorings as an outcome measure are time-consuming, require training, and lack high reproducibility. OBJECTIVE To validate a fully automated artificial intelligence-driven scoring of cystic fibrosis lung disease severity. METHODS Data were retrospectively collected in three cystic fibrosis reference centers, between 2008 and 2020, in 184 patients 4 to 54-years-old. An algorithm using three two-dimensional convolutional neural networks was trained with 78 patients' CTs (23 530 CT slices) for the semantic labeling of bronchiectasis, peribronchial thickening, bronchial mucus, bronchiolar mucus, and collapse/consolidation. 36 patients' CTs (11 435 CT slices) were used for testing versus ground-truth labels. The method's clinical validity was assessed in an independent group of 70 patients with or without lumacaftor/ivacaftor treatment (n=10 and 60, respectively) with repeat examinations. Similarity and reproducibility were assessed using Dice coefficient, correlations using Spearman test, and paired comparisons using Wilcoxon rank test. MEASUREMENT AND MAIN RESULTS The overall pixelwise similarity of artificial intelligence-driven versus ground-truth labels was good (Dice coefficient=0.71). All artificial intelligence-driven volumetric quantifications had moderate to very good correlations to a visual imaging scoring (p<0.001) and fair to good correlations to FEV1% at pulmonary function test (p<0.001). Significant decreases in peribronchial thickening (p=0.005), bronchial mucus (p=0.005), bronchiolar mucus (p=0.007) volumes were measured in patients with lumacaftor/ivacaftor. Conversely, bronchiectasis (p=0.002) and peribronchial thickening (p=0.008) volumes increased in patients without lumacaftor/ivacaftor. The reproducibility was almost perfect (Dice>0.99). 
CONCLUSION Artificial intelligence allows a fully automated volumetric quantification of cystic fibrosis-related modifications over an entire lung. The novel scoring system could provide a robust disease outcome in the era of effective CFTR modulator therapy.
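The Dice coefficient used in this study for both pixelwise similarity (0.71) and reproducibility (>0.99) is a simple overlap ratio between two label masks. A minimal NumPy sketch of the metric; the function name and the toy masks are illustrative, not from the paper:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Pixelwise Dice similarity between two binary label masks: 2|A∩B| / (|A|+|B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Toy masks: 3 of the 4 labeled pixels in each mask overlap.
ai_mask    = np.array([[1, 1, 0], [1, 1, 0]])
truth_mask = np.array([[1, 1, 0], [0, 1, 1]])
print(round(dice_coefficient(ai_mask, truth_mask), 2))  # → 0.75
```

A Dice of 1.0 means identical masks, so the paper's >0.99 repeat-examination value indicates near-identical automated segmentations.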
Collapse
Affiliation(s)
- Gael Dournes
- Univ. Bordeaux, INSERM, Centre de Recherche Cardio-Thoracique de Bordeaux, U1045, CIC 1401, Bordeaux, France.,CHU Bordeaux, Service d'Imagerie Thoracique et Cardiovasculaire, Service des Maladies Respiratoires, Service d'Exploration Fonctionnelle Respiratoire, CIC 1401, Pessac, France.,Both authors contributed equally to the study
| | - Chase S Hall
- Division of Pulmonary, Critical Care and Sleep Medicine, Department of Internal Medicine, University of Kansas School of Medicine, Kansas City, Kansas, United States of America.,Both authors contributed equally to the study
| | - Matthew M Willmering
- Division of Pulmonary Medicine and Department of Radiology, Center for Pulmonary Imaging Research, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, United States of America
| | - Alan S Brody
- Division of Pulmonary Medicine and Department of Radiology, Center for Pulmonary Imaging Research, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, United States of America
| | - Julie Macey
- CHU Bordeaux, Service d'Imagerie Thoracique et Cardiovasculaire, Service des Maladies Respiratoires, Service d'Exploration Fonctionnelle Respiratoire, CIC 1401, Pessac, France
| | - Stephanie Bui
- Bordeaux University Hospital, Hôpital Pellegrin-Enfants, paediatric Cystic Fibrosis Reference Center (CRCM), Centre d'Investigation Clinique (CIC 1401), Bordeaux, France
| | | | - Patrick Berger
- Univ. Bordeaux, INSERM, Centre de Recherche Cardio-Thoracique de Bordeaux, U1045, CIC 1401, Bordeaux, France.,CHU Bordeaux, Service d'Imagerie Thoracique et Cardiovasculaire, Service des Maladies Respiratoires, Service d'Exploration Fonctionnelle Respiratoire, CIC 1401, Pessac, France
| | - François Laurent
- Univ. Bordeaux, INSERM, Centre de Recherche Cardio-Thoracique de Bordeaux, U1045, CIC 1401, Bordeaux, France.,CHU Bordeaux, Service d'Imagerie Thoracique et Cardiovasculaire, Service des Maladies Respiratoires, Service d'Exploration Fonctionnelle Respiratoire, CIC 1401, Pessac, France
| | - Ilyes Benlala
- Univ. Bordeaux, INSERM, Centre de Recherche Cardio-Thoracique de Bordeaux, U1045, CIC 1401, Bordeaux, France.,CHU Bordeaux, Service d'Imagerie Thoracique et Cardiovasculaire, Service des Maladies Respiratoires, Service d'Exploration Fonctionnelle Respiratoire, CIC 1401, Pessac, France
| | - Jason C Woods
- Division of Pulmonary Medicine and Department of Radiology, Center for Pulmonary Imaging Research, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, United States of America.,Department of Pediatrics, College of Medicine, University of Cincinnati, Cincinnati, Ohio, United States of America
| |
Collapse
|
545
|
Hickman SE, Baxter GC, Gilbert FJ. Adoption of artificial intelligence in breast imaging: evaluation, ethical constraints and limitations. Br J Cancer 2021; 125:15-22. [PMID: 33772149 PMCID: PMC8257639 DOI: 10.1038/s41416-021-01333-w] [Citation(s) in RCA: 47] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2020] [Revised: 02/15/2021] [Accepted: 02/24/2021] [Indexed: 02/07/2023] Open
Abstract
Retrospective studies have shown that artificial intelligence (AI) algorithms can match, and even enhance, radiologists' performance in breast screening. These tools can facilitate tasks not feasible by humans, such as the automatic triage of patients and the prediction of treatment outcomes. Breast imaging faces growing pressure with the exponential growth in imaging requests and a predicted reduced workforce to provide reports. Solutions to alleviate these pressures are being sought, with increasing interest in the adoption of AI to improve workflow efficiency as well as patient outcomes. Vast quantities of data are needed to test and monitor AI algorithms before and after their incorporation into healthcare systems. Availability of data is currently limited, although strategies are being devised to harness the data that already exist within healthcare institutions. The challenges that underpin the realisation of AI in everyday breast imaging should not be underestimated, and the provision of guidance from national agencies to tackle these challenges, taking into account views from societal, industrial and healthcare perspectives, is essential. This review provides background on the evaluation and use of AI in breast imaging, in addition to exploring key ethical, technical, legal and regulatory challenges that have been identified so far.
Collapse
Affiliation(s)
- Sarah E Hickman
- Department of Radiology, University of Cambridge School of Clinical Medicine, Cambridge, UK
| | - Gabrielle C Baxter
- Department of Radiology, University of Cambridge School of Clinical Medicine, Cambridge, UK
| | - Fiona J Gilbert
- Department of Radiology, University of Cambridge School of Clinical Medicine, Cambridge, UK.
- Department of Radiology, Addenbrookes Hospital, Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK.
| |
Collapse
|
546
|
Driggs D, Selby I, Roberts M, Gkrania-Klotsas E, Rudd JHF, Yang G, Babar J, Sala E, Schönlieb CB. Machine Learning for COVID-19 Diagnosis and Prognostication: Lessons for Amplifying the Signal While Reducing the Noise. Radiol Artif Intell 2021; 3:e210011. [PMID: 34240059 PMCID: PMC7995449 DOI: 10.1148/ryai.2021210011] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/11/2021] [Revised: 03/01/2021] [Accepted: 03/03/2021] [Indexed: 12/19/2022]
Affiliation(s)
| | | | - Michael Roberts
- From the Department of Applied Mathematics and Theoretical Physics
(D.D., M.R., C.B.S.) and Division of Cardiovascular Medicine (J.H.F.R.),
University of Cambridge, Cambridge, England; Department of Radiology, School of
Clinical Medicine, University of Cambridge and CRUK Cambridge Centre, Cambridge
Biomedical Campus, Cambridge CB2 0QQ, England (I.S., J.B., E.S.);
Oncology R&D, AstraZeneca, Cambridge, England (M.R.); Department of
Infectious Diseases, University of Cambridge Hospitals, Cambridge, England
(E.G.K.); and National Heart and Lung Institute, Imperial College London,
London, England (G.Y.)
| | - Effrossyni Gkrania-Klotsas
| | - James H. F. Rudd
| | - Guang Yang
| | - Judith Babar
| | - Evis Sala
| | - Carola-Bibiane Schönlieb
| | - on behalf of the AIX-COVNET collaboration
| |
Collapse
|
547
|
Sanaat A, Shiri I, Arabi H, Mainta I, Nkoulou R, Zaidi H. Deep learning-assisted ultra-fast/low-dose whole-body PET/CT imaging. Eur J Nucl Med Mol Imaging 2021; 48:2405-2415. [PMID: 33495927 PMCID: PMC8241799 DOI: 10.1007/s00259-020-05167-1] [Citation(s) in RCA: 55] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2020] [Accepted: 12/15/2020] [Indexed: 12/21/2022]
Abstract
PURPOSE The current tendency is to reduce the injected activity and/or the acquisition time in PET examinations to minimize potential radiation hazards and increase patient comfort. This work aims to assess the performance of regular full-dose (FD) synthesis from fast/low-dose (LD) whole-body (WB) PET images using deep learning techniques. METHODS Instead of using synthetic LD scans, two separate clinical WB 18F-Fluorodeoxyglucose (18F-FDG) PET/CT studies of 100 patients were acquired: one regular FD (~ 27 min) and one fast or LD (~ 3 min) consisting of 1/8th of the standard acquisition time. Modified cycle-consistent generative adversarial network (CycleGAN) and residual neural network (ResNet) models, denoted as CGAN and RNET, respectively, were implemented to predict FD PET images. The quality of the predicted PET images was assessed by two nuclear medicine physicians. Moreover, the diagnostic quality of the predicted PET images was evaluated using a pass/fail scheme for the lesion detectability task. Quantitative analysis using established metrics, including standardized uptake value (SUV) bias, was performed for the liver, left/right lung, brain, and 400 malignant lesions from the test and evaluation datasets. RESULTS CGAN scored 4.92 and 3.88 (out of 5) (adequate to good) for brain and neck + trunk, respectively. The average SUV bias calculated over normal tissues was 3.39 ± 0.71% and - 3.83 ± 1.25% for CGAN and RNET, respectively. Bland-Altman analysis reported the lowest SUV bias (0.01%) and a 95% confidence interval of - 0.36 to + 0.47 for CGAN compared with the reference FD images for malignant lesions. CONCLUSION CycleGAN is able to synthesize clinical FD WB PET images from LD images with 1/8th of the standard injected activity or acquisition time. The predicted FD images show comparable performance in terms of lesion detectability, qualitative scores, and quantification bias and variance.
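The quantitative endpoints above, percentage SUV bias and Bland-Altman limits of agreement, are straightforward to compute. A minimal sketch assuming per-region SUV arrays from the predicted and reference scans; function names and the toy values are illustrative, not from the paper:

```python
import numpy as np

def suv_percent_bias(predicted: np.ndarray, reference: np.ndarray) -> float:
    """Mean percentage bias of predicted SUVs relative to the reference full-dose scan."""
    return float(np.mean((predicted - reference) / reference) * 100.0)

def bland_altman(predicted: np.ndarray, reference: np.ndarray):
    """Mean difference and 95% limits of agreement (mean ± 1.96 SD of the differences)."""
    diff = predicted - reference
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Toy example: a synthetic scan that uniformly overestimates SUV by 5%.
reference = np.array([2.0, 4.0, 6.0, 8.0])
predicted = reference * 1.05
print(round(suv_percent_bias(predicted, reference), 1))  # → 5.0
```

In the study, this kind of analysis yielded a near-zero bias (0.01%) for CGAN on malignant lesions.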
Collapse
Affiliation(s)
- Amirhossein Sanaat
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
| | - Isaac Shiri
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
| | - Hossein Arabi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
| | - Ismini Mainta
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
| | - René Nkoulou
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
| | - Habib Zaidi
- Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland
- Geneva University Neurocenter, Geneva University, 1205 Geneva, Switzerland
- Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- Department of Nuclear Medicine, University of Southern Denmark, DK-500 Odense, Denmark
| |
Collapse
|
548
|
Mosquera C, Diaz FN, Binder F, Rabellino JM, Benitez SE, Beresñak AD, Seehaus A, Ducrey G, Ocantos JA, Luna DR. Chest x-ray automated triage: A semiologic approach designed for clinical implementation, exploiting different types of labels through a combination of four Deep Learning architectures. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 206:106130. [PMID: 34023576 DOI: 10.1016/j.cmpb.2021.106130] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/14/2020] [Accepted: 04/21/2021] [Indexed: 06/12/2023]
Abstract
BACKGROUND AND OBJECTIVES The multiple chest x-ray datasets released in the last years have ground-truth labels intended for different computer vision tasks, suggesting that performance in automated chest x-ray interpretation might improve by using a method that can exploit diverse types of annotations. This work presents a Deep Learning method based on the late fusion of different convolutional architectures, that allows training with heterogeneous data with a simple implementation, and evaluates its performance on independent test data. We focused on obtaining a clinically useful tool that could be successfully integrated into a hospital workflow. MATERIALS AND METHODS Based on expert opinion, we selected four target chest x-ray findings, namely lung opacities, fractures, pneumothorax and pleural effusion. For each finding we defined the most suitable type of ground-truth label, and built four training datasets combining images from public chest x-ray datasets and our institutional archive. We trained four different Deep Learning architectures and combined their outputs with a late fusion strategy, obtaining a unified tool. The performance was measured on two test datasets: an external openly-available dataset, and a retrospective institutional dataset, to estimate performance on the local population. RESULTS The external and local test sets had 4376 and 1064 images, respectively, for which the model showed an area under the Receiver Operating Characteristics curve of 0.75 (95%CI: 0.74-0.76) and 0.87 (95%CI: 0.86-0.89) in the detection of abnormal chest x-rays. For the local population, a sensitivity of 86% (95%CI: 84-90), and a specificity of 88% (95%CI: 86-90) were obtained, with no significant differences between demographic subgroups. We present examples of heatmaps to show the accomplished level of interpretability, examining true and false positives. 
CONCLUSION This study presents a new approach for exploiting heterogeneous labels from different chest x-ray datasets, by choosing Deep Learning architectures according to the radiological characteristics of each pathological finding. We estimated the tool's performance on the local population, obtaining results comparable to state-of-the-art metrics. We believe this approach is closer to the actual reading process of chest x-rays by professionals, and therefore more likely to be successful in a real clinical setting.
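The late-fusion strategy above combines the outputs of four independently trained per-finding models into one study-level triage score. The abstract does not specify the fusion rule, so the max-probability rule below is an assumption for illustration, as are the finding keys:

```python
# Late fusion of four per-finding classifier outputs into one "abnormal" score.
# The max rule and the finding names are illustrative assumptions; the paper's
# exact fusion rule is not given in the abstract.
FINDINGS = ("lung_opacity", "fracture", "pneumothorax", "pleural_effusion")

def fuse_abnormality(scores: dict) -> float:
    """Return a study-level abnormality score from per-finding probabilities."""
    missing = set(FINDINGS) - set(scores)
    if missing:
        raise ValueError(f"missing finding scores: {missing}")
    return max(scores[f] for f in FINDINGS)

study = {"lung_opacity": 0.12, "fracture": 0.03,
         "pneumothorax": 0.81, "pleural_effusion": 0.25}
print(fuse_abnormality(study))  # → 0.81
```

Fusing at the score level, rather than sharing one backbone, is what lets each finding keep the architecture and label type best suited to it.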
Collapse
Affiliation(s)
- Candelaria Mosquera
- Health Informatics Department, Hospital Italiano de Buenos Aires, Juan Domingo Perón 4190, Ciudad Autónoma de Buenos Aires C1199AAB, Argentina; Universidad Tecnológica Nacional, Av. Medrano 951, Ciudad Autónoma de Buenos Aires C1179AAQ, Argentina.
| | - Facundo Nahuel Diaz
- Radiology Department, Hospital Italiano de Buenos Aires, Juan Domingo Perón 4190, Ciudad Autónoma de Buenos Aires C1199AAB, Argentina; Universidad de Buenos Aires. Facultad de Medicina. II Cátedra de Anatomía. Buenos Aires, Argentina.
| | - Fernando Binder
- Health Informatics Department, Hospital Italiano de Buenos Aires, Juan Domingo Perón 4190, Ciudad Autónoma de Buenos Aires C1199AAB, Argentina.
| | - José Martín Rabellino
- Radiology Department, Hospital Italiano de Buenos Aires, Juan Domingo Perón 4190, Ciudad Autónoma de Buenos Aires C1199AAB, Argentina.
| | - Sonia Elizabeth Benitez
- Health Informatics Department, Hospital Italiano de Buenos Aires, Juan Domingo Perón 4190, Ciudad Autónoma de Buenos Aires C1199AAB, Argentina.
| | - Alejandro Daniel Beresñak
- Radiology Department, Hospital Italiano de Buenos Aires, Juan Domingo Perón 4190, Ciudad Autónoma de Buenos Aires C1199AAB, Argentina.
| | - Alberto Seehaus
- Radiology Department, Hospital Italiano de Buenos Aires, Juan Domingo Perón 4190, Ciudad Autónoma de Buenos Aires C1199AAB, Argentina.
| | - Gabriel Ducrey
- Radiology Department, Hospital Italiano de Buenos Aires, Juan Domingo Perón 4190, Ciudad Autónoma de Buenos Aires C1199AAB, Argentina.
| | - Jorge Alberto Ocantos
- Radiology Department, Hospital Italiano de Buenos Aires, Juan Domingo Perón 4190, Ciudad Autónoma de Buenos Aires C1199AAB, Argentina.
| | - Daniel Roberto Luna
- Health Informatics Department, Hospital Italiano de Buenos Aires, Juan Domingo Perón 4190, Ciudad Autónoma de Buenos Aires C1199AAB, Argentina.
| |
Collapse
|
549
|
Klontzas ME, Volitakis E, Aydingöz Ü, Chlapoutakis K, Karantanas AH. Machine learning identifies factors related to early joint space narrowing in dysplastic and non-dysplastic hips. Eur Radiol 2021; 32:542-550. [PMID: 34136948 DOI: 10.1007/s00330-021-08070-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2021] [Revised: 04/28/2021] [Accepted: 05/11/2021] [Indexed: 12/22/2022]
Abstract
OBJECTIVES To utilise machine learning, unsupervised clustering and multivariate modelling in order to predict severe early joint space narrowing (JSN) from anatomical hip parameters while identifying factors related to joint space width (JSW) in dysplastic and non-dysplastic hips. METHODS A total of 507 hip CT examinations of patients 20-55 years old were retrospectively examined, and JSW, center-edge (CE) angle, alpha angle, anterior acetabular sector angle (AASA), and neck-shaft angle (NSA) were recorded. Dysplasia and severe JSN were defined as CE angle < 25° and JSW < 2 mm, respectively. A random forest classifier was developed to predict severe JSN based on anatomical and demographical data. Multivariate linear regression and two-step unsupervised clustering were performed to identify factors linked to JSW. RESULTS In dysplastic hips, lateral or anterior undercoverage alone was not correlated to JSN. AASA (p < 0.005) and CE angle (p < 0.032) were the only factors significantly correlated with JSN in dysplastic hips. In non-dysplastic hips, JSW was inversely correlated to CE angle, AASA, and age, and positively correlated to NSA (p < 0.001). A random forest classifier predicted severe JSN (AUC 69.9%, 95%CI 47.9-91.8%). TwoStep cluster modelling identified two distinct patient clusters, one with low and one with normal JSW, with different anatomical characteristics. CONCLUSION Machine learning predicted severe JSN and identified population characteristics related to normal and abnormal joint space width. Dysplasia in one plane was found to be insufficient to cause JSN, highlighting the need for hip anatomy assessment on multiple planes. KEY POINTS • Neither anterior nor lateral acetabular dysplasia was sufficient to independently reduce joint space width in a multivariate linear regression model of dysplastic hips.
• A random forest classifier, developed from measurements and demographic parameters of 507 hip joints, achieved an area under the curve of 69.9% in the external validation set in predicting severe joint space narrowing from anatomical hip parameters and age. • Unsupervised TwoStep cluster analysis revealed two distinct population groups, one with low and one with normal joint space width, characterised by differences in hip morphology.
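The study's operational definitions (dysplasia: CE angle < 25°; severe JSN: JSW < 2 mm) translate directly into the labeling rules that would build the classifier's target variable. A minimal sketch with these thresholds taken from the abstract; the function names are ours:

```python
def is_dysplastic(ce_angle_deg: float) -> bool:
    """Dysplasia as defined in the study: center-edge (CE) angle < 25 degrees."""
    return ce_angle_deg < 25.0

def has_severe_jsn(jsw_mm: float) -> bool:
    """Severe joint space narrowing as defined in the study: JSW < 2 mm."""
    return jsw_mm < 2.0

def label_hip(ce_angle_deg: float, jsw_mm: float) -> dict:
    """Derive the two study labels for one hip from its CT measurements."""
    return {"dysplastic": is_dysplastic(ce_angle_deg),
            "severe_jsn": has_severe_jsn(jsw_mm)}

print(label_hip(ce_angle_deg=22.0, jsw_mm=1.8))
# → {'dysplastic': True, 'severe_jsn': True}
```

The random forest then learns to predict the `severe_jsn` label from the remaining angles and age.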
Collapse
Affiliation(s)
- Michail E Klontzas
- International Interdisciplinary Consensus Committee on DDH Evaluation (ICODE), Heraklion, Greece
- Department of Medical Imaging, University Hospital of Heraklion, Voutes, 71110, Crete, Greece
- Advanced Hybrid Imaging Systems, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece
- Department of Radiology, School of Medicine, University of Crete, Heraklion, Greece
| | - Emmanouil Volitakis
- Department of Medical Imaging, University Hospital of Heraklion, Voutes, 71110, Crete, Greece
- Department of Radiology, School of Medicine, University of Crete, Heraklion, Greece
| | - Üstün Aydingöz
- International Interdisciplinary Consensus Committee on DDH Evaluation (ICODE), Heraklion, Greece
- Department of Radiology, Hacettepe University School of Medicine, Sihhiye, 06100, Ankara, Turkey
| | - Konstantinos Chlapoutakis
- International Interdisciplinary Consensus Committee on DDH Evaluation (ICODE), Heraklion, Greece
- Department of Radiology, Vioapeikonisi Imaging Lab, Arkoleon 9, 71202, Heraklion, Greece
| | - Apostolos H Karantanas
- International Interdisciplinary Consensus Committee on DDH Evaluation (ICODE), Heraklion, Greece.
- Department of Medical Imaging, University Hospital of Heraklion, Voutes, 71110, Crete, Greece.
- Advanced Hybrid Imaging Systems, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece.
- Department of Radiology, School of Medicine, University of Crete, Heraklion, Greece.
| |
Collapse
|
550
|
Gitto S, Cuocolo R, Albano D, Morelli F, Pescatori LC, Messina C, Imbriaco M, Sconfienza LM. CT and MRI radiomics of bone and soft-tissue sarcomas: a systematic review of reproducibility and validation strategies. Insights Imaging 2021; 12:68. [PMID: 34076740 PMCID: PMC8172744 DOI: 10.1186/s13244-021-01008-3] [Citation(s) in RCA: 38] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Accepted: 05/05/2021] [Indexed: 02/07/2023] Open
Abstract
Background Feature reproducibility and model validation are two main challenges of radiomics. This study aims to systematically review radiomic feature reproducibility and predictive model validation strategies in studies dealing with CT and MRI radiomics of bone and soft-tissue sarcomas. The ultimate goal is to promote achieving a consensus on these aspects in radiomic workflows and facilitate clinical transferability. Results Out of 278 identified papers, forty-nine papers published between 2008 and 2020 were included. They dealt with radiomics of bone (n = 12) or soft-tissue (n = 37) tumors. Eighteen (37%) studies included a feature reproducibility analysis. Inter-/intra-reader segmentation variability was the theme of reproducibility analysis in 16 (33%) investigations, outnumbering the analyses focused on image acquisition or post-processing (n = 2, 4%). The intraclass correlation coefficient was the most commonly used statistical method to assess reproducibility, which ranged from 0.6 to 0.9. At least one machine learning validation technique was used for model development in 25 (51%) papers, and K-fold cross-validation was the most commonly employed. A clinical validation of the model was reported in 19 (39%) papers. It was performed using a separate dataset from the primary institution (i.e., internal validation) in 14 (29%) studies and an independent dataset related to different scanners or from another institution (i.e., independent validation) in 5 (10%) studies. Conclusions The issues of radiomic feature reproducibility and model validation varied largely among the studies dealing with musculoskeletal sarcomas and should be addressed in future investigations to bring the field of radiomics from a preclinical research area to the clinical stage.
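K-fold cross-validation, the technique most commonly employed among the reviewed studies, partitions the dataset so that every sample is used for testing exactly once. A minimal index-splitting sketch (no shuffling or stratification, purely illustrative):

```python
def k_fold_indices(n_samples: int, k: int):
    """Yield (train, test) index lists for k-fold cross-validation (no shuffling)."""
    base, extra = divmod(n_samples, k)
    start = 0
    for i in range(k):
        size = base + (1 if i < extra else 0)  # spread the remainder over early folds
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, test
        start += size

folds = list(k_fold_indices(10, 4))
print([len(test) for _, test in folds])  # → [3, 3, 2, 2]
```

Note that cross-validation alone is still internal validation; the review's point is that only a minority of studies went further, to data from other scanners or institutions.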
Collapse
Affiliation(s)
- Salvatore Gitto
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Riccardo Galeazzi 4, 20161, Milan, Italy.
| | - Renato Cuocolo
- Dipartimento di Medicina Clinica e Chirurgia, Università degli Studi di Napoli "Federico II", Naples, Italy.,Laboratory of Augmented Reality for Health Monitoring (ARHeMLab), Dipartimento di Ingegneria Elettrica e delle Tecnologie dell'Informazione, Università degli Studi di Napoli "Federico II", Naples, Italy
| | - Domenico Albano
- IRCCS Istituto Ortopedico Galeazzi, Milan, Italy.,Sezione di Scienze Radiologiche, Dipartimento di Biomedicina, Neuroscienze e Diagnostica Avanzata, Università degli Studi di Palermo, Palermo, Italy
| | | | - Lorenzo Carlo Pescatori
- Assistance Publique - Hôpitaux de Paris (AP-HP), Service d'Imagerie Médicale, CHU Henri Mondor, Créteil, France
| | - Carmelo Messina
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Riccardo Galeazzi 4, 20161, Milan, Italy.,IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
| | - Massimo Imbriaco
- Dipartimento di Scienze Biomediche Avanzate, Università degli Studi di Napoli "Federico II", Naples, Italy
| | - Luca Maria Sconfienza
- Dipartimento di Scienze Biomediche per la Salute, Università degli Studi di Milano, Via Riccardo Galeazzi 4, 20161, Milan, Italy.,IRCCS Istituto Ortopedico Galeazzi, Milan, Italy
| |
Collapse
|