1
Wang Z, Kim Y, Mortani Barbosa EJ. Demographics and socioeconomic determinants of health predict continued participation in a CT lung cancer screening program. Curr Probl Diagn Radiol 2024:S0363-0188(24)00077-X. [PMID: 38658287] [DOI: 10.1067/j.cpradiol.2024.04.004]
Abstract
PURPOSE We developed machine learning (ML) models to assess the value of demographic and socioeconomic status (SES) variables in predicting continued participation in a low-dose CT lung cancer screening (LCS) program. MATERIALS AND METHODS 480 LCS subjects were retrospectively examined for the following outcomes: (#1) no follow-up (single LCS scan) vs. multiple follow-ups (220 and 260 subjects, respectively) and (#2) absent or delayed (>1 month past the due date) follow-up vs. timely follow-up (356 and 124 subjects, respectively). We quantified the contributions of 14 socioeconomic, demographic, and clinical predictors to LCS adherence, and validated and compared the prediction performance of multivariate logistic regression (MLR), support vector machine (SVM), and shallow neural network (NN) models. RESULTS For outcome #1, age, sex, race, insurance status, personal cancer history, and median household income were associated with returning for follow-ups. For outcome #2, age, sex, race, and insurance status were significant predictors of absent/delayed LCS follow-up. Across 5-fold cross-validation, the MLR model achieved an average AUC of 0.732 (95% CI, 0.661-0.803) for outcome #1 and 0.633 (95% CI, 0.602-0.664) for outcome #2, making it the best-performing model overall, whereas the NN and SVM models tended to overfit the training data and underperformed on the test data for both outcomes. CONCLUSIONS We identified significant predictors of LCS adherence, and our ML models can predict which subjects are at higher risk of absent or delayed LCS follow-up. Our results could inform data-driven interventions to engage vulnerable populations and extend the benefits of LCS.
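The model comparison described in this abstract (MLR vs. SVM vs. shallow NN, scored by mean AUC across 5-fold cross-validation) can be sketched as follows. This is an illustrative example using scikit-learn on synthetic stand-in data, not the study's 480-subject cohort or its actual code; the feature count (14) mirrors the abstract's predictor count.

```python
# Hedged sketch: 5-fold cross-validated AUC comparison of three classifier
# families, on synthetic data standing in for the study's 14 predictors.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic binary-outcome data (e.g., follow-up vs. no follow-up).
X, y = make_classification(n_samples=480, n_features=14, random_state=0)

models = {
    "MLR": LogisticRegression(max_iter=1000),
    "SVM": SVC(random_state=0),  # roc_auc scoring uses decision_function
    "NN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
}

for name, model in models.items():
    # One AUC per fold; the mean is the headline figure reported per model.
    aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean 5-fold AUC = {aucs.mean():.3f}")
```

The per-fold AUCs also yield the confidence intervals the abstract reports; comparing mean test-fold AUC against training AUC is one way to surface the NN/SVM overfitting the authors describe.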
Affiliation(s)
- Zhuoyang Wang
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Yohan Kim
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Eduardo J Mortani Barbosa
- Division of Cardiothoracic Imaging, Department of Radiology, Perelman School of Medicine, University of Pennsylvania, 3400 Spruce Street, Ground Floor Founders Bldg, Philadelphia, PA 19104, USA.
2
Flory MN, Napel S, Tsai EB. Artificial Intelligence in Radiology: Opportunities and Challenges. Semin Ultrasound CT MR 2024; 45:152-160. [PMID: 38403128] [DOI: 10.1053/j.sult.2024.02.004]
Abstract
Artificial intelligence's (AI) emergence in radiology elicits both excitement and uncertainty. AI holds promise for improving radiology with regard to clinical practice, education, and research. Yet AI systems are trained on select datasets that can contain bias and inaccuracies. Radiologists must understand these limitations and engage with AI developers at every step of the process, from algorithm initiation and design to development and implementation, to maximize the benefits and minimize the harms this technology can enable.
Affiliation(s)
- Marta N Flory
- Department of Radiology, Stanford University School of Medicine, Center for Academic Medicine, Palo Alto, CA
- Sandy Napel
- Department of Radiology, Stanford University School of Medicine, Center for Academic Medicine, Palo Alto, CA
- Emily B Tsai
- Department of Radiology, Stanford University School of Medicine, Center for Academic Medicine, Palo Alto, CA.
3
Guermazi A, Omoumi P, Tordjman M, Fritz J, Kijowski R, Regnard NE, Carrino J, Kahn CE, Knoll F, Rueckert D, Roemer FW, Hayashi D. How AI May Transform Musculoskeletal Imaging. Radiology 2024; 310:e230764. [PMID: 38165245] [PMCID: PMC10831478] [DOI: 10.1148/radiol.230764]
Abstract
While musculoskeletal imaging volumes are increasing, there is a relative shortage of subspecialized musculoskeletal radiologists to interpret the studies. Will artificial intelligence (AI) be the solution? If so, wide implementation of AI-supported data acquisition methods in clinical practice will require establishing trusted and reliable results, which will demand close collaboration between core AI researchers and clinical radiologists. Upon successful clinical implementation, a wide variety of AI-based tools could improve the musculoskeletal radiologist's workflow by triaging imaging examinations, assisting with image interpretation, and decreasing reporting time. Additional AI applications may also be helpful for business, education, and research purposes if successfully integrated into the daily practice of musculoskeletal radiology. The question is not whether AI will replace radiologists, but rather how musculoskeletal radiologists can take advantage of AI to enhance their expert capabilities.
Affiliation(s)
- Ali Guermazi
- From the Department of Radiology, Boston University School of Medicine, Boston, Mass (A.G., F.W.R., D.H.); Department of Radiology, VA Boston Healthcare System, 1400 VFW Parkway, Suite 1B105, West Roxbury, MA 02132 (A.G.); Department of Radiology, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland (P.O.); Department of Radiology, Hotel Dieu Hospital and University Paris Cité, Paris, France (M.T.); Department of Radiology, New York University Grossman School of Medicine, New York, NY (J.F., R.K.); Gleamer, Paris, France (N.E.R.); Réseau d’Imagerie Sud Francilien, Clinique du Mousseau Ramsay Santé, Evry, France (N.E.R.); Pôle Médical Sénart, Lieusaint, France (N.E.R.); Department of Radiology and Imaging, Hospital for Special Surgery and Weill Cornell Medicine, New York, NY (J.C.); Department of Radiology and Institute for Biomedical Informatics, University of Pennsylvania, Philadelphia, Penn (C.E.K.); Departments of Artificial Intelligence in Biomedical Engineering (F.K.) and Radiology (F.W.R.), Universitätsklinikum Erlangen & Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Germany (F.K.); School of Medicine & Computation, Information and Technology, Klinikum rechts der Isar, Technical University Munich, München, Germany (D.R.); Department of Computing, Imperial College London, London, England (D.R.); and Department of Radiology, Tufts Medical Center, Tufts University School of Medicine, Boston, Mass (D.H.)
- Patrick Omoumi
- Mickael Tordjman
- Jan Fritz
- Richard Kijowski
- Nor-Eddine Regnard
- John Carrino
- Charles E. Kahn
- Florian Knoll
- Daniel Rueckert
- Frank W. Roemer
- Daichi Hayashi
4
Aromiwura AA, Settle T, Umer M, Joshi J, Shotwell M, Mattumpuram J, Vorla M, Sztukowska M, Contractor S, Amini A, Kalra DK. Artificial intelligence in cardiac computed tomography. Prog Cardiovasc Dis 2023; 81:54-77. [PMID: 37689230] [DOI: 10.1016/j.pcad.2023.09.001]
Abstract
Artificial intelligence (AI) is a broad discipline of computer science and engineering. Modern applications of AI encompass intelligent models and algorithms for automated data analysis and processing, data generation, and prediction, with applications in visual perception, speech understanding, and language translation. AI in healthcare uses machine learning (ML) and other predictive analytical techniques to help sort through vast amounts of data and generate outputs that aid in diagnosis, clinical decision support, workflow automation, and prognostication. Coronary computed tomography angiography (CCTA) is ideally suited to these applications because of the vast amounts of data generated and analyzed during cardiac segmentation, coronary calcium scoring, plaque quantification, adipose tissue quantification, peri-operative planning, fractional flow reserve quantification, and cardiac event prediction. In the past 5 years, there has been an exponential increase in the number of studies exploring the use of AI for cardiac computed tomography (CT) image acquisition, de-noising, analysis, and prognosis. Beyond image processing, AI has also been applied to improve the imaging workflow in areas such as patient scheduling, urgent result notification, report generation, and report communication. In this review, we discuss algorithms applicable to AI and radiomic analysis; we then present a summary of current and emerging clinical applications of AI in cardiac CT. We conclude with AI's advantages and limitations in this new field.
Affiliation(s)
- Tyler Settle
- Medical Imaging Laboratory, Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY, USA
- Muhammad Umer
- Division of Cardiology, Department of Medicine, University of Louisville, Louisville, KY, USA
- Jonathan Joshi
- Center for Artificial Intelligence in Radiological Sciences (CAIRS), Department of Radiology, University of Louisville, Louisville, KY, USA
- Matthew Shotwell
- Division of Cardiology, Department of Medicine, University of Louisville, Louisville, KY, USA
- Jishanth Mattumpuram
- Division of Cardiology, Department of Medicine, University of Louisville, Louisville, KY, USA
- Mounica Vorla
- Division of Cardiology, Department of Medicine, University of Louisville, Louisville, KY, USA
- Maryta Sztukowska
- Clinical Trials Unit, University of Louisville, Louisville, KY, USA; University of Information Technology and Management, Rzeszow, Poland
- Sohail Contractor
- Center for Artificial Intelligence in Radiological Sciences (CAIRS), Department of Radiology, University of Louisville, Louisville, KY, USA
- Amir Amini
- Medical Imaging Laboratory, Department of Electrical and Computer Engineering, University of Louisville, Louisville, KY, USA; Center for Artificial Intelligence in Radiological Sciences (CAIRS), Department of Radiology, University of Louisville, Louisville, KY, USA
- Dinesh K Kalra
- Division of Cardiology, Department of Medicine, University of Louisville, Louisville, KY, USA; Center for Artificial Intelligence in Radiological Sciences (CAIRS), Department of Radiology, University of Louisville, Louisville, KY, USA.
5
Pesapane F, Tantrige P, De Marco P, Carriero S, Zugni F, Nicosia L, Bozzini AC, Rotili A, Latronico A, Abbate F, Origgi D, Santicchia S, Petralia G, Carrafiello G, Cassano E. Advancements in Standardizing Radiological Reports: A Comprehensive Review. Medicina (Kaunas) 2023; 59:1679. [PMID: 37763797] [PMCID: PMC10535385] [DOI: 10.3390/medicina59091679]
Abstract
Standardized radiological reports stimulate debate in the medical imaging field. This review paper explores the advantages and challenges of standardized reporting. Standardized reporting can offer improved clarity and efficiency of communication among radiologists and the multidisciplinary team. However, challenges include limited flexibility, initially increased time and effort, and potential user experience issues. The efforts toward standardization are examined, encompassing the establishment of reporting templates, use of common imaging lexicons, and integration of clinical decision support tools. Recent technological advancements, including multimedia-enhanced reporting and AI-driven solutions, are discussed for their potential to improve the standardization process. Organizations such as the ACR, ESUR, RSNA, and ESR have developed standardized reporting systems, templates, and platforms to promote uniformity and collaboration. However, challenges remain in terms of workflow adjustments, language and format variability, and the need for validation. The review concludes by presenting a set of ten essential rules for creating standardized radiology reports, emphasizing clarity, consistency, and adherence to structured formats.
Affiliation(s)
- Filippo Pesapane
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy; (L.N.); (A.C.B.); (A.R.); (F.A.); (E.C.)
- Priyan Tantrige
- Department of Radiology, King’s College Hospital NHS Foundation Trust, London SE5 9RS, UK
- Paolo De Marco
- Medical Physics Unit, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy; (P.D.M.); (D.O.)
- Serena Carriero
- Postgraduate School of Radiodiagnostics, University of Milan, 20122 Milan, Italy
- Fabio Zugni
- Division of Radiology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy; (F.Z.); (G.P.)
- Luca Nicosia
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy; (L.N.); (A.C.B.); (A.R.); (F.A.); (E.C.)
- Anna Carla Bozzini
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy; (L.N.); (A.C.B.); (A.R.); (F.A.); (E.C.)
- Anna Rotili
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy; (L.N.); (A.C.B.); (A.R.); (F.A.); (E.C.)
- Antuono Latronico
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy; (L.N.); (A.C.B.); (A.R.); (F.A.); (E.C.)
- Francesca Abbate
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy; (L.N.); (A.C.B.); (A.R.); (F.A.); (E.C.)
- Daniela Origgi
- Medical Physics Unit, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy; (P.D.M.); (D.O.)
- Sonia Santicchia
- Foundation IRCCS Cà Granda-Ospedale Maggiore Policlinico, 20122 Milan, Italy; (S.S.); (G.C.)
- Giuseppe Petralia
- Division of Radiology, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy; (F.Z.); (G.P.)
- Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Gianpaolo Carrafiello
- Foundation IRCCS Cà Granda-Ospedale Maggiore Policlinico, 20122 Milan, Italy; (S.S.); (G.C.)
- Department of Oncology and Hemato-Oncology, University of Milan, 20122 Milan, Italy
- Enrico Cassano
- Breast Imaging Division, IEO European Institute of Oncology IRCCS, 20141 Milan, Italy; (L.N.); (A.C.B.); (A.R.); (F.A.); (E.C.)
6
Abbasi N, Lacson R, Kapoor N, Licaros A, Guenette JP, Burk KS, Hammer M, Desai S, Eappen S, Saini S, Khorasani R. Development and External Validation of an Artificial Intelligence Model for Identifying Radiology Reports Containing Recommendations for Additional Imaging. AJR Am J Roentgenol 2023; 221:377-385. [PMID: 37073901] [DOI: 10.2214/ajr.23.29120]
Abstract
BACKGROUND. Reported rates of recommendations for additional imaging (RAIs) in radiology reports are low. Bidirectional encoder representations from transformers (BERT), a deep learning model pretrained to understand language context and ambiguity, has potential for identifying RAIs and thereby assisting large-scale quality improvement efforts. OBJECTIVE. The purpose of this study was to develop and externally validate an artificial intelligence (AI)-based model for identifying radiology reports containing RAIs. METHODS. This retrospective study was performed at a multisite health center. A total of 6300 radiology reports generated at one site from January 1, 2015, to June 30, 2021, were randomly selected and split in a 4:1 ratio to create training (n = 5040) and test (n = 1260) sets. A total of 1260 reports generated at the center's other sites (including academic and community hospitals) from April 1 to April 30, 2022, were randomly selected as an external validation group. Referring practitioners and radiologists of varying subspecialties manually reviewed report impressions for the presence of RAIs. A BERT-based technique for identifying RAIs was developed by use of the training set. Performance of the BERT-based model and a previously developed traditional machine learning (TML) model was assessed in the test set. Finally, performance was assessed in the external validation set. The code for the BERT-based RAI model is publicly available. RESULTS. Among a total of 7419 unique patients (4133 women, 3286 men; mean age, 58.8 years), 10.0% of 7560 reports contained an RAI. In the test set, the BERT-based model achieved 94.4% precision, 98.5% recall, and an F1 score of 96.4%, versus 69.0% precision, 65.4% recall, and an F1 score of 67.2% for the TML model; accuracy was greater for the BERT-based model (99.2% vs 93.1%, p < .001).
In the external validation set, the BERT-based model had 99.2% precision, 91.6% recall, an F1 score of 95.2%, and 99.0% accuracy. CONCLUSION. The BERT-based AI model accurately identified reports with RAIs, outperforming the TML model. High performance in the external validation set suggests the potential for other health systems to adapt the model without requiring institution-specific training. CLINICAL IMPACT. The model could potentially be used for real-time EHR monitoring for RAIs and other improvement initiatives to help ensure timely performance of clinically necessary recommended follow-up.
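The precision, recall, and F1 figures above can be reproduced with a short sketch. The rule-based detector below is purely illustrative (it is not the study's BERT or TML model, and the regex pattern is an assumption), but it shows the shape of an RAI classifier plus the metrics used to score one:

```python
# Illustrative sketch: a keyword-rule baseline for flagging radiology report
# impressions that contain a recommendation for additional imaging (RAI),
# plus the precision/recall/F1 metrics used to evaluate such models.
# The pattern and helper names are assumptions, not the study's code.
import re

RAI_PATTERN = re.compile(
    r"\brecommend\w*\b.*?\b(follow-?up|additional|further)?\s*"
    r"(imaging|ct|mri|mr|ultrasound|pet)\b",
    re.IGNORECASE | re.DOTALL,
)

def contains_rai(impression):
    """Return True if the impression appears to recommend more imaging."""
    return bool(RAI_PATTERN.search(impression))

def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 from binary labels/predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

In practice a transformer model replaces the regex, but the evaluation loop is identical: predictions against manually reviewed impressions, scored with the three metrics above.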
Affiliation(s)
- Nooshin Abbasi
- Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Ronilda Lacson
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
- Neena Kapoor
- Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
- Andro Licaros
- Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Jeffrey P Guenette
- Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
- Kristine Specht Burk
- Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
- Mark Hammer
- Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
- Sonali Desai
- Department of Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Sunil Eappen
- Department of Anesthesiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Sanjay Saini
- Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Ramin Khorasani
- Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
7
Amin K, Khosla P, Doshi R, Chheang S, Forman HP. Artificial Intelligence to Improve Patient Understanding of Radiology Reports. Yale J Biol Med 2023; 96:407-417. [PMID: 37780992] [PMCID: PMC10524809] [DOI: 10.59249/nkoy5498]
Abstract
Diagnostic imaging reports are generally written with a target audience of other providers. As a result, the reports are written with medical jargon and technical detail to ensure accurate communication. With implementation of the 21st Century Cures Act, patients have greater and quicker access to their imaging reports, but these reports are still written above the comprehension level of the average patient. Consequently, many patients have requested reports to be conveyed in language accessible to them. Numerous studies have shown that improving patient understanding of their condition results in better outcomes, so improving comprehension of imaging reports is essential. Summary statements, second reports, and the inclusion of the radiologist's phone number have been proposed, but these solutions have implications for radiologist workflow. Artificial intelligence (AI) has the potential to simplify imaging reports without significant disruptions. Many AI technologies have been applied to radiology reports in the past for various clinical and research purposes, but patient-focused solutions have largely been ignored. New natural language processing technologies and large language models (LLMs) have the potential to improve patient understanding of their imaging reports. However, LLMs are a nascent technology and significant research is required before LLM-driven report simplification is used in patient care.
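The claim that reports are "written above the comprehension level of the average patient" is usually quantified with a readability formula. The sketch below (not a tool from the study; the vowel-group syllable heuristic is a simplifying assumption) estimates a Flesch-Kincaid grade level, the kind of score simplification work targets:

```python
# Illustrative sketch: estimating the reading grade level of report text with
# the Flesch-Kincaid grade formula. Syllables are approximated as runs of
# consecutive vowels, a crude but common heuristic.
import re

def count_syllables(word):
    """Approximate syllable count as the number of vowel groups (min 1)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def fk_grade(text):
    """Flesch-Kincaid grade: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return round(
        0.39 * len(words) / len(sentences)
        + 11.8 * syllables / len(words)
        - 15.59,
        2,
    )
```

A jargon-dense impression scores many grade levels above its lay paraphrase, which is the gap LLM-driven simplification aims to close.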
Affiliation(s)
- Sophie Chheang
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Howard P Forman
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Yale School of Management, New Haven, CT, USA
- Department of Health Policy and Management, Yale School of Public Health, New Haven, CT, USA
8
Debs P, Fayad LM. The promise and limitations of artificial intelligence in musculoskeletal imaging. Front Radiol 2023; 3:1242902. [PMID: 37609456] [PMCID: PMC10440743] [DOI: 10.3389/fradi.2023.1242902]
Abstract
With the recent developments in deep learning and the rapid growth of convolutional neural networks, artificial intelligence has shown promise as a tool that can transform several aspects of the musculoskeletal imaging cycle. Its applications can involve both interpretive and non-interpretive tasks such as the ordering of imaging, scheduling, protocoling, image acquisition, report generation and communication of findings. However, artificial intelligence tools still face a number of challenges that can hinder effective implementation into clinical practice. The purpose of this review is to explore both the successes and limitations of artificial intelligence applications throughout the musculoskeletal imaging cycle and to highlight how these applications can help enhance the service radiologists deliver to their patients, resulting in increased efficiency as well as improved patient and provider satisfaction.
Affiliation(s)
- Patrick Debs
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, Baltimore, MD, United States
- Laura M. Fayad
- The Russell H. Morgan Department of Radiology & Radiological Science, The Johns Hopkins Medical Institutions, Baltimore, MD, United States
- Department of Orthopaedic Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, United States
- Department of Oncology, Johns Hopkins University School of Medicine, Baltimore, MD, United States
9
DeSimone AK, Kapoor N, Lacson R, Budiawan E, Hammer MM, Desai SP, Eappen S, Khorasani R. Impact of an Automated Closed-Loop Communication and Tracking Tool on the Rate of Recommendations for Additional Imaging in Thoracic Radiology Reports. J Am Coll Radiol 2023; 20:781-788. [PMID: 37307897] [DOI: 10.1016/j.jacr.2023.05.004]
Abstract
OBJECTIVE Assess the effects of feedback reports and implementing a closed-loop communication system on rates of recommendations for additional imaging (RAIs) in thoracic radiology reports. METHODS In this retrospective, institutional review board-approved study at an academic quaternary care hospital, we analyzed 176,498 thoracic radiology reports during a pre-intervention (baseline) period from April 1, 2018, to November 30, 2018; a feedback report only period from December 1, 2018, to September 30, 2019; and a closed-loop communication system plus feedback report (IT intervention) period from October 1, 2019, to December 31, 2020, promoting explicit documentation of rationale, time frame, and imaging modality for RAI, defined as complete RAI. A previously validated natural language processing tool was used to classify reports with an RAI. The primary outcome, rate of RAI, was compared using a control chart. Multivariable logistic regression determined factors associated with the likelihood of RAI. We also estimated the completeness of RAI in reports comparing the IT intervention to baseline using the χ2 statistic. RESULTS The natural language processing tool classified 3.2% (5,682 of 176,498) reports as having an RAI; 3.5% (1,783 of 51,323) during the pre-intervention period, 3.8% (2,147 of 56,722) during the feedback report only period (odds ratio: 1.1, P = .03), and 2.6% (1,752 of 68,453) during the IT intervention period (odds ratio: 0.60, P < .001). In subanalysis, the proportion of incomplete RAI decreased from 84.0% (79 of 94) during the pre-intervention period to 48.5% (47 of 97) during the IT intervention period (P < .001). DISCUSSION Feedback reports alone increased RAI rates, and an IT intervention promoting documentation of complete RAI in addition to feedback reports led to significant reductions in RAI rate, incomplete RAI, and improved overall completeness of the radiology recommendations.
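The control chart mentioned above is, for a proportion such as the RAI rate, a p-chart with 3-sigma limits. The sketch below shows the standard attribute-chart arithmetic; the subgroup counts in the test are illustrative, not the study's data:

```python
# Illustrative sketch: a p-chart (proportion control chart) for tracking a
# per-period rate such as reports-with-RAI per month. Centre line is the
# pooled proportion; limits are p_bar +/- 3*sqrt(p_bar*(1-p_bar)/n).
import math

def p_chart_limits(defectives, totals):
    """Return (p_bar, [(lcl, ucl), ...]) with 3-sigma limits per subgroup."""
    p_bar = sum(defectives) / sum(totals)
    limits = []
    for n in totals:
        sigma = math.sqrt(p_bar * (1 - p_bar) / n)
        limits.append((max(0.0, p_bar - 3 * sigma),
                       min(1.0, p_bar + 3 * sigma)))
    return p_bar, limits

def out_of_control(defectives, totals):
    """Indices of subgroups whose proportion falls outside its limits."""
    _, limits = p_chart_limits(defectives, totals)
    flags = []
    for i, (d, n) in enumerate(zip(defectives, totals)):
        lcl, ucl = limits[i]
        if not lcl <= d / n <= ucl:
            flags.append(i)
    return flags
```

A period whose rate drops below the lower limit (as after the IT intervention here) signals a real shift rather than random variation.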
Affiliation(s)
- Ariadne K DeSimone
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Neena Kapoor
- Director of Diversity, Inclusion, and Equity and Quality and Patient Safety Officer, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Ronilda Lacson
- Director of Education, Center for Evidence-Based Imaging, Brigham and Women's Hospital, and Director of Clinical Informatics, Harvard Medical School Library of Evidence, Boston, Massachusetts
- Elvira Budiawan
- Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Mark M Hammer
- Cardiothoracic Fellowship Program Director, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Sonali P Desai
- Senior Vice President and Chief Quality Officer, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Sunil Eappen
- Senior Vice President, Medical Affairs, and Chief Medical Officer, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Ramin Khorasani
- Vice Chair of Radiology Quality and Safety, Mass General Brigham; Director of the Center for Evidence-Based Imaging and Vice Chair of Quality/Safety, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
10
Bobba PS, Sailer A, Pruneski JA, Beck S, Mozayan A, Mozayan S, Arango J, Cohan A, Chheang S. Natural language processing in radiology: Clinical applications and future directions. Clin Imaging 2023; 97:55-61. [PMID: 36889116] [DOI: 10.1016/j.clinimag.2023.02.014]
Abstract
Natural language processing (NLP) encompasses a wide range of techniques that allow computers to interact with human text. Applications of NLP in everyday life include language translation aids, chat bots, and text prediction. It has been increasingly utilized in the medical field with the increased reliance on electronic health records. As findings in radiology are primarily communicated via text, the field is particularly suited to benefit from NLP-based applications. Furthermore, rapidly increasing imaging volume will continue to increase the burden on clinicians, emphasizing the need for improvements in workflow. In this article, we highlight the numerous non-clinical, provider-focused, and patient-focused applications of NLP in radiology. We also comment on challenges associated with development and incorporation of NLP-based applications in radiology as well as potential future directions.
Affiliation(s)
- Pratheek S Bobba
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, United States
- Anne Sailer
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, United States
- Spencer Beck
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, United States
- Ali Mozayan
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, United States
- Sara Mozayan
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, United States
- Jennifer Arango
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, United States
- Arman Cohan
- Department of Computer Science, Yale University, New Haven, CT, United States
- Sophie Chheang
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, United States
11
Moezzi SAR, Ghaedi A, Rahmanian M, Mousavi SZ, Sami A. Application of Deep Learning in Generating Structured Radiology Reports: A Transformer-Based Technique. J Digit Imaging 2023; 36:80-90. [PMID: 36002778] [PMCID: PMC9984654] [DOI: 10.1007/s10278-022-00692-x]
Abstract
Since radiology reports needed for clinical practice and research are written and stored in free-text narrations, extraction of relevant information for further analysis is difficult. In these circumstances, natural language processing (NLP) techniques can facilitate automatic information extraction and transformation of free-text formats to structured data. In recent years, deep learning (DL)-based models have been adapted for NLP experiments with promising results. Despite the significant potential of DL models based on artificial neural networks (ANN) and convolutional neural networks (CNN), these models face limitations that hinder implementation in clinical practice. Transformers, another new DL architecture, have been increasingly applied to improve the process. Therefore, in this study, we propose a transformer-based fine-grained named entity recognition (NER) architecture for clinical information extraction. We collected 88 abdominopelvic sonography reports in free-text formats and annotated them based on our developed information schema. The text-to-text transfer transformer (T5) model and SciFive, a pre-trained domain-specific adaptation of the T5 model, were applied for fine-tuning to extract entities and relations and transform the input into a structured format. Our transformer-based model in this study outperformed previously applied approaches such as ANN and CNN models based on ROUGE-1, ROUGE-2, ROUGE-L, and BLEU scores of 0.816, 0.668, 0.528, and 0.743, respectively, while providing an interpretable structured report.
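The ROUGE scores quoted above measure n-gram overlap between generated and reference text. A minimal sketch of ROUGE-1 F1 on whitespace tokens is shown below; it is a simplification of the full ROUGE package, not the authors' evaluation code:

```python
# Illustrative sketch: ROUGE-1 F1, i.e. unigram-overlap precision/recall
# combined into F1. Tokenisation here is plain lowercase whitespace
# splitting, a simplifying assumption.
from collections import Counter

def rouge1_f1(candidate, reference):
    """ROUGE-1 F1 between a candidate string and a reference string."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if not overlap:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

ROUGE-2 and ROUGE-L follow the same pattern with bigrams and longest common subsequences, respectively.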
Affiliation(s)
- Seyed Ali Reza Moezzi
- Department of Computer Science and Engineering and IT, Shiraz University, Shiraz, Iran
- Abdolrahman Ghaedi
- Department of Computer Science and Engineering and IT, Shiraz University, Shiraz, Iran
- Mojdeh Rahmanian
- Department of Computer Science and Engineering and IT, Shiraz University, Shiraz, Iran
- Ashkan Sami
- Department of Computer Science and Engineering and IT, Shiraz University, Shiraz, Iran
12
Sánchez-Puente A, Dorado-Díaz PI, Sampedro-Gómez J, Bermejo J, Martinez-Legazpi P, Fernández-Avilés F, Sánchez-González J, Pérez Del Villar C, Vicente-Palacios V, Sanchez PL. Machine Learning to Optimize the Echocardiographic Follow-Up of Aortic Stenosis. JACC Cardiovasc Imaging 2023:S1936-878X(22)00735-5. [PMID: 36881417] [DOI: 10.1016/j.jcmg.2022.12.008]
Abstract
BACKGROUND Disease progression in patients with mild-to-moderate aortic stenosis is heterogeneous and requires periodic echocardiographic examinations to evaluate severity. OBJECTIVES This study sought to explore the use of machine learning to optimize aortic stenosis echocardiographic surveillance automatically. METHODS The study investigators trained, validated, and externally applied a machine learning model to predict whether a patient with mild-to-moderate aortic stenosis will develop severe valvular disease at 1, 2, or 3 years. Demographic and echocardiographic patient data to develop the model were obtained from a tertiary hospital consisting of 4,633 echocardiograms from 1,638 consecutive patients. The external cohort was obtained from an independent tertiary hospital, consisting of 4,531 echocardiograms from 1,533 patients. Echocardiographic surveillance timing results were compared with the European and American guidelines echocardiographic follow-up recommendations. RESULTS In internal validation, the model discriminated severe from nonsevere aortic stenosis development with an area under the receiver-operating characteristic curve (AUC-ROC) of 0.90, 0.92, and 0.92 for the 1-, 2-, or 3-year interval, respectively. In external application, the model showed an AUC-ROC of 0.85, 0.85, and 0.85, for the 1-, 2-, or 3-year interval. A simulated application of the model in the external validation cohort resulted in savings of 49% and 13% of unnecessary echocardiographic examinations per year compared with European and American guideline recommendations, respectively. CONCLUSIONS Machine learning provides real-time, automated, personalized timing of next echocardiographic follow-up examination for patients with mild-to-moderate aortic stenosis. Compared with European and American guidelines, the model reduces the number of patient examinations.
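The AUC-ROC values above have a simple rank-based interpretation: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. The sketch below computes it directly from that identity (illustrative scores, not the study's model outputs):

```python
# Illustrative sketch: AUC-ROC via the Mann-Whitney U identity -- the
# fraction of (positive, negative) pairs the classifier ranks correctly,
# with ties counting half. O(P*N) brute force, fine for a demonstration.
def auc_roc(y_true, scores):
    """AUC from binary labels (1/0) and real-valued classifier scores."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    pairs = concordant = 0
    for p in pos:
        for n in neg:
            pairs += 1
            if p > n:
                concordant += 1
            elif p == n:
                concordant += 0.5
    return concordant / pairs
```

An AUC of 0.90 therefore means the model ranks a true progressor above a non-progressor 90% of the time, independent of any decision threshold.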
Affiliation(s)
- Antonio Sánchez-Puente
- Cardiology Service, Salamanca University Hospital, Biomedical Research Institute of Salamanca (IBSAL), Department of Medicine, University of Salamanca, Salamanca, Spain; Spanish Cardiovascular Network (CIBERCV), Carlos III Health Institute, Spain
- P Ignacio Dorado-Díaz
- Cardiology Service, Salamanca University Hospital, Biomedical Research Institute of Salamanca (IBSAL), Department of Medicine, University of Salamanca, Salamanca, Spain; Spanish Cardiovascular Network (CIBERCV), Carlos III Health Institute, Spain
- Jesús Sampedro-Gómez
- Cardiology Service, Salamanca University Hospital, Biomedical Research Institute of Salamanca (IBSAL), Department of Medicine, University of Salamanca, Salamanca, Spain; Spanish Cardiovascular Network (CIBERCV), Carlos III Health Institute, Spain
- Javier Bermejo
- Spanish Cardiovascular Network (CIBERCV), Carlos III Health Institute, Spain; Cardiology Service, Gregorio Marañón University Hospital, Gregorio Marañón Health Research Institute (IISGM), Faculty of Medicine, Complutense University, Madrid, Spain
- Pablo Martinez-Legazpi
- Department of Mathematical Physics and Fluids, Faculty of Sciences, National University of Distance Education (UNED) and CIBERCV, Madrid, Spain
- Francisco Fernández-Avilés
- Spanish Cardiovascular Network (CIBERCV), Carlos III Health Institute, Spain; Cardiology Service, Gregorio Marañón University Hospital, Gregorio Marañón Health Research Institute (IISGM), Faculty of Medicine, Complutense University, Madrid, Spain
- Candelas Pérez Del Villar
- Cardiology Service, Salamanca University Hospital, Biomedical Research Institute of Salamanca (IBSAL), Department of Medicine, University of Salamanca, Salamanca, Spain; Spanish Cardiovascular Network (CIBERCV), Carlos III Health Institute, Spain
- Pedro L Sanchez
- Cardiology Service, Salamanca University Hospital, Biomedical Research Institute of Salamanca (IBSAL), Department of Medicine, University of Salamanca, Salamanca, Spain; Spanish Cardiovascular Network (CIBERCV), Carlos III Health Institute, Spain
13
Xu X, Qin L, Ding L, Wang C, Wang M, Li Z, Li J. Identifying stroke diagnosis-related features from medical imaging reports to improve clinical decision-making support. BMC Med Inform Decis Mak 2022; 22:275. [PMID: 36266650] [PMCID: PMC9583470] [DOI: 10.1186/s12911-022-02012-3]
Abstract
BACKGROUND Medical imaging reports play an important role in communication of diagnostic information between radiologists and clinicians. Head magnetic resonance imaging (MRI) reports can provide evidence that is widely used in the diagnosis and treatment of ischaemic stroke. The high-signal regions of diffusion-weighted imaging (DWI) images in MRI reports are key evidence. Correctly identifying high-signal regions of DWI images is helpful for the treatment of ischaemic stroke patients. Since most of the multiple signals recorded in head MRI reports appear in the same part, it is challenging to identify high-signal regions of DWI images from MRI reports. METHODS We developed a deep learning model to automatically identify high-signal regions of DWI images from head MRI reports. We proposed a fine-grained entity typing (FET) model based on machine reading comprehension that transformed the traditional two-step fine-grained entity typing task into a question-answering task. RESULTS To prove the validity of the proposed model, we compared it with traditional fine-grained entity typing models; its F1 measure was 5.9% and 3.2% higher than those of the models based on LSTM and BERT, respectively. CONCLUSION In this study, we explore the automatic identification of high-signal regions of DWI images from the description part of a head MRI report. We transformed the identification of high-signal regions of DWI images into an FET task and proposed an MRC-FET model. Compared with the traditional two-step FET method, the model we proposed not only simplifies the task but also has better performance. These results show that the work in this study can contribute to improving clinical decision support systems.
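The core idea of recasting entity typing as machine reading comprehension is to turn each candidate (mention, type) pair into a natural-language question over the report text, which a QA model then answers. The sketch below shows only that data transformation; the question template, type labels, and function names are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch: converting fine-grained entity typing into a
# question-answering (machine reading comprehension) format. Each mention
# and candidate type yields one (question, context) pair that a QA model
# would classify yes/no.
def build_mrc_examples(report, mentions, type_labels):
    """Yield (question, context) pairs, one per mention/type combination."""
    examples = []
    for mention in mentions:
        for label in type_labels:
            question = f'Is "{mention}" a {label} in this report?'
            examples.append((question, report))
    return examples
```

This framing lets a single QA model replace the usual two-step pipeline (mention detection, then type classification), which is the simplification the abstract credits for the performance gain.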
Affiliation(s)
- Xiaowei Xu
- Institute of Medical Information, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Lu Qin
- Institute of Medical Information, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
- Lingling Ding
- Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Chunjuan Wang
- Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Meng Wang
- Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Zixiao Li
- Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Jiao Li
- Institute of Medical Information, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China
14
Machine Learning Model Drift: Predicting Diagnostic Imaging Follow-Up as a Case Example. J Am Coll Radiol 2022; 19:1162-1169. [PMID: 35981636] [DOI: 10.1016/j.jacr.2022.05.030]
Abstract
OBJECTIVE Address model drift in a machine learning (ML) model for predicting diagnostic imaging follow-up using data augmentation with more recent data versus retraining new predictive models. METHODS This institutional review board-approved retrospective study was conducted January 1, 2016, to December 31, 2020, at a large academic institution. A previously trained ML model was trained on 1,000 radiology reports from 2016 (old data). An additional 1,385 randomly selected reports from 2019 to 2020 (new data) were annotated for follow-up recommendations and randomly divided into two sets: training (n = 900) and testing (n = 485). Support vector machine and random forest (RF) algorithms were constructed and trained using 900 new data reports plus old data (augmented data, new models) and using only new data (new data, new models). The 2016 baseline model was used as a comparator, both as is and retrained with the augmented data. Recall was compared with baseline using McNemar's test. RESULTS Follow-up recommendations were contained in 11.3% of reports (157 of 1,385). The baseline model retrained with new data had precision = 0.83 and recall = 0.54; neither significantly different from baseline. A new RF model trained with augmented data had significantly better recall versus the baseline model (0.80 versus 0.66, P = .04) and comparable precision (0.90 versus 0.86). DISCUSSION ML methods for monitoring follow-up recommendations in radiology reports suffer model drift over time. A newly developed RF model achieved better recall with comparable precision versus simply retraining a previously trained original model with augmented data. Thus, these models must be regularly assessed and updated using more recent historical data.
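McNemar's test, used above to compare the retrained and baseline models on the same test reports, depends only on the two discordant cells of the paired contingency table. A minimal sketch (illustrative counts, not the study's data):

```python
# Illustrative sketch: McNemar's test for paired classifiers evaluated on
# the same cases. b = cases the baseline got right and the new model got
# wrong; c = the reverse. Uses the continuity-corrected chi-square statistic;
# for 1 degree of freedom, P(X >= x) = erfc(sqrt(x / 2)).
import math

def mcnemar(b, c):
    """Return (chi-square statistic, two-sided p-value)."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p
```

Because concordant cases cancel out, a model can "win" on overall accuracy yet fail McNemar's test if the disagreements are balanced, which is why the study reports it alongside precision and recall.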
15
Hu Z, Hu R, Yau O, Teng M, Wang P, Hu G, Singla R. Tempering Expectations on the Medical Artificial Intelligence Revolution: The Medical Trainee Viewpoint. JMIR Med Inform 2022; 10:e34304. [PMID: 35969464] [PMCID: PMC9425164] [DOI: 10.2196/34304]
Abstract
The rapid development of artificial intelligence (AI) in medicine has resulted in an increased number of applications deployed in clinical trials. AI tools have been developed with goals of improving diagnostic accuracy, workflow efficiency through automation, and discovery of novel features in clinical data. There is subsequent concern about the role of AI in replacing existing tasks traditionally entrusted to physicians. This has implications for medical trainees who may make decisions based on the perception of how disruptive AI may be to their future career. This commentary discusses current barriers to AI adoption to moderate concerns of the role of AI in the clinical setting, particularly as a standalone tool that replaces physicians. Technical limitations of AI include generalizability of performance and deficits in existing infrastructure to accommodate data, both of which are less obvious in pilot studies, where high performance is achieved in a controlled data processing environment. Economic limitations include rigorous regulatory requirements to deploy medical devices safely, particularly if AI is to replace human decision-making. Ethical guidelines are also required in the event of dysfunction to identify responsibility of the developer of the tool, health care authority, and patient. The consequences are apparent when identifying the scope of existing AI tools, most of which aim to assist rather than replace physicians. The combination of these limitations will delay the onset of ubiquitous AI tools that perform standalone clinical tasks. The role of the physician likely remains paramount to clinical decision-making in the near future.
Affiliation(s)
- Zoe Hu
- School of Medicine, Queen's University, Kingston, ON, Canada
- Ricky Hu
- School of Medicine, Queen's University, Kingston, ON, Canada
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- Olivia Yau
- School of Medicine, University of British Columbia, Vancouver, BC, Canada
- Minnie Teng
- School of Medicine, University of British Columbia, Vancouver, BC, Canada
- Patrick Wang
- School of Medicine, Queen's University, Kingston, ON, Canada
- Grace Hu
- Temerty Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Rohit Singla
- School of Biomedical Engineering, University of British Columbia, Vancouver, BC, Canada
- School of Medicine, University of British Columbia, Vancouver, BC, Canada
16
Potočnik J, Thomas E, Killeen R, Foley S, Lawlor A, Stowe J. Automated vetting of radiology referrals: exploring natural language processing and traditional machine learning approaches. Insights Imaging 2022; 13:127. [PMID: 35925429] [PMCID: PMC9352827] [DOI: 10.1186/s13244-022-01267-8]
Abstract
Background With a marked increase in utilisation of computed tomography (CT), inappropriate imaging is a significant concern. Manual justification audits of radiology referrals are time-consuming and require financial resources. We aimed to retrospectively audit justification of brain CT referrals by applying natural language processing and traditional machine learning (ML) techniques to predict their justification based on the audit outcomes. Methods Two human experts retrospectively analysed justification of 375 adult brain CT referrals performed in a tertiary referral hospital during the 2019 calendar year, using a cloud-based platform for structured referring. Cohen’s kappa was computed to measure inter-rater reliability. Referrals were represented as bag-of-words (BOW) and term frequency-inverse document frequency models. Text preprocessing techniques, including custom stop words (CSW) and spell correction (SC), were applied to the referral text. Logistic regression, random forest, and support vector machines (SVM) were used to predict the justification of referrals. The referrals were split into training and test sets (300/75), and weighted accuracy, sensitivity, specificity, and the area under the curve (AUC) were computed on the test set. Results In total, 253 (67.5%) examinations were deemed justified, 75 (20.0%) as unjustified, and 47 (12.5%) as maybe justified. The agreement between the annotators was strong (κ = 0.835). The BOW + CSW + SC + SVM outperformed other binary models with a weighted accuracy of 92%, a sensitivity of 91%, a specificity of 93%, and an AUC of 0.948. Conclusions Traditional ML models can accurately predict justification of unstructured brain CT referrals. This offers potential for automated justification analysis of CT referrals in clinical departments.
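The inter-rater agreement quoted above (κ = 0.835) is Cohen's kappa: observed agreement corrected for the agreement expected by chance given each rater's label frequencies. A minimal sketch, with illustrative labels rather than the study's annotations:

```python
# Illustrative sketch: Cohen's kappa between two annotators over the same
# items. kappa = (p_observed - p_expected) / (1 - p_expected), where
# p_expected comes from the product of each rater's marginal label rates.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length lists of categorical labels."""
    n = len(rater_a)
    observed = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(
        counts_a[label] * counts_b[label] for label in counts_a
    ) / (n * n)
    return (observed - expected) / (1 - expected)
```

A kappa of 0 means agreement no better than chance; values above roughly 0.8, as here, are conventionally read as strong agreement.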
Affiliation(s)
- Jaka Potočnik
- University College Dublin School of Medicine, Dublin, Ireland
- Edel Thomas
- University College Dublin School of Medicine, Dublin, Ireland
- Ronan Killeen
- University College Dublin School of Medicine, Dublin, Ireland
- Shane Foley
- University College Dublin School of Medicine, Dublin, Ireland
- Aonghus Lawlor
- University College Dublin School of Computer Science, Dublin, Ireland
- John Stowe
- University College Dublin School of Medicine, Dublin, Ireland

17
Li J, Lin Y, Zhao P, Liu W, Cai L, Sun J, Zhao L, Yang Z, Song H, Lv H, Wang Z. Automatic text classification of actionable radiology reports of tinnitus patients using bidirectional encoder representations from transformer (BERT) and in-domain pre-training (IDPT). BMC Med Inform Decis Mak 2022; 22:200. [PMID: 35907966 PMCID: PMC9338483 DOI: 10.1186/s12911-022-01946-y]
Abstract
Background Given the increasing number of people suffering from tinnitus, the accurate categorization of patients with actionable reports is attractive in assisting clinical decision making. However, this process requires experienced physicians and significant human labor. Natural language processing (NLP) has shown great potential in big data analytics of medical texts; yet, its application to domain-specific analysis of radiology reports is limited. Objective The aim of this study is to propose a novel approach to classifying actionable radiology reports of tinnitus patients using bidirectional encoder representations from transformers (BERT)-based models and to evaluate the benefits of in-domain pre-training (IDPT) along with a sequence adaptation strategy. Methods A total of 5864 temporal bone computed tomography (CT) reports were labeled by two experienced radiologists as follows: (1) normal findings without notable lesions; (2) notable lesions but uncorrelated to tinnitus; and (3) at least one lesion considered a potential cause of tinnitus. We then constructed a framework consisting of deep learning (DL) neural networks and self-supervised BERT models. A tinnitus domain-specific corpus was used to pre-train the BERT model to further improve its embedding weights. In addition, we evaluated multiple max-sequence-length settings in BERT to reduce the quantity of computation. After a comprehensive comparison of all metrics, we determined the most promising approach through comparison of F1-scores and AUC values. Results In the first experiment, the fine-tuned BERT model achieved a more promising result (AUC 0.868, F1 0.760) than the Word2Vec-based models (AUC 0.767, F1 0.733) on validation data. In the second experiment, the BERT in-domain pre-training model (AUC 0.948, F1 0.841) performed significantly better than the base BERT model (AUC 0.868, F1 0.760). Additionally, among the variants of BERT fine-tuning models, Mengzi achieved the highest AUC of 0.878 (F1 0.764). Finally, we found that a BERT max sequence length of 128 tokens achieved an AUC of 0.866 (F1 0.736), almost equal to that of the 512-token setting (AUC 0.868, F1 0.760). Conclusion We developed a reliable BERT-based framework for tinnitus diagnosis from Chinese radiology reports, along with a sequence adaptation strategy to reduce computational resources while maintaining accuracy. The findings could provide a reference for NLP development in Chinese radiology reports. Supplementary Information The online version contains supplementary material available at 10.1186/s12911-022-01946-y.
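The F1 scores quoted throughout are per-class harmonic means of precision and recall. A short pure-Python sketch over a hypothetical three-category labeling (the categories mirror the normal / unrelated-lesion / potential-cause scheme above, but the labels are invented for illustration):

```python
def f1_per_class(y_true, y_pred, labels):
    """Per-class F1 from paired label lists: harmonic mean of precision and recall."""
    scores = {}
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores[c] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return scores

# hypothetical report labels: 0 normal, 1 lesion unrelated, 2 potential tinnitus cause
true_labels = [0, 0, 1, 1, 2, 2, 2]
pred_labels = [0, 1, 1, 1, 2, 2, 0]
scores = f1_per_class(true_labels, pred_labels, [0, 1, 2])
```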
Affiliation(s)
- Jia Li
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- Yucong Lin
- School of Medical Technology, Beijing Institute of Technology, No. 5 Zhongguancun East Road, Beijing, 100050, People's Republic of China
- Pengfei Zhao
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- Wenjuan Liu
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- Linkun Cai
- School of Biological Science and Medical Engineering, Beihang University, No. 37 XueYuan Road, Beijing, 100191, People's Republic of China
- Jing Sun
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- Lei Zhao
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- Zhenghan Yang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, No. 5 South Street, Zhongguancun, Haidian District, Beijing, 100050, People's Republic of China
- Han Lv
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China
- Zhenchang Wang
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, No. 95 YongAn Road, Beijing, 100050, People's Republic of China; School of Biological Science and Medical Engineering, Beihang University, No. 37 XueYuan Road, Beijing, 100191, People's Republic of China

18
White T, Aronson MD, Sternberg SB, Shafiq U, Berkowitz SJ, Benneyan J, Phillips RS, Schiff GD. Analysis of Radiology Report Recommendation Characteristics and Rate of Recommended Action Performance. JAMA Netw Open 2022; 5:e2222549. [PMID: 35867062 PMCID: PMC9308057 DOI: 10.1001/jamanetworkopen.2022.22549]
Abstract
IMPORTANCE Following up on recommendations from radiologic findings is important for patient care, but frequently there are failures to carry out these recommendations. The lack of reliable systems to characterize and track completion of actionable radiology report recommendations poses an important patient safety challenge. OBJECTIVES To characterize actionable radiology recommendations and, using this taxonomy, track and understand rates of loop closure for radiology recommendations in a primary care setting. DESIGN, SETTING, AND PARTICIPANTS Radiology reports in a primary care clinic at a large academic center were redesigned to include actionable recommendations in a separate dedicated field. Manual review of all reports generated from imaging tests ordered between January 1 and December 31, 2018, by primary care physicians that contained actionable recommendations was performed. For this quality improvement study, a taxonomy system that conceptualized recommendations was developed based on 3 domains: (1) what is recommended (eg, repeat a test or perform a different test, specialty referral), (2) specified time frame in which to perform the recommended action, and (3) contingency language qualifying the recommendation. Using this framework, a 2-stage process was used to review patients' records to classify recommendations and determine loop closure rates and factors associated with failure to complete recommended actions. Data analysis was conducted from April to July 2021. MAIN OUTCOMES AND MEASURES Radiology recommendations, time frames, and contingencies. Rates of carrying out vs not closing the loop on these recommendations in the recommended time frame were assessed. RESULTS A total of 598 radiology reports were identified with structured recommendations: 462 for additional or future radiologic studies and 196 for nonradiologic actions (119 specialty referrals, 47 invasive procedures, and 43 other actions). 
The overall rate of completed actions (loop closure) within the recommended time frame was 87.4%, with 31 open-loop cases judged by expert quality reviewers to pose substantial clinical risk. Factors associated with successful loop closure included (1) absence of accompanying contingency language, (2) shorter recommended time frames, and (3) evidence of direct radiologist communication with the ordering primary care physicians. A clinically significant lack of loop closure was found in approximately 5% of cases. CONCLUSIONS AND RELEVANCE The findings of this study suggest that creating structured radiology reports featuring a dedicated recommendations field permits the development of a taxonomy to classify such recommendations and determine whether they were carried out. The lack of loop closure suggests the need for more reliable systems.
Affiliation(s)
- Tiantian White
- Harvard Medical School, Boston, Massachusetts
- Department of Family Medicine, Oregon Health & Science University, Portland
- Mark D. Aronson
- Department of Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts
- Scot B. Sternberg
- Department of Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts
- Umber Shafiq
- Department of Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts
- Seth J. Berkowitz
- Harvard Medical School, Boston, Massachusetts
- Department of Radiology, Beth Israel Deaconess Medical Center, Boston, Massachusetts
- James Benneyan
- Healthcare Systems Engineering Institute, College of Engineering, Northeastern University, Boston, Massachusetts
- Russell S. Phillips
- Department of Medicine, Beth Israel Deaconess Medical Center, Boston, Massachusetts
- Harvard Medical School, Center for Primary Care, Boston, Massachusetts
- Gordon D. Schiff
- Harvard Medical School, Center for Primary Care, Boston, Massachusetts
- Center for Patient Safety Research and Practice, Brigham and Women's Hospital, Boston, Massachusetts

19
Liu B, Shi J. A Machine Learning-Based Approach to Discriminating Basaltic Tectonic Settings. International Journal of Computational Intelligence and Applications 2022. [DOI: 10.1142/s1469026822500122]
Abstract
The geochemical characteristics of magmatic rocks can distinguish the tectonic setting in which the magma formed, and these signatures are discriminated using whole-rock geochemical data. As a new application of artificial intelligence in geochemistry, machine learning discrimination methods are gradually complementing the classical discriminant-diagram approach. However, feature selection in high-dimensional data and the determination of many unknown parameters are the two main factors affecting classification accuracy. In this paper, a particle swarm optimized support vector machine (PSO-SVM) model is established to classify the tectonic environments of basaltic rocks in the GEOROC database. The model relies on the search capability of the particle swarm algorithm to find the combination of SVM parameters that would otherwise be selected by experience, thereby improving accuracy. The performance of the PSO-SVM model is evaluated in simulation experiments using basalt samples from the database and confusion matrices. The results show that the proposed model distinguishes basaltic tectonic environments effectively, with an accuracy above 90%. Compared with the traditional discriminant-diagram method, the machine learning method fusing the two algorithms therefore performs better on tectonic-environment classification problems.
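The PSO half of the coupling is a generic global search over the SVM's hyperparameters (typically the penalty C and the kernel width). The sketch below is a minimal, dependency-free particle swarm optimiser; since training an SVM here is out of scope, a simple bowl-shaped function stands in for the cross-validation error that the paper's model would actually minimise:

```python
import random

def pso(objective, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser over a box-bounded search space."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + attraction to personal best + attraction to global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# stand-in objective with minimum at (2, 0.5); in the paper's setting this would
# be SVM cross-validation error as a function of (C, gamma)
best, val = pso(lambda p: (p[0] - 2) ** 2 + (p[1] - 0.5) ** 2,
                bounds=[(0.01, 10), (0.001, 1)])
```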
Affiliation(s)
- Baoshun Liu
- School of Civil and Resource Engineering, University of Science and Technology Beijing, Beijing 100083, P. R. China
- Junxia Shi
- School of Civil and Resource Engineering, University of Science and Technology Beijing, Beijing 100083, P. R. China

20
Voreis S, Mattay G, Cook T. Informatics Solutions to Mitigate Legal Risk Associated With Communication Failures. J Am Coll Radiol 2022; 19:823-828. [PMID: 35654145 DOI: 10.1016/j.jacr.2022.05.002]
Abstract
Communication failures are a documented cause of malpractice litigation against radiologists. As imaging volumes have increased, and with them the number of findings requiring further workup, radiologists are increasingly expected to communicate with ordering clinicians. However, communication may be unsuccessful for a variety of reasons that expose radiologists to potential malpractice risk. Informatics solutions have the potential to improve communication and decrease this risk. We discuss human-powered, purely automated, and hybrid approaches to closing the communications loop. In addition, we describe the Patient Test Results Information Act (Pennsylvania Act 112) and its implications for closing the loop on noncritical actionable findings.
Affiliation(s)
- Shahodat Voreis
- Department of Radiology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania
- Govind Mattay
- John T. Milliken Department of Medicine, Washington University School of Medicine, St Louis, Missouri
- Tessa Cook
- Department of Radiology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, Pennsylvania; Chief, 3-D and Advanced Imaging; Codirector, Center for Practice Transformation in Radiology; Fellowship Director, Imaging Informatics; Member, ACR Informatics Commission; Vice Chair, ACR Commission on Patient- and Family-Centered Care; Past Cochair, ACR Informatics Summit

21
Shaikh SG, Suresh Kumar B, Narang G. Recommender system for health care analysis using machine learning technique: a review. Theoretical Issues in Ergonomics Science 2022. [DOI: 10.1080/1463922x.2022.2061078]
Affiliation(s)
- Salim G. Shaikh
- Amity School of Engineering and Technology, Amity University Jaipur, Jaipur, India

22
Short RG, Dondlinger S, Wildman-Tobriner B. Management of Incidental Thyroid Nodules on Chest CT: Using Natural Language Processing to Assess White Paper Adherence and Track Patient Outcomes. Acad Radiol 2022; 29:e18-e24. [PMID: 33757722 DOI: 10.1016/j.acra.2021.02.019]
Abstract
OBJECTIVE The purpose of this study was to develop a natural language processing (NLP) pipeline to identify incidental thyroid nodules (ITNs) meeting criteria for sonographic follow-up and to assess both adherence rates to white paper recommendations and downstream outcomes related to these incidental findings. METHODS 21,583 non-contrast chest CT reports from 2017 and 2018 were retrospectively evaluated to identify reports which included either an explicit recommendation for thyroid ultrasound, a description of a nodule ≥ 1.5 cm, or a description of a nodule with suspicious features. Reports from 2018 were used to train an NLP algorithm called fastText for automated identification of such reports. Algorithm performance was then evaluated on the 2017 reports. Next, any patient from 2017 with a report meeting criteria for ultrasound follow-up was further evaluated with manual chart review to determine follow-up adherence rates and nodule-related outcomes. RESULTS NLP identified reports with ITNs meeting criteria for sonographic follow-up with an accuracy of 96.5% (95% CI 96.2-96.7) and a sensitivity of 92.1% (95% CI 89.8-94.3). In 10,006 chest CTs from 2017, ITN follow-up ultrasound was indicated according to white paper criteria in 81 patients (0.8%), explicitly recommended in 46.9% (38/81) of patients, and obtained in less than half of the patients for whom it was appropriately recommended (17/35, 48.6%). DISCUSSION NLP accurately identified chest CT reports meeting criteria for ITN ultrasound follow-up. Radiologist adherence to white paper guidelines and subsequent referrer adherence to radiologist recommendations showed room for improvement.
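The screening criteria described (a nodule measuring at least 1.5 cm, or suspicious wording) can be approximated with a rule layer before any learned model. The patterns and vocabulary below are hypothetical illustrations of those two triggers, not the study's actual fastText pipeline:

```python
import re

# hypothetical patterns; the 1.5 cm threshold mirrors the white paper criterion above
SIZE = re.compile(r"(\d+(?:\.\d+)?)\s*cm\b", re.I)
SUSPICIOUS = re.compile(r"suspicious features|lymphadenopathy|local invasion", re.I)

def needs_ultrasound(report: str) -> bool:
    """Flag a report if any measured nodule is >= 1.5 cm or suspicious wording appears."""
    sizes = [float(s) for s in SIZE.findall(report)]
    return any(s >= 1.5 for s in sizes) or bool(SUSPICIOUS.search(report))
```

Real reports need negation and context handling ("no suspicious features", measurements of other structures), which is part of why a trained classifier such as fastText outperforms naive rules.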
Affiliation(s)
- Ryan G Short
- Mallinckrodt Institute of Radiology, Washington University School of Medicine in Saint Louis, 510 South Kingshighway Blvd., Saint Louis, MO 63110

23
Tadavarthi Y, Makeeva V, Wagstaff W, Zhan H, Podlasek A, Bhatia N, Heilbrun M, Krupinski E, Safdar N, Banerjee I, Gichoya J, Trivedi H. Overview of Noninterpretive Artificial Intelligence Models for Safety, Quality, Workflow, and Education Applications in Radiology Practice. Radiol Artif Intell 2022; 4:e210114. [PMID: 35391770 DOI: 10.1148/ryai.210114]
Abstract
Artificial intelligence has become a ubiquitous term in radiology over the past several years, and much attention has been given to applications that aid radiologists in the detection of abnormalities and diagnosis of diseases. However, there are many potential applications related to radiologic image quality, safety, and workflow improvements that present equal, if not greater, value propositions to radiology practices, insurance companies, and hospital systems. This review focuses on six major categories for artificial intelligence applications: study selection and protocoling, image acquisition, worklist prioritization, study reporting, business applications, and resident education. All of these categories can substantially affect different aspects of radiology practices and workflows. Each of these categories has different value propositions in terms of whether they could be used to increase efficiency, improve patient safety, increase revenue, or save costs. Each application is covered in depth in the context of both current and future areas of work. Keywords: Use of AI in Education, Application Domain, Supervised Learning, Safety © RSNA, 2022.
Affiliation(s)
- Yasasvi Tadavarthi, Valeria Makeeva, William Wagstaff, Henry Zhan, Anna Podlasek, Neil Bhatia, Marta Heilbrun, Elizabeth Krupinski, Nabile Safdar, Imon Banerjee, Judy Gichoya, Hari Trivedi
- Department of Medicine, Medical College of Georgia, Augusta, Ga (Y.T.); Department of Radiology and Imaging Sciences (V.M., W.W., H.Z., M.H., E.K., N.S., J.G., H.T.), School of Medicine (N.B.), and Department of Biomedical Informatics (I.B.), Emory University, 1364 E Clifton Rd NE, Atlanta, GA 30322; and Southend University Hospital NHS Foundation Trust, Westcliff-on-Sea, UK (A.P.)

24
Zaman S, Petri C, Vimalesvaran K, Howard J, Bharath A, Francis D, Peters N, Cole GD, Linton N. Automatic Diagnosis Labeling of Cardiovascular MRI by Using Semisupervised Natural Language Processing of Text Reports. Radiol Artif Intell 2022; 4:e210085. [PMID: 35146435 PMCID: PMC8823679 DOI: 10.1148/ryai.210085]
Abstract
PURPOSE To assess whether the semisupervised natural language processing (NLP) of text from clinical radiology reports could provide useful automated diagnosis categorization for ground truth labeling to overcome manual labeling bottlenecks in the machine learning pipeline. MATERIALS AND METHODS In this retrospective study, 1503 text cardiac MRI reports from 2016 to 2019 were manually annotated for five diagnoses by clinicians: normal, dilated cardiomyopathy (DCM), hypertrophic cardiomyopathy, myocardial infarction (MI), and myocarditis. A semisupervised method that uses bidirectional encoder representations from transformers (BERT) pretrained on 1.14 million scientific publications was fine-tuned by using the manually extracted labels, with the report dataset split into 801 reports for training, 302 for validation, and 400 for testing. The model's performance was compared with two traditional NLP models: a rule-based model and a support vector machine (SVM) model. The models' F1 scores and receiver operating characteristic curves were used to analyze performance. RESULTS After 15 epochs, the F1 scores on the test set of 400 reports were as follows: normal, 84%; DCM, 79%; hypertrophic cardiomyopathy, 86%; MI, 91%; and myocarditis, 86%. The pooled F1 score and area under the receiver operating characteristic curve were 86% and 0.96, respectively. On the same test set, the BERT model had higher performance than the rule-based model (F1 score, 42%) and the SVM model (F1 score, 82%). Diagnosis categories classified by the BERT model enabled labeling of 1000 MR images in 0.2 seconds. CONCLUSION The developed model used labels extracted from radiology reports to provide automated diagnosis categorization of MR images with a high level of performance. Keywords: Semisupervised Learning, Diagnosis/Classification/Application Domain, Named Entity Recognition, MRI Supplemental material is available for this article. © RSNA, 2021.
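The area under the ROC curve reported above has a direct probabilistic reading: the chance that a randomly chosen positive report is scored above a randomly chosen negative one. A dependency-free sketch of that rank-based computation, using invented classifier scores rather than the study's outputs:

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic: the probability that a random
    positive outscores a random negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical scores for the positive class on six reports
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
auc = auroc(scores, labels)
```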
Affiliation(s)
- Kavitha Vimalesvaran, James Howard, Anil Bharath, Darrel Francis, Nicholas Peters, Graham D. Cole, Nick Linton
- From the National Heart and Lung Institute, Imperial College London, Hammersmith Hospital, Du Cane Road, Second Floor B Block, London W12 0HS, England (S.Z., C.P., K.V., J.H., D.F., N.P., G.D.C.); Imperial College Healthcare National Health Service Trust, London, England (J.H., D.F., N.P., G.D.C., N.L.); and Department of Bioengineering, Imperial College London, London, England (A.B., N.L.)

25
Taylor AM. The role of artificial intelligence in paediatric cardiovascular magnetic resonance imaging. Pediatr Radiol 2022; 52:2131-2138. [PMID: 34936019 PMCID: PMC9537201 DOI: 10.1007/s00247-021-05218-1]
Abstract
Artificial intelligence (AI) offers the potential to change many aspects of paediatric cardiac imaging. At present, there are only a few clinically validated examples of AI applications in this field. This review focuses on the use of AI in paediatric cardiovascular MRI, using examples from paediatric cardiovascular MRI, adult cardiovascular MRI and other radiologic experience.
Affiliation(s)
- Andrew M. Taylor
- Great Ormond Street Hospital for Children, Zayed Centre for Research, 20 Guildford St., Room 3.7, London WC1N 1DZ, UK; Cardiovascular Imaging, UCL Institute of Cardiovascular Science, London, UK

26
Steinkamp J, Cook TS. Basic Artificial Intelligence Techniques: Natural Language Processing of Radiology Reports. Radiol Clin North Am 2021; 59:919-931. [PMID: 34689877 DOI: 10.1016/j.rcl.2021.06.003]
Abstract
Natural language processing (NLP) is a subfield of computer science and linguistics that can be applied to extract meaningful information from radiology reports. Symbolic NLP is rule based and well suited to problems that can be explicitly defined by a set of rules. Statistical NLP is better suited to problems that cannot be well defined and requires annotated or labeled examples from which machine learning algorithms can infer the rules. Both symbolic and statistical NLP have found success in a variety of radiology use cases. More recently, deep learning approaches, including transformers, have gained traction and demonstrated good performance.
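A symbolic (rule-based) classifier of the kind described is just explicit patterns plus exceptions. The toy rule below flags report sentences containing a follow-up recommendation with a crude negation check; the patterns are illustrative only, not from any cited system, and their brittleness is exactly what motivates statistical approaches:

```python
import re

# illustrative symbolic rules: a recommendation keyword, unless negated
RECOMMEND = re.compile(r"\b(recommend|follow[- ]?up|further (imaging|evaluation))\b", re.I)
NEGATION = re.compile(r"\bno (further|additional)\b", re.I)

def has_recommendation(sentence: str) -> bool:
    """Symbolic rule: a recommendation keyword that is not negated."""
    return bool(RECOMMEND.search(sentence)) and not NEGATION.search(sentence)
```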
Affiliation(s)
- Jackson Steinkamp
- Department of Medicine, Hospital of the University of Pennsylvania, 3400 Spruce Street, Philadelphia, PA 19104, USA
- Tessa S Cook
- Perelman School of Medicine at the University of Pennsylvania, 3400 Spruce Street, 1 Silverstein Radiology, Philadelphia, PA 19104, USA

27
Automatic detection of actionable radiology reports using bidirectional encoder representations from transformers. BMC Med Inform Decis Mak 2021;21:262. PMID: 34511100. PMCID: PMC8436473. DOI: 10.1186/s12911-021-01623-6.
Abstract
Background It is essential for radiologists to communicate actionable findings to the referring clinicians reliably. Natural language processing (NLP) has been shown to help identify free-text radiology reports including actionable findings. However, the application of recent deep learning techniques to radiology reports, which can improve the detection performance, has not been thoroughly examined. Moreover, free-text that clinicians input in the ordering form (order information) has seldom been used to identify actionable reports. This study aims to evaluate the benefits of two new approaches: (1) bidirectional encoder representations from transformers (BERT), a recent deep learning architecture in NLP, and (2) using order information in addition to radiology reports. Methods We performed a binary classification to distinguish actionable reports (i.e., radiology reports tagged as actionable in actual radiological practice) from non-actionable ones (those without an actionable tag). 90,923 Japanese radiology reports in our hospital were used, of which 788 (0.87%) were actionable. We evaluated four methods, statistical machine learning with logistic regression (LR) and with gradient boosting decision tree (GBDT), and deep learning with a bidirectional long short-term memory (LSTM) model and a publicly available Japanese BERT model. Each method was used with two different inputs, radiology reports alone and pairs of order information and radiology reports. Thus, eight experiments were conducted to examine the performance. Results Without order information, BERT achieved the highest area under the precision-recall curve (AUPRC) of 0.5138, which showed a statistically significant improvement over LR, GBDT, and LSTM, and the highest area under the receiver operating characteristic curve (AUROC) of 0.9516. Simply coupling the order information with the radiology reports slightly increased the AUPRC of BERT but did not lead to a statistically significant improvement. 
This may be due to the complexity of clinical decisions made by radiologists. Conclusions These results suggest that BERT is useful for detecting actionable reports; more sophisticated methods are required to use order information effectively.
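The statistical machine-learning baselines compared against BERT here (LR, GBDT) learn decision rules from labeled examples rather than hand-written rules. A toy multinomial Naive Bayes classifier over bag-of-words features sketches that idea in miniature; the reports, labels, and vocabulary below are synthetic and purely illustrative, not the study's data or models.

```python
import math
from collections import Counter

def tokenize(text):
    return text.lower().replace(".", " ").split()

def train_nb(docs, labels):
    """Count class frequencies and per-class word frequencies."""
    class_counts = Counter(labels)
    word_counts = {c: Counter() for c in class_counts}
    vocab = set()
    for doc, label in zip(docs, labels):
        toks = tokenize(doc)
        word_counts[label].update(toks)
        vocab.update(toks)
    return class_counts, word_counts, vocab

def predict_nb(model, doc):
    """Pick the class with the highest log posterior (Laplace smoothing)."""
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for c, n in class_counts.items():
        lp = math.log(n / total)  # class prior
        denom = sum(word_counts[c].values()) + len(vocab)
        for tok in tokenize(doc):
            lp += math.log((word_counts[c][tok] + 1) / denom)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Synthetic toy corpus standing in for actionable / non-actionable reports.
docs = [
    "suspicious mass requires urgent attention",
    "new nodule concerning for malignancy",
    "normal study no abnormality",
    "stable postoperative changes no abnormality",
]
labels = ["actionable", "actionable", "non-actionable", "non-actionable"]
model = train_nb(docs, labels)
print(predict_nb(model, "suspicious nodule"))  # actionable
```

Real systems at this scale would also need to address the severe class imbalance the study reports (0.87% actionable), e.g. via class weighting or threshold tuning against AUPRC.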
28
Olthof AW, Shouche P, Fennema EM, IJpma FFA, Koolstra RHC, Stirler VMA, van Ooijen PMA, Cornelissen LJ. Machine learning based natural language processing of radiology reports in orthopaedic trauma. Comput Methods Programs Biomed 2021;208:106304. PMID: 34333208. DOI: 10.1016/j.cmpb.2021.106304.
Abstract
OBJECTIVES To compare different Machine Learning (ML) Natural Language Processing (NLP) methods for classifying radiology reports in orthopaedic trauma for the presence of injuries. Assessing NLP performance is a prerequisite for downstream tasks and therefore of importance from a clinical perspective (avoiding missed injuries, quality checks, insight into diagnostic yield) as well as from a research perspective (identification of patient cohorts, annotation of radiographs). METHODS Datasets of Dutch radiology reports of injured extremities (n = 2469, 33% fractures) and chest radiographs (n = 799, 20% pneumothorax) were collected in two different hospitals and labeled by radiologists and trauma surgeons for the presence or absence of injuries. NLP classification was applied and optimized by testing different preprocessing steps and different classifiers (rule-based, ML, and Bidirectional Encoder Representations from Transformers (BERT)). Performance was assessed by F1-score, AUC, sensitivity, specificity, and accuracy. RESULTS The deep learning based BERT model outperformed all other classification methods assessed. The model achieved an F1-score of (95 ± 2)% and accuracy of (96 ± 1)% on a dataset of simple reports (n = 2469), and an F1-score of (83 ± 7)% with accuracy of (93 ± 2)% on a dataset of complex reports (n = 799). CONCLUSION BERT NLP outperforms traditional ML and rule-based classifiers when applied to Dutch radiology reports in orthopaedic trauma.
Affiliation(s)
- A W Olthof
- Department of Radiology, Treant Health Care Group, Dr. G.H. Amshoffweg 1, Hoogeveen, the Netherlands; Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, Groningen, the Netherlands
- P Shouche
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, Groningen, the Netherlands
- E M Fennema
- Department of Trauma Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, Groningen, the Netherlands
- F F A IJpma
- Department of Trauma Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, Groningen, the Netherlands
- R H C Koolstra
- Department of Radiology, Treant Health Care Group, Dr. G.H. Amshoffweg 1, Hoogeveen, the Netherlands
- V M A Stirler
- Department of Trauma Surgery, University Medical Center Groningen, University of Groningen, Hanzeplein 1, Groningen, the Netherlands
- P M A van Ooijen
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, Groningen, the Netherlands; Machine Learning Lab, Data Science Center in Health (DASH), University Medical Center Groningen, University of Groningen, L.J. Zielstraweg 2, Groningen, the Netherlands
- L J Cornelissen
- Department of Radiation Oncology, University Medical Center Groningen, University of Groningen, Hanzeplein 1, Groningen, the Netherlands; COSMONiO Imaging BV, L.J. Zielstraweg 2, Groningen, the Netherlands

29
Kadom N, Fredericks N, Moore CL, Seidenwurm D, Shugarman S, Venkatesh A. Closing the Compliance Loop on Follow-Up Imaging Recommendations: Comparing Radiologists' and Administrators' Attitudes. Curr Probl Diagn Radiol 2021;51:486-490. PMID: 34565635. DOI: 10.1067/j.cpradiol.2021.08.003.
Abstract
PURPOSE To compare non-physician healthcare professionals' and radiologists' survey responses regarding attitudes and current practices, policies, and procedures related to the follow-up of nonemergent actionable incidental findings (AIF). MATERIALS AND METHODS The American College of Radiology (ACR) developed a survey with input from a technical expert panel (TEP). Survey items were developed by TEP members, refined by an ACR market research expert, and examined for face and construct validity. The survey was distributed among ACR membership and various medical professional organizations. Responses from non-physician responders and radiologists were analyzed and compared using descriptive statistics. RESULTS The analysis included 375 responses, 247 from radiologists and 128 from non-physicians. All respondent groups stated that radiology follow-up recommendations are evidence-based. Both respondent groups indicated that there is up to moderate risk associated with AIF follow-up. Both respondent groups similarly favored that the accountability for communicating AIF lies first with the ordering provider, followed by primary care providers, then the patient, and lastly an automated process that is managed by a staff member and/or the radiologist. All respondent groups indicated that tracking processes were more commonly funded by the healthcare system than through the radiology budget. CONCLUSION There is alignment between non-physicians and radiologists regarding the implementation of tracking systems that assure completion of radiology follow-up recommendations. Building tracking systems represents an opportunity for multi-disciplinary collaboration to address care transition communication and process gaps.
Affiliation(s)
- Nadja Kadom
- Department of Radiology, Children's Healthcare of Atlanta, Atlanta, GA; Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA
- Christopher L Moore
- Section of Emergency Ultrasound, Emergency Ultrasound Fellowship, Department of Emergency Medicine, Yale School of Medicine, New Haven, CT
- Arjun Venkatesh
- Department of Emergency Medicine, Yale School of Medicine, New Haven, CT

30
Kapoor N, Lacson R, Eskian M, Cochon L, Glazer D, Ip I, Khorasani R. Variation in Radiologists' Follow-Up Imaging Recommendations for Small Cystic Pancreatic Lesions. J Am Coll Radiol 2021;18:1405-1414. PMID: 34174205. DOI: 10.1016/j.jacr.2021.06.007.
Abstract
OBJECTIVE This study aimed to determine the incidence, identify imaging and patient factors, and measure individual radiologist variation associated with follow-up recommendations for small focal cystic pancreatic lesions (FCPLs), a common incidental imaging finding. METHODS This institutional review board-approved retrospective study analyzed 146,709 reports from abdominal CTs and MRIs performed in a large academic hospital from July 1, 2016, to June 30, 2018. A trained natural language processing tool identified 4,345 reports with FCPLs, which were manually reviewed to identify those containing one or more <1.5-cm pancreatic cysts. For these patients, patient, lesion, and radiologist features and follow-up recommendations for FCPL were extracted. A nonlinear random-effects model estimated degree of variation in follow-up recommendations across radiologists at department and division levels. RESULTS Of 2,872 reports with FCPLs < 1.5 cm, 708 (24.7%) had FCPL-related follow-up recommendations. Average patient age was 67 years (SD ± 11). In all, 1,721 (60.0%) reports were for female patients; 59.3% of patients had only one cyst. In multivariable analysis, older patients had slightly lower follow-up recommendation rates (odds ratio [OR]: 0.98 [0.98-1.00] per additional year), and lesions associated with main duct dilatation and septation were more likely to have a follow-up recommendation (ORs: 1.93 [1.11-3.36] and 2.88 [1.89-4.38], respectively). Radiologist years in practice (P = .51), trainee presence (P = .21), and radiologist gender (P = .52) were not associated with increased follow-up recommendations. There was significant interradiologist variation in the Abdominal Imaging Division (P = .04), but not in Emergency Radiology (P = .31) or Cancer Imaging Divisions (P = .29). DISCUSSION Interradiologist variation significantly contributes to variability in follow-up imaging recommendations for FCPLs.
Affiliation(s)
- Neena Kapoor
- Director of Diversity, Equity, and Inclusion, and Quality and Patient Safety Officer, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Ronilda Lacson
- Director of Education, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts; Director of Clinical Informatics, Harvard Medical School Library of Evidence, Boston, Massachusetts
- Mahsa Eskian
- Research Fellow, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Laila Cochon
- Research Fellow, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Daniel Glazer
- Medical Director of CT, and Director, Cross-Sectional Interventional Radiology (CSIR), Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Ivan Ip
- Faculty, Center for Evidence-Based Imaging, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Ramin Khorasani
- Director, Center for Evidence-Based Imaging, and Vice Chair of Quality/Safety, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts

31
Kapoor N, Lacson R, Khorasani R. Workflow Applications of Artificial Intelligence in Radiology and an Overview of Available Tools. J Am Coll Radiol 2021;17:1363-1370. PMID: 33153540. DOI: 10.1016/j.jacr.2020.08.016.
Abstract
In the past decade, there has been tremendous interest in applying artificial intelligence (AI) to improve the field of radiology. Currently, numerous AI applications are in development, with potential benefits spanning all steps of the imaging chain from test ordering to report communication. AI has been proposed as a means to optimize patient scheduling, improve worklist management, enhance image acquisition, and help radiologists interpret diagnostic studies. Although the potential for AI in radiology appears almost endless, the field is still in the early stages, with many uses still theoretical, in development, or limited to single institutions. Moreover, although the current use of AI in radiology has emphasized its clinical applications, some of which are in the distant future, it is increasingly clear that AI algorithms could also be used in the more immediate future for a variety of noninterpretive and quality improvement uses. Such uses include the integration of AI into electronic health record systems to reduce unwarranted variation in radiologists' follow-up recommendations and to improve other dimensions of radiology report quality. In the end, the potential of AI in radiology must be balanced with acknowledgment of its current limitations regarding generalizability and data privacy.
Affiliation(s)
- Neena Kapoor
- Director of Diversity, Inclusion, and Equity, and Quality and Patient Safety Officer, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Ronilda Lacson
- Director of Education, Center for Evidence-Based Imaging, Brigham and Women's Hospital; Director of Clinical Informatics, Harvard Medical School Library of Evidence, Boston, Massachusetts
- Ramin Khorasani
- Director, Center for Evidence-Based Imaging, and Vice Chair of Quality/Safety, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts

32
Dyer DS, Zelarney PT, Carr LL, Kern EO. Improvement in Follow-up Imaging With a Patient Tracking System and Computerized Registry for Lung Nodule Management. J Am Coll Radiol 2021;18:937-946. PMID: 33607066. DOI: 10.1016/j.jacr.2021.01.018.
Abstract
PURPOSE Despite established guidelines, radiologists' recommendations and timely follow-up of incidental lung nodules remain variable. To improve follow-up of nodules, a system using standardized language (tracker phrases) recommending time-based follow-up in chest CT reports, coupled with a computerized registry, was created. MATERIALS AND METHODS Data were obtained from the electronic health record and a facility-built electronic lung nodule registry. We evaluated two randomly selected patient cohorts with incidental nodules on chest CT reports: before intervention (September 2008 to March 2011) and after intervention (August 2011 to December 2016). Multivariable logistic regression was used to compare the cohorts for the main outcome of timely follow-up, defined as a subsequent report within 13 months of the initial report. RESULTS In all, 410 patients were included in the pretracker cohort versus 626 in the tracker cohort. Before system inception, 30% of CT reports lacked an explicit time-based recommendation for nodule follow-up. The proportion of patients with timely follow-up increased from 46% to 55%, and the proportion of those with no documented follow-up or follow-up beyond 24 months decreased from 48% to 31%. The likelihood of timely follow-up increased 41%, adjusted for high risk for lung cancer and age 65 years or older. After system inception, reports missing a tracker phrase for nodule recommendation averaged 6%, without significant interyear variation. CONCLUSIONS Standardized language added to CT reports combined with a computerized registry designed to identify and track patients with incidental lung nodules was associated with improved likelihood of follow-up imaging.
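The tracker-phrase and registry logic this abstract describes might be sketched as follows. The tracker phrase wording and the example dates are invented for illustration; the 13-month timeliness window comes from the abstract's definition of timely follow-up.

```python
from datetime import date
from typing import Optional

# Assumed standardized phrase (the actual wording is not given in the abstract).
TRACKER_PHRASE = "LUNG NODULE TRACKER:"

def months_between(start: date, end: date) -> int:
    """Whole-month difference, ignoring day-of-month (a simplification)."""
    return (end.year - start.year) * 12 + (end.month - start.month)

def needs_tracking(report_text: str) -> bool:
    """A report enters the registry when it carries the tracker phrase."""
    return TRACKER_PHRASE in report_text

def timely_followup(initial: date, followup: Optional[date]) -> bool:
    """Timely = a subsequent report within 13 months of the initial report."""
    return followup is not None and 0 <= months_between(initial, followup) <= 13

report = "6 mm nodule. LUNG NODULE TRACKER: recommend chest CT in 12 months."
print(needs_tracking(report))                                 # True
print(timely_followup(date(2020, 1, 15), date(2021, 1, 10)))  # True
print(timely_followup(date(2020, 1, 15), None))               # False
```

A production registry would of course persist these records and reconcile them against the electronic health record, but the core timeliness check reduces to a date comparison like this one.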
Affiliation(s)
- Debra S Dyer
- Chair, Department of Radiology, National Jewish Health, Denver, Colorado
- Laurie L Carr
- Past President, Medical Executive Committee; Division of Oncology, Department of Medicine, National Jewish Health, Denver, Colorado
- Elizabeth O Kern
- Chief, Division of Medical, Behavioral and Community Health, Department of Medicine; Past Chair, Institutional Review Board; Chair, Ethics Resource Committee, National Jewish Health, Denver, Colorado

33
Kaur B, Goyal B, Daniel E. A survey on Machine learning based Medical Assistive systems in Current Oncological Sciences. Curr Med Imaging 2021;18:445-459. PMID: 33596810. DOI: 10.2174/1573405617666210217154446.
Abstract
BACKGROUND Cancer is a life-threatening disease affecting a large population worldwide. Cancer cells multiply inside the body without producing obvious surface symptoms, making the disease difficult to predict and detect at onset. Many organizations are working toward automating cancer detection with minimal false detection rates. INTRODUCTION Machine learning algorithms are a promising means of supporting health care practitioners in ruling out disease and predicting tumor growth with various imaging and statistical analysis tools. Medical practitioners use the output of these algorithms to diagnose and design the course of treatment. These algorithms can estimate a patient's risk level and help reduce cancer-related mortality. METHOD This article presents the existing state-of-the-art machine learning techniques for identifying cancer in human organs, and describes the supported imaging operations for each cancer type. CONCLUSION CAD tools aid diagnostic radiologists in preliminary investigation and in determining the nature of tumor cells.
Affiliation(s)
- Ebenezer Daniel
- City of Hope National Medical Center, California, United States

34
Kapoor N, Lacson R, Cochon L, Hammer M, Ip I, Boland G, Khorasani R. Radiologist Variation in the Rates of Follow-up Imaging Recommendations Made for Pulmonary Nodules. J Am Coll Radiol 2021;18:896-905. PMID: 33567312. DOI: 10.1016/j.jacr.2020.12.031.
Abstract
OBJECTIVE Determine whether differences exist in rates of follow-up recommendations made for pulmonary nodules after accounting for multiple patient and radiologist factors. METHODS This Institutional Review Board-approved, retrospective study was performed at an urban academic quaternary care hospital. We analyzed 142,001 chest and abdominal CT reports from January 1, 2016, to December 31, 2018, from abdominal, thoracic, and emergency radiology subspecialty divisions. A previously validated natural language processing (NLP) tool identified 24,512 reports documenting pulmonary nodule(s), excluding reports NLP-positive for lung cancer. A second validated NLP tool identified reports with follow-up recommendations specifically for pulmonary nodules. Multivariable logistic regression was used to determine the likelihood of pulmonary nodule follow-up recommendation. Interradiologist variability was quantified within subspecialty divisions. RESULTS NLP classified 4,939 of 24,512 (20.1%) reports as having a follow-up recommendation for pulmonary nodule. Male patients comprised 45.3% (11,097) of the patient cohort; average patient age was 61.4 years (±14.1 years). The majority of reports were from outpatient studies (62.7%, 15,376 of 24,512), were chest CTs (75.9%, 18,615 of 24,512), and were interpreted by thoracic radiologists (63.7%, 15,614 of 24,512). In multivariable analysis, studies for male patients (odds ratio [OR]: 0.9 [0.8-0.9]) and abdominal CTs (OR: 0.6 [0.6-0.7] compared with chest CT) were less likely to have a pulmonary nodule follow-up recommendation. Older patients had higher rates of follow-up recommendation (OR: 1.01 for each additional year). Division-level analysis showed up to 4.3-fold difference between radiologists in the probability of making a follow-up recommendation for a pulmonary nodule. 
DISCUSSION Significant differences exist in the probability of making a follow-up recommendation for pulmonary nodules among radiologists within the same subspecialty division.
Affiliation(s)
- Neena Kapoor
- Director of Diversity, Inclusion, and Equity, and Quality and Patient Safety Officer, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Ronilda Lacson
- Director of Education, Center for Evidence-Based Imaging, Brigham and Women's Hospital; Director of Clinical Informatics, Harvard Medical School Library of Evidence, Boston, Massachusetts
- Laila Cochon
- Research Fellow, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Mark Hammer
- Cardiothoracic Fellowship Program Director, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Ivan Ip
- Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Giles Boland
- President of the Brigham and Women's Physicians Organization, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Ramin Khorasani
- Director of the Center for Evidence-Based Imaging and Vice Chair of Quality/Safety, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts

35
Goldberg-Stein S, Chernyak V. Adding Value in Radiology Reporting. J Am Coll Radiol 2020;16:1292-1298. PMID: 31492407. DOI: 10.1016/j.jacr.2019.05.042.
Abstract
The major goal of the radiology report is to deliver timely, accurate, and actionable information to the patient care team and the patient. Structured reporting offers multiple advantages over traditional free-text reporting, including reduction in diagnostic error, comprehensiveness, adherence to national consensus guidelines, revenue capture, data collection, and research. Various technological innovations enhance integration of structured reporting into everyday clinical practice. This review discusses the benefits of innovations in radiology reporting to the clinical decision process, the patient experience, the cost of imaging, and the overall contributions to the health of the population. Future directions, including the use of artificial intelligence, are reviewed.
Affiliation(s)
- Victoria Chernyak
- Department of Radiology, Montefiore Medical Center, Bronx, New York

36
Gorelik N, Gyftopoulos S. Applications of Artificial Intelligence in Musculoskeletal Imaging: From the Request to the Report. Can Assoc Radiol J 2020;72:45-59. PMID: 32809857. DOI: 10.1177/0846537120947148.
Abstract
Artificial intelligence (AI) will transform every step in the imaging value chain, including interpretive and noninterpretive components. Radiologists should familiarize themselves with AI developments to become leaders in their clinical implementation. This article explores the impact of AI through the entire imaging cycle of musculoskeletal radiology, from the placement of the requisition to the generation of the report, with an added Canadian perspective. Noninterpretive tasks which may be assisted by AI include the ordering of appropriate imaging tests, automatic exam protocoling, optimized scheduling, shorter magnetic resonance imaging acquisition time, computed tomography imaging with reduced artifact and radiation dose, and new methods of generation and utilization of radiology reports. Applications of AI for image interpretation consist of the determination of bone age, body composition measurements, screening for osteoporosis, identification of fractures, evaluation of segmental spine pathology, detection and temporal monitoring of osseous metastases, diagnosis of primary bone and soft tissue tumors, and grading of osteoarthritis.
Affiliation(s)
- Natalia Gorelik
- Department of Diagnostic Radiology, McGill University Health Center, Montreal, Quebec, Canada
- Soterios Gyftopoulos
- Department of Radiology, NYU Langone Medical Center/NYU Langone Orthopedic Center, New York, NY, USA; Department of Orthopedic Surgery, NYU Langone Medical Center/NYU Langone Orthopedic Center, New York, NY, USA

37
Qin L, Xu X, Ding L, Li Z, Li J. Identifying diagnosis evidence of cardiogenic stroke from Chinese echocardiograph reports. BMC Med Inform Decis Mak 2020;20:126. PMID: 32646410. PMCID: PMC7346320. DOI: 10.1186/s12911-020-1106-3.
Abstract
BACKGROUND Cardiogenic stroke has increasing morbidity in China and brings an economic burden to patients' families. In cardiogenic stroke diagnosis, echocardiography is one of the most important examinations: sonographers investigate patients' hearts via echocardiography and describe the findings in echocardiograph reports. In this study, we developed a machine learning model to automatically identify diagnosis evidence of cardiogenic stroke, providing it to neurologists for clinical decision making. METHODS We collected 4188 Chinese echocardiograph reports of 4018 patients, with an average length of 177 Chinese characters in free-text style. Collaborating with neurologists and sonographers, we summarized 149 phrases describing diagnosis evidence of cardiogenic stroke, such as "severe mitral stenosis" and "aortic valve degeneration". Furthermore, we developed an annotated corpus by mapping the 149 phrases to the 4188 reports. We selected the 11 most frequent diagnosis evidence types, such as "mitral stenosis", for further identification. The generated corpus was divided into training and testing sets in a ratio of 8:2, which were used to train and validate a machine learning model that identifies evidence of cardiogenic stroke using the BiLSTM-CRF algorithm. RESULTS Our method achieved average performance of 98.03%, 90.17%, and 93.94% on diagnosis evidence identification. In addition, our method is capable of identifying novel descriptions of diagnosis evidence of cardiogenic stroke, such as "mitral stenosis" and "aortic valve calcification". CONCLUSIONS In this study, we analyzed the structure of echocardiograph reports and summarized 149 phrases describing diagnosis evidence of cardiogenic stroke. We used the phrases to generate an annotated corpus automatically, which greatly reduces the cost of manual annotation. The model trained on this corpus also performs well on the testing set. The method proposed here for automatically identifying diagnosis evidence of cardiogenic stroke will be further refined in practice.
Affiliation(s)
- Lu Qin
- Institute of Medical Information, Chinese Academy of Medical Sciences/Peking Union Medical College, Beijing, China
- Xiaowei Xu
- Institute of Medical Information, Chinese Academy of Medical Sciences/Peking Union Medical College, Beijing, China
- Lingling Ding
- Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Zixiao Li
- Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Jiao Li
- Institute of Medical Information, Chinese Academy of Medical Sciences/Peking Union Medical College, Beijing, China

38
Lau W, Payne TH, Uzuner O, Yetisgen M. Extraction and Analysis of Clinically Important Follow-up Recommendations in a Large Radiology Dataset. AMIA Jt Summits Transl Sci Proc 2020;2020:335-344. PMID: 32477653. PMCID: PMC7233090.
Abstract
Communication of follow-up recommendations when abnormalities are identified on imaging studies is prone to error. In this paper, we present a natural language processing approach based on deep learning to automatically identify clinically important recommendations in radiology reports. Our approach first identifies the recommendation sentences and then extracts reason, test, and time frame of the identified recommendations. To train our extraction models, we created a corpus of 1367 radiology reports annotated for recommendation information. Our extraction models achieved 0.93 f-score for recommendation sentence, 0.65 f-score for reason, 0.73 f-score for test, and 0.84 f-score for time frame. We applied the extraction models to a set of over 3.3 million radiology reports and analyzed the adherence of follow-up recommendations.
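For the time-frame slot of the extraction task described above, a single regular expression can stand in for the learned extractor and illustrate what is being pulled out of each recommendation sentence. The pattern and example sentences are assumptions for demonstration, not the authors' model.

```python
import re

# Illustrative time-frame extraction: capture a number (or range) plus a
# time unit following "in", "within", or "after". Pattern is an assumption.
TIMEFRAME = re.compile(
    r"\b(?:in|within|after)\s+(\d+(?:-\d+)?)\s*(day|week|month|year)s?\b",
    re.IGNORECASE,
)

def extract_timeframe(sentence: str):
    """Return (quantity, unit) for the first time frame found, else None."""
    m = TIMEFRAME.search(sentence)
    return (m.group(1), m.group(2).lower()) if m else None

print(extract_timeframe("Recommend follow-up chest CT in 3-6 months."))  # ('3-6', 'month')
print(extract_timeframe("No follow-up needed."))                         # None
```

The learned model in the paper additionally extracts the reason and recommended test, which a regex alone would struggle with; this sketch covers only the most pattern-like of the three slots.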
Affiliation(s)
- Wilson Lau
- Department of Biomedical and Health Informatics, University of Washington, Seattle, WA
- Thomas H Payne
- School of Medicine, University of Washington, Seattle, WA
- Information Technology Services, University of Washington, Seattle, WA
- Ozlem Uzuner
- Department of Information Sciences and Technology, George Mason University, Fairfax, VA
- Meliha Yetisgen
- Department of Biomedical and Health Informatics, University of Washington, Seattle, WA
39
Mortani Barbosa EJ, Kelly K. Statistical modeling can determine what factors are predictive of appropriate follow-up in patients presenting with incidental pulmonary nodules on CT. Eur J Radiol 2020; 128:109062. [PMID: 32422551 DOI: 10.1016/j.ejrad.2020.109062] [Received: 02/25/2020] [Revised: 05/04/2020] [Accepted: 05/05/2020] [Indexed: 12/13/2022]
Abstract
PURPOSE To assess the performance of statistical modeling in predicting follow-up adherence for incidentally detected pulmonary nodules (IPN) on CT, based on patient variables (PV), radiology report related variables (RRRV), and physician-patient communication variables (PPCV). METHODS 200 patients with IPN on CT were retrospectively identified and randomly selected. PV (age, gender, smoking status, ethnicity), RRRV (nodule size, patient context, whether follow-up recommendations were provided), and PPCV (whether the referring physician documented the IPN and ordered follow-up in the electronic medical record) were recorded. The primary outcome was whether patients received appropriate follow-up within +/- 1 month of the recommended time frame. Statistical methods included logistic regression and machine learning (K-nearest neighbors and support vector machine). RESULTS Adherence was low, with or without recommendations provided in the radiology report (23.4%-27.4%). Whether the referring physician ordered follow-up was the dominant predictor of adherence in all models. The following variables were statistically significant predictors of whether the referring physician ordered follow-up: recommendations provided in the radiology report, smoking status, patient context, and nodule size (FDR logworth 21.18, 11.66, 2.35, and 1.63, respectively; p < 0.05). Prediction accuracy varied from 72% (PV) to 93% (PPCV, all variables). CONCLUSION PPCV are the most important predictors of adherence. Amongst all variables, patient context, smoking status, nodule size, and whether the radiologist provided follow-up recommendations in the report were all statistically significant predictors of patient follow-up adherence, supporting the utility of statistical modeling for analytics, quality assurance, and optimization of outcomes related to IPN.
Affiliation(s)
- Kate Kelly
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
40
Barash Y, Guralnik G, Tau N, Soffer S, Levy T, Shimon O, Zimlichman E, Konen E, Klang E. Comparison of deep learning models for natural language processing-based classification of non-English head CT reports. Neuroradiology 2020; 62:1247-1256. [PMID: 32335686 DOI: 10.1007/s00234-020-02420-0] [Received: 01/27/2020] [Accepted: 03/26/2020] [Indexed: 02/07/2023]
Abstract
PURPOSE Natural language processing (NLP) can be used for automatic flagging of radiology reports. We assessed deep learning models for classifying non-English head CT reports. METHODS We retrospectively collected head CT reports (2011-2018). Reports were signed in Hebrew. Emergency department (ED) reports of adult patients from January to February for each year (2013-2018) were manually labeled. All other reports were used to pre-train an embedding layer. We explored two use cases: (1) general labeling use case, in which reports were labeled as normal vs. pathological; (2) specific labeling use case, in which reports were labeled as with and without intra-cranial hemorrhage. We tested long short-term memory (LSTM) and LSTM-attention (LSTM-ATN) networks for classifying reports. We also evaluated the improvement of adding Word2Vec word embedding. Deep learning models were compared with a bag-of-words (BOW) model. RESULTS We retrieved 176,988 head CT reports for pre-training. We manually labeled 7784 reports as normal (46.3%) or pathological (53.7%), and 7.1% with intra-cranial hemorrhage. For the general labeling, LSTM-ATN-Word2Vec showed the best results (AUC = 0.967 ± 0.006, accuracy 90.8% ± 0.01). For the specific labeling, all methods showed similar accuracies between 95.0 and 95.9%. Both LSTM-ATN-Word2Vec and BOW had the highest AUC (0.970). CONCLUSION For a general use case, word embedding using a large cohort of non-English head CT reports and ATN improves NLP performance. For a more specific task, BOW and deep learning showed similar results. Models should be explored and tailored to the NLP task.
Affiliation(s)
- Yiftach Barash
- Division of Diagnostic Imaging, Sheba Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Derech Sheba St 2, Ramat Gan, Israel
- DeepVision Lab, Sheba Medical Center, Ramat Gan, Israel
- Noam Tau
- Division of Diagnostic Imaging, Sheba Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Derech Sheba St 2, Ramat Gan, Israel
- Shelly Soffer
- Division of Diagnostic Imaging, Sheba Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Derech Sheba St 2, Ramat Gan, Israel
- DeepVision Lab, Sheba Medical Center, Ramat Gan, Israel
- Management, Sheba Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Ramat Gan, Israel
- Tal Levy
- DeepVision Lab, Sheba Medical Center, Ramat Gan, Israel
- Tel Aviv University, Tel Aviv, Israel
- Eyal Zimlichman
- Management, Sheba Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Ramat Gan, Israel
- Eli Konen
- Division of Diagnostic Imaging, Sheba Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Derech Sheba St 2, Ramat Gan, Israel
- Eyal Klang
- Division of Diagnostic Imaging, Sheba Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Derech Sheba St 2, Ramat Gan, Israel
- DeepVision Lab, Sheba Medical Center, Ramat Gan, Israel
41
Artificial Intelligence Pertaining to Cardiothoracic Imaging and Patient Care: Beyond Image Interpretation. J Thorac Imaging 2020; 35:137-142. [PMID: 32141963 DOI: 10.1097/rti.0000000000000486] [Indexed: 10/24/2022]
Abstract
Artificial intelligence (AI) is a broad field of computational science that includes many subsets. Today the most widely used subset in medical imaging is machine learning (ML). Many articles have focused on the use of ML for pattern recognition to detect and potentially diagnose various pathologies. However, AI algorithm development is now directed toward workflow management. AI can impact patient care at multiple stages of their imaging experience and assist in efficient and effective scheduling, imaging performance, worklist prioritization, image interpretation, and quality assurance. The purpose of this manuscript was to review the potential AI applications in radiology focusing on workflow management and discuss how ML will affect cardiothoracic imaging.
42
43
Deep Learning for Natural Language Processing in Radiology-Fundamentals and a Systematic Review. J Am Coll Radiol 2020; 17:639-648. [PMID: 32004480 DOI: 10.1016/j.jacr.2019.12.026] [Received: 09/23/2019] [Revised: 12/23/2019] [Accepted: 12/30/2019] [Indexed: 12/22/2022]
Abstract
PURPOSE Natural language processing (NLP) enables conversion of free text into structured data. Recent innovations in deep learning technology provide improved NLP performance. We aimed to survey deep learning NLP fundamentals and review radiology-related research. METHODS This systematic review was reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. We searched for deep learning NLP radiology studies published up to September 2019. MEDLINE, Scopus, and Google Scholar were used as search databases. RESULTS Ten relevant studies published between 2018 and 2019 were identified. Deep learning models applied for NLP in radiology are convolutional neural networks, recurrent neural networks, long short-term memory networks, and attention networks. Deep learning NLP applications in radiology include flagging of diagnoses such as pulmonary embolisms and fractures, labeling follow-up recommendations, and automatic selection of imaging protocols. Deep learning NLP models perform as well as or better than traditional NLP models. CONCLUSION Research and use of deep learning NLP in radiology is increasing. Acquaintance with this technology can help prepare radiologists for the coming changes in their field.
44
Cochon LR, Kapoor N, Carrodeguas E, Ip IK, Lacson R, Boland G, Khorasani R. Variation in Follow-up Imaging Recommendations in Radiology Reports: Patient, Modality, and Radiologist Predictors. Radiology 2019; 291:700-707. [PMID: 31063082 PMCID: PMC7526331 DOI: 10.1148/radiol.2019182826] [Indexed: 01/16/2023]
Abstract
Background Variation between radiologists when making recommendations for additional imaging and associated factors are, to the knowledge of the authors, unknown. Clear identification of factors that account for variation in follow-up recommendations might prevent unnecessary tests for incidental or ambiguous image findings. Purpose To determine incidence and identify factors associated with follow-up recommendations in radiology reports from multiple modalities, patient care settings, and imaging divisions. Materials and Methods This retrospective study analyzed 318 366 reports obtained from diagnostic imaging examinations performed at a large urban quaternary care hospital from January 1 to December 31, 2016, excluding breast and US reports. A subset of 1000 reports was randomly selected and manually annotated to train and validate a machine learning algorithm to predict whether a report included a follow-up imaging recommendation (the training-and-validation set consisted of 850 reports and the test set of 150 reports). The trained algorithm was used to classify 318 366 reports. Multivariable logistic regression was used to determine the likelihood of follow-up recommendation. Additional analysis by imaging subspecialty division was performed, and intradivision and interradiologist variability was quantified. Results The machine learning algorithm classified 38 745 of 318 366 (12.2%) reports as containing follow-up recommendations. Average patient age was 59 years ± 17 (standard deviation); 45.2% (143 767 of 318 366) of reports were from male patients. Among 65 radiologists, 57% (37 of 65) were men. At multivariable analysis, older patients had higher rates of follow-up recommendations (odds ratio [OR], 1.01 [95% confidence interval {CI}: 1.01, 1.01] for each additional year), male patients had lower rates of follow-up recommendations (OR, 0.9; 95% CI: 0.9, 1.0), and follow-up recommendations were most common among CT studies (OR, 4.2 [95% CI: 4.0, 4.4] compared with radiography). Radiologist sex (P = .54), presence of a trainee (P = .45), and years in practice (P = .49) were not significant predictors overall. A division-level analysis showed 2.8-fold to 6.7-fold interradiologist variation. Conclusion Substantial interradiologist variation exists in the probability of recommending a follow-up examination in a radiology report, after adjusting for patient, examination, and radiologist factors. © RSNA, 2019 See also the editorial by Russell in this issue.
Affiliation(s)
- Laila R Cochon
- From the Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
- Neena Kapoor
- From the Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
- Emmanuel Carrodeguas
- From the Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
- Ivan K Ip
- From the Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
- Ronilda Lacson
- From the Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
- Giles Boland
- From the Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
- Ramin Khorasani
- From the Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115