1. López-Úbeda P, Martín-Noguerol T, Luna A. Radiology, explicability and AI: closing the gap. Eur Radiol 2023;33:9466-9468. [PMID: 37410108] [DOI: 10.1007/s00330-023-09902-8]

2. Liu J, Pasumarthi S, Duffy B, Gong E, Datta K, Zaharchuk G. One Model to Synthesize Them All: Multi-Contrast Multi-Scale Transformer for Missing Data Imputation. IEEE Trans Med Imaging 2023;42:2577-2591. [PMID: 37030684] [PMCID: PMC10543020] [DOI: 10.1109/TMI.2023.3261707]
Abstract
Multi-contrast magnetic resonance imaging (MRI) is widely used in clinical practice, as each contrast provides complementary information. However, the availability of each imaging contrast may vary amongst patients, which poses challenges to radiologists and automated image analysis algorithms. A general approach for tackling this problem is missing data imputation, which aims to synthesize the missing contrasts from existing ones. While several convolutional neural network (CNN)-based algorithms have been proposed, they suffer from the fundamental limitations of CNN models, such as the requirement for fixed numbers of input and output channels, the inability to capture long-range dependencies, and the lack of interpretability. In this work, we formulate missing data imputation as a sequence-to-sequence learning problem and propose a multi-contrast multi-scale Transformer (MMT), which can take any subset of input contrasts and synthesize those that are missing. MMT consists of a multi-scale Transformer encoder that builds hierarchical representations of inputs, combined with a multi-scale Transformer decoder that generates the outputs in a coarse-to-fine fashion. The proposed multi-contrast Swin Transformer blocks can efficiently capture intra- and inter-contrast dependencies for accurate image synthesis. Moreover, MMT is inherently interpretable, as it allows us to understand the importance of each input contrast in different regions by analyzing the built-in attention maps of the Transformer blocks in the decoder. Extensive experiments on two large-scale multi-contrast MRI datasets demonstrate that MMT outperforms state-of-the-art methods both quantitatively and qualitatively.
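
The core mechanism is easy to sketch: treat each available contrast as a set of patch tokens, let learned queries cross-attend over the pooled tokens to synthesize the missing contrast, and read the attention weights back as a per-contrast importance score. The PyTorch toy below is a minimal illustration of that idea, not the authors' MMT implementation; the `ToyContrastImputer` class, patch size, embedding width, and contrast indices are all illustrative assumptions.

```python
import torch
import torch.nn as nn

PATCH, DIM = 8, 64  # illustrative patch size and embedding width

class ToyContrastImputer(nn.Module):
    def __init__(self, img=64):
        super().__init__()
        self.img = img
        self.n_patches = (img // PATCH) ** 2
        self.embed = nn.Conv2d(1, DIM, kernel_size=PATCH, stride=PATCH)
        self.contrast_emb = nn.Embedding(4, DIM)      # supports up to 4 contrast types
        self.query = nn.Parameter(torch.randn(self.n_patches, DIM))
        self.attn = nn.MultiheadAttention(DIM, num_heads=4, batch_first=True)
        self.to_pixels = nn.Linear(DIM, PATCH * PATCH)

    def forward(self, contrasts, ids):
        # contrasts: any subset of (B,1,H,W) images; ids: their contrast indices
        tokens = []
        for x, i in zip(contrasts, ids):
            t = self.embed(x).flatten(2).transpose(1, 2)   # (B, n_patches, DIM)
            tokens.append(t + self.contrast_emb(torch.tensor(i)))
        kv = torch.cat(tokens, dim=1)                      # variable-length token set
        q = self.query.expand(kv.size(0), -1, -1)
        out, attn_w = self.attn(q, kv, kv)                 # attn_w: (B, n_q, n_kv)
        # Attention mass per input contrast = a crude regional importance signal
        importance = attn_w.view(kv.size(0), self.n_patches, len(contrasts),
                                 self.n_patches).sum(-1).mean(1)
        patches = self.to_pixels(out)
        side = self.img // PATCH
        img = patches.view(-1, side, side, PATCH, PATCH).permute(0, 1, 3, 2, 4)
        return img.reshape(-1, 1, self.img, self.img), importance

model = ToyContrastImputer()
t1, flair = torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)
synth, importance = model([t1, flair], ids=[0, 2])         # impute a missing contrast
print(synth.shape, importance)  # (2, 1, 64, 64), per-contrast attention share
```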

3. Alberts IL, Mercolli L, Pyka T, Prenosil G, Shi K, Rominger A, Afshar-Oromieh A. Large language models (LLM) and ChatGPT: what will the impact on nuclear medicine be? Eur J Nucl Med Mol Imaging 2023;50:1549-1552. [PMID: 36892666] [PMCID: PMC9995718] [DOI: 10.1007/s00259-023-06172-w]

4. Machine Learning Models to Forecast Outcomes of Pituitary Surgery: A Systematic Review in Quality of Reporting and Current Evidence. Brain Sci 2023;13:495. [PMID: 36979305] [PMCID: PMC10046799] [DOI: 10.3390/brainsci13030495]
Abstract
Background: The complexity and heterogeneity of pituitary surgery outcomes have increased interest in machine learning (ML) applications for outcome prediction over the last decade. This study aims to systematically review the characteristics of ML models for predicting pituitary surgery outcomes and to assess their reporting quality.
Methods: We searched the PubMed, Scopus, and Web of Knowledge databases for publications on the use of ML to predict pituitary surgery outcomes. Our search strategy was based on the terms “artificial intelligence”, “machine learning”, and “pituitary”. We used the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) checklist to assess reporting quality.
Results: Twenty studies were included in this review. The principal outcomes predicted by the reported models were post-surgical endocrine outcomes (n = 10), tumor management (n = 3), and intra- and postoperative complications (n = 7). Overall, the included studies adhered to a median of 65% (IQR = 60–72%) of TRIPOD criteria, ranging from 43% to 83%. The median reported AUC was 0.84 (IQR = 0.80–0.91). The most popular algorithms were support vector machines (n = 5) and random forests (n = 5). Only two studies reported external validation and adherence to any reporting guideline; calibration methods were not reported in 15 studies. No model reached the stage of actual clinical applicability.
Conclusion: Applications of ML to the prediction of pituitary surgery outcomes are still nascent, as evidenced by the lack of any model validated for clinical practice. Although studies have demonstrated promising results, greater transparency in model development and reporting is needed to enable clinical use. Further adherence to reporting guidelines can help increase AI’s real-world utility and improve clinical practice.

5. Daye D, Wiggins WF, Lungren MP, Alkasab T, Kottler N, Allen B, Roth CJ, Bizzo BC, Durniak K, Brink JA, Larson DB, Dreyer KJ, Langlotz CP. Implementation of Clinical Artificial Intelligence in Radiology: Who Decides and How? Radiology 2022;305:555-563. [PMID: 35916673] [PMCID: PMC9713445] [DOI: 10.1148/radiol.212151]
Abstract
As the role of artificial intelligence (AI) in clinical practice evolves, governance structures oversee the implementation, maintenance, and monitoring of clinical AI algorithms to enhance quality, manage resources, and ensure patient safety. This article establishes a framework for the infrastructure required for clinical AI implementation and presents a road map for governance. The road map answers four key questions: Who decides which tools to implement? What factors should be considered when assessing an application for implementation? How should applications be implemented in clinical practice? And how should tools be monitored and maintained after clinical implementation? Among the many challenges to implementing AI in clinical practice, devising flexible governance structures that can quickly adapt to a changing environment will be essential to ensuring high-quality patient care and meeting practice improvement objectives.

6. You S, Reyes M. Influence of contrast and texture based image modifications on the performance and attention shift of U-Net models for brain tissue segmentation. Front Neuroimaging 2022;1:1012639. [PMID: 37555149] [PMCID: PMC10406260] [DOI: 10.3389/fnimg.2022.1012639]
Abstract
Contrast and texture modifications applied at training or test time have recently shown promising results for enhancing the generalization performance of deep learning segmentation methods in medical image analysis. However, this phenomenon has not been investigated in depth. In this study, we examined it in a controlled experimental setting, using datasets from the Human Connectome Project and a large set of simulated MR protocols to mitigate data confounders, and explored possible explanations for why model performance changes when different levels of contrast- and texture-based modifications are applied. Our experiments confirm previous findings regarding the improved performance of models subjected to contrast and texture modifications during training and/or test time, but further show the interplay when these operations are combined, as well as the regimes of model improvement and worsening across scanning parameters. Furthermore, our findings demonstrate a spatial attention shift in trained models that occurs at different levels of model performance and varies with the type of applied image modification.
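
The modifications in question are simple image-level operations. As a concrete illustration (not the authors' exact protocol), the sketch below applies a random gamma adjustment to alter global contrast and blends in a Gaussian-smoothed copy to damp fine texture; either can serve as a training-time augmentation or a test-time transform. The gamma range, sigma, and blend weight are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def contrast_modify(img, gamma_range=(0.6, 1.6)):
    """Remap intensities with a random gamma; img is assumed scaled to [0, 1]."""
    gamma = rng.uniform(*gamma_range)
    return np.clip(img, 0.0, 1.0) ** gamma

def texture_modify(img, sigma=1.0, keep=0.5):
    """Blend the image with a smoothed copy, suppressing high-frequency texture."""
    smooth = gaussian_filter(img, sigma=sigma)
    return keep * img + (1.0 - keep) * smooth

# Toy 2D "brain slice": apply one level of each modification, as one would
# per sample inside a segmentation training loop.
slice2d = rng.random((128, 128)).astype(np.float32)
modified = texture_modify(contrast_modify(slice2d), sigma=1.5, keep=0.3)
print(modified.shape, float(modified.min()), float(modified.max()))
```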

7. Artificial intelligence and machine learning in cancer imaging. Commun Med (Lond) 2022;2:133. [PMID: 36310650] [PMCID: PMC9613681] [DOI: 10.1038/s43856-022-00199-0]
Abstract
An increasing array of tools is being developed using artificial intelligence (AI) and machine learning (ML) for cancer imaging. The development of an optimal tool requires multidisciplinary engagement to ensure that the appropriate use case is met, as well as to undertake robust development and testing prior to its adoption into healthcare systems. This multidisciplinary review highlights key developments in the field. We discuss the challenges and opportunities of AI and ML in cancer imaging; considerations for the development of algorithms into tools that can be widely used and disseminated; and the development of the ecosystem needed to promote growth of AI and ML in cancer imaging.

8. Mahapatra D, Poellinger A, Reyes M. Interpretability-Guided Inductive Bias for Deep Learning Based Medical Image Classification and Segmentation. Med Image Anal 2022;81:102551. [DOI: 10.1016/j.media.2022.102551]

9. Rainey C, O'Regan T, Matthew J, Skelton E, Woznitza N, Chu KY, Goodman S, McConnell J, Hughes C, Bond R, Malamateniou C, McFadden S. UK reporting radiographers' perceptions of AI in radiographic image interpretation - Current perspectives and future developments. Radiography (Lond) 2022;28:881-888. [PMID: 35780627] [DOI: 10.1016/j.radi.2022.06.006]
Abstract
INTRODUCTION: Radiographer reporting is accepted practice in the UK. With a national shortage of radiographers and radiologists, artificial intelligence (AI) support in reporting may help minimise the backlog of unreported images. Modern AI is not well understood by human end-users; this may have ethical implications and impact human trust in these systems through over- and under-reliance. This study investigates the perceptions of reporting radiographers about AI, gathers information to explain how they may interact with AI in the future, and identifies features perceived as necessary for appropriate trust in these systems.
METHODS: A Qualtrics® survey was designed and piloted by a team of UK AI expert radiographers. This paper reports the third part of the survey, which was open to reporting radiographers only.
RESULTS: 86 responses were received. Respondents were confident in how an AI reached its decision (n = 53, 62%), but less than a third would be confident communicating the AI decision to stakeholders. Affirmation from AI would improve confidence (n = 49, 57%) and disagreement would make respondents seek a second opinion (n = 60, 70%). There is a moderate level of trust in AI for image interpretation; system performance data and visual explanations from the AI would increase trust.
CONCLUSIONS: Responses indicate that AI will have a strong impact on reporting radiographers' decision making in the future. Respondents are confident in how an AI makes decisions but less confident explaining this to others. Trust levels could be improved with explainable AI solutions.
IMPLICATIONS FOR PRACTICE: This survey clarifies UK reporting radiographers' perceptions of AI used for image interpretation, highlighting key issues with AI integration.

10. Hillis JM, Bizzo BC.
Abstract
Artificial intelligence is already innovating in the provision of neurologic care. This review explores key artificial intelligence concepts; their application to neurologic diagnosis, prognosis, and treatment; and challenges that await their broader adoption. The development of new diagnostic biomarkers, individualization of prognostic information, and improved access to treatment are among the plethora of possibilities. These advances, however, reflect only the tip of the iceberg for the ways in which artificial intelligence may transform neurologic care in the future.

11. Rainey C, O'Regan T, Matthew J, Skelton E, Woznitza N, Chu KY, Goodman S, McConnell J, Hughes C, Bond R, McFadden S, Malamateniou C. Beauty Is in the AI of the Beholder: Are We Ready for the Clinical Integration of Artificial Intelligence in Radiography? An Exploratory Analysis of Perceived AI Knowledge, Skills, Confidence, and Education Perspectives of UK Radiographers. Front Digit Health 2021;3:739327. [PMID: 34859245] [PMCID: PMC8631824] [DOI: 10.3389/fdgth.2021.739327]
Abstract
Introduction: The use of artificial intelligence (AI) in medical imaging and radiotherapy has been met with both scepticism and excitement, but clinical integration of AI is already well underway. Many authors have recently reported on the AI knowledge and perceptions of radiologists, medical staff, and students; however, there is a paucity of information regarding radiographers. Published literature agrees that AI is likely to have a significant impact on radiology practice. As radiographers are at the forefront of radiology service delivery, an awareness of their current perceived knowledge, skills, and confidence in AI is essential to identify the educational provision necessary for successful adoption into practice.
Aim: The aim of this survey was to determine the perceived knowledge, skills, and confidence in AI amongst UK radiographers and to highlight priorities for educational provision to support a digital healthcare ecosystem.
Methods: A survey was created on Qualtrics® and promoted via social media (Twitter®/LinkedIn®). The survey was open to all UK radiographers, including students and retired radiographers, who were recruited by convenience and snowball sampling. Demographic information was gathered, as well as data on respondents' perceived, self-reported knowledge, skills, and confidence in AI. Insight into what participants understand by the term “AI” was gained by means of a free-text response. Quantitative analysis was performed using SPSS® and qualitative thematic analysis was performed on NVivo®.
Results: Four hundred and eleven responses were collected (80% from diagnostic radiography and 20% from a radiotherapy background), broadly representative of the workforce distribution in the UK. Although many respondents stated that they understood the concept of AI in general (78.7% of diagnostic and 52.1% of therapeutic radiography respondents), there was a notable lack of knowledge of AI principles, understanding of AI terminology, and skills and confidence in the use of AI technology. Many participants (57% of diagnostic and 49% of radiotherapy respondents) do not feel adequately trained to implement AI in the clinical setting. Furthermore, 52% and 64%, respectively, said they have not developed any skills in AI, whilst 62% and 55% stated that there is not enough AI training for radiographers. The majority of respondents indicated an urgent need for further education (77.4% of diagnostic and 73.9% of therapeutic radiographers felt they had not had adequate training in AI), with many stating that they had to educate themselves to gain basic AI skills. Notable correlations were reported between confidence in working with AI and gender, age, and highest qualification.
Conclusion: Knowledge of AI terminology, principles, and applications by healthcare practitioners is necessary for the adoption and integration of AI. The results of this survey highlight radiographers' perceived lack of knowledge, skills, and confidence in applying AI solutions. They also underline the need for formalised AI education to prepare the current and prospective workforce for the upcoming clinical integration of AI in healthcare and to navigate a digital future safely and efficiently. Focus should be given to the different needs of learners depending on age, gender, and highest qualification to ensure optimal integration.

12. Hanif AM, Beqiri S, Keane PA, Campbell JP. Applications of interpretability in deep learning models for ophthalmology. Curr Opin Ophthalmol 2021;32:452-458. [PMID: 34231530] [PMCID: PMC8373813] [DOI: 10.1097/ICU.0000000000000780]
Abstract
PURPOSE OF REVIEW: In this article, we introduce the concept of model interpretability, review its applications in deep learning models for clinical ophthalmology, and discuss its role in the integration of artificial intelligence in healthcare.
RECENT FINDINGS: The advent of deep learning in medicine has introduced models with remarkable accuracy. However, the inherent complexity of these models undermines their users' ability to understand, debug, and ultimately trust them in clinical practice. Novel methods are increasingly being explored to improve models' 'interpretability' and draw clearer associations between their outputs and features in the input dataset. In the field of ophthalmology, interpretability methods have enabled users to make informed adjustments, identify clinically relevant imaging patterns, and predict outcomes in deep learning models.
SUMMARY: Interpretability methods support the transparency necessary to implement, operate, and modify complex deep learning models. These benefits are increasingly being demonstrated in models for clinical ophthalmology. As quality standards for deep learning models used in healthcare continue to evolve, interpretability methods may prove influential in their path to regulatory approval and acceptance in clinical practice.
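
Many of the interpretability methods referenced in this literature are post-hoc and model-agnostic. A minimal sketch of one such method, occlusion sensitivity, follows; the model here is a toy stand-in (any classifier returning a scalar score would do), and the patch size, stride, and fill value are illustrative assumptions rather than settings from the review.

```python
import numpy as np

def occlusion_map(model, img, patch=8, stride=8, fill=0.0):
    """Slide an occluding patch over img and record the drop in the model score."""
    base = model(img)
    h, w = img.shape
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = img.copy()
            occluded[y:y + patch, x:x + patch] = fill
            heat[i, j] = base - model(occluded)   # large drop => important region
    return heat

# Toy "classifier" that scores central brightness, so the image centre should
# emerge as the most influential region in the resulting heat map.
def toy_model(img):
    return float(img[24:40, 24:40].mean())

rng = np.random.default_rng(1)
fundus = rng.random((64, 64))   # hypothetical 64x64 retinal image stand-in
print(occlusion_map(toy_model, fundus).round(3))
```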