1
Ellis S, Gomes S, Trumble M, Halling-Brown MD, Young KC, Chaudhry NS, Harris P, Warren LM. Deep Learning for Breast Cancer Risk Prediction: Application to a Large Representative UK Screening Cohort. Radiol Artif Intell 2024;6:e230431. PMID: 38775671; PMCID: PMC11294956; DOI: 10.1148/ryai.230431.
Abstract
Purpose To develop an artificial intelligence (AI) deep learning tool capable of predicting future breast cancer risk from a current negative screening mammographic examination and to evaluate the model on data from the UK National Health Service Breast Screening Program. Materials and Methods The OPTIMAM Mammography Imaging Database contains screening data, including mammograms and information on interval cancers, for more than 300 000 female patients who attended screening at three different sites in the United Kingdom from 2012 onward. Cancer-free screening examinations from women aged 50-70 years were identified and classified as risk-positive or risk-negative based on the occurrence of cancer within 3 years of the original examination. Examinations with confirmed cancer and images containing implants were excluded. From the resulting 5264 risk-positive and 191 488 risk-negative examinations, training (n = 89 285), validation (n = 2106), and test (n = 39 351) datasets were produced for model development and evaluation. The AI model was trained to predict future cancer occurrence based on screening mammograms and patient age. Performance was evaluated on the test dataset using the area under the receiver operating characteristic curve (AUC) and compared across subpopulations to assess potential biases. Model interpretability was explored, including with saliency maps. Results On the hold-out test set, the AI model achieved an overall AUC of 0.70 (95% CI: 0.69, 0.72). There was no evidence of a difference in performance across the three sites, between patient ethnicities, or across age groups. Visualization of saliency maps and sample images provided insights into the mammographic features associated with AI-predicted cancer risk. Conclusion The developed AI tool showed good performance on a multisite, United Kingdom-specific dataset.
Keywords: Deep Learning, Artificial Intelligence, Breast Cancer, Screening, Risk Prediction Supplemental material is available for this article. ©RSNA, 2024.
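The study's headline metric, an AUC with a bootstrapped 95% CI, can be sketched in plain Python. This is an illustrative toy computation on made-up labels and scores, not the authors' evaluation pipeline:

```python
import random

def auc(labels, scores):
    """Mann-Whitney AUC: probability a random positive outscores a random negative."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the AUC, resampling examinations with replacement."""
    rng = random.Random(seed)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        lab = [labels[i] for i in idx]
        if 0 < sum(lab) < n:  # need both classes present in the resample
            stats.append(auc(lab, [scores[i] for i in idx]))
    stats.sort()
    return stats[int(alpha / 2 * len(stats))], stats[int((1 - alpha / 2) * len(stats))]

labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(auc(labels, scores))  # 0.75 on this toy example
```

On real data one would resample at the patient (not image) level; the percentile bootstrap above is one common recipe, not necessarily the authors' exact CI method.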
Affiliation(s)
- Sam Ellis, Sandra Gomes, Matthew Trumble, Mark D. Halling-Brown, Kenneth C. Young, Nouman S. Chaudhry, Peter Harris, Lucy M. Warren
- From the Department of Scientific Computing (S.E., S.G., M.T., M.D.H.B., N.S.C., P.H., L.M.W.) and National Co-ordinating Centre for the Physics of Mammography (K.C.Y.), Royal Surrey NHS Foundation Trust, Egerton Road, Guildford GU2 7XX, England; and Centre for Vision, Speech and Signal Processing (M.D.H.B.) and Department of Physics (K.C.Y.), University of Surrey, Guildford, England
2
Mota AM, Mendes J, Matela N. Breast Cancer Molecular Subtype Prediction: A Mammography-Based AI Approach. Biomedicines 2024;12:1371. PMID: 38927578; PMCID: PMC11201998; DOI: 10.3390/biomedicines12061371.
Abstract
Breast cancer remains a leading cause of mortality among women, with molecular subtypes significantly influencing prognosis and treatment strategies. Currently, identifying the molecular subtype of a cancer requires a biopsy, a specialized, expensive, and time-consuming procedure that often yields results requiring confirmation by additional biopsies because of technique errors or tumor heterogeneity. This study introduces a novel approach for predicting breast cancer molecular subtypes using mammography images and advanced artificial intelligence (AI) methodologies. Using the OPTIMAM imaging database, 1397 images from 660 patients were selected. The pretrained deep learning model ResNet-101 was employed to classify tumors into five subtypes: Luminal A, Luminal B1, Luminal B2, HER2, and Triple Negative. Various classification strategies were studied: binary classifications (one vs. all others, specific combinations) and multi-class classification (evaluating all subtypes simultaneously). To address imbalanced data, strategies such as oversampling, undersampling, and data augmentation were explored. Performance was evaluated using accuracy and area under the receiver operating characteristic curve (AUC). Binary classification results showed a maximum average accuracy and AUC of 79.02% and 64.69%, respectively, while multi-class classification achieved an average AUC of 60.62% with oversampling and data augmentation. The most notable binary classification was HER2 vs. non-HER2, with an accuracy of 89.79% and an AUC of 73.31%. Binary classification for specific combinations of subtypes revealed an accuracy of 76.42% for HER2 vs. Luminal A and an AUC of 73.04% for HER2 vs. Luminal B1. These findings highlight the potential of mammography-based AI for non-invasive breast cancer subtype prediction, offering a promising alternative to biopsies and paving the way for personalized treatment plans.
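The imbalance-handling strategies mentioned above can be illustrated with a minimal random-oversampling sketch. The image identifiers and labels below are hypothetical stand-ins, not OPTIMAM data:

```python
import random
from collections import Counter

def oversample(samples, labels, seed=0):
    """Randomly duplicate minority-class items until every class matches the largest class."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_s, out_l = list(samples), list(labels)
    for cls, n in counts.items():
        pool = [s for s, l in zip(samples, labels) if l == cls]
        for _ in range(target - n):  # top up this class to the target count
            out_s.append(rng.choice(pool))
            out_l.append(cls)
    return out_s, out_l

# Toy example: 5 "Luminal A" vs 2 "HER2" images (strings stand in for image files)
imgs = ["a1", "a2", "a3", "a4", "a5", "h1", "h2"]
labs = ["LumA"] * 5 + ["HER2"] * 2
bal_imgs, bal_labs = oversample(imgs, labs)
print(Counter(bal_labs))  # both classes now have 5 samples
```

In practice the duplicated minority images would also pass through data augmentation (flips, crops, intensity jitter) so the copies are not pixel-identical.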
Affiliation(s)
- Ana M. Mota, João Mendes, Nuno Matela: Instituto de Biofísica e Engenharia Biomédica, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisbon, Portugal
- João Mendes: also LASIGE, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisbon, Portugal
3
Pedemonte S, Tsue T, Mombourquette B, Truong Vu YN, Matthews T, Morales Hoil R, Shah M, Ghare N, Zingman-Daniels N, Holley S, Appleton CM, Su J, Wahl RL. A Semiautonomous Deep Learning System to Reduce False Positives in Screening Mammography. Radiol Artif Intell 2024;6:e230033. PMID: 38597785; PMCID: PMC11140506; DOI: 10.1148/ryai.230033.
Abstract
Purpose To evaluate the ability of a semiautonomous artificial intelligence (AI) model to identify screening mammograms not suspicious for breast cancer and reduce the number of false-positive examinations. Materials and Methods The deep learning algorithm was trained using 123 248 two-dimensional digital mammograms (6161 cancers), and a retrospective study was performed on three nonoverlapping datasets of 14 831 screening mammography examinations (1026 cancers) from two U.S. institutions and one U.K. institution (2008-2017). The stand-alone performance of humans and AI was compared. Human-plus-AI performance was simulated to examine reductions in the cancer detection rate, number of examinations, false-positive callbacks, and benign biopsies. Metrics were adjusted to mimic the natural distribution of a screening population, and bootstrapped CIs and P values were calculated. Results Retrospective evaluation on all datasets showed minimal changes to the cancer detection rate with use of the AI device (noninferiority margin of 0.25 cancers per 1000 examinations: U.S. dataset 1, P = .02; U.S. dataset 2, P < .001; U.K. dataset, P < .001). On U.S. dataset 1 (11 592 mammograms; 101 cancers; 3810 female patients; mean age, 57.3 years ± 10.0 [SD]), the device reduced screening examinations requiring radiologist interpretation by 41.6% (95% CI: 40.6%, 42.4%; P < .001), diagnostic examination callbacks by 31.1% (95% CI: 28.7%, 33.4%; P < .001), and benign needle biopsies by 7.4% (95% CI: 4.1%, 12.4%; P < .001). On U.S. dataset 2 (1362 mammograms; 330 cancers; 1293 female patients; mean age, 55.4 years ± 10.5), the corresponding reductions were 19.5% (95% CI: 16.9%, 22.1%; P < .001), 11.9% (95% CI: 8.6%, 15.7%; P < .001), and 6.5% (95% CI: 0.0%, 19.0%; P = .08). On the U.K. dataset (1877 mammograms; 595 cancers; 1491 female patients; mean age, 63.5 years ± 7.1), the reductions were 36.8% (95% CI: 34.4%, 39.7%; P < .001), 17.1% (95% CI: 5.9%, 30.1%; P < .001), and 5.9% (95% CI: 2.9%, 11.5%; P < .001). Conclusion This work demonstrates the potential of a semiautonomous breast cancer screening system to reduce false positives, unnecessary procedures, patient anxiety, and medical expenses. Keywords: Artificial Intelligence, Semiautonomous Deep Learning, Breast Cancer, Screening Mammography Supplemental material is available for this article. Published under a CC BY 4.0 license.
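The triage logic of such a semiautonomous system (auto-clear examinations the model scores below a threshold, then measure radiologist-workload reduction against cancer retention) can be sketched as follows. The scores and threshold are synthetic illustrations, not the device's actual rule:

```python
def triage(scores, labels, threshold):
    """Workload reduction and cancer retention if exams scoring below
    `threshold` are auto-cleared without radiologist review (label 1 = cancer)."""
    reviewed = [(s, l) for s, l in zip(scores, labels) if s >= threshold]
    n_cancers = sum(labels)
    kept = sum(l for _, l in reviewed)                 # cancers still routed to a reader
    workload_reduction = 1 - len(reviewed) / len(scores)
    retention = kept / n_cancers if n_cancers else 1.0
    return workload_reduction, retention

# Synthetic screening population: higher score = more suspicious, 1 = cancer
scores = [0.05, 0.10, 0.20, 0.30, 0.70, 0.80, 0.90, 0.95]
labels = [0,    0,    0,    0,    0,    1,    1,    1   ]
wr, ret = triage(scores, labels, threshold=0.5)
print(wr, ret)  # 0.5 workload reduction, all cancers retained
```

In a real deployment the threshold would be chosen on a validation set to satisfy a noninferiority margin on the cancer detection rate, as the study does.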
Affiliation(s)
- Stefano Pedemonte, Trevor Tsue, Brent Mombourquette, Yen Nhi Truong Vu, Thomas Matthews, Rodrigo Morales Hoil, Meet Shah, Nikita Ghare, Naomi Zingman-Daniels, Jason Su: Whiterabbit.ai, 3930 Freedom Cir, Santa Clara, CA 95054
- Susan Holley: Onsite Women's Health, Westfield, Mass
- Catherine M. Appleton: SSM Health, St Louis, Mo
- Richard L. Wahl: Mallinckrodt Institute of Radiology, Washington University School of Medicine, St Louis, Mo
4
Monteiro Cordeiro N, Facina G, Pinto Nazário AC, Monteiro Sanvido V, Araujo Neto JT, Rodrigues Dos Santos E, Domingues da Silva M, Elias S. Towards precision medicine in breast imaging: A novel open mammography database with tailor-made 3D image retrieval for AI and teaching. Comput Methods Programs Biomed 2024;248:108117. PMID: 38498955; DOI: 10.1016/j.cmpb.2024.108117.
Abstract
This project addresses the global challenge of breast cancer, particularly in low-resource settings, by creating a pioneering mammography database. Breast cancer, identified by the World Health Organization as a leading cause of cancer death among women, often faces diagnostic and treatment resource constraints in low- and middle-income countries. To enhance early diagnosis and address educational setbacks, the project focuses on leveraging artificial intelligence (AI) technologies through a comprehensive database. Developed in collaboration with Ambra Health, a cloud-based medical image management platform, the database comprises 941 mammography images from 100 anonymized cases, with 62% including 3D images. Accessible through http://mamografia.unifesp.br, the platform facilitates a simple registration process and an advanced search system based on 169 clinical and imaging variables. The website, customizable to the user's native language, ensures data security through an automatic anonymization system. By providing high-resolution, 3D digital images and supplementary clinical information, the platform aims to promote education and research in breast cancer diagnosis, representing a significant advancement in resource-constrained healthcare environments.
Affiliation(s)
- Gil Facina, Vanessa Monteiro Sanvido, Simone Elias: Federal University of São Paulo, R. Marselhesa, 249 - Vila Mariana, São Paulo, SP 04020-060, Brazil
5
Wu DY, Vo DT, Seiler SJ. Opinion: Big Data Elements Key to Medical Imaging Machine Learning Tool Development. J Breast Imaging 2024;6:217-219. PMID: 38271153; DOI: 10.1093/jbi/wbad102.
Affiliation(s)
- Dolly Y Wu: Volunteer Services, UT Southwestern Medical Center, Dallas, TX, USA
- Dat T Vo: Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, USA
- Stephen J Seiler: Department of Radiology, UT Southwestern Medical Center, Dallas, TX, USA
6
Velarde OM, Lin C, Eskreis-Winkler S, Parra LC. Robustness of Deep Networks for Mammography: Replication Across Public Datasets. J Imaging Inform Med 2024;37:536-546. PMID: 38343223; PMCID: PMC11031505; DOI: 10.1007/s10278-023-00943-5.
Abstract
Deep neural networks have demonstrated promising performance in screening mammography, with recent studies reporting performance at or above the level of trained radiologists on internal datasets. However, it remains unclear whether the performance of these trained models is robust and replicates across external datasets. In this study, we evaluate four state-of-the-art publicly available models using four publicly available mammography datasets (CBIS-DDSM, INbreast, CMMD, OMI-DB). Where test data were available, published results were replicated. The best-performing model, which achieved an area under the ROC curve (AUC) of 0.88 on internal data from NYU, achieved an AUC of 0.90 on the external CMMD dataset (N = 826 exams). On the larger OMI-DB dataset (N = 11,440 exams), it achieved an AUC of 0.84 but did not match the performance of individual radiologists (at a specificity of 0.92, the sensitivity was 0.97 for the radiologist and 0.53 for the network for a 1-year follow-up). The network showed higher performance for in situ cancers than for invasive cancers. Among invasive cancers, it was relatively weaker at identifying asymmetries and relatively stronger at identifying masses. The three other trained models that we evaluated all performed poorly on external datasets. Independent validation of trained models is an essential step to ensure safe and reliable use. Future progress in AI for mammography may depend on a concerted effort to make larger datasets publicly available that span multiple clinical sites.
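Reporting sensitivity "at a specificity of 0.92", as above, amounts to picking the score threshold that clears at least 92% of negative examinations and measuring sensitivity on positives at that threshold. A minimal sketch with toy scores (one common recipe; the paper's exact operating-point selection may differ):

```python
def sensitivity_at_specificity(labels, scores, target_spec=0.92):
    """Smallest negative-score threshold achieving >= target specificity,
    and the sensitivity on positives at that threshold (label 1 = cancer)."""
    neg = sorted(s for l, s in zip(labels, scores) if l == 0)
    k = min(int(target_spec * len(neg)), len(neg) - 1)
    thr = neg[k]                                   # negatives <= thr are true negatives
    pos = [s for l, s in zip(labels, scores) if l == 1]
    tp = sum(s > thr for s in pos)
    return tp / len(pos), thr

# Toy data: 10 negatives with evenly spread scores, 3 positives
labels = [0] * 10 + [1] * 3
scores = [i / 10 for i in range(10)] + [0.85, 0.95, 0.99]
sens, thr = sensitivity_at_specificity(labels, scores)
print(round(sens, 3), thr)  # sensitivity 0.667 at threshold 0.9
```

With only 10 negatives the achievable specificity is coarse (here the chosen threshold gives specificity 1.0); on thousands of exams the quantile lands much closer to the 0.92 target.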
Affiliation(s)
- Osvaldo M Velarde, Lucas C Parra: Department of Biomedical Engineering, The City College of New York, New York, NY 10030, USA
- Clarissa Lin, Sarah Eskreis-Winkler: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
7
Montoya-del-Angel R, Sam-Millan K, Vilanova JC, Martí R. MAM-E: Mammographic Synthetic Image Generation with Diffusion Models. Sensors (Basel) 2024;24:2076. PMID: 38610288; PMCID: PMC11014323; DOI: 10.3390/s24072076.
Abstract
Generative models are used as an alternative data augmentation technique to alleviate the data scarcity problem faced in the medical imaging field. Diffusion models have gathered special attention due to their innovative generation approach, the high quality of the generated images, and their less complex training process compared with generative adversarial networks. Still, the implementation of such models in the medical domain remains at an early stage. In this work, we propose exploring the use of diffusion models for the generation of high-quality, full-field digital mammograms using state-of-the-art conditional diffusion pipelines. Additionally, we propose using stable diffusion models for the inpainting of synthetic mass-like lesions on healthy mammograms. We introduce MAM-E, a pipeline of generative models for high-quality mammography synthesis controlled by a text prompt and capable of generating synthetic mass-like lesions on specific regions of the breast. Finally, we provide quantitative and qualitative assessment of the generated images and easy-to-use graphical user interfaces for mammography synthesis.
Affiliation(s)
- Ricardo Montoya-del-Angel, Karla Sam-Millan, Robert Martí: Computer Vision and Robotics Institute (ViCOROB), University of Girona, 17004 Girona, Spain
- Joan C. Vilanova: Department of Radiology, Clínica Girona, Institute of Diagnostic Imaging (IDI) Girona, University of Girona, 17004 Girona, Spain
8
Wu DY, Vo DT, Seiler SJ. Long overdue national big data policies hinder accurate and equitable cancer detection AI systems. J Med Imaging Radiat Sci 2024;55:101387. PMID: 38443215; DOI: 10.1016/j.jmir.2024.02.012.
Affiliation(s)
- Dolly Y Wu: Volunteer Services, UT Southwestern Medical Center, Dallas, TX, USA
- Dat T Vo: Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, USA
- Stephen J Seiler: Department of Radiology, UT Southwestern Medical Center, Dallas, TX, USA
9
Wu DY, Fang YV, Vo DT, Spangler A, Seiler SJ. Detailed Image Data Quality and Cleaning Practices for Artificial Intelligence Tools for Breast Cancer. JCO Clin Cancer Inform 2024;8:e2300074. PMID: 38552191; PMCID: PMC10994436; DOI: 10.1200/cci.23.00074.
Abstract
The article describes standardized image-data preparation and cleaning practices intended to improve the accuracy and consistency of AI diagnostic tools for breast cancer.
Affiliation(s)
- Dolly Y. Wu: Volunteer Services, UT Southwestern Medical Center, Dallas, TX
- Yisheng V. Fang: Department of Pathology, UT Southwestern Medical Center, Dallas, TX
- Dat T. Vo: Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX
- Ann Spangler: Department of Radiation Oncology (retired), UT Southwestern Medical Center, Dallas, TX
10
Santeramo R, Damiani C, Wei J, Montana G, Brentnall AR. Are better AI algorithms for breast cancer detection also better at predicting risk? A paired case-control study. Breast Cancer Res 2024;26:25. PMID: 38326868; PMCID: PMC10848404; DOI: 10.1186/s13058-024-01775-z.
Abstract
BACKGROUND There is increasing evidence that artificial intelligence (AI) breast cancer risk evaluation tools using digital mammograms are highly informative for 1-6 years following a negative screening examination. We hypothesized that algorithms previously shown to work well for cancer detection will also work well for risk assessment, and that performance for detection and for risk assessment is correlated. METHODS To evaluate our hypothesis, we designed a case-control study using paired mammograms at diagnosis and at the previous screening visit. The study included n = 3386 women from the OPTIMAM registry, which includes mammograms from women diagnosed with breast cancer in the English breast screening program 2010-2019. Cases were diagnosed with invasive breast cancer or ductal carcinoma in situ at screening and were selected if they had a mammogram available at the screening examination that led to detection, and a paired mammogram at their previous screening visit approximately 3 years prior to detection, when no cancer was detected. Controls without cancer were matched 1:1 to cases based on age (year), screening site, and mammography machine type. Risk assessment was conducted using a deep-learning model designed for breast cancer risk assessment (Mirai) and three open-source deep-learning algorithms designed for breast cancer detection. Discrimination was assessed using a matched area under the curve (AUC) statistic. RESULTS Overall performance using the paired mammograms followed the same order by algorithm for risk assessment (AUC range 0.59-0.67) and detection (AUC 0.81-0.89), with Mirai performing best for both. There was also a correlation in performance for risk and detection within algorithms by cancer size, with much greater accuracy for large cancers (30 mm+; detection AUC 0.88-0.92; risk AUC 0.64-0.74) than smaller cancers (0 to < 10 mm; detection AUC 0.73-0.86; risk AUC 0.54-0.64). Mirai was relatively strong for risk assessment of smaller cancers (0 to < 10 mm: Mirai risk AUC 0.64, 95% CI 0.57 to 0.70; other algorithms AUC 0.54-0.56). CONCLUSIONS Improvements in risk assessment could stem from enhancing cancer detection capabilities for smaller cancers. Other state-of-the-art AI detection algorithms with high performance for smaller cancers might achieve relatively high performance for risk assessment.
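For 1:1 matched case-control designs like this one, a matched AUC can be estimated as the proportion of pairs in which the case's score exceeds its matched control's, with ties counted as half. A minimal sketch on made-up scores, not necessarily the exact statistic used in the study:

```python
def matched_auc(case_scores, control_scores):
    """Proportion of matched pairs where the case outscores its control (ties = 0.5)."""
    assert len(case_scores) == len(control_scores), "scores must be paired 1:1"
    wins = sum((c > k) + 0.5 * (c == k) for c, k in zip(case_scores, control_scores))
    return wins / len(case_scores)

# Hypothetical risk scores for 4 matched pairs (case, matched control)
cases    = [0.9, 0.7, 0.4, 0.6]
controls = [0.2, 0.7, 0.5, 0.1]
print(matched_auc(cases, controls))  # (1 + 0.5 + 0 + 1) / 4 = 0.625
```

Because matching removes age, site, and machine effects within each pair, this statistic isolates the score's discrimination beyond the matching variables, which is the point of the design.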
Affiliation(s)
- Ruggiero Santeramo: Wolfson Institute of Population Health, Queen Mary University of London, Charterhouse Square, London EC1M 6BQ, UK; Warwick Manufacturing Group, University of Warwick, Coventry CV4 7AL, UK
- Celeste Damiani: Wolfson Institute of Population Health, Queen Mary University of London, London EC1M 6BQ, UK; Fondazione Istituto Italiano di Tecnologia (IIT), 16163 Genoa, Italy
- Jiefei Wei: Department of Statistics, University of Warwick, Coventry CV4 7AL, UK
- Giovanni Montana: Warwick Manufacturing Group and Department of Statistics, University of Warwick, Coventry CV4 7AL, UK
- Adam R Brentnall: Wolfson Institute of Population Health, Queen Mary University of London, London EC1M 6BQ, UK
11
Brentnall AR, Atakpa EC, Hill H, Santeramo R, Damiani C, Cuzick J, Montana G, Duffy SW. An optimization framework to guide the choice of thresholds for risk-based cancer screening. NPJ Digit Med 2023;6:223. PMID: 38017184; PMCID: PMC10684532; DOI: 10.1038/s41746-023-00967-9.
Abstract
It is uncommon for risk groups defined by statistical or artificial intelligence (AI) models to be chosen by jointly considering model performance and the potential interventions available. We develop a framework to rapidly guide the choice of risk groups in this manner, and apply it to guide breast cancer screening intervals using an AI model. Linear programming is used to define risk groups that minimize expected advanced cancer incidence subject to resource constraints. In the application, risk stratification performance is estimated from a case-control study (2044 cases, 1:1 matching), and other parameters are taken from screening trials and the screening programme in England. Under the model, re-screening in 1 year for the highest 4% of AI model risk, in 3 years for the middle 64%, and in 4 years for the 32% of the population at lowest risk was expected to reduce the number of advanced cancers diagnosed by approximately 18 advanced cancers per 1000 diagnosed with triennial screening, for the same average number of screens in the population as triennial screening for all. Sensitivity analyses found the choice of thresholds was robust to model parameters, but the estimated reduction in advanced cancers was not precise and requires further evaluation. Our framework helps define thresholds with the greatest chance of success for reducing the population health burden of cancer when used in risk-adapted screening; these should be evaluated further, for example in health-economic modelling based on computer simulation models and in real-world evaluations.
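The core idea, choosing re-screening intervals for risk groups to minimize expected advanced cancers subject to a fixed screening budget, can be illustrated with a tiny brute-force search. All group fractions, rates, and the toy dose-response model below are made up for illustration; the paper solves the real problem with linear programming and empirically estimated parameters:

```python
from itertools import product

# Hypothetical risk groups: (population fraction, advanced-cancer rate per 1000
# women under 3-yearly screening). Rates are illustrative only.
groups = [(0.04, 9.0), (0.64, 2.0), (0.32, 0.8)]
intervals = [1, 3, 4]                       # candidate re-screen intervals in years
budget = sum(f / 3 for f, _ in groups)      # average screens/year under triennial-for-all

def expected_cancers(assign):
    # Toy model: advanced-cancer rate scales linearly with interval relative to 3 years.
    return sum(f * r * (iv / 3) for (f, r), iv in zip(groups, assign))

best = None
for assign in product(intervals, repeat=len(groups)):
    screens = sum(f / iv for (f, _), iv in zip(groups, assign))
    if screens <= budget + 1e-9:            # resource constraint: no extra screens overall
        cost = expected_cancers(assign)
        if best is None or cost < best[0]:
            best = (cost, assign)

print(best)  # with these toy numbers: intervals (1, 3, 4) years for (high, mid, low) risk
```

With these invented parameters the search happens to recover the same shape of solution the paper reports (1-year, 3-year, 4-year intervals for high, middle, and low risk); with real parameter estimates and finer threshold grids, linear programming scales far better than enumeration.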
Affiliation(s)
- Adam R Brentnall
- Wolfson Institute of Population Health, Queen Mary University of London, London, UK.
- Emma C Atakpa
- Wolfson Institute of Population Health, Queen Mary University of London, London, UK
- Harry Hill
- Sheffield Centre for Health and Related Research, University of Sheffield, Sheffield, UK
- Ruggiero Santeramo
- Wolfson Institute of Population Health, Queen Mary University of London, London, UK
- Warwick Manufacturing Group, University of Warwick, Coventry, UK
- Celeste Damiani
- Wolfson Institute of Population Health, Queen Mary University of London, London, UK
- Data Science & Computation Facility, Fondazione Istituto Italiano di Tecnologia, Genoa, Italy
- Jack Cuzick
- Wolfson Institute of Population Health, Queen Mary University of London, London, UK
- Giovanni Montana
- Warwick Manufacturing Group, University of Warwick, Coventry, UK
- Stephen W Duffy
- Wolfson Institute of Population Health, Queen Mary University of London, London, UK
12
Yadav N, Pandey S, Gupta A, Dudani P, Gupta S, Rangarajan K. Data Privacy in Healthcare: In the Era of Artificial Intelligence. Indian Dermatol Online J 2023; 14:788-792. [PMID: 38099022] [PMCID: PMC10718098] [DOI: 10.4103/idoj.idoj_543_23]
Abstract
Data privacy has increasingly become a matter of concern in the era of large public digital repositories of data. This is particularly true in healthcare, where data can be misused if traced back to patients, and brings with it a myriad of possibilities. Being custodians of data, as well as being at the helm of designing studies and products that can potentially benefit patients, healthcare professionals often find themselves unsure about the ethical and legal constraints that underlie data sharing. In this review we touch upon the concerns, legal frameworks, and some common practices in these respects.
Affiliation(s)
- Neel Yadav
- Department of Radiodiagnosis and Interventional Radiology, All India Institute of Medical Sciences, New Delhi, Delhi, India
- Saumya Pandey
- Department of Radiodiagnosis and Interventional Radiology, All India Institute of Medical Sciences, New Delhi, Delhi, India
- Amit Gupta
- Department of Radiodiagnosis and Interventional Radiology, All India Institute of Medical Sciences, New Delhi, Delhi, India
- Pankhuri Dudani
- Department of Dermatology, All India Institute of Medical Sciences, New Delhi, Delhi, India
- Somesh Gupta
- Department of Dermatology, All India Institute of Medical Sciences, New Delhi, Delhi, India
- Krithika Rangarajan
- Department of Radiodiagnosis and Interventional Radiology, All India Institute of Medical Sciences, New Delhi, Delhi, India
13
Osuala R, Skorupko G, Lazrak N, Garrucho L, García E, Joshi S, Jouide S, Rutherford M, Prior F, Kushibar K, Díaz O, Lekadir K. medigan: a Python library of pretrained generative models for medical image synthesis. J Med Imaging (Bellingham) 2023; 10:061403. [PMID: 36814939] [PMCID: PMC9940031] [DOI: 10.1117/1.jmi.10.6.061403]
Abstract
Purpose Deep learning has shown great promise as the backbone of clinical decision support systems. Synthetic data generated by generative models can enhance the performance and capabilities of data-hungry deep learning models. However, (1) the availability of (synthetic) datasets is limited and (2) generative models are complex to train, which hinders their adoption in research and clinical applications. To reduce this entry barrier, we explore generative model sharing to allow more researchers to access, generate, and benefit from synthetic data. Approach We propose medigan, a one-stop shop for pretrained generative models implemented as an open-source, framework-agnostic Python library. After gathering end-user requirements, design decisions based on usability, technical feasibility, and scalability are formulated. Subsequently, we implement medigan based on modular components for generative model (i) execution, (ii) visualization, (iii) search & ranking, and (iv) contribution. We integrate pretrained models with applications across modalities such as mammography, endoscopy, x-ray, and MRI. Results The scalability and design of the library are demonstrated by its growing number of integrated and readily usable pretrained generative models, which include 21 models utilizing nine different generative adversarial network architectures trained on 11 different datasets. We further analyze three medigan applications: (a) enabling community-wide sharing of restricted data, (b) investigating generative model evaluation metrics, and (c) improving clinical downstream tasks. In (b), we extract Fréchet inception distances (FID), demonstrating FID variability based on image normalization and radiology-specific feature extractors. Conclusion medigan allows researchers and developers to create, increase, and domain-adapt their training data in just a few lines of code. Capable of enriching and accelerating the development of clinical machine learning models, we show medigan's viability as a platform for generative model sharing. Our multi-model synthetic data experiments uncover standards for assessing and reporting metrics, such as FID, in image synthesis studies.
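The FID metric analysed in application (b) above is the Fréchet distance between two Gaussians fitted to feature distributions. Real FID uses Inception (or radiology-specific) feature embeddings with full covariance matrices; the sketch below is a simplification assuming diagonal covariances, where the matrix square-root term collapses to a per-dimension difference of standard deviations:

```python
import math

def fid_diagonal(mu1, var1, mu2, var2):
    """Fréchet distance between two Gaussians with diagonal covariances.

    FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2*(C1*C2)^(1/2)); for diagonal
    covariances the trace term reduces to sum((sqrt(v1) - sqrt(v2))^2).
    """
    mean_term = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    cov_term = sum((math.sqrt(v) - math.sqrt(w)) ** 2 for v, w in zip(var1, var2))
    return mean_term + cov_term

# Identical distributions give distance 0; shifting one mean raises it.
print(fid_diagonal([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0]))  # 0.0
print(fid_diagonal([0.0, 0.0], [1.0, 1.0], [3.0, 4.0], [1.0, 1.0]))  # 25.0
```

This closed form also makes the paper's observation concrete: changing the feature extractor or image normalization changes the fitted means and variances, and hence the reported FID.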
Affiliation(s)
- Richard Osuala
- Universitat de Barcelona, Barcelona Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Barcelona, Spain
- Grzegorz Skorupko
- Universitat de Barcelona, Barcelona Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Barcelona, Spain
- Noussair Lazrak
- Universitat de Barcelona, Barcelona Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Barcelona, Spain
- Lidia Garrucho
- Universitat de Barcelona, Barcelona Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Barcelona, Spain
- Eloy García
- Universitat de Barcelona, Facultat de Matemàtiques i Informàtica, Barcelona, Spain
- Smriti Joshi
- Universitat de Barcelona, Barcelona Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Barcelona, Spain
- Socayna Jouide
- Universitat de Barcelona, Barcelona Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Barcelona, Spain
- Michael Rutherford
- University of Arkansas for Medical Sciences, Department of Biomedical Informatics, Little Rock, Arkansas, United States
- Fred Prior
- University of Arkansas for Medical Sciences, Department of Biomedical Informatics, Little Rock, Arkansas, United States
- Kaisar Kushibar
- Universitat de Barcelona, Barcelona Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Barcelona, Spain
- Oliver Díaz
- Universitat de Barcelona, Barcelona Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Barcelona, Spain
- Karim Lekadir
- Universitat de Barcelona, Barcelona Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Barcelona, Spain
14
Mackenzie A, Lewis E, Loveland J. Successes and challenges in extracting information from DICOM image databases for audit and research. Br J Radiol 2023; 96:20230104. [PMID: 37698251] [PMCID: PMC10607388] [DOI: 10.1259/bjr.20230104]
Abstract
In radiography, much valuable associated data (metadata) is generated during image acquisition. The current setup of picture archiving and communication systems (PACS) can make extraction of this metadata difficult, especially as it is typically stored with the image. The aim of this work is to examine the current challenges in extracting image metadata and to discuss the potential benefits of using this rich information. This work focuses on breast screening, though the conclusions are applicable to other modalities. The data stored in PACS contain information, currently underutilised, that is of great benefit for auditing and improving imaging and radiographic practice. From the literature, we present examples of the potential clinical benefit, such as audits of dose and radiographic practice, as well as more advanced research highlighting the effects of radiographic practice, e.g. cancer detection rates affected by imaging technology. This review considers the challenges in extracting data, namely:
- the search tools for data on most PACS are inadequate, being both time-consuming and limited in the elements that can be searched
- security and information governance considerations
- anonymisation of data, if required
- data curation
The review describes some solutions that have been successfully implemented:
- retrospective extraction: direct query on PACS
- extracting data prospectively
- use of structured reports
- use of trusted research environments
Ultimately, the data access process will be made easier by inclusion during PACS procurement. Auditing data from PACS can be used to improve the quality of imaging and workflow, all of which will be of clinical benefit to patients.
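As a flavour of the file-level plumbing behind such retrospective extraction (this example is not from the review): DICOM Part 10 files begin with a 128-byte preamble followed by the magic bytes `DICM`, and a cheap check like the one below is commonly used to filter candidate files before a full DICOM parser reads the metadata:

```python
def looks_like_dicom(header_bytes):
    """Cheap DICOM Part 10 check on a file's first bytes.

    Part 10 files carry a 128-byte preamble followed by the magic marker
    b'DICM'; anything shorter, or without the marker, is rejected.
    """
    return len(header_bytes) >= 132 and header_bytes[128:132] == b"DICM"

print(looks_like_dicom(b"\x00" * 128 + b"DICM"))    # True
print(looks_like_dicom(b"not a dicom file at all"))  # False
```

In practice one would read the first 132 bytes of each file on the archive and hand only the files passing this test to a DICOM library for tag-level metadata extraction.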
Affiliation(s)
- John Loveland
- NCCPM, Royal Surrey NHS Foundation Trust, Guildford, United Kingdom
15
Cossío F, Schurz H, Engström M, Barck-Holst C, Tsirikoglou A, Lundström C, Gustafsson H, Smith K, Zackrisson S, Strand F. VAI-B: a multicenter platform for the external validation of artificial intelligence algorithms in breast imaging. J Med Imaging (Bellingham) 2023; 10:061404. [PMID: 36949901] [PMCID: PMC10026999] [DOI: 10.1117/1.jmi.10.6.061404]
Abstract
Purpose Multiple vendors are currently offering artificial intelligence (AI) computer-aided systems for triage detection, diagnosis, and risk prediction of breast cancer based on screening mammography. There is an imminent need to establish validation platforms that enable fair and transparent testing of these systems against external data. Approach We developed validation of artificial intelligence for breast imaging (VAI-B), a platform for independent validation of AI algorithms in breast imaging. The platform is a hybrid solution, with one part implemented in the cloud and another in an on-premises environment at Karolinska Institute. Cloud services provide the flexibility of scaling the computing power during inference time, while secure on-premises clinical data storage preserves data privacy. A MongoDB database and a Python package were developed to store and manage the data on-premises. VAI-B requires four data components: radiological images, AI inferences, radiologist assessments, and cancer outcomes. Results To pilot test VAI-B, we defined a case-control population based on 8080 patients diagnosed with breast cancer and 36,339 healthy women based on the Swedish national quality registry for breast cancer. Images and radiological assessments from more than 100,000 mammography examinations were extracted from hospitals in three regions of Sweden. The images were processed by AI systems from three vendors in a virtual private cloud to produce abnormality scores related to signs of cancer in the images. A total of 105,706 examinations have been processed and stored in the database. Conclusions We have created a platform that will allow downstream evaluation of AI systems for breast cancer detection, which enables faster development cycles for participating vendors and safer AI adoption for participating hospitals. The platform was designed to be scalable and ready to be expanded should a new vendor want to evaluate their system or should a new hospital wish to obtain an evaluation of different AI systems on their images.
Affiliation(s)
- Fernando Cossío
- Karolinska Institute, Department of Oncology-Pathology, Stockholm, Sweden
- Karolinska University Hospital, Department of Radiology, Stockholm, Sweden
- Haiko Schurz
- Karolinska Institute, Department of Oncology-Pathology, Stockholm, Sweden
- Claes Lundström
- Linköping University, Center for Medical Image Science and Visualization (CMIV), Linköping, Sweden
- Håkan Gustafsson
- Linköping University, Center for Medical Image Science and Visualization (CMIV), Linköping, Sweden
- Linköping University, Department of Medical Radiation Physics, Department of Health, Medicine and Caring Sciences, Linköping, Sweden
- Kevin Smith
- Royal Institute of Technology (KTH), Division of Computational Science and Technology, Stockholm, Sweden
- Sophia Zackrisson
- Lund University, Department of Diagnostic Radiology, Translational Medicine, Malmö, Sweden
- Skåne University Hospital, Department of Imaging and Physiology, Malmö, Sweden
- Fredrik Strand
- Karolinska Institute, Department of Oncology-Pathology, Stockholm, Sweden
- Karolinska University Hospital, Department of Radiology, Stockholm, Sweden
16
Suzuki Y, Hanaoka S, Tanabe M, Yoshikawa T, Seto Y. Predicting Breast Cancer Risk Using Radiomics Features of Mammography Images. J Pers Med 2023; 13:1528. [PMID: 38003843] [PMCID: PMC10672551] [DOI: 10.3390/jpm13111528]
Abstract
Mammography images contain a lot of information about not only the mammary glands but also the skin, adipose tissue, and stroma, which may reflect the risk of developing breast cancer. We aimed to establish a method to predict breast cancer risk using radiomics features of mammography images and to enable further examinations and prophylactic treatment to reduce breast cancer mortality. We used mammography images of 4000 women with breast cancer and 1000 healthy women from the 'starting point set' of the OPTIMAM dataset, a public dataset. We trained a Light Gradient Boosting Machine using radiomics features extracted from mammography images of women with breast cancer (only the healthy side) and healthy women. This model was a binary classifier that could discriminate whether a given mammography image was of the contralateral side of women with breast cancer or not, and its performance was evaluated using five-fold cross-validation. The average area under the curve for five folds was 0.60122. Some radiomics features, such as 'wavelet-H_glcm_Correlation' and 'wavelet-H_firstorder_Maximum', showed distribution differences between the malignant and normal groups. Therefore, a single radiomics feature might reflect the breast cancer risk. The odds ratio of breast cancer incidence was 7.38 in women whose estimated malignancy probability was ≥0.95. Radiomics features from mammography images can help predict breast cancer risk.
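The odds ratio quoted above is standard 2x2-table arithmetic. The counts in this sketch are invented purely to show the calculation (the study's value of 7.38 comes from its own data):

```python
def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Odds ratio from a 2x2 table: (a/b) / (c/d).

    'Exposed' here would mean estimated malignancy probability >= 0.95;
    cases are women who developed breast cancer, controls those who did not.
    """
    return (exposed_cases / exposed_controls) / (unexposed_cases / unexposed_controls)

# Invented counts: 59 cases vs 8 controls above the probability threshold,
# 100 cases vs 100 controls below it.
print(odds_ratio(59, 8, 100, 100))  # 7.375
```

With these made-up counts the odds of cancer among high-probability women are about 7.4 times those among the rest, roughly the magnitude the study reports.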
Affiliation(s)
- Yusuke Suzuki
- Department of Breast and Endocrine Surgery, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Shouhei Hanaoka
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Masahiko Tanabe
- Department of Breast and Endocrine Surgery, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Takeharu Yoshikawa
- Department of Computational Diagnostic Radiology and Preventive Medicine, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Yasuyuki Seto
- Department of Breast and Endocrine Surgery, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
17
Roschewitz M, Khara G, Yearsley J, Sharma N, James JJ, Ambrózay É, Heroux A, Kecskemethy P, Rijken T, Glocker B. Automatic correction of performance drift under acquisition shift in medical image classification. Nat Commun 2023; 14:6608. [PMID: 37857643] [PMCID: PMC10587231] [DOI: 10.1038/s41467-023-42396-y]
Abstract
Image-based prediction models for disease detection are sensitive to changes in data acquisition such as the replacement of scanner hardware or updates to the image processing software. The resulting differences in image characteristics may lead to drifts in clinically relevant performance metrics which could cause harm in clinical decision making, even for models that generalise in terms of area under the receiver-operating characteristic curve. We propose Unsupervised Prediction Alignment, a generic automatic recalibration method that requires no ground truth annotations and only limited amounts of unlabelled example images from the shifted data distribution. We illustrate the effectiveness of the proposed method to detect and correct performance drift in mammography-based breast cancer screening and on publicly available histopathology data. We show that the proposed method can preserve the expected performance in terms of sensitivity/specificity under various realistic scenarios of image acquisition shift, thus offering an important safeguard for clinical deployment.
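The paper's Unsupervised Prediction Alignment is more refined than this, but the core intuition of matching a shifted score distribution back to a reference distribution can be sketched as a simple empirical quantile mapping. This is an illustrative stand-in under that assumption, not the authors' algorithm:

```python
from bisect import bisect_right

def quantile_align(new_scores, reference_scores):
    """Map each new score to the reference value at the same empirical quantile.

    After alignment, decision thresholds tuned on the reference distribution
    recover their intended operating point on the shifted data.
    """
    ref = sorted(reference_scores)
    new_sorted = sorted(new_scores)
    n, m = len(new_sorted), len(ref)
    aligned = []
    for s in new_scores:
        k = bisect_right(new_sorted, s)        # rank of s among the new scores
        aligned.append(ref[(k * m - 1) // n])  # reference value at the same rank
    return aligned

reference = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
shifted = [r / 2 for r in reference]  # toy acquisition shift: every score halved
print(quantile_align(shifted, reference))  # recovers the reference scores
```

In this toy case the shift is a uniform halving of scores, so alignment recovers the reference distribution exactly; the appeal of such unsupervised recalibration is that it needs only unlabelled examples from the shifted distribution, no ground truth.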
Affiliation(s)
- Mélanie Roschewitz
- Kheiron Medical Technologies, London, UK.
- Imperial College London, Department of Computing, London, UK.
- Nisha Sharma
- Leeds Teaching Hospital NHS Trust, Department of Radiology, Leeds, UK
- Jonathan J James
- Nottingham University Hospitals NHS Trust, Nottingham City Hospital, Nottingham Breast Institute, Nottingham, UK
- Ben Glocker
- Kheiron Medical Technologies, London, UK.
- Imperial College London, Department of Computing, London, UK.
18
van Nijnatten TJA, Payne NR, Hickman SE, Ashrafian H, Gilbert FJ. Overview of trials on artificial intelligence algorithms in breast cancer screening - A roadmap for international evaluation and implementation. Eur J Radiol 2023; 167:111087. [PMID: 37690352] [DOI: 10.1016/j.ejrad.2023.111087]
Abstract
Accumulating evidence from retrospective studies demonstrates at least non-inferior performance when using AI algorithms with different strategies versus double reading in mammography screening. In addition, AI algorithms for mammography screening can reduce workload by moving to single human reading. Prospective trials are essential to avoid unintended adverse consequences before incorporation of AI algorithms into the UK's National Health Service (NHS) Breast Screening Programme (BSP). A stakeholders' meeting was organized at Newnham College, Cambridge, UK to review the current evidence and enable consensus discussion on the next steps required before implementation into a screening programme. It was concluded that a multicentre, multivendor testing platform study with opt-out consent is preferred. AI thresholds from different vendors should be determined while maintaining non-inferior screening performance results, particularly ensuring recall rates are not increased. Automatic recall of cases using an agreed high-sensitivity AI score, versus automatic rule-out with a low AI score set at a high sensitivity, could be used. A human reader should still be involved in decision making, with AI-only recalls requiring human arbitration. Standalone AI algorithms used without prompting maintain unbiased screening reading performance, but reading with prompts should be tested prospectively and ideally provided for arbitration.
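The recall/rule-out workflow discussed above amounts to a three-way triage rule. The sketch below uses invented threshold values purely to make the decision logic concrete; real thresholds would be set per vendor to maintain non-inferior sensitivity and recall rates:

```python
def screening_decision(ai_score, low_threshold=0.05, high_threshold=0.95):
    """Toy three-way triage rule (thresholds invented for illustration).

    - below low_threshold: automatic rule-out, no human reading
    - above high_threshold: flagged for recall, pending human arbitration
    - otherwise: routine human reading
    """
    if ai_score < low_threshold:
        return "rule-out"
    if ai_score > high_threshold:
        return "recall-pending-arbitration"
    return "human-read"

print([screening_decision(s) for s in (0.01, 0.5, 0.99)])
# ['rule-out', 'human-read', 'recall-pending-arbitration']
```

Note that even the "recall" branch only queues the case for human arbitration, matching the meeting's conclusion that a human reader remains in the loop for AI-only recalls.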
Affiliation(s)
- T J A van Nijnatten
- Department of Radiology, University of Cambridge School of Clinical Medicine, Box 218, Level 5, Cambridge Biomedical Campus, Cambridge CB2 0QQ, United Kingdom; Department of Radiology and Nuclear Medicine, Maastricht University Medical Center+, Maastricht, the Netherlands; GROW - School for Oncology and Reproduction, Maastricht University Medical Center+, Maastricht, the Netherlands
- N R Payne
- Department of Radiology, University of Cambridge School of Clinical Medicine, Box 218, Level 5, Cambridge Biomedical Campus, Cambridge CB2 0QQ, United Kingdom
- S E Hickman
- Department of Radiology, University of Cambridge School of Clinical Medicine, Box 218, Level 5, Cambridge Biomedical Campus, Cambridge CB2 0QQ, United Kingdom; Department of Radiology, Barts Health NHS Trust, The Royal London Hospital, 80 Newark Street, London E1 2ES, United Kingdom
- H Ashrafian
- Institute of Global Health Innovation, Department of Surgery and Cancer, Imperial College London, St Mary's Hospital, London, United Kingdom
- F J Gilbert
- Department of Radiology, University of Cambridge School of Clinical Medicine, Box 218, Level 5, Cambridge Biomedical Campus, Cambridge CB2 0QQ, United Kingdom; Cambridge University Hospitals NHS Foundation Trust, Hills Road, Cambridge CB2 0QQ, United Kingdom.
19
Jin E, Zhao D, Wu G, Zhu J, Wang Z, Wei Z, Zhang S, Wang A, Tang B, Chen X, Sun Y, Zhang Z, Zhao W, Meng Y. OBIA: An Open Biomedical Imaging Archive. Genomics Proteomics Bioinformatics 2023; 21:1059-1065. [PMID: 37806555] [PMCID: PMC10928373] [DOI: 10.1016/j.gpb.2023.09.003]
Abstract
With the development of artificial intelligence (AI) technologies, biomedical imaging data play an important role in scientific research and clinical application, but the available resources are limited. Here we present Open Biomedical Imaging Archive (OBIA), a repository for archiving biomedical imaging and related clinical data. OBIA adopts five data objects (Collection, Individual, Study, Series, and Image) for data organization, and accepts the submission of biomedical images of multiple modalities, organs, and diseases. In order to protect personal privacy, OBIA has formulated a unified de-identification and quality control process. In addition, OBIA provides friendly and intuitive web interfaces for data submission, browsing, and retrieval, as well as image retrieval. As of September 2023, OBIA has housed data for a total of 937 individuals, 4136 studies, 24,701 series, and 1,938,309 images covering 9 modalities and 30 anatomical sites. Collectively, OBIA provides a reliable platform for biomedical imaging data management and offers free open access to all publicly available data to support research activities throughout the world. OBIA can be accessed at https://ngdc.cncb.ac.cn/obia.
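OBIA's five data objects form a simple containment hierarchy. A minimal sketch as nested dataclasses is shown below; the five class names come from the abstract, but every field name is a guess for illustration, not OBIA's actual schema:

```python
from dataclasses import dataclass, field

# Sketch of OBIA's five data objects: Collection > Individual > Study > Series > Image.
# Field names other than the five object names are hypothetical.

@dataclass
class Image:
    file_name: str
    modality: str

@dataclass
class Series:
    series_id: str
    images: list = field(default_factory=list)

@dataclass
class Study:
    study_id: str
    series: list = field(default_factory=list)

@dataclass
class Individual:
    individual_id: str  # de-identified, per OBIA's privacy process
    studies: list = field(default_factory=list)

@dataclass
class Collection:
    name: str
    individuals: list = field(default_factory=list)

coll = Collection("demo", [
    Individual("I0001", [
        Study("S1", [Series("SE1", [Image("im1.dcm", "MR")])]),
    ]),
])
print(coll.individuals[0].studies[0].series[0].images[0].modality)  # MR
```

The nesting mirrors the archive's reported counts (individuals < studies < series < images), with each level owning a list of the next.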
Affiliation(s)
- Enhui Jin
- National Genomics Data Center, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing 100101, China; CAS Key Laboratory of Genome Sciences and Information, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing 100101, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Dongli Zhao
- Chinese People's Liberation Army (PLA) Medical School, Beijing 100853, China
- Gangao Wu
- National Genomics Data Center, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing 100101, China; CAS Key Laboratory of Genome Sciences and Information, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing 100101, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Junwei Zhu
- National Genomics Data Center, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing 100101, China; CAS Key Laboratory of Genome Sciences and Information, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing 100101, China
- Zhonghuang Wang
- National Genomics Data Center, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing 100101, China; CAS Key Laboratory of Genome Sciences and Information, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing 100101, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Zhiyao Wei
- Chinese People's Liberation Army (PLA) Medical School, Beijing 100853, China
- Sisi Zhang
- National Genomics Data Center, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing 100101, China; CAS Key Laboratory of Genome Sciences and Information, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing 100101, China
- Anke Wang
- National Genomics Data Center, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing 100101, China; CAS Key Laboratory of Genome Sciences and Information, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing 100101, China
- Bixia Tang
- National Genomics Data Center, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing 100101, China; CAS Key Laboratory of Genome Sciences and Information, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing 100101, China
- Xu Chen
- National Genomics Data Center, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing 100101, China; CAS Key Laboratory of Genome Sciences and Information, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing 100101, China
- Yanling Sun
- National Genomics Data Center, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing 100101, China; CAS Key Laboratory of Genome Sciences and Information, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing 100101, China.
- Zhe Zhang
- Department of Obstetrics and Gynecology, Seventh Medical Center of Chinese PLA General Hospital, Beijing 100700, China.
- Wenming Zhao
- National Genomics Data Center, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing 100101, China; CAS Key Laboratory of Genome Sciences and Information, Beijing Institute of Genomics, Chinese Academy of Sciences and China National Center for Bioinformation, Beijing 100101, China; University of Chinese Academy of Sciences, Beijing 100049, China.
- Yuanguang Meng
- Chinese People's Liberation Army (PLA) Medical School, Beijing 100853, China; Department of Obstetrics and Gynecology, Seventh Medical Center of Chinese PLA General Hospital, Beijing 100700, China.
20
Logan J, Kennedy PJ, Catchpoole D. A review of the machine learning datasets in mammography, their adherence to the FAIR principles and the outlook for the future. Sci Data 2023; 10:595. [PMID: 37684306] [PMCID: PMC10491669] [DOI: 10.1038/s41597-023-02430-6]
Abstract
The increasing rates of breast cancer, particularly in emerging economies, have led to interest in scalable deep learning-based solutions that improve the accuracy and cost-effectiveness of mammographic screening. However, such tools require large volumes of high-quality training data, which can be challenging to obtain. This paper combines the experience of an AI startup with an analysis of the eight available datasets against the FAIR principles. It demonstrates that the datasets vary considerably, particularly in their interoperability, as each dataset is skewed towards a particular clinical use-case. Additionally, the mix of digital captures and scanned film compounds the problem of variability, along with differences in licensing terms, ease of access, labelling reliability, and file formats. Improving interoperability through adherence to standards such as the BI-RADS criteria for labelling and annotation, and a consistent file format, could markedly improve access to and use of larger amounts of standardized data. This, in turn, could be increased further by GAN-based synthetic data generation, paving the way towards better health outcomes for breast cancer.
Affiliation(s)
- Joe Logan
- Alixir Technologies Pty Ltd, Sydney, NSW, Australia.
- Australian Artificial Intelligence Institute, University of Technology Sydney, Sydney, NSW, Australia.
- Paul J Kennedy
- Australian Artificial Intelligence Institute, University of Technology Sydney, Sydney, NSW, Australia
- Daniel Catchpoole
- Australian Artificial Intelligence Institute, University of Technology Sydney, Sydney, NSW, Australia
- The Tumour Bank, The Children's Cancer Research Unit, Kids Research, The Children's Hospital at Westmead, Sydney, NSW, Australia
21
Cantone M, Marrocco C, Tortorella F, Bria A. Learnable DoG convolutional filters for microcalcification detection. Artif Intell Med 2023; 143:102629. [PMID: 37673567] [DOI: 10.1016/j.artmed.2023.102629]
Abstract
Difference of Gaussians (DoG) convolutional filters are one of the earliest image processing methods employed for detecting microcalcifications on mammogram images before machine and deep learning methods became widespread. DoG is a blob enhancement filter that consists of subtracting one Gaussian-smoothed version of an image from another, less Gaussian-smoothed version of the same image. Smoothing with a Gaussian kernel suppresses high-frequency spatial information, thus DoG can be regarded as a band-pass filter. However, due to their small size and superimposed breast tissue, microcalcifications vary greatly in contrast-to-noise ratio and sharpness. This makes it difficult to find a single DoG configuration that enhances all microcalcifications. In this work, we propose a convolutional network, named DoG-MCNet, whose first layer automatically learns a bank of DoG filters parameterized by their associated standard deviations. We experimentally show that when employed for microcalcification detection, our DoG layer acts as a learnable bank of band-pass preprocessing filters and improves detection performance by 4.86% AUFROC over the baseline MCNet and 1.53% AUFROC over a state-of-the-art multicontext ensemble of CNNs.
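A plain, fixed-parameter DoG can be sketched in a few lines. The 1D version below is not the paper's DoG-MCNet layer (which learns the standard deviations by backpropagation); it only illustrates the band-pass behaviour the abstract describes: a strong response at a small blob and a near-zero response on flat background:

```python
import math

def gaussian_kernel(sigma, radius):
    """Discrete, normalised 1D Gaussian kernel."""
    weights = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def convolve(signal, kernel):
    """'Same'-size 1D convolution with zero padding at the borders."""
    r = len(kernel) // 2
    padded = [0.0] * r + list(signal) + [0.0] * r
    return [sum(k * padded[i + j] for j, k in enumerate(kernel)) for i in range(len(signal))]

def dog_filter(signal, sigma1, sigma2, radius=4):
    """Difference of Gaussians: less-smoothed minus more-smoothed (sigma1 < sigma2)."""
    a = convolve(signal, gaussian_kernel(sigma1, radius))
    b = convolve(signal, gaussian_kernel(sigma2, radius))
    return [x - y for x, y in zip(a, b)]

# A small bright 'blob' (a stand-in for a microcalcification) on a flat
# background: the DoG response peaks at the blob and stays near zero elsewhere.
signal = [1.0] * 20
signal[10] = 5.0
response = dog_filter(signal, sigma1=0.8, sigma2=2.0)
print(response[10] > response[2])  # True: blob response dominates
```

The paper's point is precisely that no single (sigma1, sigma2) pair suits all microcalcifications, which motivates making the standard deviations learnable.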
Affiliation(s)
- Marco Cantone
- Department of Electrical and Information Engineering, University of Cassino and Southern Latium, Cassino, FR 03043, Italy
- Claudio Marrocco
- Department of Electrical and Information Engineering, University of Cassino and Southern Latium, Cassino, FR 03043, Italy
- Francesco Tortorella
- Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, Fisciano, SA 84084, Italy
- Alessandro Bria
- Department of Electrical and Information Engineering, University of Cassino and Southern Latium, Cassino, FR 03043, Italy
22
Mongan J, Halabi SS. On the Centrality of Data: Data Resources in Radiologic Artificial Intelligence. Radiol Artif Intell 2023; 5:e230231. [PMID: 37795139] [PMCID: PMC10546351] [DOI: 10.1148/ryai.230231]
Affiliation(s)
- John Mongan
- Safwan S. Halabi
- From the Department of Radiology and Biomedical Imaging; Center for Intelligent Imaging, University of California San Francisco, 505 Parnassus Ave, Room M-391, San Francisco, CA 94143 (J.M.); and Department of Medical Imaging, Ann & Robert H. Lurie Children’s Hospital of Chicago, Chicago, Ill (S.S.H.)
23
Banerjee I, Bhattacharjee K, Burns JL, Trivedi H, Purkayastha S, Seyyed-Kalantari L, Patel BN, Shiradkar R, Gichoya J. "Shortcuts" Causing Bias in Radiology Artificial Intelligence: Causes, Evaluation, and Mitigation. J Am Coll Radiol 2023; 20:842-851. [PMID: 37506964] [PMCID: PMC11192466] [DOI: 10.1016/j.jacr.2023.06.025]
Abstract
Despite the expert-level performance of artificial intelligence (AI) models for various medical imaging tasks, real-world performance failures with disparate outputs for various subgroups limit the usefulness of AI in improving patients' lives. Many definitions of fairness have been proposed, with discussions of the tensions that arise when choosing an appropriate metric for evaluating bias; for example, should one aim for individual or group fairness? One central observation is that AI models apply "shortcut learning," whereby spurious features (such as chest tubes and portable radiographic markers on intensive care unit chest radiography) on medical images are used for prediction instead of identifying true pathology. Moreover, AI has been shown to have a remarkable ability to detect protected attributes of age, sex, and race, while the same models demonstrate bias against historically underserved subgroups of age, sex, and race in disease diagnosis. Therefore, an AI model may take shortcut predictions from these correlations and subsequently generate an outcome that is biased toward certain subgroups even when protected attributes are not explicitly used as inputs into the model. As a result, these subgroups become nonprivileged subgroups. In this review, the authors discuss the various types of bias from shortcut learning that may occur at different phases of AI model development, including data bias, modeling bias, and inference bias. The authors thereafter summarize various tool kits that can be used to evaluate and mitigate bias and note that these have largely been applied to nonmedical domains and require more evaluation for medical AI. The authors then summarize current techniques for mitigating bias from preprocessing (data-centric solutions) and during model development (computational solutions) and postprocessing (recalibration of learning).
Ongoing legal changes where the use of a biased model will be penalized highlight the necessity of understanding, detecting, and mitigating biases from shortcut learning and will require diverse research teams looking at the whole AI pipeline.
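A first practical step toward the subgroup bias evaluation this review describes is simply to compare discrimination across groups. The sketch below (hypothetical data and helper names, not a tool kit from the review) computes per-subgroup AUC in its Mann-Whitney form and the largest pairwise gap:

```python
import numpy as np

def auc(y_true, y_score):
    """AUC as P(random positive outranks random negative); ties count half."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    return ((pos[:, None] > neg[None, :]).mean()
            + 0.5 * (pos[:, None] == neg[None, :]).mean())

def subgroup_auc_gap(y_true, y_score, group):
    """Per-subgroup AUC and the largest gap between any two subgroups,
    a simple first check for disparate model performance."""
    aucs = {g: auc(y_true[group == g], y_score[group == g])
            for g in np.unique(group)}
    return aucs, max(aucs.values()) - min(aucs.values())

# Toy example: the model separates cases perfectly in group A
# but is anti-predictive in group B.
y = np.array([0, 0, 1, 1, 0, 0, 1, 1])
s = np.array([0.1, 0.2, 0.8, 0.9, 0.8, 0.9, 0.1, 0.2])
g = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
per_group, gap = subgroup_auc_gap(y, s, g)
```

A large gap flags potential shortcut learning for follow-up; a formal assessment would add confidence intervals or an interaction test rather than comparing point estimates alone.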
Affiliation(s)
- Imon Banerjee
- Department of Radiology, Mayo Clinic, Scottsdale, Arizona; School of Computing and Augmented Intelligence, Arizona State University, Tempe, Arizona
- John L Burns
- Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, Indiana
- Hari Trivedi
- Department of Radiology, Emory School of Medicine, Atlanta, Georgia
- Saptarshi Purkayastha
- Department of BioHealth Informatics, Indiana University-Purdue University Indianapolis, Indianapolis, Indiana
- Laleh Seyyed-Kalantari
- Department of Electrical Engineering and Computer Science, York University, Toronto, Ontario, Canada
- Bhavik N Patel
- Department of Radiology, Mayo Clinic, Scottsdale, Arizona; School of Computing and Augmented Intelligence, Arizona State University, Tempe, Arizona
- Rakesh Shiradkar
- Department of Biomedical Engineering, Emory University, Atlanta, Georgia; Georgia Institute of Technology, Atlanta, Georgia
- Judy Gichoya
- Department of Radiology, Emory School of Medicine, Atlanta, Georgia
24
Liu CF, Leigh R, Johnson B, Urrutia V, Hsu J, Xu X, Li X, Mori S, Hillis AE, Faria AV. A large public dataset of annotated clinical MRIs and metadata of patients with acute stroke. Sci Data 2023; 10:548. [PMID: 37607929] [PMCID: PMC10444746] [DOI: 10.1038/s41597-023-02457-9]
Abstract
Extracting meaningful and reproducible models of brain function from stroke images, for both clinical and research purposes, is a daunting task, severely hindered by the great variability of lesion frequency and patterns. Large datasets are therefore imperative, as are fully automated image postprocessing tools to analyze them. The development of such tools, particularly with artificial intelligence, is highly dependent on the availability of large datasets for model training and testing. We present a public dataset of 2,888 multimodal clinical MRIs of patients with acute and early subacute stroke, with manual lesion segmentations and metadata. The dataset provides high-quality, large-scale, human-supervised knowledge to feed artificial intelligence models and enable further development of tools to automate several tasks that currently rely on human labor, such as lesion segmentation, labeling, calculation of disease-relevant scores, and lesion-based studies relating function to lesion frequency maps.
Affiliation(s)
- Chin-Fu Liu
- Center for Imaging Science, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Richard Leigh
- Department of Neurology, School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Brenda Johnson
- Department of Neurology, School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Victor Urrutia
- Department of Neurology, School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Johnny Hsu
- Department of Radiology, School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Xin Xu
- Department of Radiology, School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Xin Li
- Department of Radiology, School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Susumu Mori
- Department of Radiology, School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Argye E Hillis
- Department of Neurology, School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Department of Physical Medicine & Rehabilitation, and Department of Cognitive Science, Johns Hopkins University, Baltimore, MD, USA
- Andreia V Faria
- Department of Radiology, School of Medicine, Johns Hopkins University, Baltimore, MD, USA
25
Taylor CR, Monga N, Johnson C, Hawley JR, Patel M. Artificial Intelligence Applications in Breast Imaging: Current Status and Future Directions. Diagnostics (Basel) 2023; 13:2041. [PMID: 37370936] [DOI: 10.3390/diagnostics13122041]
Abstract
Attempts to use computers to aid in the detection of breast malignancies date back more than 20 years. Despite significant interest and investment, traditional computer-aided detection historically yielded minimal or no significant improvement in performance and outcomes. However, recent advances in artificial intelligence and machine learning are now starting to deliver on the promise of improved performance. There are at present more than 20 FDA-approved AI applications for breast imaging, but adoption and utilization are widely variable and low overall. Breast imaging is unique and has aspects that create both opportunities and challenges for AI development and implementation. Breast cancer screening programs worldwide rely on screening mammography to reduce the morbidity and mortality of breast cancer, and many of the most exciting research projects and available AI applications focus on cancer detection for mammography. There are, however, multiple additional potential applications for AI in breast imaging, including decision support, risk assessment, breast density quantitation, workflow and triage, quality evaluation, response to neoadjuvant chemotherapy assessment, and image enhancement. In this review, the current status, availability, and future directions of investigation of these applications are discussed, as well as the opportunities and barriers to more widespread utilization.
Affiliation(s)
- Clayton R Taylor
- Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
- Natasha Monga
- Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
- Candise Johnson
- Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
- Jeffrey R Hawley
- Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
- Mitva Patel
- Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
26
Damiani C, Kalliatakis G, Sreenivas M, Al-Attar M, Rose J, Pudney C, Lane EF, Cuzick J, Montana G, Brentnall AR. Evaluation of an AI Model to Assess Future Breast Cancer Risk. Radiology 2023; 307:e222679. [PMID: 37310244] [DOI: 10.1148/radiol.222679]
Abstract
Background Accurate breast cancer risk assessment after a negative screening result could enable better strategies for early detection. Purpose To evaluate a deep learning algorithm for risk assessment based on digital mammograms. Materials and Methods A retrospective observational matched case-control study was designed using the OPTIMAM Mammography Image Database from the National Health Service Breast Screening Programme in the United Kingdom from February 2010 to September 2019. Patients with breast cancer (cases) were diagnosed following a mammographic screening or between two triennial screening rounds. Controls were matched based on mammography device, screening site, and age. The artificial intelligence (AI) model only used mammograms at screening before diagnosis. The primary objective was to assess model performance, with a secondary objective to assess heterogeneity and calibration slope. The area under the receiver operating characteristic curve (AUC) was estimated for 3-year risk. Heterogeneity according to cancer subtype was assessed using a likelihood ratio interaction test. Statistical significance was set at P < .05. Results Analysis included patients with screen-detected (median age, 60 years [IQR, 55-65 years]; 2044 female patients, including 1528 with invasive cancer and 503 with ductal carcinoma in situ [DCIS]) or interval (median age, 59 years [IQR, 53-65 years]; 696 female patients, including 636 with invasive cancer and 54 with DCIS) breast cancer and 1:1 matched controls, each with a complete set of mammograms at the screening preceding diagnosis. The AI model had an overall AUC of 0.68 (95% CI: 0.66, 0.70), with no evidence of a significant difference between interval and screen-detected cancer (AUC, 0.69 vs 0.67; P = .085). The calibration slope was 1.13 (95% CI: 1.01, 1.26). There was similar performance for the detection of invasive cancer versus DCIS (AUC, 0.68 vs 0.66; P = .057).
The model had higher performance for advanced cancer risk (AUC, 0.72 ≥stage II vs 0.66
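The calibration slope reported in this abstract is typically estimated by logistic regression of observed outcomes on the log-odds of predicted risk; a slope near 1 indicates well-scaled risk estimates, while a slope below 1 suggests predictions that are too extreme. The NumPy sketch below is illustrative only (synthetic data, hypothetical function name), not the study's code:

```python
import numpy as np

def calibration_slope(y, p, iters=25):
    """Fit logistic regression y ~ intercept + slope * logit(p) by
    Newton-Raphson and return the slope. A slope of 1.0 means the
    predicted risks are perfectly scaled."""
    x = np.log(p / (1 - p))                      # log-odds of predicted risk
    X = np.column_stack([np.ones_like(x), x])    # design matrix: intercept + slope
    beta = np.zeros(2)
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))     # current fitted probabilities
        grad = X.T @ (y - mu)                    # score (gradient of log-likelihood)
        hess = X.T @ ((mu * (1 - mu))[:, None] * X)
        beta += np.linalg.solve(hess, grad)      # Newton step
    return beta[1]

# Synthetic check: outcomes drawn exactly from the stated risks
# should yield a slope close to 1.
rng = np.random.default_rng(0)
p = rng.uniform(0.05, 0.95, 20000)
y = (rng.uniform(size=20000) < p).astype(float)
slope = calibration_slope(y, p)
```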
Affiliation(s)
- Celeste Damiani
- Grigorios Kalliatakis
- Muthyala Sreenivas
- Miaad Al-Attar
- Janice Rose
- Clare Pudney
- Emily F Lane
- Jack Cuzick
- Giovanni Montana
- Adam R Brentnall
- From the Center for Human Technologies, Istituto Italiano di Tecnologia, Via Melen 83, Genoa 16152, Italy (C.D.); Wolfson Institute of Population Health, Queen Mary University of London, London, UK (C.D., E.F.L., J.C., A.R.B.); Institute of Computer Science (ICS), Foundation of Research and Technology Hellas, Heraklion, Crete, Greece (G.K.); Joint Director for Breast Screening, University Hospitals Coventry and Warwickshire NHS Trust, Coventry, UK (M.S.); Department of Oncoplastic Breast Surgery, University Hospitals of Leicester NHS Trust, Leicester, UK (M.A.A.); Consumer member at National Cancer Research Institute, Breast Group, London, UK (J.R., C.P.); and University of Warwick, WMG, Coventry, UK (G.M.)
27
Ozcan BB, Patel BK, Banerjee I, Dogan BE. Artificial Intelligence in Breast Imaging: Challenges of Integration Into Clinical Practice. J Breast Imaging 2023; 5:248-257. [PMID: 38416888] [DOI: 10.1093/jbi/wbad007]
Abstract
Artificial intelligence (AI) in breast imaging is a rapidly developing field with promising results. Despite the large number of recent publications in this field, unanswered questions have led to limited implementation of AI into daily clinical practice for breast radiologists. This paper provides an overview of the key limitations of AI in breast imaging including, but not limited to, limited numbers of FDA-approved algorithms and annotated data sets with histologic ground truth; concerns surrounding data privacy, security, algorithm transparency, and bias; and ethical issues. Ultimately, the successful implementation of AI into clinical care will require thoughtful action to address these challenges, transparency, and sharing of AI implementation workflows, limitations, and performance metrics within the breast imaging community and other end-users.
Affiliation(s)
- B Bersu Ozcan
- The University of Texas Southwestern Medical Center, Department of Radiology, Dallas, TX, USA
- Imon Banerjee
- Mayo Clinic, Department of Radiology, Scottsdale, AZ, USA
- Basak E Dogan
- The University of Texas Southwestern Medical Center, Department of Radiology, Dallas, TX, USA
28
Nguyen HT, Nguyen HQ, Pham HH, Lam K, Le LT, Dao M, Vu V. VinDr-Mammo: A large-scale benchmark dataset for computer-aided diagnosis in full-field digital mammography. Sci Data 2023; 10:277. [PMID: 37173336] [PMCID: PMC10182079] [DOI: 10.1038/s41597-023-02100-7]
Abstract
Mammography, or breast X-ray imaging, is the most widely used imaging modality to detect cancer and other breast diseases. Recently, deep learning-based computer-assisted detection and diagnosis (CADe/x) tools have been developed to support physicians and improve the accuracy of mammography interpretation. A number of large-scale mammography datasets from different populations, with various associated annotations and clinical data, have been introduced to study the potential of learning-based methods in the field of breast radiology. With the aim of developing more robust and more interpretable support systems in breast imaging, we introduce VinDr-Mammo, a Vietnamese dataset of digital mammography with breast-level assessment and extensive lesion-level annotations, enhancing the diversity of the publicly available mammography data. The dataset consists of 5,000 mammography exams, each of which has four standard views and is double read, with any disagreement resolved by arbitration. The dataset provides Breast Imaging Reporting and Data System (BI-RADS) assessment and breast density at the individual breast level. In addition, it provides the category, location, and BI-RADS assessment of non-benign findings. We make VinDr-Mammo publicly available as a new imaging resource to promote advances in developing CADe/x tools for mammography interpretation.
Affiliation(s)
- Ha Q Nguyen
- Institute of Big Data, Hanoi, Vietnam
- College of Engineering and Computer Science (CECS), VinUniversity, Hanoi, Vietnam
- Hieu H Pham
- Institute of Big Data, Hanoi, Vietnam
- College of Engineering and Computer Science (CECS), VinUniversity, Hanoi, Vietnam
- VinUni-Illinois Smart Health Center, Hanoi, Vietnam
- Khanh Lam
- Hospital 108, Department of Radiology, Hanoi, Vietnam
- Linh T Le
- Hanoi Medical University Hospital, Department of Radiology, Hanoi, Vietnam
- Minh Dao
- Institute of Big Data, Hanoi, Vietnam
- Van Vu
- Institute of Big Data, Hanoi, Vietnam
- Yale University, Department of Mathematics, New Haven, CT, 06511, USA
29
Mračko A, Vanovčanová L, Cimrák I. Mammography Datasets for Neural Networks-Survey. J Imaging 2023; 9:jimaging9050095. [PMID: 37233314] [DOI: 10.3390/jimaging9050095]
Abstract
Deep neural networks have gained popularity in the field of mammography. Data play an integral role in training these models, as training algorithms requires a large amount of data to capture the general relationship between the model's input and output. Open-access databases are the most accessible source of mammography data for training neural networks. Our work focuses on conducting a comprehensive survey of mammography databases that contain images with defined abnormal areas of interest. The survey includes databases such as INbreast, the Curated Breast Imaging Subset of Digital Database for Screening Mammography (CBIS-DDSM), the OPTIMAM Medical Image Database (OMI-DB), and The Mammographic Image Analysis Society Digital Mammogram Database (MIAS). Additionally, we surveyed recent studies that have utilized these databases in conjunction with neural networks and the results they have achieved. From these databases, it is possible to obtain at least 3801 unique images with 4125 described findings from approximately 1842 patients. The number of patients with important findings can be increased to approximately 14,474, depending on the type of agreement with the OPTIMAM team. Furthermore, we provide a description of the annotation process for mammography images to enhance the understanding of the information gained from these datasets.
Affiliation(s)
- Adam Mračko
- Faculty of Management Science and Informatics, University of Žilina, 010 26 Žilina, Slovakia
- Research Centre, University of Žilina, 010 26 Žilina, Slovakia
- Lucia Vanovčanová
- 2nd Radiology Department, Faculty of Medicine, Comenius University in Bratislava, 813 72 Bratislava, Slovakia
- St. Elizabeth Cancer Institute, 812 50 Bratislava, Slovakia
- Ivan Cimrák
- Faculty of Management Science and Informatics, University of Žilina, 010 26 Žilina, Slovakia
- Research Centre, University of Žilina, 010 26 Žilina, Slovakia
30
Cai H, Wang J, Dan T, Li J, Fan Z, Yi W, Cui C, Jiang X, Li L. An Online Mammography Database with Biopsy Confirmed Types. Sci Data 2023; 10:123. [PMID: 36882402] [PMCID: PMC9992520] [DOI: 10.1038/s41597-023-02025-1]
Abstract
Breast carcinoma is the second most common cancer among women worldwide. Early detection of breast cancer has been shown to increase the survival rate, thereby significantly increasing patients' lifespan. Mammography, a noninvasive imaging tool with low cost, is widely used to diagnose breast disease at an early stage due to its high sensitivity. Although some public mammography datasets are useful, there is still a lack of open-access datasets that extend beyond the white population, and many lack biopsy confirmation or have unknown molecular subtypes. To fill this gap, we built an online database comprising two breast mammography datasets. The database, named the Chinese Mammography Database (CMMD), contains 3712 mammograms from 1775 patients and is divided into two branches. The first dataset, CMMD1, contains 1026 cases (2214 mammograms) with biopsy-confirmed benign or malignant tumors. The second dataset, CMMD2, includes 1498 mammograms from 749 patients with known molecular subtypes. Our database is constructed to enrich the diversity of mammography data and promote the development of relevant fields.
Affiliation(s)
- Hongmin Cai
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510006, China
- Jinhua Wang
- Medical Imaging Center, Shenzhen Hospital, Southern Medical University, Shenzhen, 510515, China
- The Third of Clinical Medicine, Southern Medical University, Shenzhen, 510515, China
- Tingting Dan
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510006, China
- Jiao Li
- Department of Medical Imaging, Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Zhihao Fan
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510006, China
- Weiting Yi
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, 510006, China
- Chunyan Cui
- Department of Medical Imaging, Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Xinhua Jiang
- Department of Medical Imaging, Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
- Li Li
- Department of Medical Imaging, Collaborative Innovation Center for Cancer Medicine, State Key Laboratory of Oncology in South China, Sun Yat-sen University Cancer Center, Guangzhou, 510060, China
31
Cushnan D, Young KC, Ward D, Halling-Brown MD, Duffy S, Given-Wilson R, Wallis MG, Wilkinson L, Lyburn I, Sidebottom R, McAvinchey R, Lewis EB, Mackenzie A, Warren LM. Lessons learned from independent external validation of an AI tool to detect breast cancer using a representative UK data set. Br J Radiol 2023; 96:20211104. [PMID: 36607283] [PMCID: PMC9975375] [DOI: 10.1259/bjr.20211104]
Abstract
OBJECTIVE To pilot a process for the independent external validation of an artificial intelligence (AI) tool to detect breast cancer using data from the NHS breast screening programme (NHSBSP). METHODS A representative data set of mammography images from 26,000 women attending 2 NHS screening centres, and an enriched data set of 2054 positive cases, were used from the OPTIMAM image database. The use case of the AI tool was the replacement of the first or second human reader. The performance of the AI tool was compared to that of human readers in the NHSBSP. RESULTS Recommendations for future external validations of AI tools to detect breast cancer are provided. The tool recalled different breast cancers to the human readers. This study showed the importance of testing AI tools on all types of cases (including non-standard ones) and of ensuring that any warning messages are clear. The acceptable difference in sensitivity and specificity between the AI tool and human readers should be determined. Any information vital for the clinical application should be a required output for the AI tool. It is recommended that the interaction of radiologists with the AI tool, and the effect of the AI tool on arbitration, be investigated prior to clinical use. CONCLUSION This pilot demonstrated several lessons for future independent external validation of AI tools for breast cancer detection. ADVANCES IN KNOWLEDGE Knowledge has been gained towards best practice procedures for performing independent external validations of AI tools for the detection of breast cancer using data from the NHS Breast Screening Programme.
Collapse
Affiliation(s)
- Dominic Ward
- Royal Surrey NHS Foundation Trust, Guildford, United Kingdom
- Stephen Duffy
- Queen Mary University London, London, United Kingdom
- Matthew G Wallis
- Cambridge Breast Unit and NIHR Cambridge Biomedical Research Centre, Cambridge University Hospitals NHS Trust, Cambridge, United Kingdom
- Louise Wilkinson
- Oxford University Hospitals NHS Foundation Trust, Oxford, United Kingdom
- Emma B Lewis
- Royal Surrey NHS Foundation Trust, Guildford, United Kingdom
- Lucy M Warren
- Royal Surrey NHS Foundation Trust, Guildford, United Kingdom
32
Cadrin-Chênevert A. Unleashing the Power of Deep Learning for Breast Cancer Detection through Open Mammography Datasets. Radiol Artif Intell 2023; 5:e220294. PMID: 37035433; PMCID: PMC10077079; DOI: 10.1148/ryai.220294.
Affiliation(s)
- Alexandre Cadrin-Chênevert
- From the CISSS Lanaudière-Medical Imaging, 200 Louis-Vadeboncoeur Saint-Charles-Borromee, Saint Charles Borromee, QC, Canada J6E 6J2
33
Loizidou K, Elia R, Pitris C. Computer-aided breast cancer detection and classification in mammography: A comprehensive review. Comput Biol Med 2023; 153:106554. PMID: 36646021; DOI: 10.1016/j.compbiomed.2023.106554.
Abstract
Cancer is the second leading cause of mortality worldwide and remains a perilous disease. Breast cancer accounts for ∼20% of all new cancer cases worldwide, making it a major cause of morbidity and mortality. Mammography is an effective screening tool for the early detection and management of breast cancer. However, the identification and interpretation of breast lesions is challenging even for expert radiologists. For that reason, several Computer-Aided Diagnosis (CAD) systems are being developed to assist radiologists in accurately detecting and/or classifying breast cancer. This review examines the recent literature on the automatic detection and/or classification of breast cancer in mammograms, using both conventional feature-based machine learning and deep learning algorithms. The review begins with a comparison of algorithms developed specifically for the detection and/or classification of two types of breast abnormalities, micro-calcifications and masses, followed by the use of sequential mammograms for improving the performance of the algorithms. The available Food and Drug Administration (FDA)-approved CAD systems for triage and diagnosis of breast cancer in mammograms are subsequently presented. Finally, a description of the open-access mammography datasets is provided and potential opportunities for future work in this field are highlighted. This comprehensive review can serve both as a thorough introduction to the field and as a guide to future applications.
Affiliation(s)
- Kosmia Loizidou
- KIOS Research and Innovation Center of Excellence, Department of Electrical and Computer Engineering, University of Cyprus, Nicosia, Cyprus.
- Rafaella Elia
- KIOS Research and Innovation Center of Excellence, Department of Electrical and Computer Engineering, University of Cyprus, Nicosia, Cyprus.
- Costas Pitris
- KIOS Research and Innovation Center of Excellence, Department of Electrical and Computer Engineering, University of Cyprus, Nicosia, Cyprus.
34
Garrucho L, Kushibar K, Osuala R, Diaz O, Catanese A, del Riego J, Bobowicz M, Strand F, Igual L, Lekadir K. High-resolution synthesis of high-density breast mammograms: Application to improved fairness in deep learning based mass detection. Front Oncol 2023; 12:1044496. PMID: 36755853; PMCID: PMC9899892; DOI: 10.3389/fonc.2022.1044496.
Abstract
Computer-aided detection systems based on deep learning have shown good performance in breast cancer detection. However, detection performance is poorer in high-density breasts, since dense tissue can mask or even simulate masses; the sensitivity of mammography for breast cancer detection can be reduced by more than 20% in dense breasts. Additionally, extremely dense breasts carry an increased risk of cancer compared to low-density breasts. This study aims to improve mass detection performance in high-density breasts by using synthetic high-density full-field digital mammograms (FFDM) as data augmentation during breast mass detection model training. To this end, a total of five cycle-consistent GAN (CycleGAN) models were trained on three FFDM datasets for low-to-high-density image translation in high-resolution mammograms. The training images were split by breast density into BI-RADS categories, with BI-RADS A denoting almost entirely fatty and BI-RADS D extremely dense breasts. Our results showed that the proposed data augmentation technique improved the sensitivity and precision of mass detection in models trained with small datasets and improved the domain generalization of models trained with large databases. In addition, the clinical realism of the synthetic images was evaluated in a reader study involving two expert radiologists and one surgical oncologist.
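As a schematic illustration of the augmentation strategy described in this abstract (not the authors' implementation), low-density cases can be paired with synthetic high-density copies produced by a trained low-to-high-density generator; `translate_to_high_density` below is a hypothetical stand-in for such a CycleGAN generator:

```python
def augment_with_synthetic_density(cases, translate_to_high_density):
    """Add synthetic high-density copies of low-density mammogram cases.

    `cases` is a list of (image, birads_density, label) tuples;
    `translate_to_high_density` stands in for a trained low-to-high-density
    CycleGAN generator (hypothetical name, not from the paper).
    """
    augmented = list(cases)
    for image, density, label in cases:
        if density in ("A", "B"):  # low-density BI-RADS categories
            synthetic = translate_to_high_density(image)
            # the synthetic image keeps its lesion label but counts as dense
            augmented.append((synthetic, "D", label))
    return augmented

# toy usage with a string-based stand-in "generator"
cases = [("img1", "A", "mass"), ("img2", "D", "no_mass")]
augmented = augment_with_synthetic_density(cases, lambda img: img + "_dense")
print(len(augmented))  # 3: the two originals plus one synthetic dense copy
```

In a real pipeline the generator would be a trained image-to-image network and the augmented set would feed the mass detection model's training loop.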
Affiliation(s)
- Lidia Garrucho
- Barcelona Artificial Intelligence in Medicine Lab, Facultat de Matemàtques i Informàtica, Universitat de Barcelona, Barcelona, Spain
- Kaisar Kushibar
- Barcelona Artificial Intelligence in Medicine Lab, Facultat de Matemàtques i Informàtica, Universitat de Barcelona, Barcelona, Spain
- Richard Osuala
- Barcelona Artificial Intelligence in Medicine Lab, Facultat de Matemàtques i Informàtica, Universitat de Barcelona, Barcelona, Spain
- Oliver Diaz
- Barcelona Artificial Intelligence in Medicine Lab, Facultat de Matemàtques i Informàtica, Universitat de Barcelona, Barcelona, Spain
- Alessandro Catanese
- Unitat de Diagnòstic per la Imatge de la Mama (UDIM), Hospital Germans Trias i Pujol, Badalona, Spain
- Javier del Riego
- Área de Radiología Mamaria y Ginecólogica (UDIAT CD), Parc Taulí Hospital Universitari, Sabadell, Spain
- Maciej Bobowicz
- 2nd Department of Radiology, Medical University of Gdansk, Gdansk, Poland
- Fredrik Strand
- Breast Radiology, Karolinska University Hospital and Department of Oncology-Pathology, Karolinska Institutet, Stockholm, Sweden
- Laura Igual
- Barcelona Artificial Intelligence in Medicine Lab, Facultat de Matemàtques i Informàtica, Universitat de Barcelona, Barcelona, Spain
- Karim Lekadir
- Barcelona Artificial Intelligence in Medicine Lab, Facultat de Matemàtques i Informàtica, Universitat de Barcelona, Barcelona, Spain
35
Cantone M, Marrocco C, Tortorella F, Bria A. Convolutional Networks and Transformers for Mammography Classification: An Experimental Study. Sensors (Basel) 2023; 23:1229. PMID: 36772268; PMCID: PMC9921468; DOI: 10.3390/s23031229.
Abstract
Convolutional Neural Networks (CNNs) have received a large share of research attention in mammography image analysis due to their capability of extracting hierarchical features directly from raw data. Recently, Vision Transformers have emerged as a viable alternative to CNNs in medical imaging, in some cases performing on par with or better than their convolutional counterparts. In this work, we conduct an extensive experimental study comparing the most recent CNN and Vision Transformer architectures for whole-mammogram classification. We selected, trained, and tested 33 different models, 19 convolutional- and 14 transformer-based, on the largest publicly available mammography image database, OMI-DB. We also analyzed performance at eight different image resolutions and considered each individual lesion category in isolation (masses, calcifications, focal asymmetries, architectural distortions). Our findings confirm the potential of Vision Transformers, which performed on par with traditional CNNs like ResNet, but at the same time show the superiority of modern convolutional networks like EfficientNet.
Affiliation(s)
- Marco Cantone
- Department of Electrical and Information Engineering, University of Cassino and Southern Latium, 03043 Cassino, FR, Italy
- Claudio Marrocco
- Department of Electrical and Information Engineering, University of Cassino and Southern Latium, 03043 Cassino, FR, Italy
- Francesco Tortorella
- Department of Information and Electrical Engineering and Applied Mathematics, University of Salerno, 84084 Fisciano, SA, Italy
- Alessandro Bria
- Department of Electrical and Information Engineering, University of Cassino and Southern Latium, 03043 Cassino, FR, Italy
36
Hossain MB, Nishikawa RM, Lee J. Developing breast lesion detection algorithms for digital breast tomosynthesis: Leveraging false positive findings. Med Phys 2022; 49:7596-7608. PMID: 35916103; PMCID: PMC10156088; DOI: 10.1002/mp.15883.
Abstract
BACKGROUND Due to the complex nature of digital breast tomosynthesis (DBT) imaging, reading times are longer than for 2D mammograms. A robust computer-aided diagnosis system for DBT could help radiologists reduce their workload and reading times. PURPOSE The purpose of this study was to develop algorithms for detecting biopsy-proven breast lesions on DBT using multi-depth-level convolutional models and leveraging non-biopsied samples. As biopsied positive samples in a lesion dataset are limited, we hypothesized that false positive (FP) findings by detection algorithms on non-biopsied benign lesions could improve detection algorithms when used as data augmentation. APPROACH We first extracted 2D slices from DBT volumes with biopsy-proven breast lesions (cancer and benign), with non-biopsied benign lesions (actionable), and for controls. Then, to provide lesion continuity along the z-direction, we combined a lesion slice with its immediately adjacent slices to synthesize 2.5-dimensional (2.5D) images of the lesion by assigning them to the R, G, and B color channels. We used 224 biopsy-proven lesions from 39 cancer and 62 benign patients from a DBTex challenge dataset of 1000 scans. We included the 2.5D images of immediately neighboring slices from the lesion's center to increase the number of training samples. For lesion detection, we used the YOLOv5 algorithm as our base network. We trained a baseline algorithm (medium depth level) using biopsied samples to detect actionable FPs in non-biopsied images. Afterward, we fine-tuned the baseline model on the augmented image set (actionable FPs added). For lesion inference, we processed the DBT volume slice by slice to estimate bounding boxes in each slice, and then combined them by connecting bounding boxes along the depth via volumetric morphological closing. We trained an additional model (large) with deeper depth levels by repeating the above process.
Finally, we developed an ensemble algorithm by combining the medium and large detection models. We used the free-response operating characteristic curve to evaluate our algorithms. We reported mean sensitivity over FPs per DBT volume only for biopsied views, and sensitivity at 2 false positives per image (2FPI) for all views. However, due to limited access to the ground truth of the challenge validation and test datasets, we used sensitivity at 2FPI for statistical evaluation. RESULTS On the DBTex independent validation set, the medium baseline model achieved a mean sensitivity over FPs per DBT volume of 0.627 and a sensitivity of 0.640 at 2FPI. After adding actionable FP lesions, the model improved to a sensitivity at 2FPI of 0.769 over the baseline (p-value = 0.013). Our ensemble algorithm with multiple depth levels (medium + large) achieved a mean sensitivity over FPs per DBT volume of 0.815 and an improved sensitivity at 2FPI of 0.80 over the baseline (p-value < 0.001) on the validation set. Finally, our ensemble model achieved a mean sensitivity over FPs per DBT volume of 0.786 and a sensitivity of 0.743 at 2FPI on the DBTex independent test set. CONCLUSIONS Our results show that actionable FP findings hold useful information for lesion detection algorithms, and our ensemble detection model with multiple depth levels improves lesion detection performance.
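The 2.5D synthesis step described above (assigning a lesion slice and its immediate neighbours to the R, G, and B channels) can be sketched as follows; this is a minimal illustration on nested lists, not the authors' code:

```python
def make_2p5d(volume, k):
    """Stack slices k-1, k, k+1 of a DBT volume into R, G, B channels.

    `volume` is a list of 2D slices (lists of rows); edge slices are
    clamped so the first/last slice reuses its only neighbour.
    """
    lo = max(k - 1, 0)
    hi = min(k + 1, len(volume) - 1)
    r, g, b = volume[lo], volume[k], volume[hi]
    rows, cols = len(g), len(g[0])
    # per-pixel (R, G, B) triplets encode lesion continuity along depth
    return [[(r[i][j], g[i][j], b[i][j]) for j in range(cols)]
            for i in range(rows)]

volume = [[[10]], [[20]], [[30]]]  # three 1x1 "slices"
print(make_2p5d(volume, 1))  # [[(10, 20, 30)]]
```

At inference, the same slice-by-slice processing would produce per-slice detections that are then merged along the depth axis.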
Affiliation(s)
- Juhun Lee
- Department of Radiology, University of Pittsburgh, Pittsburgh, PA, USA
37
Chalkidou A, Shokraneh F, Kijauskaite G, Taylor-Phillips S, Halligan S, Wilkinson L, Glocker B, Garrett P, Denniston AK, Mackie A, Seedat F. Recommendations for the development and use of imaging test sets to investigate the test performance of artificial intelligence in health screening. Lancet Digit Health 2022; 4:e899-e905. PMID: 36427951; DOI: 10.1016/s2589-7500(22)00186-8.
Abstract
Rigorous evaluation of artificial intelligence (AI) systems for image classification is essential before deployment into health-care settings, such as screening programmes, so that adoption is effective and safe. A key step in the evaluation process is the external validation of diagnostic performance using a test set of images. We conducted a rapid literature review of methods to develop test sets, published from 2012 to 2020, in English. Using thematic analysis, we mapped themes and coded the principles using the Population, Intervention, Comparator or Reference standard, Outcome, and Study design framework. A group of screening and AI experts assessed the evidence-based principles for completeness and provided further considerations. Of the final 15 principles recommended here, five concern population, one intervention, two comparator, one reference standard, and one both reference standard and comparator; four are applicable to outcome and one to study design. Principles from the literature were useful for addressing biases from AI; however, they did not account for screening-specific biases, which we now incorporate. The principles set out here should be used to support the development and use of test sets for studies that assess the accuracy of AI within screening programmes, to ensure they are fit for purpose and minimise bias.
Affiliation(s)
- Farhad Shokraneh
- King's Technology Evaluation Centre, King's College London, London, UK
- Goda Kijauskaite
- UK National Screening Committee, Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK
- Steve Halligan
- Centre for Medical Imaging, Division of Medicine, University College London, London, UK
- Ben Glocker
- Department of Computing, Imperial College London, London, UK
- Peter Garrett
- Department of Chemical Engineering and Analytical Science, University of Manchester, Manchester, UK
- Alastair K Denniston
- Department of Ophthalmology, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Anne Mackie
- UK National Screening Committee, Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK
- Farah Seedat
- UK National Screening Committee, Office for Health Improvement and Disparities, Department of Health and Social Care, London, UK
38
Garrucho L, Kushibar K, Jouide S, Diaz O, Igual L, Lekadir K. Domain generalization in deep learning based mass detection in mammography: A large-scale multi-center study. Artif Intell Med 2022; 132:102386. PMID: 36207090; DOI: 10.1016/j.artmed.2022.102386.
Abstract
Computer-aided detection systems based on deep learning have shown great potential in breast cancer detection. However, the lack of domain generalization of artificial neural networks is an important obstacle to their deployment in changing clinical environments. In this study, we explored the domain generalization of deep learning methods for mass detection in digital mammography and analyzed in-depth the sources of domain shift in a large-scale multi-center setting. To this end, we compared the performance of eight state-of-the-art detection methods, including Transformer based models, trained in a single domain and tested in five unseen domains. Moreover, a single-source mass detection training pipeline was designed to improve the domain generalization without requiring images from the new domain. The results show that our workflow generalized better than state-of-the-art transfer learning based approaches in four out of five domains while reducing the domain shift caused by the different acquisition protocols and scanner manufacturers. Subsequently, an extensive analysis was performed to identify the covariate shifts with the greatest effects on detection performance, such as those due to differences in patient age, breast density, mass size, and mass malignancy. Ultimately, this comprehensive study provides key insights and best practices for future research on domain generalization in deep learning based breast cancer detection.
Affiliation(s)
- Lidia Garrucho
- Artificial Intelligence in Medicine Lab (BCN-AIM), Faculty of Mathematics and Computer Science, University of Barcelona, Gran Via de les Corts Catalanes 585, Barcelona, 08007, Barcelona, Spain.
- Kaisar Kushibar
- Artificial Intelligence in Medicine Lab (BCN-AIM), Faculty of Mathematics and Computer Science, University of Barcelona, Gran Via de les Corts Catalanes 585, Barcelona, 08007, Barcelona, Spain
- Socayna Jouide
- Artificial Intelligence in Medicine Lab (BCN-AIM), Faculty of Mathematics and Computer Science, University of Barcelona, Gran Via de les Corts Catalanes 585, Barcelona, 08007, Barcelona, Spain
- Oliver Diaz
- Artificial Intelligence in Medicine Lab (BCN-AIM), Faculty of Mathematics and Computer Science, University of Barcelona, Gran Via de les Corts Catalanes 585, Barcelona, 08007, Barcelona, Spain
- Laura Igual
- Artificial Intelligence in Medicine Lab (BCN-AIM), Faculty of Mathematics and Computer Science, University of Barcelona, Gran Via de les Corts Catalanes 585, Barcelona, 08007, Barcelona, Spain
- Karim Lekadir
- Artificial Intelligence in Medicine Lab (BCN-AIM), Faculty of Mathematics and Computer Science, University of Barcelona, Gran Via de les Corts Catalanes 585, Barcelona, 08007, Barcelona, Spain
39
Rahman A, Hossain MS, Muhammad G, Kundu D, Debnath T, Rahman M, Khan MSI, Tiwari P, Band SS. Federated learning-based AI approaches in smart healthcare: concepts, taxonomies, challenges and open issues. Cluster Comput 2022; 26:1-41. PMID: 35996680; PMCID: PMC9385101; DOI: 10.1007/s10586-022-03658-4.
Abstract
Federated Learning (FL), Artificial Intelligence (AI), and Explainable Artificial Intelligence (XAI) are among the most prominent technologies in the intelligent healthcare field. Traditionally, the healthcare system has worked through centralized agents sharing their raw data, an arrangement that still carries substantial vulnerabilities and challenges. Integrated with AI, the system instead comprises multiple collaborating agents capable of communicating efficiently with their desired host. FL adds a further capability: it works in a decentralized manner, maintaining communication through a shared model without transferring the raw data. The combination of FL, AI, and XAI techniques can minimize several limitations and challenges in the healthcare system. This paper presents a complete analysis of FL using AI for smart healthcare applications. Initially, we discuss contemporary concepts of emerging technologies such as FL, AI, XAI, and the healthcare system. We integrate and classify FL-AI with healthcare technologies in different domains. Further, we address the existing problems, including security, privacy, stability, and reliability, in the healthcare field. In addition, we guide readers toward solution strategies for healthcare using FL and AI. Finally, we outline extensive research areas as well as future prospects for FL-based AI research in the healthcare management system.
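The decentralized model-sharing idea at the heart of FL can be illustrated with a FedAvg-style weighted average of client parameters; this is a generic sketch under the assumption of a simple parameter-vector model, not an implementation from the paper:

```python
def federated_average(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style).

    Each client contributes only its parameter vector, never its raw
    data; contributions are weighted by local dataset size.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# two clients: one trained on 10 samples, one on 30
global_model = federated_average([[1.0, 2.0], [3.0, 4.0]], [10, 30])
print(global_model)  # [2.5, 3.5]
```

In practice the server would broadcast the averaged model back to clients and repeat the round, which is what keeps raw patient data local.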
Affiliation(s)
- Anichur Rahman
- Present Address: Department of Computer Science and Engineering, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka, Bangladesh
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Md. Sazzad Hossain
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Ghulam Muhammad
- Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia
- Dipanjali Kundu
- Present Address: Department of Computer Science and Engineering, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka, Bangladesh
- Tanoy Debnath
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Muaz Rahman
- Present Address: Department of Computer Science and Engineering, National Institute of Textile Engineering and Research (NITER), Constituent Institute of the University of Dhaka, Savar, Dhaka, Bangladesh
- Md. Saikat Islam Khan
- Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Prayag Tiwari
- Department of Computer Science, Aalto University, Espoo, Finland
- Shahab S. Band
- Future Technology Research Center, National Yunlin University of Science and Technology, 123 University Road, Section 3, Douliou, Yunlin, 64002, Taiwan
40
McKay F, Williams BJ, Prestwich G, Treanor D, Hallowell N. Public governance of medical artificial intelligence research in the UK: an integrated multi-scale model. Res Involv Engagem 2022; 8:21. PMID: 35598004; PMCID: PMC9123617; DOI: 10.1186/s40900-022-00357-7.
Abstract
There is a growing consensus among scholars, national governments, and intergovernmental organisations on the need to involve the public in decision-making around the use of artificial intelligence (AI) in society. Focusing on the UK, this paper asks how that can be achieved for medical AI research, that is, for research involving the training of AI on data from medical research databases. Public governance of medical AI research in the UK is generally achieved in three ways: via lay representation on data access committees, through patient and public involvement groups, and by means of various deliberative democratic projects such as citizens' juries, citizen panels, and citizen assemblies, which we collectively call "citizen forums". As we show, each of these public involvement initiatives has complementary strengths and weaknesses for providing oversight of medical AI research. As they are currently utilized, however, they are unable to realize the full potential of their complementarity due to insufficient information transfer across them. In order to synergistically build on their contributions, we offer here a multi-scale model integrating all three. In doing so we provide a unified public governance model for medical AI research, one that, we argue, could improve the trustworthiness of big-data- and AI-related medical research in the future.
Affiliation(s)
- Francis McKay
- Department of Population Health, The Ethox Centre and the Wellcome Centre for Ethics and Humanities, Nuffield, University of Oxford, Oxford, OX3 7LF England
- Bethany J. Williams
- Department of Histopathology, St James University Hospital, Bexley Wing, Leeds, LS9 7TF England
- Graham Prestwich
- Yorkshire and Humber Academic Health Science Network, Unit 1, Calder Close, Calder Park, Wakefield, WF4 3BA England
- Darren Treanor
- Department of Histopathology, St James University Hospital, Leeds, LS9 7TF England
- Nina Hallowell
- Department of Population Health, The Ethox Centre and the Wellcome Centre for Ethics and Humanities, Nuffield, University of Oxford, Oxford, OX3 7LF England
41
Oza P, Sharma P, Patel S, Adedoyin F, Bruno A. Image Augmentation Techniques for Mammogram Analysis. J Imaging 2022; 8:141. PMID: 35621905; PMCID: PMC9147240; DOI: 10.3390/jimaging8050141.
Abstract
Research in the medical imaging field increasingly relies on deep learning approaches. The performance of supervised deep learning methods depends heavily on the size of the training set, which expert radiologists must manually annotate, a tiring and time-consuming task. As a result, most freely accessible biomedical image datasets are small. Furthermore, building large medical image datasets is challenging due to privacy and legal issues. Consequently, many supervised deep learning models are prone to overfitting and cannot produce generalized output. One of the most popular methods to mitigate this issue is data augmentation. This technique increases the training set size by applying various transformations and has been shown to improve model performance on new data. This article surveys the different data augmentation techniques employed on mammogram images, aiming to provide insights into both basic and deep learning-based augmentation techniques.
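The basic idea of augmentation by geometric transformation can be sketched as follows (a toy example on nested-list "images", not any specific technique surveyed in the article):

```python
def horizontal_flip(image):
    """Mirror a 2D image (list of rows) left-to-right."""
    return [row[::-1] for row in image]

def augment(dataset, transforms):
    """Return the dataset plus one transformed copy per transform.

    Each transform is label-preserving, so the training set grows
    without any additional manual annotation.
    """
    out = list(dataset)
    for transform in transforms:
        out.extend(transform(img) for img in dataset)
    return out

image = [[1, 2], [3, 4]]
augmented = augment([image], [horizontal_flip])
print(augmented)  # [[[1, 2], [3, 4]], [[2, 1], [4, 3]]]
```

Rotations, crops, and intensity shifts would slot into the same `transforms` list; deep learning-based augmentation replaces these fixed functions with a generative model.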
Affiliation(s)
- Parita Oza
- Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India; (P.S.); (S.P.)
- Correspondence: (P.O.); (A.B.)
- Paawan Sharma
- Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India; (P.S.); (S.P.)
- Samir Patel
- Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India; (P.S.); (S.P.)
- Festus Adedoyin
- Department of Computing and Informatics, Bournemouth University, Poole BH12 5BB, UK
- Alessandro Bruno
- Department of Computing and Informatics, Bournemouth University, Poole BH12 5BB, UK
- Correspondence: (P.O.); (A.B.)
42
Advancements in Oncology with Artificial Intelligence—A Review Article. Cancers (Basel) 2022; 14:1349. PMID: 35267657; PMCID: PMC8909088; DOI: 10.3390/cancers14051349.
Abstract
Simple Summary
With the advancement of artificial intelligence, including machine learning, the field of oncology has seen promising results in cancer detection and classification, epigenetics, drug discovery, and prognostication. In this review, we describe what artificial intelligence is and how it functions, and comprehensively summarize its evolution and role in breast, colorectal, and central nervous system cancers. Understanding its origin and current accomplishments may be essential to improving the quality, accuracy, generalizability, cost-effectiveness, and reliability of artificial intelligence models that can be used in worldwide clinical practice. Students and researchers in the medical field will benefit from a deeper understanding of how to use integrative AI in oncology for innovation and research.
Abstract
Well-trained machine learning (ML) and artificial intelligence (AI) systems can provide clinicians with therapeutic assistance, potentially increasing efficiency and improving efficacy. ML has demonstrated high accuracy in oncology-related diagnostic imaging, including screening mammography interpretation, colon polyp detection, and glioma classification and grading. By utilizing ML techniques, the manual steps of detecting and segmenting lesions are greatly reduced. ML-based tumor imaging analysis is independent of the experience level of evaluating physicians, and the results are expected to be more standardized and accurate. One of the biggest challenges is generalizability worldwide. The current detection and screening methods for colon polyps and breast cancer generate vast amounts of data, so they are ideal areas for studying the global standardization of artificial intelligence. Central nervous system cancers, by contrast, are rare and have poor prognoses under current management standards. ML offers the prospect of unraveling undiscovered features from routinely acquired neuroimaging to improve treatment planning, prognostication, monitoring, and response assessment of CNS tumors such as gliomas. By studying AI in such rare cancer types, standard management methods may be improved by augmenting personalized/precision medicine. This review aims to provide clinicians and medical researchers with a basic understanding of how ML works and its role in oncology, especially in breast cancer, colorectal cancer, and primary and metastatic brain cancer. Understanding AI basics, current achievements, and future challenges is crucial to advancing the use of AI in oncology.
43
Yu X, Wang SH, Górriz JM, Jiang XW, Guttery DS, Zhang YD. PeMNet for Pectoral Muscle Segmentation. Biology (Basel) 2022; 11:134. PMID: 35053131; PMCID: PMC8772963; DOI: 10.3390/biology11010134.
Abstract
Simple Summary
Deep learning has become a popular technique in modern computer-aided diagnosis (CAD) systems. In breast cancer CAD systems, pectoral muscle segmentation is an important procedure for removing unwanted pectoral muscle from the images. In recent decades, numerous studies have aimed to develop efficient and accurate methods for pectoral muscle segmentation. However, some methods rely heavily on manually crafted features, which can easily lead to segmentation failure, while deep learning-based methods still suffer from limited performance at high computational cost. We therefore propose a novel deep learning segmentation framework that provides fast and accurate pectoral muscle segmentation. In the proposed framework, the novel network architecture enables more useful information to be exploited and thereby improves the segmentation results. Experimental results on two public datasets validated the effectiveness of the proposed network.
Abstract
As an important imaging modality, mammography is considered the global gold standard for early detection of breast cancer. Computer-aided diagnosis (CAD) systems have played a crucial role in facilitating quicker diagnostic procedures, which could otherwise take weeks if only radiologists were involved. In some of these CAD systems, pectoral muscle segmentation is required to separate the breast region from the pectoral muscle for specific analysis tasks, so accurate and efficient segmentation frameworks are in high demand. Here, we propose a novel deep learning framework, code-named PeMNet, for pectoral muscle segmentation in mammography images. In the proposed PeMNet, we integrated a novel attention module, the Global Channel Attention Module (GCAM), which can effectively improve the segmentation performance of Deeplabv3+ with minimal parameter overhead.
In GCAM, channel attention maps (CAMs) are first extracted by concatenating feature maps after parallel global average pooling and global maximum pooling operations. The CAMs are then refined and scaled up by a multi-layer perceptron (MLP) for elementwise multiplication with the CAMs at the next feature level. By iteratively repeating this procedure, global CAMs (GCAMs) are formed and multiplied elementwise with the final feature maps to produce the final segmentation. In this way, CAMs from early stages of a deep convolutional network are effectively passed on to later stages, leading to better use of information. Experiments on a dataset merged from two public datasets, INbreast and OPTIMAM, showed that PeMNet greatly outperformed state-of-the-art methods, achieving an IoU of 97.46%, global pixel accuracy of 99.48%, Dice similarity coefficient of 96.30%, and Jaccard index of 93.33%.
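The pooling-MLP-rescale step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the PeMNet implementation: the function name and the single-level, two-layer MLP layout are assumptions, and the actual GCAM iterates this procedure across feature levels inside Deeplabv3+.

```python
import numpy as np

def channel_attention(feature_map, w1, w2):
    """Sketch of one channel-attention step (hypothetical, GCAM-style).

    feature_map: (C, H, W) array; w1: (2C, hidden); w2: (hidden, C).
    Global average and max pooling are concatenated into a channel
    descriptor, refined by a two-layer MLP, squashed by a sigmoid,
    and used to rescale the channels elementwise.
    """
    avg = feature_map.mean(axis=(1, 2))            # (C,) global average pooling
    mx = feature_map.max(axis=(1, 2))              # (C,) global max pooling
    descriptor = np.concatenate([avg, mx])         # (2C,) pooled descriptor
    hidden = np.maximum(descriptor @ w1, 0.0)      # ReLU hidden layer
    cam = 1.0 / (1.0 + np.exp(-(hidden @ w2)))     # sigmoid channel attention map
    return feature_map * cam[:, None, None]        # rescale each channel
```

In the paper's formulation the attention maps from one feature level multiply those of the next, so a step like the one above would be applied repeatedly along the network rather than once.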
Affiliation(s)
- Xiang Yu
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK; (X.Y.); (S.-H.W.)
- Shui-Hua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK; (X.Y.); (S.-H.W.)
- Juan Manuel Górriz
- Department of Signal Theory, Networking and Communications, University of Granada, 52005 Granada, Spain
- Xian-Wei Jiang
- Department of Computer Science, Nanjing Normal University of Special Education, No. 1 Shennong Road, Nanjing 210038, China
- Correspondence: (X.-W.J.); (D.S.G.); (Y.-D.Z.)
- David S. Guttery
- Leicester Cancer Research Centre, University of Leicester, Leicester LE2 7LX, UK
- Correspondence: (X.-W.J.); (D.S.G.); (Y.-D.Z.)
- Yu-Dong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester LE1 7RH, UK; (X.Y.); (S.-H.W.)
- Guangxi Key Laboratory of Trusted Software, Guilin University of Electronic Technology, Guilin 541004, China
- Correspondence: (X.-W.J.); (D.S.G.); (Y.-D.Z.)
44
Cushnan D, Berka R, Bertolli O, Williams P, Schofield D, Joshi I, Favaro A, Halling-Brown M, Imreh G, Jefferson E, Sebire NJ, Reilly G, Rodrigues JCL, Robinson G, Copley S, Malik R, Bloomfield C, Gleeson F, Crotty M, Denton E, Dickson J, Leeming G, Hardwick HE, Baillie K, Openshaw PJ, Semple MG, Rubin C, Howlett A, Rockall AG, Bhayat A, Fascia D, Sudlow C, Jacob J. Towards nationally curated data archives for clinical radiology image analysis at scale: Learnings from national data collection in response to a pandemic. Digit Health 2021; 7:20552076211048654. [PMID: 34868617 PMCID: PMC8637703 DOI: 10.1177/20552076211048654] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2021] [Accepted: 09/07/2021] [Indexed: 12/27/2022] Open
Abstract
The prevalence of the coronavirus SARS-CoV-2 disease has resulted in the unprecedented collection of health data to support research. Historically, coordinating the collation of such datasets on a national scale has been challenging to execute for several reasons, including issues with data privacy and the lack of data reporting standards, interoperable technologies, and distribution methods. The coronavirus SARS-CoV-2 disease pandemic has highlighted the importance of collaboration between government bodies, healthcare institutions, academic researchers and commercial companies in overcoming these issues during times of urgency. The National COVID-19 Chest Imaging Database, led by NHSX, the British Society of Thoracic Imaging, the Royal Surrey NHS Foundation Trust and Faculty, is an example of such a national initiative. Here, we summarise the experiences and challenges of setting up the National COVID-19 Chest Imaging Database, and the implications for future ambitions of national data curation in medical imaging to advance the safe adoption of artificial intelligence in healthcare.
Affiliation(s)
- Mark Halling-Brown
- Scientific Computing, Royal Surrey NHS Foundation Trust, UK; CVSSP, University of Surrey, UK
- Emily Jefferson
- Health Data Research UK, UK; Health Informatics Centre (HIC), School of Medicine, University of Dundee, UK
- Graham Robinson
- Department of Radiology, Royal United Hospitals Bath NHS Foundation Trust, UK
- Susan Copley
- Imaging Department, Hammersmith Hospital, Imperial College NHS Healthcare Trust, UK
- Rizwan Malik
- Department of Radiology, Bolton NHS Foundation Trust, UK
- Claire Bloomfield
- National Consortium of Intelligent Medical Imaging (NCIMI), The Big Data Institute, University of Oxford, UK; Dept of Oncology, University of Oxford, UK
- Fergus Gleeson
- National Consortium of Intelligent Medical Imaging (NCIMI), The Big Data Institute, University of Oxford, UK; Dept of Oncology, University of Oxford, UK
- Erika Denton
- Norfolk and Norwich University Hospital Foundation Trust, UK
- Gary Leeming
- Institute of Population Health, Faculty of Health and Life Sciences, University of Liverpool, UK
- Hayley E Hardwick
- National Institute of Health Research (NIHR) Health Protection Research Unit in Emerging and Zoonotic Infections, UK
- Malcolm G Semple
- NIHR Health Protection Research Unit, Institute of Infection, Veterinary and Ecological Sciences, Faculty of Health and Life Sciences, University of Liverpool, UK
- Caroline Rubin
- Department of Radiology, University Hospital Southampton NHS Foundation Trust, UK
- Andrea G Rockall
- Imaging Department, Hammersmith Hospital, Imperial College NHS Healthcare Trust, UK; Department of Surgery and Cancer, Faculty of Medicine, Imperial College London, UK
- Ayub Bhayat
- NHS Arden & Greater East Midlands Commissioning Support Unit, UK
- Cathie Sudlow
- British Heart Foundation Data Science Centre, led by Health Data Research UK, UK
- Joseph Jacob
- Department of Respiratory Medicine, University College London, UK; Centre for Medical Image Computing, Department of Computer Science, University College London, UK
45
Burnside ES, Warren LM, Myles J, Wilkinson LS, Wallis MG, Patel M, Smith RA, Young KC, Massat NJ, Duffy SW. Quantitative breast density analysis to predict interval and node-positive cancers in pursuit of improved screening protocols: a case-control study. Br J Cancer 2021; 125:884-892. [PMID: 34168297 PMCID: PMC8438060 DOI: 10.1038/s41416-021-01466-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2020] [Revised: 05/18/2021] [Accepted: 06/10/2021] [Indexed: 01/20/2023] Open
Abstract
BACKGROUND This study investigates whether quantitative breast density (BD) serves as an imaging biomarker for more intensive breast cancer screening by predicting interval and node-positive cancers. METHODS This case-control study of 1204 women aged 47-73 included 599 cancer cases (302 screen-detected, 297 interval; 239 node-positive, 360 node-negative) and 605 controls. Automated BD software calculated fibroglandular volume (FGV), volumetric breast density (VBD) and density grade (DG). A radiologist assessed BD using a visual analogue scale (VAS) from 0 to 100. Logistic regression and the area under the receiver operating characteristic curve (AUC) determined whether BD could predict mode of detection (screen-detected or interval), node-negative cancers, node-positive cancers, and all cancers vs. controls. RESULTS FGV, VBD, VAS and DG all discriminated interval cancers from controls (all p < 0.01). Only FGV-quartile discriminated screen-detected cancers (p < 0.01). Based on AUC, FGV discriminated all cancer types better than VBD or VAS. FGV showed significantly greater discrimination of interval cancers (AUC = 0.65) than of screen-detected cancers (AUC = 0.61; p < 0.01), as did VBD (0.63 and 0.53, respectively; p < 0.001). CONCLUSION FGV, VBD, VAS and DG discriminate interval cancers from controls, reflecting some masking risk. Only FGV discriminates screen-detected cancers, perhaps adding a unique component of breast cancer risk.
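In terms of implementation, an AUC like those reported above (e.g. 0.65 for FGV against interval cancers) is the probability that a randomly chosen case has a higher density score than a randomly chosen control. A minimal sketch, with an illustrative function name and toy scores (no confidence intervals):

```python
import numpy as np

def auc_from_scores(case_scores, control_scores):
    """AUC as the Mann-Whitney probability that a randomly chosen
    case outranks a randomly chosen control (ties count 1/2)."""
    cases = np.asarray(case_scores, dtype=float)
    controls = np.asarray(control_scores, dtype=float)
    wins = (cases[:, None] > controls[None, :]).sum()   # case strictly higher
    ties = (cases[:, None] == controls[None, :]).sum()  # equal scores
    return (wins + 0.5 * ties) / (cases.size * controls.size)
```

For instance, with case scores [2, 3, 4] and control scores [1, 2] the function returns 11/12 ≈ 0.92, and an AUC of 0.5 means the measure carries no discriminatory information.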
Affiliation(s)
- Elizabeth S Burnside
- Department of Radiology, University of Wisconsin-Madison School of Medicine and Public Health, E3/311 Clinical Science Center, Madison, WI, USA
- Lucy M Warren
- National Co-ordinating Centre for the Physics of Mammography (NCCPM), Medical Physics Department, Royal Surrey County Hospital, Guildford, UK
- Jonathan Myles
- Centre for Cancer Prevention, Queen Mary University of London, Wolfson Institute of Preventive Medicine, London, UK
- Matthew G Wallis
- Cambridge Breast Unit and NIHR Cambridge Biomedical Research Centre, Cambridge University Hospitals NHS Trust, Cambridge, UK
- Mishal Patel
- Scientific Computing, Medical Physics Department, Royal Surrey County Hospital, Guildford, UK
- Kenneth C Young
- National Co-ordinating Centre for the Physics of Mammography (NCCPM), Medical Physics Department, Royal Surrey County Hospital, Guildford, UK
- Nathalie J Massat
- Centre for Cancer Prevention, Queen Mary University of London, Wolfson Institute of Preventive Medicine, London, UK
- Stephen W Duffy
- Centre for Cancer Prevention, Queen Mary University of London, Wolfson Institute of Preventive Medicine, London, UK
46
Yu X, Zhou Q, Wang S, Zhang Y. A systematic survey of deep learning in breast cancer. INT J INTELL SYST 2021. [DOI: 10.1002/int.22622] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Affiliation(s)
- Xiang Yu
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, Leicestershire, UK
- Qinghua Zhou
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, Leicestershire, UK
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, Leicestershire, UK
- Yu-Dong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, Leicestershire, UK
47
Sidebottom R, Lyburn I, Brady M, Vinnicombe S. Fair shares: building and benefiting from healthcare AI with mutually beneficial structures and development partnerships. Br J Cancer 2021; 125:1181-1184. [PMID: 34262148 DOI: 10.1038/s41416-021-01454-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2020] [Revised: 04/21/2021] [Accepted: 05/28/2021] [Indexed: 11/09/2022] Open
Abstract
Artificial intelligence (AI) algorithms are used in an increasing range of aspects of our lives. In particular, medical applications of AI are being developed and deployed, including many in image analysis. Deep learning methods, which have recently proved successful in image classification, rely on large volumes of clinical data generated by healthcare institutions and collected from the populations they serve. In this opinion article, using digital mammographic screening as an example, we briefly consider the background to AI development and some issues around its deployment. We highlight the importance of high-quality clinical data as fundamental to these technologies, and question how the ownership of the resultant tools should be defined. Though many of the ethical issues concerning the development and use of medical AI technologies continue to be discussed, the value of the data on which they rely remains seldom considered. This potentially controversial issue can and should be addressed in a way that benefits all parties, particularly the general population and the patients we serve.
Affiliation(s)
- Richard Sidebottom
- Department of Radiology, Gloucestershire Hospitals NHS Foundation Trust, Gloucestershire, UK; Department of Radiology, The Royal Marsden Hospital NHS Foundation Trust, London, UK
- Iain Lyburn
- Department of Radiology, Gloucestershire Hospitals NHS Foundation Trust, Gloucestershire, UK; Cobalt Medical Charity, Cheltenham, UK; Cranfield University, Cranfield, UK
- Michael Brady
- Department of Oncology, Medical Sciences Division, University of Oxford, Oxford, UK
- Sarah Vinnicombe
- Department of Radiology, Gloucestershire Hospitals NHS Foundation Trust, Gloucestershire, UK; University of Dundee, Dundee, UK
48
Boita J, van Engen RE, Mackenzie A, Tingberg A, Bosmans H, Bolejko A, Zackrisson S, Wallis MG, Ikeda DM, Van Ongeval C, Pijnappel R, Broeders M, Sechopoulos I. How does image quality affect radiologists' perceived ability for image interpretation and lesion detection in digital mammography? Eur Radiol 2021; 31:5335-5343. [PMID: 33475774 PMCID: PMC8213590 DOI: 10.1007/s00330-020-07679-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2020] [Revised: 12/09/2020] [Accepted: 12/29/2020] [Indexed: 11/25/2022]
Abstract
OBJECTIVES To study how radiologists' perceived ability to interpret digital mammography (DM) images is affected by decreases in image quality. METHODS One view from 45 DM cases (including 30 cancers) was degraded to six levels each of two acquisition-related issues (lower spatial resolution and increased quantum noise) and three post-processing-related issues (lower contrast, higher contrast, and increased correlated noise) seen during clinical evaluation of DM systems. The images were shown to fifteen breast screening radiologists from five countries. Aware of lesion location, the radiologists selected the most degraded mammogram (indexed from 1 (reference) to 7 (most degraded)) they still felt was acceptable for interpretation. The median selected index, per degradation type, was calculated separately for calcification and soft tissue (including normal) cases, and the median indices for each case and degradation type were compared using the two-sided, non-parametric Mann-Whitney test. RESULTS Radiologists were not tolerant of increases (medians: 1.5 (calcifications) and 2 (soft tissue)) or decreases (median: 2, for both types) in contrast, but were more tolerant of correlated noise (median: 3, for both types). Increases in quantum noise were tolerated less for calcification than for soft tissue cases (medians: 3 vs. 4, p = 0.02). Spatial resolution losses were considered less acceptable for calcification detection than for soft tissue cases (medians: 3.5 vs. 5, p = 0.001). CONCLUSIONS Radiologists' perceived ability to interpret DM images was affected not only by image acquisition-related issues but also by image post-processing issues, and some of those issues affected calcification cases more than soft tissue cases. KEY POINTS • Lower spatial resolution and increased quantum noise affected the radiologists' perceived ability to interpret calcification cases more than soft tissue lesion or normal cases.
• Post-acquisition image processing-related effects, not only image acquisition-related effects, also impact the perceived ability of radiologists to interpret images and detect lesions. • In addition to current practices, post-acquisition image processing-related effects need to also be considered during the testing and evaluation of digital mammography systems.
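The median comparisons above use the two-sided Mann-Whitney test. A self-contained sketch via the large-sample normal approximation is shown below; the function name is illustrative, no tie-variance correction is applied, and in practice `scipy.stats.mannwhitneyu` handles small samples and ties exactly.

```python
import math
import numpy as np

def mann_whitney_two_sided(x, y):
    """Two-sided Mann-Whitney test, normal approximation (sketch).

    Returns the U statistic and an approximate two-sided p-value.
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    m, n = x.size, y.size
    wins = (x[:, None] > y[None, :]).sum()       # pairs where x outranks y
    ties = (x[:, None] == y[None, :]).sum()      # tied pairs count 1/2
    u = wins + 0.5 * ties                        # Mann-Whitney U statistic
    mu = m * n / 2.0                             # mean of U under H0
    sigma = math.sqrt(m * n * (m + n + 1) / 12.0)  # SD of U under H0
    z = (u - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return u, p
```

With clearly separated samples the returned p-value is small; with identical samples z = 0 and the p-value is 1.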
Affiliation(s)
- Joana Boita
- Department of Medical Imaging, Radboud University Medical Center, Geert Grooteplein 10, 6525 GA Nijmegen, The Netherlands
- Dutch Expert Centre for Screening (LRCB), Wijchenseweg 101, 6538 SW Nijmegen, The Netherlands
- Ruben E van Engen
- Dutch Expert Centre for Screening (LRCB), Wijchenseweg 101, 6538 SW Nijmegen, The Netherlands
- Alistair Mackenzie
- National Coordinating Centre for the Physics of Mammography, Royal Surrey NHS Foundation Trust, Guildford GU2 7XX, UK
- Anders Tingberg
- Department of Medical Radiation Physics, Translational Medicine Malmö, Lund University, Skåne University Hospital, Carl Bertil Laurells gata 9, SE-20502 Malmö, Sweden
- Hilde Bosmans
- Department of Imaging and Pathology, Radiology, KUL, Herestraat 49, B-3000 Leuven, Belgium
- Department of Radiology, UZ Gasthuisberg, Herestraat 49, B-3000 Leuven, Belgium
- Anetta Bolejko
- Department of Medical Imaging and Physiology, Translational Medicine Malmö, Lund University, Skåne University Hospital, Carl Bertil Laurells gata 9, SE-20502 Malmö, Sweden
- Sophia Zackrisson
- Department of Medical Imaging and Physiology, Translational Medicine Malmö, Lund University, Skåne University Hospital, Carl Bertil Laurells gata 9, SE-20502 Malmö, Sweden
- Matthew G Wallis
- Cambridge Breast Unit, Cambridge University Hospitals NHS Foundation Trust, Cambridge & NIHR Cambridge Biomedical Research Centre, Cambridge CB2 0QQ, UK
- Debra M Ikeda
- Department of Radiology, Stanford University School of Medicine, 875 Blake Wilbur Dr, Stanford, CA 94305, USA
- Chantal Van Ongeval
- Department of Radiology, UZ Gasthuisberg, Herestraat 49, B-3000 Leuven, Belgium
- Ruud Pijnappel
- Dutch Expert Centre for Screening (LRCB), Wijchenseweg 101, 6538 SW Nijmegen, The Netherlands
- Department of Radiology, University Medical Center Utrecht, Utrecht University, PO Box 85500, 3508 GA Utrecht, The Netherlands
- Mireille Broeders
- Dutch Expert Centre for Screening (LRCB), Wijchenseweg 101, 6538 SW Nijmegen, The Netherlands
- Department for Health Evidence, Radboud University Medical Center, Geert Grooteplein 10, 6525 GA Nijmegen, The Netherlands
- Ioannis Sechopoulos
- Department of Medical Imaging, Radboud University Medical Center, Geert Grooteplein 10, 6525 GA Nijmegen, The Netherlands
- Dutch Expert Centre for Screening (LRCB), Wijchenseweg 101, 6538 SW Nijmegen, The Netherlands
49
Data preparation for artificial intelligence in medical imaging: A comprehensive guide to open-access platforms and tools. Phys Med 2021; 83:25-37. [DOI: 10.1016/j.ejmp.2021.02.007] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Revised: 01/27/2021] [Accepted: 02/15/2021] [Indexed: 02/06/2023] Open
50
Jacob J, Alexander D, Baillie JK, Berka R, Bertolli O, Blackwood J, Buchan I, Bloomfield C, Cushnan D, Docherty A, Edey A, Favaro A, Gleeson F, Halling-Brown M, Hare S, Jefferson E, Johnstone A, Kirby M, McStay R, Nair A, Openshaw PJM, Parker G, Reilly G, Robinson G, Roditi G, Rodrigues JCL, Sebire N, Semple MG, Sudlow C, Woznitza N, Joshi I. Using imaging to combat a pandemic: rationale for developing the UK National COVID-19 Chest Imaging Database. Eur Respir J 2020; 56:2001809. [PMID: 32616598 PMCID: PMC7331656 DOI: 10.1183/13993003.01809-2020] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2020] [Accepted: 06/08/2020] [Indexed: 12/12/2022]
Abstract
The National COVID-19 Chest Imaging Database (NCCID) is a repository of chest radiographs, CT and MRI images and clinical data from COVID-19 patients across the UK, to support research and development of AI technology and give insight into COVID-19 disease https://bit.ly/3eQeuha
Affiliation(s)
- Joseph Jacob
- Dept of Respiratory Medicine, University College London, London, UK
- Centre for Medical Image Computing, Dept of Computer Science, University College London, London, UK
- Daniel Alexander
- Centre for Medical Image Computing, Dept of Computer Science, University College London, London, UK
- J Kenneth Baillie
- Division of Genetics and Genomics, The Roslin Institute, University of Edinburgh, Edinburgh, UK
- Centre for Inflammation Research, University of Edinburgh, Edinburgh, UK
- James Blackwood
- The Industrial Centre for Artificial Intelligence Research in Digital Diagnostics (iCAIRD), Dept of eHealth, NHS Greater Glasgow and Clyde, Glasgow, UK
- Iain Buchan
- Institute of Population Health, University of Liverpool, Liverpool, UK
- Claire Bloomfield
- National Consortium of Intelligent Medical Imaging (NCIMI), The University of Oxford, Big Data Institute, Oxford, UK
- Annemarie Docherty
- Centre for Medical Informatics, Usher Institute, University of Edinburgh, Edinburgh, UK
- Anthony Edey
- Dept of Radiology, Southmead Hospital, North Bristol NHS Trust, Bristol, UK
- Fergus Gleeson
- National Consortium of Intelligent Medical Imaging (NCIMI), The University of Oxford, Big Data Institute, Oxford, UK
- Dept of Oncology, University of Oxford, Oxford, UK
- Mark Halling-Brown
- Scientific Computing, Royal Surrey NHS Foundation Trust, Guildford, UK
- Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey, Guildford, UK
- Samanjit Hare
- Dept of Radiology, Royal Free London NHS Trust, London, UK
- Emily Jefferson
- Health Data Research UK, London, UK
- Health Informatics Centre (HIC), School of Medicine, University of Dundee, Dundee, UK
- Annette Johnstone
- Dept of Radiology, Leeds Teaching Hospitals NHS Trust, Leeds General Infirmary, Leeds, UK
- Ruth McStay
- Dept of Radiology, Freeman Hospital, Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle upon Tyne, UK
- Arjun Nair
- Dept of Radiology, University College London Hospital, London, UK
- Peter J M Openshaw
- National Heart and Lung Institute, Faculty of Medicine, Imperial College London, London, UK
- Geoff Parker
- Centre for Medical Image Computing, Dept of Computer Science, University College London, London, UK
- Bioxydyn Limited, Manchester, UK
- Graham Robinson
- Dept of Radiology, Royal United Hospitals Bath NHS Foundation Trust, Bath, UK
- Giles Roditi
- Dept of Radiology, University of Glasgow, Glasgow Royal Infirmary, Glasgow, UK
- Malcolm G Semple
- NIHR Health Protection Research Unit in Emerging and Zoonotic Infections, Faculty of Health and Life Sciences, University of Liverpool, Liverpool, UK
- Catherine Sudlow
- Usher Institute, University of Edinburgh, Edinburgh, UK
- British Heart Foundation (BHF) Data Science Centre, Health Data Research UK, Edinburgh, UK
- Nick Woznitza
- Radiology Dept, Homerton University Hospital, London, UK
- School of Allied and Public Health Professions, Canterbury Christ Church University, Canterbury, UK
- NHS Nightingale Hospital London, London, UK