1
Bergan MB, Larsen M, Moshina N, Bartsch H, Koch HW, Aase HS, Satybaldinov Z, Haldorsen IHS, Lee CI, Hofvind S. AI performance by mammographic density in a retrospective cohort study of 99,489 participants in BreastScreen Norway. Eur Radiol 2024. PMID: 38528136. DOI: 10.1007/s00330-024-10681-z.
Abstract
OBJECTIVE: To explore the ability of artificial intelligence (AI) to classify breast cancer by mammographic density in an organized screening program.

MATERIALS AND METHODS: We included information about 99,489 examinations from 74,941 women who participated in BreastScreen Norway, 2013-2019. All examinations were analyzed with an AI system that assigned a malignancy risk score (AI score) from 1 (lowest) to 10 (highest) to each examination. Mammographic density was classified into Volpara density grades (VDG) 1-4, where VDG1 indicated fatty and VDG4 extremely dense breasts. Screen-detected and interval cancers with an AI score of 1-10 were stratified by VDG.

RESULTS: We found 10,406 (10.5% of the total) examinations to have an AI score of 10, of which 6.7% (704/10,406) were breast cancers. These cancers represented 89.7% (617/688) of the screen-detected and 44.6% (87/195) of the interval cancers. Overall, 20.3% (20,178/99,489) of the examinations were classified as VDG1 and 6.1% (6047/99,489) as VDG4. For screen-detected cancers, 84.0% (68/81; 95% CI 74.1-91.2) had an AI score of 10 for VDG1, 88.9% (328/369; 95% CI 85.2-91.9) for VDG2, 92.5% (185/200; 95% CI 87.9-95.7) for VDG3, and 94.7% (36/38; 95% CI 82.3-99.4) for VDG4. For interval cancers, the percentages with an AI score of 10 were 33.3% (3/9; 95% CI 7.5-70.1) for VDG1 and 48.0% (12/25; 95% CI 27.8-68.7) for VDG4.

CONCLUSION: The tested AI system performed well in terms of cancer detection across all density categories, and especially for extremely dense breasts. The highest proportion of screen-detected cancers with an AI score of 10 was observed among women classified as VDG4.

CLINICAL RELEVANCE STATEMENT: Our study demonstrates that AI can correctly classify the majority of screen-detected breast cancers and about half of the interval breast cancers, regardless of breast density.

KEY POINTS:
• Mammographic density is important to consider in the evaluation of artificial intelligence in mammographic screening.
• Given a threshold representing about the 10% of examinations with the highest malignancy risk score from the AI system, we found an increasing percentage of cancers with increasing mammographic density.
• An artificial intelligence risk score combined with mammographic density may help triage examinations and reduce workload for radiologists.
Affiliation(s)
- Marie Burns Bergan
  - Section for Breast Cancer Screening, Cancer Registry of Norway, Norwegian Institute of Public Health, P.O. Box 5313, 0304, Oslo, Norway
- Marthe Larsen
  - Section for Breast Cancer Screening, Cancer Registry of Norway, Norwegian Institute of Public Health, P.O. Box 5313, 0304, Oslo, Norway
- Nataliia Moshina
  - Section for Breast Cancer Screening, Cancer Registry of Norway, Norwegian Institute of Public Health, P.O. Box 5313, 0304, Oslo, Norway
- Hauke Bartsch
  - Department of Radiology, Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Bergen, Norway
- Henrik Wethe Koch
  - Department of Radiology, Stavanger University Hospital, Stavanger, Norway
  - Faculty of Health Sciences, University of Stavanger, Stavanger, Norway
- Zhanbolat Satybaldinov
  - Department of Radiology, Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Bergen, Norway
- Ingfrid Helene Salvesen Haldorsen
  - Department of Radiology, Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Bergen, Norway
  - Section for Radiology, Department of Clinical Medicine, University of Bergen, Bergen, Norway
- Christoph I Lee
  - Department of Radiology, University of Washington School of Medicine, Seattle, WA, USA
  - Department of Health Systems and Population Health, University of Washington School of Public Health, Seattle, WA, USA
- Solveig Hofvind
  - Section for Breast Cancer Screening, Cancer Registry of Norway, Norwegian Institute of Public Health, P.O. Box 5313, 0304, Oslo, Norway
  - Department of Health and Care Sciences, Faculty of Health Sciences, UiT The Arctic University of Norway, Tromsø, Norway
2
Lokaj B, Pugliese MT, Kinkel K, Lovis C, Schmid J. Barriers and facilitators of artificial intelligence conception and implementation for breast imaging diagnosis in clinical practice: a scoping review. Eur Radiol 2024;34:2096-2109. PMID: 37658895. PMCID: PMC10873444. DOI: 10.1007/s00330-023-10181-6.
Abstract
OBJECTIVE: Although artificial intelligence (AI) has demonstrated promise in enhancing breast cancer diagnosis, the implementation of AI algorithms in clinical practice encounters various barriers. This scoping review aims to identify these barriers and facilitators in order to highlight key considerations for developing and implementing AI solutions in breast cancer imaging.

METHODS: A literature search covering 2012 to 2022 was conducted in six databases (PubMed, Web of Science, CINAHL, Embase, IEEE, and arXiv). Articles were included if they described barriers and/or facilitators in the conception or implementation of AI in clinical breast imaging. We excluded studies that focused only on performance, or whose data were not acquired in a clinical radiology setting or did not involve real patients.

RESULTS: A total of 107 articles were included. We identified six major barriers, related to data (B1), black box and trust (B2), algorithms and conception (B3), evaluation and validation (B4), legal, ethical, and economic issues (B5), and education (B6), and five major facilitators, covering data (F1), clinical impact (F2), algorithms and conception (F3), evaluation and validation (F4), and education (F5).

CONCLUSION: This scoping review highlights the need to carefully design, deploy, and evaluate AI solutions in clinical practice, involving all stakeholders, to yield improvements in healthcare.

CLINICAL RELEVANCE STATEMENT: The identification of barriers and facilitators, together with suggested solutions, can guide and inform future research and stakeholders to improve the design and implementation of AI for breast cancer detection in clinical practice.

KEY POINTS:
• The six major barriers identified were related to data; black box and trust; algorithms and conception; evaluation and validation; legal, ethical, and economic issues; and education.
• The five major facilitators identified were related to data, clinical impact, algorithms and conception, evaluation and validation, and education.
• Coordinated involvement of all stakeholders is required to improve breast cancer diagnosis with AI.
Affiliation(s)
- Belinda Lokaj
  - Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland
  - Faculty of Medicine, University of Geneva, Geneva, Switzerland
  - Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland
- Marie-Thérèse Pugliese
  - Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland
- Karen Kinkel
  - Réseau Hospitalier Neuchâtelois, Neuchâtel, Switzerland
- Christian Lovis
  - Faculty of Medicine, University of Geneva, Geneva, Switzerland
  - Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland
- Jérôme Schmid
  - Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland
3
Malliori A, Pallikarakis N. Breast cancer detection using machine learning in digital mammography and breast tomosynthesis: a systematic review. Health and Technology 2022. DOI: 10.1007/s12553-022-00693-4.
4
Chen X, Zhao J, Iselin KC, Borroni D, Romano D, Gokul A, McGhee CNJ, Zhao Y, Sedaghat MR, Momeni-Moghaddam H, Ziaei M, Kaye S, Romano V, Zheng Y. Keratoconus detection of changes using deep learning of colour-coded maps. BMJ Open Ophthalmol 2021;6:e000824. PMID: 34337155. PMCID: PMC8278890. DOI: 10.1136/bmjophth-2021-000824.
Abstract
Objective: To evaluate the accuracy of convolutional neural network (CNN) techniques in detecting keratoconus using colour-coded corneal maps obtained with a Scheimpflug camera.

Design: Multicentre retrospective study.

Methods and analysis: We included images of the eyes of keratoconic patients and healthy volunteers provided by three centres: Royal Liverpool University Hospital (Liverpool, UK), Sedaghat Eye Clinic (Mashhad, Iran), and the New Zealand National Eye Centre (New Zealand). Corneal tomography scans, including those of healthy controls, were used to train and test the CNN models. Keratoconic scans were classified according to the Amsler-Krumeich classification, and the keratoconic scans from Iran served as an independent test set. Four maps were considered for each scan: the axial map, the anterior and posterior elevation maps, and the pachymetry map.

Results: With all four maps concatenated, a CNN model detected keratoconus versus healthy eyes with an accuracy of 0.9785 on the test set. Considering each map independently, the accuracy was 0.9283 for the axial map, 0.9642 for the thickness map, 0.9642 for the front elevation map, and 0.9749 for the back elevation map. Using the concatenated maps, the accuracy of the models in distinguishing healthy controls from stage 1 keratoconus was 0.90, between stages 1 and 2 was 0.9032, and between stages 2 and 3 was 0.8537.

Conclusion: CNNs provide excellent detection performance for keratoconus and accurately grade different severities of disease using colour-coded maps obtained with a Scheimpflug camera. CNNs have the potential to be further developed, validated, and adopted for screening and management of keratoconus.
Affiliation(s)
- Xu Chen
  - Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Jiaxin Zhao
  - Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
- Katja C Iselin
  - Department of Ophthalmology, St Paul's Eye Unit, Royal Liverpool University Hospital, Liverpool, UK
- Davide Borroni
  - Department of Ophthalmology, St Paul's Eye Unit, Royal Liverpool University Hospital, Liverpool, UK
- Davide Romano
  - Department of Ophthalmology, St Paul's Eye Unit, Royal Liverpool University Hospital, Liverpool, UK
- Akilesh Gokul
  - Department of Ophthalmology, New Zealand National Eye Centre, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Charles N J McGhee
  - Department of Ophthalmology, New Zealand National Eye Centre, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Yitian Zhao
  - Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo, China
- Mohammad-Reza Sedaghat
  - Eye Research Center, Mashhad University of Medical Sciences, Mashhad, Iran
  - Health Promotion Research Center, Zahedan University of Medical Sciences, Zahedan, Iran
- Hamed Momeni-Moghaddam
  - Eye Research Center, Mashhad University of Medical Sciences, Mashhad, Iran
  - Health Promotion Research Center, Zahedan University of Medical Sciences, Zahedan, Iran
- Mohammed Ziaei
  - Department of Ophthalmology, New Zealand National Eye Centre, Faculty of Medical and Health Sciences, University of Auckland, Auckland, New Zealand
- Stephen Kaye
  - Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
  - Department of Ophthalmology, St Paul's Eye Unit, Royal Liverpool University Hospital, Liverpool, UK
- Vito Romano
  - Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK
  - Department of Ophthalmology, St Paul's Eye Unit, Royal Liverpool University Hospital, Liverpool, UK
- Yalin Zheng
  - Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, Liverpool, UK