1
Jiang B, Bao L, He S, Chen X, Jin Z, Ye Y. Deep learning applications in breast cancer histopathological imaging: diagnosis, treatment, and prognosis. Breast Cancer Res 2024; 26:137. [PMID: 39304962] [DOI: 10.1186/s13058-024-01895-6]
Abstract
Breast cancer is the most common malignant tumor among women worldwide and remains one of the leading causes of cancer death in this population, with incidence and mortality rates continuing to rise. In recent years, with the rapid advancement of deep learning (DL) technology, DL has demonstrated significant potential in breast cancer diagnosis, prognosis evaluation, and treatment response prediction. This paper reviews relevant research progress and applies DL models to image enhancement, segmentation, and classification based on large-scale datasets from TCGA and multiple centers. We employed foundational models such as ResNet50, Transformer, and HoVer-Net to investigate the performance of DL models in breast cancer diagnosis, treatment, and prognosis prediction. The results indicate that DL techniques have significantly improved diagnostic accuracy and efficiency, particularly in predicting breast cancer metastasis and clinical prognosis. Furthermore, the study emphasizes the crucial role of robust databases in developing highly generalizable models. Future research will focus on addressing challenges related to data management, model interpretability, and regulatory compliance, ultimately aiming to provide more precise clinical treatment and prognostic evaluation for breast cancer patients.
Affiliation(s)
- Bitao Jiang
- Department of Hematology and Oncology, Beilun District People's Hospital, Ningbo, 315800, China.
- Department of Hematology and Oncology, Beilun Branch of the First Affiliated Hospital of Zhejiang University, Ningbo, 315800, China.
- Lingling Bao
- Department of Hematology and Oncology, Beilun District People's Hospital, Ningbo, 315800, China
- Department of Hematology and Oncology, Beilun Branch of the First Affiliated Hospital of Zhejiang University, Ningbo, 315800, China
- Songqin He
- Department of Oncology, The 906th Hospital of the Joint Logistics Force of the Chinese People's Liberation Army, Ningbo, 315100, China
- Xiao Chen
- Department of Oncology, The 906th Hospital of the Joint Logistics Force of the Chinese People's Liberation Army, Ningbo, 315100, China
- Zhihui Jin
- Department of Hematology and Oncology, Beilun District People's Hospital, Ningbo, 315800, China
- Department of Hematology and Oncology, Beilun Branch of the First Affiliated Hospital of Zhejiang University, Ningbo, 315800, China
- Yingquan Ye
- Department of Oncology, The 906th Hospital of the Joint Logistics Force of the Chinese People's Liberation Army, Ningbo, 315100, China
2
Nanaa M, Gupta VO, Hickman SE, Allajbeu I, Payne NR, Arponen O, Black R, Huang Y, Priest AN, Gilbert FJ. Accuracy of an Artificial Intelligence System for Interval Breast Cancer Detection at Screening Mammography. Radiology 2024; 312:e232303. [PMID: 39189901] [DOI: 10.1148/radiol.232303]
Abstract
Background Artificial intelligence (AI) systems can be used to identify interval breast cancers, although the localizations are not always accurate. Purpose To evaluate AI localizations of interval cancers (ICs) on screening mammograms by IC category and histopathologic characteristics. Materials and Methods A screening mammography data set (median patient age, 57 years [IQR, 52-64 years]) that had been assessed by two human readers from January 2011 to December 2018 was retrospectively analyzed using a commercial AI system. The AI outputs were lesion locations (heatmaps) and the highest per-lesion risk score (range, 0-100) assigned to each case. AI heatmaps were considered false positive (FP) if they occurred on normal screening mammograms or on IC screening mammograms (ie, in patients subsequently diagnosed with IC) but outside the cancer boundary. A panel of consultant radiology experts classified ICs as normal or benign (true negative [TN]), uncertain (minimal signs of malignancy [MS]), or suspicious (false negative [FN]). Several specificity and sensitivity thresholds were applied. Mann-Whitney U tests, Kruskal-Wallis tests, and χ2 tests were used to compare groups. Results A total of 2052 screening mammograms (514 ICs and 1548 normal mammograms) were included. The median AI risk score was 50 (IQR, 32-82) for TN ICs, 76 (IQR, 41-90) for ICs with MS, and 89 (IQR, 81-95) for FN ICs (P = .005). Higher median AI scores were observed for invasive tumors (62 [IQR, 39-88]) than for noninvasive tumors (33 [IQR, 20-55]; P < .01) and for high-grade (grade 2-3) tumors (62 [IQR, 40-87]) than for low-grade (grade 0-1) tumors (45 [IQR, 26-81]; P = .02). At the 96% specificity threshold, the AI algorithm flagged 121 of 514 (23.5%) ICs and correctly localized the IC in 93 of 121 (76.9%) cases, with 48 FP heatmaps on the mammograms for ICs (rate, 0.093 per case) and 74 FP heatmaps on normal mammograms (rate, 0.048 per case). 
The AI algorithm correctly localized a lower proportion of TN ICs (54 of 427; 12.6%) than ICs with MS (35 of 76; 46%) and FN ICs (four of eight; 50% [95% CI: 13, 88]; P < .001). The AI algorithm localized a higher proportion of node-positive than node-negative cancers (P = .03). However, no evidence of a difference by cancer type (P = .09), grade (P = .27), or hormone receptor status (P = .12) was found. At 89.8% specificity and 79% sensitivity thresholds, AI detection increased to 181 (35.2%) and 256 (49.8%) of the 514 ICs, respectively, with FP heatmaps on 158 (10.2%) and 307 (19.8%) of the 1548 normal mammograms. Conclusion Use of a standalone AI system improved early cancer detection by correctly identifying some cancers missed by two human readers, with no differences based on histopathologic features except for node-positive cancers. © RSNA, 2024 Supplemental material is available for this article.
Affiliation(s)
- Muzna Nanaa, Vaishnavi O Gupta, Sarah E Hickman, Iris Allajbeu, Nicholas R Payne, Otso Arponen, Richard Black, Yuan Huang, Andrew N Priest, Fiona J Gilbert
- From the Department of Radiology, School of Clinical Medicine, University of Cambridge, Box 218, Level 5, Cambridge Biomedical Campus, Cambridge CB2 0QQ, England (M.N., V.O.G., S.E.H., I.A., N.R.P., O.A., Y.H., A.N.P., F.J.G.); Department of Radiology, Addenbrooke's Hospital, Cambridge University Hospitals NHS Foundation Trust, Cambridge, England (M.N., I.A., R.B., A.N.P., F.J.G.); and Department of Radiology, The Royal London Hospital, Barts Health NHS Trust, London, England (S.E.H.)
3
Biroš M, Kvak D, Dandár J, Hrubý R, Janů E, Atakhanova A, Al-antari MA. Enhancing Accuracy in Breast Density Assessment Using Deep Learning: A Multicentric, Multi-Reader Study. Diagnostics (Basel) 2024; 14:1117. [PMID: 38893643] [PMCID: PMC11172127] [DOI: 10.3390/diagnostics14111117]
Abstract
The evaluation of mammographic breast density, a critical indicator of breast cancer risk, is traditionally performed by radiologists via visual inspection of mammography images, utilizing the Breast Imaging-Reporting and Data System (BI-RADS) breast density categories. However, this method is subject to substantial interobserver variability, leading to inconsistencies and potential inaccuracies in density assessment and subsequent risk estimations. To address this, we present a deep learning-based automatic detection algorithm (DLAD) designed for the automated evaluation of breast density. Our multicentric, multi-reader study leverages a diverse dataset of 122 full-field digital mammography studies (488 images in CC and MLO projections) sourced from three institutions. We invited two experienced radiologists to conduct a retrospective analysis, establishing a ground truth for 72 mammography studies (BI-RADS class A: 18, BI-RADS class B: 43, BI-RADS class C: 7, BI-RADS class D: 4). The efficacy of the DLAD was then compared to the performance of five independent radiologists with varying levels of experience. The DLAD showed robust performance, achieving an accuracy of 0.819 (95% CI: 0.736-0.903), along with an F1 score of 0.798 (0.594-0.905), precision of 0.806 (0.596-0.896), recall of 0.830 (0.650-0.946), and a Cohen's kappa (κ) of 0.708 (0.562-0.841). Its performance matched, and in four cases exceeded, that of the individual radiologists. The statistical analysis did not reveal a significant difference in accuracy between DLAD and the radiologists, underscoring the model's competitive diagnostic alignment with professional radiologist assessments. These results demonstrate that the deep learning-based automatic detection algorithm can enhance the accuracy and consistency of breast density assessments, offering a reliable tool for improving breast cancer screening outcomes.
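To make the agreement statistics in the abstract above concrete, here is a minimal pure-Python sketch of how accuracy and Cohen's kappa are computed from paired class labels; the BI-RADS-style labels below are invented for illustration and are not the study's data.

```python
# Illustrative only: accuracy and Cohen's kappa for multi-class labels,
# the same agreement statistics reported for the DLAD above.
# The BI-RADS-style labels here are invented, not study data.
from collections import Counter

def accuracy(truth, pred):
    return sum(t == p for t, p in zip(truth, pred)) / len(truth)

def cohens_kappa(truth, pred):
    n = len(truth)
    po = accuracy(truth, pred)              # observed agreement
    t_counts, p_counts = Counter(truth), Counter(pred)
    # chance agreement, from the marginal label frequencies
    pe = sum(t_counts[c] * p_counts.get(c, 0) for c in t_counts) / (n * n)
    return (po - pe) / (1 - pe)

truth = list("AABBBBCCDD")   # reference density classes
pred = list("AABBBCCCDD")    # a rater's (or model's) classes
print(accuracy(truth, pred))                # 0.9
print(round(cohens_kappa(truth, pred), 3))  # 0.865
```

Kappa discounts the agreement expected by chance, which is why it is the preferred interobserver statistic for ordinal categories such as BI-RADS density.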
Affiliation(s)
- Marek Biroš
- Carebot, Ltd., 128 00 Prague, Czech Republic
- Daniel Kvak
- Carebot, Ltd., 128 00 Prague, Czech Republic
- Department of Simulation Medicine, Faculty of Medicine, Masaryk University, 625 00 Brno, Czech Republic
- Jakub Dandár
- Carebot, Ltd., 128 00 Prague, Czech Republic
- Robert Hrubý
- Carebot, Ltd., 128 00 Prague, Czech Republic
- Eva Janů
- Department of Radiology, Masaryk Memorial Cancer Institute, 602 00 Brno, Czech Republic
- Anora Atakhanova
- Carebot, Ltd., 128 00 Prague, Czech Republic
- Mugahed A. Al-antari
- Department of Artificial Intelligence and Data Science, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
4
Schopf CM, Ramwala OA, Lowry KP, Hofvind S, Marinovich ML, Houssami N, Elmore JG, Dontchos BN, Lee JM, Lee CI. Artificial Intelligence-Driven Mammography-Based Future Breast Cancer Risk Prediction: A Systematic Review. J Am Coll Radiol 2024; 21:319-328. [PMID: 37949155] [PMCID: PMC10926179] [DOI: 10.1016/j.jacr.2023.10.018]
Abstract
PURPOSE To summarize the literature regarding the performance of mammography image-based artificial intelligence (AI) algorithms, with and without additional clinical data, for future breast cancer risk prediction. MATERIALS AND METHODS A systematic literature review was performed using six databases (medRxiv, bioRxiv, Embase, Engineering Village, IEEE Xplore, and PubMed) from 2012 through September 30, 2022. Studies were included if they used real-world screening mammography examinations to validate AI algorithms for future risk prediction based on images alone or in combination with clinical risk factors. The quality of studies was assessed, and predictive accuracy was recorded as the area under the receiver operating characteristic curve (AUC). RESULTS Sixteen studies met inclusion and exclusion criteria, of which 14 studies provided AUC values. The median AUC performance of AI image-only models was 0.72 (range 0.62-0.90) compared with 0.61 for breast density or clinical risk factor-based tools (range 0.54-0.69). Of the seven studies that compared AI image-only performance directly to combined image + clinical risk factor performance, six demonstrated no significant improvement, and one demonstrated a significant improvement. CONCLUSIONS Early efforts at predicting future breast cancer risk from mammography images alone demonstrate comparable or better accuracy than traditional risk tools, with little or no gain from adding clinical risk factor data. Transitioning from clinical risk factor-based to AI image-based risk models may lead to more accurate, personalized risk-based screening approaches.
Affiliation(s)
- Cody M Schopf
- Department of Radiology, University of Washington School of Medicine, Seattle, Washington
- Ojas A Ramwala
- Department of Biomedical Informatics and Medical Education, University of Washington School of Medicine, Seattle, Washington
- Kathryn P Lowry
- Department of Radiology, University of Washington School of Medicine, Seattle, Washington
- Solveig Hofvind
- Section Head of Breast Cancer Screening, Cancer Registry of Norway, Oslo, Norway
- M Luke Marinovich
- The Daffodil Centre, the University of Sydney, a joint venture with Cancer Council NSW, Sydney, New South Wales, Australia
- Nehmat Houssami
- The Daffodil Centre, the University of Sydney, a joint venture with Cancer Council NSW, Sydney, New South Wales, Australia; National Breast Cancer Foundation Chair in Breast Cancer Prevention at the University of Sydney and Coeditor of The Breast
- Joann G Elmore
- David Geffen School of Medicine at University of California at Los Angeles, Los Angeles, California; Director of UCLA's National Clinician Scholars Program and Editor-in-Chief of Adult Primary Care at Up-To-Date
- Brian N Dontchos
- Department of Radiology, University of Washington School of Medicine, Seattle, Washington; Clinical Director of Breast Imaging at Fred Hutchinson Cancer Center
- Janie M Lee
- Section Chief of Breast Imaging, Department of Radiology, University of Washington School of Medicine, Seattle, Washington; Director of Breast Imaging at Fred Hutchinson Cancer Center
- Christoph I Lee
- Department of Radiology, University of Washington School of Medicine, Seattle, Washington, and Department of Health Systems & Population Health, University of Washington School of Public Health, Seattle, WA; Director of the Northwest Screening and Cancer Outcomes Research Enterprise at the University of Washington and Deputy Editor of Journal of the American College of Radiology
5
Yala A, Hughes KS. Rethinking Risk Modeling with Machine Learning. Ann Surg Oncol 2023; 30:6950-6952. [PMID: 37574515] [DOI: 10.1245/s10434-023-14144-5]
Affiliation(s)
- Adam Yala
- UC Berkeley, Berkeley, USA.
- UCSF, San Francisco, USA.
- Kevin S Hughes
- Surgical Oncology, Medical University of South Carolina, Charleston, USA
6
Alruily M, Said W, Mostafa AM, Ezz M, Elmezain M. Breast Ultrasound Images Augmentation and Segmentation Using GAN with Identity Block and Modified U-Net 3+. Sensors (Basel) 2023; 23:8599. [PMID: 37896692] [PMCID: PMC10610596] [DOI: 10.3390/s23208599]
Abstract
One of the most prevalent diseases affecting women in recent years is breast cancer. Early detection of breast cancer aids treatment, lowers risk, and improves outcomes. This paper presents a hybrid approach for augmenting and segmenting breast ultrasound images. The framework contains two main stages: augmentation and segmentation. The augmentation stage uses generative adversarial networks (GANs) with a nonlinear identity block, label smoothing, and a new loss function. The segmentation stage uses a modified U-Net 3+. The hybrid approach achieves efficient results in both steps compared with other available methods for the same task. In the ultrasound augmentation process, the modified GAN with the nonlinear identity block outperforms other modified GANs such as speckle GAN, UltraGAN, and deep convolutional GAN; the modified U-Net 3+ likewise outperforms other U-Net architectures in the segmentation process. The GAN with nonlinear identity blocks achieved an inception score of 14.32 and a Fréchet inception distance (FID) of 41.86 in the augmentation process; its lower FID and higher inception score demonstrate the model's efficiency compared with other GAN variants. The modified U-Net 3+ architecture achieved a Dice score of 95.49% and an accuracy of 95.67%.
Affiliation(s)
- Meshrif Alruily
- College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
- Wael Said
- Computer Science Department, Faculty of Computers and Informatics, Zagazig University, Zagazig 44511, Egypt
- Computer Science Department, College of Computer Science and Engineering, Taibah University, Medina 42353, Saudi Arabia
- Ayman Mohamed Mostafa
- College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
- Mohamed Ezz
- College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia
- Mahmoud Elmezain
- Computer Science Department, Faculty of Science, Tanta University, Tanta 31527, Egypt
- Computer Science Department, College of Computer Science and Engineering, Taibah University, Yanbu 966144, Saudi Arabia
7
Heywang-Köbrunner SH, Hacker A, Jänsch A, Hertlein M, Mieskes C, Elsner S, Sinnatamby R, Katalinic A. Use of novel artificial intelligence computer-assisted detection (AI-CAD) for screening mammography: an analysis of 17,884 consecutive two-view full-field digital mammography screening exams. Acta Radiol 2023; 64:2697-2703. [PMID: 37642981] [DOI: 10.1177/02841851231187382]
Abstract
BACKGROUND Novel artificial intelligence computer-assisted detection (AI-CAD) systems based on deep learning (DL) promise to support screen reading. PURPOSE To test a DL-based AI-CAD system against human reading on consecutive screening mammograms. MATERIAL AND METHODS In this retrospective study, 17,884 consecutive anonymized screening mammograms, double-read from January to November 2018, were processed by the DL-AI-CAD system. AI-CAD reading was considered positive if the AI-CAD case score exceeded 30 (range = 1-100) and the lesion was correctly marked. Likewise, human reading (R1 or R2, respectively) was considered positive if the lesion was correctly identified and called. Receiver operating characteristic (ROC) analysis was performed and accuracy data were calculated. Ground truth for benign lesions was absence of malignancy after cancer registry matching (2022); for malignancy, histopathologic proof; evaluation was patient-based. RESULTS In total, 114 screen-detected and 17 interval cancers (ICA) occurred. ROC analysis of screen-detected cancers yielded an AUC of 89% for AI-CAD. Sensitivity/specificity was 81.7%/80.2% for AI-CAD; 77.1%/91.7% for R1; and 78.6%/91.6% for R2. Combining either human reading with AI-CAD was as sensitive as human double-reading (all approximately 88%) but less specific (approximately 75% versus approximately 87%). These AI-CAD combinations required consensus readings for twice as many cases as the human combination. Four of the 17 ICA cases exceeded a case score of 30; in two of these four, AI-CAD correctly marked the quadrant of the subsequent ICA. CONCLUSION Including ICA cases, this AI-CAD achieved comparable sensitivity to human reading at lower specificity. Combining human reading and AI-CAD increases sensitivity compared with single reading.
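The reader-combination result above follows from a simple OR rule: a case counts as recalled if either the human reader or AI-CAD flags it, which can only raise sensitivity and lower specificity relative to a single reader. A small sketch of that arithmetic (the per-case calls are invented, not study data):

```python
# Sketch of the OR combination of a human reader with AI-CAD.
# All per-case calls below are invented for illustration.
def sens_spec(truth, calls):
    tp = sum(1 for t, c in zip(truth, calls) if t and c)
    tn = sum(1 for t, c in zip(truth, calls) if not t and not c)
    return tp / sum(truth), tn / (len(truth) - sum(truth))

truth = [1, 1, 1, 1, 0, 0, 0, 0]     # 1 = cancer present
reader = [1, 1, 0, 0, 0, 0, 0, 1]    # human reader's calls
ai_cad = [1, 0, 1, 0, 0, 1, 0, 0]    # AI-CAD's calls
combined = [r or a for r, a in zip(reader, ai_cad)]  # recall if either calls it

print(sens_spec(truth, reader))      # (0.5, 0.75)
print(sens_spec(truth, combined))    # (0.75, 0.5): sensitivity up, specificity down
```

This is why the study reports the human + AI-CAD combinations as matching double-reading sensitivity while needing more consensus readings: every additional flag is either a new true positive or a new false positive.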
Affiliation(s)
- Sylvia H Heywang-Köbrunner
- National Reference Center for Mammography Munich, Muenchen, Germany
- FFB gGmbH Gesellschaft für Forschung und Fortbildung in der Brustdiagnose, Muenchen, Germany
- Brustdiagnostik München, Sonnenstr. 29, Munich, Muenchen, Germany
- Astrid Hacker
- National Reference Center for Mammography Munich, Muenchen, Germany
- Alexander Jänsch
- FFB gGmbH Gesellschaft für Forschung und Fortbildung in der Brustdiagnose, Muenchen, Germany
- Michael Hertlein
- National Reference Center for Mammography Munich, Muenchen, Germany
- Susanne Elsner
- Department for Epidemiology and Social Medicine, Universität zu Lübeck, Institut für Sozialmedizin und Epidemiologie, Luebeck, Germany
- Ruchira Sinnatamby
- Department for Radiology, Cambridge University Hospitals, Hills Road, Cambridge, UK
- Alexander Katalinic
- Department for Epidemiology and Social Medicine, Universität zu Lübeck, Institut für Sozialmedizin und Epidemiologie, Luebeck, Germany
8
Siddique M, Liu M, Duong P, Jambawalikar S, Ha R. Deep Learning Approaches with Digital Mammography for Evaluating Breast Cancer Risk, a Narrative Review. Tomography 2023; 9:1110-1119. [PMID: 37368543] [DOI: 10.3390/tomography9030091]
Abstract
Breast cancer remains the leading cause of cancer-related deaths in women worldwide. Current screening regimens and clinical breast cancer risk assessment models use risk factors such as demographics and patient history to guide policy and assess risk. Applications of artificial intelligence (AI) methods such as deep learning (DL) and convolutional neural networks (CNNs) to individual patient information and imaging have shown promise as personalized risk models. We reviewed the current literature on deep learning and convolutional neural networks with digital mammography for assessing breast cancer risk, and we discuss ongoing and future applications of deep learning techniques in breast cancer risk modeling.
Affiliation(s)
- Maham Siddique, Michael Liu, Phuong Duong, Sachin Jambawalikar, Richard Ha
- Department of Radiology, Columbia University Medical Center, New York, NY 10032, USA
9
Shi Z, Ma Y, Ma X, Jin A, Zhou J, Li N, Sheng D, Chang C, Chen J, Li J. Differentiation between Phyllodes Tumors and Fibroadenomas through Breast Ultrasound: Deep-Learning Model Outperforms Ultrasound Physicians. Sensors (Basel) 2023; 23:5099. [PMID: 37299826] [DOI: 10.3390/s23115099]
Abstract
The preoperative differentiation of breast phyllodes tumors (PTs) from fibroadenomas (FAs) plays a critical role in identifying an appropriate surgical treatment. Although several imaging modalities are available, reliable differentiation between PT and FA remains a great challenge for radiologists in clinical work. Artificial intelligence (AI)-assisted diagnosis has shown promise in distinguishing PT from FA. However, a very small sample size was adopted in previous studies. In this work, we retrospectively enrolled 656 breast tumors (372 FAs and 284 PTs) with 1945 ultrasound images in total. Two experienced ultrasound physicians independently evaluated the ultrasound images. Meanwhile, three deep-learning models (i.e., ResNet, VGG, and GoogLeNet) were applied to classify FAs and PTs. The robustness of the models was evaluated by fivefold cross validation. The performance of each model was assessed by using the receiver operating characteristic (ROC) curve. The area under the curve (AUC), accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were also calculated. Among the three models, the ResNet model yielded the highest AUC value, of 0.91, with an accuracy value of 95.3%, a sensitivity value of 96.2%, and a specificity value of 94.7% in the testing data set. In contrast, the two physicians yielded an average AUC value of 0.69, an accuracy value of 70.7%, a sensitivity value of 54.4%, and a specificity value of 53.2%. Our findings indicate that the diagnostic performance of deep learning is better than that of physicians in the distinction of PTs from FAs. This further suggests that AI is a valuable tool for aiding clinical diagnosis, thereby advancing precision therapy.
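The evaluation protocol described above (fivefold cross-validation with per-model ROC AUC) can be sketched in a few lines of pure Python. The rank-based AUC below is the Mann-Whitney formulation; the labels and scores are placeholders, not the study's ultrasound data or models.

```python
# Sketch of fivefold cross-validated AUC scoring. Labels/scores are
# placeholders; the study applied ResNet, VGG, and GoogLeNet to images.
def roc_auc(labels, scores):
    """Rank-based AUC: the probability that a random positive outscores a
    random negative, with ties counted as half (Mann-Whitney formulation)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def kfold_indices(n, k=5):
    """Yield (train, test) index lists for k disjoint folds."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, folds[i]

labels = [1, 1, 0, 0, 1, 0]               # 1 = phyllodes tumor, 0 = fibroadenoma
scores = [0.9, 0.8, 0.3, 0.4, 0.6, 0.6]   # a model's predicted probabilities
print(round(roc_auc(labels, scores), 3))  # 0.944
```

In the full protocol, `roc_auc` would be computed on each held-out fold and the five values averaged, which is how cross-validation estimates the robustness the abstract refers to.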
Affiliation(s)
- Zhaoting Shi
- Department of Oncology, Shanghai Medical College, Fudan University, No. 270, Dong'an Road, Xuhui District, Shanghai 200032, China
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center, No. 270, Dong'an Road, Xuhui District, Shanghai 200032, China
- Yebo Ma
- Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication and Electronic Engineering, East China Normal University, No. 500, Dongchuan Road, Shanghai 200241, China
- Xiaowen Ma
- Department of Oncology, Shanghai Medical College, Fudan University, No. 270, Dong'an Road, Xuhui District, Shanghai 200032, China
- Department of Radiology, Fudan University Shanghai Cancer Center, No. 270, Dong'an Road, Xuhui District, Shanghai 200032, China
- Anqi Jin
- Department of Oncology, Shanghai Medical College, Fudan University, No. 270, Dong'an Road, Xuhui District, Shanghai 200032, China
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center, No. 270, Dong'an Road, Xuhui District, Shanghai 200032, China
- Jin Zhou
- Department of Oncology, Shanghai Medical College, Fudan University, No. 270, Dong'an Road, Xuhui District, Shanghai 200032, China
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center, No. 270, Dong'an Road, Xuhui District, Shanghai 200032, China
- Na Li
- Department of Oncology, Shanghai Medical College, Fudan University, No. 270, Dong'an Road, Xuhui District, Shanghai 200032, China
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center, No. 270, Dong'an Road, Xuhui District, Shanghai 200032, China
- Danli Sheng
- Department of Oncology, Shanghai Medical College, Fudan University, No. 270, Dong'an Road, Xuhui District, Shanghai 200032, China
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center, No. 270, Dong'an Road, Xuhui District, Shanghai 200032, China
- Cai Chang
- Department of Oncology, Shanghai Medical College, Fudan University, No. 270, Dong'an Road, Xuhui District, Shanghai 200032, China
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center, No. 270, Dong'an Road, Xuhui District, Shanghai 200032, China
- Jiangang Chen
- Shanghai Key Laboratory of Multidimensional Information Processing, School of Communication and Electronic Engineering, East China Normal University, No. 500, Dongchuan Road, Shanghai 200241, China
- Engineering Research Center of Traditional Chinese Medicine Intelligent Rehabilitation, Ministry of Education, No. 1200, Cailun Road, Pudong District, Shanghai 201203, China
- Jiawei Li
- Department of Oncology, Shanghai Medical College, Fudan University, No. 270, Dong'an Road, Xuhui District, Shanghai 200032, China
- Department of Medical Ultrasound, Fudan University Shanghai Cancer Center, No. 270, Dong'an Road, Xuhui District, Shanghai 200032, China
10
|
Watanabe H, Hayashi S, Kondo Y, Matsuyama E, Hayashi N, Ogura T, Shimosegawa M. Quality control system for mammographic breast positioning using deep learning. Sci Rep 2023; 13:7066. [PMID: 37127674 PMCID: PMC10151341 DOI: 10.1038/s41598-023-34380-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2022] [Accepted: 04/28/2023] [Indexed: 05/03/2023] Open
Abstract
This study proposes a deep convolutional neural network (DCNN) classification for the quality control and validation of breast positioning criteria in mammography. A total of 1631 mediolateral oblique mammographic views were collected from an open database. We designed two main steps for mammographic verification: automated detection of the positioning part and classification into three scales that determine positioning quality using DCNNs. After acquiring labeled mammograms with three scales visually evaluated based on guidelines, the first step automatically detected the region of interest of the subject part by image processing. The next step classified mammographic positioning accuracy into three scales using four representative DCNNs. The experimental results showed that the DCNN model achieved the best positioning classification accuracy of 0.7836 using VGG16 for the inframammary fold and a classification accuracy of 0.7278 using Xception for the nipple profile. Furthermore, using the softmax function, the breast positioning criteria could be evaluated quantitatively by presenting the predicted value, which is the probability of each positioning-quality class. The proposed method can be evaluated quantitatively without the need for an individual qualitative evaluation and has the potential to improve the quality control and validation of breast positioning criteria in mammography.
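The quantitative scoring step described above amounts to applying a softmax over the DCNN's three class logits, yielding one probability per positioning-quality scale. A minimal sketch with made-up logit values (the network itself is omitted):

```python
import math

def softmax(logits):
    """Numerically stable softmax: shift by the max logit before exponentiating."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for the three quality scales of one mammogram.
logits = [2.0, 0.5, -1.0]
probs = softmax(logits)                     # probabilities sum to 1
predicted_scale = probs.index(max(probs))   # argmax gives the predicted scale
```

Reporting `probs` rather than only `predicted_scale` is what makes the evaluation quantitative: a confident 0.95 and a borderline 0.40 map to the same class label but very different quality-control signals.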
Collapse
Affiliation(s)
- Haruyuki Watanabe
- School of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Japan.
| | - Saeko Hayashi
- Department of Radiology, National Hospital Organization Shibukawa Medical Center, Shibukawa, Japan
| | - Yohan Kondo
- Graduate School of Health Sciences, Niigata University, Niigata, Japan
| | - Eri Matsuyama
- Faculty of Informatics, The University of Fukuchiyama, Fukuchiyama, Japan
| | - Norio Hayashi
- School of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Japan
| | - Toshihiro Ogura
- School of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Japan
| | - Masayuki Shimosegawa
- School of Radiological Technology, Gunma Prefectural College of Health Sciences, Maebashi, Japan
| |
Collapse
|
11
|
Hendrix N, Lowry KP, Elmore JG, Lotter W, Sorensen G, Hsu W, Liao GJ, Parsian S, Kolb S, Naeim A, Lee CI. Radiologist Preferences for Artificial Intelligence-Based Decision Support During Screening Mammography Interpretation. J Am Coll Radiol 2022; 19:1098-1110. [PMID: 35970474 PMCID: PMC9840464 DOI: 10.1016/j.jacr.2022.06.019] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2022] [Revised: 06/03/2022] [Accepted: 06/07/2022] [Indexed: 01/17/2023]
Abstract
BACKGROUND Artificial intelligence (AI) may improve cancer detection and risk prediction during mammography screening, but radiologists' preferences regarding its characteristics and implementation are unknown. PURPOSE To quantify how different attributes of AI-based cancer detection and risk prediction tools affect radiologists' intentions to use AI during screening mammography interpretation. MATERIALS AND METHODS Through qualitative interviews with radiologists, we identified five primary attributes for AI-based breast cancer detection and four for breast cancer risk prediction. We developed a discrete choice experiment based on these attributes and invited 150 US-based radiologists to participate. Each respondent made eight choices for each tool between three alternatives: two hypothetical AI-based tools versus screening without AI. We analyzed sample-wide preferences using random parameters logit models and identified subgroups with latent class models. RESULTS Respondents (n = 66; 44% response rate) were from six diverse practice settings across eight states. Radiologists were more interested in AI for cancer detection when sensitivity and specificity were balanced (94% sensitivity with <25% of examinations marked) and AI markup appeared at the end of the hanging protocol after radiologists complete their independent review. For AI-based risk prediction, radiologists preferred AI models using both mammography images and clinical data. Overall, 46% to 60% intended to adopt any of the AI tools presented in the study; 26% to 33% approached AI enthusiastically but were deterred if the features did not align with their preferences. CONCLUSION Although most radiologists want to use AI-based decision support, short-term uptake may be maximized by implementing tools that meet the preferences of dissuadable users.
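The discrete choice experiment's core model can be sketched as a conditional logit: each alternative's utility is a linear function of its attributes, and the probability of choosing an alternative is a softmax over the utilities in the choice set. The attribute values and coefficients below are invented for illustration; the paper's random parameters logit additionally lets coefficients vary across respondents, which this sketch omits.

```python
import math

def choice_probabilities(utilities):
    """Conditional logit: P(choose i) = exp(U_i) / sum_j exp(U_j)."""
    m = max(utilities)  # shift for numerical stability
    exps = [math.exp(u - m) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# One choice task: two hypothetical AI tools vs. screening without AI.
# Attributes per tool: (sensitivity, fraction of examinations marked).
beta_sens, beta_marked, asc_no_ai = 8.0, -3.0, 0.5  # invented coefficients
alternatives = [(0.94, 0.25), (0.99, 0.60)]

utilities = [beta_sens * s + beta_marked * f for s, f in alternatives]
utilities.append(asc_no_ai)  # alternative-specific constant for "no AI"
probs = choice_probabilities(utilities)  # [P(tool 1), P(tool 2), P(no AI)]
```

With these invented coefficients, the balanced tool (94% sensitivity, 25% marked) attains the highest utility, mirroring the paper's finding that respondents traded off sensitivity against the fraction of examinations marked rather than maximizing sensitivity alone.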
Collapse
Affiliation(s)
- Nathaniel Hendrix
- Department of Global Health and Population, Harvard T.H. Chan School of Public Health, Boston, Massachusetts
| | - Kathryn P Lowry
- Department of Radiology, University of Washington, Seattle Cancer Care Alliance, Seattle, Washington.
| | - Joann G Elmore
- Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, California
| | - William Lotter
- Chief Technology Officer, DeepHealth Inc, RadNet AI Solutions, Cambridge, Massachusetts
| | - Gregory Sorensen
- Chief Technology Officer, DeepHealth Inc, RadNet AI Solutions, Cambridge, Massachusetts
| | - William Hsu
- Department of Radiological Sciences, Data Integration, Architecture, and Analytics Group, University of California, Los Angeles, California; American Medical Informatics Association: Member, Governance Committee; RSNA: Deputy Editor, Radiology: Artificial Intelligence
| | - Geraldine J Liao
- Department of Radiology, Virginia Mason Medical Center, Seattle, Washington
| | - Sana Parsian
- Department of Radiology, University of Washington, Seattle Cancer Care Alliance, Seattle, Washington; Department of Radiology, Kaiser Permanente Washington, Seattle, Washington
| | - Suzanne Kolb
- Department of Radiology, University of Washington, Seattle Cancer Care Alliance, Seattle, Washington
| | - Arash Naeim
- Department of Medicine, David Geffen School of Medicine, University of California, Los Angeles, California; Chief Medical Officer for Clinical Research, UCLA Health; Codirector: Clinical and Translational Science Institute and Center for SMART Health; Associate Director: Institute for Precision Health, Jonsson Comprehensive Cancer Center, Garrick Institute for Risk Sciences
| | - Christoph I Lee
- Department of Radiology, University of Washington, Seattle Cancer Care Alliance, Seattle, Washington; Department of Health Services, School of Public Health, University of Washington, Seattle, Washington; and Deputy Editor, JACR
| |
Collapse
|
12
|
A Review of Breast Cancer Risk Factors in Adolescents and Young Adults. Cancers (Basel) 2021; 13:cancers13215552. [PMID: 34771713 PMCID: PMC8583289 DOI: 10.3390/cancers13215552] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2021] [Revised: 10/29/2021] [Accepted: 11/03/2021] [Indexed: 12/26/2022] Open
Abstract
Simple Summary: Cancer diagnosed in patients between the ages of 15 and 39 deserves special consideration. Diagnoses within this cohort of adolescents and young adults include childhood cancers which present at an older age than expected, or an early presentation of cancers that are typically observed in older adults, such as breast cancer. Cancers within this age group are associated with worse disease-free and overall survival rates, and the incidence of these cases is rising. Knowing an individual’s susceptibility to disease can change their clinical management and allow for the risk-testing of relatives. This review discusses the risk factors that contribute to breast cancer in this unique cohort of patients, including inherited genetic risk factors, as well as environmental and lifestyle factors. We also describe risk models that allow clinicians to quantify a patient’s lifetime risk of developing disease.
Abstract: Cancer in adolescents and young adults (AYAs) deserves special consideration for several reasons. AYA cancers encompass paediatric malignancies that present at an older age than expected, or early-onset of cancers that are typically observed in adults. However, disease diagnosed in the AYA population is distinct to those same cancers which are diagnosed in a paediatric or older adult setting. Worse disease-free and overall survival outcomes are observed in the AYA setting, and the incidence of AYA cancers is increasing. Knowledge of an individual’s underlying cancer predisposition can influence their clinical care and may facilitate early tumour surveillance strategies and cascade testing of at-risk relatives. This information can further influence reproductive decision making. In this review we discuss the risk factors contributing to AYA breast cancer, such as heritable predisposition, environmental, and lifestyle factors. We also describe a number of risk models which incorporate genetic factors that aid clinicians in quantifying an individual’s lifetime risk of disease.
Collapse
|
13
|
Affiliation(s)
- Min Sun Bae
- From the Department of Radiology, Inha University Hospital and School of Medicine, 27 Inhang-ro, Jung-gu, Incheon 22332, South Korea (M.S.B.); and Department of Radiology, Kyung Hee University Hospital, Seoul, South Korea (H.G.K.)
| | - Hyug-Gi Kim
- From the Department of Radiology, Inha University Hospital and School of Medicine, 27 Inhang-ro, Jung-gu, Incheon 22332, South Korea (M.S.B.); and Department of Radiology, Kyung Hee University Hospital, Seoul, South Korea (H.G.K.)
| |
Collapse
|