1
Zippi ZD, Cortopassi IO, Grage RA, Johnson EM, McCann MR, Mergo PJ, Sonavane SK, Stowell JT, White RD, Little BP. United States newspaper and online media coverage of artificial intelligence and radiology from 1998 to 2023. Clin Imaging 2024; 113:110238. [PMID: 39059086] [DOI: 10.1016/j.clinimag.2024.110238]
Abstract
OBJECTIVE To evaluate the frequency and content of media coverage pertaining to artificial intelligence (AI) and radiology in the United States from 1998 to 2023. METHODS The ProQuest US Newsstream database was queried for print and online articles mentioning AI and radiology published between January 1, 1998, and March 30, 2023. A Boolean search using terms related to radiology and AI was used to retrieve full text and publication information. One of 9 readers with radiology expertise independently reviewed randomly assigned articles using a standardized scoring system. RESULTS 379 articles met inclusion criteria, of which 290 were unique and 89 were syndicated articles. Most had a positive sentiment (74%) towards AI, while negative sentiment was far less common (9%). Frequency of positive sentiment was highest in articles with a focus on AI and radiology (86%) and lowest in articles focusing on AI and non-medical topics (55%). The net impact of AI on radiology was most commonly presented as positive (60%). Benefits of AI were more frequently mentioned (76%) than potential harms (46%). Radiologists were interviewed or quoted in less than one-third of all articles. CONCLUSION Portrayal of the impact of AI on radiology in US media coverage was mostly positive, and advantages of AI were more frequently discussed than potential risks. However, articles with a general non-medical focus were more likely to have a negative sentiment regarding the impact of AI on radiology than articles with a more specific focus on medicine and radiology. Radiologists were infrequently interviewed or quoted in media coverage.
Affiliation(s)
- Zachary D Zippi: Florida International University College of Medicine, United States of America
- Isabel O Cortopassi, Rolf A Grage, Elizabeth M Johnson, Matthew R McCann, Patricia J Mergo, Sushilkumar K Sonavane, Justin T Stowell, Richard D White, Brent P Little: Mayo Clinic Florida and Mayo Clinic College of Medicine and Science, United States of America
2
Ha SM, Jang MJ, Youn I, Yoen H, Ji H, Lee SH, Yi A, Chang JM. Screening Outcomes of Mammography with AI in Dense Breasts: A Comparative Study with Supplemental Screening US. Radiology 2024; 312:e233391. [PMID: 39041940] [DOI: 10.1148/radiol.233391]
Abstract
Background Comparative performance between artificial intelligence (AI) and breast US for women with dense breasts undergoing screening mammography remains unclear. Purpose To compare the performance of mammography alone, mammography with AI, and mammography plus supplemental US for screening women with dense breasts, and to investigate the characteristics of the detected cancers. Materials and Methods A retrospective database search identified consecutive asymptomatic women (≥40 years of age) with dense breasts who underwent mammography plus supplemental whole-breast handheld US from January 2017 to December 2018 at a primary health care center. Sequential reading for mammography alone and mammography with the aid of an AI system was conducted by five breast radiologists, and their recall decisions were recorded. Results of the combined mammography and US examinations were collected from the database. A dedicated breast radiologist reviewed marks for mammography alone or with AI to confirm lesion identification. The reference standard was histologic examination and 1-year follow-up data. The cancer detection rate (CDR) per 1000 screening examinations, sensitivity, specificity, and abnormal interpretation rate (AIR) of mammography alone, mammography with AI, and mammography plus US were compared. Results Among 5707 asymptomatic women (mean age, 52.4 years ± 7.9 [SD]), 33 (0.6%) had cancer (median lesion size, 0.7 cm). Mammography with AI had a higher specificity (95.3% [95% CI: 94.7, 95.8], P = .003) and lower AIR (5.0% [95% CI: 4.5, 5.6], P = .004) than mammography alone (94.3% [95% CI: 93.6, 94.8] and 6.0% [95% CI: 5.4, 6.7], respectively). Mammography plus US had a higher CDR (5.6 vs 3.5 per 1000 examinations, P = .002) and sensitivity (97.0% vs 60.6%, P = .002) but lower specificity (77.6% vs 95.3%, P < .001) and higher AIR (22.9% vs 5.0%, P < .001) than mammography with AI. Supplemental US alone helped detect 12 cancers, mostly stage 0 and I (92%, 11 of 12). 
Conclusion Although AI improved the specificity of mammography interpretation, mammography plus supplemental US helped detect more node-negative early breast cancers that were undetected using mammography with AI. © RSNA, 2024 Supplemental material is available for this article. See also the editorial by Whitman and Destounis in this issue.
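The metrics compared throughout this abstract (cancer detection rate per 1000 examinations, sensitivity, specificity, and abnormal interpretation rate) all derive from the same 2 × 2 confusion counts. A minimal sketch of those definitions; the function name and the sample counts are illustrative, not taken from the study:

```python
def screening_metrics(tp, fp, tn, fn):
    """Standard screening-mammography audit metrics from confusion counts."""
    n = tp + fp + tn + fn
    return {
        # cancers detected per 1000 screening examinations
        "cdr_per_1000": 1000 * tp / n,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        # abnormal interpretation (recall) rate: all positive calls
        "air": (tp + fp) / n,
    }

# Invented counts for illustration only (not the study's raw data)
m = screening_metrics(tp=32, fp=1275, tn=4387, fn=1)
print(round(m["cdr_per_1000"], 1), round(m["sensitivity"], 3))
```

The trade-off reported above falls directly out of these definitions: adding supplemental US raises tp (higher CDR and sensitivity) at the cost of more fp (lower specificity, higher AIR).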
Affiliation(s)
- Su Min Ha, Myoung-Jin Jang, Inyoung Youn, Heera Yoen, Hye Ji, Su Hyun Lee, Ann Yi, Jung Min Chang
- From the Department of Radiology (S.M.H., H.Y., H.J., S.H.L., J.M.C.) and Medical Research Collaborating Center (M.J.J.), Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul 03080, Republic of Korea; Department of Radiology, Seoul National University College of Medicine, Seoul, Republic of Korea (S.M.H., S.H.L., J.M.C.); Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea (S.M.H.); Department of Radiology, Kangbuk Samsung Hospital, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea (I.Y.); and Department of Radiology, Seoul National University Hospital Healthcare System Gangnam Center, Seoul, Republic of Korea (A.Y.)
3
De Souza J, Viswanath VK, Echterhoff JM, Chamberlain K, Wang EJ. Augmenting Telepostpartum Care With Vision-Based Detection of Breastfeeding-Related Conditions: Algorithm Development and Validation. JMIR AI 2024; 3:e54798. [PMID: 38913995] [PMCID: PMC11231616] [DOI: 10.2196/54798]
Abstract
BACKGROUND Breastfeeding benefits both the mother and infant and is a topic of attention in public health. After childbirth, untreated medical conditions or lack of support lead many mothers to discontinue breastfeeding. For instance, nipple damage and mastitis affect 80% and 20% of US mothers, respectively. Lactation consultants (LCs) help mothers with breastfeeding, providing in-person, remote, and hybrid lactation support. LCs guide, encourage, and find ways for mothers to have a better breastfeeding experience. Current telehealth services help mothers seek LCs for breastfeeding support, where images help them identify and address many issues. Due to the disproportionate ratio of LCs to mothers in need, these professionals are often overloaded and burned out. OBJECTIVE This study aims to investigate the effectiveness of 5 distinct convolutional neural networks in detecting healthy lactating breasts and 6 breastfeeding-related issues using only red, green, and blue (RGB) images. Our goal was to assess the applicability of this algorithm as an auxiliary resource for LCs to identify painful breast conditions quickly, better manage their patients through triage, respond promptly to patient needs, and enhance the overall experience and care for breastfeeding mothers. METHODS We evaluated the potential of 5 classification models to detect breastfeeding-related conditions using 1078 breast and nipple images gathered from web-based and physical educational resources. We used the convolutional neural networks ResNet50, Visual Geometry Group model with 16 layers (VGG16), InceptionV3, EfficientNetV2, and DenseNet169 to classify the images across 7 classes: healthy, abscess, mastitis, nipple blebs, dermatosis, engorgement, and nipple damage by improper feeding or misuse of breast pumps. We also evaluated the models' ability to distinguish between healthy and unhealthy images.
We present an analysis of the classification challenges, identifying image traits that may confound the detection model. RESULTS The best model achieves an average area under the receiver operating characteristic curve of 0.93 for all conditions after data augmentation for multiclass classification. For binary classification, we achieved, with the best model, an average area under the curve of 0.96 for all conditions after data augmentation. Several factors contributed to the misclassification of images, including similar visual features in the conditions that precede other conditions (such as the mastitis spectrum disorder), partially covered breasts or nipples, and images depicting multiple conditions in the same breast. CONCLUSIONS This vision-based automated detection technique offers an opportunity to enhance postpartum care for mothers and can potentially help alleviate the workload of LCs by expediting decision-making processes.
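The area under the receiver operating characteristic curve reported for these classifiers has a direct probabilistic reading: it is the probability that a randomly chosen positive image is scored higher than a randomly chosen negative one. A minimal rank-based sketch of that equivalence (a generic implementation, not the authors' evaluation code; scores are invented):

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs ranked correctly, counting ties as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Perfect separation of positives from negatives gives an AUC of 1.0
print(roc_auc([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))  # -> 1.0
```

This pairwise form is useful for intuition: an AUC of 0.93 means roughly 93% of unhealthy/healthy image pairs are ordered correctly by the model's score.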
Affiliation(s)
- Jessica De Souza, Varun Kumar Viswanath, Edward Jay Wang: Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA, United States
- Jessica Maria Echterhoff: Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA, United States
- Kristina Chamberlain: Division of Extended Studies, University of California, San Diego, La Jolla, CA, United States
4
Lee SE, Hong H, Kim EK. Diagnostic performance with and without artificial intelligence assistance in real-world screening mammography. Eur J Radiol Open 2024; 12:100545. [PMID: 38293282] [PMCID: PMC10825593] [DOI: 10.1016/j.ejro.2023.100545]
Abstract
Purpose To evaluate artificial intelligence-based computer-aided diagnosis (AI-CAD) for screening mammography, we analyzed the diagnostic performance of radiologists by alternately providing and withholding AI-CAD results each month. Methods This retrospective study was approved by the institutional review board with a waiver for informed consent. Between August 2020 and May 2022, 1819 consecutive women (mean age 50.8 ± 9.4 years) with 2061 screening mammography and same-day ultrasound examinations at a single institution were included. Radiologists interpreted screening mammography in clinical practice with AI-CAD results provided or withheld in alternating months. The AI-CAD results were retrospectively obtained for analysis even when withheld from radiologists. The diagnostic performances of radiologists and stand-alone AI-CAD were compared, and the performances of radiologists with and without AI-CAD assistance were also compared, by cancer detection rate, recall rate, sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUC). Results Twenty-nine breast cancer patients and 1790 women without cancer were included. Diagnostic performances of the radiologists did not significantly differ with and without AI-CAD assistance. Radiologists with AI-CAD assistance showed the same sensitivity (76.5%) and similar specificity (92.3% vs 93.8%), AUC (0.844 vs 0.851), and recall rates (8.8% vs 7.4%) compared with stand-alone AI-CAD. Radiologists without AI-CAD assistance showed lower specificity (91.9% vs 94.6%) and accuracy (91.5% vs 94.1%) and higher recall rates (8.6% vs 5.9%, all p < 0.05) compared with stand-alone AI-CAD. Conclusion Radiologists showed no significant difference in diagnostic performance when both screening mammography and ultrasound were performed with or without AI-CAD assistance for mammography.
However, without AI-CAD assistance, radiologists showed lower specificity and accuracy and higher recall rates compared to stand-alone AI-CAD.
Affiliation(s)
- Eun-Kyung Kim (correspondence): Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin-si, Gyeonggi-do, Korea.
5
Yoen H, Jang MJ, Yi A, Moon WK, Chang JM. Artificial Intelligence for Breast Cancer Detection on Mammography: Factors Related to Cancer Detection. Acad Radiol 2024; 31:2239-2247. [PMID: 38216413] [DOI: 10.1016/j.acra.2023.12.006]
Abstract
RATIONALE AND OBJECTIVES Little is known about the factors affecting the performance of artificial intelligence (AI) software for breast cancer detection on mammography. This study aimed to identify factors associated with the abnormality scores assigned by the AI software. MATERIALS AND METHODS A retrospective database search was conducted to identify consecutive asymptomatic women who underwent breast cancer surgery between April 2016 and December 2019. A commercially available AI software (Lunit INSIGHT MMG, Ver. 1.1.4.0) was applied to preoperative mammography to assign individual abnormality scores to the lesions, and a score of 10 or higher was considered a positive detection by the AI software. Radiologists without knowledge of the AI results retrospectively assessed mammographic density and classified mammographic findings as positive or negative. General linear model (GLM) analysis was used to identify the clinical, pathological, and mammographic findings related to the abnormality scores, obtaining coefficient β values that represent the mean difference per unit or in comparison with the reference value. Additionally, the reasons for non-detection by the AI software were investigated. RESULTS Among the 1001 index cancers (830 invasive cancers and 171 ductal carcinomas in situ) in 1001 patients, 717 (72%) were correctly detected by AI, while the remaining 284 (28%) were not detected. Multivariable GLM analysis showed that abnormal mammography findings (β = 77.0 for mass, β = 73.1 for calcification only, β = 49.4 for architectural distortion, and β = 47.6 for asymmetry compared to negative; all Ps < 0.001), invasive tumor size (β = 4.3 per 1 cm, P < 0.001), and human epidermal growth factor receptor type 2 (HER2) positivity (β = 9.2 compared to hormone receptor positive, HER2 negative, P = 0.004) were associated with higher mean abnormality scores.
AI failed to detect small asymmetries in extremely dense breasts, subcentimeter-sized or isodense lesions, and faint amorphous calcifications. CONCLUSION Cancers with abnormal mammographic findings on retrospective review, large invasive tumor size, or HER2 positivity received high AI abnormality scores. Understanding the patterns of AI software performance is crucial for effectively integrating AI into clinical practice.
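With an identity-link GLM and a dummy-coded categorical predictor, a coefficient β such as those above is simply the difference between a category's mean abnormality score and the mean of the reference category. A toy illustration of that reading (all scores invented; function and group names are illustrative, not from the study):

```python
def mean(xs):
    return sum(xs) / len(xs)

def beta_vs_reference(scores_by_group, reference):
    """Per-group coefficient: group mean score minus the reference group's mean."""
    ref = mean(scores_by_group[reference])
    return {g: mean(v) - ref for g, v in scores_by_group.items() if g != reference}

# Invented abnormality scores per mammographic finding category
groups = {
    "negative": [5, 10, 15],      # reference category
    "mass": [80, 90, 100],
    "asymmetry": [50, 60, 70],
}
print(beta_vs_reference(groups, "negative"))  # beta: mass 80.0, asymmetry 50.0
```

In the multivariable model the same interpretation holds after adjustment for the other covariates, which is why β = 77.0 for mass can be read as "masses score, on average, 77 points higher than negative mammograms".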
Affiliation(s)
- Heera Yoen, Woo Kyung Moon, Jung Min Chang: Department of Radiology, Seoul National University Hospital, 101 Daehak-ro, Jongno-gu, Seoul, Republic of Korea
- Myoung-Jin Jang: Medical Research Collaborating Center, Seoul National University Hospital, Seoul, Republic of Korea
- Ann Yi: Department of Radiology, Seoul National University Hospital Healthcare System Gangnam Center, Seoul, Korea
6
Lee SE, Hong H, Kim EK. Positive Predictive Values of Abnormality Scores From a Commercial Artificial Intelligence-Based Computer-Aided Diagnosis for Mammography. Korean J Radiol 2024; 25:343-350. [PMID: 38528692] [PMCID: PMC10973732] [DOI: 10.3348/kjr.2023.0907]
Abstract
OBJECTIVE Artificial intelligence-based computer-aided diagnosis (AI-CAD) is increasingly used in mammography. While the continuous scores of AI-CAD have been related to malignancy risk, the understanding of how to interpret and apply these scores remains limited. We investigated the positive predictive values (PPVs) of the abnormality scores generated by a deep learning-based commercial AI-CAD system and analyzed them in relation to clinical and radiological findings. MATERIALS AND METHODS From March 2020 to May 2022, 656 breasts from 599 women (mean age 52.6 ± 11.5 years, including 0.6% [4/599] high-risk women) who underwent mammography and received positive AI-CAD results (Lunit Insight MMG, abnormality score ≥ 10) were retrospectively included in this study. Univariable and multivariable analyses were performed to evaluate the associations between the AI-CAD abnormality scores and clinical and radiological factors. The breasts were subdivided according to the abnormality scores into groups 1 (10-49), 2 (50-69), 3 (70-89), and 4 (90-100) using the optimal binning method. The PPVs were calculated for all breasts and subgroups. RESULTS Diagnostic indications and positive imaging findings by radiologists were associated with higher abnormality scores in the multivariable regression analysis. The overall PPV of AI-CAD was 32.5% (213/656) for all breasts, including 213 breast cancers, 129 breasts with benign biopsy results, and 314 breasts with benign outcomes in the follow-up or diagnostic studies. In the screening mammography subgroup, the PPVs were 18.6% (58/312) overall and 5.1% (12/235), 29.0% (9/31), 57.9% (11/19), and 96.3% (26/27) for score groups 1, 2, 3, and 4, respectively. The PPVs were significantly higher in women with diagnostic indications (45.1% [155/344]), palpability (51.9% [149/287]), fatty breasts (61.2% [60/98]), and certain imaging findings (masses with or without calcifications and distortion). 
CONCLUSION PPV increased with increasing AI-CAD abnormality scores. The PPVs of AI-CAD satisfied the acceptable PPV range according to Breast Imaging-Reporting and Data System for screening mammography and were higher for diagnostic mammography.
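The group-wise PPVs above are just (confirmed cancers) / (all AI-positive breasts) within each abnormality-score band. A sketch of that calculation using the study's score cut points (the case data below are invented for illustration):

```python
# Score groups 1-4 as defined in the abstract (AI-positive means score >= 10)
BINS = [(10, 49), (50, 69), (70, 89), (90, 100)]

def ppv_by_score_group(cases):
    """cases: list of (abnormality_score, is_cancer) for AI-positive breasts.
    Returns PPV per score band, or None for an empty band."""
    out = {}
    for lo, hi in BINS:
        hits = [is_cancer for score, is_cancer in cases if lo <= score <= hi]
        out[(lo, hi)] = sum(hits) / len(hits) if hits else None
    return out

# Invented examples: (abnormality score, histology-confirmed cancer?)
demo = [(12, False), (35, False), (55, True), (60, False), (95, True), (99, True)]
print(ppv_by_score_group(demo))
```

As in the study, a monotone rise of PPV across bands is what justifies reading the continuous abnormality score as a graded malignancy-risk estimate rather than a binary flag.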
Affiliation(s)
- Si Eun Lee, Hanpyo Hong, Eun-Kyung Kim: Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Republic of Korea
7
Grignaffini F, Barbuto F, Troiano M, Piazzo L, Simeoni P, Mangini F, De Stefanis C, Onetti Muda A, Frezza F, Alisi A. The Use of Artificial Intelligence in the Liver Histopathology Field: A Systematic Review. Diagnostics (Basel) 2024; 14:388. [PMID: 38396427] [PMCID: PMC10887838] [DOI: 10.3390/diagnostics14040388]
Abstract
Digital pathology (DP) has begun to play a key role in the evaluation of liver specimens. Recent studies have shown that a workflow that combines DP and artificial intelligence (AI) applied to histopathology has potential value in supporting the diagnosis, treatment evaluation, and prognosis prediction of liver diseases. Here, we provide a systematic review of the use of this workflow in the field of hepatology. Based on the PRISMA 2020 criteria, a search of the PubMed, SCOPUS, and Embase electronic databases was conducted, applying inclusion/exclusion filters. The articles were evaluated by two independent reviewers, who extracted the specifications and objectives of each study, the AI tools used, and the results obtained. From the 266 initial records identified, 25 eligible studies were selected, mainly conducted on human liver tissues. Most of the studies were performed using whole-slide imaging systems for imaging acquisition and applying different machine learning and deep learning methods for image pre-processing, segmentation, feature extractions, and classification. Of note, most of the studies selected demonstrated good performance as classifiers of liver histological images compared to pathologist annotations. Promising results to date bode well for the not-too-distant inclusion of these techniques in clinical practice.
Affiliation(s)
- Flavia Grignaffini, Francesco Barbuto, Lorenzo Piazzo, Fabio Mangini, Fabrizio Frezza: Department of Information Engineering, Electronics and Telecommunications (DIET), “La Sapienza” University of Rome, 00184 Rome, Italy
- Maurizio Troiano, Cristiano De Stefanis, Anna Alisi: Research Unit of Genetics of Complex Phenotypes, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy
- Patrizio Simeoni: National Transport Authority (NTA), D02 WT20 Dublin, Ireland; Faculty of Lifelong Learning, South East Technological University (SETU), R93 V960 Carlow, Ireland
8
Yoon JH, Han K, Suh HJ, Youk JH, Lee SE, Kim EK. Artificial intelligence-based computer-assisted detection/diagnosis (AI-CAD) for screening mammography: Outcomes of AI-CAD in the mammographic interpretation workflow. Eur J Radiol Open 2023; 11:100509. [PMID: 37484980] [PMCID: PMC10362167] [DOI: 10.1016/j.ejro.2023.100509]
Abstract
Purpose To evaluate the stand-alone diagnostic performance of AI-CAD and the outcomes of AI-CAD-detected abnormalities when applied to the mammographic interpretation workflow. Methods From January 2016 to December 2017, 6499 screening mammograms of 5228 women were collected from a single screening facility. Historic reads by three radiologists were used as the radiologist interpretation. A commercially available AI-CAD was used for analysis. One radiologist not involved in interpretation retrospectively reviewed the abnormality features and assessed the significance (negligible vs. need recall) of the AI-CAD marks. Ground truth in terms of cancer, benign lesion, or absence of abnormality was confirmed by histopathologic diagnosis or negative results on the next-round screen. Results Of the 6499 mammograms, 6282 (96.7%) were in the negative, 189 (2.9%) in the benign, and 28 (0.4%) in the cancer group. AI-CAD detected 5 of the 9 cancers that were initially interpreted as negative (17.9%, 5 of 28 cancers overall). Of the 648 AI-CAD recalls, 89.0% (577 of 648) were marks seen on examinations in the negative group, and 267 (41.2%) of the AI-CAD marks were considered negligible. Stand-alone AI-CAD had significantly higher recall rates (10.0% vs. 3.4%, P < 0.001) with comparable sensitivity and cancer detection rates (P = 0.086 and 0.102, respectively) compared with the radiologists' interpretation. Conclusion AI-CAD detected 17.9% additional cancers on screening mammography that were initially overlooked by the radiologists. Despite the additional cancer detection, AI-CAD had significantly higher recall rates in the clinical workflow, with 89.0% of AI-CAD marks on negative mammograms.
Affiliation(s)
- Jung Hyun Yoon: Department of Radiology, Severance Hospital, Research Institute of Radiological Science, Center for Clinical Imaging Data Science, Yonsei University College of Medicine, South Korea
- Kyungwha Han: Department of Radiology, Center for Clinical Imaging Data Science, Yonsei University College of Medicine, South Korea
- Hee Jung Suh: Department of Radiology, Severance Check-up Center, South Korea
- Ji Hyun Youk: Department of Radiology, Gangnam Severance Hospital, Yonsei University College of Medicine, South Korea
- Si Eun Lee, Eun-Kyung Kim: Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, South Korea
9
Bayareh-Mancilla R, Medina-Ramos LA, Toriz-Vázquez A, Hernández-Rodríguez YM, Cigarroa-Mayorga OE. Automated Computer-Assisted Medical Decision-Making System Based on Morphological Shape and Skin Thickness Analysis for Asymmetry Detection in Mammographic Images. Diagnostics (Basel) 2023; 13:3440. PMID: 37998576; PMCID: PMC10670641; DOI: 10.3390/diagnostics13223440.
Abstract
Breast cancer is a significant health concern for women, emphasizing the need for early detection. This research focuses on developing a computer system for asymmetry detection in mammographic images, employing two critical approaches: Dynamic Time Warping (DTW) for shape analysis and the Growing Seed Region (GSR) method for breast skin segmentation. The methodology involves processing mammograms in DICOM format. In the morphological study, a centroid-based mask is computed using images extracted from the DICOM files. Distances between the centroid and the breast perimeter are then calculated to assess similarity through DTW analysis. For skin thickness asymmetry identification, a seed is initially set on skin pixels and expanded based on intensity and depth similarities. The DTW analysis achieves an accuracy of 83%, correctly identifying 23 possible asymmetry cases out of 20 ground truth cases. The GSR method is validated using Average Symmetric Surface Distance and Relative Volumetric metrics, yielding similarities of 90.47% and 66.66%, respectively, for asymmetry cases compared with 182 ground truth segmented images, successfully identifying 35 patients with potential skin asymmetry. Additionally, a Graphical User Interface is designed to facilitate the insertion of DICOM files and provide visual representations of asymmetrical findings for validation and accessibility by physicians.
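The shape-asymmetry comparison above rests on dynamic time warping of centroid-to-perimeter distance profiles. A minimal pure-Python sketch of the standard DTW recurrence (not the authors' implementation; the profiles below are made up):

```python
# Minimal dynamic time warping (DTW) distance between two 1-D distance profiles.
def dtw_distance(a, b):
    """DTW distance between two sequences via the standard DP recurrence."""
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # stretch a
                                  dp[i][j - 1],      # stretch b
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]

# Identical profiles align at zero cost; a perturbed profile accumulates cost.
left  = [1.0, 2.0, 3.0, 2.0, 1.0]
right = [1.0, 2.5, 3.0, 2.0, 1.0]
print(dtw_distance(left, left))   # -> 0.0
print(dtw_distance(left, right))  # -> 0.5
```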
Affiliation(s)
- Rafael Bayareh-Mancilla
- Department of Advanced Technologies, UPIITA-Instituto Politécnico Nacional, Av. IPN No. 2580, Mexico City C.P. 07340, Mexico
- Alfonso Toriz-Vázquez
- Academic Unit, Institute of Applied Mathematics and Systems Research of the State of Yucatan, National Autonomous University of Mexico, Merida C.P. 97302, Yucatan, Mexico
- Oscar Eduardo Cigarroa-Mayorga
- Department of Advanced Technologies, UPIITA-Instituto Politécnico Nacional, Av. IPN No. 2580, Mexico City C.P. 07340, Mexico
10
Hong GS, Jang M, Kyung S, Cho K, Jeong J, Lee GY, Shin K, Kim KD, Ryu SM, Seo JB, Lee SM, Kim N. Overcoming the Challenges in the Development and Implementation of Artificial Intelligence in Radiology: A Comprehensive Review of Solutions Beyond Supervised Learning. Korean J Radiol 2023; 24:1061-1080. PMID: 37724586; PMCID: PMC10613849; DOI: 10.3348/kjr.2023.0393.
Abstract
Artificial intelligence (AI) in radiology is a rapidly developing field with several prospective clinical studies demonstrating its benefits in clinical practice. In 2022, the Korean Society of Radiology held a forum to discuss the challenges and drawbacks in AI development and implementation. Various barriers hinder the successful application and widespread adoption of AI in radiology, such as limited annotated data, data privacy and security, data heterogeneity, imbalanced data, model interpretability, overfitting, and integration with clinical workflows. In this review, some of the various possible solutions to these challenges are presented and discussed; these include training with longitudinal and multimodal datasets, dense training with multitask learning and multimodal learning, self-supervised contrastive learning, various image modifications and syntheses using generative models, explainable AI, causal learning, federated learning with large data models, and digital twins.
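Among the solutions this review surveys is self-supervised contrastive learning. As one concrete instance, here is a NumPy sketch of a SimCLR-style NT-Xent loss over paired augmented views (an illustrative formulation assumed here, not code from the review):

```python
import numpy as np

def nt_xent(z: np.ndarray, temperature: float = 0.5) -> float:
    """SimCLR-style NT-Xent loss for 2N L2-normalized embeddings, where rows
    2k and 2k+1 are the two augmented views of the same image."""
    n = z.shape[0]                  # must be even
    sim = (z @ z.T) / temperature   # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)  # a view is never its own positive
    pos = np.arange(n) ^ 1          # index of each row's paired view
    log_denom = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(log_denom - sim[np.arange(n), pos]))

# Perfectly matched view pairs yield a lower loss than random embeddings.
rng = np.random.default_rng(0)
views = rng.normal(size=(4, 16))
paired = np.repeat(views, 2, axis=0)  # each view duplicated as its own pair
paired /= np.linalg.norm(paired, axis=1, keepdims=True)
unpaired = rng.normal(size=(8, 16))
unpaired /= np.linalg.norm(unpaired, axis=1, keepdims=True)
print(nt_xent(paired), nt_xent(unpaired))
```

The loss pulls each row toward its paired view and pushes it away from every other row in the batch, which is what lets such models pretrain on unannotated radiographs.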
Affiliation(s)
- Gil-Sun Hong
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Miso Jang
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sunggu Kyung
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Kyungjin Cho
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jiheon Jeong
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Grace Yoojin Lee
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Keewon Shin
- Laboratory for Biosignal Analysis and Perioperative Outcome Research, Biomedical Engineering Center, Asan Institute of Lifesciences, Asan Medical Center, Seoul, Republic of Korea
- Ki Duk Kim
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Seung Min Ryu
- Department of Orthopedic Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Joon Beom Seo
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sang Min Lee
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Namkug Kim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
11
Kim H, Choi JS, Kim K, Ko ES, Ko EY, Han BK. Effect of artificial intelligence-based computer-aided diagnosis on the screening outcomes of digital mammography: a matched cohort study. Eur Radiol 2023; 33:7186-7198. PMID: 37188881; DOI: 10.1007/s00330-023-09692-z.
Abstract
OBJECTIVE To investigate whether artificial intelligence-based computer-aided diagnosis (AI-CAD) can improve radiologists' performance when used to support radiologists' interpretation of digital mammography (DM) in breast cancer screening. METHODS A retrospective database search identified 3158 asymptomatic Korean women who consecutively underwent screening DM between January and December 2019 without AI-CAD support, and screening DM between February and July 2020 with image interpretation aided by AI-CAD in a tertiary referral hospital using single reading. Propensity score matching was used to match the DM with AI-CAD group in a 1:1 ratio with the DM without AI-CAD group according to age, breast density, experience level of the interpreting radiologist, and screening round. Performance measures were compared with the McNemar test and generalized estimating equations. RESULTS A total of 1579 women who underwent DM with AI-CAD were matched with 1579 women who underwent DM without AI-CAD. Radiologists showed higher specificity (96% [1500 of 1563] vs 91.6% [1430 of 1561]; p < 0.001) and lower abnormal interpretation rates (AIR) (4.9% [77 of 1579] vs 9.2% [145 of 1579]; p < 0.001) with AI-CAD than without. There was no significant difference in the cancer detection rate (CDR) (AI-CAD vs no AI-CAD, 8.9 vs 8.9 per 1000 examinations; p = 0.999), sensitivity (87.5% vs 77.8%; p = 0.999), and positive predictive value for biopsy (PPV3) (35.0% vs 35.0%; p = 0.999) according to AI-CAD support. CONCLUSIONS AI-CAD increases the specificity for radiologists without decreasing sensitivity as a supportive tool in the single reading of DM for breast cancer screening. CLINICAL RELEVANCE STATEMENT This study shows that AI-CAD could improve the specificity of radiologists' DM interpretation in the single reading system without decreasing sensitivity, suggesting that it can benefit patients by reducing false positive and recall rates. 
KEY POINTS • In this retrospective-matched cohort study (DM without AI-CAD vs DM with AI-CAD), radiologists showed higher specificity and lower AIR when AI-CAD was used to support decision-making in DM screening. • CDR, sensitivity, and PPV for biopsy did not differ with and without AI-CAD support.
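Performance measures in the study above were compared with the McNemar test, which operates on the discordant pairs of a paired 2x2 table. A minimal sketch using the common normal approximation with continuity correction (the counts below are hypothetical, not the study's data):

```python
from math import erf, sqrt

def mcnemar(b: int, c: int) -> float:
    """Two-sided McNemar test p-value for paired binary outcomes, using the
    normal approximation with continuity correction. b and c are the two
    discordant-pair counts (positive under one condition but not the other)."""
    if b + c == 0:
        return 1.0
    z = max(0.0, (abs(b - c) - 1) / sqrt(b + c))
    # doubled standard-normal survival function for a two-sided test
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))

# Hypothetical discordant counts: 10 exams recalled only with AI-CAD,
# 30 recalled only without it.
print(mcnemar(10, 30))  # well below 0.01: a significant paired difference
print(mcnemar(15, 15))  # -> 1.0: perfectly balanced discordance
```

For small discordant counts, the exact binomial form of the test is preferred over this approximation.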
Affiliation(s)
- Haejung Kim
- Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul, 06351, Korea
- Ji Soo Choi
- Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul, 06351, Korea
- Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Korea
- Kyunga Kim
- Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Korea
- Biomedical Statistics Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul, Korea
- Department of Data Convergence & Future Medicine, Sungkyunkwan University School of Medicine, Seoul, Korea
- Eun Sook Ko
- Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul, 06351, Korea
- Eun Young Ko
- Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul, 06351, Korea
- Boo-Kyung Han
- Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul, 06351, Korea
12
Youk JH, Han K, Lee SE, Kim EK. Consistency of Artificial Intelligence (AI)-based Diagnostic Support Software in Short-term Digital Mammography Reimaging After Core Needle Biopsy. J Digit Imaging 2023; 36:1965-1973. PMID: 37326891; PMCID: PMC10501993; DOI: 10.1007/s10278-023-00863-4.
Abstract
To evaluate the consistency of Artificial Intelligence (AI)-based diagnostic support software in short-term digital mammography reimaging after core needle biopsy. Of 276 women who underwent short-term (<3 mo) serial digital mammograms followed by breast cancer surgery from January to December 2017, 550 breasts were included. All core needle biopsies for breast lesions were performed between the serial exams. All mammography images were analyzed using commercially available AI-based software providing an abnormality score (0-100). Demographic data for age, interval between serial exams, biopsy, and final diagnosis were compiled. Mammograms were reviewed for mammographic density and findings. Statistical analysis was performed to evaluate the distribution of variables according to biopsy and to test the interaction effects of variables with the difference in AI-based score according to biopsy. The AI-based scores of the 550 exams (benign or normal in 263 and malignant in 287) showed a significant difference between malignant and benign/normal exams (0.48 vs. 91.97 in the first exam and 0.62 vs. 87.13 in the second exam, P<0.0001). In the comparison of serial exams, no significant difference was found in AI-based score. The AI-based score difference between serial exams differed significantly according to whether biopsy was performed (-0.25 vs. 0.07, P = 0.035). In linear regression analysis, there was no significant interaction effect of any clinical or mammographic characteristic with whether the mammographic examination was performed after biopsy. The results from the AI-based diagnostic support software for digital mammography were relatively consistent on short-term reimaging, even after core needle biopsy.
Affiliation(s)
- Ji Hyun Youk
- Department of Radiology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Kyunghwa Han
- Department of Radiology, Research Institute of Radiological Science, and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, Republic of Korea
- Si Eun Lee
- Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Gyeonggi-do, Republic of Korea
- Eun-Kyung Kim
- Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Gyeonggi-do, Republic of Korea
13
Krishnan G, Singh S, Pathania M, Gosavi S, Abhishek S, Parchani A, Dhar M. Artificial intelligence in clinical medicine: catalyzing a sustainable global healthcare paradigm. Front Artif Intell 2023; 6:1227091. PMID: 37705603; PMCID: PMC10497111; DOI: 10.3389/frai.2023.1227091.
Abstract
As the demand for quality healthcare increases, healthcare systems worldwide are grappling with time constraints and excessive workloads, which can compromise the quality of patient care. Artificial intelligence (AI) has emerged as a powerful tool in clinical medicine, revolutionizing various aspects of patient care and medical research. The integration of AI in clinical medicine has not only improved diagnostic accuracy and treatment outcomes, but also contributed to more efficient healthcare delivery, reduced costs, and facilitated better patient experiences. This review article provides an extensive overview of AI applications in history taking, clinical examination, imaging, therapeutics, prognosis and research. Furthermore, it highlights the critical role AI has played in transforming healthcare in developing nations.
Affiliation(s)
- Gokul Krishnan
- Department of Internal Medicine, Kasturba Medical College, Manipal, India
- Shiana Singh
- Department of Emergency Medicine, All India Institute of Medical Sciences, Rishikesh, India
- Monika Pathania
- Department of Geriatric Medicine, All India Institute of Medical Sciences, Rishikesh, India
- Siddharth Gosavi
- Department of Internal Medicine, Kasturba Medical College, Manipal, India
- Shuchi Abhishek
- Department of Internal Medicine, Kasturba Medical College, Manipal, India
- Ashwin Parchani
- Department of Geriatric Medicine, All India Institute of Medical Sciences, Rishikesh, India
- Minakshi Dhar
- Department of Geriatric Medicine, All India Institute of Medical Sciences, Rishikesh, India
14
Nguyen AA, McCarthy AM, Kontos D. Combining Molecular and Radiomic Features for Risk Assessment in Breast Cancer. Annu Rev Biomed Data Sci 2023; 6:299-311. PMID: 37159874; DOI: 10.1146/annurev-biodatasci-020722-092748.
Abstract
Breast cancer risk is highly variable within the population, and current research is driving the shift toward personalized medicine. By accurately assessing an individual woman's risk, we can reduce over- and undertreatment by preventing unnecessary procedures or by escalating screening. Breast density measured from conventional mammography has been established as one of the most dominant risk factors for breast cancer; however, it is currently limited in its ability to characterize the more complex breast parenchymal patterns that have been shown to provide additional information for strengthening cancer risk models. Molecular factors, ranging from single mutations with high penetrance (a high likelihood that a mutation carrier will show signs and symptoms of the disease) to combinations of gene mutations with low penetrance, have shown promise for augmenting risk assessment. Although imaging biomarkers and molecular biomarkers have each individually demonstrated improved performance in risk assessment, few studies have evaluated them together. This review aims to highlight the current state of the art in breast cancer risk assessment using imaging and genetic biomarkers.
Affiliation(s)
- Alex A Nguyen
- Department of Bioengineering, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Anne Marie McCarthy
- Department of Biostatistics, Epidemiology and Informatics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Despina Kontos
- Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
15
Choi WJ, An JK, Woo JJ, Kwak HY. Comparison of Diagnostic Performance in Mammography Assessment: Radiologist with Reference to Clinical Information Versus Standalone Artificial Intelligence Detection. Diagnostics (Basel) 2022; 13:117. PMID: 36611409; PMCID: PMC9818877; DOI: 10.3390/diagnostics13010117.
Abstract
We compared the diagnostic performance of radiologists with reference to clinical information versus standalone artificial intelligence (AI) detection of breast cancer on digital mammography. This study included 392 women (average age: 57.3 ± 12.1 years, range: 30-94 years) diagnosed with malignancy between January 2010 and June 2021 who underwent digital mammography prior to biopsy. Two radiologists assessed mammographic findings with reference to clinical symptoms and prior mammography. All mammograms were also analyzed by the AI system. Breast cancer detection performance was compared between radiologists and AI according to whether the lesion location identified by each method (radiologists or AI) was concordant with the pathological results. The kappa coefficient was used to measure concordance between the radiologists' or AI analysis and the pathology results. Binomial logistic regression analysis was performed to identify factors influencing concordance between the radiologists' analysis and the pathology results. Overall, concordance was higher for the radiologists' diagnosis than for the AI analysis (kappa coefficient: 0.819 vs. 0.698). Prior mammography (odds ratio (OR): 8.55, p < 0.001), clinical symptoms (OR: 5.49, p < 0.001), and fatty breast density (OR: 5.18, p = 0.008) were important factors contributing to the concordance of lesion location between the radiologists' diagnosis and the pathology results.
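The concordance statistic used above is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal sketch (the confusion matrix below is hypothetical, not the study's data):

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square agreement matrix
    (rows: rater/method 1, columns: rater/method 2)."""
    k = len(confusion)
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(k)) / total
    # chance agreement from the row and column marginals
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion) for i in range(k)
    ) / total**2
    return (observed - expected) / (1 - expected)

# Hypothetical 2x2 table: concordant/discordant lesion localization
print(cohens_kappa([[40, 10], [5, 45]]))  # -> 0.7
```

Here 85% raw agreement shrinks to kappa = 0.7 because the marginals alone would already produce 50% agreement by chance.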
Affiliation(s)
- Won Jae Choi
- Department of Radiology, Nowon Eulji University Hospital, Eulji University School of Medicine, Seoul 01830, Republic of Korea
- Jin Kyung An
- Department of Radiology, Nowon Eulji University Hospital, Eulji University School of Medicine, Seoul 01830, Republic of Korea
- Correspondence: Tel.: +82-2-970-8290; Fax: +82-2-970-8346
- Jeong Joo Woo
- Department of Radiology, Nowon Eulji University Hospital, Eulji University School of Medicine, Seoul 01830, Republic of Korea
- Hee Yong Kwak
- Department of Surgery, Nowon Eulji University Hospital, Eulji University School of Medicine, Seoul 01830, Republic of Korea
16
Gastounioti A, Eriksson M, Cohen EA, Mankowski W, Pantalone L, Ehsan S, McCarthy AM, Kontos D, Hall P, Conant EF. External Validation of a Mammography-Derived AI-Based Risk Model in a U.S. Breast Cancer Screening Cohort of White and Black Women. Cancers (Basel) 2022; 14:4803. PMID: 36230723; PMCID: PMC9564051; DOI: 10.3390/cancers14194803.
Abstract
Despite the demonstrated potential of artificial intelligence (AI) in breast cancer risk assessment for personalizing screening recommendations, further validation is required regarding AI model bias and generalizability. We performed external validation, on a U.S. screening cohort, of a mammography-derived AI breast cancer risk model originally developed for European screening cohorts. We retrospectively identified 176 breast cancers with exams 3 months to 2 years prior to cancer diagnosis and a random sample of 4963 controls from women with at least one year of negative follow-up. A risk score for each woman was calculated via the AI risk model. Age-adjusted areas under the ROC curve (AUCs) were estimated for the entire cohort and separately for White and Black women. The Gail 5-year risk model was also evaluated for comparison. The overall AUC was 0.68 (95% CI: 0.64-0.72) for all women, 0.67 (0.61-0.72) for White women, and 0.70 (0.65-0.76) for Black women. The AI risk model significantly outperformed the Gail risk model for all women (p < 0.01) and for Black women (p < 0.01), but not for White women (p = 0.38). The performance of the mammography-derived AI risk model was comparable to previously reported European validation results; it did not differ significantly between White and Black women; and overall, it was significantly higher than that of the Gail model.
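The AUCs reported above can be read through the empirical (Mann-Whitney) definition: the probability that a randomly chosen case receives a higher risk score than a randomly chosen control. A brute-force sketch (illustrative scores, not study data):

```python
def auc(case_scores, control_scores):
    """Empirical AUC: probability that a randomly chosen case scores higher
    than a randomly chosen control (Mann-Whitney U / (n_cases * n_controls)),
    counting ties as 1/2."""
    wins = 0.0
    for case in case_scores:
        for control in control_scores:
            if case > control:
                wins += 1.0
            elif case == control:
                wins += 0.5
    return wins / (len(case_scores) * len(control_scores))

# Illustrative risk scores for 3 cases and 4 controls: 11 of the 12
# case-control pairs are correctly ordered.
print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2, 0.1]))  # -> 11/12 ≈ 0.917
```

This quadratic version is fine for a sketch; in practice a rank-based computation scales to cohorts of thousands of women.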
Affiliation(s)
- Aimilia Gastounioti
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA 19104, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Correspondence: Tel.: +1-314-286-0553 (A.G.); +1-215-662-4032 (E.F.C.)
- Mikael Eriksson
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, 171 77 Stockholm, Sweden
- Eric A. Cohen
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA 19104, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Walter Mankowski
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA 19104, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Lauren Pantalone
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA 19104, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Sarah Ehsan
- Department of Biostatistics, Epidemiology & Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Anne Marie McCarthy
- Department of Biostatistics, Epidemiology & Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Despina Kontos
- Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA 19104, USA
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Per Hall
- Department of Medical Epidemiology and Biostatistics, Karolinska Institutet, 171 77 Stockholm, Sweden
- Department of Oncology, Södersjukhuset, 118 83 Stockholm, Sweden
- Emily F. Conant
- Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA 19104, USA
- Correspondence: Tel.: +1-314-286-0553 (A.G.); +1-215-662-4032 (E.F.C.)
17
Bao C, Shen J, Zhang Y, Zhang Y, Wei W, Wang Z, Ding J, Han L. Evaluation of an artificial intelligence support system for breast cancer screening in Chinese people based on mammogram. Cancer Med 2022; 12:3718-3726. PMID: 36082949; PMCID: PMC9939225; DOI: 10.1002/cam4.5231.
Abstract
BACKGROUND To evaluate the diagnostic performance of radiologists on breast cancer with or without artificial intelligence (AI) support. METHODS A retrospective study was performed. In total, 643 mammograms (average age: 54 years; female: 100%; cancer: 62.05%) were randomly allocated into two groups. Seventy-five percent of mammograms in each group were randomly selected for assessment by two independent radiologists, and the rest were read once. Half of the 71 radiologists could read mammograms with AI support, and the other half could not. Sensitivity, specificity, Youden's index, agreement rate, Kappa value, the area under the receiver operating characteristic curve (AUC) and the reading time of radiologists in each group were analyzed. RESULTS The average AUC was higher if the AI support system was used (unaided: 0.84; with AI support: 0.91; p < 0.01). The average sensitivity increased from 84.77% to 95.07% with AI support (p < 0.01), but the average specificity decreased (p = 0.07). Youden's index, agreement rate and Kappa value were larger in the group with AI support, and the average reading time was shorter (p < 0.01). CONCLUSIONS The AI support system might contribute to enhancing the diagnostic performance (e.g., higher sensitivity and AUC) of radiologists. In the future, the AI algorithm should be improved, and prospective studies should be conducted.
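Youden's index, reported above alongside sensitivity and specificity, is J = sensitivity + specificity - 1. A small sketch from confusion-matrix counts (the counts below are hypothetical, chosen only to roughly match the abstract's ~62% cancer prevalence):

```python
def screening_metrics(tp: int, fp: int, tn: int, fn: int):
    """Sensitivity, specificity, and Youden's index (J = sens + spec - 1)
    from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity, sensitivity + specificity - 1.0

# Hypothetical reads of ~640 mammograms at roughly 62% cancer prevalence
sens, spec, youden = screening_metrics(tp=380, fp=40, tn=201, fn=19)
print(f"sensitivity={sens:.2%} specificity={spec:.2%} J={youden:.3f}")
```

J ranges from 0 (a reader no better than chance) to 1 (perfect discrimination), which is why it is a convenient single-number summary when, as here, AI support raises sensitivity while lowering specificity.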
Affiliation(s)
- Chengzhen Bao
- Beijing Obstetrics and Gynecology Hospital, Capital Medical University, Beijing Maternal and Child Health Care Hospital, Beijing, China
- Jie Shen
- Beijing Obstetrics and Gynecology Hospital, Capital Medical University, Beijing Maternal and Child Health Care Hospital, Beijing, China
- Yue Zhang
- Beijing Obstetrics and Gynecology Hospital, Capital Medical University, Beijing Maternal and Child Health Care Hospital, Beijing, China
- Yan Zhang
- Beijing Obstetrics and Gynecology Hospital, Capital Medical University, Beijing Maternal and Child Health Care Hospital, Beijing, China
- Wei Wei
- Beijing Obstetrics and Gynecology Hospital, Capital Medical University, Beijing Maternal and Child Health Care Hospital, Beijing, China
- Lili Han
- Beijing Obstetrics and Gynecology Hospital, Capital Medical University, Beijing Maternal and Child Health Care Hospital, Beijing, China
18
Koo BS, Lee JJ, Jung JW, Kang CH, Joo KB, Kim TH, Lee S. A pilot study on deep learning-based grading of corners of vertebral bodies for assessment of radiographic progression in patients with ankylosing spondylitis. Ther Adv Musculoskelet Dis 2022; 14:1759720X221114097. PMID: 35898565; PMCID: PMC9310199; DOI: 10.1177/1759720X221114097.
Abstract
Background: Radiographs are widely used to evaluate radiographic progression with the modified Stoke Ankylosing Spondylitis Spinal Score (mSASSS).
Objective: This pilot study aimed to develop a deep learning model for grading the corners of the cervical and lumbar vertebral bodies for computer-aided detection of mSASSS in patients with ankylosing spondylitis (AS).
Methods: Digital radiographic examination of the spine was performed using Discovery XR656 (GE Healthcare) and Digital Diagnost (Philips) systems. Disk points between the vertebral bodies were detected with a key-point detection deep learning model applied to the cervical and lumbar spinal radiographs in DICOM (Digital Imaging and Communications in Medicine) format. After cropping the vertebral regions around each disk point, the lower and upper corners of the vertebral bodies were classified as grade 3 (total bony bridges) or grades 0, 1, or 2 (non-bridges). We trained a convolutional neural network to predict the grades of the lower and upper corners of the vertebral bodies. The performance of the model was evaluated on a validation set separate from the training set.
Results: Among 1280 patients with AS for whom mSASSS data were available, 5083 cervical and 5245 lumbar lateral radiographs were reviewed. The total number of corners where mSASSS was measured in the cervical and lumbar vertebrae, including the upper and lower corners, was 119,414. Of these, 110,088 corners were in the training set and 9326 in the validation set. The mean accuracy, sensitivity, and specificity for mSASSS scoring in one corner of the vertebral body were 0.91604, 0.80288, and 0.94244, respectively.
Conclusion: A high-performance deep learning model for grading the corners of the vertebral bodies was developed for the first time. This model must be improved and further validated before it can serve as a computer-aided tool for assessing mSASSS.
Affiliation(s)
- Bon San Koo
- Division of Rheumatology, Department of Internal Medicine, Inje University Seoul Paik Hospital, College of Medicine, Inje University, Seoul, Korea
- Chang Ho Kang
- Department of Radiology, Korea University Anam Hospital, Seoul, Korea
- Kyung Bin Joo
- Department of Rheumatology, Hanyang University Hospital for Rheumatic Diseases, Seoul, Korea
- Tae-Hwan Kim
- Department of Rheumatology, Hanyang University Hospital for Rheumatic Diseases, Seoul, Korea
- Seunghun Lee
- Department of Radiology, Hanyang University Hospital for Rheumatic Diseases, 222-1, Wangsimni-ro, Seongdong-gu, Seoul 04763, Korea
19
Gastounioti A, Desai S, Ahluwalia VS, Conant EF, Kontos D. Artificial intelligence in mammographic phenotyping of breast cancer risk: a narrative review. Breast Cancer Res 2022; 24:14. PMID: 35184757; PMCID: PMC8859891; DOI: 10.1186/s13058-022-01509-z.
Abstract
BACKGROUND Improved breast cancer risk assessment models are needed to enable personalized screening strategies that achieve better harm-to-benefit ratio based on earlier detection and better breast cancer outcomes than existing screening guidelines. Computational mammographic phenotypes have demonstrated a promising role in breast cancer risk prediction. With the recent exponential growth of computational efficiency, the artificial intelligence (AI) revolution, driven by the introduction of deep learning, has expanded the utility of imaging in predictive models. Consequently, AI-based imaging-derived data has led to some of the most promising tools for precision breast cancer screening. MAIN BODY This review aims to synthesize the current state-of-the-art applications of AI in mammographic phenotyping of breast cancer risk. We discuss the fundamentals of AI and explore the computing advancements that have made AI-based image analysis essential in refining breast cancer risk assessment. Specifically, we discuss the use of data derived from digital mammography as well as digital breast tomosynthesis. Different aspects of breast cancer risk assessment are targeted including (a) robust and reproducible evaluations of breast density, a well-established breast cancer risk factor, (b) assessment of a woman's inherent breast cancer risk, and (c) identification of women who are likely to be diagnosed with breast cancers after a negative or routine screen due to masking or the rapid and aggressive growth of a tumor. Lastly, we discuss AI challenges unique to the computational analysis of mammographic imaging as well as future directions for this promising research field. 
CONCLUSIONS We provide a useful reference for AI researchers investigating image-based breast cancer risk assessment while indicating key priorities and challenges that, if properly addressed, could accelerate the implementation of AI-assisted risk stratification to further refine and individualize breast cancer screening strategies.
Affiliation(s)
- Aimilia Gastounioti
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA; Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO, 63110, USA
- Shyam Desai
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA
- Vinayak S Ahluwalia
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA; Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Emily F Conant
- Department of Radiology, Hospital of the University of Pennsylvania, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Despina Kontos
- Department of Radiology, Center for Biomedical Image Computing and Analytics (CBICA), University of Pennsylvania, Philadelphia, PA, 19104, USA
20
Park GE, Kang BJ, Kim SH, Lee J. Retrospective Review of Missed Cancer Detection and Its Mammography Findings with Artificial-Intelligence-Based, Computer-Aided Diagnosis. Diagnostics (Basel) 2022; 12:387. [PMID: 35204478 PMCID: PMC8871484 DOI: 10.3390/diagnostics12020387] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2021] [Revised: 01/12/2022] [Accepted: 02/01/2022] [Indexed: 11/24/2022] Open
Abstract
To investigate whether artificial-intelligence-based, computer-aided diagnosis (AI-CAD) could facilitate the detection of missed cancer on digital mammography, a total of 204 women diagnosed with breast cancer with diagnostic (present) and prior mammograms between 2018 and 2020 were included in this study. Two breast radiologists reviewed the mammographic features and classified them as true negative, minimal sign, or missed cancer. They analyzed the AI-CAD results with an abnormality score and assessed whether the AI-CAD correctly localized the known cancer sites. Of the 204 cases, 137 were classified as true negative, 33 as minimal signs, and 34 as missed cancer. The sensitivity, specificity, and diagnostic accuracy of AI-CAD were 84.7%, 91.5%, and 86.3% on the diagnostic mammograms and 67.2%, 91.2%, and 83.38% on the prior mammograms, respectively. AI-CAD correctly localized 27 of the 34 missed cancers on the prior mammograms. The most common findings on the preceding mammography for AI-CAD-detected missed cancers were, in order, calcifications, focal asymmetry, and asymmetry. Asymmetry was the most common finding (5/7) among the seven missed cancers that AI-CAD did not detect. The assistance of AI-CAD can be helpful in the early detection of breast cancer in mammography screening.
21
Lee JH, Kim KH, Lee EH, Ahn JS, Ryu JK, Park YM, Shin GW, Kim YJ, Choi HY. Improving the Performance of Radiologists Using Artificial Intelligence-Based Detection Support Software for Mammography: A Multi-Reader Study. Korean J Radiol 2022; 23:505-516. [PMID: 35434976 PMCID: PMC9081685 DOI: 10.3348/kjr.2021.0476] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2021] [Revised: 01/04/2022] [Accepted: 01/24/2022] [Indexed: 12/24/2022] Open
Abstract
Objective To evaluate whether artificial intelligence (AI) for detecting breast cancer on mammography can improve the performance and time efficiency of radiologists reading mammograms. Materials and Methods A commercial deep learning-based software for mammography was validated using external data collected from 200 patients, 100 each with and without breast cancer (40 with benign lesions and 60 without lesions) from one hospital. Ten readers, including five breast specialist radiologists (BSRs) and five general radiologists (GRs), assessed all mammography images using a seven-point scale to rate the likelihood of malignancy in two sessions, with and without the aid of the AI-based software, and the reading time was automatically recorded using a web-based reporting system. Two reading sessions were conducted with a two-month washout period in between. Differences in the area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, and reading time between reading with and without AI were analyzed, accounting for data clustering by readers when indicated. Results The AUROC of the AI alone, BSR (average across five readers), and GR (average across five readers) groups was 0.915 (95% confidence interval, 0.876–0.954), 0.813 (0.756–0.870), and 0.684 (0.616–0.752), respectively. With AI assistance, the AUROC significantly increased to 0.884 (0.840–0.928) and 0.833 (0.779–0.887) in the BSR and GR groups, respectively (p = 0.007 and p < 0.001, respectively). Sensitivity was improved by AI assistance in both groups (74.6% vs. 88.6% in BSR, p < 0.001; 52.1% vs. 79.4% in GR, p < 0.001), but the specificity did not differ significantly (66.6% vs. 66.4% in BSR, p = 0.238; 70.8% vs. 70.0% in GR, p = 0.689). The average reading time pooled across readers was significantly decreased by AI assistance for BSRs (82.73 vs. 73.04 seconds, p < 0.001) but increased in GRs (35.44 vs. 42.52 seconds, p < 0.001). 
Conclusion AI-based software improved the performance of radiologists regardless of their experience and affected the reading time.
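The AUROC figures in multi-reader studies like this one can be understood through the Mann-Whitney interpretation: the probability that a randomly chosen positive case receives a higher rating than a randomly chosen negative case. A minimal sketch using hypothetical seven-point malignancy ratings (not the study's data):

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic: fraction of positive/negative
    pairs where the positive case scores higher (ties count as half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical seven-point-scale ratings and ground-truth labels
scores = [7, 6, 5, 5, 3, 2, 1, 1]
labels = [1, 1, 1, 0, 0, 0, 1, 0]
value = auroc(scores, labels)  # 0.75 for these illustrative ratings
```

This rank-based formulation works directly on ordinal rating scales, which is why a seven-point likelihood-of-malignancy score suffices for ROC analysis without calibrated probabilities.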
Affiliation(s)
- Eun Hye Lee
- Department of Radiology, Soonchunhyang University Bucheon Hospital, Soonchunhyang University College of Medicine, Bucheon, Korea
- Jung Kyu Ryu
- Department of Radiology, Kyung Hee University Hospital at Gangdong, Seoul, Korea
- Young Mi Park
- Department of Radiology, Inje University Busan Paik Hospital, Inje University College of Medicine, Busan, Korea
- Gi Won Shin
- Department of Radiology, Inje University Busan Paik Hospital, Inje University College of Medicine, Busan, Korea
- Young Joong Kim
- Department of Radiology, Konyang University Hospital, Konyang University College of Medicine, Daejeon, Korea
- Hye Young Choi
- Department of Radiology, Gyeongsang National University Hospital, Jinju, Korea
22
Chang YW, An JK, Choi N, Ko KH, Kim KH, Han K, Ryu JK. Artificial Intelligence for Breast Cancer Screening in Mammography (AI-STREAM): A Prospective Multicenter Study Design in Korea Using AI-based CADe/x. J Breast Cancer 2022; 25:57-68. [PMID: 35133093 PMCID: PMC8876543 DOI: 10.4048/jbc.2022.25.e4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Revised: 11/18/2021] [Accepted: 12/05/2021] [Indexed: 11/30/2022] Open
Abstract
Purpose Artificial intelligence (AI)-based computer-aided detection/diagnosis (CADe/x) has helped improve radiologists’ performance and provides results equivalent or superior to those of radiologists’ alone. This prospective multicenter cohort study aims to generate real-world evidence on the overall benefits and disadvantages of using AI-based CADe/x for breast cancer detection in a population-based breast cancer screening program comprising Korean women aged ≥ 40 years. The purpose of this report is to compare the diagnostic accuracy of radiologists with and without the use of AI-based CADe/x in mammography readings for breast cancer screening of Korean women with average breast cancer risk. Methods Approximately 32,714 participants will be enrolled between February 2021 and December 2022 at 5 study sites in Korea. A radiologist specializing in breast imaging will interpret the mammography readings with or without the use of AI-based CADe/x. If recall is required, further diagnostic workup will be conducted to confirm the cancer detected on screening. The findings will be recorded for all participants regardless of their screening status to identify study participants with breast cancer diagnosis within both 1 year and 2 years of screening. The national cancer registry database will be reviewed in 2026 and 2027, and the results of this study are expected to be published in 2027. In addition, the diagnostic accuracy of general radiologists and radiologists specializing in breast imaging from another hospital with or without the use of AI-based CADe/x will be compared considering mammography readings for breast cancer screening. Discussion The Artificial Intelligence for Breast Cancer Screening in Mammography (AI-STREAM) study is a prospective multicenter study that aims to compare the diagnostic accuracy of radiologists with and without the use of AI-based CADe/x in mammography readings for breast cancer screening of women with average breast cancer risk. 
AI-STREAM is currently in the patient enrollment phase. Trial Registration ClinicalTrials.gov Identifier: NCT05024591
Affiliation(s)
- Yun-Woo Chang
- Department of Radiology, Soonchunhyang University Seoul Hospital, Soonchunhyang University College of Medicine, Seoul, Korea
- Jin Kyung An
- Department of Radiology, Nowon Eulji University Hospital, Eulji University School of Medicine, Seoul, Korea
- Nami Choi
- Department of Radiology, Konkuk University Medical Center, Konkuk University School of Medicine, Seoul, Korea
- Kyung Hee Ko
- Department of Radiology, CHA Bundang Medical Center, Seongnam, Korea
- Kyunghwa Han
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- Jung Kyu Ryu
- Department of Radiology, Kyung Hee University Hospital at Gangdong, College of Medicine, Kyung Hee University, Seoul, Korea
23
Youk JH, Kim EK. Research Highlight: Artificial Intelligence for Ruling Out Negative Examinations in Screening Breast MRI. Korean J Radiol 2022; 23:153-155. [PMID: 35083890 PMCID: PMC8814698 DOI: 10.3348/kjr.2021.0912] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2021] [Accepted: 12/17/2021] [Indexed: 12/03/2022] Open
Affiliation(s)
- Ji Hyun Youk
- Department of Radiology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
- Eun-Kyung Kim
- Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, Yongin, Korea