1. Sritharan N, Gutierrez C, Perez-Raya I, Gonzalez-Hernandez JL, Owens A, Dabydeen D, Medeiros L, Kandlikar S, Phatak P. Breast Cancer Screening Using Inverse Modeling of Surface Temperatures and Steady-State Thermal Imaging. Cancers (Basel) 2024; 16:2264. PMID: 38927969; PMCID: PMC11201981; DOI: 10.3390/cancers16122264.
Abstract
Cancer is characterized by increased metabolic activity and vascularity, leading to temperature changes in cancerous tissue compared with normal cells. This study focused on patients with abnormal mammogram findings or a clinical suspicion of breast cancer, exclusively those confirmed by biopsy. Using an ultra-high-sensitivity thermal camera and prone patient positioning, we measured surface temperatures and integrated them with an inverse modeling technique based on heat transfer principles to predict malignant breast lesions. In a study involving 25 breast tumors, our technique accurately predicted all tumors, with maximum errors below 5 mm in tumor size and less than 1 cm in tumor location. Predictive efficacy was unaffected by tumor size, location, or breast density, with no aberrant predictions in the contralateral normal breast. Infrared temperature profiles combined with inverse modeling successfully predicted breast cancer, highlighting the potential of this approach in breast cancer screening.
Affiliation(s)
- Nithya Sritharan
  - Department of Hematology-Oncology, Rochester Regional Health, Rochester, NY 14621, USA
- Carlos Gutierrez
  - Department of Mechanical Engineering, Rochester Institute of Technology, Rochester, NY 14623, USA
- Isaac Perez-Raya
  - Department of Mechanical Engineering, Rochester Institute of Technology, Rochester, NY 14623, USA
  - BiRed Imaging Inc., Rochester, NY 14609, USA
- Jose-Luis Gonzalez-Hernandez
  - Department of Mechanical Engineering, Rochester Institute of Technology, Rochester, NY 14623, USA
- Alyssa Owens
  - Department of Mechanical Engineering, Rochester Institute of Technology, Rochester, NY 14623, USA
- Donnette Dabydeen
  - Department of Hematology-Oncology, Rochester Regional Health, Rochester, NY 14621, USA
- Lori Medeiros
  - Department of Hematology-Oncology, Rochester Regional Health, Rochester, NY 14621, USA
- Satish Kandlikar
  - Department of Mechanical Engineering, Rochester Institute of Technology, Rochester, NY 14623, USA
  - BiRed Imaging Inc., Rochester, NY 14609, USA
- Pradyumna Phatak
  - Department of Hematology-Oncology, Rochester Regional Health, Rochester, NY 14621, USA
  - BiRed Imaging Inc., Rochester, NY 14609, USA
2. Kühl J, Elhakim MT, Stougaard SW, Rasmussen BSB, Nielsen M, Gerke O, Larsen LB, Graumann O. Population-wide evaluation of artificial intelligence and radiologist assessment of screening mammograms. Eur Radiol 2024; 34:3935-3946. PMID: 37938386; PMCID: PMC11166831; DOI: 10.1007/s00330-023-10423-7.
Abstract
OBJECTIVES To validate an AI system for standalone breast cancer detection on an entire screening population in comparison to first-reading breast radiologists. MATERIALS AND METHODS All mammography screenings performed between August 4, 2014, and August 15, 2018, in the Region of Southern Denmark with follow-up within 24 months were eligible. Screenings were assessed as normal or abnormal by breast radiologists through double reading with arbitration. For an AI decision of normal or abnormal, two AI-score cut-off points were applied, matched at the mean sensitivity (AIsens) and mean specificity (AIspec) of first readers. Accuracy measures were sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and recall rate (RR). RESULTS The sample included 249,402 screenings (149,495 women) and 2033 breast cancers (72.6% screen-detected cancers, 27.4% interval cancers). AIsens had lower specificity (97.5% vs 97.7%; p < 0.0001) and PPV (17.5% vs 18.7%; p = 0.01) and a higher RR (3.0% vs 2.8%; p < 0.0001) than first readers. AIspec was comparable to first readers on all accuracy measures. Both AIsens and AIspec detected significantly fewer screen-detected cancers (1166 (AIsens), 1156 (AIspec) vs 1252; p < 0.0001) but found more interval cancers than first readers (126 (AIsens), 117 (AIspec) vs 39; p < 0.0001), with varying types of cancers detected across multiple subgroups. CONCLUSION Standalone AI can detect breast cancer at an accuracy level equivalent to that of first readers when the AI threshold is matched at first-reader specificity. However, AI and first readers detected a different composition of cancers. CLINICAL RELEVANCE STATEMENT Replacing first readers with AI, given an appropriate cut-off score, could be feasible.
AI-detected cancers not detected by radiologists suggest a potential increase in the number of cancers detected if AI is implemented to support double reading within screening, although the clinicopathological characteristics of detected cancers would not change significantly. KEY POINTS • Standalone AI cancer detection was compared to first readers in a double-read mammography screening population. • Standalone AI matched at first reader specificity showed no statistically significant difference in overall accuracy but detected different cancers. • With an appropriate threshold, AI-integrated screening can increase the number of detected cancers with similar clinicopathological characteristics.
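The comparison above rests on the standard screening accuracy measures (sensitivity, specificity, PPV, NPV, recall rate). A minimal, self-contained sketch of how these are derived from a 2x2 screening outcome table; the counts in the example are invented for illustration and are not the study's data:

```python
def screening_metrics(tp, fp, fn, tn):
    """Accuracy measures for a screening read against follow-up outcome.

    tp: abnormal reads with cancer (screen-detected cancers)
    fp: abnormal reads without cancer (false recalls)
    fn: normal reads with cancer (interval cancers)
    tn: normal reads without cancer
    """
    return {
        "sensitivity": tp / (tp + fn),        # share of cancers flagged
        "specificity": tn / (tn + fp),        # share of non-cancers cleared
        "ppv": tp / (tp + fp),                # abnormal reads that are cancer
        "npv": tn / (tn + fn),                # normal reads that are cancer-free
        "recall_rate": (tp + fp) / (tp + fp + fn + tn),
    }

# Hypothetical counts for a large screening sample (illustration only):
m = screening_metrics(tp=1200, fp=6000, fn=800, tn=241000)
print({k: round(v, 4) for k, v in m.items()})
```

Matching an AI score threshold "at first-reader specificity" amounts to choosing the cut-off whose resulting tn/(tn+fp) equals the human figure, then comparing the remaining measures at that operating point.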
Affiliation(s)
- Johanne Kühl
  - Department of Clinical Research, University of Southern Denmark, Kløvervænget 10, 2nd floor, 5000, Odense C, Denmark
- Mohammad Talal Elhakim
  - Department of Clinical Research, University of Southern Denmark, Kløvervænget 10, 2nd floor, 5000, Odense C, Denmark
  - Department of Radiology, Odense University Hospital, Kløvervænget 47, Ground Floor, 5000, Odense C, Denmark
- Sarah Wordenskjold Stougaard
  - Department of Clinical Research, University of Southern Denmark, Kløvervænget 10, 2nd floor, 5000, Odense C, Denmark
- Benjamin Schnack Brandt Rasmussen
  - Department of Clinical Research, University of Southern Denmark, Kløvervænget 10, 2nd floor, 5000, Odense C, Denmark
  - Department of Radiology, Odense University Hospital, Kløvervænget 47, Ground Floor, 5000, Odense C, Denmark
  - CAI-X - Centre for Clinical Artificial Intelligence, Odense University Hospital, Kløvervænget 8C, 5000, Odense C, Denmark
- Mads Nielsen
  - Department of Computer Science, University of Copenhagen, Universitetsparken 1, 2100, Copenhagen, Denmark
- Oke Gerke
  - Department of Clinical Research, University of Southern Denmark, Kløvervænget 10, 2nd floor, 5000, Odense C, Denmark
  - Department of Nuclear Medicine, Odense University Hospital, Kløvervænget 47, 5000, Odense C, Denmark
- Lisbet Brønsro Larsen
  - Department of Radiology, Odense University Hospital, Kløvervænget 47, Ground Floor, 5000, Odense C, Denmark
- Ole Graumann
  - Department of Clinical Research, University of Southern Denmark, Kløvervænget 10, 2nd floor, 5000, Odense C, Denmark
  - Department of Radiology, Aarhus University Hospital, Palle Juul-Jensens Blvd. 99, 8200, Aarhus N, Denmark
  - Department of Clinical Research, Aarhus University, Palle Juul-Jensens Blvd. 99, 8200, Aarhus N, Denmark
3. Guenette JP, Lynch E, Abbasi N, Schulz K, Kumar S, Haneuse S, Kapoor N, Lacson R, Khorasani R. Recommendations for Additional Imaging on Head and Neck Imaging Examinations: Interradiologist Variation and Associated Factors. AJR Am J Roentgenol 2024; 222:e2330511. PMID: 38294159; DOI: 10.2214/ajr.23.30511.
Abstract
BACKGROUND. A paucity of relevant guidelines may lead to pronounced variation among radiologists in issuing recommendations for additional imaging (RAI) for head and neck imaging. OBJECTIVE. The purpose of this article was to explore associations of RAI for head and neck imaging examinations with examination, patient, and radiologist factors and to assess the role of individual radiologist-specific behavior in issuing such RAI. METHODS. This retrospective study included 39,200 patients (median age, 58 years; 21,855 women, 17,315 men, 30 with missing sex information) who underwent 39,200 head and neck CT or MRI examinations, interpreted by 61 radiologists, from June 1, 2021, through May 31, 2022. A natural language processing (NLP) tool, with manual review of NLP results, was used to identify RAI in report impressions. Interradiologist variation in RAI rates was assessed. A generalized mixed-effects model was used to assess associations between RAI and examination, patient, and radiologist factors. RESULTS. A total of 2943 (7.5%) reports contained RAI. Individual radiologist RAI rates ranged from 0.8% to 22.0% (median, 7.1%; IQR, 5.2-10.2%), representing a 27.5-fold difference between the minimum and maximum values and a 1.8-fold difference between the 25th and 75th percentiles. In multivariable analysis, RAI likelihood was higher for CTA than for CT examinations (OR, 1.32), for examinations that included a trainee in report generation (OR, 1.23), and for patients with self-identified race of Black or African American versus White (OR, 1.25); was lower for male than female patients (OR, 0.90); and was positively associated with patient age (OR, 1.09 per decade) and inversely associated with radiologist years since training (OR, 0.90 per 5 years). The model accounted for 10.9% of the likelihood of RAI. Of the explainable likelihood of RAI, 25.7% was attributable to examination, patient, and radiologist factors; 74.3% was attributable to radiologist-specific behavior. CONCLUSION.
Interradiologist variation in RAI rates for head and neck imaging was substantial. RAI appear to be more substantially associated with individual radiologist-specific behavior than with measurable systemic factors. CLINICAL IMPACT. Quality improvement initiatives, incorporating best practices for incidental findings management, may help reduce radiologist preference-sensitive decision-making in issuing RAI for head and neck imaging and associated care variation.
Affiliation(s)
- Jeffrey P Guenette
  - Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, 75 Francis St, Boston, MA 02115
- Elyse Lynch
  - Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, 75 Francis St, Boston, MA 02115
- Nooshin Abbasi
  - Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, 75 Francis St, Boston, MA 02115
- Kathryn Schulz
  - Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, 75 Francis St, Boston, MA 02115
- Shweta Kumar
  - Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, 75 Francis St, Boston, MA 02115
  - Present affiliation: Department of Radiology, Stanford University, Stanford, CA
- Sebastien Haneuse
  - Department of Biostatistics, Harvard School of Public Health, Boston, MA
- Neena Kapoor
  - Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, 75 Francis St, Boston, MA 02115
- Ronilda Lacson
  - Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, 75 Francis St, Boston, MA 02115
- Ramin Khorasani
  - Department of Radiology, Center for Evidence-Based Imaging, Brigham and Women's Hospital, 75 Francis St, Boston, MA 02115
4. Gutierrez C, Owens A, Medeiros L, Dabydeen D, Sritharan N, Phatak P, Kandlikar SG. Breast cancer detection using enhanced IRI-numerical engine and inverse heat transfer modeling: model description and clinical validation. Sci Rep 2024; 14:3316. PMID: 38332177; PMCID: PMC10853496; DOI: 10.1038/s41598-024-53856-w.
Abstract
Effective treatment of breast cancer relies heavily on early detection. Routine annual mammography is a widely accepted screening technique that has significantly improved the survival rate. However, it suffers from low sensitivity, resulting in high false positives from screening. To overcome this problem, adjunctive technologies such as ultrasound are employed on about 10% of women recalled for additional screening following mammography. These adjunctive techniques still result in a significant number of women, about 1.6%, undergoing biopsy, while only 0.4% of women screened have cancers. The main reason for missing cancers during mammography screening is the masking effect of dense breast tissue. The presence of a tumor alters the temperature field in the breast, and this alteration is not influenced by tissue density. In the present paper, the IRI-Numerical Engine is presented as an adjunct for detecting cancer from surface temperature data. It uses a computerized inverse heat transfer approach based on the Pennes bioheat transfer equation. Validation of this enhanced algorithm was conducted on twenty-three biopsy-proven breast cancer patients after obtaining informed consent under an IRB protocol. The algorithm correctly predicted the size and location of cancerous tumors in twenty-four breasts, while twenty-two contralateral breasts were correctly predicted to have no cancer (one woman had bilateral breast cancer). The tumors are seen as highly perfused, metabolically active heat sources that alter the surface temperatures used in heat transfer modeling. Furthermore, the results from this study with twenty-four biopsy-proven cancer cases indicate that the detection of breast cancer is not affected by breast density. This study indicates the potential of the IRI-Numerical Engine as an effective adjunct to mammography. A large-scale clinical study with a statistically significant sample size is needed before integrating this approach into the current screening protocol.
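The forward problem behind this approach is the steady-state Pennes bioheat equation, k T'' + w ρ_b c_b (T_a − T) + q_m = 0; the inverse step searches tumor parameters until modeled surface temperatures match the infrared measurements. Below is a toy 1-D forward model showing how a perfused, metabolically active region warms the skin surface. All parameter values and the tumor placement are illustrative assumptions with typical literature magnitudes, not those of the IRI-Numerical Engine:

```python
def pennes_1d_surface_temp(tumor=False):
    """Steady 1-D Pennes bioheat model of a tissue slab (chest wall -> skin).

    Solves k T'' + w*rho_b*c_b*(Ta - T) + q_m = 0 by Gauss-Seidel sweeps,
    with the chest wall held at core temperature and a convective (Robin)
    condition at the skin. Returns the skin surface temperature in deg C.
    """
    L, n = 0.05, 51                    # 5 cm slab, grid points
    dx = L / (n - 1)
    k = 0.5                            # tissue conductivity, W/(m K)
    rc = 1060.0 * 3770.0               # blood rho*c, J/(m^3 K)
    Ta, Tinf, h = 37.0, 21.0, 13.5     # arterial temp, ambient temp, conv. coeff
    w = [0.0005] * n                   # blood perfusion rate, 1/s
    q = [450.0] * n                    # metabolic heat generation, W/m^3
    if tumor:                          # hypothetical 1 cm lesion, 1 cm under skin
        for i in range(n):
            if 0.03 <= i * dx <= 0.04:
                w[i], q[i] = 0.005, 5000.0
    T = [Ta] * n
    a = k / dx**2
    for _ in range(20000):             # iterate to steady state
        T[0] = Ta                      # Dirichlet: chest wall at core temp
        for i in range(1, n - 1):
            p = w[i] * rc
            T[i] = (a * (T[i-1] + T[i+1]) + p * Ta + q[i]) / (2 * a + p)
        T[-1] = (k / dx * T[-2] + h * Tinf) / (k / dx + h)  # Robin at skin
    return T[-1]

base = pennes_1d_surface_temp(tumor=False)
warm = pennes_1d_surface_temp(tumor=True)
print(f"surface temp: normal {base:.2f} C, with tumor {warm:.2f} C")
```

The inverse modeling described in the abstract would wrap a forward solver like this in an optimization loop over tumor depth, size, perfusion, and metabolic heat.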
Affiliation(s)
- Alyssa Owens
  - Rochester Institute of Technology, Rochester, USA
5. Chieh AY, Willis JG, Carroll CM, Mobley AA, Li Y, Li M, Woodard S. Why Start Now? Retrospective Study Evaluating Baseline Screening Mammography in Patients Age 60 and Older. Curr Probl Diagn Radiol 2024; 53:62-67. PMID: 37704485; DOI: 10.1067/j.cpradiol.2023.08.012.
Abstract
PURPOSE Extensive data exist regarding the importance of baseline mammography and screening recommendations in the 40-50-year age range; however, less is known about women who start screening at age 60. The purpose of this retrospective study is to assess the characteristics and outcomes of women aged 60 years and older presenting for baseline mammographic screening. METHODS This IRB-approved, single-institution retrospective review examined data from patients aged 60+ who received baseline screening mammograms between 2010 and 2022. Information regarding patient demographics, breast density, and BI-RADS assessment was acquired from the Cerner EHR. For patients with a BI-RADS 0 assessment, imaging and chart review were performed. Family history, gynecologic history, prior breast biopsy or surgery, and hormone use were reviewed. For those with a category 4 or 5 assessment after diagnostic work-up, biopsy outcomes were reported. Cancer detection rate (CDR), recall rate (RR), positive predictive value 1 (PPV1), PPV2, and PPV3 were calculated. RESULTS Data were analyzed from 1409 women over age 60 who underwent breast cancer screening. The recall rate was 29.3% (413/1409). The CDR, PPV1, PPV2, and PPV3 were 15/1000, 5.2% (21/405), 29.2% (21/72), and 31.8% (21/66), respectively. After work-up, 224 diagnostic patients had 1-year follow-up, and none were diagnosed with breast cancer. One (1.4%, 1/71) of the BI-RADS 3 lesions was malignant at 2-year follow-up. Of the patients recalled from screening, 29.6% had a family history of breast cancer, and the majority of both recalled and non-recalled patients had Category B breast density. There was no statistically significant difference in breast density or race between recalled and non-recalled patients. Of recalled cases, 93.2% were given BI-RADS descriptors, with mass and focal asymmetry being the most common lesions, and 22.1% of recalled cases included more than one lesion.
CONCLUSION Initiating screening mammography in patients over 60 years old may result in higher recall rates, but it also leads to a high CDR of potentially clinically relevant invasive cancers. After diagnostic work-up, BI-RADS 3 assessments perform within standard guidelines. This study provides guidance for radiologists reading baseline mammograms and for clinicians making screening recommendations in patients over age 60.
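The audit statistics quoted above follow directly from the reported counts. A small sketch of the arithmetic, using the denominators as published (the distinct 405 vs 413 abnormal-screen denominators are taken at face value from the abstract):

```python
def audit_metrics(screens, recalled, pos_with_outcome, biopsy_rec, biopsy_done, cancers):
    """Mammography audit metrics as commonly defined for screening audits.

    PPV1 = cancers / abnormal screens (with outcome data),
    PPV2 = cancers / biopsies recommended,
    PPV3 = cancers / biopsies performed,
    CDR  = cancers per 1000 screening examinations.
    """
    return {
        "recall_rate": recalled / screens,
        "cdr_per_1000": 1000.0 * cancers / screens,
        "ppv1": cancers / pos_with_outcome,
        "ppv2": cancers / biopsy_rec,
        "ppv3": cancers / biopsy_done,
    }

# Counts reported in the abstract above:
m = audit_metrics(screens=1409, recalled=413, pos_with_outcome=405,
                  biopsy_rec=72, biopsy_done=66, cancers=21)
print(", ".join(f"{k}={v:.3f}" for k, v in m.items()))
```

Rounding each value reproduces the abstract's 29.3% recall rate, 15/1000 CDR, and 5.2%/29.2%/31.8% PPV1-PPV3.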
Affiliation(s)
- Angela Y Chieh
  - The University of Alabama at Birmingham Marnix E. Heersink School of Medicine, Birmingham, AL
- Joseph G Willis
  - The University of Alabama at Birmingham Marnix E. Heersink School of Medicine, Birmingham, AL
- Caleb M Carroll
  - The University of Alabama at Birmingham Marnix E. Heersink School of Medicine, Birmingham, AL
- Alisa A Mobley
  - The University of Alabama at Birmingham Marnix E. Heersink School of Medicine, Birmingham, AL
- Yufeng Li
  - Department of Medicine, The University of Alabama at Birmingham, Birmingham, AL
- Mei Li
  - Department of Medicine, The University of Alabama at Birmingham, Birmingham, AL
- Stefanie Woodard
  - Department of Radiology, The University of Alabama at Birmingham, Birmingham, AL
6. Webster JL, Goldstein ND, Rowland JP, Tuite CM, Siegel SD. A catchment and location-allocation analysis of mammography access in Delaware, US: implications for disparities in geographic access to breast cancer screening. Breast Cancer Res 2023; 25:137. PMID: 37941020; PMCID: PMC10631173; DOI: 10.1186/s13058-023-01738-w.
Abstract
BACKGROUND Despite a 40% reduction in breast cancer mortality over the last 30 years, not all groups have benefited equally from these gains. A consistent link between later stage at diagnosis and disparities in breast cancer mortality has been observed by race, socioeconomic status, and rurality. Therefore, ensuring equitable geographic access to screening mammography represents an important priority for reducing breast cancer disparities. Access to breast cancer screening was evaluated in Delaware, a state that experiences an elevated burden from breast cancer but is otherwise representative of the US in terms of race and urban-rural characteristics. We first conducted a catchment analysis of mammography facilities. Finding evidence of disparities by race and rurality, we next conducted a location-allocation analysis to identify candidate locations for the establishment of new mammography facilities to optimize equitable access. METHODS A catchment analysis using the ArcGIS Pro Service Area analytic tool characterized the geographic distribution of mammography sites and Breast Imaging Centers of Excellence (BICOEs). Poisson regression analyses identified census tract-level correlates of access. Next, the ArcGIS Pro Location-Allocation analytic tool identified candidate locations for the placement of additional mammography sites in Delaware according to several sets of breast cancer screening guidelines. RESULTS The catchment analysis showed that for each standard deviation increase in the number of Black women in a census tract, there were 68% (95% CI 38-85%) fewer mammography units and 89% (95% CI 60-98%) fewer BICOEs. The more rural counties in the state accounted for 41% of the population but only 22% of the BICOEs. The results of the location-allocation analysis depended on which set of screening guidelines was adopted; the resulting recommendations included adding mammography sites in communities with a greater proportion of younger Black women and in rural areas.
CONCLUSIONS The results of this study illustrate how catchment and location-allocation analytic tools can be leveraged to guide the equitable selection of new mammography facility locations as part of a larger strategy to close breast cancer disparities.
Affiliation(s)
- Jessica L Webster
  - Department of Epidemiology and Biostatistics, Drexel University Dornsife School of Public Health, Philadelphia, PA, USA
- Neal D Goldstein
  - Department of Epidemiology and Biostatistics, Drexel University Dornsife School of Public Health, Philadelphia, PA, USA
- Jennifer P Rowland
  - Department of Radiology, Breast Imaging Section, Helen F. Graham Cancer Center & Research Institute, Christiana Care Health System, Newark, DE, USA
- Catherine M Tuite
  - Department of Radiology, Breast Imaging Section, Helen F. Graham Cancer Center & Research Institute, Christiana Care Health System, Newark, DE, USA
- Scott D Siegel
  - Cawley Center for Translational Cancer Research, Helen F. Graham Cancer Center & Research Institute, Christiana Care Health System, 4701 Ogletown-Stanton Road, Newark, DE, 19713, USA
7. Kim H, Choi JS, Kim K, Ko ES, Ko EY, Han BK. Effect of artificial intelligence-based computer-aided diagnosis on the screening outcomes of digital mammography: a matched cohort study. Eur Radiol 2023; 33:7186-7198. PMID: 37188881; DOI: 10.1007/s00330-023-09692-z.
Abstract
OBJECTIVE To investigate whether artificial intelligence-based computer-aided diagnosis (AI-CAD) can improve radiologists' performance when used to support their interpretation of digital mammography (DM) in breast cancer screening. METHODS A retrospective database search identified 3158 asymptomatic Korean women who consecutively underwent screening DM between January and December 2019 without AI-CAD support, or screening DM between February and July 2020 with image interpretation aided by AI-CAD, in a tertiary referral hospital using single reading. Propensity score matching was used to match the DM with AI-CAD group in a 1:1 ratio with the DM without AI-CAD group according to age, breast density, experience level of the interpreting radiologist, and screening round. Performance measures were compared with the McNemar test and generalized estimating equations. RESULTS A total of 1579 women who underwent DM with AI-CAD were matched with 1579 women who underwent DM without AI-CAD. Radiologists showed higher specificity (96% [1500 of 1563] vs 91.6% [1430 of 1561]; p < 0.001) and a lower abnormal interpretation rate (AIR) (4.9% [77 of 1579] vs 9.2% [145 of 1579]; p < 0.001) with AI-CAD than without. There was no significant difference in the cancer detection rate (CDR) (AI-CAD vs no AI-CAD, 8.9 vs 8.9 per 1000 examinations; p = 0.999), sensitivity (87.5% vs 77.8%; p = 0.999), or positive predictive value for biopsy (PPV3) (35.0% vs 35.0%; p = 0.999) with AI-CAD support. CONCLUSIONS As a supportive tool in the single reading of DM for breast cancer screening, AI-CAD increases radiologists' specificity without decreasing sensitivity. CLINICAL RELEVANCE STATEMENT This study shows that AI-CAD could improve the specificity of radiologists' DM interpretation in a single-reading system without decreasing sensitivity, suggesting that it can benefit patients by reducing false-positive and recall rates.
KEY POINTS • In this retrospective-matched cohort study (DM without AI-CAD vs DM with AI-CAD), radiologists showed higher specificity and lower AIR when AI-CAD was used to support decision-making in DM screening. • CDR, sensitivity, and PPV for biopsy did not differ with and without AI-CAD support.
Affiliation(s)
- Haejung Kim
  - Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul, 06351, Korea
- Ji Soo Choi
  - Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul, 06351, Korea
  - Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Korea
- Kyunga Kim
  - Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, Korea
  - Biomedical Statistics Center, Research Institute for Future Medicine, Samsung Medical Center, Seoul, Korea
  - Department of Data Convergence & Future Medicine, Sungkyunkwan University School of Medicine, Seoul, Korea
- Eun Sook Ko
  - Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul, 06351, Korea
- Eun Young Ko
  - Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul, 06351, Korea
- Boo-Kyung Han
  - Department of Radiology and Center for Imaging Science, Samsung Medical Center, Sungkyunkwan University School of Medicine, 81 Irwon-Ro, Gangnam-Gu, Seoul, 06351, Korea
8. Zhang J, Mazurowski MA, Grimm LJ. Feasibility of predicting a screening digital breast tomosynthesis recall using features extracted from the electronic medical record. Eur J Radiol 2023; 166:110979. PMID: 37473618; DOI: 10.1016/j.ejrad.2023.110979.
Abstract
PURPOSE Tools to predict a screening mammogram recall at the time of scheduling could improve patient care. We extracted patient demographic and breast care history information from the electronic medical record (EMR) for women undergoing digital breast tomosynthesis (DBT) to identify which factors were associated with a screening recall recommendation. METHOD In 2018, 21,543 women aged 40 years or greater who underwent screening DBT at our institution were identified. Demographic information and breast care factors were extracted automatically from the EMR. The primary outcome was a screening recall recommendation of BI-RADS 0. A multivariable logistic regression model was built that included age, race, ethnicity groups, family breast cancer history, personal breast cancer history, surgical breast cancer history, recall history, and days since last available screening mammogram. RESULTS Multiple factors were associated with a recall in the multivariable model: history of breast cancer surgery (OR: 2.298, 95% CI: 1.854, 2.836); prior recall within the last five years (vs no prior, OR: 0.768, 95% CI: 0.687, 0.858); prior screening mammogram within 0-18 months (vs no prior, OR: 0.601, 95% CI: 0.520, 0.691); prior screening mammogram within 18-30 months (vs no prior, OR: 0.676, 95% CI: 0.520, 0.691); and age (normalized OR: 0.723, 95% CI: 0.690, 0.758). CONCLUSIONS It is feasible to predict a DBT screening recall recommendation using patient demographics and breast care factors that can be extracted automatically from the EMR.
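A multivariable logistic regression of this kind, with odds ratios read off as exp(coefficient), can be sketched with plain gradient descent. The features, effect sizes, and synthetic cohort below are invented for illustration and are not the study's model or data:

```python
import math
import random

def fit_logistic(X, y, lr=0.8, epochs=1200):
    """Logistic regression via full-batch gradient descent.

    Returns [intercept, w_1, ..., w_d]; exp(w_j) is the odds ratio for a
    one-unit change in feature j. A toy stand-in for a statistics package.
    """
    n = len(X)
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grad = [0.0] * len(w)
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            err = 1.0 / (1.0 + math.exp(-z)) - yi   # predicted prob - label
            grad[0] += err
            for j, xj in enumerate(xi):
                grad[j + 1] += err * xj
        w = [wj - lr * g / n for wj, g in zip(w, grad)]
    return w

# Synthetic cohort: recall odds raised by prior breast surgery and lowered
# by a recent prior mammogram (both binary features, purely illustrative).
random.seed(0)
X, y = [], []
for _ in range(1000):
    surgery = 1.0 if random.random() < 0.10 else 0.0
    recent = 1.0 if random.random() < 0.70 else 0.0
    z_true = -2.2 + 1.2 * surgery - 0.9 * recent    # assumed "true" log-odds
    y.append(1 if random.random() < 1.0 / (1.0 + math.exp(-z_true)) else 0)
    X.append([surgery, recent])

w = fit_logistic(X, y)
print(f"OR(prior surgery) = {math.exp(w[1]):.2f}, "
      f"OR(recent prior mammogram) = {math.exp(w[2]):.2f}")
```

The fitted odds ratios recover the assumed directions: above 1 for the risk-raising feature and below 1 for the protective one.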
Affiliation(s)
- Jikai Zhang
  - Department of Electrical and Computer Engineering, Duke University, Room 10070, 2424 Erwin Road, Durham, NC 27705, United States
- Maciej A Mazurowski
  - Department of Radiology, Duke University Medical Center, Durham, NC, United States
  - Department of Electrical and Computer Engineering, Department of Biostatistics and Bioinformatics, Department of Computer Science, Duke University, Durham, NC, United States
- Lars J Grimm
  - Department of Electrical and Computer Engineering, Duke University, Room 10070, 2424 Erwin Road, Durham, NC 27705, United States
9. Tran NA, Palotai M, Hanna GJ, Schoenfeld JD, Bay CP, Rettig EM, Bunch PM, Juliano AF, Kelly HR, Suh CH, Zander DA, Morales Pinzon A, Kann BH, Huang RY, Haddad RI, Guttmann CRG, Guenette JP. Diagnostic performance of computed tomography features in detecting oropharyngeal squamous cell carcinoma extranodal extension. Eur Radiol 2023; 33:3693-3703. PMID: 36719493; DOI: 10.1007/s00330-023-09407-4.
Abstract
OBJECTIVES Accurate pre-treatment imaging determination of extranodal extension (ENE) could facilitate the selection of appropriate initial therapy for HPV-positive oropharyngeal squamous cell carcinoma (HPV + OPSCC). Small studies have associated 7 CT features with ENE, with varied results and agreement. This article seeks to determine the replicable diagnostic performance of these CT features for ENE. METHODS Five expert academic head/neck neuroradiologists from 5 institutions evaluated a single academic cancer center cohort of 75 consecutive HPV + OPSCC patients. In a web-based virtual laboratory for imaging research and education, the experts performed training on 7 published CT features associated with ENE and then independently identified the "single most (if any) suspicious" lymph node and the presence/absence of each feature. Inter-rater agreement was assessed using percentage agreement, Gwet's AC1, and Fleiss' kappa. Sensitivity, specificity, and positive and negative predictive values were calculated for each CT feature based on histologic ENE. RESULTS All 5 raters identified the same node in 52 cases (69%). In 15 cases (20%), at least one rater selected a node and at least one rater did not. In 8 cases (11%), all raters selected a node, but at least one rater selected a different node. Percentage agreement and Gwet's AC1 coefficients were > 0.80 for lesion identification, matted/conglomerated nodes, and central necrosis. Fleiss' kappa was always < 0.6. CT sensitivity for histologically confirmed ENE ranged from 0.18 to 0.94, specificity from 0.41 to 0.88, PPV from 0.26 to 0.36, and NPV from 0.78 to 0.96. CONCLUSIONS Previously described CT features appear to have poor reproducibility among expert head/neck neuroradiologists and poor predictive value for histologic ENE. KEY POINTS • Previously described CT imaging features appear to have poor reproducibility among expert head and neck subspecialized neuroradiologists as well as poor predictive value for histologic ENE.
• Although it may still be appropriate to comment on the presence or absence of these CT features in imaging reports, the evidence indicates that caution is warranted when incorporating these features into clinical decision-making regarding the likelihood of ENE.
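The per-feature metrics reported above (sensitivity, specificity, PPV, NPV against histologic ENE) come directly from 2x2 counts. A minimal Python sketch, using invented counts for a single hypothetical CT feature rather than the study's data:

```python
def diagnostic_performance(tp, fp, fn, tn):
    """Per-feature metrics against a histologic reference standard."""
    return {
        "sensitivity": tp / (tp + fn),  # feature present among ENE-positive nodes
        "specificity": tn / (tn + fp),  # feature absent among ENE-negative nodes
        "ppv": tp / (tp + fp),          # ENE-positive among feature-present calls
        "npv": tn / (tn + fn),          # ENE-negative among feature-absent calls
    }

# Illustrative (invented) counts for one CT feature in a 75-patient cohort:
m = diagnostic_performance(tp=9, fp=18, fn=5, tn=43)
```

A low PPV with a high NPV, as in the study's ranges, simply reflects that feature-present calls are often false alarms when ENE prevalence is modest.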
Affiliation(s)
- Ngoc-Anh Tran: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Miklos Palotai: Center for Neurological Imaging, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Glenn J Hanna: Department of Medical Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Jonathan D Schoenfeld: Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Camden P Bay: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Eleni M Rettig: Division of Otolaryngology-Head and Neck Surgery, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Paul M Bunch: Division of Neuroradiology, Wake Forest School of Medicine, Winston-Salem, NC, USA
- Amy F Juliano: Department of Radiology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA
- Hillary R Kelly: Department of Radiology, Massachusetts Eye and Ear, Harvard Medical School, Boston, MA, USA; Division of Neuroradiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Chong Hyun Suh: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- David A Zander: Division of Neuroradiology, University of Colorado, Aurora, CO, USA
- Alfredo Morales Pinzon: Center for Neurological Imaging, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Benjamin H Kann: Department of Radiation Oncology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Raymond Y Huang: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Division of Neuroradiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, 75 Francis Street, Boston, MA 02115, USA
- Robert I Haddad: Department of Medical Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Charles R G Guttmann: Center for Neurological Imaging, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Jeffrey P Guenette: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA; Division of Neuroradiology, Brigham and Women's Hospital, Dana-Farber Cancer Institute, Harvard Medical School, 75 Francis Street, Boston, MA 02115, USA
10
Webster JL, Goldstein ND, Rowland JR, Tuite CM, Siegel SD. A Catchment and Location-Allocation Analysis of Mammography Access in Delaware, US: Implications for disparities in geographic access to breast cancer screening. Res Sq 2023:rs.3.rs-2600236. [PMID: 36909545] [PMCID: PMC10002803] [DOI: 10.21203/rs.3.rs-2600236/v1]
Abstract
Background Despite a 40% reduction in breast cancer mortality over the last 30 years, not all groups have benefited equally from these gains. A consistent link between later stage at diagnosis and disparities in breast cancer mortality has been observed by race, socioeconomic status, and rurality. Ensuring equitable geographic access to screening mammography therefore represents an important priority for reducing breast cancer disparities. This study conducted a catchment and location-allocation analysis of mammography access in Delaware, a state that is representative of the US in terms of race and urban-rural characteristics and that experiences an elevated burden from breast cancer. Methods A catchment analysis using the ArcGIS Pro Service Area analytic tool characterized the geographic distribution of mammography sites and Breast Imaging Centers of Excellence (BICOEs). Poisson regression analyses identified census tract-level correlates of access. Next, the ArcGIS Pro Location-Allocation analytic tool identified candidate locations for the placement of additional mammography sites in Delaware according to several sets of breast cancer screening guidelines. Results The catchment analysis showed that for each standard deviation increase in the number of Black women in a census tract, there were 64% fewer mammography units (rate ratio 0.36; 95% CI, 0.18-0.66) and 85% fewer BICOEs (rate ratio 0.15; 95% CI, 0.04-0.48). The more rural counties in the state accounted for 41% of the population but only 22% of the BICOEs. The results of the location-allocation analysis depended on which set of screening guidelines was adopted; across guideline scenarios, the analysis favored adding mammography sites in communities with greater proportions of younger Black women and in rural areas. Conclusions The results of this study illustrate how catchment and location-allocation analytic tools can be leveraged to guide the equitable selection of new mammography facility locations as part of a larger strategy to close breast cancer disparities.
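The Poisson regression results above are reported as "% fewer" per standard-deviation increase, which is a transformed rate ratio. A one-line sketch of that conversion, assuming a rate ratio of 0.36 (consistent with the reported 64% reduction; the coefficient value is illustrative, not taken from the paper's model output):

```python
import math

# A Poisson regression coefficient (a log rate ratio) per 1-SD increase in a
# census-tract predictor maps to "% fewer" as 100 * (1 - rate ratio).
beta_per_sd = math.log(0.36)  # illustrative coefficient; rate ratio 0.36 assumed
pct_fewer = (1 - math.exp(beta_per_sd)) * 100
```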
11
Giess CS, Licaros AL, Kwait DC, Yeh ED, Lacson R, Khorasani R, Chikarmane SA. Live Mammographic Screening Interpretation Versus Offline Same-Day Screening Interpretation at a Tertiary Cancer Center. J Am Coll Radiol 2023; 20:207-214. [PMID: 36496088] [DOI: 10.1016/j.jacr.2022.10.014]
Abstract
OBJECTIVES The aim of this study was to compare screening mammography performance metrics for immediate (live) interpretation versus offline interpretation at a cancer center. METHODS An institutional review board-approved, retrospective comparison of screening mammography metrics at a cancer center for January 1, 2018, to December 31, 2019 (live period), and September 1, 2020, to March 31, 2022 (offline period), was performed. Before July 2020, screening examinations were interpreted while patients waited (live period), and diagnostic workup was performed concurrently. After the coronavirus disease 2019 shutdown from March to mid-June 2020, offline same-day interpretation was instituted. Patients with abnormal screening results returned for separate diagnostic evaluation. Screening metrics of positive predictive value 1 (PPV1), cancer detection rate (CDR), and abnormal interpretation rate (AIR) were compared for 17 radiologists who interpreted during both periods. Statistical significance was assessed using χ2 analysis. RESULTS In the live period, there were 7,105 screenings, 635 recalls, and 51 screen-detected cancers. In the offline period, there were 7,512 screenings, 586 recalls, and 47 screen-detected cancers. Comparison of live screening metrics versus offline metrics produced the following results: AIR, 8.9% (635 of 7,105) versus 7.8% (586 of 7,512) (P = .01); PPV1, 8.0% (51 of 635) versus 8.0% (47 of 586); and CDR, 7.2/1,000 versus 6.3/1,000 (P = .50). When grouped by >10% AIR or <10% AIR for the live period, the >10% AIR group showed a significant decrease in AIR for offline interpretation (from 12.7% to 9.7%, P < .001), whereas the <10% AIR group showed no significant change (from 7.4% to 6.7%, P = .17). CONCLUSIONS Conversion to offline screening interpretation from immediate interpretation at a cancer center was associated with lower AIR and similar CDR and PPV1. This effect was seen largely in radiologists with AIR > 10% in the live setting.
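The three screening metrics compared above are simple ratios, and the AIR comparison is an ordinary 2x2 chi-square test. A sketch in Python using the counts quoted in the abstract (the chi-square helper is a generic Pearson test with 1 df, not the authors' statistical code):

```python
import math

def screening_metrics(screens, recalls, cancers):
    # AIR: abnormal interpretation (recall) rate; PPV1: cancers per recall;
    # CDR: screen-detected cancers per 1,000 examinations.
    return {"AIR": recalls / screens,
            "PPV1": cancers / recalls,
            "CDR_per_1000": 1000 * cancers / screens}

def chi2_2x2_p(a, b, c, d):
    """P-value for a Pearson chi-square test on a 2x2 table (1 df, no correction)."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return math.erfc(math.sqrt(chi2 / 2))  # chi-square(1) survival function

live = screening_metrics(7105, 635, 51)     # counts reported for the live period
offline = screening_metrics(7512, 586, 47)  # counts reported for the offline period
p_air = chi2_2x2_p(635, 7105 - 635, 586, 7512 - 586)  # recall vs no-recall by period
```

Run on these counts, the helper reproduces the abstract's AIR, PPV1, and CDR figures and a significant AIR difference near p = .01.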
Affiliation(s)
- Catherine S Giess: Center for Evidence-Based Imaging, Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts; Deputy Chair, Department of Radiology, Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts
- Andro L Licaros: Center for Evidence-Based Imaging, Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts; Department of Radiology, Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts
- Dylan C Kwait: Department of Radiology, Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts; Interim Division Chief of Breast Imaging, Brigham and Women's Hospital, Boston, Massachusetts; Chief of Radiology, Brigham and Women's Faulkner Hospital, Boston, Massachusetts
- Eren D Yeh: Department of Radiology, Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts
- Ronilda Lacson: Center for Evidence-Based Imaging, Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts; Department of Radiology, Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts
- Ramin Khorasani: Center for Evidence-Based Imaging, Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts; Department of Radiology, Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts; Vice Chair, Quality/Safety and Patient Experience, Brigham and Women's Hospital, Mass General Brigham Health Care, Boston, Massachusetts
- Sona A Chikarmane: Department of Radiology, Harvard Medical School, Brigham and Women's Hospital, Boston, Massachusetts
12
Patterns of Screening Recall Behavior Among Subspecialty Breast Radiologists. Acad Radiol 2022; 30:798-806. [PMID: 35803888] [DOI: 10.1016/j.acra.2022.06.005]
Abstract
RATIONALE AND OBJECTIVES To determine whether there are patterns of lesion recall among breast imaging subspecialists interpreting screening mammography and, if so, whether recall patterns correlate with the morphologies of screen-detected cancers. MATERIALS AND METHODS This Institutional Review Board-approved, retrospective review included all screening examinations from January 3, 2012, to October 1, 2018, interpreted by fifteen breast imaging subspecialists at a large academic medical center and two outpatient imaging centers. Natural language processing identified radiologist recalls by lesion type (mass, calcifications, asymmetry, architectural distortion); proportions of callbacks by lesion type were calculated per radiologist. Hierarchical cluster analysis grouped radiologists based on recall patterns. Groups were compared to the overall practice and to each other by proportions of lesion types recalled and by overall and lesion-specific positive predictive value-1 (PPV1). RESULTS Among 161,859 screening mammograms with 13,086 (8.1%) recalls, hierarchical cluster analysis grouped the 15 radiologists into five groups. There was substantial variation in the proportions of lesions recalled: calcifications 13%-18% (Chi-square 45.69, p < 0.00001); mass 16%-44% (Chi-square 498.42, p < 0.00001); asymmetry 13%-47% (Chi-square 660.93, p < 0.00001); architectural distortion 6%-20% (Chi-square 283.81, p < 0.00001). Radiologist groups differed significantly in overall PPV1 (range 5.6%-8.8%; Chi-square 17.065, p = 0.0019). PPV1 by lesion type did not differ significantly among groups: calcifications 9.2%-15.4% (Chi-square 2.56, p = 0.6339); mass 5.6%-8.5% (Chi-square 1.31, p = 0.8597); asymmetry 3.4%-5.9% (Chi-square 2.225, p = 0.6945); architectural distortion 5.6%-10.8% (Chi-square 5.810, p = 0.2138). Proportions of recalled lesions did not consistently correlate with proportions of screen-detected cancer.
CONCLUSION Breast imaging subspecialists have patterns for screening mammography recalls, suggesting differential weighting of imaging findings for perceived malignant potential. Radiologist recall patterns are not always predictive of screen-detected cancers or of lesion-specific PPV1s.
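Hierarchical cluster analysis of radiologists, as described above, amounts to clustering each reader's vector of recall proportions by lesion type. A self-contained sketch with a naive average-linkage implementation and invented proportion vectors (the study's actual method and data are not reproduced here):

```python
import math

def cluster_radiologists(profiles, k):
    """Naive average-linkage agglomerative clustering of per-radiologist
    recall-proportion vectors (mass, calcifications, asymmetry, distortion)."""
    clusters = [[name] for name in profiles]

    def dist(c1, c2):  # average pairwise Euclidean distance between clusters
        pairs = [(a, b) for a in c1 for b in c2]
        return sum(math.dist(profiles[a], profiles[b]) for a, b in pairs) / len(pairs)

    while len(clusters) > k:  # merge the two closest clusters until k remain
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return clusters

# Illustrative (invented) proportions of recalls by lesion type per radiologist:
profiles = {
    "R1": (0.40, 0.15, 0.20, 0.10),
    "R2": (0.42, 0.14, 0.18, 0.09),
    "R3": (0.18, 0.17, 0.45, 0.08),
    "R4": (0.20, 0.16, 0.43, 0.07),
    "R5": (0.25, 0.15, 0.25, 0.19),
}
groups = cluster_radiologists(profiles, k=3)
```

Here the "mass-heavy" pair (R1, R2) and the "asymmetry-heavy" pair (R3, R4) merge first, leaving R5 as its own group, mirroring how recall-style groups emerge from proportion vectors.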
13
Linna N, Kahn CE. Applications of Natural Language Processing in Radiology: A Systematic Review. Int J Med Inform 2022; 163:104779. [DOI: 10.1016/j.ijmedinf.2022.104779]
14
Walker MJ, Hartman K, Majpruz V, Leung YW, Fienberg S, Rabeneck L, Chiarelli AM. The Impact of Radiologist Screening Mammogram Reading Volume on Performance in the Ontario Breast Screening Program. Can Assoc Radiol J 2021; 73:362-370. [PMID: 34423685] [DOI: 10.1177/08465371211031186]
Abstract
PURPOSE Although some studies have shown that increasing radiologists' mammography volumes improves performance, there is a lack of evidence specific to digital mammography and breast screening program performance targets. This study evaluates the relationship between digital screening volume and meeting performance targets. METHODS This retrospective cohort study included 493 radiologists in the Ontario Breast Screening Program who interpreted 1,762,173 screening mammograms in participants aged 50-90 between 2014 and 2016. Associations between annual screening volume and meeting performance targets for abnormal call rate, positive predictive value (PPV), invasive cancer detection rate (CDR), sensitivity, and specificity were modeled using mixed-effects multivariable logistic regression. RESULTS Most radiologists read 500-999 (36.7%) or 1,000-1,999 (31.0%) screens annually, and 18.5% read ≥2,000. Radiologists who read ≥2,000 annually were more likely to meet abnormal call rate (OR = 3.85; 95% CI: 1.17-12.61), PPV (OR = 5.36; 95% CI: 2.53-11.34), invasive CDR (OR = 4.14; 95% CI: 1.50-11.46), and specificity (OR = 4.07; 95% CI: 1.89-8.79) targets versus those who read 100-499 screens. Radiologists reading 1,000-1,999 screens annually were more likely to meet PPV (OR = 2.32; 95% CI: 1.22-4.40), invasive CDR (OR = 3.36; 95% CI: 1.49-7.59), and specificity (OR = 2.00; 95% CI: 1.04-3.84) targets versus those who read 100-499 screens. No significant differences were observed for sensitivity. CONCLUSIONS Canada's minimum annual reading volume requirement of 1,000 screens is supported, as volumes above 1,000 were strongly associated with achieving performance targets for nearly all measures. Increasing the minimum volume to 2,000 may further reduce the potential limitations of screening due to false positives, leading to improvements in overall breast screening program quality.
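The odds ratios above come from mixed-effects logistic regression; for intuition, a crude (unadjusted) odds ratio with a Wald confidence interval can be computed straight from a 2x2 table. A sketch with invented counts, not the study's adjusted model:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude OR for meeting a target (a met / b missed in the high-volume group,
    c met / d missed in the reference group) with a Wald 95% CI on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative (invented) counts: readers of >=2,000 screens meeting a PPV target
# versus the 100-499 reference group.
or_, lo, hi = odds_ratio_ci(a=80, b=11, c=31, d=23)
```

A confidence interval whose lower bound stays above 1.0, as in the study's significant results, indicates the high-volume group is more likely to meet the target.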
Affiliation(s)
- Meghan J Walker: Prevention and Cancer Control, Ontario Health (Cancer Care Ontario), Toronto, Ontario, Canada; Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
- Krystal Hartman: Prevention and Cancer Control, Ontario Health (Cancer Care Ontario), Toronto, Ontario, Canada
- Vicky Majpruz: Prevention and Cancer Control, Ontario Health (Cancer Care Ontario), Toronto, Ontario, Canada
- Yvonne W Leung: Prevention and Cancer Control, Ontario Health (Cancer Care Ontario), Toronto, Ontario, Canada
- Samantha Fienberg: Prevention and Cancer Control, Ontario Health (Cancer Care Ontario), Toronto, Ontario, Canada; Radiology, McMaster University, Hamilton, Ontario, Canada; Medical Imaging, Grand River Hospital, Kitchener, Ontario, Canada
- Linda Rabeneck: Prevention and Cancer Control, Ontario Health (Cancer Care Ontario), Toronto, Ontario, Canada; Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada; Department of Medicine, University of Toronto, Toronto, Ontario, Canada; ICES, Toronto, Ontario, Canada
- Anna M Chiarelli: Prevention and Cancer Control, Ontario Health (Cancer Care Ontario), Toronto, Ontario, Canada; Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada
15
Mammographic Surveillance After Breast Conserving Therapy: Impact of Digital Breast Tomosynthesis and Artificial Intelligence-Based Computer-Aided Detection. AJR Am J Roentgenol 2021; 218:42-51. [PMID: 34378399] [DOI: 10.2214/ajr.21.26506]
Abstract
Background: Postoperative mammograms present interpretive challenges due to postoperative distortion and hematomas. The application of digital breast tomosynthesis (DBT) and artificial intelligence-based computer-aided detection (AI-CAD) after breast conserving therapy (BCT) has not been widely investigated. Objective: To assess the impact of additional DBT or AI-CAD on recall rate and diagnostic performance in women undergoing mammographic surveillance after BCT. Methods: This retrospective study included 314 women (mean age 53.2±10.6 years; 4 with bilateral breast cancer) who underwent BCT followed by DBT (mean interval from surgery to DBT of 15.2±15.4 months). Three breast radiologists independently reviewed images in three sessions: digital mammography (DM), DM with DBT (DM+DBT), and DM with AI-CAD (DM+AI-CAD). Recall rates and diagnostic performance were compared between DM, DM+DBT, and DM+AI-CAD using the readers' mean results. Results: Of the 314 women, 6 breast cancers (3 ipsilateral recurrences, 3 contralateral) were detected at surveillance mammography. Ipsilateral breast recall rate was lower for DM+AI-CAD (1.9%) than for DM (11.2%) or DM+DBT (4.1%) (p<.001). Contralateral breast recall rate was lower for DM+AI-CAD (1.5%, p<.001) than for DM (6.6%) but not DM+DBT (2.7%, p=.08). In the ipsilateral breast, accuracy was higher for DM+AI-CAD (97.0%) than for DM (88.5%) or DM+DBT (94.8%) (p<.05); specificity was higher for DM+AI-CAD (98.3%) than for DM (89.3%) or DM+DBT (96.1%) (p<.05); sensitivity was lower for DM+AI-CAD (22.2%) than for DM (66.7%, p=.03) but not DM+DBT (22.2%, p>.99). In the contralateral breast, accuracy was higher for DM+AI-CAD (97.1%) than for DM (92.5%, p<.001) but not DM+DBT (96.1%, p=.25); specificity was higher for DM+AI-CAD (98.6%) than for DM (93.7%, p<.001) but not DM+DBT (97.5%, p=.09); sensitivity did not differ between DM (33.3%), DM+DBT (22.2%), and DM+AI-CAD (11.1%) (p>.05).
Conclusion: After BCT, adjunct DBT or AI-CAD reduced recall rates and improved accuracy in the ipsilateral and contralateral breasts compared with DM. In the ipsilateral breast, addition of AI-CAD resulted in lower recall rate and higher accuracy than addition of DBT. Clinical Impact: AI-CAD may help address the challenges of post-BCT surveillance mammograms.
16
Kapoor N, Lacson R, Khorasani R. Workflow Applications of Artificial Intelligence in Radiology and an Overview of Available Tools. J Am Coll Radiol 2021; 17:1363-1370. [PMID: 33153540] [DOI: 10.1016/j.jacr.2020.08.016]
Abstract
In the past decade, there has been tremendous interest in applying artificial intelligence (AI) to improve the field of radiology. Currently, numerous AI applications are in development, with potential benefits spanning all steps of the imaging chain from test ordering to report communication. AI has been proposed as a means to optimize patient scheduling, improve worklist management, enhance image acquisition, and help radiologists interpret diagnostic studies. Although the potential for AI in radiology appears almost endless, the field is still in the early stages, with many uses still theoretical, in development, or limited to single institutions. Moreover, although the current use of AI in radiology has emphasized its clinical applications, some of which are in the distant future, it is increasingly clear that AI algorithms could also be used in the more immediate future for a variety of noninterpretive and quality improvement uses. Such uses include the integration of AI into electronic health record systems to reduce unwarranted variation in radiologists' follow-up recommendations and to improve other dimensions of radiology report quality. In the end, the potential of AI in radiology must be balanced with acknowledgment of its current limitations regarding generalizability and data privacy.
Affiliation(s)
- Neena Kapoor: Director of Diversity, Inclusion, and Equity, Department of Radiology, Brigham and Women's Hospital; Quality and Patient Safety Officer, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Ronilda Lacson: Director of Education, Center for Evidence-Based Imaging, Brigham and Women's Hospital; Director of Clinical Informatics, Harvard Medical School Library of Evidence, Boston, Massachusetts
- Ramin Khorasani: Director, Center for Evidence-Based Imaging, and Vice Chair of Quality/Safety, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
17
Martin-Noguerol T, Luna A. External validation of AI algorithms in breast radiology: the last healthcare security checkpoint? Quant Imaging Med Surg 2021; 11:2888-2892. [PMID: 34079749] [DOI: 10.21037/qims-20-1409]
Affiliation(s)
- Antonio Luna: Radiology Department, HTmédica, Clinica Las Nieves, Jaén, Spain
18
Brown AL, Al-Khalili R, Song JH, Mahoney MC. Transitioning from trainee to breast radiologist: A guide for a successful first year. Clin Imaging 2020; 69:328-331. [PMID: 33049430] [DOI: 10.1016/j.clinimag.2020.10.001]
Abstract
The transition from trainee to newly minted breast radiologist is exciting and daunting in equal measure. The early years in practice are pivotal to long-term success in breast imaging whether entering academic or nonacademic practice. Yet a paucity of literature exists to guide junior radiologists in their early career transition. New breast radiologists can successfully navigate the start of a prosperous and enriching career by implementing strategies adapted from the business world and collective wisdom from the radiology world. This article provides an outline of tips and habits for new radiologists to incorporate in their work lives as attendings to ensure that they will thrive in breast imaging for years to come.
Affiliation(s)
- Ann L Brown: Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, United States of America
- Rend Al-Khalili: Department of Radiology, Georgetown University School of Medicine, Washington, DC, United States of America
- Judy H Song: Department of Radiology, Georgetown University School of Medicine, Washington, DC, United States of America
- Mary C Mahoney: Department of Radiology, University of Cincinnati College of Medicine, Cincinnati, OH, United States of America
19
Salim M, Wåhlin E, Dembrower K, Azavedo E, Foukakis T, Liu Y, Smith K, Eklund M, Strand F. External Evaluation of 3 Commercial Artificial Intelligence Algorithms for Independent Assessment of Screening Mammograms. JAMA Oncol 2020; 6:1581-1588. [PMID: 32852536] [PMCID: PMC7453345] [DOI: 10.1001/jamaoncol.2020.3321]
Abstract
Importance A computer algorithm that performs at or above the level of radiologists in mammography screening assessment could improve the effectiveness of breast cancer screening. Objective To perform an external evaluation of 3 commercially available artificial intelligence (AI) computer-aided detection algorithms as independent mammography readers and to assess screening performance when they are combined with radiologists. Design, Setting, and Participants This retrospective case-control study was based on a double-reader, population-based mammography screening cohort of women screened at an academic hospital in Stockholm, Sweden, from 2008 to 2015. The study included 8805 women aged 40 to 74 years who underwent mammography screening and who did not have implants or prior breast cancer. The study sample included 739 women who were diagnosed as having breast cancer (positive) and a random sample of 8066 healthy controls (negative for breast cancer). Main Outcomes and Measures Positive follow-up findings were determined by pathology-verified diagnosis at screening or within 12 months thereafter. Negative follow-up findings were determined by a 2-year cancer-free follow-up. Three AI computer-aided detection algorithms (AI-1, AI-2, and AI-3), sourced from different vendors, yielded a continuous score for the suspicion of cancer in each mammography examination. For a decision of normal or abnormal, the cut point was defined by the mean specificity of the first-reader radiologists (96.6%). Results The median age of study participants was 60 years (interquartile range, 50-66 years) for the 739 women who received a diagnosis of breast cancer and 54 years (interquartile range, 47-63 years) for the 8066 healthy controls. The cases positive for cancer comprised 618 (84%) detected at screening and 121 (16%) detected clinically within 12 months of the screening examination.
The area under the receiver operating characteristic curve for cancer detection was 0.956 (95% CI, 0.948-0.965) for AI-1, 0.922 (95% CI, 0.910-0.934) for AI-2, and 0.920 (95% CI, 0.909-0.931) for AI-3. At the specificity of the radiologists, the sensitivities were 81.9% for AI-1, 67.0% for AI-2, 67.4% for AI-3, 77.4% for the first-reader radiologists, and 80.1% for the second-reader radiologists. Combining AI-1 with first-reader radiologists achieved 88.6% sensitivity at 93.0% specificity (abnormal defined by either of the 2 making an abnormal assessment). No other examined combination of AI algorithms and radiologists surpassed this sensitivity level. Conclusions and Relevance To our knowledge, this study is the first independent evaluation of several AI computer-aided detection algorithms for screening mammography. The results of this study indicated that a commercially available AI computer-aided detection algorithm can assess screening mammograms with sufficient diagnostic performance to merit further evaluation as an independent reader in prospective clinical trials. Combining the first readers with the best algorithm identified more cases positive for cancer than combining the first readers with second readers.
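The headline numbers above, AUC and sensitivity at a specificity-matched cut point, can both be computed directly from continuous suspicion scores. A sketch using the rank-based (Mann-Whitney) AUC estimator and invented scores (not the study's data or vendor algorithms):

```python
def auc(scores_pos, scores_neg):
    """Empirical AUC = P(score_pos > score_neg) + 0.5 * P(tie) (Mann-Whitney)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def sensitivity_at_specificity(scores_pos, scores_neg, target_spec):
    """Pick the negative-score quantile giving at least the target specificity,
    call scores above that cut abnormal, and report sensitivity on positives."""
    ranked = sorted(scores_neg)
    cut = ranked[min(len(ranked) - 1, int(target_spec * len(ranked)))]
    return sum(p > cut for p in scores_pos) / len(scores_pos)

# Illustrative (invented) continuous suspicion scores:
pos = [0.9, 0.8, 0.75, 0.4, 0.95]
neg = [0.1, 0.2, 0.3, 0.35, 0.5, 0.15, 0.05, 0.6, 0.25, 0.45]
```

Fixing the operating point at the radiologists' mean specificity, as the study does, makes the algorithms' sensitivities directly comparable with the human readers'.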
Affiliation(s)
- Mattie Salim: Department of Oncology-Pathology, Karolinska Institute, Stockholm, Sweden; Department of Radiology, Karolinska University Hospital, Stockholm, Sweden
- Erik Wåhlin: Department of Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, Stockholm, Sweden
- Karin Dembrower: Department of Physiology and Pharmacology, Karolinska Institute, Stockholm, Sweden; Department of Radiology, Capio Sankt Görans Hospital, Stockholm, Sweden
- Edward Azavedo: Department of Oncology-Pathology, Karolinska Institute, Stockholm, Sweden; Department of Molecular Medicine and Surgery, Karolinska Institute, Stockholm, Sweden
- Theodoros Foukakis: Department of Oncology-Pathology, Karolinska Institute, Stockholm, Sweden; Department of Radiology, Karolinska University Hospital, Stockholm, Sweden
- Yue Liu: Division of Computational Science and Technology, KTH Royal Institute of Technology, Science for Life Laboratory, Solna, Sweden
- Kevin Smith: KTH Royal Institute of Technology, Science for Life Laboratory, Solna, Sweden
- Martin Eklund: Department of Medical Epidemiology and Biostatistics, Karolinska Institute, Stockholm, Sweden
- Fredrik Strand: Department of Oncology-Pathology, Karolinska Institute, Stockholm, Sweden; Breast Radiology, Karolinska University Hospital, Stockholm, Sweden
20
Radiologists’ Self-Assessment Versus Peer Assessment of Perceived Probability of Recommending Additional Imaging. J Am Coll Radiol 2020; 17:504-510. [DOI: 10.1016/j.jacr.2019.11.022]
21
Comparing Diagnostic Performance of Digital Breast Tomosynthesis and Full-Field Digital Mammography. J Am Coll Radiol 2020; 17:999-1003. [PMID: 32068009] [DOI: 10.1016/j.jacr.2020.01.010]
Abstract
OBJECTIVE To compare the diagnostic performance of screening full-field digital mammography (FFDM), a hybrid FFDM and digital breast tomosynthesis (DBT) environment, and DBT only. MATERIALS AND METHODS This institutional review board-approved, retrospective study consisted of all patients undergoing screening mammography at an urban academic medical center and an outpatient imaging facility between January 1, 2011, and December 31, 2017. We used the electronic health record data warehouse to extract report data and patient demographics. A validated natural language processing algorithm extracted the BI-RADS score from each report. An institutional cancer registry identified cancer diagnoses. Primary outcomes of recall rate, cancer detection rate (CDR), and positive predictive value 1 (PPV1) were calculated for three periods: the FFDM-only environment, the hybrid environment, and the DBT-only environment. A χ2 test was used to compare recall rate, CDR, and PPV1. RESULTS A total of 179,028 screening mammograms comprised the study cohort: 41,818 (23.3%) during the FFDM-only period, 83,125 (46.4%) during the hybrid period, and 54,084 (30.2%) during the DBT-only period. Recall rates were 10.4% (4,279 of 41,280) for the FFDM-only period, 10.6% (8,761 of 82,917) for the hybrid period, and 10.8% (5,850 of 54,020) for the DBT-only period (P = .96). CDR (cancers per 1,000 examinations) was 2.6, 4.9, and 6.0 per 1,000 for FFDM only, hybrid, and DBT only, respectively (P < .01). PPV1 (number of cancers per number of recalls) was 2.5% for the FFDM-only period, 4.6% for the hybrid period, and 5.6% for the DBT-only period (P < .01). CONCLUSION Recall rates were not significantly different across the three periods in this breast imaging practice. However, PPV1 and CDR were significantly higher with DBT only.
|
22
|
Lacson R, Wang A, Cochon L, Giess C, Desai S, Eappen S, Khorasani R. Factors Associated With Optimal Follow-up in Women With BI-RADS 3 Breast Findings. J Am Coll Radiol 2019; 17:469-474. [PMID: 31669081 PMCID: PMC7509994 DOI: 10.1016/j.jacr.2019.10.003] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Received: 07/11/2019] [Revised: 09/20/2019] [Accepted: 10/04/2019] [Indexed: 01/19/2023]
Abstract
OBJECTIVE To assess the rate of, and factors associated with, optimal follow-up in patients with BI-RADS 3 breast findings.
METHODS This Institutional Review Board-approved, retrospective cohort study, performed at an academic medical center, included all women undergoing breast imaging (ultrasound and mammography) in 2016. Index reports for unique patients with a BI-RADS 3 assessment (retrieved via natural language processing) comprised the study population. Patient-specific and provider-related features were extracted from the Research Data Warehouse, and the Institutional Cancer Registry identified patients diagnosed with breast cancer. The optimal follow-up rate was calculated as the proportion of patients with BI-RADS 3 assessments who underwent follow-up imaging of the same breast 3 to 9 months after the index examination. Univariate analysis and multivariable logistic regression identified features associated with optimal follow-up. Malignancy rate and time to malignancy detection were recorded.
RESULTS Among 93,685 breast imaging examinations, 64,771 were from unique patients, of whom 2,967 (4.6%) had BI-RADS 3 findings. After excluding patients with off-site index examinations and those with another breast examination <3 months after the index, 1,125 of 1,511 patients (74%) had optimal follow-up. In univariate and multivariable analyses, prior breast cancer was associated with optimal follow-up; younger age, Hispanic ethnicity, divorced status, and lack of insurance were associated with not having optimal follow-up. The malignancy rate was 0.86%, and the mean time to detection was 330 days.
DISCUSSION Follow-up of BI-RADS 3 breast imaging findings is optimal in only 74% of women. Interventions to promote follow-up should target younger, unmarried women, women of Hispanic ethnicity, and women without a history of breast cancer or insurance coverage.
Affiliation(s)
- Ronilda Lacson
- Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts; Harvard Medical School, Boston, Massachusetts.
| | - Aijia Wang
- Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts
| | - Laila Cochon
- Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts
| | - Catherine Giess
- Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts; Harvard Medical School, Boston, Massachusetts
| | - Sonali Desai
- Harvard Medical School, Boston, Massachusetts; Department of Medicine, Brigham and Women's Hospital, Boston, Massachusetts
| | - Sunil Eappen
- Harvard Medical School, Boston, Massachusetts; Department of Anesthesiology, Brigham and Women's Hospital, Boston, Massachusetts
| | - Ramin Khorasani
- Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Boston, Massachusetts; Harvard Medical School, Boston, Massachusetts
| |
|
23
|
Cochon LR, Kapoor N, Carrodeguas E, Ip IK, Lacson R, Boland G, Khorasani R. Variation in Follow-up Imaging Recommendations in Radiology Reports: Patient, Modality, and Radiologist Predictors. Radiology 2019; 291:700-707. [PMID: 31063082 PMCID: PMC7526331 DOI: 10.1148/radiol.2019182826] [Citation(s) in RCA: 37] [Impact Index Per Article: 7.4] [Indexed: 01/16/2023]
Abstract
Background Variation between radiologists in making recommendations for additional imaging, and the factors associated with that variation, are to the authors' knowledge unknown. Clear identification of factors that account for variation in follow-up recommendations might prevent unnecessary tests for incidental or ambiguous imaging findings.
Purpose To determine the incidence of, and identify factors associated with, follow-up recommendations in radiology reports across multiple modalities, patient care settings, and imaging divisions.
Materials and Methods This retrospective study analyzed 318 366 reports from diagnostic imaging examinations performed at a large urban quaternary care hospital from January 1 to December 31, 2016, excluding breast and US reports. A subset of 1000 randomly selected reports was manually annotated to train and validate a machine learning algorithm that predicts whether a report includes a follow-up imaging recommendation (850 reports for training and validation, 150 for testing). The trained algorithm then classified all 318 366 reports. Multivariable logistic regression was used to determine the likelihood of a follow-up recommendation. Additional analysis by imaging subspecialty division was performed, and intradivision and interradiologist variability were quantified.
Results The machine learning algorithm classified 38 745 of 318 366 (12.2%) reports as containing follow-up recommendations. Average patient age was 59 years ± 17 (standard deviation); 45.2% (143 767 of 318 366) of reports were from male patients. Among the 65 radiologists, 57% (37 of 65) were men. At multivariable analysis, older patients had higher rates of follow-up recommendations (odds ratio [OR], 1.01 [95% confidence interval {CI}: 1.01, 1.01] per additional year), male patients had lower rates (OR, 0.9; 95% CI: 0.9, 1.0), and follow-up recommendations were most common for CT studies (OR, 4.2 [95% CI: 4.0, 4.4] compared with radiography). Radiologist sex (P = .54), presence of a trainee (P = .45), and years in practice (P = .49) were not significant predictors overall. A division-level analysis showed 2.8-fold to 6.7-fold interradiologist variation.
Conclusion Substantial interradiologist variation exists in the probability of recommending a follow-up examination in a radiology report, even after adjusting for patient, examination, and radiologist factors. © RSNA, 2019. See also the editorial by Russell in this issue.
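The abstract does not describe the internals of the validated NLP classifier used to flag follow-up recommendations. As a rough, hypothetical stand-in for the task it performs (simple keyword matching, far cruder than a trained model), report classification might look like:

```python
# Hypothetical keyword-based stand-in for the follow-up-recommendation
# classifier described in the abstract; the patterns below are assumptions,
# not the authors' validated algorithm.
import re

FOLLOWUP_PATTERNS = [
    r"\bfollow[- ]?up\b.*\b(ct|mri|us|ultrasound|imaging)\b",
    r"\brecommend(ed)?\b.*\b(ct|mri|us|ultrasound|imaging)\b",
    r"\brepeat\b.*\b(ct|mri|us|ultrasound|imaging)\b",
]

def has_followup_recommendation(report_text: str) -> bool:
    """Return True if the report text matches any follow-up-imaging pattern."""
    text = report_text.lower()
    return any(re.search(pattern, text) for pattern in FOLLOWUP_PATTERNS)
```

In practice, a trained text classifier outperforms such rules, which is presumably why the authors annotated 1000 reports to train and validate a machine learning model rather than relying on keywords.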
Affiliation(s)
- Laila R Cochon
- From the Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
| | - Neena Kapoor
- From the Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
| | - Emmanuel Carrodeguas
- From the Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
| | - Ivan K Ip
- From the Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
| | - Ronilda Lacson
- From the Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
| | - Giles Boland
- From the Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
| | - Ramin Khorasani
- From the Center for Evidence-Based Imaging, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA 02115
| |
|