1
Donnelly LF, Guimaraes CV. Event-Based Learning and Improvement: Radiology's Move From Peer Review to Peer Learning. Semin Ultrasound CT MR 2024;45:161-169. PMID: 38373672. DOI: 10.1053/j.sult.2024.02.005.
Abstract
Over the past 15 years, the radiology community has made great progress moving from a system of score-based peer review to one of peer learning. Much has been learned along the way. In peer learning, cases in which learning opportunities are identified are reviewed solely for the purpose of fostering learning and improvement. This article defines peer learning and peer review and emphasizes the differences between them; looks back at the 20-year history of score-based peer review and the transition to peer learning; outlines the problems with score-based peer review and the key elements of peer learning; discusses the current state of peer learning; and outlines future challenges and opportunities.
Affiliation(s)
- Lane F Donnelly
- Department of Radiology, University of North Carolina School of Medicine, Chapel Hill, NC; Department of Pediatrics, University of North Carolina School of Medicine, Chapel Hill, NC.
- Carolina V Guimaraes
- Department of Radiology, University of North Carolina School of Medicine, Chapel Hill, NC
2
Äijö T, Elgort D, Becker M, Herzog R, Brown RKJ, Odry BL, Vianu R. Improving the Reliability of Peer Review Without a Gold Standard. J Imaging Inform Med 2024;37:489-503. PMID: 38316666. PMCID: PMC11031531. DOI: 10.1007/s10278-024-00971-9.
Abstract
Peer review plays a crucial role in accreditation and credentialing processes as it can identify outliers and foster a peer learning approach, facilitating error analysis and knowledge sharing. However, traditional peer review methods may fall short in effectively addressing the interpretive variability among reviewing and primary reading radiologists, hindering scalability and effectiveness. Reducing this variability is key to enhancing the reliability of results and instilling confidence in the review process. In this paper, we propose a novel statistical approach called "Bayesian Inter-Reviewer Agreement Rate" (BIRAR) that integrates radiologist variability. By doing so, BIRAR aims to enhance the accuracy and consistency of peer review assessments, providing physicians involved in quality improvement and peer learning programs with valuable and reliable insights. A computer simulation was designed to assign predefined interpretive error rates to hypothetical interpreting and peer-reviewing radiologists. The Monte Carlo simulation then sampled (100 samples per experiment) the data that would be generated by peer reviews. The performances of BIRAR and four other peer review methods for measuring interpretive error rates were then evaluated, including a method that uses a gold standard diagnosis. Application of the BIRAR method resulted in 93% and 79% higher relative accuracy and 43% and 66% lower relative variability, compared to "Single/Standard" and "Majority Panel" peer review methods, respectively. Accuracy was defined by the median difference of Monte Carlo simulations between measured and pre-defined "actual" interpretive error rates. Variability was defined by the 95% CI around the median difference of Monte Carlo simulations between measured and pre-defined "actual" interpretive error rates. 
BIRAR is a practical and scalable peer review method that produces more accurate and less variable assessments of interpretive quality by accounting for variability within the group's radiologists, implicitly applying a standard derived from the level of consensus within the group across various types of interpretive findings.
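The full BIRAR model is not specified in the abstract, but the Monte Carlo premise it describes can be sketched: assign a predefined ("actual") interpretive error rate to a hypothetical reading radiologist, let an imperfect reviewer judge each case, and measure how far the observed discrepancy rate drifts from the truth. The function name and all rates below are illustrative, not taken from the paper:

```python
import random
import statistics

def simulate_single_reviewer(p_reader, p_reviewer, n_cases=1000,
                             n_sims=100, seed=0):
    """Monte Carlo sketch of 'Single/Standard' peer review: the measured
    discrepancy rate mixes the reader's true error rate with the
    reviewer's own interpretive variability."""
    rng = random.Random(seed)
    measured = []
    for _ in range(n_sims):
        flagged = 0
        for _ in range(n_cases):
            reader_wrong = rng.random() < p_reader
            reviewer_wrong = rng.random() < p_reviewer
            # A discrepancy is flagged when exactly one of the two errs;
            # concordant errors go undetected.
            if reader_wrong != reviewer_wrong:
                flagged += 1
        measured.append(flagged / n_cases)
    ordered = sorted(measured)
    median = statistics.median(measured)
    # Approximate 95% interval across simulations (2.5th/97.5th percentiles)
    return median, median - p_reader, (ordered[2], ordered[97])
```

With p_reader = p_reviewer = 0.05, the expected flagged rate is p(1 - q) + q(1 - p) = 0.095, nearly double the reader's true 5% error rate; that systematic bias is exactly what a method accounting for reviewer variability aims to remove.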
Affiliation(s)
- Daniel Elgort
- Covera Health, New York, NY, USA
- Present Address: Aster Insights, Tampa, FL, USA
- Murray Becker
- Covera Health, New York, NY, USA
- Rutgers Robert Wood Johnson Medical School, New Brunswick, NJ, USA
- Richard K J Brown
- Department of Radiology, University of Michigan (Michigan Medicine), Ann Arbor, MI, USA
3
Ohno Y, Aoki T, Endo M, Koyama H, Moriya H, Okada F, Higashino T, Sato H, Oyama-Manabe N, Haraguchi T, Arakita K, Aoyagi K, Ikeda Y, Kaminaga S, Taniguchi A, Sugihara N. Machine learning-based computer-aided simple triage (CAST) for COVID-19 pneumonia as compared with triage by board-certified chest radiologists. Jpn J Radiol 2024;42:276-290. PMID: 37861955. PMCID: PMC10899374. DOI: 10.1007/s11604-023-01495-y.
Abstract
PURPOSE Several reporting systems have been proposed to provide standardized language and diagnostic categories for expressing the likelihood that lung abnormalities on CT images represent COVID-19. We developed machine learning (ML)-based CT texture analysis software for simple triage based on the RSNA Expert Consensus Statement system. The purpose of this study was to conduct a multi-center, multi-reader study to determine the capability of ML-based computer-aided simple triage (CAST) software based on the RSNA expert consensus statements for diagnosis of COVID-19 pneumonia. METHODS For this multi-center study, 174 cases who had undergone CT and polymerase chain reaction (PCR) tests for COVID-19 were retrospectively included. Their CT data were assessed by CAST and by consensus of three board-certified chest radiologists, after which all cases were classified as either positive or negative. Diagnostic performance was then compared by McNemar's test. To determine the radiological finding evaluation capability of CAST, three other board-certified chest radiologists graded the CAST results for radiological findings against five criteria. Finally, the accuracies of all radiological evaluations were compared by McNemar's test. RESULTS A comparison of the diagnosis of COVID-19 pneumonia against RT-PCR results, for cases with COVID-19 pneumonia findings on CT, showed no significant difference in diagnostic performance between the ML-based CAST software and the consensus evaluation (p > 0.05). Comparison of agreement on accuracy across all radiological finding evaluations showed that emphysema evaluation accuracy for investigator A (AC = 91.7%) was significantly lower than that for investigators B (100%, p = 0.0009) and C (100%, p = 0.0009).
CONCLUSION This multi-center study shows that COVID-19 pneumonia triage by CAST can be considered at least as valid as triage by expert chest radiologists, and that CAST may play a useful complementary role, alongside the RT-PCR test, in the management of patients with suspected COVID-19 pneumonia in routine clinical practice.
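The diagnostic comparison above rests on McNemar's test, which considers only the discordant pairs: cases where one reader (or CAST) is correct and the other is not. A minimal exact two-sided version (the counts in the usage note are illustrative, not the study's data):

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact two-sided McNemar test on the discordant pair counts:
    b = cases method A got right and method B got wrong,
    c = cases A got wrong and B got right.
    Under H0 (equal error rates), b ~ Binomial(b + c, 0.5)."""
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs: no evidence of a difference
    k = min(b, c)
    # One tail of the binomial, doubled and capped at 1 for two-sidedness
    p_one_tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * p_one_tail)
```

For example, 5 vs 5 discordant cases gives p = 1.0 (no detectable difference), while 1 vs 9 gives p < 0.05.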
Affiliation(s)
- Yoshiharu Ohno
- Department of Diagnostic Radiology, Fujita Health University School of Medicine, 1-98 Dengakugakubo, Kutsukake-Cho, Toyoake, Aichi, 470-1192, Japan.
- Joint Research Laboratory of Advanced Medical Imaging, Fujita Health University School of Medicine, Toyoake, Aichi, Japan.
- Takatoshi Aoki
- Department of Radiology, University of Occupational and Environmental Health School of Medicine, Kitakyusyu, Fukuoka, Japan
- Masahiro Endo
- Division of Diagnostic Radiology, Shizuoka Cancer Center, Sunto-Gun, Nagaizumi-Cho, Shizuoka, Japan
- Hisanobu Koyama
- Department of Radiology, Advanced Diagnostic Medical Imaging, Kobe University Graduate School of Medicine, Kobe, Hyogo, Japan
- Hiroshi Moriya
- Department of Radiology, Ohara General Hospital, Fukushima, Fukushima, Japan
- Fumito Okada
- Department of Radiology, Oita Prefectural Hospital, Oita, Oita, Japan
- Takanori Higashino
- Department of Radiology, National Hospital Organization Himeji Medical Center, Himeji, Hyogo, Japan
- Haruka Sato
- Department of Radiology, Oita University Faculty of Medicine, Yufu, Oita, Japan
- Noriko Oyama-Manabe
- Department of Radiology, Jichi Medical University Saitama Medical Center, Saitama, Saitama, Japan
- Takafumi Haraguchi
- Department of Advanced Biomedical Imaging and Informatics, St. Marianna University School of Medicine, Kawasaki, Kanagawa, Japan
- Kota Aoyagi
- Canon Medical Systems Corporation, Otawara, Tochigi, Japan
- Naoki Sugihara
- Canon Medical Systems Corporation, Otawara, Tochigi, Japan
4
Banziger C, McNeil K, Goh HL, Choi S, Zealley IA. Simple changes to the reporting environment produce a large reduction in the frequency of interruptions to the reporting radiologist: an observational study. Acta Radiol 2022;64:1873-1879. PMID: 36437570. PMCID: PMC10160395. DOI: 10.1177/02841851221139624.
Abstract
Background Interruptions are a cause of discrepancy, errors, and potential safety incidents in radiology. The sources of radiological error are multifactorial, and strategies to reduce error should include measures to reduce interruptions. Purpose To evaluate the effect of simple changes in the reporting environment on the frequency of interruptions to the reporting radiologist of a hospital radiology department. Material and Methods A prospective observational study was carried out. The number and type of potentially disruptive events (PDEs) to the radiologist reporting inpatient computed tomography (CT) scans were recorded during 20 separate 1-h observation periods in both the pre- and post-intervention phases. The interventions were (i) relocation of the radiologist to a private, quiet room, and (ii) initial vetting of clinician enquiries via a separate duty radiologist. Results After the intervention there was an 82% reduction in the number of frank interruptions (PDEs that require the radiologist to abandon the reporting task), from a median of 6 events per hour to 1 (95% confidence interval [CI] = 4–6; P < 0.00001). The overall number of PDEs was reduced by 56%, from a median of 11 events per hour to 5 (95% CI = 4.5–11; P < 0.00001). Conclusion Relocation of inpatient CT reporting to a private, quiet room, coupled with vetting of clinician enquiries via the duty radiologist, resulted in a large reduction in the frequency of interruptions, a frequently cited avoidable source of radiological error.
Affiliation(s)
- Carina Banziger
- School of Medicine, University of St Andrews, St Andrews, Scotland, UK
- Carina Banziger, University of St Andrews, School of Medicine, St Andrews KY16 9TF, UK.
- Kirsty McNeil
- Department of Radiology, NHS Tayside, Ninewells Hospital, Dundee, Scotland, UK
- Hui Lu Goh
- Department of Radiology, NHS Greater Glasgow and Clyde, Glasgow, Scotland, UK
- Samantha Choi
- Department of Radiology, Royal Hospital for Children and Young People, Edinburgh, Scotland, UK
- Ian A Zealley
- Department of Radiology, NHS Tayside, Ninewells Hospital, Dundee, Scotland, UK
5
Yeates EO, Grigorian A, Chinn J, Young H, Colin Escobar J, Glavis-Bloom J, Anavim A, Yaghmai V, Nguyen NT, Nahmias J. Night Radiology Coverage for Trauma: Residents, Teleradiology, or Both? J Am Coll Surg 2022;235:500-509. PMID: 35972171. DOI: 10.1097/xcs.0000000000000280.
Abstract
BACKGROUND Overnight radiology coverage for trauma patients is often addressed with a combination of on-call radiology residents (RR) and a teleradiology service; however, the accuracy of these 2 readers has not been studied for trauma. We aimed to compare the accuracy of RR versus teleradiologist interpretations of CT scans for trauma patients. STUDY DESIGN A retrospective analysis (March 2019 through May 2020) of trauma patients presenting to a single American College of Surgeons Level I trauma center was performed. Patients whose CT scans were performed between 10 pm and 8 am were included, because their scans were interpreted by both an RR and a teleradiologist. Interpretations were compared with the final attending faculty radiologist's interpretation and graded for accuracy based on the RADPEER scoring system. Discrepancies were characterized as traumatic injury or incidental findings and as missed findings or overcalls. Turnaround time was also compared. RESULTS A total of 1,053 patients and 8,226 interpretations were included. Compared with teleradiologists, RR had a lower overall discrepancy rate (7.7% vs 9.0%, p = 0.026) and a lower major discrepancy rate (3.8% vs 5.2%, p = 0.003). Among major discrepancies, RR had a lower rate of traumatic injury discrepancies (3.2% vs 4.4%, p = 0.004) and missed findings (3.4% vs 5.1%, p < 0.001) but a higher rate of overcalls (0.5% vs 0.1%, p < 0.001) compared with teleradiologists. The mean turnaround time was shorter for RR (51.3 vs 78.8 minutes, p < 0.001). The combination of both RR and teleradiologist interpretations had a lower overall discrepancy rate than RR alone (5.0% vs 7.7%, p < 0.001). CONCLUSIONS This study identified lower discrepancy rates and a faster turnaround time for RR compared with teleradiologists for trauma CT studies. The combination of both interpreters had an even lower discrepancy rate, suggesting this combination is optimal when an in-house attending radiologist is not available.
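The abstract reports p-values for comparing discrepancy proportions without naming the test used; a standard two-proportion z-test is one plausible choice and is easy to sketch. The counts below are illustrative, not the study's raw data:

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions
    (e.g. discrepancy rates of two reader groups), using the pooled
    proportion under H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))     # standard error under H0
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF, Phi(x) = (1 + erf(x/sqrt(2)))/2
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

Identical rates give z = 0 and p = 1; a large gap (e.g. 38/1000 vs 90/1000) gives a vanishingly small p-value.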
Affiliation(s)
- Eric O Yeates
- From the Department of Surgery (Yeates, Grigorian, Chinn, Young, Colin Escobar, Nguyen, Nahmias)
- Areg Grigorian
- From the Department of Surgery (Yeates, Grigorian, Chinn, Young, Colin Escobar, Nguyen, Nahmias)
- Department of Surgery, University of Southern California (USC), Los Angeles, CA (Grigorian)
- Justine Chinn
- From the Department of Surgery (Yeates, Grigorian, Chinn, Young, Colin Escobar, Nguyen, Nahmias)
- Hayley Young
- From the Department of Surgery (Yeates, Grigorian, Chinn, Young, Colin Escobar, Nguyen, Nahmias)
- Jessica Colin Escobar
- From the Department of Surgery (Yeates, Grigorian, Chinn, Young, Colin Escobar, Nguyen, Nahmias)
- Justin Glavis-Bloom
- Department of Radiology (Glavis-Bloom, Anavim, Yaghmai), University of California, Irvine (UCI), Orange, CA
- Arash Anavim
- Department of Radiology (Glavis-Bloom, Anavim, Yaghmai), University of California, Irvine (UCI), Orange, CA
- Vahid Yaghmai
- Department of Radiology (Glavis-Bloom, Anavim, Yaghmai), University of California, Irvine (UCI), Orange, CA
- Ninh T Nguyen
- From the Department of Surgery (Yeates, Grigorian, Chinn, Young, Colin Escobar, Nguyen, Nahmias)
- Jeffry Nahmias
- From the Department of Surgery (Yeates, Grigorian, Chinn, Young, Colin Escobar, Nguyen, Nahmias)
6
Automated vs. human evaluation of corneal staining. Graefes Arch Clin Exp Ophthalmol 2022;260:2605-2612. PMID: 35357547. PMCID: PMC9325848. DOI: 10.1007/s00417-022-05574-0.
Abstract
BACKGROUND AND PURPOSE Corneal fluorescein staining is one of the most important diagnostic tests in dry eye disease (DED). Nevertheless, the result of this examination depends on the grader. So far, no method for automated quantification of corneal staining is commercially available. The aim of this study was to develop a software-assisted grading algorithm and to compare it with a group of human graders with variable clinical experience in patients with DED. METHODS Fifty images of eyes stained with 2 µl of 2% fluorescein, presenting different severities of superficial punctate keratopathy in patients with DED, were taken under standardized conditions. An algorithm for detecting and counting superficial punctate keratopathy was developed using ImageJ with a training dataset of 20 randomly picked images. The test dataset of 30 images was then analyzed (1) by the ImageJ algorithm and (2) by 22 graders, all ophthalmologists with different levels of experience. All graders evaluated the images using the Oxford grading scheme for corneal staining at baseline and after 6-8 weeks. Intrarater agreement was also evaluated by adding a mirrored version of all original images into the set of images during the second grading. RESULTS The count of particles detected by the algorithm correlated significantly (n = 30; p < 0.01) with the estimated true Oxford grade (Sr = 0.91). Overall, human graders showed only moderate intrarater agreement (K = 0.426), while software-assisted grading was always the same (K = 1.0). Little difference was found between specialists and non-specialists in terms of intrarater agreement (K = 0.436 specialists; K = 0.417 non-specialists). The highest interrater agreement, 75.6%, was seen in the most experienced grader, a cornea specialist with 29 years of experience, and the lowest, 25.6%, in a resident with only 2 years of experience.
CONCLUSION The variance in human grading of corneal staining, though small, is likely to have little impact on clinical management and thus seems acceptable. While human graders give results sufficient for clinical application, software-assisted grading of corneal staining ensures higher consistency and is thus preferable for re-evaluating patients, e.g., in clinical trials.
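The ImageJ pipeline itself is not published with the abstract, but the core idea behind software-assisted counting of punctate staining can be sketched as threshold-then-count-connected-components. The grid, threshold, and 4-connectivity below are illustrative assumptions, not the study's actual parameters:

```python
def count_particles(img, threshold):
    """Toy particle counter: binarize a grayscale image (list of rows of
    intensities) at `threshold`, then count 4-connected components via
    iterative flood fill."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if img[y][x] >= threshold and not seen[y][x]:
                count += 1                      # new particle found
                stack = [(y, x)]                # flood-fill its pixels
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and not seen[cy][cx] and img[cy][cx] >= threshold):
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return count
```

On a 3x4 test grid with two bright blobs, thresholding at 5 yields a count of 2; raising the threshold above every pixel yields 0.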
7
Smith JT. It's not about the errors, it's about the learning: How the Royal College of Radiologists has developed a Radiology Events and Learning process in the United Kingdom. J Med Imaging Radiat Oncol 2022;66:185-192. PMID: 35243780. DOI: 10.1111/1754-9485.13355.
Abstract
The Royal College of Radiologists (RCR) is based in the United Kingdom but is a global organisation with members and fellows worldwide. In this invited article, the chair of the RCR Radiology Events and Learning (REAL) panel recounts his experience of looking at radiological errors. He starts with his personal work auditing his own mistakes as a junior consultant, describes what he learned in his departmental role in a large teaching hospital running a Radiology Events and Learning Meeting (REALM), and gives an overview of some of the work done over the last two decades by the RCR. This includes publishing national guidelines that set standards for running a REALM, setting up the REAL panel, which produces a quarterly newsletter of cases from RCR members, and running an annual conference to share information with local radiology departments around the country. A review of the literature describing the drivers for this work and the parallels with industry sits alongside the practical tips he found useful, which he hopes will help anyone setting up their own departmental errors or discrepancy meeting.
8
Hlabangana LT, Elsingergy M, Ahmed A, Boschoff PE, Goodier M, Bove M, Andronikou S. Inter-rater reliability in quality assurance (QA) of pediatric chest X-rays. J Med Imaging Radiat Sci 2021;52:427-434. PMID: 33958315. DOI: 10.1016/j.jmir.2021.04.002.
Abstract
PURPOSE The goal of the study was to determine the inter-rater agreement on multiple factors used to evaluate the quality of pediatric chest X-ray exams from different levels of healthcare provision in an African setting. METHODS The image quality of pediatric chest X-rays from 3 South African medical centers at varying levels of healthcare service was retrospectively assessed by 3 raters for 12 quality factors: (1) absent body parts; (2) under-inspiration; (3) patient rotation; (4) scapula in the way; (5) patient kyphosis/lordosis; (6) artefact/foreign body; (7) central vessel visualization; (8) peripheral vessel visualization; (9) poor collimation; (10) trachea and bronchi visualization; (11) post-cardiac vessel visualization; and (12) absent or wrong image orientation. Analysis was performed using the Brennan-Prediger coefficient of agreement for inter-rater reliability, and Cochran's Q statistic and McNemar's test for inter-rater bias. RESULTS 1077 X-rays were reviewed. The smallest difference between observers in the frequency of errors was noticed for factors (1) absent body parts and (12) absent or wrong image orientation, with almost perfect agreement between raters. The κ score for these two factors among all raters and between each pair of raters was more than 0.95, with no significant inter-rater bias. Conversely, there was poor agreement for the remaining factors, the least agreed upon being factor (3) patient rotation, with a κ score of 0.23, followed by factor (2) under-inspiration (κ score of 0.32) and factor (4) scapula in the way (κ score of 0.35). There was significant inter-rater bias for all three of these factors. CONCLUSION Many of the factors used to assess the quality of a chest X-ray in children demonstrate poor reliability despite mitigation against variations in training, standard quality definitions, and level of healthcare service provision.
New definitions, objective measures and recording tools for assessing pediatric chest radiographic quality are required.
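The Brennan-Prediger coefficient used in this study fixes chance agreement at 1/q for q categories, unlike Cohen's kappa, which estimates chance from each rater's marginal frequencies; this makes it robust when one category dominates. A minimal two-rater version with illustrative ratings:

```python
def brennan_prediger(ratings_a, ratings_b, n_categories=2):
    """Brennan-Prediger agreement coefficient for two raters:
    BP = (Po - Pe) / (1 - Pe), with chance agreement fixed at
    Pe = 1 / n_categories regardless of the observed marginals."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    po = sum(a == b for a, b in zip(ratings_a, ratings_b)) / len(ratings_a)
    pe = 1 / n_categories
    return (po - pe) / (1 - pe)
```

Perfect agreement gives 1.0; binary agreement on exactly half the items gives 0.0 (chance level for two categories).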
Affiliation(s)
- Linda Tebogo Hlabangana
- University of the Witwatersrand School of Clinical Medicine, Faculty of Health Sciences, Johannesburg, South Africa
- Mohamed Elsingergy
- Department of Radiology, Children's Hospital of Philadelphia, 3401 Civic Center Blvd., Philadelphia, PA 19104, USA
- Aadil Ahmed
- Bayradiology Private Practice, St George's Hospital, Port Elizabeth, Eastern Cape Province, South Africa
- Peter Ernst Boschoff
- Wits Donald Gordon Medical Center, TJ Nel Radiologists Inc., Johannesburg, Gauteng Province, South Africa
- Matthew Goodier
- University of KwaZulu-Natal, Greys Hospital, Pietermaritzburg, KwaZulu-Natal Province, South Africa
- Michele Bove
- Burger Radiologists Inc., Arwyp Medical Center, Johannesburg, Gauteng Province, South Africa
- Savvas Andronikou
- Department of Radiology, Children's Hospital of Philadelphia, 3401 Civic Center Blvd., Philadelphia, PA 19104, USA; Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA.
9
Woznitza N, Steele R, Hussain A, Gower S, Groombridge H, Togher D, Lofton L, Lainchbury J, Compton E, Rowe S, Robertson K. Reporting radiographer peer review systems: A cross-sectional survey of London NHS Trusts. Radiography (Lond) 2021;27:173-177. DOI: 10.1016/j.radi.2020.07.014.
10
Optimizing Professional Practice Evaluation to Enable a Nonpunitive Learning Health System Approach to Peer Review. Pediatr Qual Saf 2020;6:e375. PMID: 33409427. PMCID: PMC7781295. DOI: 10.1097/pq9.0000000000000375.
Abstract
Healthcare organizations are focused on 2 different and sometimes conflicting tasks: (1) accelerating the improvement of clinical care delivery and (2) collecting provider-specific data to determine the competency of providers. We describe creating a process to meet both of these aims while maintaining a culture that fosters improvement and teamwork.
11
Wang Z, Zhao W, Shen J, Jiang Z, Yang S, Tan S, Zhang Y. PI-RADS version 2.1 scoring system is superior in detecting transition zone prostate cancer: a diagnostic study. Abdom Radiol (NY) 2020;45:4142-4149. PMID: 32902659. DOI: 10.1007/s00261-020-02724-y.
Abstract
PURPOSE Studies comparing versions 2 and 2.1 of the Prostate Imaging Reporting and Data System (PI-RADS) are rare. This study aimed to evaluate whether PI-RADS version 2.1 is superior to PI-RADS version 2 in detecting transition zone prostate cancer. METHODS This was a diagnostic study of patients with prostate diseases who visited the Urology Department of The Second Affiliated Hospital of Soochow University and underwent a magnetic resonance imaging (MRI) examination between 03-01-2016 and 10-31-2018. The images originally analyzed using PI-RADS version 2 were retrospectively re-analyzed and scored in 2019 according to the updated PI-RADS version 2.1. Kappa statistics and receiver operating characteristic (ROC) curves were used. RESULTS For Reader 1, compared with PI-RADS version 2, version 2.1 had higher sensitivity (85% vs. 79%, P = 0.03), lower specificity (65% vs. 83%, P < 0.001), and a lower area under the curve (AUC) (0.749 vs. 0.809, P < 0.001). For Reader 2 (first attempt), compared with PI-RADS version 2, version 2.1 had lower specificity (67% vs. 91%, P < 0.001) and a lower AUC (0.702 vs. 0.844, P < 0.001). For Reader 2 (second attempt), compared with PI-RADS version 2, version 2.1 had higher sensitivity (88% vs. 78%, P < 0.001) and lower specificity (77% vs. 91%, P < 0.001). The kappa between the two attempts for Reader 2 was 0.321. CONCLUSION These results suggest that PI-RADS version 2.1 might improve the detection of prostate cancers in the transition zone compared with PI-RADS version 2, but that it might result in a higher number of biopsies because of its lower specificity.
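The AUC values above can be read through the Mann-Whitney identity: AUC is the probability that a randomly chosen positive case (e.g. biopsy-proven cancer) receives a higher ordinal score than a randomly chosen negative case, with ties counting one half. A direct O(n·m) sketch with illustrative scores:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U identity for ordinal scores
    (e.g. PI-RADS categories 1-5): fraction of positive/negative
    pairs ranked correctly, ties credited 0.5."""
    wins = ties = 0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1
            elif sp == sn:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))
```

For scores_pos = [3, 4, 5] and scores_neg = [1, 2, 3], eight of nine pairs are ranked correctly and one is tied, giving AUC = 8.5/9 ≈ 0.944.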
12
Larson DB, Broder JC, Bhargavan-Chatfield M, Donnelly LF, Kadom N, Khorasani R, Sharpe RE, Pahade JK, Moriarity AK, Tan N, Siewert B, Kruskal JB. Transitioning From Peer Review to Peer Learning: Report of the 2020 Peer Learning Summit. J Am Coll Radiol 2020;17:1499-1508. DOI: 10.1016/j.jacr.2020.07.016.
13
Waite S, Farooq Z, Grigorian A, Sistrom C, Kolla S, Mancuso A, Martinez-Conde S, Alexander RG, Kantor A, Macknik SL. A Review of Perceptual Expertise in Radiology-How it develops, How we can test it, and Why humans still matter in the era of Artificial Intelligence. Acad Radiol 2020;27:26-38. PMID: 31818384. DOI: 10.1016/j.acra.2019.08.018.
Abstract
As the first step in image interpretation is detection, an error in perception can prematurely end the diagnostic process, leading to missed diagnoses. Because perceptual errors of this sort ("failure to detect") are the most common interpretive error (and cause of litigation) in radiology, understanding the nature of perceptual expertise is essential in decreasing radiology's long-standing error rates. In this article, we review what constitutes a perceptual error, the existing models of radiologic image perception, the development of perceptual expertise and how it can be tested, perceptual learning methods in training radiologists, and why understanding perceptual expertise is still relevant in the era of artificial intelligence. Adding targeted interventions, such as perceptual learning, to existing teaching practices has the potential to enhance expertise and reduce medical error.
14
Davenport MS, Larson DB. Measuring Diagnostic Radiologists: What Measurements Should We Use? J Am Coll Radiol 2019;16:333-335. PMID: 30718210. DOI: 10.1016/j.jacr.2018.12.011.
Affiliation(s)
- Matthew S Davenport
- Department of Radiology and the Department of Urology, Michigan Medicine, Ann Arbor, Michigan; Michigan Radiology Quality Collaborative, Ann Arbor, Michigan.
- David B Larson
- Department of Radiology, Stanford University School of Medicine, Stanford, California
15
Abstract
OBJECTIVE The purpose of this article is to outline practical steps that a department can take to transition to a peer learning model. CONCLUSION The 2015 Institute of Medicine report on improving diagnosis emphasized that organizations and industries that embrace error as an opportunity to learn tend to outperform those that do not. To meet this charge, radiology must transition from a peer review to a peer learning approach.
16
Donnelly LF, Dorfman SR, Jones J, Bisset GS. Transition From Peer Review to Peer Learning: Experience in a Radiology Department. J Am Coll Radiol 2017;15:1143-1149. PMID: 29055610. DOI: 10.1016/j.jacr.2017.08.023.
Abstract
PURPOSE To describe the process by which a radiology department moved from peer review to peer collaborative improvement (PCI) and to review data from the first 16 months of the PCI process. MATERIALS AND METHODS Data from the first 16 months after PCI were reviewed: the number of case reviews performed, the number of learning opportunities identified, the percentage yield of learning opportunities identified, the type of learning opportunities identified, and a comparison of the preceding parameters between cases randomly reviewed and cases actively pushed (issues actively identified and entered). Changes in actively pushed cases were also assessed as volume per month over the 16 months (run chart). Faculty members were surveyed about their perception of the conversion to PCI. RESULTS In all, 12,197 cases were peer reviewed, yielding 1,140 learning opportunities (9.34%). The most common types of learning opportunities for all reviewed cases were perception (5.1%) and reporting (1.9%). The yield of learning opportunities from actively pushed cases was 96.3%, compared with 3.88% for randomly reviewed cases. The number of actively pushed cases per month increased over the course of the period and established two new confidence intervals. The faculty survey revealed that the faculty perceived the new PCI process as positive, nonpunitive, and focused on improvement. CONCLUSIONS The study demonstrates that a switch to PCI is perceived as nonpunitive and is associated with increased radiologist submission of learning opportunities. Active entry of identified learning opportunities had a greater yield and perceived value compared with random review of cases.
Affiliation(s)
- Lane F Donnelly: Department of Radiology, Texas Children's Hospital, Houston, Texas
- Scott R Dorfman: Department of Radiology, Texas Children's Hospital, Houston, Texas
- Jeremy Jones: Department of Radiology, Texas Children's Hospital, Houston, Texas
- George S Bisset: Department of Radiology, Texas Children's Hospital, Houston, Texas
17
Scali EP, Harris AC, Martin ML. Peer Review in Radiology: How Can We Learn From Our Mistakes? Can Assoc Radiol J 2017; 68:368-370. [PMID: 28818363 DOI: 10.1016/j.carj.2017.04.002] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6]
Affiliation(s)
- Elena P Scali: Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada
- Alison C Harris: Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada
- Michael L Martin: Department of Radiology, University of British Columbia, Vancouver, British Columbia, Canada
18
Chin SC, Weir-McCall JR, Yeap PM, White RD, Budak MJ, Duncan G, Oliver TB, Zealley IA. Evidence-based anatomical review areas derived from systematic analysis of cases from a radiological departmental discrepancy meeting. Clin Radiol 2017; 72:902.e1-902.e12. [PMID: 28687168 DOI: 10.1016/j.crad.2017.06.001] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9]
Abstract
AIM To produce short checklists of specific anatomical review sites for different regions of the body based on the frequency of radiological errors reviewed at radiology discrepancy meetings, thereby creating "evidence-based" review areas for radiology reporting. MATERIALS AND METHODS A single-centre discrepancy database was retrospectively reviewed from a 5-year period. All errors were classified by type, modality, body system, and specific anatomical location. Errors were assigned to one of four body regions: chest, abdominopelvic, central nervous system (CNS), and musculoskeletal (MSK). Frequencies of errors in anatomical locations were then analysed. RESULTS There were 561 errors in 477 examinations; 290 (46%) errors occurred in the abdomen/pelvis, 99 (15.7%) in the chest, 117 (18.5%) in the CNS, and 125 (19.9%) in the MSK system. In each body system, the five most common locations were chest: lung bases on computed tomography (CT), apices on radiography, pulmonary vasculature, bones, and mediastinum; abdominopelvic: vasculature, colon, kidneys, liver, and pancreas; CNS: intracranial vasculature, peripheral cerebral grey matter, bone, parafalcine, and the frontotemporal lobes surrounding the Sylvian fissure; and MSK: calvarium, sacrum, pelvis, chest, and spine. CONCLUSION The five listed locations accounted for >50% of all perceptual errors, suggesting an avenue for focused review at the end of reporting.
Affiliation(s)
- S C Chin: Department of Clinical Radiology, Ninewells Hospital & Medical School, Ninewells Avenue, Dundee, Tayside, Scotland, DD1 9SY, UK
- J R Weir-McCall: Department of Clinical Radiology, Ninewells Hospital & Medical School, Ninewells Avenue, Dundee, Tayside, Scotland, DD1 9SY, UK
- P M Yeap: Department of Clinical Radiology, Ninewells Hospital & Medical School, Ninewells Avenue, Dundee, Tayside, Scotland, DD1 9SY, UK
- R D White: Department of Clinical Radiology, Ninewells Hospital & Medical School, Ninewells Avenue, Dundee, Tayside, Scotland, DD1 9SY, UK; Department of Radiology, University Hospital of Wales, Heath Park, Cardiff, CF14 4XW, UK
- M J Budak: Gold Coast Radiology, Queensland, Australia
- G Duncan: Department of Clinical Radiology, Ninewells Hospital & Medical School, Ninewells Avenue, Dundee, Tayside, Scotland, DD1 9SY, UK
- T B Oliver: Department of Clinical Radiology, Ninewells Hospital & Medical School, Ninewells Avenue, Dundee, Tayside, Scotland, DD1 9SY, UK
- I A Zealley: Department of Clinical Radiology, Ninewells Hospital & Medical School, Ninewells Avenue, Dundee, Tayside, Scotland, DD1 9SY, UK
19
Ma WK, Borgen R, Kelly J, Millington S, Hilton B, Aspin R, Lança C, Hogg P. Blurred digital mammography images: an analysis of technical recall and observer detection performance. Br J Radiol 2017; 90:20160271. [PMID: 28134567 PMCID: PMC5601529 DOI: 10.1259/bjr.20160271] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0]
Abstract
OBJECTIVE Blurred images in full-field digital mammography are a problem in the UK Breast Screening Programme. Technical recalls may be due to blurring not being seen on lower resolution monitors used for review. This study assesses the visual detection of blurring on a 2.3-MP monitor and a 5-MP report grade monitor and proposes an observer standard for the visual detection of blurring on a 5-MP reporting grade monitor. METHODS 28 observers assessed 120 images for blurring; 20 images had no blurring present, whereas 100 images had blurring imposed through mathematical simulation at 0.2, 0.4, 0.6, 0.8 and 1.0 mm levels of motion. Technical recall rate for both monitors and angular size at each level of motion were calculated. χ2 tests were used to test whether significant differences in blurring detection existed between 2.3- and 5-MP monitors. RESULTS The technical recall rate for 2.3- and 5-MP monitors are 20.3% and 9.1%, respectively. The angular size for 0.2- to 1-mm motion varied from 55 to 275 arc s. The minimum amount of motion for visual detection of blurring in this study is 0.4 mm. For 0.2-mm simulated motion, there was no significant difference [χ2 (1, N = 1095) = 1.61, p = 0.20] in blurring detection between the 2.3- and 5-MP monitors. CONCLUSION According to this study, monitors ≤2.3 MP are not suitable for technical review of full-field digital mammography images for the detection of blur. Advances in knowledge: This research proposes the first observer standard for the visual detection of blurring.
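The χ2 comparison of blurring detection between the two monitors is a standard Pearson chi-squared test on a 2x2 table with 1 degree of freedom. A minimal pure-Python sketch (the function name and example counts are ours, not the study's data):

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic (1 df, no continuity correction)
    for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# e.g. blur detected / not detected on each of two monitor types
# (made-up counts summing to N = 1095 observations, as in the abstract)
statistic = chi2_2x2(120, 430, 50, 495)
```

The statistic is then compared against the chi-squared distribution with 1 df to obtain the p-value (e.g. 3.84 corresponds to p = 0.05).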
Affiliation(s)
- Wang Kei Ma: Department of Radiography, University of Salford, Salford, UK
- Rita Borgen: East Lancashire Breast Screening Unit, Burnley General Hospital, Burnley, UK
- Judith Kelly: Department of Radiography, Countess of Chester Hospital NHS Foundation Trust, Chester, UK
- Sara Millington: Department of Radiography, Countess of Chester Hospital NHS Foundation Trust, Chester, UK
- Beverley Hilton: East Lancashire Breast Screening Unit, Burnley General Hospital, Burnley, UK
- Rob Aspin: Department of Computer Science and Software Engineering, University of Salford, Salford, UK
- Carla Lança: Department of Sciences and Rehabilitation Technologies, Lisbon School of Health Technology, Lisbon, Portugal; Centro de Investigação em Saúde Pública, Escola Nacional de Saúde Pública, Universidade NOVA de Lisboa, Lisbon, Portugal
- Peter Hogg: Department of Radiography, University of Salford, Salford, UK; Department of Radiography, Karolinska Institute, Stockholm, Sweden
20
Agrawal A, Koundinya DB, Raju JS, Agrawal A, Kalyanpur A. Utility of contemporaneous dual read in the setting of emergency teleradiology reporting. Emerg Radiol 2016; 24:157-164. [PMID: 27858233 DOI: 10.1007/s10140-016-1465-3] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8]
Abstract
PURPOSE Emergency radiology requires rapid and accurate interpretation of imaging examinations. Missed findings may lead to adverse outcomes. Double reporting may be used to minimize errors. Limited contemporaneous double reporting may be most efficient and cost-effective, but no data exist. This study is intended to examine the benefits of double reading and identify examinations where this would be most useful. METHODS In this study, dual reporting was conducted in a parallel reading environment in a teleradiology practice for 3779 radiological procedures performed at two radiology centers in the USA over a period of 4 months. Discrepancies between reads were scored using the ACR peer review scoring system and grouped by modality and body part. Errors were tabulated across the study types, followed by identification of statistically significant differences. The interaction between image number and odds of an error was ascertained. RESULTS In 145 instances (3.8%; 95% CI, 3.2-4.4%), double reporting identified errors, leading to report modification. Study type was significantly related to error frequency (p = 0.0001), with higher than average frequencies of error seen for CT abdomen and pelvis and MRI head or spine, but lower than average for CT head, CT spine, and ultrasound. Image number was positively associated with error odds, but was not independently significant in a joint logistic regression model that included study type. CONCLUSION Dual reporting identifies missed findings in about 1 of 25 emergency studies. This benefit varies substantially across study types, and limited double reporting merits further investigation as a cost-effective practice improvement strategy.
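The interval quoted for the 3.8% error rate (145 of 3779 reads) is consistent with a simple normal-approximation confidence interval for a proportion; a minimal sketch (the helper name is ours, not the paper's):

```python
import math

def proportion_ci(successes, total, z=1.96):
    """Point estimate and normal-approximation 95% CI for a proportion."""
    p = successes / total
    se = math.sqrt(p * (1 - p) / total)  # standard error of the estimate
    return p, p - z * se, p + z * se

# The abstract's 145 discrepant reads out of 3779 procedures
rate, low, high = proportion_ci(145, 3779)
```

Rounded to one decimal place this reproduces the reported 3.8% (95% CI, 3.2-4.4%).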
Affiliation(s)
- Anjali Agrawal: Teleradiology Solutions, 12B Sriram Road, Civil Lines, Delhi, 110054, India
- D B Koundinya: CSIR Institute of Genomics and Integrative Biology, Delhi University, Mall Road, North Campus, Delhi, 110007, India
- Jayadeepa Srinivas Raju: Teleradiology Solutions, #7G, Opposite Graphite India, Whitefield, Bangalore, Karnataka, 560048, India
- Anurag Agrawal: CSIR Institute of Genomics and Integrative Biology, Delhi University, Mall Road, North Campus, Delhi, 110007, India
- Arjun Kalyanpur: Teleradiology Solutions, #7G, Opposite Graphite India, Whitefield, Bangalore, Karnataka, 560048, India
21
Marongiu L, Shain E, Drumright L, Lillestøl R, Somasunderam D, Curran MD. Analysis of TaqMan Array Cards Data by an Assumption-Free Improvement of the maxRatio Algorithm Is More Accurate than the Cycle-Threshold Method. PLoS One 2016; 11:e0165282. [PMID: 27828987 PMCID: PMC5102466 DOI: 10.1371/journal.pone.0165282] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9]
Abstract
Quantitative PCR diagnostic platforms are moving towards increased sample throughput, with instruments capable of carrying out thousands of reactions at once already in use. The need for a computational tool to reliably assist in the validation of the results is therefore compelling. In the present study, 328 residual clinical samples provided by the Public Health England at Addenbrooke's Hospital (Cambridge, UK) were processed by TaqMan Array Card assay, generating 15 744 reactions from 54 targets. The amplification data were analysed by the conventional cycle-threshold (CT) method and an improvement of the maxRatio (MR) algorithm developed to filter out the reactions with irregular amplification profiles. The reactions were also independently validated by three raters and a consensus was generated from their classification. The inter-rater agreement by Fleiss' kappa was 0.885; the agreement between either CT or MR with the raters gave Fleiss' kappa 0.884 and 0.902, respectively. Based on the consensus classification, the CT and MR methods achieved an assay accuracy of 0.979 and 0.987, respectively. These results suggested that the assumption-free MR algorithm was more reliable than the CT method, with clear advantages for the diagnostic settings.
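Fleiss' kappa, used above to measure agreement among the three raters, generalizes two-rater kappa to any number of raters over a subjects-by-categories count matrix. A minimal pure-Python sketch (toy data, not the study's 15 744 reactions):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa. ratings[i][j] is the number of raters who assigned
    subject i to category j; every subject has the same rater count."""
    n_subjects = len(ratings)
    n_raters = sum(ratings[0])
    # mean observed agreement across subjects
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ) / n_subjects
    # chance agreement from the marginal category proportions
    n_categories = len(ratings[0])
    p_j = [
        sum(row[j] for row in ratings) / (n_subjects * n_raters)
        for j in range(n_categories)
    ]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# 3 raters, 2 categories ("valid"/"invalid" reaction), unanimous on 3 subjects
perfect = fleiss_kappa([[3, 0], [0, 3], [3, 0]])  # kappa = 1.0
```

Values near 0.9, as reported for the MR method against the raters, indicate almost perfect agreement on conventional benchmark scales.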
Affiliation(s)
- Luigi Marongiu: Department of Medicine, University of Cambridge, Cambridge, Cambridgeshire, CB2 0QQ, United Kingdom
- Eric Shain: Grove Street Technology LLC, 459 Grove Street, Glencoe, Illinois, 60022, United States of America
- Lydia Drumright: Department of Medicine, University of Cambridge, Cambridge, Cambridgeshire, CB2 0QQ, United Kingdom
- Reidun Lillestøl: Department of Medicine, University of Cambridge, Cambridge, Cambridgeshire, CB2 0QQ, United Kingdom
- Donald Somasunderam: Public Health England, Clinical Microbiology and Public Health Laboratory, Addenbrooke's Hospital, Hills Road, Cambridge, Cambridgeshire, CB2 0QW, United Kingdom
- Martin D. Curran: Public Health England, Clinical Microbiology and Public Health Laboratory, Addenbrooke's Hospital, Hills Road, Cambridge, Cambridgeshire, CB2 0QW, United Kingdom
22
Development and Validation of a Standardized Tool for Prioritization of Information Sources. Online J Public Health Inform 2016; 8:e187. [PMID: 27752297 PMCID: PMC5065522 DOI: 10.5210/ojphi.v8i2.6720] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3]
Abstract
PURPOSE To validate the utility and effectiveness of a standardized tool for prioritization of information sources for early detection of diseases. METHODS The tool was developed with input from diverse public health experts garnered through survey. Ten raters used the tool to evaluate ten information sources and reliability among raters was computed. The Proc mixed procedure with random effect statement and SAS Macros were used to compute multiple raters' Fleiss Kappa agreement and Kendall's Coefficient of Concordance. RESULTS Ten disparate information sources evaluated obtained the following composite scores: ProMed 91%; WAHID 90%; Eurosurv 87%; MediSys 85%; SciDaily 84%; EurekAl 83%; CSHB 78%; GermTrax 75%; Google 74%; and CBC 70%. A Fleiss Kappa agreement of 50.7% was obtained for ten information sources and 72.5% for a sub-set of five sources rated, which is substantial agreement validating the utility and effectiveness of the tool. CONCLUSION This study validated the utility and effectiveness of a standardized criteria tool developed to prioritize information sources. The new tool was used to identify five information sources suited for use by the KIWI system in the CEZD-IIR project to improve surveillance of infectious diseases. The tool can be generalized to situations when prioritization of numerous information sources is necessary.
23
Lauritzen PM, Andersen JG, Stokke MV, Tennstrand AL, Aamodt R, Heggelund T, Dahl FA, Sandbæk G, Hurlen P, Gulbrandsen P. Radiologist-initiated double reading of abdominal CT: retrospective analysis of the clinical importance of changes to radiology reports. BMJ Qual Saf 2016; 25:595-603. [PMID: 27013638 PMCID: PMC4975845 DOI: 10.1136/bmjqs-2015-004536] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0]
Abstract
Background Misinterpretation of radiological examinations is an important contributing factor to diagnostic errors. Consultant radiologists in Norwegian hospitals frequently request second reads by colleagues in real time. Our objective was to estimate the frequency of clinically important changes to radiology reports produced by these prospectively obtained double readings. Methods We retrospectively compared the preliminary and final reports from 1071 consecutive double-read abdominal CT examinations of surgical patients at five public hospitals in Norway. Experienced gastrointestinal surgeons rated the clinical importance of changes from the preliminary to final report. The severity of the radiological findings in clinically important changes was classified as increased, unchanged or decreased. Results Changes were classified as clinically important in 146 of 1071 reports (14%). Changes to 3 reports (0.3%) were critical (demanding immediate action), 35 (3%) were major (implying a change in treatment) and 108 (10%) were intermediate (requiring further investigations). The severity of the radiological findings was increased in 118 (81%) of the clinically important changes. Important changes were made less frequently when abdominal radiologists were first readers, more frequently when they were second readers, and more frequently to urgent examinations. Conclusion A 14% rate of clinically important changes made during double reading may justify quality assurance of radiological interpretation. Using expert second readers and a targeted selection of urgent cases and radiologists reading outside their specialty may increase the yield of discrepant cases.
Affiliation(s)
- Peter Mæhre Lauritzen: Department of Diagnostic Imaging, Akershus University Hospital, Lørenskog, Norway; Institute of Clinical Medicine, University of Oslo, Campus Ahus, Lørenskog, Norway
- Jack Gunnar Andersen: Department of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway
- Rolf Aamodt: Department of Gastrointestinal Surgery, Akershus University Hospital, Lørenskog, Norway
- Thomas Heggelund: Department of Gastrointestinal Surgery, Akershus University Hospital, Lørenskog, Norway
- Fredrik A Dahl: Health Services Research Unit, Akershus University Hospital, Lørenskog, Norway
- Gunnar Sandbæk: Department of Radiology and Nuclear Medicine, Oslo University Hospital, Oslo, Norway; Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Petter Hurlen: Department of Diagnostic Imaging, Akershus University Hospital, Lørenskog, Norway
- Pål Gulbrandsen: Institute of Clinical Medicine, University of Oslo, Campus Ahus, Lørenskog, Norway; Health Services Research Unit, Akershus University Hospital, Lørenskog, Norway
24
Carlton Jones AL, Roddie ME. Implementation of a virtual learning from discrepancy meeting: a method to improve attendance and facilitate shared learning from radiological error. Clin Radiol 2016; 71:583-90. [PMID: 26932774 DOI: 10.1016/j.crad.2016.01.021] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4]
Abstract
AIM To assess the effect on radiologist participation in learning from discrepancy meetings (LDMs) in a multisite radiology department by establishing virtual LDMs using OsiriX (Pixmeo). MATERIALS AND METHODS Sets of anonymised discrepancy cases were added to an OsiriX database available for viewing on iMacs in all radiology reporting rooms. Radiologists were given a 3-week period to review the cases and send their feedback to the LDM convenor. Group learning points and consensus feedback were added to each case before it was moved to a permanent digital LDM library. Participation was recorded and compared with that from the previous 4 years of conventional LDMs. Radiologist feedback comparing the two types of LDM was collected using an anonymous online questionnaire. RESULTS Numbers of radiologists attending increased significantly from a mean of 12±2.9 for the conventional LDM to 32.7±7 for the virtual LDM (p<0.0001) and the percentage of radiologists achieving the UK standard of participation in at least 50% of LDMs annually rose from an average of 18% to 68%. The number of cases submitted per meeting rose significantly from an average of 11.1±3 for conventional LDMs to 15.9±5.9 for virtual LDMs (p<0.0097). Analysis of 35 returned questionnaires showed that radiologists welcomed being able to review cases at a time and place of their choosing and at their own pace. CONCLUSION Introduction of virtual LDMs in a multisite radiology department improved radiologist participation in shared learning from radiological discrepancy and increased the number of submitted cases.
Affiliation(s)
- A L Carlton Jones: Imperial College Healthcare NHS Trust, Charing Cross Hospital, Fulham Palace Road, London W6 8RF, UK
- M E Roddie: Imperial College Healthcare NHS Trust, Charing Cross Hospital, Fulham Palace Road, London W6 8RF, UK
25
Lauritzen PM, Stavem K, Andersen JG, Stokke MV, Tennstrand AL, Bjerke G, Hurlen P, Sandbæk G, Dahl FA, Gulbrandsen P. Double reading of current chest CT examinations: Clinical importance of changes to radiology reports. Eur J Radiol 2016; 85:199-204. [DOI: 10.1016/j.ejrad.2015.11.012] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0]
26
Riviello ED, Kiviri W, Twagirumugabe T, Mueller A, Banner-Goodspeed VM, Officer L, Novack V, Mutumwinka M, Talmor DS, Fowler RA. Hospital Incidence and Outcomes of the Acute Respiratory Distress Syndrome Using the Kigali Modification of the Berlin Definition. Am J Respir Crit Care Med 2016; 193:52-9. [DOI: 10.1164/rccm.201503-0584oc] [Citation(s) in RCA: 206] [Impact Index Per Article: 25.8]
27
Lockwood P, Piper K, Pittock L. CT head reporting by radiographers: Results of an accredited postgraduate programme. Radiography (Lond) 2015. [DOI: 10.1016/j.radi.2014.12.001] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2]
28
Verdoorn JT, Hunt CH, Luetmer MT, Wood CP, Eckel LJ, Schwartz KM, Diehn FE, Kallmes DF. Increasing neuroradiology exam volumes on-call do not result in increased major discrepancies in primary reads performed by residents. Open Neuroimag J 2015; 8:11-5. [PMID: 25646138 PMCID: PMC4311384 DOI: 10.2174/1874440001408010011] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1]
Abstract
Background and Purpose: A common perception is that increased on-call workload leads to increased resident mistakes. To test this, we evaluated whether increased imaging volume has led to increased errors by residents. Materials and Methods: A retrospective review was made of all overnight neuroradiology CT exams with a primary resident read from 2006-2010. All studies were over-read by staff neuroradiologists next morning. As the volume is higher on Friday through Sunday nights, weekend studies were examined separately. Discrepancies were classified as either minor or major. “Major” discrepancy was defined as a discrepancy that the staff radiologist felt was significant enough to potentially affect patient care, necessitating a corrected report and phone contact with the ordering physician and documentation. The total number of major discrepancies was recorded by quarter. In addition, the total number of neuroradiology CT studies read overnight on-call was noted. Results: The mean number of cases per night during the weekday increased from 3.0 in 2006 to 5.2 in 2010 (p<0.001). During the weekend, the mean number of cases per night increased from 5.4 in 2006 to 7.6 in 2010 (p<0.001). Despite this increase, the major discrepancy rate decreased from 2.7% in 2006 to 2.3% in 2010 (p=0.34). Conclusion: Despite an increase in neuroradiology exam volumes, there continues to be a low major discrepancy rate for primary resident interpretations. While continued surveillance of on-call volumes is crucial to the educational environment, concern of increased major errors should not be used as sole justification to limit autonomy.
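Comparing discrepancy rates between two periods (e.g. 2.7% vs 2.3%) is a two-proportion comparison; the abstract does not specify its exact test, so the following is only a generic pooled-SE z-test sketch with made-up counts, not a re-derivation of the paper's p = 0.34:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for the difference of two proportions (pooled SE)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

With a small absolute difference and moderate counts, such a test is underpowered, which is consistent with a non-significant result like p = 0.34.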
Affiliation(s)
- Jared T Verdoorn: Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, 55905, USA
- Christopher H Hunt: Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, 55905, USA
- Marianne T Luetmer: Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, 55905, USA
- Christopher P Wood: Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, 55905, USA
- Laurence J Eckel: Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, 55905, USA
- Kara M Schwartz: Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, 55905, USA
- Felix E Diehn: Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, 55905, USA
- David F Kallmes: Department of Radiology, Mayo Clinic, 200 1st Street SW, Rochester, MN, 55905, USA
29
Guérin G, Jamali S, Soto CA, Guilbert F, Raymond J. Interobserver agreement in the interpretation of outpatient head CT scans in an academic neuroradiology practice. AJNR Am J Neuroradiol 2014; 36:24-9. [PMID: 25059693 DOI: 10.3174/ajnr.a4058] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7]
Abstract
BACKGROUND AND PURPOSE The repeatability of head CT interpretations may be studied in different contexts: in peer-review quality assurance interventions or in interobserver agreement studies. We assessed the agreement between double-blind reports of outpatient CT scans in a routine academic practice. MATERIALS AND METHODS Outpatient head CT scans (119 patients) were randomly selected to be read twice in a blinded fashion by 8 neuroradiologists practicing in an academic institution during 1 year. Nonstandardized reports were analyzed to extract 4 items (answer to the clinical question, major findings, incidental findings, recommendations for further investigations) from each report, to identify agreement or discrepancies (classified as class 2 [mentioned or not mentioned or contradictions between reports], class 1 [mentioned in both reports but diverging in location or severity], 0 [concordant], or not applicable), according to a standardized data-extraction form. Agreement regarding the presence or absence of clinically significant or incidental findings was studied with κ statistics. RESULTS The interobserver agreement regarding head CT studies with positive and negative results for clinically pertinent findings was 0.86 (0.77-0.95), but concordance was only 75.6% (67.2%-82.5%). Class 2 discrepancy was found in 15.1%; class 1 discrepancy, in 9.2% of cases. The κ value for reporting incidental findings was 0.59 (0.45-0.74), with class 2 discrepancy in 29.4% of cases. Most discrepancies did not impact the clinical management of patients. CONCLUSIONS Discrepancies in double-blind interpretations of head CT examinations were more common than reported in peer-review quality assurance programs.
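The κ statistics reported above (e.g. 0.86 for positive vs negative studies) are two-rater agreement measures of the Cohen's kappa form: observed agreement corrected for agreement expected by chance from each rater's marginals. A minimal sketch on paired labels (toy reads, not the study's data):

```python
def cohens_kappa(pairs):
    """Cohen's kappa for two raters; pairs is a list of (rater1, rater2)
    category labels, one tuple per case."""
    n = len(pairs)
    categories = {c for pair in pairs for c in pair}
    # observed agreement
    p_o = sum(1 for a, b in pairs if a == b) / n
    # chance agreement from each rater's marginal label frequencies
    p_e = sum(
        (sum(1 for a, _ in pairs if a == c) / n)
        * (sum(1 for _, b in pairs if b == c) / n)
        for c in categories
    )
    return (p_o - p_e) / (1 - p_e)

# Toy double reads: 80/100 concordant with balanced marginals
reads = ([("pos", "pos")] * 40 + [("neg", "neg")] * 40
         + [("pos", "neg")] * 10 + [("neg", "pos")] * 10)
kappa = cohens_kappa(reads)  # 0.6 for this toy table
```

Note how a raw concordance of 80% shrinks to κ = 0.6 once chance agreement is removed, mirroring the abstract's contrast between κ = 0.86 and 75.6% concordance.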
Affiliation(s)
- G Guérin: Department of Radiology, Centre Hospitalier de l'Université de Montréal, Notre-Dame Hospital, Montreal, Quebec, Canada
- S Jamali: Laboratory of Interventional Neuroradiology, Centre Hospitalier de l'Université de Montréal, Notre-Dame Hospital Research Centre, Montreal, Quebec, Canada
- C A Soto: Department of Radiology, Centre Hospitalier de l'Université de Montréal, Notre-Dame Hospital, Montreal, Quebec, Canada
- F Guilbert: Department of Radiology, Centre Hospitalier de l'Université de Montréal, Notre-Dame Hospital, Montreal, Quebec, Canada
- J Raymond: Department of Radiology, Centre Hospitalier de l'Université de Montréal, Notre-Dame Hospital, Montreal, Quebec, Canada; Laboratory of Interventional Neuroradiology, Centre Hospitalier de l'Université de Montréal, Notre-Dame Hospital Research Centre, Montreal, Quebec, Canada
30
Prowse S, Pinkey B, Etherington R. Discrepancies in discrepancy meetings: Results of the UK national discrepancy meeting survey. Clin Radiol 2014; 69:18-22. [DOI: 10.1016/j.crad.2013.05.105] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6]
31
Commentary on discrepancies in discrepancy meetings. Clin Radiol 2013; 69:11-2. [PMID: 23973162 DOI: 10.1016/j.crad.2013.07.013] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4]