1. Bradford AC, Nguyen T, Schulson L, Dick A, Gupta S, Simon K, Stein BD. High-Dose Opioid Prescribing in Individuals with Acute Pain: Assessing the Effects of US State Opioid Policies. J Gen Intern Med 2024; 39:2689-2697. [PMID: 39028403] [PMCID: PMC11534911] [DOI: 10.1007/s11606-024-08947-9]
Abstract
BACKGROUND How state opioid policy environments with multiple concurrent policies affect opioid prescribing to individuals with acute pain is unknown. OBJECTIVE To examine how prescription drug monitoring programs (PDMPs), pain management clinic regulations, initial prescription duration limits, and mandatory continuing medical education affected total and high-dose prescribing. DESIGN A county-level multiple-policy difference-in-differences event study framework. SUBJECTS A total of 2,425,643 individuals aged 12-64 years in a large national commercial insurance deidentified claims database with acute pain diagnoses and opioid prescriptions from 2007 to 2019. MAIN MEASURES The total number of acute pain opioid treatment episodes and the number of episodes containing high-dose (> 90 morphine equivalent daily dose (MEDD)) prescriptions. KEY RESULTS Approximately 7.5% of acute pain episodes were categorized as high-dose episodes. Prescription duration limits were associated with increases in the number of total episodes; no other policy was found to have a significant impact. Beginning five quarters after implementation, counties in states with pain management clinic regulations experienced a sustained 50% relative decline in the number of episodes containing > 90 MEDD prescriptions (95% CIs: Q5: -0.506, -0.144; Q12: -1.000, -0.290). Mandated continuing medical education regarding the treatment of pain was associated with a 50-75% relative increase in the number of high-dose episodes following the first year and a half after enactment (95% CIs: Q7: 0.351, 0.869; Q12: 0.413, 1.107). Initial prescription duration limits were associated with an initial relative reduction of 25% in high-dose prescribing, with the effect increasing over time (95% CI: Q12: -0.967, -0.335). There was no evidence that PDMPs affected high-dose opioids dispensed to individuals with acute pain. Other high-risk prescribing indicators were explored as well; no consistent policy impacts were found.
CONCLUSIONS State opioid policies may have differential effects on high-dose opioid dispensing in individuals with acute pain. Policymakers should consider the effectiveness of individual policies in the presence of other opioid policies when addressing the ongoing opioid crisis.
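The study's high-dose measure is a simple threshold rule: an episode counts as high-dose if any prescription in it exceeds 90 MEDD. A minimal sketch of that classification; the episode representation and function names are hypothetical and not drawn from the study's code:

```python
HIGH_DOSE_MEDD = 90  # morphine equivalent daily dose threshold used in the study

def is_high_dose_episode(prescription_medds):
    """An episode is high-dose if any prescription in it exceeds 90 MEDD."""
    return any(medd > HIGH_DOSE_MEDD for medd in prescription_medds)

def high_dose_share(episodes):
    """Fraction of episodes containing at least one > 90 MEDD prescription
    (the study reports roughly 7.5% for acute pain episodes)."""
    flagged = sum(is_high_dose_episode(e) for e in episodes)
    return flagged / len(episodes)
```

For example, `high_dose_share([[30, 95], [40, 60], [10], [120]])` flags the first and last episodes and returns 0.5.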
Affiliation(s)
- Ashley C Bradford
- School of Public Policy, Georgia Institute of Technology, Atlanta, GA, USA
- Thuy Nguyen
- School of Public Health, Department of Health Management and Policy, University of Michigan, Ann Arbor, MI, USA
- Lucy Schulson
- RAND Corporation, Boston, MA, USA
- Department of General Internal Medicine, Boston University School of Medicine, Boston, MA, USA
- Sumedha Gupta
- Department of Economics, Indiana University, Indianapolis, IN, USA
- Kosali Simon
- O'Neill School of Public and Environmental Affairs, Indiana University, Bloomington, IN, USA
2. Lucero KS, Moore DE. A Systematic Investigation of Assessment Scores, Self-Efficacy, and Clinical Practice: Are They Related? Journal of CME 2024; 13:2420373. [PMID: 39498264] [PMCID: PMC11533230] [DOI: 10.1080/28338073.2024.2420373]
Abstract
A considerable amount of continuing professional development (CPD) for health professionals is online and voluntary. There is evidence from self-reported and objective administrative data that some CPD activities affect clinical practice outcomes. Some studies have shown a potential mediating effect of knowledge/competency and/or self-efficacy between participation in CPD activities and the outcomes of that participation, specifically clinical practice. However, because clinical practice in those studies was self-reported, little is known about how this relationship plays out in real-world clinical practice. The purpose of the current study was to examine the relationship between knowledge/competency, self-efficacy, and real-world clinical practice, to begin to understand whether the field's focus on knowledge/competency and self-efficacy as levers for changing real-world clinical practice is empirically supported. We conducted a secondary analysis of pre-participation questionnaire data and medical and pharmacy claims data originally collected in three evaluations of online CPD interventions, examining whether knowledge/competency and self-efficacy contributed to physicians' real-world clinical practice. Results show an association between knowledge/competency scores and ratings of self-efficacy and suggest unique contributions of knowledge/competency and self-efficacy to clinical practice. Study results support the value of knowledge/competency scores and self-efficacy ratings as predictors of clinical practice. The effect size was larger for self-efficacy, suggesting it may be a more practical indicator of clinical practice for CPD evaluators because developing self-efficacy questions is simpler than developing knowledge and case-based decision-making questions. It remains important, however, to conduct thorough needs assessments, which may include knowledge/competency assessments, to identify topics for CPD activities that are more likely to increase self-efficacy and, ultimately, clinical practice.
Affiliation(s)
- Donald E. Moore
- School of Medicine, Vanderbilt University, Nashville, TN, USA
3. Forgiarini A, Deroma L, Buttussi F, Zangrando N, Licata S, Valent F, Chittaro L, Di Chiara A. Introducing Virtual Reality in a STEMI Coronary Syndrome Course: Qualitative Evaluation with Nurses and Doctors. Cyberpsychology, Behavior and Social Networking 2024; 27:387-398. [PMID: 38527251] [DOI: 10.1089/cyber.2023.0414]
Abstract
Among the growing number of medical education topics taught with virtual reality (VR), the prehospital management of ST-segment elevation myocardial infarction (STEMI) had not yet been considered. This article presents a VR system for STEMI training and its introduction in an institutional course for emergency nurses and case manager (CM) doctors. The system comprises three applications that, respectively, allow (a) the course instructor to control the conditions of the virtual patient, (b) the CM to communicate with the nurse in the virtual field and receive from him/her the patient's parameters and electrocardiogram, and (c) the nurse to interact with the patient in the immersive VR scenario. We enrolled 17 course participants to collect their perceptions and opinions through a semistructured interview. The thematic analysis showed the system was appreciated (n = 17) and described as engaging (n = 4), challenging (n = 5), useful for improving self-confidence (n = 4), innovative (n = 5), and promising for training courses (n = 10). Realism was also appreciated (n = 13), although with some drawbacks (e.g., oversimplification; n = 5). Overall, participants described the course as an opportunity to share opinions (n = 8) and highlight issues (n = 4) and found it useful for novices (n = 5) and, as a refresher, for experienced personnel (n = 6). Some participants suggested improvements in the scenarios' type (n = 5) and variability (n = 5). Although most participants did not report usage difficulties with the VR system (n = 13), many described the need to become familiar with it (n = 13) and with the specific gestures it requires (n = 10). Three participants experienced cybersickness.
Affiliation(s)
- Alessandro Forgiarini
- Human-Computer Interaction Laboratory, Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
- Hygiene and Clinical Epidemiology Unit, Azienda Sanitaria Universitaria Friuli Centrale, Udine, Italy
- Laura Deroma
- Hygiene and Public Health Unit, Department of Prevention, Azienda Sanitaria Universitaria Friuli Centrale, Udine, Italy
- Fabio Buttussi
- Human-Computer Interaction Laboratory, Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
- Nicola Zangrando
- Hygiene and Clinical Epidemiology Unit, Azienda Sanitaria Universitaria Friuli Centrale, Udine, Italy
- Sabrina Licata
- Hygiene and Clinical Epidemiology Unit, Azienda Sanitaria Universitaria Friuli Centrale, Udine, Italy
- Francesca Valent
- Hygiene and Clinical Epidemiology Unit, Azienda Sanitaria Universitaria Friuli Centrale, Udine, Italy
- Luca Chittaro
- Human-Computer Interaction Laboratory, Department of Mathematics, Computer Science and Physics, University of Udine, Udine, Italy
- Antonio Di Chiara
- Cardiology Tolmezzo, San Daniele-Tolmezzo Hospital, Azienda Sanitaria Universitaria Friuli Centrale, Udine, Italy
4. Parekh P, Bahadoor V. The Utility of Multiple-Choice Assessment in Current Medical Education: A Critical Review. Cureus 2024; 16:e59778. [PMID: 38846235] [PMCID: PMC11154086] [DOI: 10.7759/cureus.59778]
Abstract
In recent years, healthcare education providers have boasted of a conscious shift towards increasing clinical competence via assessments that promote more active learning. Despite this, multiple-choice questions remain amongst the most prevalent forms of assessment. The literature justifies the use of multiple-choice testing by its high levels of validity and reliability. Education providers also benefit from the lower resource and cost requirements of question development and from the easier adaptation of questions to accommodate neurodiversity. However, when these (and other) variables are tested via a structured approach in terms of their utility, it becomes clear that these advantages depend largely on the quality of the questions written, the level of clinical competence to be attained by learners, and the handling of confounding variables such as differential attainment. This review discusses attempts to improve the utility of multiple-choice testing in modern healthcare curricula, as well as the impact of these modifications on performance.
Affiliation(s)
- Priya Parekh
- Trauma and Orthopaedics, Wirral University Teaching Hospital, Wirral, GBR
- Vikesh Bahadoor
- Trauma and Orthopaedics, Wirral University Teaching Hospital, Wirral, GBR
5. Perrichot A, Vaittinada Ayar P, Taboulet P, Choquet C, Gay M, Casalino E, Steg PG, Curac S, Vaittinada Ayar P. Assessment of real-time electrocardiogram effects on interpretation quality by emergency physicians. BMC Medical Education 2023; 23:677. [PMID: 37723508] [PMCID: PMC10506301] [DOI: 10.1186/s12909-023-04670-x]
Abstract
BACKGROUND The electrocardiogram (ECG) is one of the most commonly performed examinations in emergency medicine. The literature suggests that one-third of ECG interpretations contain errors that can lead to adverse clinical outcomes. The purpose of this study was to assess the quality of real-time ECG interpretation by senior emergency physicians compared with cardiologists and an ECG expert. METHODS This was a prospective study in two university emergency departments and one emergency medical service. Over five weeks, all ECGs were performed and interpreted by a senior emergency physician (EP) and then by a cardiologist using the same questionnaire. In case of mismatch between the EP and the cardiologist, the expert had the final word. The rate of agreement between both interpretations and the kappa (κ) coefficient for the identification of major abnormalities characterized the reading ability of the emergency physicians. RESULTS A total of 905 ECGs were analyzed, of which 705 (78%) received a similar interpretation from emergency physicians and cardiologists/expert. However, the interpretations of emergency physicians and cardiologists for the identification of major abnormalities coincided in only 66% of cases (κ: 0.59 (95% confidence interval (CI): 0.54-0.65); P-value = 1.64e-92). ECGs were correctly classified by emergency physicians according to their emergency level in 82% of cases (κ: 0.73 (95% CI: 0.70-0.77); P-value ≈ 0). Emergency physicians correctly recognized normal ECGs (sensitivity = 0.91). CONCLUSION Our study suggests gaps in the identification of major abnormalities among emergency physicians. The initial and ongoing training of emergency physicians in ECG reading deserves to be improved.
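The agreement statistics this study reports (raw percent agreement plus a chance-corrected kappa coefficient) can be sketched as follows; the example labels are invented for illustration and are not from the study's data:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    # Observed agreement: fraction of items given identical labels.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if raters labelled independently at their own rates.
    categories = set(rater_a) | set(rater_b)
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical EP vs cardiologist labels for four ECGs.
ep = ["major", "normal", "normal", "major"]
cardiologist = ["major", "normal", "major", "major"]
kappa = cohens_kappa(ep, cardiologist)  # 0.5: moderate agreement beyond chance
```

Kappa discounts the agreement two raters would reach by chance alone, which is why the study's 66% raw agreement corresponds to a kappa of only 0.59.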
Affiliation(s)
- Alice Perrichot
- Emergency Department, Beaujon Hospital AP-HP, Clichy, France
- Pradeebane Vaittinada Ayar
- Laboratoire des Sciences du Climat et de l'Environnement (LSCE-IPSL), CNRS/CEA/UVSQ, UMR8212, Université Paris-Saclay, 91190 Gif-sur-Yvette, France
- Pierre Taboulet
- Emergency Department, Saint Louis Hospital AP-HP, Clichy, France
- Matthieu Gay
- Emergency Department, Beaujon Hospital AP-HP, Clichy, France
- Sonja Curac
- Emergency Department, Beaujon Hospital AP-HP, Clichy, France
- Prabakar Vaittinada Ayar
- Emergency Department, Beaujon Hospital AP-HP, Clichy, France
- INSERM UMR-S942, MASCOTT, Paris, France
- University of Paris Cité, Paris, France
6. Lucero KS. Net Promoter Score (NPS): What Does Net Promoter Score Offer in the Evaluation of Continuing Medical Education? J Eur CME 2022; 11:2152941. [PMCID: PMC9718547] [DOI: 10.1080/21614083.2022.2152941]
Abstract
Net Promoter Score (NPS) has been used since 2003 in many fields, such as software, clinical care, and websites, as a measure of customer satisfaction. With a single question, NPS methodology is thought to capture brand loyalty and intent to act based on experiences with the brand or product. In the current study, accredited continuing medical education or continuing education (CME/CE) was the product. Providers of CME have used the NPS rating (the individual score on a scale of 0 to 10) to collect data about the value of a clinician's experience with CME activities, but there has been no research examining what it is actually associated with. This study sought to understand what NPS at the activity level indicates relative to other self-reported and assessment outcomes in CME. Across 155 online CME programmes (29,696 target-audience learners with complete data), potential outcomes of CME, including whether knowledge or competence improved via assessment score, mean post-confidence rating, and whether learners intended practice changes and were committed to those changes, were examined as predictors of NPS. NPS is unique in that it cannot be calculated at the individual level; individual scores must be aggregated, and the percentage who selected ratings of 0 to 5 is then subtracted from the percentage who selected 9 or 10. Results showed that the percentage of learners committed to change predicts 70% of the variance in NPS, which suggests NPS is a valid indicator of intention to act. These results have implications for how the field might incorporate a single standardised question to examine the potential impact of online CME, and they call for additional research on whether NPS predicts change in clinical practice.
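The aggregation described in the abstract can be written out directly. A minimal sketch, following the 0-5 detractor band stated above (note that conventional NPS treats 0-6 as detractors; the ratings below are invented):

```python
def net_promoter_score(ratings):
    """Activity-level NPS: percentage of promoters (ratings 9-10) minus
    percentage of detractors (ratings 0-5, per the abstract's description)."""
    n = len(ratings)
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 5 for r in ratings)
    return 100.0 * (promoters - detractors) / n

# Hypothetical ratings from five learners of one CME activity.
nps = net_promoter_score([10, 9, 8, 3, 10])  # (3 promoters - 1 detractor) / 5 -> 40.0
```

Because the score is a difference of percentages, it ranges from -100 (all detractors) to +100 (all promoters), and passives (7-8) affect it only through the denominator.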
Affiliation(s)
- Katie Stringer Lucero
- Medscape, LLC, 395 Hudson St, New York, NY 10014, USA
7. Tudor Car L, Kyaw BM, Teo A, Fox TE, Vimalesvaran S, Apfelbacher C, Kemp S, Chavannes N. Outcomes, Measurement Instruments, and Their Validity Evidence in Randomized Controlled Trials on Virtual, Augmented, and Mixed Reality in Undergraduate Medical Education: Systematic Mapping Review. JMIR Serious Games 2022; 10:e29594. [PMID: 35416789] [PMCID: PMC9047880] [DOI: 10.2196/29594]
Abstract
BACKGROUND Extended reality, which encompasses virtual reality (VR), augmented reality (AR), and mixed reality (MR), is increasingly used in medical education. Studies assessing the effectiveness of these new educational modalities should measure relevant outcomes using outcome measurement tools with validity evidence. OBJECTIVE Our aim is to determine the choice of outcomes, measurement instruments, and the use of measurement instruments with validity evidence in randomized controlled trials (RCTs) on the effectiveness of VR, AR, and MR in medical student education. METHODS We conducted a systematic mapping review. We searched 7 major bibliographic databases from January 1990 to April 2020, and 2 reviewers screened the citations and extracted data independently from the included studies. We report our findings in line with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. RESULTS Of the 126 retrieved RCTs, 115 (91.3%) were on VR and 11 (8.7%) were on AR. No RCT on MR in medical student education was found. Of the 115 studies on VR, 64 (55.6%) were on VR simulators, 30 (26.1%) on screen-based VR, 9 (7.8%) on VR patient simulations, and 12 (10.4%) on VR serious games. Most studies reported only a single outcome and immediate postintervention assessment data. Skills outcome was the most common outcome reported in studies on VR simulators (97%), VR patient simulations (100%), and AR (73%). Knowledge was the most common outcome reported in studies on screen-based VR (80%) and VR serious games (58%). Less common outcomes included participants' attitudes, satisfaction, cognitive or mental load, learning efficacy, engagement or self-efficacy beliefs, emotional state, competency developed, and patient outcomes. 
At least one form of validity evidence was found in approximately half of the studies on VR simulators (55%), VR patient simulations (56%), VR serious games (58%), and AR (55%) and in a quarter of the studies on screen-based VR (27%). Most studies used assessment methods that were implemented in a nondigital format, such as paper-based written exercises or in-person assessments where examiners observed performance (72%). CONCLUSIONS RCTs on VR and AR in medical education report a restricted range of outcomes, mostly skills and knowledge. The studies largely report immediate postintervention outcome data and use assessment methods that are in a nondigital format. Future RCTs should include a broader set of outcomes, report on the validity evidence of the measurement instruments used, and explore the use of assessments that are implemented digitally.
Affiliation(s)
- Lorainne Tudor Car
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Department of Primary Care and Public Health, School of Public Health, Imperial College London, London, United Kingdom
- Bhone Myint Kyaw
- Centre for Population Health Sciences, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Andrew Teo
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Tatiana Erlikh Fox
- Centre for Population Health Sciences, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Department of Internal Medicine, Onze Lieve Vrouwen Gasthuis, Amsterdam, Netherlands
- Sunitha Vimalesvaran
- Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Christian Apfelbacher
- Institute of Social Medicine and Health Systems Research, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Family Medicine and Primary Care, Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore, Singapore
- Sandra Kemp
- Faculty of Health Sciences, Curtin Medical School, Curtin University, Bentley, Australia
- Niels Chavannes
- Department of Public Health and Primary Care, Leiden University Medical Centre, Leiden, Netherlands
8. Cook DA, Oh SY, Pusic MV. Assessments of Physicians' Electrocardiogram Interpretation Skill: A Systematic Review. Academic Medicine 2022; 97:603-615. [PMID: 33913438] [DOI: 10.1097/acm.0000000000004140]
Abstract
PURPOSE To identify features of instruments, test procedures, study design, and validity evidence in published studies of electrocardiogram (ECG) skill assessments. METHOD The authors conducted a systematic review, searching MEDLINE, Embase, Cochrane CENTRAL, PsycINFO, CINAHL, ERIC, and Web of Science databases in February 2020 for studies that assessed the ECG interpretation skill of physicians or medical students. Two authors independently screened articles for inclusion and extracted information on test features, study design, risk of bias, and validity evidence. RESULTS The authors found 85 eligible studies. Participants included medical students (42 studies), postgraduate physicians (48 studies), and practicing physicians (13 studies). ECG selection criteria were infrequently reported: 25 studies (29%) selected single-diagnosis or straightforward ECGs; 5 (6%) selected complex cases. ECGs were selected by generalists (15 studies [18%]), cardiologists (10 studies [12%]), or unspecified experts (4 studies [5%]). The median number of ECGs per test was 10. The scoring rubric was defined by 2 or more experts in 32 studies (38%), by 1 expert in 5 (6%), and using clinical data in 5 (6%). Scoring was performed by a human rater in 34 studies (40%) and by computer in 7 (8%). Study methods were appraised as low risk of selection bias in 16 studies (19%), participant flow bias in 59 (69%), instrument conduct and scoring bias in 20 (24%), and applicability problems in 56 (66%). Evidence of test score validity was reported infrequently, namely evidence of content (39 studies [46%]), internal structure (11 [13%]), relations with other variables (10 [12%]), response process (2 [2%]), and consequences (3 [4%]). 
CONCLUSIONS ECG interpretation skill assessments consist of idiosyncratic instruments that are too short, composed of items of obscure provenance, with incompletely specified answers, graded by individuals with underreported credentials, yielding scores with limited interpretability. The authors suggest several best practices.
Affiliation(s)
- David A Cook
- D.A. Cook is professor of medicine and medical education, director of education science, Office of Applied Scholarship and Education Science, research chair, Mayo Clinic Rochester Multidisciplinary Simulation Center, and consultant, Division of General Internal Medicine, Mayo Clinic College of Medicine and Science, Rochester, Minnesota; ORCID: https://orcid.org/0000-0003-2383-4633
- So-Young Oh
- S.-Y. Oh is assistant director, Program for Digital Learning, Institute for Innovations in Medical Education, NYU Grossman School of Medicine, NYU Langone Health, New York, New York; ORCID: https://orcid.org/0000-0002-4640-3695
- Martin V Pusic
- M.V. Pusic is associate professor of emergency medicine and pediatrics, Department of Emergency Medicine, NYU Grossman School of Medicine, New York, New York; ORCID: https://orcid.org/0000-0001-5236-6598
9. Stojan J, Haas M, Thammasitboon S, Lander L, Evans S, Pawlik C, Pawilkowska T, Lew M, Khamees D, Peterson W, Hider A, Grafton-Clarke C, Uraiby H, Gordon M, Daniel M. Online learning developments in undergraduate medical education in response to the COVID-19 pandemic: A BEME systematic review: BEME Guide No. 69. Medical Teacher 2022; 44:109-129. [PMID: 34709949] [DOI: 10.1080/0142159x.2021.1992373]
Abstract
BACKGROUND The COVID-19 pandemic spurred an abrupt transition away from in-person educational activities. This systematic review investigated the pivot to online learning for nonclinical undergraduate medical education (UGME) activities and explored descriptions of educational offerings deployed, their impact, and lessons learned. METHODS The authors systematically searched four online databases and conducted a manual electronic search of MedEdPublish up to December 21, 2020. Two authors independently screened titles, abstracts and full texts, performed data extraction and assessed risk of bias. A third author resolved discrepancies. Findings were reported in accordance with the STORIES (STructured apprOach to the Reporting in healthcare education of Evidence Synthesis) statement and BEME guidance. RESULTS Fifty-six articles were included. The majority (n = 41) described the rapid transition of existing offerings to online formats, whereas fewer (n = 15) described novel activities. The majority (n = 27) included a combination of synchronous and asynchronous components. Didactics (n = 40) and small groups (n = 26) were the most common instructional methods. Teachers largely integrated technology to replace and amplify rather than transform learning, though learner engagement was often interactive. Thematic analysis revealed unique challenges of online learning, as well as exemplary practices. The quality of study designs and reporting was modest, with underpinning theory at highest risk of bias. Virtually all studies (n = 54) assessed reaction/satisfaction, fewer than half (n = 23) assessed changes in attitudes, knowledge or skills, and none assessed behavioral, organizational or patient outcomes. CONCLUSIONS UGME educators successfully transitioned face-to-face instructional methods online and implemented novel solutions during the COVID-19 pandemic. 
Although technology's potential to transform teaching is not yet fully realized, the use of synchronous and asynchronous formats encouraged virtual engagement, while offering flexible, self-directed learning. As we transition from emergency remote learning to a post-pandemic world, educators must underpin new developments with theory, report additional outcomes and provide details that support replication.
Affiliation(s)
- Jennifer Stojan
- Internal Medicine and Pediatrics, University of Michigan Medical School, Ann Arbor, MI, USA
- Mary Haas
- Internal Medicine and Pediatrics, University of Michigan Medical School, Ann Arbor, MI, USA
- Satid Thammasitboon
- Department of Pediatrics, Texas Children's Hospital and Baylor College of Medicine, Houston, TX, USA
- Lina Lander
- Family Medicine and Public Health, University of California San Diego School of Medicine, La Jolla, CA, USA
- Sean Evans
- Family Medicine and Public Health, University of California San Diego School of Medicine, La Jolla, CA, USA
- Cameron Pawlik
- Internal Medicine and Pediatrics, University of Michigan Medical School, Ann Arbor, MI, USA
- Madelyn Lew
- Internal Medicine and Pediatrics, University of Michigan Medical School, Ann Arbor, MI, USA
- Deena Khamees
- McGovern Medical School, University of Texas Health Science Center, Houston, TX, USA
- William Peterson
- Internal Medicine and Pediatrics, University of Michigan Medical School, Ann Arbor, MI, USA
- Ahmad Hider
- Internal Medicine and Pediatrics, University of Michigan Medical School, Ann Arbor, MI, USA
- Hussein Uraiby
- School of Medicine, University of Leicester, Leicester, UK
- Morris Gordon
- Blackpool Victoria Hospital, Blackpool, UK
- School of Medicine, University of Central Lancashire, Preston, UK
- Michelle Daniel
- Family Medicine and Public Health, University of California San Diego School of Medicine, La Jolla, CA, USA
10. Saltos-Rivas R, Novoa-Hernández P, Serrano Rodríguez R. On the quality of quantitative instruments to measure digital competence in higher education: A systematic mapping study. PLoS One 2021; 16:e0257344. [PMID: 34506585] [PMCID: PMC8432775] [DOI: 10.1371/journal.pone.0257344]
Abstract
In this study, we report on a Systematic Mapping Study (SMS) of how the quality of the quantitative instruments used to measure digital competencies in higher education is assured. Seventy-three primary studies were selected from the literature published in the last 10 years in order to 1) characterize the literature, 2) evaluate the reporting practice of quality assessments, and 3) analyze which variables explain such reporting practices. The results indicate that most of the studies focused on medium to large samples of European university students attending social science programs. Ad hoc, self-reported questionnaires measuring various digital competence areas were the most commonly used method for data collection. The studies were mostly published in low-tier journals. 36% of the studies did not report any quality assessment, while fewer than 50% covered both groups of reliability and validity assessments at the same time. In general, the studies had a moderate to high depth of evidence on the assessments performed. We found that studies in which several areas of digital competence were measured were more likely to report quality assessments. In addition, we estimate that the probability of finding studies with acceptable or good reporting practices increases over time.
Affiliation(s)
- Rafael Saltos-Rivas
- Facultad de Filosofía Letras y Ciencias de la Educación de la Universidad Técnica de Manabí, Portoviejo, Ecuador
- Pavel Novoa-Hernández
- Escuela de Ciencias Empresariales, Universidad Católica del Norte, Coquimbo, Chile
11. Cook DA, Oh SY, Pusic MV. Accuracy of Physicians' Electrocardiogram Interpretations: A Systematic Review and Meta-analysis. JAMA Intern Med 2020; 180:1461-1471. [PMID: 32986084] [PMCID: PMC7522782] [DOI: 10.1001/jamainternmed.2020.3989]
Abstract
IMPORTANCE The electrocardiogram (ECG) is the most common cardiovascular diagnostic test. Physicians' skill in ECG interpretation is incompletely understood. OBJECTIVES To identify and summarize published research on the accuracy of physicians' ECG interpretations. DATA SOURCES A search of PubMed/MEDLINE, Embase, Cochrane CENTRAL (Central Register of Controlled Trials), PsycINFO, CINAHL (Cumulative Index to Nursing and Allied Health), ERIC (Education Resources Information Center), and Web of Science was conducted for articles published from database inception to February 21, 2020. STUDY SELECTION Of 1138 articles initially identified, 78 studies that assessed the accuracy of physicians' or medical students' ECG interpretations in a test setting were selected. DATA EXTRACTION AND SYNTHESIS Data on study purpose, participants, assessment features, and outcomes were abstracted, and methodological quality was appraised with the Medical Education Research Study Quality Instrument. Results were pooled using random-effects meta-analysis. MAIN OUTCOMES AND MEASURES Accuracy of ECG interpretation. RESULTS Of 1138 studies initially identified, 78 assessed the accuracy of ECG interpretation. Across all training levels, the median accuracy was 54% (interquartile range [IQR], 40%-66%; n = 62 studies) on pretraining assessments and 67% (IQR, 55%-77%; n = 47 studies) on posttraining assessments. Accuracy varied widely across studies. The pooled accuracy for pretraining assessments was 42.0% (95% CI, 34.3%-49.6%; n = 24 studies; I2 = 99%) for medical students, 55.8% (95% CI, 48.1%-63.6%; n = 37 studies; I2 = 96%) for residents, 68.5% (95% CI, 57.6%-79.5%; n = 10 studies; I2 = 86%) for practicing physicians, and 74.9% (95% CI, 63.2%-86.7%; n = 8 studies; I2 = 22%) for cardiologists. CONCLUSIONS AND RELEVANCE Physicians at all training levels had deficiencies in ECG interpretation, even after educational interventions. 
Improved education across the practice continuum appears warranted. Wide variation in outcomes could reflect real differences in training or skill or differences in assessment design.
Affiliation(s)
- David A Cook
- Office of Applied Scholarship and Education Science and Division of General Internal Medicine, Mayo Clinic College of Medicine and Science, Rochester, Minnesota
- So-Young Oh
- Institute for Innovations in Medical Education, NYU Grossman School of Medicine, NYU Langone Health, New York, New York
- Martin V Pusic
- Department of Emergency Medicine, NYU Grossman School of Medicine, NYU Langone Health, New York, New York
12
Heyward J, Olson L, Sharfstein JM, Stuart EA, Lurie P, Alexander GC. Evaluation of the Extended-Release/Long-Acting Opioid Prescribing Risk Evaluation and Mitigation Strategy Program by the US Food and Drug Administration: A Review. JAMA Intern Med 2020; 180:301-309. [PMID: 31886822 DOI: 10.1001/jamainternmed.2019.5459] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/14/2022]
Abstract
IMPORTANCE Extended-release/long-acting (ER/LA) opioids have caused substantial morbidity and mortality in the United States, yet little is known about the efforts of the US Food and Drug Administration (FDA) and drug manufacturers to reduce adverse outcomes associated with inappropriate prescribing or use. This review of 9739 pages of FDA documents obtained through a Freedom of Information Act request aimed to investigate whether the FDA and ER/LA manufacturers were able to assess the effectiveness of the ER/LA Risk Evaluation and Mitigation Strategy (REMS) program by evaluating manufacturer REMS assessments and FDA oversight of these assessments. OBSERVATIONS The REMS program was implemented largely as planned. The FDA's goal was for 60% of ER/LA prescribers to take REMS-adherent continuing education (CE) between 2012 and 2016; 27.6% (88 316 of 320 000) of prescribers had done so by 2016. Audits of REMS programs indicated close adherence to FDA content guidelines except for financial disclosures. Nonrepresentative cross-sectional surveys of self-selected prescribers suggested modestly greater ER/LA knowledge among CE completers than noncompleters, and claims-based surveillance indicated slowly declining ER/LA prescribing, although the contribution of the REMS to these trends could not be assessed. The effectiveness of the REMS program for reducing adverse outcomes also could not be assessed because the analyses used nonrepresentative samples, lacked adequate controls for confounding, and did not link prescribing or clinical outcomes to prescribers' receipt of CE training. Although the FDA had requested studies tracking adverse outcomes as a function of CE training, the FDA concluded that these studies had not been performed as of the 60-month report in 2017. CONCLUSIONS AND RELEVANCE Five years after initiation, the FDA and ER/LA manufacturers could not conclude whether the ER/LA REMS had reduced inappropriate prescribing or improved patient outcomes. 
Alternative observational study designs would have allowed for more rigorous estimates of the program's effectiveness.
Affiliation(s)
- James Heyward
- Center for Drug Safety and Effectiveness, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
- Lily Olson
- Center for Drug Safety and Effectiveness, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
- Joshua M Sharfstein
- Center for Drug Safety and Effectiveness, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland; Office of Public Health Practice and Training, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
- Elizabeth A Stuart
- Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland
- Peter Lurie
- Center for Science in the Public Interest, Washington, DC
- G Caleb Alexander
- Center for Drug Safety and Effectiveness, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland; Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland; Division of General Internal Medicine, Johns Hopkins Medicine, Baltimore, Maryland
13
Allen LM, Palermo C, Armstrong E, Hay M. Categorising the broad impacts of continuing professional development: a scoping review. MEDICAL EDUCATION 2019; 53:1087-1099. [PMID: 31396999 DOI: 10.1111/medu.13922] [Citation(s) in RCA: 51] [Impact Index Per Article: 10.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/07/2019] [Revised: 04/30/2019] [Accepted: 05/22/2019] [Indexed: 05/15/2023]
Abstract
CONTEXT A number of systematic reviews have evaluated the impacts of continuing professional development (CPD). These reviews, due to their focused nature, may fail to capture the full range of impacts of CPD. This scoping review aims to explore the broader impacts of CPD with the intention of developing a categorisation of the types of impact of CPD. METHODS The authors searched the MEDLINE, CINAHL and ERIC databases for studies published between 2007 and 2017 that looked at the impacts of formal CPD programmes for all health professionals. Studies were independently screened for eligibility; one reviewer charted data for all included studies, and a 10% sample was reviewed by a second reviewer. The charted data were analysed using both qualitative and quantitative content analysis. RESULTS The search returned 2750 manuscripts; 192 manuscripts describing 191 studies were included in this review. Most articles were from the USA (78 studies, 41%) and included medical doctors in the population (105 studies, 55%). Twelve categories of impact were generated through conventional content analysis: knowledge, practice change, skill, confidence, attitudes, career development, networking, user outcomes, intention to change, organisational change, personal change and scholarly accomplishments. Knowledge was most commonly measured (103 studies, 54%), whereas measurement of scholarly accomplishments was the least common (10 studies, 5%). CONCLUSIONS Existing evidence takes a narrow view when assessing the impacts of CPD. Emphasis on measuring impacts as knowledge, behaviour, confidence, skills and attitudes may be due to the widely accepted four levels of evaluation from the Kirkpatrick Model or because the majority of studies used quantitative methods.
The categories proposed in this review may be used to capture a broader view of the impacts of CPD programmes, contributing to the evidence base for their value and translating into CPD programmes that truly transform health professionals, their careers and their practice.
Affiliation(s)
- Louise M Allen
- Faculty of Medicine, Nursing and Health Sciences, Monash Institute for Health and Clinical Education, Monash University, Clayton, Victoria, Australia
- Claire Palermo
- Faculty of Medicine, Nursing and Health Sciences, Monash Centre for Scholarship in Health Education, Monash University, Clayton, Victoria, Australia
- Margaret Hay
- Faculty of Medicine, Nursing and Health Sciences, Monash Institute for Health and Clinical Education, Monash University, Clayton, Victoria, Australia
14
Law GC, Apfelbacher C, Posadzki PP, Kemp S, Tudor Car L. Choice of outcomes and measurement instruments in randomised trials on eLearning in medical education: a systematic mapping review protocol. Syst Rev 2018; 7:75. [PMID: 29776434 PMCID: PMC5960094 DOI: 10.1186/s13643-018-0739-0] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/14/2017] [Accepted: 05/02/2018] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND The world will face a shortfall of 18 million healthcare workers by 2030. Expanding the number of well-trained healthcare workers through innovative approaches such as eLearning is highly recommended as part of the solution to this shortage. However, high heterogeneity of learning outcomes in eLearning systematic reviews reveals a lack of consistency and agreement on core learning outcomes in eLearning for medical education. In addition, there seems to be a lack of validity evidence for measurement instruments used in these trials. This undermines the credibility of these outcome measures and affects the ability to draw accurate and meaningful conclusions. The aim of this research is to address this issue by determining the choice of outcomes, measurement instruments and the prevalence of measurement instruments with validity evidence in randomised trials on eLearning for pre-registration medical education. METHODS We will conduct a systematic mapping and review to identify the types of outcomes, the kinds of measurement instruments and the prevalence of validity evidence among measurement instruments in eLearning randomised controlled trials (RCTs) in pre-registration medical education. The search period will be from January 1990 until August 2017. We will consider studies on eLearning for health professionals' education. Two reviewers will extract and manage data independently from the included studies. Data will be analysed and synthesised according to the aim of the review. DISCUSSION Appropriate choice of outcomes and measurement tools is essential for ensuring high-quality research in the field of eLearning and eHealth.
The results of this study could have positive implications for other eHealth interventions, including (1) improving quality and credibility of eLearning research, (2) enhancing the quality of digital medical education and (3) informing researchers, academics and curriculum developers about the types of outcomes and validity evidence for measurement instruments used in eLearning studies. The protocol aspires to assist in the advancement of the eLearning research field as well as in the development of high-quality healthcare professionals' digital education. SYSTEMATIC REVIEW REGISTRATION PROSPERO CRD42017068427.
Affiliation(s)
- Gloria C Law
- Centre of Population Health Services (CePHeS), Lee Kong Chian School of Medicine, Nanyang Technological University, 11 Mandalay Road, Singapore, 308232, Singapore
- Christian Apfelbacher
- Institute of Epidemiology and Preventive Medicine, Regensburg University, Franz-Josef-Strauss-Allee 11, 93053, Regensburg, Germany
- Pawel P Posadzki
- Centre of Population Health Services (CePHeS), Lee Kong Chian School of Medicine, Nanyang Technological University, 11 Mandalay Road, Singapore, 308232, Singapore
- Sandra Kemp
- Curtin Medical School, Curtin University, 410, Hayman Rd, Bentley, WA, 6102, Australia
- Lorainne Tudor Car
- Family Medicine and Primary Care, Lee Kong Chian School of Medicine, Nanyang Technological University, 11 Mandalay Road, Singapore, 308232, Singapore; Department of Primary Care and Public Health, School of Public Health, Imperial College London, London, UK
15
Reliability and validity of self-efficacy scales assessing students’ information literacy skills. ELECTRONIC LIBRARY 2017. [DOI: 10.1108/el-03-2016-0056] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
Purpose
This paper systematically reviews the evidence of reliability and validity of scales available in studies that reported surveys of students to assess their perceived self-efficacy of information literacy (IL) skills.
Design/methodology/approach
Searches were carried out in two subject databases and two general databases, and the titles, abstracts and full texts of the retrieved documents were screened.
Findings
In total, 45 studies met the eligibility criteria. A large number of studies did not report any psychometric characteristics of the data collection instruments they used. The selected studies provided information on 22 scales. The instruments were heterogeneous in the number of items and the type of scale options. The most commonly used reliability measure was internal consistency (with high values of Cronbach’s alpha), and the most commonly assessed form of validity was face/content validity judged by experts.
Practical implications
The culture of using good-quality scales needs to be promoted by IL practitioners, authors and journal editors.
Originality/value
This paper is the first review of its kind, which is useful for IL stakeholders.
16
Légaré F, Freitas A, Turcotte S, Borduas F, Jacques A, Luconi F, Godin G, Boucher A, Sargeant J, Labrecque M. Responsiveness of a simple tool for assessing change in behavioral intention after continuing professional development activities. PLoS One 2017; 12:e0176678. [PMID: 28459836 PMCID: PMC5411052 DOI: 10.1371/journal.pone.0176678] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2016] [Accepted: 04/14/2017] [Indexed: 11/19/2022] Open
Abstract
BACKGROUND Continuing professional development (CPD) activities are one way that new knowledge can be translated into changes in practice. However, few tools are available for evaluating the extent to which these activities change health professionals' behavior. We developed a questionnaire called CPD-Reaction for assessing the impact of CPD activities on health professionals' clinical behavioral intentions. We evaluated its responsiveness to change in behavioral intention and verified its acceptability among stakeholders. METHODS AND FINDINGS We enrolled 376 health professionals who completed CPD-Reaction before and immediately after attending a CPD activity. We contacted them three months later and asked them to self-report on any behavior change. We compared the mean rankings on each CPD-Reaction construct before and immediately after CPD activities. To estimate its predictive validity, we compared the median behavioral intention score (post-activity) of health professionals reporting a behavior change three months later with the median behavioral intention score of physicians who reported no change. We explored stakeholders' views on CPD-Reaction in semi-structured interviews. Participants were mostly family physicians (62.2%), with an average of 19 years of clinical practice. Post-activity, we observed an increase in intention-related scores for all constructs (P < 0.001), with the largest increase observed for the construct beliefs about capabilities. A total of 313 participants agreed to be contacted at follow-up, and of these only 69 (22%) reported back. Of these, 43 (62%) self-reported a behavior change. We observed no statistically significant difference in intention between health professionals who later reported a behavior change and those who reported no change (P = 0.30). Overall, CPD stakeholders found the CPD-Reaction questionnaire of interest and suggested potential solutions to perceived barriers to its implementation.
CONCLUSION The CPD-Reaction questionnaire seems responsive to change in behavioral intention. Although CPD stakeholders found it interesting, future implementation will require addressing barriers they identified.
Affiliation(s)
- France Légaré
- CHU de Québec Research Centre, Quebec City, Quebec, Canada
- Department of Family Medicine and Emergency Medicine, Université Laval, Quebec City, Canada
- Francine Borduas
- Office of the Vice-Dean of Education and Continuing Professional Development, Faculty of Medicine, Université Laval, Quebec, Quebec, Canada
- André Jacques
- Advisor in Continuing Professional Development, Saint-Adolphe-d’Howard, Quebec, Canada
- Francesca Luconi
- Continuing Health Professional Education Office, Faculty of Medicine, McGill University, Montreal, Quebec, Canada
- Gaston Godin
- Faculty of Nursing, Université Laval, Quebec City, Quebec, Canada
- Andrée Boucher
- Centre de pédagogie appliquée aux sciences de la santé, Faculty of Medicine, Université de Montréal, Quebec, Canada
- Joan Sargeant
- Division of Medical Education, Faculty of Medicine, Dalhousie University, Nova Scotia, Canada
17
Phitayakorn R, Salles A, Falcone JL, Jensen AR, Steinemann S, Torbeck L. A needs assessment of education research topics among surgical educators in the United States. Am J Surg 2016; 213:346-352. [PMID: 27955883 DOI: 10.1016/j.amjsurg.2016.11.044] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2016] [Revised: 10/26/2016] [Accepted: 11/29/2016] [Indexed: 10/20/2022]
Abstract
BACKGROUND There are currently no courses that focus specifically on surgical education research. A needs assessment of surgical educators is required to best design these courses. METHODS A cross-sectional survey-based study of all faculty members of the Association for Surgical Education was conducted to determine their education research needs. RESULTS The overall response rate was 15%, and the majority of the 78 respondents were physicians (63%) in their mid- to late career stage (65%). Participants thought research topics should be taught at an advanced level in a workshop format. Senior educators were less interested than junior educators in learning to create conceptual frameworks (p = 0.038) and presenting their research at national meetings (p = 0.014). CONCLUSIONS Surgical educators desire more training in education research techniques, taught in a workshop format at a national surgical education meeting. These workshops may lay the groundwork for a nationally recognized certificate in surgical education research.
Affiliation(s)
- R Phitayakorn
- The Massachusetts General Hospital Department of Surgery, Harvard Medical School, Boston, MA, USA.
- A Salles
- Division of Minimally Invasive Surgery, Department of Surgery, Washington University in St. Louis, St. Louis, MO, USA
- J L Falcone
- One Health Surgical Specialists, One Health, Owensboro, KY, USA; University of Louisville, Department of Surgery, Louisville, KY, USA
- A R Jensen
- Department of Surgery, Children's Hospital Los Angeles and Keck School of Medicine of the University of Southern California, Los Angeles, CA, USA
- S Steinemann
- Department of Surgery, University of Hawaii, Honolulu, HI, USA
- L Torbeck
- Department of Surgery, Indiana University, Indianapolis, IN, USA
18
Lenzen LM, Weidringer JW, Ollenschläger G. [Conflict of interest in continuing medical education - Studies on certified CME courses]. ZEITSCHRIFT FUR EVIDENZ, FORTBILDUNG UND QUALITAT IM GESUNDHEITSWESEN 2016; 110-111:60-68. [PMID: 26875037 DOI: 10.1016/j.zefq.2015.11.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/18/2014] [Revised: 09/04/2015] [Accepted: 11/05/2015] [Indexed: 06/05/2023]
Abstract
OBJECTIVES Although the problem of conflict of interest in medical education is discussed intensively, few valid data have been published on how to deal with the form, content, funding, sponsorship, and the influence of economic interests in continuing medical education (CME). Against this background, we carried out an analysis of data that had been documented for the purpose of certification by a German Medical Association. A central aim of the study was to obtain evidence of possible influences of economic interests on continuing medical education. Furthermore, strategies for quality assurance of CME contents and their implementation were to be examined. METHODS We analyzed all registration data for courses certified in category D ("structured interactive CME via print media, online media and audiovisual media") by the Bavarian Chamber of Physicians in 2012. To measure the effects of conflict of interest, relationships between topics of training and variables relating to the alleged self-interest of the organizer/sponsor (for example, drug sales in a group of physicians) were tested statistically. These data were taken from the Bavarian Medical Statistics 2012 and the GKV-Arzneimittelschnellinformation. RESULTS In 2012, a total of 734 CME course offerings were submitted for 51 medical specialties by 30 course suppliers to the Bavarian Medical Association. To ensure the neutrality of interests of the CME courses, the course suppliers signed a cooperation agreement ensuring their compliance with defined behavior towards the Bavarian Medical Association concerning sponsorship. The correlation between course topics and drug data suggests that course suppliers tend to submit topics that are economically attractive to them. There was a significant correlation between the number of CME courses in a specific field and the sales from drug prescriptions issued by physicians in the respective field.
CONCLUSIONS The results show that neutrality of interests regarding continuing medical education is difficult to achieve under the current framework for the organization, certification, and especially the funding of CME events in Germany. The cooperation agreement between the Bavarian Medical Association and training applicants is taken as an example of how legal certainty can be ensured. Based on these findings, suggestions and strategies to strengthen the assessment expertise of course participants have been developed and elaborated.
Affiliation(s)
- Laura Marianne Lenzen
- Institut für Gesundheitsökonomie und Klinische Epidemiologie der Universität zu Köln (IGKE), Köln, Germany; Klinik für Psychiatrie, Psychotherapie und Psychosomatik, Medizinische Fakultät, RWTH Aachen, Germany.
- Johann Wilhelm Weidringer
- Bayerische Landesärztekammer, Leiter des Referates Fortbildung und Qualitätsmanagement, München, Germany
- Günter Ollenschläger
- vormals Ärztliches Zentrum für Qualität in der Medizin (ÄZQ), Berlin, Germany; Institut für Gesundheitsökonomie und Klinische Epidemiologie der Universität zu Köln (IGKE), Köln, Germany
19
Cervero RM, Gaines JK. The impact of CME on physician performance and patient health outcomes: an updated synthesis of systematic reviews. THE JOURNAL OF CONTINUING EDUCATION IN THE HEALTH PROFESSIONS 2015; 35:131-8. [PMID: 26115113 DOI: 10.1002/chp.21290] [Citation(s) in RCA: 294] [Impact Index Per Article: 32.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
INTRODUCTION Since 1977, many systematic reviews have asked 2 fundamental questions: (1) Does CME improve physician performance and patient health outcomes? and (2) What are the mechanisms of action that lead to positive changes in these outcomes? The article's purpose is to synthesize the systematic review literature about CME effectiveness published since 2003. METHODS We identified 8 systematic reviews of CME effectiveness published since 2003 in which primary research studies in CME were reviewed and physicians' performance and/or patient health outcomes were included as outcome measures. RESULTS Five systematic reviews addressed the question of "Is CME Effective?" using primary studies employing randomized controlled trials (RCTs) or experimental design methods and concluded: (1) CME does improve physician performance and patient health outcomes, and (2) CME has a more reliably positive impact on physician performance than on patient health outcomes. The 8 systematic reviews support previous research showing CME activities that are more interactive, use more methods, involve multiple exposures, are longer, and are focused on outcomes that are considered important by physicians lead to more positive outcomes. DISCUSSION Future research on CME effectiveness must take account of the wider social, political, and organizational factors that play a role in physician performance and patient health outcomes. We now have 39 systematic reviews that present an evidence-based approach to designing CME that is more likely to improve physician performance and patient health outcomes. These insights from the scientific study of CME effectiveness should be incorporated in ongoing efforts to reform systems of CME and health care delivery.
20
Thepwongsa I, Kirby C, Schattner P, Shaw J, Piterman L. Type 2 diabetes continuing medical education for general practitioners: what works? A systematic review. Diabet Med 2014; 31:1488-97. [PMID: 25047877 DOI: 10.1111/dme.12552] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/07/2014] [Revised: 04/24/2014] [Accepted: 07/17/2014] [Indexed: 10/25/2022]
Abstract
AIMS To perform a systematic review of studies that have assessed the effectiveness of interventions designed to improve healthcare professionals' care of patients with diabetes and to assess the effects of educational interventions targeted at general practitioners' diabetes management. METHODS A computer search was conducted using the Cochrane Library, PubMed, Ovid MEDLINE, Scopus, EMBASE, Informit, Google scholar and ERIC from the earliest date of each database up until 2013. A supplementary review of reference lists from each article obtained was also carried out. Measured changes in general practitioners' satisfaction, knowledge, practice behaviours and patient outcomes were recorded. RESULTS Thirteen out of 1255 studies met the eligibility criteria, but none was specifically conducted in rural or remote areas. Ten studies were randomized trials. Fewer than half of the studies (5/13, 38.5%) reported a significant improvement in at least one of the following outcome categories: satisfaction with the programme, knowledge and practice behaviour. There was little evidence of the impact of general practitioner educational interventions on patient outcomes. Of the five studies that examined patient outcomes, only one reported a positive impact: a reduction in patient HbA1c levels. CONCLUSIONS Few studies examined the effectiveness of general practitioner Type 2 diabetes education in improving general practitioner satisfaction, knowledge, practices and/or patient outcomes. Evidence to support the effectiveness of education is partial and weak. To determine effective strategies for general practitioner education related to Type 2 diabetes, further well designed studies, accompanied by valid and reliable evaluation methods, are needed.
Affiliation(s)
- I Thepwongsa
- Department of General Practice, School of Primary Health Care, Monash University, Notting Hill
21
Cook DA, Zendejas B, Hamstra SJ, Hatala R, Brydges R. What counts as validity evidence? Examples and prevalence in a systematic review of simulation-based assessment. ADVANCES IN HEALTH SCIENCES EDUCATION : THEORY AND PRACTICE 2014; 19:233-50. [PMID: 23636643 DOI: 10.1007/s10459-013-9458-4] [Citation(s) in RCA: 195] [Impact Index Per Article: 19.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/12/2012] [Accepted: 04/09/2013] [Indexed: 05/26/2023]
Abstract
Ongoing transformations in health professions education underscore the need for valid and reliable assessment. The current standard for assessment validation requires evidence from five sources: content, response process, internal structure, relations with other variables, and consequences. However, researchers remain uncertain regarding the types of data that contribute to each evidence source. We sought to enumerate the validity evidence sources and supporting data elements for assessments using technology-enhanced simulation. We conducted a systematic literature search including MEDLINE, ERIC, and Scopus through May 2011. We included original research that evaluated the validity of simulation-based assessment scores using two or more evidence sources. Working in duplicate, we abstracted information on the prevalence of each evidence source and the underlying data elements. Among 217 eligible studies, only six (3%) referenced the five-source framework, and 51 (24%) made no reference to any validity framework. The most common evidence sources and data elements were: relations with other variables (94% of studies; reported most often as variation in simulator scores across training levels), internal structure (76%; supported by reliability data or item analysis), and content (63%; reported as expert panels or modification of existing instruments). Evidence of response process and consequences was each present in <10% of studies. We conclude that relations with training level appear to be overrepresented in this field, while evidence of consequences and response process are infrequently reported. Validation science will be improved as educators use established frameworks to collect and interpret evidence from the full spectrum of possible sources and elements.
Affiliation(s)
- David A Cook
- Office of Education Research, Mayo Medical School, Rochester, MN, USA
22
Hoover MJ, Jung R, Jacobs DM, Peeters MJ. Educational testing validity and reliability in pharmacy and medical education literature. AMERICAN JOURNAL OF PHARMACEUTICAL EDUCATION 2013; 77:213. [PMID: 24371337 PMCID: PMC3872932 DOI: 10.5688/ajpe7710213] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/21/2013] [Accepted: 07/29/2013] [Indexed: 05/13/2023]
Abstract
OBJECTIVES To evaluate and compare the reliability and validity of educational testing reported in pharmacy education journals with that reported in the medical education literature. METHODS Descriptions of validity evidence sources (content, construct, criterion, and reliability) were extracted from articles that reported educational testing of learners' knowledge, skills, and/or abilities. Findings from 108 pharmacy education articles were compared with findings from 198 medical education articles. RESULTS Among the pharmacy education articles, 14 (13%) reported more than one validity evidence source, 83 (77%) reported one source, and 11 (10%) reported none. Content validity was the most frequently reported evidence source. Compared with the pharmacy education literature, more medical education articles reported both validity and reliability (59%; p<0.001). CONCLUSION Although there were more scholarship of teaching and learning (SoTL) articles in pharmacy education than in medical education, validity and reliability reporting were limited in the pharmacy education literature.
Affiliation(s)
- Matthew J. Hoover
- College of Pharmacy, Northeast Ohio Medical University, Rootstown, Ohio
- Cleveland Clinic Marymount Hospital, Garfield Heights, Ohio
- University of Toledo College of Pharmacy and Pharmaceutical Sciences, Toledo, Ohio
- Rose Jung
- University of Toledo College of Pharmacy and Pharmaceutical Sciences, Toledo, Ohio
- David M. Jacobs
- University of Toledo College of Pharmacy and Pharmaceutical Sciences, Toledo, Ohio
- University of Houston, Houston, Texas
- Michael J. Peeters
- University of Toledo College of Pharmacy and Pharmaceutical Sciences, Toledo, Ohio
23
Peeters MJ, Beltyukova SA, Martin BA. Educational testing and validity of conclusions in the scholarship of teaching and learning. Am J Pharm Educ 2013; 77:186. [PMID: 24249848 PMCID: PMC3831397 DOI: 10.5688/ajpe779186] [Citation(s) in RCA: 35] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/21/2013] [Accepted: 07/27/2013] [Indexed: 05/13/2023]
Abstract
Validity, together with its integral evidence of reliability, is fundamental to educational and psychological measurement and to the standards of educational testing. Herein, we describe these standards of educational testing, along with their subtypes, including internal consistency, inter-rater reliability, and inter-rater agreement. Next, related issues of measurement error and effect size are discussed. This article concludes with a call for future authors to improve reporting of psychometrics and practical significance with educational testing in the pharmacy education literature. By increasing the scientific rigor of educational research and reporting, the overall quality and meaningfulness of SoTL will be improved.
Affiliation(s)
- Michael J. Peeters
- College of Pharmacy and Pharmaceutical Sciences, University of Toledo, Toledo, Ohio
- Beth A. Martin
- School of Pharmacy, University of Wisconsin-Madison, Madison, Wisconsin
24
Stelfox HT, Straus SE. Measuring quality of care: considering measurement frameworks and needs assessment to guide quality indicator development. J Clin Epidemiol 2013; 66:1320-7. [PMID: 24018344 DOI: 10.1016/j.jclinepi.2013.05.018] [Citation(s) in RCA: 70] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2012] [Revised: 04/25/2013] [Accepted: 05/17/2013] [Indexed: 01/28/2023]
Abstract
OBJECTIVE In this article, we describe one approach for evaluating the value of developing quality indicators (QIs). STUDY DESIGN AND SETTING We focus on describing how to develop a conceptual measurement framework and how to evaluate the need to develop QIs. A recent process to develop QIs for injury care is used for illustration. RESULTS Key steps to perform before developing QIs include creating a conceptual measurement framework, determining stakeholder perspectives, and performing a QI needs assessment. QI development is likely to be most beneficial for medical problems for which quality measures have not been previously developed or are inadequate and that have a large burden of illness to justify quality measurement and improvement efforts, are characterized by variable or substandard care such that opportunities for improvement exist, and have evidence that improving quality of care will improve patient health. CONCLUSION By developing a conceptual measurement framework and performing a QI needs assessment, developers and users of QIs can target their efforts.
Affiliation(s)
- Henry T Stelfox
- Department of Critical Care Medicine, Department of Medicine, and Department of Community Health Sciences, Institute for Public Health, University of Calgary, Teaching Research & Wellness Building, 3280 Hospital Drive NW, Calgary, Alberta, Canada T2N 4Z6.
25
Abstract
OBJECTIVES Evaluating the patient impact of health professions education is a societal priority with many challenges. Researchers would benefit from a summary of topics studied and potential methodological problems. We sought to summarize key information on patient outcomes identified in a comprehensive systematic review of simulation-based instruction. DATA SOURCES Systematic search of MEDLINE, EMBASE, CINAHL, PsycINFO, Scopus, key journals, and bibliographies of previous reviews through May 2011. STUDY ELIGIBILITY Original research in any language measuring the direct effects on patients of simulation-based instruction for health professionals, in comparison with no intervention or other instruction. APPRAISAL AND SYNTHESIS Two reviewers independently abstracted information on learners, topics, study quality including unit of analysis, and validity evidence. We pooled outcomes using random effects. RESULTS From 10,903 articles screened, we identified 50 studies reporting patient outcomes for at least 3,221 trainees and 16,742 patients. Clinical topics included airway management (14 studies), gastrointestinal endoscopy (12), and central venous catheter insertion (8). There were 31 studies involving postgraduate physicians and seven studies each involving practicing physicians, nurses, and emergency medicine technicians. Fourteen studies (28%) used an appropriate unit of analysis. Measurement validity was supported in seven studies reporting content evidence, three reporting internal structure, and three reporting relations with other variables. The pooled Hedges' g effect size for 33 comparisons with no intervention was 0.47 (95% confidence interval [CI], 0.31-0.63); and for nine comparisons with non-simulation instruction, it was 0.36 (95% CI, -0.06 to 0.78). LIMITATIONS Focused field in education; high inconsistency (I(2) > 50% in most analyses).
CONCLUSIONS Simulation-based education was associated with small to moderate patient benefits in comparison with no intervention and non-simulation instruction, although the latter comparison did not reach statistical significance. Unit-of-analysis errors were common, and validity evidence was infrequently reported.
26
Cook DA, Brydges R, Zendejas B, Hamstra SJ, Hatala R. Technology-enhanced simulation to assess health professionals: a systematic review of validity evidence, research methods, and reporting quality. Acad Med 2013; 88:872-83. [PMID: 23619073 DOI: 10.1097/acm.0b013e31828ffdcf] [Citation(s) in RCA: 117] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/07/2023]
Abstract
PURPOSE To summarize the tool characteristics, sources of validity evidence, methodological quality, and reporting quality for studies of technology-enhanced simulation-based assessments for health professions learners. METHOD The authors conducted a systematic review, searching MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous reviews through May 2011. They selected original research in any language evaluating simulation-based assessment of practicing and student physicians, nurses, and other health professionals. Reviewers working in duplicate evaluated validity evidence using Messick's five-source framework; methodological quality using the Medical Education Research Study Quality Instrument and the revised Quality Assessment of Diagnostic Accuracy Studies; and reporting quality using the Standards for Reporting Diagnostic Accuracy and Guidelines for Reporting Reliability and Agreement Studies. RESULTS Of 417 studies, 350 (84%) involved physicians at some stage in training. Most focused on procedural skills, including minimally invasive surgery (N=142), open surgery (81), and endoscopy (67). Common elements of validity evidence included relations with trainee experience (N=306), content (142), relations with other measures (128), and interrater reliability (124). Of the 217 studies reporting more than one element of evidence, most were judged as having high or unclear risk of bias due to selective sampling (N=192) or test procedures (132). Only 64% proposed a plan for interpreting the evidence to be presented (validity argument). CONCLUSIONS Validity evidence for simulation-based assessments is sparse and is concentrated within specific specialties, tools, and sources of validity evidence. The methodological and reporting quality of assessment studies leaves much room for improvement.
Affiliation(s)
- David A Cook
- Office of Education Research, Mayo Clinic College of Medicine, Rochester, Minnesota 55905, USA.
27
Wetzel AP. Factor analysis methods and validity evidence: a review of instrument development across the medical education continuum. Acad Med 2012; 87:1060-1069. [PMID: 22722361 DOI: 10.1097/acm.0b013e31825d305d] [Citation(s) in RCA: 46] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
PURPOSE Instrument development consistent with best practices is necessary for effective assessment and evaluation of learners and programs across the medical education continuum. The author explored the extent to which current factor analytic methods and other techniques for establishing validity are consistent with best practices. METHOD The author conducted electronic and hand searches of the English-language medical education literature published January 2006 through December 2010. To describe and assess current practices, she systematically abstracted reliability and validity evidence as well as factor analysis methods, data analysis, and reported evidence from instrument development articles reporting the application of exploratory factor analysis and principal component analysis. RESULTS Sixty-two articles met eligibility criteria. They described 64 instruments and 95 factor analyses. Most studies provided at least one source of evidence based on test content. Almost all reported internal consistency, providing evidence based on internal structure. Evidence based on response process and relationships with other variables was reported less often, and evidence based on consequences of testing was not identified. Factor analysis findings suggest common method selection errors and critical omissions in reporting. CONCLUSIONS Given the limited reliability and validity evidence provided for the reviewed instruments, educators should carefully consider the available supporting evidence before adopting and applying published instruments. Researchers should design for, test, and report additional evidence to strengthen the argument for reliability and validity of these measures for research and practice.
Affiliation(s)
- Angela P Wetzel
- Department of Foundations of Education, Virginia Commonwealth University School of Education, Richmond, VA 23284-2020, USA.
28
Improving participant feedback to continuing medical education presenters in internal medicine: a mixed-methods study. J Gen Intern Med 2012; 27:425-31. [PMID: 21948229 PMCID: PMC3304027 DOI: 10.1007/s11606-011-1894-3] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/29/2011] [Revised: 08/09/2011] [Accepted: 09/12/2011] [Indexed: 10/17/2022]
Abstract
BACKGROUND Feedback is essential for improving the skills of continuing medical education (CME) presenters. However, there has been little research on improving the quality of feedback to CME presenters. OBJECTIVES To validate an instrument for generating balanced and behavior-specific feedback from a national cross-section of participants to presenters at a large internal medicine CME course. DESIGN, SETTING, AND PARTICIPANTS A prospective, randomized validation study with qualitative data analysis that included all 317 participants at a Mayo Clinic internal medicine CME course in 2009. MEASUREMENTS An 8-item (5-point Likert scales) CME faculty assessment enhanced study form (ESF) was designed based on literature and expert review. Course participants were randomized to a standard form, a generic study form (GSF), or the ESF. The dimensionality of instrument scores was determined using factor analysis to account for clustered data. Internal consistency and interrater reliabilities were calculated. Associations between overall feedback scores and presenter and presentation variables were identified using generalized estimating equations to account for multiple observations within talk and speaker combinations. Two raters reached consensus on qualitative themes and independently analyzed narrative entries for evidence of balanced and behavior-specific comments. RESULTS Factor analysis of 5,241 evaluations revealed a unidimensional model for measuring CME presenter feedback. Overall internal consistency (Cronbach alpha = 0.94) and interrater reliability (ICC range 0.88-0.95) were excellent. Feedback scores were associated with presenters' academic ranks (mean score): Instructor (4.12), Assistant Professor (4.38), Associate Professor (4.56), Professor (4.70) (p = 0.046). Qualitative analysis revealed that the ESF generated the highest numbers of balanced comments (GSF = 11, ESF = 26; p = 0.01) and behavior-specific comments (GSF = 64, ESF = 104; p = 0.001).
CONCLUSIONS We describe a practical and validated method for generating balanced and behavior-specific feedback for CME presenters in internal medicine. Our simple method for prompting course participants to give balanced and behavior-specific comments may ultimately provide CME presenters with feedback for improving their presentations.
29
Abstract
OBJECTIVE Multiple quality indicators are available to evaluate adult trauma care, but their characteristics and outcomes have not been systematically compared. We sought to systematically review the evidence about the reliability, validity, and implementation of quality indicators for evaluating trauma care. DATA SOURCES Search of MEDLINE, EMBASE, CINAHL, and The Cochrane Library up to January 14, 2009; the Gray Literature; select journals by hand; reference lists; and articles recommended by experts in the field. STUDY SELECTION Studies were selected that evaluated the reliability, validity, or the impact of one or more quality indicators on the quality of care delivered to patients ≥ 18 yrs of age with a major traumatic injury. DATA EXTRACTION Reviewers with methodologic and content expertise conducted data extraction independently. DATA SYNTHESIS The literature search identified 6869 citations. Review of abstracts led to the retrieval of 538 full-text articles for assessment; 40 articles were selected for review. Of these, 20 (50%) articles were cohort studies and 13 (33%) articles were case series. Five articles used control groups, including three before-and-after case series, a case-control study, and a nonrandomized controlled trial. A total of 115 quality indicators in adult trauma care were identified, predominantly measures of hospital processes (62%) and outcomes (17%) of care. We did not identify any posthospital or secondary injury prevention quality indicators. Reliability was described for two quality indicators, content validity for 22 quality indicators, construct validity for eight quality indicators, and criterion validity for 46 quality indicators. A total of 58 quality indicators were implemented and evaluated in three studies. Eight quality indicators had supporting evidence for more than one measurement domain. A single quality indicator, peer review for preventable death, had both reliability and validity evidence.
CONCLUSIONS Although many quality indicators are available to measure the quality of trauma care, reliability evidence, validity evidence, and description of outcomes after implementation are limited.
30
31
Developing a theory-based instrument to assess the impact of continuing professional development activities on clinical practice: a study protocol. Implement Sci 2011; 6:17. [PMID: 21385369 PMCID: PMC3063813 DOI: 10.1186/1748-5908-6-17] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2010] [Accepted: 03/07/2011] [Indexed: 11/29/2022] Open
Abstract
Background Continuing professional development (CPD) is one of the principal means by which health professionals (i.e. primary care physicians and specialists) maintain, improve, and broaden the knowledge and skills required for optimal patient care and safety. However, the lack of a widely accepted instrument to assess the impact of CPD activities on clinical practice thwarts researchers' comparisons of the effectiveness of CPD activities. Using an integrated model for the study of healthcare professionals' behaviour, our objective is to develop a theory-based, valid, reliable global instrument to assess the impact of accredited CPD activities on clinical practice. Methods Phase 1: We will analyze the instruments identified in a systematic review of factors influencing health professionals' behaviours using criteria that reflect the literature on measurement development and CPD decision makers' priorities. The outcome of this phase will be an inventory of instruments based on social cognitive theories. Phase 2: Working from this inventory, the most relevant instruments and their related items for assessing the concepts listed in the integrated model will be selected. Through an e-Delphi process, we will verify whether these instruments are acceptable, what aspects need revision, and whether important items are missing and should be added. The outcome of this phase will be a new global instrument integrating the most relevant tools to fit our integrated model of healthcare professionals' behaviour. Phase 3: Two data collections are planned: (1) a test-retest of the new instrument, including item analysis, to assess its reliability and (2) a study using the instrument before and after CPD activities with a randomly selected control group to explore the instrument's mere-measurement effect. 
Phase 4: We will conduct individual interviews and focus groups with key stakeholders to identify anticipated barriers and enablers for implementing the new instrument in CPD practice. Phase 5: Drawing on the results from the previous phases, we will use consensus-building methods to develop with the decision makers a plan to implement the new instrument. Discussion This project proposes to give stakeholders a theory-based global instrument to validly and reliably measure the impacts of CPD activities on clinical practice, thus laying the groundwork for more targeted and effective knowledge-translation interventions in the future.
32
Cook DA, Levinson AJ, Garside S. Method and reporting quality in health professions education research: a systematic review. Med Educ 2011; 45:227-38. [PMID: 21299598 DOI: 10.1111/j.1365-2923.2010.03890.x] [Citation(s) in RCA: 93] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
CONTEXT Studies evaluating reporting quality in health professions education (HPE) research have demonstrated deficiencies, but none have used comprehensive reporting standards. Additionally, the relationship between study methods and effect size (ES) in HPE research is unknown. OBJECTIVES This review aimed to evaluate, in a sample of experimental studies of Internet-based instruction, the quality of reporting, the relationship between reporting and methodological quality, and associations between ES and study methods. METHODS We conducted a systematic search of databases including MEDLINE, Scopus, CINAHL, EMBASE and ERIC, for articles published during 1990-2008. Studies (in any language) quantifying the effect of Internet-based instruction in HPE compared with no intervention or other instruction were included. Working independently and in duplicate, we coded reporting quality using the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement, and coded study methods using a modified Newcastle-Ottawa Scale (m-NOS), the Medical Education Research Study Quality Instrument (MERSQI), and the Best Evidence in Medical Education (BEME) global scale. RESULTS For reporting quality, articles scored a mean±standard deviation (SD) of 51±25% of STROBE elements for the Introduction, 58±20% for the Methods, 50±18% for the Results and 41±26% for the Discussion sections. We found positive associations (all p<0.0001) between reporting quality and MERSQI (ρ=0.64), m-NOS (ρ=0.57) and BEME (ρ=0.58) scores. We explored associations between study methods and knowledge ES by subtracting each study's ES from the pooled ES for studies using that method and comparing these differences between subgroups. Effect sizes in single-group pretest/post-test studies differed from the pooled estimate more than ESs in two-group studies (p=0.013). 
No difference was found between other study methods (yes/no: representative sample, comparison group from same community, randomised, allocation concealed, participants blinded, assessor blinded, objective assessment, high follow-up). CONCLUSIONS Information is missing from all sections of reports of HPE experiments. Single-group pre-/post-test studies may overestimate ES compared with two-group designs. Other methodological variations did not bias study results in this sample.
Affiliation(s)
- David A Cook
- Division of General Internal Medicine, College of Medicine, Mayo Clinic, Rochester, Minnesota 55905, USA
33
Pluye P, Grad RM, Johnson-Lafleur J, Bambrick T, Burnand B, Mercer J, Marlow B, Campbell C. Evaluation of email alerts in practice: Part 2. Validation of the information assessment method. J Eval Clin Pract 2010; 16:1236-43. [PMID: 20722882 DOI: 10.1111/j.1365-2753.2009.01313.x] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
RATIONALE AND OBJECTIVE The information assessment method (IAM) permits health professionals to systematically document the relevance, cognitive impact, use and health outcomes of information objects delivered by or retrieved from electronic knowledge resources. The companion review paper (Part 1) critically examined the literature, and proposed a 'Push-Pull-Acquisition-Cognition-Application' evaluation framework, which is operationalized by IAM. The purpose of the present paper (Part 2) is to examine the content validity of the IAM cognitive checklist when linked to email alerts. METHODS A qualitative component of a mixed methods study was conducted with 46 doctors reading and rating research-based synopses sent on email. The unit of analysis was a doctor's explanation of a rating of one item regarding one synopsis. Interviews with participants provided 253 units that were analysed to assess concordance with item definitions. RESULTS AND CONCLUSION The content relevance of seven items was supported. For three items, revisions were needed. Interviews suggested one new item. This study has yielded a 2008 version of IAM.
Affiliation(s)
- Pierre Pluye
- Department of Family Medicine, McGill University, Montreal, Canada.
34
Pluye P, Grad RM, Granikov V, Jagosh J, Leung K. Evaluation of email alerts in practice: Part 1. Review of the literature on clinical emailing channels. J Eval Clin Pract 2010; 16:1227-35. [PMID: 20722885 DOI: 10.1111/j.1365-2753.2009.001301.x] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
RATIONALE Methods to systematically assess electronic knowledge resources by health professionals may enhance evaluation of these resources, knowledge exchange between information users and providers, and continuing professional development. We developed the Information Assessment Method (IAM) to document health professional perspectives on the relevance, cognitive impact, potential use and expected health outcomes of information delivered by (push) or retrieved from (pull) electronic knowledge resources. However, little is known about push communication in the health sciences or about what we propose to call clinical emailing channels (CECs). CECs can be understood as a communication infrastructure that channels clinically relevant research knowledge, in the form of email alerts, from information providers to the inboxes of individual practitioners. AIMS In two companion papers, our objectives are to (part 1) explore CEC evaluation in routine practice, and (part 2) examine the content validity of the cognitive component of IAM. METHODS The present paper (part 1) critically reviews the literature in health sciences and four disciplines: communication, information studies, education and knowledge translation. Our review addresses the following questions. What are CECs? How are they assessed? RESULTS The review contributes to better define CECs, and proposes a 'push-pull-acquisition-cognition-application' evaluation framework, which is operationalized by IAM. CONCLUSION Compared with existing evaluation tools, our review suggests IAM is comprehensive, generic and systematic.
Affiliation(s)
- Pierre Pluye
- Department of Family Medicine, McGill University, Montreal, QC, Canada.
35
The impact of education on care practices: an exploratory study of the influence of "action plans" on the behavior of health professionals. Int Psychogeriatr 2010; 22:897-908. [PMID: 20594385 PMCID: PMC2955438 DOI: 10.1017/s1041610210001031] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
BACKGROUND There has been limited focus on evaluation of continuing education (CEU) and continuing medical education (CME) in the fields of gerontology and geriatrics. The increasing elderly population combined with the limited clinical workforce highlights the need for more effective methods of continuing education. Traditionally, outcomes of CEU and CME programs relied on self-report measures of satisfaction with the scope and quality of the training, but more recent efforts in this area have focused on outcomes indicating level of improved skills and attitudinal changes of medical and allied health professionals towards working with elderly patients in need of assistance. METHODS This study focused on the use of "Action Plans" as a tool to stimulate changes in clinical programs following training, along with attempting to determine typical barriers to change and how to deal with them. More than 600 action plans were obtained from participants attending various continuing education classes providing training on care of patients with dementia (PWD) and their families. Both qualitative and quantitative methods, including logistic regression models, were used to analyze the data. RESULTS Three months following training, 366 participants reported whether they were successful in implementing their action plans and identified factors that either facilitated or hindered their goal to make changes outlined in their action plans. Despite the low response rate of program participants, the "action plan" (with follow-up to determine degree of completion) appeared to stimulate effective behavioral changes in clinicians working with dementia patients and their family members. Seventy-three percent of the respondents reported at least some level of success in implementing specific changes. Specific details about barriers to change and how to overcome them are discussed.
CONCLUSIONS Our results suggest that developing and writing action plans can be a useful tool for self-monitoring behavioral change among trainees over time.
36
Kottner J, Audigé L, Brorson S, Donner A, Gajewski BJ, Hróbjartsson A, Roberts C, Shoukri M, Streiner DL. Guidelines for Reporting Reliability and Agreement Studies (GRRAS) were proposed. J Clin Epidemiol 2010; 64:96-106. [PMID: 21130355 DOI: 10.1016/j.jclinepi.2010.03.002] [Citation(s) in RCA: 1255] [Impact Index Per Article: 89.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2009] [Revised: 01/25/2010] [Accepted: 03/02/2010] [Indexed: 02/08/2023]
Abstract
OBJECTIVE Results of reliability and agreement studies are intended to provide information about the amount of error inherent in any diagnosis, score, or measurement. The level of reliability and agreement among users of scales, instruments, or classifications is widely unknown. Therefore, there is a need for rigorously conducted interrater and intrarater reliability and agreement studies. Information about sample selection, study design, and statistical analysis is often incomplete. Because of inadequate reporting, interpretation and synthesis of study results are often difficult. Widely accepted criteria, standards, or guidelines for reporting reliability and agreement in the health care and medical field are lacking. The objective was to develop guidelines for reporting reliability and agreement studies. STUDY DESIGN AND SETTING Eight experts in reliability and agreement investigation developed guidelines for reporting. RESULTS Fifteen issues that should be addressed when reliability and agreement are reported are proposed. The issues correspond to the headings usually used in publications. CONCLUSION The proposed guidelines intend to improve the quality of reporting.
Affiliation(s)
- Jan Kottner
- Department of Nursing Science, Centre for Humanities and Health Sciences, Charité-Universitätsmedizin Berlin, Berlin, Germany.
|
37
|
Marco Martínez F, Fernández-Gutiérrez B. Formación para la investigación en patología musculoesquelética: desde el MIR a la formación médica continuada [Training for research in musculoskeletal disease: from the MIR residency to continuing medical education]. Rev Esp Cir Ortop Traumatol (Engl Ed) 2010. [DOI: 10.1016/j.recot.2010.01.006]
|
38
|
Woodward HI, Mytton OT, Lemer C, Yardley IE, Ellis BM, Rutter PD, Greaves FE, Noble DJ, Kelley E, Wu AW. What Have We Learned About Interventions to Reduce Medical Errors? Annu Rev Public Health 2010; 31:479-97; 1 p following 497. [DOI: 10.1146/annurev.publhealth.012809.103544]
Affiliation(s)
- Helen I. Woodward
- Imperial College Healthcare NHS Trust, London, W2 1NY, United Kingdom
- Claire Lemer
- Barnet and Chase Farm Hospitals NHS Trust, London, EN2 8JL, United Kingdom
- Benjamin M. Ellis
- WHO Patient Safety, World Health Organization, Geneva 27, Switzerland
- Edward Kelley
- WHO Patient Safety, World Health Organization, Geneva 27, Switzerland
- Albert W. Wu
- Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland 21205
|
39
|
Training for research in musculoskeletal disease: from the residency program to continuing medical education. Rev Esp Cir Ortop Traumatol (Engl Ed) 2010. [DOI: 10.1016/s1988-8856(10)70232-7]
|
40
|
Windish DM, Reed DA, Boonyasai RT, Chakraborti C, Bass EB. Methodological rigor of quality improvement curricula for physician trainees: a systematic review and recommendations for change. Acad Med 2009; 84:1677-92. [PMID: 19940573] [DOI: 10.1097/acm.0b013e3181bfa080]
Abstract
PURPOSE To systematically determine whether published quality improvement (QI) curricula for physician trainees adhere to QI guidelines and meet standards for study quality in medical education research. METHOD The authors searched MEDLINE, EMBASE, CINAHL, and ERIC between 1980 and April 2008 for physician trainee QI curricula and assessed (1) adherence to seven domains containing 35 QI objectives, and (2) study quality using the Medical Education Research Study Quality Instrument (MERSQI). RESULTS Eighteen curricula met eligibility criteria; 5 involved medical students and 13 targeted residents. Three curricula (18%) measured health care outcomes. Attitudes about QI were high, and many behavior and patient-related outcomes showed positive results. Curricula addressed a mean of 4.3 (SD 1.8) QI domains. Student initiatives included 38.2% [95% CI, 12.2%-64.2%] beginning student-level objectives and 23.0% [95% CI, -4.0% to 50.0%] advanced student-level objectives. Resident curricula addressed 42.3% [95% CI, 29.8%-54.8%] beginning resident-level objectives and 33.7% [95% CI, 23.2%-44.1%] advanced resident-level objectives. The mean (SD) total MERSQI score was 9.86 (2.92) with a range of 5 to 14 [total possible range 5-18]; 35% of curricula demonstrated lower study quality (MERSQI score ≤ 7). Curricula varied widely in quality of reporting, teaching strategies, evaluation instruments, and funding obtained. CONCLUSIONS Many QI curricula in this study inadequately addressed QI educational objectives and had relatively weak research quality. Educators seeking to improve QI curricula should use recommended curricular and reporting guidelines, stronger methodologic rigor through development and use of validated instruments, available QI resources already present in health care settings, and outside funding opportunities.
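The bracketed intervals in this abstract are conventional Wald confidence limits for a proportion, which is how a reported bound can fall below 0% (the -4.0% above): with few curricula, the normal approximation overshoots the [0, 1] range. A small sketch, using an illustrative n rather than the review's actual denominators:

```python
import math

def wald_ci(p_hat, n, z=1.96):
    """Wald 95% CI for a proportion p_hat observed in n units.
    With small n, the interval can cross 0% or 100%."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

# With p_hat = 0.23 and a small hypothetical n of 10,
# the lower limit is negative, as in the abstract's -4.0% bound.
lo, hi = wald_ci(0.23, 10)
```

With a larger denominator (say n = 1000) the same point estimate yields a strictly positive lower limit, so the negative bound is an artifact of small samples rather than an error.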
Affiliation(s)
- Donna M Windish
- Department of Internal Medicine, Yale University School of Medicine, New Haven, Connecticut, USA.
|
41
|
Garattini L, Gritti S, De Compadri P, Casadei G. Continuing Medical Education in six European countries: a comparative analysis. Health Policy 2009; 94:246-54. [PMID: 19913324] [DOI: 10.1016/j.healthpol.2009.09.017]
Abstract
OBJECTIVE We examined Continuing Medical Education (CME) systems in a sample of six EU countries: Austria, Belgium, France, Italy, Norway, and the UK. The aim of this comparative study was to assess the main country-specific institutional settings applied by governments. METHODS A common scheme of analysis was applied to investigate the following variables: (i) CME institutional framework; (ii) benefits and/or penalties to participants; (iii) types of CME activities and system of credits; (iv) accreditation of CME providers and events; (v) CME funding and sponsorship. The analysis involved reviewing the literature on CME policy and interviewing a selected panel of local experts in each country (at least one public manager, one representative of medical associations and one pharmaceutical manager). RESULTS CME is formally compulsory in Austria, France, Italy and the UK, although no sanctions are enforced against non-compliant physicians in practice. The only two countries that offer financial incentives to enhance CME participation are Belgium and Norway, although limited to specific categories of physicians. Formal accreditation of CME providers is required in Austria, France and Italy, while in the other three countries accreditation is focused on activities. Private sponsorship is allowed in all countries but Norway, although within certain limits. CONCLUSIONS This comparative exercise provides an overview of the CME policies adopted by six EU countries to regulate both demand and supply. The substantial variability in the organization and accreditation of schemes indicates that much could be done to improve effectiveness. Although further analysis is needed to assess the results of these policies in practice, lessons drawn from this study may help clarify the weaknesses and strengths of single domestic policies in the perspective of pan-European CME harmonization.
Affiliation(s)
- Livio Garattini
- CESAV - Centre for Health Economics, Mario Negri Institute for Pharmacological Research, Villa Camozzi, Ranica (BG), Italy.
|
42
|
Marinopoulos SS, Dorman T, Bass EB. The American College of Chest Physicians evidence-based educational guidelines for continuing medical education interventions: estimating effect size. Chest 2009; 136:947-948. [PMID: 19736208] [DOI: 10.1378/chest.09-0826]
Affiliation(s)
- Todd Dorman
- Johns Hopkins University School of Medicine, Baltimore, MD
- Eric B Bass
- Johns Hopkins University School of Medicine, Baltimore, MD
|
43
|
Abstract
Knowledge translation (KT) is an iterative process that involves knowledge development, synthesis, contextualization, and adaptation, with the expressed purpose of moving the best evidence into practice that results in better health processes and outcomes for patients. Optimization of the process requires engaged interaction between knowledge developers and knowledge users. Knowledge users include consumers, clinicians, and policy makers. KT is highly reliant on understanding when research evidence needs to be moved into practice. Social, personal, policy, and system factors contribute to how and when change in practice can be accomplished. Evidence-based practitioners need to understand a conceptual basis for KT and the evidence indicating which specific KT strategies might help them move best evidence into action in practice. Audit and feedback, knowledge brokering, clinical practice guidelines, professional standards, and "active-learning" continuing education are examples of KT strategies.
Affiliation(s)
- Joy C MacDermid
- Hand and Upper Limb Centre Clinical Research Laboratory, St. Joseph's Health Centre, 268 Grosvenor Street, London, Ontario, Canada.
|
44
|
Strouse JJ, Lanzkron S, Beach MC, Haywood C, Park H, Witkop C, Wilson RF, Bass EB, Segal JB. Hydroxyurea for sickle cell disease: a systematic review for efficacy and toxicity in children. Pediatrics 2008; 122:1332-42. [PMID: 19047254] [DOI: 10.1542/peds.2008-0441]
Abstract
CONTEXT Hydroxyurea is the only approved medication for the treatment of sickle cell disease in adults; there are no approved drugs for children. OBJECTIVE Our goal was to synthesize the published literature on the efficacy, effectiveness, and toxicity of hydroxyurea in children with sickle cell disease. METHODS Medline, Embase, TOXLine, and the Cumulative Index to Nursing and Allied Health Literature through June 2007 were used as data sources. We selected randomized trials, observational studies, and case reports (English language only) that evaluated the efficacy and toxicity of hydroxyurea in children with sickle cell disease. Two reviewers abstracted data sequentially on study design, patient characteristics, and outcomes and assessed study quality independently. RESULTS We included 26 articles describing 1 randomized, controlled trial, 22 observational studies (11 with overlapping participants), and 3 case reports. Almost all study participants had sickle cell anemia. Fetal hemoglobin levels increased from 5%-10% to 15%-20% on hydroxyurea. Hemoglobin concentration increased modestly (approximately 1 g/L) but significantly across studies. The rate of hospitalization decreased in the single randomized, controlled trial and 5 observational studies by 56% to 87%, whereas the frequency of pain crisis decreased in 3 of 4 pediatric studies. New and recurrent neurologic events were decreased in 3 observational studies of hydroxyurea compared with historical controls. Common adverse events were reversible mild-to-moderate neutropenia, mild thrombocytopenia, severe anemia, rash or nail changes (10%), and headache (5%). Severe adverse events were rare and not clearly attributable to hydroxyurea. CONCLUSIONS Hydroxyurea reduces hospitalization and increases total and fetal hemoglobin levels in children with severe sickle cell anemia. There was inadequate evidence to assess the efficacy of hydroxyurea in other groups. The small number of children in long-term studies limits conclusions about late toxicities.
Affiliation(s)
- John J Strouse
- Department of Pediatrics, Johns Hopkins University School of Medicine, Division of Pediatric Hematology, 720 Rutland Ave, Ross 1125, Baltimore, MD 21205, USA.
|
45
|
Abstract
BACKGROUND Publicly-funded health centers serve disadvantaged populations who underuse colorectal cancer (CRC) screening. Because physicians play a key role in patient adherence to screening, provider interventions within health center practices could improve the delivery and utilization of CRC screening. METHODS A 2-group study design was used with 4 pairs of health centers randomized to the intervention or control condition. The provider intervention featured academic detailing of the small practice groups, followed by a strategic planning session with the entire health center staff using SWOT analysis. The outcome measure of provider endoscopy referral/fecal occult blood test dispensing and/or completion of CRC screening was determined by medical record audit (n = 2224). The intervention effect was evaluated using generalized estimating equations. Pre-post intervention patient surveys (n = 281) were conducted. RESULTS Chart audits of the 1 year period before and after the intervention revealed a 16% increase from baseline in CRC screening referral/dispensing/completion among intervention centers, compared with a 4% increase among controls, odds ratio (OR) = 2.25 (1.67-3.04) P < 0.001. Intervention versus control health center patient self-reports of lack of physician recommendation as a reason for not having CRC screening declined from baseline to follow-up (P = 0.04). CONCLUSIONS Provider referrals/dispensing/completion of CRC screening within health centers was significantly improved and barriers reduced through a provider intervention combining continuing medical education with a team building strategic planning exercise.
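The reported OR = 2.25 (1.67-3.04) comes from a generalized-estimating-equation model that accounts for clustering within health centers; the unadjusted version of the same quantity from a simple 2×2 table can be sketched as follows (counts are hypothetical, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
        a = intervention, screened     b = intervention, not screened
        c = control, screened          d = control, not screened
    The CI is built on the log scale, where log(OR) is approximately normal."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 10/100 intervention patients screened vs 5/100 controls.
or_, lo, hi = odds_ratio_ci(10, 90, 5, 95)  # OR about 2.11, CI roughly (0.69, 6.42)
```

The wide interval in this small hypothetical example shows why the study's much narrower CI depends on its far larger sample (n = 2224) and the clustering adjustment.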
|
48
|
Lanzkron S, Strouse JJ, Wilson R, Beach MC, Haywood C, Park H, Witkop C, Bass EB, Segal JB. Systematic review: Hydroxyurea for the treatment of adults with sickle cell disease. Ann Intern Med 2008; 148:939-55. [PMID: 18458272] [PMCID: PMC3256736] [DOI: 10.7326/0003-4819-148-12-200806170-00221]
Abstract
BACKGROUND Hydroxyurea is the only approved drug for treatment of sickle cell disease. OBJECTIVE To synthesize the published literature on the efficacy, effectiveness, and toxicity of hydroxyurea when used in adults with sickle cell disease. DATA SOURCES MEDLINE, EMBASE, TOXLine, and CINAHL were searched through 30 June 2007. STUDY SELECTION Randomized trials, observational studies, and case reports evaluating efficacy and toxicity of hydroxyurea in adults with sickle cell disease, and toxicity studies of hydroxyurea in other conditions that were published in English. DATA EXTRACTION Paired reviewers abstracted data on study design, patient characteristics, and outcomes sequentially and did quality assessments independently. DATA SYNTHESIS In the single randomized trial, the hemoglobin level was higher in hydroxyurea recipients than placebo recipients after 2 years (difference, 6 g/L), as was fetal hemoglobin (absolute difference, 3.2%). The median number of painful crises was 44% lower than in the placebo group. The 12 observational studies that enrolled adults reported a relative increase in fetal hemoglobin of 4% to 20% and a relative reduction in crisis rates by 68% to 84%. Hospital admissions declined by 18% to 32%. The evidence suggests that hydroxyurea may impair spermatogenesis. Limited evidence indicates that hydroxyurea treatment in adults with sickle cell disease is not associated with leukemia. Likewise, limited evidence suggests that hydroxyurea and leg ulcers are not associated in patients with sickle cell disease, and evidence is insufficient to estimate the risk for skin neoplasms, although these outcomes can be attributed to hydroxyurea in other conditions. LIMITATION Only English-language articles were included, and some studies were of lower quality. CONCLUSION Hydroxyurea has demonstrated efficacy in adults with sickle cell disease. The paucity of long-term studies limits conclusions about toxicity.
Affiliation(s)
- Sophie Lanzkron
- School of Medicine, Johns Hopkins University, 1830 East Monument Street, Suite 7300, Baltimore, MD 21205, USA
|