1. VanFosson CA. A Conceptual Model of Individual Clinical Readiness. Mil Med 2024;189:e2530-e2536. [PMID: 38771701] [DOI: 10.1093/milmed/usae215]
Abstract
INTRODUCTION: Force readiness is a priority among senior leaders across all branches of the Department of Defense. Units that do not achieve readiness benchmarks are considered non-deployable until they achieve the requisite benchmarks. Because military units are made up of individuals, a unit cannot be ready if the individuals within it are not ready. For medical personnel, this means the ability to competently provide patient care in a deployed setting, that is, their individual clinical readiness (ICR). A review of the literature found no conceptual model of ICR. Other potential concepts, such as individual medical readiness, were identified but used inconsistently. The purpose of this article is therefore to define ICR and propose a conceptual model to inform future efforts to achieve ICR and facilitate future study of the concept.

MATERIALS AND METHODS: Model development followed a 3-step theoretical model synthesis process: specifying key concepts, identifying related factors and relationships, and organizing them into an integrated network of ideas.

RESULTS: ICR is the clinically oriented service member's (COSM) ability to meet the demands of the militarily relevant, assigned clinical mission. ICR leads to one's "individual clinical performance," a key concept distinct from ICR. To understand ICR, one must account for "individual characteristics," as well as one's "education," "training," and "exposure." ICR and individual clinical performance are influenced by the "quality of exposure" and the "patient care environment." One's individual clinical performance also reciprocally influences the patient care environment, as well as the "team's clinical performance." These factors (individual clinical performance, team clinical performance, and the patient care environment) influence "patient outcomes." In the proposed model, patient outcomes are an indirect result of ICR and its antecedents (individual characteristics, education, training, and exposure); one's individual clinical performance may not be consistent with their ICR. Patient outcomes are also influenced by the "patient environment" (external to the health care environment) and "patient characteristics"; these elements of the model do not influence ICR or individual clinical performance.

CONCLUSION: Force readiness is a Department of Defense priority. For military units to be deployment ready, their personnel must be deployment ready as well. For COSMs, this includes the ability to competently provide patient care in a deployed setting, that is, their ICR. This article defines ICR and identifies another key concept and other factors associated with it. The proposed model is a tool for military medical leaders to communicate with and influence non-medical military leaders in the Department of Defense. Future research is needed to further refine the model, determine the strength of the proposed relationships, and identify interventions to improve ICR.
2. Simulation-based Assessment of the Management of Critical Events by Board-certified Anesthesiologists. Anesthesiology 2017;127:475-489. [DOI: 10.1097/aln.0000000000001739]
Abstract
Background
We sought to determine whether mannequin-based simulation can reliably characterize how board-certified anesthesiologists manage simulated medical emergencies. Our primary focus was to identify gaps in performance and to establish psychometric properties of the assessment methods.
Methods
A total of 263 consenting board-certified anesthesiologists participating in existing simulation-based maintenance of certification courses at one of eight simulation centers were video recorded performing simulated emergency scenarios. Each participated in two 20-min, standardized, high-fidelity simulated medical crisis scenarios, once as primary anesthesiologist and once as first responder. Via a Delphi technique, an independent panel of expert anesthesiologists identified critical performance elements for each scenario. Trained, blinded anesthesiologists rated the video recordings using standardized rating tools. Measures included the percentage of critical performance elements observed and holistic (one to nine ordinal scale) ratings of participants' technical and nontechnical performance. Raters also judged whether each performance was at the level expected of a board-certified anesthesiologist.
Results
Rater reliability for most measures was good. In 284 simulated emergencies, participants were rated as successfully completing 81% (interquartile range, 75 to 90%) of the critical performance elements. The median rating of both technical and nontechnical holistic performance was five, with ratings distributed across the full nine-point scale. Approximately one-quarter of participants received low holistic ratings (i.e., three or less). Higher-rated performances were associated with younger age but not with previous simulation experience or other individual characteristics. Calling for help was associated with better individual and team performance.
Conclusions
Standardized simulation-based assessment identified performance gaps informing opportunities for improvement. If a substantial proportion of experienced anesthesiologists struggle with managing medical emergencies, continuing medical education activities should be reevaluated.
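As a rough illustration of the scoring approach this abstract describes, the following Python sketch computes the percentage of critical performance elements observed and flags a low holistic rating (three or less). The checklist marks, ratings, and function name are hypothetical, not the study's actual instruments.

# Sketch of the study's two measures: percent of critical performance
# elements completed, and a low-performance flag on the holistic rating.
from statistics import median

def score_performance(elements_observed: list[bool], holistic_ratings: list[int]) -> dict:
    pct_elements = 100 * sum(elements_observed) / len(elements_observed)
    med_holistic = median(holistic_ratings)
    return {
        "pct_critical_elements": pct_elements,  # study median was 81%
        "median_holistic": med_holistic,        # one to nine ordinal scale
        "low_performance": med_holistic <= 3,   # ~one-quarter of participants
    }

print(score_performance([True, True, False, True], [5, 4, 6]))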
3. Brown EC, Robicsek A, Billings LK, Barrios B, Konchak C, Paramasivan AM, Masi CM. Evaluating Primary Care Physician Performance in Diabetes Glucose Control. Am J Med Qual 2015;31:392-399. [PMID: 25921589] [DOI: 10.1177/1062860615585138]
Abstract
This study demonstrates that it is possible to identify primary care physicians (PCPs) who perform better or worse than expected in managing diabetes. Study subjects were 14,033 adult patients with diabetes and their 133 PCPs. Logistic regression was used to predict the odds that a patient would have uncontrolled diabetes (defined as HbA1c ≥8%) from patient-level characteristics alone. A second model predicted diabetes control from physician identity and characteristics alone. A third model combined the patient- and physician-level models using hierarchical logistic regression. Physician performance was calculated as the difference between the expected and observed proportions of patients with uncontrolled diabetes. After adjusting for important patient characteristics, PCPs were identified who performed better or worse than expected in managing diabetes. This strategy can be used to characterize physician performance in other chronic conditions and may yield new insights regarding effective and ineffective treatment strategies.
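The expected-versus-observed comparison described here can be sketched in Python. The example below is a deliberately simplified, single-level version of the study's approach (the published analysis used hierarchical logistic regression); all data are synthetic and the covariate names are hypothetical.

# Sketch: expected-vs-observed physician performance from a
# patient-level logistic model, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(60, 12, n),
    "bmi": rng.normal(31, 6, n),
    "physician_id": rng.integers(0, 20, n),
})
# Synthetic outcome: 1 = uncontrolled diabetes (HbA1c >= 8%).
true_logit = -2.0 + 0.02 * (df["age"] - 60) + 0.05 * (df["bmi"] - 31)
df["uncontrolled"] = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)

# Patient-level model: expected probability of uncontrolled diabetes.
X = sm.add_constant(df[["age", "bmi"]])
patient_model = sm.Logit(df["uncontrolled"], X).fit(disp=False)
df["expected"] = patient_model.predict(X)

# Performance = expected minus observed proportion per physician;
# positive values suggest better-than-expected glucose control.
by_md = df.groupby("physician_id")[["expected", "uncontrolled"]].mean()
by_md["performance"] = by_md["expected"] - by_md["uncontrolled"]
print(by_md.sort_values("performance"))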
Affiliation(s)
- Eric C Brown
- NorthShore University HealthSystem, Evanston, IL; University of Chicago, Chicago, IL
- Ari Robicsek
- NorthShore University HealthSystem, Evanston, IL; University of Chicago, Chicago, IL
- Liana K Billings
- NorthShore University HealthSystem, Evanston, IL; University of Chicago, Chicago, IL
- Chad Konchak
- NorthShore University HealthSystem, Evanston, IL
- Christopher M Masi
- NorthShore University HealthSystem, Evanston, IL; University of Chicago, Chicago, IL
4. Shaughnessy AF, Chang KT, Sparks J, Cohen-Osher M, Gravel J. Assessing and Documenting the Cognitive Performance of Family Medicine Residents Practicing Outpatient Medicine. J Grad Med Educ 2014;6:526-531. [PMID: 26279780] [PMCID: PMC4535219] [DOI: 10.4300/jgme-d-13-00341.1]
Abstract
BACKGROUND: Development of cognitive skills for competent medical practice is a goal of residency education, and these skills must be developed for many different clinical situations.

INNOVATION: We developed the Resident Cognitive Skills Documentation (CogDoc) as a method for capturing faculty members' real-time assessment of residents' cognitive performance while precepting them in a family medicine office. The tool captures 3 dimensions of cognitive skill: medical knowledge, understanding, and application. This article describes CogDoc development, our experience with its use, and its reliability and feasibility.

METHODS: After development and pilot testing, we introduced the CogDoc at a single training site, collecting all completed forms for 14 months to determine completion rate, competence development over time, consistency among preceptors, and resident use of the data.

RESULTS: Thirty-eight faculty members completed 5021 CogDoc forms, documenting 29% of all patient visits by 33 residents. Competency was documented in all entrustable professional activities. Competence differed significantly among residents of different years of training for all 3 dimensions and progressively increased within all residency classes over time. Reliability scores were high: 0.9204 for the medical knowledge domain, 0.9405 for understanding, and 0.9414 for application. Almost every resident reported accessing the individual forms or summaries documenting their performance.

CONCLUSIONS: The CogDoc approach allows for ongoing assessment and documentation of resident competence and, when compiled over time, provides a comprehensive picture of residents' cognitive development and ability to make decisions in ambulatory medicine. This approach meets criteria for an acceptable tool for assessing cognitive skills.
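To make the three-dimension structure concrete, here is a minimal Python sketch of a CogDoc-like record and per-resident averaging across the dimensions. The field names and the rating scale are assumptions for illustration, not the published instrument.

# Sketch: a CogDoc-like entry and per-resident means by dimension.
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean

@dataclass
class CogDocEntry:
    resident: str
    pgy_year: int        # training year at the time of the encounter
    knowledge: int       # hypothetical 1-5 preceptor rating
    understanding: int
    application: int

def dimension_means(entries: list[CogDocEntry]) -> dict[str, dict[str, float]]:
    by_resident: dict[str, list[CogDocEntry]] = defaultdict(list)
    for e in entries:
        by_resident[e.resident].append(e)
    return {
        resident: {
            "knowledge": mean(e.knowledge for e in es),
            "understanding": mean(e.understanding for e in es),
            "application": mean(e.application for e in es),
        }
        for resident, es in by_resident.items()
    }

print(dimension_means([CogDocEntry("r1", 1, 3, 3, 2), CogDocEntry("r1", 1, 4, 4, 3)]))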
5.
Abstract
BACKGROUND: The assessment of a practising physician's performance may be conducted for various reasons, including licensure. In response to a request from the College of Physicians and Surgeons of Manitoba (CPSM), which needed a method to evaluate the competence and performance of physicians on the conditional register, the Division of Continuing Professional Development in the Faculty of Medicine, University of Manitoba, established a practice-based assessment programme, the Manitoba Practice Assessment Program (MPAP).

CONTEXT: Using a multifaceted approach, with the Canadian Medical Education Directives for Specialists (CanMEDS) framework as a guide, a variety of practice-based assessment surveys and tools were developed and piloted. Because of the challenge of collating data, the MPAP team needed a computerised solution to manage the data and the assessment process.

INNOVATION: Over a 2-year period, a customised web-based forms and information management system was designed, developed, tested and implemented. The secure and robust system allows the MPAP team to create assessment surveys and tools in which each item is mapped to CanMEDS roles and competencies. Reports can be auto-generated, summarising a physician's performance on specific competencies and roles. Overall, the system allows the MPAP team to manage all aspects of the assessment programme effectively.

IMPLICATIONS: Throughout all stages, from design to implementation, a variety of lessons were learned that can be shared with those considering building their own customised web-based system. The key to success is active involvement in all stages of the process.
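A minimal Python sketch of the item-to-role mapping and per-role summarisation the abstract describes; the item names, roles, scores, and scale shown are hypothetical, not the MPAP system's actual content.

# Sketch: map assessment items to CanMEDS roles and average a
# physician's item scores within each role for reporting.
from collections import defaultdict
from statistics import mean

ITEM_TO_ROLES = {
    "communicates_treatment_plan": ["Communicator", "Medical Expert"],
    "documents_informed_consent": ["Professional", "Communicator"],
    "orders_appropriate_tests": ["Medical Expert"],
}

def role_summary(item_scores: dict[str, float]) -> dict[str, float]:
    by_role: dict[str, list[float]] = defaultdict(list)
    for item, score in item_scores.items():
        for role in ITEM_TO_ROLES.get(item, []):
            by_role[role].append(score)
    return {role: mean(scores) for role, scores in by_role.items()}

print(role_summary({"communicates_treatment_plan": 4.0, "orders_appropriate_tests": 3.5}))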
Affiliation(s)
- Brenda Stutsky
- Faculty of Medicine, University of Manitoba, Winnipeg, Canada
6. Wenghofer EF, Marlow B, Campbell C, Carter L, Kam S, McCauley W, Hill L. The relationship between physician participation in continuing professional development programs and physician in-practice peer assessments. Acad Med 2014;89:920-927. [PMID: 24871244] [DOI: 10.1097/acm.0000000000000243]
Abstract
PURPOSE: To investigate the relationship between physicians' performance, as evaluated through in-practice peer assessments, and their participation in continuing professional development (CPD).

METHOD: The authors examined the predictive effects of participation in the CPD programs of the Royal College of Physicians and Surgeons of Canada and the College of Family Physicians of Canada in the year before in-practice peer assessments conducted by the medical regulatory authority in Ontario, Canada, in 2008-2009. Two multivariate logistic regression models were used to determine whether physicians who reported participating in any CPD, or in group-based, assessment-based, and/or self-directed CPD activities, were more or less likely to receive satisfactory assessments than physicians who had not. All models were adjusted for the effects of sex, age, specialty certification, practice location, number of patient visits per week, hours worked per week, and international medical graduate status.

RESULTS: A total of 617 physicians were included in the study. Physicians who reported participating in any CPD activities were significantly more likely (odds ratio [OR] = 2.5; P = .021) to have satisfactory assessments than those who had not. In addition, physicians participating in group-based CPD activities were more likely to have satisfactory assessments than those who did not (OR = 2.4; P = .016).

CONCLUSIONS: There is encouraging evidence of a positive predictive association between participation in CPD and performance on in-practice peer assessments. The findings have potential implications for policies that require physicians to participate in programs of lifelong learning.
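A sketch of the kind of adjusted logistic regression behind the reported odds ratios, run on synthetic data; the variable names and the reduced covariate set are assumptions, not the study's dataset.

# Sketch: adjusted odds ratio for CPD participation vs. a
# satisfactory peer assessment, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 617  # matches the study's sample size; all values are synthetic
df = pd.DataFrame({
    "any_cpd": rng.integers(0, 2, n),
    "age": rng.normal(50, 10, n),
    "img": rng.integers(0, 2, n),  # international medical graduate
})
true_logit = -0.5 + 0.9 * df["any_cpd"] - 0.01 * (df["age"] - 50)
df["satisfactory"] = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)

# Logistic regression of assessment outcome on CPD participation,
# adjusted for a reduced set of physician covariates.
fit = smf.logit("satisfactory ~ any_cpd + age + img", data=df).fit(disp=False)
print(np.exp(fit.params["any_cpd"]))  # adjusted odds ratio for any CPD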
Affiliation(s)
- Elizabeth F Wenghofer
- Dr. Wenghofer is associate professor, School of Rural and Northern Health, and Northern Ontario School of Medicine, Laurentian University, Sudbury, Ontario, Canada. Dr. Marlow is assistant professor, Department of Family and Community Medicine, University of Toronto, Toronto, Ontario, Canada. Dr. Campbell is director of professional affairs, Royal College of Physicians and Surgeons of Canada, Ottawa, Ontario, Canada, and associate professor, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada. Dr. Carter is director, Centre for Flexible Learning, Nipissing University, North Bay, Ontario, Canada, and professor, Northern Ontario School of Medicine, Sudbury, Ontario, Canada. Ms. Kam is a PhD student, School of Rural and Northern Health, Laurentian University, Sudbury, Ontario, Canada. Dr. McCauley is medical advisor, Quality Management Division, College of Physicians and Surgeons of Ontario, Toronto, Ontario, Canada, and associate professor, Schulich School of Medicine & Dentistry, University of Western Ontario, London, Ontario, Canada. Ms. Hill is former manager, Continuing Professional Development, College of Family Physicians of Canada, Mississauga, Ontario, Canada
7. Mammographic interpretation: radiologists' ability to accurately estimate their performance and compare it with that of their peers. AJR Am J Roentgenol 2012;199:695-702. [PMID: 22915414] [DOI: 10.2214/ajr.11.7402]
Abstract
OBJECTIVE: The purposes of this study were to determine whether U.S. radiologists accurately estimate their own interpretive performance of screening mammography and to assess how they compare their performance with that of their peers.

SUBJECTS AND METHODS: Between 2005 and 2006, 174 radiologists from six Breast Cancer Surveillance Consortium registries completed a mailed survey. The radiologists' estimated and actual recall, false-positive, and cancer detection rates and positive predictive value of biopsy recommendation (PPV2) for screening mammography were compared. Radiologists' ratings of their performance as lower than, similar to, or higher than that of their peers were compared with their actual performance. Associations with radiologist characteristics were estimated with weighted generalized linear models.

RESULTS: Although most radiologists accurately estimated their cancer detection and recall rates (74% and 78% of radiologists, respectively), fewer accurately estimated their false-positive rate (19%) and PPV2 (26%). Radiologists reported recall rates similar to (43%) or lower than (31%) those of their peers, false-positive rates similar to (52%) or lower than (33%) those of their peers, cancer detection rates similar to (72%) or higher than (23%) those of their peers, and PPV2 similar to (72%) or higher than (38%) that of their peers. Estimation accuracy did not differ by radiologist characteristics, except that radiologists who interpreted 1000 or fewer mammograms annually were less accurate at estimating their recall rates.

CONCLUSION: Radiologists perceive their performance to be better than it actually is and at least as good as that of their peers. They have particular difficulty estimating their false-positive rates and PPV2.
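The performance measures compared in this study can be written out directly. The Python sketch below uses commonly cited definitions of these screening metrics (the exact Breast Cancer Surveillance Consortium operational definitions may differ), with hypothetical counts.

# Sketch: the four interpretive performance measures, from raw counts.
def screening_metrics(screens, recalls, false_positives,
                      cancers_detected, biopsy_recommendations):
    return {
        "recall_rate": recalls / screens,
        "false_positive_rate": false_positives / screens,
        "cancer_detection_rate_per_1000": 1000 * cancers_detected / screens,
        # PPV2: proportion of exams with a biopsy recommendation
        # that turn out to be cancer.
        "ppv2": cancers_detected / biopsy_recommendations,
    }

print(screening_metrics(10000, 950, 910, 40, 150))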
8. Luria J, Buncher MJ, Ruddy RM. Workforce and its Impact on Quality. Clin Pediatr Emerg Med 2011. [DOI: 10.1016/j.cpem.2011.05.003]