51
van Mook WNKA, De Grave WS, Gorter SL, Zwaveling JH, Schuwirth LW, van der Vleuten CPM. Intensive care medicine trainees' perception of professionalism: a qualitative study. Anaesth Intensive Care 2011; 39:107-15. [PMID: 21375100] [DOI: 10.1177/0310057x1103900118]
Abstract
The Competency-Based Training program in Intensive Care Medicine in Europe identified 12 competency domains, giving professionalism a prominence equal to technical ability. However, little information on fellows' views of professionalism is available, so a nationwide qualitative study was performed. The moderator asked participants to clarify the terms professionalism and professional behaviour, and to explore the questions "How do you learn the mentioned aspects?" and "What ways of learning do you find useful or superfluous?". Qualitative data analysis software (MAXQDA2007) facilitated analysis using an inductive coding approach. Thirty-five fellows across eight groups participated. The themes most frequently addressed were communication, keeping distance and boundaries, medical knowledge and expertise, respect, teamwork, leadership, and organisation and management. Medical knowledge, expertise and technical skills seem to become more tacit as training progresses. Professionalism was learned chiefly in the workplace: by gathering practical experience, by following examples, and by receiving feedback on action, including learning from one's own and others' mistakes. Formal teaching courses (e.g. communication) and scheduled sessions addressing aspects of professionalism were also valued. The emerging themes considered most relevant for intensivists were adequate communication skills and keeping boundaries with patients and relatives. Professionalism is mainly learned 'on the job' from role models in the intensive care unit; formal teaching courses and sessions addressing professionalism were nevertheless valued, and learning from one's own and others' mistakes was considered especially useful. Self-reflection as a starting point for learning professionalism was stressed.
Affiliation(s)
- W N K A van Mook
- Department of Intensive Care Medicine Educational Development and Research, Maastricht, The Netherlands
52
Raymond MR, Clauser BE, Furman GE. The impact of statistical adjustment on conditional standard errors of measurement in the assessment of physician communication skills. Adv Health Sci Educ Theory Pract 2010; 15:587-600. [PMID: 20127509] [DOI: 10.1007/s10459-010-9221-z]
Abstract
The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary least squares regression to adjust ratings, and then used generalizability theory to evaluate the impact of these adjustments on score reliability and the overall standard error of measurement. In addition, conditional standard errors of measurement were computed for both observed and adjusted scores to determine whether the improvements in measurement precision were uniform across the score distribution. Results indicated that measurement was generally less precise for communication ratings toward the lower end of the score distribution; and the improvement in measurement precision afforded by statistical modeling varied slightly across the score distribution such that the most improvement occurred in the upper-middle range of the score scale. Possible reasons for these patterns in measurement precision are discussed, as are the limitations of the statistical models used for adjusting performance ratings.
Affiliation(s)
- Mark R Raymond
- National Board of Medical Examiners, Philadelphia, PA, USA.
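The adjustment this abstract describes, removing rater stringency/leniency effects from observed ratings before reliability analysis, can be sketched as below. This is a minimal one-way illustration (the function name and toy data are invented here; the study's actual OLS model included additional terms), equivalent to regressing scores on rater dummies and re-centring the residuals on the grand mean:

```python
import numpy as np

def adjust_for_raters(scores, rater_ids):
    """Remove estimated rater stringency/leniency effects from ratings.

    One-way analogue of OLS with rater dummies: each rater's mean
    deviation from the grand mean is subtracted from their ratings.
    """
    scores = np.asarray(scores, dtype=float)
    rater_ids = np.asarray(rater_ids)
    grand_mean = scores.mean()
    adjusted = scores.copy()
    for r in np.unique(rater_ids):
        mask = rater_ids == r
        # Rater effect = this rater's mean deviation from the grand mean
        adjusted[mask] -= scores[mask].mean() - grand_mean
    return adjusted

# Toy data: rater "a" scores about 10 points lower than rater "b"
adjusted = adjust_for_raters([70, 72, 68, 80, 82, 78],
                             ["a", "a", "a", "b", "b", "b"])
```

After adjustment the two raters' score distributions share a common mean, so examinee comparisons no longer depend on which rater they drew.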
53
Chipman JG, Webb TP, Shabahang M, Heller SF, vanCamp JM, Waer AL, Luxenberg MG, Christenson M, Schmitz CC. A multi-institutional study of the Family Conference Objective Structured Clinical Exam: a reliable assessment of professional communication. Am J Surg 2010; 201:492-7. [PMID: 20850709] [DOI: 10.1016/j.amjsurg.2010.02.006]
Abstract
BACKGROUND To test the value of a simulated Family Conference Objective Structured Clinical Exam (OSCE) for resident assessment purposes, we examined the generalizability and construct validity of its scores in a multi-institutional study. METHODS Thirty-four first-year (PG1) and 27 third-year (PG3) surgery residents (n = 61) from 6 training programs were tested. The OSCE consisted of 2 cases (End-of-Life [EOL] and Disclosure of Complications [DOC]). At each program, 2 clinicians and 2 standardized family members rated residents using case-specific tools. Performance was measured as the percentage of possible score obtained. We examined the generalizability of scores for each case separately. To assess construct validity, we compared PG1 with PG3 performance using repeated measures multivariate analysis of variance (MANOVA). RESULTS The relative G-coefficient for EOL was .890. For DOC, the relative G-coefficient was .716. There were no significant performance differences between PG1 and PG3 residents. CONCLUSIONS This OSCE provides reliable assessments suitable for formative evaluation of residents' interpersonal communication skills and professionalism.
Affiliation(s)
- Jeffrey G Chipman
- Department of Surgery, University of Minnesota, Minneapolis, 55455, USA.
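The relative G-coefficients reported in this abstract come from generalizability theory. For the simplest fully crossed persons x raters design, the coefficient can be estimated from two-way ANOVA mean squares; the sketch below (function name and data hypothetical, and simpler than the study's own multi-facet G-study) shows that calculation:

```python
import numpy as np

def relative_g_coefficient(X):
    """Relative G-coefficient for a fully crossed persons x raters design.

    X: 2-D array, rows = examinees (persons), columns = raters.
    Variance components are estimated from two-way ANOVA mean squares.
    """
    X = np.asarray(X, dtype=float)
    n_p, n_r = X.shape
    grand = X.mean()
    p_means = X.mean(axis=1)
    r_means = X.mean(axis=0)
    ms_p = n_r * ((p_means - grand) ** 2).sum() / (n_p - 1)
    resid = X - p_means[:, None] - r_means[None, :] + grand
    ms_pr = (resid ** 2).sum() / ((n_p - 1) * (n_r - 1))
    var_p = max((ms_p - ms_pr) / n_r, 0.0)  # person variance component
    var_pr = ms_pr                          # interaction/error component
    return var_p / (var_p + var_pr / n_r)
```

The coefficient approaches 1.0 when raters rank examinees consistently (person variance dominates) and 0.0 when person differences are swamped by rater-by-person noise.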
54
Brydges R, Carnahan H, Rose D, Dubrowski A. Comparing self-guided learning and educator-guided learning formats for simulation-based clinical training. J Adv Nurs 2010; 66:1832-44. [DOI: 10.1111/j.1365-2648.2010.05338.x]
55
Brydges R, Carnahan H, Rose D, Rose L, Dubrowski A. Coordinating progressive levels of simulation fidelity to maximize educational benefit. Acad Med 2010; 85:806-12. [PMID: 20520031] [DOI: 10.1097/acm.0b013e3181d7aabd]
Abstract
PURPOSE To evaluate the effectiveness of a novel, simulation-based educational model rooted in scaffolding theory that capitalizes on a systematic progressive sequence of simulators that increase in realism (i.e., fidelity) and information content. METHOD Forty-five medical students were randomly assigned to practice intravenous catheterization using high-fidelity training, low-fidelity training, or progressive training from low to mid to high fidelity. One week later, participants completed a transfer test on a standardized patient simulation. Blinded expert raters assessed participants' global clinical performance, communication, procedure documentation, and technical skills on the transfer test. Participants' management of the resources available during practice was also recorded. Data were analyzed using multivariate analysis of variance. The study was conducted in fall 2008 at the University of Toronto. RESULTS The high-fidelity group scored higher (P < .05) than the low-fidelity group on all measures except procedure documentation. The progressive group scored higher (P < .05) than other groups for documentation and global clinical performance and was equivalent to the high-fidelity group for communication and technical skills. Total practice time was greatest for the progressive group; however, this group required little practice time on the resource-intensive high-fidelity simulator. CONCLUSIONS Allowing students to progress in their practice on simulators of increasing fidelity led to superior transfer of a broad range of clinical skills. Further, this progressive group was resource-efficient, as participants concentrated on lower fidelity and lower resource-intensive simulators. It is suggested that clinical training curricula incorporate exposure to multiple simulators to maximize educational benefit and potentially save educator time.
Affiliation(s)
- Ryan Brydges
- Centre for Health Education Scholarship, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada
56
Baig L, Violato C, Crutcher R. A construct validity study of clinical competence: a multitrait multimethod matrix approach. J Contin Educ Health Prof 2010; 30:19-25. [PMID: 20222031] [DOI: 10.1002/chp.20052]
Abstract
INTRODUCTION The purpose of the study was to adduce evidence for estimating the construct validity of clinical competence as measured by the assessment instruments used for high-stakes examinations. METHODS Thirty-nine international physicians (mean age = 41 ± 6.5 y) participated in a high-stakes examination and a 3-month period of supervised clinical practice designed to determine their practice readiness. Three traits (doctor-patient relationship, clinical competence, and communication skills) were assessed with objective structured clinical examinations, in-training evaluation reports, and clinical assessments. These traits were intercorrelated in a multitrait multimethod matrix (MTMM). RESULTS The reliability of the assessments ranged from moderate to high (Cronbach's alpha: 0.58-0.98; Ep(2) = 0.79). There is evidence of both convergent and divergent validity for clinical competence, followed by doctor-patient relationship and communication skills (validity coefficients = 0.12-0.85). The correlations between the same methods but different traits indicate substantial method specificity in the assessment, accounting for nearly one-quarter of the variance (23.7%). DISCUSSION There is evidence for the construct validity of all 3 traits across 3 methods. The MTMM approach, currently underutilized, could be used to estimate the degree of evidence for validating complex constructs such as clinical competence.
Affiliation(s)
- Lubna Baig
- Alberta International Medical Graduate Program, Health Sciences Centre, University of Calgary, Canada.
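The multitrait-multimethod logic in this abstract, convergent validity from same-trait/different-method correlations and method specificity from different-trait/same-method correlations, can be sketched as follows. The function name and the (trait, method) labels are hypothetical, and the summary is far cruder than a full MTMM analysis:

```python
import numpy as np
from itertools import combinations

def mtmm_summary(measures):
    """Summarise a multitrait-multimethod (MTMM) matrix.

    measures: dict mapping (trait, method) -> sequence of scores for the
    same examinees. Returns the mean convergent correlation (same trait,
    different methods: the validity diagonal) and the mean heterotrait-
    monomethod correlation (different traits, same method).
    """
    conv, mono = [], []
    for (t1, m1), (t2, m2) in combinations(measures, 2):
        r = np.corrcoef(measures[(t1, m1)], measures[(t2, m2)])[0, 1]
        if t1 == t2 and m1 != m2:
            conv.append(r)   # monotrait-heteromethod (convergent validity)
        elif t1 != t2 and m1 == m2:
            mono.append(r)   # heterotrait-monomethod (method specificity)
    return float(np.mean(conv)), float(np.mean(mono))
```

Evidence of construct validity in the MTMM sense requires the convergent correlations to clearly exceed the heterotrait-monomethod ones.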
57
van Mook WNKA, Gorter SL, O'Sullivan H, Wass V, Schuwirth LW, van der Vleuten CPM. Approaches to professional behaviour assessment: tools in the professionalism toolbox. Eur J Intern Med 2009; 20:e153-7. [PMID: 19892295] [DOI: 10.1016/j.ejim.2009.07.012]
Abstract
There is general agreement that professionalism and professional behaviour should be assessed, both formatively and summatively, but consensus on how this should be done is still lacking. After discussing some of the remaining issues and questions regarding professionalism assessment, this article discusses the importance of qualitative comments in the assessment of professional behaviour, focuses on the most frequently used tools, and stresses the need to triangulate (combine) these tools.
Affiliation(s)
- Walther N K A van Mook
- Department of Intensive Care and Internal Medicine, Maastricht University Medical Centre, Maastricht, The Netherlands.
58
van Mook WNKA, van Luijk SJ, de Grave W, O'Sullivan H, Wass V, Schuwirth LW, van der Vleuten CPM. Teaching and learning professional behavior in practice. Eur J Intern Med 2009; 20:e105-11. [PMID: 19712827] [DOI: 10.1016/j.ejim.2009.01.003]
Abstract
This paper, the fourth in a series on professionalism, provides an overview of current methods for teaching and learning about professionalism. The questions of whether and how professionalism can be placed in the formal medical school curriculum are addressed, and informal learning related to professionalism is reviewed.
Affiliation(s)
- Walther N K A van Mook
- Department of Intensive Care and Internal Medicine, Maastricht University Medical Centre, Maastricht, The Netherlands.
59
Baig LA, Violato C, Crutcher RA. Assessing clinical communication skills in physicians: are the skills context specific or generalizable. BMC Med Educ 2009; 9:22. [PMID: 19445685] [PMCID: PMC2687440] [DOI: 10.1186/1472-6920-9-22]
Abstract
BACKGROUND Communication skills are essential for physicians to practice medicine. Evidence for the validity and domain specificity of communication skills in physicians is equivocal and requires further research. This research was conducted to adduce evidence for the content and context specificity of communication skills and to assess the usefulness of a generic instrument for assessing communication skills in International Medical Graduates (IMGs). METHODS A psychometric design was used to assess the reliability and validity of the communication skills instruments used in high-stakes exams for IMGs. Data were collected from 39 IMGs (19 men, 48.7%; 20 women, 51.3%; mean age = 41 years) assessed at a 14-station OSCE and subsequently in supervised clinical practice with several instruments (patient surveys; ITERs; mini-CEX). RESULTS All the instruments had adequate reliability (Cronbach's alpha: .54-.96). Communication skills assessed by examiners correlated significantly with those assessed by standardized patients, and mini-CEX scores correlated significantly with patient surveys and ITERs (r range: 0.37-0.70, p < .05). The intra-item reliability across all cases for the 13 items was low (Cronbach's alpha: .20-.56). The correlations of communication skills within a method (e.g., OSCE or clinical practice) were significant, but those between methods (e.g., OSCE and clinical practice) were not. CONCLUSION The results provide evidence of the context specificity of communication skills, as well as of their convergent and criterion-related validity. Both in OSCEs and in clinical practice, communication checklists need to be case specific and designed for content validity.
Affiliation(s)
- Lubna A Baig
- Alberta International Medical Graduate Program, and Community Health Sciences, University of Calgary, Calgary, Canada
- Claudio Violato
- Medical Education and Research Unit, Department of Community Health Sciences, Faculty of Medicine, University of Calgary, Calgary, Canada
- Rodney A Crutcher
- Department of Family Medicine, Faculty of Medicine, University of Calgary, Calgary, Canada
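The Cronbach's alpha values quoted throughout this abstract follow the standard formula: alpha = k/(k-1) x (1 - sum of item variances / variance of total scores). A minimal sketch (function name and toy matrices invented for illustration):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an examinees x items score matrix."""
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]                              # number of items
    sum_item_var = X.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = X.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1 - sum_item_var / total_var)
```

Perfectly parallel items give alpha = 1.0; weakly correlated items pull alpha down, which is why the case-level intra-item values reported here (.20-.56) are read as low.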
60
Professionalism and communication in the intensive care unit: reliability and validity of a simulated family conference. Simul Healthc 2009; 3:224-38. [PMID: 19088667] [DOI: 10.1097/sih.0b013e31817e6149]
Abstract
OBJECTIVE An Objective Structured Clinical Exam was designed to assess physicians' ability to discuss end-of-life (EOL) care and to disclose iatrogenic complications (DOC) to family members of intensive care unit patients. The study explores reliability and validity based on scores from contrasting rater groups (clinicians, SPs, and examinees). METHODS Two 20-minute stations were administered to 17 surgical residents and 2 critical care fellows at a university-based training program. The exam was conducted, videotaped, and scored in a standardized setting by 8 clinical raters (MDs and RNs) and 8 standardized family members using separate rating tools (EOL and DOC). Examinees assessed themselves using the same tools. We analyzed the internal consistency, inter-rater agreement, and discriminant validity of both cases using data from each rater group, and also made cross-rater-group comparisons. RESULTS The internal consistency reliability correlations were above 0.90 regardless of case or rater group. Within rater groups, raters were within 1 point of agreement (on 5-point and 6-point scales) on 81% of the DOC items and between 74% and 79% of the EOL items. Family raters were more favorable than clinical raters in scoring DOC, but not EOL, cases. Large raw differences in performance by training level favored more experienced trainees (3rd-year residents and fellows); these differences were statistically significant when based on residents' own self-ratings, but not when based on clinical or family ratings. DISCUSSION The Family Conference Objective Structured Clinical Exam is a reliable exam with high content validity. It seems unique in the literature for assessing surgical trainees' ability to discuss "bad news" with family members in intensive care.
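The within-1-point agreement statistic this abstract reports for rater pairs is straightforward to compute; a minimal sketch (function name and toy ratings invented for illustration):

```python
import numpy as np

def within_one_agreement(rater1, rater2):
    """Percentage of items on which two raters differ by at most 1 scale point."""
    r1 = np.asarray(rater1, dtype=float)
    r2 = np.asarray(rater2, dtype=float)
    return 100.0 * float((np.abs(r1 - r2) <= 1).mean())
```

Applied item by item across all rater pairs within a group, this yields the 74-81% figures quoted above.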
61
LeBlanc VR, Tabak D, Kneebone R, Nestel D, MacRae H, Moulton CA. Psychometric properties of an integrated assessment of technical and communication skills. Am J Surg 2009; 197:96-101. [PMID: 19101250] [DOI: 10.1016/j.amjsurg.2008.08.011]
Abstract
PURPOSE The Integrated Procedural Performance Instrument (IPPI) consists of clinical scenarios in which bench-top models are combined with simulated patients. Trainees are required to perform technical skills while engaging with the patient. The purpose of this study was to determine whether an IPPI-format examination could discriminate between different levels of trainees. METHODS Sixteen fourth-year medical students and 16 first-year surgery residents participated in 4 IPPI scenarios. Videotaped performances were scored by 2 blinded independent clinician raters using previously validated instruments: a checklist of technical skills, a Global Rating Scale of technical skills, and a communication scale. We conducted separate mixed-design analyses of variance (level × case) on the 3 scales. RESULTS Residents performed better than medical students on the checklist (74% vs 60%, P < .05), the Global Rating Scale of technical skills (75% vs 56%, P < .01), and the coherence communication subscale (79% vs 69%, P < .05). CONCLUSIONS An IPPI examination discriminated between students' and residents' technical skills and coherence in communication. It also highlighted a potential gap in the training of residents' communication skills.
Affiliation(s)
- Vicki R LeBlanc
- Wilson Centre, Faculty of Medicine, University of Toronto, 200 Elizabeth St., 1ES-565, Toronto, Ontario, M5G 2C4 Canada.
62
Harasym PH, Woloschuk W, Cunning L. Undesired variance due to examiner stringency/leniency effect in communication skill scores assessed in OSCEs. Adv Health Sci Educ Theory Pract 2008; 13:617-32. [PMID: 17610034] [DOI: 10.1007/s10459-007-9068-0]
Abstract
Physician-patient communication is a clinical skill that can be learned and that has a positive impact on patient satisfaction and health outcomes. A concerted effort at all medical schools is now directed at teaching and evaluating this core skill. Student communication skills are often assessed by an Objective Structured Clinical Examination (OSCE). However, it is unknown what sources of error variance are introduced into examinee communication scores by various OSCE components. This study primarily examined the effect different examiners had on the evaluation of students' communication skills assessed at the end of a family medicine clerkship rotation. The communication performance of clinical clerks from the Classes of 2005 and 2006 was assessed using six OSCE stations. Performance at each station was rated using the 28-item Calgary-Cambridge guide. Item Response Theory analysis using a multifaceted Rasch model was used to partition the various sources of error variance and generate a "true" communication score from which the effects of examiner, case, and item are removed. Variance and reliability of scores were as follows: communication scores (.20 and .87), examiner stringency/leniency (.86 and .91), case (.03 and .96), and item (.86 and .99), respectively. All facet scores were reliable (.87-.99). Examiner variance (.86) was more than four times the examinee variance (.20). About 11% of the clerks' outcome status shifted when "true" rather than observed/raw scores were used. The large variability in examinee scores due to variation in examiner stringency/leniency may affect pass-fail decisions. Exploring the benefits of examiner training, and employing "true" scores generated using Item Response Theory analyses prior to making pass/fail decisions, are recommended.
Affiliation(s)
- Peter H Harasym
- Department of Community Health Sciences, University of Calgary, Calgary, AB, Canada
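The idea of separating examinee ability from examiner stringency/leniency can be illustrated on a linear scale with a simple additive (ANOVA-style) fit by alternating means. This is NOT the multifaceted Rasch model the study used, which works on a logit scale and also models item and case facets; the function name and toy data below are invented for the sketch:

```python
import numpy as np

def additive_facets_fit(scores, examinees, examiners, n_iter=100):
    """Fit score = mu + ability[examinee] + leniency[examiner] by
    alternating means (Gauss-Seidel), a crude linear-scale analogue
    of partitioning examinee and examiner facets.

    Returns ({examinee: adjusted 'true' score}, {examiner: leniency}).
    """
    scores = np.asarray(scores, dtype=float)
    ex_ids = np.asarray(examinees)
    ra_ids = np.asarray(examiners)
    mu = scores.mean()
    ability = {e: 0.0 for e in np.unique(ex_ids)}
    leniency = {r: 0.0 for r in np.unique(ra_ids)}
    for _ in range(n_iter):
        for e in ability:  # update abilities given current leniencies
            m = ex_ids == e
            lens = np.array([leniency[r] for r in ra_ids[m]])
            ability[e] = float((scores[m] - mu - lens).mean())
        for r in leniency:  # update leniencies given current abilities
            m = ra_ids == r
            abil = np.array([ability[e] for e in ex_ids[m]])
            leniency[r] = float((scores[m] - mu - abil).mean())
    return {e: mu + a for e, a in ability.items()}, leniency
```

With two examinees each seen by a strict and a lenient examiner, the fit recovers examiner effects and returns "true" scores freed of them, the same goal the paper pursues with Rasch analysis.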
63
Norman G. Assessment of communication skills. Adv Health Sci Educ Theory Pract 2008; 13:559-61. [PMID: 19067213] [DOI: 10.1007/s10459-008-9148-9]
64
Mattick K, Dennis I, Bradley P, Bligh J. Content specificity: is it the full story? Statistical modelling of a clinical skills examination. Med Educ 2008; 42:589-99. [PMID: 18482090] [DOI: 10.1111/j.1365-2923.2008.03020.x]
Abstract
OBJECTIVE This study sought to determine the relative contributions made by transferable skills and content-specific skills to Year 2 medical student performance in a clinical skills examination. METHODS Correlated trait-correlated method models were constructed to describe the performance of 2 year groups of students in examinations held in the summers of 2004 and 2005 at Peninsula Medical School in the UK. The transferable skills components of the models were then removed to indicate the contribution made to the fit of the models to the data. RESULTS Although content-specific skills made the greater contribution to the 2 models of student performance (accounting for averages of 54% and 43% of the variance, respectively), transferable skills did make an important but smaller contribution (averages of 13% and 16%, respectively). When the transferable skills components of the models were removed, the fit was not as good. CONCLUSIONS Both content-specific skills and transferable skills contributed to performance in the clinical skills examination. This challenges current thinking and has important implications, not just for those involved in clinical skills examinations, but for all medical educators.
Affiliation(s)
- Karen Mattick
- Institute of Clinical Education, Peninsula Medical School, Universities of Exeter and Plymouth, Exeter, UK.
65
66
Headly A. Communication skills: a call for teaching to the test. Am J Med 2007; 120:912-5. [PMID: 17904465] [DOI: 10.1016/j.amjmed.2007.06.024]
Affiliation(s)
- Anna Headly
- Undergraduate Medical Education, Internal Medicine, UMDNJ/Robert Wood Johnson Medical School, Camden, NJ 08103, USA.
67
Dyche L. Interpersonal skill in medicine: the essential partner of verbal communication. J Gen Intern Med 2007; 22:1035-9. [PMID: 17437144] [PMCID: PMC2219735] [DOI: 10.1007/s11606-007-0153-0]
Abstract
Medical educators have promoted skillful communication as a means for doctors to develop positive relationships with their patients. In practice, communication tends to be defined primarily as what doctors say, with less attention to how, when, and to whom they say it. These latter elements of communication, which often carry the emotional content of the discourse, are usually referred to as interpersonal skills. Although recognized as important by some educators, interpersonal skills have received much less attention than task-oriented, verbal aspects. Moreover, the field lacks a common language and conceptualization for discussing them. This paper offers a framework for describing interpersonal skills and understanding their relationship to verbal communication and describes an interpersonal skill-set comprised of Understanding, Empathy, and Relational Versatility.
Affiliation(s)
- Lawrence Dyche
- Department of Family and Social Medicine, Montefiore Medical Center, Albert Einstein College of Medicine, 3544 Jerome Ave, Bronx, NY 10467, USA.
68
Cave J, Washer P, Sampson P, Griffin M, Noble L. Explicitly linking teaching and assessment of communication skills. Med Teach 2007; 29:317-22. [PMID: 17786744] [DOI: 10.1080/01421590701509654]
Abstract
BACKGROUND Communication skills teaching is known to be effective, but students feel there are discrepancies between how communication skills are taught and how they are assessed. AIMS This study examined the effect of using standard assessment criteria during communication skills teaching on students' performance in an end-of-year summative OSCE. METHOD Students attending their year 3 communication skills teaching were randomised to one of three conditions: the assessment criteria were available for reference on the medical school website; students received the assessment criteria for use in the discussion and feedback; or each student's performance was graded by himself or herself, peers, the tutor and the actor using the standard assessment criteria. RESULTS There was no significant difference in the end-of-year OSCE performance of students across the three conditions; actively using standard assessment criteria during teaching did not improve OSCE performance. There were low but significant correlations between the tutors' assessments and the students' self-assessments, and between the tutors' assessments and the peer group's assessments. CONCLUSION The congruence between observers in the assessment of role-played consultations using the standard assessment criteria indicates that the criteria may be helpful for summarizing feedback to students.
Affiliation(s)
- Judith Cave
- Division of Medical Education, Royal Free and University College Medical School, London, UK.
69
Jefferies A, Simmons B, Tabak D, McIlroy JH, Lee KS, Roukema H, Skidmore M. Using an objective structured clinical examination (OSCE) to assess multiple physician competencies in postgraduate training. Med Teach 2007; 29:183-91. [PMID: 17701631] [DOI: 10.1080/01421590701302290]
Abstract
BACKGROUND Competency-based models of medical education require reliable and valid assessment of multiple physician roles. AIMS To develop and evaluate an objective structured clinical examination (OSCE) designed to assess 7 physician competencies (CanMEDS Roles). METHODS Twenty four candidates from 4 neonatal-perinatal medicine training programs participated in a 10-station OSCE. Ten 5-point rating scales were developed and used to assess the CanMEDS Roles of Medical Expert, Communicator, Collaborator, Manager, Health Advocate, Scholar and Professional. Three descriptors of performance anchored the ratings. For each station, examiners completed appropriate CanMEDS ratings, a station-specific binary checklist and an overall process-related global rating. Trained standardized patients (SP) and standardized health professionals (SHP) completed rating scales that assessed verbal and non-verbal expression, empathy and coherence as well as the overall global rating. RESULTS Each station incorporated 3-5 physician Roles. Interstation alpha was 0.80 for checklist scores and 0.88 for examiners' overall global rating. Median interstation alpha for individual CanMEDS ratings was 0.72 (range 0.08-0.91). There were significant correlations between examiner Medical Expert scores and SP/SHP overall global scores and between examiner Communicator scores and 4 SP/SHP assessments of communication skills. Second year trainees' CanMEDS scores for each competency were significantly higher than those of first year trainees (p < 0.05). CONCLUSIONS The OSCE may be useful as a reliable and valid method of simultaneously assessing multiple physician competencies.
70
Yudkowsky R, Downing SM, Sandlow LJ. Developing an institution-based assessment of resident communication and interpersonal skills. Acad Med 2006; 81:1115-22. [PMID: 17122484] [DOI: 10.1097/01.acm.0000246752.00689.bf]
Abstract
PURPOSE The authors describe the development and validation of an institution-wide, cross-specialty assessment of residents' communication and interpersonal skills, including related components of patient care and professionalism. METHOD Residency program faculty, the department of medical education, and the Clinical Performance Center at the University of Illinois at Chicago College of Medicine collaborated to develop six standardized patient-based clinical simulations. The standardized patients rated the residents' performance. The assessment was piloted in 2003 for internal medicine and family medicine and was subsequently adapted for other specialties, including surgery, pediatrics, obstetrics-gynecology, and neurology. We present validity evidence based on the content, internal structure, relationship to other variables, feasibility, acceptability, and impact of the 2003 assessment. RESULTS Seventy-nine internal medicine and family medicine residents participated in the initial administration of the assessment. A factor analysis of the 18 communication scale items resulted in two factors interpretable as "communication" and "interpersonal skills." Median internal consistency of the scale (coefficient alpha) was 0.91. Generalizability of the assessment ranged from 0.57 to 0.82 across specialties. Case-specific items provided information about group-level deficiencies. Cost of the assessment was about $250 per resident. Once the initial cases had been developed and piloted, they could be adapted for other specialties with minimal additional effort, at a cost saving of about $1,000 per program. CONCLUSION Centrally developed, institution-wide competency assessment uses resources efficiently to relieve individual programs of the need to "reinvent the wheel" and provides program directors and residents with useful information for individual and programmatic review.
Affiliation(s)
- Rachel Yudkowsky
- UIC Clinical Performance Center, Department of Medical Education, University of Illinois at Chicago College of Medicine, Chicago, IL 60612, USA.
71
Yudkowsky R, Downing SM, Ommert D. Prior experiences associated with residents' scores on a communication and interpersonal skill OSCE. Patient Educ Couns 2006; 62:368-73. [PMID: 16603331 DOI: 10.1016/j.pec.2006.03.004]
Abstract
OBJECTIVE This exploratory study investigated whether prior task experience and comfort correlate with scores on an assessment of patient-centered communication. METHODS A six-station standardized patient exam assessed the patient-centered communication of 79 PGY2-3 residents in internal medicine and family medicine. A survey provided information on prior experiences. t-tests, correlations, and multi-factorial ANOVA explored the relationships between scores and experiences. RESULTS Experience with a task predicted comfort but did not predict communication scores. Comfort was moderately correlated with communication scores for some tasks; residents who were less comfortable were indeed less skilled, but greater comfort did not predict higher scores. Female gender and medical school experiences with standardized patients, along with training in patient-centered interviewing, were associated with higher scores. Residents without standardized patient experiences in medical school were almost five times more likely to be rejected by patients. CONCLUSIONS Task experience alone does not guarantee better communication, and may instill a false sense of confidence. Experiences with standardized patients during medical school, especially in combination with interviewing courses, may provide an element of "deliberate practice" and have a long-term impact on communication skills. PRACTICE IMPLICATIONS The combination of didactic courses and practice with standardized patients may promote a patient-centered approach.
Affiliation(s)
- Rachel Yudkowsky
- Department of Medical Education, University of Illinois at Chicago College of Medicine, USA.
72
Troncon LEDA. Significance of experts' overall ratings for medical student competence in relation to history-taking. Sao Paulo Med J 2006; 124:101-4. [PMID: 16878194 DOI: 10.1590/s1516-31802006000200010]
Abstract
CONTEXT AND OBJECTIVE Overall ratings (ORs) of competence, given by expert physicians, are increasingly used in clinical skills assessments. Nevertheless, the influence of specific components of competence on ORs is incompletely understood. The aim here was to investigate whether ORs for medical student history-taking competence are influenced by performance relating to communication skills, completeness of questioning and asking content-driven key questions. DESIGN AND SETTING Descriptive, quantitative study at Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo. METHODS Thirty-six medical students were examined in a 15-station high-stakes objective structured clinical examination (OSCE). At four stations devoted to history-taking, examiners filled out checklists covering the components investigated and independently rated students' overall performance using a five-point scale from 1 (poor) to 5 (excellent). Physician ratings were aggregated for each student. Nonparametric correlations were computed between ORs and the other measures. RESULTS ORs presented significant correlations with checklist scores (Spearman's rs = 0.38; p = 0.02) and OSCE general results (rs = 0.52; p < 0.001). Scores for "communication skills" tended to correlate with ORs (rs = 0.31), but without reaching significance (p = 0.06). Neither the scores for "completeness" (rs = 0.26; p = 0.11) nor those for "asking key questions" (rs = 0.07; p = 0.60) correlated with ORs. CONCLUSIONS Experts' overall ratings of medical student competence in history-taking are likely to encompass a particular dimension, since the ratings were only weakly influenced by specific components of performance.
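The Spearman rank correlations quoted above (e.g. rs = 0.38 between ORs and checklist scores) are simply Pearson correlations computed on ranks. A small Python illustration with invented scores, not the study's data (the function name and numbers are hypothetical):

```python
import numpy as np

def spearman_rho(x, y) -> float:
    """Spearman rank correlation: Pearson correlation of the ranks."""
    def rank(a):
        a = np.asarray(a, dtype=float)
        order = a.argsort()
        ranks = np.empty_like(a)
        ranks[order] = np.arange(1, len(a) + 1)
        # tied values share the average of their ranks
        for v in np.unique(a):
            mask = a == v
            ranks[mask] = ranks[mask].mean()
        return ranks
    rx, ry = rank(x), rank(y)
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical overall ratings vs. checklist scores for six students
overall = [3, 4, 2, 5, 4, 3]
checklist = [60, 75, 55, 90, 70, 72]
print(round(spearman_rho(overall, checklist), 2))
```

Because it operates on ranks, the statistic is insensitive to the differing scales of a five-point global rating and a percentage checklist score, which is why it suits comparisons like those in the study.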
Affiliation(s)
- Luiz Ernesto de Almeida Troncon
- Faculdade de Medicina de Ribeirão Preto (FMRP), Universidade de São Paulo (USP); Hospital das Clínicas, Campus da USP, Ribeirão Preto, São Paulo, Brazil.
73
Hanna M, Fins JJ. Viewpoint: power and communication: why simulation training ought to be complemented by experiential and humanist learning. Acad Med 2006; 81:265-70. [PMID: 16501273 DOI: 10.1097/00001888-200603000-00016]
Abstract
The authors present an analysis of communication training for medical students using simulation patients, and its possible influence on later doctor-patient relationships. Many empirical studies have shown the various benefits of using simulation patients to teach communication skills, but theoretical sociology and humanistic reflection shed light on some fundamental differences between the student-doctor/actor-patient interactions practiced in simulation encounters and real doctor-patient relationships. In contrast to the usual power dynamics of a doctor-patient relation, those of simulation encounters are inverted and overwritten by an entirely different set of power relations, namely, those of the evaluator-student relationship. Since the power dynamics of real doctor-patient relations are generally overlooked, the altered dynamics of the simulation encounter are not readily perceived, and simulation encounters are thus often mistaken as accurate representations of clinical reality. Exclusive reliance on this pedagogic approach of simulation training may be encouraging students to become "simulation doctors" who act out a good relationship with their patients but have no authentic connection with them. The authors propose that liberal-arts learning and encounters with real patients should be used to cultivate students' abilities to create good doctor-patient relationships, as a complement to the pedagogic benefits of simulation encounters.
Affiliation(s)
- Michael Hanna
- Division of Medical Ethics, Department of Public Health, Joan and Sanford I. Weill Medical College, Cornell University, 435 East 70 St. Suite 4J, New York, NY 10021, USA
74
Hulsman RL, Mollema ED, Oort FJ, Hoos AM, de Haes JCJM. Using standardized video cases for assessment of medical communication skills: reliability of an objective structured video examination by computer. Patient Educ Couns 2006; 60:24-31. [PMID: 16332467 DOI: 10.1016/j.pec.2004.11.010]
Abstract
OBJECTIVE Using standardized video cases in a computerized objective structured video examination (OSVE) aims to measure the cognitive scripts underlying overt communication behavior through questions on knowledge, understanding and performance. In this study the reliability of the OSVE assessment is analyzed using generalizability theory. METHODS Third-year undergraduate medical students from the Academic Medical Center of the University of Amsterdam answered short-essay questions on three video cases, about history taking, breaking bad news, and decision making respectively. Of 200 participants, 116 completed all three video cases. Students were assessed in three shifts, each using a set of parallel case editions. About half of all available exams were scored independently by two raters using a detailed rating manual derived from the other half. The reliability of the assessment, the inter-rater reliability, and the interrelatedness of the three types of video cases and their parallel editions were analyzed by computing a generalizability coefficient G. RESULTS The test score showed a normal distribution. The students performed relatively well on the history-taking video cases, relatively poorly on decision making, and relatively poorly on the understanding ('knows why/when') type of questions. The reliability of the assessment was acceptable (G = 0.66). It can be improved by including up to seven cases in the OSVE. The inter-rater reliability was very good (G = 0.93). The parallel editions of the video cases appeared to be more alike (G = 0.60) than the three case types (G = 0.47). DISCUSSION The additional value of an OSVE is the differential picture it provides of the covert cognitive scripts underlying overt communication behavior in different types of consultations, indicated by the differing levels of knowledge, understanding and performance. The validation of the OSVE score requires more research.
CONCLUSION AND PRACTICE IMPLICATIONS A computerized OSVE has been successfully applied with third year undergraduate medical students. The test score meets psychometric criteria, enabling a proper discrimination between adequately and poorly performing students. The high inter-rater reliability indicates that a single rater is permitted.
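The claim that reliability "can be improved by including up to seven cases" reflects the usual trade-off between test length and reliability. The Spearman-Brown prophecy formula, a classical-test-theory analogue (the paper itself works in generalizability theory, so this is a rough sketch rather than the authors' computation), illustrates the expected effect:

```python
def spearman_brown(rel: float, k: float) -> float:
    """Predicted reliability when test length is multiplied by factor k."""
    return k * rel / (1 + (k - 1) * rel)

# Observed reliability 0.66 with 3 cases; predicted value with 7 cases (k = 7/3)
print(round(spearman_brown(0.66, 7 / 3), 2))
```

The prediction shows diminishing returns: each additional case raises reliability by less than the previous one, which is consistent with the paper recommending a bounded number of extra cases rather than ever more.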
Affiliation(s)
- R L Hulsman
- Department of Medical Psychology, J4, Academic Medical Centre, P.O. Box 22660, 1100 DD Amsterdam, The Netherlands.
75
De Haes JCJM, Oort FJ, Hulsman RL. Summative assessment of medical students' communication skills and professional attitudes through observation in clinical practice. Med Teach 2005; 27:583-9. [PMID: 16332548 DOI: 10.1080/01421590500061378]
Abstract
To establish medical students' professional competence for the medical profession, we designed a standardized observation procedure and the Amsterdam Attitude and Communication Scale (AACS), with nine five-point scale items, for summative assessment of their communication skills and professional attitudes. This study examines the reliability of the AACS assessment in clinical practice. In the Academic Medical Centre, Amsterdam, The Netherlands, the performance of 442 fifth-year clinical students was judged six times in two settings: behaviour in clinical practice was judged independently twice by a doctor and a nurse; one videotaped patient interview was judged independently by a doctor and by a psychologist. The final mark was obtained by averaging ratings across all six assessments. Raters were 88 doctors, 29 nurses, and three psychologists. MAIN OUTCOME MEASURES Standard errors (SEs) for absolute judgements indicate measurement precision. Precision of AACS scores is considered sufficient with SEs smaller than 0.25. Multi-disciplinary assessment of students' clinical performance using the AACS is feasible and sufficiently precise (with an overall mean of 3.97 and standard deviation of 0.55, the absolute SE is 0.21). Judgements of behaviour in the clinic were more precise (SEs range from 0.11 to 0.16) than judgements of videotaped interviews (SEs are 0.25 and 0.29). The procedure is sufficiently precise if five or six assessments are combined.
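The precision figures above follow from averaging independent assessments: the standard error of a mean of n judgements shrinks by a factor of sqrt(n), which is why combining five or six assessments pushes the SE below the 0.25 threshold. A small sketch, with a hypothetical single-assessment error SD chosen purely for illustration:

```python
import math

def absolute_se(error_sd: float, n_assessments: int) -> float:
    """Standard error of the mean of n independent assessments."""
    return error_sd / math.sqrt(n_assessments)

# Hypothetical single-assessment error SD on a five-point scale
sd = 0.52
for n in (1, 2, 4, 6):
    print(n, round(absolute_se(sd, n), 2))
```

The sqrt(n) shrinkage assumes the six judgements are independent; correlated raters (e.g. a doctor and nurse observing the same encounter) would reduce the gain somewhat.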
Affiliation(s)
- J C J M De Haes
- Department of Medical Psychology, Academic Medical Centre, University of Amsterdam, The Netherlands
76
Baez A, Eckert-Norton M, Morrison A. Knowing how and showing how: interdisciplinary collaboration on substance abuse skill OSCEs for medical, nursing and social work students. Subst Abus 2005; 25:33-7. [PMID: 16150679 DOI: 10.1300/j465v25n03_05]
Affiliation(s)
- Annecy Baez
- Lehman College, City University of New York, 250 Bedford Park Blvd West, Bronx, NY 10468, USA
77
Gallagher TJ, Hartung PJ, Gerzina H, Gregory SW, Merolla D. Further analysis of a doctor-patient nonverbal communication instrument. Patient Educ Couns 2005; 57:262-71. [PMID: 15893207 DOI: 10.1016/j.pec.2004.06.008]
Abstract
This study examines the reliability and validity of the relational communication scale for observational measurement (RCS-O) using a random sample of 80 videotaped interactions of medical students interviewing standardized patients (SPs). The RCS-O is a 34-item instrument designed to measure the nonverbal communication of physicians interacting with patients. The instrument was applied and examined in two different interview scenarios. In the first scenario (year 1), the medical student's interview objective is to demonstrate patient-centered interviewing skills as the SP presents with a psychosocial concern. In the second scenario (year 3), the student's interview objective is to demonstrate both doctor-centered and patient-centered skills as the SP presents with a case common in primary care. In the year 1 scenario, 19 of the 34 RCS-O items met acceptable levels of inter-rater agreement and reliability. In the year 3 scenario, 26 items met acceptable levels of inter-rater agreement and reliability. Factor analysis indicated that in both scenarios each of the four primary relational communication dimensions was salient: intimacy, composure, formality, and dominance. Measures of correlation and differences involving the RCS-O dimensions and structural features of the interviews (e.g., number of questions asked by the medical student) are examined.
78
Mazor KM, Ockene JK, Rogers HJ, Carlin MM, Quirk ME. The relationship between checklist scores on a communication OSCE and analogue patients' perceptions of communication. Adv Health Sci Educ Theory Pract 2005; 10:37-51. [PMID: 15912283 DOI: 10.1007/s10459-004-1790-2]
Abstract
Many efforts to teach and evaluate physician-patient communication are based on two assumptions: first, that communication can be conceptualized as consisting of specific observable behaviors, and second, that physicians who exhibit certain behaviors are more effective in communicating with patients. These assumptions are usually implicit, and are seldom tested. The purpose of this study was to investigate whether specific communication behaviors are positively related to patients' perceptions of effective communication. Trained raters used a checklist to record the presence or absence of specific communication behaviors in 100 encounters in a communication Objective Structured Clinical Examination (OSCE). Lay volunteers served as analogue patients and rated communication during each encounter. Correlations between checklist scores and analogue patients' ratings were not significantly different from zero for four of five OSCE cases studied. Within each case, certain communication behaviors did appear to be related to patients' ratings, but the critical behaviors were not consistent across cases. We conclude that scores from OSCE communication checklists may not predict patients' perceptions of communication. Determinants of patient perceptions of physician communication may be more subtle, more complex, and more case-specific than we were able to capture with the current checklist.
79
Norman G. Editorial: checklists vs. ratings, the illusion of objectivity, the demise of skills and the debasement of evidence. Adv Health Sci Educ Theory Pract 2005; 10:1-3. [PMID: 15912279 DOI: 10.1007/s10459-005-4723-9]
80
Shrank WH, Reed VA, Jernstedt GC. Fostering professionalism in medical education: a call for improved assessment and meaningful incentives. J Gen Intern Med 2004; 19:887-92. [PMID: 15242476 PMCID: PMC1492501 DOI: 10.1111/j.1525-1497.2004.30635.x]
Abstract
Increasing attention has been focused on developing professionalism in medical school graduates. Unfortunately, the culture of academic medical centers and the behaviors that faculty model are often incongruent with our image of professionalism. The need for improved role modeling, better assessment of student behavior, and focused faculty development is reviewed. We propose that the incentive structure be adjusted to reward professional behavior in both students and faculty. The third-year medicine clerkship provides an ideal opportunity for clinician-educators to play a leading role in evaluating, rewarding, and ultimately fostering professionalism in medical school graduates.
81
van Walsum KL, Lawson DM, Bramson R. Physicians' intergenerational family relationships and patients' perceptions of working alliance. Fam Syst Health 2004; 22:457. [DOI: 10.1037/1091-7527.22.4.457]
82
Hall MJ, Adamo G, McCurry L, Lacy T, Waits W, Chow J, Rawn L, Ursano RJ. Use of standardized patients to enhance a psychiatry clerkship. Acad Med 2004; 79:28-31. [PMID: 14690994 DOI: 10.1097/00001888-200401000-00008]
Abstract
Changes in psychiatric health care delivery driven by such major shifts as deinstitutionalization, community-based care, and managed care have greatly altered the educational milieu for third-year psychiatry clerkships. Students may be assigned exclusively to alcohol and substance abuse treatment units, consultation-liaison services, or outpatient clinics, and may not have as broad an exposure as is desirable to patients with a variety of psychiatric illnesses. The authors describe a pilot course they developed in 2001, Clinical Psychiatric Assessment and Diagnosis, for third-year medical students at the Uniformed Services University of the Health Sciences medical school. The course uses standardized patients (SPs) to help students gain broader clinical experience. In psychiatry, a growing body of literature supports the acceptability, reliability, and validity of objective structured clinical examination assessment using SPs for medical students. Only a few articles report the use of SPs to primarily teach psychiatry instead of evaluating student proficiency in clinical psychiatry. Since this course was developed, the National Board of Medical Examiners announced that all medical students will be required to pass a clinical skills test in order to practice medicine, beginning with the class of 2005. The examination will use SPs modeling different clinical scenarios. In light of this change, many medical schools may have to reevaluate and possibly revamp their curricula to ensure sufficient acquisition of clinical skills in different specialties. The use of SPs in psychiatry could provide an effective, primary clinical teaching experience to address this new requirement as well.
Affiliation(s)
- Molly J Hall
- Department of Psychiatry, Uniformed Services University of the Health Sciences (USUHS), Bethesda, Maryland 20814, USA.
83
Loayssa Lara JR. [Dilemmas and alternatives in the evaluation of family doctor training]. Aten Primaria 2003; 32:376-81. [PMID: 14572403 PMCID: PMC7684402 DOI: 10.1016/s0212-6567(03)79300-7]
84
Roberts LW, Geppert C, McCarty T, Obenshain SS. Evaluating medical students' skills in obtaining informed consent for HIV testing. J Gen Intern Med 2003; 18:112-9. [PMID: 12542585 PMCID: PMC1494816 DOI: 10.1046/j.1525-1497.2003.10835.x]
Abstract
OBJECTIVE To evaluate fourth-year medical students' abilities to obtain informed consent or refusal for HIV testing through a performance-based evaluation method. DESIGN Student competence was assessed in a standardized patient interaction in which the student obtained informed consent or refusal for HIV testing. A previously validated 16-item checklist was completed by the standardized patient. A subset was independently reviewed and scored by a faculty member to calculate interrater reliability for this report. Student feedback on the assessment was elicited. SETTING School of Medicine at the University of New Mexico. PATIENTS/PARTICIPANTS All senior medical students in the class of 2000 were included. INTERVENTIONS A 10-minute standardized patient interaction was administered within the context of a formal comprehensive performance assessment. MEASUREMENTS AND MAIN RESULTS Seventy-nine students participated, and most (96%) demonstrated competence on the station. For the 15 specific items, the mean score was 25.5 out of 30 possible points (range, 13 to 30; SD, 3.5) on the checklist. A strong positive correlation (rs =.79) was found between the total score on the 15 Likert-scaled items and the score in response to the global item, "I would return to this clinician" (mean, 3.5; SD, 1.0). Scores given by the standardized patients and the faculty rater were well correlated. The station was generally well received by students, many of whom were stimulated to pursue further learning. CONCLUSIONS This method of assessing medical students' abilities to obtain informed consent or refusal for HIV testing can be translated to a variety of clinical settings. Such efforts may help in demonstrating competence in performing key ethics skills and may help ensure ethically sound clinical care for people at risk for HIV infection.
Affiliation(s)
- Laura Weiss Roberts
- Empirical Ethics Group, Department of Psychiatry, University of New Mexico School of Medicine, Albuquerque, NM 87131-5326, USA.
85
Wiskin CMD, Allan TF, Skelton JR. Hitting the mark: negotiated marking and performance factors in the communication skills element of the VOICE examination. Med Educ 2003; 37:22-31. [PMID: 12535112 DOI: 10.1046/j.1365-2923.2003.01408.x]
Abstract
INTRODUCTION Communication skills assessment is complex. Standardised patient use is widespread, but anxiety exists around the use of role players as assessors of competence in high-stakes examinations. This study measures the level of agreement between scoring examiners and role players, and considers their influence on each other. Examiner status and question choices are analysed as variables. METHOD The valid oral interactive contextualised examination (VOICE) is a general practice examination styled as an objective structured clinical examination (OSCE) of six 15-minute stations, which include two role-played consultations with professional role players. The examination candidates are final-year medical students. Clinical components are examined by a general practitioner (GP). Communication skills are assessed by these examiners in conjunction with the role players, through a process of negotiation. Descriptive professionalism/attitude bandings are used as percentage-scoring guidelines. Checklists are not used. For this study, the initial (independently) perceived marks of the two scoring groups and their agreed final (awarded) marks were recorded, along with other variables including gender, performance factors, demographics and the nature of the question. The data represent 512 students undertaking 1024 simulated consultations, examined by 28 role players and 46 examiners. Analysis was carried out using SPSS Version 10. RESULTS Results show that the examination and negotiation process is consistent. Role players have a direct influence on scoring. The examiner's background is a significant variable (F(9,1014) = 4.207, P < 0.001). Students perform less well on questions involving higher degrees of clinical information giving. Question choice is not significant (F(30,3039) = 1.397, P = 0.074). DISCUSSION The variables in the examination do not indicate any discrepancy substantial enough to bias a student's grade. Negotiated marking in this context is considered safe and reliable.
Affiliation(s)
- Connie M D Wiskin
- Department of Primary Care and General Practice, Medical School, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK.
86
Wilkinson TJ, Fontaine S. Patients' global ratings of student competence. Unreliable contamination or gold standard? Med Educ 2002; 36:1117-21. [PMID: 12472737 DOI: 10.1046/j.1365-2923.2002.01379.x]
Abstract
PURPOSE To determine whether global ratings by patients are valid and reliable enough to be used within a major summative assessment of medical students' clinical skills. METHOD In 11 stations of an 18-station objective structured clinical examination (OSCE), where a student was asked to educate or take a history from a patient, the patient was asked, 'How likely would you be to come back and discuss your concerns with this student again?' These 11 opinions were aggregated into a single patient opinion mark and correlated with other measures of student competence. The patients were not experienced in student assessment. RESULTS A total of 204 students undertook the OSCE. Reliability of patient opinion across all 11 stations revealed a Cronbach alpha of 0.65. The correlation coefficient between the patient ratings and the total OSCE score was good (r = 0.74; P < 0.001) and was better than the correlation between any single OSCE station and the total OSCE score. It was also better than the correlation between the aggregated patient opinion and tests of student knowledge (r = 0.47). CONCLUSION It is known that patients can reliably complete checklists of clinical skills and that doctors can reliably provide global ratings of students. We have now shown that, by controlling the context, asking the right question and aggregating several opinions, untrained patients can provide a reliable and valid global opinion that contributes to the assessment of a student's clinical skills.
Affiliation(s)
- Tim J Wilkinson
- Christchurch School of Medicine and Health Sciences, University of Otago, New Zealand.
87
Penava DA, Stanojevic S. Communication skills assessed at OSCE are not affected by participation in the Adolescent Healthy Sexuality Program. Med Educ Online 2002; 7:4539. [PMID: 28253751 DOI: 10.3402/meo.v7i.4539]
Abstract
PURPOSE We proposed that first-year medical students who voluntarily participated in the Healthy Sexuality adolescent program would perform better than their peers on an adolescent counseling station at the year-end OSCE (Objective Structured Clinical Examination). In addition we compared medical students' communication skills at the time of the program as assessed by self, peers and participating adolescents. METHODS Nineteen first-year medical students voluntarily participated in the ongoing Healthy Sexuality program. Adolescent participants, medical student peer participants and medical students assessed communication components on a 7-point Likert scale at the end of the program. At the year-end OSCE, all first-year medical students at the University of Western Ontario were assessed at an adolescent counseling station by a standardized patient (SP) and a physician examiner. Statistical analysis examined differences between the two groups. RESULTS Students who participated in the Healthy Sexuality program did not perform better than their colleagues on the year-end OSCE. A statistically significant correlation between physician examiner and SP evaluations was found (r = 0.62). Adolescent participants' communication skills assessments in the Healthy Sexuality program demonstrated no significant correlation with medical student assessments (self or peer). CONCLUSIONS Voluntary intervention with adolescents did not result in improved communication skills at the structured year-end examination. Further investigation will be directed towards delineating differences between SP and physician examiner assessments.
Affiliation(s)
- D A Penava
- Department of Obstetrics and Gynaecology, St. Joseph's Health Centre, The University of Western Ontario
88
Collins J, Mullan BF, Holbert JM. Evaluation of speakers at a national radiology continuing medical education course. Med Educ Online 2002; 7:4540. [PMID: 28253766 DOI: 10.3402/meo.v7i.4540]
Abstract
PURPOSE Evaluations of a national radiology continuing medical education (CME) course in thoracic imaging were analyzed to determine what constitutes effective and ineffective lecturing. METHODS AND MATERIALS Evaluations of sessions and individual speakers participating in a five-day course jointly sponsored by the Society of Thoracic Radiology (STR) and the Radiological Society of North America (RSNA) were tallied by the RSNA Department of Data Management and three members of the STR Training Committee. Comments were collated and analyzed to determine the number of positive and negative comments and common themes related to ineffective lecturing. RESULTS Twenty-two sessions were evaluated by 234 (75.7%) of 309 professional registrants. Eighty-one speakers were evaluated by an average of 153 registrants (range, 2-313). Mean ratings for 10 items evaluating sessions ranged from 1.28 to 2.05 (1 = most positive, 4 = least positive; SD 0.451-0.902). The average speaker rating was 5.7 (1 = very poor, 7 = outstanding; SD 0.94; range 4.3-6.4). The total number of comments analyzed was 862, with 505 (58.6%) considered positive and 404 (46.9%) considered negative (the totals exceed 862 because a "comment" could consist of both positive and negative statements). Poor content was mentioned most frequently, making up 107 (26.5%) of 404 negative comments, and applied to 51 (63%) of 81 speakers. Other negative comments, in order of decreasing frequency, related to delivery, image slides, command of the English language, text slides, and handouts. CONCLUSIONS Individual evaluations of speakers at a national CME course provided information about the quality of lectures that was not provided by evaluations of grouped presentations. Systematic review of speaker evaluations provided specific information on the types and frequency of features related to ineffective lecturing. This information can be used to design CME course evaluations, design future CME course outcomes studies, provide training to presenters, and monitor presenter performance.
Affiliation(s)
- Jannette Collins
- Department of Radiology, University of Wisconsin Hospital and Clinics and Medical School, Madison, WI
- Brian F Mullan
- Department of Radiology, University of Iowa Hospital and Clinics, Iowa City, IA
- John M Holbert
- Department of Radiology, Texas A&M University, Temple, TX

89
Kelly ME, Fenlon NP, Murphy AW. An approach to the education about, and assessment of, attitudes in undergraduate medical education. Ir J Med Sci 2002; 171:206-10. [PMID: 12647910 DOI: 10.1007/bf03170282] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
BACKGROUND Attitudes have been shown to be important determinants of the quality and efficacy of medical care. There is little research on the education about, and assessment of, attitudes in undergraduate medical education. AIM To describe the design and delivery of an attitude awareness workshop, and the associated objective structured clinical examination (OSCE) station used as the assessment tool. METHODS Development, delivery, assessment and evaluation of an attitude awareness workshop were performed. RESULTS Data are presented from 144 students. In 1999, the overall mean OSCE score was 62.44% (SD 7.6, n=73). The mean score for the attitude station was 57.97% (SD 12.9). In 2000, these figures were 67.11% (SD 8.3, n=71) and 73.75% (SD 10.8) respectively. In 1998/99, the average mark out of 10 for the educational quality of the attitude workshop was 6.6 and in 1999/00 this rose to 7.8. CONCLUSION Development of both an educational and assessment programme concerning attitudes appears feasible.
Affiliation(s)
- M E Kelly
- Department of General Practice, National University of Ireland, Galway, Ireland.

90
Arnold L. Assessing professional behavior: yesterday, today, and tomorrow. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2002; 77:502-515. [PMID: 12063194 DOI: 10.1097/00001888-200206000-00006] [Citation(s) in RCA: 256] [Impact Index Per Article: 11.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
PURPOSE The author interprets the state of the art of assessing professional behavior. She defines the concept of professionalism, reviews the psychometric properties of key approaches to assessing professionalism, conveys major findings that these approaches produced, and discusses recommendations to improve the assessment of professionalism. METHOD The author reviewed professionalism literature from the last 30 years that had been identified through database searches; included in conference proceedings, bibliographies, and reference lists; and suggested by experts. The cited literature largely came from peer-reviewed journals, represented themes or novel approaches, reported qualitative or quantitative data about measurement instruments, or described pragmatic or theoretical approaches to assessing professionalism. RESULTS A circumscribed concept of professionalism is available to serve as a foundation for next steps in assessing professional behavior. The current array of assessment tools is rich. However, their measurement properties should be strengthened. Accordingly, future research should explore rigorous qualitative techniques; refine quantitative assessments of competence, for example, through OSCEs; and evaluate separate elements of professionalism. It should test the hypothesis that assessment tools will be better if they define professionalism as behaviors expressive of value conflicts, investigate the resolution of these conflicts, and recognize the contextual nature of professional behaviors. Whether measurement tools should be tailored to the stage of a medical career and how the environment can support or sabotage the assessment of professional behavior are central issues. FINAL THOUGHT: Without solid assessment tools, questions about the efficacy of approaches to educating learners about professional behavior will not be effectively answered.
Affiliation(s)
- Louise Arnold
- University of Missouri-Kansas City School of Medicine, 64108, USA

91
Gallagher TJ, Hartung PJ, Gregory SW. Assessment of a measure of relational communication for doctor-patient interactions. PATIENT EDUCATION AND COUNSELING 2001; 45:211-218. [PMID: 11722857 DOI: 10.1016/s0738-3991(01)00126-4] [Citation(s) in RCA: 36] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
This research analyzes the psychometric properties of a 34-item doctor-patient relational communication scale adapted from its survey research form [Commun Monogr 1987;54:307] for use as an observational instrument to rate doctor-patient interaction. Relational communication determines the affective "tone" of interpersonal communication, and is handled mostly through nonverbal channels. Relational communication provides the framework in which the content of a doctor-patient exchange, such as symptom reports by the patient, is interpreted and acted upon. The relational communication scale was adapted for use by three trained observers each of whom rated 20 videotaped interactions between medical students and standardized patients. Results indicate fair to excellent internal consistency, inter-rater reliability, inter-rater agreement, and construct validity for four of the six relational communication subscales. The scale is practical to administer and would lend itself for use in formative evaluation of medical student and physician communication skills.
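The internal-consistency figures reported in abstracts like this one are typically Cronbach's alpha over an observations-by-items rating matrix. As a rough illustration only, with entirely hypothetical rating data (not the study's), a minimal sketch:

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for an (observations x items) rating matrix."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                          # number of items
    item_vars = ratings.var(axis=0, ddof=1)       # per-item variances
    total_var = ratings.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical: five rated encounters, three subscale items each
ratings = [[4, 5, 4], [2, 2, 3], [5, 5, 5], [3, 4, 3], [1, 2, 1]]
print(round(cronbach_alpha(ratings), 2))  # 0.97
```

Inter-rater reliability and agreement, which the abstract reports separately, are usually expressed as intraclass correlations and need a different formula.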
Affiliation(s)
- T J Gallagher
- Department of Sociology, Kent State University, OH 44242, USA.

92
de Haes JC, Oort F, Oosterveld P, ten Cate O. Assessment of medical students' communicative behaviour and attitudes: estimating the reliability of the use of the Amsterdam attitudes and communication scale through generalisability coefficients. PATIENT EDUCATION AND COUNSELING 2001; 45:35-42. [PMID: 11602366 DOI: 10.1016/s0738-3991(01)00141-0] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
It is widely accepted that adequate attitudes and communicative skills are among the essential objectives of medical education. The Amsterdam attitude and communication scale (AACS) was developed to assess the communicative skills and professional attitudes of medical students. More specifically, it was designed to evaluate the clinical behaviour of clerks to establish their suitability for the medical profession. The AACS covers nine dimensions; an overall judgement of the student's performance is also included. The present paper reports first results on the reliability of the use of the AACS. Data were collected in the course of an AACS training programme for future judges: senior medical and nursing staff members (N=98). Participants judged three videotapes of clerks interviewing patients at the bedside. For the assessment of videotapes, the first four dimensions of the AACS and the overall judgement are relevant. By applying Generalisability Theory to the training data, we can forecast the reliability of the AACS in practice and gain insight into the number of raters needed to achieve sufficient reliability in clinical practice. If clerk behaviour is rated by six judges, summative assessment is sufficiently precise, i.e. <0.25. When using the full AACS, covering 10 items, the same number of judges is needed. Scores on individual AACS items are not sufficiently reliable. In conclusion, the results indicate that students' behaviour can be evaluated in a reliable manner using the AACS as long as enough judges and items are involved.
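The forecast of how many judges are needed follows from Generalisability Theory; in the simplest one-facet (judges-only) design the projection reduces to the Spearman-Brown formula. A minimal sketch with a hypothetical single-judge reliability (the study's own variance components are not reproduced here):

```python
def projected_reliability(rho_single, n_judges):
    """Spearman-Brown projection: reliability of the mean of n_judges
    ratings, given the reliability of a single judge."""
    return n_judges * rho_single / (1 + (n_judges - 1) * rho_single)

# Hypothetical: a single-judge reliability of 0.40, averaged over six judges
print(round(projected_reliability(0.40, 6), 2))  # 0.8
```

Designs with more facets (judges crossed with items, for instance) require estimating the separate variance components rather than this shortcut.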
Affiliation(s)
- J C de Haes
- Department of Medical Psychology, Academic Medical Centre, University of Amsterdam, P.O. Box 22700, The Netherlands.

93
Prislin MD, Lie D, Shapiro J, Boker J, Radecki S. Using standardized patients to assess medical students' professionalism. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2001; 76:S90-S92. [PMID: 11597884 DOI: 10.1097/00001888-200110001-00030] [Citation(s) in RCA: 18] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Affiliation(s)
- M D Prislin
- Department of Family Medicine, University of California, Orange 92868, USA.
94
Rose M, Wilkerson L. Widening the lens on standardized patient assessment: what the encounter can reveal about the development of clinical competence. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2001; 76:856-859. [PMID: 11500293 DOI: 10.1097/00001888-200108000-00023] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
The standardized patient (SP) examination is used in a majority of medical schools to test clinical skills. This examination usually yields both numerical ratings of clinical skill and narrative comments by patients or observers, yet most empirical studies of SP assessment focus on the numerical ratings only. This quantitative focus can lead to a narrow conceptualization of the nature and development of clinical competence. The authors suggest that in addition to utilizing SP numerical ratings, medical educators also use the rich qualitative material produced in the SP examination (e.g., patient comments, videotapes of the examination) to explore students' development of clinical competence, which involves the purposive integration of basic science, technical skill, empathy, communication, professional role, and personal history.
Affiliation(s)
- M Rose
- UCLA School of Medicine, Los Angeles, 90095, USA

95
Rothman AI, Cusimano M. Assessment of English proficiency in international medical graduates by physician examiners and standardized patients. MEDICAL EDUCATION 2001; 35:762-766. [PMID: 11489104 DOI: 10.1046/j.1365-2923.2001.00964.x] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
CONTEXT Since 1986, the Ontario Ministry of Health has provided a medical licensure preparation programme for international medical graduates. Because of the diversity in candidates' oral English proficiency, this competency has been viewed as a particularly important selection criterion. OBJECTIVES To assess and compare the quality of ratings of oral English proficiency of international medical graduates provided by physician examiners and by standardized patients (SPs). PARTICIPANTS AND MATERIALS The study samples consisted of 73 candidates for the Ontario International Medical Graduate (IMG) Program, and the physician examiners and SPs in five 10-minute objective structured clinical examination (OSCE) encounter stations. The material used was a seven-item speaking performance rating instrument prepared for the Ontario IMG Program. METHODS Rating sheets were scanned and the results analysed using SPSS 9.0 for Windows. RESULTS Correlations between the physician and SP ratings on the seven items ranged from 0.52 to 0.70. The SPs provided more lenient ratings. Mean alpha reliability on the seven items was 0.59 for the physicians' ratings and 0.64 for the SPs'. There was poor agreement between the two sets of raters in identifying problematic candidates. CONCLUSIONS Notwithstanding the sizable correlations between the ratings provided by the two rater groups, the results demonstrated that there was little agreement between the two groups in identifying potentially problematic candidates. The physicians were less prone than the SPs to rate candidates as problematic. SPs may be better placed than the physician examiners to directly assess IMG candidates' oral English proficiency.
Affiliation(s)
- A I Rothman
- Department of Medicine, Faculty of Medicine, University of Toronto, Ontario, Canada

96
Humphris GM, Kaney S. Examiner fatigue in communication skills objective structured clinical examinations. MEDICAL EDUCATION 2001; 35:444-9. [PMID: 11328514 DOI: 10.1046/j.1365-2923.2001.00893.x] [Citation(s) in RCA: 15] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/17/2023]
Abstract
CONTEXT The assessment of undergraduates' communication skills by means of objective structured clinical examinations (OSCEs) is a demanding task for examiners. Tiredness over the course of an examining session may introduce systematic error. In addition, unsystematic error may also be present which changes over the duration of the OSCE session. AIM To determine the strength of some sources of systematic and unsystematic error in the assessment of communication skills over the duration of an examination schedule. METHODS Undergraduate first-year medical students completing their initial summative assessment of communication skills (a four-station OSCE) comprised the study population. Students from three cohorts were included (1996-98 intake). In all 3 years the OSCE was carried out identically. All stations lasted 5 minutes with a simulated patient. Students were assessed using an examiner (content expert) and a simulated-patient evaluation tool, the Liverpool Communication Skills Assessment Scale (LCSAS) and the Global Simulated-patient Rating Scale (GSPRS), respectively. Each student was assigned a time slot ranging from 1 to 24, where 1, for example, would denote that the student entered the exam first and 24 indicates the final slot for entry into the examination. The number of students who failed this exam was noted for each of the 24 time slots. A control set of marks from a communication skills written exam was also adopted for exploring a possible link with the time slot. Analysis was conducted using graphical display, covariate analysis and logistic regression. RESULTS No significant relationship was found between the schedule point that the student entered the OSCE exam and their performance. The reliability of the content expert and simulated-patient assessments was stable throughout the session. CONCLUSION No evidence could be found that duration of examining in a communication OSCE influenced examiners and the marks they awarded. 
Routine checks of this nature are nevertheless recommended to confirm the absence of such bias.
Affiliation(s)
- G M Humphris
- Department of Clinical Psychology, Whelan Building, The University of Liverpool, Liverpool L69 3GB, UK

97
Sloan PA, Plymale MA, Johnson M, Vanderveer B, LaFountain P, Sloan DA. Cancer pain management skills among medical students: the development of a Cancer Pain Objective Structured Clinical Examination. J Pain Symptom Manage 2001; 21:298-306. [PMID: 11312044 DOI: 10.1016/s0885-3924(00)00278-5] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
Abstract
Recent surveys suggest that most physicians have inadequate knowledge to assess and manage cancer pain; however, the important domain of clinical performance has not yet been clearly evaluated. The Objective Structured Clinical Examination (OSCE) has become a widely used and accepted method of evaluating the clinical abilities of medical students. The purpose of this study was to develop and test a Cancer Pain OSCE for medical students, evaluating their clinical competence in the area of cancer pain management. A four-component Cancer Pain OSCE was developed and presented to 34 third-year medical students during a sixteen-week combined medicine/surgery clerkship. The content of the objective criteria for each component of the OSCE was developed by a multidisciplinary group of pain experts. The OSCE was designed to assess the students' cancer pain management skills of pain history-taking, focused physical examination, analgesic management of cancer pain, and communication of opioid analgesia myths. Actual cancer survivors were used in the five-minute individual stations. The students were asked to take a cancer pain history, perform a physical examination, manage cancer pain using analgesics, and communicate with a family member regarding opioid myths. Clinical performance was evaluated using pre-defined checklists. Results showed that the students' average performance on the history component was the highest of the four components of the examination. Out of 34 points possible on this clinical skills item, students on average (SD) scored 24.5 (5.2), or 72%. For the short-answer analgesic management component of the Cancer Pain OSCE, the overall score was 32%. Most students managed cancer pain with opioids; however, very few prescribed opioids on a regular schedule, and the use of adjuvant analgesics was uncommon. Student performance on the focused cancer pain physical examination was, in general, poor.
On average, students scored 61% on the musculoskeletal system, but only 31% on both the neurological and lymphatic examinations. The overall percent score for the Cancer Pain OSCE was 48%. We conclude that the Cancer Pain OSCE is a useful performance-based tool to test individual skills in the essential components of cancer pain assessment and management. Of the four components of the Cancer Pain OSCE, medical students performed best on the cancer pain history and poorly on the cancer pain physical examination. Information gained from this study will provide a foundation for future small-group structured teaching of medical students.
Affiliation(s)
- P A Sloan
- Department of Anesthesiology, University of Kentucky College of Medicine, Lexington, KY 40536-0293, USA

98
Rothman AI, Cusimano M. A comparison of physician examiners', standardized patients', and communication experts' ratings of international medical graduates' English proficiency. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2000; 75:1206-11. [PMID: 11112723 DOI: 10.1097/00001888-200012000-00018] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
PURPOSE To assess the quality of ratings of interviewing skills and oral English proficiency provided on a clinical skills OSCE by physician examiners, standardized patients (SPs), and communication skills experts. METHOD In 1998, 73 candidates to the Ontario International Medical Graduate (OIMG) Program completed a 29-station OSCE-type clinical skills selection examination. Physician examiners, SPs, and communication skills experts assessed components of oral English proficiency and interview performance. From these results, the following were calculated: the frequency and generalizability of English-language flags (physician examiners' indications that spoken English skills were bad enough to significantly impede communication with patients); the reliability of the OIMG's Interview and Oral Performance Scales and the generalizability of overall interview and oral performance ratings; and comparisons of repeated assessments by experts. Principal-components analysis (PCA) was applied to the panels' ratings to determine a more economical expression of the language proficiency and interview communication skills results. RESULTS The mean number of English-language flags per candidate was 2.1, the median was 1.0, and Cronbach's alpha of the ratings was 0.63. Means, SDs, and alphas of the physician examiners' and SPs' ratings on the interview performance scale were 9.15/10, 0.43, 0.36, and 9.30/10, 0.56, 0.50, respectively. Corresponding values for overall interview performance ratings were 3.08/4, 0.30, 0.33, and 3.34/4, 0.32, 0.47. Means, SDs, and alphas of the physician examiners' and SPs' ratings on the oral performance scale were 8.54/10, 0.74, 0.78, and 8.74/10, 1.00, 0.76. Corresponding values for overall ratings of oral performance were 3.85/5, 0.51, 0.68, and 4.08/5, 0.60, 0.68. For the two experts' ratings of two contiguous five-minute interview stations, internal consistencies were 0.88 and 0.78.
For the two experts' ratings of standardized ten-minute interviews, internal consistencies were 0.81 and 0.92. Correlations between the mean values of the experts' ratings of the ten- and five-minute stations were 0.45 and 0.51. Three factors emerged from the PCA: language proficiency, physician examiners' ratings of interview proficiency, and SPs' ratings of interview proficiency. CONCLUSIONS Consistency between the physician examiners' and SPs' ratings of English proficiency was observed; less agreement was observed in their ratings of interviewing skills, and little agreement was observed between the experts' ratings. Communication skills results may be validly expressed by three measures: one overall global rating of language proficiency provided by physician examiners or SPs, and overall global ratings of interview proficiency provided separately by physician examiners and SPs.
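For illustration only, a principal-components analysis like the one applied to the panels' ratings can be sketched as an eigendecomposition of the correlation matrix of the ratings; the data below are hypothetical, not the study's:

```python
import numpy as np

def pca_of_ratings(X, n_components):
    """Principal components of a (candidates x ratings) matrix via
    eigendecomposition of the correlation matrix of the ratings."""
    R = np.corrcoef(np.asarray(X, dtype=float), rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(R)   # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]      # re-sort descending
    return eigvals[order][:n_components], eigvecs[:, order][:, :n_components]

# Hypothetical: three rating scales for five candidates
X = [[1, 2, 1], [2, 3, 2], [3, 4, 3], [4, 5, 5], [5, 6, 4]]
eigvals, loadings = pca_of_ratings(X, 2)
print(eigvals[0] > eigvals[1])  # True: one dominant component
```

The number of eigenvalues clearly above the rest suggests how many factors the ratings can be economically reduced to.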
Affiliation(s)
- A I Rothman
- Department of Medicine, University of Toronto, Toronto, Ontario, Canada.

99
Ginsburg S, Regehr G, Hatala R, McNaughton N, Frohna A, Hodges B, Lingard L, Stern D. Context, conflict, and resolution: a new conceptual framework for evaluating professionalism. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2000; 75:S6-S11. [PMID: 11031159 DOI: 10.1097/00001888-200010001-00003] [Citation(s) in RCA: 97] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Affiliation(s)
- S Ginsburg
- Mt. Sinai Hospital, Toronto, Ontario, Canada
100
Donnelly MB, Sloan D, Plymale M, Schwartz R. Assessment of residents' interpersonal skills by faculty proctors and standardized patients: a psychometric analysis. ACADEMIC MEDICINE : JOURNAL OF THE ASSOCIATION OF AMERICAN MEDICAL COLLEGES 2000; 75:S93-S95. [PMID: 11031186 DOI: 10.1097/00001888-200010001-00030] [Citation(s) in RCA: 20] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Affiliation(s)
- M B Donnelly
- Department of Surgery, University of Kentucky COM, Lexington, KY 40536-0298, USA