1
Chen JY, Chin WY, Tsang JPY. How clinician examiners compare with simulated patients in assessing medical student empathy in a clinical exam setting. Medical Teacher 2020; 42:86-91. [PMID: 31558085] [DOI: 10.1080/0142159X.2019.1665635]
Abstract
Background: Empathy is an important clinical attribute to be assessed during clinical examinations. While simulated patients (SPs) are well positioned to assess empathy in such settings, clinician examiners are objective observers who are also experts in assessment. This study compared examiners' and SPs' assessments of student empathy in clinical examinations. Methods: The 10-item CARE measure was used to assess empathy in 158 medical students taking the Family Medicine specialty clerkship clinical competency test. Ratings from examiners and SPs were analysed together with the genders of students, examiners and patients, and the examination results. Results: SP empathy ratings were higher than examiner ratings across all ten CARE items, with a weak positive correlation between the two sets of ratings. Female SPs were more likely to give higher ratings, and examiners were more likely to give higher ratings to female students. SP ratings were moderately correlated with students' examination scores, whereas examiner ratings were strongly correlated with them. Conclusion: Although inter-rater reliability between SP and examiner empathy ratings was weak, the evaluation of empathy from the patient's perspective may be considered more authentic because SPs interact directly with the students.
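The core comparison in this study is between two sets of paired ratings of the same encounters. A minimal sketch of such an analysis in Python (the CARE item scores below are invented for illustration, not data from the study):

    import numpy as np
    from scipy.stats import spearmanr

    # Hypothetical paired empathy ratings (1-5 scale) for ten encounters,
    # one rating from the SP and one from the clinician examiner.
    sp_ratings = np.array([5, 4, 5, 3, 4, 5, 4, 5, 3, 4])
    examiner_ratings = np.array([3, 3, 4, 2, 3, 4, 3, 4, 3, 3])

    # Rank correlation between the two rater types, plus mean levels,
    # mirroring the "weak correlation, higher SP ratings" pattern reported.
    rho, p = spearmanr(sp_ratings, examiner_ratings)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
    print(f"Mean SP rating {sp_ratings.mean():.2f} vs "
          f"mean examiner rating {examiner_ratings.mean():.2f}")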
Affiliation(s)
- Julie Yun Chen
- Department of Family Medicine and Primary Care, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong
- Bau Institute of Medical and Health Sciences Education, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong
- Weng-Yee Chin
- Department of Family Medicine and Primary Care, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong
- Bau Institute of Medical and Health Sciences Education, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong
- Joyce Pui Yan Tsang
- Department of Family Medicine and Primary Care, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong
- Bau Institute of Medical and Health Sciences Education, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Hong Kong
2
Roberts MJ, Gale TCE, Sice PJA, Anderson IR. The relative reliability of actively participating and passively observing raters in a simulation-based assessment for selection to specialty training in anaesthesia. Anaesthesia 2013; 68:591-9. [DOI: 10.1111/anae.12255]
Affiliation(s)
- M. J. Roberts
- Collaboration for the Advancement of Medical Education Research and Assessment (CAMERA), Plymouth University Peninsula Schools of Medicine and Dentistry, Plymouth, UK
- P. J. A. Sice
- Department of Anaesthesia, Plymouth Hospitals NHS Trust, Plymouth, UK
- I. R. Anderson
- Department of Anaesthesia, Plymouth Hospitals NHS Trust, Plymouth, UK
3
Vaughan B, Sullivan V, Gosling C, McLaughlin P, Fryer G, Wolff M, Gabb R. Assessing fitness-to-practice of overseas-trained health practitioners by Australian registration & accreditation bodies. BMC Medical Education 2012; 12:91. [PMID: 23020885] [PMCID: PMC3549784] [DOI: 10.1186/1472-6920-12-91]
Abstract
BACKGROUND Assessment of the fitness-to-practice of health professionals who trained overseas and wish to practice in Australia is undertaken by a range of organisations using a range of methods, yet very little has been published about how these organisations conduct their assessments. The purpose of the current paper is to investigate the assessment methods these organisations use and the issues associated with conducting these assessments. METHODS A series of semi-structured interviews was undertaken with a variety of organisations that assess overseas-trained health professionals wishing to practice in Australia. Content analysis of the interviews was used to identify themes and patterns. RESULTS Four themes were generated from the content analysis: (1) assessing; (2) process; (3) examiners; and (4) cost-efficiency. The themes were interconnected, and each also had a number of sub-themes. CONCLUSIONS The participating organisations used a range of methods to assess overseas-trained health professionals and highlighted a number of issues, particularly relating to examiners and to process issues pre- and post-assessment. The organisations demonstrated an appreciation of the need for ongoing review of their assessment processes and for incorporating evidence from the literature to inform process and assessment development.
Affiliation(s)
- Brett Vaughan
- Osteopathy Unit, School of Biomedical & Health Sciences, Victoria University, Melbourne, Australia
- Institute of Sport, Exercise and Active Living, Victoria University, Melbourne, Australia
- Vivienne Sullivan
- Osteopathy Unit, School of Biomedical & Health Sciences, Victoria University, Melbourne, Australia
- Cameron Gosling
- Department of Epidemiology & Preventive Medicine, Monash University, Melbourne, Australia
- Patrick McLaughlin
- Osteopathy Unit, School of Biomedical & Health Sciences, Victoria University, Melbourne, Australia
- Institute of Sport, Exercise and Active Living, Victoria University, Melbourne, Australia
- Gary Fryer
- Osteopathy Unit, School of Biomedical & Health Sciences, Victoria University, Melbourne, Australia
- Institute of Sport, Exercise and Active Living, Victoria University, Melbourne, Australia
- Roger Gabb
- Teaching & Learning Taskforce, Faculty of Health, Engineering & Science, Victoria University, Melbourne, Australia
4
Bolstad AL, Xu Y, Shen JJ, Covelli M, Torpey M. Reliability of standardized patients used in a communication study on international nurses in the United States of America. Nursing & Health Sciences 2012; 14:67-73. [PMID: 22321160] [DOI: 10.1111/j.1442-2018.2011.00667.x]
Abstract
As an evaluation method, standardized patients have a long history in medical education and research, yet they are less established in nursing. This paper explores the reliability of using standardized patients as the evaluative method in a pilot study of communication competence among international nurses. Standardized patients and second raters scored the same encounters, and we examined the scores using intraclass correlation coefficients. Anecdotal comments by the two types of raters were assessed qualitatively to highlight similarities and areas of difference between them. Reliability coefficients for the standardized patients' scores on the composite variables of Establishing Communicative Rapport, Therapeutic Communication, Non-Verbal Communication, and Overall Satisfaction ranged from 0.42 (P = 0.09) to 0.755 (P < 0.01). These results show that standardized patient evaluation has moderate to substantial reliability when compared with second raters of the same set of clinical encounters, similar to the reliability established over many decades of medical research. Greater use of this dynamic and interactive technique may be beneficial to nursing education and research.
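The agreement statistic used here, the intraclass correlation coefficient, can be computed directly from two-way ANOVA mean squares. A minimal sketch of ICC(2,1) (two-way random effects, absolute agreement, single rater, per Shrout & Fleiss), using invented ratings rather than the study's data:

    import numpy as np

    def icc_2_1(scores):
        # ICC(2,1): two-way random effects, absolute agreement, single rater.
        # `scores` is an (n_subjects, n_raters) array of ratings.
        n, k = scores.shape
        grand = scores.mean()
        ms_r = k * ((scores.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # subjects
        ms_c = n * ((scores.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # raters
        ss_err = (((scores - grand) ** 2).sum()
                  - (n - 1) * ms_r - (k - 1) * ms_c)
        ms_e = ss_err / ((n - 1) * (k - 1))
        return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

    # Invented example: six encounters, each scored by the SP and a second rater.
    ratings = np.array([[4, 3], [5, 5], [3, 3], [2, 3], [5, 4], [4, 4]], float)
    print(f"ICC(2,1) = {icc_2_1(ratings):.3f}")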
Affiliation(s)
- Anne L Bolstad
- University of Nevada Las Vegas School of Nursing, Las Vegas, NV 89154-3018, USA
5
Raymond MR, Clauser BE, Swygert K, van Zanten M. Measurement precision of spoken English proficiency scores on the USMLE Step 2 Clinical Skills Examination. Academic Medicine 2009; 84:S83-S85. [PMID: 19907394] [DOI: 10.1097/acm.0b013e3181b37d01]
Abstract
BACKGROUND Previous research has shown that ratings of English proficiency on the United States Medical Licensing Examination Clinical Skills Examination are highly reliable. However, the score distributions for native and nonnative speakers of English are sufficiently different to suggest that reliability should be investigated separately for each group. METHOD Generalizability theory was used to obtain reliability indices separately for native and nonnative speakers of English (N = 29,084). Conditional standard errors of measurement were also obtained to evaluate measurement precision for each group at specific score levels. RESULTS Overall indices of reliability (phi) exceeded 0.90 for both native and nonnative speakers, and the two groups were measured with nearly equal precision throughout the score distribution. However, measurement precision decreased at lower levels of proficiency for all examinees. CONCLUSIONS The results of this and future studies may be helpful in understanding and minimizing sources of measurement error in particular regions of the score distribution.
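The overall dependability index (phi) reported here comes from generalizability theory. For a fully crossed persons x raters design it can be estimated from ANOVA mean squares; a minimal sketch on invented ratings (not USMLE data):

    import numpy as np

    def phi_coefficient(scores):
        # Dependability (phi) for a fully crossed persons x raters G-study,
        # for the mean over the observed number of raters.
        # `scores` is an (n_persons, n_raters) array of ratings.
        n_p, n_r = scores.shape
        grand = scores.mean()
        ms_p = n_r * ((scores.mean(axis=1) - grand) ** 2).sum() / (n_p - 1)
        ms_r = n_p * ((scores.mean(axis=0) - grand) ** 2).sum() / (n_r - 1)
        ss_res = (((scores - grand) ** 2).sum()
                  - (n_p - 1) * ms_p - (n_r - 1) * ms_r)
        ms_e = ss_res / ((n_p - 1) * (n_r - 1))
        var_p = max((ms_p - ms_e) / n_r, 0.0)  # person (true-score) variance
        var_r = max((ms_r - ms_e) / n_p, 0.0)  # rater variance
        return var_p / (var_p + (var_r + ms_e) / n_r)

    # Invented proficiency ratings: eight examinees rated by three raters.
    ratings = np.array([[4, 4, 5], [2, 3, 2], [5, 5, 5], [3, 3, 4],
                        [1, 2, 2], [4, 5, 4], [2, 2, 3], [5, 4, 5]], float)
    print(f"phi = {phi_coefficient(ratings):.3f}")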
Affiliation(s)
- Mark R Raymond
- National Board of Medical Examiners, 3750 Market Street, Philadelphia, PA 19104, USA.
6
Chur-Hansen A, Elliott TE, Klein NC, Howell CA. Assessment of English-language proficiency for general practitioner registrars. The Journal of Continuing Education in the Health Professions 2007; 27:36-41. [PMID: 17385731] [DOI: 10.1002/chp.92]
Abstract
INTRODUCTION The English-language proficiency of medical practitioners is an issue attracting increasing attention in medical education. To provide the best language education support, it is essential that learning needs are assessed and that useful feedback and advice are provided. We report the outcomes of a language assessment embedded within a comprehensive general practice learning-needs analysis. METHODS A group of general practitioner registrars (N = 18) training in Adelaide, South Australia, participated in the learning-needs analysis, which used reliable, validated rating scales providing information on both verbal and written language skills. These scales were applied in the context of an objective structured clinical interview, and the interviews were videotaped to enable multiple ratings per candidate. Following the learning-needs analysis, ratings were collated and fed back individually to participants using a feedback report and template. RESULTS Of this sample, 5 registrars (28%) were found to need no assistance with either spoken or written language, 5 had poor handwriting, 5 were considered to have minor difficulties, and 3 (17%) were identified as having substantial spoken and written English-language difficulties. These outcomes allowed medical educators to focus the language education support offered to the registrars appropriately. CONCLUSIONS Language skills can be usefully assessed within a more comprehensive learning-needs analysis. In combination with this assessment, the provision of specific feedback and recommendations for appropriate language-learning opportunities is essential.
Affiliation(s)
- Anna Chur-Hansen
- Discipline of Psychiatry, University of Adelaide, South Australia, Australia.
7
Humphrey-Murto S, Smee S, Touchie C, Wood TJ, Blackmore DE. A comparison of physician examiners and trained assessors in a high-stakes OSCE setting. Academic Medicine 2005; 80:S59-62. [PMID: 16199459] [DOI: 10.1097/00001888-200510001-00017]
Abstract
BACKGROUND The Medical Council of Canada (MCC) administers an objective structured clinical examination for licensure. Traditionally, physician examiners (PEs) have evaluated these examinees, but recruitment of physicians is becoming more difficult, so determining whether alternative scorers can be used is of increasing importance. METHOD In 2003, the MCC ran a study using trained assessors (TAs) simultaneously with PEs. Four examination centers and three history-taking stations were selected, and health care workers were recruited as the TAs. RESULTS A 3 × 2 × 4 mixed analysis of variance indicated no significant difference between the two scorer types (F(1,462) = 0.01, p = .94). There were significant interaction effects, localized to site 1/station 3, site 3/station 2, and site 4/station 1. Pass/fail decisions would have misclassified 14.4-25.01% of examinees. CONCLUSION Trained assessors may be a valid alternative to PEs for completing checklists in history-taking stations, but their role in completing global ratings is not supported by this study.
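The headline misclassification figure is simply the rate of disagreement between the two scorer types' pass/fail decisions. A minimal, hypothetical sketch (invented decisions, not MCC data):

    import numpy as np

    # Invented pass/fail decisions by physician examiners (PEs) and
    # trained assessors (TAs) for the same ten examinees.
    pe_pass = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1], dtype=bool)
    ta_pass = np.array([1, 0, 0, 1, 1, 1, 1, 0, 0, 1], dtype=bool)

    # Proportion of examinees who would receive a different pass/fail
    # decision depending on which scorer type is used.
    misclassified = np.mean(pe_pass != ta_pass)
    print(f"Examinees classified differently: {misclassified:.1%}")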
8
van Zanten M, Boulet JR, McKinley D, Whelan GP. Evaluating the spoken English proficiency of international medical graduates: detecting threats to the validity of standardised patient ratings. Medical Education 2003; 37:69-76. [PMID: 12535117] [DOI: 10.1046/j.1365-2923.2003.01400.x]
Abstract
PURPOSE To investigate potential threats to the validity of the spoken English proficiency ratings provided by standardised patients (SPs) in high-stakes clinical skills examinations. METHOD Spoken English ratings from 43,327 patient encounters were studied. These involved over 5000 candidates, 40% of whom were female and 33% of whom self-reported English to be their native language. Over 100 SPs were involved in the study, 51% of whom were female and 90% of whom were native English speakers. Possible differences in English ratings were studied as a function of candidate and SP gender, and as a function of candidate and SP native language (English versus all other languages). RESULTS No significant candidate-by-SP gender effect was detected, and there were no meaningful differences in mean English ratings as a function of SP or candidate gender. Likewise, English ratings did not vary as a function of either candidate or SP native language. While candidates' mean English ratings were not associated with the native language of the SP, native English-speaking candidates did achieve significantly higher ratings. DISCUSSION The lack of significant interaction between candidate and SP gender, and between candidate and SP native language, suggests that the SPs provided unbiased English ratings. These results, combined with the expected higher English ratings given to candidates from English-speaking backgrounds, provide additional evidence to support the validity and fairness of spoken English proficiency ratings provided by standardised patients.
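The bias checks described here are interaction tests in a two-way factorial design. A minimal sketch with statsmodels, on invented encounter-level data rather than the study's 43,327 encounters (the rating scale and column names are assumptions):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Invented data: one English rating per encounter, with candidate
    # and SP gender as crossed factors and no bias built in.
    rng = np.random.default_rng(0)
    n = 400
    df = pd.DataFrame({
        "cand_gender": rng.choice(["F", "M"], size=n),
        "sp_gender": rng.choice(["F", "M"], size=n),
        "rating": 6 + rng.normal(0, 1, size=n),
    })

    # The candidate-by-SP interaction term tests whether ratings depend
    # on the pairing of candidate and SP gender (a potential rater bias).
    model = smf.ols("rating ~ C(cand_gender) * C(sp_gender)", data=df).fit()
    print(anova_lm(model, typ=2))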
Affiliation(s)
- Marta van Zanten
- Clinical Skills Assessment, Educational Commission for Foreign Medical Graduates, 3624 Market Street, Philadelphia, PA 19104, USA.
9
Mavis BE, Henry RC. Between a rock and a hard place: finding a place for the OSCE in medical education. Medical Education 2002; 36:408-409. [PMID: 12028388] [DOI: 10.1046/j.1365-2923.2002.01241.x]
Affiliation(s)
- Brian E Mavis
- Office of Medical Education Research and Development, College of Human Medicine, Michigan State University, East Lansing 48824-1316, USA.