1
Bond WF, Mischler MJ, Lynch TJ, Ebert-Allen RA, Mou KM, Aiyer M, Park YS. The Use of Virtual Standardized Patients for Practice in High Value Care. Simul Healthc 2023;18:147-154. [PMID: 35322798] [DOI: 10.1097/sih.0000000000000659]
Abstract
INTRODUCTION This study examined the influence of high value care (HVC)-focused virtual standardized patients (VSPs) on learner attitudes toward cost-conscious care (CCC), performance on subsequent standardized patient (SP) encounters, and the correlation of VSP performance with educational outcomes. METHOD After didactic sessions on HVC, third-year medical students participated in a randomized crossover design of simulation modalities consisting of 4 VSPs and 3 SPs. Surveys of attitudes toward CCC were administered before didactics and after the first simulation method. Performance markers included automated VSP grading and, for SP cases, faculty-graded observational checklists and patient notes. Performance was compared between modalities using t tests and analysis of variance and then correlated with US Medical Licensing Examination (USMLE) performance. RESULTS Sixty-six students participated (VSP-first: n = 37; SP-first: n = 29). Attitudes toward CCC significantly improved after training (Cohen d = 0.35, P = 0.043), regardless of modality. Simulation order did not impact learner performance for SP encounters. Learners randomized to VSP-first performed significantly better within VSP cases for interview (Cohen d = 0.55, P = 0.001) and treatment (Cohen d = 0.50, P = 0.043). The HVC component of learner performance on the SP simulations significantly correlated with USMLE Step 1 (r = 0.26, P = 0.038) and Step 2 Clinical Knowledge (r = 0.33, P = 0.031) performance. CONCLUSIONS High value care didactics combined with either VSPs or SPs positively influenced attitudes toward CCC. The ability to detect an impact of VSPs on learner SP performance was limited by content specificity and sample size.
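The effect sizes above are Cohen's d values. As a hedged illustration of how such a standardized mean difference is computed, the sketch below uses invented group means and SDs (chosen only so the result lands near the reported d = 0.35); it is not the study's data or code, and the study's paired pre/post design may have used a different SD denominator.

```python
# Illustrative sketch of Cohen's d (standardized mean difference).
# The means, SDs, and group sizes below are invented for illustration;
# they are NOT the study's data.
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d for two groups, using the pooled standard deviation."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical post- vs. pre-training attitude scores on a 5-point scale:
d = cohens_d(3.9, 0.8, 37, 3.6, 0.9, 29)
print(round(d, 2))  # 0.35, a small-to-medium effect by Cohen's benchmarks
```

With these assumptions, a 0.3-point shift against a pooled SD of about 0.85 yields d ≈ 0.35, the magnitude the abstract describes.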
Affiliation(s)
- William F Bond
- From Jump Simulation (W.F.B., M.J.M., T.J.L., R.E.A., K.M.M., and M.A.), a collaboration of OSF Healthcare and the University of Illinois College of Medicine at Peoria; the Department of Internal Medicine (T.J.L., M.J.M., M.A.), Department of Pediatrics (T.J.L., M.J.M.), and Department of Emergency Medicine (W.F.B.), University of Illinois College of Medicine at Peoria; and the Department of Medical Education (Y.S.P.), University of Illinois College of Medicine at Chicago, Chicago, IL
2
Affiliation(s)
- Shanu Gupta
- Corresponding author: Shanu Gupta, MD, Director of Education, Rush University Hospitalists, 10 Kellogg, 1717 West Congress Parkway, Chicago, IL 60612, 312.942.4200,
3
Gardner KM. Developing a Customized Multiple Interview for Dental School Admissions. J Dent Educ 2014. [DOI: 10.1002/j.0022-0337.2014.78.4.tb05711.x]
4
Berendonk C, Stalmeijer RE, Schuwirth LWT. Expertise in performance assessment: assessors' perspectives. Adv Health Sci Educ Theory Pract 2013;18:559-71. [PMID: 22847173] [PMCID: PMC3767885] [DOI: 10.1007/s10459-012-9392-x]
Abstract
The recent rise of interest among the medical education community in individual faculty making subjective judgments about medical trainee performance appears to be directly related to the introduction of notions of integrated competency-based education and assessment for learning. Although it is known that assessor expertise plays an important role in performance assessment, the roles played by different factors remain to be unraveled. We therefore conducted an exploratory study with the aim of building a preliminary model to gain a better understanding of assessor expertise. Using a grounded theory approach, we conducted seventeen semi-structured interviews with individual faculty members who differed in professional background and assessment experience. The interviews focused on participants' perceptions of how they arrived at judgments about student performance. The analysis resulted in three categories (assessor characteristics, assessors' perceptions of the assessment tasks, and the assessment context) and three recurring themes within these categories (perceived challenges, coping strategies, and personal development). Central to understanding the key processes in performance assessment appear to be the dynamic interrelatedness of the different factors and the developmental nature of the processes. The results are supported by literature from the field of expertise development and are in line with findings from social cognition research. The conceptual framework has implications for faculty development and the design of programs of assessment.
Affiliation(s)
- Christoph Berendonk
- Institute of Medical Education, Faculty of Medicine, University of Berne, Konsumstrasse 13, 3010, Berne, Switzerland,
5
Raymond MR, Luciw-Dubas UA. The second time around: accounting for retest effects on oral examinations. Eval Health Prof 2011;33:386-403. [PMID: 20801978] [DOI: 10.1177/0163278710374855]
Abstract
Years of research with high-stakes written tests indicate that although repeat examinees typically experience score gains between their first and subsequent attempts, their pass rates remain considerably lower than pass rates for first-time examinees. This outcome is consistent with expectations. Comparable studies of the performance of repeat examinees on oral examinations are lacking. The current research evaluated pass rates for more than 50,000 examinees on written and oral exams administered by six medical specialty boards for several recent years. Pass rates for first-time examinees were similar for both written and oral exams, averaging about 84% across all boards. Pass rates for repeat examinees on written exams were expectedly lower, ranging from 22% to 51%, with an average of 36%. However, pass rates for repeat examinees on oral exams were markedly higher than for written exams, ranging from 53% to 77%, with an average of 65%. Four explanations for the elevated repeat pass rates on oral exams are proposed, including an increase in examinee proficiency, construct-irrelevant variance, measurement error (score unreliability), and memorization of test content. Simulated data are used to demonstrate that roughly one third of the score increase can be explained by measurement error alone. The authors suggest that a substantial portion of the score increase can also likely be attributed to construct-irrelevant variance. Results are discussed in terms of their implications for making pass-fail decisions when retesting is allowed. The article concludes by identifying areas for future research.
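The measurement-error argument above lends itself to a small Monte Carlo illustration. This is a hedged sketch under assumed parameters (a score reliability of 0.8 and a cut score placed so that roughly 84% pass first time), not the authors' simulated-data method: no examinee's true proficiency changes between attempts, yet the repeat pass rate is well above zero because a fresh error draw lets near-miss failers clear the cut.

```python
# Sketch: retest "gains" from measurement error alone (assumed parameters,
# not the paper's simulation). True proficiency never changes.
import random

random.seed(0)
N = 100_000
CUT = -1.0          # pass mark on the latent z-score scale (assumption)
RELIABILITY = 0.8   # assumed score reliability

# Error SD chosen so Var(true) / Var(observed) equals the reliability.
err_sd = (1 / RELIABILITY - 1) ** 0.5

true_scores = [random.gauss(0, 1) for _ in range(N)]
first = [t + random.gauss(0, err_sd) for t in true_scores]
first_pass = sum(s >= CUT for s in first) / N

# Failing examinees retest: same true score, fresh error draw.
repeat_true = [t for t, s in zip(true_scores, first) if s < CUT]
second = [t + random.gauss(0, err_sd) for t in repeat_true]
repeat_pass = sum(s >= CUT for s in second) / len(repeat_true)

print(f"first-time pass rate: {first_pass:.2f}")
print(f"repeat pass rate:     {repeat_pass:.2f}")
```

Under these assumptions the repeat pass rate stays below the first-time rate but well above zero, even though nobody improved: failers closest to the cut score are the most likely to be flipped by a new error draw, which is the score-unreliability mechanism the authors quantify.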
Affiliation(s)
- Mark R Raymond
- National Board of Medical Examiners, Philadelphia, PA, USA.
6
Touchie C, Humphrey-Murto S, Ainslie M, Myers K, Wood TJ. Two models of raters in a structured oral examination: does it make a difference? Adv Health Sci Educ Theory Pract 2010;15:97-108. [PMID: 19657717] [DOI: 10.1007/s10459-009-9175-1]
Abstract
Oral examinations have become more standardized over recent years. Traditionally, a small number of raters were used for this type of examination. Past studies suggested that more raters should improve reliability. We compared the results of a multi-station structured oral examination using two different rater models: raters based in a station (station-specific raters) and raters who follow a candidate throughout the entire examination (candidate-specific raters). Two station-specific and two candidate-specific raters simultaneously evaluated internal medicine residents' performance at each station. No significant differences were found in examination scores. Reliability was higher for the candidate-specific raters. Inter-rater reliability, internal consistency, and a study of station inter-correlations suggested that a halo effect may be present for candidates examined by candidate-specific raters. This study suggests that although the model of candidate-specific raters was more reliable than the model of station-specific raters for the overall examination, the presence of a halo effect may influence individual examination outcomes.
Affiliation(s)
- Claire Touchie
- Division of General Internal Medicine, Department of Medicine, University of Ottawa, Ottawa, ON, Canada.
- The Ottawa Hospital, General Campus, 501 Smyth Road, LM-14, Ottawa, ON, K1H 8L6, Canada.
- Susan Humphrey-Murto
- Division of Rheumatology, Department of Medicine, University of Ottawa, Ottawa, ON, Canada
- Martha Ainslie
- Division of Respirology, Department of Medicine, University of Calgary, Calgary, AB, Canada
- Kathryn Myers
- Division of General Internal Medicine, Department of Medicine, University of Western Ontario, London, ON, Canada
7
Roh HR, Kim JK, Hwang JY, Park SB, Lee SW. Experience of implementation of objective structured oral examination for ethical competence assessment. Korean J Med Educ 2009;21:23-33. [PMID: 25812954] [DOI: 10.3946/kjme.2009.21.1.23]
Abstract
PURPOSE We developed an objective structured oral examination (OSOE) case to assess the medical ethics of students. The aim of this study was to assess the reliability of the OSOE with generalizability theory. METHODS One 10-minute OSOE that contained key questions was developed. The evaluation sheet consisted of 4 domains: moral sensitivity, moral reasoning, decision making, and attitude. The total number of items was 13. The numbers of checklist items and global rating items were 11 and 2, respectively. Items and key questions were validated by 6 professionals. Standardization of the raters and the pilot study was performed before the OSOE. Fifty-four third-year medical students participated in the OSOE. The OSOE was duplicated, and 2 professors assessed 1 student independently. Each station lasted 8 minutes and was followed by a 2-minute interval, during which raters completed the checklist forms. We analyzed the reliability of the OSOE with the GENOVA program. RESULTS The reliability (generalizability coefficient) was 0.945, and the interrater agreement was 0.867. The type of item, checklist or global rating, was the largest variance component. The reliability of the checklist alone was 0.668 and that of the global rating alone was 0.363. CONCLUSION The OSOE is reliable and can be used to assess ethics. More research should focus on achieving validity.
Affiliation(s)
- Hye Rin Roh
- Department of Surgery, School of Medicine, Kangwon National University, Chuncheon, Korea
- Ja-Kyoung Kim
- Department of Pediatrics, School of Medicine, Kangwon National University, Chuncheon, Korea
- Jong-Yun Hwang
- Department of Obstetrics & Gynecology, School of Medicine, Kangwon National University, Chuncheon, Korea
- Sung Bae Park
- Department of Surgery, School of Medicine, Kangwon National University, Chuncheon, Korea
- Sang Wook Lee
- Department of Urology, School of Medicine, Kangwon National University, Chuncheon, Korea
8
Kim J, Neilipovitz D, Cardinal P, Chiu M, Clinch J. A pilot study using high-fidelity simulation to formally evaluate performance in the resuscitation of critically ill patients: The University of Ottawa Critical Care Medicine, High-Fidelity Simulation, and Crisis Resource Management I Study. Crit Care Med 2006;34:2167-74. [PMID: 16775567] [DOI: 10.1097/01.ccm.0000229877.45125.cc]
Abstract
OBJECTIVE Resuscitation of critically ill patients requires medical knowledge, clinical skills, and nonmedical skills, or crisis resource management (CRM) skills. There is currently no gold standard for evaluation of CRM performance. The primary objective was to examine the use of high-fidelity simulation as a medium to evaluate CRM performance. Since no gold standard for measuring performance exists, the secondary objective was the validation of a measuring instrument for CRM performance, the Ottawa Crisis Resource Management Global Rating Scale (or Ottawa GRS). DESIGN First- and third-year residents participated in two simulator scenarios, recreating emergencies seen in acute care settings. Three raters then evaluated resident performance using edited video recordings of simulator performance. SETTING A Canadian university tertiary hospital. INTERVENTIONS The Ottawa GRS was used, which provides a 7-point Likert scale for performance in five categories of CRM and an overall performance score. MEASUREMENTS AND MAIN RESULTS Construct validity was measured on the basis of content validity, response process, internal structure, and response to other variables. One variable measured in this study was the level of training. A t-test analysis of Ottawa GRS scores was conducted to examine response to the variable of level of training. Intraclass correlation coefficient scores were used to measure interrater reliability for both scenarios. Thirty-two first-year and 28 third-year residents participated in the study. Third-year residents produced higher mean scores for overall CRM performance than first-year residents (p < .0001) and in all individual categories within the Ottawa GRS (p = .0019 to p < .0001). This difference was noted for both scenarios and for each individual rater (p = .0061 to p < .0001). No statistically significant difference in resident scores was observed between scenarios. Intraclass correlation coefficient scores of .59 and .61 were obtained for scenarios 1 and 2, respectively. CONCLUSIONS Data obtained using the Ottawa GRS in measuring CRM performance during high-fidelity simulation scenarios support evidence of construct validity. Data also indicate the presence of acceptable interrater reliability when using the Ottawa GRS.
Affiliation(s)
- John Kim
- Division of Critical Care Medicine and Department of Anesthesiology, University of Ottawa, The Ottawa Hospital, Ottawa, ON, Canada
9
Turnbull J, Turnbull J, Jacob P, Brown J, Duplessis M, Rivest J. Contextual considerations in summative competency examinations: relevance to the long case. Acad Med 2005;80:1133-7. [PMID: 16306287] [DOI: 10.1097/00001888-200512000-00014]
Abstract
Long-case patient-based examinations previously formed the basis of summative competency testing in physician certification examinations. These exams were found to be unreliable and have fallen from favor. During the authors' deliberation of the long case in the neurology certification examinations of the Royal College of Physicians and Surgeons of Canada, they considered the examination context and concluded that the appropriate psychometric analysis of the exams is highly contingent on the context. The examination context underlying certification examinations has evolved considerably; within a different context, a more cohesive test system based on a quality assurance framework could better manage substantive psychometric issues around case specificity, comprehensiveness, reliability, and compensability. These arguments are in small part psychometric, but are mostly philosophical and have relevance to the profession and the public.
Affiliation(s)
- John Turnbull
- McMaster University Medical Centre, Room 4U7, 1200 Main Street W, Hamilton, Ontario, Canada L8N 3Z5.
10
Eva KW, Reiter HI, Rosenfeld J, Norman GR. The ability of the multiple mini-interview to predict preclerkship performance in medical school. Acad Med 2004;79:S40-2. [PMID: 15383385] [DOI: 10.1097/00001888-200410001-00012]
Abstract
PROBLEM STATEMENT AND BACKGROUND One of the greatest challenges continuing to face medical educators is the development of an admissions protocol that provides valid information pertaining to the noncognitive qualities candidates possess. An innovative protocol, the Multiple Mini-Interview (MMI), has recently been shown to be feasible, acceptable, and reliable. This article presents a first assessment of the technique's validity. METHOD Forty-five candidates to the undergraduate MD program at McMaster University participated in an MMI in spring 2002 and enrolled in the program the following autumn. Performance on this tool and on the traditional protocol was compared to performance on preclerkship evaluation exercises. RESULTS The MMI was the best predictor of objective structured clinical examination performance, and grade point average was the most consistent predictor of performance on multiple-choice question examinations of medical knowledge. CONCLUSIONS While further validity testing is required, the MMI appears better able than traditional tools designed to assess the noncognitive qualities of applicants to predict preclerkship performance.
Affiliation(s)
- Kevin W Eva
- Department of Clinical Epidemiology and Biostatistics, Programme for Educational Research and Development, McMaster University, Hamilton, Ontario, Canada.
11
Eva KW, Rosenfeld J, Reiter HI, Norman GR. An admissions OSCE: the multiple mini-interview. Med Educ 2004;38:314-26. [PMID: 14996341] [DOI: 10.1046/j.1365-2923.2004.01776.x]
Abstract
CONTEXT Although health sciences programmes continue to value non-cognitive variables such as interpersonal skills and professionalism, it is not clear that current admissions tools like the personal interview are capable of assessing ability in these domains. Hypothesising that many of the problems with the personal interview might be explained, at least in part, by it being yet another measurement tool that is plagued by context specificity, we have attempted to develop a multiple sample approach to the personal interview. METHODS A group of 117 applicants to the undergraduate MD programme at McMaster University participated in a multiple mini-interview (MMI), consisting of 10 short objective structured clinical examination (OSCE)-style stations, in which they were presented with scenarios that required them to discuss a health-related issue (e.g. the use of placebos) with an interviewer, interact with a standardised confederate while an examiner observed the interpersonal skills displayed, or answer traditional interview questions. RESULTS The reliability of the MMI was observed to be 0.65. Furthermore, the hypothesis that context specificity might reduce the validity of traditional interviews was supported by the finding that the variance component attributable to candidate-station interaction was greater than that attributable to candidate. Both applicants and examiners were positive about the experience and the potential for this protocol. DISCUSSION The principles used in developing this new admissions instrument, the flexibility inherent in the multiple mini-interview, and its feasibility and cost-effectiveness are discussed.
Affiliation(s)
- Kevin W Eva
- Department of Clinical Epidemiology and Biostatistics, Programme for Educational Research and Development, McMaster University, Hamilton, Ontario, Canada.
12
Wass V, Wakeford R, Neighbour R, Van der Vleuten C. Achieving acceptable reliability in oral examinations: an analysis of the Royal College of General Practitioners membership examination's oral component. Med Educ 2003;37:126-131. [PMID: 12558883] [DOI: 10.1046/j.1365-2923.2003.01417.x]
Abstract
BACKGROUND The membership examination of the Royal College of General Practitioners (RCGP) uses structured oral examinations to assess candidates' decision-making skills and professional values. AIM To estimate three indices of reliability for these oral examinations. METHODS In summer 1998, a revised system was introduced for the oral examinations. Candidates took two 20-minute (five-topic) oral examinations with two examiner pairs. Areas for oral topics had been identified. Examiners set their own topics in three competency areas (communication, professional values and personal development) and four contexts (patient, teamwork, personal, society). They worked in two pairs (a quartet) to preplan questions on 10 topics. The results were analysed in detail. Generalisability theory was used to estimate three indices of reliability: (A) intercase reliability, (B) pass/fail decision reliability, and (C) the standard error of measurement (SEM). For each index, a benchmark requirement was preset at (A) 0.8, (B) 0.9, and (C) 0.5. RESULTS There were 896 candidates in total. Of these, 87 candidates (9.7%) failed. Total score variance was attributed to: 41% candidates, 32% oral content, and 27% examiners and general error. Reliability coefficients were: (A) intercase 0.65; (B) pass/fail 0.85. The SEM was 0.52 (i.e. precise enough to distinguish within one unit on the rating scale). Extending testing time to four 20-minute oral examinations, each with two examiners, or five orals, each with one examiner, would improve intercase and pass/fail reliabilities to 0.78 and 0.94, respectively. CONCLUSION Structured oral examinations can achieve reliabilities appropriate to high-stakes examinations if sufficient resources are available.
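The projected reliability gain from lengthening the examination can be approximated with the classical Spearman-Brown prophecy formula. This is a textbook analogy, not the authors' computation (their projections come from a generalizability-theory decision study), but doubling the oral component from the observed intercase reliability of 0.65 lands close to the 0.78 they report for four orals:

```python
def spearman_brown(r, k):
    """Projected reliability when test length is scaled by a factor k
    (classical test theory's prophecy formula)."""
    return k * r / (1 + (k - 1) * r)

# Doubling the two 20-minute orals (k = 2) from the reported intercase
# reliability of 0.65:
print(f"{spearman_brown(0.65, 2):.2f}")  # 0.79, close to the reported 0.78
```

The small gap between 0.79 and the reported 0.78 is expected: the prophecy formula assumes added orals are strictly parallel, whereas the decision study also accounts for the examiner facet.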
Affiliation(s)
- Val Wass
- Department of General Practice and Primary Care, Guy's, King's and St Thomas' School of Medicine, London, UK.
13
Hutchinson L, Aitken P, Hayes T. Are medical postgraduate certification processes valid? A systematic review of the published evidence. Med Educ 2002;36:73-91. [PMID: 11849527] [DOI: 10.1046/j.1365-2923.2002.01120.x]
Abstract
OBJECTIVE To collate the published works on validation of assessments used in postgraduate medical certification. DESIGN Systematic review of original papers on the reliability and validity of assessments used in medical postgraduate certification. SETTING Medical and education research databases. RESULTS Fifty-five papers were identified from 1985 to 2000. A wide range of approaches to validation were employed. Inter-rater reliability and internal consistency were the most reported foci for validation. There were just two papers on consequential validity, and only a few on construct validity. These two forms of validity are considered central in recent general education writing. The majority of papers were from general and family practice. There was a noticeable lack of papers from the UK Royal Colleges (except the Royal College of General Practitioners), despite 5 years of the new unified grade and the renewed emphasis on the role of the Royal Colleges in setting assessment criteria. CONCLUSIONS There is a relative scarcity of published papers on validation of assessment for postgraduate medical certification, considering the influence these high-stakes processes have on doctors' career progression and employment opportunities. General and family practice institutions in a number of English-speaking countries have set an example to others by showing that rigour and transparency in assessment development and implementation can be reflected in publication.
14
Wood TJ, Cunnington JP, Norman GR. Assessing the measurement properties of a clinical reasoning exercise. Teach Learn Med 2000;12:196-200. [PMID: 11273369] [DOI: 10.1207/s15328015tlm1204_6]
Abstract
BACKGROUND A challenge for Problem-Based Learning (PBL) schools is to introduce reliable, valid, and cost-effective testing methods into the curriculum in such a way as to maximize the potential benefits of PBL while avoiding problems associated with assessment techniques like multiple-choice question, or MCQ, tests. PURPOSE We document the continued development of an exam that was designed to satisfy the demands of both PBL and the scientific principles of measurement. METHODS A total of 102 medical students wrote a clinical reasoning exercise (CRE) as a requirement for two consecutive units of instruction. Each CRE consisted of a series of 18 short clinical problems designed to assess a student's knowledge of the mechanism of diseases that were covered in three subunits located within each unit. Responses were scored by the student's own tutor and a second, crossover tutor. RESULTS Generalizability coefficients for raters, subunits, and individual problems were low, but the reliability of the overall test scores and the reliability of the scores across 2 units of instruction were high. Subsequent analyses found that the crossover tutor's ratings were lower than the ratings provided by one's own tutor, and the CRE correlated with the biology component of a progress test. CONCLUSION The magnitude of the generalizability coefficients demonstrates that the CRE is capable of detecting differences in reasoning across knowledge domains and is therefore a useful evaluation tool.
Affiliation(s)
- T J Wood
- Faculty of Health Sciences, McMaster University, Hamilton, Ontario, Canada.
15
Abstract
Undergraduate medical education in Manchester is undergoing wholesale revision, with the introduction of problem-based learning (PBL) in each successive year of the curriculum, as the cohort of students who joined the faculty in 1994 advances through the course. This cohort has now entered year 3, which is primarily hospital-based. In preparation for this, we have explored the development of an OSCE, not only to assess core interpersonal skills such as history taking, clinical examination, and the ability to explain things to patients, but also to integrate the examination of important skills relating to investigational sciences. These include the correct choice of laboratory tests, accurate interpretation of data, and appropriate selection of clinical responses to test results.
Affiliation(s)
- E W Benbow
- Department of Pathological Sciences, University of Manchester, UK