1. Salmon G, Pugsley L. The mini-PAT as a multi-source feedback tool for trainees in child and adolescent psychiatry: assessing whether it is fit for purpose. BJPsych Bull 2017; 41:115-119. PMID: 28400971; PMCID: PMC5376729; DOI: 10.1192/pb.bp.115.052720.
Abstract
This paper discusses the research supporting the use of multi-source feedback (MSF) for doctors and describes the mini-Peer Assessment Tool (mini-PAT), the MSF instrument currently used to assess trainees in child and adolescent psychiatry. The relevance of issues raised in the literature about MSF tools in general is examined in relation to trainees in child and adolescent psychiatry as well as the appropriateness of the mini-PAT for this group. Suggestions for change including modifications to existing MSF tools or the development of a specialty-specific MSF instrument are offered.
2. Zeller MP, Sherbino J, Whitman L, Skeate R, Arnold DM. Design and Implementation of a Competency-Based Transfusion Medicine Training Program in Canada. Transfus Med Rev 2016; 30:30-6. DOI: 10.1016/j.tmrv.2015.11.001.
3. Moonen-van Loon JMW, Overeem K, Govaerts MJB, Verhoeven BH, van der Vleuten CPM, Driessen EW. The reliability of multisource feedback in competency-based assessment programs: the effects of multiple occasions and assessor groups. Acad Med 2015; 90:1093-9. PMID: 25993283; DOI: 10.1097/acm.0000000000000763.
Abstract
PURPOSE Residency programs around the world use multisource feedback (MSF) to evaluate learners' performance. Studies of the reliability of MSF show mixed results. This study aimed to identify the reliability of MSF as practiced across occasions with varying numbers of assessors from different professional groups (physicians and nonphysicians) and the effect on the reliability of the assessment for different competencies when completed by both groups. METHOD The authors collected data from 2008 to 2012 from electronically completed MSF questionnaires. In total, 428 residents completed 586 MSF occasions, and 5,020 assessors provided feedback. The authors used generalizability theory to analyze the reliability of MSF for multiple occasions, different competencies, and varying numbers of assessors and assessor groups across multiple occasions. RESULTS A reliability coefficient of 0.800 can be achieved with two MSF occasions completed by at least 10 assessors per group or with three MSF occasions completed by 5 assessors per group. Nonphysicians' scores for the "Scholar" and "Health advocate" competencies and physicians' scores for the "Health advocate" competency had a negative effect on the composite reliability. CONCLUSIONS A feasible number of assessors per MSF occasion can reliably assess residents' performance. Scores from a single occasion should be interpreted cautiously. However, every occasion can provide valuable feedback for learning. This research confirms that the (unique) characteristics of different assessor groups should be considered when interpreting MSF results. Reliability seems to be influenced by the included assessor groups and competencies. These findings will enhance the utility of MSF during residency training.
Affiliation(s)
- Joyce M W Moonen-van Loon
- J.M.W. Moonen-van Loon is postdoctoral researcher, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands. K. Overeem is postdoctoral researcher, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands. M.J.B. Govaerts is assistant professor, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands. B.H. Verhoeven is pediatric surgeon, Department of Surgery, Radboud University Medical Center, Nijmegen, and assistant professor, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands. C.P.M. van der Vleuten is professor of education, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands. E.W. Driessen is associate professor of education, Department of Educational Development and Research, Maastricht University, Maastricht, The Netherlands
4.
Abstract
AIM To account for the means by which poor performance among career doctors is identified by National Health Service organizations, whether the tools are considered effective and how these processes may be strengthened in the light of revalidation and the requirement for doctors to demonstrate their fitness to practice. METHOD This study sought to look beyond the 'doctor as individual'; as well as considering the typical approaches to managing the practice of an individual, the systems within which the doctor is working were reviewed, as these are also relevant to standards of performance. A qualitative review was undertaken, consisting of a literature review of current practice, a policy review of current documentation from 15 trusts in one deanery locality, and 14 semi-structured interviews with respondents with an overview of processes in use. The framework for the analysis of the data considered tools at three levels: individual, team and organizational. RESULTS Tools are, in the main, reactive, with an individual focus. They rely on colleagues and others to speak out, so their effectiveness is hindered by a reluctance to do so. Tools can lack an evidence base for their use, and there is limited linking of data across contexts and tools. CONCLUSIONS There is more work to be done in evaluating current tools and developing stronger processes. Linkage between data sources needs to be improved, and proactive tools at the organizational level need further development to help with the early identification of performance issues. This would also assist in balancing a wider systems approach with a current overemphasis on individual doctors.
Affiliation(s)
- Rachel Locke
- Senior Research Officer, Faculty of Education, Health and Social Care, The University of Winchester, Winchester, UK; Wessex General Practice Research Lead, GP Education Unit, Southampton University Hospitals Trusts; Associate Tutor, MA Education: Professional Enquiry (Medics Pathway), The University of Winchester, Winchester, UK; Honorary Research Fellow, The University of Winchester, Winchester, UK; Associate Dean at Wessex Deanery, GP at Park Surgery, Chandlers Ford; Honorary Research Professor, The University of Winchester, Winchester, UK
5. Davies JG, Ciantar J, Jubraj B, Bates IP. Use of a multisource feedback tool to develop pharmacists in a postgraduate training program. Am J Pharm Educ 2013; 77:52. PMID: 23610470; PMCID: PMC3631727; DOI: 10.5688/ajpe77352.
Abstract
OBJECTIVES. To evaluate use of a peer-assessment tool as a performance indicator for junior pharmacists in a formal postgraduate training program in London. METHODS. A 4-year retrospective analysis of data gathered using the pharmacy mini-PAT (peer-assessment tool) was undertaken. Assessments, including junior pharmacist self-evaluations, were conducted every 6 months. Overall performance and performance for clustered items were analyzed to determine changes. Assessments by healthcare professionals were then compared between professional groupings, which included pharmacists, physicians, and nurses. RESULTS. There was a significant improvement over time in both self-assessment scores and scores on assessments conducted by others using the mini-PAT. Junior pharmacists rated themselves significantly lower than did their assessors (p<0.001); pharmacist assessors rated the performance of junior pharmacists significantly lower than did other healthcare professionals (p<0.001). Validity, ease of use, and relevance of the pharmacy mini-PAT were demonstrated. CONCLUSIONS. As part of a range of formative evaluations involving assessors from across various health professions, the mini-PAT is a valuable instrument for developing junior pharmacists. A junior pharmacist's mini-PAT result provides a snapshot of his or her performance that can be used to identify key areas requiring further training.
Affiliation(s)
- John Graham Davies
- Institute of Pharmaceutical Science, King’s College London, London, England, United Kingdom
- Julienne Ciantar
- Faculty of Medicine and Surgery, University of Malta, Msida, Malta
- Barry Jubraj
- Chelsea & Westminster Hospital NHS Foundation Trust, London, England, United Kingdom
- University College London School of Pharmacy, London, England, United Kingdom
- Ian Peter Bates
- University College London School of Pharmacy, London, England, United Kingdom
6. Sharma N, Cui Y, Leighton JP, White JS. Team-based assessment of medical students in a clinical clerkship is feasible and acceptable. Med Teach 2012; 34:555-61. PMID: 22746962; DOI: 10.3109/0142159x.2012.669083.
Abstract
BACKGROUND This study describes the development, implementation and evaluation of a team-based, multi-source method of assessment in which students on a clinical clerkship were provided with feedback on their performance as observed by physicians, residents, nurses, peers, patients and administrators. METHODS The instrument was developed by reviewing existing assessment items and by obtaining input from assessors and students. Numerical data and written comments provided to students were collected, internal consistency was estimated and interviews and focus groups were used to determine acceptability to assessors and students. RESULTS A total of 1068 assessors completed 3501 forms for 127 students. Internal consistency estimates for each assessment form were acceptable (Cronbach's alpha 0.856-0.948). Each student received an average of 188 words of written feedback divided into an average of 26 'Areas of Excellence' and 5 'Areas for Improvement'. Interviews revealed that the majority of students and assessors interviewed found the method acceptable. CONCLUSIONS This study demonstrates that a team-based model of assessment based on the principles of multi-source feedback is a feasible and acceptable form of assessment for medical students learning in a clinical clerkship, and has some advantages over traditional preceptor-based assessment. Further studies will focus on the strengths and weaknesses of this novel assessment technique.
7. Patel JP, Sharma A, West D, Bates IP, Davies JG, Abdel-Tawab R. An evaluation of using multi-source feedback (MSF) among junior hospital pharmacists. Int J Pharm Pract 2011; 19:276-80. PMID: 21733015; DOI: 10.1111/j.2042-7174.2010.00092.x.
Abstract
OBJECTIVE The mini Peer Assessment Tool (mini-PAT) for pharmacists was introduced in 2006 as a formative method of assessing junior hospital pharmacists in the workplace and is the first widespread application of multi-source feedback (MSF) specifically within a pharmacy setting. The aim of this study was to evaluate the feasibility and measurement characteristics of the assessment method, in order to guide its future application. METHODS At the time of the study (September 2008) the assessment had been in place for 3 years. All assessment data from the first 3 years were analysed retrospectively. KEY FINDINGS We evaluated 633 mini-PAT assessments. Over the study period, the assessor response rate remained relatively consistent at 77% and compared favourably with applications of MSF within medicine. Members of the pharmacy team (pharmacists and pharmacy technicians) dominated the assessor nomination lists. It was encouraging to see completed assessment forms returned from nominated doctors and nurses with whom the junior pharmacist had been working. Differences were found between how different occupational groups rated the junior pharmacists against the 16 items on the assessment form (Kruskal-Wallis, df=3, P<0.001). Pharmacist assessors rated the junior pharmacists lowest against all 16 items on the mini-PAT assessment form, whereas nominated doctors rated them the highest. CONCLUSION This study demonstrates that an MSF assessment method can successfully be applied to a wide range of junior hospital pharmacists, and that the majority of junior hospital pharmacists assessed meet expectations.
Affiliation(s)
- Jignesh P Patel
- Pharmaceutical Science Division, School of Biomedical and Health Sciences, King's College London; Department of Policy and Practice, School of Pharmacy, University of London, London, UK.
8. Mackillop LH, Crossley J, Vivekananda-Schmidt P, Wade W, Armitage M. A single generic multi-source feedback tool for revalidation of all UK career-grade doctors: does one size fit all? Med Teach 2011; 33:e75-e83. PMID: 21275537; DOI: 10.3109/0142159x.2010.535870.
Abstract
BACKGROUND The UK Department of Health is considering a single, generic multi-source feedback (MSF) questionnaire to inform revalidation. METHOD Evaluation of an implementation pilot, reporting response rates, assessor mix, question redundancy and participants' perceptions. Reliability was estimated using generalisability theory. RESULTS A total of 12,540 responses were received on 977 doctors. The mean time taken to complete an MSF exercise was 68.2 days. The mean number of responses received per doctor was 12.0 (range 1-17), with no significant difference between specialties. Individual question response rates and participants' comments about questions indicate that some questions are less appropriate for some specialties. There was a significant difference in mean score between specialties. Despite guidance, there were significant differences in the mix of assessors across specialties. More favourable scores were given by progressively more junior doctors. Nurses gave the most reliable scores. CONCLUSIONS It is feasible to electronically administer a generic questionnaire to a large population of doctors. Generic content is appropriate for most but not all specialties. The differences in mean scores and the reliability of the MSF between specialties may be partly due to specialty differences in assessor mix. Therefore the number and mix of assessors should be standardised at specialty level, and scores should not be compared across specialties.
9. Ahmed K, Jawad M, Dasgupta P, Darzi A, Athanasiou T, Khan MS. Assessment and maintenance of competence in urology. Nat Rev Urol 2010; 7:403-13. PMID: 20567253; DOI: 10.1038/nrurol.2010.81.
10. Patel JP, West D, Bates IP, Eggleton AG, Davies G. Early experiences of the mini-PAT (Peer Assessment Tool) amongst hospital pharmacists in South East London. Int J Pharm Pract 2010. DOI: 10.1211/ijpp.17.02.0008.
Abstract
Objectives
The aim was to describe early experience of use of the mini-PAT (Peer Assessment Tool) amongst general-level pharmacists working in secondary care, and to capture their views about the method of assessment.
Methods
General-level pharmacists who had completed two rounds of the mini-PAT assessment in their first year post-qualification were asked to complete a semi-structured questionnaire, assessing the usefulness and acceptability of the assessment method.
Key findings
The pharmacists found the assessment method useful and acceptable, with many commenting that they found it useful to see how they were doing in relation to their peers. To improve the assessment method further, the general-level pharmacists suggested that any verbatim comments made should have the relevant assessor's name next to them, so that the context of each comment can be understood.
Conclusions
Early experience suggests that the mini-PAT is a useful formative assessment tool for use amongst general-level pharmacists.
Affiliation(s)
- Jignesh P Patel
- Pharmaceutical Science Division, School of Biomedical and Health Sciences, King's College London, UK
- David West
- School of Pharmacy, University of London, London, UK
- Ian P Bates
- School of Pharmacy, University of London, London, UK
- Graham Davies
- Pharmaceutical Science Division, School of Biomedical and Health Sciences, King's College London, UK
11.

12. Hesketh A, Anderson F, Drimmie F, Scahill L, Davey P. Supporting the prescribing of junior doctors: a 360° approach. Clin Teach 2009. DOI: 10.1111/j.1743-498x.2009.00304.x.
13. Hobson JC. Revalidation, multisource feedback and cloud computing. Clin Otolaryngol 2009; 34:295-6. PMID: 19531204; DOI: 10.1111/j.1749-4486.2009.01949.x.
Affiliation(s)
- J C Hobson
- Otorhinolaryngology/Head & Neck Surgery, Royal Bolton Hospital, UK.
14. Leslie LK. What can data tell us about the quality and relevance of current pediatric residency education? Pediatrics 2009; 123 Suppl 1:S50-5. PMID: 19088246; DOI: 10.1542/peds.2008-1578l.
Abstract
The Residency Review and Redesign (R(3)P) Project relied on both qualitative and quantitative data in developing its recommendations regarding residency education. This article reviews quantitative data in the published literature of import to the R(3)P Project as well as findings by Freed and colleagues published in this supplement to Pediatrics. Primary questions of interest to the R(3)P Project included: What factors drive decision-making regarding residency selection? Do current training programs have the flexibility to meet the needs of residents, no matter what their career choice within pediatrics? What areas need greater focus within residency training? Should the length of training remain at 36 months? Based on the available data, the R(3)P Project concluded that more diversity needs to be fostered within training programs. By promoting innovative and diverse approaches to improving pediatric residency education, members of the R(3)P Project hope to enhance learning, encourage multiple career paths within the broad field of pediatrics, and, ultimately, improve patient and family outcomes.
Affiliation(s)
- Laurel K Leslie
- Department of Medicine, Tufts Medical Center, 800 Washington St, 345, Boston, MA 02111, USA.
15. Carraccio C, Sectish TC. Report of colloquium II: the theory and practice of graduate medical education: how do we know when we have made a "good doctor"? Pediatrics 2009; 123 Suppl 1:S17-21. PMID: 19088240; DOI: 10.1542/peds.2008-1578f.
Abstract
Participants of the second colloquium of the Residency Review and Redesign in Pediatrics (R(3)P) Project considered 3 primary questions: What is a "good doctor"? How do we make one? and How do we know when we have made one? Experts from other countries and other medical specialties helped participants wrestle with these most basic questions. Participants emerged with a better sense of the utility of different types of evaluations needed to determine resident competence. It was clear that the complexity of the task requires faculty education and development. Most important, it requires the ongoing commitment of all of pediatrics as we seek to link education directly to better health outcomes for children, adolescents, and young adults.
Affiliation(s)
- Carol Carraccio
- Department of Pediatrics, University of Maryland, Room N5W56, 22 S Greene St, Baltimore, MD 21201, USA.
16. Davies H, Archer J, Bateman A, Dewar S, Crossley J, Grant J, Southgate L. Specialty-specific multi-source feedback: assuring validity, informing training. Med Educ 2008; 42:1014-20. PMID: 18823521; DOI: 10.1111/j.1365-2923.2008.03162.x.
Abstract
CONTEXT The white paper 'Trust, Assurance and Safety: the Regulation of Health Professionals in the 21st Century' proposes a single, generic multi-source feedback (MSF) instrument in the UK. Multi-source feedback was proposed as part of the assessment programme for Year 1 specialty training in histopathology. METHODS An existing instrument was modified following blueprinting against the histopathology curriculum to establish content validity. Trainees were also assessed using an objective structured practical examination (OSPE). Factor analysis and correlation between trainees' OSPE performance and the MSF were used to explore validity. All 92 trainees participated and the assessor response rate was 93%. Reliability was acceptable with eight assessors (95% confidence interval 0.38). Factor analysis revealed two factors: 'generic' and 'histopathology'. Pearson correlation of MSF scores with OSPE performances was 0.48 (P = 0.001) and the histopathology factor correlated more highly (histopathology r = 0.54, generic r = 0.42; t = - 2.76, d.f. = 89, P < 0.01). Trainees scored least highly in relation to ability to use histopathology to solve clinical problems (mean = 4.39) and provision of good reports (mean = 4.39). Three of six doctors whose means were < 4.0 received free text comments about report writing. There were 83 forms with aggregate scores of < 4. Of these, 19.2% included comments about report writing. RESULTS Specialty-specific MSF is feasible and achieves satisfactory reliability. The higher correlation of the 'histopathology' factor with the OSPE supports validity. This paper highlights the importance of validating an MSF instrument within the specialty-specific context as, in addition to assuring content validity, the PATH-SPRAT (Histopathology-Sheffield Peer Review Assessment Tool) also demonstrates the potential to inform training as part of a quality improvement model.
Affiliation(s)
- Helena Davies
- Department of Medical Education, University of Sheffield, Sheffield, UK
17. Thammasitboon S, Mariscalco MM, Yudkowsky R, Hetland MD, Noronha PA, Mrtek RG. Exploring individual opinions of potential evaluators in a 360-degree assessment: four distinct viewpoints of a competent resident. Teach Learn Med 2008; 20:314-322. PMID: 18855235; DOI: 10.1080/10401330802384680.
Abstract
BACKGROUND Despite the highly acclaimed psychometric features of 360-degree assessment in fields such as economics, the military, and education, interest in developing 360-degree instruments to assess competencies in graduate medical education has grown only in recent years. Most of the effort to date, however, has focused on developing instruments and testing their reliability and feasibility. Insufficient attention has gone into issues of construct validity, particularly understanding the underlying constructs on which the instruments are based as well as the phenomena that affect ratings. PURPOSE In preparation for developing a 360-degree assessment instrument, we explored variations in evaluators' opinions of what constitutes a competent resident and offer observations about evaluators' professional backgrounds and opinions. METHOD Evaluators from two residency programs ranked 36 opinion statements, using a relative-ranking model, based on their opinion of a competent resident. By-person factor analysis was used to structure opinion types. RESULTS Factor analysis of 156 responses identified four factors, interpreted as four different opinion types of a competent resident: (a) altruistic, compassionate healer (n = 42 evaluators), (b) scientifically grounded clinician (n = 30), (c) holistic, humanistic clinician (n = 62), and (d) patient-focused health manager (n = 31). Although 72% of nurse/respiratory therapist evaluators expressed type C, 28% expressed other types just as often. Only 14% of physician evaluators expressed type D, and the remainder was evenly split among the other types. CONCLUSIONS Our evaluators in a 360-degree system expressed four opinion types of a competent resident. Individual opinion, not professional background, influences the characteristics an evaluator values in a competent resident. We propose that these values will have an impact on competency assessment and should be taken into account in a 360-degree assessment.
Affiliation(s)
- Satid Thammasitboon
- Department of Pediatrics, Robert C Byrd Health Sciences Center, Morgantown, West Virginia 26506-9214, USA.
18. Archer J, Norcini J, Southgate L, Heard S, Davies H. mini-PAT (Peer Assessment Tool): a valid component of a national assessment programme in the UK? Adv Health Sci Educ Theory Pract 2008; 13:181-92. PMID: 17036157; DOI: 10.1007/s10459-006-9033-3.
Abstract
PURPOSE To design, implement and evaluate a multisource feedback instrument to assess Foundation trainees across the UK. METHODS mini-PAT (Peer Assessment Tool) was modified from SPRAT (Sheffield Peer Review Assessment Tool), an established multisource feedback (360-degree) instrument used to assess more senior doctors, as part of a blueprinting exercise of instruments suitable for assessment in Foundation programmes (the first 2 years post-graduation). mini-PAT's content validity was assured by a mapping exercise against the Foundation Curriculum. Trainees' clinical performance was then assessed using 16 questions rated against a six-point scale on two occasions in the pilot period. Responses were analysed to determine internal structure, potential sources of bias and measurement characteristics. RESULTS Six hundred and ninety-three mini-PAT assessments were undertaken for 553 trainees across 12 deaneries in England, Wales and Northern Ireland. Two hundred and nineteen trainees were F1s or PRHOs and 334 were F2s. Trainees identified 5544 assessors, of whom 67% responded. The mean score for F2 trainees was 4.61 (SD = 0.43) and for F1s was 4.44 (SD = 0.56); an independent t test showed that the mean scores of these two groups were significantly different (t = -4.59, df = 390, p < 0.001). Forty-three F1s (19.6%) and 19 F2s (5.6%) were assessed as being below expectations for F2 completion. The factor analysis produced two main factors: one concerned clinical performance, the other humanistic qualities. Seventy-four percent of F2 trainees could have been assessed by as few as 8 assessors (95% CI +/- 0.6), as they either scored an overall mean of 4.4 or above or 3.6 and below; 53% of F1 trainees could have been assessed by as few as 8 assessors (95% CI +/- 0.5), as they scored an overall mean of 4.5 or above or 3.5 and below. Hierarchical regression showed that bias related to the length of the working relationship, the occupation of the assessor and the working environment explained 7% of the variation in mean scores when controlling for the year of the Foundation Programme (R squared change = 0.06, F change = 8.5, significant F change < 0.001). CONCLUSIONS As part of an assessment programme, mini-PAT appears to provide a valid way of collating colleague opinions to help reliably assess Foundation trainees.
Affiliation(s)
- Julian Archer
- University of Sheffield, D Floor, Stephenson Wing, Sheffield Children's Hospital, Western Bank, Sheffield, UK.
19.
Abstract
The structured evaluation of doctors' performance through peer review is a relatively new phenomenon, brought about by public demand for accountability to patients. Medical knowledge (as assessed by examination score) is no longer considered a good predictor of individual performance, humanistic qualities and communication skills. The process of peer review (or multi-source assessment) was developed over the last two decades in the USA and has started to pick up momentum in the UK through the introduction of Modernising Medical Careers. However, the concept is not new. Driven by market forces, it was initially developed by industrial organizations to improve leadership qualities with a view to increasing productivity through positive behaviour change and self-awareness. Multi-source feedback is not without its problems and may not always produce its desired outcomes. In this article we review the evidence for peer review and critically discuss the mini peer assessment tool (mini-PAT), the instrument for peer review currently employed in the UK.
Affiliation(s)
- Aza Abdulla
- Consultant Physician, Princess Royal University Hospital, Bromley Hospitals NHS Trust, Farnborough Common, Orpington, Kent BR6 8ND, UK.
20.