1. Ershad Langroodi M, Liu X, Tousignant MR, Jarc AM. Objective performance indicators versus GEARS: an opportunity for more accurate assessment of surgical skill. Int J Comput Assist Radiol Surg 2024;19:2259-2267. [PMID: 39320413] [DOI: 10.1007/s11548-024-03248-2]
Abstract
PURPOSE Surgical skill evaluation that relies on subjective scoring of surgical videos can be time-consuming and inconsistent across raters. We demonstrate differentiated opportunities for objective evaluation to improve surgeon training and performance. METHODS Subjective evaluation was performed using the Global Evaluative Assessment of Robotic Skills (GEARS) from both expert and crowd raters, whereas objective evaluation used objective performance indicators (OPIs) derived from da Vinci surgical systems. Classifiers were trained for each evaluation method to distinguish between surgical expertise levels. This study includes one clinical task from a case series of robotic-assisted sleeve gastrectomy procedures performed by a single surgeon, and two training tasks performed by novice and expert surgeons, i.e., surgeons with no experience in robotic-assisted surgery (RAS) and those with more than 500 RAS procedures. RESULTS When comparing expert and novice skill levels, the OPI-based classifier showed significantly higher accuracy than the GEARS-based classifier on the more complex dissection task (OPI 0.93 ± 0.08 vs. GEARS 0.67 ± 0.18; 95% CI, 0.16-0.37; p = 0.02), but no significant difference was shown on the simpler suturing task. For the single-surgeon case series, both classifiers performed well when differentiating between early and late group cases with smaller group sizes and larger intervals between groups (OPI 0.9 ± 0.08; GEARS 0.87 ± 0.12; 95% CI, 0.02-0.04; p = 0.67). When increasing the group size to include more cases, thereby having smaller intervals between groups, OPIs demonstrated significantly higher accuracy (OPI 0.97 ± 0.06; GEARS 0.76 ± 0.07; 95% CI, 0.12-0.28; p = 0.004) in differentiating between the early and late cases. CONCLUSIONS Objective methods for skill evaluation in RAS outperform subjective methods when (1) differentiating expertise in a technically challenging training task, and (2) identifying more granular differences along the early versus late phases of a surgeon's learning curve within a clinical task. Objective methods offer an opportunity for more accessible and scalable skill evaluation in RAS.
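As a rough, illustrative sketch of the kind of analysis described above (not the authors' code), the snippet below trains a classifier on synthetic OPI-style kinematic features to separate novice from expert trials and reports cross-validated accuracy; the feature names and data are invented for demonstration.

```python
# Illustrative sketch only: classify novice vs. expert trials from synthetic
# "OPI-like" features, mirroring the type of analysis described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_group = 40

# Hypothetical kinematic features: path length, economy of motion,
# camera-movement frequency (experts assumed lower/steadier on average).
novices = rng.normal(loc=[120.0, 0.55, 14.0], scale=[20.0, 0.08, 3.0], size=(n_per_group, 3))
experts = rng.normal(loc=[90.0, 0.75, 8.0], scale=[15.0, 0.08, 2.5], size=(n_per_group, 3))

X = np.vstack([novices, experts])
y = np.array([0] * n_per_group + [1] * n_per_group)  # 0 = novice, 1 = expert

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2f} ± {acc.std():.2f}")
```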
Affiliation(s)
- Xi Liu
- Research and Development, Intuitive Surgical, Inc, 5655 Spalding Dr, Norcross, GA, 30092, USA
- Mark R Tousignant
- Research and Development, Intuitive Surgical, Inc, 5655 Spalding Dr, Norcross, GA, 30092, USA
- Anthony M Jarc
- Research and Development, Intuitive Surgical, Inc, 5655 Spalding Dr, Norcross, GA, 30092, USA
2. Olsen RG, Konge L, Hayatzaki K, Mortensen MA, Røder A, Bjerrum F. Medical Students Cannot Assess Robotic Surgeons Performing Radical Prostatectomy. Simul Healthc 2024;19:213-219. [PMID: 37279115] [DOI: 10.1097/sih.0000000000000733]
Abstract
INTRODUCTION Medical students have previously been shown to be just as effective as experts for video rating. We aimed to compare medical students with experienced surgeons as video assessors of simulated robot-assisted radical prostatectomy (RARP) performance. MATERIALS AND METHODS Video recordings of three RARP modules on the RobotiX (formerly Simbionix) simulator from a previous study were used. Five novice surgeons, five experienced robotic surgeons, and five experienced RARP surgeons performed a total of 45 video-recorded procedures. The videos were assessed with the modified Global Evaluative Assessment of Robotic Skills tool, both as full-length videos and as edited versions that included only the first 5 minutes of the procedure. RESULTS Fifty medical students and two experienced RARP surgeons (ES) performed a total of 680 video ratings of full-length and 5-minute videos (2-9 ratings per video). Medical students and ES showed poor agreement for both full-length and 5-minute videos (0.29 and -0.13, respectively). Medical students could not discriminate between the skill levels of the surgeons in either full-length or 5-minute videos (P = 0.053-0.36 and P = 0.21-0.82), whereas ES could discriminate between novice and experienced surgeons (full-length, P < 0.001, and 5 minutes, P = 0.007) and between intermediate and experienced surgeons (full-length, P = 0.001, and 5 minutes, P = 0.01) in both video formats. CONCLUSION We found that medical students cannot be used to assess RARP because they showed poor agreement with the ES ratings for both full-length and 5-minute videos. Medical students also could not discriminate between surgical skill levels.
Affiliation(s)
- Rikke Groth Olsen
- From the Copenhagen Academy for Medical Education and Simulation (CAMES) (R.G.O., L.K., F.B.); Department of Urology (R.G.O., A.R.), Copenhagen Prostate Cancer Center, Copenhagen University Hospital-Rigshospitalet; Faculty of Health and Medical Sciences (L.K., A.R.), University of Copenhagen, Copenhagen; Department of Urology (K.H.), Zealand University Hospital, Roskilde; Department of Urology (M.A.M.), Odense University Hospital; Department of Clinical Research (M.A.M.), University of Southern Denmark, Odense; and Department of Surgery, Herlev-Gentofte Hospital (F.B.), Herlev, Denmark
3. Olsen RG, Svendsen MBS, Tolsgaard MG, Konge L, Røder A, Bjerrum F. Automated performance metrics and surgical gestures: two methods for assessment of technical skills in robotic surgery. J Robot Surg 2024;18:297. [PMID: 39068261] [PMCID: PMC11283394] [DOI: 10.1007/s11701-024-02051-0]
Abstract
The objective of this study was to compare automated performance metrics (APM) and surgical gestures for technical skills assessment during simulated robot-assisted radical prostatectomy (RARP). Ten novices and six experienced RARP surgeons performed simulated RARPs on the RobotiX Mentor (Surgical Science, Sweden). Simulator APM were automatically recorded, and surgical videos were manually annotated with five types of surgical gestures. The consequences of the pass/fail levels, which were based on the contrasting groups method, were compared for APM and surgical gestures. Intraclass correlation coefficient (ICC) analysis and a Bland-Altman plot were used to explore the correlation between APM and surgical gestures. Pass/fail levels for both APM and surgical gestures could fully distinguish between the skill levels of the surgeons with a specificity and sensitivity of 100%. The overall ICC (one-way, random) was 0.70 (95% CI: 0.34-0.88), showing moderate agreement between the methods. The Bland-Altman plot showed high agreement between the two methods for assessing experienced surgeons but disagreement on the novice surgeons' skill level. APM and surgical gestures could both fully distinguish between novices and experienced surgeons in a simulated setting. Both methods of analyzing technical skills have their advantages and disadvantages and, as of now, both are available only to a limited extent in the clinical setting. Developing assessment methods in a simulated setting enables testing before implementation in a clinical setting.
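A minimal sketch of the agreement statistics named above, a one-way random ICC and Bland-Altman limits of agreement, assuming two methods scoring the same performances; the scores below are synthetic placeholders, not study data.

```python
# Minimal sketch (not the study's code): agreement between two skill-assessment
# methods scored on the same performances.
import numpy as np

rng = np.random.default_rng(1)
true_skill = rng.normal(50, 10, size=16)                 # 16 performances
apm_scores = true_skill + rng.normal(0, 4, size=16)      # method A (e.g., APM-based)
gesture_scores = true_skill + rng.normal(0, 4, size=16)  # method B (e.g., gesture-based)

scores = np.column_stack([apm_scores, gesture_scores])   # rows = subjects, cols = methods
n, k = scores.shape

# One-way random-effects ICC(1,1) from the one-way ANOVA mean squares
grand_mean = scores.mean()
ms_between = k * np.sum((scores.mean(axis=1) - grand_mean) ** 2) / (n - 1)
ms_within = np.sum((scores - scores.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Bland-Altman summary: mean difference (bias) and 95% limits of agreement
diff = apm_scores - gesture_scores
bias, sd = diff.mean(), diff.std(ddof=1)
print(f"ICC(1,1) = {icc:.2f}, bias = {bias:.2f}, "
      f"LoA = [{bias - 1.96 * sd:.2f}, {bias + 1.96 * sd:.2f}]")
```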
Affiliation(s)
- Rikke Groth Olsen
- Copenhagen Academy for Medical Education and Simulation (CAMES), Ryesgade 53B, 2100, Copenhagen, Denmark.
- Department of Urology, Copenhagen Prostate Cancer Center, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark.
- Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark.
- Morten Bo Søndergaard Svendsen
- Copenhagen Academy for Medical Education and Simulation (CAMES), Ryesgade 53B, 2100, Copenhagen, Denmark
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Martin G Tolsgaard
- Copenhagen Academy for Medical Education and Simulation (CAMES), Ryesgade 53B, 2100, Copenhagen, Denmark
- Lars Konge
- Copenhagen Academy for Medical Education and Simulation (CAMES), Ryesgade 53B, 2100, Copenhagen, Denmark
- Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Andreas Røder
- Department of Urology, Copenhagen Prostate Cancer Center, Copenhagen University Hospital-Rigshospitalet, Copenhagen, Denmark
- Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Flemming Bjerrum
- Copenhagen Academy for Medical Education and Simulation (CAMES), Ryesgade 53B, 2100, Copenhagen, Denmark
- Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark
- Gastrounit, Surgical Section, Copenhagen University Hospital-Amager and Hvidovre, Hvidovre, Denmark
4. Addison P, Bitner DP, Addy J, Dechario S, Husk G, Antonacci A, Talamini M, Giangola G, Filicori F. Does Surgeon Experience Correlate with Crowd-Sourced Skill Assessment in Robotic Bariatric Surgery? Am Surg 2023;89:5253-5262. [PMID: 36454236] [DOI: 10.1177/00031348221142586]
Abstract
BACKGROUND The Global Evaluative Assessment of Robotic Skills (GEARS) rubric provides a measure of skill in robotic surgery. We hypothesized that surgery performed by more experienced operators would be associated with higher GEARS scores. METHODS Patients undergoing sleeve gastrectomy from 2016 to 2020 were analyzed. Three groups were defined by time in practice: less than 5, between 5 and 15, and more than 15 years. Continuous variables were compared with ANOVA, and multivariable regression was performed. RESULTS Fourteen operators performing 154 cases were included. More experienced surgeons had higher GEARS scores and shorter operative times. On multivariable regression, operative time (P = 0.027), efficiency (P = 0.022), depth perception (P = 0.033), and bimanual dexterity (P = 0.047) were associated with experience. CONCLUSIONS In our video-based assessment (VBA) model, operative time and several GEARS subcomponent scores were associated with surgical experience. Further studies should determine the association between these metrics and surgical outcomes.
Affiliation(s)
- Poppy Addison
- Intraoperative Performance Analytics Laboratory, Department of General Surgery, Lenox Hill Hospital, New York, NY, USA
- Daniel P Bitner
- Intraoperative Performance Analytics Laboratory, Department of General Surgery, Lenox Hill Hospital, New York, NY, USA
- Jermyn Addy
- Intraoperative Performance Analytics Laboratory, Department of General Surgery, Lenox Hill Hospital, New York, NY, USA
- Gregg Husk
- Intraoperative Performance Analytics Laboratory, Department of General Surgery, Lenox Hill Hospital, New York, NY, USA
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
- Anthony Antonacci
- Intraoperative Performance Analytics Laboratory, Department of General Surgery, Lenox Hill Hospital, New York, NY, USA
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
- Mark Talamini
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
- Department of General Surgery, North Shore University Hospital, Manhasset, NY, USA
- Gary Giangola
- Intraoperative Performance Analytics Laboratory, Department of General Surgery, Lenox Hill Hospital, New York, NY, USA
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
- Filippo Filicori
- Intraoperative Performance Analytics Laboratory, Department of General Surgery, Lenox Hill Hospital, New York, NY, USA
- Donald and Barbara Zucker School of Medicine at Hofstra/Northwell, Hempstead, NY, USA
5. Ketel MHM, Klarenbeek BR, Eddahchouri Y, Cuesta MA, van Daele E, Gutschow CA, Hölscher AH, Hubka M, Luyer MDP, Merritt RE, Nieuwenhuijzen GAP, Shen Y, Abma IL, Rosman C, van Workum F. Crowd-sourced and expert video assessment in minimally invasive esophagectomy. Surg Endosc 2023;37:7819-7828. [PMID: 37605010] [PMCID: PMC10520122] [DOI: 10.1007/s00464-023-10297-2]
Abstract
BACKGROUND Video-based assessment by experts may structurally measure surgical performance using procedure-specific competency assessment tools (CATs). A CAT for minimally invasive esophagectomy (MIE-CAT) was developed and validated previously. However, surgeons' time is scarce, and video assessment is time-consuming and labor intensive. This study investigated non-procedure-specific assessment of MIE video clips by MIE experts and by crowdsourcing (collective surgical performance evaluation by anonymous, untrained laypeople) to assist procedure-specific expert review. METHODS Two surgical performance scoring frameworks were used to assess eight MIE videos. First, global performance was assessed with the non-procedure-specific Global Operative Assessment of Laparoscopic Skills (GOALS) on 64 procedural phase-based video clips of < 10 min each. Each clip was assessed by two MIE experts and > 30 crowd workers. Second, the same experts assessed procedure-specific performance with the MIE-CAT on the corresponding full-length video. Reliability and convergent validity of GOALS for MIE were investigated using hypothesis testing with correlations (experience, blood loss, operative time, and MIE-CAT). RESULTS Fewer than 75% of the hypothesized correlations between GOALS scores and experience of the surgical team (r < 0.3), blood loss (r = -0.82 to 0.02), operative time (r = -0.42 to 0.07), and MIE-CAT scores (r = -0.04 to 0.76) were met for both crowd workers and experts. Interestingly, experts' GOALS and MIE-CAT scores correlated strongly (r = 0.40 to 0.79), while correlations between crowd workers' GOALS scores and experts' MIE-CAT scores were weak (r = -0.04 to 0.49). Expert and crowd worker GOALS scores correlated poorly (ICC ≤ 0.42). CONCLUSION GOALS assessments by crowd workers lacked convergent validity and showed poor reliability; MIE is likely too technically difficult for laypeople to assess. Convergent validity of GOALS assessments by experts could also not be established, as GOALS might not be comprehensive enough to assess detailed MIE performance. However, experts' GOALS and MIE-CAT scores correlated strongly, indicating that video clip assessments (instead of full-length videos) could be useful for shortening assessment time.
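The hedged sketch below illustrates the general idea of convergent-validity testing via pre-specified correlation hypotheses, as described above; the variables, thresholds, and data are assumptions for demonstration only, not the study's analysis.

```python
# Illustrative convergent-validity check: count how many pre-specified
# correlation hypotheses a rating scale meets. Data are synthetic.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
goals = rng.normal(20, 4, size=30)                        # global rating per video
mie_cat = goals * 1.5 + rng.normal(0, 5, size=30)         # procedure-specific rating
operative_time = -2.0 * goals + rng.normal(0, 20, size=30)
blood_loss = rng.normal(300, 80, size=30)                 # hypothesised weak relation

# Each hypothesis: (label, other variable, minimum expected |rho|)
hypotheses = [
    ("GOALS vs MIE-CAT", mie_cat, 0.5),
    ("GOALS vs operative time", operative_time, 0.3),
    ("GOALS vs blood loss", blood_loss, 0.3),
]
met = 0
for label, other, expected in hypotheses:
    rho, p = spearmanr(goals, other)
    ok = abs(rho) >= expected
    met += ok
    print(f"{label}: rho = {rho:.2f} (p = {p:.3f}), hypothesis met: {ok}")

verdict = "acceptable" if met / len(hypotheses) >= 0.75 else "convergent validity not established"
print(f"{met}/{len(hypotheses)} hypotheses met ({verdict})")
```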
Affiliation(s)
- Mirte H M Ketel
- Department of Surgery, Radboud University Medical Center, Nijmegen, The Netherlands.
- Yassin Eddahchouri
- Department of Surgery, Radboud University Medical Center, Nijmegen, The Netherlands
- Miguel A Cuesta
- Department of Surgery, Amsterdam University Medical Centers, Location VUmc, Amsterdam, The Netherlands
- Elke van Daele
- Department of Digestive Surgery, Ghent University Hospital, Ghent, Belgium
- Christian A Gutschow
- Department of Surgery and Transplantation, University Hospital Zurich, Zurich, Switzerland
- Arnulf H Hölscher
- Department for General, Visceral and Trauma Surgery, Elisabeth-Krankenhaus-Essen GmbH, Essen, Germany
- Michal Hubka
- Department of Thoracic Surgery, Virginia Mason Medical Center, Seattle, SE, USA
- Misha D P Luyer
- Department of Surgery, Catharina Hospital, Eindhoven, The Netherlands
- Robert E Merritt
- Department of Surgery, Ohio State University - Wexner Medical Center, Columbus, OH, USA
- Yaxing Shen
- Department of Thoracic Surgery, Zhongshan Hospital, Fudan University, Shanghai, China
- Inger L Abma
- IQ Healthcare, Radboud University Medical Center, Nijmegen, The Netherlands
- Camiel Rosman
- Department of Surgery, Radboud University Medical Center, Nijmegen, The Netherlands
- Frans van Workum
- Department of Surgery, Radboud University Medical Center, Nijmegen, The Netherlands
- Department of Surgery, Canisius-Wilhelmina Hospital, Nijmegen, The Netherlands
6. Burke JR, Fleming CA, King M, El-Sayed C, Bolton WS, Munsch C, Harji D, Bach SP, Collins JW. Utilising an accelerated Delphi process to develop consensus on the requirement and components of a pre-procedural core robotic surgery curriculum. J Robot Surg 2023;17:1443-1455. [PMID: 36757562] [PMCID: PMC9909133] [DOI: 10.1007/s11701-022-01518-2]
Abstract
Robot-assisted surgery (RAS) continues to grow globally. Despite this, in the UK and Ireland, it is estimated that over 70% of surgical trainees across all specialities have no access to robot-assisted surgical training (RAST). This study aimed to provide educational stakeholders with guidance on a pre-procedural core robotic surgery curriculum (PPCRC) from the perspective of the end user: the surgical trainee. The study was conducted in four phases: P1, a steering group was formed to review the current literature and summarise the evidence; P2, a pan-specialty trainee panel virtual classroom discussion; P3, an accelerated Delphi process; and P4, formulation of recommendations. Forty-three surgeons in training, representing all surgical specialties and training levels, contributed to the three-round Delphi process. Additions to the second- and third-round surveys were formulated based on the answers and comments from previous rounds. Consensus opinion was defined as ≥ 80% agreement. There was a 100% response rate in all three rounds. The resulting guidance showed good internal consistency, with a Cronbach alpha of > 0.8. There was 97.7% agreement that a standardised PPCRC would be advantageous to training and that, independent of speciality, there should be a common approach (95.5% agreement). Consensus was reached in multiple areas: 1. Experience and Exposure; 2. Access and Context; 3. Curriculum Components; 4. Target Groups and Delivery; and 5. Objective Metrics, Benchmarking and Assessment. Using the Delphi methodology, we achieved multispecialty consensus among trainees and reached content validation for the requirements and components of a PPCRC. This guidance will benefit from further validation following implementation.
Affiliation(s)
- Joshua Richard Burke
- The Association of Surgeons in Training, Royal College of Surgeons of England, London, England, UK
- Robotics and Digital Surgery Initiative, Royal College of Surgeons of England, London, England
- Leeds Institute Medical Research, University of Leeds, Leeds, UK
- Christina A. Fleming
- The Association of Surgeons in Training, Royal College of Surgeons of England, London, England, UK
- The Royal College of Surgeons, Dublin, Ireland
- Martin King
- The Association of Surgeons in Training, Royal College of Surgeons of England, London, England, UK
- Craigavon Area Hospital, Craigavon, Northern Ireland
- Charlotte El-Sayed
- Technology Enhanced Learning Directorate of Innovation, Digital and Transformation, Health Education England, London, England
- Academic Department of Surgery, University of Birmingham, Birmingham, UK
- Chris Munsch
- Technology Enhanced Learning Directorate of Innovation, Digital and Transformation, Health Education England, London, England
- Deena Harji
- Robotics and Digital Surgery Initiative, Royal College of Surgeons of England, London, England
- Department of Colorectal Surgery, Manchester University NHS Foundation Trust, Manchester, UK
- Simon P. Bach
- Robotics and Digital Surgery Initiative, Royal College of Surgeons of England, London, England
- Academic Department of Surgery, University of Birmingham, Birmingham, UK
- Justin W. Collins
- University College London, Division of Surgery and Interventional Science, Research Department of Targeted Intervention, London, UK
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London, London, UK
7. Hussein N, Van den Eynde J, Callahan C, Guariento A, Gollmann-Tepeköylü C, Elbatarny M, Loubani M. The use of objective assessments in the evaluation of technical skills in cardiothoracic surgery: a systematic review. Interact Cardiovasc Thorac Surg 2022;35:ivac194. [PMID: 35900153] [PMCID: PMC9403301] [DOI: 10.1093/icvts/ivac194]
Abstract
OBJECTIVES With reductions in training time and intraoperative exposure, there is a need for objective assessments to measure trainee progression. This systematic review focuses on the evaluation of trainee technical skill performance using objective assessments in cardiothoracic surgery and their incorporation into training curricula. METHODS Databases (EBSCOhost, Scopus and Web of Science) and the reference lists of relevant articles were searched for studies that incorporated objective assessment of the technical skills of trainees/residents in cardiothoracic surgery. Data extraction included the task performed; assessment setting and tool used; number/level of assessors; study outcome; and whether the assessments were incorporated into training curricula. The methodological rigour of the studies was scored using the Medical Education Research Study Quality Instrument (MERSQI). RESULTS Fifty-four studies were included for quantitative synthesis. Six were randomized controlled trials. Cardiac surgery was the most common speciality utilizing objective assessment methods, with coronary anastomosis the most frequently tested task. Likert-based assessment tools were most commonly used (61%). Eighty-five per cent of studies were simulation-based, with the rest being intraoperative. Expert surgeons were primarily used as assessors (78%), with 46% of studies using blinding. Thirty (56%) studies explored objective changes in technical performance, with 97% demonstrating improvement; the remaining studies primarily validated assessment tools. Thirty-nine per cent of studies had incorporated these assessment tools into training curricula. The mean ± standard deviation MERSQI score for all studies was 13.6 ± 1.5, demonstrating high validity. CONCLUSIONS Despite validated technical skill assessment tools being available and demonstrating trainee improvement, their regular adoption into training curricula is lacking. There is a need to incorporate these assessments to increase the efficiency and transparency of training programmes for cardiothoracic surgeons.
Affiliation(s)
- Nabil Hussein
- Hull York Medical School, University of York, York, UK; Department of Cardiothoracic Surgery, Castle Hill Hospital, Cottingham, UK
- Connor Callahan
- Department of Surgery, Washington University School of Medicine, St. Louis, MO, USA
- Alvise Guariento
- Pediatric and Congenital Cardiac Surgery Unit, Department of Cardiac, Thoracic, Vascular Sciences and Public Health, University of Padua, Padua, Italy
- Malak Elbatarny
- Department of Cardiac Surgery, University of Toronto, Toronto, ON, Canada
- Mahmoud Loubani
- Hull York Medical School, University of York, York, UK; Department of Cardiothoracic Surgery, Castle Hill Hospital, Cottingham, UK
8. Olsen RG, Genét MF, Konge L, Bjerrum F. Crowdsourced assessment of surgical skills: A systematic review. Am J Surg 2022;224:1229-1237. [DOI: 10.1016/j.amjsurg.2022.07.008]
9. Mattioli DD, Thomas GW, Long SA, Tatum M, Anderson DD. Minimally Trained Analysts Can Perform Fast, Objective Assessment of Orthopedic Technical Skill from Fluoroscopic Images. IISE Transactions on Healthcare Systems Engineering 2022;12:212-220. [PMID: 36147899] [PMCID: PMC9488091] [DOI: 10.1080/24725579.2022.2035022]
Abstract
Skill assessment in orthopedics has traditionally relied on subjective impressions from a supervising surgeon. The feedback derived from these impressions may be limited by bias and other practical issues. Objective analysis of intraoperative fluoroscopic images offers an inexpensive, repeatable, and precise assessment strategy without bias. For practical reasons, however, assessors generally refrain from using scores of the images obtained throughout an operation to evaluate skill. A new system was designed to facilitate rapid analysis of this fluoroscopy by minimally trained analysts. Four expert and four novice analysts independently measured one objective metric of skill using both a custom analysis software package and a commercial alternative. Analysts were able to measure the objective metric three times faster when using the custom software, without a practical difference in accuracy compared with the expert analysts using the commercial software. These results suggest that a well-designed fluoroscopy analysis system can facilitate inexpensive, reliable, and objective assessment of surgical skills.
Affiliation(s)
- Dominik D. Mattioli
- Department of Industrial & Systems Engineering, University of Iowa, Iowa City, United States
- Geb W. Thomas
- Department of Industrial & Systems Engineering, University of Iowa, Iowa City, United States; Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, United States
- Steven A. Long
- Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, United States
- Marcus Tatum
- Department of Industrial & Systems Engineering, University of Iowa, Iowa City, United States
- Donald D. Anderson
- Department of Industrial & Systems Engineering, University of Iowa, Iowa City, United States; Department of Orthopedics and Rehabilitation, University of Iowa, Iowa City, United States
10. Harnessing Artificial Intelligence in Maxillofacial Surgery. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_322]
11. Karani R, Tapiero S, Jefferson FA, Vernez S, Xie L, Larson KN, Osann K, Okhunov Z, Patel RM, Landman J, Clayman RV, Stephany HA. Crowd-Sourced Assessment of Surgical Skills of Urology Resident Applicants: Four-Year Experience. J Surg Educ 2021;78:2030-2037. [PMID: 34147416] [DOI: 10.1016/j.jsurg.2021.05.005]
Abstract
OBJECTIVE To determine a) whether surgical skills among urology resident applicants could be reliably assessed via crowdsourcing and b) to what extent surgical skills testing impacts resident selection. DESIGN Interviewees completed the following surgical skills tasks during their interview day: open knot tying (OKT), laparoscopic peg transfer (LPT), and robotic suturing (RS). Urology faculty and crowd workers evaluated each applicant's video-recorded performance using validated scoring and were assessed for agreement using Cronbach's alpha. Applicants' USMLE scores, interview scores, and Jefferson Scale of Physician Empathy (JSPE-S) scores were assessed for correlation with skills testing scores and match rank. Additionally, a survey assessing match outcomes was distributed to interviewees. SETTING University of California Irvine Department of Urology, Surgical Skills Laboratory. PARTICIPANTS All 94 urology residency interviewees at the University of California Irvine Department of Urology from 2015-2018 were invited to complete the three surgical skills tasks on their interview day. RESULTS Survey responses were received from all 94 interviewees (100%). Crowd and expert agreement was good (α = 0.88), fair (α = 0.67), and poor (α = 0.32) for LPT, RS, and OKT scores, respectively. The skills testing scores did not correlate with match rank, USMLE score, or JSPE-S score. On multivariate analysis, only interview score (r = -0.723; p < 0.001) and faculty LPT score (r = -0.262; p = 0.001) were significant predictors of match rank. Interviewees who reported matching into a top-3 residency choice had significantly higher faculty LPT scores than those who did not (11.9 vs. 9.7, p = 0.03). CONCLUSIONS Surgical skills overall did not significantly impact match rank. Expert assessment of laparoscopic peg transfer skills and interview performance among urology resident applicants correlated with match rank.
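A brief, hedged illustration of the agreement statistic used above: Cronbach's alpha computed across raters scoring the same set of performances. The rating matrix is synthetic and the function is a generic textbook implementation, not the study's code.

```python
# Minimal sketch: Cronbach's alpha for a ratings matrix
# (rows = performances, columns = raters/items). Data are synthetic.
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """ratings: 2-D array with rows = subjects and columns = raters/items."""
    k = ratings.shape[1]
    item_variances = ratings.var(axis=0, ddof=1).sum()
    total_variance = ratings.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

rng = np.random.default_rng(3)
true_quality = rng.normal(15, 3, size=20)  # 20 recorded task performances
# Three raters with independent noise around the same underlying quality
ratings = np.column_stack([true_quality + rng.normal(0, 1.5, size=20) for _ in range(3)])

print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```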
Affiliation(s)
- Rajiv Karani
- Department of Urology, University of California, Irvine, Orange, California
- Shlomi Tapiero
- Department of Urology, University of California, Irvine, Orange, California
- Simone Vernez
- Department of Urology, University of California, Irvine, Orange, California
- Lillian Xie
- Department of Urology, University of California, Irvine, Orange, California
- Krista N Larson
- Department of Urology, University of California, Irvine, Orange, California
- Kathryn Osann
- Department of Urology, University of California, Irvine, Orange, California
- Zhamshid Okhunov
- Department of Urology, University of California, Irvine, Orange, California
- Roshan M Patel
- Department of Urology, University of California, Irvine, Orange, California
- Jaime Landman
- Department of Urology, University of California, Irvine, Orange, California
- Ralph V Clayman
- Department of Urology, University of California, Irvine, Orange, California
- Heidi A Stephany
- Department of Urology, University of California, Irvine, Orange, California.
12. Kpokiri EE, John R, Wu D, Fongwen N, Budak JZ, Chang CC, Ong JJ, Tucker JD. Crowdsourcing to develop open-access learning resources on antimicrobial resistance. BMC Infect Dis 2021;21:914. [PMID: 34488673] [PMCID: PMC8419975] [DOI: 10.1186/s12879-021-06628-0]
Abstract
Objectives Antimicrobial resistance (AMR) is a significant threat to global public health. Many medical curricula have limited clinical cases and materials focused on AMR, yet enhanced AMR education and training are needed to support antimicrobial stewardship programmes. We used crowdsourcing methods to develop open-access, learner-centred AMR resources. Crowdsourcing is the process of having a large group, including experts and non-experts, solve a problem and then share solutions with the public. Methods We organised a global crowdsourcing contest soliciting AMR-related multiple-choice questions, infographics, and images. First, we convened a diverse steering committee to finalise a call for entries. Second, we launched the contest and disseminated the call for entries using social media, blog posts, email, and an in-person event. Partner institutions included two digital healthcare platforms: Figure 1® and Ding Xiang Yuan. Both organizations serve as online communities for healthcare specialists and professionals to report and comment on clinical information. At the end of the call, solicited entries were screened for eligibility and judged on merit and relevance to AMR learning and education. Exceptional entries were recognised, awarded prizes, and further reviewed for sharing with the public via open-access platforms. Results We received 59 entries from nine countries. These included 54 multiple-choice questions, four infographics, and one image. Eligible entries (n = 56) were reviewed and assigned a score on a 1-10 scale. Eight entries received mean scores greater than 6.0 and were selected as finalists. The eight finalist entries consisted of three infographics and five multiple-choice questions. They were disseminated through open-access publications and online medical communities. Although we launched a global call, we relied heavily on medical student groups, and the entries received were not entirely globally representative. Conclusions We demonstrate that crowdsourcing challenge contests can be used to identify infectious disease teaching materials. Medical educators and curriculum developers can adapt this method to solicit additional teaching content for medical students.
Affiliation(s)
- Eneyi E Kpokiri
- Faculty of Infectious and Tropical Diseases, London School of Hygiene and Tropical Medicine, Keppel St., Bloomsbury, London, WC1E 7HT, UK.
- Randall John
- Department of Health Policy and Management, Gillings School of Global Public Health, University of North Carolina, Chapel Hill, NC, USA
- Dan Wu
- Faculty of Infectious and Tropical Diseases, London School of Hygiene and Tropical Medicine, Keppel St., Bloomsbury, London, WC1E 7HT, UK
- Noah Fongwen
- Faculty of Infectious and Tropical Diseases, London School of Hygiene and Tropical Medicine, Keppel St., Bloomsbury, London, WC1E 7HT, UK
- Jehan Z Budak
- Department of Medicine, Division of Allergy & Infectious Diseases, University of Washington, Seattle, WA, USA
- Christina C Chang
- Partners ID Images, Department of Infectious Diseases, Massachusetts General Hospital, Boston, MA, USA
- Jason J Ong
- Faculty of Infectious and Tropical Diseases, London School of Hygiene and Tropical Medicine, Keppel St., Bloomsbury, London, WC1E 7HT, UK; Central Clinical School, Monash University, Melbourne, Australia
- Joseph D Tucker
- Faculty of Infectious and Tropical Diseases, London School of Hygiene and Tropical Medicine, Keppel St., Bloomsbury, London, WC1E 7HT, UK; Social Entrepreneurship To Spur Health (SESH), Guangzhou, China; Institute of Global Health and Infectious Diseases, University of North Carolina, Chapel Hill, NC, USA
13. Video-Based Coaching: Current Status and Role in Surgical Practice (Part 1) From the Society for Surgery of the Alimentary Tract, Health Care Quality and Outcomes Committee. J Gastrointest Surg 2021;25:2439-2446. [PMID: 34355331] [DOI: 10.1007/s11605-021-05102-1]
Abstract
Patient safety and outcomes are directly related to surgical performance. Surgical training emphasizes the importance of the surgeon in determining these outcomes. After training is complete, there is a lack of structured programs for surgeons to audit their skills and continue their individual development. There is a significant linear relationship between surgeon technical skill and surgical outcomes; however, measuring technical performance is difficult. Video-based coaching matches an individual surgeon in practice with a surgical colleague who has been trained in the core principles of coaching for individualizing instruction. It can provide objective assessment for teaching higher-level concepts, such as technical skills, cognitive skills, and decision-making. There are many benefits to video-based coaching. While the concept is gaining acceptance as a method of surgical education, it is still novel in clinical practice. As more surgeons look towards video-based coaching for quality improvement, a consistent definition of the program, its goals, and metrics for assessment will be critical. This paper is a review of the status of video-based coaching as it applies to practicing surgeons.
14. Beqari J, Seymour NE. Application of technology to educational needs in surgery. J Surg Oncol 2021;124:181-192. [PMID: 34245576] [DOI: 10.1002/jso.26512]
Abstract
Innovations in surgical education follow advancing clinical technology. New surgical methods have prompted demand for systematic methods to leverage computing power and internet tools to achieve proficiency-based training goals. Virtual reality, high-fidelity patient simulation, web-based resources to facilitate performance assessment, and telementoring have become mainstream practices, although patient outcome benefits are not well studied. Remote virtual meeting and mentoring have had transformative effects on resident experiences, the full impact of which remains to be seen.
Affiliation(s)
- Jorind Beqari
- University of Massachusetts Medical School-Baystate, Springfield, Massachusetts, USA
- Neal E Seymour
- University of Massachusetts Medical School-Baystate, Springfield, Massachusetts, USA
15.
Abstract
BACKGROUND Time-based training models in plastic surgery vary in exposure, resulting in low confidence levels among graduates. The evolution of postgraduate medical education into a competency-based model to address these issues requires an understanding of the interventions described in the plastic surgery literature to identify gaps and guide the creation of assessments that demonstrate competence. METHODS A systematic search of the MEDLINE, Embase, Cumulative Index to Nursing and Allied Health Literature, PubMed, and Cochrane databases from inception until December of 2017 was conducted using search terms and synonyms of educational interventions reported in plastic surgery. Full texts were retrieved following filtering, and the data extracted related to intervention design and execution, involvement of competency assessment, and educational objectives and alignment to Accreditation Council for Graduate Medical Education competencies and Royal College of Physicians and Surgeons of Canada Canadian Medical Education Directives for Specialists roles. Study quality was assessed using Kirkpatrick's levels of learning evaluation, validity evidence, and the Medical Education Research Study Quality Instrument score. RESULTS Of the initial 4307 results, only 36 interventions met the inclusion criteria. Almost all interventions aligned to the medical knowledge and patient care Accreditation Council for Graduate Medical Education competencies. One-fifth of the interventions involved no assessment of competency, whereas most displayed assessment at the level of design as opposed to outcomes. Quality assessment revealed low levels of learning evaluation and evidence of validity; the average Medical Education Research Study Quality Instrument score was 10.9 of 18. CONCLUSION A systematic review of the educational literature in plastic surgery was conducted to assess the quality of reported educational interventions and to help guide the creation of tools that ensure competency acquisition among trainees.
16. The Use of Crowdsourcing Technology to Evaluate Preoperative Severity in Patients With Unilateral Cleft Lip in a Multiethnic Population. J Craniofac Surg 2021;32:482-485. [PMID: 33704965] [DOI: 10.1097/scs.0000000000006917]
Abstract
Crowdsourcing has been used in multiple disciplines to quickly generate large amounts of diverse data. The objective of this study was to use crowdsourcing to grade the preoperative severity of the unilateral cleft lip phenotype in a multiethnic cohort, with the hypothesis that crowdsourcing could efficiently achieve rankings similar to those of expert surgeons. Deidentified preoperative photos were collected for patients with primary, unilateral cleft lip with or without cleft palate (CL ± P). A platform was developed with C-SATS for pairwise comparisons utilizing Elo rankings by crowdsource workers through Amazon Mechanical Turk. Images were independently ranked by 2 senior surgeons for comparison. Seventy-six patients with varying severity of the unilateral (CL ± P) phenotype were chosen from Operation Smile missions in Bolivia, Madagascar, Vietnam, and Morocco. Patients were an average of 1.2 years old, ranging from 3 months to 3.3 years. Each image was compared with 10 others, for a total of 380 unique pairwise comparisons. A total of 4627 raters participated, with a median of 12 raters per pair. Data collection was completed in < 20 hours. The crowdsourcing and expert surgeon rankings were highly correlated, with a Pearson correlation coefficient of R = 0.77 (P = 0.0001). Crowdsourcing provides a rapid and convenient method of obtaining preoperative severity ratings, comparable to expert surgeon assessment, across multiple ethnicities. The method serves as a potential solution to the current lack of rating systems for preoperative severity and overcomes the difficulty of acquiring large-scale assessment from expert surgeons.
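The sketch below illustrates, under assumed data, how an Elo-style rank order can be derived from crowd pairwise comparisons of the kind described above; it is not the C-SATS platform's implementation, and the items, judgment error rate, and K-factor are arbitrary.

```python
# Illustrative Elo ranking from simulated pairwise "which is more severe" judgments.
import random

def expected(r_a: float, r_b: float) -> float:
    """Expected probability that item A wins a comparison against item B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(ratings: dict, winner: str, loser: str, k: float = 32.0) -> None:
    """Standard Elo update after one pairwise judgment."""
    e_w = expected(ratings[winner], ratings[loser])
    ratings[winner] += k * (1.0 - e_w)
    ratings[loser] -= k * (1.0 - e_w)

random.seed(0)
items = [f"photo_{i}" for i in range(10)]
true_severity = {item: i for i, item in enumerate(items)}  # hidden ground truth
ratings = {item: 1500.0 for item in items}

for _ in range(500):
    a, b = random.sample(items, 2)
    more_severe = a if true_severity[a] > true_severity[b] else b
    less_severe = b if more_severe == a else a
    # a simulated crowd worker picks the truly more severe photo 85% of the time
    if random.random() < 0.85:
        update(ratings, more_severe, less_severe)
    else:
        update(ratings, less_severe, more_severe)

ranking = sorted(ratings, key=ratings.get, reverse=True)
print("Elo-derived severity ranking (most to least severe):", ranking)
```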
17. Nentin F, Gabbur N, Katz A. A Shift in Medical Education During the COVID-19 Pandemic. Adv Exp Med Biol 2021;1318:773-784. [PMID: 33973211] [DOI: 10.1007/978-3-030-63761-3_43]
Abstract
The COVID-19 pandemic has abruptly affected every aspect of people's daily lives worldwide. Like every other field, medicine has been dramatically impacted by the need to care for a large number of patients while at the same time protecting staff, patients, and their families. Changes in the wake of the pandemic called for the prompt and extensive rechanneling and reorganization of resources. The pandemic has raised challenges and concerns for patient safety, starting with the early recognition that individuals, including medical staff, may spread the virus during the asymptomatic phase. Many healthcare facilities faced resource-limited settings, including challenges in the availability of personal protective equipment for healthcare providers. Additionally, the pandemic has disrupted medical education at both the undergraduate and graduate levels, and according to many predictions, its effects may forever transform the way medical education is delivered. In this chapter, we explore the history of medical education, describe the changes in medical education experienced during the COVID-19 pandemic, and consider some of the factors worth taking into account when envisioning the future of medical education.
Affiliation(s)
- Farida Nentin
- Department of Obstetrics, Gynecology and Reproductive Science, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Nagaraj Gabbur
- Department of Obstetrics and Gynecology, Zucker School of Medicine at Hofstra/Northwell, North Shore University Hospital and Long Island Jewish Medical Centers, Manhasset, NY, USA
- Adi Katz
- Department of Obstetrics and Gynecology, Zucker School of Medicine at Hofstra/Northwell, Lenox Hill Hospital, New York, NY, USA.
18. Pradarelli JC, George E, Kavanagh J, Sonnay Y, Khoon TH, Havens JM. Training Novice Raters to Assess Nontechnical Skills of Operating Room Teams. J Surg Educ 2021;78:386-390. [PMID: 32800768] [DOI: 10.1016/j.jsurg.2020.07.042]
Abstract
OBJECTIVE To our knowledge, no curricula have been described for training novice, nonclinician raters of nontechnical skills in the operating room (OR). We aimed to report the reliability of Oxford Non-Technical Skills (NOTECHS) ratings provided by novice raters who underwent a scalable curriculum for learning to assess the nontechnical skills of OR teams. DESIGN An in-person training course on applying the NOTECHS framework to assess OR teams' nontechnical skill performance, led by 2 facilitators and involving 5 partial-day sessions of didactic presentations, video simulation, and live OR observation with post-assessment debriefing. NOTECHS ratings were submitted after each of 11 video scenarios and 8 live operations for the total NOTECHS team rating (including surgical/anesthesiology/nursing subteams) and for each NOTECHS skill category: situation awareness, problem solving and decision-making, teamwork and cooperation, and leadership and management. Inter-rater reliability was determined by calculating the intraclass correlation coefficient (ICC, range 0-1). SETTING Training for outcome measurement during a quality improvement initiative focused on surgical safety in 3 public hospitals in Singapore. Two trainings were conducted in May 2019 and January 2020. PARTICIPANTS Ten novice raters who were existing hospital staff and had minimal overall OR experience and no prior experience with nontechnical skill assessment. RESULTS The ICC for the total NOTECHS team rating was 0.89 (95% confidence interval [CI], 0.87-0.91). ICCs for each NOTECHS category were as follows: situation awareness, 0.83 (95% CI, 0.78-0.88); problem solving and decision-making, 0.76 (95% CI, 0.70-0.83); teamwork and cooperation, 0.84 (95% CI, 0.79-0.88); and leadership and management, 0.81 (95% CI, 0.75-0.86). CONCLUSIONS This training curriculum for nontechnical skill assessment of OR teams was associated with high inter-rater reliability among novice raters with minimal collective OR experience. By using scalable training materials to produce reliable measurements of OR team performance, this nontechnical skills assessment curriculum may contribute to future quality improvement projects aimed at improving surgical safety.
Affiliation(s)
- Jason C Pradarelli
- Ariadne Labs at Brigham and Women's Hospital and the Harvard T.H. Chan School of Public Health, Boston, Massachusetts; Department of Surgery, Brigham and Women's Hospital, Boston, Massachusetts.
- Emily George
- Ariadne Labs at Brigham and Women's Hospital and the Harvard T.H. Chan School of Public Health, Boston, Massachusetts
- Jane Kavanagh
- Ariadne Labs at Brigham and Women's Hospital and the Harvard T.H. Chan School of Public Health, Boston, Massachusetts
- Yves Sonnay
- Ariadne Labs at Brigham and Women's Hospital and the Harvard T.H. Chan School of Public Health, Boston, Massachusetts
- Tan Hiang Khoon
- Division of Surgery and Surgical Oncology, Singapore General Hospital, Singapore
- Joaquim M Havens
- Ariadne Labs at Brigham and Women's Hospital and the Harvard T.H. Chan School of Public Health, Boston, Massachusetts; Department of Surgery, Brigham and Women's Hospital, Boston, Massachusetts
19. Alnafisee N, Zafar S, Vedula SS, Sikder S. Current methods for assessing technical skill in cataract surgery. J Cataract Refract Surg 2021;47:256-264. [PMID: 32675650] [DOI: 10.1097/j.jcrs.0000000000000322]
Abstract
Surgery is a major source of errors in patient care. Preventing complications from surgical errors in the operating room is estimated to lead to a reduction of up to 41,846 readmissions and save $620.3 million per year. It is now established that poor technical skill is associated with an increased risk of severe adverse events postoperatively, and traditional models for training surgeons are being challenged by rapid advances in technology, an intensified patient-safety culture, and a need for value-driven health systems. This review discusses the current methods available for evaluating technical skills in cataract surgery and the recent technological advancements that have enabled the capture and analysis of large amounts of complex surgical data for more automated, objective skills assessment.
Affiliation(s)
- Nouf Alnafisee
- From The Wilmer Eye Institute, Johns Hopkins University School of Medicine (Alnafisee, Zafar, Sikder), Baltimore, and the Department of Computer Science, Malone Center for Engineering in Healthcare, The Johns Hopkins University Whiting School of Engineering (Vedula), Baltimore, Maryland, USA
20. Pereira KR. Harnessing Artificial Intelligence in Maxillofacial Surgery. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_322-1]
21.
Abstract
OBJECTIVE To define criteria for robotic credentialing using expert consensus. BACKGROUND A recent review of institutional robotic credentialing policies identified significant variability and determined that current policies are largely inadequate to ensure surgeon proficiency and may threaten patient safety. METHODS Twenty-eight national robotic surgery experts were invited to participate in a consensus conference. After review of available institutional policies and discussion, the group developed 91 proposed criteria. Using a modified Delphi process, the experts were asked to indicate their agreement with the proposed criteria in three electronic survey rounds after the conference. Criteria that achieved 80% or more agreement (consensus) in all rounds were included in the final list. RESULTS All experts agreed that there is a need for standardized robotic surgery credentialing criteria across institutions that promote surgeon proficiency. Forty-nine items reached consensus in the first round, 19 in the second, and 8 in the third, for a total of 76 final items. Experts agreed that privileges should be granted based on video review of surgical performance and attainment of clearly defined objective proficiency benchmarks. Parameters for ongoing outcome monitoring were determined, and recommendations for technical skills training, proctoring, and performance assessment were defined. CONCLUSIONS Using a systematic approach, detailed credentialing criteria for robotic surgery were defined. Implementation of these criteria uniformly across institutions will promote the proficiency of robotic surgeons and has the potential to positively impact patient outcomes.
22. Desai DM. Put me in coach. Am J Surg 2020;221:2-3. [PMID: 33069366] [DOI: 10.1016/j.amjsurg.2020.09.034]
Affiliation(s)
- Dev M Desai
- University of Texas Southwestern Medical Center, Children's Medical Center, Dallas 1935 Medical District Drive, Solid Organ Transplant Program, MC B2.02, Dallas, TX, 75235, USA.
23. Paulsgrove K, Miller E, Seidel K, Kinter S, Tse R. Crowdsourcing to Assess Speech Quality Associated With Velopharyngeal Dysfunction. Cleft Palate Craniofac J 2020;58:25-34. [PMID: 32806948] [DOI: 10.1177/1055665620948770]
Abstract
OBJECTIVE To assess crowdsourced responses in the evaluation of speech outcomes in children with velopharyngeal dysfunction (VPD). DESIGN Fifty deidentified speech samples were compiled. Multiple pairwise comparisons obtained by crowdsourcing were used to produce a rank order of speech quality. Ratings of overall and specific speech characteristics were also collected. Twelve speech-language pathologists (SLPs) who specialize in VPD were asked to complete the same tasks. Crowds and experts completed each task on 2 separate occasions at least 1 week apart. SETTING Online crowdsourcing platform. PARTICIPANTS Crowdsource raters were anonymous, at least 18 years of age, North American English speakers with self-reported normal hearing. Speech-language pathologists were recruited from multiple cleft/craniofacial teams. INTERVENTIONS None. MAIN OUTCOME MEASURE(S) Correlation of repeated assessments and comparison of crowd and SLP assessments. RESULTS We obtained 6331 layperson assessments that met inclusion criteria via crowdsourcing within 8 hours. The crowds provided reproducible Elo rankings of speech quality, ρ(48) = .89; P < .0001, and consistent ratings of intelligibility and acceptability (intraclass correlation coefficient [ICC] = .87 and .92) on repeated assessments. There was a significant correlation of those crowd rankings, ρ(10) = .86; P = .0003, and ratings (ICC = .75 and .79) with those of the SLPs. The correlation of more specific speech characteristics between the crowds and SLPs was moderate to weak (ICC < 0.65). CONCLUSIONS Crowdsourcing shows promise as a rapid way to obtain large numbers of speech assessments. Reliability of repeated assessments was acceptable. Large groups of naive raters yield comparable evaluations of overall speech acceptability, intelligibility, and quality, but are not consistent with expert raters for specific speech characteristics such as resonance and nasal air emission.
Affiliation(s)
- Kaylee Paulsgrove
- Speech & Language Services, Seattle Children's Hospital, Seattle, WA, USA
- Erin Miller
- Division of Plastic Surgery, Department of Surgery, University of Washington, Seattle, WA, USA
- Kristy Seidel
- CSATS Inc, A Division of Johnson & Johnson, Seattle, WA, USA
- Sara Kinter
- Speech & Language Services, Seattle Children's Hospital, Seattle, WA, USA
- Raymond Tse
- Division of Plastic Surgery, Department of Surgery, University of Washington, Seattle, WA, USA; Division of Craniofacial and Plastic Surgery, Department of Surgery, Seattle Children's Hospital, Seattle, WA, USA
24. Perkins SQ, Dabaja A, Atiemo H. Best Approaches to Evaluation and Feedback in Post-Graduate Medical Education. Curr Urol Rep 2020;21:36. [PMID: 32789759] [DOI: 10.1007/s11934-020-00991-2]
Abstract
PURPOSE OF REVIEW The objectives of this literature review are to appraise current approaches and assess new technologies that have been utilized for the evaluation of and feedback to residents, with a focus on surgical trainees. RECENT FINDINGS In 1999, the Accreditation Council for Graduate Medical Education (ACGME) introduced the Milestone system as a tool for summative evaluation. The organization allows individual programs autonomy in how evaluation and feedback are performed. In the past, questionnaire evaluations and informal verbal feedback were employed. However, with the advent of technology, these have taken a different shape in the form of crowdsourcing, mobile platforms, and simulation. Limited data are available on new methods, but studies show promise, citing low cost and a positive impact on resident education. No one "best approach" exists for evaluation and feedback. However, it is apparent that a multimodal approach based on the ACGME Milestones can be effective and aid in guiding programs.
Collapse
Affiliation(s)
- Sara Q Perkins
- Henry Ford Health System, 2799 W Grand Blvd, K9, Detroit, MI, 48202, USA
| | - Ali Dabaja
- Henry Ford Health System, 2799 W Grand Blvd, K9, Detroit, MI, 48202, USA
| | - Humphrey Atiemo
- Henry Ford Health System, 2799 W Grand Blvd, K9, Detroit, MI, 48202, USA.
| |
Collapse
|
25
|
Abstract
BACKGROUND Female surgeons are subjected to implicit bias throughout their careers. The evaluation of gender bias in training is warranted with increasing numbers of female trainees in colon and rectal surgery. OBJECTIVE This study aimed to evaluate gender bias in colon and rectal surgery training program operative experience. DESIGN This is a retrospective cohort study. SETTING The Association of Program Directors for Colon and Rectal Surgery robotic case log database contains operative details (procedure, attending surgeon, case percentage, and operative segments) completed by trainees as console surgeon for 2 academic years (2016-2017, 2017-2018). MAIN OUTCOME MEASURE The primary outcomes measured are the percentage of trainee console participation and the completion of total mesorectal excision. Resident and attending surgeon gender was recorded retrospectively. The cohort was separated into 4 groups based on resident and attending surgeon gender combination. Case volume, average console participation per case, and completion of total mesorectal excisions were compared for each group by using interaction regression analysis. RESULTS Fifty-two training programs participated, including 120 trainees and 190 attending surgeons. Forty-five (37.5%) trainees and 36 (18.9%) attending surgeons were women. The average number of cases per trainee was 23.27 per year for women and 28.15 per year for men (p = 0.19). Average console participation was 53.5% for women and 61.7% for men (p < 0.001). Male attending surgeons provided female trainees less console participation than male counterparts (52.1% vs 59.7%, p < 0.001). Female attending surgeons provided the same amount of console participation to female and male trainees (63.3% vs 61.8%, p = 0.62). Male trainees performed significantly more complete total mesorectal excision console cases than female trainees (57.16% vs 42.38%, p < 0.0001). LIMITATIONS The data are subject to self-reporting bias. CONCLUSIONS There is gender disparity in robotic operative experience in colon and rectal surgery training programs with less opportunity for console participation and less opportunity to complete total mesorectal excisions for female trainees. This trend should be highlighted and further evaluated to resolve this disparity. See Video Abstract at http://links.lww.com/DCR/B224.
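The "interaction regression analysis" mentioned in this abstract can be illustrated with a minimal sketch: an ordinary least-squares model with a trainee-by-attending gender interaction term. The synthetic data, column names, and built-in effect sizes below are illustrative assumptions, not study data.

```python
# Minimal sketch (not the study's analysis code): testing whether the effect of
# trainee gender on console participation depends on attending gender.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # synthetic cases
trainee_female = rng.integers(0, 2, n)
attending_female = rng.integers(0, 2, n)
# Loosely echo the reported pattern: a gap only when the attending is male
console_pct = (60
               - 7 * trainee_female * (1 - attending_female)
               + rng.normal(0, 10, n))
df = pd.DataFrame({"console_pct": console_pct,
                   "trainee_female": trainee_female,
                   "attending_female": attending_female})

# '*' expands to both main effects plus the trainee_female:attending_female interaction
model = smf.ols("console_pct ~ trainee_female * attending_female", data=df).fit()
print(model.summary())
```

A significant interaction coefficient in such a model would indicate that the gender gap in console participation differs by attending gender, which is the pattern the abstract reports.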
Collapse
|
26
|
Bendre HH, Rajender A, Barbosa PV, Wason SEL. Robotic dismembered pyeloplasty surgical simulation using a 3D-printed silicone-based model: development, face validation and crowdsourced learning outcomes assessment. J Robot Surg 2020; 14:897-902. [PMID: 32240498 DOI: 10.1007/s11701-020-01072-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2020] [Accepted: 03/22/2020] [Indexed: 12/01/2022]
Abstract
Ureteropelvic junction obstruction (UPJO) is an uncommonly encountered pathology, posing a challenge for resident training. We describe the development and face validation of a robotic pyeloplasty simulation using a 3D-printed silicone-based model of UPJO for surgical training, in combination with crowdsourced scoring to objectively assess performance and learning outcomes. The organs were created using 3D modeling software and printed using a silicone-based material by Lazarus 3D, LLC. They were secured in a laparoscopic box trainer and the robotic system was docked. Eight residents and three faculty each performed two robotic-assisted right dismembered pyeloplasties on separate occaisions. Face validity was evaluated on a 5-point Likert scale. Crowd-Sourced Assessment of Technical Skills (C-SATS Inc.) scored surgical performance using the Global Evaluative Assessment of Robotic Skills (GEARS) criteria, based on video review of each simulation. All participants completed the simulation twice with fully patent anastomoses. Average time to complete the first and second trials was 44.4 min and 43.2 min, respectively. The average GEARS score was 17.1 and 17.6 for the first and second trials respectively. Participants improved on average in all 5 GEARS categories, with significant improvement in depth perception (p = 0.006). The model received mean scores (out of 5) of 4.36 for aesthetics, 4.18 for overall feel, 3.55 for realism, 4.72 for usability, and 4.72 for suturability. Residents had a significant increase in confidence between initial and final surveys on a 5-point Likert Scale: 1.63 vs. 2.38 (p = 0.03). Using 3D-printed silicone-based models, participants completed robotic-assisted dismembered pyeloplasties for training and skill acquisition. We demonstrated face validity of the simulation, which was also found to improve participant speed and significantly improve resident confidence. Crowdsourced assessment demonstrated significant improvement in depth perception.
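As an illustration of how per-category improvement between the two trials might be tested, the sketch below applies a paired Wilcoxon signed-rank test to toy GEARS subscores for the 11 participants. The abstract does not name the statistical test actually used, and the five category labels and scores are assumptions, so this is only a sketch of one reasonable analysis.

```python
# Minimal sketch (assumed test and toy data): paired comparison of first- vs
# second-trial GEARS subscores across 11 participants, one test per category.
import numpy as np
from scipy.stats import wilcoxon

categories = ["depth_perception", "bimanual_dexterity", "efficiency",
              "force_sensitivity", "robotic_control"]  # assumed category names
rng = np.random.default_rng(1)

for cat in categories:
    trial1 = rng.integers(2, 5, 11)                            # toy subscores, trial 1
    trial2 = np.clip(trial1 + rng.integers(0, 2, 11), 1, 5)    # slight improvement, trial 2
    stat, p = wilcoxon(trial1, trial2, zero_method="zsplit")   # zero differences split
    print(f"{cat:>20s}: median change {np.median(trial2 - trial1):+.1f}, p = {p:.3f}")
```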
Collapse
Affiliation(s)
- Hersh H Bendre
- Boston Medical Center, Boston University School of Medicine, 725 Albany St., Suite 3B, Boston, MA, 02118, USA
| | - Archana Rajender
- Boston Medical Center, Boston University School of Medicine, 725 Albany St., Suite 3B, Boston, MA, 02118, USA
| | - Philip V Barbosa
- Boston Medical Center, Boston University School of Medicine, 725 Albany St., Suite 3B, Boston, MA, 02118, USA
| | - Shaun E L Wason
- Boston Medical Center, Boston University School of Medicine, 725 Albany St., Suite 3B, Boston, MA, 02118, USA.
| |
Collapse
|
27
|
Goldenberg M, Ordon M, Honey JRD, Andonian S, Lee JY. Objective Assessment and Standard Setting for Basic Flexible Ureterorenoscopy Skills Among Urology Trainees Using Simulation-Based Methods. J Endourol 2020; 34:495-501. [PMID: 32059622 DOI: 10.1089/end.2019.0626] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/31/2023] Open
Abstract
Objective: To objectively assess the performance of graduating urology residents performing flexible ureterorenoscopy (fURS) using a simulation-based model and to set an entrustability standard or benchmark for use across the educational spectrum. Methods: Chief urology residents and attending endourologists performed a standardized fURS task (ureterorenoscopy and repositioning of stones) using a Boston Scientific LithoVue ureteroscope on a Cook Medical URS model. All performances were video-recorded and blindly scored by both endourology experts and crowd-workers (C-SATS) using the Ureteroscopic Global Rating Scale, plus an overall entrustability score. Validity evidence supporting the scores was collected and categorized. The Borderline Group (BG) method was used to set absolute performance standards for the expert and crowdsourced ratings. Results: A total of 44 participants (40 chief residents, 4 faculty) completed testing. Eighty-three percent of participants had performed >50 fURS cases at the time of the study. Only 47.7% (mean score 12.6/20) and 61.4% (mean score 12.4/20) of participants were deemed "entrustable" by experts and crowd-workers, respectively. The BG method produced entrustability benchmarks of 11.8/20 for experts and 11.4/20 for crowd-worker ratings, resulting in pass rates of 56.9% and 61.4%. Conclusion: Using absolute standard-setting methods, benchmark scores were set to identify trainees who could safely carry out fURS in the simulated setting. Only 60% of residents in our cohort were rated as entrustable. These findings support the use of benchmarks to identify trainees requiring remediation earlier.
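The Borderline Group method referenced here is straightforward to illustrate: the cut score is the mean rating-scale total of the participants whose global judgment was "borderline", and the pass rate is the share of participants at or above that cut. The sketch below uses toy scores and judgments, not study data.

```python
# Minimal sketch of Borderline Group standard setting (toy data, not study data).
import numpy as np

# Toy Ureteroscopic GRS totals (out of 20) and global entrustability judgments
grs_total = np.array([9, 11, 12, 13, 14, 15, 16, 12, 10, 17, 11, 13])
global_judgment = np.array(["fail", "borderline", "borderline", "pass", "pass", "pass",
                            "pass", "borderline", "fail", "pass", "borderline", "pass"])

borderline_scores = grs_total[global_judgment == "borderline"]
cut_score = borderline_scores.mean()                 # the BG benchmark
pass_rate = (grs_total >= cut_score).mean() * 100    # share of cohort at or above the cut

print(f"BG cut score: {cut_score:.1f}/20, pass rate: {pass_rate:.1f}%")
```

Running the same procedure separately on expert and crowd-worker ratings is what yields two benchmarks and two pass rates, as in the abstract.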
Collapse
Affiliation(s)
- Mitchell Goldenberg
- Division of Urology, Department of Surgery, St. Michael's Hospital, University of Toronto, Toronto, Canada
| | - Michael Ordon
- Division of Urology, Department of Surgery, St. Michael's Hospital, University of Toronto, Toronto, Canada
| | - John R D'A Honey
- Division of Urology, Department of Surgery, St. Michael's Hospital, University of Toronto, Toronto, Canada
| | - Sero Andonian
- Division of Urology, McGill University Health Centre, McGill University, Quebec, Canada
| | - Jason Y Lee
- Division of Urology, Department of Surgery, University Health Network-Toronto General Hospital, University of Toronto, Toronto, Canada
| |
Collapse
|
28
|
Thomas GW, Long S, Tatum M, Kowalewski T, Mattioli D, Marsh JL, Kowalski HR, Karam MD, Bechtold JE, Anderson DD. A Vision for Using Simulation & Virtual Coaching to Improve the Community Practice of Orthopedic Trauma Surgery. THE IOWA ORTHOPAEDIC JOURNAL 2020; 40:25-34. [PMID: 32742205 PMCID: PMC7368513] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
BACKGROUND Many orthopedic surgeries involve the challenging integration of fluoroscopic image interpretation with skillful tool manipulation to enable procedures to be performed through less invasive approaches. Simulation has proved beneficial for teaching and improving these skills for residents, but similar benefits have not yet been realized for practicing orthopedic surgeons. A vision is presented to elevate community orthopedic practice and improve patient safety by advancing the use of simulators for training and assessing surgical skills. METHODS Key elements of this vision that are established include the following: (1) methods for the objective and rigorous assessment of the performance of practicing surgeons now exist; (2) simulators are sufficiently mature and sophisticated that practicing surgeons will use them; and (3) practicing surgeons can improve their performance with appropriate feedback and coaching. RESULTS Data presented indicate that surgical performance can be adequately and comparably measured using structured observations made by experts or non-expert crowds, with the crowdsourcing approach being more expedient and less expensive. Rigorous measures of the surgical result and intermediate objectives obtained semi-automatically from intra-operative fluoroscopic image sequences can distinguish performances of experts from novices. Experience suggests that practicing orthopedic surgeons are open to and can be constructively engaged by a family of mature simulators as a means to evaluate and improve their surgical skills. CONCLUSIONS The results presented support our contention that new objective assessment measures are sufficient for evaluating the performance of working surgeons. The novel class of orthopedic surgical simulators available was tested and approved by practicing physicians. There exists a clear opportunity to combine purpose-designed simulator exercises with virtual coaching to help practicing physicians retain, retrain, and improve their technical skills. This will ultimately reduce cost, increase the quality of care, and decrease complication rates. CLINICAL RELEVANCE This vision articulates a means to boost the confidence of practitioners and ease their anxiety so that they perform impactful procedures more often in community hospitals, which promises to improve treatment and reduce the cost of care while keeping patients closer to their homes and families.
Collapse
Affiliation(s)
- Geb W. Thomas
- Department of Industrial and Systems Engineering
- Department of Orthopedics and Rehabilitation
| | - Steven Long
- Department of Orthopedics and Rehabilitation
- Department of Biomedical Engineering
| | - Marcus Tatum
- Department of Industrial and Systems Engineering
| | | | | | | | | | | | | | - Donald D. Anderson
- Department of Industrial and Systems Engineering
- Department of Orthopedics and Rehabilitation
- Department of Biomedical Engineering
| |
Collapse
|
29
|
Novel evaluation of surgical activity recognition models using task-based efficiency metrics. Int J Comput Assist Radiol Surg 2019; 14:2155-2163. [PMID: 31267333 DOI: 10.1007/s11548-019-02025-w] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2019] [Accepted: 06/26/2019] [Indexed: 01/14/2023]
Abstract
PURPOSE Surgical task-based metrics (rather than entire procedure metrics) can be used to improve surgeon training and, ultimately, patient care through focused training interventions. Machine learning models to automatically recognize individual tasks or activities are needed to overcome the otherwise manual effort of video review. Traditionally, these models have been evaluated using frame-level accuracy. Here, we propose evaluating surgical activity recognition models by their effect on task-based efficiency metrics. In this way, we can determine when models have achieved adequate performance for providing surgeon feedback via metrics from individual tasks. METHODS We propose a new CNN-LSTM model, RP-Net-V2, to recognize the 12 steps of robotic-assisted radical prostatectomies (RARP). We evaluated our model both in terms of conventional methods (e.g., Jaccard Index, task boundary accuracy) as well as novel ways, such as the accuracy of efficiency metrics computed from instrument movements and system events. RESULTS Our proposed model achieves a Jaccard Index of 0.85 thereby outperforming previous models on RARP. Additionally, we show that metrics computed from tasks automatically identified using RP-Net-V2 correlate well with metrics from tasks labeled by clinical experts. CONCLUSION We demonstrate that metrics-based evaluation of surgical activity recognition models is a viable approach to determine when models can be used to quantify surgical efficiencies. We believe this approach and our results illustrate the potential for fully automated, postoperative efficiency reports.
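The contrast this abstract draws, frame-level agreement versus the downstream accuracy of task-based efficiency metrics, can be made concrete with a small sketch. It computes a per-task frame-level Jaccard index and then checks how well a simple per-task metric (duration) derived from predicted labels tracks the same metric from expert labels. The toy labels, the 5% corruption rate, and the choice of duration as the efficiency metric are assumptions, not the paper's RP-Net-V2 pipeline.

```python
# Minimal sketch of the two evaluation views (toy frame-wise task labels).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_tasks = 12
durations = rng.integers(100, 400, n_tasks)        # toy per-task lengths in frames
truth = np.repeat(np.arange(n_tasks), durations)   # expert task label for every frame
pred = truth.copy()
noise = rng.random(truth.size) < 0.05              # corrupt 5% of frames
pred[noise] = rng.integers(0, n_tasks, noise.sum())

def jaccard_per_task(y_true, y_pred, n_tasks):
    return np.array([((y_true == t) & (y_pred == t)).sum() /
                     ((y_true == t) | (y_pred == t)).sum()
                     for t in range(n_tasks)])

print("Mean frame-level Jaccard:", jaccard_per_task(truth, pred, n_tasks).mean().round(3))

# Efficiency-metric view: does a per-task metric (here, simply task duration) computed
# from the predicted segmentation track the same metric from the expert labels?
dur_true = np.array([(truth == t).sum() for t in range(n_tasks)])
dur_pred = np.array([(pred == t).sum() for t in range(n_tasks)])
rho, _ = spearmanr(dur_true, dur_pred)
print(f"Spearman rho between predicted- and expert-derived task durations: {rho:.2f}")
```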
Collapse
|
30
|
Tucker JD, Day S, Tang W, Bayus B. Crowdsourcing in medical research: concepts and applications. PeerJ 2019; 7:e6762. [PMID: 30997295 PMCID: PMC6463854 DOI: 10.7717/peerj.6762] [Citation(s) in RCA: 80] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2018] [Accepted: 03/11/2019] [Indexed: 12/23/2022] Open
Abstract
Crowdsourcing shifts medical research from a closed environment to an open collaboration between the public and researchers. We define crowdsourcing as an approach to problem solving which involves an organization having a large group attempt to solve a problem or part of a problem, then sharing solutions. Crowdsourcing allows large groups of individuals to participate in medical research through innovation challenges, hackathons, and related activities. The purpose of this literature review is to examine the definition, concepts, and applications of crowdsourcing in medicine. This multi-disciplinary review defines crowdsourcing for medicine, identifies conceptual antecedents (collective intelligence and open source models), and explores implications of the approach. Several critiques of crowdsourcing are also examined. Although several crowdsourcing definitions exist, there are two essential elements: (1) having a large group of individuals, including those with skills and those without skills, propose potential solutions; (2) sharing solutions through implementation or open access materials. The public can be a central force in contributing to formative, pre-clinical, and clinical research. A growing evidence base suggests that crowdsourcing in medicine can result in high-quality outcomes, broad community engagement, and more open science.
Collapse
Affiliation(s)
- Joseph D. Tucker
- Institute for Global Health and Infectious Diseases, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Faculty of Infectious and Tropical Diseases, London School of Hygiene & Tropical Medicine, University of London, London, UK
- Social Entrepreneurship to Spur Health (SESH) Global, Guangzhou, China
| | - Suzanne Day
- Institute for Global Health and Infectious Diseases, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Department of Social Medicine, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| | - Weiming Tang
- Institute for Global Health and Infectious Diseases, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Department of STD Control, Dermatology Hospital of Southern Medical University, Guangzhou, China
| | - Barry Bayus
- Kenan-Flagler School of Business, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
| |
Collapse
|
31
|
St John-Matthews J, Newton PM, Grant AJ, Robinson L. Crowdsourcing in health professions education: What radiography educators can learn from other disciplines. Radiography (Lond) 2019; 25:164-169. [PMID: 30955690 DOI: 10.1016/j.radi.2018.11.006] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/04/2018] [Revised: 11/15/2018] [Accepted: 11/20/2018] [Indexed: 10/27/2022]
Abstract
OBJECTIVES Crowdsourcing works through an institution outsourcing a function normally performed by an employee or group of individuals. Within a crowdsource, users, known as the crowd, form a community that voluntarily undertakes a task involving the pooling of knowledge resources. A literature review was undertaken to identify how the tool is being used in health professions education and its potential for use in radiography education. KEY FINDINGS Seventeen papers were returned. The literature identified was assessed against an established crowdsourcing definition. Reviewing these papers yielded four themes for discussion: student selection procedures, lesson planning, teaching materials, and assessment. CONCLUSION Crowdsourcing is associated with innovative activities through collective solution seeking via a large network of users. It is increasingly being adopted in healthcare training and may be transferable to educational activities within the field of radiography education.
Collapse
Affiliation(s)
- J St John-Matthews
- Department of Allied Health Professions, Faculty of Health and Applied Sciences, University of the West of England, Bristol, BS16 1DD, UK.
| | - P M Newton
- Research in Health Professions Education, Swansea University Medical School, Swansea, Wales, SA2 8PP, UK
| | - A J Grant
- Research in Health Professions Education, Swansea University Medical School, Swansea, Wales, SA2 8PP, UK
| | - L Robinson
- School of Health Science, Frederick Road Campus, University of Salford, Allerton Building, M6 6PU, UK
| |
Collapse
|
32
|
|