1. Swails JL, Gadgil MA, Goodrum H, Gupta R, Rahbar MH, Bernstam EV. Role of faculty characteristics in failing to fail in clinical clerkships. Med Educ 2022;56:634-640. PMID: 34983083. DOI: 10.1111/medu.14725.
Abstract
INTRODUCTION: In the context of competency-based medical education, poor student performance must be accurately documented to allow learners to improve and to protect the public. However, faculty may be reluctant to provide evaluations that could be perceived as negative, and clerkship directors report that some students pass who should have failed. Student perception of faculty may be considered in faculty promotion, teaching awards, and leadership positions. Faculty of lower academic rank may therefore perceive themselves to be more vulnerable and be less likely to document poor student performance. This study investigated faculty characteristics associated with low performance evaluations (LPEs).
METHOD: The authors analysed individual faculty evaluations of medical students who completed the third-year clerkships over 15 years, using a generalised mixed regression model to assess the association of evaluator academic rank with the likelihood of an LPE. Other available factors related to experience or academic vulnerability were incorporated, including faculty age, race, ethnicity, and gender.
RESULTS: The authors identified 50 120 evaluations by 585 faculty on 3447 students between January 2007 and April 2021. Faculty were more likely to give LPEs at the midpoint (4.9%) than at the final (1.6%) evaluation (odds ratio [OR] = 4.004, 95% confidence interval [CI] [3.59, 4.53]; p < 0.001). The likelihood of an LPE decreased significantly during the 15-year study period (OR = 0.94 [0.90, 0.97]; p < 0.01). Full professors were significantly more likely to give an LPE than assistant professors (OR = 1.62 [1.08, 2.43]; p = 0.02). Women were more likely to give LPEs than men (OR = 1.88 [1.37, 2.58]; p < 0.01). Other faculty characteristics, including race and experience, were not associated with LPEs.
CONCLUSIONS: The number of LPEs decreased over time, and senior faculty were more likely than assistant professors to document poor medical student performance.
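The odds ratios and Wald confidence intervals reported above can be illustrated with a minimal calculation. The study itself used a generalised mixed regression model, so the following is only a sketch of the marginal 2×2-table version, with invented counts rather than the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: LPEs given by full vs. assistant professors
or_, lo, hi = odds_ratio_ci(a=30, b=970, c=19, d=981)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

A CI that excludes 1 (as for the full-professor OR of 1.62 [1.08, 2.43] above) corresponds to a statistically significant association at the 5% level.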
Affiliation(s)
- Jennifer L Swails
- Department of Internal Medicine, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, Texas, USA
- Meghana A Gadgil
- Division of Hospital Medicine, San Francisco General Hospital, San Francisco, California, USA
- Division of Health Policy and Management, School of Public Health, University of California, Berkeley, Berkeley, California, USA
- Heath Goodrum
- School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, Texas, USA
- Resmi Gupta
- Division of Clinical and Translational Sciences, Department of Internal Medicine, McGovern Medical School, Houston, Texas, USA
- Mohammad H Rahbar
- Division of Clinical and Translational Sciences, Department of Internal Medicine, McGovern Medical School, Houston, Texas, USA
- Department of Epidemiology, Human Genetics, and Environmental Sciences, School of Public Health, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Elmer V Bernstam
- Department of Internal Medicine, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, Texas, USA
- School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, Texas, USA
2. Kumar A, Goodrum H, Kim A, Stender C, Roberts K, Bernstam EV. Closing the loop: automatically identifying abnormal imaging results in scanned documents. J Am Med Inform Assoc 2022;29:831-840. PMID: 35146510. PMCID: PMC9714594. DOI: 10.1093/jamia/ocac007.
Abstract
OBJECTIVES: Scanned documents (SDs), while common in electronic health records and potentially rich in clinically relevant information, rarely fit well with clinician workflow. Here, we identify scanned imaging reports requiring follow-up with high recall and practically useful precision.
MATERIALS AND METHODS: We focused on identifying imaging findings for 3 common causes of malpractice claims: (1) potentially malignant breast (mammography) and (2) lung (chest computed tomography [CT]) lesions, and (3) long-bone fracture (X-ray) reports. We trained our ClinicalBERT-based pipeline on existing typed/dictated reports classified manually or using ICD-10 codes, evaluated it on a test set of manually classified SDs, and compared it against string matching (the baseline approach).
RESULTS: A total of 393 mammogram, 305 chest CT, and 683 bone X-ray reports were manually reviewed. The string-matching approach had an F1 of 0.667. For mammograms, chest CTs, and bone X-rays, respectively: models trained on manually classified training data and optimized for F1 reached F1 scores of 0.900, 0.905, and 0.817, while separate models optimized for recall achieved a recall of 1.000 with precisions of 0.727, 0.518, and 0.275. Models trained on ICD-10-labelled data and optimized for F1 achieved F1 scores of 0.647, 0.830, and 0.643, while those optimized for recall achieved a recall of 1.0 with precisions of 0.407, 0.683, and 0.358.
DISCUSSION: Our pipeline can identify abnormal reports with potentially useful performance and thus decrease the manual effort required to screen for abnormal findings that require follow-up.
CONCLUSION: It is possible to automatically identify clinically significant abnormalities in SDs with high recall and practically useful precision in a generalizable and minimally laborious way.
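The recall-versus-precision trade-off described above comes from where the decision threshold is placed on the classifier's output scores. A minimal sketch of "optimizing for recall" — choosing the highest threshold that still flags every true positive, then reporting the precision that results — using toy scores rather than the paper's ClinicalBERT outputs:

```python
def threshold_for_full_recall(scores, labels):
    """Return the largest score threshold at which every positive
    example is still flagged (recall = 1.0), plus the precision
    achieved at that threshold."""
    pos_scores = [s for s, y in zip(scores, labels) if y == 1]
    t = min(pos_scores)  # lowest-scoring true positive sets the bar
    flagged = [y for s, y in zip(scores, labels) if s >= t]
    precision = sum(flagged) / len(flagged)
    return t, precision

# Toy example: 3 abnormal (1) and 5 normal (0) report scores
scores = [0.95, 0.80, 0.60, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    0,    0,    0,    0]
t, p = threshold_for_full_recall(scores, labels)
print(t, p)  # threshold 0.55; precision 3/4 = 0.75
```

This mirrors the pattern in the results above: forcing recall to 1.0 (missing no abnormal report) typically costs precision, so clinicians review some false positives in exchange for not missing follow-up findings.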
Affiliation(s)
- Akshat Kumar
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Heath Goodrum
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Ashley Kim
- McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Carly Stender
- McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Kirk Roberts
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Elmer V Bernstam
- Corresponding author: Elmer V. Bernstam, MD, MSE, School of Biomedical Informatics, The University of Texas Health Science Center at Houston, 7000 Fannin Street, Suite 600, Houston, TX 77030, USA
3. Bernstam EV, Shireman PK, Meric-Bernstam F, Zozus MN, Jiang X, Brimhall BB, Windham AK, Schmidt S, Visweswaran S, Ye Y, Goodrum H, Ling Y, Barapatre S, Becich MJ. Artificial intelligence in clinical and translational science: successes, challenges and opportunities. Clin Transl Sci 2022;15:309-321. PMID: 34706145. PMCID: PMC8841416. DOI: 10.1111/cts.13175.
Abstract
Artificial intelligence (AI) is transforming many domains, including finance, agriculture, defense, and biomedicine. In this paper, we focus on the role of AI in clinical and translational research (CTR), including preclinical research (T1), clinical research (T2), clinical implementation (T3), and public (or population) health (T4). Given the rapid evolution of AI in CTR, we present three complementary perspectives: (1) scoping literature review, (2) survey, and (3) analysis of federally funded projects. For each CTR phase, we addressed challenges, successes, failures, and opportunities for AI. We surveyed Clinical and Translational Science Award (CTSA) hubs regarding AI projects at their institutions. Nineteen of 63 CTSA hubs (30%) responded to the survey. The most common funding source (48.5%) was the federal government. The most common translational phase was T2 (clinical research, 40.2%). Clinicians were the intended users in 44.6% of projects and researchers in 32.3% of projects. The most common computational approaches were supervised machine learning (38.6%) and deep learning (34.2%). The number of projects steadily increased from 2012 to 2020. Finally, we analyzed 2604 AI projects at CTSA hubs using the National Institutes of Health Research Portfolio Online Reporting Tools (RePORTER) database for 2011-2019. We mapped available abstracts to medical subject headings and found that nervous system (16.3%) and mental disorders (16.2%) were the most common topics addressed. From a computational perspective, big data (32.3%) and deep learning (30.0%) were most common. This work represents a snapshot in time of the role of AI in the CTSA program.
Affiliation(s)
- Elmer V. Bernstam
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Division of General Internal Medicine, Department of Internal Medicine, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Paula K. Shireman
- Departments of Surgery and Microbiology, Immunology & Molecular Genetics, University of Texas Health San Antonio, San Antonio, Texas, USA
- University Health, San Antonio, Texas, USA
- South Texas Veterans Health Care System, San Antonio, Texas, USA
- Funda Meric-Bernstam
- Department of Investigational Cancer Therapeutics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA
- Meredith N. Zozus
- Division of Clinical Research Informatics, Department of Population Health Sciences, University of Texas Health San Antonio, San Antonio, Texas, USA
- Xiaoqian Jiang
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Bradley B. Brimhall
- University Health, San Antonio, Texas, USA
- Department of Pathology, University of Texas Health San Antonio, San Antonio, Texas, USA
- Ashley K. Windham
- University Health, San Antonio, Texas, USA
- Department of Pathology, University of Texas Health San Antonio, San Antonio, Texas, USA
- Susanne Schmidt
- Department of Population Health Sciences, University of Texas Health San Antonio, San Antonio, Texas, USA
- Shyam Visweswaran
- Department of Biomedical Informatics, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Ye Ye
- Department of Biomedical Informatics, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Heath Goodrum
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Yaobin Ling
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Seemran Barapatre
- Department of Biomedical Informatics, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Michael J. Becich
- Department of Biomedical Informatics, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
4. Goodrum H, Roberts K, Bernstam EV. Automatic classification of scanned electronic health record documents. Int J Med Inform 2020;144:104302. PMID: 33091829. DOI: 10.1016/j.ijmedinf.2020.104302.
Abstract
OBJECTIVES: Electronic health records (EHRs) contain scanned documents from a variety of sources such as identification cards, radiology reports, clinical correspondence, and many other document types. We describe the distribution of scanned documents at one health institution, along with the design and evaluation of a system that categorizes documents into clinically relevant and non-clinically relevant categories as well as further sub-classifications. Our objective is to demonstrate that text classification systems can accurately classify scanned documents.
METHODS: We extracted text using optical character recognition (OCR). We then created and evaluated multiple text classification machine learning models, including both "bag of words" and deep learning approaches. We evaluated the system on three different levels of classification, using both the entire document and its individual pages as input. Finally, we compared the effects of different text processing methods.
RESULTS: A deep learning model using ClinicalBERT performed best. This model distinguished between clinically relevant and non-clinically relevant documents with an accuracy of 0.973, between intermediate sub-classifications with an accuracy of 0.949, and between individual classes with an accuracy of 0.913.
DISCUSSION: Within the EHR, some document categories such as "external medical records" may contain hundreds of scanned pages without clear document boundaries. Without further sub-classification, clinicians must view every page or risk missing clinically relevant information. Machine learning can automatically classify these scanned documents to reduce clinician burden.
CONCLUSION: Using machine learning applied to OCR-extracted text has the potential to accurately identify clinically relevant scanned content within EHRs.
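The "bag of words" baseline mentioned in the methods can be sketched very compactly. The toy classifier below is not the paper's model (their best performer was ClinicalBERT) and the snippets are invented stand-ins for OCR-extracted text, but it shows the basic idea of classifying scanned-document text into clinically relevant vs. non-clinically relevant categories with a multinomial Naive Bayes over word counts:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """Fit a multinomial Naive Bayes: per-class word counts plus priors."""
    word_counts = defaultdict(Counter)
    class_counts = Counter(labels)
    vocab = set()
    for text, y in zip(docs, labels):
        words = text.lower().split()
        word_counts[y].update(words)
        vocab.update(words)
    return word_counts, class_counts, vocab

def predict_nb(model, text):
    """Score each class with add-one-smoothed log-likelihoods."""
    word_counts, class_counts, vocab = model
    n = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for y in class_counts:
        lp = math.log(class_counts[y] / n)  # class prior
        total = sum(word_counts[y].values())
        for w in text.lower().split():
            if w in vocab:
                lp += math.log((word_counts[y][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = y, lp
    return best

# Invented snippets standing in for OCR output from scanned pages
docs = ["chest ct impression nodule right lobe",
        "radiology report fracture distal radius",
        "insurance card member id group number",
        "driver license identification card"]
labels = ["clinical", "clinical", "non-clinical", "non-clinical"]
model = train_nb(docs, labels)
print(predict_nb(model, "ct report nodule"))  # -> "clinical"
print(predict_nb(model, "member id card"))    # -> "non-clinical"
```

In practice OCR noise (misrecognized characters, broken words) degrades such lexical features, which is part of why contextual models like ClinicalBERT outperformed bag-of-words approaches in the study.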
Affiliation(s)
- Heath Goodrum
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, TX, United States
- Kirk Roberts
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, TX, United States
- Elmer V Bernstam
- School of Biomedical Informatics, The University of Texas Health Science Center at Houston, TX, United States
- Division of General Internal Medicine, McGovern Medical School, The University of Texas Health Science Center at Houston, TX, United States
5. Jenkinson AO, Shah K, Goodrum H. Locator spoon. Br J Oral Maxillofac Surg 2017;56:156-157. PMID: 29274985. DOI: 10.1016/j.bjoms.2017.11.017.
Affiliation(s)
- K Shah
- Morriston Hospital, Swansea, SA6 4NL
- H Goodrum
- Maxillofacial Laboratory, Morriston Hospital, Swansea, SA6 4NL
6. Goodson A, Evans P, Goodrum H, Sugar A, Kittur M. Custom-made fibular "cradle" plate to optimise bony height, contour of the lower border, and length of the pedicle in reconstruction of the mandible. Br J Oral Maxillofac Surg 2017;55:423-424. DOI: 10.1016/j.bjoms.2016.12.011.