1
Behar-Horenstein LS, Suiter S, Snyder F, Laurila K. Consensus Building to Inform Common Evaluation Metrics for the Comprehensive Partnerships to Advance Cancer Health Equity (CPACHE) Program. J Cancer Educ 2023;38:231-239. PMID: 34741221; PMCID: PMC9102223; DOI: 10.1007/s13187-021-02103-1.
Abstract
Common measures facilitate the standardization of assessment practices. These types of measures are needed to develop instruments that can be used to assess the overall effectiveness of the U54 Comprehensive Partnerships to Advance Cancer Health Equity (CPACHE) funding mechanism. Developing common measures requires a multi-phase process. Stakeholders used the nominal group technique, a consensus development process, and the Grid-Enabled Measures (GEM) platform to identify evaluation constructs and measures of those constructs. Use of these instruments will ensure the implementation of standardized data elements, facilitate data integration, enhance the quality of evaluation reporting to the National Cancer Institute, foster comparative analyses across centers, and support the national assessment of the CPACHE program.
Affiliation(s)
- Linda S Behar-Horenstein
- Department of Human Development and Organizational Studies, University of Florida, Gainesville, FL, USA.
- Department of Human Development and Organizational Studies, University of Florida, 7916 Monarch Ct, Delray Beach, FL, 33446, USA.
- Sarah Suiter
- Department of Human and Organizational Development, Vanderbilt University, Nashville, TN, USA
- Kelly Laurila
- Department of Anthropology, Northern Arizona University, Flagstaff, AZ, USA
2
Yu F, Patel T, Carnegie A, Dave G. Evaluating the impact of a CTSA program from 2008 to 2021 through bibliometrics, social network analysis, and altmetrics. J Clin Transl Sci 2023;7:e44. PMID: 36845314; PMCID: PMC9947612; DOI: 10.1017/cts.2022.530.
Abstract
Introduction: We evaluate a CTSA program hub by applying bibliometrics, social network analysis (SNA), and altmetrics, and examine the changes in research productivity, citation impact, research collaboration, and CTSA-supported research topics since our pilot study in 2017. Methods: The sampled data included North Carolina Translational and Clinical Science Institute (NC TraCS)-supported publications produced between September 2008 and March 2021. We applied measures and metrics from bibliometrics, SNA, and altmetrics to the dataset. In addition, we analyzed research topics and correlations between the different metrics. Results: 1154 NC TraCS-supported publications had generated more than 53,560 citations by April 2021. The average cites per year and mean relative citation ratio (RCR) of these publications improved from 33 and 2.26 in 2017 to 48 and 2.58 in 2021. The number of UNC units involved in the most-published authors' collaboration network increased from 7 (2017) to 10 (2021), and NC TraCS-supported co-authorship involved 61 NC organizations. PlumX metrics identified the articles with the highest altmetrics scores. About 96% of NC TraCS-supported publications scored above the average SciVal Topic Prominence Percentile; the average approximate potential to translate of the included publications was 54.2%; and 177 publications addressed health disparity issues. Bibliometric measures (e.g., citation counts, RCR) and PlumX metrics (i.e., Citations, Captures, and Social Media) were positively correlated (p < .05). Conclusion: Bibliometrics, SNA, and altmetrics offer distinctive but related perspectives for examining CTSA research performance and longitudinal growth, especially at the individual program hub level, and these perspectives can help CTSAs build program foci.
Affiliation(s)
- Fei Yu
- Health Sciences Library, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Tanha Patel
- North Carolina Translational and Clinical Institute, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Andrea Carnegie
- North Carolina Translational and Clinical Institute, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Gaurav Dave
- North Carolina Translational and Clinical Institute, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- School of Medicine, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
3
The NJ Alliance for Clinical and Translational Science (NJ ACTS) experience: Responding at “warp speed” to COVID-19. J Clin Transl Sci 2022;6:e62. PMID: 35720969; PMCID: PMC9161045; DOI: 10.1017/cts.2022.383.
4
Sampson R, Shapiro S, He W, Denmark S, Kirchoff K, Hutson K, Paranal R, Forney L, McGhee K, Harvey J. An integrated approach to improve clinical trial efficiency: Linking a clinical trial management system into the Research Integrated Network of Systems. J Clin Transl Sci 2022;6:e63. PMID: 35720964; PMCID: PMC9161043; DOI: 10.1017/cts.2022.382.
Abstract
Low-accruing clinical trials delay translation of research breakthroughs into the clinic, expose participants to risk without providing meaningful clinical insight, increase the cost of therapies, and waste limited resources. By tracking patient accrual, Clinical and Translational Science Awards hubs can identify at-risk studies and provide them the support needed to reach recruitment goals and maintain financial solvency. However, tracking accrual has proved challenging because relevant patient- and protocol-level data often reside in siloed systems. To address this fragmentation, in September 2020 the South Carolina Clinical and Translational Research Institute, with an academic home at the Medical University of South Carolina, implemented a clinical trial management system (CTMS), with its access to patient-level data, and incorporated it into its Research Integrated Network of Systems (RINS), which links study-level data across disparate systems relevant to clinical research. Within the first year of CTMS implementation, 324 protocols were funneled through CTMS/RINS, with more than 2600 participants enrolled. Integrated data from CTMS/RINS have enabled near-real-time assessment of patient accrual and accelerated reimbursement from industry sponsors. For institutions with bioinformatics or programming capacity, the CTMS/RINS integration provides a powerful model for tracking and improving clinical trial efficiency, compliance, and cost-effectiveness.
Affiliation(s)
- Royce Sampson
- South Carolina Clinical & Translational Research Institute, Medical University of South Carolina, Charleston, SC, USA
- Office of Clinical Research, Office of the Vice President for Research, Medical University of South Carolina, Charleston, SC, USA
- Department of Psychiatry and Behavioral Sciences, Medical University of South Carolina, Charleston, SC, USA
- Steve Shapiro
- Office of Clinical Research, Office of the Vice President for Research, Medical University of South Carolina, Charleston, SC, USA
- Wenjun He
- South Carolina Clinical & Translational Research Institute, Medical University of South Carolina, Charleston, SC, USA
- Signe Denmark
- South Carolina Clinical & Translational Research Institute, Medical University of South Carolina, Charleston, SC, USA
- Office of Clinical Research, Office of the Vice President for Research, Medical University of South Carolina, Charleston, SC, USA
- Katie Kirchoff
- South Carolina Clinical & Translational Research Institute, Medical University of South Carolina, Charleston, SC, USA
- Biomedical Informatics Center, Medical University of South Carolina, Charleston, SC, USA
- Kyle Hutson
- South Carolina Clinical & Translational Research Institute, Medical University of South Carolina, Charleston, SC, USA
- Office of Clinical Research, Office of the Vice President for Research, Medical University of South Carolina, Charleston, SC, USA
- Rechelle Paranal
- South Carolina Clinical & Translational Research Institute, Medical University of South Carolina, Charleston, SC, USA
- Leila Forney
- South Carolina Clinical & Translational Research Institute, Medical University of South Carolina, Charleston, SC, USA
- Office of Clinical Research, Office of the Vice President for Research, Medical University of South Carolina, Charleston, SC, USA
- Kimberly McGhee
- South Carolina Clinical & Translational Research Institute, Medical University of South Carolina, Charleston, SC, USA
- Academic Affairs Faculty, Medical University of South Carolina, Charleston, SC, USA
- Jillian Harvey
- South Carolina Clinical & Translational Research Institute, Medical University of South Carolina, Charleston, SC, USA
- Department of Healthcare Leadership and Management, Medical University of South Carolina, Charleston, SC, USA
5
A snapshot of U.S. IRB review of COVID-19 research in the early pandemic. J Clin Transl Sci 2021;5:e205. PMID: 34956653; PMCID: PMC8692853; DOI: 10.1017/cts.2021.848.
Abstract
Background/Objective: Along with the greater research enterprise, Institutional Review Boards (IRBs) had to adapt quickly to the COVID-19 pandemic, reviewing and overseeing COVID-related research while navigating strict public health measures and a workforce largely relegated to working from home. Our objectives were to measure adjustments to standard IRB review processes and IRB turnaround time, and to document any novel ethical issues encountered. Methods: Structured data requests were sent to members of the Consortium to Advance Effective Research Ethics Oversight who direct Human Research Protection Programs (HRPPs). Results: Fourteen of the 32 HRPP director members responded to a questionnaire about their approach to review and oversight during COVID-19. Eleven of the 14 provided summary data on COVID-19-specific protocols, and six of the 11 provided protocol-related documents for our review. All respondents added at least one COVID-19-specific step to their usual review process. The average turnaround time for convened and expedited IRB reviews was 15 calendar days. In our review of the documents from 194 COVID-19-specific protocols (n = 302 documents), we identified only a single review that raised ethical concerns unique to COVID-19. Conclusions: Our data provide a snapshot of how HRPPs approached the review of COVID-19-specific protocols at the start of the pandemic in the USA. While not generalizable to all HRPPs, these data indicate that HRPPs can adapt and respond quickly to a pandemic and likely need little novel expertise for the review and oversight of COVID-19-specific protocols.
6
The Research Centers in Minority Institutions (RCMI) Consortium: A Blueprint for Inclusive Excellence. Int J Environ Res Public Health 2021;18:6848. PMID: 34202383; PMCID: PMC8296926; DOI: 10.3390/ijerph18136848.
Abstract
The Research Centers in Minority Institutions (RCMI) Program was established by Congress to address the health research and training needs of minority populations by preparing future generations of scientists at these institutions, which have a track record of producing minority scholars in medicine, science, and technology. The RCMI Consortium consists of the RCMI Specialized Centers and a Coordinating Center (CC). The RCMI-CC leverages the scientific expertise, technologies, and innovations of RCMI Centers to accelerate the delivery of solutions to address health disparities in the communities that are most impacted. There is increasing recognition that the gap in representation of racial/ethnic groups and women is perpetuated by institutional cultures lacking inclusion and equity. The objective of this work is to provide a framework for inclusive excellence by developing a systematic evaluation process with common data elements that can track the inter-linked goals of workforce diversity and health equity. At its core, the RCMI Program embodies the trinity of diversity, equity, and inclusion. We propose a realist evaluation framework and a logic model that integrates the institutional context to develop common data metrics for inclusive excellence. The RCMI-CC will collaborate with NIH-funded institutions and research consortia to disseminate and scale this model.
7
Evaluating Research Centers in Minority Institutions: Framework, Metrics, Best Practices, and Challenges. Int J Environ Res Public Health 2020;17:8373. PMID: 33198272; PMCID: PMC7696594; DOI: 10.3390/ijerph17228373.
Abstract
The NIH-funded Research Centers in Minority Institutions (RCMI) program currently funds 18 academic institutions to strengthen their research environments and contributions to health disparities research. The purpose of this multiphase mixed-methods study was to establish a uniform evaluation framework for demonstrating the collective success of this research consortium. Methods included discussions of aims and logic models at the RCMI Evaluators' Workshop, a literature review to inform an evaluation conceptual framework, and a case study survey to obtain evaluation-related information and metrics. Ten RCMIs participated in the workshop and 14 submitted responses to the survey. The resulting RCMI Evaluation Conceptual Model presents a practical, ongoing approach to documenting RCMIs' impacts on health disparities. Survey results identified 37 common metrics under four primary categories. Evaluation challenges included limited human resources, data collection, decision-making, defining metrics, cost-sharing, and revenue generation. Further collaborative efforts across RCMI sites are needed to engage program leadership and community stakeholders in addressing the identified evaluation and measurement challenges. Program leadership should be engaged to apply the Evaluation Conceptual Framework and common metrics to allow valid inter-institutional comparisons and consortium-wide evaluations, and stakeholders could ensure that evaluation metrics are used to facilitate community impacts.
8
Daudelin DH, Peterson LE, Selker HP. Pilot test of an accrual Common Metric for the NIH Clinical and Translational Science Awards (CTSA) Consortium: Metric feasibility and data quality. J Clin Transl Sci 2020;5:e44. PMID: 33948266; PMCID: PMC8057372; DOI: 10.1017/cts.2020.537.
Abstract
Failure to accrue participants into clinical trials incurs economic costs, wastes resources, jeopardizes answering research questions meaningfully, and delays translating research discoveries into improved health. This paper reports the results of a pilot test of the Median Accrual Ratio (MAR) metric developed as a part of the Common Metrics Initiative of the NIH's National Center for Advancing Translational Sciences (NCATS) Clinical and Translational Science Award (CTSA) Consortium. Using the metric is intended to enhance the ability of the CTSA Consortium and its "hubs" to increase subject accrual into trials within expected timeframes. The pilot test was undertaken at Tufts Clinical and Translational Science Institute (CTSI) with eight CTSA Consortium hubs. We describe the pilot test methods and results regarding the feasibility of collecting metric data and the quality of the data collected. Participating hubs welcomed the opportunity to assess accrual efforts but experienced challenges in collecting accrual metric data due to insufficient infrastructure, inconsistent implementation of electronic data systems, and a lack of uniform data definitions. Also, the metric could not be constructed for all trial designs, particularly those using competitive enrollment strategies. We offer recommendations to address the identified challenges and facilitate progress toward broad accrual metric data collection and use.
Affiliation(s)
- Denise H. Daudelin
- Tufts Clinical and Translational Science Institute, Tufts University, Boston, MA, USA
- Institute for Clinical Research and Health Policy Studies, Tufts Medical Center, Boston, MA, USA
- Laura E. Peterson
- Tufts Clinical and Translational Science Institute, Tufts University, Boston, MA, USA
- Harry P. Selker
- Tufts Clinical and Translational Science Institute, Tufts University, Boston, MA, USA
- Institute for Clinical Research and Health Policy Studies, Tufts Medical Center, Boston, MA, USA
9
Kamenetzky A, Hinrichs-Krapels S. How do organisations implement research impact assessment (RIA) principles and good practice? A narrative review and exploratory study of four international research funding and administrative organisations. Health Res Policy Syst 2020;18:6. PMID: 31959198; PMCID: PMC6971910; DOI: 10.1186/s12961-019-0515-1.
Abstract
Background: Public research funding agencies and research organisations are increasingly accountable for the wider impacts of the research they support. While research impact assessment (RIA) frameworks and tools exist, little is known or shared about how these organisations implement RIA activities in practice. Methods: We conducted a review of the academic literature to search for research organisations’ published experiences of RIAs. We followed this with semi-structured interviews with a convenience sample (n = 7) of representatives of four research organisations deploying strategies to support and assess research impact. Results: We found only five studies reporting empirical evidence on how research organisations put RIA principles into practice. From our interviews, we observed a disconnect between published RIA frameworks and tools and the realities of organisational practices, which tended not to be reported. We observed varying maturity and readiness in organisations’ structural set-ups for conducting RIAs, particularly relating to leadership, evaluation skills, and automating RIA data collection. Key processes for RIA included efforts to engage researcher communities in articulating and planning for impact, using a diversity of methods, frameworks and indicators, and supporting a learning approach. We observed that RIAs supported a dialogue to orient research to impact, underpinned shared learning from analyses of research, and provided evidence of the value of research in different domains and to different audiences. Conclusions: Putting RIA principles and frameworks into practice is still at an early stage for research organisations.
We recommend that organisations (1) get set up by considering upfront the resources, time and leadership required to embed impact strategies throughout the organisation and wider research ‘ecosystem’, and develop methodical approaches to assessing impact; (2) work together by engaging researcher communities and wider stakeholders as a core part of impact pathway planning and subsequent assessment; and (3) recognise the benefits that RIA can bring about as a means to improve mutual understanding of the research process between different actors with an interest in research.
Affiliation(s)
- Adam Kamenetzky
- National Institute for Health Research Central Commissioning Facility, Twickenham, TW1 3NL, United Kingdom
- Policy Institute at King's College London, Strand Campus, London, WC2B 6LE, United Kingdom
- Saba Hinrichs-Krapels
- Policy Institute at King's College London, Strand Campus, London, WC2B 6LE, United Kingdom
- King's Global Health Institute, King's College London, Denmark Hill, London, SE5 9RJ, United Kingdom
10
Behring M, Hale K, Ozaydin B, Grizzle WE, Sodeke SO, Manne U. Inclusiveness and ethical considerations for observational, translational, and clinical cancer health disparity research. Cancer 2019;125:4452-4461. PMID: 31502259; PMCID: PMC6891126; DOI: 10.1002/cncr.32495.
Abstract
BACKGROUND: Although general trends in cancer outcomes are improving, racial/ethnic disparities in patient outcomes continue to widen, suggesting disparity-related shortcomings in cancer research designs. METHODS: Using convenience sampling, a total of 24 data sources, representing several research designs and 5 high-burden tumor types, were included for analysis. The percentages of races/ethnicities across each design/tumor type were compared with those of the 2017 US Census data. The authors used a framework based on the Belmont principles to evaluate the ethical strengths and/or weaknesses of each design. RESULTS: In all designs, white individuals were overrepresented, African American and Asian individuals were underrepresented, and Native Americans had consistently poor or no representation. In general, ethical concerns varied according to study design. Clinical trials rated high with regard to respect for persons and beneficence but low for equitable subject selection related to the inclusion of race/ethnicity. Observational study designs were more inclusive for race/ethnicity than clinical and translational studies, but their clinical usefulness was lower. CONCLUSIONS: The authors proposed that ethical concerns should vary according to study design. Because observational designs have strengths in inclusiveness for race/ethnicity, their clinical usefulness can be improved by extending the Learning Health System in hospital catchment populations, the use of health records linked to biospecimens, and minority oversampling. Likewise, minority enrollment into clinical trials can be improved through Learning Health System linking and by using National Cancer Institute-mandated Community Outreach and Engagement Cores. This will allow precision medicine for otherwise overlooked minority subgroups.
Affiliation(s)
- Michael Behring
- Department of Pathology, University of Alabama at Birmingham; 1720 Second Avenue South, Birmingham, AL 35294
- Kevin Hale
- Department of Pathology, University of Alabama at Birmingham; 1720 Second Avenue South, Birmingham, AL 35294
- Bunyamin Ozaydin
- Department of Health Services Administration, University of Alabama at Birmingham; 1720 Second Avenue South, Birmingham, AL 35294
- William E. Grizzle
- Department of Pathology, University of Alabama at Birmingham; 1720 Second Avenue South, Birmingham, AL 35294
- O’Neal Comprehensive Cancer Center, University of Alabama at Birmingham; 1720 Second Avenue South, Birmingham, AL 35294
- Stephen O. Sodeke
- Department of Modern Languages, Communications, and Philosophy, College of Arts & Science, Tuskegee University, Tuskegee, AL 36088
- Upender Manne
- Department of Pathology, University of Alabama at Birmingham; 1720 Second Avenue South, Birmingham, AL 35294
- O’Neal Comprehensive Cancer Center, University of Alabama at Birmingham; 1720 Second Avenue South, Birmingham, AL 35294
11
Beyond the common metrics: Expanding the impact of the KL2 mentored career development program using alternative impact assessment frameworks. J Clin Transl Sci 2019;3:1-4. PMID: 31404156; PMCID: PMC6676494; DOI: 10.1017/cts.2019.375.
12
Fontanesi J, Magit A, Ford JJ, Nguyen H, Firestein GS. Systems approach to assessing and improving local human research Institutional Review Board performance. J Clin Transl Sci 2018;2:103-109. PMID: 31660223; PMCID: PMC6799096; DOI: 10.1017/cts.2018.24.
Abstract
OBJECTIVE: To quantify the interdependencies within the regulatory environment governing human subject research, including Institutional Review Boards (IRBs), federally mandated Medicare coverage analysis, and contract negotiations. METHODS: Over 8000 IRB, coverage analysis, and contract applications initiated between 2013 and 2016 were analyzed using traditional and machine learning analytics as part of a quality improvement effort to shorten the time required to authorize the start of human research studies. RESULTS: Staffing ratios and study characteristics such as the number of arms, source of funding, and number and type of ancillary reviews significantly influenced the timelines. Using key variables, a predictive algorithm identified outliers for a workflow distinct from the standard process. Improved communication between regulatory units, integration of common functions, and education outreach improved the regulatory approval process. CONCLUSIONS: Understanding and improving the interdependencies between IRB, coverage analysis, and contract negotiation offices requires a systems approach and might benefit from predictive analytics.
Affiliation(s)
- John Fontanesi
- University of California at San Diego, San Diego, CA, USA
- Anthony Magit
- University of California at San Diego, San Diego, CA, USA
- Han Nguyen
- University of California at San Diego, San Diego, CA, USA
- Gary S. Firestein
- University of California at San Diego, San Diego, CA, USA
- University of California Biomedical Research Acceleration, Integration & Development (UC BRAID), San Francisco, CA, USA
13
Vitale K, Newton GL, Abraido-Lanza AF, Aguirre AN, Ahmed S, Esmond SL, Evans J, Gelmon SB, Hart C, Hendricks D, McClinton-Brown R, Young SN, Stewart MK, Tumiel-Berhalter LM. Community Engagement in Academic Health Centers: A Model for Capturing and Advancing Our Successes. J Community Engagem Scholarsh 2018;10:81-90. PMID: 30581538; PMCID: PMC6301056.
Abstract
Community engagement (CE) has come to the forefront of academic health centers' (AHCs) work because of two recent trends: the shift from a more traditional 'treatment of disease' model of health care to a population health paradigm (Gourevitch, 2014), and increased calls from funding agencies to include CE in research activities (Bartlett, Barnes, & McIver, 2014). As defined by the Centers for Disease Control and Prevention, community engagement is "the process of working collaboratively with and through groups of people affiliated by geographic proximity, special interest, or similar situations to address issues affecting the well-being of those people" (Centers for Disease Control and Prevention (CDC), 1997, p. 9). AHCs are increasingly called on to communicate details of their CE efforts to key stakeholders and to demonstrate their effectiveness. The population health paradigm values preventive care and widens the traditional purview of medicine to include social determinants of patients' health (Gourevitch, 2014). Thus, it has become increasingly important to join with communities in population health improvement efforts that address behavioral, social, and environmental determinants of health (Michener, et al., 2012; Aguilar-Gaxiola, et al., 2014; Blumenthal & Mayer, 1996). This CE can occur within multiple contexts in AHCs (Ahmed & Palermo, 2010; Kastor, 2011) including in education, clinical activities, research, health policy, and community service.
Affiliation(s)
- Karen Vitale
- University of Rochester Medical Center, Center for Community Health, 46 Prince Street, Suite 1001, Rochester, NY 14607
| | - Gail L Newton
- University of Rochester Medical Center, Center for Community Health, 46 Prince Street, Suite 1001, Rochester, NY 14607, , (585)224-3057 (telephone), (585)442-3372 (fax)
| | - Ana F Abraido-Lanza
- Columbia University, Department of Sociomedical Sciences, Mailman School of Public Health, 722 West 168 Street, 5 floor, New York, NY 10032, , (212)305-1859 (telephone), (212)305-0315 (fax)
| | - Alejandra N Aguirre
- Columbia University, Irving Institute for Clinical and Translational Research, College of Physicians and Surgeons, 390 Fort Washington Avenue, Ground Floor, New York, NY 10033, (646)697-2272 (telephone), No Fax
| | - Syed Ahmed
- Medical College of Wisconsin, 8701 Watertown Plank Road, Milwaukee, WI 53226, , (414)955-7657 (telephone), (414)955-6529 (fax
| | - Sarah L Esmond
- University of Wisconsin, School of Medicine and Public Health, Health Sciences Learning Center, 750 Highland Avenue, Rm 4241, Madison, WI 53705-2221, , (608)263-9401 (telephone), (608)262-7864 (fax)
| | - Jill Evans
- Stanford University School of Medicine, Center for Population Health Sciences, 1070 Aratradero Road, Palo Alto, CA 94304, , (650)736-8074 (telephone), No Fax
| | - Sherril B Gelmon
- Portland State University, OHSU & PSU School of Public Health, PO Box 751, Portland, OR 97207-0751, , (503)725-3044 (telephone), (503)725-8250 (fax)
| | - Camille Hart
- University of Arkansas for Medical Sciences, 4301 W. Markham Street #820, Little Rock, AR 72205, , (501)454-1467 (telephone), (501)526-6620 (fax)
| | - Deborah Hendricks
- University of Minnesota, Clinical and Translational Science Institute, 717 Delaware Street S.E., Room 216, Minneapolis, MN 55414, , (612)624-4247 (telephone), (612)625-2695 (fax)
| | - Rhonda McClinton-Brown
- Stanford University School of Medicine, Office of Community Engagement, Center for Population Health Sciences, 1070 Arastradero Road, Palo Alto, CA 94304, , No Fax
| | - Sharon Neu Young
- Medical College of Wisconsin, 8701 Watertown Plank Road, Milwaukee, WI 53226, , (414)955-4439 (telephone), (414)955-6529 (fax)
| | - M Kathryn Stewart
- University of Arkansas for Medical Sciences, 4301 W. Markham Street, #820, Little Rock, AR 72205, , (501)526-6625 (telephone), (501)526-6620 (fax)
| | - Laurene M Tumiel-Berhalter
- University at Buffalo, Department of Family Medicine, 77 Goodell Street, Suite 220, Buffalo, NY 14203, , (716)816-7278 (telephone), (716)845-6899 (fax)
| |
Collapse
|
14
|
Luke DA, Sarli CC, Suiter AM, Carothers BJ, Combs TB, Allen JL, Beers CE, Evanoff BA. The Translational Science Benefits Model: A New Framework for Assessing the Health and Societal Benefits of Clinical and Translational Sciences. Clin Transl Sci 2017; 11:77-84. [PMID: 28887873 PMCID: PMC5759746 DOI: 10.1111/cts.12495] [Citation(s) in RCA: 44] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2017] [Accepted: 07/10/2017] [Indexed: 12/02/2022] Open
Abstract
We report the development of the Translational Science Benefits Model (TSBM), a framework designed to support institutional assessment of clinical and translational research outcomes to measure clinical and community health impacts beyond bibliometric measures. The TSBM includes 30 specific and potentially measurable indicators that reflect benefits that accrue from clinical and translational science research, such as products, system characteristics, or activities. Development of the TSBM was based on literature review, a modified Delphi method, and in-house expert panel feedback. Three case studies illustrate the feasibility and face validity of the TSBM for identification of clinical and community health impacts that result from translational science activities. Future plans for the TSBM include further pilot testing and a resource library that will be freely available for evaluators, translational scientists, and academic institutions that wish to implement the TSBM framework in their own evaluation efforts.
Collapse
Affiliation(s)
- Douglas A Luke
- Center for Public Health Systems Science, George Warren Brown School of Social Work, Washington University in St. Louis, Missouri, USA
| | - Cathy C Sarli
- Becker Medical Library, Washington University in St. Louis School of Medicine, St. Louis, Missouri, USA
| | - Amy M Suiter
- Becker Medical Library, Washington University in St. Louis School of Medicine, St. Louis, Missouri, USA
| | - Bobbi J Carothers
- Center for Public Health Systems Science, George Warren Brown School of Social Work, Washington University in St. Louis, Missouri, USA
| | - Todd B Combs
- Center for Public Health Systems Science, George Warren Brown School of Social Work, Washington University in St. Louis, Missouri, USA
| | - Jae L Allen
- Institute of Clinical and Translational Sciences (ICTS), Washington University in St. Louis School of Medicine, St. Louis, Missouri, USA
| | - Courtney E Beers
- Institute of Clinical and Translational Sciences (ICTS), Washington University in St. Louis School of Medicine, St. Louis, Missouri, USA
| | - Bradley A Evanoff
- Division of General Medical Sciences, Washington University in St. Louis School of Medicine, St. Louis, Missouri, USA
| |
Collapse
|
15
|
Feasibility of common bibliometrics in evaluating translational science. J Clin Transl Sci 2017; 1:45-52. [PMID: 28480055 PMCID: PMC5408837 DOI: 10.1017/cts.2016.8] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2016] [Revised: 08/04/2016] [Accepted: 10/11/2017] [Indexed: 11/07/2022] Open
Abstract
INTRODUCTION A pilot study by 6 Clinical and Translational Science Awards (CTSAs) explored how bibliometrics can be used to assess research influence. METHODS Evaluators from 6 institutions shared data on publications (4202 total) they supported, and conducted a combined analysis with state-of-the-art tools. This paper presents selected results based on the tools from 2 widely used vendors for bibliometrics: Thomson Reuters and Elsevier. RESULTS Both vendors located a high percentage of publications within their proprietary databases (>90%) and provided similar but not equivalent bibliometrics for estimating productivity (number of publications) and influence (citation rates, percentage of papers in the top 10% of citations, observed citations relative to expected citations). A recently available bibliometric from the National Institutes of Health Office of Portfolio Analysis, examined after the initial analysis, showed tremendous potential for use in the CTSA context. CONCLUSION Despite challenges in making cross-CTSA comparisons, bibliometrics can enhance our understanding of the value of CTSA-supported clinical and translational research.
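The influence indicators named in this abstract can be illustrated with a short computation. The following is a minimal sketch, assuming invented citation counts and an invented field baseline (expected citations per paper and a top-10% citation threshold); none of the numbers come from the study:

```python
# Illustrative computation of bibliometric indicators of the kind the study
# compares: productivity, citation rate, top-10% share, and observed vs.
# expected citations. The field baseline here is hypothetical.

from statistics import mean

def bibliometric_summary(citations, expected_per_paper, top10_threshold):
    """Summarize one hub's supported publications against a field baseline.

    citations          -- citation count for each supported publication
    expected_per_paper -- hypothetical field-average citations per paper
    top10_threshold    -- hypothetical citation count marking the field's top 10%
    """
    n = len(citations)
    return {
        "productivity": n,                              # number of publications
        "mean_citations": mean(citations),              # citation rate
        "top10_share": sum(c >= top10_threshold for c in citations) / n,
        "observed_vs_expected": sum(citations) / (expected_per_paper * n),
    }

summary = bibliometric_summary(
    citations=[0, 2, 5, 8, 12, 40],   # invented counts for six publications
    expected_per_paper=6.0,
    top10_threshold=30,
)
print(summary)
```

An observed-vs-expected ratio above 1 would indicate citation performance above the (assumed) field average; cross-institution comparisons would additionally require field- and year-normalized baselines, which is part of what made the vendors' metrics "similar but not equivalent."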
Collapse
|
16
|
Soergel D, Helfer O. A Metrics Ontology. An intellectual infrastructure for defining, managing, and applying metrics. KNOWLEDGE ORGANIZATION FOR A SUSTAINABLE WORLD: CHALLENGES AND PERSPECTIVES FOR CULTURAL, SCIENTIFIC, AND TECHNOLOGICAL SHARING IN A CONNECTED SOCIETY: PROCEEDINGS OF THE FOURTEENTH INTERNATIONAL ISKO CONFERENCE 27-29 SEPTEMBER 2016 RI... 2016; 15:333-341. [PMID: 28168236 PMCID: PMC5291334] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
This paper presents the beginnings of a comprehensive ontology for organizing information about metrics and its potential application to defining and managing metrics in the CTSA (Clinical and Translational Science Award) project. The aim is to support an integrated database of all metrics used by CTSA components. The ontology is given as an entity-relationship conceptual data schema. Its completion should draw on metrics definition templates that can be found in many places.
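As a rough illustration of the paper's idea, an entity-relationship schema for metrics can be expressed in code, with metric definitions held separately from the metric instances that apply them. The entity and attribute names below (MetricDefinition, Metric, component) are hypothetical stand-ins, not taken from the ontology itself:

```python
# Sketch of an entity-relationship style schema for metrics: a reusable
# definition entity, and metric instances that apply a definition within one
# CTSA component. All names are illustrative, not from the ontology.

from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class MetricDefinition:
    """Template-style definition: what is measured, in what unit, and how."""
    name: str
    unit: str
    formula: str  # informal description of the computation

@dataclass
class Metric:
    """A defined metric as applied by one CTSA component."""
    definition: MetricDefinition
    component: str                                      # e.g. a hub or core
    values: List[float] = field(default_factory=list)   # recorded observations

    def latest(self) -> float:
        """Most recently recorded value."""
        return self.values[-1]

# One shared definition, applied by one component
pubs_per_year = MetricDefinition(
    "publications_per_year", "count/year",
    "indexed publications divided by program years",
)
m = Metric(pubs_per_year, component="Hub A", values=[12.0, 15.0])
```

Keeping definitions as a separate entity is what would let an integrated database report the same metric consistently across CTSA components, which is the integration goal the abstract describes.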
Collapse
|
17
|
Surkis A, Hogle JA, DiazGranados D, Hunt JD, Mazmanian PE, Connors E, Westaby K, Whipple EC, Adamus T, Mueller M, Aphinyanaphongs Y. Classifying publications from the clinical and translational science award program along the translational research spectrum: a machine learning approach. J Transl Med 2016; 14:235. [PMID: 27492440 PMCID: PMC4974725 DOI: 10.1186/s12967-016-0992-8] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2016] [Accepted: 07/27/2016] [Indexed: 01/05/2023] Open
Abstract
BACKGROUND Translational research is a key area of focus of the National Institutes of Health (NIH), as demonstrated by the substantial investment in the Clinical and Translational Science Award (CTSA) program. The goal of the CTSA program is to accelerate the translation of discoveries from the bench to the bedside and into communities. Different classification systems have been used to capture the spectrum of basic to clinical to population health research, with substantial differences in the number of categories and their definitions. Evaluation of the effectiveness of the CTSA program and of translational research in general is hampered by the lack of rigor in these definitions and their application. This study adds rigor to the classification process by creating a checklist to evaluate publications across the translational spectrum and operationalizes these classifications by building machine learning-based text classifiers to categorize these publications. METHODS Based on collaboratively developed definitions, we created a detailed checklist for categories along the translational spectrum from T0 to T4. We applied the checklist to CTSA-linked publications to construct a set of coded publications for use in training machine learning-based text classifiers to classify publications within these categories. The training sets combined T1/T2 and T3/T4 categories due to low frequency of these publication types compared to the frequency of T0 publications. We then compared classifier performance across different algorithms and feature sets and applied the classifiers to all publications in PubMed indexed to CTSA grants. To validate the algorithm, we manually classified the articles with the top 100 scores from each classifier. RESULTS The definitions and checklist facilitated classification and resulted in good inter-rater reliability for coding publications for the training set. Very good performance was achieved for the classifiers as represented by the area under the receiver operating characteristic curves (AUC), with an AUC of 0.94 for the T0 classifier, 0.84 for T1/T2, and 0.92 for T3/T4. CONCLUSIONS The combination of definitions agreed upon by five CTSA hubs, a checklist that facilitates more uniform definition interpretation, and algorithms that perform well in classifying publications along the translational spectrum provide a basis for establishing and applying uniform definitions of translational research categories. The classification algorithms allow publication analyses that would not be feasible with manual classification, such as assessing the distribution and trends of publications across the CTSA network and comparing the categories of publications and their citations to assess knowledge transfer across the translational research spectrum.
Collapse
Affiliation(s)
- Alisa Surkis
- Health Sciences Library, NYU School of Medicine, New York, USA
| | - Janice A. Hogle
- Institute for Clinical and Translational Research, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, USA
| | | | - Joe D. Hunt
- Indiana Clinical and Translational Sciences Institute, Indiana University School of Medicine, Indianapolis, USA
| | | | - Emily Connors
- Clinical and Translational Science Institute, Medical College of Wisconsin, Milwaukee, USA
| | - Kate Westaby
- Wisconsin Partnership Program, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, USA
| | - Elizabeth C. Whipple
- Ruth Lilly Medical Library, Indiana University School of Medicine, Indianapolis, USA
| | - Trisha Adamus
- Ebling Library for the Health Sciences, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, USA
| | - Meridith Mueller
- Population Health Sciences, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, USA
| | | |
Collapse
|
18
|
Schneider M, Guerrero L, Jones LB, Tong G, Ireland C, Dumbauld J, Rainwater J. Developing the Translational Research Workforce: A Pilot Study of Common Metrics for Evaluating the Clinical and Translational Award KL2 Program. Clin Transl Sci 2015; 8:662-7. [PMID: 26602332 DOI: 10.1111/cts.12353] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022] Open
Abstract
PURPOSE This pilot study describes the career development programs (i.e., NIH KL2 awards) across five Clinical and Translational Science Award (CTSA) institutions within the University of California (UC) system, and examines the feasibility of a set of common metrics for evaluating early outcomes. METHODS A survey of program administrators provided data related to the institutional environment within which each KL2 program was implemented. Application and progress report data yielded a combined data set that characterized KL2 awardees, their initial productivity, and early career outcomes. RESULTS The pilot project demonstrated the feasibility of aggregating common metrics data across multiple institutions. The data indicated that KL2 awardees were an accomplished set of investigators, both before and after the award period, representing a wide variety of disciplines. Awardees that had completed their trainee period overwhelmingly remained active in translational research conducted within an academic setting. Early indications also suggest high rates of success with obtaining research funding subsequent to the KL2 award. CONCLUSION This project offers a model for how to collect and analyze common metrics related to the education and training function of the CTSA Consortium. Next steps call for expanding participation to other CTSA sites outside of the University of California system.
Collapse
Affiliation(s)
- Margaret Schneider
- School of Social Ecology, and Institute for Clinical and Translational Science, University of California, Irvine, California, USA
| | - Lourdes Guerrero
- David Geffen School of Medicine at UCLA, General Internal Medicine and Health Services Research, UCLA Clinical and Translational Science Institute, Los Angeles, California, USA
| | - Lisa B Jones
- Institute for Clinical and Translational Science, University of California, Irvine, California, USA
| | - Greg Tong
- School of Medicine, University of California, San Francisco, California, USA
| | - Christine Ireland
- School of Medicine, University of California, San Francisco, California, USA
| | - Jill Dumbauld
- Clinical and Translational Research Institute, University of California, San Diego, California, USA
| | - Julie Rainwater
- Schools of Health and Clinical and Translational Science Center, University of California, Davis, California, USA
| |
Collapse
|