1
Shaw J. On the very idea of pursuitworthiness. Studies in History and Philosophy of Science 2022; 91:103-112. PMID: 34896786. DOI: 10.1016/j.shpsa.2021.11.016.
Abstract
Recent philosophical literature has turned its attention to how scientific proposals should be judged worthy of further inquiry. Previous work, along with papers in this special issue, proposes criteria for pursuitworthiness (Achinstein, 1993; Whitt, 1992; DiMarco & Khalifa, 2019; Laudan, 1977; Shan, 2020; Šešelja et al., 2012). The purpose of this paper is to assess the grounds on which pursuitworthiness demands can legitimately be made. To do this, I propose a challenge, inspired by Paul Feyerabend, to the possibility of even minimal criteria of pursuitworthiness. I go on to provide a framework for identifying the contexts in which pursuitworthiness criteria may promote the efficiency of scientific inquiry, and I then spell out some implications this framework has for values and pursuit.
Affiliation(s)
- Jamie Shaw
- Institute for History and Philosophy of Science and Technology, University of Toronto, Canada.
3
Biegelbauer P, Palfinger T, Mayer S. How to select the best: Selection procedures of innovation agencies. Research Evaluation 2020. DOI: 10.1093/reseval/rvaa011.
Abstract
Innovation agencies, that is, organizations whose primary focus is funding applied research and technological development, evaluate project proposals in order to select the most promising ones for funding. At present, little verified knowledge is available on the project evaluation and selection processes of innovation agencies. We show how projects are evaluated and selected in these organizations, and we aim to contribute to a better understanding of the variety of procedures in use by pointing out the reasoning behind some of the most important practices. This article therefore focuses on the following questions: How are projects selected in innovation agencies? What procedures and practices are employed? Are there differences in procedures and practices, and if so, what explains them? The basis for answering these questions is a study produced for the European Association of National Innovation Agencies, Taftie, in which we analysed the project selection procedures of 18 programmes run by 12 European innovation agencies. We produced an overview of the agencies' existing selection procedures and analysed and compared them along the stages of a typical selection process. The key points of interest were the role of evaluators, selection criteria, ranking procedures, and general process issues.
Affiliation(s)
- Peter Biegelbauer
- Centre for Innovation Systems and Policy, Austrian Institute of Technology, Giefinggasse 4, Vienna 1210, Austria
- Thomas Palfinger
- Austrian Centre for Digital Humanities and Cultural Heritage, Austrian Academy of Sciences, Sonnenfelsgasse 19, Vienna 1010, Austria
- Sabine Mayer
- Strategy Department, Austrian Research Promotion Agency, Sensengasse 1, Vienna 1090, Austria
5
Braithwaite J, Herkes J, Churruca K, Long JC, Pomare C, Boyling C, Bierbaum M, Clay-Williams R, Rapport F, Shih P, Hogden A, Ellis LA, Ludlow K, Austin E, Seah R, McPherson E, Hibbert PD, Westbrook J. Comprehensive Researcher Achievement Model (CRAM): a framework for measuring researcher achievement, impact and influence derived from a systematic literature review of metrics and models. BMJ Open 2019; 9:e025320. PMID: 30928941. PMCID: PMC6475357. DOI: 10.1136/bmjopen-2018-025320.
Abstract
OBJECTIVES Effective researcher assessment is key to decisions about funding allocations, promotion and tenure. We aimed to identify what is known about methods for assessing researcher achievements, leading to a new composite assessment model.
DESIGN We systematically reviewed the literature via the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols framework.
DATA SOURCES All Web of Science databases (including Core Collection, MEDLINE and BIOSIS Citation Index) to the end of 2017.
ELIGIBILITY CRITERIA (1) English language, (2) published in the last 10 years (2007-2017), (3) full text was available and (4) the article discussed an approach to the assessment of an individual researcher's achievements.
DATA EXTRACTION AND SYNTHESIS Articles were allocated among four pairs of reviewers for screening, with each pair randomly assigned 5% of their allocation to review concurrently against the inclusion criteria. Inter-rater reliability was assessed using Cohen's kappa (κ), with agreement ranging from moderate to almost perfect (0.4848-0.9039). Following screening, selected articles underwent full-text review and bias was assessed.
RESULTS Four hundred and seventy-eight articles were included in the final review. Established approaches developed prior to our inclusion period (eg, citations and outputs, h-index and journal impact factor) remained dominant in the literature and in practice. New bibliometric methods and models emerged in the last 10 years, including measures based on PageRank algorithms or 'altmetric' data, methods to apply peer judgement, and techniques to assign values to publication quantity and quality. Each assessment method tended to prioritise certain aspects of achievement over others.
CONCLUSIONS All metrics and models focus on an element or elements at the expense of others. A new composite design, the Comprehensive Researcher Achievement Model (CRAM), is presented, which supersedes past anachronistic models. The CRAM is modifiable to a range of applications.
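The screening step above reports inter-rater reliability as Cohen's kappa. As a minimal illustration, not drawn from the paper itself, the sketch below shows how κ is computed from observed agreement versus chance-expected agreement for two screeners; the cohens_kappa helper and the reviewer_1/reviewer_2 labels are hypothetical examples.

```python
# Minimal sketch: Cohen's kappa for two reviewers screening the same items.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labelling the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance-expected agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[label] / n) * (freq_b[label] / n) for label in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical include/exclude decisions for ten abstracts.
reviewer_1 = ["include", "exclude", "include", "include", "exclude",
              "exclude", "include", "exclude", "include", "exclude"]
reviewer_2 = ["include", "exclude", "include", "exclude", "exclude",
              "exclude", "include", "exclude", "include", "include"]
print(round(cohens_kappa(reviewer_1, reviewer_2), 4))  # 0.6
```

On the commonly used Landis and Koch benchmarks, 0.41-0.60 counts as moderate, 0.61-0.80 as substantial and 0.81-1.00 as almost perfect agreement, which is how the reported 0.4848-0.9039 range maps to "moderate to almost perfect".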
Affiliation(s)
- Jeffrey Braithwaite
- Australian Institute of Health Innovation, Macquarie University, North Ryde, New South Wales, Australia
- Jessica Herkes
- Australian Institute of Health Innovation, Macquarie University, North Ryde, New South Wales, Australia
- Kate Churruca
- Australian Institute of Health Innovation, Macquarie University, North Ryde, New South Wales, Australia
- Janet C Long
- Australian Institute of Health Innovation, Macquarie University, North Ryde, New South Wales, Australia
- Chiara Pomare
- Australian Institute of Health Innovation, Macquarie University, North Ryde, New South Wales, Australia
- Claire Boyling
- Australian Institute of Health Innovation, Macquarie University, North Ryde, New South Wales, Australia
- Mia Bierbaum
- Australian Institute of Health Innovation, Macquarie University, North Ryde, New South Wales, Australia
- Robyn Clay-Williams
- Australian Institute of Health Innovation, Macquarie University, North Ryde, New South Wales, Australia
- Frances Rapport
- Australian Institute of Health Innovation, Macquarie University, North Ryde, New South Wales, Australia
- Patti Shih
- Australian Institute of Health Innovation, Macquarie University, North Ryde, New South Wales, Australia
- Anne Hogden
- Australian Institute of Health Innovation, Macquarie University, North Ryde, New South Wales, Australia
- Louise A Ellis
- Australian Institute of Health Innovation, Macquarie University, North Ryde, New South Wales, Australia
- Kristiana Ludlow
- Australian Institute of Health Innovation, Macquarie University, North Ryde, New South Wales, Australia
- Elizabeth Austin
- Australian Institute of Health Innovation, Macquarie University, North Ryde, New South Wales, Australia
- Rebecca Seah
- Australian Institute of Health Innovation, Macquarie University, North Ryde, New South Wales, Australia
- Elise McPherson
- Australian Institute of Health Innovation, Macquarie University, North Ryde, New South Wales, Australia
- Peter D Hibbert
- Australian Institute of Health Innovation, Macquarie University, North Ryde, New South Wales, Australia
- Division of Health Sciences, University of South Australia, Adelaide, South Australia, Australia
- Johanna Westbrook
- Australian Institute of Health Innovation, Macquarie University, North Ryde, New South Wales, Australia
9
Abstract
To investigate the effect of competitive incentives under peer review, we designed a novel experimental setup called the Art Exhibition Game. We present experimental evidence of how competition introduces both positive and negative effects when creative artifacts are evaluated and selected by peer review. Competition proved to be a double-edged sword: it fosters innovation and product diversity, but it also leads to more unfair reviews and to lower agreement between reviewers. Moreover, an external validation of the quality of peer reviews during the laboratory experiment, based on 23,627 online evaluations on Amazon Mechanical Turk, shows that competition does not significantly increase the level of creativity. Furthermore, the higher rejection rate under competitive conditions does not improve the average quality of published contributions, because more high-quality work is also rejected. Overall, our results could explain why many ground-breaking studies in science end up in lower-tier journals. Differences and similarities between the Art Exhibition Game and scholarly peer review are discussed, and the implications for the design of new incentive systems for scientists are explained.