1
Himanen L, Conte E, Gauffriau M, Strøm T, Wolf B, Gadd E. The SCOPE framework - implementing ideals of responsible research assessment. F1000Res 2024; 12:1241. [PMID: 38813348] [PMCID: PMC11134161] [DOI: 10.12688/f1000research.140810.2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Accepted: 05/08/2024] [Indexed: 05/31/2024] Open
Abstract
Background Research and researchers are heavily evaluated, and over the past decade it has become widely acknowledged that the consequences of evaluating the research enterprise, and particularly individual researchers, are considerable. This has resulted in the publication of several guidelines and principles to support the move towards more responsible research assessment (RRA). To ensure that research evaluation is meaningful, responsible, and effective, the International Network of Research Management Societies (INORMS) Research Evaluation Group created the SCOPE framework, enabling evaluators to deliver on existing principles of RRA. SCOPE bridges the gap between principles and their implementation by providing a structured five-stage framework by which evaluations can be designed, implemented, and themselves evaluated. Methods SCOPE is a step-by-step process designed to help plan, design, and conduct research evaluations, as well as to check the effectiveness of existing evaluations. In this article, four case studies are presented to show how SCOPE has been used in practice to provide value-based research evaluation. Results This article situates SCOPE within the international work towards more meaningful and robust research evaluation practices and shows through the four case studies how it can be used by different organisations to develop evaluations at different levels of granularity and in different settings. Conclusions The article demonstrates that the SCOPE framework is rooted firmly in the existing literature. It does not simply translate existing principles of RRA into practice; it provides additional considerations not always addressed in existing RRA principles and practices, and thus plays a specific role in the delivery of RRA. Furthermore, the use cases show the value of SCOPE across a range of settings, including different institutional types, sizes, and missions.
Affiliation(s)
- Laura Himanen
  Faculty of Management and Business, Tampere University, Kanslerinrinne 1, FI-33014, Tampere, Finland
  CSC – IT Center for Science, Keilaranta 14, Espoo, 02101, Finland
- Erica Conte
  Unity Health Toronto, 30 Bond St, Toronto, Ontario, L1Z 1P3, Canada
- Marianne Gauffriau
  IT University of Copenhagen, Rued Langgaards Vej 7, Copenhagen, DK-2300, Denmark
- Tanja Strøm
  Oslo Metropolitan University (OsloMet), Pilestredet 46, Oslo, 0167, Norway
- Baron Wolf
  University of Kentucky, 311 Main Building, Lexington, Kentucky, 40502, USA
- Elizabeth Gadd
  Loughborough University, Epinal Way, Loughborough, England, LE11 3TU, UK
2
Krüger AK, Petersohn S. From Research Evaluation to Research Analytics. The digitization of academic performance measurement. Valuation Studies 2022. [DOI: 10.3384/vs.2001-5992.2022.9.1.11-46] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 01/04/2023]
Abstract
One could think that bibliometric measurement of academic performance has always been digital since the computer-assisted invention of the Science Citation Index. Yet, since the 2000s, the digitization of bibliometric infrastructure has accelerated at a rapid pace. Citation databases are indexing an increasing variety of publication types. Altmetric data aggregators are producing data on the reception of research outcomes. Machine-readable persistent identifiers are created to unambiguously identify researchers, research organizations, and research objects; and evaluative software tools and current research information systems are constantly enlarging their functionalities to make use of these data and extract meaning from them. In this article, we analyse how these developments in evaluative bibliometrics have contributed to an extension of indicator-based research evaluation towards data-driven research analytics. Drawing on empirical material from blogs and websites as well as from research and policy papers, we discuss how interoperability, scalability, and flexibility as material specificities of digital infrastructures generate new ways of data production and their assessment, which affect the possibilities of how academic performance can be understood and (e)valuated.
3
Susanin A, Boyar A, Costello K, Fraiman A, Misrok A, Sears M, Hildebrandt T. Rigor and reproducibility for data analysis and design in the study of eating disorders. Int J Eat Disord 2022; 55:1267-1278. [PMID: 35852964] [DOI: 10.1002/eat.23774] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 06/01/2022] [Revised: 06/24/2022] [Accepted: 06/26/2022] [Indexed: 11/10/2022]
Abstract
OBJECTIVE Incorporating open science practices has become a priority for submission criteria in the International Journal of Eating Disorders (IJED). In this systematic review, we used the rigor and reproducibility framework developed by Hildebrandt and Prenoveau (2020) to examine the implementation of statistically sound open science principles in IJED, determining whether the cost and effort of incorporating these practices ultimately make research more likely to be cited. METHOD Six trained coders examined 1145 articles published from January 2011 to May 2021, including the 5 years prior to the 2016 introduction of Open Science Foundation article preregistration. We coded for the presence or absence of 10 specific open science elements and calculated citation metrics for each article. RESULTS There was evidence of a significant positive relationship between time and the total rigor and reproducibility (Total RR) criteria included in IJED articles following the implementation of preregistration in 2016. For every yearly increase from 2011 to 2016, there was a .14 decrease in Total RR criteria; from 2016 to 2021, there was a .42 increase per volume. There was no statistically significant relationship between Total RR criteria and citation impact. DISCUSSION Although the findings indicate that statistical rigor and reproducibility in this field have increased, the lack of a direct relationship between open science methods and article visibility suggests that there is limited incentive for researchers to follow reporting guidelines. PUBLIC SIGNIFICANCE Statistical controversies within science threaten the rigor and reproducibility of published research. Open science practices, including the preregistration of study hypotheses, links to statistical code, and explicit data sharing, arguably generate more reliable and valid inferences. This review assesses the rigor and reproducibility of articles published in IJED between 2011 and 2021 and identifies whether open science practices have become increasingly prevalent in eating disorder research.
Affiliation(s)
- Annabel Susanin
  Eating and Weight Disorders Program, Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Allison Boyar
  Eating and Weight Disorders Program, Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Kayla Costello
  Eating and Weight Disorders Program, Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Annie Fraiman
  Department of Psychology, Hofstra University, Hempstead, New York, USA
- Arielle Misrok
  Ferkauf Graduate School of Psychology, Yeshiva University, Bronx, New York, USA
- Malka Sears
  Eating and Weight Disorders Program, Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Tom Hildebrandt
  Eating and Weight Disorders Program, Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, New York, USA
4
Ramos-Lara MDLP, Carreón-Vázquez G, Acatitla-Romero E, Mendoza-Rosas RM. Mapping Manuel Sandoval Vallarta (1899-1977) Scientific Contribution. Foundations of Science 2022:1-28. [PMID: 36187324] [PMCID: PMC9516536] [DOI: 10.1007/s10699-022-09872-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Accepted: 08/20/2022] [Indexed: 06/16/2023]
Abstract
This paper employs network theory, data mining and bibliometric analysis to map the scientific contribution of the Nobel Prize candidate Manuel Sandoval Vallarta, the first and most renowned Mexican physicist and an important figure in Latin American science. Vallarta died in 1977, and the existing literature covers his life and contributions to science, but not how those contributions remain valuable today. This paper is the first to show, with mapping tools, that his contributions are relevant to the international communities of cosmic rays (where he was a pioneer and leader), quantum mechanics and relativity. These tools delivered three findings: they identify how he built his own field of study and contributed to universal knowledge; they unveil that the backward and forward Vallarta citations follow a scale-free network distribution; and they determine social factors that benefited or hindered his scientific activities, such as World War II interrupting Vallarta's productive period at the Massachusetts Institute of Technology. Furthermore, this study confirmed the interdisciplinary nature of mapping a scientist's contributions using scientometric tools. Several interesting questions arose throughout our research, some of which were answered from the history and philosophy of science; others need to be analysed by experts in Vallarta's fields. Mapping research is an invitation to interdisciplinary dialogue between experts in different areas of study, to better understand the process of knowledge production, both individual and collective.
Affiliation(s)
- María de la Paz Ramos-Lara
  Center for Interdisciplinary Research in Sciences and Humanities, National Autonomous University of Mexico, Mexico City, Mexico
- Gustavo Carreón-Vázquez
  Institute of Economic Research, National Autonomous University of Mexico, Mexico City, Mexico
- Edgar Acatitla-Romero
  Faculty of Accounting and Administration, National Autonomous University of Mexico, Mexico City, Mexico
- Rosa María Mendoza-Rosas
  Center for Interdisciplinary Research in Sciences and Humanities, National Autonomous University of Mexico, Mexico City, Mexico
5
Epistemic community formation: a bibliometric study of recurring authors in medical journals. Scientometrics 2022. [DOI: 10.1007/s11192-022-04409-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/18/2022]
6
Bibliometrics in Press. Representations and uses of bibliometric indicators in the Italian daily newspapers. Scientometrics 2022. [DOI: 10.1007/s11192-022-04341-6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Indexed: 11/26/2022]
Abstract
Scholars in science and technology studies and bibliometricians are increasingly revealing the performative nature of bibliometric indicators. Far from being neutral technical measures, indicators such as the Impact Factor and the h-index are deeply transforming the social and epistemic structures of contemporary science. At the same time, scholars have highlighted how bibliometric indicators are endowed with social meanings that go beyond their purely technical definitions. These social representations of bibliometric indicators are constructed and negotiated between different groups of actors within several arenas. This study investigates how bibliometric indicators are used in a context that has so far not been covered by researchers: that of daily newspapers. Through a content analysis of a corpus of 583 articles that appeared in four major Italian newspapers between 1990 and 2020, we chronicle the main functions that bibliometrics and bibliometric indicators have played in the Italian press. Our material shows, among other things, that the public discourse developed in newspapers creates a favorable environment for bibliometrics-centered science policies; that bibliometric indicators contribute to the social construction of scientific facts in the press, especially in science news related to medicine; and that professional bibliometric expertise struggles to be represented in newspapers and hence to reach the general public.
7
What Is Quality in Research? Building a Framework of Design, Process and Impact Attributes and Evaluation Perspectives. Sustainability 2022. [DOI: 10.3390/su14053034] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 02/05/2023]
Abstract
The strategic relevance of innovation and scientific research has amplified attention towards the definition of quality in research practice. However, despite the proliferation of evaluation metrics and procedures, there is a need to go beyond bibliometric approaches and to identify more explicitly what constitutes good research and what its driving factors or determinants are. This article reviews specialized research policy, science policy and scientometrics literature to extract critical dimensions associated with research quality, as presented in a vast though fragmented body of theory. A literature-derived framework of research quality attributes is thus obtained, which is subjected to an expert feedback process involving scholars and practitioners in the fields of research policy and evaluation. The result is a structured taxonomy of 66 quality attributes providing a systemic definition of research quality. The attributes are aggregated into a three-dimensional framework encompassing research design (ex ante), research process (in-process) and research impact (ex post) perspectives. The main value of the study is a literature-derived and comprehensive inventory of quality attributes and perspectives of evaluation. The findings can support further theoretical developments and research policy discussions on the ultimate drivers of quality and impact of scientific research. The framework can also be useful for designing new exercises or procedures of research evaluation based on a multidimensional view of quality.
8
Impact and visibility of Norwegian, Finnish and Spanish journals in the fields of humanities. Scientometrics 2021. [DOI: 10.1007/s11192-021-04169-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 10/20/2022]
Abstract
This article analyses the impact and visibility of scholarly journals in the humanities published in the national languages of Finland, Norway and Spain. Three types of publishers are considered: commercial publishers, scholarly societies, and research organizations. Indicators of visibility and impact were obtained from Web of Science, Scopus, Google Metrics, Scimago Journal Rank and Journal Citation Reports. The findings show that in Spain the categories "History and Archaeology" and "Language and Literature" account for almost 70% of the journals analysed, while the other countries offer a more homogeneous distribution. In Finland, scholarly society publishers predominate; in Spain, research organizations as publishers, mostly universities, have a greater weighting; while in Norway, commercial publishers take centre stage. The results show that journals from Finland and Norway have reduced possibilities in terms of impact and visibility, since the vernacular language appeals to a smaller readership. Conversely, the Spanish journals are more attractive for indexing in commercial databases. Open access distribution ranges from 64-70% in the Norwegian and Finnish journals to 91% in the Spanish journals. The presence of DOIs ranges from 31-41% in the Nordic journals to 60% in the Spanish journals and has a widespread bearing on the citations received in all three countries (journals with DOIs and open access are cited more frequently).
9
Romanelli JP, Gonçalves MCP, de Abreu Pestana LF, Soares JAH, Boschi RS, Andrade DF. Four challenges when conducting bibliometric reviews and how to deal with them. Environmental Science and Pollution Research International 2021; 28:60448-60458. [PMID: 34545520] [DOI: 10.1007/s11356-021-16420-x] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Received: 01/23/2021] [Accepted: 09/05/2021] [Indexed: 06/13/2023]
Abstract
The evidence base in environmental sciences is increasing steadily. Environmental researchers have been challenged to handle massive volumes of data to support more comprehensive studies, assess the current status of science, and move research towards future progress. Bibliometrics can provide important insights into the research directions by providing summarized information for several end users. Here, we present an in-depth discussion on the use of bibliometric indicators to evaluate research outputs through four case studies comprising disciplines in environmental sciences. We discuss four big challenges researchers may face when conducting bibliometric reviews and how to deal with them. We also address some primary questions researchers may answer with bibliometric mapping, drawing lessons from the case studies. Lastly, we clarify some misuses of review concepts and suggest methodological principles of systematic reviews and maps to improve the overall quality of bibliometric studies.
Affiliation(s)
- João Paulo Romanelli
  Laboratory of Ecology and Forest Restoration (LERF), "Luiz de Queiroz" College of Agriculture, University of São Paulo, Av. Pádua Dias, 11, Piracicaba, SP, 13418-900, Brazil
- Maria Carolina Pereira Gonçalves
  Laboratory of Enzymatic Technology (LabEnz), Department of Chemical Engineering, Federal University of São Carlos, Rod. Washington Luís, km 235, São Carlos, SP, 13565-905, Brazil
- Luís Fernando de Abreu Pestana
  Agronomic Sciences College (FCA), Forest Science Department, São Paulo State University, Av. Universitária, 3780, Botucatu, SP, 18610-034, Brazil
- Jéssica Akemi Hitaka Soares
  Agronomic Sciences College (FCA), Forest Science Department, São Paulo State University, Av. Universitária, 3780, Botucatu, SP, 18610-034, Brazil
- Raquel Stucchi Boschi
  Secretariat for Environmental Management and Sustainability (SGAS), Federal University of São Carlos, Rod. Washington Luís, km 235, São Carlos, SP, 13565-905, Brazil
- Daniel Fernandes Andrade
  Group of Applied Instrumental Analysis, Department of Chemistry, Federal University of São Carlos, Rod. Washington Luís, km 235, São Carlos, SP, 13565-905, Brazil
10
Bornmann L, Tekles A. Convergent validity of several indicators measuring disruptiveness with milestone assignments to physics papers by experts. J Informetr 2021. [DOI: 10.1016/j.joi.2021.101159] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Indexed: 11/28/2022]
11
González-Alcaide G. Bibliometric studies outside the information science and library science field: uncontainable or uncontrollable? Scientometrics 2021. [DOI: 10.1007/s11192-021-04061-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Indexed: 10/21/2022]
12
The citation impact of articles from which authors gained monetary rewards based on journal metrics. Scientometrics 2021. [DOI: 10.1007/s11192-021-03944-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/25/2022]
13
Assessing the publication output on country level in the research field communication using Garfield's Impact Factor. Scientometrics 2021. [DOI: 10.1007/s11192-021-04006-w] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Indexed: 11/27/2022]
14
Thelwall M, Kousha K. Researchers' attitudes towards the h-index on Twitter 2007-2020: criticism and acceptance. Scientometrics 2021; 126:5361-5368. [PMID: 33935333] [PMCID: PMC8072298] [DOI: 10.1007/s11192-021-03961-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 02/28/2021] [Accepted: 03/17/2021] [Indexed: 11/23/2022]
Abstract
The h-index is an indicator of the scientific impact of an academic publishing career. Its hybrid publishing/citation nature and inherent bias against younger researchers, women, people in low-resourced countries, and those not prioritizing publishing arguably give it little value for most formal and informal research evaluations. Nevertheless, it is well known by academics, used in some promotion decisions, and prominent in bibliometric databases such as Google Scholar. In the context of this apparent conflict, it is important to understand researchers' attitudes towards the h-index. This article used public tweets in English to analyse how scholars discuss the h-index in public: is it mentioned, are tweets about it positive or negative, and has interest decreased since its shortcomings were exposed? The January 2021 Twitter Academic Research initiative was harnessed to download all English tweets mentioning the h-index from the 2006 start of Twitter until the end of 2020. The results showed a constantly increasing number of tweets. Whilst the most popular tweets unapologetically used the h-index as an indicator of research performance, 28.5% of tweets were critical of its simplistic nature and others joked about it (8%). The results suggest that interest in the h-index is still increasing online, even though the scientists willing to evaluate it in public tend to be critical. Nevertheless, in limited situations it may be effective at succinctly conveying the message that a researcher has had a successful publishing career.
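For readers unfamiliar with the indicator under discussion: the h-index is the largest h such that h of an author's papers each have at least h citations. A minimal sketch of the computation (the citation counts below are invented for illustration, not taken from the study):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still supports a larger h
        else:
            break
    return h

# Hypothetical career: five papers with these citation counts
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Because h can never exceed the number of papers published, the indicator is capped by career length, which is one source of the bias against younger researchers noted in the abstract.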
Affiliation(s)
- Mike Thelwall
  Statistical Cybermetrics Research Group, University of Wolverhampton, Wolverhampton, UK
- Kayvan Kousha
  Statistical Cybermetrics Research Group, University of Wolverhampton, Wolverhampton, UK
15
Frandsen TF, Eriksen MB, Hammer DMG. Obsolescence of the literature: A study of included studies in Cochrane reviews. J Inf Sci 2021. [DOI: 10.1177/01655515211006588] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/16/2022]
Abstract
Ageing or obsolescence describes the declining use of a particular publication over time and can affect the results of a citation analysis, as the length of the citation window can change rankings. Obsolescence may vary not only across fields but also across subfields or sub-disciplines. The aim of this study is to determine the sub-disciplinary differences in obsolescence on a larger scale, allowing for differences over time as well. The study presents the results of an analysis of 82,759 references across 53 healthcare and health policy topics. The references were extracted from systematic reviews published from 2012 to 2016. The analyses of obsolescence include median citation age and mean citation age. This study finds that both measures differ considerably across groups; for the mean citation age, an analysis of the confidence intervals confirms these differences. Using the subfield categorisation from Cochrane review groups, we found larger differences across subfields than in the citing half-lives published by Journal Citation Reports. Obsolescence is important to consider when setting the length of the citation window. This study emphasises the vast differences across health sciences subfields: the length of the citation period is highly important for the results of a bibliometric evaluation or study covering fields with widely varying obsolescence rates.
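The two obsolescence indicators the study relies on are straightforward: a reference's citation age is the citing review's publication year minus the cited study's publication year, summarised per topic by its median and mean. A minimal sketch with invented years (not data from the study):

```python
import statistics

def citation_ages(citing_year, cited_years):
    """Citation age of each reference: citing year minus cited year."""
    return [citing_year - year for year in cited_years]

# Hypothetical 2016 review citing five earlier studies
ages = citation_ages(2016, [2014, 2012, 2010, 2005, 1998])
print(statistics.median(ages))  # → 6
print(statistics.mean(ages))    # → 8.2
```

The gap between median and mean in skewed distributions like this one is why the study reports both indicators rather than either alone.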
Affiliation(s)
- Tove Faber Frandsen
  Department of Design and Communication, University of Southern Denmark, Denmark
- Mette Brandt Eriksen
  The University Library of Southern Denmark, University of Southern Denmark, Denmark
16
Triggle CR, MacDonald R, Triggle DJ, Grierson D. Requiem for impact factors and high publication charges. Account Res 2021; 29:133-164. [PMID: 33787413] [DOI: 10.1080/08989621.2021.1909481] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Indexed: 01/08/2023]
Abstract
Journal impact factors, publication charges and assessment of quality and accuracy of scientific research are critical for researchers, managers, funders, policy makers, and society. Editors and publishers compete for impact factor rankings, to demonstrate how important their journals are, and researchers strive to publish in perceived top journals, despite high publication and access charges. This raises questions of how top journals are identified, whether assessments of impacts are accurate and whether high publication charges borne by the research community are justified, bearing in mind that they also collectively provide free peer-review to the publishers. Although traditional journals accelerated peer review and publication during the COVID-19 pandemic, preprint servers made a greater impact with over 30,000 open access articles becoming available and accelerating a trend already seen in other fields of research. We review and comment on the advantages and disadvantages of a range of assessment methods and the way in which they are used by researchers, managers, employers and publishers. We argue that new approaches to assessment are required to provide a realistic and comprehensive measure of the value of research and journals and we support open access publishing at a modest, affordable price to benefit research producers and consumers.
Affiliation(s)
- Chris R Triggle
  Departments of Medical Education & Pharmacology, Weill Cornell Medicine-Qatar, Doha, Qatar
- Ross MacDonald
  Distributed eLibrary, Weill Cornell Medicine-Qatar, Doha, Qatar
- David J Triggle
  School of Pharmacy and Pharmaceutical Sciences, State University of New York, Buffalo, New York, USA
- Donald Grierson
  School of Biosciences, University of Nottingham, Loughborough, UK
18
Web of Science (WoS) and Scopus: The Titans of Bibliographic Information in Today's Academic World. Publications 2021. [DOI: 10.3390/publications9010012] [Citation(s) in RCA: 133] [Impact Index Per Article: 44.3] [Indexed: 12/22/2022] Open
Abstract
Nowadays, the importance of bibliographic databases (DBs) has increased enormously, as they are the main providers of publication metadata and bibliometric indicators universally used both for research assessment practices and for performing daily tasks. Because the reliability of these tasks depends first of all on the data source, all users of the DBs should be able to choose the most suitable one. Web of Science (WoS) and Scopus are the two main bibliographic DBs. A comprehensive evaluation of the DBs' coverage is practically impossible without extensive bibliometric analyses or literature reviews, but most DB users do not have bibliometric competence and/or are not willing to invest additional time in such evaluations. Apart from that, the convenience of a DB's interface, its performance, the impact indicators it provides and its additional tools may also influence the users' choice. The main goal of this work is to provide all potential users with an all-inclusive description of the two main bibliographic DBs by gathering, in one place, the findings presented in the most recent literature and the information provided by the owners of the DBs. This overview should aid all stakeholders employing publication and citation data in selecting the most suitable DB.
19
Are University Rankings Statistically Significant? A Comparison among Chinese Universities and with the USA. Journal of Data and Information Science 2021. [DOI: 10.2478/jdis-2021-0014] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Indexed: 11/20/2022] Open
Abstract
Purpose
Building on Leydesdorff, Bornmann, and Mingers (2019), we elaborate the differences between Tsinghua and Zhejiang University as an empirical example. We address the question of whether differences are statistically significant in the rankings of Chinese universities. We propose methods for measuring statistical significance among different universities within or among countries.
Design/methodology/approach
Based on z-testing and overlapping confidence intervals, and using data about 205 Chinese universities included in the Leiden Rankings 2020, we argue that three main groups of Chinese research universities can be distinguished (low, middle, and high).
Findings
When the sample of 205 Chinese universities is merged with the 197 US universities included in Leiden Rankings 2020, the results similarly indicate three main groups: low, middle, and high. Using this data (Leiden Rankings and Web of Science), the z-scores of the Chinese universities are significantly below those of the US universities albeit with some overlap.
Research limitations
We show empirically that differences in ranking may be due to changes in the data, the models, or the modeling effects on the data. The scientometric groupings are not always stable when we use different methods.
Practical implications
Differences among universities can be tested for their statistical significance. The statistics relativize the values of decimals in the rankings. One can operate with a scheme of low/middle/high in policy debates and leave the more fine-grained rankings of individual universities to operational management and local settings.
Originality/value
In the discussion about the rankings of universities, the question of whether differences are statistically significant has, in our opinion, been insufficiently addressed in research evaluations.
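The z-testing mentioned above can be sketched as a two-proportion z-test on, for example, each university's share of top-10% most-cited papers. This is a generic illustration of the statistic, not the authors' exact procedure (they also work with overlapping confidence intervals), and the counts are invented:

```python
from math import sqrt

def z_two_proportions(k1, n1, k2, n2):
    """z statistic for the difference between two proportions,
    e.g. shares of top-10% cited papers at two universities."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)  # proportion under the null hypothesis
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: top-10% papers out of total publications
z = z_two_proportions(900, 10000, 700, 10000)
print(abs(z) > 1.96)  # True: significant at the 5% level
```

When |z| falls below 1.96, the two universities cannot be distinguished at the 5% level, which is the basis for collapsing fine-grained ranks into broad low/middle/high groups.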
20
Budimir G, Rahimeh S, Tamimi S, Južnič P. Comparison of self-citation patterns in WoS and Scopus databases based on national scientific production in Slovenia (1996–2020). Scientometrics 2021. [DOI: 10.1007/s11192-021-03862-w] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Indexed: 12/20/2022]
21
The lack of meaningful boundary differences between journal impact factor quartiles undermines their independent use in research evaluation. Scientometrics 2021. [DOI: 10.1007/s11192-020-03801-1]
22
The HF-rating as a universal complement to the h-index. Scientometrics 2020. [DOI: 10.1007/s11192-020-03611-5]
23
Bornmann L, Haunschild R, Mutz R. Should citations be field-normalized in evaluative bibliometrics? An empirical analysis based on propensity score matching. J Informetr 2020. [DOI: 10.1016/j.joi.2020.101098]
24
Abstract
We discuss the trend towards using quantitative metrics for evaluating research. We claim that, rather than promoting meaningful research, purely metric-based research evaluation schemes potentially lead to a dystopian academic reality, leaving no space for creativity and intellectual initiative. After sketching what the future could look like if quantitative metrics are allowed to proliferate, we provide a more detailed discussion of why research is so difficult to evaluate and outline approaches for avoiding such a situation. In particular, we characterize meaningful research as an essentially contested concept and argue that quantitative metrics should always be accompanied by operationalized instructions for their proper use and continuously evaluated via feedback loops. Additionally, we analyze a dataset containing information about computer science publications and their citation history and indicate how quantitative metrics could potentially be calibrated via alternative evaluation methods such as test-of-time awards. Finally, we argue that, instead of over-relying on indicators, research environments should primarily be based on trust and personal responsibility.
25
Niebla-Zatarain JC, Pinedo-de-Anda FJ, Leyva-Duarte E. Entrepreneurship on family business: Bibliometric overview (2005–2018). Journal of Intelligent & Fuzzy Systems 2020. [DOI: 10.3233/jifs-179649]
Affiliation(s)
- Juan C. Niebla-Zatarain
- Doctoral Program in Management Sciences, Universidad Autónoma de Occidente, Culiacán, Sinaloa, México
- Efren Leyva-Duarte
- Doctoral Program in Management Sciences, Universidad Autónoma de Occidente, Culiacán, Sinaloa, México
26
Abstract
Most scientometricians reject the use of the journal impact factor for assessing individual articles and their authors. The well-known San Francisco Declaration on Research Assessment also strongly objects to this way of using the impact factor. Arguments against the use of the impact factor at the level of individual articles are often based on statistical considerations. The skewness of journal citation distributions typically plays a central role in these arguments. We present a theoretical analysis of statistical arguments against the use of the impact factor at the level of individual articles. Our analysis shows that these arguments do not support the conclusion that the impact factor should not be used for assessing individual articles. Using computer simulations, we demonstrate that under certain conditions the number of citations an article has received is a more accurate indicator of the value of the article than the impact factor. However, under other conditions, the impact factor is a more accurate indicator. It is important to critically discuss the dominant role of the impact factor in research evaluations, but the discussion should not be based on misplaced statistical arguments. Instead, the primary focus should be on the socio-technical implications of the use of the impact factor.
Affiliation(s)
- Ludo Waltman
- Centre for Science and Technology Studies, Leiden University, Leiden, The Netherlands
- Vincent A. Traag
- Centre for Science and Technology Studies, Leiden University, Leiden, The Netherlands
27
Waltman L, Traag VA. Use of the journal impact factor for assessing individual articles need not be statistically wrong. F1000Res 2020; 9:366. [PMID: 33796272 PMCID: PMC7974631 DOI: 10.12688/f1000research.23418.1]
Abstract
Most scientometricians reject the use of the journal impact factor for assessing individual articles and their authors. The well-known San Francisco Declaration on Research Assessment also strongly objects to this way of using the impact factor. Arguments against the use of the impact factor at the level of individual articles are often based on statistical considerations. The skewness of journal citation distributions typically plays a central role in these arguments. We present a theoretical analysis of statistical arguments against the use of the impact factor at the level of individual articles. Our analysis shows that these arguments do not support the conclusion that the impact factor should not be used for assessing individual articles. In fact, our computer simulations demonstrate the possibility that the impact factor is a more accurate indicator of the value of an article than the number of citations the article has received. It is important to critically discuss the dominant role of the impact factor in research evaluations, but the discussion should not be based on misplaced statistical arguments. Instead, the primary focus should be on the socio-technical implications of the use of the impact factor.
Affiliation(s)
- Ludo Waltman
- Centre for Science and Technology Studies, Leiden University, Leiden, The Netherlands
- Vincent A. Traag
- Centre for Science and Technology Studies, Leiden University, Leiden, The Netherlands
28
Bornmann L. Bibliometrics-based decision trees (BBDTs) based on bibliometrics-based heuristics (BBHs): Visualized guidelines for the use of bibliometrics in research evaluation. Quantitative Science Studies 2020. [DOI: 10.1162/qss_a_00012]
Abstract
Fast-and-frugal heuristics are simple strategies that base decisions on only a few predictor variables. In so doing, heuristics may not only reduce complexity but also boost the accuracy of decisions, their speed, and transparency. In this paper, bibliometrics-based decision trees (BBDTs) are introduced for research evaluation purposes. BBDTs visualize bibliometrics-based heuristics (BBHs), which are judgment strategies using only publication and citation data. The BBDT exemplar presented in this paper can be used as guidance on the situations in which simple indicators such as mean citation rates are reasonable and those in which more elaborate indicators (i.e., [sub-]field-normalized indicators) should be applied.
Affiliation(s)
- Lutz Bornmann
- Administrative Headquarters of the Max Planck Society, Division for Science and Innovation Studies, Hofgartenstraße 8, 80539 Munich, Germany
29
Katchanov YL, Markova YV, Shmatko NA. The distinction machine: Physics journals from the perspective of the Kolmogorov–Smirnov statistic. J Informetr 2019. [DOI: 10.1016/j.joi.2019.100982]
30
Lepori B, Geuna A, Mira A. Scientific output scales with resources. A comparison of US and European universities. PLoS One 2019; 14:e0223415. [PMID: 31613903 PMCID: PMC6793846 DOI: 10.1371/journal.pone.0223415]
Abstract
By using a comprehensive dataset of US and European universities, we demonstrate super-linear scaling between university revenues and their volume of publications and (field-normalized) citations. We show that this relationship holds both in the US and in Europe. In terms of resources, our data show that three characteristics differentiate the US system: (1) a significantly higher level of resources for the entire system, (2) a clearer distinction between education-oriented institutions and doctoral universities and (3) a higher concentration of resources among doctoral universities. Accordingly, a group of US universities receive a much larger amount of resources and have a far higher number of publications and citations when compared to their European counterparts. These results demonstrate empirically that international rankings are by and large richness measures and, therefore, can be interpreted only by introducing a measure of resources. Implications for public policies and institutional evaluation are finally discussed.
Affiliation(s)
- Benedetto Lepori
- Faculty of Communication Sciences, Università della Svizzera Italiana, Lugano, Switzerland
- Aldo Geuna
- Department of Economics and Statistics Cognetti De Martiis, University of Turin, Turin, Italy
- BRICK, Collegio Carlo Alberto, Turin, Italy
- Antonietta Mira
- Institute of Computational Sciences, Faculty of Economics, Università della Svizzera Italiana, Lugano, Switzerland
31
Katchanov YL, Markova YV, Shmatko NA. Comparing the topological rank of journals in Web of Science and Mendeley. Heliyon 2019; 5:e02089. [PMID: 31388571 PMCID: PMC6667838 DOI: 10.1016/j.heliyon.2019.e02089]
Abstract
Recently, there has been a surge of interest in new data that have emerged from the rapid development of information technologies in scholarly communication. Since the 2010s, altmetrics has become a common trend in scientometric research. However, researchers have not treated in much detail the question of the probability distributions underlying these new data. The principal objective of this study was to investigate one of the classic problems of scientometrics: citation and readership distributions. The study is based on data obtained from two information systems, Web of Science and Mendeley. We draw on the concept of the empirical cumulative distribution function to explore the differences and similarities between the citation and readership counts of biological journals indexed in Web of Science and Mendeley. The basic idea is to determine, for any journal, a "size" (the topological rank) of its citation and readership empirical cumulative distributions, and then to compare the distributions of topological ranks in Web of Science and Mendeley. To verify our model, we apply it to bibliometric and altmetric data on 305 biological journals indexed in Journal Citation Reports 2015. The findings show that both distributions of the topological rank of biological journals are statistically close to the Wakeby distribution. The findings presented in this study add to our understanding of the information processes of scholarly communication in the new digital environment.
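The comparison of empirical cumulative distribution functions underlying this analysis can be illustrated with a two-sample Kolmogorov–Smirnov distance. The counts below are invented, and this sketch computes only the KS statistic, not the topological rank or the Wakeby fit used in the paper.

```python
import bisect

def ks_distance(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest vertical gap
    between the two empirical cumulative distribution functions."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(sorted_sample, x):
        # Proportion of observations less than or equal to x.
        return bisect.bisect_right(sorted_sample, x) / len(sorted_sample)

    grid = sorted(set(a) | set(b))
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in grid)

# Invented citation and Mendeley readership counts for one journal:
citations = [0, 1, 1, 2, 3, 5, 8, 13, 21, 40]
readership = [2, 3, 5, 5, 8, 9, 12, 15, 30, 55]
d = ks_distance(citations, readership)   # 0 <= d <= 1; larger = more different
```

A small distance means the two count distributions are statistically close, which is the sense in which the paper compares Web of Science and Mendeley.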
Affiliation(s)
- Yurij L. Katchanov
- Institute for Statistical Studies and Economics of Knowledge, National Research University Higher School of Economics, 20 Myasnitskaya Ulitsa, Moscow 101000, Russian Federation
- Yulia V. Markova
- American Association for the Advancement of Science, 1200 New York Ave NW, 20005, Washington, DC, USA
- Natalia A. Shmatko
- Institute for Statistical Studies and Economics of Knowledge, National Research University Higher School of Economics, 20 Myasnitskaya Ulitsa, Moscow 101000, Russian Federation
32
Bornmann L, Marewski JN. Heuristics as conceptual lens for understanding and studying the usage of bibliometrics in research evaluation. Scientometrics 2019. [DOI: 10.1007/s11192-019-03018-x]
33
Robinson-Garcia N, Torres-Salinas D, Herrera-Viedma E, Docampo D. Mining university rankings: Publication output and citation impact as their basis. Research Evaluation 2019. [DOI: 10.1093/reseval/rvz014]
Abstract
World university rankings have become well-established tools that students, university managers, and policy makers read and use. Each ranking claims to have a unique methodology capable of measuring the ‘quality’ of universities. The purpose of this article is to analyze to what extent these different rankings measure the same phenomenon and what it is that they are measuring. For this, we selected a total of seven world university rankings and performed a principal component analysis. After finding that, despite their methodological differences, they all converge on a single component, we hypothesized that bibliometric indicators could explain what is being measured. Our analyses show that ranking scores from whichever of the seven league tables under study can be explained by the number of publications and citations received by the institution. We conclude by discussing policy implications and opportunities for how a nuanced and responsible use of rankings can help decision-making at the institutional level.
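The principal component analysis this abstract describes can be sketched as follows. The seven "ranking" columns are synthetic stand-ins for the real league tables, built (by assumption) from one shared latent signal plus noise, so that a single dominant component emerges.

```python
import numpy as np

rng = np.random.default_rng(0)
quality = rng.normal(size=100)                      # latent "quality" signal
# Seven hypothetical ranking scores, each a noisy view of the same signal:
rankings = np.stack(
    [quality + 0.2 * rng.normal(size=100) for _ in range(7)], axis=1
)

# PCA via the eigendecomposition of the correlation matrix: if all seven
# rankings measure one phenomenon, the first eigenvalue dominates.
corr = np.corrcoef(rankings, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]            # sorted, largest first
explained = eigvals[0] / eigvals.sum()              # variance share of PC1
```

A first component explaining most of the variance is exactly the "they all converge on a single component" finding; the paper then regresses that component on publication and citation counts.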
Affiliation(s)
- Nicolas Robinson-Garcia
- INGENIO (CSIC-UPV), Universitat Politècnica de València, Camí de Vera s/n, Valencia, Spain
- School of Public Policy, Georgia Institute of Technology, 685 Cherry Street, Atlanta, GA, USA
- Enrique Herrera-Viedma
- Department of Computer Science and Artificial Intelligence, University of Granada, Gran Vía 48, Granada, Spain
- Domingo Docampo
- atlanTTic Research Center for Communications, University of Vigo, E Vigo, Spain
34
The integrated impact indicator revisited (I3*): a non-parametric alternative to the journal impact factor. Scientometrics 2019. [DOI: 10.1007/s11192-019-03099-8]
35
Probability and expected frequency of breakthroughs: basis and use of a robust method of research assessment. Scientometrics 2019. [DOI: 10.1007/s11192-019-03022-1]
36
Evaluating research and researchers by the journal impact factor: Is it better than coin flipping? J Informetr 2019. [DOI: 10.1016/j.joi.2019.01.009]
37
Söderlind J, Geschwind L. Making sense of academic work: the influence of performance measurement in Swedish universities. 2019. [DOI: 10.1080/23322969.2018.1564354]
Affiliation(s)
- Johan Söderlind
- Department of Learning, KTH Royal Institute of Technology, Stockholm, Sweden
- Lars Geschwind
- Department of Learning, KTH Royal Institute of Technology, Stockholm, Sweden
38
Leydesdorff L, Bornmann L, Mingers J. Statistical significance and effect sizes of differences among research universities at the level of nations and worldwide based on the Leiden Rankings. J Assoc Inf Sci Technol 2019. [DOI: 10.1002/asi.24130]
Affiliation(s)
- Loet Leydesdorff
- Amsterdam School of Communication Research (ASCoR), University of Amsterdam, Amsterdam 1001 NG, The Netherlands
- Lutz Bornmann
- Division for Science and Innovation Studies, Administrative Headquarters of the Max Planck Society, Munich 80539, Germany
- John Mingers
- Kent Business School, University of Kent, Canterbury CT7 2PE, United Kingdom
39
Franchignoni F, Özçakar L, Negrini S. Basic bibliometrics for dummies and others: an overview of some journal-level indicators in physical and rehabilitation medicine. Eur J Phys Rehabil Med 2018; 54:792-796. [PMID: 30160439 DOI: 10.23736/s1973-9087.18.05462-x]
Abstract
This report aims to complement and update a series of papers published in the last decade on bibliometrics regarding journals related to physical and rehabilitation medicine (PRM). It targets clinicians and researchers (academic and non-academic) in our discipline who would like to use bibliometric indicators as a complementary "tool" to integrate into their expert practice of journal evaluation. Different journal-based metrics are analysed in order to provide a wide (albeit general) view of the performance of top PRM journals. First, we provide some brief preliminary remarks useful for an informed understanding of our results: 1) an update on bibliometric indicators and multidisciplinary databases of peer-reviewed literature; 2) the meaning of some bibliometric indicators; 3) the practical message related to this report: "keep it simple." Then, we profile the performance of 22 PRM core journals, according to six widely used bibliometric indicators. Indicators are grouped into three categories defined by their quartile classification (Three Star: top quartile; Two Star: second upper quartile; One Star: under the median). In conclusion, bibliometrics is just one of the key methods used for measuring the (supposed) 'impact' of scholarly publications and it represents only a raw proxy for the real impact or value of the research. This report wishes to add a small contribution for a simplified understanding of journal-level indicators in PRM, to support informed decisions on which high-level journals merit special attention by clinicians and researchers working in our discipline.
Affiliation(s)
- Levent Özçakar
- Department of Physical and Rehabilitation Medicine, Hacettepe University Medical School, Sıhhiye, Ankara, Turkey
- Stefano Negrini
- Physical and Rehabilitation Medicine, Department of Clinical and Experimental Sciences, University of Brescia, Brescia, Italy
- IRCCS Fondazione Don Gnocchi, Milan, Italy
40
Hook DW, Porter SJ, Herzog C. Dimensions: Building Context for Search and Evaluation. Front Res Metr Anal 2018. [DOI: 10.3389/frma.2018.00023]
41
Critical rationalism and the search for standard (field-normalized) indicators in bibliometrics. J Informetr 2018. [DOI: 10.1016/j.joi.2018.05.002]
42
Hot and cold spots in the US research: A spatial analysis of bibliometric data on the institutional level. J Inf Sci 2018. [DOI: 10.1177/0165551518782829]
Abstract
Spatial bibliometrics addresses the spatial aspects of scientific research activities. In this case study, we use the Getis–Ord Gi*(d) statistic on bibliometric data for US institutions to identify hot spots: regions whose institutions publish many high-impact papers. The study is based on a dataset with performance data (proportion and number of papers belonging to the 10% most frequently cited papers) and geo-coordinates for all institutions in the United States from the SCImago group (and Scopus). The Getis–Ord Gi* statistic returns a z score for each institution on the map. Higher z scores point to intense clustering of institutions that have published a large proportion or number of highly cited papers (hot spots). The US maps we generate as examples in this study point to four regions that can be labelled hot spots: around San Francisco, Los Angeles, Boston and Washington, DC. The empirical focus on institutional hot spots in a country using bibliometric data is of specific importance for science policy, because geospatial proximity has been shown to be an important factor in innovation processes.
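The Getis–Ord Gi* z-score this abstract applies can be sketched for a single location as follows. The nine "institutions" and binary neighbourhood weights are hypothetical, standing in for the geo-coded US data the study actually uses.

```python
import math

def getis_ord_gi_star(values, w_row):
    """Gi* z-score for one location; w_row[j] is the spatial weight to
    location j, with the location itself included (the * in Gi*)."""
    n = len(values)
    xbar = sum(values) / n
    s = math.sqrt(sum(v * v for v in values) / n - xbar ** 2)
    sw = sum(w_row)
    swx = sum(wj * xj for wj, xj in zip(w_row, values))
    sw2 = sum(wj * wj for wj in w_row)
    denom = s * math.sqrt((n * sw2 - sw ** 2) / (n - 1))
    return (swx - xbar * sw) / denom

# Nine hypothetical institutions in a row; the first three publish many
# highly cited papers. Binary weights select a point and its neighbours.
papers = [10, 10, 10, 1, 1, 1, 1, 1, 1]
w_hot = [1, 1, 1, 0, 0, 0, 0, 0, 0]    # neighbourhood of institution 0
w_cold = [0, 0, 0, 0, 1, 1, 1, 0, 0]   # neighbourhood of institution 5
g_hot = getis_ord_gi_star(papers, w_hot)    # large positive z: hot spot
g_cold = getis_ord_gi_star(papers, w_cold)  # negative z: cold spot
```

Locations with z scores above roughly 1.96 are the statistically significant hot spots the study maps around San Francisco, Los Angeles, Boston and Washington, DC.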
43
44
Bornmann L, Haunschild R. Measuring Individual Performance with Comprehensive Bibliometric Reports as an Alternative to h-Index Values. J Korean Med Sci 2018; 33:e138. [PMID: 29713257 PMCID: PMC5920126 DOI: 10.3346/jkms.2018.33.e138]
Abstract
The h-index is frequently used to measure the performance of single scientists in Korea (and beyond). No single indicator alone, however, is able to provide a stable and complete assessment of performance. The Stata command bibrep.ado is introduced, which automatically produces bibliometric reports for single researchers (senior researchers working in the natural or life sciences). The user of the command receives a comprehensive bibliometric report which can be used in research evaluation instead of the h-index.
Affiliation(s)
- Lutz Bornmann
- Division for Science and Innovation Studies, Administrative Headquarters of the Max Planck Society, Munich, Germany
- Robin Haunschild
- Max Planck Institute for Solid State Research, Stuttgart, Germany
45
Bornmann L. Which research institution performs better than average in a subject category or better than selected other institutions? Online Information Review 2018. [DOI: 10.1108/oir-08-2015-0276]
Abstract
Purpose
Institutional bibliometric analyses as a rule compare the performance of different institutions. The purpose of this paper is to use a statistical approach which not only allows a comparison of the citation impact of papers from selected institutions, but also a comparison of the citation impact of the papers of these institutions with all other papers published in a particular time frame.
Design/methodology/approach
The study is based on a randomly selected cluster sample (n=4,327,013 articles and reviews from 2000 to 2004), which is drawn from a bibliometric in-house database including Web of Science data. Regression models are used to analyze citation impact scores. Subsequent to the models, average predictions at specific interesting values are calculated to analyze which factors could have an effect on the impact scores: the journal impact factor (JIF) of the journals that published the papers, and the number of affiliations given in a paper.
Findings
Three anonymous German institutions are compared with one another and with the set of all other papers in the time frame. As an indicator of institutional performance, fractionally counted PPtop 50% at the level of individual papers is used. This indicator is a normalized impact score whereby each paper is fractionally assigned to the 50 percent most frequently cited papers within its subject category and publication year. The results show that the JIF and the number of affiliations have a statistically significant effect on institutional performance.
Originality/value
Fractional regression models are introduced to analyze the fractionally counted PPtop 50% at the level of individual papers.
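The PPtop 50% indicator used in this study can be sketched as follows. The citation counts are invented, and the fractional assignment of papers sitting exactly at the threshold is simplified away here (the study assigns them fractionally so that exactly half the field counts as "top").

```python
from statistics import median

def pp_top50(unit_citations, field_citations):
    """Share of a unit's papers among the 50% most cited papers of their
    subject category and publication year (threshold ties simplified)."""
    threshold = median(field_citations)
    top = sum(1 for c in unit_citations if c > threshold)
    return top / len(unit_citations)

# Invented data: a field of 100 papers and one institution's 4 papers.
field = list(range(100))        # citation counts 0..99, median 49.5
unit = [10, 60, 80, 5]          # two papers above the field median
score = pp_top50(unit, field)   # -> 0.5
```

A score of 0.5 is the field expectation; values above it indicate above-average performance, which is what the fractional regression models then explain.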
46
Bornmann L, Leydesdorff L. Count highly-cited papers instead of papers with h citations: use normalized citation counts and compare "like with like"! Scientometrics 2018; 115:1119-1123. [PMID: 29628536 PMCID: PMC5880847 DOI: 10.1007/s11192-018-2682-1]
Abstract
Teixeira da Silva and Dobránszki (Scientometrics. 10.1007/s11192-018-2680-3, 2018) describe practical problems in using the h-index for the purpose of research evaluation. For example, they discuss the h-index differences among the bibliometric databases. In this Letter to the Editor, we argue for abstaining from using the h-index. One can use normalized indicators instead.
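For contrast with the normalized indicators this letter recommends, the h-index it argues against can be computed as follows. The citation counts are hypothetical; note that, unlike a field-normalized indicator, nothing in this computation adjusts for field or publication year, which is the letter's core objection.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank          # this paper still supports a larger h
        else:
            break
    return h

h = h_index([10, 8, 5, 4, 3])   # -> 4
```

Two researchers with identical h-indices can have very different field-normalized impact, which is why the authors advise comparing "like with like" instead.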
Affiliation(s)
- Lutz Bornmann
- Division for Science and Innovation Studies, Administrative Headquarters of the Max Planck Society, Hofgartenstr. 8, 80539 Munich, Germany
- Loet Leydesdorff
- Amsterdam School of Communication Research (ASCoR), PO Box 15793, 1001 NG Amsterdam, The Netherlands
47
Perianes-Rodriguez A, Ruiz-Castillo J. The impact of classification systems in the evaluation of the research performance of the Leiden Ranking universities. J Assoc Inf Sci Technol 2018. [DOI: 10.1002/asi.24017]
Affiliation(s)
- Antonio Perianes-Rodriguez
- Unit A1. Support to the Scientific Council, European Research Council Executive Agency, Brussels, Belgium
48
Abstract
In research evaluation of single researchers, the assessment of paper and journal impact is of interest. High journal impact reflects the ability of researchers to convince strict reviewers, and high paper impact reflects the usefulness of papers for future research. In many bibliometric studies, metrics for journal and paper impact are separately presented. In this paper, we introduce two graph types, which combine both metrics in a single graph. The graphs can be used in research evaluation to visualize the performance of single researchers comprehensively.
49
Bornmann L, Williams R. Use of the journal impact factor as a criterion for the selection of junior researchers: A rejoinder on a comment by Peters (2017). J Informetr 2017. [DOI: 10.1016/j.joi.2017.08.005]
50
Katchanov YL, Markova YV. The “space of physics journals”: topological structure and the Journal Impact Factor. Scientometrics 2017. [DOI: 10.1007/s11192-017-2471-2]