1
Zarghani M, Nemati-Anaraki L, Sedghi S, Chakoli AN, Rowhani-Farid A. Design and validation of a conceptual model regarding impact of open science on healthcare research processes. BMC Health Serv Res 2024; 24:309. [PMID: 38454424] [PMCID: PMC10921571] [DOI: 10.1186/s12913-024-10764-z]
Abstract
INTRODUCTION The development and use of digital tools at various stages of research highlight the importance of novel open science methods for an integrated and accessible research system. The objective of this study was to design and validate a conceptual model of the impact of open science on healthcare research processes. METHODS This research was conducted in three phases using a mixed-methods approach. The first phase employed a qualitative method, namely purposive sampling and semi-structured interview guides, to collect data from healthcare researchers and managers, and factors through which open science influences research processes were extracted. To refine the components and develop the proposed model, the second phase utilized a panel of experts and collective agreement through purposive sampling. The final phase involved purposive sampling and the Delphi technique to validate the components of the proposed model according to researchers' perspectives. FINDINGS From the thematic analysis of 20 interviews on the study topic, 385 codes, 38 sub-themes, and 14 main themes were extracted for the initial proposed model. These components were reviewed by the expert panel members, resulting in 31 sub-themes, 13 main themes, and 4 approved themes. Ultimately, the agreed-upon model was assessed in four layers for validation by the expert panel, and all components achieved a score of >75% in two Delphi rounds. The validated model was presented based on the infrastructure and culture layers, as well as supervision, assessment, publication, and sharing. CONCLUSION To implement these methods effectively in the research process, it is essential to establish the cultural and infrastructural groundwork and predefined requirements to prevent potential abuses and privacy concerns in the healthcare system. Applying these principles will lead to greater access to outputs, increasing the credibility of research results and the utilization of collective intelligence in solving healthcare system issues.
Affiliation(s)
- Maryam Zarghani
- Medical Library and Information Sciences, School of Health Management and Medical Information Science, Iran University of Medical Sciences, Tehran, Iran
- Leila Nemati-Anaraki
- Department of Medical Library and Information Sciences, School of Health Management and Medical Information Science, Iran University of Medical Sciences, Rashid Yasmin Street, Upper than Mirdamad St., Tehran, Iran.
- Health Management and Economics Research Center, Iran University of Medical Sciences, Tehran, Iran.
- Shahram Sedghi
- Department of Medical Library and Information Sciences, School of Health Management and Medical Information Science, Iran University of Medical Sciences, Rashid Yasmin Street, Upper than Mirdamad St., Tehran, Iran
- Health Management and Economics Research Center, Health Management Research Institute, Iran University of Medical Sciences, Tehran, Iran
- Anisa Rowhani-Farid
- Department of Pharmaceutical Health Services Research, University of Maryland School of Pharmacy, Baltimore, Maryland, USA
2
Plante J, Langerwerf L, Klopper M, Rhon DI, Young JL. Evaluation of Transparency and Openness Guidelines in Physical Therapist Journals. Phys Ther 2024; 104:pzad133. [PMID: 37815940] [DOI: 10.1093/ptj/pzad133]
Abstract
OBJECTIVE The goals of this study were to evaluate the extent to which physical therapist journals support open science research practices by adhering to the Transparency and Openness Promotion (TOP) guidelines, and to assess the relationship between journal scores and their respective journal impact factor (JIF). METHODS Scimago, mapping studies, the National Library of Medicine, and journal author guidelines were searched to identify physical therapist journals for inclusion. Journals were graded on 10 standards (29 available points in total) related to transparency with data, code, research materials, study design and analysis, preregistration of studies and statistical analyses, replication, and open science badges. The relationship between journals' transparency and openness scores and their JIF was then determined. RESULTS Author guidelines of 35 journals were assigned transparency and openness factor scores. The median score (interquartile range) across journals was 3.00 (3.00) out of 29 points (scores across all journals ranged from 0 to 8). The 2 standards with the highest degree of implementation were design and analysis transparency (reporting guidelines) and study preregistration. No journals reported on code transparency, materials transparency, replication, or open science badges. TOP factor scores were a significant predictor of JIF scores. CONCLUSION Implementation of the TOP standards by physical therapist journals is low. TOP factor scores demonstrated predictive ability for JIF scores. Journal policies must improve to make open science practices the standard in research. Journals are in an influential position to guide practices that can improve the rigor of publication, which ultimately enhances the evidence-based information used by physical therapists.
IMPACT Transparent, open, and reproducible research will move the profession forward by improving the quality of research and increasing confidence in results for implementation in clinical care.
Affiliation(s)
- Jacqueline Plante
- Department of Physical Therapy, Doctor of Science in Physical Therapy Program, Bellin College, Green Bay, Wisconsin, USA
- Leigh Langerwerf
- Department of Physical Therapy, Doctor of Science in Physical Therapy Program, Bellin College, Green Bay, Wisconsin, USA
- Mareli Klopper
- Department of Physical Therapy, Doctor of Science in Physical Therapy Program, Bellin College, Green Bay, Wisconsin, USA
- Daniel I Rhon
- Department of Physical Therapy, Doctor of Science in Physical Therapy Program, Bellin College, Green Bay, Wisconsin, USA
- Department of Rehabilitation Medicine, School of Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland, USA
- Jodi L Young
- Department of Physical Therapy, Doctor of Science in Physical Therapy Program, Bellin College, Green Bay, Wisconsin, USA
3
Yeung AWK, Robertson M, Uecker A, Fox PT, Eickhoff SB. Trends in the sample size, statistics, and contributions to the BrainMap database of activation likelihood estimation meta-analyses: An empirical study of 10-year data. Hum Brain Mapp 2023; 44:1876-1887. [PMID: 36479854] [PMCID: PMC9980884] [DOI: 10.1002/hbm.26177]
Abstract
The literature on neuroimaging meta-analysis has been thriving for over a decade. Most of these meta-analyses were coordinate-based, particularly using the activation likelihood estimation (ALE) approach. A meta-evaluation of these meta-analyses was performed to qualitatively evaluate their design and reporting standards. The publications listed on the BrainMap website were screened. Six hundred and three ALE papers published during 2010-2019 were included and analysed. For reporting standards, most of the ALE papers reported the total number of Papers involved and mentioned the inclusion/exclusion criteria for Paper selection. However, most papers did not describe how data redundancy was avoided when multiple related Experiments were reported within one paper. The most prevalent multiple-comparisons correction methods were voxel-level FDR (54.4%) and cluster-level FWE (33.8%), with the latter quickly replacing the former since 2016. For study characteristics, sample size, in terms of the number of Papers included per ALE paper and the number of Experiments per analysis, seemed stable over the decade. One-fifth of the surveyed ALE papers failed to meet the recommendation of having >17 Experiments per analysis. For data sharing, most papers did not provide their input or output data. In conclusion, the field has matured well in terms of the rising dominance of cluster-level FWE correction, and reporting has slightly improved with respect to eliminating data redundancy and providing input data. The provision of Data and Code Availability statements and flow charts of the literature screening process, as well as data submission to BrainMap, should be further encouraged.
Affiliation(s)
- Andy Wai Kan Yeung
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong, China
- Michaela Robertson
- Research Imaging Institute, University of Texas Health Science Center, San Antonio, Texas, USA
- Angela Uecker
- Research Imaging Institute, University of Texas Health Science Center, San Antonio, Texas, USA
- Peter T Fox
- Research Imaging Institute, University of Texas Health Science Center, San Antonio, Texas, USA
- Department of Radiology, University of Texas Health Science Center, San Antonio, Texas, USA
- Simon B Eickhoff
- Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Centre Jülich, Jülich, Germany
4
Appukuttan S, Bologna LL, Schürmann F, Migliore M, Davison AP. EBRAINS Live Papers - Interactive Resource Sheets for Computational Studies in Neuroscience. Neuroinformatics 2023; 21:101-113. [PMID: 35986836] [PMCID: PMC9931781] [DOI: 10.1007/s12021-022-09598-z]
Abstract
We present here an online platform for sharing the resources underlying publications in neuroscience. It enables authors to easily upload and distribute digital resources, such as data, code, and notebooks, in a structured and systematic way. Interactivity is a prominent feature of the Live Papers, with options to download, visualise, or simulate the data, models, and results presented in the corresponding publications. The resources are hosted on reliable data storage servers to ensure long-term availability and easy accessibility. All data are managed via the EBRAINS Knowledge Graph, thereby helping maintain data provenance and enabling tight integration with tools and services offered within the EBRAINS ecosystem.
Affiliation(s)
- Shailesh Appukuttan
- Université Paris-Saclay, CNRS, Institut des Neurosciences Paris-Saclay, Saclay, 91400 France
- Luca L. Bologna
- Institute of Biophysics, National Research Council, Palermo, 90143 Italy
- Felix Schürmann
- Blue Brain Project, École polytechnique fédérale de Lausanne (EPFL), Campus Biotech, Geneva, 1202 Switzerland
- Michele Migliore
- Institute of Biophysics, National Research Council, Palermo, 90143 Italy
- Andrew P. Davison
- Université Paris-Saclay, CNRS, Institut des Neurosciences Paris-Saclay, Saclay, 91400 France
5
Cadwallader L, Hrynaszkiewicz I. A survey of researchers' code sharing and code reuse practices, and assessment of interactive notebook prototypes. PeerJ 2022; 10:e13933. [PMID: 36032954] [PMCID: PMC9406794] [DOI: 10.7717/peerj.13933]
Abstract
This research aimed to understand the needs and habits of researchers in relation to code sharing and reuse; to gather feedback on prototype code notebooks created by NeuroLibre; and to help determine strategies that publishers could use to increase code sharing. We surveyed 188 researchers in computational biology. Respondents were asked how often and why they look at code, which methods of accessing code they find useful and why, what aspects of code sharing are important to them, and how satisfied they are with their ability to complete these tasks. Respondents were also asked to look at a prototype code notebook and give feedback on its features, and to report how much time they spend preparing code and whether they would be willing to increase this in order to use a code sharing tool, such as a notebook. For readers of research articles, the most common reason (70%) for looking at code was to gain a better understanding of the article. The most commonly encountered method of code sharing (linking articles to a code repository) was also the most useful method of accessing code from the reader's perspective. As authors, respondents were largely satisfied with their ability to carry out tasks related to code sharing. The most important of these tasks were ensuring that the code runs in the correct environment and sharing code with good documentation. According to our results, the average researcher is unwilling to incur the additional costs (in time, effort, or expenditure) currently needed to use code sharing tools alongside a publication. We infer that different models for funding and producing interactive or executable research outputs are needed if such outputs are to reach a large number of researchers. To increase the amount of code shared by authors, PLOS Computational Biology is therefore focusing on policy rather than tools.
6
Leipzig J, Nüst D, Hoyt CT, Ram K, Greenberg J. The role of metadata in reproducible computational research. Patterns (N Y) 2021; 2:100322. [PMID: 34553169] [PMCID: PMC8441584] [DOI: 10.1016/j.patter.2021.100322]
Abstract
Reproducible computational research (RCR) is the keystone of the scientific method for in silico analyses, packaging the transformation of raw data to published results. In addition to its role in research integrity, improving the reproducibility of scientific studies can accelerate evaluation and reuse. This potential and wide support for the FAIR principles have motivated interest in metadata standards supporting reproducibility. Metadata provide context and provenance to raw data and methods and are essential to both discovery and validation. Despite this shared connection with scientific data, few studies have explicitly described how metadata enable reproducible computational research. This review employs a functional content analysis to identify metadata standards that support reproducibility across an analytic stack consisting of input data, tools, notebooks, pipelines, and publications. Our review provides background context, explores gaps, and discovers component trends of embeddedness and methodology weight from which we derive recommendations for future work.
Affiliation(s)
- Jeremy Leipzig
- Metadata Research Center, College of Computing and Informatics, Drexel University, Philadelphia, PA, USA
- Daniel Nüst
- Institute for Geoinformatics, University of Münster, Münster, Germany
- Karthik Ram
- Berkeley Institute for Data Science, University of California, Berkeley, Berkeley, CA, USA
- Jane Greenberg
- Metadata Research Center, College of Computing and Informatics, Drexel University, Philadelphia, PA, USA
7
Janero DR. Tackling the reproducibility problem to empower translation of preclinical academic drug discovery: is there an answer? Expert Opin Drug Discov 2021; 16:595-600. [PMID: 33617734] [DOI: 10.1080/17460441.2021.1893690]
Affiliation(s)
- David R Janero
- Department of Pharmaceutical Sciences, Bouvé College of Health Sciences, and Health Sciences Entrepreneurs, Northeastern University, Boston, MA, USA
8
Koesten L, Vougiouklis P, Simperl E, Groth P. Dataset Reuse: Toward Translating Principles to Practice. Patterns (N Y) 2020; 1:100136. [PMID: 33294873] [PMCID: PMC7691392] [DOI: 10.1016/j.patter.2020.100136]
Abstract
The web provides access to millions of datasets that can have additional impact when used beyond their original context. We have little empirical insight into what makes one dataset more reusable than others and which of the existing guidelines and frameworks, if any, make a difference. In this paper, we explore potential reuse features through a literature review and present a case study on datasets on GitHub, a popular open platform for sharing code and data. We describe a corpus of more than 1.4 million data files from over 65,000 repositories. Using GitHub's engagement metrics as proxies for dataset reuse, we relate them to reuse features from the literature and devise an initial model, using deep neural networks, to predict a dataset's reusability. This demonstrates the practical gap between principles and the actionable insights that would allow data publishers and tool designers to implement functionalities that provably facilitate reuse.
Affiliation(s)
- Paul Groth
- University of Amsterdam, Amsterdam 1090 GH, the Netherlands
9
Konkol M, Nüst D, Goulier L. Publishing computational research - a review of infrastructures for reproducible and transparent scholarly communication. Res Integr Peer Rev 2020; 5:10. [PMID: 32685199] [PMCID: PMC7359270] [DOI: 10.1186/s41073-020-00095-y]
Abstract
BACKGROUND The trend toward open science increases the pressure on authors to provide access to the source code and data they used to compute the results reported in their scientific papers. Since sharing materials reproducibly is challenging, several projects have developed solutions to support the release of executable analyses alongside articles. METHODS We reviewed 11 applications that can assist researchers in adhering to reproducibility principles. The applications were found through a literature search and interactions with the reproducible research community. An application was included in our analysis if it (i) was actively maintained at the time the data for this paper were collected, (ii) supports the publication of executable code and data, and (iii) is connected to the scholarly publication process. By investigating the software documentation and published articles, we compared the applications across 19 criteria, such as deployment options and features that support authors in creating, and readers in studying, executable papers. RESULTS Of the 11 applications, eight allow publishers to self-host the system for free, whereas three provide paid services. Authors can submit an executable analysis using Jupyter Notebooks or R Markdown documents (10 applications support these formats). All approaches provide features to assist readers in studying the materials, e.g., one-click reproducible results or tools for manipulating the analysis parameters. Six applications allow materials to be modified after publication. CONCLUSIONS The applications support authors in publishing reproducible research, predominantly with literate programming. For readers, most applications provide user interfaces to inspect and manipulate the computational analysis. The next step is to investigate the gaps identified in this review, such as the costs publishers should expect when hosting an application, the handling of sensitive data, and impacts on the review process.
Affiliation(s)
- Markus Konkol
- Institute for Geoinformatics, University of Münster, Münster, Germany
- Daniel Nüst
- Institute for Geoinformatics, University of Münster, Münster, Germany
- Laura Goulier
- Institute for Geoinformatics, University of Münster, Münster, Germany
10
Identifying Data Sharing and Reuse with Scholix: Potentials and Limitations. Patterns (N Y) 2020; 1:100007. [PMID: 33205084] [PMCID: PMC7660440] [DOI: 10.1016/j.patter.2020.100007]
Abstract
The Scholexplorer API, based on the Scholix (Scholarly Link eXchange) framework, aims to identify links between articles and supporting data. This quantitative case study demonstrates that the API vastly expanded the number of datasets previously known to be affiliated with University of Bath outputs, allowing improved monitoring of compliance with funder mandates by identifying peer-reviewed articles linked to at least one unique dataset. Availability of author names for research outputs increased from 2.4% to 89.2%, which enabled identification of ten articles reusing non-Bath-affiliated datasets published in external repositories in the first phase, giving valuable evidence of data reuse and impact for data producers. Of these, only three were formally cited in the references. Further enhancement of the Scholix schema and enrichment of Scholexplorer metadata using controlled vocabularies would be beneficial. The adoption of standardized data citations by journals will be critical to creating links in a more systematic manner.
11
Abstract
This paper focuses on the characteristics of research data quality and aims to cover the most important issues related to it, giving particular attention to its attributes and to data governance. The corporate world's considerable interest in data quality is evident in the many reflections and issues reported in business-related publications, even if there are apparent differences between the values and approaches to data in corporate and in academic (research) environments. The paper also takes into consideration that addressing data quality would be unimaginable without considering big data.