1. Calvo-Cidoncha E, Camacho-Hernando C, Feu F, Pastor-Duran X, Codina-Jané C, Lozano-Rubí R. OntoPharma: ontology-based clinical decision support system to reduce medication prescribing errors. BMC Med Inform Decis Mak 2022;22:238. PMID: 36088328; PMCID: PMC9463735; DOI: 10.1186/s12911-022-01979-3.
Abstract
Background
Clinical decision support systems (CDSS) have been shown to reduce medication errors, but they are underused because of various challenges. One approach to improving CDSS is to use ontologies instead of relational databases. The primary aim was to design and develop OntoPharma, an ontology-based CDSS to reduce medication prescribing errors; the secondary aim was to implement OntoPharma in a hospital setting.
Methods
A four-step process was followed. (1) Defining the ontology domain: the ontology scope was the medication domain, and an advisory board selected four use cases (maximum dosage alert, drug-drug interaction checker, renal failure adjustment, and drug allergy checker). (2) Implementing the ontology in a formal representation: the implementation was conducted by medical informatics specialists and clinical pharmacists using Protégé-OWL. (3) Developing an ontology-driven alert module: integration with the Computerised Physician Order Entry (CPOE) system was performed through a REST API, and SPARQL was used to query the ontologies. (4) Implementing OntoPharma in a hospital setting: alerts generated between July 2020 and November 2021 were analysed.
Results
The three ontologies developed included 34,938 classes, 16,672 individuals and 82 properties. The domains addressed were medicinal product identification data, drug appropriateness data, and local concepts from the CPOE. When a medication prescribing error is identified, an alert is shown. OntoPharma generated 823 alerts in 1046 patients; 401 (48.7%) were accepted.
Conclusions
OntoPharma is an ontology-based CDSS implemented in clinical practice that generates alerts when a medication prescribing error is identified. To gain user acceptance, OntoPharma was designed and developed by a multidisciplinary team. Compared with CDSS based on relational databases, OntoPharma represents medication knowledge in a more intuitive, extensible and maintainable manner.
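The alert mechanism this abstract describes, querying an OWL ontology with SPARQL and surfacing the result to the CPOE, can be sketched as follows. This is a minimal illustration rather than OntoPharma's own code: the ontology file, namespace, and the hasMaximumDailyDoseMg property are invented for the example.

```python
# Minimal sketch of an ontology-driven maximum-dose check with rdflib.
# The namespace, property name, and ontology file are hypothetical.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/pharma#")

g = Graph()
g.parse("pharma_ontology.ttl", format="turtle")  # assumed local ontology export

def max_dose_alert(drug: str, prescribed_mg_per_day: float) -> bool:
    """True if the prescribed daily dose exceeds the maximum stored in the ontology."""
    results = g.query(
        "SELECT ?maxDose WHERE { ?drug ex:hasMaximumDailyDoseMg ?maxDose . }",
        initNs={"ex": EX},
        initBindings={"drug": EX[drug]},
    )
    return any(prescribed_mg_per_day > float(row.maxDose) for row in results)

# A CPOE integration would call this through the REST layer and show an alert on True.
print(max_dose_alert("Paracetamol", 5000.0))
```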
2. Kang T, Turfah A, Kim J, Perotte A, Weng C. A neuro-symbolic method for understanding free-text medical evidence. J Am Med Inform Assoc 2021;28:1703-1711. PMID: 33956981; PMCID: PMC8135980; DOI: 10.1093/jamia/ocab077.
Abstract
OBJECTIVE We introduce Medical evidence Dependency (MD)-informed attention, a novel neuro-symbolic model for understanding free-text clinical trial publications with generalizability and interpretability. MATERIALS AND METHODS We trained one head in the multi-head self-attention model to attend to the Medical evidence Dependency (MD) structure and to pass linguistic and domain knowledge on to later layers (MD informed). This MD-informed attention model was integrated into BioBERT and tested on 2 public machine reading comprehension benchmarks for clinical trial publications: Evidence Inference 2.0 and PubMedQA. We also curated a small set of recently published articles reporting randomized controlled trials on COVID-19 (coronavirus disease 2019), following the Evidence Inference 2.0 guidelines, to evaluate the model's robustness to unseen data. RESULTS The MD-informed attention head improves BioBERT substantially on both benchmark tasks (an increase of up to +30% in the F1 score) and achieves new state-of-the-art performance on Evidence Inference 2.0. It achieves 84% and 82% in overall accuracy and F1 score, respectively, on the unseen COVID-19 data. CONCLUSIONS MD-informed attention empowers neural reading comprehension models with interpretability and generalizability via reusable domain knowledge. Its compositionality can benefit any transformer-based architecture for machine reading comprehension of free-text medical evidence.
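The core mechanism, one self-attention head whose scores are masked by the symbolic Medical evidence Dependency structure, can be illustrated with a small numpy sketch. This is not the authors' implementation; the single-head simplification, the shapes, and the toy mask are all illustrative.

```python
# Sketch of an attention head constrained by a symbolic dependency mask.
import numpy as np

def md_informed_head(x: np.ndarray, md_mask: np.ndarray) -> np.ndarray:
    """One self-attention head whose scores are masked by an MD adjacency matrix.

    x:       (seq_len, d_model) token embeddings
    md_mask: (seq_len, seq_len), 1 where a Medical evidence Dependency link exists
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                   # scaled dot-product scores
    scores = np.where(md_mask == 1, scores, -1e9)   # block non-MD token pairs
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over allowed links
    return weights @ x                              # MD-informed head output

# Toy example: 4 tokens; token 0 ("drug") is MD-linked to token 2 ("outcome").
x = np.random.default_rng(0).normal(size=(4, 8))
mask = np.eye(4)
mask[0, 2] = mask[2, 0] = 1
print(md_informed_head(x, mask).shape)  # (4, 8)
```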
Affiliation(s)
- Tian Kang, Department of Biomedical Informatics, Columbia University, New York, USA
- Ali Turfah, Department of Statistics, Columbia University, New York, USA
- Jaehyun Kim, Department of Biomedical Informatics, Columbia University, New York, USA
- Adler Perotte, Department of Biomedical Informatics, Columbia University, New York, USA
- Chunhua Weng, Department of Biomedical Informatics, Columbia University, New York, USA
3.
Abstract
Background Clinical trials informatics has evolved through the development of multiple applications addressing distinct parts of the clinical trials cycle. This setting creates difficulties in the transport and sharing of data among applications that serve a common functionality. Purpose We present an alternative approach for the design of clinical trials information systems consisting of loosely coupled components standing on a comprehensive model of the global clinical trial process. Methods We describe how such a structure is able to support the development and implementation of complex knowledge-driven modules, such as automated database query systems, reporting systems and intelligent data-analysis tools, and how currently available internet technologies may be used to support the independent development of applications and collaboration between researchers. Results These principles were applied to the development of a fully functional system that has been in production for eight years in a diversity of pharmaceutical and academic drug trials. Marked time savings in database set-up and statistical reporting have been documented, as well as good reliability in the selection of appropriate statistical methods for clinical trial data analysis. Conclusions In order to meet the expected functionality and to avoid the proliferation of databases and software applications, clinical trials information systems need to be based on a generic model of clinical trials and on a distributed architecture.
4. Blake C, Lucic A. Automatic endpoint detection to support the systematic review process. J Biomed Inform 2015;56:42-56. PMID: 26003938; DOI: 10.1016/j.jbi.2015.05.004.
Abstract
Preparing a systematic review can take hundreds of hours to complete, but the process of reconciling different results from multiple studies is the bedrock of evidence-based medicine. We introduce a two-step approach to automatically extract three facets, two entities (the agent and object) and the way in which the entities are compared (the endpoint), from direct comparative sentences in full-text articles. The system does not require a user to predefine entities in advance and thus can be used in domains where entity recognition is difficult or unavailable. As with a systematic review, the tabular summary produced using the automatically extracted facets shows how experimental results differ between studies. Experiments were conducted using a collection of more than 2 million sentences from three journals (Diabetes, Carcinogenesis and Endocrinology) and two machine learning algorithms: support vector machines (SVM) and a general linear model (GLM). F1 and accuracy measures for the SVM and GLM differed by only 0.01 across all three comparison facets in a randomly selected set of test sentences. The system achieved its best accuracy, 92%, for objects, whereas accuracy for both agents and endpoints was 73%. F1 scores were higher for objects (0.77) than for endpoints (0.51) or agents (0.47). A situated evaluation of metformin, a drug to treat diabetes, showed system accuracy of 95%, 83% and 79% for the object, endpoint and agent respectively. The situated evaluation had higher F1 scores of 0.88, 0.64 and 0.62 for object, endpoint and agent respectively. On average, only 5.31% of the sentences in a full-text article are direct comparisons, but the tabular summaries suggest that these sentences provide a rich source of currently underutilized information that can be used to accelerate the systematic review process and identify gaps where future research should be focused.
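To make the three facets concrete, here is a toy, pattern-based extractor for one common comparative construction. The published system learns SVM/GLM classifiers over richer features; this regex is only meant to show what agent, object, and endpoint look like in a direct comparison sentence.

```python
# Toy extractor for the three comparison facets; the pattern is illustrative only.
import re

COMPARE = re.compile(
    r"(?P<endpoint>[\w\s-]+?) (?:was|were) (?:higher|lower|greater|smaller) "
    r"in (?P<agent>[\w\s-]+?) than (?:in )?(?P<object>[\w\s-]+)",
    re.IGNORECASE,
)

def extract_facets(sentence: str) -> dict | None:
    """Return {'endpoint', 'agent', 'object'} for a direct comparison, else None."""
    m = COMPARE.search(sentence)
    return {k: v.strip(" .") for k, v in m.groupdict().items()} if m else None

print(extract_facets(
    "Fasting glucose was lower in metformin-treated rats than in controls."
))
# {'endpoint': 'Fasting glucose', 'agent': 'metformin-treated rats', 'object': 'controls'}
```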
Affiliation(s)
- Catherine Blake, Graduate School of Library and Information Science and Medical Information Science, Center for Informatics Research in Science and Scholarship (CIRSS), University of Illinois, Urbana Champaign, 501 E. Daniel Street, MC-493, Champaign, IL 61820-6211, USA
- Ana Lucic, Graduate School of Library and Information Science, University of Illinois, Urbana Champaign, 501 E. Daniel Street, MC-493, Champaign, IL 61820-6211, USA
6. van Valkenhoef G, Tervonen T, de Brock B, Hillege H. Deficiencies in the transfer and availability of clinical trials evidence: a review of existing systems and standards. BMC Med Inform Decis Mak 2012;12:95. PMID: 22947211; PMCID: PMC3534489; DOI: 10.1186/1472-6947-12-95.
Abstract
Background Decisions concerning drug safety and efficacy are generally based on pivotal evidence provided by clinical trials. Unfortunately, finding the relevant clinical trials is difficult and their results are only available in text-based reports. Systematic reviews aim to provide a comprehensive overview of the evidence in a specific area, but may not provide the data required for decision making. Methods We review and analyze the existing information systems and standards for aggregate level clinical trials information from the perspective of systematic review and evidence-based decision making. Results The technology currently used has major shortcomings, which cause deficiencies in the transfer, traceability and availability of clinical trials information. Specifically, data available to decision makers is insufficiently structured, and consequently the decisions cannot be properly traced back to the underlying evidence. Regulatory submission, trial publication, trial registration, and systematic review produce unstructured datasets that are insufficient for supporting evidence-based decision making. Conclusions The current situation is a hindrance to policy decision makers as it prevents fully transparent decision making and the development of more advanced decision support systems. Addressing the identified deficiencies would enable more efficient, informed, and transparent evidence-based medical decision making.
Affiliation(s)
- Gert van Valkenhoef, Department of Epidemiology, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
7. Zaveri A, Cofiel L, Shah J, Pradhan S, Chan E, Dameron O, Pietrobon R, Ang BT. Achieving high research reporting quality through the use of computational ontologies. Neuroinformatics 2010;8:261-71. PMID: 20953737; DOI: 10.1007/s12021-010-9079-5.
Abstract
Systematic reviews and meta-analyses constitute one of the central pillars of evidence-based medicine. However, clinical trials are poorly reported, which delays meta-analyses and consequently the translation of clinical research findings to clinical practice. We propose a Center of Excellence in Research Reporting in Neurosurgery (CERR-N) and the creation of a clinically significant computational ontology to encode randomized controlled trial (RCT) studies in neurosurgery. A 128-element computational ontology was derived from the Trial Bank ontology by omitting classes that were not required to perform meta-analysis. Three researchers from our team each tagged five randomly selected RCTs published in the last 5 years (2004-2008) in the Journal of Neurosurgery (JoN), Neurosurgery Journal (NJ) and Journal of Neurotrauma (JoNT). We evaluated inter- and intra-observer reliability for the ontology using percent agreement and the kappa coefficient. Inter-observer agreement was 76.4%, 75.97% and 74.9%, and intra-observer agreement was 89.8%, 80.8% and 86.56%, for JoN, NJ and JoNT respectively. The inter-observer kappa coefficient was 0.60, 0.54 and 0.53, and the intra-observer kappa coefficient was 0.79, 0.82 and 0.79, for the JoN, NJ and JoNT journals respectively. The high degree of inter- and intra-observer agreement confirms tagging consistency in sections of a given scientific manuscript. Standardized reporting for neurosurgery articles can be reliably achieved through the integration of a computational ontology within the context of a CERR-N. This approach holds potential for the overall improvement of the quality of reporting of RCTs in neurosurgery, ultimately streamlining the translation of clinical research findings into improvements in patient care.
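The two agreement statistics reported here can be reproduced with a short worked example. The label sequences below are invented; only the arithmetic of percent agreement and Cohen's kappa is the point.

```python
# Worked example of percent agreement and Cohen's kappa for two annotators
# tagging text spans with ontology classes. Labels are made up for illustration.
from collections import Counter

def percent_agreement(a: list[str], b: list[str]) -> float:
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a: list[str], b: list[str]) -> float:
    """kappa = (p_o - p_e) / (1 - p_e), with p_e from marginal label frequencies."""
    n = len(a)
    p_o = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[label] * cb[label] for label in ca) / n**2
    return (p_o - p_e) / (1 - p_e)

r1 = ["Intervention", "Outcome", "Outcome", "Population", "Intervention"]
r2 = ["Intervention", "Outcome", "Population", "Population", "Intervention"]
print(percent_agreement(r1, r2))          # 0.8
print(round(cohens_kappa(r1, r2), 2))     # 0.71: agreement corrected for chance
```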
8. Kiritchenko S, de Bruijn B, Carini S, Martin J, Sim I. ExaCT: automatic extraction of clinical trial characteristics from journal publications. BMC Med Inform Decis Mak 2010;10:56. PMID: 20920176; PMCID: PMC2954855; DOI: 10.1186/1472-6947-10-56.
Abstract
BACKGROUND Clinical trials are one of the most important sources of evidence for guiding evidence-based practice and the design of new trials. However, most of this information is available only in free text (e.g., in journal publications), which is labour-intensive to process for systematic reviews, meta-analyses, and other evidence synthesis studies. This paper presents an automatic information extraction system, called ExaCT, that assists users with locating and extracting key trial characteristics (e.g., eligibility criteria, sample size, drug dosage, primary outcomes) from full-text journal articles reporting on randomized controlled trials (RCTs). METHODS ExaCT consists of two parts: an information extraction (IE) engine that searches the article for text fragments that best describe the trial characteristics, and a web browser-based user interface that allows human reviewers to assess and modify the suggested selections. The IE engine uses a statistical text classifier to locate those sentences that have the highest probability of describing a trial characteristic. Then, the IE engine's second stage applies simple rules to these sentences to extract text fragments containing the target answer. The same approach is used for all 21 trial characteristics selected for this study. RESULTS We evaluated ExaCT using 50 previously unseen articles describing RCTs. The text classifier (first stage) was able to recover 88% of relevant sentences among its top five candidates (top5 recall), with the topmost candidate being relevant in 80% of cases (top1 precision). Precision and recall of the extraction rules (second stage) were 93% and 91%, respectively. Together, the two stages of the extraction engine were able to provide (partially) correct solutions in 992 out of 1050 test tasks (94%), with a majority of these (696) representing fully correct and complete answers. CONCLUSIONS Our experiments confirmed the applicability and efficacy of ExaCT. Furthermore, they demonstrated that combining a statistical method with 'weak' extraction rules can identify a variety of study characteristics. The system is flexible and can be extended to handle other characteristics and document types (e.g., study protocols).
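ExaCT's two-stage design, a statistical sentence classifier followed by 'weak' extraction rules, can be sketched roughly as follows. The training sentences, the classifier choice, and the sample-size regex are stand-ins for illustration, not the published system's components.

```python
# Two-stage sketch: rank sentences statistically, then apply a weak rule.
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Stage 1: a classifier ranks sentences by the probability that they describe
# the target trial characteristic (here: sample size). Toy training data.
train_sents = [
    "A total of 120 patients were randomized.",
    "Adverse events were mild in both groups.",
    "We enrolled 45 participants in each arm.",
    "The study was approved by the ethics board.",
]
labels = [1, 0, 1, 0]  # 1 = sample-size sentence (invented annotations)

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(train_sents), labels)

def top_candidates(sents: list[str], k: int = 5) -> list[tuple[float, str]]:
    probs = clf.predict_proba(vec.transform(sents))[:, 1]
    return sorted(zip(probs, sents), reverse=True)[:k]

# Stage 2: a 'weak' rule extracts the answer fragment from top-ranked sentences.
SIZE_RULE = re.compile(r"(\d+)\s+(?:patients|participants|subjects)")

article = [
    "Blood pressure fell significantly in the treatment group.",
    "A total of 250 patients were enrolled and randomized.",
]
for _, sent in top_candidates(article):
    m = SIZE_RULE.search(sent)
    if m:
        print("sample size:", m.group(1))  # -> 250
        break
```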
Affiliation(s)
- Svetlana Kiritchenko, Institute for Information Technology, National Research Council, Ottawa, Ontario, Canada
- Berry de Bruijn, Institute for Information Technology, National Research Council, Ottawa, Ontario, Canada
- Simona Carini, University of California San Francisco, San Francisco, CA, USA
- Joel Martin, Institute for Information Technology, National Research Council, Ottawa, Ontario, Canada
- Ida Sim, University of California San Francisco, San Francisco, CA, USA
10. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci 2009;4:50. PMID: 19664226; PMCID: PMC2736161; DOI: 10.1186/1748-5908-4-50.
Abstract
BACKGROUND Many interventions found to be effective in health services research studies fail to translate into meaningful patient care outcomes across multiple contexts. Health services researchers recognize the need to evaluate not only summative outcomes but also formative outcomes to assess the extent to which implementation is effective in a specific setting, prolongs sustainability, and promotes dissemination into other settings. Many implementation theories have been published to help promote effective implementation. However, they overlap considerably in the constructs included in individual theories, and a comparison of theories reveals that each is missing important constructs included in other theories. In addition, terminology and definitions are not consistent across theories. We describe the Consolidated Framework for Implementation Research (CFIR), which offers an overarching typology to promote implementation theory development and verification about what works where and why across multiple contexts. METHODS We used a snowball sampling approach to identify published theories that were evaluated to identify constructs based on strength of conceptual or empirical support for influence on implementation, consistency in definitions, alignment with our own findings, and potential for measurement. We combined constructs across published theories that had different labels but were redundant or overlapping in definition, and we parsed apart constructs that conflated underlying concepts. RESULTS The CFIR is composed of five major domains: intervention characteristics, outer setting, inner setting, characteristics of the individuals involved, and the process of implementation. Eight constructs were identified related to the intervention (e.g., evidence strength and quality), four constructs were identified related to outer setting (e.g., patient needs and resources), 12 constructs were identified related to inner setting (e.g., culture, leadership engagement), five constructs were identified related to individual characteristics, and eight constructs were identified related to process (e.g., plan, evaluate, and reflect). We present explicit definitions for each construct. CONCLUSION The CFIR provides a pragmatic structure for approaching complex, interacting, multi-level, and transient states of constructs in the real world by embracing, consolidating, and unifying key constructs from published implementation theories. It can be used to guide formative evaluations and build the implementation knowledge base across multiple studies and settings.
Affiliation(s)
- Laura J Damschroder, HSR&D Center for Clinical Management Research, VA Ann Arbor Healthcare System (11H), 2215 Fuller Rd, Ann Arbor, MI 48105, USA
- David C Aron, VA HSR&D Center for Quality Improvement Research (14W), Louis Stokes Cleveland DVAMC, 10701 East Blvd, Cleveland, OH 44106, USA
- Rosalind E Keith, HSR&D Center for Clinical Management Research, VA Ann Arbor Healthcare System (11H), 2215 Fuller Rd, Ann Arbor, MI 48105, USA
- Susan R Kirsh, VA HSR&D Center for Quality Improvement Research (14W), Louis Stokes Cleveland DVAMC, 10701 East Blvd, Cleveland, OH 44106, USA
- Jeffery A Alexander, Health Management and Policy, School of Public Health, University of Michigan, 109 S. Observatory (M3507 SPH II), Ann Arbor, Michigan 48109-2029, USA
- Julie C Lowery, HSR&D Center for Clinical Management Research, VA Ann Arbor Healthcare System (11H), 2215 Fuller Rd, Ann Arbor, MI 48105, USA
11. Chung GY. Sentence retrieval for abstracts of randomized controlled trials. BMC Med Inform Decis Mak 2009;9:10. PMID: 19208256; PMCID: PMC2657779; DOI: 10.1186/1472-6947-9-10.
Abstract
Background The practice of evidence-based medicine (EBM) requires clinicians to integrate their expertise with the latest scientific research. But this is becoming increasingly difficult with the growing number of published articles. There is a clear need for better tools to improve clinicians' ability to search the primary literature. Randomized clinical trials (RCTs) are the most reliable source of evidence documenting the efficacy of treatment options. This paper describes the retrieval of key sentences from abstracts of RCTs as a step towards helping users find relevant facts about the experimental design of clinical studies. Method Using Conditional Random Fields (CRFs), a popular and successful method for natural language processing problems, sentences referring to Intervention, Participants and Outcome Measures are automatically categorized. This is done by extending a previous approach for labeling sentences in an abstract for general categories associated with scientific argumentation or rhetorical roles: Aim, Method, Results and Conclusion. Methods are tested on several corpora of RCT abstracts. First, structured abstracts with headings specifically indicating Intervention, Participant and Outcome Measures are used. A manually annotated corpus of structured and unstructured abstracts is also prepared for testing a classifier that identifies sentences belonging to each category. Results Using CRFs, sentences can be labeled for the four rhetorical roles with F-scores from 0.93 to 0.98. This outperforms the use of Support Vector Machines. Furthermore, sentences can be automatically labeled for Intervention, Participant and Outcome Measures in unstructured and structured abstracts where the section headings do not specifically indicate these three topics. F-scores of up to 0.83 and 0.84 are obtained for Intervention and Outcome Measure sentences. Conclusion Results indicate that some of the methodological elements of RCTs are identifiable at the sentence level in both structured and unstructured abstract reports. This is promising in that sentences labeled automatically could potentially form concise summaries, assist in information retrieval and finer-grained extraction.
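A minimal version of the CRF sentence-labeling setup might look like the sketch below, using the sklearn-crfsuite package. The features and the toy abstract are invented; the paper's feature set and corpora are far richer.

```python
# Sketch of CRF sequence labeling over the sentences of one abstract.
import sklearn_crfsuite

def sent_features(sentences: list[str], i: int) -> dict:
    """Simple per-sentence features: position and a few lexical cues."""
    s = sentences[i].lower()
    return {
        "position": i / len(sentences),  # rhetorical role correlates with position
        "has_randomized": "randomis" in s or "randomiz" in s,
        "has_outcome_cue": "outcome" in s or "measured" in s,
        "has_intervention_cue": "received" in s or "treated" in s,
    }

# One toy "abstract" as a sequence of sentences with role labels.
abstract = [
    "Patients were randomized to two arms.",
    "Each arm received metformin or placebo.",
    "The primary outcome was HbA1c at 12 weeks.",
]
labels = ["Participants", "Intervention", "OutcomeMeasure"]

X = [[sent_features(abstract, i) for i in range(len(abstract))]]
y = [labels]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, y)
print(crf.predict(X)[0])  # predicted label for each sentence in sequence
```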
Affiliation(s)
- Grace Y Chung, Centre for Health Informatics, University of New South Wales, Sydney, NSW 2052, Australia
12. Chung GY, Coiera E. Are decision trees a feasible knowledge representation to guide extraction of critical information from randomized controlled trial reports? BMC Med Inform Decis Mak 2008;8:48. PMID: 18957129; PMCID: PMC2584633; DOI: 10.1186/1472-6947-8-48.
Abstract
Background This paper proposes the use of decision trees as the basis for automatically extracting information from published randomized controlled trial (RCT) reports. An exploratory analysis of RCT abstracts is undertaken to investigate the feasibility of using decision trees as a semantic structure. Quality-of-paper measures are also examined. Methods A subset of 455 abstracts (randomly selected from a set of 7620 retrieved from Medline from 1998-2006) are examined for the quality of RCT reporting, the identifiability of RCTs from abstracts, and the completeness and complexity of RCT abstracts with respect to key decision tree elements. Abstracts were manually assigned to 6 sub-groups distinguishing whether they were primary RCTs versus other design types. For primary RCT studies, we analyzed and annotated the reporting of intervention comparison, population assignment and outcome values. To measure completeness, the frequencies by which complete intervention, population and outcome information are reported in abstracts were measured. A qualitative examination of the reporting language was conducted. Results Decision tree elements are manually identifiable in the majority of primary RCT abstracts. 73.8% of a random subset were primary studies with a single population assigned to two or more interventions. 68% of these primary RCT abstracts were structured. 63% contained pharmaceutical interventions. 84% reported the total number of study subjects. In a subset of 21 abstracts examined, 71% reported numerical outcome values. Conclusion The manual identifiability of decision tree elements in the abstract suggests that decision trees could be a suitable construct to guide machine summarisation of RCTs. The presence of decision tree elements could also act as an indicator for RCT report quality in terms of completeness and uniformity.
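The decision-tree construct the authors propose, a population branching into intervention arms with outcome values, is easy to picture as a small data structure. The field names and example trial below are illustrative only.

```python
# Illustrative data structure for the RCT decision-tree elements.
from dataclasses import dataclass, field

@dataclass
class Arm:
    intervention: str
    n: int
    outcomes: dict[str, float] = field(default_factory=dict)  # endpoint -> value

@dataclass
class RCTTree:
    population: str
    arms: list[Arm] = field(default_factory=list)

    def summary(self) -> str:
        lines = [f"Population: {self.population}"]
        for arm in self.arms:
            lines.append(f"  {arm.intervention} (n={arm.n}): {arm.outcomes}")
        return "\n".join(lines)

trial = RCTTree(
    population="Adults with type 2 diabetes",
    arms=[
        Arm("metformin 1000 mg/day", n=120, outcomes={"HbA1c change (%)": -0.9}),
        Arm("placebo", n=118, outcomes={"HbA1c change (%)": -0.1}),
    ],
)
print(trial.summary())
```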
Affiliation(s)
- Grace Y Chung, Centre for Health Informatics, University of New South Wales, Sydney, NSW 2052, Australia
15. Davidoff F, Batalden P. Toward stronger evidence on quality improvement. Draft publication guidelines: the beginning of a consensus project. Qual Saf Health Care 2005;14:319-25. PMID: 16195563; PMCID: PMC1744070; DOI: 10.1136/qshc.2005.014787.
Abstract
In contrast with the primary goals of science, which are to discover and disseminate new knowledge, the primary goal of improvement is to change performance. Unfortunately, scholarly accounts of the methods, experiences, and results of most medical quality improvement work are not published, either in print or electronic form. In our view this failure to publish is a serious deficiency: it limits the available evidence on efficacy, prevents critical scrutiny, deprives staff of the opportunity and incentive to clarify thinking, slows dissemination of established improvements, inhibits discovery of innovations, and compromises the ethical obligation to return valuable information to the public. The reasons for this failure are many: competing service responsibilities of and lack of academic rewards for improvement staff; editors' and peer reviewers' unfamiliarity with improvement goals and methods; and lack of publication guidelines that are appropriate for rigorous, scholarly improvement work. We propose here a draft set of guidelines designed to help with writing, reviewing, editing, interpreting, and using such reports. We envisage this draft as the starting point for collaborative development of more definitive guidelines. We suggest that medical quality improvement will not reach its full potential unless accurate and transparent reports of improvement work are published frequently and widely.
Affiliation(s)
- F Davidoff, Institute for Healthcare Improvement, 143 Garden Street, Wethersfield, CT 06109, USA
16. Blake C, Pratt W. Collaborative information synthesis I: A model of information behaviors of scientists in medicine and public health. J Am Soc Inf Sci Technol 2006. DOI: 10.1002/asi.20487.
17. Sim I, Olasov B, Carini S. An ontology of randomized controlled trials for evidence-based practice: content specification and evaluation using the competency decomposition method. J Biomed Inform 2004;37:108-19. PMID: 15120657; DOI: 10.1016/j.jbi.2004.03.001.
Abstract
Randomized controlled trials (RCTs) are one of the least biased sources of clinical research evidence, and are therefore a critical resource for the practice of evidence-based medicine. With over 10,000 new RCTs indexed in Medline each year, knowledge systems are needed to help clinicians translate evidence into practice. Common ontologies for RCTs and other domains would facilitate the development of these knowledge systems. However, no standard method exists for developing domain ontologies. In this paper, we describe a new systematic approach to specifying and evaluating the conceptual content of ontologies. In this method, called competency decomposition, the target task for an ontology is hierarchically decomposed into subtasks and methods, and the ontology content is specified by identifying the domain information required to complete each of the subtasks. We illustrate the use of this competency decomposition approach for the content specification and evaluation of an RCT ontology for evidence-based practice.
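The competency decomposition method can be sketched as a recursive walk over a task tree whose leaves list the domain concepts each subtask needs; the union of the leaves is the ontology's required content. The task and concept names below are invented for illustration.

```python
# Sketch of competency decomposition: subtasks annotated with needed concepts.
TASK_TREE = {
    "appraise RCT evidence": {
        "assess internal validity": ["randomization method", "allocation concealment"],
        "assess applicability": {
            "compare populations": ["eligibility criteria", "baseline characteristics"],
            "compare interventions": ["drug", "dose", "duration"],
        },
    }
}

def required_concepts(node) -> set[str]:
    """Collect every domain concept needed by any subtask in the decomposition."""
    if isinstance(node, list):  # leaf: the concepts one subtask requires
        return set(node)
    return set().union(*(required_concepts(child) for child in node.values()))

# The ontology must cover at least these concepts to support the target task.
print(sorted(required_concepts(TASK_TREE)))
```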
Affiliation(s)
- Ida Sim, Department of Medicine, Program in Biological and Medical Informatics, University of California, 3333 California St., Suite 435 Q, San Francisco, CA 94143-1211, USA
18. Bravata DM, McDonald KM, Szeto H, Smith WM, Rydzak C, Owens DK. A conceptual framework for evaluating information technologies and decision support systems for bioterrorism preparedness and response. Med Decis Making 2004;24:192-206. PMID: 15090105; DOI: 10.1177/0272989x04263254.
Abstract
OBJECTIVES The authors sought to develop a conceptual framework for evaluating whether existing information technologies and decision support systems (IT/DSSs) would assist the key decisions faced by clinicians and public health officials preparing for and responding to bioterrorism. METHODS They reviewed reports of natural and bioterrorism related infectious outbreaks, bioterrorism preparedness exercises, and advice from experts to identify the key decisions, tasks, and information needs of clinicians and public health officials during a bioterrorism response. The authors used task decomposition to identify the subtasks and data requirements of IT/DSSs designed to facilitate a bioterrorism response. They used the results of the task decomposition to develop evaluation criteria for IT/DSSs for bioterrorism preparedness. They then applied these evaluation criteria to 341 reports of 217 existing IT/DSSs that could be used to support a bioterrorism response. MAIN RESULTS In response to bioterrorism, clinicians must make decisions in 4 critical domains (diagnosis, management, prevention, and reporting to public health), and public health officials must make decisions in 4 other domains (interpretation of bioterrorism surveillance data, outbreak investigation, outbreak control, and communication). The time horizons and utility functions for these decisions differ. From the task decomposition, the authors identified critical subtasks for each of the 8 decisions. For example, interpretation of diagnostic tests is an important subtask of diagnostic decision making that requires an understanding of the tests' sensitivity and specificity. Therefore, an evaluation criterion applied to reports of diagnostic IT/DSSs for bioterrorism asked whether the reports described the systems' sensitivity and specificity. Of the 217 existing IT/DSSs that could be used to respond to bioterrorism, 79 studies evaluated 58 systems for at least 1 performance metric. CONCLUSIONS The authors identified 8 key decisions that clinicians and public health officials must make in response to bioterrorism. When applying the evaluation system to 217 currently available IT/DSSs that could potentially support the decisions of clinicians and public health officials, the authors found that the literature provides little information about the accuracy of these systems.
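One of the subtasks named here, interpreting a diagnostic test from its sensitivity and specificity, has a standard worked form: Bayes' rule gives the post-test probability. The numbers below are invented, but they show why low prevalence (as with a rare bioterrorism agent) dominates the result.

```python
# Worked example: post-test probability from sensitivity, specificity, and prior.
def positive_predictive_value(sens: float, spec: float, prevalence: float) -> float:
    """P(disease | positive test) via Bayes' rule."""
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Even a good test (95% sensitivity, 95% specificity) yields a modest PPV at
# low prevalence, which matters when screening for a rare agent.
print(round(positive_predictive_value(0.95, 0.95, 0.001), 3))  # ~0.019
```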
Affiliation(s)
- Dena M Bravata, Center for Primary Care and Outcomes Research, Stanford University, Stanford, California 94305-6019, USA
19.
Abstract
This paper concerns the use of the Internet in the research process, from identifying research issues through qualitative research, through using the Web for surveys and clinical trials, to pre-publishing and publishing research results. Material published on the Internet may be a valuable resource for researchers desiring to understand people and the social and cultural contexts within which they live outside of experimental settings, with due emphasis on the interpretations, experiences, and views of 'real world' people. Reviews of information posted by consumers on the Internet may help to identify health beliefs, common topics, motives, information, and emotional needs of patients, and point to areas where research is needed. The Internet can further be used for survey research. Internet-based surveys may be conducted by means of interactive interviews or by questionnaires designed for self-completion. Electronic one-to-one interviews can be conducted via e-mail or using chat rooms. Questionnaires can be administered by e-mail (e.g. using mailing lists), by posting to newsgroups, and on the Web using fill-in forms. In "open" web-based surveys, selection bias occurs due to the non-representative nature of the Internet population, and (more importantly) through self-selection of participants, i.e. the non-representative nature of respondents, also called the 'volunteer effect'. A synopsis of important techniques and tips for implementing Web-based surveys is given. Ethical issues involved in any type of online research are discussed. Internet addresses for finding methods and protocols are provided. The Web is also being used to assist in the identification and conduct of clinical trials. For example, the web can be used by researchers doing a systematic review who are looking for unpublished trials. Finally, the web is used for two distinct types of electronic publication. Type 1 publication is unrefereed publication of protocols or work in progress (a 'post-publication' peer review process may take place), whereas Type 2 publication is peer-reviewed and will ordinarily take place in online journals.
20.
Abstract
Biomedical research involving human beings is subject to codes of ethical conduct that mandate review, approval, and monitoring of clinical trials by research ethics committees (RECs). The role of RECs in public dissemination of results of research is not addressed explicitly in international codes or national regulations governing human research. I contend that RECs should have a prominent role in ensuring that trial results are publicly disseminated. I also propose specific responsibilities and actions that should be assumed and implemented by RECs to promote this objective.
Affiliation(s)
- Howard Mann, University Hospital, 50 North Medical Drive, Salt Lake City, UT 84132, USA
21.
Abstract
The poor translation of evidence into practice is a well-known problem. Hopes are high that information technology can help make evidence-based practice feasible for mere mortal physicians. In this paper, we draw upon the methods and perspectives of clinical practice, medical informatics, and health services research to analyze the gap between evidence and action, and to argue that computing systems for bridging this gap should incorporate both informatics and health services research expertise. We discuss 2 illustrative systems, trial banks and alchemist (a web-based system to develop and disseminate evidence-based guidelines), and conclude with a research and training agenda.
Affiliation(s)
- Ida Sim, Division of General Internal Medicine, Department of Medicine and the Graduate Group in Biological and Medical Informatics, University of California-San Francisco, 94143-0320, USA
22. Sim I, Gorman P, Greenes RA, Haynes RB, Kaplan B, Lehmann H, Tang PC. Clinical decision support systems for the practice of evidence-based medicine. J Am Med Inform Assoc 2001;8:527-34. PMID: 11687560; PMCID: PMC130063; DOI: 10.1136/jamia.2001.0080527.
Abstract
BACKGROUND The use of clinical decision support systems to facilitate the practice of evidence-based medicine promises to substantially improve health care quality. OBJECTIVE To describe, on the basis of the proceedings of the Evidence and Decision Support track at the 2000 AMIA Spring Symposium, the research and policy challenges for capturing research and practice-based evidence in machine-interpretable repositories, and to present recommendations for accelerating the development and adoption of clinical decision support systems for evidence-based medicine. RESULTS The recommendations fall into five broad areas: capture literature-based and practice-based evidence in machine-interpretable knowledge bases; develop maintainable technical and methodological foundations for computer-based decision support; evaluate the clinical effects and costs of clinical decision support systems and the ways clinical decision support systems affect and are affected by professional and organizational practices; identify and disseminate best practices for workflow-sensitive implementations of clinical decision support systems; and establish public policies that provide incentives for implementing clinical decision support systems to improve health care quality. CONCLUSIONS Although the promise of clinical decision support system-facilitated evidence-based medicine is strong, substantial work remains to be done to realize the potential benefits.
Affiliation(s)
- I Sim, Department of Medicine, University of California-San Francisco, California 94143-0320, USA