1
Luu HS, Campbell WS, Cholan RA, Edgerton ME, Englund A, Keller A, Korte ED, Mitchell SH, Watkins GT, Westervelt L, Wyman D, Powell S. Analysis of laboratory data transmission between two healthcare institutions using a widely used point-to-point health information exchange platform: a case report. JAMIA Open 2024;7:ooae032. PMID: 38660616; PMCID: PMC11042873; DOI: 10.1093/jamiaopen/ooae032.
Abstract
Objective The objective was to identify information loss that could affect clinical care in laboratory data transmission between 2 health care institutions via a Health Information Exchange (HIE) platform. Materials and Methods Transmission results of 9 laboratory tests, including LOINC codes, were compared between the sending and receiving electronic health record (EHR) systems and across the individual Health Level Seven International (HL7) Version 2 messages exchanged among the instrument, the laboratory information system, and the sending EHR. Results Loss of information for similar tests indicated the following potential patient safety issues: (1) consistently missing specimen source; (2) lack of reporting of analytical technique or instrument platform; (3) inconsistent units and reference ranges; (4) discordant LOINC code use; and (5) increased complexity with multiple HL7 versions. Discussion and Conclusions Using an HIE with standard messaging, SHIELD (Systemic Harmonization and Interoperability Enhancement for Laboratory Data) recommendations, and enhanced EHR functionality to support necessary data elements would yield consistent test identification and result value transmission.
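The loss patterns this case report flags (missing specimen source, absent units, non-LOINC test identifiers) are all visible in the raw HL7 v2 message, so they can be screened for mechanically. A minimal illustrative sketch, assuming pipe-delimited OBX/SPM segments at the standard HL7 v2 field positions; the sample message is invented, not taken from the study:

```python
# Sketch: flag the information-loss patterns described above in a raw
# HL7 v2 ORU message (hypothetical sample data; not from the study).
MESSAGE = "\r".join([
    "MSH|^~\\&|LIS|HOSP_A|EHR|HOSP_B|202401010800||ORU^R01|1|P|2.5.1",
    "OBX|1|NM|2345-7^Glucose [Mass/volume] in Serum or Plasma^LN||95|mg/dL|70-99|N|||F",
    "OBX|2|NM|718-7^Hemoglobin^LN||13.5||12.0-16.0|N|||F",  # units missing
])

def audit(message: str) -> list[str]:
    """Return a list of information-loss issues found in one HL7 v2 message."""
    issues = []
    segments = [s.split("|") for s in message.split("\r") if s]
    # No SPM segment means the specimen source never left the sending system.
    if not any(seg[0] == "SPM" for seg in segments):
        issues.append("no SPM segment: specimen source is lost")
    for seg in segments:
        if seg[0] != "OBX":
            continue
        obs_id, units = seg[3], seg[6]  # OBX-3 (identifier), OBX-6 (units)
        if not obs_id.endswith("^LN"):
            issues.append(f"OBX {seg[1]}: identifier not coded to LOINC")
        if not units:
            issues.append(f"OBX {seg[1]}: units missing")
    return issues

print(audit(MESSAGE))
```

A receiving interface engine could run checks like these before filing results, rather than discovering the gaps downstream in clinical review.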
Affiliation(s)
- Hung S Luu
- Department of Pathology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States
- Walter S Campbell
- Department of Pathology and Microbiology, University of Nebraska Medical Center, Omaha, NE 68198, United States
- Raja A Cholan
- Deloitte Consulting LLP, Washington, DC 20004, United States
- Mary E Edgerton
- Department of Pathology and Microbiology, University of Nebraska Medical Center, Omaha, NE 68198, United States
- Andrea Englund
- Department of Pathology and Microbiology, University of Nebraska Medical Center, Omaha, NE 68198, United States
- Alana Keller
- Synensys, LLC, Peachtree, GA 30269, United States
- Greg T Watkins
- Deloitte Consulting LLP, Washington, DC 20004, United States
- Daniel Wyman
- Synensys, LLC, Peachtree, GA 30269, United States
2
Carter AB, Berger AL, Schreiber R. Laboratory Test Names Matter: A Survey on What Works and What Doesn't Work for Orders and Results. Arch Pathol Lab Med 2024;148:155-167. PMID: 37134236; DOI: 10.5858/arpa.2021-0314-oa.
Abstract
CONTEXT.— Health care providers were surveyed to determine their ability to correctly decipher laboratory test names and their preferences for laboratory test names and result displays. OBJECTIVE.— To confirm principles for laboratory test nomenclature and display and to compare and contrast the abilities and preferences of different provider groups for laboratory test names. DESIGN.— Health care providers across different specialties and perspectives completed a survey of 38 questions, which included participant demographics, real-life examples of poorly named laboratory orders that they were asked to decipher, an assessment of vitamin D test name knowledge, their preferences for ideal names for tests, and their preferred display for test results. Participants were grouped and compared by profession, level of training, and the presence or absence of specialization in informatics and/or laboratory medicine. RESULTS.— Participants struggled with poorly named tests, especially with less commonly ordered tests. Participants' knowledge of vitamin D analyte names was poor and consistent with prior published studies. The most commonly selected ideal names correlated positively with the percentage of the authors' previously developed naming rules (R = 0.54, P < .001). There was strong consensus across groups for the best result display. CONCLUSIONS.— Poorly named laboratory tests are a significant source of provider confusion, and tests that are named according to the authors' naming rules as outlined in this article have the potential to improve test ordering and correct interpretation of results. Consensus among provider groups indicates that a single yet clear naming strategy for laboratory tests is achievable.
Affiliation(s)
- Alexis B Carter
- Department of Pathology and Laboratory Medicine, Children's Healthcare of Atlanta, Atlanta, Georgia (Carter)
- Andrea L Berger
- Department of Population Health Sciences, Geisinger Medical Center, Danville, Pennsylvania (Berger)
- Richard Schreiber
- Department of Medicine and Information Services, Penn State Health Holy Spirit Medical Center, Camp Hill, Pennsylvania (Schreiber)
3
de Groot R, Püttmann DP, Fleuren LM, Thoral PJ, Elbers PWG, de Keizer NF, Cornet R. Determining and assessing characteristics of data element names impacting the performance of annotation using Usagi. Int J Med Inform 2023;178:105200. PMID: 37703800; DOI: 10.1016/j.ijmedinf.2023.105200.
Abstract
INTRODUCTION Hospitals generate large amounts of data, and these data are generally modeled and labeled in a proprietary way, hampering their exchange and integration. Manually annotating data element names to internationally standardized data element identifiers is a time-consuming effort. Tools can support performing this task automatically. This study aimed to determine which factors influence the quality of automatic annotations. METHODS Data element names were taken from the Dutch COVID-19 ICU Data Warehouse, which contains data on intensive care patients with COVID-19 from 25 hospitals in the Netherlands. In this data warehouse, the data had been merged using a proprietary terminology system while also storing the original hospital labels (synonymous names). Usagi, an OHDSI annotation tool, was used to perform the annotation of the data. A gold standard was used to determine whether Usagi made correct annotations. Logistic regression was used to determine whether the number of characters, number of words, match score (Usagi's certainty), and hospital label origin influenced Usagi's ability to annotate correctly. RESULTS Usagi automatically annotated 30.5% of the data element names correctly and 5.5% of the synonymous names. The match score was the best predictor of Usagi finding the correct annotation. The AUC was 0.651 for the data element names and 0.752 for the synonymous names. The AUC for the individual hospital label origins varied between 0.460 and 0.905. DISCUSSION The results show that Usagi annotated the data element names better than the synonymous names. The hospital origin in the synonymous names dataset was associated with the number of correctly annotated concepts: hospitals that performed better had shorter synonymous names with fewer words. Using shorter data element names or synonymous names should be considered to optimize the automatic annotation process. Overall, the performance of Usagi is too poor to rely on completely for automatic annotation.
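The predictors in the study's regression (character count, word count, and the tool's match score) are all cheap to compute, so the same screening idea can be sketched without the annotation tool itself. The coefficients below are invented placeholders for illustration, not the paper's fitted values:

```python
import math

def features(label: str, match_score: float) -> tuple[int, int, float]:
    """Predictors from the study: character count, word count, match score."""
    return len(label), len(label.split()), match_score

def p_correct(label: str, match_score: float,
              # Illustrative coefficients only -- not the paper's fitted model.
              b0=-1.0, b_chars=-0.02, b_words=-0.15, b_score=3.0) -> float:
    """Logistic model of the probability that an annotation is correct."""
    chars, words, score = features(label, match_score)
    z = b0 + b_chars * chars + b_words * words + b_score * score
    return 1.0 / (1.0 + math.exp(-z))

# Shorter names with high match scores are predicted to annotate better.
print(round(p_correct("heart rate", 0.95), 3))
print(round(p_correct("mean arterial blood pressure non-invasive (mmHg)", 0.60), 3))
```

In practice a score like this could gate which automatic annotations go straight into the terminology system and which are routed to human review.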
Affiliation(s)
- Rowdy de Groot
- Amsterdam UMC Location University of Amsterdam, Department of Medical Informatics, Amsterdam, the Netherlands
- Daniel P Püttmann
- Amsterdam UMC Location University of Amsterdam, Department of Medical Informatics, Amsterdam, the Netherlands
- Lucas M Fleuren
- Department of Intensive Care Medicine, Center for Critical Care Computation Intelligence (C4i), Amsterdam Medical Data Science (AMDS), Amsterdam Public Health (APH), Amsterdam Cardiovascular Science (ACS), Amsterdam Institute for Infection and Immunity (AII), Amsterdam UMC, Vrije Universiteit, Amsterdam, the Netherlands
- Patrick J Thoral
- Department of Intensive Care Medicine, Center for Critical Care Computation Intelligence (C4i), Amsterdam Medical Data Science (AMDS), Amsterdam Public Health (APH), Amsterdam Cardiovascular Science (ACS), Amsterdam Institute for Infection and Immunity (AII), Amsterdam UMC, Vrije Universiteit, Amsterdam, the Netherlands
- Paul W G Elbers
- Department of Intensive Care Medicine, Center for Critical Care Computation Intelligence (C4i), Amsterdam Medical Data Science (AMDS), Amsterdam Public Health (APH), Amsterdam Cardiovascular Science (ACS), Amsterdam Institute for Infection and Immunity (AII), Amsterdam UMC, Vrije Universiteit, Amsterdam, the Netherlands
- Nicolette F de Keizer
- Amsterdam UMC Location University of Amsterdam, Department of Medical Informatics, Amsterdam, the Netherlands
- Ronald Cornet
- Amsterdam UMC Location University of Amsterdam, Department of Medical Informatics, Amsterdam, the Netherlands
4
Lanzola G, Polce F, Parimbelli E, Gabetta M, Cornet R, de Groot R, Kogan A, Glasspool D, Wilk S, Quaglini S. The Case Manager: An Agent Controlling the Activation of Knowledge Sources in a FHIR-Based Distributed Reasoning Environment. Appl Clin Inform 2023;14:725-734. PMID: 37339683; PMCID: PMC10499504; DOI: 10.1055/a-2113-4443.
Abstract
BACKGROUND Within the CAPABLE project, the authors developed a multi-agent system that relies on a distributed architecture. The system provides cancer patients with coaching advice and supports their clinicians with suitable decisions based on clinical guidelines. OBJECTIVES As in many multi-agent systems, we needed to coordinate the activities of all agents involved. Moreover, since the agents share a common blackboard where all patients' data are stored, we also needed to implement a mechanism for promptly notifying each agent upon the addition of new information potentially triggering its activation. METHODS The communication needs were investigated and modeled using the HL7 FHIR (Health Level Seven Fast Healthcare Interoperability Resources) standard to ensure proper semantic interoperability among agents. A syntax rooted in the FHIR search framework was then defined for representing the conditions to be monitored on the system blackboard for activating each agent. RESULTS The Case Manager (CM) was implemented as a dedicated component playing the role of an orchestrator directing the behavior of all agents involved. Agents dynamically inform the CM about the conditions to be monitored on the blackboard, using the syntax we developed. The CM then notifies each agent whenever any condition of interest occurs. The functionalities of the CM and the other actors were validated using simulated scenarios mimicking those that will be faced during pilot studies and in production. CONCLUSION The CM proved to be a key facilitator in achieving the required behavior of our multi-agent system. The proposed architecture may also be leveraged in many clinical contexts to integrate separate legacy services, turning them into a consistent telemedicine framework and enabling application reusability.
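The paper defines its own activation syntax rooted in FHIR search. The sketch below illustrates only the general idea of matching blackboard resources against search-style conditions; the simplified grammar, flat resource shape, and agent names are stand-ins, not the CAPABLE project's actual design:

```python
# Sketch of FHIR-search-style activation conditions on a shared blackboard.
# The condition grammar here is a simplified stand-in, not CAPABLE's syntax.
from urllib.parse import parse_qsl

def matches(condition: str, resource: dict) -> bool:
    """Condition like 'Observation?code=http://loinc.org|2339-0&status=final'."""
    rtype, _, query = condition.partition("?")
    if resource.get("resourceType") != rtype:
        return False
    return all(resource.get(k) == v for k, v in parse_qsl(query))

class CaseManager:
    """Notifies agents whose registered conditions match new blackboard data."""
    def __init__(self):
        self.subscriptions = []  # list of (agent_name, condition) pairs

    def register(self, agent: str, condition: str) -> None:
        self.subscriptions.append((agent, condition))

    def post(self, resource: dict) -> list[str]:
        # New data arrives on the blackboard: return the agents to activate.
        return [a for a, c in self.subscriptions if matches(c, resource)]

cm = CaseManager()
cm.register("coach", "Observation?code=http://loinc.org|2339-0&status=final")
obs = {"resourceType": "Observation",
       "code": "http://loinc.org|2339-0", "status": "final"}
print(cm.post(obs))  # the coach agent is activated
```

The design choice being illustrated is that agents declare *what* they care about in data terms, so the orchestrator stays generic and new agents can be added without changing it.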
Affiliation(s)
- Giordano Lanzola
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy
- Francesca Polce
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy
- Enea Parimbelli
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy
- Matteo Gabetta
- Research and Development Division, Biomeris S.r.l, Pavia, Italy
- Ronald Cornet
- Medical Informatics, Amsterdam Public Health Institute, Methodology & Digital Health, Amsterdam University Medical Centers, Amsterdam, The Netherlands
- Rowdy de Groot
- Medical Informatics, Amsterdam Public Health Institute, Methodology & Digital Health, Amsterdam University Medical Centers, Amsterdam, The Netherlands
- Alexandra Kogan
- Department of Information Systems, University of Haifa, Haifa, Israel
- Szymon Wilk
- Research and Development Division, Institute of Computing Science, Poznan University of Technology, Poznan, Poland
- Silvana Quaglini
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy
5
McDonald CJ, Baik SH, Zheng Z, Amos L, Luan X, Marsolo K, Qualls L. Mis-mappings between a producer's quantitative test codes and LOINC codes and an algorithm for correcting them. J Am Med Inform Assoc 2022;30:301-307. PMID: 36343113; PMCID: PMC9846663; DOI: 10.1093/jamia/ocac215.
Abstract
OBJECTIVES To assess the accuracy of the Logical Observation Identifiers Names and Codes (LOINC) mapping to local laboratory test codes, which is crucial to data integration across time and healthcare systems. MATERIALS AND METHODS We used software tools and manual reviews to estimate the rate of LOINC mapping errors among 179 million mapped test results from 2 DataMarts in PCORnet. We separately reported unweighted and weighted mapping error rates, overall and by parts of the LOINC term. RESULTS Of the 179 537 986 included mapped results for 3029 quantitative tests, 95.4% were mapped correctly, implying a 4.6% mapping error rate. Error rates were less than 5% for the more common tests with at least 100 000 mapped test results. Mapping errors varied across different LOINC classes. Error rates in the chemistry and hematology classes, which together accounted for 92.0% of the mapped test results, were 0.4% and 7.5%, respectively. About 50% of mapping errors were due to errors in the property part of the LOINC name. DISCUSSION Mapping errors could be detected automatically through inconsistencies in (1) qualifiers of the analyte, (2) specimen type, (3) property, and (4) method. Among quantitative test results, which are the large majority of reported tests, application of an automatic error detection and correction algorithm could reduce the mapping errors further. CONCLUSIONS Overall, the mapping error rate within the PCORnet data was 4.6%. This is nontrivial but less than other published error rates of 20%-40%. The error rate decreased substantially, to 0.1%, after application of the automatic detection and correction algorithm.
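Since about half of the mis-mappings sat in the property part of the LOINC name, one member of the detection family the authors describe is a units-versus-property consistency check. A minimal sketch of that idea, not the paper's algorithm; the unit lists are illustrative and far from exhaustive:

```python
# Sketch of one automatic check from the family described above: do the
# units reported with a result agree with the property part of the mapped
# LOINC term? (Unit lists below are illustrative, not exhaustive.)
MASS_UNITS = {"mg/dL", "g/dL", "ng/mL", "ug/L"}
MOLAR_UNITS = {"mmol/L", "umol/L", "nmol/L"}

def property_from_units(units):
    """Infer the LOINC property implied by a reported units string."""
    if units in MASS_UNITS:
        return "MCnc"  # mass concentration
    if units in MOLAR_UNITS:
        return "SCnc"  # substance (molar) concentration
    return None        # unknown units: no inference, no flag

def flag_mismatch(loinc_property, reported_units):
    """True when the units contradict the mapped term's property part."""
    implied = property_from_units(reported_units)
    return implied is not None and implied != loinc_property

# e.g. 2345-7 (Glucose [Mass/volume], property MCnc) reported in mmol/L
# implies a molar-concentration term instead -> likely mis-mapped.
print(flag_mismatch("MCnc", "mmol/L"))  # True
print(flag_mismatch("MCnc", "mg/dL"))   # False
```

Run over millions of mapped results, checks like this surface systematic mapping errors without any manual chart review.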
Affiliation(s)
- Clement J McDonald
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, Maryland, USA
- Seo H Baik
- Corresponding Author: Clement J. McDonald, MD, Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, 8600 Rockville Pike, Bethesda, MD 20894, USA
- Zhaonian Zheng
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, Maryland, USA
- Liz Amos
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, Maryland, USA
- Xiaocheng Luan
- Lister Hill National Center for Biomedical Communications, National Library of Medicine, National Institutes of Health, Bethesda, Maryland, USA
- Keith Marsolo
- Department of Population Health Sciences, Duke University School of Medicine, Durham, North Carolina, USA
- Laura Qualls
- Department of Population Health Sciences, Duke University School of Medicine, Durham, North Carolina, USA
6
Harmonization and standardization of data for a pan-European cohort on SARS-CoV-2 pandemic. NPJ Digit Med 2022;5:75. PMID: 35701537; PMCID: PMC9198067; DOI: 10.1038/s41746-022-00620-x.
Abstract
The European project ORCHESTRA intends to create a new pan-European cohort to rapidly advance the knowledge of the effects and treatment of COVID-19. Establishing processes that facilitate the merging of heterogeneous clusters of retrospective data was an essential challenge. In addition, data from new ORCHESTRA prospective studies have to be compatible with earlier collected information to be efficiently combined. In this article, we describe how we utilized and contributed to existing standard terminologies to create consistent semantic representation of over 2500 COVID-19-related variables taken from three ORCHESTRA studies. The goal is to enable the semantic interoperability of data within the existing project studies and to create a common basis of standardized elements available for the design of new COVID-19 studies. We also identified 743 variables that were commonly used in two of the three prospective ORCHESTRA studies and can therefore be directly combined for analysis purposes. Additionally, we actively contributed to global interoperability by submitting new concept requests to the terminology Standards Development Organizations.
7
Cholan RA, Pappas G, Rehwoldt G, Sills AK, Korte ED, Appleton IK, Scott NM, Rubinstein WS, Brenner SA, Merrick R, Hadden WC, Campbell KE, Waters MS. OUP accepted manuscript. J Am Med Inform Assoc 2022;29:1372-1380. PMID: 35639494; PMCID: PMC9277627; DOI: 10.1093/jamia/ocac072.
Abstract
Objective Assess the effectiveness of providing the Logical Observation Identifiers Names and Codes (LOINC®)-to-In Vitro Diagnostic (LIVD) coding specification, required by the United States Department of Health and Human Services for SARS-CoV-2 reporting, in medical center laboratories, and utilize the findings to inform future United States Food and Drug Administration policy on the use of real-world evidence in regulatory decisions. Materials and Methods We compared gaps and similarities between diagnostic test manufacturers’ recommended LOINC® codes and the LOINC® codes used in medical center laboratories for the same tests. Results Five medical centers and three test manufacturers extracted data from laboratory information systems (LIS) for prioritized tests of interest. The data submission ranged from 74 to 532 LOINC® codes per site. Three test manufacturers submitted 15 LIVD catalogs representing 26 distinct devices, 6956 tests, and 686 LOINC® codes. We identified mismatches in how medical centers use LOINC® to encode laboratory tests compared to how test manufacturers encode the same laboratory tests. Of 331 tests available in the LIVD files, 136 (41%) were represented by a mismatched LOINC® code by the medical centers (chi-square 45.0, 4 df, P < .0001). Discussion The five medical centers and three test manufacturers vary in how they organize, categorize, and store LIS catalog information. This variation impacts data quality and interoperability. Conclusion The results of the study indicate that providing the LIVD mappings was not sufficient to support laboratory data interoperability. National implementation of LIVD and further efforts to promote laboratory interoperability will require a more comprehensive effort and continuing evaluation and quality control.
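The core comparison in the study is a join between two catalogs: the LOINC codes a manufacturer's LIVD file recommends for each device/test, and the codes a laboratory's LIS actually assigns to the same device/test. A minimal sketch of that gap analysis; the device name is invented, and the two (real) LOINC codes are used only as example values, not as the study's data:

```python
# Sketch of the LIVD gap analysis: for each (device, test) pair present in
# both catalogs, flag disagreements between the manufacturer-recommended
# LOINC code and the code the LIS actually uses. Sample data is invented.
livd = {("AcmeDx Analyzer", "SARS-CoV-2 RNA NAA"): "94500-6"}   # manufacturer
lis  = {("AcmeDx Analyzer", "SARS-CoV-2 RNA NAA"): "94309-2"}   # medical center

mismatches = {key: (livd[key], lis[key])
              for key in livd.keys() & lis.keys()   # pairs in both catalogs
              if livd[key] != lis[key]}
print(mismatches)
```

Scaling this join across 15 LIVD catalogs and 5 LIS extracts is exactly the kind of comparison that produced the 41% mismatch figure above.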
Affiliation(s)
- Raja A Cholan
- Corresponding Author: Raja A. Cholan, MS, Deloitte Consulting LLP, Washington, DC 20004, USA
- Gregory Pappas
- Office of the National Coordinator for Health Information Technology, Washington, District of Columbia, USA
- U.S. Food and Drug Administration, Silver Spring, Maryland, USA
- Greg Rehwoldt
- Deloitte Consulting LLP, Washington, District of Columbia, USA
- Andrew K Sills
- Deloitte Consulting LLP, Washington, District of Columbia, USA
- Natalie M Scott
- Deloitte Consulting LLP, Washington, District of Columbia, USA
- Sara A Brenner
- U.S. Food and Drug Administration, Silver Spring, Maryland, USA
- U.S. Department of Health and Human Services, Silver Spring, Maryland, USA
- Riki Merrick
- Association for Public Health Laboratories, Silver Spring, Maryland, USA
- Keith E Campbell
- U.S. Food and Drug Administration, Silver Spring, Maryland, USA
- U.S. Department of Veterans Affairs, Bend, Oregon, USA
8
Carter AB, Abruzzo LV, Hirschhorn JW, Jones D, Jordan DC, Nassiri M, Ogino S, Patel NR, Suciu CG, Temple-Smolkin RL, Zehir A, Roy S. Electronic Health Records and Genomics: Perspectives from the Association for Molecular Pathology Electronic Health Record (EHR) Interoperability for Clinical Genomics Data Working Group. J Mol Diagn 2021;24:1-17. PMID: 34656760; DOI: 10.1016/j.jmoldx.2021.09.009.
Abstract
The use of genomics in medicine is expanding rapidly, but information systems are lagging in their ability to support genomic workflows both from the laboratory and patient-facing provider perspective. The complexity of genomic data, the lack of needed data standards, and lack of genomic fluency and functionality as well as several other factors have contributed to the gaps between genomic data generation, interoperability, and utilization. These gaps are posing significant challenges to laboratory and pathology professionals, clinicians, and patients in the ability to generate, communicate, consume, and use genomic test results. The Association for Molecular Pathology Electronic Health Record Working Group was convened to assess the challenges and opportunities and to recommend solutions on ways to resolve current problems associated with the display and use of genomic data in electronic health records.
Affiliation(s)
- Alexis B Carter
- The Electronic Health Record Interoperability for Clinical Genomics Data Working Group of the Informatics Subdivision, Association for Molecular Pathology, Rockville, Maryland; Children's Healthcare of Atlanta, Atlanta, Georgia
- Lynne V Abruzzo
- The Electronic Health Record Interoperability for Clinical Genomics Data Working Group of the Informatics Subdivision, Association for Molecular Pathology, Rockville, Maryland; Department of Pathology, Wexner Medical Center, The Ohio State University, Columbus, Ohio
- Julie W Hirschhorn
- The Electronic Health Record Interoperability for Clinical Genomics Data Working Group of the Informatics Subdivision, Association for Molecular Pathology, Rockville, Maryland; Medical University of South Carolina, Charleston, South Carolina
- Dan Jones
- The Electronic Health Record Interoperability for Clinical Genomics Data Working Group of the Informatics Subdivision, Association for Molecular Pathology, Rockville, Maryland; The Ohio State University Comprehensive Cancer Center, James Cancer Hospital and Solove Research Institute, Columbus, Ohio
- Mehdi Nassiri
- The Electronic Health Record Interoperability for Clinical Genomics Data Working Group of the Informatics Subdivision, Association for Molecular Pathology, Rockville, Maryland; Department of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, Indiana
- Shuji Ogino
- The Electronic Health Record Interoperability for Clinical Genomics Data Working Group of the Informatics Subdivision, Association for Molecular Pathology, Rockville, Maryland; Brigham & Women's Hospital, Boston, Massachusetts; Harvard Medical School, Boston, Massachusetts; Harvard T.H. Chan School of Public Health, Boston, Massachusetts; Broad Institute of Massachusetts Institute of Technology and Harvard, Cambridge, Massachusetts
- Nimesh R Patel
- The Electronic Health Record Interoperability for Clinical Genomics Data Working Group of the Informatics Subdivision, Association for Molecular Pathology, Rockville, Maryland; Department of Pathology, Rhode Island Hospital and Alpert Medical School of Brown University, Providence, Rhode Island
- Christopher G Suciu
- The Electronic Health Record Interoperability for Clinical Genomics Data Working Group of the Informatics Subdivision, Association for Molecular Pathology, Rockville, Maryland; Department of Pathology and Immunology, Washington University School of Medicine, St. Louis, Missouri; Institute for Informatics, Washington University School of Medicine, St. Louis, Missouri
- Ahmet Zehir
- The Electronic Health Record Interoperability for Clinical Genomics Data Working Group of the Informatics Subdivision, Association for Molecular Pathology, Rockville, Maryland; Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, New York
- Somak Roy
- The Electronic Health Record Interoperability for Clinical Genomics Data Working Group of the Informatics Subdivision, Association for Molecular Pathology, Rockville, Maryland; Department of Pathology, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio
9
McKnight J, Wilson ML, Banning P, Paton C, Bahati F, English M, Fleming K. Use of LOINC for interoperability between organisations poses a risk to safety - Authors' reply. Lancet Digit Health 2020;2:e570. PMID: 33328085; PMCID: PMC7613542; DOI: 10.1016/s2589-7500(20)30247-8.
Affiliation(s)
- Jacob McKnight
- Nuffield Department of Medicine, University of Oxford, Oxford OX1 3SY, UK
- Pamela Banning
- 3M Health Information Systems, Salt Lake City, UT, USA; LOINC, Regenstrief Institute, Indianapolis, IN, USA
- Chris Paton
- Nuffield Department of Medicine, University of Oxford, Oxford OX1 3SY, UK
- Mike English
- Nuffield Department of Medicine, University of Oxford, Oxford OX1 3SY, UK
- Ken Fleming
- Nuffield Department of Medicine, University of Oxford, Oxford OX1 3SY, UK
10
Use of LOINC for interoperability between organisations poses a risk to safety. Lancet Digit Health 2020;2:e569. PMID: 33328084; DOI: 10.1016/s2589-7500(20)30244-2.