1. Epidemiological anomaly detection in Philippine public health surveillance data through Newcomb-Benford analysis. J Public Health (Oxf) 2024:fdae062. PMID: 38693873. DOI: 10.1093/pubmed/fdae062.
Abstract
BACKGROUND: Public health surveillance is vital for monitoring and controlling disease spread. In the Philippines, an effective surveillance system is crucial for managing diverse infectious diseases. The Newcomb-Benford Law (NBL) is a statistical tool known for anomaly detection in various datasets, including those in public health.
METHODS: Using Philippine epidemiological data from 2019 to 2023, this study applied NBL analysis. Diseases included acute flaccid paralysis, diphtheria, measles, rubella, neonatal tetanus, pertussis, chikungunya, dengue, leptospirosis, and others. The analysis involved Chi-square tests, Mantissa Arc tests, Mean Absolute Deviation (MAD), and Distortion Factor calculations.
RESULTS: Most diseases exhibited nonconformity to the NBL, except for measles. MAD consistently indicated nonconformity, highlighting potential anomalies. Rabies consistently showed substantial deviations, while leptospirosis exhibited closer alignment, especially in 2021. Annual variations in disease deviations were notable, with acute meningitis encephalitis syndrome in 2019 and influenza-like illness in 2023 having the highest deviations.
CONCLUSIONS: The study provides practical insights for improving Philippine public health surveillance. Although some diseases showed conformity, the deviations suggest data quality issues. Enhancing the PIDSR, especially for diseases with consistent nonconformity, is crucial for accurate monitoring and response. The NBL's versatility across diverse domains underscores its utility for ensuring data integrity and quality assurance.
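The first-digit screening described above can be illustrated with a minimal sketch: tabulate observed leading-digit proportions, compare them with the Benford expectation P(d) = log10(1 + 1/d), and summarize the deviation with a chi-square statistic and the MAD. The function names and the example case counts below are invented for illustration; this is not the study's code:

```python
import math
from collections import Counter

# Expected Benford first-digit proportions: P(d) = log10(1 + 1/d)
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def first_digit(x):
    """Leading nonzero digit of a positive number."""
    return int(str(abs(x)).lstrip("0.")[0])

def benford_stats(values):
    """Chi-square statistic and MAD of observed first-digit
    proportions against the Benford expectation."""
    digits = [first_digit(v) for v in values if v > 0]
    n = len(digits)
    freq = Counter(digits)
    chi2, mad = 0.0, 0.0
    for d in range(1, 10):
        obs = freq.get(d, 0) / n      # observed proportion
        exp = BENFORD[d]              # expected proportion
        chi2 += n * (obs - exp) ** 2 / exp
        mad += abs(obs - exp)
    return chi2, mad / 9              # MAD averages over the nine digits

# Hypothetical weekly case counts, purely for illustration
cases = [12, 17, 103, 29, 341, 58, 19, 210, 44, 97, 132, 25, 11, 36, 150]
chi2, mad = benford_stats(cases)
```

In Nigrini's commonly used rubric, a first-digit MAD below roughly 0.006 indicates close conformity; the study additionally applied Mantissa Arc tests and Distortion Factor calculations, which are omitted here.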
2. Likelihood Ratio Test and the Evidential Approach for 2 × 2 Tables. Entropy (Basel) 2024;26:375. PMID: 38785625. PMCID: PMC11119089. DOI: 10.3390/e26050375.
Abstract
Categorical data analysis of 2 × 2 contingency tables is extremely common, not least because they provide risk difference, risk ratio, odds ratio, and log odds statistics in medical research. A χ2 test is most often used, although some researchers use the likelihood ratio test (LRT). Does it matter which test is used? Drawing on a review of the literature, an examination of the theoretical foundations, and analyses of simulations and empirical data, this paper argues that only the LRT should be used when we are interested in testing whether the binomial proportions are equal. This so-called test of independence is by far the most popular, meaning the χ2 test is widely misused. By contrast, the χ2 test should be reserved for cases where the data match a particular hypothesis (e.g., the null hypothesis) too closely, that is, where the variance is of interest and is less than expected. Low variance can be of interest in various scenarios, particularly in investigations of data integrity. Finally, it is argued that the evidential approach provides a consistent and coherent method that avoids the difficulties posed by significance testing. The approach facilitates the calculation of appropriate log likelihood ratios to suit our research aims, whether testing the proportions or testing the variance. The conclusions of this paper extend to larger contingency tables, including multi-way tables.
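For a 2 × 2 table, the two statistics compared above differ only in how they measure the observed-expected discrepancy. A minimal sketch (the example counts are invented):

```python
import math

def g_and_chi2(table):
    """Likelihood-ratio (G) and Pearson chi-square statistics for a
    2x2 table [[a, b], [c, d]]; both are referred to a chi-square
    distribution with 1 df under the null of equal proportions."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = [a + b, c + d], [a + c, b + d]
    g = chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            exp = rows[i] * cols[j] / n   # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
            if obs > 0:
                g += 2 * obs * math.log(obs / exp)
    return g, chi2

g, chi2 = g_and_chi2([[30, 10], [20, 40]])   # g ≈ 17.26, chi2 ≈ 16.67
```

The two statistics are asymptotically equivalent but can diverge in small samples, which is where the paper's argument about choosing between them matters.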
3. EPOPTIS: A Monitoring-as-a-Service Platform for Internet-of-Things Applications. Sensors (Basel) 2024;24:2208. PMID: 38610418. PMCID: PMC11014048. DOI: 10.3390/s24072208.
Abstract
The technology landscape has been dynamically reshaped by the rapid growth of the Internet of Things (IoT), introducing an era where everyday objects, equipped with smart sensors and connectivity, seamlessly interact to create intelligent ecosystems. IoT devices are highly heterogeneous in terms of software and hardware, and many of them are severely constrained. This heterogeneity and constrained nature create new challenges in terms of security, privacy, and data management. This work proposes a Monitoring-as-a-Service platform for both monitoring and management purposes, offering a comprehensive solution for collecting, storing, and processing monitoring data from heterogeneous IoT networks in support of diverse IoT-based applications. To ensure a flexible and scalable solution, we leverage the FIWARE open-source framework, incorporating blockchain and smart contract technologies to establish a robust integrity-verification mechanism for aggregated monitoring and management data. Additionally, we apply automated workflows to filter and label the collected data systematically. Moreover, we provide thorough evaluation results in terms of CPU and RAM utilization and average service latency.
4. Data from the Indian drug regulator and from Clinical Trials Registry-India does not always match. Front Med (Lausanne) 2024;11:1346208. PMID: 38435394. PMCID: PMC10906088. DOI: 10.3389/fmed.2024.1346208.
Abstract
Introduction: In India, regulatory trials, which require the drug regulator's permission, must be registered with the Clinical Trials Registry-India (CTRI) as of 19 March 2019. In this study, for about 300 trials, we aimed to identify the CTRI record that matched the trial for which the regulator had given permission. After identifying 'true pairs', our goal was to determine whether the sites and Principal Investigators mentioned in the permission letter were the same as those mentioned in the CTRI record.
Methods: We developed a methodology to compare the regulator's permission letters with CTRI records. We manually validated 151 true pairs by comparing the titles, the drug interventions, and the indications. We then examined discrepancies in their trial sites and Principal Investigators.
Results: Our findings revealed substantial variations in the number and identity of sites and Principal Investigators between the permission letters and the CTRI records.
Discussion: These discrepancies raise concerns about the accuracy and transparency of regulatory trials in India. We recommend easier data extraction from regulatory documents, cross-referencing regulatory documents and CTRI records, making public the changes to approval letters, and enforcing oversight by Institutional Ethics Committees for site additions or deletions. These steps will increase transparency around regulatory trials running in India.
5. Revolutionizing healthcare information systems with blockchain. Front Digit Health 2024;5:1329196. PMID: 38274085. PMCID: PMC10808696. DOI: 10.3389/fdgth.2023.1329196.
6. Participant Misrepresentation in Online Focus Groups: Red Flags and Proactive Measures. Ethics Hum Res 2024;46:37-42. PMID: 38240399. DOI: 10.1002/eahr.500198.
Abstract
COVID-19 public health measures prompted a significant increase in online research. This approach has several benefits over face-to-face data-collection methods, including lower cost and wider geographical reach of participants. Yet when the online data-collection instrument is a survey, there are also well-documented drawbacks of participant misrepresentation and related data-authenticity issues. However, the scholarly literature has not examined participant misrepresentation in online focus-group research. This case study communicates a concerning situation that arose during our research project: dishonest participant behavior threatened the integrity and validity of data collected through online focus-group sessions as well as e-surveys. We describe the study context, the initial red flags alerting us to the issue, our subsequent investigations, and the implications for research ethics, funding, and data quality. We conclude with a discussion of potential steps to safeguard future online focus-group research against similar issues.
7. Dynamic Tensor Modeling for Missing Data Completion in Electronic Toll Collection Gantry Systems. Sensors (Basel) 2023;24:86. PMID: 38202948. PMCID: PMC10780861. DOI: 10.3390/s24010086.
Abstract
The deployment of Electronic Toll Collection (ETC) gantry systems marks a transformative advancement in the journey toward an interconnected and intelligent highway traffic infrastructure. The integration of these systems signifies a leap forward in streamlining toll collection and minimizing environmental impact through decreased idle times. To solve the problems of missing sensor data in an ETC gantry system with large volumes and insufficient traffic detection among ETC gantries, this study constructs a high-order tensor model based on the analysis of the high-dimensional, sparse, large-volume, and heterogeneous characteristics of ETC gantry data. In addition, a missing data completion method for the ETC gantry data is proposed based on an improved dynamic tensor flow model. This study approximates the decomposition of neighboring tensor blocks in the high-order tensor model of the ETC gantry data based on tensor Tucker decomposition and the Laplacian matrix. This method captures the correlations among space, time, and user information in the ETC gantry data. Case studies demonstrate that our method enhances ETC gantry data quality across various rates of missing data while also reducing computational complexity. For instance, at a less than 5% missing data rate, our approach reduced the RMSE for time vehicle distance by 0.0051, for traffic volume by 0.0056, and for interval speed by 0.0049 compared to the MATRIX method. These improvements not only indicate a potential for more precise traffic data analysis but also add value to the application of ETC systems and contribute to theoretical and practical advancements in the field.
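The Tucker machinery underlying the completion method can be sketched with a plain truncated higher-order SVD. This toy sketch (random data, no Laplacian regularization or dynamic updating, all names invented) illustrates only the decomposition step, not the paper's improved dynamic tensor model:

```python
import numpy as np

def hosvd_truncate(X, ranks):
    """Truncated higher-order SVD (a simple Tucker approximation):
    unfold X along each mode, keep the leading left singular vectors,
    project X onto the factor subspaces to form the core, then
    multiply back out to get the low-multilinear-rank reconstruction."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolded = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolded, full_matrices=False)
        factors.append(U[:, :r])
    core = X
    for mode, U in enumerate(factors):       # core = X x_0 U0^T x_1 U1^T x_2 U2^T
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    approx = core
    for mode, U in enumerate(factors):       # reconstruct: core x_0 U0 x_1 U1 x_2 U2
        approx = np.moveaxis(np.tensordot(U, np.moveaxis(approx, mode, 0), axes=1), 0, mode)
    return approx

rng = np.random.default_rng(0)
# A genuinely rank-(2,2,2) tensor built from outer products of random factors
A, B, C = rng.normal(size=(4, 2)), rng.normal(size=(5, 2)), rng.normal(size=(6, 2))
X = np.einsum('ir,jr,kr->ijk', A, B, C)
Xhat = hosvd_truncate(X, (2, 2, 2))
err = np.linalg.norm(X - Xhat) / np.linalg.norm(X)
```

For an exactly low-multilinear-rank tensor, the truncated HOSVD reconstruction is exact; completion methods exploit this structure by fitting the core and factors only to the observed entries.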
8. Traceable Research Data Sharing in a German Medical Data Integration Center With FAIR (Findability, Accessibility, Interoperability, and Reusability)-Geared Provenance Implementation: Proof-of-Concept Study. JMIR Form Res 2023;7:e50027. PMID: 38060305. PMCID: PMC10739241. DOI: 10.2196/50027.
Abstract
BACKGROUND: Secondary investigations into digital health records, including electronic patient data from German medical data integration centers (DICs), pave the way for enhanced future patient care. However, only limited information is captured regarding the integrity, traceability, and quality of the (sensitive) data elements. This lack of detail diminishes trust in the validity of the collected data. From a technical standpoint, adhering to the widely accepted FAIR (Findability, Accessibility, Interoperability, and Reusability) principles for data stewardship necessitates enriching data with provenance-related metadata. Provenance offers insights into the readiness for reuse of a data element and serves as a supplier of data governance.
OBJECTIVE: The primary goal of this study is to augment the reusability of clinical routine data within a medical DIC for secondary utilization in clinical research. Our aim is to establish provenance traces that underpin the status of data integrity and reliability and, consequently, trust in electronic health records, thereby enhancing the accountability of the medical DIC. We present the implementation of a proof-of-concept provenance library integrating international standards as an initial step.
METHODS: We adhered to a customized road map for a provenance framework and examined the data integration steps across the ETL (extract, transform, and load) phases. Following a maturity model, we derived requirements for a provenance library. Using this research approach, we formulated a provenance model with associated metadata and implemented a proof-of-concept provenance class. Furthermore, we incorporated the internationally recognized World Wide Web Consortium (W3C) provenance standard, aligned the resultant provenance records with the interoperable healthcare standard Fast Healthcare Interoperability Resources (FHIR), and presented them in various representation formats. Ultimately, we conducted a thorough assessment of provenance trace measurements.
RESULTS: This study marks the inaugural implementation of integrated provenance traces at the data-element level within a German medical DIC. We devised and executed a practical method that synergizes the robustness of quality- and health-standard-guided (meta)data management practices. Our measurements indicate commendable pipeline execution times, attaining notable levels of accuracy and reliability in processing clinical routine data, thereby ensuring accountability in the medical DIC. These findings should inspire the development of additional tools aimed at providing evidence-based and reliable electronic health record services for secondary use.
CONCLUSIONS: The research method outlined for the proof-of-concept provenance class has been crafted to promote effective and reliable core data management practices. It aims to enhance biomedical data by imbuing it with meaningful provenance, thereby bolstering the benefits for both research and society, and it facilitates the streamlined reuse of biomedical data. As a result, the system mitigates risks, as data analysis without knowledge of the origin and quality of all data elements is futile. While the approach was initially developed for the medical DIC use case, these principles can be applied universally throughout the scientific domain.
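The notion of a provenance trace at the data-element level can be made concrete with a small sketch, loosely following the W3C PROV entity/activity/agent triad. All field names and identifiers here are invented and do not reflect the study's actual schema or its FHIR mapping:

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(element_id, value, activity, agent):
    """Minimal provenance trace for one data element: an entity with a
    content hash (integrity check), the ETL activity that produced it,
    and the responsible agent. Illustrative only."""
    payload = json.dumps(value, sort_keys=True).encode()
    return {
        "entity": {
            "id": element_id,
            "sha256": hashlib.sha256(payload).hexdigest(),  # tamper-evidence
        },
        "activity": {
            "type": activity,
            "endedAt": datetime.now(timezone.utc).isoformat(),
        },
        "agent": {"id": agent},
    }

# Hypothetical lab value passing through an ETL transform step
rec = provenance_record("lab:hb:123", {"hb_g_dl": 13.2}, "etl:transform", "dic:example")
```

Re-hashing the stored value and comparing it with the recorded digest is the simplest way such a trace supports later integrity checks.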
9. Cloud security in a bioanalytical world: considerations for use of third-party cloud services for bioanalysis. Bioanalysis 2023;15:1461-1468. PMID: 38044848. DOI: 10.4155/bio-2023-0164.
Abstract
While using the cloud environment for various functions has become commonplace, relatively little attention has been given to the use of third-party cloud services for regulated bioanalytical workflows and data management, and little guidance exists on how to utilize the cloud to support bioanalytical activities. Adopting cloud services for data acquisition can seem daunting, but some general principles can guide the evaluation of ways to accommodate regulated bioanalysis online. Determining how to incorporate cloud storage for data generated from regulated bioanalytical analysis is an important step in maintaining the security of the data.
10. Secure Biomedical Document Protection Framework to Ensure Privacy Through Blockchain. Big Data 2023;11:437-451. PMID: 37219960. DOI: 10.1089/big.2022.0170.
Abstract
In the recent healthcare era, biomedical documents play a crucial role, containing much evidence-based documentation associated with many stakeholders' data. Protecting these confidential research documents is a difficult yet significant process in the medical research domain. Such bio-documentation, related to healthcare and other community-valued data, is produced and processed by medical professionals. Traditional security mechanisms such as akteonline and the Health Insurance Portability and Accountability Act (HIPAA) are used to protect biomedical documents, addressing non-repudiation and data integrity in document storage and retrieval. However, a comprehensive framework is still needed that improves protection in terms of cost and response time. In this work, a blockchain-based biomedical document protection framework (BBDPF) is proposed, comprising blockchain-based biomedical data protection (BBDP) and blockchain-based biomedical data retrieval (BBDR) algorithms. The two algorithms enforce data consistency to prevent modification and interception of confidential data, with proper data validation, and employ strong cryptographic mechanisms to withstand post-quantum security risks, ensuring the integrity of biomedical document retrieval and non-repudiation of retrieval transactions. For the performance analysis, BBDPF was deployed on Ethereum blockchain infrastructure with smart contracts written in Solidity, and request time and search time were measured as the number of requests gradually increased. A prototype with a web-based interface was built to prove the concept and evaluate the proposed framework. The experimental results revealed that the framework provides data integrity, non-repudiation, and smart contract support in comparison with Query Notary Service, MedRec, MedShare, and Medlock.
11. Strengthening Privacy and Data Security in Biomedical Microelectromechanical Systems by IoT Communication Security and Protection in Smart Healthcare. Sensors (Basel) 2023;23:8944. PMID: 37960646. PMCID: PMC10647665. DOI: 10.3390/s23218944.
Abstract
Biomedical Microelectromechanical Systems (BioMEMS) serve as a crucial catalyst in enhancing IoT communication security and safeguarding smart healthcare systems. Situated at the nexus of advanced technology and healthcare, BioMEMS are instrumental in pioneering personalized diagnostics, monitoring, and therapeutic applications. Nonetheless, this integration brings forth a complex array of security and privacy challenges intrinsic to IoT communications within smart healthcare ecosystems, demanding comprehensive scrutiny. In this manuscript, we embark on an extensive analysis of the intricate security terrain associated with IoT communications in the realm of BioMEMS, addressing a spectrum of vulnerabilities that spans cyber threats, data manipulation, and interception of communications. The integration of real-world case studies serves to illuminate the direct repercussions of security breaches within smart healthcare systems, highlighting the imperative to safeguard both patient safety and the integrity of medical data. We delve into a suite of security solutions, encompassing rigorous authentication processes, data encryption, designs resistant to attacks, and continuous monitoring mechanisms, all tailored to fortify BioMEMS in the face of ever-evolving threats within smart healthcare environments. Furthermore, the paper underscores the vital role of ethical and regulatory considerations, emphasizing the need to uphold patient autonomy, ensure the confidentiality of data, and maintain equitable access to healthcare in the context of IoT communication security. Looking forward, we explore the impending landscape of BioMEMS security as it intertwines with emerging technologies such as AI-driven diagnostics, quantum computing, and genomic integration, anticipating potential challenges and strategizing for the future. In doing so, this paper highlights the paramount importance of adopting an integrated approach that seamlessly blends technological innovation, ethical foresight, and collaborative ingenuity, thereby steering BioMEMS towards a secure and resilient future within smart healthcare systems, in the ambit of IoT communication security and protection.
12. Highlights of the 14th Japan Bioanalysis Forum Symposium. Bioanalysis 2023;15:1271-1276. PMID: 37855216. DOI: 10.4155/bio-2023-0162.
Abstract
The 14th Japan Bioanalysis Forum Symposium was held at Tower Hall Funabori, Japan from 1-3 March 2023. The conference theme, 'Bringing Together - the Expertise of Bioanalysis', aimed to enable people from various fields to gather, learn and collaborate together for the common goal of delivering medicines to patients faster. Approximately 360 participants from various fields, including pharmaceutical industries, contractors, academia and regulatory authorities, gathered at an in-person symposium which had an online participation option, for the first time in 4 years. The symposium offered a wide range of topics including ICH M10, new modalities, biomarkers, immunogenicity, electronization and patient-centric sampling. The latest research results were provided from domestic and overseas scientists. This report summarizes the major topics.
13. Good clinical practices in the bioanalytical laboratory. Bioanalysis 2023;15:1381-1388. PMID: 37737137. DOI: 10.4155/bio-2023-0150.
Abstract
Despite the existence of good clinical practice guidelines, the way in which they are applied to the bioanalytical laboratory remains unclear. Aspects of patient confidentiality, informed consent and subject withdrawal; addressing unblinding associated with sample analysis, including repeat analysis and incurred sample reanalysis; or the differences in responsibilities between the sponsor and contract research organization are not articulated by the US FDA within the bioanalytical setting, and for most bioanalytical laboratories this remains a gap in their standard operating procedures. The aim of this article is to identify and clarify the aspects of the good clinical practices that are applicable to the bioanalytical laboratory when conducting bioanalysis with clinical samples, and to address potential gaps in the bioanalytical laboratory when it comes to clinical sample bioanalysis.
14. Data Management: The First Step in Reproducible Research. Indian J Occup Environ Med 2023;27:359-363. PMID: 38390491. PMCID: PMC10880825. DOI: 10.4103/ijoem.ijoem_342_22.
Abstract
Reproducibility is a preferred aim in any scientific research, including occupational health research. Data management is an important and essential step in marching towards reproducibility. Good data management helps us stay organized, improves transparency and quality, and fosters collaboration. Here we discuss how to organize and prepare for data management and how data management facilitates interoperability and accessibility, followed by the storage and dissemination of data. We wrap up by providing pointers on what needs to be included in data management plans.
15. ChatGPT and the stochastic parrot: artificial intelligence in medical research. Br J Anaesth 2023;131:e120-e121. PMID: 37516646. DOI: 10.1016/j.bja.2023.06.065.
16. Improving Data Integrity and Quality From Online Health Surveys of Women With Infant Children. Nurs Res 2023;72:386-391. PMID: 37625181. PMCID: PMC10534022. DOI: 10.1097/nnr.0000000000000671.
Abstract
BACKGROUND: Online surveys have proven to be an efficient method of gathering health information in studies of various populations, but they are accompanied by threats to data integrity and quality. We draw on our experience with a nefarious intrusion into an online survey and our efforts to protect data integrity and quality in a subsequent online survey.
OBJECTIVES: We aim to share lessons learned regarding detecting and preventing threats to online survey data integrity and quality.
METHODS: We examined data from two online surveys we conducted, as well as findings reported in the literature, to delineate threats to and prevention strategies for online health surveys.
RESULTS: Our first survey was launched inadvertently without the available security features engaged in Qualtrics, resulting in a number of threats to data integrity and quality. These threats included multiple submissions, often within seconds of each other, from the same internet protocol (IP) address; use of proxy servers or virtual private networks, often with suspicious or abusive IP address ratings and geolocations outside the United States; and incoherent text data or otherwise suspicious responses. After excluding fraudulent, suspicious, or ineligible cases, as well as cases that terminated before submitting data, 102 of 224 (45.5%) eligible survey respondents remained with partial or complete data. In a second online survey with the security features in Qualtrics engaged, no IP addresses were associated with duplicate submissions. To further protect data integrity and quality, we added items to detect inattentive or fraudulent respondents and applied a risk-scoring system in which 23 survey respondents were high risk, 16 were moderate risk, and 289 of 464 (62.3%) were low or no risk and therefore considered eligible respondents.
DISCUSSION: Technological safeguards, such as blocking repeat IP addresses, and study design features to detect inattentive or fraudulent respondents are strategies to support data integrity and quality in online survey research. For online data collection to make meaningful contributions to nursing research, nursing scientists must implement technological, study design, and methodological safeguards to protect data integrity and quality, and future research should focus on advancing data protection methodologies.
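The technological safeguards described above, blocking repeat IP addresses and catching bursts of near-simultaneous submissions, can be sketched as a post-hoc screen. Field names, the 30-second threshold, and the example records are invented for illustration:

```python
from collections import Counter
from datetime import datetime, timedelta

def flag_suspicious(responses, min_gap_seconds=30):
    """Return indices of submissions that share an IP address or arrive
    within `min_gap_seconds` of the previous submission (a burst).
    `responses` is a list of (ip, timestamp) tuples."""
    flagged = set()
    ip_counts = Counter(ip for ip, _ in responses)
    ordered = sorted(enumerate(responses), key=lambda kv: kv[1][1])
    prev_time = None
    for idx, (ip, ts) in ordered:
        if ip_counts[ip] > 1:          # duplicate IP address
            flagged.add(idx)
        if prev_time is not None and (ts - prev_time) < timedelta(seconds=min_gap_seconds):
            flagged.add(idx)           # near-simultaneous submission
        prev_time = ts
    return flagged

t0 = datetime(2023, 1, 1, 12, 0, 0)
subs = [
    ("203.0.113.5", t0),                           # duplicate IP
    ("203.0.113.5", t0 + timedelta(seconds=4)),    # same IP, 4 s later
    ("198.51.100.7", t0 + timedelta(minutes=10)),  # clean
]
bad = flag_suspicious(subs)
```

In practice such screens complement, rather than replace, platform-level fraud-detection settings and attention-check items embedded in the survey itself.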
17. A Security Framework for Increasing Data and Device Integrity in Internet of Things Systems. Sensors (Basel) 2023;23:7532. PMID: 37687988. PMCID: PMC10490583. DOI: 10.3390/s23177532.
Abstract
The trustworthiness of a system is not just about proving the identity or integrity of the hardware; it also extends to the data, control, and management planes of communication between devices and the software they are running. Such trust in data and device integrity is desirable for Internet of Things (IoT) systems, especially in critical environments. In this study, we developed a security framework, IoTAttest, for building IoT systems that leverage Trusted Platform Module 2.0 and remote attestation technologies to establish the integrity of IoT devices' collected data and control plane traffic. After presenting the features and reference architecture of IoTAttest, we evaluated its privacy preservation and validity through the implementation of two proof-of-concept IoT applications, designed by two teams of university students based on the reference architecture. After the development, the developers answered open questions regarding their experience and perceptions of the framework's usability, limitations, scalability, extensibility, potential, and security. The results indicate that IoTAttest can be used to develop IoT systems with effective attestation to achieve device and data integrity. The proof-of-concept outcomes illustrate the functionalities and performance of the framework, and the feedback from the proof-of-concept developers affirms that they perceived it as usable, scalable, extensible, and secure.
18. Empowering Precision Medicine: Unlocking Revolutionary Insights through Blockchain-Enabled Federated Learning and Electronic Medical Records. Sensors (Basel) 2023;23:7476. PMID: 37687931. PMCID: PMC10490801. DOI: 10.3390/s23177476.
Abstract
Precision medicine has emerged as a transformative approach to healthcare, aiming to deliver personalized treatments and therapies tailored to individual patients. However, the realization of precision medicine relies heavily on the availability of comprehensive and diverse medical data. In this context, blockchain-enabled federated learning, coupled with electronic medical records (EMRs), presents a groundbreaking solution to unlock revolutionary insights in precision medicine. This abstract explores the potential of blockchain technology to empower precision medicine by enabling secure and decentralized data sharing and analysis. By leveraging blockchain's immutability, transparency, and cryptographic protocols, federated learning can be conducted on distributed EMR datasets without compromising patient privacy. The integration of blockchain technology ensures data integrity, traceability, and consent management, thereby addressing critical concerns associated with data privacy and security. Through the federated learning paradigm, healthcare institutions and research organizations can collaboratively train machine learning models on locally stored EMR data, without the need for data centralization. The blockchain acts as a decentralized ledger, securely recording the training process and aggregating model updates while preserving data privacy at its source. This approach allows the discovery of patterns, correlations, and novel insights across a wide range of medical conditions and patient populations. By unlocking revolutionary insights through blockchain-enabled federated learning and EMRs, precision medicine can revolutionize healthcare delivery. This paradigm shift has the potential to improve diagnosis accuracy, optimize treatment plans, identify subpopulations for clinical trials, and expedite the development of novel therapies. 
Furthermore, the transparent and auditable nature of blockchain technology enhances trust among stakeholders, enabling greater collaboration, data sharing, and collective intelligence in the pursuit of advancing precision medicine. In conclusion, this abstract highlights the transformative potential of blockchain-enabled federated learning in empowering precision medicine. By unlocking revolutionary insights from diverse and distributed EMR datasets, this approach paves the way for a future where healthcare is personalized, efficient, and tailored to the unique needs of each patient.
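The federated workflow the abstract describes — local training at each site, aggregation of model updates, and a tamper-evident record of each round — can be sketched in a few lines. This is an illustrative toy, not the architecture of the cited system; the plain weight vectors, `federated_average`, and the JSON-based ledger entries are all assumptions made for the sketch:

```python
import hashlib
import json

def sha256(obj):
    """Stable digest of a JSON-serializable object."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def federated_average(local_weights):
    """Federated averaging: element-wise mean of locally trained weight vectors."""
    n = len(local_weights)
    return [sum(ws) / n for ws in zip(*local_weights)]

def append_round(ledger, round_no, global_weights):
    """Record a training round on an append-only, hash-linked ledger."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"round": round_no, "model": sha256(global_weights), "prev": prev}
    entry["hash"] = sha256(entry)
    ledger.append(entry)
    return entry
```

Each site trains on its own EMRs and only weights leave the site; the hash-linked ledger lets auditors replay the training history without seeing any patient data.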
|
19
|
The evolving role of data & safety monitoring boards for real-world clinical trials. J Clin Transl Sci 2023; 7:e179. [PMID: 37745930 PMCID: PMC10514684 DOI: 10.1017/cts.2023.582] [Received: 03/02/2023] [Revised: 06/20/2023] [Accepted: 06/24/2023] [Indexed: 09/26/2023] Open
Abstract
Introduction Clinical trials provide the "gold standard" evidence for advancing the practice of medicine, even as they evolve to integrate real-world data sources - data not intended for research and often collected in free-living contexts. We refer to trials that incorporate real-world data sources as real-world trials. Such trials may have the potential to enhance the generalizability of findings, facilitate pragmatic study designs, and evaluate real-world effectiveness. However, key differences in the design, conduct, and implementation of real-world vs traditional trials have ramifications for data management that can threaten their desired rigor. Methods Three examples of real-world trials that leverage different types of data sources - wearables, medical devices, and electronic health records - are described. Key insights applicable to all three trials in their relationship to Data and Safety Monitoring Boards (DSMBs) are derived. Results Insights and recommendations are given on four topic areas: A. Charge of the DSMB; B. Composition of the DSMB; C. Pre-launch Activities; and D. Post-launch Activities. We recommend stronger and additional focus on data integrity. Conclusions Clinical trials can benefit from incorporating real-world data sources, potentially increasing the generalizability of findings and overall trial scale and efficiency. The data, however, present a level of informatic complexity that relies heavily on a robust data science infrastructure. The nature of monitoring the data and safety must evolve to adapt to new trial scenarios to protect the rigor of clinical trials.
|
20
|
Introduction to the Use of Linear and Nonlinear Regression Analysis in Quantitative Biological Assays. Curr Protoc 2023; 3:e801. [PMID: 37358238 DOI: 10.1002/cpz1.801] [Indexed: 06/27/2023]
Abstract
Biological assays are essential tools in biomedical and pharmaceutical research. In simplest terms, such an assay is an analytical method used to measure or predict a response in a biological system in the presence of a given stimulus (e.g., drug). The inherent complexity involved in evaluating a biological system requires the use of rigorous and appropriate tools for data analysis. Linear and nonlinear regression models represent critically important statistical analyses used to define the relationships between variables of interest in biological systems. Recent challenges relating to the reproducibility of published data suggest the absence of standardized and routine use of statistics to support experimental results across a wide range of scientific disciplines. The current situation warrants an introductory review of basic regression concepts using current, practical examples, along with references to in-depth resources. The goal is to provide the necessary information to help standardize the analysis of biological assays in academic research and drug discovery and development, elevating their utility and increasing data transparency and reproducibility. © 2023 The Authors. Current Protocols published by Wiley Periodicals LLC.
|
21
|
SM2-Based Offline/Online Efficient Data Integrity Verification Scheme for Multiple Application Scenarios. SENSORS (BASEL, SWITZERLAND) 2023; 23:4307. [PMID: 37177511 PMCID: PMC10181684 DOI: 10.3390/s23094307] [Received: 03/14/2023] [Revised: 04/17/2023] [Accepted: 04/23/2023] [Indexed: 05/15/2023]
Abstract
With the rapid development of cloud storage and cloud computing technology, users increasingly store data in the cloud for more convenient services. To ensure the integrity of cloud data, scholars have proposed cloud data integrity verification schemes to protect users' data security. Storage environments for the Internet of Things, big data and medical big data place an even stronger demand on data integrity verification schemes, while also requiring more comprehensive functionality from them. Existing data integrity verification schemes are mostly designed for the cloud storage environment and cannot be applied directly to the Internet of Things in the context of big data and medical big data storage. To solve this problem, drawing on the characteristics and requirements of IoT and medical data storage, we designed an SM2-based offline/online efficient data integrity verification scheme. The resulting scheme uses the SM4 block cipher algorithm to protect the privacy of the data content and uses a dynamic hash table to support dynamic updating of data. Based on the SM2 signature algorithm, the scheme also realizes offline tag generation and batch audits, reducing the computational burden on users. Security proof and efficiency analysis show that the scheme is safe and efficient and can be used in a variety of application scenarios.
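The dynamic hash table the scheme uses for dynamic data updates can be illustrated with stdlib primitives. SM2 and SM4 are Chinese national cryptography standards with no Python stdlib implementation, so this sketch substitutes HMAC-SHA256 for the SM2-signed tags; the class, key, and function names are invented for illustration:

```python
import hmac
import hashlib

KEY = b"user-secret-key"  # stand-in for the user's SM2 private key

def tag(block: bytes, version: int) -> str:
    """Integrity tag over block content and its version counter.
    (HMAC-SHA256 here; the paper's scheme uses SM2 signatures.)"""
    return hmac.new(KEY, block + version.to_bytes(8, "big"), hashlib.sha256).hexdigest()

class DynamicHashTable:
    """index -> (version, tag): supports the dynamic updates the scheme needs."""
    def __init__(self):
        self.entries = {}

    def insert(self, idx, block):
        self.entries[idx] = (1, tag(block, 1))

    def update(self, idx, new_block):
        v = self.entries[idx][0] + 1          # bump version on every modification
        self.entries[idx] = (v, tag(new_block, v))

    def verify(self, idx, block):
        v, t = self.entries[idx]
        return hmac.compare_digest(t, tag(block, v))
```

Binding the version counter into the tag is what blocks replay of a stale block after an update, which is the point of the dynamic structure.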
|
22
|
A Survey on the Security Challenges of Low-Power Wireless Communication Protocols for Communicating Concrete in Civil Engineerings. SENSORS (BASEL, SWITZERLAND) 2023; 23:1849. [PMID: 36850446 PMCID: PMC9959860 DOI: 10.3390/s23041849] [Received: 12/02/2022] [Revised: 02/02/2023] [Accepted: 02/03/2023] [Indexed: 06/18/2023]
Abstract
With the increase in low-power wireless communication solutions, the deployment of Wireless Sensor Networks is becoming common, especially to implement Cyber-Physical Systems. The latter can be used for Structural Health Monitoring applications in critical environments. To ensure long-term deployment, battery-free and energy-autonomous wireless sensors are designed that can be powered by ambient energy harvesting or Wireless Power Transfer. Because of the criticality of the applications and the limited resources of the nodes, security is generally relegated to the background, which leads to vulnerabilities in the entire system. In this paper, a security analysis is presented based on an example: the implementation of communicating reinforced concrete using a network of battery-free nodes. First, the employed wireless communication protocols are presented with regard to their native security features, main vulnerabilities, and most common attacks. Then, the security analysis is carried out for the targeted implementation, in particular by defining the main attack hypotheses and their consequences. Finally, solutions to secure the data and the network are compared. From a global point of view, this security analysis must be initiated at project definition and continued throughout the deployment to allow the use of adapted, updatable and upgradable solutions.
|
23
|
Distributed Data Integrity Verification Scheme in Multi-Cloud Environment. SENSORS (BASEL, SWITZERLAND) 2023; 23:1623. [PMID: 36772662 PMCID: PMC9919567 DOI: 10.3390/s23031623] [Received: 01/09/2023] [Revised: 01/30/2023] [Accepted: 01/30/2023] [Indexed: 06/18/2023]
Abstract
Most existing data integrity auditing protocols in cloud storage rely on proofs of probabilistic data possession, so the sampling rate of data integrity verification is kept low to spare the auditor expensive costs. In a multi-cloud environment, however, the amount of stored data is huge, and a higher sampling rate is needed, which in turn raises the auditor's cost. Therefore, this paper proposes a blockchain-based distributed data integrity verification protocol for multi-cloud environments that enables data verification using multiple verifiers. The proposed scheme aims to increase the sampling rate of data verification without significantly increasing the costs. The performance analysis shows that the protocol achieves lower time consumption for verification tasks with multiple verifiers than with a single verifier. Furthermore, using multiple verifiers also decreases each verifier's computation and communication costs.
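The core idea — sample a subset of stored blocks and divide the audit work across verifiers — can be sketched as follows. This is a hedged illustration, not the paper's protocol: the blockchain coordination layer is omitted and the function names are invented:

```python
import hashlib
import random

def digest(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def sample_and_partition(n_blocks, sample_size, n_verifiers, seed=0):
    """Sample block indices for audit and split them across verifiers."""
    rng = random.Random(seed)
    sampled = rng.sample(range(n_blocks), sample_size)
    return [sampled[i::n_verifiers] for i in range(n_verifiers)]

def audit(assigned, stored_tags, fetch_block):
    """One verifier's job: recompute digests for its share of the sample."""
    return all(digest(fetch_block(i)) == stored_tags[i] for i in assigned)
```

With v verifiers each recomputing only sample_size/v digests, the overall sampling rate can rise without raising any single verifier's cost — the trade-off the abstract describes.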
|
24
|
Abstract
Using the NCATS National COVID Cohort Collaborative (N3C) dataset, we evaluated 14 billion medical records and identified more than 12 million patients tested for COVID-19 across the US. To assess potential disparities in COVID-19 testing, we chose ten US states and compared each state's population distribution characteristics with the distribution of corresponding characteristics in N3C. Minority racial groups were more prevalent in the N3C dataset than in census data. The proportion of Hispanics and Latinos in N3C was slightly lower than in the state census. Patients over 65 years old had higher representation in the N3C dataset, and patients under 18 were underrepresented. The proportion of females in N3C was higher compared with the state data. All ten states in N3C showed a higher representation of urban versus rural population compared to census data.
|
25
|
Data integrity within the biopharmaceutical sector in the era of Industry 4.0. Biotechnol J 2022; 17:e2100609. [PMID: 35318814 DOI: 10.1002/biot.202100609] [Received: 11/11/2021] [Revised: 03/12/2022] [Accepted: 03/19/2022] [Indexed: 11/11/2022]
Abstract
Data Integrity (DI) in the highly regulated biopharmaceutical sector is of paramount importance to ensure decisions on meeting product specifications are accurate and hence assure patient safety and product quality. The challenge of ensuring DI within this sector is becoming more complex with the growing amount of data generated given increasing adoption of process analytical technology (PAT), advanced automation, high throughput microscale studies and managing data models created by machine learning (ML) tools. This paper aims to identify DI risks and mitigation strategies in biopharmaceutical manufacturing facilities as the sector moves towards Industry 4.0. To achieve this, the paper examines common DI violations and links them to the ALCOA+ principles used across the FDA, EMA and MHRA. The relevant DI guidelines from the ISPE's GAMP5® and ISA-95 standards are also discussed with a focus on the role of validated computerised and automated manufacturing systems to avoid DI risks and generate compliant data. The paper also highlights the importance of DI whilst using data analytics to ensure the developed models meet the required regulatory standards for process monitoring and control. This includes a discussion on possible mitigation strategies and methodologies to ensure data integrity is maintained for smart manufacturing operations such as the use of cloud platforms to facilitate the storage and transfer of manufacturing data, and migrate away from paper-based records. This article is protected by copyright. All rights reserved.
|
26
|
The Use of Blockchain Technology in the Health Care Sector: Systematic Review. JMIR Med Inform 2022; 10:e17278. [PMID: 35049516 PMCID: PMC8814929 DOI: 10.2196/17278] [Received: 12/02/2019] [Revised: 11/12/2020] [Accepted: 09/28/2021] [Indexed: 11/22/2022] Open
Abstract
Background Blockchain technology is a part of Industry 4.0’s new Internet of Things applications: decentralized systems, distributed ledgers, and immutable and cryptographically secure technology. This technology entails a series of transaction lists with identical copies shared and retained by different groups or parties. One field where blockchain technology has tremendous potential is health care, due to the more patient-centric approach to the health care system as well as blockchain’s ability to connect disparate systems and increase the accuracy of electronic health records. Objective The aim of this study was to systematically review studies on the use of blockchain technology in health care and to analyze the characteristics of the studies that have implemented blockchain technology. Methods This study used a systematic review methodology to find literature related to the implementation aspect of blockchain technology in health care. Relevant papers were searched for using PubMed, SpringerLink, IEEE Xplore, Embase, Scopus, and EBSCOhost. A quality assessment of literature was performed on the 22 selected papers by assessing their trustworthiness and relevance. Results After full screening, 22 papers were included. A table of evidence was constructed, and the results of the selected papers were interpreted. The results of scoring for measuring the quality of the publications were obtained and interpreted. Out of 22 papers, a total of 3 (14%) high-quality papers, 9 (41%) moderate-quality papers, and 10 (45%) low-quality papers were identified. Conclusions Blockchain technology was found to be useful in real health care environments, including for the management of electronic medical records, biomedical research and education, remote patient monitoring, pharmaceutical supply chains, health insurance claims, health data analytics, and other potential areas. 
The main reasons for the implementation of blockchain technology in the health care sector were identified as data integrity, access control, data logging, data versioning, and nonrepudiation. The findings could help the scientific community to understand the implementation aspect of blockchain technology. The results from this study help in recognizing the accessibility and use of blockchain technology in the health care sector.
|
27
|
Evidence and risk indicators of non-random sampling in clinical trials in implant dentistry: A systematic appraisal. J Clin Periodontol 2021; 49:144-152. [PMID: 34747036 PMCID: PMC9299163 DOI: 10.1111/jcpe.13571] [Received: 08/20/2021] [Revised: 10/25/2021] [Accepted: 10/29/2021] [Indexed: 11/30/2022]
Abstract
Aim Analysis of distribution of p‐values of continuous differences between test and controls after randomization provides evidence of unintentional error, non‐random sampling, or data fabrication in randomized controlled trials (RCTs). We assessed evidence of highly unusual distributions of baseline characteristics of subjects enrolled in clinical trials in implant dentistry. Materials and methods RCTs published between 2005 and 2020 were systematically searched in Pubmed, Embase, and Cochrane databases. Baseline patient data were extracted from full text articles by two independent assessors. The hypothesis of non‐random sampling was tested by comparing the expected and the observed distribution of the p‐values of differences between test and controls after randomization. Results One‐thousand five‐hundred and thirty‐eight unique RCTs were identified, of which 409 (26.6%) did not report baseline characteristics of the population, and 671 (43.6%) reported data in forms other than mean and standard deviation and could not be used to assess their random sampling. Four‐hundred and fifty‐eight trials with 1449 baseline variables in the form of mean and standard deviation were assessed. The study observed an over‐representation of very small p‐values [<.001, 1.38%, 95% confidence interval (CI) 0.85–2.12 compared to the expected 0.10%, 95% CI 0.00–0.26]. No evidence of over‐representation of larger p‐values was observed. Unusual distributions were present in 2.38% of RCTs and more frequent in non‐registered trials, in studies supported by non‐industry funding, and in multi‐centre RCTs. Conclusions The inability to assess random sampling due to insufficient reporting in 26.6% of trials requires attention. In trials reporting suitable baseline data, unusual distributions were uncommon, and no evidence of data fabrication was detected, but there was evidence of non‐random sampling. 
Continued efforts are necessary to ensure high integrity and trust in the evidence base of the field.
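The screening technique the study applies — recompute the p-value of each reported baseline difference from its means, SDs, and group sizes, then compare the share of very small p-values with the uniform distribution expected under true randomization — can be sketched as follows (a normal approximation stands in for an exact t-test, and the function names are invented):

```python
import math

def normal_p_value(m1, sd1, n1, m2, sd2, n2):
    """Two-sided p-value for a difference of two means (normal approximation)."""
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    z = abs(m1 - m2) / se
    # Phi(z) via the error function; two-sided tail probability
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def fraction_below(pvalues, threshold=0.001):
    """Observed share of very small p-values. Under true randomization
    p-values are Uniform(0, 1), so the expected share equals the threshold;
    a large excess is the study's signal of non-random sampling."""
    return sum(p < threshold for p in pvalues) / len(pvalues)
```

Applied across 1449 baseline variables, an observed 1.38% of p-values below .001 against an expected 0.10% is exactly the kind of excess this comparison surfaces.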
|
28
|
Cloud solutions for GxP laboratories: considerations for data storage. Bioanalysis 2021; 13:1313-1321. [PMID: 34515519 DOI: 10.4155/bio-2021-0137] [Indexed: 11/17/2022] Open
Abstract
Challenges for data storage during drug development have become increasingly complex as the pharmaceutical industry expands in an environment that requires on-demand availability of data and resources for users across the globe. As the efficiency and relatively low cost of cloud services have become increasingly attractive, hesitancy toward their use has decreased and there has been a significant shift toward real-world implementation. Within GxP laboratories, the considerations for cloud storage of data include data integrity and security, as well as access control and usage for users around the globe. In this review, challenges and considerations when using cloud storage options for laboratory-based GxP data are discussed and best practices are defined.
|
29
|
Strategies for the Identification and Prevention of Survey Fraud: Data Analysis of a Web-Based Survey. JMIR Cancer 2021; 7:e30730. [PMID: 34269685 PMCID: PMC8325077 DOI: 10.2196/30730] [Received: 05/26/2021] [Accepted: 06/09/2021] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND To assess the impact of COVID-19 on cancer survivors, we fielded a survey promoted via email and social media in winter 2020. Examination of the data showed suspicious patterns that warranted serious review. OBJECTIVE The aim of this paper is to review the methods used to identify and prevent fraudulent survey responses. METHODS As precautions, we included a Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA), a hidden question, and instructions for respondents to type a specific word. To identify likely fraudulent data, we defined a priori indicators that warranted elimination or suspicion. If a survey contained two or more suspicious indicators, the survey was eliminated. We examined differences between the retained and eliminated data sets. RESULTS Of the total responses (N=1977), nearly three-fourths (n=1408) were dropped and one-fourth (n=569) were retained after data quality checking. Comparisons of the two data sets showed statistically significant differences across almost all demographic characteristics. CONCLUSIONS Numerous precautions beyond the inclusion of a CAPTCHA are needed when fielding web-based surveys, particularly if a financial incentive is offered.
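The elimination rule — drop any response with two or more suspicious indicators — might be sketched like this; the indicator names, the check word "purple", and the 120-second threshold are hypothetical stand-ins for the paper's a priori list:

```python
def suspicious_indicators(resp):
    """Count fraud indicators for one survey response (dict of fields).
    Hypothetical indicators; the paper defines its own a priori set."""
    flags = 0
    flags += resp.get("honeypot_answered", False)        # hidden question filled in
    flags += resp.get("typed_word", "") != "purple"      # failed the type-this-word check
    flags += resp.get("duration_seconds", 9999) < 120    # implausibly fast completion
    flags += resp.get("duplicate_ip", False)             # repeated IP address
    return flags

def keep(resp):
    """Eliminate a response when two or more indicators fire."""
    return suspicious_indicators(resp) < 2
```

Requiring two independent flags rather than one reduces the chance of discarding a legitimate but unusual respondent.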
|
30
|
Blockchain Processing Technique Based on Multiple Hash Chains for Minimizing Integrity Errors of IoT Data in Cloud Environments. SENSORS 2021; 21:4679. [PMID: 34300418 PMCID: PMC8309535 DOI: 10.3390/s21144679] [Received: 06/01/2021] [Revised: 06/29/2021] [Accepted: 07/07/2021] [Indexed: 11/17/2022]
Abstract
As IoT (Internet of Things) devices are diversified in their fields of use (manufacturing, health, medical, energy, home, automobile, transportation, etc.), it is becoming important to analyze and process the data sent and received by IoT devices connected to the Internet. Data collected from IoT devices depends heavily on secure storage in databases located in cloud environments. However, storing data directly in a cloud database not only makes it difficult to control IoT data directly, but also cannot guarantee the integrity of IoT data, owing to hazards (errors and mishandling, security attacks, etc.) that can arise from natural disasters and management neglect. In this paper, we propose an optimized hash processing technique that enables hierarchical distributed processing with an n-bit-size blockchain to minimize the loss of data generated from IoT devices deployed in distributed cloud environments. The proposed technique minimizes IoT data integrity errors and strengthens the role of intermediate media acting as gateways by interactively authenticating n-bit blockchains against the n + 1 and n − 1 layers to validate the IoT data sent and received. In particular, the proposed technique ensures the reliability of IoT information by validating hash values of IoT data when storing index information of IoT data distributed across different locations in a blockchain, in order to maintain the integrity of the data. Furthermore, the proposed technique ensures the linkage of IoT data by allowing minimal errors in the collected IoT data while grouping their linkage information, thus optimizing the load balance after hash processing. In performance evaluation, the proposed technique reduced IoT data processing time by an average of 2.54 times. Blockchain generation time improved on average by 17.3% when linking IoT data.
The asymmetric storage efficiency of IoT data according to hash code length is improved by 6.9% on average over existing techniques. Asymmetric storage speed according to the hash code length of the IoT data block was shown to be 10.3% faster on average than existing techniques. Integrity accuracy of IoT data is improved by 18.3% on average over existing techniques.
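The basic mechanism — hash-linking each IoT data block to its predecessor so that tampering anywhere invalidates every later link — can be sketched with a single chain (the paper's scheme layers multiple n-bit chains across gateway layers; this toy shows only the linkage property):

```python
import hashlib

def h(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def make_chain(readings):
    """Build a hash-linked chain of IoT readings; each block commits to its
    predecessor, so tampering with any block breaks every later link."""
    chain, prev = [], "0" * 64
    for r in readings:
        block = {"data": r, "prev": prev}
        prev = h((r + prev).encode())
        block["hash"] = prev
        chain.append(block)
    return chain

def verify_chain(chain):
    """Recompute every link; any mismatch flags an integrity error."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != h((block["data"] + prev).encode()):
            return False
        prev = block["hash"]
    return True
```

Because each block's hash covers the previous hash, a verifier only needs the final digest to detect modification of any earlier reading.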
|
31
|
A Clustering Algorithm for Multi-Modal Heterogeneous Big Data With Abnormal Data. Front Neurorobot 2021; 15:680613. [PMID: 34194310 PMCID: PMC8236595 DOI: 10.3389/fnbot.2021.680613] [Received: 03/15/2021] [Accepted: 04/27/2021] [Indexed: 11/13/2022] Open
Abstract
Data abnormalities and missing data complicate traditional multi-modal heterogeneous big data clustering. To address this issue, this paper establishes a multi-view heterogeneous big data clustering algorithm based on improved K-means clustering. First, for big data involving heterogeneous sources, we propose an advanced K-means algorithm built on a multi-view heterogeneous framework, using multi-view data analysis to determine the similarity detection metrics. Then, a BP neural network is used to predict missing attribute values, completing the missing data and restoring the structure of the heterogeneous big data. Finally, we propose a data denoising algorithm to clean the abnormal data. Combining these methods, we construct a framework named BPK-means to resolve the problems of data abnormalities and missing data. The approach is evaluated through a rigorous performance study; both theoretical verification and experimental results show that the accuracy of the proposed method is greatly improved over the original algorithm.
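The pipeline — predict missing attribute values, then cluster — can be sketched as follows, with mean imputation standing in for the paper's BP-neural-network prediction step and plain Lloyd's k-means for the improved multi-view variant; all names are illustrative:

```python
def mean_impute(rows):
    """Fill missing attribute values (None) with the column mean --
    a simple stand-in for the paper's BP-network prediction step."""
    cols = list(zip(*rows))
    means = [sum(v for v in c if v is not None) / sum(v is not None for v in c)
             for c in cols]
    return [[v if v is not None else means[j] for j, v in enumerate(row)]
            for row in rows]

def kmeans(points, k, iters=20):
    """Plain k-means on numeric vectors (Lloyd's algorithm)."""
    centers = points[:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: sum((a - b) ** 2
                                                for a, b in zip(p, centers[i])))
            groups[j].append(p)
        centers = [[sum(c) / len(g) for c in zip(*g)] if g else centers[j]
                   for j, g in enumerate(groups)]
    return centers
```

Completing the data before clustering matters because distance-based assignment is undefined on records with missing coordinates.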
|
32
|
What the Coronavirus Disease 2019 (COVID-19) Pandemic Has Reinforced: The Need for Accurate Data. Clin Infect Dis 2021; 72:920-923. [PMID: 33146707 PMCID: PMC7665390 DOI: 10.1093/cid/ciaa1686] [Received: 10/12/2020] [Accepted: 10/28/2020] [Indexed: 11/14/2022] Open
Abstract
The COVID-19 pandemic has challenged the United States’ existing national public health informatics infrastructure. This report details the factors that have contributed to COVID-19 data inaccuracies and reporting delays and their effect on the modeling and monitoring of the COVID-19 pandemic.
|
33
|
HealthyBlock: Blockchain-Based IT Architecture for Electronic Medical Records Resilient to Connectivity Failures. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2020; 17:7132. [PMID: 33003452 PMCID: PMC7579627 DOI: 10.3390/ijerph17197132] [Received: 08/04/2020] [Revised: 09/07/2020] [Accepted: 09/16/2020] [Indexed: 11/28/2022]
Abstract
The current information systems for the registration and control of electronic medical records (EMR) present a series of problems in terms of the fragmentation, security, and privacy of medical information, since each health institution, laboratory, doctor, etc. has its own database and manages its own information, without the intervention of patients. This situation does not favor effective treatment and prevention of diseases for the population, due to potential information loss, misinformation, or data leaks related to a patient, which in turn may imply a direct risk for the individual and high public health costs for governments. One of the proposed solutions to this problem has been the creation of electronic medical record (EMR) systems using blockchain networks; however, most of them do not take into account the occurrence of connectivity failures, such as those found in various developing countries, which can lead to failures in the integrity of the system data. To address these problems, HealthyBlock is presented in this paper as an architecture based on blockchain networks, which proposes a unified electronic medical record system that considers different clinical providers, with resilience in data integrity during connectivity failure and with usability, security, and privacy characteristics. On the basis of the HealthyBlock architecture, a prototype was implemented for the care of patients in a network of hospitals. The results of the evaluation showed high efficiency in keeping the EMRs of patients unified, updated, and secure, regardless of the network clinical provider they consult.
|
34
|
Study of Subjective Data Integrity for Image Quality Data Sets with Consumer Camera Content. J Imaging 2020; 6:7. [PMID: 34460604 PMCID: PMC8321034 DOI: 10.3390/jimaging6030007] [Received: 01/22/2020] [Accepted: 02/12/2020] [Indexed: 11/27/2022] Open
Abstract
We need data sets of images and subjective scores to develop robust no reference (or blind) visual quality metrics for consumer applications. These applications have many uncontrolled variables because the camera creates the original media and the impairment simultaneously. We do not fully understand how this impacts the integrity of our subjective data. We put forward two new data sets of images from consumer cameras. The first data set, CCRIQ2, uses a strict experiment design, more suitable for camera performance evaluation. The second data set, VIME1, uses a loose experiment design that resembles the behavior of consumer photographers. We gather subjective scores through a subjective experiment with 24 participants using the Absolute Category Rating method. We make these two new data sets available royalty-free on the Consumer Digital Video Library. We also present their integrity analysis (proposing one new approach) and explore the possibility of combining CCRIQ2 with its legacy counterpart. We conclude that the loose experiment design yields unreliable data, despite adhering to international recommendations. This suggests that the classical subjective study design may not be suitable for studies using consumer content. Finally, we show that Hoßfeld–Schatz–Egger α failed to detect important differences between the two data sets.
|
35
|
Data integrity in regulated bioanalysis: a summary from the European Bioanalysis Forum Workshop in collaboration with the MHRA. Bioanalysis 2019; 11:1227-1231. [PMID: 31452404 DOI: 10.4155/bio-2019-0139] [Indexed: 11/17/2022] Open
Abstract
In this conference report, we summarize the main findings and messages from a workshop on 'Data Integrity'. The workshop was held at the 11th European Bioanalysis Forum (EBF) Open Symposium in Barcelona (21-23 November 2018), in collaboration with the Medicines and Healthcare products Regulatory Agency (MHRA), to provide insight into and understanding of regulatory data integrity expectations. The workshop highlighted the importance of engaging with software developers to address the gap between industry's data integrity needs and current system software capabilities. Delegates were also made aware of the importance of implementing additional procedural controls to mitigate the risk associated with using systems that do not fully meet data integrity requirements.
|
36
|
Abstract
The Japan Bioanalysis Forum Symposium was held on 12-14 February 2019 in Yokohama, Japan, in celebration of its 10th anniversary, bringing together over 370 participants from pharmaceutical industries, contractors, academia and regulatory authorities, both domestic and international. The 3-day symposium particularly aimed to foster collaboration with the scientists surrounding bioanalysts, in keeping with its theme, 'Open to the Public.' The symposium also included a broad range of pioneering programs, such as lectures by speakers from DMPK/metabolomics fields, discussions of future bioanalysis, and poster presentations by publicly recruited presenters as well as the regular ones we had organized. This report summarizes the major topics.
37.
De novo discovery of antibody drugs - great promise demands scrutiny. MAbs 2019; 11:809-811. [PMID: 31122133 PMCID: PMC6601558 DOI: 10.1080/19420862.2019.1622926]
Abstract
We live in an era of rapidly advancing computing capacity and algorithmic sophistication. "Big data" and "artificial intelligence" find progressively wider use in all spheres of human activity, including healthcare. A diverse array of computational technologies is being applied with increasing frequency to antibody drug research and development (R&D). Their successful applications are met with great interest due to the potential for accelerating and streamlining the antibody R&D process. While this excitement is very likely justified in the long term, it is less likely that the transition from first use to routine practice will escape the challenges that other new technologies experienced before they began to blossom. This transition typically requires many cycles of iterative learning that rely on deconstructing the technology to understand its pitfalls and define vectors for optimization. The study by Vasquez et al. identifies a key obstacle to such learning: the lack of transparency regarding methodology in computational antibody design reports, which has the potential to mislead community efforts.
38.
A Strongly Unforgeable Certificateless Signature Scheme and Its Application in IoT Environments. Sensors (Basel) 2019; 19:2692. [PMID: 31207962 PMCID: PMC6631681 DOI: 10.3390/s19122692]
Abstract
With the widespread application of the Internet of Things (IoT), ensuring communication security for IoT devices is of considerable importance. Since IoT data are vulnerable to eavesdropping, tampering, forgery, and other attacks during an open network transmission, the integrity and authenticity of data are fundamental security requirements in the IoT. A certificateless signature (CLS) is a viable solution for providing data integrity, data authenticity, and identity identification in resource-constrained IoT devices. Therefore, designing a secure and efficient CLS scheme for IoT environments has become one of the main objectives of IoT security research. However, the existing CLS schemes rarely focus on strong unforgeability and replay attacks. Herein, we design a novel CLS scheme to protect the integrity and authenticity of IoT data. In addition to satisfying the strong unforgeability requirement, the proposed scheme also resists public key replacement attacks, malicious-but-passive key-generation-centre attacks, and replay attacks. Compared with other related CLS schemes without random oracles, our CLS scheme has a shorter private key, stronger security, and lower communication and computational costs.
39.
[Problems with Laboratory Notebooks in Academia and How to Resolve Them]. Yakugaku Zasshi 2019; 139:887-890. [PMID: 31155531 DOI: 10.1248/yakushi.18-00193-3]
Abstract
There is currently a major effort to promote drug discovery in academia as a way to seed new drug development in the pharmaceutical industry. However, there are concerns in industry about the quality of drug candidates generated in academic institutions. These concerns encompass culture and perceptions with respect to intellectual property management, the process of product development, and the reliability of scientific data. Questions about data reliability underscore the particularly serious problem of mistrust in academic research. The author therefore looked into industry standards for quality assurance (QA) and arranged training workshops at Okayama University, led by lecturers involved in QA, on the appropriate methods for recording experimental notes. The outcomes are presented here.
40.
On the Security and Data Integrity of Low-Cost Sensor Networks for Air Quality Monitoring. Sensors (Basel) 2018; 18:4451. [PMID: 30558353 PMCID: PMC6308815 DOI: 10.3390/s18124451]
Abstract
The emerging connected, low-cost, and easy-to-use air quality monitoring systems have enabled a paradigm shift in the field of air pollution monitoring. These systems are increasingly being used by local governments and non-profit organizations to inform the public and to support decision making related to air quality. However, data integrity and system security are rarely considered during the design and deployment of such monitoring systems, and this neglect leaves tremendous room for undesired and damaging cyber intrusions. The collected measurement data, if polluted, could misinform the public and mislead policy makers. In this paper, we demonstrate such issues using a popular low-cost air quality monitoring system that provides an affordable and continuous air quality monitoring capability to broad communities; to protect the monitoring network under investigation, we denote the company of interest as a.com. Through a series of probing, we are able to identify multiple security vulnerabilities in the system, including unencrypted message communication, incompetent authentication mechanisms, and lack of data integrity verification. By exploiting these vulnerabilities, we are able to "impersonate" any victim sensor in the a.com system and pollute its data with fabricated readings. To the best of our knowledge, this is the first security analysis of low-cost and connected air quality monitoring systems. Our results highlight the urgent need to improve the security and data integrity design of these systems.
41.
A Randomized Watermarking Technique for Detecting Malicious Data Injection Attacks in Heterogeneous Wireless Sensor Networks for Internet of Things Applications. Sensors (Basel) 2018; 18:4346. [PMID: 30544877 PMCID: PMC6308818 DOI: 10.3390/s18124346]
Abstract
Using Internet of Things (IoT) applications has been a growing trend in the last few years. They have been deployed in several areas of life, including secure and sensitive sectors such as the military and health. In these sectors, sensory data is the main factor in any decision-making process, which introduces the need to ensure data integrity. Secure techniques are needed to detect any data injection attempt before catastrophic effects occur. Sensors have limited computational and power resources, and this limitation makes it challenging to design a mechanism that is both secure and energy-efficient. This work presents a Randomized Watermarking Filtering Scheme (RWFS) for IoT applications that provides en-route filtering to remove any injected data at an early stage of the communication. Filtering injected data is based on a watermark that is generated from the original data and embedded directly in random places throughout the packet's payload. The scheme uses homomorphic encryption techniques to conceal the report's measurements from any adversary. The advantage of homomorphic encryption is that it allows the data to be aggregated and, thus, decreases the packet's size. Our results show that the proposed scheme improves the security and energy consumption of the system, mitigating some of the limitations of existing work.
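The general embed-then-filter idea — a watermark derived from the data, scattered at key-seeded pseudorandom payload positions, and checked by forwarding nodes — can be sketched as follows. This toy uses an HMAC-derived 4-byte tag as the watermark; that choice, and the names below, are assumptions for illustration, not the authors' RWFS construction:

```python
import hashlib
import hmac
import random

TAG_LEN = 4  # watermark bytes per packet (toy size)

def embed_watermark(payload: bytes, key: bytes) -> bytes:
    """Scatter an HMAC-derived watermark into key-seeded pseudorandom
    positions of the payload (a sketch of the idea, not RWFS itself)."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()[:TAG_LEN]
    positions = sorted(random.Random(key).sample(range(len(payload) + 1), TAG_LEN))
    out = bytearray(payload)
    for k in range(TAG_LEN - 1, -1, -1):   # insert right-to-left so the
        out.insert(positions[k], tag[k])   # remaining positions stay valid
    return bytes(out)

def filter_packet(marked: bytes, key: bytes):
    """En-route filtering: pull the watermark back out, recompute it from
    the remaining payload, and drop the packet (None) on any mismatch."""
    n = len(marked) - TAG_LEN
    positions = sorted(random.Random(key).sample(range(n + 1), TAG_LEN))
    buf = bytearray(marked)
    # after embedding, tag[k] sits at index positions[k] + k; pop right-to-left
    tag = bytes(buf.pop(positions[k] + k) for k in range(TAG_LEN - 1, -1, -1))[::-1]
    payload = bytes(buf)
    expected = hmac.new(key, payload, hashlib.sha256).digest()[:TAG_LEN]
    return payload if hmac.compare_digest(tag, expected) else None
```

Any intermediate node holding the key can run `filter_packet` and discard injected reports early, which is the energy-saving point of en-route filtering.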
42.
Smart Contract-Based Review System for an IoT Data Marketplace. Sensors (Basel) 2018; 18:3577. [PMID: 30360413 PMCID: PMC6211088 DOI: 10.3390/s18103577]
Abstract
Internet of Things (IoT)-based devices, especially those used for home automation, contain their own sensors and generate many logs during operation. Enterprises producing IoT devices convert these log data into more useful data through secondary processing and thus require data from the device users. Recently, platforms for data sharing have been developed as demand for IoT data has increased. Several IoT data marketplaces are based on peer-to-peer (P2P) networks, and in this type of marketplace it is difficult for an enterprise to trust a data owner or the data they want to trade. Therefore, in this study, we propose a review system that can confirm the reputation of a data owner or of the data traded in a P2P data marketplace. Traditional server-client review systems have many drawbacks, such as security vulnerabilities or malicious behavior by the server administrator. The review system developed in this study, however, is based on Ethereum smart contracts; it therefore runs on the P2P network and is more resilient to network problems. Moreover, the integrity and immutability of registered reviews are assured by the blockchain public ledger. In addition, a certain amount of gas is required for every function processed as an Ethereum transaction; accordingly, we tested and analyzed the performance of our proposed model in terms of the gas required.
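Why a public ledger makes registered reviews tamper-evident can be shown with a toy hash chain. This is not the authors' Ethereum contract — just a minimal Python sketch of the underlying property that each entry commits to its predecessor's hash:

```python
import hashlib
import json

class ReviewLedger:
    """Toy append-only ledger: each review entry commits to the previous
    entry's hash, so editing any past review invalidates every later link."""
    def __init__(self):
        self.chain = []

    def add_review(self, reviewer, rating, text):
        prev = self.chain[-1]["hash"] if self.chain else "0" * 64
        body = {"reviewer": reviewer, "rating": rating, "text": text, "prev": prev}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.chain.append({**body, "hash": digest})

    def verify(self):
        prev = "0" * 64
        for entry in self.chain:
            body = {k: entry[k] for k in ("reviewer", "rating", "text", "prev")}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

On a real blockchain the chain is replicated across peers and extended by consensus, which is what removes the trusted-server-administrator weakness the abstract describes.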
43.
A Smartphone-Based Application Improves the Accuracy, Completeness, and Timeliness of Cattle Disease Reporting and Surveillance in Ethiopia. Front Vet Sci 2018; 5:2. [PMID: 29387688 PMCID: PMC5776010 DOI: 10.3389/fvets.2018.00002]
Abstract
Accurate disease reporting, ideally in near real time, is a prerequisite to detecting disease outbreaks and implementing appropriate measures for their control. This study compared the performance of the traditional paper-based approach to animal disease reporting in Ethiopia with one using an application running on smartphones. In the traditional approach, the total number of cases for each disease or syndrome was aggregated by animal species and reported to each administrative level at monthly intervals; with the smartphone application, demographic information and a detailed list of presenting signs, in addition to the putative disease diagnosis, were immediately available to all administrative levels via a Cloud-based server. While the smartphone-based approach resulted in much more timely reporting, there were delays due to limited connectivity; these ranged on average from 2 days (in well-connected areas) up to 13 days (in more rural locations). We outline the challenges that would likely be associated with any widespread rollout of a smartphone-based approach such as the one described in this study, but demonstrate that in the long run the approach offers significant benefits in terms of timeliness of disease reporting, improved data integrity, and greatly improved animal disease surveillance.
44.
Center for Drug Evaluation and Research Perspective on Quality in Clinical Trials. Ther Innov Regul Sci 2017; 51:416-418. [PMID: 30227045 DOI: 10.1177/2168479017701801]
Abstract
Clinical trial quality is essential to bringing effective treatments to patients as quickly as possible. Clinical trials that answer important questions, yield meaningful data, and protect trial participants can provide data that support both regulatory and clinical decision making. The Food and Drug Administration's (FDA's) Center for Drug Evaluation and Research (CDER) encourages stakeholders to improve clinical trial quality and efficiency. CDER believes that a systematic approach to clinical trial quality-one that builds in quality up front and focuses on the most critical aspects of study conduct-contributes to successful trials. Beyond FDA's regulatory requirements for clinical trial quality, CDER is an active participant in multiple efforts to advance clinical trial quality, including the addendum to ICH E6 (Good Clinical Practice) and the Clinical Trials Transformation Initiative project on quality-by-design for clinical trials. These efforts aim to move clinical drug development to a desired state that centers on efficient and agile clinical development programs that reliably produce high-quality data and adhere to important ethical standards.
45.
Unfaithful findings: identifying careless responding in addictions research. Addiction 2016; 111:955-6. [PMID: 26662631 DOI: 10.1111/add.13221]
46.
A Task-Centric Cooperative Sensing Scheme for Mobile Crowdsourcing Systems. Sensors (Basel) 2016; 16:746. [PMID: 27223288 PMCID: PMC4883437 DOI: 10.3390/s16050746]
Abstract
In a densely distributed mobile crowdsourcing system, data collected by neighboring participants often exhibit strong spatial correlations. By exploiting this property, one may employ a portion of the users as active participants and set the other users as idling ones without compromising the quality of sensing or the connectivity of the network. In this work, two participant selection questions are considered: (a) how to recruit an optimal number of users as active participants to guarantee that the overall sensing data integrity is kept above a preset threshold; and (b) how to recruit an optimal number of participants with some inaccurate data so that the fairness of selection and resource conservation can be achieved while maintaining sufficient sensing data integrity. For question (a), we propose a novel task-centric approach to explicitly exploit data correlation among participants. This subset selection problem is regarded as a constrained optimization problem and we propose an efficient polynomial time algorithm to solve it. For question (b), we formulate this set partitioning problem as a constrained min-max optimization problem. A solution using an improved version of the polynomial time algorithm is proposed based on (a). We validate these algorithms using a publicly available Intel-Berkeley lab sensing dataset and satisfactory performance is achieved.
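The flavor of question (a) — recruit few active participants while keeping coverage of spatially correlated positions — can be illustrated with a simple greedy heuristic. This sketch assumes a Euclidean correlation radius and is not the paper's polynomial-time algorithm:

```python
import math

def select_participants(positions, radius):
    """Greedy cover heuristic: keep recruiting the user whose correlation
    neighbourhood covers the most still-uncovered positions, until every
    position is within `radius` of some active participant."""
    def covers(i):
        xi, yi = positions[i]
        return {j for j, (xj, yj) in enumerate(positions)
                if math.hypot(xi - xj, yi - yj) <= radius}

    uncovered = set(range(len(positions)))
    active = []
    while uncovered:
        best = max(range(len(positions)),
                   key=lambda i: len(covers(i) & uncovered))
        active.append(best)
        uncovered -= covers(best)
    return active

# Two tight clusters: one active user per cluster suffices
active = select_participants([(0, 0), (1, 0), (10, 0), (11, 0)], radius=2.0)
```

The non-selected users idle without breaking sensing coverage, which is the resource-conservation motivation the abstract describes.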
47.
Why Patient Matching Is a Challenge: Research on Master Patient Index (MPI) Data Discrepancies in Key Identifying Fields. Perspect Health Inf Manag 2016; 13:1e. [PMID: 27134610 PMCID: PMC4832129]
Abstract
Patient identification matching problems are a major contributor to data integrity issues within electronic health records. These issues impede the improvement of healthcare quality through health information exchange and care coordination, and contribute to deaths resulting from medical errors. Despite best practices in the area of patient access and medical record management to avoid duplicating patient records, duplicate records continue to be a significant problem in healthcare. This study examined the underlying causes of duplicate records using a multisite data set of 398,939 patient records with confirmed duplicates and analyzed multiple reasons for data discrepancies between those record matches. The field that had the greatest proportion of mismatches (nondefault values) was the middle name, accounting for 58.30 percent of mismatches. The Social Security number was the second most frequent mismatch, occurring in 53.54 percent of the duplicate pairs. The majority of the mismatches in the name fields were the result of misspellings (53.14 percent in first name and 33.62 percent in last name) or swapped last name/first name, first name/middle name, or last name/middle name pairs. The use of more sophisticated technologies is critical to improving patient matching. However, no amount of advanced technology or increased data capture will completely eliminate human errors. Thus, the establishment of policies and procedures (such as standard naming conventions or search routines) for front-end and back-end staff to follow is foundational for the overall data integrity process. Training staff on standard policies and procedures will result in fewer duplicates created on the front end and more accurate duplicate record matching and merging on the back end. Furthermore, monitoring, analyzing trends, and identifying errors that occur are proactive ways to identify data integrity issues.
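The core measurement in this study — the share of confirmed-duplicate pairs that disagree on each identifying field — is straightforward to compute. A minimal sketch with hypothetical records (field names and sample values are assumptions, not the study's data set):

```python
from collections import Counter

FIELDS = ["first_name", "middle_name", "last_name", "ssn", "dob"]

def mismatch_rates(duplicate_pairs):
    """For each identifying field, the proportion of confirmed-duplicate
    pairs whose two records disagree on it (case-insensitive compare)."""
    hits = Counter()
    for a, b in duplicate_pairs:
        for f in FIELDS:
            va = str(a.get(f, "")).strip().lower()
            vb = str(b.get(f, "")).strip().lower()
            if va != vb:
                hits[f] += 1
    n = len(duplicate_pairs)
    return {f: hits[f] / n for f in FIELDS}

pairs = [  # hypothetical confirmed duplicates
    ({"first_name": "Jon", "middle_name": "A", "last_name": "Smith", "ssn": "111"},
     {"first_name": "John", "middle_name": "", "last_name": "Smith", "ssn": "111"}),
    ({"first_name": "Mary", "middle_name": "L", "last_name": "Diaz", "ssn": "222"},
     {"first_name": "Mary", "middle_name": "Lee", "last_name": "Diaz", "ssn": "223"}),
]
rates = mismatch_rates(pairs)
```

Ranking fields by these rates is what identified middle name and Social Security number as the leading discrepancy sources in the study.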
48.
Abstract
AIMS To identify network measures with relevance to disease spread in a network of movements derived from the Department of Conservation (DOC) translocation records from 1970 to mid-2014, and to identify conservation sites that should be prioritised for surveillance activities and improvements to data collection to make the best use of network analysis techniques in the future. METHODS Data included the source and destination of translocated specimens, the species and the dates the translocations were expected to occur. The data were used to construct a directed, non-weighted network in which a translocation event represented a tie in the network. Network density, in-degree (movements entering a node of interest) and out-degree (movements leaving a node of interest) and reciprocity were calculated. RESULTS The data analysed consisted of 692 unique translocations between 307 sites, with the majority (518; 73%) being for birds. The constructed network for bird, reptile and frog translocations comprised 260 nodes, with 34/260 (13%) having two-way movements and 47/260 (18%) non-reciprocal movements. The median degree score (sum of in- and out-degree) was two (min 0, max 36) with a mean of 3.5 in a right skewed distribution. Most sites acted as receivers or senders of consignments with only a few having both high in- and high out-degree, and thus had characteristics that made them sites of interest for surveillance activities. These included the National Wildlife Centre at Mount Bruce, Tiritiri Matangi Island and Te Kakahu (Chalky Island). CONCLUSIONS The presence of linking sites that join larger clusters within the network creates the potential for rapid disease spread if a pathogen were to be introduced. The important sites that supply or receive specimens for translocations are already well recognised by those performing translocations in New Zealand, and this paper provides further information by quantifying their role within the network.
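The degree and reciprocity measures used above are simple to compute from a movement list once duplicate movements are collapsed into unique directed ties. A sketch with made-up site names (not the DOC data):

```python
from collections import defaultdict

def degree_summary(translocations):
    """In-degree (arrivals), out-degree (departures) and total degree per
    site in a directed, non-weighted network: repeated movements between
    the same (source, destination) pair count as one tie."""
    ties = {(src, dst) for src, dst in translocations if src != dst}
    indeg, outdeg = defaultdict(int), defaultdict(int)
    for src, dst in ties:
        outdeg[src] += 1
        indeg[dst] += 1
    sites = {s for tie in ties for s in tie}
    return {s: {"in": indeg[s], "out": outdeg[s],
                "degree": indeg[s] + outdeg[s]} for s in sites}

def reciprocity(translocations):
    """Proportion of ties whose reverse tie also exists."""
    ties = {(s, d) for s, d in translocations if s != d}
    return sum((d, s) in ties for s, d in ties) / len(ties)
```

Sites with both high in- and high out-degree are the potential disease-spread hubs the authors flag for surveillance.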
49.
Secure Data Aggregation with Fully Homomorphic Encryption in Large-Scale Wireless Sensor Networks. Sensors (Basel) 2015; 15:15952-73. [PMID: 26151208 PMCID: PMC4541862 DOI: 10.3390/s150715952]
Abstract
With the rapid development of wireless communication, sensor, and information acquisition and processing technologies, sensor networks will ultimately have a profound influence on all aspects of people's lives. The battery resources of sensor nodes should be managed efficiently in order to prolong network lifetime in large-scale wireless sensor networks (LWSNs). Data aggregation is an important method of removing redundancy and unnecessary data transmission, and hence of cutting down the energy used in communication. As sensor nodes are deployed in hostile environments, the security of sensitive information, such as its confidentiality and integrity, should be considered. This paper proposes Fully homomorphic Encryption based Secure data Aggregation (FESA) in LWSNs, which can protect end-to-end data confidentiality and support arbitrary aggregation operations over encrypted data. In addition, by utilizing message authentication codes (MACs), the scheme can also verify data integrity during data aggregation and forwarding so that false data can be detected as early as possible. Although FHE increases the computational overhead due to its large public key size, simulation results show that the scheme is implementable in LWSNs and performs well. Compared with other protocols, the transmitted data and network overhead are reduced in our scheme.
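The aggregation-over-ciphertexts idea can be illustrated with the additively homomorphic Paillier scheme rather than FESA's full FHE: an aggregator multiplies ciphertexts without the key, and the sink decrypts the sum of the readings. This is a standard textbook construction, shown here with toy primes (real deployments need ~2048-bit primes, and messages must stay below n):

```python
import math
import random

def paillier_keygen(p=101, q=103):
    """Toy Paillier key material. With g = n + 1,
    L(g^lam mod n^2) = lam mod n, so mu is simply lam^-1 mod n."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)
    return n, lam, mu

def encrypt(n, m):
    """E(m) = (n+1)^m * r^n mod n^2, with random r coprime to n."""
    n2 = n * n
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(n, lam, mu, c):
    """D(c) = L(c^lam mod n^2) * mu mod n, where L(u) = (u - 1) // n."""
    n2 = n * n
    return (pow(c, lam, n2) - 1) // n * mu % n

# The aggregator multiplies ciphertexts; the sink decrypts the sum (21 + 21).
n, lam, mu = paillier_keygen()
total = decrypt(n, lam, mu, encrypt(n, 21) * encrypt(n, 21) % (n * n))
```

Paillier only supports addition; FESA's use of FHE is what extends this to arbitrary aggregation operations, at the cost of the key-size overhead the abstract notes.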
50.
Evaluating Source Data Verification as a Quality Control Measure in Clinical Trials. Ther Innov Regul Sci 2014; 48:671-680. [PMID: 30227471 DOI: 10.1177/2168479014554400]
Abstract
TransCelerate has developed a risk-based monitoring methodology that transforms clinical trial monitoring from a model rooted in source data verification (SDV) to a comprehensive approach leveraging cross-functional risk assessment, technology, and adaptive on-site, off-site, and central monitoring activities to ensure data quality and subject safety. Evidence suggests that monitoring methods that concentrate on what is critical for a study and a site may produce better outcomes than do conventional SDV-driven models. This article assesses the value of SDV in clinical trial monitoring via a literature review, a retrospective analysis of data from clinical trials, and an assessment of major and critical findings from TransCelerate member company internal audits. The results support the hypothesis that generalized SDV has limited value as a quality control measure and reinforce the value of other risk-based monitoring activities.