326
|
Efird JT. Goldilocks Rounding: Achieving Balance Between Accuracy and Parsimony in the Reporting of Relative Effect Estimates. Cancer Inform 2021; 20:1176935120985132. [PMID: 33456306 PMCID: PMC7791303 DOI: 10.1177/1176935120985132] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Received: 11/28/2020] [Accepted: 12/01/2020] [Indexed: 01/08/2023] Open
Abstract
Researchers often report a measure to several decimal places more than what is
sensible or realistic. Rounding involves replacing a number with a value of
lesser accuracy while minimizing the practical loss of validity. This practice
is generally acceptable to simplify data presentation and to facilitate the
communication and comparison of research results. Rounding also may reduce
spurious accuracy when the extraneous digits are not justified by the exactness
of the recording instrument or data collection procedure. However, substituting
a more explicit or simpler representation for an original measure may not be
practicable or acceptable if an adequate degree of accuracy is not retained. The
error introduced by rounding exact numbers may result in misleading conclusions
and misinterpretation of study findings. For example, rounding the upper
confidence interval for a relative effect estimate of 0.996 to 2 decimal places
may obscure the statistical significance of the result. When presenting the
findings of a study, authors need to be careful that they do not report numbers
that contain too few significant digits. Equally important, they should avoid
providing more significant figures than are warranted to convey the underlying
meaning of the result.
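The rounding pitfall described in this abstract is easy to demonstrate. The sketch below uses a hypothetical upper confidence limit of 0.996 for a relative effect estimate (the value cited above) and shows how rounding to 2 decimal places hides the fact that the interval excludes the null value of 1.0:

```python
# Illustration of the rounding pitfall: too few decimal places can hide the
# statistical significance of a relative effect estimate (e.g., a risk ratio).
upper_ci = 0.996          # hypothetical upper 95% CI bound for a risk ratio

exact_significant = upper_ci < 1.0    # True: the CI excludes the null value 1.0
rounded = round(upper_ci, 2)          # 1.0 after rounding to 2 decimal places
rounded_significant = rounded < 1.0   # False: significance is now obscured

print(exact_significant, rounded, rounded_significant)  # True 1.0 False
```

Reporting one extra decimal place (or the exact p-value) avoids the ambiguity.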
|
327
|
Zhao Y, Lesmes LA, Dorr M, Bex PJ, Lu ZL. Psychophysical Validation of a Novel Active Learning Approach for Measuring the Visual Acuity Behavioral Function. Transl Vis Sci Technol 2021; 10:1. [PMID: 33505768 PMCID: PMC7794273 DOI: 10.1167/tvst.10.1.1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 01/10/2020] [Accepted: 12/01/2020] [Indexed: 11/24/2022] Open
Abstract
Purpose To evaluate the performance of the quantitative visual acuity (qVA) method in measuring the visual acuity (VA) behavioral function. Methods We evaluated qVA performance in terms of the accuracy, precision, and efficiency of the estimated VA threshold and range in Monte Carlo simulations and a psychophysical experiment. We also compared the estimated VA threshold from the qVA method with that from the Electronic Early Treatment Diabetic Retinopathy Study (E-ETDRS) and Freiburg Visual Acuity Test (FrACT) methods. Four repeated measures with all three methods were conducted in four Bangerter foil conditions in 14 eyes. Results In both the simulations and the psychophysical experiment, the qVA method quantified the full acuity behavioral function with two psychometric parameters (VA threshold and VA range) with virtually no bias and with high precision and efficiency. There was a significant correlation between qVA estimates of VA threshold and range in the psychophysical experiment. In addition, qVA threshold estimates were highly correlated with those from the E-ETDRS and FrACT methods. Conclusions The qVA method can provide an accurate, precise, and efficient assessment of the full acuity behavioral function with both VA threshold and range. Translational Relevance The qVA method can accurately, precisely, and efficiently assess the full VA behavioral function. Further research will evaluate the potential value of these rich measures for both clinical research and patient care.
|
328
|
Pai MC, Yang CJ, Fan SY. Time Perception in Prodromal Alzheimer's Dementia and in Prodromal Dementia With Lewy Bodies. Front Psychiatry 2021; 12:728344. [PMID: 34690834 PMCID: PMC8529046 DOI: 10.3389/fpsyt.2021.728344] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/21/2021] [Accepted: 08/23/2021] [Indexed: 11/30/2022] Open
Abstract
Background: Time perception is a subjective experience or sense of time. Previous studies have shown that Alzheimer's dementia (AD) patients have time perception deficits compared to a cognitively unimpaired control group (CU). There are only a few studies on time perception in dementia with Lewy bodies (DLB) patients in comparison with CU and AD patients. Early intervention and prescription of the right medicine may delay the deterioration of AD and DLB; moreover, knowing how time perception differs between prodromal AD (prAD) and prodromal DLB (prDLB) might be helpful for future understanding of these two dementias. Therefore, the purpose of this study was to explore the difference in time perception performance between prAD and prDLB. Methods: We invited people diagnosed with prAD, prDLB, and CU to participate in this study. Tests of verbal estimation of time and time interval production were used to assess their time perception. We analyzed the average time estimation (ATE), absolute error score (ABS), coefficient of variance (CV), and subjective temporal unit (STU) within the three groups. Results: A total of 40 prAD, 30 prDLB, and 47 CU completed the study. In the verbal estimation test, the CV for prAD was higher than that for both prDLB and CU at the 9 s interval, and the CV for prAD was higher than that for CU at the 27 s interval. In the time interval production test, the subjective time units of prDLB were higher than those of prAD at the 10 s interval, while those of both prDLB and CU were higher than those of prAD at the 30 s interval. The percentage of subjects with STU < 1.0 s, indicating overestimation, was higher in prAD than in both prDLB and CU. Conclusion: Time perception of prAD patients showed imprecision and overestimation of time, while prDLB patients tended to underestimate time intervals. No significant difference was found in accuracy among the three groups. It is speculated that the clinical and pathological severity of the two prodromal dementia stages may differ, and that some patients have not yet had their time perception affected.
|
329
|
Haemmerli J, Davidovic A, Meling TR, Chavaz L, Schaller K, Bijlenga P. Evaluation of the precision of operative augmented reality compared to standard neuronavigation using a 3D-printed skull. Neurosurg Focus 2021; 50:E17. [PMID: 33386018 DOI: 10.3171/2020.10.focus20789] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Received: 08/31/2020] [Accepted: 10/22/2020] [Indexed: 11/06/2022]
Abstract
OBJECTIVE Augmented reality (AR) in cranial surgery allows direct projection of preregistered overlaid images in real time on the microscope surgical field. In this study, the authors aimed to compare the precision of AR-assisted navigation and standard pointer-based neuronavigation (NV) by using a 3D-printed skull in surgical conditions. METHODS A commercial standardized 3D-printed skull was scanned, fused, and referenced with an MR image and a CT scan of a patient with a 2 × 2-mm right frontal sinus defect. The defect was identified, registered, and integrated into NV. The target was physically marked on the 3D-printed skull, replicating the right frontal sinus defect. Twenty-six subjects participated, 25 of whom had no prior NV or AR experience and 1 of whom had little AR experience. The subjects were briefly trained in how to use NV, AR, and AR recalibration tools. Participants were asked to do the following: 1) "target the center of the defect in the 3D-printed skull with a navigation pointer, assisted only by NV orientation," and 2) "use the surgical microscope and AR to focus on the center of the projected object" under conventional surgical conditions. For the AR task, the number of recalibrations was recorded. Confidence regarding NV and AR precision was assessed prior to and after the experiment by using a 9-level Likert scale. RESULTS The median distance to target was statistically lower for AR than for NV (1 mm [Q1: 1 mm, Q3: 2 mm] vs 3 mm [Q1: 2 mm, Q3: 4 mm]; p < 0.001). In the AR task, the median number of recalibrations was 4 (Q1: 4, Q3: 4.75). The number of recalibrations was significantly correlated with the precision (Spearman rho: -0.71, p < 0.05). The trust assessment after performing the experiment scored a median of 8 for AR and 5.5 for NV (p < 0.01). CONCLUSIONS This study shows for the first time the superiority of AR over NV in terms of precision. AR is easy to use. The number of recalibrations performed using reference structures increases the precision of the navigation. Confidence regarding precision increases with experience.
|
330
|
Weiler K, Kleber K, Zielinsky S, Moritz A, Bauer N. Analytical performance and method comparison of a quantitative point-of-care immunoassay for measurement of bile acids in cats and dogs. J Vet Diagn Invest 2021; 33:35-46. [PMID: 33112211 PMCID: PMC7756073 DOI: 10.1177/1040638720968784] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/15/2022] Open
Abstract
Point-of-care analyzers (POCAs) for quantitative assessment of bile acids (BAs) are scarce in veterinary medicine. We evaluated the Fuji Dri-Chem Immuno AU10V analyzer and v-BA test kit (Fujifilm) for detection of feline and canine total serum BA concentration. Results were compared with a 5th-generation assay as reference method and a 3rd-generation assay, both run on a bench-top analyzer. Analytical performance was assessed at 3 different concentration ranges, and with interferences. For method comparison, samples of 60 healthy and diseased cats and 64 dogs were included. Linearity was demonstrated for a BA concentration up to 130 µmol/L in cats (r = 0.99) and 110 µmol/L in dogs (r = 0.99). The analyzer showed high precision near the lower limit of quantification of 2 µmol/L reported by the manufacturer. Intra- and inter-assay coefficients of variation were < 5% for both species and all concentrations. Interferences were observed for bilirubin (800 mg/L) and lipid (4 g/L). There was excellent correlation with the reference method for feline (rs = 0.98) and canine samples (rs = 0.97), with proportional biases of 6.7% and -1.3%, respectively. However, a large bias (44.1%) was noted when the POCA was compared to the 3rd-generation assay. Total observed error was less than total allowable error at the 3 concentrations. The POCA reliably detected feline and canine BA in clinically relevant concentrations.
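As an illustration of the method-comparison statistics reported above (Spearman correlation and proportional bias), here is a minimal sketch with invented bile acid concentrations; the functions and values are assumptions for illustration, not data or code from the study:

```python
# Method-comparison sketch: Spearman's rho and mean proportional bias between
# a point-of-care analyzer (POCA) and a reference method. Values are made up.
def ranks(xs):
    """Rank values from 1..n (no tie handling; fine for distinct data)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman's rho via the rank-difference formula (distinct values)."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

poca      = [2.1, 5.3, 11.8, 24.5, 52.0, 98.0]   # hypothetical BA, µmol/L
reference = [2.0, 5.0, 11.0, 23.0, 50.0, 95.0]

rho = spearman(poca, reference)
bias_pct = sum((p - r) / r for p, r in zip(poca, reference)) / len(poca) * 100
print(rho, round(bias_pct, 1))   # 1.0 5.3
```

With these invented numbers the two methods rank samples identically (rho = 1.0) while the POCA reads about 5% high on average, the kind of proportional bias the abstract reports.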
|
331
|
Anderson KW, Scott K, Karageorgos IL, Gallagher ES, Tayi VS, Butler M, Hudgens JW. Dataset from HDX-MS Studies of IgG1 Glycoforms and Their Interactions with the FcγRIa (CD64) Receptor. J Res Natl Inst Stand Technol 2021; vol:126010. [PMID: 36474595 PMCID: PMC9681196 DOI: 10.6028/jres.126.010] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Accepted: 06/04/2021] [Indexed: 05/17/2023]
Abstract
This document presents hydrogen-deuterium exchange mass spectrometry (HDX-MS) data from measurements of three purified IgG1 glycoform samples, predominantly G0F, G2F, and SAF, in isolation and in complexation with the high-affinity receptor, FcγRIa (CD64). The IgG1 antibody used in this study, aIL8hFc, is a murine-human chimeric IgG1, which inhibits IL-8 binding to human neutrophils.
|
332
|
Wylde Z, Bonduriansky R. A comparison of two methods for estimating measurement repeatability in morphometric studies. Ecol Evol 2021; 11:763-770. [PMID: 33520164 PMCID: PMC7820162 DOI: 10.1002/ece3.7032] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 05/13/2020] [Revised: 10/15/2020] [Accepted: 10/22/2020] [Indexed: 11/23/2022] Open
Abstract
Measurement repeatability is often reported in morphometric studies as an index of the contribution of measurement error to trait measurements. However, the common method of remeasuring a mounted specimen fails to capture some components of measurement error and could therefore yield inflated repeatability estimates. Remounting specimens between successive measurements is likely to provide more realistic estimates of repeatability, particularly for structures that are difficult to measure. Using measurements of 22 somatic and genitalic traits of the neriid fly Telostylinus angusticollis, we compared repeatability estimates obtained via remeasurement of a specimen that is mounted once (single-mounted method) versus remeasurement of a specimen that is remounted between measurements (remounted method). We also asked whether the difference in repeatability estimates obtained via the two methods depends on trait size, trait type (somatic vs. genitalic), sclerotization, or sex. Repeatability estimates obtained via the remounted method were lower than estimates obtained via the single-mounted method for each of the 22 traits, and the difference between estimates obtained via the two methods was generally greater for small structures (such as genitalic traits) than for large structures (such as legs and wings). However, the difference between estimates obtained via the two methods did not depend on trait type (genitalic or somatic), tissue type (soft or sclerotized) or sex. Remounting specimens between successive measurements can provide more accurate estimates of measurement repeatability than remeasuring from a single mount, especially for small structures that are difficult to measure.
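One common ANOVA-based repeatability index used in such morphometric studies can be sketched as follows; the trait measurements below are invented, chosen only so that the remounted data contain extra within-specimen variance:

```python
# Minimal sketch of measurement repeatability from a one-way ANOVA,
# R = s2_among / (s2_among + s2_within). Trait values are invented.
def repeatability(groups):
    """groups: list of lists, one inner list of repeated measures per specimen."""
    k = len(groups)
    n = len(groups[0])                      # assume equal group sizes
    grand = sum(sum(g) for g in groups) / (k * n)
    ms_within = sum(sum((x - sum(g) / n) ** 2 for x in g) for g in groups) / (k * (n - 1))
    ms_among = n * sum((sum(g) / n - grand) ** 2 for g in groups) / (k - 1)
    s2_among = (ms_among - ms_within) / n   # among-specimen variance component
    return s2_among / (s2_among + ms_within)

# Two repeated measurements per specimen; remounting typically adds
# within-specimen variance and therefore lowers R relative to a single mount.
single_mounted = [[10.0, 10.1], [12.0, 12.1], [14.0, 13.9], [11.0, 11.1]]
remounted      = [[10.0, 10.4], [12.0, 11.5], [14.0, 13.6], [11.0, 11.5]]

print(round(repeatability(single_mounted), 3), round(repeatability(remounted), 3))
```

With these made-up numbers the single-mount estimate (≈0.998) exceeds the remounted estimate (≈0.956), mirroring the direction of the effect reported above.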
|
333
|
Kim B, Seo MS, Park R. Analytical Performance Evaluation of Automated Coagulation Analyzer CP3000 for Routine and Special Coagulation Assays. Ann Clin Lab Sci 2021; 51:112-119. [PMID: 33653789] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 06/12/2023]
Abstract
The CP3000 coagulation analyzer is a high-throughput, fully automated coagulation analyzer. The objective of this study was to evaluate the analytical performance of the CP3000 coagulation system for general and special coagulation analyses. Quality control materials and patient samples were used to evaluate the analytical performance of the CP3000 coagulation system. Precision, carryover, linearity, comparability with the ACL-TOP 700 coagulation system, and verification of the reference range were evaluated according to Clinical and Laboratory Standards Institute guidelines. Within-run and between-run precisions were below 5% for both normal and abnormal ranges. There was no detectable carryover. The linearity of the antithrombin and fibrinogen assays was excellent. The comparability between the CP3000 and ACL-TOP 700 coagulation systems was acceptable except for activated partial thromboplastin time and thrombin time, due to differences in reagent composition. Reference ranges proposed by the manufacturer were verified to be acceptable. The CP3000 coagulation system is a reliable system that can be used to perform routine and special coagulation tests rapidly and accurately. Because of its small footprint as an additional advantage, the implementation of the CP3000 coagulation system can be efficient in hospital laboratories of various sizes.
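Precision acceptance criteria such as the CV < 5% threshold above reduce to a one-line calculation. A minimal sketch with hypothetical replicate values (not data from the study):

```python
# Within-run precision expressed as a coefficient of variation (CV%),
# the acceptance metric used in analyzer evaluations. Replicates are invented.
import statistics

def cv_percent(replicates):
    """CV% = sample SD / mean * 100 for one run of a control material."""
    return statistics.stdev(replicates) / statistics.mean(replicates) * 100

normal_control = [12.1, 12.3, 11.9, 12.0, 12.2]  # e.g. a clotting time in seconds
print(round(cv_percent(normal_control), 2))      # 1.31, well under the 5% limit
```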
|
334
|
Hsu YF, Hämäläinen JA. Both contextual regularity and selective attention affect the reduction of precision-weighted prediction errors but in distinct manners. Psychophysiology 2020; 58:e13753. [PMID: 33340115 DOI: 10.1111/psyp.13753] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 04/08/2020] [Revised: 12/02/2020] [Accepted: 12/02/2020] [Indexed: 10/22/2022]
Abstract
The predictive coding model of perception postulates that the primary objective of the brain is to infer the causes of sensory inputs by reducing prediction errors (i.e., the discrepancy between expected and actual information). Moreover, prediction errors are weighted by their precision (i.e., inverse variance), which quantifies the degree of certainty about the variables. There is accumulating evidence that the reduction of precision-weighted prediction errors can be affected by contextual regularity (as an external factor) and selective attention (as an internal factor). However, it is unclear whether the two factors function together or separately. Here we used electroencephalography (EEG) to examine the putative interaction of contextual regularity and selective attention on this reduction process. Participants were presented with pairs of regular and irregular quartets in attended and unattended conditions. We found that contextual regularity and selective attention independently modulated the N1/MMN, where the repetition effect was absent. On the P2, the two factors each interacted with the repetition effect without interacting with each other. The results showed that contextual regularity and selective attention likely affect the reduction of precision-weighted prediction errors in distinct manners. While contextual regularity fine-tunes our efficiency at reducing precision-weighted prediction errors, selective attention seems to modulate the reduction process following the Matthew effect of accumulated advantage.
|
335
|
Sibum HO. When is enough enough? Accurate measurement and the integrity of scientific research. Hist Sci 2020; 58:437-457. [PMID: 32715765 DOI: 10.1177/0073275320939696] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 06/11/2023]
Abstract
At a meeting of the Physical Society of London in 1925 participants expressed their concerns regarding a recent suggestion by the Australian physicist T. H. Laby for replicating the established value of the mechanical equivalent of heat. This rather controversial discussion about the value of redetermining this numerical fact brings to light different understandings of the moral economy of accuracy in scientific work; it signals a distinctive new stage in the historical understanding of accuracy and precision and the moral integrity in conducting research.
|
336
|
Abstract
Precision medicine has become the mainstay of modern therapeutics, especially for neoplastic disease, but this paradigm does not commonly prevail in dietary planning. Compelling evidence suggests that individual features, including the structure and function of the gut microbiota, contribute to harvesting and metabolizing energy from food, and thereby modulate the host metabolic phenotype and glucose homeostasis. Here, the concept of precision to dietary planning is highlighted by demonstrating the role of the microbiota in glucose intolerance in response to noncaloric artificial sweeteners, and by linking the microbiota and other host features to postprandial increases in blood glucose. These findings highlight the heterogeneity that exists among humans, which translates into divergent metabolic responses to similar food and warrants the adoption of next-generation sequencing technologies and advanced bioinformatics to revolutionize nutrition studies, laying the groundwork for an individually focused tailor-made practice.
|
337
|
Sharpe V, Weber K, Kuperberg GR. Impairments in Probabilistic Prediction and Bayesian Learning Can Explain Reduced Neural Semantic Priming in Schizophrenia. Schizophr Bull 2020; 46:1558-1566. [PMID: 32432697 PMCID: PMC7846190 DOI: 10.1093/schbul/sbaa069] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Indexed: 02/05/2023]
Abstract
It has been proposed that abnormalities in probabilistic prediction and dynamic belief updating explain the multiple features of schizophrenia. Here, we used electroencephalography (EEG) to ask whether these abnormalities can account for the well-established reduction in semantic priming observed in schizophrenia under nonautomatic conditions. We isolated predictive contributions to the neural semantic priming effect by manipulating the prime's predictive validity and minimizing retroactive semantic matching mechanisms. We additionally examined the link between prediction and learning using a Bayesian model that probed dynamic belief updating as participants adapted to the increase in predictive validity. We found that patients were less likely than healthy controls to use the prime to predictively facilitate semantic processing on the target, resulting in a reduced N400 effect. Moreover, the trial-by-trial output of our Bayesian computational model explained between-group differences in trial-by-trial N400 amplitudes as participants transitioned from conditions of lower to higher predictive validity. These findings suggest that, compared with healthy controls, people with schizophrenia are less able to mobilize predictive mechanisms to facilitate processing at the earliest stages of accessing the meanings of incoming words. This deficit may be linked to a failure to adapt to changes in the broader environment. This reciprocal relationship between impairments in probabilistic prediction and Bayesian learning/adaptation may drive a vicious cycle that maintains cognitive disturbances in schizophrenia.
|
338
|
Gardiner C, Coleman R, de Maat MPM, Dorgalaleh A, Echenagucia M, Gosselin RC, Ieko M, Kitchen S. International Council for Standardization in Haematology (ICSH) laboratory guidance for the evaluation of haemostasis analyser-reagent test systems. Part 1: Instrument-specific issues and commonly used coagulation screening tests. Int J Lab Hematol 2020; 43:169-183. [PMID: 33249720 DOI: 10.1111/ijlh.13411] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Received: 09/09/2020] [Revised: 10/12/2020] [Accepted: 11/06/2020] [Indexed: 12/01/2022]
Abstract
Before a new method is used for clinical testing, it is essential that it is evaluated for suitability for its intended purpose. This document gives guidance for the performance of the verification, validation and implementation processes required by regulatory and accreditation bodies. It covers the planning and execution of an evaluation of the commonly performed screening tests (prothrombin time, activated partial thromboplastin time, thrombin time and fibrinogen assay), and instrument-specific issues. Advice on selecting an appropriate haemostasis analyser, planning the evaluation, and assessing the reference interval, precision, accuracy, and comparability of a haemostasis test system is also given. A second companion document will cover specialist haemostasis testing.
|
339
|
Evaluation of the 3D Printing Accuracy of a Dental Model According to Its Internal Structure and Cross-Arch Plate Design: An In Vitro Study. Materials 2020; 13:ma13235433. [PMID: 33260676 PMCID: PMC7729473 DOI: 10.3390/ma13235433] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Received: 11/13/2020] [Revised: 11/25/2020] [Accepted: 11/26/2020] [Indexed: 11/16/2022]
Abstract
The amount of photopolymer material consumed during the three-dimensional (3D) printing of a dental model varies with the volume and internal structure of the modeling data. This study analyzed how the internal structure and the presence of a cross-arch plate influence the accuracy of a 3D printed dental model. The model was designed with a U-shaped arch and the palate removed (Group U) or a cross-arch plate attached to the palate area (Group P), and the internal structure was divided into five types. The trueness and precision were analyzed for accuracy comparisons of the 3D printed models. Two-way ANOVA of the trueness revealed that the accuracy was 135.2 ± 26.3 µm (mean ± SD) in Group U and 85.6 ± 13.1 µm in Group P. Regarding the internal structure, the accuracy was 143.1 ± 46.8 µm in the 1.5 mm-thick shell group, which improved to 111.1 ± 31.9 µm and 106.7 ± 26.3 µm in the roughly filled and fully filled models, respectively. The precision was 70.3 ± 19.1 µm in Group U and 65.0 ± 8.8 µm in Group P. The results of this study suggest that a cross-arch plate is necessary for the accurate production of a model using 3D printing regardless of its internal structure. In Group U, the error during the printing process was higher for the hollowed models.
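The trueness/precision distinction used in this accuracy analysis can be sketched numerically; the deviation values below are invented, not taken from the study:

```python
# "Trueness" = agreement of each printed model with the reference (CAD) design;
# "precision" = agreement among repeated prints of the same design.
# All deviation values (µm) are hypothetical.
import statistics

# RMS surface deviation of each printed model from the CAD reference (µm)
deviation_from_cad = [100.0, 120.0, 111.0, 130.0, 115.0]
# RMS deviation for every pair of repeated prints superimposed on each other
pairwise_deviation = [18.0, 22.0, 25.0, 20.0, 19.0, 23.0]

trueness = statistics.mean(deviation_from_cad)    # closeness to the design
precision = statistics.mean(pairwise_deviation)   # reproducibility of prints
print(round(trueness, 1), round(precision, 1))    # 115.2 21.2
```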
|
340
|
Chlipala EA, Butters M, Brous M, Fortin JS, Archuletta R, Copeland K, Bolon B. Impact of Preanalytical Factors During Histology Processing on Section Suitability for Digital Image Analysis. Toxicol Pathol 2020; 49:755-772. [PMID: 33251977 PMCID: PMC8091422 DOI: 10.1177/0192623320970534] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Indexed: 11/15/2022]
Abstract
Digital image analysis (DIA) is impacted by the quality of tissue staining. This study examined the influence of preanalytical variables (staining protocol design, reagent quality, section attributes, and instrumentation) on the performance of automated DIA software. Our hypotheses were that (1) staining intensity is impacted by subtle differences in protocol design, reagent quality, and section composition and that (2) identically programmed and loaded stainers will produce equivalent immunohistochemical (IHC) staining. We tested these propositions by using 1 hematoxylin and eosin stainer to process 13 formalin-fixed, paraffin-embedded (FFPE) mouse tissues and by using 3 identically programmed and loaded immunostainers to process 5 FFPE mouse tissues for 4 cell biomarkers. Digital images of stained sections acquired with a commercial whole slide scanner were analyzed by customizable algorithms incorporated into commercially available DIA software. Staining intensity as viewed qualitatively by an observer and/or quantitatively by DIA was affected by staining conditions and tissue attributes. Intra-run and inter-run IHC staining intensities were equivalent for each tissue when processed on a given stainer but varied measurably across stainers. Our data indicate that staining quality must be monitored for each method and stainer to ensure that preanalytical factors do not impact digital pathology data quality.
|
341
|
Pfau T, Reilly P. How low can we go? Influence of sample rate on equine pelvic displacement calculated from inertial sensor data. Equine Vet J 2020; 53:1075-1081. [PMID: 33113248 DOI: 10.1111/evj.13371] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Received: 07/02/2020] [Revised: 09/02/2020] [Accepted: 10/22/2020] [Indexed: 11/29/2022]
Abstract
BACKGROUND Low-cost sensor devices are often limited in terms of sample rate. Based on signal periodicity, the Nyquist theorem allows determining the minimum theoretical sample rate required to adequately capture cyclical events, such as pelvic movement in trotting horses. OBJECTIVES To quantify the magnitude of errors arising with reduced sample rates when capturing biological signals, using the example of pelvic time-displacement series and derived minima and maxima used to quantify movement asymmetry in lame horses. STUDY DESIGN Data comparison. METHODS Root mean square (RMS) errors are calculated between the 'reference' time-displacement series, captured with a validated inertial sensor at a 100 Hz sample rate, and down-sampled time series (8 Hz to 50 Hz). Accuracy and precision are determined for maxima and minima derived from the time-displacement series. RESULTS Average RMS errors are <2 mm at a 50 Hz sample rate, <4 mm at 40 Hz, <7 mm between 25 and 35 Hz, and increase to up to 20 mm at 20 Hz and below. Accuracy for maxima and minima is generally below 1 mm. Precision is 1 mm at a 50 Hz sample rate, 3 mm at 40 Hz, and ≥9 mm at 20 Hz and below. MAIN LIMITATIONS Only sample rate was investigated; no other sensor parameters were considered. CONCLUSIONS Sample-rate-related errors for inertial sensor derived time-displacement series of pelvic movement are <2 mm at 50 Hz, a rate that many low-cost loggers, smartphones or wireless sensors can sustain, hence rendering these devices valid options for quantifying parameters relevant for lameness examinations in horses.
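The down-sampling comparison described above can be reproduced in outline: decimate a periodic displacement trace, re-interpolate it onto the 100 Hz time base, and compute the RMS error. The amplitude and stride frequency below are assumptions for illustration, not values from the study:

```python
# Sketch of the sample-rate comparison: a sinusoidal "pelvic displacement"
# trace at a 100 Hz reference rate is decimated, linearly re-interpolated
# back onto the reference time base, and the RMS error computed.
import math

F_REF = 100       # reference sample rate, Hz
AMP_MM = 40.0     # hypothetical displacement amplitude, mm
F_STRIDE = 2.0    # hypothetical vertical-displacement frequency at trot, Hz

def signal(t):
    return AMP_MM * math.sin(2 * math.pi * F_STRIDE * t)

def rms_error(fs):
    """RMS difference between the 100 Hz trace and an fs Hz decimated copy."""
    t_ref = [i / F_REF for i in range(F_REF)]       # 1 s of reference samples
    t_low = [i / fs for i in range(fs + 1)]         # low-rate sample times
    y_low = [signal(t) for t in t_low]
    err2 = 0.0
    for t in t_ref:
        # linear interpolation of the low-rate samples at the reference times
        j = min(int(t * fs), fs - 1)
        w = (t - t_low[j]) * fs
        y = y_low[j] * (1 - w) + y_low[j + 1] * w
        err2 += (y - signal(t)) ** 2
    return math.sqrt(err2 / len(t_ref))

for fs in (50, 25, 10):
    print(fs, round(rms_error(fs), 2))
```

As the Nyquist argument predicts, the RMS error grows as the sample rate drops.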
|
342
|
Spake R, Mori AS, Beckmann M, Martin PA, Christie AP, Duguid MC, Doncaster CP. Implications of scale dependence for cross-study syntheses of biodiversity differences. Ecol Lett 2020; 24:374-390. [PMID: 33216440 DOI: 10.1111/ele.13641] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Received: 06/29/2020] [Revised: 07/30/2020] [Accepted: 10/19/2020] [Indexed: 12/01/2022]
Abstract
Biodiversity studies are sensitive to well-recognised temporal and spatial scale dependencies. Cross-study syntheses may inflate these influences by collating studies that vary widely in the numbers and sizes of sampling plots. Here we evaluate sources of inaccuracy and imprecision in study-level and cross-study estimates of biodiversity differences, caused by within-study grain and sample sizes, biodiversity measure, and choice of effect-size metric. Samples from simulated communities of old-growth and secondary forests demonstrated influences of all these parameters on the accuracy and precision of cross-study effect sizes. In cross-study synthesis by formal meta-analysis, the metric of log response ratio applied to measures of species richness yielded better accuracy than the commonly used Hedges' g metric on species density, which dangerously combined higher precision with persistent bias. Full-data analyses of the raw plot-scale data using multilevel models were also susceptible to scale-dependent bias. We demonstrate the challenge of detecting scale dependence in cross-study synthesis, due to ubiquitous covariation between replication, variance and plot size. We propose solutions for diagnosing and minimising bias. We urge that empirical studies publish raw data to allow evaluation of covariation in cross-study syntheses, and we recommend against using Hedges' g in biodiversity meta-analyses.
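The two effect-size metrics contrasted in this abstract are straightforward to compute. A minimal sketch on invented old-growth vs. secondary-forest richness values (the means, SDs and sample sizes are hypothetical):

```python
# Log response ratio (lnRR) and Hedges' g (standardized mean difference
# with small-sample correction), the metrics contrasted in the abstract.
import math

def ln_rr(m1, m2):
    """Log response ratio of two group means."""
    return math.log(m1 / m2)

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: Cohen's d scaled by the small-sample correction factor J."""
    s_pool = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pool
    j = 1 - 3 / (4 * (n1 + n2) - 9)    # correction for small samples
    return d * j

# Hypothetical species richness per plot: old-growth vs. secondary forest
m_old, sd_old, n_old = 30.0, 6.0, 10
m_sec, sd_sec, n_sec = 24.0, 6.0, 10

print(round(ln_rr(m_old, m_sec), 3))                                   # 0.223
print(round(hedges_g(m_old, sd_old, n_old, m_sec, sd_sec, n_sec), 3))  # 0.958
```

Note that lnRR depends only on the ratio of means, while Hedges' g also depends on the within-group variances and sample sizes, which is one reason the two metrics behave differently under the scale dependence discussed above.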
|
343
|
Boyacioglu R, Wang C, Ma D, McGivney DF, Yu X, Griswold MA. 3D magnetic resonance fingerprinting with quadratic RF phase. Magn Reson Med 2020; 85:2084-2094. [PMID: 33179822 DOI: 10.1002/mrm.28581] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Received: 04/30/2020] [Revised: 09/25/2020] [Accepted: 10/12/2020] [Indexed: 12/26/2022]
Abstract
PURPOSE To implement 3D magnetic resonance fingerprinting (MRF) with quadratic RF phase (qRF-MRF) for simultaneous quantification of T1, T2, ΔB0, and T2*. METHODS 3D MRF data with an effective undersampling factor of 3 in the slice direction were acquired with quadratic RF phase patterns for T1, T2, and T2* sensitivity. Quadratic RF phase encodes off-resonance by modulating the on-resonance frequency linearly in time. The transition to 3D brings practical limitations for reconstruction and dictionary matching because of increased data and dictionary sizes. Randomized singular value decomposition (rSVD)-based compression in time and a reduction in dictionary size with a quadratic interpolation method are combined to process prohibitively large data sets in feasible reconstruction and matching times. RESULTS The accuracy of 3D qRF-MRF maps at various resolutions and orientations was compared to 3D fast imaging with steady-state precession (FISP) for T1 and T2 contrast and to 2D qRF-MRF for T2* contrast and ΔB0. The precision of 3D qRF-MRF was 1.5-2 times higher than that of routine clinical scans. 3D qRF-MRF ΔB0 maps were further processed to highlight susceptibility contrast. CONCLUSION Natively co-registered 3D whole-brain T1, T2, T2*, ΔB0, and QSM maps can be acquired in as little as 5 min with 3D qRF-MRF.
|
344
|
Identification of New Genetic Clusters in Glioblastoma Multiforme: EGFR Status and ADD3 Losses Influence Prognosis. Cells 2020; 9:cells9112429. [PMID: 33172155 PMCID: PMC7694764 DOI: 10.3390/cells9112429] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Received: 10/05/2020] [Revised: 10/30/2020] [Accepted: 11/03/2020] [Indexed: 12/12/2022]
Abstract
Glioblastoma multiforme (GB) is one of the most aggressive tumors. Despite continuous efforts to improve its clinical management, there is still no strategy to avoid a rapid and fatal outcome. EGFR amplification is the most characteristic alteration of these tumors. Although no effective therapy against it has yet been found in GB, it may be central to classifying patients. We investigated somatic copy-number alterations (SCNA) by multiplex ligation-dependent probe amplification in a series of 137 GB, together with the detection of EGFRvIII and FISH analysis for EGFR amplification. Publicly available data from 604 patients were used as a validation cohort. We found statistical associations between EGFR amplification and/or EGFRvIII and SCNA in CDKN2A, MSH6, MTAP, and ADD3. Interestingly, we found that both EGFRvIII and losses in ADD3 were independent markers of poor prognosis (p = 0.028 and 0.014, respectively). Finally, we derived an unsupervised hierarchical classification that differentiated three clusters of patients based on their genetic alterations. It offers a landscape of EGFR co-alterations that may improve understanding of the mechanisms underlying GB aggressiveness. Our findings can help define distinct genetic profiles, which is necessary for developing new approaches to the management of these patients.
|
345
|
Patil SM, Nguyen J, Keire DA, Chen K. Sedimentation Velocity Analytical Ultracentrifugation Analysis of Marketed Rituximab Drug Product Size Distribution. Pharm Res 2020; 37:238. [PMID: 33155155 DOI: 10.1007/s11095-020-02961-2] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 07/17/2020] [Accepted: 10/22/2020] [Indexed: 10/23/2022]
Abstract
PURPOSE Analytical methods suitable for intact drug products are often necessary to evaluate the equivalence in physicochemical properties between two drug products (DP) containing the same drug substance (DS), e.g., an innovator biologic drug and its proposed biosimilar. Analytical Ultracentrifugation (AUC) is a biophysical technique applied to the analysis of the size and shape of biomolecules. However, the application of AUC to a formulated monoclonal antibody (mAb) DP at high concentration has not been reported. METHODS A sedimentation velocity (SV) AUC procedure with a short-pathlength centerpiece was applied to two marketed rituximab DPs, Rituxan® (US) and Reditux® (India), without any buffer exchange or dilution. A detailed precision analysis was performed. RESULTS Highly reproducible sedimentation coefficient values (S) and peak areas were obtained for the dominant (>84%) monomeric rituximab peak. The minor mAb fragment peaks showed large variation in both S values and peak areas (3-12%). The identification of oligomer peaks was reproducible only when their abundance was higher than 2%. CONCLUSIONS SV-AUC provides an orthogonal characterization tool for protein size distribution, composition, and assay, which could be informative for biosimilar developers who often have access only to the formulated mAb. However, AUC requires thorough validation of its accuracy, precision, and sensitivity.
|
346
|
Lee JH, Kim S, Jun S, Seo JD, Nam Y, Song SH, Lee K, Song J. Analytical performance evaluation of the Norudia HbA1c assay. J Clin Lab Anal 2020; 34:e23504. [PMID: 33463769 PMCID: PMC7676213 DOI: 10.1002/jcla.23504] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 06/08/2020] [Revised: 06/22/2020] [Accepted: 07/08/2020] [Indexed: 12/21/2022]
Abstract
BACKGROUND Hemoglobin A1c (HbA1c) is arguably the most important biomarker used in the diagnosis and treatment monitoring of diabetes mellitus. We evaluated the analytical performance of the Norudia HbA1c assay (Sekisui Medical Co., Ltd.), which uses an enzymatic method incorporated into a fully automated, high-throughput system. METHODS The precision, linearity, and carryover of the Norudia HbA1c assay were evaluated. Using 60 patient samples, comparative analysis of HbA1c measurements against two commonly used HbA1c assays, the D100 (Bio-Rad Laboratories, Inc) and HLC-723 G11 (Tosoh), was performed. Thirteen commutable samples with known HbA1c concentrations measured using an IFCC reference measurement procedure were used to compare accuracy between methods. Interference of HbA1c measurement by Hb variants was evaluated using 16 known Hb variant samples. RESULTS Repeatability (%CV) for low and high concentrations ranged from 1.12%-1.50% and 0.66%-0.75%, respectively, and within-laboratory precision for low and high concentrations ranged from 1.73%-2.89% and 0.98%-1.64%, respectively. For linearity, the coefficient of determination was 0.9987. No significant carryover was observed. When compared to the D100 and HLC-723 G11 assays, the Norudia HbA1c assay showed the best accuracy, with the lowest mean bias (-1.72%). Furthermore, the Norudia assay was least affected by Hb variants and gave the most reliable HbA1c measurements. CONCLUSION The Norudia HbA1c assay showed excellent analytical performance with good precision and linearity and minimal carryover. When compared to other routine HbA1c methods, it showed the highest accuracy and was least affected by Hb variants.
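The repeatability and within-laboratory precision figures quoted above are percent coefficients of variation (%CV) over replicate measurements. A minimal generic sketch of the calculation (not the evaluators' actual protocol):

```python
import statistics

def percent_cv(replicates):
    # %CV = 100 * sample SD / mean over repeated measurements of one sample
    return 100 * statistics.stdev(replicates) / statistics.mean(replicates)
```

For example, `percent_cv` applied to twenty repeated runs of a low-concentration control would give a repeatability estimate comparable to the 1.12%-1.50% range reported.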
|
347
|
New Method and Portable Measurement Device for the Calibration of Industrial Robots. Sensors 2020; 20:s20205919. [PMID: 33092133 PMCID: PMC7589031 DOI: 10.3390/s20205919] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Received: 07/20/2020] [Revised: 10/12/2020] [Accepted: 10/14/2020] [Indexed: 11/17/2022]
Abstract
This paper presents an automated calibration method for industrial robots, based on the use of (1) a novel, low-cost, wireless, 3D measuring device mounted on the robot end-effector and (2) a portable 3D ball artifact fixed with respect to the robot base. The new device, called TriCal, is essentially a fixture holding three digital indicators (plunger style), the axes of which are orthogonal and intersect at one point, considered to be the robot tool center point (TCP). The artifact contains four 1-inch datum balls, each mounted on a stem, with precisely known relative positions measured on a Coordinate Measuring Machine (CMM). The measurement procedure with the TriCal is fully automated and consists of the robot moving its end-effector in such a way as to perfectly align its TCP with the center of each of the four datum balls, with multiple end-effector orientations. The calibration method and hardware were tested on a six-axis industrial robot (KUKA KR6 R700 sixx). The calibration model included all kinematic and joint stiffness parameters, which were identified using the least-squares method. The efficiency of the new calibration system was validated by measuring the accuracy of the robot after calibration in 500 nearly random end-effector poses using a laser tracker. The same validation was performed after the robot was calibrated using measurements from the laser tracker only. Results show that both measurement methods lead to similar accuracy improvements, with the TriCal yielding maximum position errors of 0.624 mm and mean position errors of 0.326 mm.
|
348
|
Hendriks SJ, Phyn CVC, Huzzey JM, Mueller KR, Turner SA, Donaghy DJ, Roche JR. Graduate Student Literature Review: Evaluating the appropriate use of wearable accelerometers in research to monitor lying behaviors of dairy cows. J Dairy Sci 2020; 103:12140-12157. [PMID: 33069407 DOI: 10.3168/jds.2019-17887] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Received: 11/10/2019] [Accepted: 07/01/2020] [Indexed: 12/19/2022]
Abstract
Until recently, animal behavior has been studied through close and extensive observation of individual animals and has relied on subjective assessments. Wearable technologies that allow the automation of dairy cow behavior recording currently dominate the precision dairy technology market. Wearable accelerometers provide new opportunities in animal ethology using quantitative measures of dairy cow behavior. Recent research developments indicate that quantitative measures of behavior may provide new objective on-farm measures to assist producers in predicting, diagnosing, and managing disease or injury on farms and allowing producers to monitor cow comfort and estrus behavior. These recent research developments and a large increase in the availability of wearable accelerometers have led to growing interest of both researchers and producers in this technology. This review aimed to summarize the studies that have validated lying behavior derived from accelerometers and to describe the factors that should be considered when using leg-attached accelerometers and neck-worn collars to describe lying behavior (e.g., lying time and lying bouts) in dairy cows for research purposes. Specifically, we describe accelerometer technology, including the instrument properties and methods for recording motion; the raw data output from accelerometers; and methods developed for the transformation of raw data into meaningful and interpretable information. We highlight differences in validation study outcomes for researchers to consider when developing their own experimental methodology for the use of accelerometers to record lying behaviors in dairy cows. Finally, we discuss several factors that may influence the data recorded by accelerometers and highlight gaps in the literature. 
We conclude that researchers using accelerometers to record lying behaviors in dairy cattle should (1) select an accelerometer device that, based on device attachment and sampling rate, is appropriate to record the behavior of interest; (2) account for cow-, farm-, and management-related factors that could affect the lying behaviors recorded; (3) determine the appropriate editing criteria for the accurate interpretation of their data; (4) support their chosen method of recording, editing, and interpreting the data by referencing an appropriately designed and accurate validation study published in the literature; and (5) report, in detail, their methodology to ensure others can decipher how the data were captured and understand potential limitations of their methodology. We recommend that standardized protocols be developed for collecting, analyzing, and reporting lying behavior data recorded using wearable accelerometers for dairy cattle.
|
349
|
Parmenter BH, Dymock M, Banerjee T, Sebastian A, Slater GJ, Frassetto LA. Performance of Predictive Equations and Biochemical Measures Quantifying Net Endogenous Acid Production and the Potential Renal Acid Load. Kidney Int Rep 2020; 5:1738-1745. [PMID: 33102966 PMCID: PMC7569692 DOI: 10.1016/j.ekir.2020.07.026] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Received: 11/28/2019] [Revised: 06/30/2020] [Accepted: 07/21/2020] [Indexed: 11/28/2022]
Abstract
Introduction A limited number of studies have assessed the accuracy and precision of methods for determining net endogenous acid production (NEAP) and its components. We aimed to investigate the performance of methods quantifying the diet-dependent acid–base load. Methods Data from metabolic balance studies enabled calculation of NEAP according to biochemical measures (net acid excretion [NAE], urinary net endogenous acid production [UNEAP], and urinary potential renal acid load [UPRAL]) as well as estimative diet equations (by Frassetto et al., Remer and Manz, Sebastian et al., and Lemann), which were compared among themselves in healthy participants fed acid-forming and base-forming diets for 6 days each. Results Seventeen participants (mean ± SD age, 60 ± 8 years; body mass index, 23 ± 2 kg/m2) provided 102 twenty-four-hour urine samples for analysis (NAE, 39 ± 38 mEq/d [range, −9 to 95 mEq/d]). Bland-Altman analysis comparing UNEAP to NAE showed good accuracy (bias, −2 mEq/d [95% confidence interval {CI}, −8 to 3]) and modest precision (limits of agreement, −32 to 28 mEq/d). Accurate diet equations included potential renal acid load (PRAL) by Sebastian et al. (bias, −4 mEq/d [95% CI, −8 to 0]) as well as NEAP by Lemann et al. (bias, 4 mEq/d [95% CI, −1 to 9]) and Remer and Manz (bias, −1 mEq/d [95% CI, −6 to 3]). Conclusions Researchers are encouraged to collect measures of UPRAL and UNEAP; however, investigators drawing conclusions between the diet-dependent acid–base load and human health should consider the limitations within all methods.
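The Bland-Altman quantities reported in this abstract (bias and limits of agreement) follow directly from the per-subject differences between two methods. A minimal generic sketch, not the authors' analysis code:

```python
import statistics

def bland_altman(method_a, method_b):
    # Paired differences between two measurement methods on the same subjects
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    # 95% limits of agreement: bias +/- 1.96 * SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Applied to paired UNEAP and NAE values, `bias` corresponds to the −2 mEq/d accuracy figure and the two limits to the −32 to 28 mEq/d precision range quoted above.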
|
350
|
McFarlane S, Manseau M, Steenweg R, Hervieux D, Hegel T, Slater S, Wilson PJ. An assessment of sampling designs using SCR analyses to estimate abundance of boreal caribou. Ecol Evol 2020; 10:11631-11642. [PMID: 33144989 PMCID: PMC7593142 DOI: 10.1002/ece3.6797] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Received: 05/14/2020] [Revised: 08/06/2020] [Accepted: 08/20/2020] [Indexed: 11/12/2022]
Abstract
Accurately estimating abundance is a critical component of monitoring and recovery of rare and elusive species. Spatial capture-recapture (SCR) models are an increasingly popular method for robust estimation of ecological parameters. We provide an analytical framework to assess results from empirical studies to inform SCR sampling design, using both simulated and empirical data from noninvasive genetic sampling of seven boreal caribou populations (Rangifer tarandus caribou), which varied in range size and estimated population density. We use simulated population data with varying levels of clustered distributions to quantify the impact of nonindependence of detections on density estimates, and empirical datasets to explore the influence of varied sampling intensity on the relative bias and precision of density estimates. Simulations revealed that clustered distributions of detections did not significantly impact relative bias or precision of density estimates. The genotyping success rate of our empirical dataset (n = 7,210 samples) was 95.1%, and 1,755 unique individuals were identified. Analysis of the empirical data indicated that reduced sampling intensity had a greater impact on density estimates in smaller ranges. The number of captures and spatial recaptures was strongly correlated with precision, but not absolute relative bias. The best sampling designs did not differ with estimated population density but differed between large and small ranges. We provide an efficient framework implemented in R to estimate the detection parameters required when designing SCR studies. The framework can be used when designing a monitoring program to minimize effort and cost while maximizing effectiveness, which is critical for informing wildlife management and conservation.
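SCR models link detections to the distance between an individual's activity centre and a detector through a detection function. The half-normal form below is the common default in SCR software; it is shown as an assumption, since the abstract does not name the function the authors used:

```python
import math

def halfnormal_detection(d, p0, sigma):
    # Probability of detecting an individual whose activity centre lies at
    # distance d from a detector; p0 is the baseline detection probability
    # at d = 0, and sigma sets the spatial scale of movement
    return p0 * math.exp(-d**2 / (2 * sigma**2))
```

The precision of the resulting density estimates depends on the numbers of captures and spatial recaptures, which this function governs, consistent with the correlation the authors report.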
|