1
|
Error Analysis of Normal Surface Measurements Based on Multiple Laser Displacement Sensors. Sensors (Basel) 2024; 24:2059. [PMID: 38610270 PMCID: PMC11014111 DOI: 10.3390/s24072059] [Received: 02/07/2024] [Revised: 03/15/2024] [Accepted: 03/18/2024] [Indexed: 04/14/2024]
Abstract
The robotic drilling of assembly holes is a crucial process in aerospace manufacturing, in which measuring the normal of the workpiece surface is a key step to guide the robot into the correct pose and guarantee the perpendicularity of the hole axis. Multiple laser displacement sensors can satisfy the portable, on-site measurement requirements, but accurate error analysis and layout design are still lacking. In this paper, a simplified parametric method is proposed for multi-sensor normal measurement devices with a symmetrical layout, using three parameters: the sensor number, the laser beam slant angle, and the laser spot distribution radius. A simulation method for the normal measurement error distribution that accounts for random sensor errors is proposed. The measurement error distributions for different sensor numbers, laser beam slant angles, and laser spot distribution radii are shown to form a pyramid-like region. The factors influencing normal measurement accuracy, such as sensor accuracy, quantity, and installation position, are analyzed by simulation and verified experimentally on a five-axis precision machine tool. The results show that increasing the laser beam slant angle and the laser spot distribution radius significantly reduces the normal measurement errors. With a laser beam slant angle ≥15° and a laser spot distribution radius ≥19 mm, the normal measurement error falls below 0.05°, ensuring normal accuracy in robotic drilling.
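The core geometric step the paper analyzes can be sketched in a few lines: with three laser spots measured on the workpiece surface, the normal is the normalized cross product of two in-plane vectors, and the measurement error is the angle between the estimated and true normals. This is a minimal illustration with hypothetical spot coordinates, not the authors' parametric model:

```python
import math

def surface_normal(p1, p2, p3):
    """Unit normal of the plane through three non-collinear laser spots."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = math.sqrt(sum(c * c for c in n))
    return [c / norm for c in n]

def angular_error_deg(n_est, n_true):
    """Angle between estimated and true normals, in degrees (sign-agnostic)."""
    dot = sum(a * b for a, b in zip(n_est, n_true))
    return math.degrees(math.acos(max(-1.0, min(1.0, abs(dot)))))
```

Perturbing the spot coordinates with random sensor errors and repeating this computation is the essence of the error distribution simulation described above.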
|
2
|
Interpreting Randomized Controlled Trials. Cancers (Basel) 2023; 15:4674. [PMID: 37835368 PMCID: PMC10571666 DOI: 10.3390/cancers15194674] [Received: 08/30/2023] [Revised: 09/19/2023] [Accepted: 09/19/2023] [Indexed: 10/15/2023]
Abstract
This article describes rationales and limitations for making inferences based on data from randomized controlled trials (RCTs). We argue that obtaining a representative random sample from a patient population is impossible for a clinical trial because patients are accrued sequentially over time and thus comprise a convenience sample, subject only to protocol entry criteria. Consequently, the trial's sample is unlikely to represent a definable patient population. We use causal diagrams to illustrate the difference between random allocation of interventions within a clinical trial sample and true simple or stratified random sampling, as executed in surveys. We argue that group-specific statistics, such as a median survival time estimate for a treatment arm in an RCT, have limited meaning as estimates of larger patient population parameters. In contrast, random allocation between interventions facilitates comparative causal inferences about between-treatment effects, such as hazard ratios or differences between probabilities of response. Comparative inferences also require the assumption of transportability from a clinical trial's convenience sample to a targeted patient population. We focus on the consequences and limitations of randomization procedures in order to clarify the distinctions between pairs of complementary concepts of fundamental importance to data science and RCT interpretation. These include internal and external validity, generalizability and transportability, uncertainty and variability, representativeness and inclusiveness, blocking and stratification, relevance and robustness, forward and reverse causal inference, intention to treat and per protocol analyses, and potential outcomes and counterfactuals.
|
3
|
Enhanced Inference for Finite Population Sampling-Based Prevalence Estimation with Misclassification Errors. Am Stat 2023; 78:192-198. [PMID: 38645436 PMCID: PMC11027951 DOI: 10.1080/00031305.2023.2250401] [Received: 01/07/2023] [Accepted: 08/11/2023] [Indexed: 04/23/2024]
Abstract
Epidemiologic screening programs often make use of tests with small, but non-zero probabilities of misdiagnosis. In this article, we assume the target population is finite with a fixed number of true cases, and that we apply an imperfect test with known sensitivity and specificity to a sample of individuals from the population. In this setting, we propose an enhanced inferential approach for use in conjunction with sampling-based bias-corrected prevalence estimation. While ignoring the finite nature of the population can yield markedly conservative estimates, direct application of a standard finite population correction (FPC) conversely leads to underestimation of variance. We uncover a way to leverage the typical FPC indirectly toward valid statistical inference. In particular, we derive a readily estimable extra variance component induced by misclassification in this specific but arguably common diagnostic testing scenario. Our approach yields a standard error estimate that properly captures the sampling variability of the usual bias-corrected maximum likelihood estimator of disease prevalence. Finally, we develop an adapted Bayesian credible interval for the true prevalence that offers improved frequentist properties (i.e., coverage and width) relative to a Wald-type confidence interval. We report the simulation results to demonstrate the enhanced performance of the proposed inferential methods.
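The bias-corrected maximum likelihood estimator referenced above is the standard Rogan-Gladen correction for a test with known sensitivity and specificity; a minimal sketch (the paper's actual contribution, the extra misclassification variance component and the adapted credible interval, is not reproduced here):

```python
def rogan_gladen(apparent_prev, sensitivity, specificity):
    """Bias-corrected prevalence estimate for an imperfect test with known
    sensitivity and specificity. The raw value is clipped to [0, 1] because
    sampling noise can push it outside the valid range."""
    raw = (apparent_prev + specificity - 1.0) / (sensitivity + specificity - 1.0)
    return min(1.0, max(0.0, raw))
```

For example, a true prevalence of 5% observed through a test with sensitivity 0.90 and specificity 0.95 yields an apparent prevalence of 9.25%, and the correction recovers 5% exactly.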
|
4
|
Population Sampling: Probability and Non-Probability Techniques. Prehosp Disaster Med 2023; 38:147-148. [PMID: 36939054 DOI: 10.1017/s1049023x23000304] [Indexed: 03/21/2023]
|
5
|
Dynamics of the Emerging Genogroup of Infectious Bursal Disease Virus Infection in Broiler Farms in South Korea: A Nationwide Study. Viruses 2022; 14:1604. [PMID: 35893669 PMCID: PMC9330851 DOI: 10.3390/v14081604] [Received: 06/30/2022] [Revised: 07/19/2022] [Accepted: 07/20/2022] [Indexed: 02/01/2023]
Abstract
Infectious bursal disease (IBD), caused by the IBD virus (IBDV), threatens the health of the poultry industry. Recently, a subtype of genogroup (G) 2 IBDV, named G2d, has posed a new threat to the poultry industry. To determine the current status of IBDV prevalence in South Korea, active IBDV surveillance was conducted on 167 randomly selected broiler farms in South Korea from August 2020 to July 2021. The bursas of Fabricius from five chickens from each farm were independently pooled and screened for IBDV using virus-specific RT-PCR. As a result, 86 farms were found to be infected with the G2d variant, 13 farms with G2b, and 2 farms with G3. The current prevalence of IBDV infection in South Korea was estimated at 17.8% at the animal level using pooled sampling methods. G2d IBDV was predominant over the other genogroups, with a potentially high-risk G2d infection area in southwestern South Korea. The impact of IBDV infection on poultry productivity and on susceptibility to Escherichia coli infection was also confirmed. A comparative pathogenicity test indicated that G2d IBDV caused more severe and persistent damage to infected chickens than G2b. This study highlights the importance of implementing regular surveillance programs and poses challenges for the comprehensive prevention of IBDV infections.
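Animal-level prevalence can be recovered from pool-level results with the classic pooled-testing estimate; a sketch assuming a perfect test and pools of equal size (the study's exact estimator may differ):

```python
def pooled_prevalence(positive_pools, total_pools, pool_size):
    """Animal-level prevalence from pool-level positivity, assuming a
    perfect test and random allocation of animals to pools of equal size.
    A pool is negative only if all of its k animals are negative, so
    P(pool negative) = (1 - p)^k, which is solved here for p."""
    pool_rate = positive_pools / total_pools
    return 1.0 - (1.0 - pool_rate) ** (1.0 / pool_size)
```

With pool size 1 this reduces to the simple proportion, and for a fixed pool-positivity rate, larger pools imply a lower animal-level prevalence.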
|
6
|
[Geo-accumulation Index Method to Optimize the Evaluation Method of Polymetallic Environment Quality: Taking Developed Agricultural Areas as an Example]. Huan Jing Ke Xue 2022; 43:957-964. [PMID: 35075869 DOI: 10.13227/j.hjkx.202105250] [Indexed: 11/22/2022]
Abstract
An accurate and defensible soil pollution assessment method is a prerequisite for regional soil pollution evaluation. Triangular fuzzy numbers were introduced into the geo-accumulation index method, combined with α-cut technology and Latin hypercube sampling (LHS) stochastic simulation, to evaluate the accumulation of heavy metals (Cd, Hg, As, Pb, and Cr) in five soil types (moisture soil, aeolian sandy soil, cinnamon soil, loessal soil, and alluvial soil) with large differences in heavy metal content across the study area. This method avoids the inaccuracy that the traditional geo-accumulation index method incurs in the selection of background values, accommodates the large differences between local and area-wide heavy metal contents, and can therefore represent the regional soil heavy metal pollution status comprehensively and realistically, providing a theoretical basis for scientific decision-making. The results showed that the improvement does not affect the evaluation results when the differences in heavy metal content are small, that is, when the geo-accumulation index at all points falls within the same interval level or is less than 0. The traditional geo-accumulation index method, and its variant based on triangular fuzzy numbers, assign a single level determined by the average heavy metal content of the study area; conversely, differences in heavy metal content among sampling points across the study area can cause the recorded contents over larger areas to be under- or overestimated, making the evaluation inaccurate.
Combined with LHS sampling, expressing the evaluation result as the probability of falling into each pollution level greatly alleviates the limitations of traditional evaluation methods and makes the results more reasonable and accurate. Combined with a geographic information system (GIS), the regional accumulation of heavy metal pollution can also be visualized.
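The underlying index is Mueller's geo-accumulation index, Igeo = log2(Cn / (1.5 Bn)), where Cn is the measured concentration, Bn the geochemical background, and the factor 1.5 absorbs natural background fluctuation. A crisp (non-fuzzy) sketch using the conventional 7-class scheme:

```python
import math

def igeo(concentration, background):
    """Mueller geo-accumulation index: log2 of the measured concentration
    over 1.5x the geochemical background value."""
    return math.log2(concentration / (1.5 * background))

def igeo_class(value):
    """Conventional 7-class scheme: class 0 (unpolluted, Igeo <= 0) up to
    class 6 (extremely polluted, Igeo > 5)."""
    for cls, bound in enumerate([0, 1, 2, 3, 4, 5]):
        if value <= bound:
            return cls
    return 6
```

The fuzzy variant described above replaces the crisp background value Bn with a triangular fuzzy number and propagates the resulting interval through this formula.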
|
7
|
Conditions of the Central-Limit Theorem Are Rarely Satisfied in Empirical Psychological Studies. Front Psychol 2021; 12:762418. [PMID: 34858289 PMCID: PMC8630578 DOI: 10.3389/fpsyg.2021.762418] [Received: 08/21/2021] [Accepted: 10/11/2021] [Indexed: 11/13/2022]
|
8
|
Detection of Mycobacterium avium Subspecies Paratuberculosis in Pooled Fecal Samples by Fecal Culture and Real-Time PCR in Relation to Bacterial Density. Animals (Basel) 2021; 11:1605. [PMID: 34072327 PMCID: PMC8229432 DOI: 10.3390/ani11061605] [Received: 04/23/2021] [Revised: 05/26/2021] [Accepted: 05/27/2021] [Indexed: 11/23/2022]
Abstract
Simple Summary: Paratuberculosis is a worldwide disease with serious impacts on the dairy industry. Within the context of paratuberculosis control programs, dairy herds have to be classified as either paratuberculosis-positive or paratuberculosis-free with minimum effort but with sufficient reliability. We aimed to estimate the detection rate of positive herds using a combination of random sampling and pooling of five or ten fecal samples. The pooled samples were analyzed with two laboratory methods (bacterial culture and polymerase chain reaction). Pools of size 10 can be used without a significant decrease in detection probability compared with pools of size 5. Analyzing randomly sampled and pooled fecal samples allows the detection of paratuberculosis-positive herds, but the detection probability in herds with few infected animals (<5.0%) is not high enough to recommend this approach for one-time testing in such herds.
Abstract: Within paratuberculosis control programs, Mycobacterium avium subsp. paratuberculosis (MAP)-infected herds have to be detected with minimum effort but with sufficient reliability. We aimed to evaluate a combination of random sampling (RS) and pooling for the detection of MAP-infected herds, simulating repeated RS in simulated dairy herds (within-herd prevalence 1.0%, 2.0%, 4.3%). Each RS consisted of taking 80 out of 300 pretested fecal samples, and five or ten samples were repeatedly and randomly pooled. All pools containing at least one MAP-positive sample were analyzed by culture and real-time quantitative PCR (qPCR). The pool detection probability was 47.0% or 45.9% for pools of size 5 or 10 applying qPCR, and slightly lower using culture. Combining both methods increased the pool detection probability. A positive association between the bacterial density in pools and the pool detection probability was identified by logistic regression. The herd-level detection probability ranged from 67.3% to 84.8% for pools of size 10 analyzed by both qPCR and culture. Pools of size 10 can be used without significant loss of sensitivity compared with pools of size 5. Analyzing randomly sampled and pooled fecal samples allows the detection of MAP-infected herds, but is not recommended for one-time testing in low-prevalence herds.
|
9
|
'Statistical Irreproducibility' Does Not Improve with Larger Sample Size: How to Quantify and Address Disease Data Multimodality in Human and Animal Research. J Pers Med 2021; 11:234. [PMID: 33806843 PMCID: PMC8005169 DOI: 10.3390/jpm11030234] [Received: 02/20/2021] [Revised: 03/12/2021] [Accepted: 03/18/2021] [Indexed: 12/18/2022]
Abstract
Poor study reproducibility is a concern in translational research. As a solution, it is recommended to increase the sample size (N), i.e., add more subjects to experiments. The goal of this study was to examine and visualize data multimodality (data with more than one peak/mode) as a cause of study irreproducibility. To emulate the repetition of studies and the random sampling of study subjects, we used various methods of random number generation based on preclinical published disease outcome data from human gut microbiota-transplantation rodent studies (e.g., intestinal inflammation; univariate/continuous). We first used unimodal distributions (one mode; Gaussian and binomial) to generate random numbers and showed that increasing N does not reproducibly identify statistical differences when group comparisons are repeatedly simulated. We then used multimodal distributions (more than one mode; Markov chain Monte Carlo methods of random sampling) to simulate similar multimodal datasets A and B (t-test p = 0.95; N = 100,000), and confirmed that increasing N does not improve the reproducibility of statistical results or the direction of the effects. Data visualization with violin plots of categorical random data simulations with five integer categories/five groups illustrated how multimodality leads to irreproducibility. Re-analysis of data from a human clinical trial that used maltodextrin as a dietary placebo revealed multimodal responses between human groups and after placebo consumption. In conclusion, increasing N does not necessarily ensure reproducible statistical findings across repeated simulations, due to randomness and multimodality. Herein, we clarify how to quantify, visualize, and address disease data multimodality in research. Data visualization could facilitate study designs focused on disease subtypes/modes, helping to understand person-to-person differences and personalized medicine.
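The direction-of-effect instability can be reproduced with a stdlib-only simulation (a hedged sketch, not the authors' MCMC setup): two groups drawn from the same bimodal mixture differ in sample mean essentially at random, no matter how large N is.

```python
import random
import statistics

random.seed(7)

def bimodal_sample(n, modes=(0.0, 5.0), sd=0.5):
    """Draw n values from a 50/50 two-mode Gaussian mixture."""
    return [random.gauss(random.choice(modes), sd) for _ in range(n)]

def repeated_studies(n_per_group, repeats=200):
    """Simulate repeated two-group studies on identical multimodal
    populations; return the fraction of studies in which group A's
    sample mean exceeds group B's."""
    a_wins = 0
    for _ in range(repeats):
        a = bimodal_sample(n_per_group)
        b = bimodal_sample(n_per_group)
        if statistics.fmean(a) > statistics.fmean(b):
            a_wins += 1
    return a_wins / repeats
```

Since both groups come from the same population, the "winning" direction hovers around 50% regardless of the per-group sample size, which is the irreproducibility the article describes.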
|
10
|
Highly accelerated submillimeter resolution 3D GRASE with controlled T2 blurring in T2-weighted functional MRI at 7 Tesla: A feasibility study. Magn Reson Med 2020; 85:2490-2506. [PMID: 33231890 DOI: 10.1002/mrm.28589] [Received: 03/27/2020] [Revised: 10/12/2020] [Accepted: 10/20/2020] [Indexed: 11/12/2022]
Abstract
PURPOSE: To achieve highly accelerated submillimeter-resolution T2-weighted functional MRI at 7T by developing three-dimensional gradient and spin echo (GRASE) imaging with inner-volume selection and variable flip angles (VFA). METHODS: GRASE imaging has two disadvantages: (a) k-space modulation causes T2 blurring by limiting the number of slices, and (b) a VFA scheme is only partially successful, with substantial SNR loss. In this work, accelerated GRASE with controlled T2 blurring is developed to improve the point spread function (PSF) and temporal signal-to-noise ratio (tSNR) with a large number of slices. To this end, the VFA scheme is designed by minimizing a trade-off between SNR and blurring for functional sensitivity, and a new GRASE-optimized random encoding, which takes into account the complex signal decays of T2 and T2* weightings, is proposed to achieve incoherent aliasing for constrained reconstruction. Numerical and experimental studies were performed to validate the effectiveness of the proposed method over regular and VFA GRASE (R- and V-GRASE). RESULTS: The proposed method, while achieving 0.8 mm isotropic resolution functional MRI, improves the spatial extent of the excited volume up to 36 slices compared to R- and V-GRASE, with a 52%-68% reduction in the full width at half maximum (FWHM) of the PSF and an approximately 2- to 3-fold improvement in mean tSNR, thus resulting in higher BOLD activations. CONCLUSIONS: We successfully demonstrated the feasibility of the proposed method in T2-weighted functional MRI. The proposed method is especially promising for cortical layer-specific functional MRI.
|
11
|
Spatial sero-prevalence of brucellosis in small ruminants of India: Nationwide cross-sectional study for the year 2017-2018. Transbound Emerg Dis 2020; 68:2199-2208. [PMID: 33021085 DOI: 10.1111/tbed.13871] [Received: 06/06/2020] [Revised: 09/24/2020] [Accepted: 09/29/2020] [Indexed: 11/30/2022]
Abstract
Brucellosis in small ruminants, caused mainly by Brucella melitensis, is an important zoonotic disease characterized by abortion, retained placenta, infertility, orchitis, epididymitis and, rarely, arthritis. Small ruminants are the main source of income for rural and marginally poor farmers, and brucellosis results in huge economic losses due to abortions and infertility, as well as public health concerns among small ruminant keepers. A bovine brucellosis control programme has been implemented in India, but small ruminants are left out of the programme mainly due to the paucity of data on brucellosis status. The present cross-sectional study, based on stratified random sampling, was undertaken during 2017-18 to provide nationwide brucellosis sero-prevalence estimates for small ruminants. A total of 24,056 small ruminant serum samples (8,103 from sheep [2,440 male and 5,663 female] and 15,953 from goats [4,331 male and 11,622 female]) were sourced from 27 of 29 states and two of seven union territories (UTs), from 350 of 640 districts (54.68% of Indian districts) and from 1,462 of 640,867 villages (0.23% of Indian villages). The serum samples were tested by indirect ELISA, and overall apparent and true prevalences of 7.45% (95% CI: 7.13-7.79) and 3.79% (95% CI: 3.44-4.17) were recorded. Significantly higher brucellosis sero-prevalence (p < .0001) was observed in sheep (11.55%) than in goats (5.37%). Similarly, brucellosis seropositivity was significantly higher in females than in males in both sheep and goats. Countrywide, brucellosis sero-prevalence greater than 5% in sheep and goats was recorded in 14 and 10 states, respectively, indicating endemicity of the disease. The study provides the latest update on the nationwide spatial sero-prevalence of small ruminant brucellosis, which will aid the government in strengthening regular surveillance and vaccination to reduce the disease burden and public health problems in the country.
|
12
|
Estimating a Large Travel Time Matrix Between Zip Codes in the United States: A Differential Sampling Approach. J Transp Geogr 2020; 86:102770. [PMID: 32669759 PMCID: PMC7363032 DOI: 10.1016/j.jtrangeo.2020.102770] [Indexed: 05/30/2023]
Abstract
Estimating a massive drive time matrix between locations is a practical but challenging task. The challenges include the availability of reliable road network (including traffic) data, programming expertise, and access to high-performance computing resources. This research proposes a method for estimating a nationwide drive time matrix between ZIP code areas in the U.S., a geographic unit at which many national datasets such as health information are compiled and distributed. The method (1) does not rely on intensive efforts in data preparation or access to advanced computing resources, (2) uses algorithms of varying complexity and computational time to estimate drive times of different trip lengths, and (3) accounts for both interzonal and intrazonal drive times. The core design samples ZIP code pairs with varying intensities according to trip lengths and derives the drive times via the Google Maps API; the Google times are then used to adjust and improve some primitive estimates of drive times obtained at low computational cost. The result provides a valuable resource for researchers.
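The differential-sampling idea can be sketched as a distance-banded acceptance rate: short trips, where drive times are hardest to approximate, are sampled densely, and long trips sparsely. The band edges and fractions below are hypothetical illustrations, not the paper's calibrated values:

```python
import random

random.seed(1)

# Hypothetical sampling fractions by straight-line distance band (km):
# short trips get dense sampling, long trips sparse, since long-trip
# drive times vary more smoothly and are easier to model.
BANDS = [(0, 50, 0.30), (50, 200, 0.10),
         (200, 1000, 0.02), (1000, float("inf"), 0.005)]

def sampling_fraction(distance_km):
    """Return the acceptance probability for a pair at this distance."""
    for lo, hi, frac in BANDS:
        if lo <= distance_km < hi:
            return frac
    return 0.0

def sample_pairs(pairs):
    """Keep each (origin, destination, distance_km) pair with a
    probability set by its distance band; the kept pairs would then be
    sent to a routing API such as Google Maps."""
    return [p for p in pairs if random.random() < sampling_fraction(p[2])]
```

The API-derived times for the sampled pairs can then be used to calibrate a cheap model (e.g., distance divided by band-specific average speed) for the unsampled pairs.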
|
13
|
Assessment of Sample Size Calculations Used in Aquaculture by Simulation Techniques. Front Vet Sci 2020; 7:253. [PMID: 32509804 PMCID: PMC7248330 DOI: 10.3389/fvets.2020.00253] [Received: 02/22/2019] [Accepted: 04/16/2020] [Indexed: 11/13/2022]
Abstract
An adequate sampling methodology is the key to knowing the health status of aquatic populations. Usually, the aims of epidemiological surveys in aquaculture are to detect an infection and to estimate the disease prevalence, and different formulas are used to calculate the sample size. The main objective of this study was to assess whether the sample sizes calculated using classical epidemiological formulas are valid considering the sampling methodology, the population size, and the spatial distribution of diseased animals in the population (non-clustered or clustered). Sample sizes of 30, 60, and 150 fish are widely accepted in aquaculture, owing to the requirements of the World Organization for Animal Health (OIE) for epidemiological surveillance. We developed specific software using ASP (Active Server Pages) and a MySQL database to generate aquatic populations of 100 to 10,000 brown trout infected by Aeromonas salmonicida with different levels of prevalence: 2, 5, 10, and 50%. We then ran Monte Carlo simulations to estimate empirically the sample sizes corresponding to the different scenarios, and compared these results with the values calculated by the classical formulas. We determined that simple random sampling was more accurate for detecting an infection, because it is independent of the distribution of infected animals in the population. However, if diseased animals are non-clustered, it is more efficient to use systematic methods, even in the case of small populations. Finally, the formula for calculating the sample size to estimate disease prevalence is not valid when the expected prevalence is far from 50%, and the sample size must be increased to reach the desired precision.
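The classical "detect at least one infected animal" sample sizes can be sketched directly; the second function is a finite-population approximation in the style of Cannon and Roe (the function names and rounding choices are mine, not the paper's code):

```python
import math

def detect_sample_size(prevalence, confidence=0.95):
    """Sample size needed to detect at least one infected animal with the
    given confidence, assuming a perfect test and an effectively infinite
    population: n = ln(1 - confidence) / ln(1 - prevalence)."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - prevalence))

def detect_sample_size_finite(pop_size, prevalence, confidence=0.95):
    """Cannon-Roe style finite-population approximation, which gives
    smaller samples than the infinite-population formula for small N."""
    diseased = max(1, round(pop_size * prevalence))
    n = (1 - (1 - confidence) ** (1 / diseased)) * (pop_size - (diseased - 1) / 2)
    return math.ceil(n)
```

These reproduce the familiar surveillance values, e.g. 29 fish for 10% expected prevalence and 59 fish for 5% at 95% confidence, with the finite-population version dropping to 45 for a population of 100 at 5% prevalence.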
|
14
|
Fast Number Theoretic Transform for Ring-LWE on 8-bit AVR Embedded Processor. Sensors (Basel) 2020; 20:2039. [PMID: 32260497 PMCID: PMC7180843 DOI: 10.3390/s20072039] [Received: 02/04/2020] [Revised: 03/30/2020] [Accepted: 04/03/2020] [Indexed: 11/16/2022]
Abstract
In this paper, we optimize the Number Theoretic Transform (NTT) and random sampling operations on low-end 8-bit AVR microcontrollers. We focus on optimized modular multiplication with a secure countermeasure (i.e., constant timing), which ensures high performance and prevents timing attacks and simple power analysis. In particular, we present a combined Look-Up Table (LUT)-based fast reduction technique in a regular fashion. This novel approach requires only two LUT accesses to perform the whole modular reduction routine. The implementation is carefully written in assembly language, which reduces the number of memory accesses and function calls. With the LUT-based optimization, the proposed NTT implementations outperform the previous best results by 9.0% and 14.6% at the 128-bit and 256-bit security levels, respectively. Furthermore, we adopt a highly optimized AES software implementation to improve the performance of pseudo-random number generation for the random sampling operation. The AES-256 counter (CTR) mode encryption used for the random number generator requires only 3,184 clock cycles for a 128-bit data input, which is 9.5% faster than previous state-of-the-art results. Finally, the proposed methods are applied to the whole Ring-LWE key scheduling and encryption operations, which require only 524,211 and 659,603 clock cycles at the 128-bit security level, respectively. For key generation at the 256-bit security level, 1,325,171 and 1,775,475 clock cycles are required for hardware and software AES-based implementations, respectively. For encryption at the 256-bit security level, 1,430,601 and 2,042,474 clock cycles are required for hardware and software AES-based implementations, respectively.
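For readers unfamiliar with the transform itself: an NTT is a DFT over the prime field Z_q, where n divides q - 1, so polynomial products reduce to cheap pointwise products. A textbook O(n^2) Python sketch for the Ring-LWE-friendly prime q = 7681 follows; the AVR implementation described above uses the O(n log n) butterfly form with LUT-based constant-time reduction, which is not reproduced here:

```python
Q = 7681  # Ring-LWE-friendly prime: Q - 1 = 2**9 * 15

def find_root_of_unity(n, q=Q):
    """Find a primitive n-th root of unity mod q (n a power of two)."""
    for g in range(2, q):
        w = pow(g, (q - 1) // n, q)   # order of w divides n
        if pow(w, n // 2, q) != 1:    # so this check makes it exactly n
            return w
    raise ValueError("no primitive root found")

def ntt(a, w, q=Q):
    """Forward transform in textbook O(n^2) form."""
    n = len(a)
    return [sum(a[j] * pow(w, i * j, q) for j in range(n)) % q
            for i in range(n)]

def intt(A, w, q=Q):
    """Inverse transform: use w^-1 and scale by n^-1 (Fermat inverses)."""
    n = len(A)
    n_inv, w_inv = pow(n, q - 2, q), pow(w, q - 2, q)
    return [n_inv * sum(A[j] * pow(w_inv, i * j, q) for j in range(n)) % q
            for i in range(n)]
```

The modular reductions hidden in `% q` and `pow(..., q)` are exactly the operations the paper replaces with its two-access LUT routine on the AVR.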
|
15
|
Animal Welfare Assessment of Fattening Pigs: A Case Study on Sample Validity. Animals (Basel) 2020; 10:389. [PMID: 32121023 PMCID: PMC7142706 DOI: 10.3390/ani10030389] [Received: 01/12/2020] [Revised: 02/06/2020] [Accepted: 02/23/2020] [Indexed: 11/16/2022]
Abstract
Simple Summary: The welfare of farm animals is discussed in society and politics. In Germany, the Association for Technology and Structures in Agriculture developed a new guideline for the animal welfare assessment of fattening pigs. It is called ANIMAL WELFARE INDICATORS: PRACTICAL GUIDE – PIGS and contains 13 characteristics, so-called indicators, by which the welfare of an animal is assessed, as well as instructions on how to collect those indicators. For reasons of feasibility, six of the indicators should be collected not for all fattening pigs in a herd but for a sample. The question arises whether the herd's level of animal welfare is then assessed with sufficient precision. For this reason, this study examines five strategies for sampling the fattening pigs in a herd. The aim is to identify a feasible strategy that yields samples of high validity. However, the study shows that the result of an animal welfare assessment based upon samples can deviate considerably from the result of assessing the entire herd. Further studies are needed to identify the most feasible and valid method for sampling pigs from a herd.
Abstract: A guide for the animal welfare assessment of fattening pigs recommends recording some of the indicators for a sample of the animals from a herd. However, it is not certain whether the herd's level of welfare can be correctly judged from a random sample. Therefore, both the true prevalences of welfare indicators in a full census and the estimated prevalences based upon simulated samples taken according to five strategies (termed S1 to S5) were determined. Deviations from the true level of animal welfare in the herd due to sampling were recorded and analyzed. Depending on the strategy, between 12% and 43% of the samples over- or underestimated the true prevalences by more than 50%. The validity of the sampling strategies was evaluated using the normalized root-mean-squared error (NRMSE) and the relative bias (RB). In terms of accuracy, the strategies differed only slightly (from NRMSE = 0.13 for S2 to NRMSE = 0.19 for S4). However, the strategies varied more markedly in bias (from RB = -0.0002 for S1 to RB = -0.0370 for S5). These results are the outcome of an initial case study on the sample validity of the indicators and have to be verified using data from more herds.
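The two validity measures can be computed directly; the normalization convention used here (the range of the true prevalences) is an assumption on my part, since the study's exact formula is not given in the abstract:

```python
import math

def nrmse(true_vals, est_vals):
    """Root-mean-squared error of the estimates, normalized by the range
    of the true values."""
    mse = sum((t - e) ** 2 for t, e in zip(true_vals, est_vals)) / len(true_vals)
    return math.sqrt(mse) / (max(true_vals) - min(true_vals))

def relative_bias(true_vals, est_vals):
    """Signed difference of the mean estimate from the mean true value,
    relative to the mean true value (negative = underestimation)."""
    mean_t = sum(true_vals) / len(true_vals)
    mean_e = sum(est_vals) / len(est_vals)
    return (mean_e - mean_t) / mean_t
```

A strategy can score well on one measure and poorly on the other, which is why the study reports both: NRMSE captures scatter around the truth, RB captures systematic over- or underestimation.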
|
16
|
[Characteristics of Microbial Colony Counts on Agar Plates for Food and Microbial Culture Samples]. Food Hygiene and Safety Science (Shokuhin Eiseigaku Zasshi) 2019; 60:88-95. [PMID: 31474656 DOI: 10.3358/shokueishi.60.88] [Indexed: 11/17/2022]
Abstract
Microbial colony counts are one of the most important items in the microbiological examination of food products. The distribution of colony counts per agar plate for a food sample is considered to reflect the distribution of microbial cells in the food homogenate. However, (i) the probabilistic distributions of colony counts per agar plate at the counting dilution and (ii) the relationship between the colony counts per plate and the number of agar plates for food samples have not been intensively studied so far. In this study, therefore, these two points were examined with raw food samples (raw minced beef, raw minced chicken, and raw milk) and microbial culture samples (Escherichia coli, Staphylococcus aureus, and Saccharomyces cerevisiae). Among four major probabilistic distributions, aerobic plate counts per plate of the foods were well described by negative binomial, Poisson, and normal distributions, while the colony counts per plate of the microbial cultures were well described by binomial, Poisson, and normal distributions. The effect of the number of agar plates on the estimation of the mean colony count per plate was then studied with data randomly resampled from the experimental data. The resampled data showed that with a larger number of plates, the mean count fluctuated less and the coefficient of variation of colony counts per plate decreased, consistent with the estimates from the central limit theorem. Our study provides useful information on the characteristics of colony counts per plate of food samples that are routinely examined.
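A quick way to see which of these count distributions is plausible is the variance-to-mean (dispersion) index of the plate counts; a sketch:

```python
def dispersion_index(counts):
    """Sample variance-to-mean ratio of colony counts per plate:
    ~1 suggests Poisson, >1 overdispersion (negative binomial-like
    clumping in the homogenate), <1 underdispersion (binomial-like)."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return var / mean
```

This is only a screening heuristic; a formal comparison of the four candidate distributions would use likelihood-based fits as in the study.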
|
17
|
Location-Scale Matching for Approximate Quasi-Order Sampling. Front Psychol 2019; 10:1163. [PMID: 31244703 PMCID: PMC6573793 DOI: 10.3389/fpsyg.2019.01163] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2019] [Accepted: 05/02/2019] [Indexed: 12/03/2022] Open
Abstract
Quasi-orders are reflexive and transitive binary relations and have many applications. Examples are the dependencies of mastery among the problems of a psychological test, or methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data. Data mining techniques are typically tested based on simulation studies with unbiased samples of randomly generated quasi-orders. In this paper, we develop techniques for the approximately representative sampling of quasi-orders. Polynomial regression curves are fitted for the mean and standard deviation of quasi-order size as a function of item number. The resulting regression graphs are seen to be quadratic and linear functions, respectively. The extrapolated values for the mean and standard deviation are used to propose two quasi-order sampling techniques. The discrete method matches these location and scale measures with a transformed discrete distribution directly obtained from the sample. The continuous method uses the normal density function with matched expectation and variance. The quasi-orders are constructed according to the biased randomized doubly inductive construction; however, they are resampled to become approximately representative, following the matched discrete and continuous distributions. In simulations, we investigate the usefulness of these methods. The location-scale matching approach can cope with very large item sets. Close-to-representative samples of random quasi-orders are constructed for item numbers up to n = 400.
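The continuous method can be sketched in a few lines: extrapolate the mean and standard deviation of quasi-order size from the fitted regression curves, then draw target sizes from a normal density with matched expectation and variance. Only the quadratic/linear functional forms come from the abstract; the regression coefficients below are made up for illustration.

```python
import random

random.seed(0)

# Illustrative regression fits (coefficients are hypothetical):
# quadratic curve for the mean quasi-order size, linear for its standard deviation.
def mean_size(n_items):
    return 0.3 * n_items ** 2 + 1.2 * n_items

def sd_size(n_items):
    return 0.8 * n_items + 2.0

def sample_target_sizes(n_items, k):
    """Continuous location-scale matching: a normal density with matched
    expectation and variance for the size of sampled quasi-orders."""
    mu, sigma = mean_size(n_items), sd_size(n_items)
    return [random.gauss(mu, sigma) for _ in range(k)]

sizes = sample_target_sizes(50, 10_000)
mu_hat = sum(sizes) / len(sizes)
print(round(mu_hat), round(mean_size(50)))
```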
|
18
|
CuDDI: A CUDA-Based Application for Extracting Drug-Drug Interaction Related Substance Terms from PubMed Literature. Molecules 2019; 24:molecules24061081. [PMID: 30893816 PMCID: PMC6470591 DOI: 10.3390/molecules24061081] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2019] [Revised: 03/12/2019] [Accepted: 03/16/2019] [Indexed: 11/30/2022] Open
Abstract
Drug-drug interaction (DDI) is becoming a serious issue in clinical pharmacy as the use of multiple medications becomes more common. The PubMed database is one of the biggest literature resources for DDI studies. It contains over 150,000 journal articles related to DDI and is still expanding at a rapid pace. The extraction of DDI-related information, including compounds and proteins, from PubMed is an essential step for DDI research. In this paper, we introduce a tool, CuDDI (compute unified device architecture-based DDI searching), for identification of DDI-related terms (including compounds and proteins) from PubMed. There are three modules in this application: the automatic retrieval of substances from PubMed, the identification of DDI-related terms, and the display of relationships among DDI-related terms. For DDI term identification, a speedup of 30–105 times was observed for the compute unified device architecture (CUDA)-based version compared with the implementation with a CPU-based Python version. CuDDI can be used to discover DDI-related terms and the relationships among these terms, which has the potential to help clinicians and pharmacists better understand the mechanism of DDIs. CuDDI is available at: https://github.com/chengusf/CuDDI.
|
19
|
A pooling strategy to effectively use genotype data in quantitative traits genome-wide association studies. Stat Med 2018; 37:4083-4095. [PMID: 30003569 DOI: 10.1002/sim.7898] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2018] [Revised: 04/17/2018] [Accepted: 06/01/2018] [Indexed: 11/11/2022]
Abstract
The goal of quantitative traits genome-wide association studies is to identify associations between a phenotypic variable, such as a vitamin level, and genetic variants, often single-nucleotide polymorphisms. When funding limits the number of assays that can be performed to measure the level of the phenotypic variable, a subgroup of subjects is often randomly selected from the genotype database and the level of the phenotypic variable is then measured for each subject. Because only a proportion of the genotype data can be used, such a simple random sampling method may suffer from substantial loss of efficiency, especially when the number of assays is relatively small and the frequency of the less common variant (minor allele frequency) is low. We propose a pooling strategy in which subjects in a randomly selected reference subgroup are aligned with randomly selected subjects from the remaining study subjects to form independent pools; blood samples from subjects in each pool are mixed; and the level of the phenotypic variable is measured for each pool. We demonstrate that the proposed pooling approach produces considerable gains in efficiency over the simple random sampling method for inference concerning the phenotype-genotype association, resulting in higher precision and power. The methods are illustrated using genotypic and phenotypic data from the Trinity Students Study, a quantitative genome-wide association study.
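A toy simulation makes the pooling idea concrete. In the sketch below (synthetic data; the sample sizes, minor allele frequency, and effect size are invented), each assay measures the mean phenotype of a pool, so all genotyped subjects contribute information, and a simple regression of pooled phenotype on pooled genotype still recovers the per-allele effect.

```python
import random
import statistics

random.seed(2)
N, ASSAYS, MAF, BETA = 2000, 200, 0.1, 0.5  # hypothetical study parameters

# Genotypes (minor-allele counts 0/1/2) and phenotypes for all N subjects.
g = [sum(random.random() < MAF for _ in range(2)) for _ in range(N)]
y = [BETA * gi + random.gauss(0, 0.5) for gi in g]

# Pooling: randomly partition subjects into ASSAYS pools; each assay returns
# the mean phenotype of a pool (blood samples mixed), so every genotype is used.
idx = list(range(N))
random.shuffle(idx)
size = N // ASSAYS
pools = [idx[i * size:(i + 1) * size] for i in range(ASSAYS)]
pooled_y = [statistics.mean(y[j] for j in p) for p in pools]
pooled_g = [statistics.mean(g[j] for j in p) for p in pools]

# Least-squares slope of pooled phenotype on pooled genotype.
gbar, ybar = statistics.mean(pooled_g), statistics.mean(pooled_y)
beta_hat = sum((a - gbar) * (b - ybar) for a, b in zip(pooled_g, pooled_y)) \
         / sum((a - gbar) ** 2 for a in pooled_g)
print(round(beta_hat, 2))  # close to the true per-allele effect
```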
|
20
|
Chimpanzees Consider Humans' Psychological States when Drawing Statistical Inferences. Curr Biol 2018; 28:1959-1963.e3. [PMID: 29861138 DOI: 10.1016/j.cub.2018.04.077] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2017] [Revised: 03/22/2018] [Accepted: 04/24/2018] [Indexed: 11/28/2022]
Abstract
Great apes have been shown to be intuitive statisticians: they can use proportional information within a population to make intuitive probability judgments about randomly drawn samples [1, J.E., J.C., J.H., E.H., and H.R., unpublished data]. Humans, from early infancy onward, functionally integrate intuitive statistics with other cognitive domains to judge the randomness of an event [2-6]. To date, nothing is known about such cross-domain integration in any nonhuman animal, leaving uncertainty about the origins of human statistical abilities. We investigated whether chimpanzees take into account information about psychological states of experimenters (their biases and visual access) when drawing statistical inferences. We tested 21 sanctuary-living chimpanzees in a previously established paradigm that required subjects to infer which of two mixed populations of preferred and non-preferred food items was more likely to lead to a desired outcome for the subject. In a series of three experiments, we found that chimpanzees chose based on proportional information alone when they had no information about experimenters' preferences and (to a lesser extent) when experimenters had biases for certain food types but drew blindly. By contrast, when biased experimenters had visual access, subjects ignored statistical information and instead chose based on experimenters' biases. Lastly, chimpanzees intuitively used a violation of statistical likelihoods as an indication of biased sampling. Our results suggest that chimpanzees have a random sampling assumption that can be overridden under the appropriate circumstances and that they are able to use mental state information to judge whether this is necessary. This provides further evidence for a shared statistical inference mechanism in apes and humans.
|
21
|
Fast Ordered Sampling of DNA Sequence Variants. G3-GENES GENOMES GENETICS 2018. [PMID: 29531124 PMCID: PMC5940139 DOI: 10.1534/g3.117.300465] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
Explosive growth in the amount of genomic data is matched by increasing power of consumer-grade computers. Even applications that require powerful servers can be quickly tested on desktop or laptop machines if we can generate representative samples from large data sets. I describe a fast and memory-efficient implementation of an on-line sampling method developed for tape drives 30 years ago. Focusing on genotype files, I test the performance of this technique on modern solid-state and spinning hard drives, and show that it performs well compared to a simple sampling scheme. I illustrate its utility by developing a method to quickly estimate genome-wide patterns of linkage disequilibrium (LD) decay with distance. I provide open-source software that samples loci from several variant format files, a separate program that performs LD decay estimates, and a C++ library that lets developers incorporate these methods into their own projects.
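The flavor of such one-pass ordered sampling can be shown with Knuth's sequential sampling (Algorithm S), a classic method from the same tape-drive era; this is a stand-in sketch, not the author's implementation. Each record is selected with probability (still needed)/(still remaining), which yields exactly k records in their original file order.

```python
import random

def ordered_sample(records, k, rng=random):
    """One-pass selection of k records in their original order (Algorithm S)."""
    n = len(records)
    chosen, seen = [], 0
    for r in records:
        remaining = n - seen
        needed = k - len(chosen)
        if rng.random() < needed / remaining:
            chosen.append(r)
        seen += 1
        if len(chosen) == k:
            break
    return chosen

random.seed(0)
loci = list(range(1000))        # stand-in for SNP positions in file order
sample = ordered_sample(loci, 50)
print(len(sample), sample == sorted(sample))
```

Because selection happens in file order, the sample never needs to be re-sorted, which is what makes this style of algorithm attractive for large variant files.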
|
22
|
Innovative approaches to informed consent for randomized clinical trials: Identifying the ethical challenges. Clin Trials 2018; 15:17-20. [PMID: 29250988 PMCID: PMC5799024 DOI: 10.1177/1740774517746621] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
23
|
Cross-Sectional Surveys: Inferring Total Eventual Time in Current State Using Only Elapsed Time-to-Date. SOCIO-ECONOMIC PLANNING SCIENCES 2017; 57:1-13. [PMID: 28529387 PMCID: PMC5435388 DOI: 10.1016/j.seps.2016.09.005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
We focus on snapshot surveying of sub-populations whose members are in a temporary state and where one of the questions asked is the elapsed time already spent in that state. From these answers we develop probabilistic and statistical procedures to estimate the distribution of total time that will eventually be spent in that state by any random individual who enters the state. The method relies on a selection bias often found in temporal sampling, sometimes called "random incidence" or "longevity bias." We develop results for several types of sampling, including random and fixed times of surveying, random and fixed times of entering the state, and sampling only those who have already spent some minimal specified time in the targeted state. An example with post-doc data is included to demonstrate the steps.
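The "random incidence" selection bias is easy to simulate. In the sketch below (an exponential total-time distribution chosen purely for illustration), individuals are intercepted with probability proportional to their total duration, and the observed elapsed time is uniform over that duration; for the exponential, the mean observed elapsed time then equals E[T²]/(2E[T]), which is the mean total time itself.

```python
import random
import statistics

random.seed(3)
MEAN_T = 2.0  # hypothetical mean total time in state (e.g., years as a post-doc)

def observed_elapsed(n):
    """Snapshot survey under random incidence: interception probability is
    proportional to total duration (length bias, via rejection sampling),
    and elapsed time-to-date is uniform over that duration."""
    out = []
    while len(out) < n:
        t = random.expovariate(1 / MEAN_T)
        if random.random() < min(t / 20.0, 1.0):  # accept w.p. proportional to t
            out.append(random.uniform(0, t))
    return out

elapsed = observed_elapsed(50_000)
# For an exponential, E[elapsed] = E[T^2] / (2 E[T]) = MEAN_T.
print(round(statistics.mean(elapsed), 2))
```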
|
24
|
Non-carious cervical lesions (NCCLs) in a random sampling community population and the association of NCCLs with occlusive wear. J Oral Rehabil 2016; 43:960-966. [PMID: 27658541 DOI: 10.1111/joor.12445] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/18/2016] [Indexed: 10/21/2022]
Abstract
This study investigated the prevalence, risk factors and association of occlusive wear with non-carious cervical lesions (NCCLs) in the general Chinese population. A total of 1320 subjects were recruited, and multistage random sampling of survey sites was performed. All age groups comprised similar numbers of participants and equal numbers of males and females. Each subject completed a structured interview, and all teeth of each subject were examined by a practitioner to determine NCCLs and occlusive wear. Binary logistic regression was conducted to analyse the association of risk factors with the occurrence of NCCLs. Bivariate correlation analysis was performed to determine the association of NCCL dimension or depth with the range of occlusive wear facets. Clinical assessment showed that the overall prevalence of subjects diagnosed with NCCLs was 63%. The proportion of subjects or teeth with NCCLs significantly increased with age. Pre-molars were the most commonly affected teeth. Single variables and interactive effects associated with the occurrence of NCCLs included age group, intensity of toothbrushing, frequency of fresh fruit consumption, and the interaction between intensity of toothbrushing and frequency of fresh fruit consumption. A weak positive correlation was found between the grading index of NCCL dimension, size or depth and the range of occlusive wear facets. This study found a high prevalence of NCCLs in the general Chinese population. Implementing a combined strategy to reduce risk factors of NCCLs could be more effective than individual techniques; meanwhile, the occurrence of NCCLs could be related to the degree of occlusive wear in the population studied.
|
25
|
Mathematical expression and sampling issues of treatment contrasts: Beyond significance testing and meta-analysis to clinically useful research synthesis. Psychother Res 2016; 28:58-75. [PMID: 27581109 DOI: 10.1080/10503307.2016.1222459] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/06/2023] Open
Abstract
The more two treatments' outcome distributions overlap, the more ambiguity there is about which would be better for some clients. Effect size and t-statistics ignore this ambiguity by indicating nothing about the contrasted treatments' outcome ranges, although the wider these ranges are, the smaller these statistics become and the more influences other than the given treatments matter for outcomes. Treatment-contrast data analysis logically requires valid measurement of all the influences on outcomes. Each influence, measured or not, is somehow sampled in every treatment contrast, and the nature of this sampling affects the contrast's two outcome distributions. Sampling also affects replications of a treatment contrast: proper meta-analysis logically requires sampling that produces the same statistically expected outcome distributions for each replicate. Because scientific human psychology is most fundamentally about individual persons and cases, rather than aggregations of persons or cases, contrasted treatments' outcome distributions ought eventually to be disaggregated to whatever configurations of input-dimension gradations collapse their ranges to zero by jointly taking account of every influence on outcomes. Only then are the data about individual persons or cases, and so relevant to psychotherapy theory.
|
26
|
Detecting the Common and Individual Effects of Rare Variants on Quantitative Traits by Using Extreme Phenotype Sampling. Genes (Basel) 2016; 7:genes7010002. [PMID: 26784232 PMCID: PMC4728382 DOI: 10.3390/genes7010002] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2015] [Revised: 12/21/2015] [Accepted: 01/05/2016] [Indexed: 12/19/2022] Open
Abstract
Next-generation sequencing technology has made it possible to detect rare genetic variants associated with complex human traits. In the recent literature, various methods specifically designed for rare variants have been proposed. These tests can be broadly classified into burden and nonburden tests. In this paper, we take advantage of both burden and nonburden tests, and consider the common effect as well as individual deviations from the common effect. To achieve robustness, we use two methods of combining p-values, Fisher's method and the minimum-p method. In rare-variant association studies, to improve the power of the tests, we explore the advantage of extreme phenotype sampling. First, we dichotomize the continuous phenotypes, treating the two extremes as two groups representing a dichotomous phenotype. We then compare the power of several methods based on extreme phenotype sampling and random sampling. Extensive simulation studies show that our proposed methods using extreme phenotype sampling are the most powerful, or very close to the most powerful, across various settings of the true model when the same sample size is used.
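Of the two p-value combination rules, Fisher's method has a closed form that is easy to check: X = −2 Σ ln pᵢ follows a chi-square distribution with 2k degrees of freedom under the null, and for even degrees of freedom the survival function is elementary. A minimal sketch (the p-values are illustrative, not from the paper):

```python
import math

def fisher_combine(pvals):
    """Fisher's method: X = -2 * sum(ln p_i) is chi-square with 2k d.f. under
    the null; for even d.f. the survival function has a closed form:
    P(X > x) = exp(-x/2) * sum_{i<k} (x/2)^i / i!."""
    k = len(pvals)
    half = -sum(math.log(p) for p in pvals)   # this is X/2
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))

# Combining a burden-test p-value with a nonburden-test p-value (illustrative numbers).
print(round(fisher_combine([0.04, 0.10]), 4))
```

Note that combining a single p-value returns it unchanged, a quick sanity check on the formula.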
|
27
|
Abstract
Random sampling of cases is usually infeasible for psychotherapy research, so opportunistic and purposive sampling must be used instead. Such sampling does not justify generalizations from sample to population-distribution statistics, but does justify reporting what independent-variable value configurations are associated with what dependent-variable value configurations. This allows only the generalization that these associations occur at least that frequently in the population sampled from, which is enough for suggesting and testing some psychotherapy theories and informing some psychotherapy practice. Although psychotherapy practice is a longitudinal process, formal psychotherapy outcome research is so far most feasible and most widely done in the form of two-phase cross-sectional input-outcome studies. Thus, the analysis of sampling for psychotherapy research here will be in terms of the independent- and dependent-variable value configurations produced in such two-phase studies.
|
28
|
Making sense of discrepancies in working memory training experiments: a Monte Carlo simulation. Front Syst Neurosci 2014; 8:161. [PMID: 25228862 PMCID: PMC4151028 DOI: 10.3389/fnsys.2014.00161] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2014] [Accepted: 08/15/2014] [Indexed: 12/04/2022] Open
|
29
|
Backbone and partial side chain assignment of the microtubule binding domain of the MAP1B light chain. BIOMOLECULAR NMR ASSIGNMENTS 2014; 8:123-127. [PMID: 23339032 PMCID: PMC3955483 DOI: 10.1007/s12104-013-9466-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/21/2012] [Accepted: 01/12/2013] [Indexed: 06/01/2023]
Abstract
Microtubule-associated protein 1B (MAP1B) is a classical high molecular mass microtubule-associated protein expressed at high levels in the brain. It confers specific properties to neuronal microtubules and is essential for neuronal differentiation, brain development and synapse maturation. Misexpression of the protein contributes to the development of brain disorders in humans. However, despite numerous reports demonstrating the importance of MAP1B in regulation of the neuronal cytoskeleton during neurite extension and axon guidance, its mechanism of action is still elusive. Here we focus on the intrinsically disordered microtubule binding domain of the light chain of MAP1B. In order to obtain more detailed structural information about this domain we assigned NMR chemical shifts of backbone and aliphatic side chain atoms.
|
30
|
Accounting for inhomogeneous broadening in nano-optics by electromagnetic modeling based on Monte Carlo methods. Proc Natl Acad Sci U S A 2014; 111:E639-44. [PMID: 24469797 DOI: 10.1073/pnas.1323392111] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Many experimental systems consist of large ensembles of uncoupled or weakly interacting elements operating as a single whole; this is particularly the case for applications in nano-optics and plasmonics, including colloidal solutions, plasmonic or dielectric nanoparticles on a substrate, antenna arrays, and others. In such experiments, measurements of the optical spectra of ensembles will differ from measurements of the independent elements as a result of small variations from element to element (also known as polydispersity) even if these elements are designed to be identical. In particular, sharp spectral features arising from narrow-band resonances will tend to appear broader and can even be washed out completely. Here, we explore this effect of inhomogeneous broadening as it occurs in colloidal nanopolymers comprising self-assembled nanorod chains in solution. Using a technique combining finite-difference time-domain simulations and Monte Carlo sampling, we predict the inhomogeneously broadened optical spectra of these colloidal nanopolymers and observe significant qualitative differences compared with the unbroadened spectra. The approach combining an electromagnetic simulation technique with Monte Carlo sampling is widely applicable for quantifying the effects of inhomogeneous broadening in a variety of physical systems, including those with many degrees of freedom that are otherwise computationally intractable.
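The Monte Carlo part of the approach can be illustrated without the electromagnetic solver: replace each FDTD-computed spectrum with an analytic Lorentzian, draw one resonance position per ensemble member from the polydispersity distribution, and average. The linewidths below are invented for illustration; the qualitative result is the washing-out of the sharp single-element resonance.

```python
import random

random.seed(4)
GAMMA = 0.01   # homogeneous (single-element) half-width, illustrative
SIGMA = 0.05   # ensemble spread of resonance positions (polydispersity), illustrative

def lorentzian(w, w0, gamma=GAMMA):
    """Single-element resonance; a stand-in for one simulated spectrum."""
    return gamma ** 2 / ((w - w0) ** 2 + gamma ** 2)

# Monte Carlo sampling: one random resonance position per ensemble member,
# then average the individual spectra.
w0s = [random.gauss(1.0, SIGMA) for _ in range(500)]
ws = [0.8 + i * 0.001 for i in range(401)]
single = [lorentzian(w, 1.0) for w in ws]
ensemble = [sum(lorentzian(w, w0) for w0 in w0s) / len(w0s) for w in ws]

def fwhm(vals, ws):
    """Full width at half maximum, read off a sampled spectrum."""
    peak = max(vals)
    above = [w for w, v in zip(ws, vals) if v >= peak / 2]
    return above[-1] - above[0]

print(fwhm(single, ws), fwhm(ensemble, ws))  # the ensemble line is much broader
```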
|
31
|
LSRN: A PARALLEL ITERATIVE SOLVER FOR STRONGLY OVER- OR UNDERDETERMINED SYSTEMS. SIAM JOURNAL ON SCIENTIFIC COMPUTING : A PUBLICATION OF THE SOCIETY FOR INDUSTRIAL AND APPLIED MATHEMATICS 2014; 36:C95-C118. [PMID: 25419094 PMCID: PMC4238893 DOI: 10.1137/120866580] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
We describe a parallel iterative least squares solver named LSRN that is based on random normal projection. LSRN computes the min-length solution to min_{x ∈ ℝⁿ} ‖Ax − b‖₂, where A ∈ ℝ^{m×n} with m ≫ n or m ≪ n, and where A may be rank-deficient. Tikhonov regularization may also be included. Since A is involved only in matrix-matrix and matrix-vector multiplications, it can be a dense or sparse matrix or a linear operator, and LSRN automatically speeds up when A is sparse or a fast linear operator. The preconditioning phase consists of a random normal projection, which is embarrassingly parallel, and a singular value decomposition of size ⌈γ min(m, n)⌉ × min(m, n), where γ is moderately larger than 1, e.g., γ = 2. We prove that the preconditioned system is well-conditioned, with a strong concentration result on the extreme singular values, and hence that the number of iterations is fully predictable when we apply LSQR or the Chebyshev semi-iterative method. As we demonstrate, the Chebyshev method is particularly efficient for solving large problems on clusters with high communication cost. Numerical results show that on a shared-memory machine, LSRN is very competitive with LAPACK's DGELSD and a fast randomized least squares solver called Blendenpik on large dense problems, and it outperforms the least squares solver from SuiteSparseQR on sparse problems without sparsity patterns that can be exploited to reduce fill-in. Further experiments show that LSRN scales well on an Amazon Elastic Compute Cloud cluster.
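The preconditioning idea can be sketched in NumPy for the m ≫ n case (a direct solve stands in for the LSQR/Chebyshev iterations, and the matrix sizes and conditioning are invented for illustration): sketch A with a random normal projection, take the SVD of the small sketch, and use N = VΣ⁻¹ as a right preconditioner, after which AN is well-conditioned.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, gamma = 500, 20, 2.0
# An ill-conditioned tall matrix (m >> n) with widely scaled columns.
A = rng.normal(size=(m, n)) * np.logspace(0, 8, n)
b = rng.normal(size=m)

# 1. Random normal projection: sketch A down to s = ceil(gamma * n) rows.
s = int(np.ceil(gamma * n))
GA = rng.normal(size=(s, m)) @ A

# 2. SVD of the small s-by-n sketch yields the right preconditioner N = V Sigma^{-1}.
_, sig, Vt = np.linalg.svd(GA, full_matrices=False)
N = Vt.T / sig

# 3. A @ N is well-conditioned, so an iterative solver on min ||(A N) y - b||
#    converges in a predictable number of steps; x = N y is the solution.
AN = A @ N
y = np.linalg.lstsq(AN, b, rcond=None)[0]
x = N @ y
print(np.linalg.cond(A), np.linalg.cond(AN))
```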
|
32
|
Using landscape history to predict biodiversity patterns in fragmented landscapes. Ecol Lett 2013; 16:1221-33. [PMID: 23931035 PMCID: PMC4231225 DOI: 10.1111/ele.12160] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2013] [Revised: 03/12/2013] [Accepted: 06/28/2013] [Indexed: 11/08/2022]
Abstract
Landscape ecology plays a vital role in understanding the impacts of land-use change on biodiversity, but it is not a predictive discipline, lacking theoretical models that quantitatively predict biodiversity patterns from first principles. Here, we draw heavily on ideas from phylogenetics to fill this gap, basing our approach on the insight that habitat fragments have a shared history. We develop a landscape ‘terrageny’, which represents the historical spatial separation of habitat fragments in the same way that a phylogeny represents evolutionary divergence among species. Combining a random sampling model with a terrageny generates numerical predictions about the expected proportion of species shared between any two fragments, the locations of locally endemic species, and the number of species that have been driven locally extinct. The model predicts that community similarity declines with terragenetic distance, and that local endemics are more likely to be found in terragenetically distinctive fragments than in large fragments. We derive equations to quantify the variance around predictions, and show that ignoring the spatial structure of fragmented landscapes leads to over-estimates of local extinction rates at the landscape scale. We argue that ignoring the shared history of habitat fragments limits our ability to understand biodiversity changes in human-modified landscapes.
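The random sampling component can be sketched as a null model (this is only the sampling piece, not the terrageny machinery): each species from a regional pool occurs in a fragment independently with probability given by the fragment's relative area, and the expected proportion of species shared between two fragments follows directly. The pool size and area fractions below are hypothetical.

```python
import random

random.seed(5)
POOL = 1000  # regional species pool (hypothetical size)

def fragment(area_frac):
    """Random-sampling null model: each species occurs in the fragment
    independently with probability equal to its relative area."""
    return {s for s in range(POOL) if random.random() < area_frac}

a, b = fragment(0.3), fragment(0.3)
jaccard = len(a & b) / len(a | b)
# Expected Jaccard similarity: p^2 / (2p - p^2) = 0.09 / 0.51, about 0.18.
print(round(jaccard, 3))
```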
|
33
|
Turbo fast three-dimensional carotid artery black-blood MRI by combining three-dimensional MERGE sequence with compressed sensing. Magn Reson Med 2012; 70:1347-52. [PMID: 23280949 DOI: 10.1002/mrm.24579] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2012] [Revised: 11/03/2012] [Accepted: 11/11/2012] [Indexed: 12/17/2022]
Abstract
PURPOSE In this study, we sought to investigate the feasibility of turbo fast three-dimensional (3D) black-blood imaging by combining a 3D motion-sensitizing driven equilibrium rapid gradient echo sequence with compressed sensing. METHODS A pseudo-centric phase encoding order was developed for compressed sensing-3D motion-sensitizing driven equilibrium rapid gradient echo to suppress flow signal in undersampled 3D k-space. Nine healthy volunteers were recruited for this study. Signal-to-tissue ratio (STR), contrast-to-tissue ratio (CTR) and CTR efficiency (CTReff) between fully sampled and undersampled images were calculated and compared in seven subjects. Moreover, isotropic high-resolution images using different compressed sensing acceleration factors were evaluated in two other subjects. RESULTS Wall-lumen STR and CTR were comparable between the undersampled and the fully sampled images, while significant improvement of CTReff was achieved in the undersampled images. At an isotropic high spatial resolution of 0.7 × 0.7 × 0.7 mm³, all undersampled images exhibited a similar level of flow suppression efficiency and capability of delineating the outer vessel wall boundary and lumen-wall interface, when compared with the fully sampled images. CONCLUSION The proposed turbo fast compressed sensing 3D black-blood imaging technique improves scan efficiency without sacrificing flow suppression efficiency and vessel wall image quality. It could be a valuable tool for rapid 3D vessel wall imaging.
|
34
|
Abstract
When obtaining samples for population genetic studies, it is essential that the sampling is random. For Drosophila, one of the crucial steps in sampling experimental flies is the collection of eggs. Here an egg collection method is presented, which randomizes the eggs in a water column and diminishes environmental variance. This method was compared with a traditional egg collection method where eggs are collected directly from the medium. Within each method the observed and expected standard deviations of egg-to-adult viability were compared, whereby the difference in the randomness of the samples between the two methods was assessed. The method presented here was superior to the traditional method. Only 14% of the samples had a standard deviation higher than expected, as compared with 58% in the traditional method. To reduce bias in the estimation of the variance and the mean of a trait and to obtain a representative collection of genotypes, the method presented here is strongly recommended when collecting eggs from Drosophila.
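The observed-versus-expected comparison behind the method is simple to express: if eggs are truly randomized across samples, survivor counts are approximately binomial, so the standard deviation of viability across vials should be close to √(p(1−p)/n). The counts below are invented to illustrate the diagnostic, not taken from the paper.

```python
import math
import statistics

def sd_ratio(survivors, n_eggs):
    """Ratio of the observed SD of viability across samples to the SD
    expected under binomial (i.e., fully randomized) sampling."""
    props = [s / n_eggs for s in survivors]
    p = statistics.mean(props)
    expected_sd = math.sqrt(p * (1 - p) / n_eggs)
    return statistics.stdev(props) / expected_sd

# Illustrative survivor counts for 10 vials of 50 eggs each (hypothetical data):
well_mixed = [41, 38, 40, 43, 39, 42, 40, 37, 41, 40]   # water-column method
clumped    = [48, 30, 45, 28, 49, 33, 47, 29, 46, 31]   # environmental variance
print(round(sd_ratio(well_mixed, 50), 2), round(sd_ratio(clumped, 50), 2))
```

A ratio near 1 (or below) is consistent with random sampling; a ratio well above 1, as in the clumped vials, signals extra-binomial variance of the kind the traditional collection method produces.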
|
35
|
Abstract
Compressed sensing is a processing method that significantly reduces the number of measurements needed to accurately resolve signals in many fields of science and engineering. We develop a two-dimensional variant of compressed sensing for multidimensional spectroscopy and apply it to experimental data. For the model system of atomic rubidium vapor, we find that compressed sensing provides an order-of-magnitude (about 10-fold) improvement in spectral resolution along each dimension, as compared to a conventional discrete Fourier transform, using the same data set. More attractive is that compressed sensing allows for random undersampling of the experimental data, down to less than 5% of the experimental data set, with essentially no loss in spectral resolution. We believe that by combining powerful resolution with ease of use, compressed sensing can be a powerful tool for the analysis and interpretation of ultrafast spectroscopy data.
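The core claim, sharp spectral features recovered from heavily undersampled data, can be illustrated with a one-dimensional sparse-recovery toy (orthogonal matching pursuit over a cosine dictionary; a generic sketch, not the authors' two-dimensional reconstruction): two "resonances" are located from roughly a third of the time-domain points.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 256
true_freqs = [17, 63]                     # two sharp "resonances" (illustrative)
t = np.arange(N)
signal = sum(np.cos(2 * np.pi * f * t / N) for f in true_freqs)

# Random undersampling: keep ~35% of the time-domain points.
keep = rng.random(N) < 0.35
A_full = np.cos(2 * np.pi * np.outer(t, np.arange(N // 2 + 1)) / N)  # cosine dictionary
A, y = A_full[keep], signal[keep]

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the k dictionary columns
    most correlated with the current residual."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    return sorted(support)

recovered = omp(A, y, 2)
print(recovered)
```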
|
36
|
Extraction of Network Topology From Multi-Electrode Recordings: Is there a Small-World Effect? Front Comput Neurosci 2011; 5:4. [PMID: 21344015 PMCID: PMC3036953 DOI: 10.3389/fncom.2011.00004] [Citation(s) in RCA: 82] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2010] [Accepted: 01/17/2011] [Indexed: 11/23/2022] Open
Abstract
The simultaneous recording of the activity of many neurons poses challenges for multivariate data analysis. Here, we propose a general scheme of reconstruction of the functional network from spike train recordings. Effective, causal interactions are estimated by fitting generalized linear models on the neural responses, incorporating effects of the neurons’ self-history, of input from other neurons in the recorded network and of modulation by an external stimulus. The coupling terms arising from synaptic input can be transformed by thresholding into a binary connectivity matrix which is directed. Each link between two neurons represents a causal influence from one neuron to the other, given the observation of all other neurons from the population. The resulting graph is analyzed with respect to small-world and scale-free properties using quantitative measures for directed networks. Such graph-theoretic analyses have been performed on many complex dynamic networks, including the connectivity structure between different brain areas. Only few studies have attempted to look at the structure of cortical neural networks on the level of individual neurons. Here, using multi-electrode recordings from the visual system of the awake monkey, we find that cortical networks lack scale-free behavior, but show a small but significant small-world structure. Assuming a simple distance-dependent probabilistic wiring between neurons, we find that this connectivity structure can account for all of the networks' observed small-worldness. Moreover, for multi-electrode recordings the sampling of neurons is not uniform across the population. We show that the small-worldness obtained by such localized sub-sampling overestimates the strength of the true small-world structure of the network. This bias is likely to be present in all previous experiments based on multi-electrode recordings.
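The graph measures involved can be made concrete on an undirected toy network (the paper uses directed variants): clustering coefficient C and characteristic path length L for a rewired ring lattice versus an Erdős–Rényi graph of the same size and mean degree, combined into the usual small-worldness index S = (C/C_rand)/(L/L_rand). Everything below is a generic sketch, not the authors' analysis pipeline, and the network sizes are invented.

```python
import random
from collections import deque

random.seed(7)

def edges_to_adj(n, edges):
    adj = {i: set() for i in range(n)}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    return adj

def ring_lattice(n, k, p_rewire=0.0):
    """Watts-Strogatz-style ring: each node links to its k nearest neighbours,
    with each edge rewired to a random target with probability p_rewire."""
    edges = set()
    for i in range(n):
        for d in range(1, k // 2 + 1):
            j = (i + d) % n
            if random.random() < p_rewire:
                j = random.randrange(n)
            if j != i:
                edges.add((min(i, j), max(i, j)))
    return edges_to_adj(n, edges)

def random_graph(n, mean_degree):
    p = mean_degree / (n - 1)
    edges = {(i, j) for i in range(n) for j in range(i + 1, n)
             if random.random() < p}
    return edges_to_adj(n, edges)

def clustering(adj):
    total = 0.0
    for i, nbrs in adj.items():
        nb = list(nbrs)
        if len(nb) < 2:
            continue
        links = sum(1 for a in range(len(nb)) for b in range(a + 1, len(nb))
                    if nb[b] in adj[nb[a]])
        total += links / (len(nb) * (len(nb) - 1) / 2)
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over reachable pairs (BFS from every node)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

n, k = 100, 6
ws = ring_lattice(n, k, p_rewire=0.1)
er = random_graph(n, k)
S = (clustering(ws) / clustering(er)) / (avg_path_length(ws) / avg_path_length(er))
print(round(S, 1))  # S > 1 indicates small-world structure
```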
|