1. Modeling functional cell types in spike train data. PLoS Comput Biol 2023; 19:e1011509. PMID: 37824442; PMCID: PMC10569560; DOI: 10.1371/journal.pcbi.1011509.
Abstract
A major goal of computational neuroscience is to build accurate models of the activity of neurons that can be used to interpret their function in circuits. Here, we explore using functional cell types to refine single-cell models by grouping them into functionally relevant classes. Formally, we define a hierarchical generative model for cell types, single-cell parameters, and neural responses, and then derive an expectation-maximization algorithm with variational inference that maximizes the likelihood of the neural recordings. We apply this "simultaneous" method to estimate cell types and fit single-cell models from simulated data, and find that it accurately recovers the ground truth parameters. We then apply our approach to in vitro neural recordings from neurons in mouse primary visual cortex, and find that it yields improved prediction of single-cell activity. We demonstrate that the discovered cell-type clusters are well separated and generalizable, and thus amenable to interpretation. We then compare discovered cluster memberships with locational, morphological, and transcriptomic data. Our findings reveal the potential to improve models of neural responses by explicitly allowing for shared functional properties across neurons.
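The hierarchy described above (cell types generate single-cell parameters, which generate neural responses) can be sketched with a deliberately simplified toy: each neuron's mean firing rate is drawn from one of two type-specific Gaussians, and a basic EM loop recovers the type means. This is an illustrative stand-in for the paper's richer spike-train model and variational EM, not the authors' algorithm; all parameter values are made up.

```python
import math
import random

random.seed(0)

# Toy hierarchical model: each neuron belongs to one of two functional cell
# types, and its observed mean firing rate is drawn from a type-specific
# Gaussian with unit variance. (Hypothetical values, for illustration only.)
true_means = [2.0, 8.0]
rates = [random.gauss(true_means[i % 2], 1.0) for i in range(200)]

# EM for a two-component Gaussian mixture with unit variance.
mu = [min(rates), max(rates)]   # crude initialization at the data extremes
pi = [0.5, 0.5]
for _ in range(50):
    # E-step: responsibility of each type for each neuron
    resp = []
    for x in rates:
        w = [pi[k] * math.exp(-0.5 * (x - mu[k]) ** 2) for k in range(2)]
        s = w[0] + w[1]
        resp.append([w[0] / s, w[1] / s])
    # M-step: update mixing weights and type means
    for k in range(2):
        nk = sum(r[k] for r in resp)
        pi[k] = nk / len(rates)
        mu[k] = sum(r[k] * x for r, x in zip(resp, rates)) / nk

print(sorted(round(m, 1) for m in mu))  # close to the true type means
```

The E-step/M-step skeleton is the same structure the abstract describes, just applied to a one-dimensional summary per neuron rather than full spike trains.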
2. Selective inference for k-means clustering. J Mach Learn Res 2023; 24:152. PMID: 38264325; PMCID: PMC10805457.
Abstract
We consider the problem of testing for a difference in means between clusters of observations identified via k-means clustering. In this setting, classical hypothesis tests lead to an inflated Type I error rate. In recent work, Gao et al. (2022) considered a related problem in the context of hierarchical clustering. Unfortunately, their solution is tailored to hierarchical clustering and thus cannot be applied in the setting of k-means clustering. In this paper, we propose a p-value that conditions on all of the intermediate clustering assignments in the k-means algorithm. We show that the p-value controls the selective Type I error for a test of the difference in means between a pair of clusters obtained using k-means clustering in finite samples, and can be efficiently computed. We apply our proposal to hand-written digits data and to single-cell RNA-sequencing data.
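The inflated Type I error described above is easy to reproduce: cluster null data with k-means, then naively test the difference between cluster means. A minimal sketch (not the selective p-value the paper proposes; the simulation settings are arbitrary):

```python
import math
import random
import statistics

random.seed(1)

# All observations come from a single N(0, 1) distribution, so there is no
# true difference in means between any two groups.
x = [random.gauss(0.0, 1.0) for _ in range(100)]

# Minimal 1-D k-means with k = 2.
centers = [min(x), max(x)]
for _ in range(25):
    clusters = [[], []]
    for v in x:
        clusters[0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1].append(v)
    centers = [statistics.mean(c) for c in clusters]

# Naive two-sample z-statistic comparing the two discovered clusters.
m0, m1 = statistics.mean(clusters[0]), statistics.mean(clusters[1])
se = math.sqrt(statistics.variance(clusters[0]) / len(clusters[0])
               + statistics.variance(clusters[1]) / len(clusters[1]))
z = (m1 - m0) / se
print(f"z = {z:.1f}")  # huge |z|: the naive test is badly anti-conservative
```

Because k-means was allowed to pick the split, the two "clusters" differ enormously even under the global null, which is exactly the double-dipping problem the paper's conditional p-value corrects.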
3. Quantifying uncertainty in spikes estimated from calcium imaging data. Biostatistics 2023; 24:481-501. PMID: 34654923; PMCID: PMC10449000; DOI: 10.1093/biostatistics/kxab034.
Abstract
In recent years, a number of methods have been proposed to estimate the times at which a neuron spikes on the basis of calcium imaging data. However, quantifying the uncertainty associated with these estimated spikes remains an open problem. We consider a simple and well-studied model for calcium imaging data, which states that calcium decays exponentially in the absence of a spike, and instantaneously increases when a spike occurs. We wish to test the null hypothesis that the neuron did not spike, i.e., that there was no increase in calcium, at a particular timepoint at which a spike was estimated. In this setting, classical hypothesis tests lead to inflated Type I error, because the spike was estimated on the same data used for testing. To overcome this problem, we propose a selective inference approach. We describe an efficient algorithm to compute finite-sample p-values that control selective Type I error, and confidence intervals with correct selective coverage, for spikes estimated using a recent proposal from the literature. We apply our proposal in simulation and on calcium imaging data from the spikefinder challenge.
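The generative model in this abstract (exponential calcium decay, instantaneous jump when a spike occurs, noisy fluorescence observations) can be written in a few lines. The parameter values below are illustrative choices, not taken from the paper:

```python
import random

random.seed(2)

# Generative model from the abstract: calcium decays exponentially between
# spikes (factor gamma per timestep) and jumps instantaneously at a spike;
# fluorescence is calcium plus Gaussian noise. Values are hypothetical.
gamma, jump, noise_sd = 0.95, 1.0, 0.1
spikes = {50, 120, 121, 300}

calcium, fluorescence = [], []
c = 0.0
for t in range(400):
    c = gamma * c + (jump if t in spikes else 0.0)
    calcium.append(c)
    fluorescence.append(c + random.gauss(0.0, noise_sd))

# Under the null of "no spike at time t", calcium satisfies
# c_t = gamma * c_{t-1} exactly; a spike appears as a positive jump
# relative to that prediction.
increase = calcium[50] - gamma * calcium[49]
print(round(increase, 3))  # equals the jump size at a true spike time
```

The paper's test asks, for an *estimated* spike time, whether the noisy fluorescence supports such a positive jump, while conditioning on the fact that the time was chosen by looking at the same data.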
4. Highly Parallel Tissue Grafting for Combinatorial In Vivo Screening. bioRxiv 2023:2023.03.16.533029. PMID: 36993278; PMCID: PMC10055160; DOI: 10.1101/2023.03.16.533029.
Abstract
Material- and cell-based technologies such as engineered tissues hold great promise as human therapies. Yet, the development of many of these technologies becomes stalled at the stage of pre-clinical animal studies due to the tedious and low-throughput nature of in vivo implantation experiments. We introduce a 'plug and play' in vivo screening array platform called Highly Parallel Tissue Grafting (HPTG). HPTG enables parallelized in vivo screening of 43 three-dimensional microtissues within a single 3D printed device. Using HPTG, we screen microtissue formations with varying cellular and material components and identify formulations that support vascular self-assembly, integration and tissue function. Our studies highlight the importance of combinatorial studies that vary cellular and material formulation variables concomitantly, by revealing that inclusion of stromal cells can "rescue" vascular self-assembly in a manner that is material-dependent. HPTG provides a route for accelerating pre-clinical progress for diverse medical applications including tissue therapy, cancer biomedicine, and regenerative medicine.
5. Modeling functional cell types in spike train data. bioRxiv 2023:2023.02.28.530327. PMID: 36909648; PMCID: PMC10002678; DOI: 10.1101/2023.02.28.530327.
Abstract
A major goal of computational neuroscience is to build accurate models of the activity of neurons that can be used to interpret their function in circuits. Here, we explore using functional cell types to refine single-cell models by grouping them into functionally relevant classes. Formally, we define a hierarchical generative model for cell types, single-cell parameters, and neural responses, and then derive an expectation-maximization algorithm with variational inference that maximizes the likelihood of the neural recordings. We apply this "simultaneous" method to estimate cell types and fit single-cell models from simulated data, and find that it accurately recovers the ground truth parameters. We then apply our approach to in vitro neural recordings from neurons in mouse primary visual cortex, and find that it yields improved prediction of single-cell activity. We demonstrate that the discovered cell-type clusters are well separated and generalizable, and thus amenable to interpretation. We then compare discovered cluster memberships with locational, morphological, and transcriptomic data. Our findings reveal the potential to improve models of neural responses by explicitly allowing for shared functional properties across neurons.
6. Tree-Values: Selective Inference for Regression Trees. J Mach Learn Res 2022; 23:305. PMID: 38481523; PMCID: PMC10933572.
Abstract
We consider conducting inference on the output of the Classification and Regression Tree (CART) (Breiman et al., 1984) algorithm. A naive approach to inference that does not account for the fact that the tree was estimated from the data will not achieve standard guarantees, such as Type I error rate control and nominal coverage. Thus, we propose a selective inference framework for conducting inference on a fitted CART tree. In a nutshell, we condition on the fact that the tree was estimated from the data. We propose a test for the difference in the mean response between a pair of terminal nodes that controls the selective Type I error rate, and a confidence interval for the mean response within a single terminal node that attains the nominal selective coverage. Efficient algorithms for computing the necessary conditioning sets are provided. We apply these methods in simulation and to a dataset involving the association between portion control interventions and caloric intake.
7.
Abstract
Calcium imaging has led to discoveries about neural correlates of behavior in subcortical neurons, including dopamine (DA) neurons. However, spike inference methods have not been tested in most populations of subcortical neurons. To address this gap, we simultaneously performed calcium imaging and electrophysiology in DA neurons in brain slices and applied a recently developed spike inference algorithm to the GCaMP fluorescence. This revealed that individual spikes can be inferred accurately in this population. Next, we inferred spikes in vivo from calcium imaging from these neurons during Pavlovian conditioning, as well as during navigation in virtual reality. In both cases, we quantitatively recapitulated previous in vivo electrophysiological observations. Our work provides a validated approach to infer spikes from calcium imaging in DA neurons and implies that aspects of both tonic and phasic spike patterns can be recovered.
8. Adaptive nonparametric regression with the K-nearest neighbour fused lasso. Biometrika 2020; 107:293-310. PMID: 32454528; DOI: 10.1093/biomet/asz071.
Abstract
The fused lasso, also known as total-variation denoising, is a locally adaptive function estimator over a regular grid of design points. In this article, we extend the fused lasso to settings in which the points do not occur on a regular grid, leading to a method for nonparametric regression. This approach, which we call the K-nearest-neighbours fused lasso, involves computing the K-nearest-neighbours graph of the design points and then performing the fused lasso over this graph. We show that this procedure has a number of theoretical advantages over competing methods: specifically, it inherits local adaptivity from its connection to the fused lasso, and it inherits manifold adaptivity from its connection to the K-nearest-neighbours approach. In a simulation study and an application to flu data, we show that excellent results are obtained. For completeness, we also study an estimator that makes use of an ε-graph rather than a K-nearest-neighbours graph and contrast it with the K-nearest-neighbours fused lasso.
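The first step of the procedure described above, computing the K-nearest-neighbours graph of the design points, can be sketched directly; the fused-lasso solve over the resulting graph is omitted, and the point set and choice of K below are arbitrary:

```python
import random

random.seed(3)

# Irregular design points in the unit square (hypothetical data).
points = [(random.random(), random.random()) for _ in range(30)]
K = 3

def knn_edges(pts, k):
    """Return the undirected edge set of the k-nearest-neighbours graph."""
    edges = set()
    for i, p in enumerate(pts):
        # Sort the other points by squared Euclidean distance to point i.
        dists = sorted(
            ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2, j)
            for j, q in enumerate(pts) if j != i
        )
        # Connect i to its k nearest neighbours (undirected, deduplicated).
        for _, j in dists[:k]:
            edges.add((min(i, j), max(i, j)))
    return edges

E = knn_edges(points, K)
# Each point contributes at most K edges, and mutual neighbours are merged,
# so |E| lies between n*K/2 and n*K.
print(len(E))
```

The fused lasso is then applied over this graph, penalizing the absolute difference in fitted values across each edge so that the estimate is piecewise constant over connected regions of the design.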
9. Multi-scale network regression for brain-phenotype associations. Hum Brain Mapp 2020; 41:2553-2566. PMID: 32216125; PMCID: PMC7383128; DOI: 10.1002/hbm.24982.
Abstract
Brain networks are increasingly characterized at different scales, including summary statistics, community connectivity, and individual edges. While research relating brain networks to behavioral measurements has yielded many insights into brain-phenotype relationships, common analytical approaches only consider network information at a single scale. Here, we designed, implemented, and deployed Multi-Scale Network Regression (MSNR), a penalized multivariate approach for modeling brain networks that explicitly respects both edge- and community-level information by assuming a low rank and sparse structure, both encouraging less complex and more interpretable modeling. Capitalizing on a large neuroimaging cohort (n = 1,051), we demonstrate that MSNR recapitulates interpretable and statistically significant connectivity patterns associated with brain development, sex differences, and motion-related artifacts. Compared to single-scale methods, MSNR achieves a balance between prediction performance and model complexity, with improved interpretability. Together, by jointly exploiting both edge- and community-level information, MSNR has the potential to yield novel insights into brain-behavior relationships.
10. A large-scale standardized physiological survey reveals functional organization of the mouse visual cortex. Nat Neurosci 2020; 23:138-151. PMID: 31844315; PMCID: PMC6948932; DOI: 10.1038/s41593-019-0550-9.
Abstract
To understand how the brain processes sensory information to guide behavior, we must know how stimulus representations are transformed throughout the visual cortex. Here we report an open, large-scale physiological survey of activity in the awake mouse visual cortex: the Allen Brain Observatory Visual Coding dataset. This publicly available dataset includes the cortical activity of nearly 60,000 neurons from six visual areas, four layers, and 12 transgenic mouse lines in a total of 243 adult mice, in response to a systematic set of visual stimuli. We classify neurons on the basis of joint reliabilities to multiple stimuli and validate this functional classification with models of visual responses. While most classes are characterized by responses to specific subsets of the stimuli, the largest class is not reliably responsive to any of the stimuli and becomes progressively larger in higher visual areas. These classes reveal a functional organization wherein putative dorsal areas show specialization for visual motion signals.
11. Discussion of 'Gene hunting with hidden Markov model knockoffs'. Biometrika 2019; 106:23-26. PMID: 30799876; PMCID: PMC6373413; DOI: 10.1093/biomet/asy061.
12. Fast nonconvex deconvolution of calcium imaging data. Biostatistics 2019; 21:709-726. PMID: 30753436; DOI: 10.1093/biostatistics/kxy083.
Abstract
Calcium imaging data promises to transform the field of neuroscience by making it possible to record from large populations of neurons simultaneously. However, determining the exact moment in time at which a neuron spikes, from a calcium imaging data set, amounts to a non-trivial deconvolution problem which is of critical importance for downstream analyses. While a number of formulations have been proposed for this task in the recent literature, in this article, we focus on a formulation recently proposed in Jewell and Witten (2018, "Exact spike train inference via ℓ0 optimization", The Annals of Applied Statistics 12(4), 2457-2482) that can accurately estimate not just the spike rate, but also the specific times at which the neuron spikes. We develop a much faster algorithm that can be used to deconvolve a fluorescence trace of 100,000 timesteps in less than a second. Furthermore, we present a modification to this algorithm that precludes the possibility of a "negative spike". We demonstrate the performance of this algorithm for spike deconvolution on calcium imaging datasets that were recently released as part of the spikefinder challenge (http://spikefinder.codeneuro.org/). The algorithm presented in this article was used in the Allen Institute for Brain Science's "platform paper" to decode neural activity from the Allen Brain Observatory; this is the main scientific paper in which their data resource is presented. Our C++ implementation, along with R and python wrappers, is publicly available. R code is available on CRAN and GitHub, and python wrappers are available on GitHub; see https://github.com/jewellsean/FastLZeroSpikeInference.
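The observation model underlying this work, calcium decaying by a factor gamma per timestep and jumping when the neuron spikes, suggests a naive baseline detector: flag timesteps where the trace rises more than the decay predicts. The sketch below illustrates the model only; it is not the ℓ0 dynamic-programming algorithm of the paper, and the parameters and threshold are arbitrary choices.

```python
import random

random.seed(4)

# Simulate a fluorescence trace under the AR(1) decay-and-jump model.
gamma, noise_sd = 0.96, 0.05
true_spikes = {40, 90, 91, 160}

c, trace = 0.0, []
for t in range(200):
    c = gamma * c + (1.0 if t in true_spikes else 0.0)
    trace.append(c + random.gauss(0.0, noise_sd))

# Naive detector: a spike at time t should make the trace exceed the
# decay prediction gamma * trace[t-1] by roughly the jump size.
threshold = 0.5
detected = {t for t in range(1, len(trace))
            if trace[t] - gamma * trace[t - 1] > threshold}
print(sorted(detected))  # recovers the true spike times on this easy example
```

On noisier data this thresholding breaks down; the point of the ℓ0 formulation is to trade off data fit against the number of spikes globally, rather than making an independent decision at each timestep.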
13. Corrigendum: A systematic comparison reveals substantial differences in chromosomal versus episomal encoding of enhancer activity. Genome Res 2018; 28:766.3. PMID: 29717003; DOI: 10.1101/gr.237321.118.
14.
Abstract
We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.
15.
Abstract
In classical statistics, much thought has been put into experimental design and data collection. In the high-dimensional setting, however, experimental design has been less of a focus. In this paper, we stress the importance of collecting multiple replicates for each subject in this setting. We consider learning the structure of a graphical model with latent variables, under the assumption that these variables take a constant value across replicates within each subject. By collecting multiple replicates for each subject, we are able to estimate the conditional dependence relationships among the observed variables given the latent variables. To test the null hypothesis of conditional independence between two observed variables, we propose a pairwise decorrelated score test. Theoretical guarantees are established for parameter estimation and for this test. We show that our proposal is able to estimate latent variable graphical models more accurately than some existing proposals, and apply the proposed method to a brain imaging dataset.
16. A systematic comparison reveals substantial differences in chromosomal versus episomal encoding of enhancer activity. Genome Res 2016; 27:38-52. PMID: 27831498; PMCID: PMC5204343; DOI: 10.1101/gr.212092.116.
Abstract
Candidate enhancers can be identified on the basis of chromatin modifications, the binding of chromatin modifiers and transcription factors and cofactors, or chromatin accessibility. However, validating such candidates as bona fide enhancers requires functional characterization, typically achieved through reporter assays that test whether a sequence can increase expression of a transcriptional reporter via a minimal promoter. A longstanding concern is that reporter assays are mainly implemented on episomes, which are thought to lack physiological chromatin. However, the magnitude and determinants of differences in cis-regulation for regulatory sequences residing in episomes versus chromosomes remain almost completely unknown. To address this systematically, we developed and applied a novel lentivirus-based massively parallel reporter assay (lentiMPRA) to directly compare the functional activities of 2236 candidate liver enhancers in an episomal versus a chromosomally integrated context. We find that the activities of chromosomally integrated sequences are substantially different from the activities of the identical sequences assayed on episomes, and furthermore are correlated with different subsets of ENCODE annotations. The results of chromosomally based reporter assays are also more reproducible and more strongly predictable by both ENCODE annotations and sequence-based models. With a linear model that combines chromatin annotations and sequence information, we achieve a Pearson's R² of 0.362 for predicting the results of chromosomally integrated reporter assays. This level of prediction is better than with either chromatin annotations or sequence information alone and also outperforms predictive models of episomal assays. Our results have broad implications for how cis-regulatory elements are identified, prioritized and functionally validated.
17.
Abstract
We consider the problem of estimating the parameters in a pairwise graphical model in which the distribution of each node, conditioned on the others, may have a different exponential family form. We identify restrictions on the parameter space required for the existence of a well-defined joint density, and establish the consistency of the neighbourhood selection approach for graph reconstruction in high dimensions when the true underlying graph is sparse. Motivated by our theoretical results, we investigate the selection of edges between nodes whose conditional distributions take different parametric forms, and show that efficiency can be gained if edge estimates obtained from the regressions of particular nodes are used to reconstruct the graph. These results are illustrated with examples of Gaussian, Bernoulli, Poisson and exponential distributions. Our theoretical findings are corroborated by evidence from simulation studies.
18. A general framework for estimating the relative pathogenicity of human genetic variants. Nat Genet 2014; 46:310-5. PMID: 24487276; PMCID: PMC3992975; DOI: 10.1038/ng.2892.
Abstract
Current methods for annotating and interpreting human genetic variation tend to exploit a single information type (for example, conservation) and/or are restricted in scope (for example, to missense changes). Here we describe Combined Annotation-Dependent Depletion (CADD), a method for objectively integrating many diverse annotations into a single measure (C score) for each variant. We implement CADD as a support vector machine trained to differentiate 14.7 million high-frequency human-derived alleles from 14.7 million simulated variants. We precompute C scores for all 8.6 billion possible human single-nucleotide variants and enable scoring of short insertions and deletions. C scores correlate with allelic diversity, annotations of functionality, pathogenicity, disease severity, experimentally measured regulatory effects and complex trait associations, and they highly rank known pathogenic variants within individual genomes. The ability of CADD to prioritize functional, deleterious and pathogenic variants across many functional categories, effect sizes and genetic architectures is unmatched by any current single-annotation method.
19.
Abstract
In the high-dimensional regression setting, the elastic net produces a parsimonious model by shrinking all coefficients towards the origin. However, in certain settings, this behavior might not be desirable: if some features are highly correlated with each other and associated with the response, then we might wish to perform less shrinkage on the coefficients corresponding to that subset of features. We propose the cluster elastic net, which selectively shrinks the coefficients for such variables towards each other, rather than towards the origin. Instead of assuming that the clusters are known a priori, the cluster elastic net infers clusters of features from the data, on the basis of correlation among the variables as well as association with the response. These clusters are then used in order to more accurately perform regression. We demonstrate the theoretical advantages of our proposed approach, and explore its performance in a simulation study, and in an application to HIV drug resistance data. Supplementary Materials are available online.
20.
Abstract
We consider the task of simultaneously clustering the rows and columns of a large transposable data matrix. We assume that the matrix elements are normally distributed with a bicluster-specific mean term and a common variance, and perform biclustering by maximizing the corresponding log likelihood. We apply an ℓ1 penalty to the means of the biclusters in order to obtain sparse and interpretable biclusters. Our proposal amounts to a sparse, symmetrized version of k-means clustering. We show that k-means clustering of the rows and of the columns of a data matrix can be seen as special cases of our proposal, and that a relaxation of our proposal yields the singular value decomposition. In addition, we propose a framework for biclustering based on the matrix-variate normal distribution. The performances of our proposals are demonstrated in a simulation study and on a gene expression data set. This article has supplementary material online.
21. The joint graphical lasso for inverse covariance estimation across multiple classes. J R Stat Soc Series B Stat Methodol 2013; 76:373-397. PMID: 24817823; DOI: 10.1111/rssb.12033.
Abstract
We consider the problem of estimating multiple related Gaussian graphical models from a high-dimensional data set with observations belonging to distinct classes. We propose the joint graphical lasso, which borrows strength across the classes in order to estimate multiple graphical models that share certain characteristics, such as the locations or weights of nonzero edges. Our approach is based upon maximizing a penalized log likelihood. We employ generalized fused lasso or group lasso penalties, and implement a fast ADMM algorithm to solve the corresponding convex optimization problems. The performance of the proposed method is illustrated through simulated and real data examples.
22.
Abstract
We consider the problem of performing unsupervised learning in the presence of outliers; that is, observations that do not come from the same distribution as the rest of the data. It is known that in this setting, standard approaches for unsupervised learning can yield unsatisfactory results. For instance, in the presence of severe outliers, K-means clustering will often assign each outlier to its own cluster, or alternatively may yield distorted clusters in order to accommodate the outliers. In this paper, we take a new approach to extending existing unsupervised learning techniques to accommodate outliers. Our approach is an extension of a recent proposal for outlier detection in the regression setting. We allow each observation to take on an "error" term, and we penalize the errors using a group lasso penalty in order to encourage most of the observations' errors to exactly equal zero. We show that this approach can be used in order to develop extensions of K-means clustering and principal components analysis that result in accurate outlier detection, as well as improved performance in the presence of outliers. These methods are illustrated in a simulation study and on two gene expression data sets, and connections with M-estimation are explored.
23.
Abstract
It has been claimed that most research findings are false, and it is known that large-scale studies involving omics data are especially prone to errors in design, execution, and analysis. The situation is alarming because taxpayer dollars fund a substantial amount of biomedical research, and because the publication of a research article that is later determined to be flawed can erode the credibility of an entire field, resulting in a severe and negative impact for years to come. Here, we urge the development of an online, open-access, postpublication, peer review system that will increase the accountability of scientists for the quality of their research and the ability of readers to distinguish good from sloppy science.
24. Transcriptional profiling of long non-coding RNAs and novel transcribed regions across a diverse panel of archived human cancers. Genome Biol 2012; 13:R75. PMID: 22929540; PMCID: PMC4053743; DOI: 10.1186/gb-2012-13-8-r75.
Abstract
Background: Molecular characterization of tumors has been critical for identifying important genes in cancer biology and for improving tumor classification and diagnosis. Long non-coding RNAs, as a new, relatively unstudied class of transcripts, provide a rich opportunity to identify both functional drivers and cancer-type-specific biomarkers. However, despite the potential importance of long non-coding RNAs to the cancer field, no comprehensive survey of long non-coding RNA expression across various cancers has been reported.
Results: We performed a sequencing-based transcriptional survey of both known long non-coding RNAs and novel intergenic transcripts across a panel of 64 archival tumor samples comprising 17 diagnostic subtypes of adenocarcinomas, squamous cell carcinomas and sarcomas. We identified hundreds of transcripts from among the known 1,065 long non-coding RNAs surveyed that showed variability in transcript levels between the tumor types and are therefore potential biomarker candidates. We discovered 1,071 novel intergenic transcribed regions and demonstrate that these show similar patterns of variability between tumor types. We found that many of these differentially expressed cancer transcripts are also expressed in normal tissues. One such novel transcript specifically expressed in breast tissue was further evaluated using RNA in situ hybridization on a panel of breast tumors. It was shown to correlate with low tumor grade and estrogen receptor expression, thereby representing a potentially important new breast cancer biomarker.
Conclusions: This study provides the first large survey of long non-coding RNA expression within a panel of solid cancers and also identifies a number of novel transcribed regions differentially expressed across distinct cancer types that represent candidate biomarkers for future research.
Collapse
|
25
|
Molecular signatures from omics data: from chaos to consensus. Biotechnol J 2012; 7:946-57. [PMID: 22528809 PMCID: PMC3418428 DOI: 10.1002/biot.201100305] [Citation(s) in RCA: 86] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2011] [Revised: 02/14/2012] [Accepted: 03/08/2012] [Indexed: 01/17/2023]
Abstract
In the past 15 years, new "omics" technologies have made it possible to obtain high-resolution molecular snapshots of organisms, tissues, and even individual cells at various disease states and experimental conditions. It is hoped that these developments will usher in a new era of personalized medicine in which an individual's molecular measurements are used to diagnose disease, guide therapy, and perform other tasks more accurately and effectively than is possible using standard approaches. There now exists a vast literature of reported "molecular signatures". However, despite some notable exceptions, many of these signatures have suffered from limited reproducibility in independent datasets, insufficient sensitivity or specificity to meet clinical needs, or other challenges. In this paper, we discuss the process of molecular signature discovery on the basis of omics data. In particular, we highlight potential pitfalls in the discovery process, as well as strategies that can be used to increase the odds of successful discovery. Despite the difficulties that have plagued the field of molecular signature discovery, we remain optimistic about the potential to harness the vast amounts of available omics data in order to substantially impact clinical practice.
Collapse
|
26
|
Abstract
BACKGROUND Following successful orthopaedic surgical procedures, implant removal is generally not necessary or recommended. However, patients with pain related to implants may benefit from this elective procedure. The foot and ankle may be more symptomatic from retained implants because of weight-bearing activities, shoe wear, and limited soft-tissue cushioning. In such cases, implant removal may provide good and reliable relief of symptoms. METHODS A prospective study of sixty-nine patients who underwent elective removal of symptomatic implants from the foot and ankle was undertaken to evaluate the patients' pain experience. The short-form McGill pain questionnaire was administered preoperatively and six weeks postoperatively. Postoperatively, patients were also asked whether they would repeat the procedure and whether they were satisfied with the results. RESULTS Patients reported significantly less pain following the procedure, with the average rating of pain on the visual analog scale (VAS) decreasing from 3.06 to 0.88 and the average rating of present pain intensity decreasing from 2.03 to 0.58 (p < 0.05 for both). Sixty-five percent of the patients reported no pain on either measure at six weeks postoperatively. Preoperative pain was correlated with postoperative pain (r = 0.24 and p < 0.05 for VAS, and r = 0.16 and p > 0.05 for present pain intensity). With the small sample size, preoperative and postoperative pain did not show a significant difference on the basis of implant location or patient age or sex. Ninety-four percent of patients said they would repeat the procedure under the same circumstances, and 91% of patients were satisfied with the results. CONCLUSIONS Following successful orthopaedic surgical procedures, removal of implants causing symptoms can result in pain relief and a high rate of patient satisfaction. LEVEL OF EVIDENCE Therapeutic Level IV. See Instructions for Authors for a complete description of levels of evidence.
Collapse
|
27
|
Massively parallel functional dissection of mammalian enhancers in vivo. Nat Biotechnol 2012; 30:265-70. [PMID: 22371081 PMCID: PMC3402344 DOI: 10.1038/nbt.2136] [Citation(s) in RCA: 366] [Impact Index Per Article: 30.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2011] [Accepted: 01/23/2012] [Indexed: 01/01/2023]
Abstract
The functional consequences of genetic variation in mammalian regulatory elements are poorly understood. We report the in vivo dissection of three mammalian enhancers at single-nucleotide resolution through a massively parallel reporter assay. For each enhancer, we synthesized a library of >100,000 mutant haplotypes with 2-3% divergence from the wild-type sequence. Each haplotype was linked to a unique sequence tag embedded within a transcriptional cassette. We introduced each enhancer library into mouse liver and measured the relative activities of individual haplotypes en masse by sequencing the transcribed tags. Linear regression analysis yielded highly reproducible estimates of the effect of every possible single-nucleotide change on enhancer activity. The functional consequence of most mutations was modest, with ∼22% affecting activity by >1.2-fold and ∼3% by >2-fold. Several, but not all, positions with higher effects showed evidence for purifying selection, or co-localized with known liver-associated transcription factor binding sites, demonstrating the value of empirical high-resolution functional analysis.
Collapse
|
28
|
On the assessment of statistical significance of three-dimensional colocalization of sets of genomic elements. Nucleic Acids Res 2012; 40:3849-55. [PMID: 22266657 PMCID: PMC3351188 DOI: 10.1093/nar/gks012] [Citation(s) in RCA: 44] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
A growing body of experimental evidence supports the hypothesis that the 3D structure of chromatin in the nucleus is closely linked to important functional processes, including DNA replication and gene regulation. In support of this hypothesis, several research groups have examined sets of functionally associated genomic loci, with the aim of determining whether those loci are statistically significantly colocalized. This work presents a critical assessment of two previously reported analyses, both of which used genome-wide DNA–DNA interaction data from the yeast Saccharomyces cerevisiae, and both of which rely upon a simple notion of the statistical significance of colocalization. We show that these previous analyses rely upon a faulty assumption, and we propose a correct non-parametric resampling approach to the same problem. Applying this approach to the same data set does not support the hypothesis that transcriptionally coregulated genes tend to colocalize, but strongly supports the colocalization of centromeres, and provides some evidence of colocalization of origins of early DNA replication, chromosomal breakpoints and transfer RNAs.
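The non-parametric resampling approach the abstract argues for can be sketched in a few lines. This is an illustration rather than the paper's exact procedure: the function name, the mean-pairwise-distance test statistic, and uniform sampling over candidate loci are all assumptions made here.

```python
import numpy as np

def colocalization_pvalue(dist, loci, n_resamples=10000, seed=0):
    """Permutation p-value for whether `loci` sit closer together in 3D
    than equally sized random sets of loci drawn from the same genome.

    dist : (n, n) symmetric matrix of pairwise 3D distances between loci
    loci : indices of the functionally associated set being tested
    """
    rng = np.random.default_rng(seed)
    loci = np.asarray(loci)
    k = len(loci)
    # Observed statistic: mean pairwise distance within the set.
    observed = dist[np.ix_(loci, loci)][np.triu_indices(k, 1)].mean()
    null = np.empty(n_resamples)
    for i in range(n_resamples):
        sample = rng.choice(dist.shape[0], size=k, replace=False)
        null[i] = dist[np.ix_(sample, sample)][np.triu_indices(k, 1)].mean()
    # One-sided test: small distances indicate colocalization. The +1
    # correction keeps the p-value away from an impossible exact zero.
    return (1 + np.sum(null <= observed)) / (1 + n_resamples)
```

Because the null distribution is built by resampling loci rather than by assuming independence, the test respects whatever spatial structure the distance matrix already contains.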
Collapse
|
29
|
|
30
|
Abstract
We discuss the identification of genes that are associated with an outcome in RNA sequencing and other sequence-based comparative genomic experiments. RNA-sequencing data take the form of counts, so models based on the Gaussian distribution are unsuitable. Moreover, normalization is challenging because different sequencing experiments may generate quite different total numbers of reads. To overcome these difficulties, we use a log-linear model with a new approach to normalization. We derive a novel procedure to estimate the false discovery rate (FDR). Our method can be applied to data with quantitative, two-class, or multiple-class outcomes, and the computation is fast even for large data sets. We study the accuracy of our approaches for significance calculation and FDR estimation, and we demonstrate that our method has potential advantages over existing methods that are based on a Poisson or negative binomial model. In summary, this work provides a pipeline for the significance analysis of sequencing data.
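A baseline version of such a count-based test can be sketched as follows. This is not the paper's exact normalization or FDR procedure (which the abstract reports improves on plain Poisson models); the function name, the depth-based size factors, and the plain Poisson likelihood-ratio statistic are assumptions for illustration.

```python
import numpy as np

def poisson_loglinear_lrt(counts, groups):
    """Per-gene likelihood-ratio statistics for a two-class outcome under
    a Poisson log-linear model with a simple total-count normalization.

    counts : (n_samples, n_genes) array of read counts
    groups : (n_samples,) array of 0/1 class labels
    Returns one statistic per gene; under the null it is approximately
    chi-squared with 1 degree of freedom.
    """
    counts = np.asarray(counts, dtype=float)
    groups = np.asarray(groups)
    # Size factor: each sample's share of the total sequencing depth,
    # so samples with very different read totals remain comparable.
    size = counts.sum(axis=1) / counts.sum()
    stats = np.empty(counts.shape[1])
    for j in range(counts.shape[1]):
        y = counts[:, j]
        # Null fit: one expression rate shared by both classes.
        mu0 = size * y.sum()
        # Alternative fit: a separate rate for each class.
        mu1 = np.empty_like(y)
        for g in (0, 1):
            m = groups == g
            mu1[m] = size[m] * y[m].sum() / size[m].sum()
        # The usual (mu1 - mu0) terms cancel: both fits match total counts.
        with np.errstate(divide="ignore", invalid="ignore"):
            terms = np.where(y > 0, y * np.log(mu1 / mu0), 0.0)
        stats[j] = 2 * terms.sum()
    return stats
```

Comparing each statistic to the chi-squared(1) critical values (3.84 at the 5% level, 6.63 at 1%) gives per-gene significance calls, which an FDR-estimation step would then calibrate across genes.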
Collapse
|
31
|
Abstract
We consider the supervised classification setting, in which the data consist of p features measured on n observations, each of which belongs to one of K classes. Linear discriminant analysis (LDA) is a classical method for this problem. However, in the high-dimensional setting where p ≫ n, LDA is not appropriate for two reasons. First, the standard estimate for the within-class covariance matrix is singular, and so the usual discriminant rule cannot be applied. Second, when p is large, it is difficult to interpret the classification rule obtained from LDA, since it involves all p features. We propose penalized LDA, a general approach for penalizing the discriminant vectors in Fisher's discriminant problem in a way that leads to greater interpretability. The discriminant problem is not convex, so we use a minorization-maximization approach in order to efficiently optimize it when convex penalties are applied to the discriminant vectors. In particular, we consider the use of L1 and fused lasso penalties. Our proposal is equivalent to recasting Fisher's discriminant problem as a biconvex problem. We evaluate the performances of the resulting methods on a simulation study, and on three gene expression data sets. We also survey past methods for extending LDA to the high-dimensional setting, and explore their relationships with our proposal.
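A toy version of the sparsity idea can be sketched as follows. This is emphatically not the paper's algorithm: the minorization-maximization optimization of Fisher's criterion is replaced by a diagonal within-class covariance and simple soft-thresholding, and all names here are assumptions.

```python
import numpy as np

def sparse_discriminant(X, y, lam):
    """Crude sparse discriminant vector for two classes: soft-threshold
    the standardized mean difference, zeroing out uninformative features."""
    X, y = np.asarray(X, float), np.asarray(y)
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    # Pooled within-class standard deviation per feature.
    s = np.sqrt(((X[y == 0] - m0) ** 2).sum(0) + ((X[y == 1] - m1) ** 2).sum(0))
    s /= np.sqrt(len(y) - 2)
    s[s == 0] = s[s > 0].min()  # guard against constant features
    d = (m1 - m0) / s
    # Lasso-style soft-threshold: small standardized differences vanish.
    return np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)

def classify(X, y, lam, Xnew):
    """Assign each new point to the nearer projected class mean."""
    w = sparse_discriminant(X, y, lam)
    z = np.asarray(Xnew, float) @ w
    c0 = X[y == 0].mean(0) @ w
    c1 = X[y == 1].mean(0) @ w
    return (np.abs(z - c1) < np.abs(z - c0)).astype(int)
```

The resulting rule involves only the surviving features, which is exactly the interpretability gain the abstract describes for the full method.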
Collapse
|
32
|
|
33
|
Abstract
We consider the problem of clustering observations using a potentially large set of features. One might expect that the true underlying clusters present in the data differ only with respect to a small fraction of the features, and will be missed if one clusters the observations using the full set of features. We propose a novel framework for sparse clustering, in which one clusters the observations using an adaptively chosen subset of the features. The method uses a lasso-type penalty to select the features. We use this framework to develop simple methods for sparse K-means and sparse hierarchical clustering. A single criterion governs both the selection of the features and the resulting clusters. These approaches are demonstrated on simulated data and on genomic data sets.
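The alternating scheme for sparse K-means can be sketched as below. This is a simplified illustration, not the published algorithm: the tuning parameter s bounds the L1 norm of a unit-L2, nonnegative feature-weight vector (so s must be at least 1), and details such as the principled choice of s are omitted.

```python
import numpy as np

def sparse_kmeans(X, k, s, n_iter=20, seed=0):
    """Sketch of lasso-penalized sparse K-means: alternate (a) K-means on
    feature-weighted data and (b) a closed-form update of the feature
    weights w (w >= 0, ||w||_2 = 1, ||w||_1 <= s) that maximizes the
    weighted between-cluster sum of squares."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    w = np.full(p, 1 / np.sqrt(p))
    labels = rng.integers(k, size=n)
    for _ in range(n_iter):
        # (a) Standard K-means (Lloyd iterations) on weighted features.
        Xw = X * np.sqrt(w)
        for _ in range(10):
            centers = np.array([
                Xw[labels == j].mean(axis=0) if np.any(labels == j)
                else Xw[rng.integers(n)]          # reseed empty clusters
                for j in range(k)
            ])
            labels = np.argmin(((Xw[:, None, :] - centers) ** 2).sum(-1), axis=1)
        # (b) Per-feature between-cluster sum of squares a_j = TSS_j - WSS_j.
        tss = ((X - X.mean(axis=0)) ** 2).sum(axis=0)
        wss = np.zeros(p)
        for j in range(k):
            m = labels == j
            if m.any():
                wss += ((X[m] - X[m].mean(axis=0)) ** 2).sum(axis=0)
        w = _l1_l2_project(np.maximum(tss - wss, 0.0), s)
    return labels, w

def _l1_l2_project(a, s):
    """w maximizing w.a subject to w >= 0, ||w||_2 <= 1, ||w||_1 <= s,
    via soft-thresholding with a binary search on the threshold."""
    def cand(delta):
        v = np.maximum(a - delta, 0.0)
        nrm = np.linalg.norm(v)
        return v / nrm if nrm > 0 else v
    w = cand(0.0)
    if w.sum() <= s:
        return w
    lo, hi = 0.0, a.max()
    for _ in range(50):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if cand(mid).sum() > s else (lo, mid)
    return cand(hi)
```

A single penalty level s thus governs both which features get nonzero weight and the clustering that results, mirroring the "single criterion" the abstract describes.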
Collapse
|
34
|
Discovery of molecular subtypes in leiomyosarcoma through integrative molecular profiling. Oncogene 2010; 29:845-54. [PMID: 19901961 PMCID: PMC2820592 DOI: 10.1038/onc.2009.381] [Citation(s) in RCA: 110] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2009] [Revised: 08/14/2009] [Accepted: 09/04/2009] [Indexed: 01/16/2023]
Abstract
Leiomyosarcoma (LMS) is a soft tissue tumor with a significant degree of morphologic and molecular heterogeneity. We used integrative molecular profiling to discover and characterize molecular subtypes of LMS. Gene expression profiling was performed on 51 LMS samples. Unsupervised clustering showed three reproducible LMS clusters. Array comparative genomic hybridization (aCGH) was performed on 20 LMS samples and demonstrated that the molecular subtypes defined by gene expression showed distinct genomic changes. Tumors from the 'muscle-enriched' cluster showed significantly increased copy number changes (P=0.04). A majority of the muscle-enriched cases showed loss at 16q24, which contains the Fanconi anemia complementation group A gene, known to have an important role in DNA repair, and loss at 1p36, which contains PRDM16, whose loss promotes muscle differentiation. Immunohistochemistry (IHC) was performed on LMS tissue microarrays (n=377) for five markers with high levels of messenger RNA in the muscle-enriched cluster (ACTG2, CASQ2, SLMAP, CFL2 and MYLK) and showed significantly correlated expression of the five proteins (all pairwise P<0.005). Expression of the five markers was associated with improved disease-specific survival in a multivariate Cox regression analysis (P<0.04). In this analysis that combined gene expression profiling, aCGH and IHC, we characterized distinct molecular LMS subtypes, provided insight into their pathogenesis, and identified prognostic biomarkers.
Collapse
|
35
|
3'-end sequencing for expression quantification (3SEQ) from archival tumor samples. PLoS One 2010; 5:e8768. [PMID: 20098735 PMCID: PMC2808244 DOI: 10.1371/journal.pone.0008768] [Citation(s) in RCA: 115] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2009] [Accepted: 12/21/2009] [Indexed: 01/04/2023] Open
Abstract
Gene expression microarrays are the most widely used technique for genome-wide expression profiling. However, microarrays do not perform well on formalin fixed paraffin embedded tissue (FFPET). Consequently, microarrays cannot be effectively utilized to perform gene expression profiling on the vast majority of archival tumor samples. To address this limitation of gene expression microarrays, we designed a novel procedure (3′-end sequencing for expression quantification (3SEQ)) for gene expression profiling from FFPET using next-generation sequencing. We performed gene expression profiling by 3SEQ and microarray on both frozen tissue and FFPET from two soft tissue tumors (desmoid type fibromatosis (DTF) and solitary fibrous tumor (SFT)) (total n = 23 samples, which were each profiled by at least one of the four platform-tissue preparation combinations). Analysis of 3SEQ data revealed many genes differentially expressed between the tumor types (FDR<0.01) on both the frozen tissue (∼9.6K genes) and FFPET (∼8.1K genes). Analysis of microarray data from frozen tissue revealed fewer differentially expressed genes (∼4.64K), and analysis of microarray data on FFPET revealed very few (69) differentially expressed genes. Functional gene set analysis of 3SEQ data from both frozen tissue and FFPET identified biological pathways known to be important in DTF and SFT pathogenesis and suggested several additional candidate oncogenic pathways in these tumors. These findings demonstrate that 3SEQ is an effective technique for gene expression profiling from archival tumor samples and may facilitate significant advances in translational cancer research.
Collapse
|
36
|
Abstract
In recent years, breakthroughs in biomedical technology have led to a wealth of data in which the number of features (for instance, genes on which expression measurements are available) exceeds the number of observations (e.g. patients). Sometimes survival outcomes are also available for those same observations. In this case, one might be interested in (a) identifying features that are associated with survival (in a univariate sense), and (b) developing a multivariate model for the relationship between the features and survival that can be used to predict survival in a new observation. Due to the high dimensionality of this data, most classical statistical methods for survival analysis cannot be applied directly. Here, we review a number of methods from the literature that address these two problems.
Collapse
|
37
|
|
38
|
Covariance-regularized regression and classification for high-dimensional problems. J R Stat Soc Series B Stat Methodol 2009; 71:615-636. [PMID: 20084176 DOI: 10.1111/j.1467-9868.2009.00699.x] [Citation(s) in RCA: 141] [Impact Index Per Article: 9.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
In recent years, many methods have been developed for regression in high-dimensional settings. We propose covariance-regularized regression, a family of methods that use a shrunken estimate of the inverse covariance matrix of the features in order to achieve superior prediction. An estimate of the inverse covariance matrix is obtained by maximizing its log likelihood, under a multivariate normal model, subject to a constraint on its elements; this estimate is then used to estimate coefficients for the regression of the response onto the features. We show that ridge regression, the lasso, and the elastic net are special cases of covariance-regularized regression, and we demonstrate that certain previously unexplored forms of covariance-regularized regression can outperform existing methods in a range of situations. The covariance-regularized regression framework is extended to generalized linear models and linear discriminant analysis, and is used to analyze gene expression data sets with multiple class and survival outcomes.
Collapse
|
39
|
A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics 2009; 10:515-34. [PMID: 19377034 DOI: 10.1093/biostatistics/kxp008] [Citation(s) in RCA: 700] [Impact Index Per Article: 46.7] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
We present a penalized matrix decomposition (PMD), a new framework for computing a rank-K approximation for a matrix. We approximate the matrix X as X̂ = ∑_{k=1}^K d_k u_k v_k^T, where d_k, u_k, and v_k minimize the squared Frobenius norm of X - X̂, subject to penalties on u_k and v_k. This results in a regularized version of the singular value decomposition. Of particular interest is the use of L1 penalties on u_k and v_k, which yields a decomposition of X using sparse vectors. We show that when the PMD is applied using an L1 penalty on v_k but not on u_k, a method for sparse principal components results. In fact, this yields an efficient algorithm for the "SCoTLASS" proposal (Jolliffe and others 2003) for obtaining sparse principal components. This method is demonstrated on a publicly available gene expression data set. We also establish connections between the SCoTLASS method for sparse principal component analysis and the method of Zou and others (2006). In addition, we show that when the PMD is applied to a cross-products matrix, it results in a method for penalized canonical correlation analysis (CCA). We apply this penalized CCA method to simulated data and to a genomic data set consisting of gene expression and DNA copy number measurements on the same set of samples.
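The rank-1 case with an L1 penalty on v only (the sparse-principal-components special case described above) can be sketched as follows. The function name and the SVD warm start are assumptions made here; the full method handles rank K and penalties on u as well.

```python
import numpy as np

def pmd_rank1(X, c, n_iter=50):
    """Rank-1 penalized matrix decomposition with an L1 penalty on v only,
    i.e. a single sparse principal component: maximize u'Xv subject to
    ||u||_2 <= 1, ||v||_2 <= 1, ||v||_1 <= c (requires c >= 1), by
    alternating closed-form updates in u and v."""
    def sparse_unit(z, c):
        # Soft-threshold z then L2-normalize; binary search on the
        # threshold so the L1 norm of the result comes in under c.
        def cand(delta):
            v = np.sign(z) * np.maximum(np.abs(z) - delta, 0.0)
            nrm = np.linalg.norm(v)
            return v / nrm if nrm > 0 else v
        v = cand(0.0)
        if np.abs(v).sum() <= c:
            return v
        lo, hi = 0.0, np.abs(z).max()
        for _ in range(50):
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if np.abs(cand(mid)).sum() > c else (lo, mid)
        return cand(hi)

    v = np.linalg.svd(X, full_matrices=False)[2][0]  # warm start
    for _ in range(n_iter):
        u = X @ v
        u /= np.linalg.norm(u)           # unpenalized u update
        v = sparse_unit(X.T @ u, c)      # L1-penalized v update
    return u @ X @ v, u, v               # d, u, v with X ≈ d * outer(u, v)
```

Smaller values of c force more entries of v to exactly zero, which is what turns an ordinary principal component into a sparse, interpretable one.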
Collapse
|
40
|
Hierarchical maintenance of MLL myeloid leukemia stem cells employs a transcriptional program shared with embryonic rather than adult stem cells. Cell Stem Cell 2009; 4:129-40. [PMID: 19200802 DOI: 10.1016/j.stem.2008.11.015] [Citation(s) in RCA: 287] [Impact Index Per Article: 19.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/08/2008] [Revised: 09/17/2008] [Accepted: 11/25/2008] [Indexed: 01/24/2023]
Abstract
The genetic programs that promote retention of self-renewing leukemia stem cells (LSCs) at the apex of cellular hierarchies in acute myeloid leukemia (AML) are not known. In a mouse model of human AML, LSCs exhibit variable frequencies that correlate with the initiating MLL oncogene and are maintained in a self-renewing state by a transcriptional subprogram more akin to that of embryonic stem cells (ESCs) than to that of adult stem cells. The transcription/chromatin regulatory factors Myb, Hmgb3, and Cbx5 are critical components of the program and suffice for Hoxa/Meis-independent immortalization of myeloid progenitors when coexpressed, establishing the cooperative and essential role of an ESC-like LSC maintenance program ancillary to the leukemia-initiating MLL/Hox/Meis program. Enriched expression of LSC maintenance and ESC-like program genes in normal myeloid progenitors and poor-prognosis human malignancies links the frequency of aberrantly self-renewing progenitor-like cancer stem cells (CSCs) to prognosis in human cancer.
Collapse
|
41
|
Abstract
BACKGROUND Orthopaedic procedures have been reported to have the highest incidence of pain of any type of operation. There are limited studies in the literature that investigate postoperative pain. MATERIALS AND METHODS A prospective study of 98 patients undergoing orthopaedic foot and ankle operations was undertaken to evaluate their pain experience. A Short-Form McGill Pain Questionnaire (SF-MPQ) was administered preoperatively and postoperatively. RESULTS The results showed that patients who experienced pain before the operation anticipated feeling higher pain intensity immediately postoperatively. Patients, on average, experienced higher pain intensity 3 days after the operation than anticipated. The postoperative pain intensity at 3 days was the most severe, while postoperative pain intensity at 6 weeks was the least severe. Age, gender and preoperative diagnosis (acute versus chronic) did not have a significant effect on the severity of pain that patients experienced. Six weeks following the operation, the majority of patients felt no pain. In addition, the severity of preoperative pain was highly predictive of their anticipated postoperative pain and 6-week postoperative pain, and both preoperative pain and anticipated pain predicted higher immediate postoperative pain. CONCLUSION The intensity of patients' preoperative pain was predictive of the anticipated postoperative pain. Patients' preoperative pain and anticipated postoperative pain were independently predictive of the 3-day postoperative pain. The more intense a patient's preoperative pain, the more severe their postoperative pain was likely to be. Therefore, surgeons should be aware of these findings when treating postoperative pain after orthopaedic foot and ankle operations.
Collapse
|
42
|
A recoding method to improve the humoral immune response to an HIV DNA vaccine. PLoS One 2008; 3:e3214. [PMID: 18791646 PMCID: PMC2529374 DOI: 10.1371/journal.pone.0003214] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/04/2008] [Accepted: 08/26/2008] [Indexed: 11/18/2022] Open
Abstract
This manuscript describes a novel strategy to improve HIV DNA vaccine design. Employing a new information-theory-based bioinformatic algorithm, we identify a set of nucleotide motifs which are common in the coding region of HIV, but are under-represented in genes that are highly expressed in the human genome. We hypothesize that these motifs contribute to the poor protein expression of gag, pol, and env genes from the cDNAs of HIV clinical isolates. Using this approach and beginning with a codon-optimized consensus gag gene, we recode the nucleotide sequence so as to remove these motifs without modifying the amino acid sequence. Transfecting the recoded DNA sequence into a human kidney cell line results in doubling the gag protein expression level compared to the codon-optimized version. We then turn both sequences into DNA vaccines and compare the induced antibody responses in a murine model. Our sequence, which has the motifs removed, induces a five-fold increase in gag antibody response compared to the codon-optimized vaccine.
Collapse
|
43
|
|
44
|
Nonadaptive explanations for signatures of partial selective sweeps in Drosophila. Mol Biol Evol 2008; 25:1025-42. [PMID: 18199829 DOI: 10.1093/molbev/msn007] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022] Open
Abstract
A beneficial mutation that has nearly but not yet fixed in a population produces a characteristic haplotype configuration, called a partial selective sweep. Whether nonadaptive processes might generate similar haplotype configurations has not been extensively explored. Here, we consider 5 population genetic data sets taken from regions flanking high-frequency transposable elements in North American strains of Drosophila melanogaster, each of which appears to be consistent with the expectations of a partial selective sweep. We use coalescent simulations to explore whether incorporation of the species' demographic history, purifying selection against the element, or suppression of recombination caused by the element could generate putatively adaptive haplotype configurations. Whereas most of the data sets would be rejected as nonneutral under the standard neutral null model, only the data set for which there is strong external evidence in support of an adaptive transposition appears to be nonneutral under the more complex null model and in particular when demography is taken into account. High-frequency, derived mutations from a recently bottlenecked population, such as we study here, are of great interest to evolutionary genetics in the context of scans for adaptive events; we discuss the broader implications of our findings in this context.
Collapse
|
45
|
Providing location-independent access to patient clinical narratives using Web browsers and a tiered server approach. PROCEEDINGS : A CONFERENCE OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION. AMIA FALL SYMPOSIUM 1996:623-7. [PMID: 8947741 PMCID: PMC2233203] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
Health care today depends upon timely access to patient medical data and the latest medical knowledge. As we make the transition from a hospital-based organization to an integrated health care delivery system, patient care information must move throughout the organization quickly and efficiently over increasing distances. The emergence of widely-dispersed referral networks demands novel solutions to the problems of delivering patient care information to providers. We have developed a mechanism to provide location-independent access to clinical narrative reports using a multi-tiered server model and World Wide Web technologies for delivery. To successfully deploy such a system to sites separated by large distances, it is important to reduce complexity at the client site. Using a "thin client", such as a web browser, in our design facilitates deployment and support while reducing cost per user. This architecture allows the application to be updated without modification to the end-user software and eases maintenance over long distances.
Collapse
|
46
|
Genitourinary tract radiology at the Mayo Clinic: history portrayed through medical literature. Mayo Clin Proc 1995; 70:1218-20. [PMID: 7490926 DOI: 10.4065/70.12.1218] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
|
47
|
Abstract
In these times of rapid advances in radiographic imaging, intravenous urography should be performed in an optimal way. The urographic examination should involve consultation between the referring physician and the radiologist. Necessary patient information should be accessible. McClennan said "patient selection for urographic studies should be efficacious with the radiologist exerting appropriate control so that the urogram is truly a consultative imaging service integrated into the total patient management." We share this view, and it is an extension of the philosophy of practice emphasized by other leaders in uroradiology. Cost containment, new imaging technologies, risk/benefit considerations, and evolving patterns of patient care have had a significant influence on genitourinary tract imaging. In addition, current debate about contrast media, digital radiography, efficacy, and utilization will undoubtedly have an influence on imaging during the next decade. Utilization of intravenous urography has decreased significantly in the past 15 years. Our volume of examinations has declined approximately 50% since 1970. This decline in our practice is attributed to several complex factors such as previous overutilization of screening urography for hypertension; the impact of US and CT for evaluation of obstruction, retroperitoneal disease (adenopathy and fibrosis), renal failure, and renal masses; concern about contrast medium-induced renal failure; and fewer repeat studies because of improved quality of intravenous urography in general radiology practice. In addition, overutilization of urography in patients with hematuria, prostatism, history of urinary tract infection, etc, continues to be debated in the medical community. In our integrated group practice, we have also observed overutilization of "high-tech" procedures in lieu of urography for evaluation of suspected urinary tract disease. 
Swings of the pendulum are inevitable in diagnostic imaging because of evolving technology and the art of medical practice. Although some differences of opinion about the details of urographic technique and indications for urography may exist, most would agree on the philosophy of producing a high-quality urographic examination. That philosophy focuses on producing the highest quality examination in each patient so that a diagnosis of normal or abnormal can be made accurately and confidently. Failure to demonstrate the entire urinary tract is a common cause of diagnostic error and one that can largely be eliminated by careful attention to the technical details of the examination.
Collapse
|
48
|
Iatrogenic dilatation of the upper urinary tract during radiographic evaluation of patients with spinal cord injury. J Urol 1986; 135:78-82. [PMID: 3941472 DOI: 10.1016/s0022-5347(17)45523-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
Patients with upper and lower motor neuron spinal cord injuries were observed to determine whether cystography immediately before excretory urography induced iatrogenic dilatation of the upper urinary tract that was indistinguishable from true pathological dilatation. Evidence is given that such dilatation occurs. This iatrogenic dilatation is not seen in patients with normally innervated urinary tracts and appears to be caused by exaggerated bladder reflexes in patients with upper motor neuron lesions. Bladder spasms precipitated by cystographic contrast material also may create vesicoureteral obstruction and lead to dilatation of the upper urinary tract. Consequently, it is suggested that cystography should not immediately precede excretory urography. When such a sequence is necessary, room or body temperature contrast medium should be used for the cystogram, the bladder should be emptied before the excretory urogram is started and a 1-hour interval should be allowed between the 2 procedures. The findings also suggest that any factors that induce repeated or continuing bladder spasms may contribute to progressive dilatation of the upper urinary tract.
Collapse
|
49
|
Abstract
A case of pulmonary air embolism is presented demonstrating a nearly total lung perfusion defect and a matching ventilation deficit. Despite the patient's advanced age, mild chronic obstructive airway disease, and congestive heart failure, the perfusion/ventilatory (V/Q) abnormalities produced by the air emboli resolved to near completion within three days. Rapid resolution of V/Q abnormalities due to air embolism is distinct when compared to the abnormalities seen with thromboembolism, and the mechanism of the matching V/Q defects is discussed.
Collapse
|
50
|
|