51
Rockne RC, Hawkins-Daarud A, Swanson KR, Sluka JP, Glazier JA, Macklin P, Hormuth DA, Jarrett AM, Lima EABF, Tinsley Oden J, Biros G, Yankeelov TE, Curtius K, Al Bakir I, Wodarz D, Komarova N, Aparicio L, Bordyuh M, Rabadan R, Finley SD, Enderling H, Caudell J, Moros EG, Anderson ARA, Gatenby RA, Kaznatcheev A, Jeavons P, Krishnan N, Pelesko J, Wadhwa RR, Yoon N, Nichol D, Marusyk A, Hinczewski M, Scott JG. The 2019 mathematical oncology roadmap. Phys Biol 2019; 16:041005. [PMID: 30991381] [PMCID: PMC6655440] [DOI: 10.1088/1478-3975/ab1a09]
Abstract
Whether the nom de guerre is Mathematical Oncology, Computational or Systems Biology, Theoretical Biology, Evolutionary Oncology, Bioinformatics, or simply Basic Science, there is no denying that mathematics continues to play an increasingly prominent role in cancer research. Mathematical Oncology-defined here simply as the use of mathematics in cancer research-complements and overlaps with a number of other fields that rely on mathematics as a core methodology. As a result, Mathematical Oncology has a broad scope, ranging from theoretical studies to clinical trials designed with mathematical models. This Roadmap differentiates Mathematical Oncology from related fields and demonstrates specific areas of focus within this unique field of research. The dominant theme of this Roadmap is the personalization of medicine through mathematics, modelling, and simulation. This is achieved through the use of patient-specific clinical data to: develop individualized screening strategies to detect cancer earlier; make predictions of response to therapy; design adaptive, patient-specific treatment plans to overcome therapy resistance; and establish domain-specific standards to share model predictions and to make models and simulations reproducible. The cover art for this Roadmap was chosen as an apt metaphor for the beautiful, strange, and evolving relationship between mathematics and cancer.
Affiliation(s)
- Russell C Rockne
- Department of Computational and Quantitative Medicine, Division of Mathematical Oncology, City of Hope National Medical Center, Duarte, CA 91010, United States of America. Author to whom any correspondence should be addressed
52
Hoekstra AG, Portegies Zwart S, Coveney PV. Multiscale modelling, simulation and computing: from the desktop to the exascale. Philos Trans A Math Phys Eng Sci 2019; 377:20180355. [PMID: 30967039] [PMCID: PMC6388007] [DOI: 10.1098/rsta.2018.0355]
Abstract
This short contribution introduces a theme issue dedicated to 'Multiscale modelling, simulation and computing: from the desktop to the exascale'. It holds a collection of articles presenting cutting-edge research in generic multiscale modelling and multiscale computing, and applications thereof on high-performance computing systems. The special issue starts with a position paper to discuss the paradigm of multiscale computing in the face of the emerging exascale, followed by a review and critical assessment of existing multiscale computing environments. This theme issue provides a state-of-the-art account of generic multiscale computing, as well as exciting examples of applications of such concepts in domains ranging from astrophysics, via material science and fusion, to biomedical sciences. This article is part of the theme issue 'Multiscale modelling, simulation and computing: from the desktop to the exascale'.
Affiliation(s)
- Alfons G. Hoekstra
- Computational Science Laboratory, Institute for Informatics, Faculty of Science, University of Amsterdam, Amsterdam, The Netherlands
- ITMO University, Saint Petersburg, Russia
- Peter V. Coveney
- Centre for Computational Science, Department of Chemistry, University College London, London, England
53
Succi S, Coveney PV. Big data: the end of the scientific method? Philos Trans A Math Phys Eng Sci 2019; 377:20180145. [PMID: 30967041] [PMCID: PMC6388004] [DOI: 10.1098/rsta.2018.0145]
Abstract
For it is not the abundance of knowledge, but the interior feeling and taste of things, which is accustomed to satisfy the desire of the soul. (Saint Ignatius of Loyola). We argue that the boldest claims of big data (BD) are in need of revision and toning-down, in view of a few basic lessons learned from the science of complex systems. We point out that, once the most extravagant claims of BD are properly discarded, a synergistic merging of BD with big theory offers considerable potential to spawn a new scientific paradigm capable of overcoming some of the major barriers confronted by the modern scientific method originating with Galileo. These obstacles are due to the presence of nonlinearity, non-locality and hyperdimensions which one encounters frequently in multi-scale modelling of complex systems. This article is part of the theme issue 'Multiscale modelling, simulation and computing: from the desktop to the exascale'.
Affiliation(s)
- Sauro Succi
- Center for Life Nano Sciences at La Sapienza, Istituto Italiano di Tecnologia, viale R. Margherita, 265, 00161, Roma, Italy
- Institute for Applied Computational Science, J. Paulson School of Engineering and Applied Sciences, Harvard University, 29 Oxford Street, Cambridge, USA
- Peter V. Coveney
- Centre for Computational Science, Department of Chemistry, University College London, London, UK
- Yale University, New Haven, USA
54
Review: Precision medicine and driver mutations: Computational methods, functional assays and conformational principles for interpreting cancer drivers. PLoS Comput Biol 2019; 15:e1006658. [PMID: 30921324] [PMCID: PMC6438456] [DOI: 10.1371/journal.pcbi.1006658]
Abstract
At the root of the so-called precision medicine or precision oncology, which is our focus here, is the hypothesis that cancer treatment would be considerably better if therapies were guided by a tumor’s genomic alterations. This hypothesis has sparked major initiatives focusing on whole-genome and/or exome sequencing, creation of large databases, and developing tools for their statistical analyses—all aspiring to identify actionable alterations, and thus molecular targets, in a patient. At the center of the massive amount of collected sequence data is their interpretations that largely rest on statistical analysis and phenotypic observations. Statistics is vital, because it guides identification of cancer-driving alterations. However, statistics of mutations do not identify a change in protein conformation; therefore, it may not define sufficiently accurate actionable mutations, neglecting those that are rare. Among the many thematic overviews of precision oncology, this review innovates by further comprehensively including precision pharmacology, and within this framework, articulating its protein structural landscape and consequences to cellular signaling pathways. It provides the underlying physicochemical basis, thereby also opening the door to a broader community.
55
Bhati A, Wan S, Coveney PV. Ensemble-Based Replica Exchange Alchemical Free Energy Methods: The Effect of Protein Mutations on Inhibitor Binding. J Chem Theory Comput 2019; 15:1265-1277. [PMID: 30592603] [PMCID: PMC6447239] [DOI: 10.1021/acs.jctc.8b01118]
Abstract
The accurate prediction of the binding affinity changes of drugs caused by protein mutations is a major goal in clinical personalized medicine. We have developed an ensemble-based free energy approach called thermodynamic integration with enhanced sampling (TIES), which yields accurate, precise, and reproducible binding affinities. TIES has been shown to perform well for predictions of free energy differences of congeneric ligands to a wide range of target proteins. We have recently introduced variants of TIES, which incorporate the enhanced sampling technique REST2 (replica exchange with solute tempering) and the free energy estimator MBAR (multistate Bennett acceptance ratio). Here we further extend the TIES methodology to study relative binding affinities caused by protein mutations when bound to a ligand, a variant which we call TIES-PM. We apply TIES-PM to fibroblast growth factor receptor 3 (FGFR3) to investigate binding free energy changes upon protein mutations. The results show that TIES-PM with REST2 successfully captures a large conformational change and generates correct free energy differences caused by a gatekeeper mutation located in the binding pocket. Simulations without REST2 fail to overcome the energy barrier between the conformations, and hence the results are highly sensitive to the initial structures. We also discuss situations where REST2 does not improve the accuracy of predictions.
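TIES is built around thermodynamic integration: the free energy difference is obtained by integrating the ensemble average of dU/dlambda over the alchemical coupling parameter lambda. A minimal sketch of that quadrature step is below; the per-window averages are synthetic stand-ins, not values from the paper, where they would come from ensembles of molecular dynamics simulations.

```python
import numpy as np

# Thermodynamic integration (TI) estimates a free energy difference by
# integrating the ensemble average <dU/dlambda> over the alchemical
# coupling parameter lambda in [0, 1]:
#     dG = integral_0^1 <dU/dlambda>_lambda dlambda
# The values below are illustrative only.

lambdas = np.linspace(0.0, 1.0, 11)          # 11 lambda windows
du_dlambda = 10.0 * (1.0 - lambdas) ** 2     # synthetic <dU/dlambda> curve

# Trapezoidal quadrature over the lambda schedule.
delta_g = float(np.sum((du_dlambda[:-1] + du_dlambda[1:]) / 2.0
                       * np.diff(lambdas)))
print(f"estimated free energy difference: {delta_g:.3f} (arbitrary units)")
```

In TIES the same quadrature is repeated over an ensemble of independent replicas, so the spread across replicas yields an error bar on `delta_g`.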
Affiliation(s)
- Agastya P. Bhati
- Centre for Computational Science, Department of Chemistry, University College London, 20 Gordon Street, London, WC1H 0AJ, United Kingdom
- Shunzhou Wan
- Centre for Computational Science, Department of Chemistry, University College London, 20 Gordon Street, London, WC1H 0AJ, United Kingdom
- Peter V. Coveney
- Centre for Computational Science, Department of Chemistry, University College London, 20 Gordon Street, London, WC1H 0AJ, United Kingdom
56
Big Data: From Forecasting to Mesoscopic Understanding. Meta-Profiling as Complex Systems. Systems 2019. [DOI: 10.3390/systems7010008]
Abstract
We consider Big Data as a phenomenon with acquired properties, similar to collective behaviours, that establishes virtual collective beings. We consider the occurrence of ongoing non-equivalent multiple properties in the conceptual framework of structural dynamics given by sequences of structures and not only by different values assumed by the same structure. We consider the difference between modelling and profiling in a constructivist way, as De Finetti intended probability to exist, depending on the configuration taken into consideration. The past has little or no influence, while events and their configurations are not memorised. Any configuration of events is new, and the probabilistic values to be considered are reset. As for collective behaviours, we introduce methodological and conceptual proposals using mesoscopic variables and their property profiles and meta-profile Big Data and non-computable profiles which were inspired by the use of natural computing to deal with cyber-ecosystems. The focus is on ongoing profiles, in which the arising properties trace trajectories, rather than assuming that we can foresee them based on the past.
57
Scott IA. Hope, hype and harms of Big Data. Intern Med J 2019; 49:126-129. [DOI: 10.1111/imj.14172]
Affiliation(s)
- Ian A. Scott
- Department of Internal Medicine and Clinical Epidemiology, Princess Alexandra Hospital, Brisbane, Queensland, Australia
58
Mazzega P. On the Ethics of Biodiversity Models, Forecasts and Scenarios. Asian Bioeth Rev 2018; 10:295-312. [PMID: 33717294] [PMCID: PMC7747318] [DOI: 10.1007/s41649-018-0069-5]
Abstract
The development of numerical models to produce realistic prospective scenarios for the evolution of biological diversity is essential. Only integrative impact assessment models are able to take into account the diverse and complex interactions embedded in social-ecological systems. The knowledge used is objective, the procedure of their integration is rigorous and the data massive. Nevertheless, the technical choices (model ontology, treatment of scales and uncertainty, data choice and pre-processing, technique of representation, etc.) made at each stage of the development of models and scenarios are mostly circumstantial, depending on both the skills of modellers on a project and the means available to them. In the end, the scenarios selected and the way they are simulated limit the futures explored, and the options offered to decision makers and stakeholders to act. The ethical implications of these circumstantial choices are generally not documented, explained or even perceived by modellers. Applied ethics propose a coherent set of principles to guide a critical reflection on the social and environmental consequences of integrative modelling and simulation of biodiversity scenarios. Such reflection should be incorporated into the actual modelling process, in a broad participatory framework, and foster effective moral involvement of modellers, policy-makers and stakeholders, in preference to the application of fixed ethical rules.
Affiliation(s)
- Pierre Mazzega
- UMR5563 GET Geosciences Environment Toulouse, CNRS / University of Toulouse, Toulouse, France
- Affiliate Researcher, Strathclyde Center for Environmental Law and Governance, University of Strathclyde, Glasgow, UK
59
Jarrett AM, Lima EABF, Hormuth DA, McKenna MT, Feng X, Ekrut DA, Resende ACM, Brock A, Yankeelov TE. Mathematical models of tumor cell proliferation: A review of the literature. Expert Rev Anticancer Ther 2018; 18:1271-1286. [PMID: 30252552] [PMCID: PMC6295418] [DOI: 10.1080/14737140.2018.1527689]
Abstract
Introduction: A defining hallmark of cancer is aberrant cell proliferation. Efforts to understand the generative properties of cancer cells span all biological scales: from genetic deviations and alterations of metabolic pathways to physical stresses due to overcrowding, as well as the effects of therapeutics and the immune system. While these factors have long been studied in the laboratory, mathematical and computational techniques are being increasingly applied to help understand and forecast tumor growth and treatment response. Advantages of mathematical modeling of proliferation include the ability to simulate and predict the spatiotemporal development of tumors across multiple experimental scales. Central to proliferation modeling is the incorporation of available biological data and validation with experimental data. Areas covered: We present an overview of past and current mathematical strategies directed at understanding tumor cell proliferation. We identify areas for mathematical development as motivated by available experimental and clinical evidence, with a particular emphasis on emerging, non-invasive imaging technologies. Expert commentary: The data required to legitimize mathematical models are often difficult or (currently) impossible to obtain. We suggest areas for further investigation to establish mathematical models that more effectively utilize available data to make informed predictions on tumor cell proliferation.
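Among the simplest proliferation models surveyed in reviews of this kind is logistic growth, an ordinary differential equation in which proliferation saturates at a carrying capacity. A minimal sketch follows; the parameter values are illustrative, not taken from the review.

```python
# Logistic growth: dN/dt = r * N * (1 - N/K), a common starting point
# for modelling tumor cell proliferation under a carrying capacity K.
# Forward-Euler integration; r, K, n0 are illustrative values only.

def simulate_logistic(n0, r, k, dt, steps):
    """Return the population trajectory under forward-Euler updates."""
    n = n0
    trajectory = [n]
    for _ in range(steps):
        n = n + dt * r * n * (1.0 - n / k)
        trajectory.append(n)
    return trajectory

traj = simulate_logistic(n0=1e3, r=0.5, k=1e6, dt=0.1, steps=400)
# Growth is near-exponential at first, then saturates near K.
print(f"final population: {traj[-1]:.0f}")
```

More elaborate models in the review add spatial transport, treatment terms, or imaging-derived parameters, but the calibration-and-prediction workflow starts from growth laws of this form.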
Affiliation(s)
- Angela M Jarrett
- Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, USA
- Livestrong Cancer Institutes, The University of Texas at Austin, Austin, USA
- Ernesto A B F Lima
- Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, USA
- David A Hormuth
- Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, USA
- Livestrong Cancer Institutes, The University of Texas at Austin, Austin, USA
- Matthew T McKenna
- Department of Biomedical Engineering, Vanderbilt University, Nashville, USA
- Xinzeng Feng
- Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, USA
- David A Ekrut
- Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, USA
- Anna Claudia M Resende
- Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, USA
- Department of Computational Modeling, National Laboratory for Scientific Computing, Petrópolis, Brazil
- Amy Brock
- Livestrong Cancer Institutes, The University of Texas at Austin, Austin, USA
- Department of Biomedical Engineering, The University of Texas at Austin, Austin, USA
- Thomas E Yankeelov
- Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, USA
- Livestrong Cancer Institutes, The University of Texas at Austin, Austin, USA
- Department of Biomedical Engineering, The University of Texas at Austin, Austin, USA
- Department of Diagnostic Medicine, The University of Texas at Austin, Austin, USA
- Department of Oncology, The University of Texas at Austin, Austin, USA
60
Resteghini C, Trama A, Borgonovi E, Hosni H, Corrao G, Orlandi E, Calareso G, De Cecco L, Piazza C, Mainardi L, Licitra L. Big Data in Head and Neck Cancer. Curr Treat Options Oncol 2018; 19:62. [DOI: 10.1007/s11864-018-0585-2]
61
Quantifying the value of surveillance data for improving model predictions of lymphatic filariasis elimination. PLoS Negl Trop Dis 2018; 12:e0006674. [PMID: 30296266] [PMCID: PMC6175292] [DOI: 10.1371/journal.pntd.0006674]
Abstract
Background: Mathematical models are increasingly being used to evaluate strategies aiming to achieve the control or elimination of parasitic diseases. Recently, owing to growing realization that process-oriented models are useful for ecological forecasts only if the biological processes are well defined, attention has focused on data assimilation as a means to improve the predictive performance of these models.

Methodology and principal findings: We report on the development of an analytical framework to quantify the relative values of various longitudinal infection surveillance data collected in field sites undergoing mass drug administrations (MDAs) for calibrating three lymphatic filariasis (LF) models (EPIFIL, LYMFASIM, and TRANSFIL), and for improving their predictions of the required durations of drug interventions to achieve parasite elimination in endemic populations. The relative information contribution of site-specific data collected at the time points proposed by the WHO monitoring framework was evaluated using model-data updating procedures, and via calculations of the Shannon information index and weighted variances from the probability distributions of the estimated timelines to parasite extinction made by each model. Results show that data-informed models provided more precise forecasts of elimination timelines in each site compared to model-only simulations. Data streams that included year 5 post-MDA microfilariae (mf) survey data, however, reduced each model's uncertainty most compared to data streams containing only baseline and/or post-MDA 3 or longer-term mf survey data irrespective of MDA coverage, suggesting that data up to this monitoring point may be optimal for informing the present LF models. We show that the improvements observed in the predictive performance of the best data-informed models may be a function of temporal changes in inter-parameter interactions. Such best data-informed models may also produce more accurate predictions of the durations of drug interventions required to achieve parasite elimination.

Significance: Knowledge of relative information contributions of model only versus data-informed models is valuable for improving the usefulness of LF model predictions in management decision making, learning system dynamics, and for supporting the design of parasite monitoring programmes. The present results further pinpoint the crucial need for longitudinal infection surveillance data for enhancing the precision and accuracy of model predictions of the intervention durations required to achieve parasite elimination in an endemic location.

Author summary: Although parasite transmission models offer powerful tools for predicting the impacts of interventions, there is growing realization that these models can be useful for this purpose only if their governing biological processes are well defined. Recently, model-data assimilation has been applied to address this problem and improve the performance of process-oriented models for ecological forecasting. Here, we developed an analytical framework that allowed the sequential coupling of the three existing lymphatic filariasis (LF) models with longitudinal infection monitoring data collected in field sites undergoing mass drug administrations (MDAs) to examine the relative value of such data for parameterizing these models and for improving their predictions of the required durations of drug interventions to break parasite transmission. We found that data-informed models provided more precise and reliable forecasts of elimination timelines in the study sites compared to model-only predictions, and that data collected up to 5 years post-MDA reduced each model's predictive uncertainty most. We also found that this improved performance may be intriguingly related to temporal changes in system dynamics. Our results underscore the significance of sequential model-data fusion for enhancing the understanding of LF transmission dynamics, design of surveillance, and generation of reliable model predictions for management decision making.
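The Shannon information index used in the study above scores forecast precision as the entropy of a model's predicted distribution over elimination timelines: a data-informed model that concentrates probability on fewer outcomes has lower entropy. A toy illustration follows; the two distributions are made up for the sketch, not taken from the paper.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Illustrative forecast distributions over years-to-elimination;
# these numbers are invented for demonstration only.
model_only    = [0.10, 0.15, 0.25, 0.25, 0.15, 0.10]  # broad, uncertain
data_informed = [0.02, 0.08, 0.60, 0.25, 0.04, 0.01]  # sharpened by data

h_model = shannon_entropy(model_only)
h_data = shannon_entropy(data_informed)

# Lower entropy means a more concentrated (more precise) forecast.
print(f"model-only entropy:    {h_model:.3f} bits")
print(f"data-informed entropy: {h_data:.3f} bits")
```

This is the sense in which assimilating post-MDA survey data "reduces each model's uncertainty": the entropy (and the weighted variance) of the timeline distribution shrinks as data are folded in.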
62
Chin-Yee B, Upshur R. Clinical judgement in the era of big data and predictive analytics. J Eval Clin Pract 2018; 24:638-645. [PMID: 29237237] [DOI: 10.1111/jep.12852]
Abstract
Clinical judgement is a central and longstanding issue in the philosophy of medicine which has generated significant interest over the past few decades. In this article, we explore different approaches to clinical judgement articulated in the literature, focusing in particular on data-driven, mathematical approaches which we contrast with narrative, virtue-based approaches to clinical reasoning. We discuss the tension between these different clinical epistemologies and further explore the implications of big data and machine learning for a philosophy of clinical judgement. We argue for a pluralistic, integrative approach, and demonstrate how narrative, virtue-based clinical reasoning will remain indispensable in an era of big data and predictive analytics.
Affiliation(s)
- Ross Upshur
- Department of Family and Community Medicine and Dalla Lana School of Public Health, University of Toronto, Toronto, Canada
63
Bhati AP, Wan S, Hu Y, Sherborne B, Coveney PV. Uncertainty Quantification in Alchemical Free Energy Methods. J Chem Theory Comput 2018; 14:2867-2880. [PMID: 29678106] [PMCID: PMC6095638] [DOI: 10.1021/acs.jctc.7b01143]
Abstract
Alchemical free energy methods have gained much importance recently from several reports of improved ligand–protein binding affinity predictions based on their implementation using molecular dynamics simulations. A large number of variants of such methods implementing different accelerated sampling techniques and free energy estimators are available, each claimed to be better than the others in its own way. However, the key features of reproducibility and quantification of associated uncertainties in such methods have barely been discussed. Here, we apply a systematic protocol for uncertainty quantification to a number of popular alchemical free energy methods, covering both absolute and relative free energy predictions. We show that a reliable measure of error estimation is provided by ensemble simulation—an ensemble of independent MD simulations—which applies irrespective of the free energy method. The need to use ensemble methods is fundamental and holds regardless of the duration of the molecular dynamics simulations performed.
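The paper's central recommendation, an ensemble of independent simulations with the error bar derived from the spread across replicas rather than from a single long trajectory, reduces to a simple calculation. The per-replica free energy values below are synthetic stand-ins, not results from the paper.

```python
import statistics

# Ensemble-based uncertainty quantification: run N independent replicas
# (differing only in initial velocities) and report the mean prediction
# with the standard error of the mean across the ensemble.
# Synthetic per-replica free energy predictions, in kcal/mol:
replica_dg = [-7.1, -6.8, -7.4, -7.0, -6.9, -7.3, -7.2, -6.7, -7.1, -7.0]

mean_dg = statistics.mean(replica_dg)
# Standard error of the mean over the N-replica ensemble.
sem_dg = statistics.stdev(replica_dg) / len(replica_dg) ** 0.5

print(f"dG = {mean_dg:.2f} +/- {sem_dg:.2f} kcal/mol")
```

The point of the ensemble approach is that this spread is a meaningful error estimate whatever the underlying free energy method, whereas a single trajectory gives no comparable handle on reproducibility.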
Affiliation(s)
- Agastya P Bhati
- Centre for Computational Science, Department of Chemistry, University College London, 20 Gordon Street, London WC1H 0AJ, United Kingdom
- Shunzhou Wan
- Centre for Computational Science, Department of Chemistry, University College London, 20 Gordon Street, London WC1H 0AJ, United Kingdom
- Yuan Hu
- Modeling and Informatics, Merck & Co., Inc., 2000 Galloping Hill Road, Kenilworth, New Jersey 07033, United States
- Brad Sherborne
- Modeling and Informatics, Merck & Co., Inc., 2000 Galloping Hill Road, Kenilworth, New Jersey 07033, United States
- Peter V Coveney
- Centre for Computational Science, Department of Chemistry, University College London, 20 Gordon Street, London WC1H 0AJ, United Kingdom
64
Pedretti A, Mazzolari A, Vistoli G, Testa B. MetaQSAR: An Integrated Database Engine to Manage and Analyze Metabolic Data. J Med Chem 2018; 61:1019-1030. [DOI: 10.1021/acs.jmedchem.7b01473]
Affiliation(s)
- Alessandro Pedretti
- Dipartimento di Scienze Farmaceutiche “Pietro Pratesi”, Facoltà di Farmacia, Università degli Studi di Milano, Via Luigi Mangiagalli 25, I-20133 Milano, Italy
- Angelica Mazzolari
- Dipartimento di Scienze Farmaceutiche “Pietro Pratesi”, Facoltà di Farmacia, Università degli Studi di Milano, Via Luigi Mangiagalli 25, I-20133 Milano, Italy
- Giulio Vistoli
- Dipartimento di Scienze Farmaceutiche “Pietro Pratesi”, Facoltà di Farmacia, Università degli Studi di Milano, Via Luigi Mangiagalli 25, I-20133 Milano, Italy
65
Mahmoodi J, Leckelt M, van Zalk MWH, Geukes K, Back MD. Big Data approaches in social and behavioral science: four key trade-offs and a call for integration. Curr Opin Behav Sci 2017. [DOI: 10.1016/j.cobeha.2017.07.001]
66
Egger M, Johnson L, Althaus C, Schöni A, Salanti G, Low N, Norris SL. Developing WHO guidelines: Time to formally include evidence from mathematical modelling studies. F1000Res 2017; 6:1584. [PMID: 29552335] [PMCID: PMC5829466] [DOI: 10.12688/f1000research.12367.2]
Abstract
In recent years, the number of mathematical modelling studies has increased steeply. Many of the questions addressed in these studies are relevant to the development of World Health Organization (WHO) guidelines, but modelling studies are rarely formally included as part of the body of evidence. An expert consultation hosted by WHO, a survey of modellers and users of modelling studies, and literature reviews informed the development of recommendations on when and how to incorporate the results of modelling studies into WHO guidelines. In this article, we argue that modelling studies should routinely be considered in the process of developing WHO guidelines, but particularly in the evaluation of public health programmes, long-term effectiveness or comparative effectiveness. There should be a systematic and transparent approach to identifying relevant published models, and to commissioning new models. We believe that the inclusion of evidence from modelling studies into the Grading of Recommendations Assessment, Development and Evaluation (GRADE) process is possible and desirable, with relatively few adaptations. No single "one-size-fits-all" approach is appropriate to assess the quality of modelling studies. The concept of the 'credibility' of the model, which takes the conceptualization of the problem, model structure, input data, different dimensions of uncertainty, as well as transparency and validation into account, is more appropriate than 'risk of bias'.
Affiliation(s)
- Matthias Egger
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, 3012, Switzerland; Centre for Infectious Disease Epidemiology and Research (CIDER), University of Cape Town, Cape Town, 7925, South Africa
- Leigh Johnson
- Centre for Infectious Disease Epidemiology and Research (CIDER), University of Cape Town, Cape Town, 7925, South Africa
- Christian Althaus
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, 3012, Switzerland
- Anna Schöni
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, 3012, Switzerland
- Georgia Salanti
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, 3012, Switzerland
- Nicola Low
- Institute of Social and Preventive Medicine (ISPM), University of Bern, Bern, 3012, Switzerland
67
Egger M, Johnson L, Althaus C, Schöni A, Salanti G, Low N, Norris SL. Developing WHO guidelines: Time to formally include evidence from mathematical modelling studies. F1000Res 2017; 6:1584. [PMID: 29552335] [DOI: 10.12688/f1000research.12367.1]
68
Abstract
There is a great deal of interest in personalized, individualized, or precision interventions for disease and health-risk mitigation. This is as true of nutrition-based intervention and prevention strategies as it is for pharmacotherapies and pharmaceutical-oriented prevention strategies. Essentially, technological breakthroughs have enabled researchers to probe an individual's unique genetic, biochemical, physiological, behavioral, and exposure profile, allowing them to identify very specific and often nuanced factors that an individual might possess, which may make it more or less likely that he or she responds favorably to a particular intervention (e.g., nutrient supplementation) or disease prevention strategy (e.g., specific diet). However, as compelling and intuitive as personalized nutrition might be in the current era in which data-intensive biomedical characterization of individuals is possible, appropriately and objectively vetting personalized nutrition strategies is not trivial and requires novel study designs and data analytical methods. These designs and methods must consider a very integrated use of the multiple contemporary biomedical assays and technologies that motivate them, which adds to their complexity. Single-subject or N-of-1 trials can be used to assess the utility of personalized interventions and, in addition, can be crafted in such a way as to accommodate the necessarily integrated use of many emerging biomedical technologies and assays. In this review, we consider the motivation, design, and implementation of N-of-1 trials in translational nutrition research that are meant to assess the utility of personalized nutritional strategies. We provide a number of example studies, discuss appropriate analytical methods given the complex data they generate and require, and consider how such studies could leverage integration of various biomarker assays and clinical end points. 
Importantly, we also consider the development of strategies and algorithms for matching nutritional needs to individual biomedical profiles and the issues surrounding them. Finally, we discuss the limitations of personalized nutrition studies, possible extensions of N-of-1 nutritional intervention studies, and areas of future research.
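To make the analytical core of an N-of-1 crossover design concrete, here is a minimal sketch using invented data: within-subject paired blocks of two regimens compared with a paired t statistic. The outcome, block structure, and every number below are hypothetical assumptions, not taken from the review.

```python
import statistics

# Hypothetical N-of-1 crossover data: one subject alternates between a
# personalized diet (A) and a standard diet (B) across paired blocks.
# Outcome: fasting glucose (mg/dL) measured at the end of each block.
blocks_a = [92, 88, 90, 87, 91, 89]    # personalized diet
blocks_b = [99, 97, 101, 96, 98, 100]  # standard diet

# Within-subject paired differences (A - B), block by block.
diffs = [a - b for a, b in zip(blocks_a, blocks_b)]

# Paired t statistic: mean difference scaled by its standard error.
n = len(diffs)
mean_diff = statistics.fmean(diffs)
se = statistics.stdev(diffs) / n ** 0.5
t_stat = mean_diff / se

print(f"mean difference (A - B): {mean_diff:.2f} mg/dL")
print(f"paired t statistic: {t_stat:.2f}")
```

Because all blocks come from the same subject, the paired differences remove between-person variability, which is exactly what makes N-of-1 designs attractive for personalized interventions; a real analysis would also model carryover and time trends.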
Affiliation(s)
- Nicholas J Schork: Translational Genomics Research Institute, Phoenix, Arizona 85004; J. Craig Venter Institute, La Jolla, California 92037; Departments of Psychiatry and Family Medicine and Public Health, University of California, San Diego, La Jolla, California 92037
- Laura H Goetz: J. Craig Venter Institute, La Jolla, California 92037; Department of Surgery, Scripps Clinic Medical Group, La Jolla, California 92037; Department of Molecular and Experimental Medicine, The Scripps Research Institute, La Jolla, California 92037
69
Eccleston RC, Wan S, Dalchau N, Coveney PV. The Role of Multiscale Protein Dynamics in Antigen Presentation and T Lymphocyte Recognition. Front Immunol 2017; 8:797. [PMID: 28740497] [PMCID: PMC5502259] [DOI: 10.3389/fimmu.2017.00797] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Received: 12/01/2016] [Accepted: 06/22/2017] [Indexed: 12/15/2022]
Abstract
T lymphocytes are stimulated when they recognize short peptides bound to class I proteins of the major histocompatibility complex (MHC) protein, as peptide-MHC complexes. Due to the diversity in T-cell receptor (TCR) molecules together with both the peptides and MHC proteins they bind to, it has been difficult to design vaccines and treatments based on these interactions. Machine learning has made some progress in trying to predict the immunogenicity of peptide sequences in the context of specific MHC class I alleles but, as such approaches cannot integrate temporal information and lack explanatory power, their scope will always be limited. Here, we advocate a mechanistic description of antigen presentation and TCR activation which is explanatory, predictive, and quantitative, drawing on modeling approaches that collectively span several length and time scales, being capable of furnishing reliable biological descriptions that are difficult for experimentalists to provide. It is a form of multiscale systems biology. We propose the use of chemical rate equations to describe the time evolution of the foreign and host proteins to explain how the original proteins end up being presented on the cell surface as peptide fragments, while we invoke molecular dynamics to describe the key binding processes on the molecular level, including those of peptide-MHC complexes with TCRs which lie at the heart of the immune response. On each level, complementary methods based on machine learning are available, and we discuss the relationship between these divergent approaches. The pursuit of predictive mechanistic modeling approaches requires experimentalists to adapt their work so as to acquire, store, and expose data that can be used to verify and validate such models.
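As a rough illustration of the kind of chemical rate equations the authors advocate (not their actual model), a reduced three-species scheme of free peptide, free MHC, and peptide-MHC complex can be integrated directly; every rate constant and initial condition below is a placeholder assumption.

```python
# Forward-Euler sketch of mass-action rate equations for antigen processing:
# peptide P is supplied and degraded, binds free MHC M to form complex C,
# and complexes egress to the cell surface. All parameters are illustrative.

def step(p, m, c, dt, supply=1.0, deg=0.1, k_on=0.05, k_off=0.01, egress=0.02):
    """One forward-Euler step of the three rate equations."""
    dp = supply - deg * p - k_on * p * m + k_off * c
    dm = -k_on * p * m + k_off * c
    dc = k_on * p * m - k_off * c - egress * c
    return p + dp * dt, m + dm * dt, c + dc * dt

# Integrate from an initial pool of free MHC and no peptide.
p, m, c = 0.0, 10.0, 0.0
for _ in range(10_000):  # 100 time units at dt = 0.01
    p, m, c = step(p, m, c, dt=0.01)

print(f"free peptide={p:.2f}, free MHC={m:.2f}, complex={c:.2f}")
```

The total MHC pool (M + C) only decreases, via the egress term, which mimics loaded complexes leaving for the surface; the molecular-level binding constants that such equations consume are what the molecular dynamics simulations described above would supply.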
Affiliation(s)
- R Charlotte Eccleston: Centre for Computational Science, Department of Chemistry, University College London, London, United Kingdom
- Shunzhou Wan: Centre for Computational Science, Department of Chemistry, University College London, London, United Kingdom
- Peter V Coveney: Centre for Computational Science, Department of Chemistry, University College London, London, United Kingdom
70
Zukeran MS, Ribeiro SML. The Importance of Nutrition in a Conceptual Framework of Frailty Syndrome. Curr Nutr Rep 2017. [DOI: 10.1007/s13668-017-0195-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Indexed: 01/01/2023]
71
Warden AS, Mayfield RD. Gene expression profiling in the human alcoholic brain. Neuropharmacology 2017; 122:161-174. [PMID: 28254370] [DOI: 10.1016/j.neuropharm.2017.02.017] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Received: 11/02/2016] [Revised: 02/13/2017] [Accepted: 02/17/2017] [Indexed: 01/12/2023]
Abstract
Long-term alcohol use causes widespread changes in gene expression in the human brain. Aberrant gene expression changes likely contribute to the progression from occasional alcohol use to alcohol use disorder (including alcohol dependence). Transcriptome studies have identified individual gene candidates that are linked to alcohol-dependence phenotypes. The use of bioinformatics techniques to examine expression datasets has provided novel systems-level approaches to transcriptome profiling in human postmortem brain. These analytical advances, along with recent developments in next-generation sequencing technology, have been instrumental in detecting both known and novel coding and non-coding RNAs, alternative splicing events, and cell-type specific changes that may contribute to alcohol-related pathologies. This review offers an integrated perspective on alcohol-responsive transcriptional changes in the human brain underlying the regulatory gene networks that contribute to alcohol dependence. This article is part of the Special Issue entitled "Alcoholism".
Affiliation(s)
- Anna S Warden: Institute for Neuroscience, The University of Texas at Austin, 1 University Station, C7000, Austin, TX 78712, USA; Waggoner Center for Alcohol and Addiction Research, The University of Texas at Austin, 2500 Speedway, A4800, Austin, TX 78712, USA
- R Dayne Mayfield: Waggoner Center for Alcohol and Addiction Research, The University of Texas at Austin, 2500 Speedway, A4800, Austin, TX 78712, USA
72
Reilly MT, Noronha A, Goldman D, Koob GF. Genetic studies of alcohol dependence in the context of the addiction cycle. Neuropharmacology 2017; 122:3-21. [PMID: 28118990] [DOI: 10.1016/j.neuropharm.2017.01.017] [Citation(s) in RCA: 67] [Impact Index Per Article: 9.6] [Received: 11/18/2016] [Revised: 01/13/2017] [Accepted: 01/19/2017] [Indexed: 12/16/2022]
Abstract
Family, twin, and adoption studies demonstrate clearly that alcohol dependence and alcohol use disorders are phenotypically complex and heritable. The heritability of alcohol use disorders is estimated at approximately 50-60% of the total phenotypic variability. Vulnerability to alcohol use disorders can be due to multiple genetic or environmental factors or their interaction, which gives rise to extensive and daunting heterogeneity. This heterogeneity makes mapping and identifying the specific genes that influence alcohol use disorders a significant challenge. Genetic linkage and (candidate gene) association studies have been used for decades to map and characterize genomic loci and genes that underlie the genetic vulnerability to alcohol use disorders. These approaches have been moderately successful in identifying several genes that contribute to the complexity of alcohol use disorders. Recently, genome-wide association studies have become one of the major tools for identifying genes for alcohol use disorders by examining correlations between millions of common single-nucleotide polymorphisms and diagnosis status. Genome-wide association studies are just beginning to uncover novel biology; however, the functional significance of the results remains a matter of extensive debate and uncertainty. In this review, we present a select group of genome-wide association studies of alcohol dependence, as one example of a way to generate functional hypotheses, within the addiction cycle framework. This analysis may provide novel directions for validating the functional significance of alcohol dependence candidate genes. This article is part of the Special Issue entitled "Alcoholism".
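The single-marker test at the heart of such genome-wide association studies can be sketched in a few lines: compare allele counts between cases and controls with a 2x2 chi-square statistic. The counts below are invented for illustration and are not from any study discussed here; a real GWAS repeats this across millions of SNPs and corrects for multiple testing.

```python
# Toy single-SNP allelic association test for a case-control GWAS.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic (1 df) for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# (minor allele, major allele) counts: 500 diploid cases and 500 diploid
# controls contribute 1000 alleles each. Hypothetical numbers.
cases = (300, 700)
controls = (220, 780)

chi2 = chi_square_2x2(cases[0], cases[1], controls[0], controls[1])
print(f"chi-square = {chi2:.2f}")  # values above ~3.84 give p < 0.05 at 1 df
```

A genome-wide significance threshold (conventionally p < 5e-8) would be far stricter than the nominal 3.84 cutoff, which is one reason single-study GWAS hits require replication and functional validation of the kind this review discusses.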
Affiliation(s)
- Matthew T Reilly: National Institutes of Health (NIH), National Institute on Alcohol Abuse and Alcoholism (NIAAA), Division of Neuroscience and Behavior, 5635 Fishers Lane, Bethesda, MD 20852, USA
- Antonio Noronha: National Institutes of Health (NIH), National Institute on Alcohol Abuse and Alcoholism (NIAAA), Division of Neuroscience and Behavior, 5635 Fishers Lane, Bethesda, MD 20852, USA
- David Goldman: National Institutes of Health (NIH), National Institute on Alcohol Abuse and Alcoholism (NIAAA), Chief, Laboratory of Neurogenetics, 5635 Fishers Lane, Bethesda, MD 20852, USA
- George F Koob: National Institutes of Health (NIH), National Institute on Alcohol Abuse and Alcoholism (NIAAA), Director NIAAA, 5635 Fishers Lane, Bethesda, MD 20852, USA
73
Coveney PV, Boon JP, Succi S. Bridging the gaps at the physics-chemistry-biology interface. Philos Trans A Math Phys Eng Sci 2016; 374:rsta.2016.0335. [PMID: 27698047] [PMCID: PMC5052737] [DOI: 10.1098/rsta.2016.0335] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Accepted: 08/17/2016] [Indexed: 05/13/2023]
Affiliation(s)
- P V Coveney: Centre for Computational Science, University College London, Gordon Street, London WC1H 0AJ, UK
- J P Boon: Physics Department, Université Libre de Bruxelles, Campus Plaine, CP 231, Avenue F.D. Roosevelt 50, 1050 Bruxelles, Belgium
- S Succi: Istituto Applicazioni del Calcolo-CNR, Viale del Policlinico 19, 00185 Roma, Italy; Institute for Applied Computational Science, Harvard J. Paulson School of Engineering and Applied Sciences, Harvard University, 29 Oxford Street, Cambridge, MA 02138, USA