1
Stock M, Pieters O, De Swaef T, wyffels F. Plant science in the age of simulation intelligence. Front Plant Sci 2024; 14:1299208. PMID: 38293629; PMCID: PMC10824965; DOI: 10.3389/fpls.2023.1299208.
Abstract
Historically, plant and crop sciences have been quantitative fields that intensively use measurements and modeling. Traditionally, researchers choose between two dominant modeling approaches: mechanistic plant growth models or data-driven, statistical methodologies. At the intersection of both paradigms, a novel approach, referred to as "simulation intelligence", has emerged as a powerful tool for comprehending and controlling complex systems, including plants and crops. This work explores the transformative potential of the nine simulation intelligence motifs for the plant science community, from understanding molecular plant processes to optimizing greenhouse control. Many of these concepts, such as surrogate models and agent-based modeling, have already gained prominence in plant and crop sciences. In contrast, some motifs, such as open-ended optimization or program synthesis, still need further exploration. The motifs of simulation intelligence can potentially revolutionize breeding and precision farming towards more sustainable food production.
Affiliation(s)
- Michiel Stock
- KERMIT and Biobix, Department of Data Analysis and Mathematical Modelling, Ghent University, Ghent, Belgium
- Olivier Pieters
- IDLAB-AIRO, Ghent University, imec, Ghent, Belgium
- Plant Sciences Unit, Flanders Research Institute for Agriculture, Fisheries and Food, Melle, Belgium
- Tom De Swaef
- Plant Sciences Unit, Flanders Research Institute for Agriculture, Fisheries and Food, Melle, Belgium
2
Shao R, Sim A, Wu K, Kim J. Leveraging History to Predict Infrequent Abnormal Transfers in Distributed Workflows. Sensors (Basel) 2023; 23:5485. PMID: 37420657; DOI: 10.3390/s23125485.
Abstract
Scientific computing heavily relies on data shared by the community, especially in distributed data-intensive applications. This research focuses on predicting slow connections that create bottlenecks in distributed workflows. In this study, we analyze network traffic logs collected between January 2021 and August 2022 at the National Energy Research Scientific Computing Center (NERSC). Based on the observed patterns, we define a set of features, drawn primarily from transfer history, for identifying low-performing data transfers. Well-maintained networks typically have far fewer slow connections than normal ones, which makes it difficult to learn to distinguish the abnormally slow connections from the normal ones. We devise several stratified sampling techniques to address this class-imbalance challenge and study how they affect the machine learning approaches. Our tests show that a relatively simple technique that undersamples the normal cases to balance the number of samples in the two classes (normal and slow) is very effective for model training. This model predicts slow connections with an F1 score of 0.926.
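The undersampling step described in this abstract can be sketched in a few lines. The helper below is illustrative only: the function name, the toy data, and the 1-in-20 imbalance ratio are invented here, not taken from the paper.

```python
import random

def undersample(rows, labels, minority=1, seed=0):
    """Balance a binary data set by randomly dropping majority-class rows.

    Keeps every minority-class row (e.g. the rare 'slow transfer' cases)
    and samples an equal number of majority-class rows without replacement.
    """
    rng = random.Random(seed)
    minority_idx = [i for i, y in enumerate(labels) if y == minority]
    majority_idx = [i for i, y in enumerate(labels) if y != minority]
    kept_majority = rng.sample(majority_idx, k=len(minority_idx))
    idx = sorted(minority_idx + kept_majority)
    return [rows[i] for i in idx], [labels[i] for i in idx]

# Toy example: 1 slow transfer per 20 normal ones.
X = [[float(i)] for i in range(100)]
y = [1 if i % 20 == 0 else 0 for i in range(100)]
Xb, yb = undersample(X, y)
print(sum(yb), len(yb))  # → 5 10
```

A model trained on the balanced pair `(Xb, yb)` no longer sees a 95:5 skew, which is the effect the paper reports as most helpful.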
Affiliation(s)
- Robin Shao
- EECS, University of California at Berkeley, Berkeley, CA 94720, USA
- Alex Sim
- Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Kesheng Wu
- Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA
- Jinoh Kim
- Computer Science Department, Texas A&M University, Commerce, TX 75428, USA
3
Karabelas E, Longobardi S, Fuchsberger J, Razeghi O, Rodero C, Strocchi M, Rajani R, Haase G, Plank G, Niederer S. Global Sensitivity Analysis of Four Chamber Heart Hemodynamics Using Surrogate Models. IEEE Trans Biomed Eng 2022; 69:3216-3223. PMID: 35353691; PMCID: PMC9491017; DOI: 10.1109/tbme.2022.3163428.
Abstract
Computational Fluid Dynamics (CFD) is used to assist in designing artificial valves and planning procedures, focusing on local flow features. However, assessing the impact on overall cardiovascular function or predicting longer-term outcomes may require more comprehensive whole heart CFD models. Fitting such models to patient data requires numerous computationally expensive simulations and depends on specific clinical measurements to constrain model parameters, hampering clinical adoption. Surrogate models can help to accelerate the fitting process while accounting for the added uncertainty. We create a validated patient-specific four-chamber heart CFD model based on the Navier-Stokes-Brinkman (NSB) equations and test Gaussian Process Emulators (GPEs) as a surrogate model for performing a variance-based global sensitivity analysis (GSA). GSA identified preload as the dominant driver of flow on both the right and left sides of the heart. Left-right differences were seen in the vascular outflow resistances, with pulmonary artery resistance having a much larger impact on flow than aortic resistance. Our results suggest that GPEs can be used to identify parameters in personalized whole heart CFD models, and they highlight the importance of accurate preload measurements.
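A variance-based GSA of the kind described can be sketched with a pick-freeze Monte Carlo estimator of first-order Sobol indices. Everything below is an assumption for illustration: the toy linear function stands in for a trained Gaussian process emulator, and the function names are invented, not the authors' code.

```python
import random

def sobol_first_order(model, n_inputs, n_samples=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices.

    `model` stands in for a cheap surrogate (e.g. a Gaussian process
    emulator) of an expensive simulator; the estimator is the standard
    Saltelli formulation V_i = E[ f(B) * (f(A_B^i) - f(A)) ].
    """
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_inputs)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_inputs)] for _ in range(n_samples)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    mean = sum(yA) / n_samples
    var = sum((y - mean) ** 2 for y in yA) / n_samples
    S = []
    for i in range(n_inputs):
        # A_B^i: column i taken from B, all other columns frozen from A.
        ABi = [a[:i] + [b[i]] + a[i + 1:] for a, b in zip(A, B)]
        yABi = [model(x) for x in ABi]
        Vi = sum(yb * (yabi - ya)
                 for yb, yabi, ya in zip(yB, yABi, yA)) / n_samples
        S.append(Vi / var)
    return S

# Toy surrogate in which the first input dominates the output variance,
# much as preload dominated flow in the study above.
S = sobol_first_order(lambda x: 4.0 * x[0] + 1.0 * x[1], 2)
print([round(s, 2) for s in S])  # roughly [0.94, 0.06]
```

The point of the surrogate is that the many thousands of `model` evaluations this estimator needs cost milliseconds instead of CPU-hours of CFD.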
Affiliation(s)
- Elias Karabelas
- Institute of Mathematics and Scientific Computing, University of Graz, Austria
- Stefano Longobardi
- Cardiac Electromechanics Research Group, School of Biomedical Engineering and Imaging Sciences, King’s College London, U.K.
- Jana Fuchsberger
- Institute of Mathematics and Scientific Computing, University of Graz, Austria
- Orod Razeghi
- Research IT Services Department, University College London, U.K.
- Cristobal Rodero
- Cardiac Electromechanics Research Group, School of Biomedical Engineering and Imaging Sciences, King’s College London, U.K.
- Marina Strocchi
- Cardiac Electromechanics Research Group, School of Biomedical Engineering and Imaging Sciences, King’s College London, U.K.
- Ronak Rajani
- Department of Adult Echocardiography, Guy’s and St Thomas’ Hospitals NHS Foundation Trust, U.K.
- Gundolf Haase
- Institute of Mathematics and Scientific Computing, University of Graz, Austria
- Gernot Plank
- Gottfried Schatz Research Center (for Cell Signaling, Metabolism and Aging), Division Biophysics, Medical University of Graz, Austria
- Steven Niederer
- Cardiac Electromechanics Research Group, School of Biomedical Engineering and Imaging Sciences, King’s College London, SE1 7EH London, U.K.
4
Abstract
Nonlinear differential equations rarely admit closed-form solutions, thus requiring numerical time-stepping algorithms to approximate solutions. Further, many systems characterized by multiscale physics exhibit dynamics over a vast range of timescales, making numerical integration expensive. In this work, we develop a hierarchy of deep neural network time-steppers to approximate the dynamical system flow map over a range of timescales. The model is purely data-driven, enabling accurate and efficient numerical integration and forecasting. Similar ideas can be used to couple neural network-based models with classical numerical time-steppers. Our hierarchical time-stepping scheme provides advantages over current time-stepping algorithms, including (i) capturing a range of timescales, (ii) improved accuracy in comparison with leading neural network architectures, (iii) efficiency in long-time forecasting due to explicit training of slow timescale dynamics, and (iv) a flexible framework that is parallelizable and may be integrated with standard numerical time-stepping algorithms. The method is demonstrated on numerous nonlinear dynamical systems, including the Van der Pol oscillator, the Lorenz system, the Kuramoto-Sivashinsky equation, and fluid flow past a cylinder; audio and video signals are also explored. On the sequence-generation examples, we benchmark our algorithm against state-of-the-art methods, such as LSTM, reservoir computing, and clockwork RNN. This article is part of the theme issue 'Data-driven prediction in dynamical systems'.
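The core idea, composing flow maps trained at different strides so long horizons need few stepper applications, can be illustrated with exact flow maps of a linear ODE standing in for the trained networks. This is a sketch under invented assumptions (the time step, the number of levels, and the binary-decomposition scheduling are mine, not the paper's):

```python
import math

DT = 0.01   # finest time step
LEVELS = 8  # steppers for strides DT * 2**k, k = 0..7

# Stand-ins for trained neural-network flow maps of dx/dt = -x:
# stepper k advances the state by DT * 2**k in a single application.
steppers = [lambda x, k=k: math.exp(-DT * 2 ** k) * x for k in range(LEVELS)]

def advance(x, n_steps):
    """Advance by n_steps * DT using the coarsest strides available.

    A binary decomposition of n_steps lets the hierarchy cover the
    horizon in O(log n) stepper applications instead of n fine steps.
    """
    applications = 0
    for k in reversed(range(LEVELS)):
        while n_steps >= 2 ** k:
            x = steppers[k](x)
            n_steps -= 2 ** k
            applications += 1
    return x, applications

x, used = advance(1.0, 255)  # horizon of 2.55 time units
print(used, round(x, 6))  # 8 applications instead of 255 fine steps
```

With learned (inexact) steppers the same scheduling also limits error accumulation, since far fewer maps are composed over a long forecast.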
Affiliation(s)
- Yuying Liu
- Department of Applied Mathematics, University of Washington, Seattle, WA 98105, USA
- J. Nathan Kutz
- Department of Mechanical Engineering, University of Washington, Seattle, WA 98105, USA
- Steven L. Brunton
- Department of Mechanical Engineering, University of Washington, Seattle, WA 98105, USA
5
Fletcher AG, Osborne JM. Seven challenges in the multiscale modeling of multicellular tissues. WIREs Mech Dis 2022; 14:e1527. PMID: 35023326; DOI: 10.1002/wsbm.1527.
Abstract
The growth and dynamics of multicellular tissues involve tightly regulated and coordinated morphogenetic cell behaviors, such as shape changes, movement, and division, which are governed by subcellular machinery and coupled through short- and long-range signals. A key challenge in the fields of developmental biology, tissue engineering, and regenerative medicine is to understand how relationships between scales produce emergent tissue-scale behaviors. Recent advances in molecular biology, live-imaging, and ex vivo techniques have revolutionized our ability to study these processes experimentally. To fully leverage these techniques and obtain a more comprehensive understanding of the causal relationships underlying tissue dynamics, computational modeling approaches are increasingly spanning multiple spatial and temporal scales, coupling cell shape, growth, mechanics, and signaling. Yet such models remain challenging: modeling at each scale requires different areas of technical skill, while integration across scales necessitates the solution of novel mathematical and computational problems. This review aims to summarize recent progress in multiscale modeling of multicellular tissues and to highlight ongoing challenges associated with the construction, implementation, interrogation, and validation of such models. This article is categorized under: Reproductive System Diseases > Computational Models; Metabolic Diseases > Computational Models; Cancer > Computational Models.
Affiliation(s)
- Alexander G Fletcher
- School of Mathematics and Statistics, University of Sheffield, Sheffield, UK
- Bateson Centre, University of Sheffield, Sheffield, UK
- James M Osborne
- School of Mathematics and Statistics, University of Melbourne, Parkville, Victoria, Australia
6
Abstract
Physics-Informed Neural Networks (PINNs) are neural networks that encode the governing equations of a problem, such as Partial Differential Equations (PDEs), as part of the network. PINNs have emerged as an essential new tool for solving various challenging problems, including computing linear systems arising from PDEs, a task for which several traditional methods exist. In this work, we focus first on evaluating the potential of PINNs as linear solvers in the case of the Poisson equation, an omnipresent equation in scientific computing. We characterize PINN linear solvers in terms of accuracy and performance under different network configurations (depth, activation functions, input data set distribution). We highlight the critical role of transfer learning. Our results show that low-frequency components of the solution converge quickly as an effect of the F-principle, whereas an accurate solution of the high frequencies requires an exceedingly long time. To address this limitation, we propose integrating PINNs into traditional linear solvers. We show that this integration leads to new solvers whose accuracy and performance are on par with high-performance solvers such as the PETSc conjugate gradient linear solvers. Overall, while accuracy and computational performance still limit the direct use of PINN linear solvers, hybrid strategies combining traditional linear solver approaches with emerging deep-learning techniques are among the most promising routes to a new class of linear solvers.
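For context, the classical baseline mentioned above is a Krylov method. A minimal conjugate gradient solve of the discretized 1D Poisson problem looks roughly like this; it is a generic textbook sketch (names, grid size, and tolerance are mine), not the paper's or PETSc's implementation:

```python
def cg(matvec, b, tol=1e-10, max_iter=1000):
    """Plain conjugate gradient for a symmetric positive-definite system."""
    x = [0.0] * len(b)
    r = b[:]              # residual b - A x, with x = 0
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new ** 0.5 < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# 1D Poisson problem -u'' = 1 on (0, 1), u(0) = u(1) = 0,
# discretized with the standard second-order stencil.
n = 63
h = 1.0 / (n + 1)

def poisson_matvec(v):
    out = []
    for i in range(n):
        left = v[i - 1] if i > 0 else 0.0
        right = v[i + 1] if i < n - 1 else 0.0
        out.append((2.0 * v[i] - left - right) / h ** 2)
    return out

u = cg(poisson_matvec, [1.0] * n)
# The exact solution is u(x) = x(1 - x)/2; check the midpoint x = 0.5.
print(round(u[n // 2], 6))  # → 0.125
```

The hybrid strategy in the abstract amounts to letting a PINN supply the smooth, low-frequency part of `u` quickly and leaving the high-frequency correction to an iteration like this one.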
7
Jay C, Haines R, Katz DS, Carver JC, Gesing S, Brandt SR, Howison J, Dubey A, Phillips JC, Wan H, Turk MJ. The challenges of theory-software translation. F1000Res 2020; 9:1192. PMID: 33214878; PMCID: PMC7656273; DOI: 10.12688/f1000research.25561.1.
Abstract
Background: Software is now ubiquitous within research. In addition to the general challenges common to all software development projects, research software must also represent, manipulate, and provide data for complex theoretical constructs. Ensuring this process of theory-software translation is robust is essential to maintaining the integrity of the science resulting from it, and yet there has been little formal recognition or exploration of the challenges associated with it. Methods: We thematically analyse the outputs of the discussion sessions at the Theory-Software Translation Workshop 2019, where academic researchers and research software engineers from a variety of domains, and with particular expertise in high performance computing, explored the process of translating between scientific theory and software. Results: We identify a wide range of challenges to implementing scientific theory in research software and using the resulting data and models for the advancement of knowledge. We categorise these within the emergent themes of design, infrastructure, and culture, and map them to associated research questions. Conclusions: Systematically investigating how software is constructed and its outputs used within science has the potential to improve the robustness of research software and accelerate progress in its development. We propose that this issue be examined within a new research area of theory-software translation, which would aim to significantly advance both knowledge and scientific practice.
Affiliation(s)
- Daniel S Katz
- University of Illinois at Urbana-Champaign, Urbana, USA
- Hui Wan
- Pacific Northwest National Laboratory, Richland, USA
8
Abstract
Big data and complex analysis workflows (pipelines) are now commonplace in data-driven sciences such as bioinformatics. A large number of computational tools are available for data analysis, and many workflow management systems for piecing such tools together into data analysis pipelines have been developed. For example, more than 50 computational tools for read mapping are available, representing a large amount of duplicated effort. Furthermore, it is unclear whether these tools are correct, and only a few have a user base large enough to have encountered and reported most of the potential problems. Bringing together many largely untested tools in a computational pipeline is bound to produce unpredictable results. Yet, this is the current state. While data analysis is presently performed on personal computers, workstations, and clusters, development and analysis will increasingly shift to the cloud, and none of the current workflow management systems is ready for this transition. This presents the opportunity to build a new system that overcomes current duplications of effort, introduces proper testing, allows for development and analysis in public and private clouds, and includes reporting features leading to interactive documents.
Affiliation(s)
- Jens Allmer
- Hochschule Ruhr West, University of Applied Sciences, Medical Informatics and Bioinformatics, 45407 Mülheim an der Ruhr, Germany
9
Abstract
We describe a project-based introduction to reproducible and collaborative neuroimaging analysis. Traditional teaching on neuroimaging usually consists of a series of lectures that emphasize the big picture rather than the foundations on which the techniques are based. The lectures are often paired with practical workshops in which students run imaging analyses using the graphical interface of specific neuroimaging software packages. Our experience suggests that this combination leaves the student with a superficial understanding of the underlying ideas, and an informal, inefficient, and inaccurate approach to analysis. To address these problems, we based our course around a substantial open-ended group project. This allowed us to teach: (a) computational tools to ensure computationally reproducible work, such as the Unix command line, structured code, version control, automated testing, and code review and (b) a clear understanding of the statistical techniques used for a basic analysis of a single run in an MR scanner. The emphasis we put on the group project showed the importance of standard computational tools for accuracy, efficiency, and collaboration. The projects were broadly successful in engaging students in working reproducibly on real scientific questions. We propose that a course on this model should be the foundation for future programs in neuroimaging. We believe it will also serve as a model for teaching efficient and reproducible research in other fields of computational science.
Affiliation(s)
- K. Jarrod Millman
- Division of Biostatistics, University of California, Berkeley, Berkeley, CA, United States
- Berkeley Institute for Data Science, University of California, Berkeley, Berkeley, CA, United States
- Matthew Brett
- College of Life and Environmental Sciences, University of Birmingham, Birmingham, United Kingdom
- Ross Barnowski
- Applied Nuclear Physics Program, Lawrence Berkeley National Laboratory, Berkeley, CA, United States
10
Abstract
With Next Generation Sequencing data being routinely used, evolutionary biology is transforming into a computational science. Thus, researchers have to rely on a growing number of increasingly complex software tools. All widely used core tools in the field have grown considerably in terms of the number of features as well as lines of code, and consequently also with respect to software complexity. A topic that has received little attention is the software engineering quality of widely used core analysis tools. Software developers appear to rarely assess the quality of their code, and this can have negative consequences for end-users. To this end, we assessed the code quality of 16 highly cited and compute-intensive tools mainly written in C/C++ (e.g., MrBayes, MAFFT, SweepFinder, etc.) and Java (BEAST) from the broader area of evolutionary biology that are routinely used in current data analysis pipelines. Because the software engineering quality of the tools we analyzed is rather unsatisfying, we provide a list of best practices for improving the quality of existing tools and list techniques that can be deployed for developing reliable, high-quality scientific software from scratch. Finally, we also discuss journal and science policy as well as, more importantly, funding issues that need to be addressed to improve software engineering quality and to ensure support for developing new and maintaining existing software. Our intention is to raise the awareness of the community regarding software engineering quality issues and to emphasize the substantial lack of funding for scientific software development.
Affiliation(s)
- Diego Darriba
- Scientific Computing Group, Heidelberg Institute for Theoretical Studies, Heidelberg, Germany
- Tomáš Flouri
- Scientific Computing Group, Heidelberg Institute for Theoretical Studies, Heidelberg, Germany
- Alexandros Stamatakis
- Scientific Computing Group, Heidelberg Institute for Theoretical Studies, Heidelberg, Germany
- Institute of Theoretical Informatics, Karlsruhe Institute of Technology, Karlsruhe, Germany
11
Affiliation(s)
- Eilif Muller
- Center for Brain Simulation, Ecole Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- James A Bednar
- Institute for Adaptive and Neural Computation, University of Edinburgh, Edinburgh, UK
- Markus Diesmann
- Jülich Research Center and Jülich Aachen Research Alliance, Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6), Jülich, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany; Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Marc-Oliver Gewaltig
- Center for Brain Simulation, Ecole Polytechnique Fédérale de Lausanne, Geneva, Switzerland
- Michael Hines
- Department of Neurobiology, Yale University, New Haven, CT, USA
- Andrew P Davison
- Neuroinformatics group, Unité de Neurosciences, Information et Complexité, Centre National de la Recherche Scientifique, Gif-sur-Yvette, France
12
Vincent T, Badillo S, Risser L, Chaari L, Bakhous C, Forbes F, Ciuciu P. Flexible multivariate hemodynamics fMRI data analyses and simulations with PyHRF. Front Neurosci 2014; 8:67. PMID: 24782699; PMCID: PMC3989728; DOI: 10.3389/fnins.2014.00067.
Abstract
As part of fMRI data analysis, the pyhrf package provides a set of tools for addressing the two main issues involved in intra-subject fMRI data analysis: (1) the localization of cerebral regions that elicit evoked activity and (2) the estimation of activation dynamics, also known as Hemodynamic Response Function (HRF) recovery. To tackle these two problems, pyhrf implements the Joint Detection-Estimation framework (JDE), which recovers parcel-level HRFs and embeds an adaptive spatio-temporal regularization scheme of activation maps. With respect to the sole detection issue (1), the classical voxelwise GLM procedure is also available through nipy, whereas Finite Impulse Response (FIR) and temporally regularized FIR models are concerned with HRF estimation (2) and are specifically implemented in pyhrf. Several parcellation tools are also integrated, such as spatial and functional clustering. Parcellations may be used for spatial averaging prior to FIR/RFIR analysis or to specify the spatial support of the HRF estimates in the JDE approach. These analysis procedures can be applied either to volume-based data sets or to data projected onto the cortical surface. For validation purposes, this package is shipped with artificial and real fMRI data sets, which are used in this paper to compare the outcome of the different available approaches. The artificial fMRI data generator is also described to illustrate how to simulate different activation configurations, HRF shapes, or nuisance components. To cope with the high computational needs for inference, pyhrf handles distributed computing by exploiting cluster units as well as multi-core machines. Finally, a dedicated viewer is presented, which handles n-dimensional images and provides suitable features to explore whole brain hemodynamics (time series, maps, ROI mask overlay).
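The kind of artificial-data generation described here rests on convolving a stimulus time course with an HRF. The sketch below uses a toy gamma-shaped HRF and invented parameters purely for illustration; it is not pyhrf's actual API or kernel.

```python
import math

def gamma_hrf(tr=1.0, duration=25.0, shape=6.0):
    """Toy gamma-shaped hemodynamic response function, peaking around 5 s."""
    ts = [i * tr for i in range(int(duration / tr))]
    h = [t ** (shape - 1) * math.exp(-t) / math.gamma(shape) for t in ts]
    s = sum(h)
    return [v / s for v in h]  # unit-sum kernel

def convolve(stimulus, kernel):
    """Discrete causal convolution, truncated to the stimulus length."""
    out = []
    for i in range(len(stimulus)):
        acc = 0.0
        for j, k in enumerate(kernel):
            if i - j >= 0:
                acc += k * stimulus[i - j]
        out.append(acc)
    return out

# Single 5-scan 'on' block starting at scan 10, TR = 1 s.
stimulus = [1.0 if 10 <= i < 15 else 0.0 for i in range(60)]
bold = convolve(stimulus, gamma_hrf())
peak_scan = max(range(len(bold)), key=lambda i: bold[i])
print(peak_scan)  # peaks several scans after block onset
```

HRF recovery, the estimation problem (2) above, is the inverse of this forward model: given `bold` and `stimulus`, infer the kernel.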
Affiliation(s)
- Thomas Vincent
- INRIA, MISTIS, LJK, Grenoble University, Grenoble, France; UNATI/INRIA Saclay, Parietal, CEA/DSV/I2BM NeuroSpin center, Gif-sur-Yvette, France
- Solveig Badillo
- UNATI/INRIA Saclay, Parietal, CEA/DSV/I2BM NeuroSpin center, Gif-sur-Yvette, France; INRIA, Parietal, NeuroSpin center, Gif-sur-Yvette, France
- Laurent Risser
- UNATI/INRIA Saclay, Parietal, CEA/DSV/I2BM NeuroSpin center, Gif-sur-Yvette, France; CNRS, UMR 5219, Statistics and Probability Team, Toulouse Mathematics Institute, Toulouse, France
- Lotfi Chaari
- INRIA, MISTIS, LJK, Grenoble University, Grenoble, France; INP-ENSEEIHT/CNRS UMR 5505, TCI, IRIT, University of Toulouse, Toulouse, France
- Philippe Ciuciu
- UNATI/INRIA Saclay, Parietal, CEA/DSV/I2BM NeuroSpin center, Gif-sur-Yvette, France; INRIA, Parietal, NeuroSpin center, Gif-sur-Yvette, France
13
Wilcox C, Strout MM, Bieman JM. Tool support for software lookup table optimization. Sci Program 2011; 19:213-229. PMID: 24532963; PMCID: PMC3922221; DOI: 10.3233/spr-2011-0329.
Abstract
A number of scientific applications are performance-limited by expressions that repeatedly call costly elementary functions. Lookup table (LUT) optimization accelerates the evaluation of such functions by reusing previously computed results. LUT methods can speed up applications that tolerate an approximation of function results, thereby achieving a high level of fuzzy reuse. One problem with LUT optimization is the difficulty of controlling the tradeoff between performance and accuracy. The current practice of manual LUT optimization adds programming effort by requiring extensive experimentation to make this tradeoff, and such hand tuning can obfuscate algorithms. In this paper we describe a methodology and tool implementation to improve the application of software LUT optimization. Our Mesa tool implements source-to-source transformations for C or C++ code to automate the tedious and error-prone aspects of LUT generation such as domain profiling, error analysis, and code generation. We evaluate Mesa with five scientific applications. Our results show a performance improvement of 3.0× and 6.9× for two molecular biology algorithms, 1.4× for a molecular dynamics program, 2.1× to 2.8× for a neural network application, and 4.6× for a hydrology calculation. We find that Mesa enables LUT optimization with more control over accuracy and less effort than manual approaches.
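The basic mechanism being automated, precompute a function on a grid, then interpolate, can be sketched as follows. This is a hand-written illustration with an accuracy bound made explicit, not Mesa's generated code; the class name and table size are invented.

```python
import math

class SinTable:
    """Linear-interpolation lookup table for sin on [0, 2*pi].

    Table size trades memory for accuracy: the maximum error of linear
    interpolation of a smooth function is bounded by step**2 / 8 times
    the largest second derivative (which is 1 for sin).
    """
    def __init__(self, size=1024):
        self.step = 2.0 * math.pi / size
        self.table = [math.sin(i * self.step) for i in range(size + 1)]

    def __call__(self, x):
        x = x % (2.0 * math.pi)
        i, frac = divmod(x / self.step, 1.0)
        i = int(i)
        return self.table[i] + frac * (self.table[i + 1] - self.table[i])

lut = SinTable()
worst = max(abs(lut(x / 1000.0) - math.sin(x / 1000.0)) for x in range(6284))
print(worst < (lut.step ** 2) / 8 + 1e-12)  # → True (within the h^2/8 bound)
```

Controlling this error-versus-speed tradeoff by hand (choosing table size, domain, and interpolation order per call site) is exactly the tedious step the paper's tool automates.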
14
Abstract
The neural simulation tool NEST (http://www.nest-initiative.org) is a simulator for heterogeneous networks of point neurons or neurons with a small number of compartments. It aims at simulations of large neural systems with more than 10^4 neurons and 10^7 to 10^9 synapses. NEST is implemented in C++ and can be used on a large range of architectures from single-core laptops over multi-core desktop computers to super-computers with thousands of processor cores. Python (http://www.python.org) is a modern programming language that has recently received considerable attention in Computational Neuroscience. Python is easy to learn and has many extension modules for scientific computing (e.g. http://www.scipy.org). In this contribution we describe PyNEST, the new user interface to NEST. PyNEST combines NEST's efficient simulation kernel with the simplicity and flexibility of Python. Compared to NEST's native simulation language SLI, PyNEST makes it easier to set up simulations, generate stimuli, and analyze simulation results. We describe how PyNEST connects NEST and Python and how it is implemented. With a number of examples, we illustrate how it is used.
15
Boisvert RF, Donahue MJ, Lozier DW, McMichael R, Rust BW. Mathematics and Measurement. J Res Natl Inst Stand Technol 2001; 106:293-313. PMID: 27500024; PMCID: PMC4865281; DOI: 10.6028/jres.106.011.
Abstract
In this paper we describe the role that mathematics plays in measurement science at NIST. We first survey the history behind NIST's current work in this area, starting with the NBS Math Tables project of the 1930s. We then provide examples of more recent efforts in the application of mathematics to measurement science, including the solution of ill-posed inverse problems, characterization of the accuracy of software for micromagnetic modeling, and in the development and dissemination of mathematical reference data. Finally, we comment on emerging issues in measurement science to which mathematicians will devote their energies in coming years.