Eriksson O, Bhalla US, Blackwell KT, Crook SM, Keller D, Kramer A, Linne ML, Saudargienė A, Wade RC, Hellgren Kotaleski J. Combining hypothesis- and data-driven neuroscience modeling in FAIR workflows.
eLife 2022;11:e69013. PMID: 35792600; PMCID: PMC9259018; DOI: 10.7554/elife.69013
[Received: 04/06/2021] [Accepted: 05/13/2022] [Indexed: 12/22/2022]
Abstract
Modeling in neuroscience occurs at the intersection of different points of view and approaches. Typically, hypothesis-driven modeling brings a question into focus so that a model is constructed to investigate a specific hypothesis about how the system works or why certain phenomena are observed. Data-driven modeling, on the other hand, follows a more unbiased approach, with model construction informed by the computationally intensive use of data. At the same time, researchers employ models at different biological scales and at different levels of abstraction. Combining these models while validating them against experimental data increases understanding of the multiscale brain. However, a lack of interoperability, transparency, and reusability of both models and the workflows used to construct them creates barriers for the integration of models representing different biological scales and built using different modeling philosophies. We argue that the same imperatives that drive resources and policy for data - such as the FAIR (Findable, Accessible, Interoperable, Reusable) principles - also support the integration of different modeling approaches. The FAIR principles require that data be shared in formats that are Findable, Accessible, Interoperable, and Reusable. Applying these principles to models and modeling workflows, as well as the data used to constrain and validate them, would allow researchers to find, reuse, question, validate, and extend published models, regardless of whether they are implemented phenomenologically or mechanistically, as a few equations or as a multiscale, hierarchical system. To illustrate these ideas, we use a classical synaptic plasticity model, the Bienenstock-Cooper-Munro rule, as an example due to its long history, different levels of abstraction, and implementation at many scales.
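For readers unfamiliar with the example, the BCM rule in its standard textbook form (not a formulation taken from this article) changes a synaptic weight as dw/dt = η·x·y·(y − θ), where x is presynaptic activity, y postsynaptic activity, and θ a "sliding" modification threshold that tracks the recent average of y². The following minimal sketch Euler-integrates that form for a single linear synapse; all parameter values and the linear response y = w·x are illustrative assumptions, chosen only to show the structure of the rule.

```python
import numpy as np

def simulate_bcm(x, w0=0.5, theta0=1.0, eta=0.01, tau_theta=50.0, dt=0.1):
    """Euler-integrate the textbook BCM rule for one synapse.

    dw/dt        = eta * x * y * (y - theta)
    dtheta/dt    = (y**2 - theta) / tau_theta   (sliding threshold)
    All parameters here are illustrative, not from the article.
    """
    w, theta = w0, theta0
    ws, thetas = [], []
    for xt in x:
        y = w * xt                       # assumed linear postsynaptic response
        w += dt * eta * xt * y * (y - theta)
        w = max(w, 0.0)                  # keep the weight non-negative
        theta += dt * (y**2 - theta) / tau_theta
        ws.append(w)
        thetas.append(theta)
    return np.array(ws), np.array(thetas)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 2.0, size=5000)     # random presynaptic activity
ws, thetas = simulate_bcm(x)
```

Because θ rises with the square of recent postsynaptic activity, potentiation is self-limiting: as y grows, the threshold catches up and the same input starts to depress the weight, which is why the rule can be implemented at very different levels of abstraction.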