1
Rule JS, Piantadosi ST, Cropper A, Ellis K, Nye M, Tenenbaum JB. Symbolic metaprogram search improves learning efficiency and explains rule learning in humans. Nat Commun 2024; 15:6847. PMID: 39127796. DOI: 10.1038/s41467-024-50966-x.
Abstract
Throughout their lives, humans seem to learn a variety of rules for things like applying category labels, following procedures, and explaining causal relationships. These rules are often algorithmically rich but are nonetheless acquired with minimal data and computation. Symbolic models based on program learning successfully explain rule learning in many domains, but performance degrades quickly as program complexity increases. It remains unclear how to scale symbolic rule-learning methods to model human performance in challenging domains. Here we show that symbolic search over the space of metaprograms (programs that revise programs) dramatically improves learning efficiency. On a behavioral benchmark of 100 algorithmically rich rules, this approach fits human learning more accurately than alternative models while also using orders of magnitude less search. The computation required to match median human performance is consistent with conservative estimates of human thinking time. Our results suggest that metaprogram-like representations may help human learners to efficiently acquire rules.
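The core idea of searching over programs that revise programs can be conveyed with a minimal sketch. The toy domain, edit operators, and function names below are invented for illustration and are not the paper's implementation; the paper's metaprograms and benchmark are far richer.

```python
# A toy sketch of symbolic metaprogram search (illustrative only):
# object-level programs are expression trees, and metaprograms are
# short sequences of edits that revise a program into a better one.
from itertools import product

def run(prog, x):
    """Evaluate a toy expression tree on input x."""
    if prog == 'x':
        return x
    op, arg = prog
    v = run(arg, x)
    return v + 1 if op == '+1' else v * 2

# Edits: each revises a whole program into a slightly different one.
def wrap_inc(p): return ('+1', p)    # revise: add 1 to the output
def wrap_dbl(p): return ('*2', p)    # revise: double the output
EDITS = [wrap_inc, wrap_dbl]

def metaprogram_search(examples, max_len=4):
    """Search the space of edit sequences rather than raw programs."""
    for length in range(max_len + 1):
        for edits in product(EDITS, repeat=length):
            prog = 'x'                       # start from the identity
            for edit in edits:
                prog = edit(prog)            # apply the metaprogram
            if all(run(prog, x) == y for x, y in examples):
                return prog
    return None

# Learn f(x) = 2x + 1 from three input-output pairs.
print(metaprogram_search([(1, 3), (2, 5), (3, 7)]))  # ('+1', ('*2', 'x'))
```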
Affiliation(s)
- Joshua S Rule
- Psychology, University of California, Berkeley, Berkeley, CA, 94704, USA.
- Kevin Ellis
- Computer Science, Cornell University, Ithaca, NY, 14850, USA
- Maxwell Nye
- Adept AI Labs, San Francisco, CA, 94110, USA
- Joshua B Tenenbaum
- Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
2
Depeweg S, Rothkopf CA, Jäkel F. Solving Bongard Problems With a Visual Language and Pragmatic Constraints. Cogn Sci 2024; 48:e13432. PMID: 38700123. DOI: 10.1111/cogs.13432.
Abstract
More than 50 years ago, Bongard introduced 100 visual concept learning problems as a challenge for artificial vision systems. These problems are now known as Bongard problems. Although they are well known in cognitive science and artificial intelligence, very little progress has been made toward building systems that can solve a substantial subset of them. In the system presented here, visual features are extracted through image processing and then translated into a symbolic visual vocabulary. We introduce a formal language for representing compositional visual concepts built from this vocabulary. Using this language and Bayesian inference, concepts can be induced from the examples that are provided in each problem. We find reasonable agreement between the concepts with high posterior probability and the solutions formulated by Bongard himself for a subset of 35 problems. While this approach is far from solving Bongard problems as humans do, it does considerably better than previous approaches. We discuss the issues we encountered while developing this system and their continuing relevance for understanding visual cognition. For instance, unlike in other concept-learning problems, the examples in Bongard problems are not random; instead, they are carefully chosen to ensure that the concept can be induced, and we found it helpful to take the resulting pragmatic constraints into account.
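The Bayesian induction step can be illustrated with a minimal sketch. The toy vocabulary and candidate concepts below are invented stand-ins, and the size-principle likelihood is used here as one common way to encode the pragmatic assumption that examples are chosen from inside the intended concept; the paper's language and constraints are much richer.

```python
# A hedged sketch of Bayesian concept induction with a size-principle
# likelihood. The domain and concepts are toy stand-ins, not the paper's.

DOMAIN = [  # every object describable in the toy visual vocabulary
    {'shape': s, 'filled': f}
    for s in ('circle', 'triangle', 'square')
    for f in (True, False)
]

CONCEPTS = {  # candidate compositional concepts (name -> predicate)
    'any':           lambda o: True,
    'circle':        lambda o: o['shape'] == 'circle',
    'filled':        lambda o: o['filled'],
    'filled-circle': lambda o: o['filled'] and o['shape'] == 'circle',
}

def posterior(examples, prior=None):
    """P(concept | examples) with likelihood (1/|extension|)^n.
    The size principle stands in for the pragmatic constraint that
    positive examples are drawn from inside the intended concept."""
    prior = prior or {c: 1.0 for c in CONCEPTS}
    scores = {}
    for name, pred in CONCEPTS.items():
        extension = [o for o in DOMAIN if pred(o)]
        if all(pred(e) for e in examples):   # consistent with the data
            scores[name] = prior[name] * (1.0 / len(extension)) ** len(examples)
        else:
            scores[name] = 0.0
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

obs = [{'shape': 'circle', 'filled': True}] * 2   # two filled circles
print(posterior(obs))   # 'filled-circle' dominates under the size principle
```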
Affiliation(s)
- Constantin A Rothkopf
- Centre for Cognitive Science & Institute of Psychology, Technische Universität Darmstadt
- Frankfurt Institute for Advanced Studies, Frankfurt am Main
- Frank Jäkel
- Centre for Cognitive Science & Institute of Psychology, Technische Universität Darmstadt
3
Zhou Y, Feinman R, Lake BM. Compositional diversity in visual concept learning. Cognition 2024; 244:105711. PMID: 38224649. DOI: 10.1016/j.cognition.2023.105711.
Abstract
Humans leverage compositionality to efficiently learn new concepts, understanding how familiar parts can combine to form novel objects. In contrast, popular computer vision models struggle to make the same types of inferences, requiring more data and generalizing less flexibly than people do. Here, we study these distinctively human abilities across a range of different types of visual composition, examining how people classify and generate "alien figures" with rich relational structure. We also develop a Bayesian program induction model which searches for the best programs for generating the candidate visual figures, utilizing a large program space containing different compositional mechanisms and abstractions. In few-shot classification tasks, we find that people and the program induction model can make a range of meaningful compositional generalizations, with the model providing a strong account of the experimental data as well as interpretable parameters that reveal human assumptions about the factors invariant to category membership (here, rotation and changing part attachment). In few-shot generation tasks, both people and the models are able to construct compelling novel examples, with people behaving in additional structured ways beyond the model's capabilities, e.g., making choices that complete a set or reconfigure existing parts in new ways. To capture these additional behavioral patterns, we develop an alternative model based on neuro-symbolic program induction: this model also composes new concepts from existing parts yet, distinctively, it utilizes neural network modules to capture residual statistical structure. Together, our behavioral and computational findings show how people and models can produce a variety of compositional behavior when classifying and generating visual objects.
Affiliation(s)
- Yanli Zhou
- Center for Data Science, New York University, United States of America.
- Reuben Feinman
- Center for Neural Science, New York University, United States of America.
- Brenden M Lake
- Center for Data Science, New York University, United States of America; Department of Psychology, New York University, United States of America.
4
Piantadosi ST. The algorithmic origins of counting. Child Dev 2023; 94:1472-1490. PMID: 37984061. DOI: 10.1111/cdev.14031.
Abstract
The study of how children learn numbers has yielded one of the most productive research programs in cognitive development, spanning empirical and computational methods, as well as nativist and empiricist philosophies. This paper provides a tutorial on how to think computationally about learning models in a domain like number, where learners take finite data and go far beyond what they directly observe or perceive. To illustrate, this paper then outlines a model which acquires a counting procedure using observations of sets and words, extending the proposal of Piantadosi et al. (2012). This new version of the model responds to several critiques of the original work and outlines an approach which is likely appropriate for acquiring further aspects of mathematics.
5
Carcassi F, Szymanik J. The Boolean Language of Thought is recoverable from learning data. Cognition 2023; 239:105541. PMID: 37473608. DOI: 10.1016/j.cognition.2023.105541.
Abstract
According to the Language of Thought Hypothesis (LoTH), an influential account in philosophy and cognitive science, human cognition is underlain by symbolic reasoning in a formal language. On this account, concepts are expressions in a Language of Thought (LoT), deduction is syntactic manipulation in this language, and learning is an inference of expressions in this language from data. This picture raises the question of which LoT humans have, and how to infer it from behavior. In this paper, we pave the way towards answering this question by approaching a more fundamental one: to what extent is it possible in principle to recover the human LoT from experimental data? To answer this, we focus on the fragment of the LoT that is concerned with representing Boolean categories and simulate the recovery of the Boolean LoT from category learning experiments. Our findings show that, in principle, the vast majority of Boolean LoTs can be accurately recovered from experimental data. However, we find that this depends crucially on the experimental design employed. Moreover, we find evidence that LoTs with fewer operators can be recovered from category learning data more quickly.
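The logic of the recovery can be sketched in a few lines: each candidate LoT (here, just an operator set) assigns every Boolean category a minimal formula length, and because different operator sets disagree about which categories are simple, behavioral difficulty data can discriminate between them. The enumeration below is a toy illustration, not the paper's simulation code.

```python
# A minimal sketch of why LoTs are recoverable: operator sets predict
# different minimal formula lengths (a proxy for learning difficulty)
# for the same Boolean categories. Toy code over two features a, b.
from itertools import product

def min_lengths(ops, max_len=6):
    """Map each Boolean concept (a truth table over (a, b) in {0,1}^2)
    to the size of its shortest formula in the given operator set."""
    inputs = list(product([0, 1], repeat=2))
    # seed with the atomic formulas a and b (length 1 each)
    found = {tuple(x[0] for x in inputs): 1,
             tuple(x[1] for x in inputs): 1}
    for _ in range(max_len):
        new = {}
        for t1, l1 in found.items():
            if 'not' in ops and l1 + 1 <= max_len:
                t = tuple(1 - v for v in t1)
                if t not in found:
                    new.setdefault(t, l1 + 1)
            for t2, l2 in found.items():
                for op in ops & {'and', 'or'}:
                    f = min if op == 'and' else max
                    t = tuple(f(u, v) for u, v in zip(t1, t2))
                    if t not in found and l1 + l2 + 1 <= max_len:
                        new.setdefault(t, l1 + l2 + 1)
        found.update(new)
    return found

full = min_lengths({'and', 'or', 'not'})
no_or = min_lengths({'and', 'not'})
# The two LoTs disagree on how hard 'a or b' is -- such signatures are
# what make the generating LoT recoverable from difficulty data.
t_or = (0, 1, 1, 1)   # truth table of 'a or b' over (0,0),(0,1),(1,0),(1,1)
print(full.get(t_or), no_or.get(t_or))   # 3 vs. 6
```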
Affiliation(s)
- Fausto Carcassi
- Department of Linguistics, University of Tübingen, Keplerstraße 2, 72074 Tübingen, Germany.
- Jakub Szymanik
- Center for Mind/Brain Sciences and Department of Information Engineering and Computer Science, Corso Bettini 31, 38068 Rovereto (TN), Italy.
6
Dubova M, Goldstone RL. Carving joints into nature: reengineering scientific concepts in light of concept-laden evidence. Trends Cogn Sci 2023; 27:656-670. PMID: 37173157. DOI: 10.1016/j.tics.2023.04.006.
Abstract
A new wave of proposals suggests that scientists must reassess scientific concepts in light of accumulated evidence. However, reengineering scientific concepts in light of data is challenging because scientific concepts affect the evidence itself in multiple ways. Among other possible influences, concepts (i) prime scientists to overemphasize within-concept similarities and between-concept differences; (ii) lead scientists to measure conceptually relevant dimensions more accurately; (iii) serve as units of scientific experimentation, communication, and theory-building; and (iv) affect the phenomena themselves. When looking for improved ways to carve nature at its joints, scholars must take the concept-laden nature of evidence into account to avoid entering a vicious circle in which concepts and evidence mutually substantiate each other.
Affiliation(s)
- Marina Dubova
- Cognitive Science Program, Indiana University, 1101 E. 10th Street, Bloomington, IN 47405, USA.
- Robert L Goldstone
- Cognitive Science Program, Indiana University, 1101 E. 10th Street, Bloomington, IN 47405, USA; Department of Psychological and Brain Sciences, Indiana University, 1101 E. 10th Street, Bloomington, IN 47405, USA
7
Quilty-Dunn J, Porot N, Mandelbaum E. The best game in town: The reemergence of the language-of-thought hypothesis across the cognitive sciences. Behav Brain Sci 2022; 46:e261. PMID: 36471543. DOI: 10.1017/s0140525x22002849.
Abstract
Mental representations remain the central posits of psychology after many decades of scrutiny. However, there is no consensus about the representational format(s) of biological cognition. This paper provides a survey of evidence from computational cognitive psychology, perceptual psychology, developmental psychology, comparative psychology, and social psychology, and concludes that one type of format that routinely crops up is the language of thought (LoT). We outline six core properties of LoTs: (i) discrete constituents; (ii) role-filler independence; (iii) predicate-argument structure; (iv) logical operators; (v) inferential promiscuity; and (vi) abstract content. These properties cluster together throughout cognitive science. Bayesian computational modeling, compositional features of object perception, complex infant and animal reasoning, and automatic, intuitive cognition in adults all implicate LoT-like structures. Instead of regarding the LoT as a relic of the previous century, researchers in cognitive science and philosophy of mind must take seriously the explanatory breadth of LoT-based architectures. We grant that the mind may harbor many formats and architectures, including iconic and associative structures as well as deep-neural-network-like architectures. However, as computational/representational approaches to the mind continue to advance, classical compositional symbolic structures, that is, LoTs, only prove more flexible and well-supported over time.
Affiliation(s)
- Jake Quilty-Dunn
- Department of Philosophy and Philosophy-Neuroscience-Psychology Program, Washington University in St. Louis, St. Louis, MO, USA. sites.google.com/site/jakequiltydunn/
- Nicolas Porot
- Africa Institute for Research in Economics and Social Sciences, Mohammed VI Polytechnic University, Rabat, Morocco. nicolasporot.com
- Eric Mandelbaum
- Departments of Philosophy and Psychology, The Graduate Center & Baruch College, CUNY, New York, NY, USA. ericmandelbaum.com
8
Fiser J, Lengyel G. Statistical Learning in Vision. Annu Rev Vis Sci 2022; 8.
Abstract
Vision and learning have long been considered to be two areas of research linked only distantly. However, recent developments in vision research have changed the conceptual definition of vision from a signal-evaluating process to a goal-oriented interpreting process, and this shift binds learning, together with the resulting internal representations, intimately to vision. In this review, we consider various types of learning (perceptual, statistical, and rule/abstract) associated with vision in the past decades and argue that they represent differently specialized versions of the fundamental learning process, which must be captured in its entirety when applied to complex visual processes. We show why the generalized version of statistical learning can provide the appropriate setup for such a unified treatment of learning in vision, what computational framework best accommodates this kind of statistical learning, and what plausible neural scheme could feasibly implement this framework. Finally, we list the challenges that the field of statistical learning faces in fulfilling the promise of being the right vehicle for advancing our understanding of vision in its entirety.
Affiliation(s)
- József Fiser
- Department of Cognitive Science, Center for Cognitive Computation, Central European University, Vienna 1100, Austria.
- Gábor Lengyel
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York 14627, USA
9
Baker N, Garrigan P, Kellman PJ. Constant curvature segments as building blocks of 2D shape representation. J Exp Psychol Gen 2021; 150:1556-1580. PMID: 33332142. PMCID: PMC8324180. DOI: 10.1037/xge0001007.
Abstract
How the visual system represents shape, and how shape representations might be computed by neural mechanisms, are fundamental and unanswered questions. Here, we investigated the hypothesis that 2-dimensional (2D) contour shapes are encoded structurally, as sets of connected constant curvature segments. We report three experiments investigating constant curvature segments as fundamental units of contour shape representations in human perception. Our results showed better performance in a path detection paradigm for constant curvature targets, as compared with locally matched targets that lacked this global regularity (Experiment 1), and that participants can learn to segment contours into constant curvature parts with different curvature values, but not into similarly different parts with linearly increasing curvatures (Experiment 2). We propose a neurally plausible model of contour shape representation based on constant curvature, built from oriented units known to exist in early cortical areas, and we confirmed the model's prediction that changes to the angular extent of a segment will be easier to detect than changes to relative curvature (Experiment 3). Together, these findings suggest the human visual system is specially adapted to detect and encode regions of constant curvature and support the notion that constant curvature segments are the building blocks from which abstract contour shape representations are composed.
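The representational claim is easy to make concrete: a contour's structural code is a short list of constant-curvature segments, and constant curvature means the tangent direction turns at a constant rate along the segment. The sketch below is purely illustrative and is not the authors' model.

```python
# An illustrative sketch (not the authors' model): encoding a 2D contour
# as connected constant-curvature segments and tracing out the points
# such a code describes. Angular extent = curvature * arc length.
import math

def trace(segments, step=0.05, x=0.0, y=0.0, heading=0.0):
    """Walk a contour encoded as (curvature k, arc length s) segments.
    Constant curvature k means the heading turns at a constant rate."""
    pts = [(x, y)]
    for k, s in segments:
        n = max(1, int(s / step))
        ds = s / n
        for _ in range(n):
            x += ds * math.cos(heading)
            y += ds * math.sin(heading)
            heading += k * ds        # constant turn rate = constant curvature
            pts.append((x, y))
    return pts

# Three building blocks: a straight edge (k = 0), a tight left arc,
# and a gentle right arc -- a compact structural code for the shape.
shape_code = [(0.0, 1.0), (2.0, math.pi / 2), (-0.5, 2.0)]
points = trace(shape_code)
print(len(points), points[-1])
```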
Affiliation(s)
- Nicholas Baker
- Department of Psychology, University of California, Los Angeles
10
Piantadosi ST. The computational origin of representation. Minds Mach (Dordr) 2021; 31:1-58. PMID: 34305318. PMCID: PMC8300595. DOI: 10.1007/s11023-020-09540-9.
Abstract
Each of our theories of mental representation provides some insight into how the mind works. However, these insights often seem incompatible, as the debates between symbolic, dynamical, emergentist, sub-symbolic, and grounded approaches to cognition attest. Mental representations, whatever they are, must share many features with each of our theories of representation, and yet there are few hypotheses about how a synthesis could be possible. Here, I develop a theory of the underpinnings of symbolic cognition that shows how sub-symbolic dynamics may give rise to higher-level cognitive representations of structures, systems of knowledge, and algorithmic processes. This theory implements a version of conceptual role semantics by positing an internal universal representation language in which learners may create mental models to capture dynamics they observe in the world. The theory formalizes one account of how truly novel conceptual content may arise, allowing us to explain how even elementary logical and computational operations may be learned from a more primitive basis. I provide an implementation that learns to represent a variety of structures, including logic, number, kinship trees, regular languages, context-free languages, domains of theories like magnetism, dominance hierarchies, list structures, quantification, and computational primitives like repetition, reversal, and recursion. This account is based on simple discrete dynamical processes that could be implemented in a variety of different physical or biological systems. In particular, I describe how the required dynamics can be directly implemented in a connectionist framework. The resulting theory provides an "assembly language" for cognition, where high-level theories of symbolic computation can be implemented in simple dynamics that themselves could be encoded in biologically plausible systems.
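The flavor of such an "assembly language" can be conveyed with combinatory logic, the minimal formalism on which the paper's implementation builds: two combinators, S and K, suffice for arbitrary symbolic computation. The reducer below is a toy illustration, not the paper's code.

```python
# A tiny combinatory-logic reducer, illustrative only. Terms are the
# symbols 'S' and 'K', free symbols like 'v', or (function, argument)
# pairs; reduction rules: K a b -> a, and S a b c -> a c (b c).

def step(term):
    """Try one leftmost reduction; return (new_term, changed)."""
    if not isinstance(term, tuple):
        return term, False
    f, x = term
    if isinstance(f, tuple) and f[0] == 'K':              # (K a) b -> a
        return f[1], True
    if isinstance(f, tuple) and isinstance(f[0], tuple) \
            and f[0][0] == 'S':                           # ((S a) b) c -> (a c)(b c)
        a, b = f[0][1], f[1]
        return ((a, x), (b, x)), True
    f2, ch = step(f)                                      # else reduce inside
    if ch:
        return (f2, x), True
    x2, ch = step(x)
    return (f, x2), ch

def normalize(term, limit=100):
    """Reduce until no rule applies (bounded, since reduction may loop)."""
    for _ in range(limit):
        term, ch = step(term)
        if not ch:
            break
    return term

# I = S K K behaves as the identity: ((S K) K) v reduces to v.
I = (('S', 'K'), 'K')
print(normalize((I, 'v')))   # 'v'
```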
11
Rule JS, Riesenhuber M. Leveraging Prior Concept Learning Improves Generalization From Few Examples in Computational Models of Human Object Recognition. Front Comput Neurosci 2021; 14:586671. PMID: 33510629. PMCID: PMC7835122. DOI: 10.3389/fncom.2020.586671.
Abstract
Humans quickly and accurately learn new visual concepts from sparse data, sometimes just a single example. The impressive performance of artificial neural networks, which hierarchically pool afferents across scales and positions, suggests that the hierarchical organization of the human visual system is critical to its accuracy. These approaches, however, require orders of magnitude more examples than human learners. We used a benchmark deep learning model to show that the hierarchy can also be leveraged to vastly improve the speed of learning. We specifically show how previously learned but broadly tuned conceptual representations can be used to learn visual concepts from as few as two positive examples; reusing visual representations from earlier in the visual hierarchy, as in prior approaches, requires significantly more examples to perform comparably. These results suggest techniques for learning even more efficiently and provide a biologically plausible way to learn new visual concepts from few examples.
Affiliation(s)
- Joshua S. Rule
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, United States
- Maximilian Riesenhuber
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, United States
12
Rule JS, Tenenbaum JB, Piantadosi ST. The Child as Hacker. Trends Cogn Sci 2020; 24:900-915. PMID: 33012688. PMCID: PMC7673661. DOI: 10.1016/j.tics.2020.07.005.
Abstract
The scope of human learning and development poses a radical challenge for cognitive science. We propose that developmental theories can address this challenge by adopting perspectives from computer science. Many of our best models treat learning as analogous to computer programming because symbolic programs provide the most compelling account of sophisticated mental representations. We specifically propose that children's learning is analogous to a particular style of programming called hacking: making code better along many dimensions through an open-ended set of goals and activities. In contrast to existing theories, which depend primarily on local search and simple metrics, this view highlights the many features of good mental representations and the multiple complementary processes children use to create them.
Affiliation(s)
- Joshua S Rule
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA.
- Joshua B Tenenbaum
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Steven T Piantadosi
- Department of Psychology, University of California, Berkeley, Berkeley, CA, USA
13
Lázaro-Gredilla M, Lin D, Guntupalli JS, George D. Beyond imitation: Zero-shot task transfer on robots by learning concepts as cognitive programs. Sci Robot 2019; 4(26):eaav3150. DOI: 10.1126/scirobotics.aav3150.
Abstract
Humans can infer concepts from image pairs and apply them in the physical world in a completely different setting, enabling tasks like IKEA assembly from diagrams. If robots could represent and infer high-level concepts, it would markedly improve their ability to understand our intent and to transfer tasks between different environments. To that end, we introduce a computational framework that replicates aspects of human concept learning. Concepts are represented as programs on a computer architecture consisting of a visual perception system, working memory, and action controller. The instruction set of this cognitive computer has commands for parsing a visual scene, directing gaze and attention, imagining new objects, manipulating the contents of a visual working memory, and controlling arm movement. Inferring a concept corresponds to inducing a program that can transform the input to the output. Some concepts require the use of imagination and recursion. Previously learned concepts simplify the learning of subsequent, more elaborate concepts and create a hierarchy of abstractions. We demonstrate how a robot can use these abstractions to interpret novel concepts presented to it as schematic images and then apply those concepts in very different situations. By bringing cognitive science ideas on mental imagery, perceptual symbols, embodied cognition, and deictic mechanisms into the realm of machine learning, our work brings us closer to the goal of building robots that have interpretable representations and common sense.
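The "concepts as programs on a cognitive computer" framing can be miniaturized as follows. The opcodes, scene encoding, and function names here are invented for illustration; the paper's architecture (vision hierarchy, fixation control, imagination buffer, arm controller) is far richer.

```python
# A toy interpreter for "concepts as cognitive programs", illustrative
# only: working memory holds parsed objects; opcodes filter, imagine,
# or transform its contents.

def run_concept(program, scene):
    """Interpret a concept as a program over a tiny cognitive computer."""
    memory = list(scene)                       # 'parse' the visual scene
    for op, *args in program:
        if op == 'attend':                     # narrow attention by attribute
            key, val = args
            memory = [o for o in memory if o[key] == val]
        elif op == 'imagine':                  # add an imagined object
            memory = memory + [args[0]]
        elif op == 'move':                     # 'arm' displaces attended objects
            dx, dy = args
            memory = [{**o, 'x': o['x'] + dx, 'y': o['y'] + dy}
                      for o in memory]
    return memory

scene = [{'color': 'red', 'x': 0, 'y': 0},
         {'color': 'blue', 'x': 1, 'y': 0}]
# The concept "shift the red things two units right", as a program:
print(run_concept([('attend', 'color', 'red'), ('move', 2, 0)], scene))
# -> [{'color': 'red', 'x': 2, 'y': 0}]
```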
14
Romano S, Salles A, Amalric M, Dehaene S, Sigman M, Figueira S. Bayesian validation of grammar productions for the language of thought. PLoS One 2018; 13:e0200420. PMID: 29990351. PMCID: PMC6039029. DOI: 10.1371/journal.pone.0200420.
Abstract
Probabilistic proposals of Languages of Thought (LoTs) can explain learning across different domains as statistical inference over a compositionally structured hypothesis space. While frameworks may differ on how an LoT may be implemented computationally, they all share the property that they are built from a set of atomic symbols and rules by which these symbols can be combined. In this work we propose an extra validation step for the set of atomic productions defined by the experimenter. It starts by expanding the defined LoT grammar for the cognitive domain with a broader set of arbitrary productions and then uses Bayesian inference to prune the productions on the basis of the experimental data. The researcher can then check that the pruned grammar still matches the intuitive grammar chosen for the domain. We then test this method in the language of geometry, a specific LoT model for geometrical sequence learning. Finally, although the geometrical LoT is not a universal (i.e., Turing-complete) language, we show an empirical relation between a sequence's probability and its complexity that is consistent with the theoretical relationship for universal languages described by Levin's Coding Theorem.
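For reference, the theoretical relationship in question is Levin's Coding Theorem, which ties a string's algorithmic probability m(x) (the chance that a universal prefix machine fed random bits outputs x) to its prefix Kolmogorov complexity K(x):

```latex
% Levin's Coding Theorem: up to an additive constant, the negative
% log of a string's algorithmic probability equals its complexity,
% so high-probability strings are exactly the simple ones.
-\log_2 m(x) = K(x) + O(1)
\qquad\Longleftrightarrow\qquad
m(x) \asymp 2^{-K(x)}
```

The paper's empirical finding is the analogue of this inverse probability-complexity relationship for a restricted, non-universal geometrical grammar.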
Affiliation(s)
- Sergio Romano
- Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales. Departamento de Computación. Buenos Aires, Argentina
- CONICET-Universidad de Buenos Aires. Instituto de Investigación en Ciencias de la Computación (ICC). Buenos Aires, Argentina
- Alejo Salles
- CONICET-Universidad de Buenos Aires. Instituto de Cálculo (IC). Buenos Aires, Argentina
- Marie Amalric
- Cognitive Neuroimaging Unit, CEA DSV/I2BM, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin center, 91191 Gif/Yvette, France
- Stanislas Dehaene
- Cognitive Neuroimaging Unit, CEA DSV/I2BM, INSERM, Université Paris-Sud, Université Paris-Saclay, NeuroSpin center, 91191 Gif/Yvette, France
- Mariano Sigman
- CONICET-Universidad Torcuato Di Tella. Laboratorio de Neurociencia, C1428BIJ. Buenos Aires, Argentina
- Santiago Figueira
- Universidad de Buenos Aires. Facultad de Ciencias Exactas y Naturales. Departamento de Computación. Buenos Aires, Argentina
- CONICET-Universidad de Buenos Aires. Instituto de Investigación en Ciencias de la Computación (ICC). Buenos Aires, Argentina