1. Feldman MJ, Bliss-Moreau E, Lindquist KA. The neurobiology of interoception and affect. Trends Cogn Sci 2024;28:643-661. PMID: 38395706; PMCID: PMC11222051; DOI: 10.1016/j.tics.2024.01.009.
Abstract
Scholars have argued for centuries that affective states involve interoception, or representations of the state of the body. Yet, we lack a mechanistic understanding of how signals from the body are transduced, transmitted, compressed, and integrated by the brains of humans to produce affective states. We suggest that to understand how the body contributes to affect, we first need to understand information flow through the nervous system's interoceptive pathways. We outline such a model and discuss how unique anatomical and physiological aspects of interoceptive pathways may give rise to the qualities of affective experiences in general and valence and arousal in particular. We conclude by considering implications and future directions for research on interoception, affect, emotions, and human mental experiences.
Affiliation(s):
- M J Feldman: Department of Psychology, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- E Bliss-Moreau: Department of Psychology, University of California Davis, Davis, CA, USA; California National Primate Research Center, University of California Davis, Davis, CA, USA
- K A Lindquist: Department of Psychology, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA

2. Scharf C, Witkowski O. Rebuilding the Habitable Zone from the Bottom up with Computational Zones. Astrobiology 2024;24:613-627. PMID: 38853680; DOI: 10.1089/ast.2023.0035.
Abstract
Computation, if treated as a set of physical processes that act on information represented by states of matter, encompasses biological systems, digital systems, and other constructs and may be a fundamental measure of living systems. The opportunity for biological computation, represented in the propagation and selection-driven evolution of information-carrying organic molecular structures, has been partially characterized in terms of planetary habitable zones (HZs) based on primary conditions such as temperature and the presence of liquid water. A generalization of this concept to computational zones (CZs) is proposed, with constraints set by three principal characteristics: capacity (including computation rates), energy, and instantiation (or substrate, including spatial extent). CZs naturally combine traditional habitability factors, including those associated with biological function that incorporate the chemical milieu, constraints on nutrients and free energy, as well as element availability. Two example applications are presented by examining the fundamental thermodynamic work efficiency and Landauer limit of photon-driven biological computation on planetary surfaces and of generalized computation in stellar energy capture structures (a.k.a. Dyson structures). It is suggested that CZs that involve nested structures or substellar objects could manifest unique observational signatures as cool far-infrared emitters. While these latter scenarios are entirely hypothetical, they offer a useful, complementary introduction to the potential universality of CZs.
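The Landauer limit invoked above is simple enough to state in code: erasing one bit costs at least kT ln 2. A minimal illustrative sketch (the two temperatures are hypothetical choices, and real computation operates far above this floor):

```python
import math

BOLTZMANN_K = 1.380649e-23  # Boltzmann constant, J/K (exact SI value)

def landauer_limit_joules(temperature_k: float) -> float:
    """Minimum energy (J) to erase one bit of information at temperature T."""
    return BOLTZMANN_K * temperature_k * math.log(2)

# Erasure cost per bit at a temperate planetary surface (~300 K) versus a
# cold radiating structure (~30 K); colder substrates compute more cheaply.
e_warm = landauer_limit_joules(300.0)
e_cold = landauer_limit_joules(30.0)
print(f"300 K: {e_warm:.3e} J/bit")  # ~2.871e-21 J/bit
print(f" 30 K: {e_cold:.3e} J/bit")  # ~2.871e-22 J/bit
```

The 10x ratio is why the authors suggest cool far-infrared emitters as candidate signatures of large-scale computation.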
Affiliation(s):
- Caleb Scharf: NASA Ames Research Center, Moffett Field, California, USA
- Olaf Witkowski: Cross Labs, Cross Compass Ltd., Kyoto, Japan; College of Arts and Sciences, University of Tokyo, Tokyo, Japan

3. Kinney D, Lombrozo T. Tell me your (cognitive) budget, and I'll tell you what you value. Cognition 2024;247:105782. PMID: 38593569; DOI: 10.1016/j.cognition.2024.105782.
Abstract
Consider the following two (hypothetical) generic causal claims: "Living in a neighborhood with many families with children increases purchases of bicycles" and "living in an affluent neighborhood with many families with children increases purchases of bicycles." These claims not only differ in what they suggest about how bicycle ownership is distributed across different neighborhoods (i.e., "the data"), but also have the potential to communicate something about the speakers' values: namely, the prominence they accord to affluence in representing and making decisions about the social world. Here, we examine the relationship between the level of granularity with which a cause is described in a generic causal claim (e.g., neighborhood vs. affluent neighborhood) and the value of the information contained in the causal model that generates that claim. We argue that listeners who know any two of the following can make reliable inferences about the third: 1) the level of granularity at which a speaker makes a generic causal claim, 2) the speaker's values, and 3) the data available to the speaker. We present results of four experiments (N = 1323) in the domain of social categories that provide evidence in keeping with these predictions.
Affiliation(s):
- David Kinney: Yale University, 100 College Street, New Haven, CT 06510, USA

4. Ferdinand V, Pattenden E, Brightsmith DJ, Hobson EA. Inferring the decision rules that drive co-foraging affiliations in wild mixed-species parrot groups. Philos Trans R Soc Lond B Biol Sci 2023;378:20220101. PMID: 37066652; PMCID: PMC10107227; DOI: 10.1098/rstb.2022.0101.
Abstract
Animals gathered around a specific location or resource may represent mixed-species aggregations or mixed-species groups. Patterns of individuals choosing to join these groups can provide insight into the information processing underlying these decisions. However, we still have little understanding of how much information these decisions are based upon. We used data on 12 parrot species to test what kind of information each species may use about others to make decisions about which mixed-species aggregations to participate in. We used co-presence and joining patterns with categorization and model fitting methods to test how these species could be making grouping decisions. Species generally used a simpler lower-category method to choose which other individuals to associate with, rather than basing these decisions on species-level information. We also found that the best-fit models for decision-making differed across the 12 species and included different kinds of information. We found that not only does this approach provide a framework to test hypotheses about why individuals join or leave mixed-species aggregations, it also provides insight into what features each parrot could have been using to make their decisions. While not exhaustive, this approach provides a novel examination of the potential features that species could use to make grouping decisions and could provide a link to the perceptive and cognitive abilities of the animals making these minute-by-minute decisions. This article is part of the theme issue 'Mixed-species groups and aggregations: shaping ecological and behavioural patterns and processes'.
Affiliation(s):
- Vanessa Ferdinand: Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, VIC 3010, Australia
- Elle Pattenden: Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, VIC 3010, Australia
- Donald J. Brightsmith: School of Veterinary Medicine and Biomedical Sciences, Texas A&M University, College Station, TX 77843, USA
- Elizabeth A. Hobson: Department of Biological Sciences, University of Cincinnati, Cincinnati, OH 45221, USA

5. Zhou D, Lynn CW, Cui Z, Ciric R, Baum GL, Moore TM, Roalf DR, Detre JA, Gur RC, Gur RE, Satterthwaite TD, Bassett DS. Efficient coding in the economics of human brain connectomics. Netw Neurosci 2022;6:234-274. PMID: 36605887; PMCID: PMC9810280; DOI: 10.1162/netn_a_00223.
Abstract
In systems neuroscience, most models posit that brain regions communicate information under constraints of efficiency. Yet, evidence for efficient communication in structural brain networks characterized by hierarchical organization and highly connected hubs remains sparse. The principle of efficient coding proposes that the brain transmits maximal information in a metabolically economical or compressed form to improve future behavior. To determine how structural connectivity supports efficient coding, we develop a theory specifying minimum rates of message transmission between brain regions to achieve an expected fidelity, and we test five predictions from the theory based on random walk communication dynamics. In doing so, we introduce the metric of compression efficiency, which quantifies the trade-off between lossy compression and transmission fidelity in structural networks. In a large sample of youth (n = 1,042; age 8-23 years), we analyze structural networks derived from diffusion-weighted imaging and metabolic expenditure operationalized using cerebral blood flow. We show that structural networks strike compression efficiency trade-offs consistent with theoretical predictions. We find that compression efficiency prioritizes fidelity with development, heightens when metabolic resources and myelination guide communication, explains advantages of hierarchical organization, links higher input fidelity to disproportionate areal expansion, and shows that hubs integrate information by lossy compression. Lastly, compression efficiency is predictive of behavior-beyond the conventional network efficiency metric-for cognitive domains including executive function, memory, complex reasoning, and social cognition. Our findings elucidate how macroscale connectivity supports efficient coding and serve to foreground communication processes that utilize random walk dynamics constrained by network connectivity.
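The random-walk communication dynamics described above can be illustrated on a toy network. The entropy rate below is a generic information measure for such walks, not the study's compression-efficiency metric, and the adjacency matrix is invented for illustration:

```python
import numpy as np

# Toy undirected "structural network" of four regions (adjacency matrix).
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 1, 0],
], dtype=float)

degree = A.sum(axis=1)
P = A / degree[:, None]     # random-walk transition probabilities
pi = degree / degree.sum()  # stationary distribution of the walk

# Entropy rate (bits/step): how much information each step of random-walk
# communication carries, given this wiring.
safe_P = np.where(P > 0, P, 1.0)  # avoid log2(0); those terms contribute 0
row_h = -np.sum(np.where(P > 0, P * np.log2(safe_P), 0.0), axis=1)
entropy_rate = float(pi @ row_h)
print(f"entropy rate: {entropy_rate:.3f} bits/step")
```

Hubs with many connections raise the per-step entropy of the walk, which is one way to see why highly connected regions face a compression/fidelity trade-off.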
Affiliation(s):
- Dale Zhou: Department of Neuroscience, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Christopher W. Lynn: Initiative for the Theoretical Sciences, Graduate Center, City University of New York, New York, NY, USA; Joseph Henry Laboratories of Physics, Princeton University, Princeton, NJ, USA
- Zaixu Cui: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Rastko Ciric: Department of Bioengineering, Schools of Engineering and Medicine, Stanford University, Stanford, CA, USA
- Graham L. Baum: Department of Psychology and Center for Brain Science, Harvard University, Cambridge, MA, USA
- Tyler M. Moore: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Penn-Children's Hospital of Philadelphia Lifespan Brain Institute, Philadelphia, PA, USA
- David R. Roalf: Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- John A. Detre: Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Ruben C. Gur: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Penn-Children's Hospital of Philadelphia Lifespan Brain Institute, Philadelphia, PA, USA
- Raquel E. Gur: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Penn-Children's Hospital of Philadelphia Lifespan Brain Institute, Philadelphia, PA, USA
- Theodore D. Satterthwaite: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Penn-Children's Hospital of Philadelphia Lifespan Brain Institute, Philadelphia, PA, USA
- Dani S. Bassett: Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Physics & Astronomy, College of Arts and Sciences, University of Pennsylvania, Philadelphia, PA, USA; Department of Bioengineering, School of Engineering and Applied Sciences, University of Pennsylvania, Philadelphia, PA, USA; Department of Electrical & Systems Engineering, School of Engineering and Applied Sciences, University of Pennsylvania, Philadelphia, PA, USA; Santa Fe Institute, Santa Fe, NM, USA (corresponding author)

6. Moffett AS, Eckford AW. Minimal informational requirements for fitness. Phys Rev E 2022;105:014403. PMID: 35193272; DOI: 10.1103/physreve.105.014403.
Abstract
The existing concept of the "fitness value of information" provides a theoretical upper bound on the fitness advantage of using information concerning a fluctuating environment. Using concepts from rate-distortion theory, we develop a theoretical framework to answer a different pair of questions: What is the minimal amount of information needed for a population to achieve a certain growth rate? What is the minimal amount of information gain needed for one subpopulation to achieve a certain average selection coefficient over another? We introduce a correspondence between fitness and distortion and solve for the rate-distortion functions of several systems using analytical and numerical methods. Because accurate information processing is energetically costly, our approach provides a theoretical basis for understanding evolutionary "design principles" underlying information-cost trade-offs.
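Rate-distortion functions of the kind the authors solve for can, in simple discrete cases, be computed with the standard Blahut-Arimoto algorithm. The sketch below uses a generic Hamming distortion on a binary source, not the paper's fitness-distortion correspondence:

```python
import numpy as np

def blahut_arimoto(p_x, d, beta, n_iter=200):
    """One point on the rate-distortion curve via Blahut-Arimoto.

    Returns (rate in bits, expected distortion) for trade-off parameter
    beta; large beta favors low distortion at the cost of higher rate.
    """
    q_y = np.full(d.shape[1], 1.0 / d.shape[1])  # output marginal
    for _ in range(n_iter):
        # q(y|x) proportional to q(y) * exp(-beta * d(x, y))
        log_w = np.log(q_y)[None, :] - beta * d
        w = np.exp(log_w - log_w.max(axis=1, keepdims=True))
        q_y_given_x = w / w.sum(axis=1, keepdims=True)
        q_y = p_x @ q_y_given_x
    joint = p_x[:, None] * q_y_given_x
    rate = float(np.sum(joint * (np.log2(q_y_given_x) - np.log2(q_y)[None, :])))
    return rate, float(np.sum(joint * d))

# Equiprobable binary "environment" with Hamming distortion (illustrative).
p_x = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0], [1.0, 0.0]])
rate, dist = blahut_arimoto(p_x, d, beta=10.0)
print(f"rate ~ {rate:.3f} bits, distortion ~ {dist:.5f}")
```

At beta = 10 nearly the full bit is transmitted with almost no distortion; at beta = 0 the rate collapses to zero and distortion rises to chance (0.5), tracing the information-cost trade-off the abstract describes.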
Affiliation(s):
- Alexander S Moffett: Department of Electrical Engineering and Computer Science, York University, Toronto, Ontario, Canada
- Andrew W Eckford: Department of Electrical Engineering and Computer Science, York University, Toronto, Ontario, Canada

7. Neural optimization: Understanding trade-offs with Pareto theory. Curr Opin Neurobiol 2021;71:84-91. PMID: 34688051; DOI: 10.1016/j.conb.2021.08.008.
Abstract
Nervous systems, like any organismal structure, have been shaped by evolutionary processes to increase fitness. The resulting neural 'bauplan' has to account for multiple objectives simultaneously, including computational function, as well as additional factors such as robustness to environmental changes and energetic limitations. Oftentimes these objectives compete, and quantification of the relative impact of individual optimization targets is non-trivial. Pareto optimality offers a theoretical framework to decipher objectives and trade-offs between them. We, therefore, highlight Pareto theory as a useful tool for the analysis of neurobiological systems from biophysically detailed cells to large-scale network structures and behavior. The Pareto approach can help to assess optimality, identify relevant objectives and their respective impact, and formulate testable hypotheses.
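Pareto optimality reduces to a non-domination check: a design is on the front if no other design is at least as good on every objective and strictly better on one. A minimal sketch with two objectives to be minimized, using invented scores for hypothetical circuit variants:

```python
def pareto_front(points):
    """Return the points (cost1, cost2) not strictly dominated by any
    other point, with both objectives minimized."""
    front = []
    for i, p in enumerate(points):
        dominated = any(
            q[0] <= p[0] and q[1] <= p[1] and (q[0] < p[0] or q[1] < p[1])
            for j, q in enumerate(points)
            if j != i
        )
        if not dominated:
            front.append(p)
    return front

# Hypothetical circuit variants scored on (energy cost, response error);
# the numbers are made up for illustration.
variants = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
print(pareto_front(variants))  # [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
```

The two dropped variants are dominated: another design beats each on both objectives, so no weighting of the objectives could ever make them optimal.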
8. Rust NC, Palmer SE. Annu Rev Vis Sci 2021;7.
Abstract
In addition to the role that our visual system plays in determining what we are seeing right now, visual computations contribute in important ways to predicting what we will see next. While the role of memory in creating future predictions is often overlooked, efficient predictive computation requires the use of information about the past to estimate future events. In this article, we introduce a framework for understanding the relationship between memory and visual prediction and review the two classes of mechanisms that the visual system relies on to create future predictions. We also discuss the principles that define the mapping from predictive computations to predictive mechanisms and how downstream brain areas interpret the predictive signals computed by the visual system. Expected final online publication date for the Annual Review of Vision Science, Volume 7 is September 2021. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Affiliation(s):
- Nicole C Rust: Department of Psychology, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Stephanie E Palmer: Department of Organismal Biology and Anatomy, University of Chicago, Illinois 60637, USA

9. Seoane LF. Fate of Duplicated Neural Structures. Entropy (Basel) 2020;22:E928. PMID: 33286697; PMCID: PMC7597184; DOI: 10.3390/e22090928.
Abstract
Statistical physics determines the abundance of different arrangements of matter depending on cost-benefit balances. Its formalism and phenomenology percolate throughout biological processes and set limits to effective computation. Under specific conditions, self-replicating and computationally complex patterns become favored, yielding life, cognition, and Darwinian evolution. Neurons and neural circuits sit at a crossroads between statistical physics, computation, and (through their role in cognition) natural selection. Can we establish a statistical physics of neural circuits? Such theory would tell what kinds of brains to expect under set energetic, evolutionary, and computational conditions. With this big picture in mind, we focus on the fate of duplicated neural circuits. We look at examples from central nervous systems, with stress on computational thresholds that might prompt this redundancy. We also study a naive cost-benefit balance for duplicated circuits implementing complex phenotypes. From this, we derive phase diagrams and (phase-like) transitions between single and duplicated circuits, which constrain evolutionary paths to complex cognition. Back to the big picture, similar phase diagrams and transitions might constrain I/O and internal connectivity patterns of neural circuits at large. The formalism of statistical physics seems to be a natural framework for this worthy line of research.
Affiliation(s):
- Luís F. Seoane: Departamento de Biología de Sistemas, Centro Nacional de Biotecnología (CNB), CSIC, C/Darwin 3, 28049 Madrid, Spain; Instituto de Física Interdisciplinar y Sistemas Complejos (IFISC), CSIC-UIB, 07122 Palma de Mallorca, Spain

10. Kloosterman NA, Kosciessa JQ, Lindenberger U, Fahrenfort JJ, Garrett DD. Boosts in brain signal variability track liberal shifts in decision bias. eLife 2020;9:e54201. PMID: 32744502; PMCID: PMC7398662; DOI: 10.7554/elife.54201.
Abstract
Adopting particular decision biases allows organisms to tailor their choices to environmental demands. For example, a liberal response strategy pays off when target detection is crucial, whereas a conservative strategy is optimal for avoiding false alarms. Using conventional time-frequency analysis of human electroencephalographic (EEG) activity, we previously showed that bias setting entails adjustment of evidence accumulation in sensory regions (Kloosterman et al., 2019), but the presumed prefrontal signature of a conservative-to-liberal bias shift has remained elusive. Here, we show that a liberal bias shift is reflected in a more unconstrained neural regime (boosted entropy) in frontal regions that is suited to the detection of unpredictable events. Overall EEG variation, spectral power and event-related potentials could not explain this relationship, highlighting that moment-to-moment neural variability uniquely tracks bias shifts. Neural variability modulation through prefrontal cortex appears instrumental for permitting an organism to adapt its biases to environmental demands.
Affiliation(s):
- Niels A Kloosterman: Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany; Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
- Julian Q Kosciessa: Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany; Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
- Ulman Lindenberger: Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany; Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany
- Johannes Jacobus Fahrenfort: Department of Experimental and Applied Psychology, Vrije Universiteit Amsterdam, Amsterdam, Netherlands; Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Douglas D Garrett: Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany; Center for Lifespan Psychology, Max Planck Institute for Human Development, Berlin, Germany

11. Garrett DD, Epp SM, Kleemeyer M, Lindenberger U, Polk TA. Higher performers upregulate brain signal variability in response to more feature-rich visual input. Neuroimage 2020;217:116836. PMID: 32283277; DOI: 10.1016/j.neuroimage.2020.116836.
Abstract
The extent to which brain responses differ across varying cognitive demands is referred to as "neural differentiation," and greater neural differentiation has been associated with better cognitive performance in older adults. An emerging approach has examined within-person neural differentiation using moment-to-moment brain signal variability. A number of studies have found that brain signal variability differs by cognitive state; however, the factors that cause signal variability to rise or fall on a given task remain understudied. We hypothesized that top performers would modulate signal variability according to the complexity of sensory input, upregulating variability when processing more feature-rich stimuli. In the current study, 46 older adults passively viewed face and house stimuli during fMRI. Low-level analyses showed that house images were more feature-rich than faces, and subsequent computational modelling of ventral visual stream responses (HMAX) revealed that houses were more feature-rich especially in V1/V2-like model layers. Notably, we then found that participants exhibiting greater face-to-house upregulation of brain signal variability in V1/V2 (higher for house relative to face stimuli) also exhibited more accurate, faster, and more consistent behavioral performance on a battery of offline visuo-cognitive tasks. Further, control models revealed that face-house modulation of mean brain signal was relatively insensitive to offline cognition, providing further evidence for the importance of brain signal variability for understanding human behavior. We conclude that the ability to align brain signal variability to the richness of perceptual input may mark heightened trait-level behavioral performance in older adults.
Affiliation(s):
- Douglas D Garrett: Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany, and London, UK; Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
- Samira M Epp: Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany, and London, UK; Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
- Maike Kleemeyer: Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
- Ulman Lindenberger: Max Planck UCL Centre for Computational Psychiatry and Ageing Research, Berlin, Germany, and London, UK; Center for Lifespan Psychology, Max Planck Institute for Human Development, Lentzeallee 94, 14195 Berlin, Germany
- Thad A Polk: Department of Psychology, University of Michigan, Ann Arbor, MI 48109, USA
12. Bayesian Behavioral Systems Theory. Behav Processes 2019;168:103904. DOI: 10.1016/j.beproc.2019.103904.

13. Hobson EA, Ferdinand V, Kolchinsky A, Garland J. Rethinking animal social complexity measures with the help of complex systems concepts. Anim Behav 2019. DOI: 10.1016/j.anbehav.2019.05.016.

14. Douven I. Putting prototypes in place. Cognition 2019;193:104007. PMID: 31260845; DOI: 10.1016/j.cognition.2019.104007.
Abstract
It has recently been proposed that natural concepts are those represented by the cells of an optimally partitioned similarity space. In this proposal, optimal partitioning has been defined in terms of rational design criteria, criteria that a good engineer would adopt if asked to develop a conceptual system. It has been argued, for instance, that convexity should rank high among such criteria. Other criteria concern the possibility of placing prototypes such that they are both similar to the items they represent-each prototype ought to be representative-and dissimilar to each other: the prototypes ought to be contrastive. Parts of this design proposal are already supported by evidence. This paper reports results of a new study meant to address parts still lacking in empirical support. In particular, it presents data concerning color similarity space which indicate that color prototypes are indeed located such that they trade off optimally between being representative and being contrastive.
15.
Affiliation(s):
- Noga Zaslavsky: Edmond and Lily Safra Center for Brain Sciences, The Hebrew University, Jerusalem, Israel; Department of Linguistics, University of California, Berkeley, CA, USA
- Charles Kemp: School of Psychological Sciences, The University of Melbourne, Parkville, Australia
- Naftali Tishby: Edmond and Lily Safra Center for Brain Sciences, The Hebrew University, Jerusalem, Israel; Benin School of Computer Science and Engineering, The Hebrew University, Jerusalem, Israel
- Terry Regier: Department of Linguistics, University of California, Berkeley, CA, USA; Cognitive Science Program, University of California, Berkeley, CA, USA
16. Marzen S. Infinitely large, randomly wired sensors cannot predict their input unless they are close to deterministic. PLoS One 2018;13:e0202333. PMID: 30157215; PMCID: PMC6114800; DOI: 10.1371/journal.pone.0202333.
Abstract
Building predictive sensors is of paramount importance in science. Can we make a randomly wired sensor “good enough” at predicting its input simply by making it larger? We show that infinitely large, randomly wired sensors are nonspecific for their input, and therefore nonpredictive of future input, unless they are close to deterministic. Nearly deterministic, randomly wired sensors can capture ∼ 10% of the predictive information of their inputs for “typical” environments.
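The "close to deterministic" condition echoes an elementary channel fact: stochasticity in a sensor's response destroys information about its input. A sketch of that basic intuition using a binary symmetric channel (not the paper's infinite random-sensor calculation):

```python
import math

def binary_entropy(p: float) -> float:
    """Entropy (bits) of a coin with bias p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def sensor_information(flip_prob: float) -> float:
    """Bits a binary sensor retains about an equiprobable binary input
    when its output flips with probability flip_prob (binary symmetric
    channel); flip_prob = 0 is a deterministic sensor."""
    return 1.0 - binary_entropy(flip_prob)

print(sensor_information(0.0))            # 1.0: deterministic keeps the full bit
print(round(sensor_information(0.2), 3))  # 0.278: mild noise destroys most of it
```

Even a 20% flip rate wipes out nearly three quarters of the available bit, which is why sensors must stay near-deterministic to remain specific for, let alone predictive of, their input.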
Affiliation(s):
- Sarah Marzen: Department of Physics, Physics of Living Systems Group, Massachusetts Institute of Technology, Cambridge, MA, USA

17. Sims CR. Efficient coding explains the universal law of generalization in human perception. Science 2018;360:652-656. PMID: 29748284; DOI: 10.1126/science.aaq1118.
Abstract
Perceptual generalization and discrimination are fundamental cognitive abilities. For example, if a bird eats a poisonous butterfly, it will learn to avoid preying on that species again by generalizing its past experience to new perceptual stimuli. In cognitive science, the "universal law of generalization" seeks to explain this ability and states that generalization between stimuli will follow an exponential function of their distance in "psychological space." Here, I challenge existing theoretical explanations for the universal law and offer an alternative account based on the principle of efficient coding. I show that the universal law emerges inevitably from any information processing system (whether biological or artificial) that minimizes the cost of perceptual error subject to constraints on the ability to process or transmit information.
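The exponential form of the universal law is compact enough to write down directly; the decay constant below is an arbitrary illustrative parameter:

```python
import math

def generalization(distance: float, decay: float = 1.0) -> float:
    """Shepard's universal law: the probability of generalizing a learned
    response falls off exponentially with distance in psychological space."""
    return math.exp(-decay * distance)

# Nearby stimuli are treated almost alike; distant ones are discriminated.
gradient = [round(generalization(d), 3) for d in (0.0, 0.5, 1.0, 2.0)]
print(gradient)  # [1.0, 0.607, 0.368, 0.135]
```

Sims's contribution is to derive this exponential shape from efficient coding, rather than assume it, so the function above is the law's empirical form, not his derivation.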
Affiliation(s):
- Chris R Sims: Department of Cognitive Science, Rensselaer Polytechnic Institute, Troy, NY 12180, USA

18. Singer Y, Teramoto Y, Willmore BD, Schnupp JW, King AJ, Harper NS. Sensory cortex is optimized for prediction of future input. eLife 2018;7:e31557. PMID: 29911971; PMCID: PMC6108826; DOI: 10.7554/elife.31557.
Abstract
Neurons in sensory cortex are tuned to diverse features in natural scenes. But what determines which features neurons become selective to? Here we explore the idea that neuronal selectivity is optimized to represent features in the recent sensory past that best predict immediate future inputs. We tested this hypothesis using simple feedforward neural networks, which were trained to predict the next few moments of video or audio in clips of natural scenes. The networks developed receptive fields that closely matched those of real cortical neurons in different mammalian species, including the oriented spatial tuning of primary visual cortex, the frequency selectivity of primary auditory cortex and, most notably, their temporal tuning properties. Furthermore, the better a network predicted future inputs the more closely its receptive fields resembled those in the brain. This suggests that sensory processing is optimized to extract those features with the most capacity to predict future input. A large part of our brain is devoted to processing the sensory inputs that we receive from the world. This allows us to tell, for example, whether we are looking at a cat or a dog, and if we are hearing a bark or a meow. Neurons in the sensory cortex respond to these stimuli by generating spikes of activity. Within each sensory area, neurons respond best to stimuli with precise properties: those in the primary visual cortex prefer edge-like structures that move in a certain direction at a given speed, while neurons in the primary auditory cortex favour sounds that change in loudness over a particular range of frequencies. Singer et al. sought to understand why neurons respond to the particular features of stimuli that they do. Why do visual neurons react more to moving edges than to, say, rotating hexagons? And why do auditory neurons respond more to certain changing sounds than to, say, constant tones? 
One leading idea is that the brain tries to use as few spikes as possible to represent real-world stimuli. Known as sparse coding, this principle can account for much of the behaviour of sensory neurons. Another possibility is that sensory areas respond the way they do because doing so enables them to best predict future sensory input. To test this idea, Singer et al. used a computer to simulate a network of neurons and trained this network to predict the next few frames of video clips from the previous few frames. Once the network had learned this task, Singer et al. examined the neurons’ preferred stimuli. Like neurons in primary visual cortex, the simulated neurons typically responded most to edges that moved over time. The same network was then trained in a similar way, but this time on sound. Like neurons in primary auditory cortex, the simulated neurons preferred sounds that changed in loudness at particular frequencies. Notably, for both vision and audition, the simulated neurons favoured recent inputs over those further in the past. In this way and others, they were more similar to real neurons than simulated neurons based on sparse coding. Artificial networks trained to predict sensory input and the brain therefore favour the same types of stimuli: those that carry the most information about future inputs. This suggests that the brain represents the sensory world so as to best predict the future. Knowing how the brain handles information from our senses may help us understand disorders associated with sensory processing, such as dyslexia and tinnitus. It may also inspire approaches for training machines to process sensory inputs, improving artificial intelligence.
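The core temporal-prediction setup described above can be sketched in a few lines. This is a deliberately minimal stand-in, assuming a single linear unit, a synthetic 1-D signal, and plain gradient descent, rather than the networks and natural-scene clips used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a natural sensory stream: two drifting sinusoids
# plus noise (the study used clips of natural video and audio).
t = np.arange(4000)
signal = np.sin(0.05 * t) + 0.5 * np.sin(0.013 * t) + 0.1 * rng.standard_normal(t.size)

# Temporal-prediction objective: map the recent sensory past (a window of
# k samples) onto the immediate future input (the next sample).
k = 20
X = np.stack([signal[i:i + k] for i in range(signal.size - k)])
y = signal[k:]

# A single linear unit trained by gradient descent on mean squared error.
w = 0.01 * rng.standard_normal(k)
lr = 1e-3

def mse(w):
    return np.mean((X @ w - y) ** 2)

loss_before = mse(w)
for _ in range(500):
    grad = 2.0 * X.T @ (X @ w - y) / y.size
    w -= lr * grad
loss_after = mse(w)

print(f"prediction MSE: {loss_before:.3f} -> {loss_after:.3f}")
```

In the full model the unit's learned weight vector plays the role of a receptive field; the study's finding is that, after training on natural scenes, such weights come to resemble the spatial and temporal tuning of real V1 and A1 neurons.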
Collapse
Affiliation(s)
- Yosef Singer
  - Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Yayoi Teramoto
  - Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Ben DB Willmore
  - Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Jan WH Schnupp
  - Department of Biomedical Sciences, City University of Hong Kong, Kowloon Tong, Hong Kong
- Andrew J King
  - Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
- Nicol S Harper
  - Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom
Collapse
|
19
|
Seoane LF, Solé RV. Information theory, predictability and the emergence of complex life. ROYAL SOCIETY OPEN SCIENCE 2018; 5:172221. [PMID: 29515907 PMCID: PMC5830796 DOI: 10.1098/rsos.172221] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/20/2017] [Accepted: 01/24/2018] [Indexed: 05/07/2023]
Abstract
Despite the obvious advantage of simple life forms capable of fast replication, different levels of cognitive complexity have been achieved by living systems in terms of their potential to cope with environmental uncertainty. Against the inevitable cost associated with detecting environmental cues and responding to them in adaptive ways, we conjecture that the potential for predicting the environment can overcome the expenses associated with maintaining costly, complex structures. We present a minimal formal model grounded in information theory and selection, in which successive generations of agents are mapped into transmitters and receivers of a coded message. Our agents are guessing machines and their capacity to deal with environments of different complexity defines the conditions to sustain more complex agents.
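A toy version of this argument can be simulated directly. The sketch below is an illustration under assumptions of my own choosing (a binary alternating environment, frequency-counting "guessing machines", and a linear complexity cost), not the authors' formal transmitter/receiver model:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)

def environment(n, flip):
    """Binary environment: an alternating pattern, each bit corrupted
    independently with probability `flip` (flip=0.5 gives pure noise)."""
    base = np.arange(n) % 2
    return (base ^ (rng.random(n) < flip)).astype(int)

def predictive_accuracy(seq, memory):
    """A guessing machine with `memory` bits of context: it tallies which
    symbol most often follows each context in the first half of the
    sequence, then guesses on the second half."""
    counts = defaultdict(lambda: np.zeros(2))
    half = len(seq) // 2
    train, test = seq[:half], seq[half:]
    for i in range(memory, len(train)):
        counts[tuple(train[i - memory:i])][train[i]] += 1
    hits = 0
    for i in range(memory, len(test)):
        ctx = tuple(test[i - memory:i])
        guess = int(np.argmax(counts[ctx])) if ctx in counts else 0
        hits += int(guess == test[i])
    return hits / (len(test) - memory)

COST = 0.1  # assumed metabolic cost per bit of memory maintained

results = {}
for flip, label in [(0.1, "predictable"), (0.5, "unpredictable")]:
    seq = environment(20000, flip)
    for memory in (0, 1):
        results[label, memory] = predictive_accuracy(seq, memory) - COST * memory
        print(f"{label:13s} memory={memory} payoff={results[label, memory]:.2f}")
```

The more complex (memory-1) agent pays for itself only when the environment is predictable enough, mirroring the paper's conjecture that the gains from prediction must overcome the expense of maintaining costly structures.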
Collapse
Affiliation(s)
- Luís F. Seoane
  - Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
  - ICREA-Complex Systems Lab, Universitat Pompeu Fabra (GRIB), Dr Aiguader 80, 08003 Barcelona, Spain
  - Institut de Biologia Evolutiva, CSIC-UPF, Pg Maritim de la Barceloneta 37, 08003 Barcelona, Spain
- Ricard V. Solé
  - ICREA-Complex Systems Lab, Universitat Pompeu Fabra (GRIB), Dr Aiguader 80, 08003 Barcelona, Spain
  - Institut de Biologia Evolutiva, CSIC-UPF, Pg Maritim de la Barceloneta 37, 08003 Barcelona, Spain
  - Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, NM 87501, USA
Collapse
|
20
|
Elastic Multi-scale Mechanisms: Computation and Biological Evolution. J Mol Evol 2017; 86:47-57. [PMID: 29248946 DOI: 10.1007/s00239-017-9823-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2017] [Accepted: 12/09/2017] [Indexed: 10/18/2022]
Abstract
Explanations based on low-level interacting elements are valuable and powerful, since they help identify the key mechanisms of biological functions. However, many dynamic systems based on low-level interacting elements with unambiguous, finite, and complete information about initial states generate future states that cannot be predicted, implying an increase in complexity and open-ended evolution. Such systems resemble Turing machines that overlap with dynamical systems that cannot halt. We argue that organisms find halting conditions by distorting these mechanisms, creating conditions for the constant creativity that drives evolution. We introduce a modulus of elasticity to measure the changes in these mechanisms in response to changes in the computed environment. We test this concept in a population of predator and prey cells with chemotactic mechanisms and demonstrate how the selection of a given mechanism depends on the entire population. Finally, we explore this concept in different frameworks and postulate that the identification of predictive mechanisms is only successful with a small elasticity modulus.
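The elasticity modulus can be illustrated with a finite-difference sketch. The definition below (relative change in a mechanism's response per relative change in its environmental input) is one plausible reading of the abstract, not necessarily the paper's formal definition:

```python
def elasticity_modulus(mechanism, env, d_env=1e-6):
    """Finite-difference elasticity: relative change in the mechanism's
    response divided by the relative change in the environment."""
    m0 = mechanism(env)
    m1 = mechanism(env + d_env)
    return ((m1 - m0) / m0) / (d_env / env)

def rigid(e):
    # A rigid mechanism: its response barely tracks the environment.
    return 1.0 + 0.01 * e

def elastic(e):
    # A highly elastic mechanism: its response is reshaped by every
    # change in the computed environment (elasticity of e**p is p).
    return e ** 3

e_rigid = elasticity_modulus(rigid, 2.0)
e_elastic = elasticity_modulus(elastic, 2.0)
print(f"rigid: {e_rigid:.3f}  elastic: {e_elastic:.3f}")
```

Under this reading, the abstract's closing claim is that selection can identify a mechanism as predictive only when its modulus stays small, i.e. when the mechanism does not deform drastically as the environment it computes changes.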
Collapse
|