1. Giannini G, Nierhaus T, Blankenburg F. Investigation of sensory attenuation in the somatosensory domain using EEG in a novel virtual reality paradigm. Sci Rep 2025; 15:2819. [PMID: 39843944] [PMCID: PMC11754869] [DOI: 10.1038/s41598-025-87244-9]
Abstract
We are not only passively immersed in a sensory world; we are active agents who directly produce stimulation. Understanding what is unique about the sensory consequences of our own actions can give valuable insight into the action-perception cycle. Sensory attenuation is the phenomenon whereby self-produced stimuli are perceived as less intense than externally generated ones. Studying this phenomenon, however, requires considering a plethora of factors that could otherwise confound its interpretation, such as differences in stimulus properties, attentional resources, or temporal predictability. We therefore developed a novel virtual reality (VR) setup that allows control over several of these confounding factors. Furthermore, we modulated the expectation of receiving a somatosensory stimulus across self-production and passive perception through a simple probabilistic learning task, allowing us to test to what extent the electrophysiological correlates of sensory attenuation are affected by stimulus expectation. The aim of the present study was therefore twofold: first, to validate a novel VR paradigm during electroencephalography (EEG) recording for investigating sensory attenuation in a highly controlled setup; second, to test whether electrophysiological differences between self- and externally generated sensations could be better explained by stimulus predictability factors, thereby probing the validity of sensory attenuation. Results from 26 participants indicate that early (P100), mid-latency (P200), and later negative contralateral potentials were significantly attenuated for self-generated sensations, independent of stimulus expectation. Moreover, a component around 200 ms post-stimulus at frontal sites was enhanced for self-produced stimuli. The P300 was influenced by stimulus expectation, regardless of whether the stimulation was actively produced or passively attended. Together, our results demonstrate that VR opens up new possibilities to study sensory attenuation in more ecologically valid yet well-controlled paradigms, and that sensory attenuation is not significantly modulated by stimulus predictability, suggesting that it relies on motor-specific predictions of the sensory outcomes of one's own actions. This not only supports the phenomenon of sensory attenuation but is also consistent with previous research and with the idea that action plays a crucial role in perception.
Affiliation(s)
- Gianluigi Giannini
- Neurocomputation and Neuroimaging Unit (NNU), Freie Universität Berlin, Berlin, Germany.
- Berlin School of Mind and Brain, Humboldt Universität zu Berlin, Berlin, Germany.
- Till Nierhaus
- Neurocomputation and Neuroimaging Unit (NNU), Freie Universität Berlin, Berlin, Germany
- Felix Blankenburg
- Neurocomputation and Neuroimaging Unit (NNU), Freie Universität Berlin, Berlin, Germany
- Berlin School of Mind and Brain, Humboldt Universität zu Berlin, Berlin, Germany
2. Gundlach C, Müller MM. Increased visual alpha-band activity during self-paced finger tapping does not affect early visual stimulus processing. Psychophysiology 2024; 61:e14707. [PMID: 39380314] [PMCID: PMC11579237] [DOI: 10.1111/psyp.14707]
Abstract
Alpha-band activity is thought to be involved in orchestrating neural processing within and across brain regions relevant to various functions such as perception, cognition, and motor activity. Across different studies, attenuated alpha-band activity has been linked to increased neural excitability. Yet there have been conflicting results concerning the consequences of alpha-band modulations for early sensory processing. Here, we examined whether movement-related alterations in visual alpha-band activity affect the early sensory processing of visual stimuli. For this purpose, in an EEG experiment, participants performed a voluntary finger-tapping task while passively viewing flickering dots. We found extensive and expected movement-related amplitude modulations of motor alpha- and beta-band activity, with event-related desynchronization (ERD) before and during, and event-related synchronization (ERS) after, single voluntary finger taps. Crucially, while a visual alpha-band ERS accompanied the motor alpha-ERD before and during each finger tap, flicker-evoked steady-state visually evoked potentials (SSVEPs), a marker of early visual sensory gain, were not modulated in amplitude. As early sensory stimulus processing was unaffected by amplitude-modulated visual alpha-band activity, this argues against the idea that alpha-band activity represents a mechanism by which early sensory gain modulation is implemented. The distinct neural dynamics of visual alpha-band activity and early sensory processing may point to distinct and multiplexed neural selection processes in visual processing.
Affiliation(s)
- C. Gundlach
- Wilhelm Wundt Institute for Psychology, Experimental Psychology and Methods, Universität Leipzig, Leipzig, Germany
- M. M. Müller
- Wilhelm Wundt Institute for Psychology, Experimental Psychology and Methods, Universität Leipzig, Leipzig, Germany
3. Mercado E, Zhuo J. Do rodents smell with sound? Neurosci Biobehav Rev 2024; 167:105908. [PMID: 39343078] [DOI: 10.1016/j.neubiorev.2024.105908]
Abstract
Chemosensation via olfaction is a critical process underlying social interactions in many different species. Past studies of olfaction in mammals often have focused on its mechanisms in isolation from other systems, limiting the generalizability of findings from olfactory research to perceptual processes in other modalities. Studies of chemical communication, in particular, have progressed independently of research on vocal behavior and acoustic communication. Those bioacousticians who have considered how sound production and reception might interact with olfaction often portray odors as cues to the kinds of vocalizations that might be functionally useful. In the olfaction literature, vocalizations are rarely mentioned. Here, we propose that ultrasonic vocalizations may affect what rodents smell by altering the deposition of inhaled particles and that rodents coordinate active sniffing with sound production specifically to enhance reception of pheromones. In this scenario, rodent vocalizations may contribute to a unique mode of active olfactory sensing, in addition to whatever roles they serve as social signals. Consideration of this hypothesis highlights the perceptual advantages that parallel coordination of multiple sensorimotor processes may provide to individuals exploring novel situations and environments, especially those involving dynamic social interactions.
Affiliation(s)
- Eduardo Mercado
- University at Buffalo, The State University of New York, USA.
4. Takamuku S, Arslanova I, Gomi H, Haggard P. Multidigit tactile perception II: perceptual weighting during integration follows a leading-finger priority. J Neurophysiol 2024; 132:1805-1819. [PMID: 39441210] [DOI: 10.1152/jn.00105.2024]
Abstract
When we run our hand across a surface, each finger typically repeats the sensory stimulation that the leading finger has already experienced. Because of this redundancy, the leading finger may attract more attention and contribute more strongly when tactile signals are integrated across fingers to form an overall percept. To test this hypothesis, we re-analyzed data collected in a previous study (Arslanova I, Takamuku S, Gomi H, Haggard P, J Neurophysiol 128: 418-433, 2022), where two probes were moved in different directions on two different fingerpads and participants reported the probes' average direction. Here, we evaluate the relative contribution of each finger to the percept and examine whether multidigit integration gives priority to the leading finger. Although the hand actually remained static in these experiments, a "functional leading finger" could be defined with reference to the average direction of the stimuli and the direction of hand-object relative motion that this implied. When participants averaged the motion direction across fingers of the same hand, the leading finger received a higher weighting than the nonleading finger, even though this biased the estimate of average direction. Importantly, this bias disappeared when averaging motion direction across the two hands. Both the reported average direction and its systematic relation to the difference between the individual stimulus directions were explained by a model of motion integration in which the sensory weighting of stimuli depends on the directions of the applied stimuli. Our finding supports the hypothesis that the leading finger, which often receives novel information in natural hand-object interactions, is prioritized in forming our tactile perception. NEW & NOTEWORTHY: The capacity of the tactile system to process multiple simultaneous stimuli is restricted. One solution could be to prioritize input from more informative sources. Here, we show that sensory weighting accorded to each finger during multidigit touch is biased in a direction-dependent manner when different motions are delivered to the fingers of the same hand. We argue that tactile inputs are weighted based on purely geometric information to prioritize "novel" information from the leading finger.
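To make the direction-dependent weighting concrete, here is a toy sketch in Python of a leading-finger-weighted circular average. The finger geometry, the rule for picking the leading finger (the digit foremost along the implied hand motion), and the weighting factor are illustrative assumptions, not the fitted model from the paper.

```python
import numpy as np

# Illustrative finger positions on the hand (arbitrary units).
FINGERS = {"index": np.array([0.0, 0.0]), "middle": np.array([1.0, 0.0])}

def perceived_average(directions, lead_bias=1.5):
    """directions: finger -> probe motion direction (radians, hand frame).

    The implied hand-over-surface motion is opposite the mean probe
    direction; the digit positioned foremost along that motion "leads"
    and receives a higher weight (lead_bias is illustrative).
    """
    mean_dir = np.angle(sum(np.exp(1j * d) for d in directions.values()))
    hand_motion = -np.array([np.cos(mean_dir), np.sin(mean_dir)])
    lead = max(FINGERS, key=lambda f: float(FINGERS[f] @ hand_motion))
    z = sum((lead_bias if f == lead else 1.0) * np.exp(1j * d)
            for f, d in directions.items())
    return np.angle(z)   # weighted circular mean, biased toward the leader

# Probes at +20 and -20 degrees: the report tilts toward the leading finger.
print(np.degrees(perceived_average({"index": np.radians(20),
                                    "middle": np.radians(-20)})))
```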
Affiliation(s)
- Shinya Takamuku
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Japan
- Irena Arslanova
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- Department of Psychology, Royal Holloway University of London, Egham, United Kingdom
- Hiroaki Gomi
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Japan
- Patrick Haggard
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom
5. Castillo IO, Schrater P, Pitkow X. Control when confidence is costly. arXiv 2024; arXiv:2406.14427v2 [Preprint]. [PMID: 39575123] [PMCID: PMC11581108]
Abstract
We develop a version of stochastic control that accounts for the computational costs of inference. Past studies identified efficient coding without control, or efficient control that neglects the cost of synthesizing information. Here we combine these concepts into a framework where agents rationally approximate inference for efficient control. Specifically, we study linear quadratic Gaussian (LQG) control with an added internal cost on the relative precision of the posterior probability over the world state. This creates a trade-off: an agent can obtain more utility overall by sacrificing some task performance, if doing so saves enough bits during inference. We discover that the rational strategy that solves the joint inference and control problem goes through phase transitions depending on the task demands, switching from a costly but optimal inference to a family of suboptimal inferences related by rotation transformations, each of which misestimates the stability of the world. In all cases, the agent moves more to think less. This work provides a foundation for a new type of rational computation that could be used by both brains and machines for efficient but computationally constrained control.
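The utility-versus-bits trade-off can be illustrated with a toy scalar simulation, assuming a fixed feedback gain and modelling cheap inference as a Kalman filter that deliberately inflates its assumed measurement noise; the rotation-related family of solutions reported in the paper is not reproduced here, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 1.02, 0.5          # slightly unstable scalar dynamics (illustrative)
q, r = 1.0, 0.1           # quadratic state/control costs
W, R = 0.1, 0.1           # process and true measurement noise variances
lam = 2.0                 # illustrative price per bit of inference
L = 0.9                   # fixed feedback gain u = -L * xhat (illustrative)

def run(R_assumed, T=20000):
    """Kalman filter that *assumes* measurement variance R_assumed >= R.

    Larger R_assumed -> lower posterior precision -> fewer bits per step
    (cheaper inference) but worse state estimates and control.
    """
    x, xhat, P = 0.0, 0.0, 1.0
    cost, bits = 0.0, 0.0
    for _ in range(T):
        u = -L * xhat
        cost += q * x**2 + r * u**2
        x = a * x + b * u + rng.normal(0, np.sqrt(W))
        y = x + rng.normal(0, np.sqrt(R))
        # Predict, then update with the assumed (possibly inflated) noise.
        xpred, Ppred = a * xhat + b * u, a**2 * P + W
        K = Ppred / (Ppred + R_assumed)
        xhat, P = xpred + K * (y - xpred), (1 - K) * Ppred
        bits += 0.5 * np.log2(Ppred / P)   # relative precision of posterior
    return cost / T, bits / T

for R_assumed in [R, 4 * R, 16 * R, 64 * R]:
    c, i = run(R_assumed)
    print(f"assumed R={R_assumed:5.1f}  control cost={c:6.3f}  "
          f"bits/step={i:4.2f}  total={c + lam * i:6.3f}")
```

Sweeping the assumed noise traces out the trade-off: for a high enough price per bit, a deliberately imprecise filter achieves lower total cost than the statistically optimal one.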
Affiliation(s)
- Paul Schrater
- Departments of Computer Science and Psychology, University of Minnesota, Minneapolis, MN 55455
- Xaq Pitkow
- Departments of Electrical and Computer Engineering and Computer Science, Rice University, Houston, TX 77005
- Neuroscience Institute and Department of Machine Learning, Carnegie Mellon University, Pittsburgh, PA 15213
- Department of Neuroscience, Baylor College of Medicine, Houston, TX 77030
6. Singletary NM, Horga G, Gottlieb J. A neural code supporting prospective probabilistic reasoning for instrumental information demand in humans. Commun Biol 2024; 7:1242. [PMID: 39358516] [PMCID: PMC11447085] [DOI: 10.1038/s42003-024-06927-7]
Abstract
When making adaptive decisions, we actively demand information, but relatively little is known about the mechanisms of active information gathering. An open question is how the brain prospectively estimates the information gains expected to accrue from various sources by integrating simpler quantities such as prior certainty and the reliability (diagnosticity) of a source. We examine this question using fMRI in a task in which people placed bids to obtain information under conditions that varied independently in reward, decision uncertainty, and information diagnosticity. We show that, consistent with value-of-information theory, participants' bids are sensitive to prior certainty (the certainty about the correct choice before gathering information) and expected posterior certainty (the certainty expected after gathering information). Expected posterior certainty is decoded from multivoxel activation patterns in the posterior parietal and extrastriate cortices. This representation is independent of instrumental rewards and spatially overlaps with distinct representations of prior certainty and expected information gains. The findings suggest that the posterior parietal and extrastriate cortices are candidates for mediating the prospection of posterior probabilities as a key step to anticipating information gains during active gathering of instrumental information.
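The two certainty quantities can be made concrete with a small worked example, assuming a binary choice and a source that names the correct option with probability equal to its diagnosticity; this is a generic value-of-information calculation rather than the paper's fitted model.

```python
def expected_posterior_certainty(prior, diag):
    """Binary choice: `prior` = P(option A correct); the information
    source names the correct option with probability `diag`.

    Prior certainty    = max(prior, 1 - prior)
    Expected posterior = sum over possible reports of
                         P(report) * max posterior given that report.
    """
    p = prior
    # P(source says "A") = P(A)*diag + P(B)*(1 - diag)
    p_say_a = p * diag + (1 - p) * (1 - diag)
    post_a = p * diag / p_say_a                 # P(A | source says A)
    post_b = p * (1 - diag) / (1 - p_say_a)     # P(A | source says B)
    return (p_say_a * max(post_a, 1 - post_a)
            + (1 - p_say_a) * max(post_b, 1 - post_b))

prior = 0.6
for diag in (0.5, 0.7, 0.9):
    epc = expected_posterior_certainty(prior, diag)
    gain = epc - max(prior, 1 - prior)   # expected certainty gain
    print(f"diagnosticity={diag:.1f}  expected posterior certainty={epc:.3f}  "
          f"gain={gain:.3f}")
```

A useless source (diagnosticity 0.5) yields zero expected gain, and the gain grows with diagnosticity and shrinks as the prior approaches certainty, which is the structure the bids are reported to track.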
Affiliation(s)
- Nicholas M Singletary
- Doctoral Program in Neurobiology and Behavior, Columbia University, New York, NY, USA.
- Department of Neuroscience, Columbia University, New York, NY, USA.
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA.
- New York State Psychiatric Institute, New York, NY, USA.
- Guillermo Horga
- New York State Psychiatric Institute, New York, NY, USA.
- Department of Psychiatry, Columbia University, New York, NY, USA.
- Jacqueline Gottlieb
- Department of Neuroscience, Columbia University, New York, NY, USA.
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA.
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA.
7. Constant A, Desirèe Di Paolo L, Guénin-Carlut A, M. Martinez L, Criado-Boado F, Müeller J, Clark A. A computational approach to selective attention in embodied approaches to cognitive archaeology. J R Soc Interface 2024; 21:20240508. [PMID: 39378981] [PMCID: PMC11461058] [DOI: 10.1098/rsif.2024.0508]
Abstract
This article proposes a novel computational framework for embodied approaches to cognitive archaeology, which we call computational cognitive archaeology (CCA). We argue that cognitive archaeology, understood as the study of the human mind based on archaeological findings such as artefacts and material remains excavated and interpreted in the present, can benefit from the integration of novel methods from computational neuroscience aimed at modelling the way the brain, the body and the environment are coupled and parameterized to allow for adaptive behaviour. We discuss the kinds of tasks that CCA may engage in, using a narrative example of how one can model the cumulative cultural evolution of the material and cognitive components of technologies, focusing on the case of knapping technology. This article thus provides a novel theoretical framework to formalize research in cognitive archaeology using recent developments in computational neuroscience.
Affiliation(s)
- Axel Constant
- School of Engineering and Informatics, University of Sussex, Falmer (Brighton & Hove), UK
- Laura Desirèe Di Paolo
- School of Engineering and Informatics, University of Sussex, Falmer (Brighton & Hove), UK
- Developmental Psychology, ChatLab, University of Sussex, Falmer (Brighton & Hove), UK
- Avel Guénin-Carlut
- School of Engineering and Informatics, University of Sussex, Falmer (Brighton & Hove), UK
- Felipe Criado-Boado
- Instituto de Ciencias del Patrimonio, Santiago de Compostela, Galicia, Spain
- Andy Clark
- School of Engineering and Informatics, University of Sussex, Falmer (Brighton & Hove), UK
8. Mai J, Gargiullo R, Zheng M, Esho V, Hussein OE, Pollay E, Bowe C, Williamson LM, McElroy AF, Saunders JL, Goolsby WN, Brooks KA, Rodgers CC. Sound-seeking before and after hearing loss in mice. Sci Rep 2024; 14:19181. [PMID: 39160202] [PMCID: PMC11333604] [DOI: 10.1038/s41598-024-67577-7]
Abstract
How we move our bodies affects how we perceive sound. For instance, head movements help us to better localize the source of a sound and to compensate for asymmetric hearing loss. However, many auditory experiments are designed to restrict head and body movements. To study the role of movement in hearing, we developed a behavioral task called sound-seeking that rewarded freely moving mice for tracking down an ongoing sound source. Over the course of learning, mice navigated to the sound more efficiently. Next, we asked how sound-seeking was affected by hearing loss induced by surgical removal of the malleus from the middle ear. After bilateral hearing loss, sound-seeking performance drastically declined and did not recover. In striking contrast, after unilateral hearing loss mice were only transiently impaired and then recovered their sound-seeking ability over about a week. Throughout recovery, unilateral mice increasingly relied on a movement strategy of sequentially checking potential locations for the sound source. In contrast, the startle reflex (an innate auditory behavior) was preserved after unilateral hearing loss and abolished by bilateral hearing loss, without recovery over time. In sum, mice compensate with body movement for permanent unilateral damage to the peripheral auditory system. Looking forward, this paradigm provides an opportunity to examine how movement enhances perception and enables resilient adaptation to sensory disorders.
Affiliation(s)
- Jessica Mai
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA, 30322, USA
- Rowan Gargiullo
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA, 30322, USA
- Megan Zheng
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA, 30322, USA
- Valentina Esho
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA, 30322, USA
- Osama E Hussein
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA, 30322, USA
- Eliana Pollay
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA, 30322, USA
- Cedric Bowe
- Neuroscience Graduate Program, Emory University, Atlanta, GA, 30322, USA
- Lucas M Williamson
- Neuroscience Graduate Program, Emory University, Atlanta, GA, 30322, USA
- Abigail F McElroy
- Neuroscience Graduate Program, Emory University, Atlanta, GA, 30322, USA
- Jonny L Saunders
- Department of Neurology, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- William N Goolsby
- Department of Cell Biology, Emory University School of Medicine, Atlanta, GA, 30322, USA
- Kaitlyn A Brooks
- Department of Otolaryngology-Head and Neck Surgery, Emory University School of Medicine, Atlanta, GA, 30308, USA
- Chris C Rodgers
- Department of Neurosurgery, Emory University School of Medicine, Atlanta, GA, 30322, USA
- Department of Cell Biology, Emory University School of Medicine, Atlanta, GA, 30322, USA
- Department of Biomedical Engineering, Georgia Tech and Emory University School of Medicine, Atlanta, GA, 30322, USA
- Department of Biology, Emory College of Arts and Sciences, Atlanta, GA, 30322, USA
9. Arató J, Rothkopf CA, Fiser J. Eye movements reflect active statistical learning. J Vis 2024; 24:17. [PMID: 38819805] [PMCID: PMC11146064] [DOI: 10.1167/jov.24.5.17]
Abstract
What is the link between eye movements and sensory learning? Although some theories have argued for an automatic interaction between what we know and where we look that continuously modulates human information gathering behavior during both implicit and explicit learning, there exists limited experimental evidence supporting such an ongoing interplay. To address this issue, we used a visual statistical learning paradigm combined with a gaze-contingent stimulus presentation and manipulated the explicitness of the task to explore how learning and eye movements interact. During both implicit exploration and explicit visual learning of unknown composite visual scenes, spatial eye movement patterns systematically and gradually changed in accordance with the underlying statistical structure of the scenes. Moreover, the degree of change was directly correlated with the amount and type of knowledge the observers acquired. This suggests that eye movements are potential indicators of active learning, a process where long-term knowledge, current visual stimuli and an inherent tendency to reduce uncertainty about the visual environment jointly determine where we look.
Affiliation(s)
- József Arató
- Department of Cognitive Science, Central European University, Vienna, Austria
- Center for Cognitive Computation, Central European University, Vienna, Austria
- Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
- Constantin A Rothkopf
- Center for Cognitive Science & Institute of Psychology, Technical University of Darmstadt, Darmstadt, Germany
- Frankfurt Institute for Advanced Studies, Goethe University, Frankfurt, Germany
- József Fiser
- Department of Cognitive Science, Central European University, Vienna, Austria
- Center for Cognitive Computation, Central European University, Vienna, Austria
10. Mansfield D, Montazeri A. A survey on autonomous environmental monitoring approaches: towards unifying active sensing and reinforcement learning. Front Robot AI 2024; 11:1336612. [PMID: 38533524] [PMCID: PMC10964253] [DOI: 10.3389/frobt.2024.1336612]
Abstract
The environmental pollution caused by various sources has escalated the climate crisis, making the need to establish reliable, intelligent, and persistent environmental monitoring solutions more crucial than ever. Mobile sensing systems are a popular platform due to their cost-effectiveness and adaptability. In practice, however, operational environments demand highly intelligent and robust systems that can cope with an environment's changing dynamics. To achieve this, reinforcement learning has become a popular tool, as it facilitates the training of intelligent and robust sensing agents that can handle unknown and extreme conditions. In this paper, a framework that formulates active sensing as a reinforcement learning problem is proposed. This framework unifies multiple essential environmental monitoring tasks and algorithms, such as coverage, patrolling, source seeking, exploration, and search and rescue. The unified framework represents a step towards bridging the divide between theoretical advancements in reinforcement learning and real-world applications in environmental monitoring. A critical review of the literature in this field finds that, despite the potential of reinforcement learning for environmental active sensing applications, there is still a lack of practical implementation, and most work remains in the simulation phase. It is also noted that, despite the consensus that multi-agent systems are crucial to fully realizing the potential of active sensing, there is a lack of research in this area.
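As a minimal illustration of casting active sensing as a reinforcement learning problem, the sketch below trains a tabular Q-learning agent to seek a source in a 1-D world; the environment, reward shaping, and hyperparameters are invented for brevity and are not drawn from the survey.

```python
import numpy as np

rng = np.random.default_rng(1)
N, SOURCE = 20, 15                      # 1-D world of N cells, source at 15

def sense(pos):
    """Noisy concentration reading decaying with distance (illustrative)."""
    return np.exp(-abs(pos - SOURCE) / 5) + rng.normal(0, 0.05)

Q = np.zeros((N, 2))                    # tabular Q: states x {left, right}
alpha, gamma, eps = 0.2, 0.95, 0.1      # learning rate, discount, exploration

for episode in range(500):
    pos = int(rng.integers(N))
    for _ in range(100):
        a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[pos]))
        new = int(np.clip(pos + (1 if a == 1 else -1), 0, N - 1))
        # Reward shaping: follow the sensed gradient, bonus at the source.
        reward = sense(new) - sense(pos) + (1.0 if new == SOURCE else 0.0)
        Q[pos, a] += alpha * (reward + gamma * Q[new].max() - Q[pos, a])
        pos = new
        if pos == SOURCE:
            break

# Greedy policy should (mostly) point toward the source from every cell.
print("".join("<" if np.argmax(Q[s]) == 0 else ">" for s in range(N)))
```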
11. Mai J, Gargiullo R, Zheng M, Esho V, Hussein OE, Pollay E, Bowe C, Williamson LM, McElroy AF, Goolsby WN, Brooks KA, Rodgers CC. Sound-seeking before and after hearing loss in mice. bioRxiv 2024; 2024.01.08.574475 [Preprint]. [PMID: 38260458] [PMCID: PMC10802496] [DOI: 10.1101/2024.01.08.574475]
Abstract
How we move our bodies affects how we perceive sound. For instance, we can explore an environment to seek out the source of a sound and we can use head movements to compensate for hearing loss. How we do this is not well understood because many auditory experiments are designed to limit head and body movements. To study the role of movement in hearing, we developed a behavioral task called sound-seeking that rewarded mice for tracking down an ongoing sound source. Over the course of learning, mice more efficiently navigated to the sound. We then asked how auditory behavior was affected by hearing loss induced by surgical removal of the malleus from the middle ear. An innate behavior, the auditory startle response, was abolished by bilateral hearing loss and unaffected by unilateral hearing loss. Similarly, performance on the sound-seeking task drastically declined after bilateral hearing loss and did not recover. In striking contrast, mice with unilateral hearing loss were only transiently impaired on sound-seeking; over a recovery period of about a week, they regained high levels of performance, increasingly reliant on a different spatial sampling strategy. Thus, even in the face of permanent unilateral damage to the peripheral auditory system, mice recover their ability to perform a naturalistic sound-seeking task. This paradigm provides an opportunity to examine how body movement enables better hearing and resilient adaptation to sensory deprivation.
Affiliation(s)
- Jessica Mai
- Department of Neurosurgery, Emory University School of Medicine, Atlanta GA 30322
- Rowan Gargiullo
- Department of Neurosurgery, Emory University School of Medicine, Atlanta GA 30322
- Megan Zheng
- Department of Neurosurgery, Emory University School of Medicine, Atlanta GA 30322
- Valentina Esho
- Department of Neurosurgery, Emory University School of Medicine, Atlanta GA 30322
- Osama E Hussein
- Department of Neurosurgery, Emory University School of Medicine, Atlanta GA 30322
- Eliana Pollay
- Department of Neurosurgery, Emory University School of Medicine, Atlanta GA 30322
- Cedric Bowe
- Neuroscience Graduate Program, Emory University, Atlanta GA 30322
- William N Goolsby
- Department of Cell Biology, Emory University School of Medicine, Atlanta GA 30322
- Kaitlyn A Brooks
- Department of Otolaryngology - Head and Neck Surgery, Emory University School of Medicine, Atlanta GA 30308
- Chris C Rodgers
- Department of Neurosurgery, Emory University School of Medicine, Atlanta GA 30322
- Department of Cell Biology, Emory University School of Medicine, Atlanta GA 30322
- Department of Biomedical Engineering, Georgia Tech and Emory University School of Medicine, Atlanta GA 30322
- Department of Biology, Emory College of Arts and Sciences, Atlanta GA 30322
12. Abbaspoor S, Rahman K, Zinke W, Hoffman KL. Learning of object-in-context sequences in freely-moving macaques. bioRxiv 2023; 2023.12.11.571113 [Preprint]. [PMID: 38168449] [PMCID: PMC10760043] [DOI: 10.1101/2023.12.11.571113]
Abstract
Flexible learning is a hallmark of primate cognition, which arises through interactions with changing environments. Studies of the neural basis of this flexibility are typically limited by laboratory settings that use minimal environmental cues and restrict interactions with the environment, including active sensing and exploration. To address this, we constructed a 3-D enclosure containing touchscreens on its walls for studying cognition in freely moving macaques. To test flexible learning, two monkeys completed trials consisting of a regular sequence of object selections across four touchscreens. On each screen, the monkeys had to select by touching the sole correct object item ('target') among a set of four items, irrespective of their positions on the screen. Each item was the target on exactly one screen of the sequence, making correct performance conditional on the spatiotemporal sequence rule across screens. Both monkeys successfully learned multiple 4-item sets (N=14 and 22 sets), totaling over 50 and 80 unique, conditional item-context memoranda, respectively, with no indication of capacity limits. The enclosure allowed freedom of movement leading up to and following the touchscreen interactions. To determine whether movement economy changed with learning, we reconstructed 3D position and movement dynamics using markerless tracking software and gyroscopic inertial measurements. Whereas general body positions remained consistent across repeated sequences, fine head movements varied as the monkeys learned, within and across sequence sets, demonstrating a learning set, or "learning to learn". These results demonstrate monkeys' rapid, capacious, and flexible learning within an integrated, multisensory 3-D space. Furthermore, this approach enables the measurement of continuous behavior while ensuring precise experimental control and behavioral repetition of sequences over time. Overall, this approach harmonizes the design features needed for electrophysiological studies with tasks that showcase fully situated, flexible cognition.
Affiliation(s)
- S Abbaspoor
- Department of Psychological Sciences, Vanderbilt University, Nashville, United States
- Vanderbilt Vision Research Center, Vanderbilt University, Nashville, United States
- K Rahman
- Department of Psychological Sciences, Vanderbilt University, Nashville, United States
- Vanderbilt Vision Research Center, Vanderbilt University, Nashville, United States
- W Zinke
- Department of Psychological Sciences, Vanderbilt University, Nashville, United States
- Vanderbilt Vision Research Center, Vanderbilt University, Nashville, United States
- K L Hoffman
- Department of Psychological Sciences, Vanderbilt University, Nashville, United States
- Vanderbilt Vision Research Center, Vanderbilt University, Nashville, United States
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, United States
- Department of Biomedical Engineering, Vanderbilt University, Nashville, United States
13. Singletary NM, Horga G, Gottlieb J. A Distinct Neural Code Supports Prospection of Future Probabilities During Instrumental Information-Seeking. bioRxiv 2023; 2023.11.27.568849 [Preprint]. [PMID: 38076800] [PMCID: PMC10705234] [DOI: 10.1101/2023.11.27.568849]
Abstract
To make adaptive decisions, we must actively demand information, but relatively little is known about the mechanisms of active information gathering. An open question is how the brain estimates expected information gains (EIG) when comparing the current decision uncertainty with the uncertainty that is expected after gathering information. We examined this question using fMRI in a task in which people placed bids to obtain information in conditions that varied independently by prior decision uncertainty, information diagnosticity, and the penalty for an erroneous choice. Consistent with value of information theory, bids were sensitive to EIG and its components of prior certainty and expected posterior certainty. Expected posterior certainty was decoded above chance from multivoxel activation patterns in the posterior parietal and extrastriate cortices. This representation was independent of instrumental rewards and overlapped with distinct representations of EIG and prior certainty. Thus, posterior parietal and extrastriate cortices are candidates for mediating the prospection of posterior probabilities as a key step to estimate EIG during active information gathering.
Affiliation(s)
- Nicholas M Singletary
- Doctoral Program in Neurobiology and Behavior, Columbia University, New York, NY, USA
- Department of Neuroscience, Columbia University, New York, NY, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- New York State Psychiatric Institute, New York, NY, USA
- Guillermo Horga
- New York State Psychiatric Institute, New York, NY, USA
- Department of Psychiatry, Columbia University, New York, NY, USA
- These authors contributed equally
- Jacqueline Gottlieb
- Department of Neuroscience, Columbia University, New York, NY, USA
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Kavli Institute for Brain Science, Columbia University, New York, NY, USA
- These authors contributed equally
14. Abbasi O, Kluger DS, Chalas N, Steingräber N, Meyer L, Gross J. Predictive coordination of breathing during intra-personal speaking and listening. iScience 2023; 26:107281. [PMID: 37520729] [PMCID: PMC10372729] [DOI: 10.1016/j.isci.2023.107281]
Abstract
It has long been known that human breathing is altered during listening and speaking compared to rest: during speaking, inhalation depth is adjusted to the air volume required for the upcoming utterance. During listening, inhalation is temporally aligned to inhalation of the speaker. While evidence for the former is relatively strong, it is virtually absent for the latter. We address both phenomena using recordings of speech envelope and respiration in 30 participants during 14 min of speaking and listening to one's own speech. First, we show that inhalation depth is positively correlated with the total power of the speech envelope in the following utterance. Second, we provide evidence that inhalation during listening to one's own speech is significantly more likely at time points of inhalation during speaking. These findings are compatible with models that postulate alignment of internal forward models of interlocutors with the aim to facilitate communication.
Affiliation(s)
- Omid Abbasi
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Daniel S. Kluger
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Nikos Chalas
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Nadine Steingräber
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Lars Meyer
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Joachim Gross
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
15. Zhu SL, Lakshminarasimhan KJ, Angelaki DE. Computational cross-species views of the hippocampal formation. Hippocampus 2023; 33:586-599. [PMID: 37038890] [PMCID: PMC10947336] [DOI: 10.1002/hipo.23535]
Abstract
The discovery of place cells and head direction cells in the hippocampal formation of freely foraging rodents has led to an emphasis on its role in encoding allocentric spatial relationships. In contrast, studies in head-fixed primates have additionally found representations of spatial views. We review recent experiments in freely moving monkeys that expand upon these findings and show that postural variables such as eye/head movements strongly influence neural activity in the hippocampal formation, suggesting that the function of the hippocampus depends on where the animal looks. We interpret these results in the light of recent studies in humans performing challenging navigation tasks which suggest that, depending on the context, eye/head movements serve one of two roles: gathering information about the structure of the environment (active sensing) or externalizing the contents of internal beliefs/deliberation (embodied cognition). These findings prompt future experimental investigations into the information carried by signals flowing between the hippocampal formation and the brain regions controlling postural variables, and constitute a basis for updating computational theories of the hippocampal system to accommodate the influence of eye/head movements.
Affiliation(s)
- Seren L Zhu
- Center for Neural Science, New York University, New York, New York, USA
- Kaushik J Lakshminarasimhan
- Center for Theoretical Neuroscience, Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, USA
- Dora E Angelaki
- Center for Neural Science, New York University, New York, New York, USA
- Mechanical and Aerospace Engineering, Tandon School of Engineering, New York University, New York, New York, USA
16. Studnicki A, Ferris DP. Parieto-Occipital Electrocortical Dynamics during Real-World Table Tennis. eNeuro 2023; 10:ENEURO.0463-22.2023. [PMID: 37037603] [PMCID: PMC10158585] [DOI: 10.1523/ENEURO.0463-22.2023]
Abstract
Traditional human electroencephalography (EEG) experiments that study visuomotor processing use controlled laboratory conditions with limited ecological validity. In the real world, the brain integrates complex, dynamic, multimodal visuomotor cues to guide the execution of movement. The parietal and occipital cortices are especially important in the online control of goal-directed actions. Table tennis is a whole-body, responsive activity requiring rapid visuomotor integration that presents a myriad of unanswered neurocognitive questions about brain function during real-world movement. The aim of this study was to use high-density electroencephalography to quantify the electrocortical dynamics of the parieto-occipital cortices while participants played a sport. Our analysis covered power spectral densities (PSDs), event-related spectral perturbations, intertrial phase coherences (ITPCs), event-related potentials (ERPs), and event-related phase coherences of parieto-occipital source-localized clusters while participants played table tennis with a ball machine and with a human. We found significant spectral power fluctuations in the parieto-occipital cortices tied to hit events. Ball machine trials exhibited more fluctuations in θ power around hit events, an increase in intertrial phase coherence and deflection in the event-related potential, and higher event-related phase coherence between parieto-occipital clusters compared with trials against a human. Our results suggest that sport training with a machine elicits fundamentally different brain dynamics than training with a human.
Affiliation(s)
- Amanda Studnicki
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611
- Daniel P Ferris
- J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611
17. D’Amelio A, Patania S, Bursic S, Cuculo V, Boccignone G. Using Gaze for Behavioural Biometrics. Sensors (Basel) 2023; 23:1262. [PMID: 36772302] [PMCID: PMC9920149] [DOI: 10.3390/s23031262]
Abstract
A principled approach to the analysis of eye movements for behavioural biometrics is laid down. The approach is grounded in foraging theory, which provides a sound basis for capturing the uniqueness of individual eye movement behaviour. We propose a composite Ornstein-Uhlenbeck process for quantifying the exploration/exploitation signature characterising foraging eye behaviour. The relevant parameters of the composite model, inferred from eye-tracking data via Bayesian analysis, are shown to yield a suitable feature set for biometric identification; the latter is eventually accomplished via a classical classification technique. A proof of concept of the method is provided by measuring its identification performance on a publicly available dataset. Data and code for reproducing the analyses are made available. Overall, we argue that the approach offers a fresh view on both the analysis of eye-tracking data and prospective applications in this field.
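The flavor of a composite Ornstein-Uhlenbeck gaze model can be conveyed with a minimal simulation that alternates between a strongly mean-reverting "exploitation" (fixation) regime and a weakly reverting, high-noise "exploration" (relocation) regime; the regime parameters and switching times below are illustrative, not the values inferred in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T = 1e-3, 4000                     # time step (s) and number of steps

# (mean-reversion rate, noise scale) per regime -- illustrative values
REGIMES = {"exploit": (50.0, 0.5), "explore": (5.0, 5.0)}

gaze = np.zeros((T, 2))
mu = np.zeros(2)                       # current attractor (fixation target)
x = np.zeros(2)
regime, t_switch = "exploit", 0

for t in range(1, T):
    if t >= t_switch:                  # alternate regimes at random intervals
        regime = "explore" if regime == "exploit" else "exploit"
        t_switch = t + int(rng.integers(100, 600))
        if regime == "explore":
            mu = rng.uniform(-10, 10, size=2)   # pick a new gaze target
    k, s = REGIMES[regime]
    # Euler-Maruyama step of dX = k (mu - X) dt + s dW
    x = x + k * (mu - x) * dt + s * np.sqrt(dt) * rng.normal(size=2)
    gaze[t] = x

print(gaze[::500].round(2))            # coarse trace of the simulated scanpath
```

In the biometric setting, the inference would run the other way: given a recorded scanpath, estimate the per-individual regime parameters and feed them to a classifier.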
Affiliation(s)
- Alessandro D’Amelio
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
- Sabrina Patania
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
- Sathya Bursic
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
- Department of Psychology, University of Milano-Bicocca, Piazza dell’Ateneo Nuovo 1, 20126 Milan, Italy
- Vittorio Cuculo
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
- Giuseppe Boccignone
- PHuSe Lab, Department of Computer Science, University of Milano Statale, Via Celoria 18, 20133 Milan, Italy
18. Oh K, Prilutsky BI. Transformation from arm joint coordinates to hand external coordinates explains non-uniform precision of hand position sense in horizontal workspace. Hum Mov Sci 2022; 86:103020. [DOI: 10.1016/j.humov.2022.103020]
19. Noel JP, Balzani E, Avila E, Lakshminarasimhan KJ, Bruni S, Alefantis P, Savin C, Angelaki DE. Coding of latent variables in sensory, parietal, and frontal cortices during closed-loop virtual navigation. eLife 2022; 11:e80280. [PMID: 36282071] [PMCID: PMC9668339] [DOI: 10.7554/eLife.80280]
Abstract
We do not understand how neural nodes operate and coordinate within the recurrent action-perception loops that characterize naturalistic self-environment interactions. Here, we record single-unit spiking activity and local field potentials (LFPs) simultaneously from the dorsomedial superior temporal area (MSTd), parietal area 7a, and dorsolateral prefrontal cortex (dlPFC) as monkeys navigate in virtual reality to 'catch fireflies'. This task requires animals to actively sample from a closed-loop virtual environment while concurrently computing continuous latent variables: (i) the distance and angle travelled (i.e., path integration) and (ii) the distance and angle to a memorized firefly location (i.e., a hidden spatial goal). We observed a patterned mixed selectivity, with the prefrontal cortex most prominently coding for latent variables, parietal cortex coding for sensorimotor variables, and MSTd most often coding for eye movements. However, even the traditionally considered sensory area (i.e., MSTd) tracked latent variables, demonstrating path integration and vector coding of hidden spatial goals. Further, global encoding profiles and unit-to-unit coupling (i.e., noise correlations) suggested a functional subnetwork composed of MSTd and dlPFC, rather than between these areas and 7a, as anatomy would suggest. We show that the greater the unit-to-unit coupling between MSTd and dlPFC, the more the animals' gaze position was indicative of the ongoing location of the hidden spatial goal. We suggest this MSTd-dlPFC subnetwork reflects the monkeys' natural and adaptive task strategy wherein they continuously gaze toward the location of the (invisible) target. Together, these results highlight the distributed nature of neural coding during closed action-perception loops and suggest that fine-grained functional subnetworks may be dynamically established to subserve (embodied) task strategies.
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York City, United States
- Edoardo Balzani
- Center for Neural Science, New York University, New York City, United States
- Eric Avila
- Center for Neural Science, New York University, New York City, United States
- Kaushik J Lakshminarasimhan
- Center for Neural Science, New York University, New York City, United States
- Center for Theoretical Neuroscience, Columbia University, New York, United States
- Stefania Bruni
- Center for Neural Science, New York University, New York City, United States
- Panos Alefantis
- Center for Neural Science, New York University, New York City, United States
- Cristina Savin
- Center for Neural Science, New York University, New York City, United States
- Dora E Angelaki
- Center for Neural Science, New York University, New York City, United States
20. Rodgers CC. A detailed behavioral, videographic, and neural dataset on object recognition in mice. Sci Data 2022; 9:620. [PMID: 36229608] [PMCID: PMC9561117] [DOI: 10.1038/s41597-022-01728-1]
Abstract
Mice adeptly use their whiskers to touch, recognize, and learn about objects in their environment. This behavior is enabled by computations performed by populations of neurons in the somatosensory cortex. To understand these computations, we trained mice to use their whiskers to recognize different shapes while we recorded activity in the barrel cortex, which processes whisker input. Here, we present a large dataset of high-speed video of the whiskers, along with rigorous tracking of the entire extent of multiple whiskers and every contact they made on the shape. We used spike sorting to identify individual neurons, which responded with precise timing to whisker contacts and motion. These data will be useful for understanding the behavioral strategies mice use to explore objects, as well as the neuronal dynamics that mediate those strategies. In addition, our carefully curated labeled data could be used to develop new computer vision algorithms for tracking body posture, or for extracting responses of individual neurons from large-scale neural recordings.
Affiliation(s)
- Chris C Rodgers
- Department of Neurosurgery, Emory University, Atlanta, GA, 30322, USA.
21. Anil Meera A, Novicky F, Parr T, Friston K, Lanillos P, Sajid N. Reclaiming saliency: Rhythmic precision-modulated action and perception. Front Neurorobot 2022; 16:896229. [PMID: 35966370] [PMCID: PMC9368584] [DOI: 10.3389/fnbot.2022.896229]
Abstract
Computational models of visual attention in artificial intelligence and robotics have been inspired by the concept of a saliency map. These models account for the mutual information between the (current) visual information and its estimated causes. However, they fail to consider the circular causality between perception and action. In other words, they do not consider where to sample next, given current beliefs. Here, we reclaim salience as an active inference process that relies on two basic principles: uncertainty minimization and rhythmic scheduling. For this, we make a distinction between attention and salience. Briefly, we associate attention with precision control, i.e., the confidence with which beliefs can be updated given sampled sensory data, and salience with uncertainty minimization that underwrites the selection of future sensory data. Using this, we propose a new account of attention based on rhythmic precision-modulation and discuss its potential in robotics, providing numerical experiments that showcase its advantages for state and noise estimation, system identification and action selection for informative path planning.
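The attention/salience distinction drawn here can be sketched for a Gaussian toy problem: attention as the precision with which a sensory sample updates a belief, and salience as the prospective entropy reduction from sampling a location. The numbers and the independent-locations assumption are illustrative, not taken from the paper.

```python
import numpy as np

# Belief about a hidden value at each of three locations: precision = 1/variance.
prior_prec = np.array([1.0, 4.0, 0.25])      # illustrative prior precisions
sens_prec = 2.0                               # attention ~ sensory precision

def update(mu_prior, pi_prior, y, pi_sens):
    """Precision-weighted (attention-like) Gaussian belief update."""
    pi_post = pi_prior + pi_sens                        # precisions add
    mu_post = (pi_prior * mu_prior + pi_sens * y) / pi_post
    return mu_post, pi_post

# Salience: expected entropy reduction from sampling location i next.
# For Gaussians this is 0.5 * ln(posterior/prior precision ratio), which is
# independent of the sample itself, so it can be computed prospectively.
salience = 0.5 * np.log((prior_prec + sens_prec) / prior_prec)
print("salience per location:", salience.round(3))
print("sample next at location", int(np.argmax(salience)))

# Attention in action: a sample moves the belief in proportion to its precision.
mu_post, pi_post = update(0.0, prior_prec[0], 0.7, sens_prec)
print("posterior at location 0: mean", round(mu_post, 3), "precision", pi_post)
```

The most uncertain location wins the salience comparison, which is the uncertainty-minimization principle the abstract describes; rhythmic scheduling would then gate when such comparisons are made.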
Affiliation(s)
- Ajith Anil Meera
- Department of Cognitive Robotics, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, Netherlands
- Filip Novicky
- Department of Neurophysiology, Donders Institute for Brain Cognition and Behavior, Radboud University, Nijmegen, Netherlands
- Thomas Parr
- Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom
- Karl Friston
- Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom
- Pablo Lanillos
- Department of Artificial Intelligence, Donders Institute for Brain Cognition and Behavior, Radboud University, Nijmegen, Netherlands
- Noor Sajid
- Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom
22. Ross JM, Balasubramaniam R. Time Perception for Musical Rhythms: Sensorimotor Perspectives on Entrainment, Simulation, and Prediction. Front Integr Neurosci 2022; 16:916220. [PMID: 35865808] [PMCID: PMC9294366] [DOI: 10.3389/fnint.2022.916220]
Abstract
Neural mechanisms supporting time perception in continuously changing sensory environments may be relevant to a broader understanding of how the human brain utilizes time in cognition and action. In this review, we describe current theories of sensorimotor engagement in the support of subsecond timing, focusing on musical timing due to the extensive literature surrounding movement with, and perception of, musical rhythms. First, we define commonly used but ambiguous concepts, including neural entrainment, simulation, and prediction, in the context of musical timing. Next, we summarize the literature on sensorimotor timing during perception and performance. We then review the evidence that sensorimotor engagement is critical for accurate time perception. Finally, potential clinical implications of a sensorimotor perspective on timing are highlighted.
Affiliation(s)
- Jessica M. Ross
- Veterans Affairs Palo Alto Healthcare System and the Sierra Pacific Mental Illness, Research, Education, and Clinical Center, Palo Alto, CA, United States
- Department of Psychiatry and Behavioral Sciences, Stanford University Medical Center, Stanford, CA, United States
- Berenson-Allen Center for Non-invasive Brain Stimulation, Beth Israel Deaconess Medical Center, Boston, MA, United States
- Department of Neurology, Harvard Medical School, Boston, MA, United States
- Ramesh Balasubramaniam
- Cognitive and Information Sciences, University of California, Merced, Merced, CA, United States
23. Variations of Sensorimotor Representation (Structure): The Functional Interplay between Object Features and Goal-Directed Grasping Actions. Brain Sci 2022; 12:873. [PMID: 35884679] [PMCID: PMC9312880] [DOI: 10.3390/brainsci12070873]
Abstract
This study investigated the structure of sensorimotor representations during goal-directed grasping actions and explored their relationship with object features. Sixteen 3D-printed spheres that varied in size (diameters of 20 mm, 40 mm, 60 mm, or 80 mm) and weight (40 g, 52 g, 76 g, or 91 g) were used as experimental stimuli. The Structural Dimensional Analysis of Mental Representation (SDA-M) method was used to assess the sensorimotor representation structure during grasping. In each trial, participants were instructed to weigh, lift, or transport sets of two different spheres and to judge the similarity of the objects’ features, taking into account the executed grasping movement. Each participant performed a total of 240 trials, and object presentation was randomized. The results suggest that the functional interplay between object features and goal-directed actions accounts for the significant variations in the structure of sensorimotor representations after grasping. Specifically, the relevance of the perceived objects’ size and weight is closely interrelated with the grasping task demands and movement dynamics of the executed action. Our results suggest that distinct sensorimotor representations support individual grasping actions according to top-down influences modulated by motor intentions, functional task demands, and task-relevant object features.
Collapse
|
24
|
Bujia G, Sclar M, Vita S, Solovey G, Kamienkowski JE. Modeling Human Visual Search in Natural Scenes: A Combined Bayesian Searcher and Saliency Map Approach. Front Syst Neurosci 2022; 16:882315. [PMID: 35712044 PMCID: PMC9197262 DOI: 10.3389/fnsys.2022.882315] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2022] [Accepted: 04/26/2022] [Indexed: 11/13/2022] Open
Abstract
Finding objects is essential for almost any daily-life visual task. Saliency models have been useful for predicting fixation locations in natural images during free exploration. However, it is still challenging to predict the sequence of fixations during visual search. Bayesian observer models are particularly suited for this task because they represent visual search as an active sampling process. Nevertheless, how they adapt to natural images remains largely unexplored. Here, we propose a unified Bayesian model for visual search guided by saliency maps as prior information. We validated our model with a visual search experiment in natural scenes. We showed that, although state-of-the-art saliency models performed well in predicting the first two fixations in a visual search task (~90% of the performance achieved by humans), their performance degraded to chance afterward. Therefore, saliency maps alone could model bottom-up first impressions, but they were not enough to explain scanpaths when top-down task information was critical. In contrast, our model led to human-like performance and scanpaths, as revealed by: first, the agreement between targets found by the model and by humans on a trial-by-trial basis; and second, the scanpath similarity between the model and humans, which makes the behavior of the model indistinguishable from that of humans. Altogether, the combination of deep neural network-based saliency models for image processing and a Bayesian framework for scanpath integration proves to be a powerful and flexible approach for modeling human behavior in natural scenarios.
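To make the core update concrete, here is a minimal sketch (not the authors' model) of a Bayesian searcher on a toy grid that uses a saliency map as its prior over target location and fixates the current belief maximum; the grid size, detectability parameter, and saliency values are invented.

```python
# Toy Bayesian searcher: saliency map as prior belief over target location,
# noisy observation at each fixation, Bayes update, fixate the belief maximum.
# Grid size, detectability d, and the saliency map are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
H, W = 8, 8
saliency = rng.random((H, W))
prior = saliency / saliency.sum()            # saliency map as prior belief
target = (5, 2)                              # hidden target location
d = 2.0                                      # detectability at fixation

belief = prior.copy()
for t in range(10):
    fix = np.unravel_index(np.argmax(belief), belief.shape)  # fixate best guess
    if fix == target:
        print(f"found target at fixation {t + 1}")
        break
    # Noisy local observation: evidence is higher on average at the target.
    obs = rng.normal(d if fix == target else 0.0, 1.0)
    # Likelihood per candidate location: at the fixated cell obs ~ N(d, 1)
    # if that cell held the target; everywhere else obs ~ N(0, 1).
    lik = np.full((H, W), np.exp(-0.5 * obs**2))
    lik[fix] = np.exp(-0.5 * (obs - d) ** 2)
    belief = belief * lik
    belief /= belief.sum()                   # posterior becomes next prior
```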
Collapse
Affiliation(s)
- Gaston Bujia
- Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación, Universidad de Buenos Aires – CONICET, Ciudad Autónoma de Buenos Aires, Argentina
- Instituto de Cálculo, Universidad de Buenos Aires – CONICET, Ciudad Autónoma de Buenos Aires, Argentina
| | - Melanie Sclar
- Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación, Universidad de Buenos Aires – CONICET, Ciudad Autónoma de Buenos Aires, Argentina
| | - Sebastian Vita
- Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación, Universidad de Buenos Aires – CONICET, Ciudad Autónoma de Buenos Aires, Argentina
| | - Guillermo Solovey
- Instituto de Cálculo, Universidad de Buenos Aires – CONICET, Ciudad Autónoma de Buenos Aires, Argentina
| | - Juan Esteban Kamienkowski
- Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación, Universidad de Buenos Aires – CONICET, Ciudad Autónoma de Buenos Aires, Argentina
- Maestría de Explotación de Datos y Descubrimiento del Conocimiento, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Autónoma de Buenos Aires, Argentina
| |
Collapse
|
25
|
Miller CT, Gire D, Hoke K, Huk AC, Kelley D, Leopold DA, Smear MC, Theunissen F, Yartsev M, Niell CM. Natural behavior is the language of the brain. Curr Biol 2022; 32:R482-R493. [PMID: 35609550 PMCID: PMC10082559 DOI: 10.1016/j.cub.2022.03.031] [Citation(s) in RCA: 69] [Impact Index Per Article: 23.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
The breadth and complexity of natural behaviors inspire awe. Understanding how our perceptions, actions, and internal thoughts arise from evolved circuits in the brain has motivated neuroscientists for generations. Researchers have traditionally approached this question by focusing on stereotyped behaviors, either natural or trained, in a limited number of model species. This approach has allowed for the isolation and systematic study of specific brain operations, which has greatly advanced our understanding of the circuits involved. At the same time, the emphasis on experimental reductionism has left most aspects of the natural behaviors that have shaped the evolution of the brain largely unexplored. However, emerging technologies and analytical tools make it possible to comprehensively link natural behaviors to neural activity across a broad range of ethological contexts and timescales, heralding new modes of neuroscience focused on natural behaviors. Here we describe a three-part roadmap that aims to leverage the wealth of behaviors in their naturally occurring distributions, linking their variance with that of underlying neural processes to understand how the brain is able to successfully navigate the everyday challenges of animals' social and ecological landscapes. To achieve this aim, experimenters must harness one challenge faced by all neurobiological systems, namely variability, in order to gain new insights into the language of the brain.
Collapse
Affiliation(s)
- Cory T Miller
- Cortical Systems and Behavior Laboratory, University of California San Diego, 9500 Gilman Drive, La Jolla, CA 92039, USA.
| | - David Gire
- Department of Psychology, University of Washington, Guthrie Hall, Seattle, WA 98105, USA
| | - Kim Hoke
- Department of Biology, Colorado State University, 1878 Campus Delivery, Fort Collins, CO 80523, USA
| | - Alexander C Huk
- Center for Perceptual Systems, Departments of Neuroscience and Psychology, University of Texas at Austin, 116 Inner Campus Drive, Austin, TX 78712, USA
| | - Darcy Kelley
- Department of Biological Sciences, Columbia University, 1212 Amsterdam Avenue, New York, NY 10027, USA
| | - David A Leopold
- Section of Cognitive Neurophysiology and Imaging, National Institute of Mental Health, 49 Convent Drive, Bethesda, MD 20892, USA
| | - Matthew C Smear
- Department of Psychology and Institute of Neuroscience, University of Oregon, 1227 University Street, Eugene, OR 97403, USA
| | - Frederic Theunissen
- Department of Psychology, University of California Berkeley, 2121 Berkeley Way, Berkeley, CA 94720, USA
| | - Michael Yartsev
- Department of Bioengineering, University of California Berkeley, 306 Stanley Hall, Berkeley, CA 94720, USA
| | - Cristopher M Niell
- Department of Biology and Institute of Neuroscience, University of Oregon, 222 Huestis Hall, Eugene, OR 97403, USA.
| |
Collapse
|
26
|
Zhu S, Lakshminarasimhan KJ, Arfaei N, Angelaki DE. Eye movements reveal spatiotemporal dynamics of visually-informed planning in navigation. eLife 2022; 11:e73097. [PMID: 35503099 PMCID: PMC9135400 DOI: 10.7554/elife.73097] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2021] [Accepted: 05/01/2022] [Indexed: 11/28/2022] Open
Abstract
Goal-oriented navigation is widely understood to depend upon internal maps. Although this may be the case in many settings, humans tend to rely on vision in complex, unfamiliar environments. To study the nature of gaze during visually-guided navigation, we tasked humans to navigate to transiently visible goals in virtual mazes of varying levels of difficulty, observing that they took near-optimal trajectories in all arenas. By analyzing participants' eye movements, we gained insights into how they performed visually-informed planning. The spatial distribution of gaze revealed that environmental complexity mediated a striking trade-off in the extent to which attention was directed towards two complementary aspects of the world model: the reward location and task-relevant transitions. The temporal evolution of gaze revealed rapid, sequential prospection of the future path, evocative of neural replay. These findings suggest that the spatiotemporal characteristics of gaze during navigation are significantly shaped by the unique cognitive computations underlying real-world, sequential decision making.
Collapse
Affiliation(s)
- Seren Zhu
- Center for Neural Science, New York University, New York, United States
| | | | - Nastaran Arfaei
- Department of Psychology, New York University, New York, United States
| | - Dora E Angelaki
- Center for Neural Science, New York University, New York, United States
- Department of Mechanical and Aerospace Engineering, New York University, New York, United States
| |
Collapse
|
27
|
Lundqvist M, Wutz A. New methods for oscillation analyses push new theories of discrete cognition. Psychophysiology 2022; 59:e13827. [PMID: 33942323 PMCID: PMC11475370 DOI: 10.1111/psyp.13827] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2020] [Revised: 03/18/2021] [Accepted: 03/23/2021] [Indexed: 11/28/2022]
Abstract
Classical ways of analyzing neural time series data have led to static views of cognition, in which cognitive processes are linked to sustained neural activity and interpreted as stationary states. The core analytical focus was on slow power modulations of neural oscillations averaged across many experimental trials. Whereas this customary analytical approach reduces complexity and increases the signal-to-noise ratio, it may disregard or even remove important aspects of the underlying neural dynamics. Novel analysis methods investigate the instantaneous frequency and phase of neural oscillations and relate them to the precisely controlled timing of brief successive sensory stimuli. This makes it possible to capture how cognitive processes unfold in discrete windows within and across oscillatory cycles. Moreover, several recent studies analyze oscillatory power modulations on single experimental trials. They suggest that the power modulations are packed into discrete bursts of activity, which occur at different rates and times, and with different durations, from trial to trial. Here, we review the current work that made use of these methodological advances for neural oscillations. These novel analysis perspectives emphasize that cognitive processes occur in discrete time windows, rather than sustained, stationary states. Evidence for discretization was observed for the entire range of cognitive functions from perception and attention to working memory, goal-directed thought and motor actions, as well as throughout the entire cortical hierarchy and in subcortical regions. These empirical observations create demand for new psychological theories and computational models of cognition in the brain that integrate its discrete temporal dynamics.
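As a rough illustration of the single-trial burst analyses described above, the sketch below band-pass filters one simulated trial, takes the Hilbert envelope, and defines bursts by threshold crossings; the sampling rate, frequency band, and threshold rule are illustrative choices, not a published pipeline.

```python
# Minimal sketch of single-trial burst detection: band-pass filter, Hilbert
# envelope, then threshold crossings define discrete bursts. All parameters
# are illustrative; real pipelines differ in band, threshold, and cleanup.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                   # sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)
signal = rng.normal(0, 1, t.size)
signal[500:700] += 3 * np.sin(2 * np.pi * 20 * t[500:700])   # injected beta-band burst

b, a = butter(4, [15 / (fs / 2), 29 / (fs / 2)], btype="band")
envelope = np.abs(hilbert(filtfilt(b, a, signal)))           # beta-band power envelope

thresh = np.median(envelope) + 2 * np.std(envelope)          # illustrative threshold
above = envelope > thresh
edges = np.flatnonzero(np.diff(above.astype(int)))
# Pairing below assumes the envelope starts and ends below threshold.
starts, stops = edges[::2], edges[1::2]
for s, e in zip(starts, stops):
    print(f"burst from {t[s]:.3f}s to {t[e]:.3f}s, duration {(e - s) / fs * 1000:.0f} ms")
```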
Collapse
Affiliation(s)
- Mikael Lundqvist
- Department of Clinical Neuroscience, Karolinska Institute, Stockholm, Sweden
- Picower Institute for Learning & Memory, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Andreas Wutz
- Picower Institute for Learning & Memory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Centre for Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
| |
Collapse
|
28
|
Kaanders P, Sepulveda P, Folke T, Ortoleva P, De Martino B. Humans actively sample evidence to support prior beliefs. eLife 2022; 11:e71768. [PMID: 35404234 PMCID: PMC9038198 DOI: 10.7554/elife.71768] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Accepted: 04/08/2022] [Indexed: 11/13/2022] Open
Abstract
No one likes to be wrong. Previous research has shown that participants may underweight information incompatible with previous choices, a phenomenon called confirmation bias. In this paper, we argue that a similar bias exists in the way information is actively sought. We investigate how choice influences information gathering using a perceptual choice task and find that participants sample more information from a previously chosen alternative. Furthermore, the higher the confidence in the initial choice, the more biased information sampling becomes. As a consequence, when faced with the possibility of revising an earlier decision, participants are more likely to stick with their original choice, even when incorrect. Critically, we show that agency controls this phenomenon. The effect disappears in a fixed sampling condition where presentation of evidence is controlled by the experimenter, suggesting that the way in which confirmatory evidence is acquired critically impacts the decision process. These results suggest active information acquisition plays a critical role in the propagation of strongly held beliefs over time.
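A toy simulation of the reported sampling bias might look as follows; the bias function and all numbers are invented, purely to illustrate how confidence-dependent confirmatory sampling can be quantified.

```python
# Toy simulation of confirmation-biased information sampling: after an initial
# choice, the probability of sampling the chosen alternative grows with
# confidence. The bias function and parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
n_agents = 10_000
confidence = rng.uniform(0.5, 1.0, n_agents)      # confidence in initial choice
bias = 0.5 + 0.6 * (confidence - 0.5)             # sampling bias toward chosen option
samples_to_chosen = rng.binomial(n=20, p=bias)    # out of 20 post-choice samples

lo = samples_to_chosen[confidence < 0.7].mean()
hi = samples_to_chosen[confidence > 0.9].mean()
print(f"mean samples to chosen option: low confidence {lo:.1f}, high confidence {hi:.1f}")
```

The high-confidence group allocates visibly more samples to its chosen option, mirroring the paper's key behavioral signature.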
Collapse
Affiliation(s)
- Paula Kaanders
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, United Kingdom
| | - Pradyumna Sepulveda
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom
| | - Tomas Folke
- Department of Mathematics and Computer Science, Rutgers University, Newark, United States
- Centre for Business Research, Cambridge Judge Business School, University of Cambridge, Cambridge, United Kingdom
| | - Pietro Ortoleva
- Department of Economics and Woodrow Wilson School, Princeton University, Princeton, United States
| | - Benedetto De Martino
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom
| |
Collapse
|
29
|
Delis I, Ince RAA, Sajda P, Wang Q. Neural Encoding of Active Multi-Sensing Enhances Perceptual Decision-Making via a Synergistic Cross-Modal Interaction. J Neurosci 2022; 42:2344-2355. [PMID: 35091504 PMCID: PMC8936614 DOI: 10.1523/jneurosci.0861-21.2022] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2021] [Revised: 11/29/2021] [Accepted: 01/02/2022] [Indexed: 12/16/2022] Open
Abstract
Most perceptual decisions rely on the active acquisition of evidence from the environment involving stimulation from multiple senses. However, our understanding of the neural mechanisms underlying this process is limited. Crucially, it remains elusive how different sensory representations interact in the formation of perceptual decisions. To answer these questions, we used an active sensing paradigm coupled with neuroimaging, multivariate analysis, and computational modeling to probe how the human brain processes multisensory information to make perceptual judgments. Participants of both sexes actively sensed to discriminate two texture stimuli using visual (V) or haptic (H) information or the two sensory cues together (VH). Crucially, information acquisition was under the participants' control, who could choose where to sample information from and for how long on each trial. To understand the neural underpinnings of this process, we first characterized where and when active sensory experience (movement patterns) is encoded in human brain activity (EEG) in the three sensory conditions. Then, to offer a neurocomputational account of active multisensory decision formation, we used these neural representations of active sensing to inform a drift diffusion model of decision-making behavior. This revealed a multisensory enhancement of the neural representation of active sensing, which led to faster and more accurate multisensory decisions. We then dissected the interactions between the V, H, and VH representations using a novel information-theoretic methodology. Ultimately, we identified a synergistic neural interaction between the two unisensory (V, H) representations over contralateral somatosensory and motor locations that predicted multisensory (VH) decision-making performance. SIGNIFICANCE STATEMENT: In real-world settings, perceptual decisions are made during active behaviors, such as crossing the road on a rainy night, and include information from different senses (e.g., car lights, slippery ground). Critically, it remains largely unknown how sensory evidence is combined and translated into perceptual decisions in such active scenarios. Here we address this knowledge gap. First, we show that the simultaneous exploration of information across senses (multi-sensing) enhances the neural encoding of active sensing movements. Second, the neural representation of active sensing modulates the evidence available for decision; and importantly, multi-sensing yields faster evidence accumulation. Finally, we identify a cross-modal interaction in the human brain that correlates with multisensory performance, constituting a putative neural mechanism for forging active multisensory perception.
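The neurally informed modeling step can be sketched as a drift-diffusion simulation in which the drift rate scales with a per-trial neural encoding strength; the parameters and the simulated encoding regressor below are illustrative, not fitted values from the study.

```python
# Sketch of a neurally informed drift-diffusion model: the drift rate is scaled
# by a per-trial "neural encoding strength" regressor (here simulated), so that
# stronger encoding of active sensing yields faster, more accurate decisions.
# All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)

def simulate_ddm(drift, bound=1.0, dt=0.001, noise=1.0, max_t=3.0):
    """Return (reaction time, correct) for one diffusion-to-bound trial."""
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return t, x >= bound

encoding = rng.uniform(0.2, 1.0, 500)         # hypothetical neural encoding strength
base_drift = 0.8
results = [simulate_ddm(base_drift * e) for e in encoding]
rts = np.array([r[0] for r in results])
acc = np.array([r[1] for r in results])

strong = encoding > 0.7
print(f"strong encoding: RT {rts[strong].mean():.2f}s, acc {acc[strong].mean():.2f}")
print(f"weak encoding:   RT {rts[~strong].mean():.2f}s, acc {acc[~strong].mean():.2f}")
```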
Collapse
Affiliation(s)
- Ioannis Delis
- School of Biomedical Sciences, University of Leeds, Leeds, LS2 9JT, United Kingdom
| | - Robin A A Ince
- School of Psychology and Neuroscience, University of Glasgow, G12 8QQ, United Kingdom
| | - Paul Sajda
- Department of Biomedical Engineering, Columbia University, New York, New York 10027
- Data Science Institute, Columbia University, New York, New York 10027
| | - Qi Wang
- Department of Biomedical Engineering, Columbia University, New York, New York 10027
| |
Collapse
|
30
|
Alternative female and male developmental trajectories in the dynamic balance of human visual perception. Sci Rep 2022; 12:1674. [PMID: 35102227 PMCID: PMC8803928 DOI: 10.1038/s41598-022-05620-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Accepted: 12/17/2021] [Indexed: 12/21/2022] Open
Abstract
The numerous multistable phenomena in vision, hearing and touch attest that the inner workings of perception are prone to instability. We investigated a visual example (binocular rivalry) with an accurate no-report paradigm, and uncovered developmental and maturational lifespan trajectories that were specific for age and sex. To interpret these trajectories, we hypothesized that conflicting objectives of visual perception, such as stability of appearance, sensitivity to visual detail, and exploration of fundamental alternatives, change in relative importance over the lifespan. Computational modelling of our empirical results allowed us to estimate this putative development of stability, sensitivity, and exploration over the lifespan. Our results confirmed prior findings of developmental psychology and appear to quantify important aspects of neurocognitive phenotype. Additionally, we report atypical function of binocular rivalry in autism spectrum disorder and borderline personality disorder. Our computational approach offers new ways of quantifying neurocognitive phenotypes both in development and in dysfunction.
Collapse
|
31
|
Vicencio-Jimenez S, Bucci-Mansilla G, Bowen M, Terreros G, Morales-Zepeda D, Robles L, Délano PH. The Strength of the Medial Olivocochlear Reflex in Chinchillas Is Associated With Delayed Response Performance in a Visual Discrimination Task With Vocalizations as Distractors. Front Neurosci 2021; 15:759219. [PMID: 34955720 PMCID: PMC8695804 DOI: 10.3389/fnins.2021.759219] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2021] [Accepted: 11/16/2021] [Indexed: 11/17/2022] Open
Abstract
The ability to perceive the world is not merely a passive process but depends on sensorimotor loops and interactions that guide and actively bias our sensory systems. Understanding which cognitive processes participate in this active sensing, and how, is still an open question. In this context, the auditory system presents itself as an attractive model for this purpose, as it features an efferent control network that projects from the cortex to subcortical nuclei and even to the sensory epithelium itself. This efferent system can regulate cochlear amplifier sensitivity through medial olivocochlear (MOC) neurons located in the brainstem. The ability to suppress irrelevant sounds during selective attention to visual stimuli is one of the functions that have been attributed to this system. MOC neurons are also directly activated by sounds through a brainstem reflex circuit, a response linked to the ability to suppress auditory stimuli during visual attention. Human studies have suggested that MOC neurons are also recruited by other cognitive functions, such as working memory and predictability. The aim of this research was to explore whether cognitive processes related to delayed responses in a visual discrimination task were associated with MOC function. In this behavioral condition, chinchillas held their responses for more than 2.5 s after visual stimulus offset, with and without auditory distractors, and the accuracy of these responses was correlated with the magnitude of the MOC reflex. We found that the animals' performance decreased in the presence of auditory distractors and that the measured MOC reflex could predict this performance. Individual MOC reflex strength correlated with behavioral performance during delayed responses with auditory distractors, but not without them. These results in chinchillas suggest that MOC neurons are also recruited by other cognitive functions, such as working memory.
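The core individual-differences analysis reduces to correlating MOC reflex strength with behavioral accuracy across animals; a minimal sketch with simulated data and a permutation test is given below (the numbers are invented, not the chinchilla measurements).

```python
# Sketch of the individual-differences analysis: correlate MOC reflex strength
# with behavioral accuracy across animals and assess significance by
# permutation. All data here are simulated for illustration.
import numpy as np

rng = np.random.default_rng(4)
n = 12                                           # hypothetical number of animals
moc_strength = rng.normal(1.0, 0.3, n)
accuracy = 0.6 + 0.2 * (moc_strength - 1.0) + rng.normal(0, 0.05, n)

def pearson_r(x, y):
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

r_obs = pearson_r(moc_strength, accuracy)
null = np.array([pearson_r(moc_strength, rng.permutation(accuracy))
                 for _ in range(10_000)])        # shuffle to build the null
p = np.mean(np.abs(null) >= abs(r_obs))
print(f"r = {r_obs:.2f}, permutation p = {p:.4f}")
```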
Collapse
Affiliation(s)
- Sergio Vicencio-Jimenez
- Departamento de Otorrinolaringología, Hospital Clínico de la Universidad de Chile, Santiago, Chile; Department of Otolaryngology-Head and Neck Surgery, The Center for Hearing and Balance, Johns Hopkins University School of Medicine, Baltimore, MD, United States; Departamento de Neurociencia, Facultad de Medicina, Universidad de Chile, Santiago, Chile
| | | | - Macarena Bowen
- Departamento de Neurociencia, Facultad de Medicina, Universidad de Chile, Santiago, Chile; Departamento de Fonoaudiología, Facultad de Medicina, Universidad de Chile, Santiago, Chile
| | - Gonzalo Terreros
- Instituto de Ciencias de la Salud, Universidad de O'Higgins, Rancagua, Chile
| | - David Morales-Zepeda
- Departamento de Neurociencia, Facultad de Medicina, Universidad de Chile, Santiago, Chile
| | - Luis Robles
- Departamento de Neurociencia, Facultad de Medicina, Universidad de Chile, Santiago, Chile
| | - Paul H Délano
- Departamento de Otorrinolaringología, Hospital Clínico de la Universidad de Chile, Santiago, Chile; Departamento de Neurociencia, Facultad de Medicina, Universidad de Chile, Santiago, Chile; Facultad de Medicina, Biomedical Neuroscience Institute, Universidad de Chile, Santiago, Chile; Centro Avanzado de Ingeniería Eléctrica y Electrónica, AC3E, Universidad Técnica Federico Santa María, Valparaíso, Chile
| |
Collapse
|
32
|
Zylberberg A. Decision prioritization and causal reasoning in decision hierarchies. PLoS Comput Biol 2021; 17:e1009688. [PMID: 34971552 PMCID: PMC8719712 DOI: 10.1371/journal.pcbi.1009688] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/20/2021] [Accepted: 11/28/2021] [Indexed: 12/02/2022] Open
Abstract
From cooking a meal to finding a route to a destination, many real-life decisions can be decomposed into a hierarchy of sub-decisions. In a hierarchy, choosing which decision to think about requires planning over a potentially vast space of possible decision sequences. To gain insight into how people decide what to decide on, we studied a novel task that combines perceptual decision making, active sensing, and hierarchical and counterfactual reasoning. Human participants had to find a target hidden at the lowest level of a decision tree. They could solicit information from the different nodes of the decision tree to gather noisy evidence about the target's location. Feedback was given only after errors at the leaf nodes and provided ambiguous evidence about the cause of the error. Despite the complexity of the task (with 10^7 latent states), participants were able to plan efficiently. A computational model of this process identified a small number of heuristics of low computational complexity that accounted for human behavior. These heuristics include making categorical decisions at the branching points of the decision tree rather than carrying forward entire probability distributions, discarding sensory evidence deemed unreliable to make a choice, and using choice confidence to infer the cause of the error after an initial plan failed. Plans based on probabilistic inference or myopic sampling norms could not capture participants' behavior. Our results show that it is possible to identify hallmarks of heuristic planning with sensing in human behavior and that the use of tasks of intermediate complexity helps identify the rules underlying the human ability to reason over decision hierarchies.
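One of the identified heuristics, committing categorically at each branching point rather than propagating a full posterior, can be illustrated with a toy binary tree; the depth, cue noise, and trial count below are invented.

```python
# Toy illustration of one reported heuristic: at each branching point, commit
# categorically to the more likely branch instead of propagating the full
# posterior down the tree. Tree depth, cue noise, and trial count are invented.
import numpy as np

rng = np.random.default_rng(5)
depth, trials, noise = 3, 5_000, 1.0

def run_trial():
    path = rng.integers(0, 2, depth)         # true branch at each level
    # Noisy cue at each node: positive favors branch 1, negative favors branch 0.
    cues = (2 * path - 1) + rng.normal(0, noise, depth)
    choice = (cues > 0).astype(int)          # categorical commitment per node
    return np.array_equal(choice, path)      # reached the correct leaf?

accuracy = np.mean([run_trial() for _ in range(trials)])
print(f"leaf accuracy with categorical commitments: {accuracy:.2f}")
```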
Collapse
Affiliation(s)
- Ariel Zylberberg
- Zuckerman Mind Brain Behavior Institute, Columbia University, New York, New York, United States of America
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, New York, United States of America
| |
Collapse
|
33
|
Milne AO, Orton L, Black CH, Jones GC, Sullivan M, Grant RA. California sea lions employ task-specific strategies for active touch sensing. J Exp Biol 2021; 224:273347. [PMID: 34608932 PMCID: PMC8627572 DOI: 10.1242/jeb.243085] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Accepted: 09/26/2021] [Indexed: 12/03/2022]
Abstract
Active sensing is the process of moving sensors to extract task-specific information. Whisker touch is often referred to as an active sensory system, as whiskers are moved with purposeful control. Even though whisker movements are found in many species, it is unknown whether any animal can make task-specific movements with their whiskers. California sea lions (Zalophus californianus) make large, purposeful whisker movements and are capable of performing many whisker-related discrimination tasks. Therefore, California sea lions are an ideal species in which to explore the active nature of whisker touch sensing. Here, we show that California sea lions can make task-specific whisker movements. California sea lions move their whiskers with large amplitudes around object edges to judge size, make smaller, lateral stroking movements to judge texture, and make very small whisker movements during a visual task. These findings, combined with the ease of training mammals and of measuring whisker movements, make whiskers an ideal system for studying mammalian perception, cognition and motor control.
Collapse
Affiliation(s)
- Alyx O Milne
- Faculty of Science and Engineering, Manchester Metropolitan University, Chester Street, Manchester, M1 5GD, UK; Events Team, Blackpool Zoo, East Park Drive, Blackpool, FY3 8PP, UK
| | - Llwyd Orton
- Faculty of Science and Engineering, Manchester Metropolitan University, Chester Street, Manchester, M1 5GD, UK
| | | | - Gary C Jones
- Events Team, Blackpool Zoo, East Park Drive, Blackpool, FY3 8PP, UK
| | - Matthew Sullivan
- Faculty of Science and Engineering, Manchester Metropolitan University, Chester Street, Manchester, M1 5GD, UK
| | - Robyn A Grant
- Faculty of Science and Engineering, Manchester Metropolitan University, Chester Street, Manchester, M1 5GD, UK
| |
Collapse
|
34
|
Solopchuk O, Zénon A. Active sensing with artificial neural networks. Neural Netw 2021; 143:751-758. [PMID: 34482173 DOI: 10.1016/j.neunet.2021.08.007] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2021] [Revised: 07/11/2021] [Accepted: 08/06/2021] [Indexed: 10/20/2022]
Abstract
The fitness of behaving agents depends on their knowledge of the environment, which demands efficient exploration strategies. Active sensing formalizes exploration as reduction of uncertainty about the current state of the environment. Despite strong theoretical justifications, active sensing has had limited applicability due to the difficulty of estimating information gain. Here we address this issue by proposing a linear approximation to information gain and by implementing efficient gradient-based action selection within an artificial neural network setting. We compare our information gain estimation with the state of the art, and validate our model on an active sensing task based on the MNIST dataset. We also propose an approximation that exploits the amortized inference network and performs equally well in certain contexts.
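For Gaussian beliefs with Gaussian observation noise, information gain has a closed form, which makes the greedy active-sensing rule easy to sketch; the state dimensionality and variances below are invented, and this is not the paper's neural-network approximation.

```python
# Minimal sketch of information gain for a Gaussian belief: observing a state
# component with Gaussian noise reduces its entropy by
# 0.5 * ln(1 + prior_var / noise_var). Greedy active sensing then probes the
# component with the largest expected gain. Dimensions and variances invented.
import numpy as np

prior_var = np.array([0.5, 2.0, 0.1, 1.0])    # belief variance per state component
noise_var = np.array([0.2, 0.2, 0.2, 0.2])    # observation noise per probe action

info_gain = 0.5 * np.log(1.0 + prior_var / noise_var)   # exact, in nats
best = int(np.argmax(info_gain))
print("expected information gain per action:", np.round(info_gain, 3))
print(f"greedy active-sensing choice: probe component {best}")
```

The agent probes the most uncertain component, which is exactly the behavior a linear or amortized approximation of information gain tries to reproduce cheaply in high dimensions.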
Collapse
Affiliation(s)
- Oleg Solopchuk
- Université catholique de Louvain, Brussels, Belgium; University of Bordeaux, Bordeaux, France.
| | | |
Collapse
|
35
|
Cao R, Pastukhov A, Aleshin S, Mattia M, Braun J. Binocular rivalry reveals an out-of-equilibrium neural dynamics suited for decision-making. eLife 2021; 10:e61581. [PMID: 34369875 PMCID: PMC8352598 DOI: 10.7554/elife.61581] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/29/2020] [Accepted: 05/24/2021] [Indexed: 12/19/2022] Open
Abstract
In ambiguous or conflicting sensory situations, perception is often 'multistable' in that it perpetually changes at irregular intervals, shifting abruptly between distinct alternatives. The interval statistics of these alternations exhibits quasi-universal characteristics, suggesting a general mechanism. Using binocular rivalry, we show that many aspects of this perceptual dynamics are reproduced by a hierarchical model operating out of equilibrium. The constitutive elements of this model idealize the metastability of cortical networks. Independent elements accumulate visual evidence at one level, while groups of coupled elements compete for dominance at another level. As soon as one group dominates perception, feedback inhibition suppresses supporting evidence. Previously unreported features in the serial dependencies of perceptual alternations compellingly corroborate this mechanism. Moreover, the proposed out-of-equilibrium dynamics satisfies normative constraints of continuous decision-making. Thus, multistable perception may reflect decision-making in a volatile world: integrating evidence over space and time, choosing categorically between hypotheses, while concurrently evaluating alternatives.
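A minimal mutual-inhibition-plus-adaptation rate model, much simpler than the hierarchical out-of-equilibrium model proposed in the paper, already produces irregular alternations of the kind described; all parameters below are illustrative.

```python
# Minimal rivalry-style dynamics: two populations with mutual inhibition, slow
# adaptation, and noise alternate irregularly in dominance. This is the classic
# textbook mechanism, not the paper's hierarchical model; parameters are
# illustrative choices in the style of standard rate models.
import numpy as np

rng = np.random.default_rng(6)
dt, T = 0.001, 60.0
steps = int(T / dt)
I, beta, gamma = 0.85, 1.1, 1.0          # drive, mutual inhibition, adaptation gain
tau_r, tau_a, sigma = 0.01, 1.0, 0.1     # rate/adaptation time constants, noise

f = lambda x: 1.0 / (1.0 + np.exp(-x / 0.1))   # sigmoid rate function

r, a = np.array([0.6, 0.4]), np.zeros(2)
dominant = np.empty(steps, dtype=int)
for i in range(steps):
    drive = I - beta * r[::-1] - gamma * a     # cross-inhibition plus adaptation
    r += dt / tau_r * (-r + f(drive)) + sigma * np.sqrt(dt) * rng.normal(size=2)
    a += dt / tau_a * (-a + r)                 # slow adaptation tracks activity
    dominant[i] = int(r[1] > r[0])

durations = np.diff(np.flatnonzero(np.diff(dominant))) * dt
durations = durations[durations > 0.2]         # ignore flicker at switch points
if durations.size:
    print(f"{durations.size} dominance periods, mean {durations.mean():.2f}s, "
          f"CV {durations.std() / durations.mean():.2f}")
else:
    print("no alternations with these parameters")
```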
Collapse
Affiliation(s)
- Robin Cao
- Cognitive Biology, Center for Behavioral Brain Sciences, Magdeburg, Germany
- Gatsby Computational Neuroscience Unit, London, United Kingdom
- Istituto Superiore di Sanità, Rome, Italy
| | | | - Stepan Aleshin
- Cognitive Biology, Center for Behavioral Brain Sciences, Magdeburg, Germany
| | | | - Jochen Braun
- Cognitive Biology, Center for Behavioral Brain Sciences, Magdeburg, Germany
| |
Collapse
|
36
|
Rodgers CC, Nogueira R, Pil BC, Greeman EA, Park JM, Hong YK, Fusi S, Bruno RM. Sensorimotor strategies and neuronal representations for shape discrimination. Neuron 2021; 109:2308-2325.e10. [PMID: 34133944 PMCID: PMC8298290 DOI: 10.1016/j.neuron.2021.05.019] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2020] [Revised: 01/28/2021] [Accepted: 05/14/2021] [Indexed: 10/21/2022]
Abstract
Humans and other animals can identify objects by active touch, requiring the coordination of exploratory motion and tactile sensation. Both the motor strategies and neural representations employed could depend on the subject's goals. We developed a shape discrimination task that challenged head-fixed mice to discriminate concave from convex shapes. Behavioral decoding revealed that mice did this by comparing contacts across whiskers. In contrast, a separate group of mice performing a shape detection task simply summed up contacts over whiskers. We recorded populations of neurons in the barrel cortex, which processes whisker input, and found that individual neurons across the cortical layers encoded touch, whisker motion, and task-related signals. Sensory representations were task-specific: during shape discrimination, but not detection, neurons responded most to behaviorally relevant whiskers, overriding somatotopy. Thus, sensory cortex employs task-specific representations compatible with behaviorally relevant computations.
Collapse
Affiliation(s)
- Chris C Rodgers
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University, New York, NY 10027, USA.
| | - Ramon Nogueira
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA
| | - B Christina Pil
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University, New York, NY 10027, USA
| | - Esther A Greeman
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University, New York, NY 10027, USA
| | - Jung M Park
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University, New York, NY 10027, USA
| | - Y Kate Hong
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University, New York, NY 10027, USA
| | - Stefano Fusi
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University, New York, NY 10027, USA; Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA
| | - Randy M Bruno
- Department of Neuroscience, Columbia University, New York, NY 10027, USA; Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY 10027, USA; Kavli Institute for Brain Science, Columbia University, New York, NY 10027, USA.
| |
Collapse
|
37
|
Noel JP, Caziot B, Bruni S, Fitzgerald NE, Avila E, Angelaki DE. Supporting generalization in non-human primate behavior by tapping into structural knowledge: Examples from sensorimotor mappings, inference, and decision-making. Prog Neurobiol 2021; 201:101996. [PMID: 33454361 PMCID: PMC8096669 DOI: 10.1016/j.pneurobio.2021.101996] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2020] [Revised: 12/15/2020] [Accepted: 01/12/2021] [Indexed: 02/05/2023]
Abstract
The complex behaviors we ultimately wish to understand are far from those currently used in systems neuroscience laboratories. A salient difference is the closed loops between action and perception, prominently present in natural but not laboratory behaviors. The framework of reinforcement learning and control naturally wades across action and perception, and thus is poised to inform the neurosciences of tomorrow, not only as a data analysis and modeling framework, but also in guiding experimental design. We argue that this theoretical framework emphasizes active sensing, dynamical planning, and the leveraging of structural regularities as key operations for intelligent behavior within uncertain, time-varying environments. Similarly, we argue that we may study natural task strategies and their neural circuits without over-training animals when the tasks we use tap into our animals' structural knowledge. As proof-of-principle, we teach animals to navigate through a virtual environment (i.e., explore a well-defined and repetitive structure governed by the laws of physics) using a joystick. Once these animals have learned to 'drive', without further training they naturally (i) show zero- or one-shot learning of novel sensorimotor contingencies, (ii) infer the evolving path of dynamically changing latent variables, and (iii) make decisions consistent with maximizing reward rate. Such task designs allow for the study of flexible and generalizable, yet controlled, behaviors. In turn, they allow for the exploitation of pillars of intelligence (flexibility, prediction, and generalization), properties whose neural underpinnings have remained elusive.
Collapse
Affiliation(s)
- Jean-Paul Noel
- Center for Neural Science, New York University, New York, USA
| | - Baptiste Caziot
- Center for Neural Science, New York University, New York, USA
| | - Stefania Bruni
- Center for Neural Science, New York University, New York, USA
| | | | - Eric Avila
- Center for Neural Science, New York University, New York, USA
| | - Dora E Angelaki
- Center for Neural Science, New York University, New York, USA; Tandon School of Engineering, New York University, New York, USA.
| |
Collapse
|
38
|
Abstract
Perception is often described as probabilistic inference requiring an internal representation of uncertainty. However, it is unknown whether uncertainty is represented in a task-dependent manner, solely at the level of decisions, or in a fully Bayesian manner, across the entire perceptual pathway. To address this question, we first codify and evaluate the possible strategies the brain might use to represent uncertainty, and highlight the normative advantages of fully Bayesian representations. In such representations, uncertainty information is explicitly represented at all stages of processing, including early sensory areas, allowing for flexible and efficient computations in a wide variety of situations. Next, we critically review the neural and behavioral evidence on whether the representation of uncertainty in the brain agrees with fully Bayesian representations. We argue that sufficient behavioral evidence for fully Bayesian representations is lacking and suggest experimental approaches for demonstrating the existence of multivariate posterior distributions along the perceptual pathway.
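A minimal worked example of the representations at issue, a full posterior over a stimulus variable given one noisy measurement, is sketched below; the prior, noise level, and observed value are invented.

```python
# Minimal worked example of the representations at issue: a full posterior over
# a stimulus variable given one noisy measurement, rather than a point estimate.
# Prior, noise level, and the observed value are invented for illustration.
import numpy as np

stim = np.linspace(-5, 5, 1001)                  # candidate stimulus values
prior = np.exp(-0.5 * stim**2 / 2.0**2)          # Gaussian prior, sd = 2
prior /= prior.sum()

observation, noise_sd = 1.5, 1.0
likelihood = np.exp(-0.5 * (observation - stim) ** 2 / noise_sd**2)

posterior = prior * likelihood                   # Bayes' rule, then normalize
posterior /= posterior.sum()

mean = np.sum(stim * posterior)
sd = np.sqrt(np.sum((stim - mean) ** 2 * posterior))
print(f"posterior mean {mean:.2f}, posterior sd {sd:.2f}")   # uncertainty retained
```

A fully Bayesian representation would carry the whole `posterior` array (and its generalization to many dimensions) forward through processing, whereas a decision-level scheme would reduce it to the mean alone.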
Collapse
Affiliation(s)
- Ádám Koblinger
- Center for Cognitive Computation, Department of Cognitive Science, Central European University, Hungary
| | - József Fiser
- Center for Cognitive Computation, Department of Cognitive Science, Central European University, Hungary
| | - Máté Lengyel
- Center for Cognitive Computation, Department of Cognitive Science, Central European University, Hungary
- Computational and Biological Learning Lab, Department of Engineering, University of Cambridge, United Kingdom
| |
Collapse
|
39
|
Zakirov B, Charalambous G, Thuret R, Aspalter IM, Van-Vuuren K, Mead T, Harrington K, Regan ER, Herbert SP, Bentley K. Active perception during angiogenesis: filopodia speed up Notch selection of tip cells in silico and in vivo. Philos Trans R Soc Lond B Biol Sci 2021; 376:20190753. [PMID: 33550953 PMCID: PMC7934951 DOI: 10.1098/rstb.2019.0753] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/10/2020] [Indexed: 12/19/2022] Open
Abstract
How do cells make efficient collective decisions during tissue morphogenesis? Humans and other organisms use feedback between movement and sensing known as 'sensorimotor coordination' or 'active perception' to inform behaviour, but active perception has not before been investigated at a cellular level within organs. Here we provide the first proof-of-concept in silico/in vivo study demonstrating that filopodia (actin-rich, dynamic, finger-like cell membrane protrusions) play an unexpected role in speeding up collective endothelial decisions during the time-constrained process of 'tip cell' selection during blood vessel formation (angiogenesis). We first validate simulation predictions in vivo with live imaging of zebrafish intersegmental vessel growth. Further simulation studies then indicate the effect is due to the coupled positive feedback between movement and sensing on filopodia conferring a bistable switch-like property to Notch lateral inhibition, ensuring tip selection is a rapid and robust process. We then employ measures from computational neuroscience to assess whether filopodia function as a primitive (basal) form of active perception and find evidence in support. By viewing cell behaviour through the 'basal cognitive lens', we acquire a fresh perspective on the tip cell selection process, revealing a hidden, yet vital time-keeping role for filopodia. Finally, we discuss a myriad of new and exciting research directions stemming from our conceptual approach to interpreting cell behaviour. This article is part of the theme issue 'Basal cognition: multicellularity, neurons and the cognitive lens'.
Collapse
Affiliation(s)
- Bahti Zakirov
- Cellular Adaptive Behaviour Lab, Francis Crick Institute, London, NW1 1AT, UK
- Department of Informatics, King's College London, London, UK
| | - Georgios Charalambous
- Division of Developmental Biology and Medicine, University of Manchester, Manchester, UK
| | - Raphael Thuret
- Division of Developmental Biology and Medicine, University of Manchester, Manchester, UK
| | - Irene M. Aspalter
- Cellular Adaptive Behaviour Lab, Francis Crick Institute, London, NW1 1AT, UK
| | - Kelvin Van-Vuuren
- Cellular Adaptive Behaviour Lab, Francis Crick Institute, London, NW1 1AT, UK
| | - Thomas Mead
- Cellular Adaptive Behaviour Lab, Francis Crick Institute, London, NW1 1AT, UK
- Department of Informatics, King's College London, London, UK
| | - Kyle Harrington
- Virtual Technology and Design, University of Idaho, Moscow, ID, USA
- Center for Vascular Biology Research, Beth Israel Deaconess Medical Center, Department of Pathology, Harvard Medical School, Boston, MA, USA
| | - Erzsébet Ravasz Regan
- Center for Vascular Biology Research, Beth Israel Deaconess Medical Center, Department of Pathology, Harvard Medical School, Boston, MA, USA
- Department of Biology, The College of Wooster, Wooster, OH, USA
| | - Shane Paul Herbert
- Division of Developmental Biology and Medicine, University of Manchester, Manchester, UK
| | - Katie Bentley
- Cellular Adaptive Behaviour Lab, Francis Crick Institute, London, NW1 1AT, UK
- Department of Informatics, King's College London, London, UK
- Center for Vascular Biology Research, Beth Israel Deaconess Medical Center, Department of Pathology, Harvard Medical School, Boston, MA, USA
| |
Collapse
|
40
|
Abeles D, Yuval-Greenberg S. Active sensing and overt avoidance: Gaze shifts as a mechanism of predictive avoidance in vision. Cognition 2021; 211:104648. [PMID: 33714871 DOI: 10.1016/j.cognition.2021.104648] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2020] [Revised: 01/11/2021] [Accepted: 02/23/2021] [Indexed: 11/27/2022]
Abstract
Sensory organs are not only involved in passively transmitting sensory input, but are also involved in actively seeking it. Some sensory organs move dynamically to allow highly prioritized input to be detected by their most sensitive parts. Such 'active sensing' systems engage in pursuing relevant input, relying on attentional prioritizations. However, pursuing input may not always be advantageous. Task-irrelevant input may be distracting and interfere with task performance. We hypothesize that an efficient 'active sensing' mechanism should be able to not only pursue relevant input but also to predict irrelevant input and avoid it. Moreover, we hypothesize that this mechanism should be evident even when the task is non-visual and all visual information acts as a distractor. In this study, we demonstrate the existence of a predictive 'overt avoidance' mechanism in vision. In two experiments, participants were asked to perform a continuous mental-arithmetic task while occasionally being presented with task-irrelevant crowded displays limited to one quadrant of a screen. The locations of these visual stimuli were constant within a block but varied between blocks. Results show that gaze was consistently shifted away from the predicted location of distraction, even prior to its appearance, confirming the existence of a predictive 'overt avoidance' mechanism in vision. Based on these findings, we propose a conceptual model to explain how an 'active sensing' system, hardwired to explore, can overcome this drive when presented with distracting information. According to the model, distraction is handled through a dual mechanism of suppression and avoidance processes that are causally linked. This framework demonstrates how perception and motion work together to approach relevant information while avoiding irrelevant distraction.
Collapse
Affiliation(s)
- Dekel Abeles
- School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
| | - Shlomit Yuval-Greenberg
- School of Psychological Sciences and Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel.
| |
Collapse
|
41
|
Gao H, Ou Y, Zhang Z, Ni M, Zhou X, Liao L. The Relationship Between Family Support and e-Learning Engagement in College Students: The Mediating Role of e-Learning Normative Consciousness and Behaviors and Self-Efficacy. Front Psychol 2021; 12:573779. [PMID: 33613373 PMCID: PMC7890012 DOI: 10.3389/fpsyg.2021.573779] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2020] [Accepted: 01/06/2021] [Indexed: 01/26/2023] Open
Abstract
Due to the current COVID-19 pandemic, colleges and universities have implemented network teaching. E-learning engagement is a central concern of educators and parents because it directly affects students' academic performance. Hence, this study focuses on students' perceived family support and their e-learning engagement and analyzes the effects of e-learning normative consciousness and behaviors and self-efficacy on the relationship between family support and e-learning engagement in college students. Prior to this study, the relationship between these variables was unknown. Four structural equation models revealed the multiple mediating roles of e-learning normative consciousness and behaviors and self-efficacy in the relationship between family support and e-learning engagement. A total of 1,317 college students (mean age = 19.51 years; 52.2% freshmen) voluntarily participated in our study. The results showed that e-learning normative consciousness and behaviors and self-efficacy played significant mediating roles between students' perceived family support and e-learning engagement. Specifically, these two individual variables fully mediated the relationship between students' perceived family support and e-learning engagement. The multiple mediation model showed that family members can increase family support of their children by creating a household environment conducive to learning, displaying positive emotions, demonstrating the capability to assist their children, advocating the significance of learning normative consciousness and behaviors, and encouraging dedicated and efficient learning. The findings complement and extend the understanding of factors influencing student e-learning engagement.
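The mediation logic can be sketched with a single-mediator analysis (indirect effect a×b with a bootstrap confidence interval); this is a simplified stand-in for the four structural equation models in the study, and the data below are simulated.

```python
# Sketch of a single-mediator analysis (family support -> self-efficacy ->
# e-learning engagement): indirect effect a*b with a bootstrap CI. A simplified
# stand-in for the study's structural equation models; data are simulated.
import numpy as np

rng = np.random.default_rng(7)
n = 1317
support = rng.normal(0, 1, n)
efficacy = 0.5 * support + rng.normal(0, 1, n)                     # path a
engagement = 0.4 * efficacy + 0.1 * support + rng.normal(0, 1, n)  # paths b, c'

def slope(x, y):
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

def indirect(idx):
    a = slope(support[idx], efficacy[idx])
    # Path b: regression of engagement on efficacy, controlling for support.
    X = np.column_stack([np.ones(idx.size), efficacy[idx], support[idx]])
    beta = np.linalg.lstsq(X, engagement[idx], rcond=None)[0]
    return a * beta[1]

boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b: {indirect(np.arange(n)):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

A confidence interval excluding zero indicates a reliable indirect (mediated) effect, the criterion mediation analyses of this kind typically use.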
Collapse
Affiliation(s)
- Hong Gao
- School of Nursing, University of South China, Hengyang, China
| | - Yangli Ou
- School of Nursing, University of South China, Hengyang, China
| | - Zhiyuan Zhang
- Emergency Department, The Second Hospital University of South China, Hengyang, China
| | - Menghui Ni
- School of Nursing, University of South China, Hengyang, China
| | - Xinlian Zhou
- Emergency Department, The Second Hospital University of South China, Hengyang, China
| | - Li Liao
- School of Nursing, University of South China, Hengyang, China
| |
Collapse
|
42
|
A review of the neurobiomechanical processes underlying secure gripping in object manipulation. Neurosci Biobehav Rev 2021; 123:286-300. [PMID: 33497782 DOI: 10.1016/j.neubiorev.2021.01.007] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2020] [Revised: 01/05/2021] [Accepted: 01/11/2021] [Indexed: 11/24/2022]
Abstract
Humans display skilful control over the objects they manipulate, so much so that biomimetic systems have yet to emulate this remarkable behaviour. Two key control processes are assumed to facilitate such dexterity: predictive cognitive-motor processes that guide manipulation procedures by anticipating action outcomes; and reactive sensorimotor processes that provide important error-based information for movement adaptation. Notwithstanding increased interdisciplinary research interest in object manipulation behaviour, the complexity of the perceptual-sensorimotor-cognitive processes involved and the theoretical divide regarding the fundamentality of control mean that the essential mechanisms underlying manipulative action remain undetermined. In this paper, following a detailed discussion of the theoretical and empirical bases for understanding human dexterous movement, we emphasise the role of tactile-related sensory events in secure object handling, and consider the contribution of certain biophysical and biomechanical phenomena. We aim to provide an integrated account of the current state-of-art in skilled human-object interaction that bridges the literature in neuroscience, cognitive psychology, and biophysics. We also propose novel directions for future research exploration in this area.
Collapse
|
43
|
Kuperberg GR. Tea With Milk? A Hierarchical Generative Framework of Sequential Event Comprehension. Top Cogn Sci 2021; 13:256-298. [PMID: 33025701 PMCID: PMC7897219 DOI: 10.1111/tops.12518] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2019] [Revised: 07/11/2020] [Accepted: 07/11/2020] [Indexed: 10/23/2022]
Abstract
To make sense of the world around us, we must be able to segment a continual stream of sensory inputs into discrete events. In this review, I propose that in order to comprehend events, we engage hierarchical generative models that "reverse engineer" the intentions of other agents as they produce sequential action in real time. By generating probabilistic predictions for upcoming events, generative models ensure that we are able to keep up with the rapid pace at which perceptual inputs unfold. By tracking our certainty about other agents' goals and the magnitude of prediction errors at multiple temporal scales, generative models enable us to detect event boundaries by inferring when a goal has changed. Moreover, by adapting flexibly to the broader dynamics of the environment and our own comprehension goals, generative models allow us to optimally allocate limited resources. Finally, I argue that we use generative models not only to comprehend events but also to produce events (carry out goal-relevant sequential action) and to continually learn about new events from our surroundings. Taken together, this hierarchical generative framework provides new insights into how the human brain processes events so effortlessly while highlighting the fundamental links between event comprehension, production, and learning.
Collapse
Affiliation(s)
- Gina R. Kuperberg
- Department of Psychology and Center for Cognitive Science, Tufts University
- Department of Psychiatry and the Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School
| |
Collapse
|
44
|
Detecting Relative Amplitude of IR Signals with Active Sensors and Its Application to a Positioning System. Appl Sci (Basel) 2020. [DOI: 10.3390/app10186412] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Nowadays, there is increasing interest in smart systems, e.g., smart metering or smart spaces, for which active sensing plays an important role. In such systems, the sample or environment to be measured is irradiated with a signal (acoustic, infrared, radio-frequency, etc.) and some of its features are determined from the transmitted or reflected part of the original signal. In this work, infrared (IR) signals are emitted from different sources (four in this case) and received by a single quadrature angular diversity aperture (QADA) sensor. A code division multiple access (CDMA) technique is used to deal with the simultaneous transmission of all the signals and their separation by source at the receiver's processing stage. Furthermore, the use of correlation techniques allows the receiver to determine the amount of energy received from each transmitter by quantifying the main correlation peaks. This technique can be used in any system requiring active sensing; in the particular case of the IR positioning system presented here, the relative amplitudes of those peaks are used to determine the central incidence point of the light from each emitter on the QADA. The proposal tackles typical phenomena such as distortions caused by the transducer impulse response, the near-far effect in CDMA-based systems, multipath transmission, and correlation degradation from non-coherent demodulation. Finally, for each emitter, the angle of incidence on the QADA receiver is estimated, assuming that the receiver lies in a horizontal plane, although with arbitrary rotation about the vertical axis Z. With the estimated angles and the known positions of the LED emitters, the position (x, y, z) of the receiver is determined. The system was validated at different positions in a volume of 3 × 3 × 3.4 m³, obtaining average errors of 7.1, 5.4, and 47.3 cm in the X, Y, and Z axes, respectively.
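The CDMA separation step can be sketched as follows: each emitter transmits its own pseudo-random code, and the receiver estimates each relative amplitude from the correlation of the summed signal with that code; the codes, amplitudes, and noise level below are invented, and a real system must additionally handle the near-far effect and multipath, as the abstract notes.

```python
# Sketch of the CDMA idea: each emitter transmits its own pseudo-random code;
# the receiver correlates the summed signal with each code and reads relative
# amplitudes from the correlation peaks. Codes, amplitudes, and noise are
# invented stand-ins for real spreading codes and geometry-dependent gains.
import numpy as np

rng = np.random.default_rng(8)
L, n_emitters = 1023, 4
codes = rng.choice([-1.0, 1.0], size=(n_emitters, L))   # pseudo-random codes
true_amp = np.array([1.0, 0.6, 0.3, 0.8])               # set by geometry in reality

received = true_amp @ codes + rng.normal(0, 0.5, L)     # simultaneous reception + noise

est_amp = codes @ received / L                          # correlation peak per code
print("true amplitudes:     ", true_amp)
print("estimated amplitudes:", np.round(est_amp, 2))
```

Because the random codes are nearly orthogonal, cross-correlation terms shrink as 1/sqrt(L), so each estimated amplitude converges on its emitter's true contribution; those amplitudes are what the positioning stage converts into incidence points on the QADA.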
Collapse
|
45
|
Cañigueral R, Ward JA, Hamilton AFDC. Effects of being watched on eye gaze and facial displays of typical and autistic individuals during conversation. Autism 2020; 25:210-226. [PMID: 32854524 PMCID: PMC7812513 DOI: 10.1177/1362361320951691] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
Communication with others relies on coordinated exchanges of social signals, such as eye gaze and facial displays. However, this can only happen when partners are able to see each other. Although previous studies report that autistic individuals have difficulties in planning eye gaze and making facial displays during conversation, evidence from real-life dyadic tasks is scarce and mixed. Across two studies, here we investigate how eye gaze and facial displays of typical and high-functioning autistic individuals are modulated by the belief in being seen and potential to show true gaze direction. Participants were recorded with an eye-tracking and video-camera system while they completed a structured Q&A task with a confederate under three social contexts: pre-recorded video, video-call and face-to-face. Typical participants gazed less to the confederate and produced more facial displays when they were being watched and when they were speaking. Contrary to our hypotheses, eye gaze and facial motion patterns in autistic participants were overall similar to the typical group. This suggests that high-functioning autistic participants are able to use eye gaze and facial displays as social signals. Future studies will need to investigate to what extent this reflects spontaneous behaviour or the use of compensation strategies.
Collapse
Affiliation(s)
| | - Jamie A Ward
- University College London, UK
- Goldsmiths, University of London, UK
| | | |
Collapse
|
46
|
Zweifel NO, Hartmann MJZ. Defining "active sensing" through an analysis of sensing energetics: homeoactive and alloactive sensing. J Neurophysiol 2020; 124:40-48. [PMID: 32432502 DOI: 10.1152/jn.00608.2019] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The term "active sensing" has been defined in multiple ways. Most strictly, the term refers to sensing that uses self-generated energy to sample the environment (e.g., echolocation). More broadly, the definition includes all sensing that occurs when the sensor is moving (e.g., tactile stimuli obtained by an immobile versus moving fingertip) and, broader still, includes all sensing guided by attention or intent (e.g., purposeful eye movements). The present work offers a framework to help disambiguate aspects of the "active sensing" terminology and reveals properties of tactile sensing unique among all modalities. The framework begins with the well-described "sensorimotor loop," which expresses the perceptual process as a cycle involving four subsystems: environment, sensor, nervous system, and actuator. Using system dynamics, we examine how information flows through the loop. This "sensory-energetic loop" reveals two distinct sensing mechanisms that subdivide active sensing into homeoactive and alloactive sensing. In homeoactive sensing, the animal can change the state of the environment, while in alloactive sensing the animal can alter only the sensor's configurational parameters and thus the mapping between input and output. Given these new definitions, examination of the sensory-energetic loop helps identify two unique characteristics of tactile sensing: 1) in tactile systems, alloactive and homeoactive sensing merge to a mutually controlled sensing mechanism, and 2) tactile sensing may require fundamentally different predictions to anticipate reafferent input. We expect this framework may help resolve ambiguities in the active sensing community and form a basis for future theoretical and experimental work regarding alloactive and homeoactive sensing.
Collapse
Affiliation(s)
- Nadina O Zweifel
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois
| | - Mitra J Z Hartmann
- Department of Biomedical Engineering, Northwestern University, Evanston, Illinois
- Department of Mechanical Engineering, Northwestern University, Evanston, Illinois
| |
Collapse
|
47
|
Tschantz A, Seth AK, Buckley CL. Learning action-oriented models through active inference. PLoS Comput Biol 2020; 16:e1007805. [PMID: 32324758 PMCID: PMC7200021 DOI: 10.1371/journal.pcbi.1007805] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2019] [Revised: 05/05/2020] [Accepted: 03/19/2020] [Indexed: 11/29/2022] Open
Abstract
Converging theories suggest that organisms learn and exploit probabilistic models of their environment. However, it remains unclear how such models can be learned in practice. The open-ended complexity of natural environments means that it is generally infeasible for organisms to model their environment comprehensively. Alternatively, action-oriented models attempt to encode a parsimonious representation of adaptive agent-environment interactions. One approach to learning action-oriented models is to learn online in the presence of goal-directed behaviours. This constrains an agent to behaviourally relevant trajectories, reducing the diversity of the data a model needs to account for. Unfortunately, this approach can cause models to prematurely converge to sub-optimal solutions, through a process we refer to as a bad bootstrap. Here, we exploit the normative framework of active inference to show that efficient action-oriented models can be learned by balancing goal-oriented and epistemic (information-seeking) behaviours in a principled manner. We illustrate our approach using a simple agent-based model of bacterial chemotaxis. We first demonstrate that learning via goal-directed behaviour indeed constrains models to behaviourally relevant aspects of the environment, but that this approach is prone to sub-optimal convergence. We then demonstrate that epistemic behaviours facilitate the construction of accurate and comprehensive models, but that these models are not tailored to any specific behavioural niche and are therefore less efficient in their use of data. Finally, we show that active inference agents learn models that are parsimonious, tailored to action, and which avoid bad bootstraps and sub-optimal convergence. Critically, our results indicate that models learned through active inference can support adaptive behaviour in spite of, and indeed because of, their departure from veridical representations of the environment. Our approach provides a principled method for learning adaptive models from limited interactions with an environment, highlighting a route to sample-efficient learning algorithms.
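The balance the authors describe can be sketched in a few lines: score each candidate action by an instrumental term (how well predicted observations match prior preferences) plus an epistemic term (expected information gain about hidden states). The matrices and preferences below are hypothetical; this is a schematic of the general idea, not the paper's chemotaxis model.

```python
# Schematic of goal-directed vs. epistemic action scoring.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Beliefs over 3 hidden states, and per-action likelihoods p(o | s, a)
# over 2 possible observations (toy values).
q_s = np.array([0.5, 0.3, 0.2])
likelihoods = {
    "exploit": np.array([[0.9, 0.1], [0.9, 0.1], [0.9, 0.1]]),  # uninformative
    "explore": np.array([[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]),  # informative
}
log_pref = np.log(np.array([0.8, 0.2]))  # prior preference over observations

for action, A in likelihoods.items():
    q_o = q_s @ A                          # predicted observation distribution
    instrumental = q_o @ log_pref          # expected (log) preference
    # Epistemic value: expected reduction in uncertainty about states.
    posterior_entropies = []
    for o in range(A.shape[1]):
        post = q_s * A[:, o]
        post /= post.sum()
        posterior_entropies.append(entropy(post))
    epistemic = entropy(q_s) - q_o @ np.array(posterior_entropies)
    print(f"{action}: instrumental={instrumental:.3f}, "
          f"epistemic={epistemic:.3f}, value={instrumental + epistemic:.3f}")
```

The "exploit" action yields zero epistemic value (its observations are uninformative about the state), so an agent scoring actions this way will probe the environment when its model is uncertain, which is the behaviour the abstract attributes to active inference agents.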
Collapse
Affiliation(s)
- Alexander Tschantz
- Sackler Centre for Consciousness Science, University of Sussex, Falmer, Brighton, United Kingdom
- Department of Informatics, University of Sussex, Brighton, United Kingdom
| | - Anil K. Seth
- Sackler Centre for Consciousness Science, University of Sussex, Falmer, Brighton, United Kingdom
- Department of Informatics, University of Sussex, Brighton, United Kingdom
- Canadian Institute for Advanced Research, Azrieli Programme on Brain, Mind, and Consciousness, Toronto, Ontario, Canada
| | - Christopher L. Buckley
- Department of Informatics, University of Sussex, Brighton, United Kingdom
- Evolutionary and Adaptive Systems Research Group, University of Sussex, Falmer, United Kingdom
| |
Collapse
|
48
|
Bianchi B, Bengolea Monzón G, Ferrer L, Fernández Slezak D, Shalom DE, Kamienkowski JE. Human and computer estimations of Predictability of words in written language. Sci Rep 2020; 10:4396. [PMID: 32157161 PMCID: PMC7064512 DOI: 10.1038/s41598-020-61353-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2019] [Accepted: 02/24/2020] [Indexed: 01/14/2023] Open
Abstract
When we read printed text, we are continuously predicting upcoming words to integrate information and guide future eye movements. Thus, the Predictability of a given word has become one of the most important variables for explaining human behaviour and information processing during reading. In parallel, the Natural Language Processing (NLP) field has evolved by developing a wide variety of applications. Here, we show that, using different word-embedding techniques (such as Latent Semantic Analysis, Word2Vec, and FastText) and N-gram-based language models, we were able to estimate how humans predict words (cloze-task Predictability) and to better understand eye movements in long Spanish texts. Both types of models partially captured aspects of Predictability. On the one hand, our N-gram model performed well when used as a replacement for the cloze-task Predictability of the fixated word. On the other hand, word embeddings were useful for mimicking the Predictability of the following word. Our study joins efforts from the neurolinguistics and NLP fields to understand human information processing during reading and to potentially improve NLP algorithms.
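As a rough illustration of the N-gram side of such a pipeline, the sketch below estimates a cloze-like Predictability as a smoothed conditional probability from bigram counts. The toy corpus and add-alpha smoothing are assumptions for demonstration, not the study's corpus or model.

```python
# Bigram-based estimate of word Predictability P(word | previous word).
from collections import Counter

corpus = "el perro corre en el parque . el gato duerme en el sofa .".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_predictability(prev: str, word: str, alpha: float = 0.1) -> float:
    """Add-alpha smoothed estimate of P(word | prev)."""
    vocab = len(unigrams)
    return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * vocab)

# Analogue of cloze Predictability for a fixated word:
print(bigram_predictability("el", "perro"))   # relatively high
print(bigram_predictability("el", "duerme"))  # relatively low
```

The embedding-based estimates mentioned in the abstract would instead score a candidate word by its similarity (e.g., cosine similarity of Word2Vec or FastText vectors) to the preceding context.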
Collapse
Affiliation(s)
- Bruno Bianchi
- Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires - Consejo Nacional de Investigación en Ciencia y Técnica, Ciudad Autónoma de Buenos Aires, Argentina.
| | - Gastón Bengolea Monzón
- Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires - Consejo Nacional de Investigación en Ciencia y Técnica, Ciudad Autónoma de Buenos Aires, Argentina
| | - Luciana Ferrer
- Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires - Consejo Nacional de Investigación en Ciencia y Técnica, Ciudad Autónoma de Buenos Aires, Argentina
| | - Diego Fernández Slezak
- Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires - Consejo Nacional de Investigación en Ciencia y Técnica, Ciudad Autónoma de Buenos Aires, Argentina
- Departamento de Computación, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Autónoma de Buenos Aires, Argentina
| | - Diego E Shalom
- Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Autónoma de Buenos Aires, Argentina
| | - Juan E Kamienkowski
- Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires - Consejo Nacional de Investigación en Ciencia y Técnica, Ciudad Autónoma de Buenos Aires, Argentina
- Departamento de Física, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Autónoma de Buenos Aires, Argentina
| |
Collapse
|
49
|
A Comparison between Mouse, In Silico, and Robot Odor Plume Navigation Reveals Advantages of Mouse Odor Tracking. eNeuro 2020; 7:ENEURO.0212-19.2019. [PMID: 31924732 PMCID: PMC7004486 DOI: 10.1523/eneuro.0212-19.2019] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2019] [Revised: 11/04/2019] [Accepted: 12/19/2019] [Indexed: 11/26/2022] Open
Abstract
Localization of odors is essential to animal survival, and thus animals are adept at odor navigation. In natural conditions, animals encounter odor sources in which the odor is carried by air flow of varying complexity. We sought to identify potential minimalist strategies that can effectively be used for odor-based navigation and to assess their performance in an increasingly chaotic environment. To do so, we compared mouse, in silico model, and Arduino-based robot odor-localization behavior in a standardized odor landscape. Mouse performance remains robust in the presence of increased complexity, showing a shift in strategy towards faster movement with increased environmental complexity. An in silico model and an Arduino robot implementing simple binaral and temporal models of tropotaxis and klinotaxis, run in the same environment as the mice, are equally successful in locating the odor source within a plume of low complexity. However, the performance of these algorithms drops significantly when the chaotic nature of the plume is increased. Additionally, both algorithm-driven systems perform better with a strictly binaral model at a larger sensor separation distance, and better with a combined temporal and binaral model at a smaller sensor separation distance. This suggests that in an increasingly chaotic odor environment, mice rely on complex strategies that allow for robust odor localization and that cannot be reproduced by minimal algorithms whose performance is robust only at low levels of complexity. This highlights that an animal's ability to modulate its behavior with environmental complexity is beneficial for odor localization.
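The binaral strategy tested here can be sketched simply: a tropotaxis agent turns toward whichever of two spatially separated sensors reports a higher concentration (a temporal klinotaxis agent would instead compare successive samples from a single sensor). The smooth plume model, sensor separation, and step sizes below are invented for illustration and are far simpler than the study's chaotic plumes.

```python
# Toy binaral tropotaxis in an idealized, smooth odor plume.
import numpy as np

def concentration(pos, source=np.array([0.0, 0.0])):
    # Smooth, noiseless plume; real chaotic plumes are far more complex.
    return np.exp(-np.linalg.norm(pos - source))

def tropotaxis_step(pos, heading, sensor_sep=0.2, turn=0.3):
    # Left/right sensors offset perpendicular to the heading.
    normal = np.array([-np.sin(heading), np.cos(heading)])
    left = concentration(pos + 0.5 * sensor_sep * normal)
    right = concentration(pos - 0.5 * sensor_sep * normal)
    heading += turn * np.sign(left - right)   # turn toward the stronger side
    return pos + 0.1 * np.array([np.cos(heading), np.sin(heading)]), heading

pos, heading = np.array([3.0, 2.0]), 0.0
for _ in range(100):
    pos, heading = tropotaxis_step(pos, heading)
print("final distance to source:", np.linalg.norm(pos))
```

In a smooth plume like this one, the comparison between sensors reliably points uphill; in the chaotic plumes the study emphasizes, instantaneous left-right differences become unreliable, which is where such minimal algorithms break down.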
Collapse
|
50
|
Parr T, Friston KJ. Generalised free energy and active inference. Biol Cybern 2019; 113:495-513. [PMID: 31562544 PMCID: PMC6848054 DOI: 10.1007/s00422-019-00805-w] [Citation(s) in RCA: 75] [Impact Index Per Article: 12.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/04/2017] [Accepted: 09/13/2019] [Indexed: 05/30/2023]
Abstract
Active inference is an approach to understanding behaviour that rests upon the idea that the brain uses an internal generative model to predict incoming sensory data. The fit between this model and the data may be improved in two ways. The brain could optimise probabilistic beliefs about the variables in the generative model (i.e. perceptual inference). Alternatively, by acting on the world, it could change the sensory data such that they are more consistent with the model. This implies a common objective function (variational free energy) for action and perception that scores the fit between an internal model and the world. We compare two free energy functionals for active inference in the framework of Markov decision processes. One of these is a functional of beliefs (i.e. probability distributions) about states and policies, but a function of observations, while the second is a functional of beliefs about all three. In the former (expected free energy), prior beliefs about outcomes are not part of the generative model (because they are absorbed into the prior over policies). Conversely, in the second (generalised free energy), priors over outcomes become an explicit component of the generative model. When using the free energy functional that is blind to future observations, we equip the generative model with a prior over policies that ensures preferred outcomes (i.e. priors over outcomes) are realised. In other words, if we expect to encounter a particular kind of outcome, this lends plausibility to those policies for which this outcome is a consequence. In addition, this formulation ensures that selected policies minimise uncertainty about future outcomes by minimising the free energy expected in the future. When using the free energy functional that effectively treats future observations as hidden states, we show that policies are inferred or selected that realise prior preferences by minimising the free energy of future expectations. Interestingly, the form of posterior beliefs about policies (and the associated belief updating) turns out to be identical under both formulations, but the quantities used to compute them are not.
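For orientation, the expected free energy referred to above is commonly written in the following form (a hedged reconstruction in standard active inference notation, not an excerpt from the paper): minimising G(π) trades off realising preferred outcomes against reducing uncertainty about hidden states.

```latex
% Expected free energy of a policy \pi. Q(o_\tau, s_\tau \mid \pi) is the
% predictive distribution over outcomes and hidden states at time \tau,
% and P encodes the generative model, including prior preferences over
% outcomes. The second line holds under the standard approximation
% Q(s_\tau \mid o_\tau, \pi) \approx P(s_\tau \mid o_\tau, \pi).
\begin{aligned}
G(\pi) &= \sum_{\tau} \mathbb{E}_{Q(o_\tau, s_\tau \mid \pi)}
          \left[ \ln Q(s_\tau \mid \pi) - \ln P(o_\tau, s_\tau \mid \pi) \right] \\
       &\approx \underbrace{-\sum_{\tau} \mathbb{E}_{Q}\left[ \ln P(o_\tau) \right]}_{\text{expected cost}}
        \; \underbrace{-\sum_{\tau} \mathbb{E}_{Q}\left[ \ln Q(s_\tau \mid o_\tau, \pi)
          - \ln Q(s_\tau \mid \pi) \right]}_{\text{expected information gain}}
\end{aligned}
```

The generalised free energy discussed in the paper instead treats the future observations o_\tau as hidden states in their own right, which is what makes priors over outcomes an explicit component of the generative model.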
Collapse
Affiliation(s)
- Thomas Parr
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London, WC1N 3BG UK
| | - Karl J. Friston
- Wellcome Centre for Human Neuroimaging, Institute of Neurology, University College London, 12 Queen Square, London, WC1N 3BG UK
| |
Collapse
|