1. Dan C, Hulse BK, Kappagantula R, Jayaraman V, Hermundstad AM. A neural circuit architecture for rapid learning in goal-directed navigation. Neuron 2024; 112:2581-2599.e23. [PMID: 38795708] [DOI: 10.1016/j.neuron.2024.04.036]
Abstract
Anchoring goals to spatial representations enables flexible navigation but is challenging in novel environments when both representations must be acquired simultaneously. We propose a framework for how Drosophila uses internal representations of head direction (HD) to build goal representations upon selective thermal reinforcement. We show that flies use stochastically generated fixations and directed saccades to express heading preferences in an operant visual learning paradigm and that HD neurons are required to modify these preferences based on reinforcement. We used a symmetric visual setting to expose how flies' HD and goal representations co-evolve and how the reliability of these interacting representations impacts behavior. Finally, we describe how rapid learning of new goal headings may rest on a behavioral policy whose parameters are flexible but whose form is genetically encoded in circuit architecture. Such evolutionarily structured architectures, which enable rapidly adaptive behavior driven by internal representations, may be relevant across species.
Affiliation(s)
- Chuntao Dan, Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
- Brad K Hulse, Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
- Ramya Kappagantula, Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
- Vivek Jayaraman, Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
- Ann M Hermundstad, Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA 20147, USA
2. Jesusanmi OO, Amin AA, Domcsek N, Knight JC, Philippides A, Nowotny T, Graham P. Investigating visual navigation using spiking neural network models of the insect mushroom bodies. Front Physiol 2024; 15:1379977. [PMID: 38841209] [PMCID: PMC11151298] [DOI: 10.3389/fphys.2024.1379977]
Abstract
Ants are capable of learning long visually guided foraging routes with limited neural resources. The visual scene memory needed for this behaviour is mediated by the mushroom bodies, an insect brain region important for learning and memory. In a visual navigation context, the mushroom bodies are theorised to act as familiarity detectors, guiding ants to views that are similar to those previously learned when first travelling along a foraging route. Evidence from behavioural experiments, computational studies and brain lesions all supports this idea. Here we further investigate the role of mushroom bodies in visual navigation with a spiking neural network model learning complex natural scenes. By implementing these networks in GeNN, a library for building GPU-accelerated spiking neural networks, we were able to test these models offline on an image database representing navigation through a complex outdoor natural environment, and also online, embodied on a robot. The mushroom body model successfully learnt a large series of visual scenes (400 scenes corresponding to a 27 m route) and used these memories to choose accurate heading directions during route recapitulation in both complex environments. By analysing our model's Kenyon cell (KC) activity, we demonstrate that KC activity is directly related to the novelty of input images. A parameter search revealed a non-linear dependence between the optimal KC to visual projection neuron (VPN) connection sparsity and the length of time the model is presented with an image stimulus. The parameter search also showed that training the model on lower proportions of a route generally produced better accuracy when testing on the entire route. We embodied the mushroom body model and comparator visual navigation algorithms on a Quanser Q-car robot, with all processing running on an Nvidia Jetson TX2. On a 6.5 m route, the mushroom body model had a mean distance to the training route (error) of 0.144 ± 0.088 m over 5 trials, performance comparable to standard visual-only navigation algorithms. Thus, we have demonstrated that a biologically plausible model of the ant mushroom body can navigate complex environments both in simulation and in the real world. Understanding the neural basis of this behaviour will provide insight into how neural circuits are tuned to rapidly learn behaviourally relevant information from complex environments, and will provide inspiration for creating bio-mimetic computer and robotic systems that can learn rapidly with low energy requirements.
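The familiarity-detector idea described in this abstract can be sketched in a few lines. This is a hedged illustration, not the paper's GeNN spiking model: `familiarity` here is simply the negative distance to the nearest stored route view, and the rotational scan in `best_heading`, along with the panorama representation, are assumptions made for the example.

```python
import numpy as np

def familiarity(view, memory_bank):
    """Stand-in for the mushroom body's novelty signal: the more a view
    resembles its nearest stored route view, the higher the score."""
    return -min(np.sum((view - m) ** 2) for m in memory_bank)

def best_heading(panorama, memory_bank):
    """Rotational familiarity scan: rotate the current panorama column by
    column and return the azimuthal shift whose view is most familiar."""
    width = panorama.shape[1]
    scores = [familiarity(np.roll(panorama, -s, axis=1), memory_bank)
              for s in range(width)]
    return int(np.argmax(scores))  # column shift = heading offset
```

In this toy setup, an agent that has memorised route views can recover the heading it held when a view was stored by turning until familiarity peaks, which is the behavioural readout the model provides during route recapitulation.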
Affiliation(s)
- Amany Azevedo Amin, Sussex AI, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom
- Norbert Domcsek, Sussex AI, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom
- James C. Knight, Sussex AI, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom
- Andrew Philippides, Sussex AI, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom
- Thomas Nowotny, Sussex AI, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom
- Paul Graham, Sussex Neuroscience, School of Life Sciences, University of Sussex, Brighton, United Kingdom
3. Jürgensen AM, Sakagiannis P, Schleyer M, Gerber B, Nawrot MP. Prediction error drives associative learning and conditioned behavior in a spiking model of Drosophila larva. iScience 2024; 27:108640. [PMID: 38292165] [PMCID: PMC10824792] [DOI: 10.1016/j.isci.2023.108640]
Abstract
Predicting reinforcement from sensory cues is beneficial for goal-directed behavior. In insect brains, the underlying associations between cues and reinforcement, the latter encoded by dopaminergic neurons, are formed in the mushroom body. We propose a spiking model of the Drosophila larva mushroom body. It includes a feedback motif conveying learned reinforcement expectation to dopaminergic neurons, which can compute prediction error as the difference between expected and present reinforcement. We demonstrate that this can serve as a driving force in learning. When combined with synaptic homeostasis, our model accounts for theoretically derived features of acquisition and loss of associations that depend on the intensity of the reinforcement and its temporal proximity to the cue. From modeling olfactory learning over the time course of behavioral experiments and simulating the locomotion of individual larvae toward or away from odor sources in a virtual environment, we conclude that learning driven by prediction errors can explain larval behavior.
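The prediction-error motif above can be reduced to a minimal sketch, assuming a simple linear cue-weight model in the Rescorla-Wagner style rather than the paper's spiking circuit; the dictionary-of-weights representation and the learning rate are illustrative assumptions.

```python
def trial(weights, cues, reinforcement, lr=0.3):
    """One conditioning trial: the dopaminergic teaching signal is the
    prediction error, i.e. present minus expected reinforcement."""
    expected = sum(weights.get(c, 0.0) for c in cues)  # learned expectation
    delta = reinforcement - expected                   # prediction error
    for c in cues:                                     # error-driven update
        weights[c] = weights.get(c, 0.0) + lr * delta
    return delta
```

Because the update is driven by the error rather than by the raw reinforcement, learning slows as the expectation approaches the delivered reinforcement, giving the saturating acquisition curves such models account for.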
Affiliation(s)
- Anna-Maria Jürgensen, Computational Systems Neuroscience, Institute of Zoology, University of Cologne, 50674 Cologne, Germany
- Panagiotis Sakagiannis, Computational Systems Neuroscience, Institute of Zoology, University of Cologne, 50674 Cologne, Germany
- Michael Schleyer, Leibniz Institute for Neurobiology (LIN), Department of Genetics, 39118 Magdeburg, Germany; Institute for the Advancement of Higher Education, Faculty of Science, Hokkaido University, Sapporo 060-0808, Japan
- Bertram Gerber, Leibniz Institute for Neurobiology (LIN), Department of Genetics, 39118 Magdeburg, Germany; Institute for Biology, Otto-von-Guericke University, 39120 Magdeburg, Germany; Center for Brain and Behavioral Sciences (CBBS), Otto-von-Guericke University, 39118 Magdeburg, Germany
- Martin Paul Nawrot, Computational Systems Neuroscience, Institute of Zoology, University of Cologne, 50674 Cologne, Germany
4. Mangan M, Floreano D, Yasui K, Trimmer BA, Gravish N, Hauert S, Webb B, Manoonpong P, Szczecinski N. A virtuous cycle between invertebrate and robotics research: perspective on a decade of Living Machines research. Bioinspir Biomim 2023; 18:035005. [PMID: 36881919] [DOI: 10.1088/1748-3190/acc223]
Abstract
Many invertebrates are ideal model systems on which to base robot design principles due to their success in solving seemingly complex tasks across domains while possessing smaller nervous systems than vertebrates. Three areas are particularly relevant for robot designers: Research on flying and crawling invertebrates has inspired new materials and geometries from which robot bodies (their morphologies) can be constructed, enabling a new generation of softer, smaller, and lighter robots. Research on walking insects has informed the design of new systems for controlling robot bodies (their motion control) and adapting their motion to their environment without costly computational methods. And research combining wet and computational neuroscience with robotic validation methods has revealed the structure and function of core circuits in the insect brain responsible for the navigation and swarming capabilities (their mental faculties) displayed by foraging insects. The last decade has seen significant progress in the application of principles extracted from invertebrates, as well as the application of biomimetic robots to model and better understand how animals function. This Perspectives paper on the past 10 years of the Living Machines conference highlights some of the most exciting recent advances in each of these fields before summarising the lessons gleaned and the outlook for the next decade of invertebrate robotics research.
Affiliation(s)
- Michael Mangan, The University of Sheffield, Mappin St, Sheffield S10 2TN, United Kingdom
- Dario Floreano, Ecole Polytechnique Federale de Lausanne, Laboratory of Intelligent Systems, Station 9, Lausanne CH-1015, Switzerland
- Kotaro Yasui, Tohoku University, Frontier Research Institute for Interdisciplinary Sciences, 6-3 Aramaki aza Aoba, Aoba-ku, Sendai 980-8578, Japan
- Barry A Trimmer, Tufts University, Biology, 200 Boston Av, Boston, MA 02111, United States of America
- Nick Gravish, University of California San Diego, Mechanical and Aerospace Engineering, Building EBU II, La Jolla, CA 92093, United States of America
- Sabine Hauert, University of Bristol, Engineering Mathematics, Bristol BS8 1QU, United Kingdom
- Barbara Webb, University of Edinburgh, School of Informatics, 10 Crichton St, Edinburgh EH8 9AB, United Kingdom
- Poramate Manoonpong, College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, People's Republic of China; Bio-Inspired Robotics and Neural Engineering Laboratory, School of Information Science and Technology, Vidyasirimedhi Institute of Science and Technology, Wangchan Valley, Rayong 21210, Thailand
- Nicholas Szczecinski, West Virginia University, Mechanical and Aerospace Engineering, Morgantown, WV 26506-6201, United States of America
5. Visual navigation: properties, acquisition and use of views. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2022. [PMID: 36515743] [DOI: 10.1007/s00359-022-01599-2]
Abstract
Panoramic views offer information on heading direction and on location to visually navigating animals. This review covers the properties of panoramic views and the information they provide to navigating animals, irrespective of image representation. Heading direction can be retrieved by alignment matching between memorized and currently experienced views, and a gradient descent in image differences can lead back to the location at which a view was memorized (positional image matching). Central place foraging insects, such as ants, bees and wasps, conduct distinctly choreographed learning walks and learning flights upon first leaving their nest that are likely to be designed to systematically collect scene memories tagged with information provided by path integration on the direction of and the distance to the nest. Equally, traveling along routes, ants have been shown to engage in scanning movements, in particular when routes are unfamiliar, again suggesting a systematic process of acquiring and comparing views. The review discusses what we know and do not know about how view memories are represented in the brain of insects, how they are acquired and how they are subsequently used for traveling along routes and for pinpointing places.
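Both uses of views described above, alignment matching for heading and positional image matching for location, rest on comparing pixel differences between views. Below is a hedged sketch of positional image matching as greedy descent on an image-difference surface; the grid world, the `idf` helper and the toy views are assumptions for illustration, not a model taken from the review.

```python
import numpy as np

def idf(a, b):
    """Image difference function: RMS pixel difference between two views."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def home(world, start, goal_view, steps=50):
    """Greedy descent of the image-difference surface: repeatedly move to
    the neighbouring position whose view best matches the stored goal view."""
    pos = start
    for _ in range(steps):
        neighbours = [(pos[0] + dx, pos[1] + dy)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        pos = min((p for p in neighbours if p in world),
                  key=lambda p: idf(world[p], goal_view))
    return pos
```

The catchment area of this procedure is exactly the region over which the image-difference surface decreases smoothly toward the goal, which is why learning walks and flights that sample views around the nest matter for pinpointing it later.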
6. An artificial neural network explains how bats might use vision for navigation. Commun Biol 2022; 5:1325. [PMID: 36463311] [PMCID: PMC9719490] [DOI: 10.1038/s42003-022-04260-5]
Abstract
Animals navigate using various sensory information to guide their movement. Miniature tracking devices now allow documenting animals' routes with high accuracy. Despite this detailed description of animal movement, how animals translate sensory information to movement is poorly understood. Recent machine learning advances now allow addressing this question with unprecedented statistical learning tools. We harnessed this power to address visual-based navigation in fruit bats. We used machine learning and trained a convolutional neural network to navigate along a bat's route using visual information that would have been available to the real bat, which we collected using a drone. We show that a simple feed-forward network can learn to guide the agent towards a goal based on sensory input, and can generalize its learning both in time and in space. Our analysis suggests how animals could potentially use visual input for navigation and which features might be useful for this purpose.
7. Meier JM, Perdikis D, Blickensdörfer A, Stefanovski L, Liu Q, Maith O, Dinkelbach HÜ, Baladron J, Hamker FH, Ritter P. Virtual deep brain stimulation: Multiscale co-simulation of a spiking basal ganglia model and a whole-brain mean-field model with The Virtual Brain. Exp Neurol 2022; 354:114111. [DOI: 10.1016/j.expneurol.2022.114111]
8. Stankiewicz J, Webb B. Looking down: a model for visual route following in flying insects. Bioinspir Biomim 2021; 16:055007. [PMID: 34243169] [DOI: 10.1088/1748-3190/ac1307]
Abstract
Insect visual navigation is often assumed to depend on panoramic views of the horizon, and how these change as the animal moves. However, it is known that honey bees can visually navigate in flat, open meadows where visual information at the horizon is minimal, or would remain relatively constant across a wide range of positions. In this paper we hypothesise that these animals can navigate using view memories of the ground. We find that in natural scenes, low-resolution views from an aerial perspective of ostensibly self-similar terrain (e.g. within a field of grass) provide surprisingly robust descriptors of precise spatial locations. We propose a new visual route-following approach that makes use of transverse oscillations to centre a flight path along a sequence of learned views of the ground. We deploy this model on an autonomous quadcopter and demonstrate that it provides robust performance in the real world on journeys of up to 30 m. The success of our method is contingent on a robust view-matching process which can evaluate the familiarity of a view with a degree of translational invariance. We show that a previously developed wavelet-based bandpass orientated-filter approach fits these requirements well, exhibiting double the catchment area of standard approaches. Using a realistic simulation package, we evaluate the robustness of our approach to variations in heading direction and aircraft height between inbound and outbound journeys. We also demonstrate that our approach can operate using a vision system with a biologically relevant visual acuity and viewing direction.
Affiliation(s)
- J Stankiewicz, School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, United Kingdom
- B Webb, School of Informatics, University of Edinburgh, 10 Crichton Street, Edinburgh EH8 9AB, United Kingdom
9. Paffhausen BH, Petrasch J, Wild B, Meurers T, Schülke T, Polster J, Fuchs I, Drexler H, Kuriatnyk O, Menzel R, Landgraf T. A Flying Platform to Investigate Neuronal Correlates of Navigation in the Honey Bee (Apis mellifera). Front Behav Neurosci 2021; 15:690571. [PMID: 34354573] [PMCID: PMC8329708] [DOI: 10.3389/fnbeh.2021.690571]
Abstract
Navigating animals combine multiple perceptual faculties, learn during exploration, retrieve multifaceted memory contents, and exhibit goal-directedness as an expression of their current needs and motivations. Navigation in insects has been linked to a variety of underlying strategies such as path integration, view familiarity, visual beaconing, and goal-directed orientation with respect to previously learned ground structures. Most works, however, study navigation either from a field perspective, analyzing purely behavioral observations, or combine computational models with neurophysiological evidence obtained from lab experiments. The honey bee (Apis mellifera) has long been a popular model in the search for neural correlates of complex behaviors and exhibits extraordinary navigational capabilities. However, the neural basis for bee navigation has not yet been explored under natural conditions. Here, we propose a novel methodology to record from the brain of a copter-mounted honey bee. This way, the animal experiences natural multimodal sensory inputs in a natural environment that is familiar to her. We have developed a miniaturized electrophysiology recording system that can record spikes in the presence of time-varying electric noise from the copter's motors and rotors, and devised an experimental procedure to record from mushroom body extrinsic neurons (MBENs). We analyze the resulting electrophysiological data combined with a reconstruction of the animal's visual perception and find that the neural activity of MBENs is linked to sharp turns, possibly related to the relative motion of visual features. This method is a significant technological step toward recording brain activity of navigating honey bees under natural conditions. By providing all system specifications in an online repository, we hope to close a methodological gap and stimulate further research informing future computational models of insect navigation.
Affiliation(s)
- Benjamin H Paffhausen, Department of Biology, Chemistry and Pharmacy, Institute of Neurobiology, Free University of Berlin, Berlin, Germany
- Julian Petrasch, Dahlem Center for Machine Learning and Robotics, Department of Mathematics and Computer Science, Institute of Computer Science, Free University of Berlin, Berlin, Germany
- Benjamin Wild, Dahlem Center for Machine Learning and Robotics, Department of Mathematics and Computer Science, Institute of Computer Science, Free University of Berlin, Berlin, Germany
- Thierry Meurers, Dahlem Center for Machine Learning and Robotics, Department of Mathematics and Computer Science, Institute of Computer Science, Free University of Berlin, Berlin, Germany
- Tobias Schülke, Dahlem Center for Machine Learning and Robotics, Department of Mathematics and Computer Science, Institute of Computer Science, Free University of Berlin, Berlin, Germany
- Johannes Polster, Dahlem Center for Machine Learning and Robotics, Department of Mathematics and Computer Science, Institute of Computer Science, Free University of Berlin, Berlin, Germany
- Inga Fuchs, Department of Biology, Chemistry and Pharmacy, Institute of Neurobiology, Free University of Berlin, Berlin, Germany
- Helmut Drexler, Department of Biology, Chemistry and Pharmacy, Institute of Neurobiology, Free University of Berlin, Berlin, Germany
- Oleksandra Kuriatnyk, Department of Biology, Chemistry and Pharmacy, Institute of Neurobiology, Free University of Berlin, Berlin, Germany
- Randolf Menzel, Department of Biology, Chemistry and Pharmacy, Institute of Neurobiology, Free University of Berlin, Berlin, Germany
- Tim Landgraf, Dahlem Center for Machine Learning and Robotics, Department of Mathematics and Computer Science, Institute of Computer Science, Free University of Berlin, Berlin, Germany
10. Friedman DA, Tschantz A, Ramstead MJD, Friston K, Constant A. Active Inferants: An Active Inference Framework for Ant Colony Behavior. Front Behav Neurosci 2021; 15:647732. [PMID: 34248515] [PMCID: PMC8264549] [DOI: 10.3389/fnbeh.2021.647732]
Abstract
In this paper, we introduce an active inference model of ant colony foraging behavior, and implement the model in a series of in silico experiments. Active inference is a multiscale approach to behavioral modeling that is being applied across settings in theoretical biology and ethology. The ant colony is a classic model system for studying distributed decision-making and information sharing through stigmergy. Here we specify and simulate a Markov decision process (MDP) model for ant colony foraging. We investigate a well-known laboratory paradigm, the alternating T-maze, to illustrate the ability of the model to recover basic colony phenomena such as trail formation after food location discovery. We conclude by outlining how the active inference ant colony foraging behavioral model can be extended and situated within a nested multiscale framework and systems approaches to biology more generally.
Affiliation(s)
- Daniel Ari Friedman, Department of Entomology and Nematology, University of California, Davis, Davis, CA, United States; Active Inference Lab, University of California, Davis, Davis, CA, United States
- Alec Tschantz, Sackler Centre for Consciousness Science, University of Sussex, Brighton, United Kingdom; Department of Informatics, University of Sussex, Brighton, United Kingdom
- Maxwell J. D. Ramstead, Division of Social and Transcultural Psychiatry, Department of Psychiatry, McGill University, Montreal, QC, Canada; Culture, Mind, and Brain Program, McGill University, Montreal, QC, Canada; Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom; Spatial Web Foundation, Los Angeles, CA, United States
- Karl Friston, Wellcome Centre for Human Neuroimaging, University College London, London, United Kingdom
- Axel Constant, Theory and Method in Biosciences, The University of Sydney, Sydney, NSW, Australia
11. Rapp H, Nawrot MP. A spiking neural program for sensorimotor control during foraging in flying insects. Proc Natl Acad Sci U S A 2020; 117:28412-28421. [PMID: 33122439] [PMCID: PMC7668073] [DOI: 10.1073/pnas.2009821117]
Abstract
Foraging is a vital behavioral task for living organisms. Behavioral strategies and abstract mathematical models thereof have been described in detail for various species. To explore the link between underlying neural circuits and computational principles, we present how a biologically detailed neural circuit model of the insect mushroom body implements sensory processing, learning, and motor control. We focus on cast and surge strategies employed by flying insects when foraging within turbulent odor plumes. Using a spike-based plasticity rule, the model rapidly learns to associate individual olfactory sensory cues paired with food in a classical conditioning paradigm. We show that, without retraining, the system dynamically recalls memories to detect relevant cues in complex sensory scenes. Accumulation of this sensory evidence on short time scales generates cast-and-surge motor commands. Our generic systems approach predicts that population sparseness facilitates learning, while temporal sparseness is required for dynamic memory recall and precise behavioral control. Our work successfully combines biological computational principles with spike-based machine learning. It shows how knowledge transfer from static to arbitrary complex dynamic conditions can be achieved by foraging insects and may serve as inspiration for agent-based machine learning.
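The cast-and-surge logic described above, accumulating sensory evidence on short time scales into motor commands, can be caricatured with a leaky integrator; the binary cue detections, time constant and threshold below are illustrative assumptions, not the spiking model's mechanism.

```python
def cast_and_surge(detections, tau=3.0, threshold=0.5):
    """Leaky accumulation of odor-cue evidence: surge upwind while
    accumulated evidence is high, cast crosswind once it decays."""
    evidence, commands = 0.0, []
    for hit in detections:              # 1 = learned cue detected, 0 = not
        evidence += (hit - evidence) / tau
        commands.append("surge" if evidence > threshold else "cast")
    return commands
```

The leak makes the behaviour robust to isolated false detections in a turbulent plume while still switching quickly between surging and casting, which is the functional role the abstract assigns to short-time-scale evidence accumulation.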
Affiliation(s)
- Hannes Rapp, Computational Systems Neuroscience, Institute of Zoology, University of Cologne, Cologne 50674, Germany
- Martin Paul Nawrot, Computational Systems Neuroscience, Institute of Zoology, University of Cologne, Cologne 50674, Germany
12. Schwarz S, Mangan M, Webb B, Wystrach A. Route-following ants respond to alterations of the view sequence. J Exp Biol 2020; 223:jeb218701. [PMID: 32487668] [DOI: 10.1242/jeb.218701]
Abstract
Ants can navigate by comparing the currently perceived view with memorised views along a familiar foraging route. Models of route following suggest that the views are stored and recalled independently of the sequence in which they occur. Hence, the ant only needs to evaluate the instantaneous familiarity of the current view to obtain a heading direction. This study investigates whether ant homing behaviour is influenced by alterations in the sequence of views experienced along a familiar route, using the frequency of stop-and-scan behaviour as an indicator of the ant's navigational uncertainty. Ants were trained to forage between their nest and a feeder which they exited through a short channel before proceeding along the homeward route. In tests, ants were collected before entering the nest and released again in the channel, which was placed either in its original location or halfway along the route. Ants exiting the familiar channel in the middle of the route would thus experience familiar views in a novel sequence. Results show that ants exiting the channel scan significantly more when they find themselves in the middle of the route, compared with when emerging at the expected location near the feeder. This behaviour suggests that previously encountered views influence the recognition of current views, even when these views are highly familiar, revealing a sequence component to route memory. How information about view sequences could be implemented in the insect brain, as well as potential alternative explanations of our results, are discussed.
Affiliation(s)
- Sebastian Schwarz, Centre de Recherches sur la Cognition Animale, CNRS, Université Paul Sabatier, Toulouse, 31062 Cedex 09, France
- Michael Mangan, Sheffield Robotics, Department of Computer Science, University of Sheffield, Western Bank, Sheffield S10 2TN, UK
- Barbara Webb, School of Informatics, University of Edinburgh, Crichton Street, Edinburgh EH8 9AB, UK
- Antoine Wystrach, Centre de Recherches sur la Cognition Animale, CNRS, Université Paul Sabatier, Toulouse, 31062 Cedex 09, France
13. Sun X, Yue S, Mangan M. A decentralised neural model explaining optimal integration of navigational strategies in insects. eLife 2020; 9:e54026. [PMID: 32589143] [PMCID: PMC7365663] [DOI: 10.7554/eLife.54026]
Abstract
Insect navigation arises from the coordinated action of concurrent guidance systems, but the neural mechanisms through which each system functions, and through which their outputs are coordinated, remain unknown. We propose that insects require distinct strategies to retrace familiar routes (route following) and to return directly from novel to familiar terrain (homing), using different aspects of frequency-encoded views that are processed in different neural pathways. We also demonstrate how the central complex and mushroom body regions of the insect brain may work in tandem to coordinate the directional output of different guidance cues through a contextually switched ring attractor inspired by neural recordings. The resultant unified model of insect navigation reproduces behavioural data from a series of cue-conflict experiments in realistic animal environments and offers testable hypotheses of where and how insects process visual cues, utilise the different information that they provide, and coordinate their outputs to achieve the adaptive behaviours observed in the wild.
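The coordination step above, weighting the directional outputs of different guidance systems, can be summarised as a vector sum of cues, a common shorthand for what a ring attractor computes at steady state; the (angle, weight) cue format is an assumption of this sketch, not the model's neural implementation.

```python
import math

def integrate_cues(cues):
    """Combine directional cues, given as (angle_radians, weight) pairs,
    by summing weighted unit vectors and reading out the resultant angle."""
    x = sum(w * math.cos(a) for a, w in cues)
    y = sum(w * math.sin(a) for a, w in cues)
    return math.atan2(y, x)
```

Strongly weighted cues dominate the resulting heading, while conflicting cues of equal weight yield an intermediate direction, which is the qualitative signature of the cue-conflict experiments the model reproduces.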
Affiliation(s)
- Xuelong Sun, Computational Intelligence Lab & L-CAS, School of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Shigang Yue, Computational Intelligence Lab & L-CAS, School of Computer Science, University of Lincoln, Lincoln, United Kingdom; Machine Life and Intelligence Research Centre, Guangzhou University, Guangzhou, China
- Michael Mangan, Sheffield Robotics, Department of Computer Science, University of Sheffield, Sheffield, United Kingdom
Collapse
|
14
|
Betkiewicz R, Lindner B, Nawrot MP. Circuit and Cellular Mechanisms Facilitate the Transformation from Dense to Sparse Coding in the Insect Olfactory System. eNeuro 2020; 7:ENEURO.0305-18.2020. [PMID: 32132095 PMCID: PMC7294456 DOI: 10.1523/eneuro.0305-18.2020] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2018] [Revised: 10/31/2019] [Accepted: 02/19/2020] [Indexed: 11/21/2022] Open
Abstract
Transformations between sensory representations are shaped by neural mechanisms at the cellular and the circuit level. In the insect olfactory system, the encoding of odor information undergoes a transition from a dense spatiotemporal population code in the antennal lobe to a sparse code in the mushroom body. However, the exact mechanisms shaping odor representations and their role in sensory processing are incompletely identified. Here, we investigate the transformation from dense to sparse odor representations in a spiking model of the insect olfactory system, focusing on two ubiquitous neural mechanisms: spike frequency adaptation at the cellular level and lateral inhibition at the circuit level. We find that cellular adaptation is essential for sparse representations in time (temporal sparseness), while lateral inhibition regulates sparseness in the neuronal space (population sparseness). The interplay of both mechanisms shapes spatiotemporal odor representations, which are optimized for the discrimination of odors during stimulus onset and offset. Response pattern correlation across different stimuli showed a nonmonotonic dependence on the strength of lateral inhibition with an optimum at intermediate levels, which is explained by two counteracting mechanisms. In addition, we find that odor identity is stored on a prolonged timescale in the adaptation levels but not in the spiking activity of the principal cells of the mushroom body, providing a testable hypothesis for the location of the so-called odor trace.
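The two mechanisms this abstract identifies, spike frequency adaptation (temporal sparseness) and lateral inhibition (population sparseness), can be sketched in a toy rate model (a simplification assumed here, not the paper's spiking model; the parameters and the global inhibitory pool are illustrative):

```python
import numpy as np

def sparse_response(inp, steps=400, dt=0.05, g_adapt=1.0, g_inh=2.0, tau_a=5.0):
    """Rate-model sketch: odor-driven units with spike-frequency adaptation
    (slow activity-dependent self-inhibition) and global lateral inhibition.

    Returns the (steps, n) matrix of firing rates over time.
    """
    n = len(inp)
    r = np.zeros(n)              # firing rates
    a = np.zeros(n)              # adaptation variables
    rates = np.empty((steps, n))
    for t in range(steps):
        inh = g_inh * r.mean()                   # lateral inhibition (global pool)
        drive = inp - g_adapt * a - inh
        r += dt * (-r + np.maximum(drive, 0.0))  # leaky rate dynamics with ReLU
        a += dt / tau_a * (-a + r)               # adaptation tracks each unit's rate
        rates[t] = r
    return rates
```

In this sketch, adaptation makes each unit's response transient (a strong onset peak that decays toward a lower steady state), while lateral inhibition silences weakly driven units so that fewer units remain active than without it, mirroring the temporal/population division of labor the study reports.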
Collapse
Affiliation(s)
- Rinaldo Betkiewicz
- Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
- Computational Systems Neuroscience, Institute of Zoology, University of Cologne, 50674 Cologne, Germany
- Department of Physics, Humboldt University Berlin, 12489 Berlin, Germany
| | - Benjamin Lindner
- Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
- Department of Physics, Humboldt University Berlin, 12489 Berlin, Germany
| | - Martin P Nawrot
- Bernstein Center for Computational Neuroscience Berlin, 10115 Berlin, Germany
- Computational Systems Neuroscience, Institute of Zoology, University of Cologne, 50674 Cologne, Germany
| |
Collapse
|
15
|
Kheradmand B, Nieh JC. The Role of Landscapes and Landmarks in Bee Navigation: A Review. INSECTS 2019; 10:E342. [PMID: 31614833 PMCID: PMC6835465 DOI: 10.3390/insects10100342] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/31/2019] [Revised: 10/08/2019] [Accepted: 10/09/2019] [Indexed: 11/16/2022]
Abstract
The ability of animals to explore landmarks in their environment is essential to their fitness. Landmarks are widely recognized to play a key role in navigation by providing information in multiple sensory modalities. However, what is a landmark? We propose that animals use a hierarchy of information based upon its utility and salience when an animal is in a given motivational state. Focusing on honeybees, we suggest that foragers choose landmarks based upon their relative uniqueness, conspicuousness, stability, and context. We also propose that it is useful to distinguish between landmarks that provide sensory input that changes ("near") or does not change ("far") as the receiver uses these landmarks to navigate. However, we recognize that this distinction occurs on a continuum and is not a clear-cut dichotomy. We review the rich literature on landmarks, focusing on recent studies that have illuminated our understanding of the kinds of information that bees use, how they use it, potential mechanisms, and future research directions.
Collapse
Affiliation(s)
- Bahram Kheradmand
- Section of Ecology, Behavior, and Evolution, Division of Biological Sciences, UC San Diego, La Jolla, CA 92093, USA.
| | - James C Nieh
- Section of Ecology, Behavior, and Evolution, Division of Biological Sciences, UC San Diego, La Jolla, CA 92093, USA.
| |
Collapse
|
16
|
Menzel R, Tison L, Fischer-Nakai J, Cheeseman J, Balbuena MS, Chen X, Landgraf T, Petrasch J, Polster J, Greggers U. Guidance of Navigating Honeybees by Learned Elongated Ground Structures. Front Behav Neurosci 2019; 12:322. [PMID: 30697152 PMCID: PMC6341004 DOI: 10.3389/fnbeh.2018.00322] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2018] [Accepted: 12/07/2018] [Indexed: 02/03/2023] Open
Abstract
Elongated landscape features like forest edges, rivers, roads or boundaries of fields are particularly salient landmarks for navigating animals. Here, we ask how honeybees learn such structures and how they are used during their homing flights after being released at an unexpected location (catch-and-release paradigm). The experiments were performed in two landscapes that differed with respect to their overall structure: a rather feature-less landscape, and one rich in close and far distant landmarks. We tested three different forms of learning: learning during orientation flights, learning during training to a feeding site, and learning during homing flights after release at an unexpected site within the explored area. We found that bees use elongated ground structures, e.g., a field boundary separating two pastures close to the hive (Experiment 1), an irrigation channel (Experiment 2), a hedgerow along which the bees were trained (Experiment 3), a gravel road close to the hive and the feeder (Experiment 4), a path along an irrigation channel with its vegetation close to the feeder (Experiment 5) and a gravel road along which bees performed their homing flights (Experiment 6). Discrimination and generalization between the learned linear landmarks and similar ones in the test area depend on their object properties (irrigation channel, gravel road, hedgerow) and their compass orientation. We conclude that elongated ground structures are embedded into multiple landscape features indicating that memory of these linear structures is one component of bee navigation. Elongated structures interact and compete with other references. Object identification is an important part of this process. The objects are characterized not only by their appearance but also by their alignment in the compass. Their salience is highest if both components are close to what had been learned. High similarity in appearance can compensate for (partial) compass misalignment, and vice versa.
Collapse
Affiliation(s)
- Randolf Menzel
- Institute of Biology, Freie Universität Berlin, Berlin, Germany
| | - Lea Tison
- Institute of Biology, Freie Universität Berlin, Berlin, Germany
| | - Johannes Fischer-Nakai
- Fachbereich Biowissenschaften, Polytechnische Gesellschaft Frankfurt am Main, Institute für Bienenkunde, Goethe-Universität Frankfurt am Main, Frankfurt, Germany
| | - James Cheeseman
- Department of Anaesthesiology, Faculty of Medical and Health Science, The University of Auckland, Auckland, New Zealand
| | - Maria Sol Balbuena
- Laboratorio de Insectos Sociales, Departamento de Biodiversidad y Biología Experimental, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Instituto de Fisiología, Biología Molecular y Neurociencias (IFIBYNE), CONICET-Universidad de Buenos Aires, Buenos Aires, Argentina
| | - Xiuxian Chen
- Institute of Biology, Freie Universität Berlin, Berlin, Germany
| | - Tim Landgraf
- Dahlem Center of Machine Learning and Robotics, Institute for Informatics, Freie Universität Berlin, Berlin, Germany
| | - Julian Petrasch
- Dahlem Center of Machine Learning and Robotics, Institute for Informatics, Freie Universität Berlin, Berlin, Germany
| | - Johannes Polster
- Dahlem Center of Machine Learning and Robotics, Institute for Informatics, Freie Universität Berlin, Berlin, Germany
| | - Uwe Greggers
- Institute of Biology, Freie Universität Berlin, Berlin, Germany
| |
Collapse
|
17
|
Key B, Brown D. Designing Brains for Pain: Human to Mollusc. Front Physiol 2018; 9:1027. [PMID: 30127750 PMCID: PMC6088194 DOI: 10.3389/fphys.2018.01027] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2018] [Accepted: 07/11/2018] [Indexed: 12/16/2022] Open
Abstract
There is compelling evidence that the "what it feels like" subjective experience of sensory stimuli arises in the cerebral cortex in both humans as well as mammalian experimental animal models. Humans are alone in their ability to verbally communicate their experience of the external environment. In other species, sensory awareness is extrapolated on the basis of behavioral indicators. For instance, cephalopods have been claimed to be sentient on the basis of their complex behavior and anecdotal reports of human-like intelligence. We have interrogated the findings of avoidance learning behavioral paradigms and classical brain lesion studies and conclude that there is no evidence for cephalopods feeling pain. This analysis highlighted the questionable nature of anthropomorphic assumptions about sensory experience with increased phylogenetic distance from humans. We contend that understanding whether invertebrates such as molluscs are sentient should first begin with defining the computational processes and neural circuitries underpinning subjective awareness. Using fundamental design principles, we advance the notion that subjective awareness is dependent on observer neural networks (networks that in some sense introspect the neural processing generating neural representations of sensory stimuli). This introspective process allows the observer network to create an internal model that predicts the neural processing taking place in the network being surveyed. Predictions arising from the internal model form the basis of a rudimentary form of awareness. We develop an algorithm built on parallel observer networks that generates multiple levels of sensory awareness. A network of cortical regions in the human brain has the appropriate functional properties and neural interconnectivity that is consistent with the predicted circuitry of the algorithm generating pain awareness. By contrast, the cephalopod brain lacks the necessary neural circuitry to implement such an algorithm. In conclusion, we find no compelling behavioral, functional, or neuroanatomical evidence to indicate that cephalopods feel pain.
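The observer-network idea in this abstract (an internal model that predicts the processing of a surveyed network) can be caricatured in a few lines. Everything here is an illustrative assumption, not the authors' algorithm: the "observed" network is a fixed random nonlinear map, and the observer is just a least-squares linear model fit to predict its responses from the shared stimulus input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "observed" sensory network: a fixed random nonlinear map
# from 5-dimensional stimuli to 20-dimensional internal representations.
W_obs = 0.5 * rng.normal(size=(20, 5))

def observed(stimuli):
    return np.tanh(stimuli @ W_obs.T)

def fit_observer(stimuli):
    """Fit a linear internal model that predicts the observed network's
    responses from the same stimuli (ordinary least squares)."""
    coef, *_ = np.linalg.lstsq(stimuli, observed(stimuli), rcond=None)
    return coef

stimuli = rng.normal(size=(500, 5))
coef = fit_observer(stimuli)
# Low prediction error means the observer holds a workable model of the
# processing taking place in the network it surveys.
err = np.mean((stimuli @ coef - observed(stimuli)) ** 2)
```

In this caricature, the observer's residual prediction error is a rough proxy for how well it models the surveyed network; the paper's argument is that cephalopod brains lack circuitry positioned to perform any such predictive surveillance.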
Collapse
Affiliation(s)
- Brian Key
- School of Biomedical Sciences, University of Queensland, Brisbane, QLD, Australia
| | - Deborah Brown
- School of Historical and Philosophical Inquiry, University of Queensland, Brisbane, QLD, Australia
| |
Collapse
|
18
|
Lobecke A, Kern R, Egelhaaf M. Taking a goal-centred dynamic snapshot as a possibility for local homing in initially naïve bumblebees. J Exp Biol 2018; 221:jeb168674. [PMID: 29150448 DOI: 10.1242/jeb.168674] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2017] [Accepted: 11/13/2017] [Indexed: 11/20/2022]
Abstract
It is essential for central place foragers, such as bumblebees, to return reliably to their nest. Bumblebees leaving their inconspicuous nest hole for the first time need to gather and learn sufficient information about their surroundings to allow them to return to their nest at the end of their trip, instead of just flying away to forage. We therefore assume an intrinsic learning programme that manifests itself in the flight structure immediately after leaving the nest for the first time. In this study, we recorded and analysed the first outbound flight of individually marked naïve bumblebees in an indoor environment. We found characteristic loop-like features in the flight pattern that appear to be necessary for the bees to acquire environmental information and might be relevant for finding the nest hole after a foraging trip. Despite common features in their spatio-temporal organisation, first departure flights from the nest are characterised by a high level of variability in their loop-like flight structure across animals. Changes in turn direction of body orientation, for example, are distributed evenly across the entire area used for the flights without any systematic relationship to the nest location. By considering the common flight motifs and this variability, we arrived at the hypothesis that a kind of dynamic snapshot, centred at the nest location, is taken during the early phase of departure flights. The quality of this snapshot is hypothesised to be 'tested' during the later phases of the departure flights with respect to its usefulness for local homing.
Collapse
Affiliation(s)
- Anne Lobecke
- Department of Neurobiology and Cluster of Excellence 'Cognitive Interaction Technology' (CITEC), Bielefeld University, 33615 Bielefeld, Germany
| | - Roland Kern
- Department of Neurobiology and Cluster of Excellence 'Cognitive Interaction Technology' (CITEC), Bielefeld University, 33615 Bielefeld, Germany
| | - Martin Egelhaaf
- Department of Neurobiology and Cluster of Excellence 'Cognitive Interaction Technology' (CITEC), Bielefeld University, 33615 Bielefeld, Germany
| |
Collapse
|