2
Rusch C, Roth E, Vinauger C, Riffell JA. Honeybees in a virtual reality environment learn unique combinations of colour and shape. J Exp Biol 2017; 220:3478-3487. [PMID: 28751492] [DOI: 10.1242/jeb.164731] [Received: 06/15/2017] [Accepted: 07/21/2017]
Abstract
Honeybees are well-known models for the study of visual learning and memory. Whereas most of our knowledge of learned responses comes from experiments using free-flying bees, a tethered preparation would allow fine-scale control of the visual stimuli as well as accurate characterization of the learned responses. Unfortunately, conditioning procedures using visual stimuli in tethered bees have been limited in their efficacy. In this study, using a novel virtual reality environment and a differential training protocol in tethered walking bees, we show that the majority of honeybees learn visual stimuli, and need only six paired training trials to learn the stimulus. We found that bees readily learn visual stimuli that differ in both shape and colour. However, bees learn certain components over others (colour versus shape), and visual stimuli are learned in a non-additive manner with the interaction of specific colour and shape combinations being crucial for learned responses. To better understand which components of the visual stimuli the bees learned, the shape-colour association of the stimuli was reversed either during or after training. Results showed that maintaining the visual stimuli in training and testing phases was necessary to elicit visual learning, suggesting that bees learn multiple components of the visual stimuli. Together, our results demonstrate a protocol for visual learning in restrained bees that provides a powerful tool for understanding how components of a visual stimulus elicit learned responses as well as elucidating how visual information is processed in the honeybee brain.
Affiliation(s)
- Claire Rusch: Department of Biology, University of Washington, Seattle, WA 98195, USA; University of Washington Institute for Neuroengineering, Seattle, WA 98195, USA
- Eatai Roth: Department of Biology, University of Washington, Seattle, WA 98195, USA; University of Washington Institute for Neuroengineering, Seattle, WA 98195, USA
- Clément Vinauger: Department of Biology, University of Washington, Seattle, WA 98195, USA
- Jeffrey A Riffell: Department of Biology, University of Washington, Seattle, WA 98195, USA; University of Washington Institute for Neuroengineering, Seattle, WA 98195, USA
3
Avarguès-Weber A, Mota T. Advances and limitations of visual conditioning protocols in harnessed bees. J Physiol Paris 2016; 110:107-118. [PMID: 27998810] [DOI: 10.1016/j.jphysparis.2016.12.006] [Received: 06/28/2016] [Revised: 10/06/2016] [Accepted: 12/14/2016]
Abstract
Bees are excellent invertebrate models for studying visual learning and memory mechanisms, because of their sophisticated visual system and impressive cognitive capacities associated with a relatively simple brain. Visual learning in free-flying bees has traditionally been studied using an operant conditioning paradigm. This well-established protocol, however, can hardly be combined with invasive procedures for studying the neurobiological basis of visual learning. Different efforts have been made to develop protocols in which harnessed honey bees could associate visual cues with reinforcement, though learning performances remain poorer than those obtained with free-flying animals. Especially in the last decade, the intention of improving the visual learning performances of harnessed bees has led many authors to adopt distinct visual conditioning protocols, varying parameters such as the harnessing method, the nature and duration of visual stimulation, the number of trials and the inter-trial intervals. As a result, the literature provides data that are hardly comparable and sometimes contradictory. In the present review, we provide an extensive analysis of the literature available on visual conditioning of harnessed bees, with special emphasis on the comparison of the diverse conditioning parameters adopted by different authors. Together with this comparative overview, we discuss how these diverse conditioning parameters could modulate the visual learning performances of harnessed bees.
Affiliation(s)
- Aurore Avarguès-Weber: Centre de Recherches sur la Cognition Animale, Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, 118 Route de Narbonne, 31062 Toulouse Cedex 9, France
- Theo Mota: Departamento de Fisiologia e Biofísica, Instituto de Ciências Biológicas - ICB, Universidade Federal de Minas Gerais - UFMG, Av. Antônio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais, Brazil
4
Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes? PLoS One 2016; 11:e0147106. [PMID: 26886006] [PMCID: PMC4757030] [DOI: 10.1371/journal.pone.0147106] [Received: 07/23/2015] [Accepted: 12/29/2015]
Abstract
Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, whereby they employ an active strategy to uncover the depth profiles. We trained individual, free-flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees' flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable to extract depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, while additional rotation provided robust depth information based on the direction of the displacements. Thus, the bees' flight maneuvers may reflect an optimized visuo-motor strategy to extract depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D-object recognition without stereo vision, and could be employed by other flying insects or mobile robots.
5
Abstract
Our memory is often surprisingly inaccurate, with errors ranging from misremembering minor details of events to generating illusory memories of entire episodes. The pervasiveness of such false memories generates a puzzle: in the face of selection pressure for accuracy of memory, how could such systematic failures have persisted over evolutionary time? It is possible that memory errors are an inevitable by-product of our adaptive memories, and that semantic false memories are specifically connected to our ability to learn rules and concepts and to classify objects by category membership. Here we test this possibility using a standard experimental false memory paradigm and inter-individual variation in verbal categorisation ability. Indeed, the two error scores are significantly negatively correlated: individuals scoring fewer errors on the categorisation test are more susceptible to false memory intrusions in a free recall test. A similar trend, though not significant, was observed between individual categorisation ability and false memory susceptibility in a word recognition task. Our results therefore indicate that false memories might, to some extent, be a by-product of our ability to learn rules, categories and concepts.
Affiliation(s)
- Kathryn Hunt: Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, Mile End Road, London, E1 4NS, UK
- Lars Chittka: Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, Mile End Road, London, E1 4NS, UK
6
Liedtke J, Schneider JM. Association and reversal learning abilities in a jumping spider. Behav Processes 2014; 103:192-198. [DOI: 10.1016/j.beproc.2013.12.015] [Received: 06/20/2013] [Revised: 12/18/2013] [Accepted: 12/22/2013]
7
Chittka L, Muller H. Learning, specialization, efficiency and task allocation in social insects. Commun Integr Biol 2011; 2:151-154. [PMID: 19513269] [DOI: 10.4161/cib.7600] [Received: 12/09/2008] [Accepted: 12/09/2008]
Abstract
One of the most spectacular features of social insect colonies is their division of labor. Although individuals are often totipotent in terms of the labor they might perform, they may persistently work as scouts, fighters, nurses, foragers, undertakers or cleaners, with a repetitiveness that resembles an assembly-line worker in a factory. Perhaps because of this apparent analogy, researchers have often assumed a priori that such labor division must be efficient, but empirical proof is scarce. New work on Temnothorax ants shows that there might be no link between an individual's propensity to perform a task and its efficiency at that task, nor are task specialists more efficient than generalists. Here we argue that learning psychology might provide the missing link between social insect task specialization and efficiency: just as in human societies, efficiency at a job specialty is only partially a result of "talent", or an innate tendency to engage in a job; it is much more a result of perfecting skills with experience, and of the extent to which experience can be carried over from one task to the next (transfer), or whether experience at one task might actually impair performance at another (interference). Indeed, there is extensive circumstantial evidence that learning is involved in almost any task performed by social insect workers, including food-type recognition and handling techniques, but also such seemingly basic tasks as nest building and climate control. New findings on Cerapachys ants indicate that early experience of success at a task might, to some extent, determine the "profession" an insect worker chooses in later life.
Affiliation(s)
- Lars Chittka: Research Centre for Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, London, UK
8
Abstract
Attempts to relate brain size to behaviour and cognition have rarely integrated information from insects with that from vertebrates. Many insects, however, demonstrate that highly differentiated motor repertoires, extensive social structures and cognition are possible with very small brains, emphasising that we need to understand the neural circuits, not just the size of brain regions, which underlie these feats. Neural network analyses show that cognitive features found in insects, such as numerosity, attention and categorisation-like processes, may require only very limited neuron numbers. Thus, brain size may have less of a relationship with behavioural repertoire and cognitive capacity than generally assumed, prompting the question of what large brains are for. Larger brains are, at least partly, a consequence of larger neurons that are necessary in large animals due to basic biophysical constraints. They also contain greater replication of neuronal circuits, adding precision to sensory processes, detail to perception, more parallel processing and enlarged storage capacity. Yet, these advantages are unlikely to produce the qualitative shifts in behaviour that are often assumed to accompany increased brain size. Instead, modularity and interconnectivity may be more important.
9
Whitney HM, Kolle M, Andrew P, Chittka L, Steiner U, Glover BJ. Response to Comment on “Floral Iridescence, Produced by Diffractive Optics, Acts As a Cue for Animal Pollinators”. Science 2009. [DOI: 10.1126/science.1173503]
Affiliation(s)
- Heather M. Whitney: Department of Plant Sciences, University of Cambridge, Downing Street, Cambridge CB2 3EA, UK
- Mathias Kolle: Department of Physics, Cavendish Laboratory, University of Cambridge, J. J. Thomson Avenue, Cambridge CB3 0HE, UK; Nanoscience Centre, University of Cambridge, 11 J. J. Thomson Avenue, Cambridge CB3 0FF, UK
- Piers Andrew: Nanoscience Centre, University of Cambridge, 11 J. J. Thomson Avenue, Cambridge CB3 0FF, UK
- Lars Chittka: Queen Mary University of London, London E1 4NS, UK
- Ullrich Steiner: Department of Physics, Cavendish Laboratory, University of Cambridge, J. J. Thomson Avenue, Cambridge CB3 0HE, UK; Nanoscience Centre, University of Cambridge, 11 J. J. Thomson Avenue, Cambridge CB3 0FF, UK
- Beverley J. Glover: Department of Plant Sciences, University of Cambridge, Downing Street, Cambridge CB2 3EA, UK
10
Abstract
Vision looms large in neuroscience: it is the subject of a gigantic literature and four Nobel prizes. But there is a growing realization that there are problems with the textbook explanation of how mammalian vision works. Here we will summarize the evidence behind this disquiet; in effect, we shall present a portrait of a field that is 'stuck'. Our initial focus, because it is our area of expertise, is on evidence that the early steps of mammalian vision are more diverse and more interesting than is usually imagined, so that our understanding of the later stages is in trouble right from the start. But we will also summarize problems, raised by others, with the later stages themselves.
Affiliation(s)
- Richard H Masland: Massachusetts General Hospital, Harvard Medical School, 50 Blossom Street, Boston, MA 02114, USA
11
Kastrup CJ, Shen F, Ismagilov RF. Response to “Shape Emerges in a Complex Biochemical Network and Its Simple Chemical Analogue”. Angew Chem Int Ed Engl 2007; 46:3660-3662. [PMID: 17407119] [DOI: 10.1002/anie.200604995]
Affiliation(s)
- Christian J Kastrup: Department of Chemistry and Institute for Biophysical Dynamics, The University of Chicago, Chicago, IL 60637, USA
12
Kastrup C, Shen F, Ismagilov R. Response to “Shape Emerges in a Complex Biochemical Network and Its Simple Chemical Analogue”. Angew Chem Int Ed Engl 2007. [DOI: 10.1002/ange.200604995]