1
Grabowska MJ, Jeans R, Steeves J, van Swinderen B. Oscillations in the central brain of Drosophila are phase locked to attended visual features. Proc Natl Acad Sci U S A 2020; 117:29925-29936. [PMID: 33177231] [PMCID: PMC7703559] [DOI: 10.1073/pnas.2010749117]
Abstract
Object-based attention describes the brain's capacity to prioritize one set of stimuli while ignoring others. Human research suggests that the binding of diverse stimuli into one attended percept requires phase-locked oscillatory activity in the brain. Even insects display oscillatory brain activity during visual attention tasks, but it is unclear if neural oscillations in insects are selectively correlated to different features of attended objects. We addressed this question by recording local field potentials in the Drosophila central complex, a brain structure involved in visual navigation and decision making. We found that attention selectively increased the neural gain of visual features associated with attended objects and that attention could be redirected to unattended objects by activation of a reward circuit. Attention was associated with increased beta (20- to 30-Hz) oscillations that selectively locked onto temporal features of the attended visual objects. Our results suggest a conserved function for the beta frequency range in regulating selective attention to salient visual features.
Affiliation(s)
- Martyna J Grabowska
- Queensland Brain Institute, The University of Queensland, Brisbane, QLD 4072, Australia
- Rhiannon Jeans
- Queensland Brain Institute, The University of Queensland, Brisbane, QLD 4072, Australia
- James Steeves
- Queensland Brain Institute, The University of Queensland, Brisbane, QLD 4072, Australia
- Bruno van Swinderen
- Queensland Brain Institute, The University of Queensland, Brisbane, QLD 4072, Australia
2
Kócsi Z, Murray T, Dahmen H, Narendra A, Zeil J. The Antarium: A Reconstructed Visual Reality Device for Ant Navigation Research. Front Behav Neurosci 2020; 14:599374. [PMID: 33240057] [PMCID: PMC7683616] [DOI: 10.3389/fnbeh.2020.599374]
Abstract
We constructed a large projection device (the Antarium) with 20,000 UV-blue-green LEDs that allows us to present tethered ants with views of their natural foraging environment. The ants walk on an air-cushioned trackball; their movements are registered and can be fed back to the visual panorama. Views are generated in a 3D model of the ants' environment so that they experience the changing visual world in the same way as they do when foraging naturally. The Antarium is a biscribed pentakis dodecahedron with 55 facets of identical isosceles triangles. The base of each triangle is 368 mm long, resulting in a device roughly 1 m in diameter. Each triangle carries 361 blue/green LEDs and nine UV LEDs, so the 55 triangles together provide 19,855 blue/green pixels and 495 UV pixels, covering 360° in azimuth and elevations from −50° below to +90° above the horizon. The angular resolution is 1.5° for the blue/green LEDs and 6.7° for the UV LEDs, with 65,536 intensity levels, a flicker frequency of more than 9,000 Hz and a frame rate of 190 fps. In addition, the direction and degree of polarisation of the UV LEDs can be adjusted through polarisers mounted on the axles of rotary actuators. We build 3D models of the ants' natural foraging environment using purely camera-based methods and reconstruct panoramic scenes at any point within these models by projecting panoramic images onto six virtual cameras, which capture a cube-map of images to be projected by the LEDs of the Antarium. The Antarium is a unique instrument for investigating visual navigation in ants. In open loop, it allows us to provide ants with familiar and unfamiliar views, with completely featureless visual scenes, or with scenes that are altered in spatial or spectral composition. In closed loop, we can study the behavior of ants that are virtually displaced within their natural foraging environment. In the future, the Antarium can also be used to investigate the dynamics of navigational guidance and the neurophysiological basis of ant navigation in natural visual environments.
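The cube-map reconstruction step described in this abstract can be sketched in code. The following function is an illustrative assumption, not the Antarium's actual software: it maps a unit view direction (such as the direction of one LED in the ant's head frame) to a cube-map face and (u, v) texture coordinate, using one common cube-mapping convention, from which that LED's colour would be sampled.

```python
def cubemap_lookup(d):
    """Map a unit view direction d = (x, y, z) to a cube-map face name
    and (u, v) coordinates in [0, 1]^2 (illustrative sketch; the u/v
    sign convention is an assumption)."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:          # +X or -X face dominates
        face = '+x' if x > 0 else '-x'
        u, v = (-z / ax if x > 0 else z / ax), -y / ax
    elif ay >= az:                     # +Y or -Y face dominates
        face = '+y' if y > 0 else '-y'
        u, v = x / ay, (z / ay if y > 0 else -z / ay)
    else:                              # +Z or -Z face dominates
        face = '+z' if z > 0 else '-z'
        u, v = (x / az if z > 0 else -x / az), -y / az
    # Remap from [-1, 1] to [0, 1] texture coordinates
    return face, (u + 1) / 2, (v + 1) / 2
```

In a pipeline like the one described, each of the six virtual cameras renders one face of the cube, and each physical LED is assigned a fixed direction vector, so per frame the device only needs this lookup plus a texture fetch per LED.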
Affiliation(s)
- Zoltán Kócsi
- Research School of Biology, Australian National University, Canberra, ACT, Australia
- Trevor Murray
- Research School of Biology, Australian National University, Canberra, ACT, Australia
- Hansjürgen Dahmen
- Department of Cognitive Neuroscience, University of Tübingen, Tübingen, Germany
- Ajay Narendra
- Department of Biological Sciences, Macquarie University, Sydney, NSW, Australia
- Jochen Zeil
- Research School of Biology, Australian National University, Canberra, ACT, Australia
3
Yuan D, Ji X, Hao S, Gestrich JY, Duan W, Wang X, Xiang Y, Yang J, Hu P, Xu M, Liu L, Wei H. Lamina feedback neurons regulate the bandpass property of the flicker-induced orientation response in Drosophila. J Neurochem 2020; 156:59-75. [PMID: 32383496] [DOI: 10.1111/jnc.15036]
Abstract
Natural scenes contain complex visual cues with specific features, including color, motion, flicker, and position. It is critical to understand how different visual features are processed at the early stages of visual perception to elicit appropriate cellular responses, and even behavioral output. Here, we studied the visual orientation response induced by flickering stripes in a novel behavioral paradigm in Drosophila melanogaster. We found that freely walking flies exhibited a bandpass orientation response to flickering stripes of different frequencies. The most sensitive frequency range was confined to low frequencies of 2-4 Hz. Through genetic silencing, we showed that lamina L1 and L2 neurons, which receive visual inputs from R1 to R6 neurons, were the main components mediating flicker-induced orientation behavior. Moreover, specific blocking of the different types of lamina feedback neurons (Lawf1, Lawf2, C2, C3, and T1) modulated orientation responses to flickering stripes of particular frequencies, suggesting that the bandpass orientation response is generated through cooperative modulation by lamina feedback neurons. Furthermore, we found that Lawf1 lamina feedback neurons are glutamatergic. Thermal activation of Lawf1 neurons suppressed neural activities in L1 and L2 neurons, an effect that could be blocked by the glutamate-gated chloride channel inhibitor picrotoxin (PTX). In summary, lamina monopolar neurons L1 and L2 are the primary components mediating the flicker-induced orientation response, while lamina feedback neurons cooperatively modulate the response in a frequency-dependent way, possibly by modulating the neural activities of L1 and L2 neurons.
Affiliation(s)
- Deliang Yuan
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing, P. R. China
- Xiaoxiao Ji
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing, P. R. China
- Shun Hao
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing, P. R. China
- Julia Yvonne Gestrich
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, P. R. China
- Wenlan Duan
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing, P. R. China
- Xinwei Wang
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing, P. R. China
- Yuanhang Xiang
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing, P. R. China
- Jihua Yang
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing, P. R. China
- Pengbo Hu
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing, P. R. China
- Mengbo Xu
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, P. R. China
- Li Liu
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing, P. R. China; CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, P. R. China
- Hongying Wei
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Biomacromolecules, Institute of Biophysics, Chinese Academy of Sciences, Beijing, P. R. China; College of Life Sciences, University of the Chinese Academy of Sciences, Beijing, P. R. China
4
Complexity and plasticity in honey bee phototactic behaviour. Sci Rep 2020; 10:7872. [PMID: 32398687] [PMCID: PMC7217928] [DOI: 10.1038/s41598-020-64782-y]
Abstract
The ability to move towards or away from a light source, namely phototaxis, is essential for a number of species to find the right environmental niche and may have driven the appearance of simple visual systems. In this study we ask if the later evolution of more complex visual systems was accompanied by a sophistication of phototactic behaviour. The honey bee is an ideal model organism to tackle this question, as it has an elaborate visual system, demonstrates exquisite abilities for visual learning and performs phototaxis. Our data suggest that in this insect, phototaxis has wavelength-specific properties and is a highly dynamic response involving multiple decision steps. In addition, we show that previous experience with a light (through exposure or classical aversive conditioning) modulates the phototactic response. This plasticity depends on the wavelength used, with blue being more labile than green or ultraviolet. Wavelength, intensity and past experience are integrated into an overall valence for each light that determines phototactic behaviour in honey bees. Thus, our results support the idea that complex visual systems allow sophisticated phototaxis. Future studies could take advantage of these findings to better understand the neuronal circuits underlying this processing of visual information.
5
A Target-Detecting Visual Neuron in the Dragonfly Locks on to Selectively Attended Targets. J Neurosci 2019; 39:8497-8509. [PMID: 31519823] [DOI: 10.1523/jneurosci.1431-19.2019]
Abstract
The visual world projects a complex and rapidly changing image onto the retina of many animal species. This presents computational challenges for animals reliant on visual processing to provide an accurate representation of the world. One such challenge is parsing a visual scene for the most salient targets, such as the selection of prey amid a swarm. The ability to selectively prioritize processing of some stimuli over others is known as 'selective attention'. We recently identified a dragonfly visual neuron called 'Centrifugal Small Target Motion Detector 1' (CSTMD1) that exhibits selective attention when presented with multiple, equally salient targets. Here we conducted in vivo electrophysiological recordings from CSTMD1 in wild-caught male dragonflies (Hemicordulia tau), while presenting visual stimuli on an LCD monitor. To identify the target selected in any given trial, we uniquely modulated the intensity of the moving targets (frequency tagging). We found that the frequency information of the selected target is preserved in the neuronal response, while the distracter is completely ignored. We also show that the competitive system underlying selection in this neuron can be biased by the presentation of a preceding target on the same trajectory, even when it is of lower contrast than an abrupt, novel distracter. With this improved method for identifying and biasing target selection in CSTMD1, the dragonfly provides an ideal animal model system to probe the neuronal mechanisms underlying selective attention.

SIGNIFICANCE STATEMENT We present the first application of frequency tagging to intracellular neuronal recordings, demonstrating that the frequency component of a stimulus is encoded in the spiking response of an individual neuron. Using this technique as an identifier, we demonstrate that CSTMD1 'locks on' to a selected target and encodes the absolute strength of this target, even in the presence of abruptly appearing, high-contrast distracters. The underlying mechanism also permits selection to switch between targets mid-trial, even among equivalent targets. Together, these results demonstrate greater complexity in this selective attention system than would be expected of a winner-takes-all network. They contrast with typical findings in the primate and avian brain, but display an intriguing resemblance to observations in human psychophysics.
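The frequency-tagging readout described in this abstract can be illustrated with a short analysis sketch. This is not the authors' analysis code; the function name, the use of NumPy, and the example tag frequencies (11 Hz and 15 Hz) are assumptions for illustration. The idea is simply that each target's intensity is modulated at its own frequency, so the selected target can be identified as the one whose tag frequency dominates the spectrum of the neural response.

```python
import numpy as np

def tagged_power(rate, fs, tag_freqs):
    """Spectral power of a response signal at each tag frequency.

    rate      : 1-D array, e.g. a smoothed spike-rate trace
    fs        : sampling rate in Hz
    tag_freqs : modulation frequencies assigned to the targets
    Returns {tag frequency: power}; the selected target is the one
    whose tag dominates. (Illustrative sketch only.)
    """
    n = len(rate)
    spectrum = np.abs(np.fft.rfft(rate - rate.mean())) ** 2 / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Power at the FFT bin nearest each tag frequency
    return {f: spectrum[np.argmin(np.abs(freqs - f))] for f in tag_freqs}

# Example: a 2 s response that follows a 15 Hz tag, while an
# 11 Hz-tagged distracter leaves no trace in the response.
fs = 1000
t = np.arange(0, 2, 1 / fs)
resp = 1 + 0.8 * np.sin(2 * np.pi * 15 * t)
power = tagged_power(resp, fs, [11.0, 15.0])
# power[15.0] greatly exceeds power[11.0] -> the 15 Hz target was selected
```

With real recordings one would typically average power over a small band around each tag and compare it against neighbouring (untagged) bins to control for broadband noise.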
6
Khamukhin A. Numerical Simulation of Visually Guided Landing Based on a Honeybee Motion Model. J Intell Robot Syst 2018. [DOI: 10.1007/s10846-018-0960-z]
7
Grabowska MJ, Steeves J, Alpay J, van de Poll M, Ertekin D, van Swinderen B. Innate visual preferences and behavioral flexibility in Drosophila. J Exp Biol 2018; 221:jeb.185918. [DOI: 10.1242/jeb.185918]
Abstract
Visual decision-making in animals is influenced by innate preferences as well as experience. Interaction between hard-wired responses and changing motivational states determines whether a visual stimulus is attractive, aversive, or neutral. It is, however, difficult to separate the relative contribution of nature versus nurture in experimental paradigms, especially for more complex visual parameters such as the shape of objects. We used a closed-loop virtual reality paradigm for walking Drosophila flies to uncover innate visual preferences for the shape and size of objects, in a recursive choice scenario allowing the flies to reveal their visual preferences over time. We found that Drosophila flies display a robust attraction/repulsion profile for a range of object sizes in this paradigm, and that this visual preference profile remains evident under a variety of conditions and persists into old age. We also demonstrate a level of flexibility in this behavior: innate repulsion to certain objects could be transiently overridden if these were novel, although this effect was only evident in younger flies. Finally, we show that a neuromodulatory circuit in the fly brain, Drosophila neuropeptide F (dNPF), can be recruited to guide visual decision-making. Optogenetic activation of dNPF-expressing neurons converted a visually repulsive object into a more attractive object. This suggests that dNPF activity in the Drosophila brain guides ongoing visual choices, overriding innate preferences and thereby providing a necessary level of behavioral flexibility in visual decision-making.
Affiliation(s)
- Martyna J. Grabowska
- Queensland Brain Institute, The University of Queensland, St Lucia, QLD 4072, Australia
- James Steeves
- Queensland Brain Institute, The University of Queensland, St Lucia, QLD 4072, Australia
- Julius Alpay
- Queensland Brain Institute, The University of Queensland, St Lucia, QLD 4072, Australia
- Matthew van de Poll
- Queensland Brain Institute, The University of Queensland, St Lucia, QLD 4072, Australia
- Deniz Ertekin
- Queensland Brain Institute, The University of Queensland, St Lucia, QLD 4072, Australia
- Bruno van Swinderen
- Queensland Brain Institute, The University of Queensland, St Lucia, QLD 4072, Australia
8
Schultheiss P, Buatois A, Avarguès-Weber A, Giurfa M. Using virtual reality to study visual performances of honeybees. Curr Opin Insect Sci 2017; 24:43-50. [PMID: 29208222] [DOI: 10.1016/j.cois.2017.08.003]
Abstract
Virtual reality (VR) offers an appealing experimental framework for studying visual performances of insects under highly controlled conditions. In the case of the honeybee Apis mellifera, this possibility may fill the gap between behavioural analyses in free flight and cellular analyses in the laboratory. Using automated, computer-controlled systems, it is possible to generate virtual stimuli or even entire environments that can be modified to test hypotheses on bee visual behaviour. The bee itself can remain tethered in place, making it possible to record neural activity while the bee is performing behavioural tasks. Recent studies have examined visual navigation and attentional processes in VR on flying or walking tethered bees, but experimental paradigms for examining visual learning and memory are only just emerging. Behavioural performances of bees under current experimental conditions are often lower in VR than in natural environments, but further improvements to current experimental protocols seem possible. Here we discuss current developments and conclude that it is essential to tailor the specifications of the VR simulation to the visual processing of honeybees to improve the success of this research endeavour.
Affiliation(s)
- Patrick Schultheiss
- Research Centre on Animal Cognition, Centre for Integrative Biology, CNRS, University of Toulouse, 118 Route de Narbonne, 31062 Toulouse cedex 09, France.
- Alexis Buatois
- Research Centre on Animal Cognition, Centre for Integrative Biology, CNRS, University of Toulouse, 118 Route de Narbonne, 31062 Toulouse cedex 09, France
- Aurore Avarguès-Weber
- Research Centre on Animal Cognition, Centre for Integrative Biology, CNRS, University of Toulouse, 118 Route de Narbonne, 31062 Toulouse cedex 09, France
- Martin Giurfa
- Research Centre on Animal Cognition, Centre for Integrative Biology, CNRS, University of Toulouse, 118 Route de Narbonne, 31062 Toulouse cedex 09, France
9
Buatois A, Pichot C, Schultheiss P, Sandoz JC, Lazzari CR, Chittka L, Avarguès-Weber A, Giurfa M. Associative visual learning by tethered bees in a controlled visual environment. Sci Rep 2017; 7:12903. [PMID: 29018218] [PMCID: PMC5635106] [DOI: 10.1038/s41598-017-12631-w]
Abstract
Free-flying honeybees exhibit remarkable cognitive capacities but the neural underpinnings of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but display poor visual learning. To overcome this limitation, we aimed at establishing a controlled visual environment in which tethered bees walking on a spherical treadmill learn to discriminate visual stimuli video projected in front of them. Freely flying bees trained to walk into a miniature Y-maze displaying these stimuli in a dark environment learned the visual discrimination efficiently when one of them (CS+) was paired with sucrose and the other with quinine solution (CS−). Adapting this discrimination to the treadmill paradigm with a tethered, walking bee was successful as bees exhibited robust discrimination and preferred the CS+ to the CS− after training. As learning was better in the maze, movement freedom, active vision and behavioral context might be important for visual learning. The nature of the punishment associated with the CS− also affects learning as quinine and distilled water enhanced the proportion of learners. Thus, visual learning is amenable to a controlled environment in which tethered bees learn visual stimuli, a result that is important for future neurobiological studies in virtual reality.
Affiliation(s)
- Alexis Buatois
- Research Centre on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, 118 route de Narbonne, F-31062, Toulouse cedex 09, France
- Cécile Pichot
- Research Centre on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, 118 route de Narbonne, F-31062, Toulouse cedex 09, France
- Patrick Schultheiss
- Research Centre on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, 118 route de Narbonne, F-31062, Toulouse cedex 09, France
- Jean-Christophe Sandoz
- Laboratory Evolution Genomes Behavior and Ecology, CNRS, Univ Paris-Sud, IRD, University Paris Saclay, F-91198, Gif-sur-Yvette, France
- Claudio R Lazzari
- Institut de Recherche sur la Biologie de l'Insecte, UMR 7261 CNRS, University François Rabelais of Tours, F-37200, Tours, France
- Lars Chittka
- Queen Mary University of London, School of Biological and Chemical Sciences, Biological and Experimental Psychology, Mile End Road, London, E1 4NS, United Kingdom
- Aurore Avarguès-Weber
- Research Centre on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, 118 route de Narbonne, F-31062, Toulouse cedex 09, France
- Martin Giurfa
- Research Centre on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, 118 route de Narbonne, F-31062, Toulouse cedex 09, France
10
Rusch C, Roth E, Vinauger C, Riffell JA. Honeybees in a virtual reality environment learn unique combinations of colour and shape. J Exp Biol 2017; 220:3478-3487. [PMID: 28751492] [DOI: 10.1242/jeb.164731]
Abstract
Honeybees are well-known models for the study of visual learning and memory. Whereas most of our knowledge of learned responses comes from experiments using free-flying bees, a tethered preparation would allow fine-scale control of the visual stimuli as well as accurate characterization of the learned responses. Unfortunately, conditioning procedures using visual stimuli in tethered bees have been limited in their efficacy. In this study, using a novel virtual reality environment and a differential training protocol in tethered walking bees, we show that the majority of honeybees learn visual stimuli, and need only six paired training trials to learn the stimulus. We found that bees readily learn visual stimuli that differ in both shape and colour. However, bees learn certain components over others (colour versus shape), and visual stimuli are learned in a non-additive manner with the interaction of specific colour and shape combinations being crucial for learned responses. To better understand which components of the visual stimuli the bees learned, the shape-colour association of the stimuli was reversed either during or after training. Results showed that maintaining the visual stimuli in training and testing phases was necessary to elicit visual learning, suggesting that bees learn multiple components of the visual stimuli. Together, our results demonstrate a protocol for visual learning in restrained bees that provides a powerful tool for understanding how components of a visual stimulus elicit learned responses as well as elucidating how visual information is processed in the honeybee brain.
Affiliation(s)
- Claire Rusch
- Department of Biology, University of Washington, Seattle, WA 98195, USA; University of Washington Institute for Neuroengineering, Seattle, WA 98195, USA
- Eatai Roth
- Department of Biology, University of Washington, Seattle, WA 98195, USA; University of Washington Institute for Neuroengineering, Seattle, WA 98195, USA
- Clément Vinauger
- Department of Biology, University of Washington, Seattle, WA 98195, USA
- Jeffrey A Riffell
- Department of Biology, University of Washington, Seattle, WA 98195, USA; University of Washington Institute for Neuroengineering, Seattle, WA 98195, USA
11
Avarguès-Weber A, Mota T. Advances and limitations of visual conditioning protocols in harnessed bees. J Physiol Paris 2016; 110:107-118. [PMID: 27998810] [DOI: 10.1016/j.jphysparis.2016.12.006]
Abstract
Bees are excellent invertebrate models for studying visual learning and memory mechanisms, because of their sophisticated visual system and impressive cognitive capacities associated with a relatively simple brain. Visual learning in free-flying bees has traditionally been studied using an operant conditioning paradigm. This well-established protocol, however, can hardly be combined with invasive procedures for studying the neurobiological basis of visual learning. Different efforts have been made to develop protocols in which harnessed honey bees can associate visual cues with reinforcement, though learning performances remain poorer than those obtained with free-flying animals. Especially in the last decade, the intention of improving visual learning performances of harnessed bees led many authors to adopt distinct visual conditioning protocols, altering parameters such as the harnessing method, the nature and duration of visual stimulation, the number of trials and the inter-trial intervals, among others. As a result, the literature provides data that are hardly comparable and sometimes contradictory. In the present review, we provide an extensive analysis of the literature available on visual conditioning of harnessed bees, with special emphasis on comparing the diverse conditioning parameters adopted by different authors. Together with this comparative overview, we discuss how these conditioning parameters could modulate visual learning performances of harnessed bees.
Affiliation(s)
- Aurore Avarguès-Weber
- Centre de Recherches sur la Cognition Animale, Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, 118 Route de Narbonne, 31062 Toulouse Cedex 9, France.
- Theo Mota
- Departamento de Fisiologia e Biofísica, Instituto de Ciências Biológicas - ICB, Universidade Federal de Minas Gerais - UFMG, Av. Antônio Carlos 6627, 31270-901 Belo Horizonte, Minas Gerais, Brazil