1. Strelevitz H, Tiraboschi E, Haase A. Associative Learning of Quantitative Mechanosensory Stimuli in Honeybees. Insects 2024; 15:94. PMID: 38392513; PMCID: PMC10889140; DOI: 10.3390/insects15020094.
Abstract
The proboscis extension response (PER) has been widely used to evaluate honeybees' (Apis mellifera) learning and memory abilities, typically by using odors and visual cues for the conditioned stimuli. Here we asked whether honeybees could learn to distinguish between different magnitudes of the same type of stimulus, given as two speeds of air flux. By taking advantage of a novel automated system for administering PER experiments, we determined that the bees were highly successful when the lower air flux was rewarded and less successful when the higher flux was rewarded. Importantly, since our method includes AI-assisted analysis, we were able to consider subthreshold responses at a high temporal resolution; this analysis revealed patterns of rapid generalization and slowly acquired discrimination between the rewarded and unrewarded stimuli, as well as indications that the high air flux may have been mildly aversive. The learning curve for these mechanosensory stimuli, at least when the lower flux is rewarded, more closely mimics prior data from olfactory PER studies rather than visual ones, possibly in agreement with recent findings that the insect olfactory system is also sensitive to mechanosensory information. This work demonstrates a new modality to be used in PER experiments and lays the foundation for deeper exploration of honeybee cognitive processes when posed with complex learning challenges.
Affiliation(s)
- Heather Strelevitz: Center for Mind/Brain Sciences (CIMeC), University of Trento, Piazza Manifattura 1, 38068 Rovereto, Italy
- Ettore Tiraboschi: Center for Mind/Brain Sciences (CIMeC), University of Trento, Piazza Manifattura 1, 38068 Rovereto, Italy
- Albrecht Haase: Center for Mind/Brain Sciences (CIMeC), University of Trento, Piazza Manifattura 1, 38068 Rovereto, Italy; Department of Physics, University of Trento, 38123 Povo, Italy
2. Schmalz F, El Jundi B, Rössler W, Strube-Bloss M. Categorizing Visual Information in Subpopulations of Honeybee Mushroom Body Output Neurons. Front Physiol 2022; 13:866807. PMID: 35574496; PMCID: PMC9092450; DOI: 10.3389/fphys.2022.866807.
Abstract
Multisensory integration plays a central role in perception, as most behaviors require the input of different sensory signals. For instance, for a foraging honeybee, the association of a food source involves combining olfactory and visual cues so that it can be categorized as a flower. Moreover, homing after successful foraging, using celestial cues and the panoramic scenery, may be dominated by visual cues. Hence, depending on the context, one modality might be leading and influence the processing of other modalities. To unravel the complex neural mechanisms behind this process, we studied honeybee mushroom body output neurons (MBONs). MBONs represent the first processing level after olfactory-visual convergence in the honeybee brain. This was physiologically confirmed in our previous study by characterizing a subpopulation of multisensory MBONs. These neurons categorize incoming sensory inputs into olfactory, visual, and olfactory-visual information. However, in addition to multisensory units, a prominent population of MBONs was sensitive to visual cues only. Therefore, we asked which visual features might be represented at this high-order integration level. Using extracellular multi-unit recordings in combination with visual and olfactory stimulation, we separated MBONs with multisensory responses from purely visually driven MBONs. Further analysis revealed, for the first time, that visually driven MBONs of both groups encode detailed aspects within this individual modality, such as light intensity and light identity. Moreover, we show that these features are separated by different MBON subpopulations, for example by extracting information about brightness and wavelength. Most interestingly, the latter MBON population was tuned to separate UV light from other light stimuli, which were only poorly differentiated from each other. A third MBON subpopulation was neither tuned to brightness nor to wavelength and encoded the general presence of light.
Taken together, our results support the view that the mushroom body, a high-order sensory integration, learning and memory center in the insect brain, categorizes sensory information by separating different behaviorally relevant aspects of the multisensory scenery and that these categories are channeled into distinct MBON subpopulations.
Affiliation(s)
- Fabian Schmalz: Behavioral Physiology and Sociobiology (Zoology II), Biozentrum, University of Würzburg, Würzburg, Germany
- Basil El Jundi: Behavioral Physiology and Sociobiology (Zoology II), Biozentrum, University of Würzburg, Würzburg, Germany
- Wolfgang Rössler: Behavioral Physiology and Sociobiology (Zoology II), Biozentrum, University of Würzburg, Würzburg, Germany
- Martin Strube-Bloss: Department of Biological Cybernetics and Theoretical Biology, University of Bielefeld, Bielefeld, Germany
3. Antennal movements can be used as behavioral readout of odor valence in honey bees. IBRO Neurosci Rep 2022; 12:323-332. PMID: 35746975; PMCID: PMC9210461; DOI: 10.1016/j.ibneur.2022.04.005.
Abstract
The fact that honey bees have a relatively simple nervous system that allows complex behaviors has made them an outstanding model for studying neurobiological processes. Studies on learning and memory routinely use appetitive and aversive learning paradigms that involve recording of the proboscis or the sting extension. However, these protocols are based on all-or-none responses, which has the disadvantage of occluding intermediate and more elaborate behaviors. Nowadays, the great advances in tracking software and data analysis, combined with affordable video recording systems, have made it possible to extract very detailed information about animal behavior. Here we describe antennal movements that are elicited by odors of neutral, positive, or negative valence. We show that animals orient their antennae towards the source of the odor when it is positive, and orient them in the opposite direction when the odor is negative. Moreover, we found that this behavior differed between animals that had been trained with protocols of different strength. Since this procedure allows a more accurate description of the behavioral outcome using a relatively small number of animals, it represents a great tool for studying different cognitive processes and olfactory perception.
4. Lafon G, Geng H, Avarguès-Weber A, Buatois A, Massou I, Giurfa M. The Neural Signature of Visual Learning Under Restrictive Virtual-Reality Conditions. Front Behav Neurosci 2022; 16:846076. PMID: 35250505; PMCID: PMC8888666; DOI: 10.3389/fnbeh.2022.846076.
Abstract
Honey bees are reputed for their remarkable visual learning and navigation capabilities. These capacities can be studied in virtual reality (VR) environments, which allow studying the performance of tethered animals in stationary flight or walking under full control of the sensory environment. Here, we used a 2D VR setup in which a tethered bee walking stationary under restrictive closed-loop conditions learned to discriminate vertical rectangles differing in color and reinforcing outcome. Closed-loop conditions restricted stimulus control to lateral displacements. Consistent with prior VR analyses, bees learned to discriminate the trained stimuli. Ex vivo analyses on the brains of learners and non-learners showed that successful learning led to a downregulation of three immediate early genes (IEGs) in the main regions of the visual circuit, the optic lobes (OLs) and the calyces of the mushroom bodies (MBs). While Egr1 was downregulated in the OLs, Hr38 and kakusei were coincidently downregulated in the calyces of the MBs. Our work thus reveals that color discrimination learning induced a neural signature distributed along the sequential pathway of color processing that is consistent with an inhibitory trace. This trace may relate to the motor patterns required to solve the discrimination task, which are different from those underlying pathfinding in 3D VR scenarios allowing for navigation and exploratory learning and which lead to IEG upregulation.
Affiliation(s)
- Gregory Lafon: Research Center on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, Toulouse, France
- Haiyang Geng: Research Center on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, Toulouse, France; College of Animal Sciences (College of Bee Science), Fujian Agriculture and Forestry University, Fuzhou, China
- Aurore Avarguès-Weber: Research Center on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, Toulouse, France
- Alexis Buatois: Research Center on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, Toulouse, France
- Isabelle Massou: Research Center on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, Toulouse, France
- Martin Giurfa: Research Center on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, Toulouse, France; College of Animal Sciences (College of Bee Science), Fujian Agriculture and Forestry University, Fuzhou, China; Institut Universitaire de France, Paris, France
- Correspondence: Martin Giurfa
5. Motion cues from the background influence associative color learning of honey bees in a virtual-reality scenario. Sci Rep 2021; 11:21127. PMID: 34702914; PMCID: PMC8548521; DOI: 10.1038/s41598-021-00630-x.
Abstract
Honey bees exhibit remarkable visual learning capacities, which can be studied using virtual reality (VR) landscapes in laboratory conditions. Existing VR environments for bees are imperfect as they provide either open-loop conditions or 2D displays. Here we achieved a true 3D environment in which walking bees learned to discriminate a rewarded from a punished virtual stimulus based on color differences. We included ventral or frontal background cues, which were also subjected to 3D updating based on the bee movements. We thus studied if and how the presence of such motion cues affected visual discrimination in our VR landscape. Our results showed that the presence of frontal, and to a lesser extent, of ventral background motion cues impaired the bees' performance. Whenever these cues were suppressed, color discrimination learning became possible. We analyzed the specific contribution of foreground and background cues and discussed the role of attentional interference and differences in stimulus salience in the VR environment to account for these results. Overall, we show how background and target cues may interact at the perceptual level and influence associative learning in bees. In addition, we identify issues that may affect decision-making in VR landscapes, which require specific control by experimenters.
6. Finke V, Baracchi D, Giurfa M, Scheiner R, Avarguès-Weber A. Evidence of cognitive specialization in an insect: proficiency is maintained across elemental and higher-order visual learning but not between sensory modalities in honey bees. J Exp Biol 2021; 224:273769. PMID: 34664669; DOI: 10.1242/jeb.242470.
Abstract
Individuals differing in their cognitive abilities and foraging strategies may confer a valuable benefit to their social groups, as such variability may help the group respond flexibly in scenarios with different resource availability. Individual learning proficiency may either be absolute or vary with the complexity or the nature of the problem considered. Determining whether learning abilities correlate between tasks of different complexity, or between sensory modalities, is of high interest for research on brain modularity and task-dependent specialisation of neural circuits. The honeybee Apis mellifera constitutes an attractive model to address this question due to its capacity to successfully learn a large range of tasks in various sensory domains. Here we studied whether the performance of individual bees in a simple visual discrimination task (a discrimination between two visual shapes) is stable over time and correlates with their capacity to solve either a higher-order visual task (a conceptual discrimination based on spatial relations between objects) or an elemental olfactory task (a discrimination between two odorants). We found that individual learning proficiency within a given task was maintained over time and that some individuals performed consistently better than others within the visual modality, thus showing consistent aptitude across visual tasks of different complexity. By contrast, performance in the elemental visual-learning task did not predict performance in the equivalent elemental olfactory task. Overall, our results suggest the existence of cognitive specialisation within the hive, which may contribute to the colony's ecological success.
Affiliation(s)
- Valerie Finke: Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, 118 Route de Narbonne, 31062 Toulouse, France; Biozentrum, Universität Würzburg, Am Hubland, 97074 Würzburg, Germany
- David Baracchi: Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, 118 Route de Narbonne, 31062 Toulouse, France; Department of Biology, University of Florence, Via Madonna del Piano 6, 50019 Sesto Fiorentino, Italy
- Martin Giurfa: Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, 118 Route de Narbonne, 31062 Toulouse, France; Institut Universitaire de France, Paris, France
- Ricarda Scheiner: Biozentrum, Universität Würzburg, Am Hubland, 97074 Würzburg, Germany
- Aurore Avarguès-Weber: Centre de Recherches sur la Cognition Animale (CRCA), Centre de Biologie Intégrative (CBI), Université de Toulouse, CNRS, UPS, 118 Route de Narbonne, 31062 Toulouse, France
7. Howard SR, Prendergast K, Symonds MRE, Shrestha M, Dyer AG. Spontaneous choices for insect-pollinated flower shapes by wild non-eusocial halictid bees. J Exp Biol 2021; 224:271069. PMID: 34318316; DOI: 10.1242/jeb.242457.
Abstract
The majority of angiosperms require animal pollination for reproduction, and insects are the dominant group of animal pollinators. Bees are considered one of the most important and abundant insect pollinators. Research into bee behaviour and foraging decisions has typically centred on managed eusocial bee species, including Apis mellifera and Bombus terrestris. Non-eusocial bees are understudied with respect to foraging strategies and decision making, such as flower preferences. Understanding whether there are fundamental foraging strategies and preferences that are features of insect groups can provide key insights into flower-pollinator co-evolution. In the current study, Lasioglossum (Chilalictus) lanarium and Lasioglossum (Parasphecodes) sp., two native Australian generalist halictid bees, were tested for flower shape preferences between native insect-pollinated and bird-pollinated flowers. Each bee was presented with achromatic images of either insect-pollinated or bird-pollinated flowers in a circular arena. Both native bee species demonstrated a significant preference for images of insect-pollinated flowers. These preferences are similar to those found in A. mellifera, suggesting that flower shape preference may be a deep-rooted evolutionary trait within bees. With growing interest in the sensory capabilities of non-eusocial bees as alternative pollinators, the current study also provides a valuable framework for further behavioural testing of such species.
Affiliation(s)
- Scarlett R Howard: Centre for Integrative Ecology, School of Life and Environmental Sciences, Deakin University, Burwood, VIC 3125, Australia
- Kit Prendergast: School of Molecular and Life Sciences, Curtin University, Bentley, WA 6102, Australia
- Matthew R E Symonds: Centre for Integrative Ecology, School of Life and Environmental Sciences, Deakin University, Burwood, VIC 3125, Australia
- Mani Shrestha: Disturbance Ecology, Bayreuth Center of Ecology and Environmental Research (BayCEER), University of Bayreuth, 95440 Bayreuth, Germany; Faculty of Information Technology, Monash University, Clayton, VIC 3800, Australia
- Adrian G Dyer: School of Media and Communication, RMIT University, Melbourne, VIC 3000, Australia; Department of Physiology, Monash University, Clayton, VIC 3800, Australia
8. Riveros AJ, Entler BV, Seid MA. Stimulus-dependent learning and memory in the neotropical ant Ectatomma ruidum. J Exp Biol 2021; 224:261761. PMID: 33948646; DOI: 10.1242/jeb.238535.
Abstract
Learning and memory are major cognitive processes strongly tied to the life histories of animals. In ants, chemotactile information generally plays a central role in social interaction, navigation and resource exploitation. In hunters, however, visual information should be especially relevant during foraging, leading to differential use of information from different sensory modalities. Here, we aimed to test whether a hunter, the neotropical ant Ectatomma ruidum, differentially learns stimuli acquired through multiple sensory channels. We evaluated the performance of E. ruidum workers when trained using olfactory, mechanical, chemotactile and visual stimuli under a restrained protocol of appetitive learning. Conditioning of the maxilla-labium extension response enabled control of the stimuli provided. Our results show that ants learn faster and remember for longer when trained using chemotactile or visual stimuli than when trained using olfactory or mechanical stimuli alone. These results agree with the life history of E. ruidum, characterized by the high relevance of chemotactile information acquired through antennation as well as the role of vision during hunting.
Affiliation(s)
- Andre J Riveros: Departamento de Biología, Facultad de Ciencias Naturales, Universidad del Rosario, Cra. 26 #63B-48, Bogotá, Colombia
- Brian V Entler: Program in Neuroscience, Biology Department, University of Scranton, Scranton, PA 18510, USA
- Marc A Seid: Program in Neuroscience, Biology Department, University of Scranton, Scranton, PA 18510, USA
9. Riveros AJ, Leonard AS, Gronenberg W, Papaj DR. Learning of bimodal versus unimodal signals in restrained bumble bees. J Exp Biol 2020; 223:jeb220103. PMID: 32321753; DOI: 10.1242/jeb.220103.
Abstract
Similar to animal communication displays, flowers emit complex signals that attract pollinators. Signal complexity could lead to higher cognitive load for pollinators, impairing performance, or might benefit them by facilitating learning, memory and decision making. Here, we evaluated learning and memory in foragers of the bumble bee Bombus impatiens trained to simple (unimodal) versus complex (bimodal) signals under restrained conditions. Use of a proboscis extension response protocol enabled us to control the timing and duration of stimuli presented during absolute and differential learning tasks. Overall, we observed broad variation in performance under the two conditions, with bees trained to compound bimodal signals learning and remembering as well as, better than or more poorly than bees trained to unimodal signals. Interestingly, the outcome of training was affected by the specific colour-odour combination. Among unimodal stimuli, the performance with odour stimuli was higher than with colour stimuli, suggesting that olfactory signals played a more significant role in the compound bimodal condition. This was supported by the fact that after 24 h, most bimodal-treatment bees responded to odour but not visual stimuli. We did not observe differences in latency of response, suggesting that signal composition affected decision accuracy, not speed. We conclude that restrained bumble bee workers exhibit broad variation of responses to bimodal stimuli and that components of the bimodal signal may not be used equivalently. The analysis of bee performance under restrained conditions enables accurate control of the multimodal stimuli provided to individuals and to study the interaction of individual components within a compound.
Affiliation(s)
- Andre J Riveros: Departamento de Biología, Grupo de Investigaciones CANNON, Facultad de Ciencias Naturales, Universidad del Rosario, Bogotá, Colombia
- Anne S Leonard: Department of Biology, University of Nevada, Reno, NV 89557, USA
- Wulfila Gronenberg: Department of Neuroscience, University of Arizona, Tucson, AZ 85721, USA
- Daniel R Papaj: Department of Ecology and Evolutionary Biology, University of Arizona, Tucson, AZ 85721, USA
10. Varnon CA, Dinges CW, Vest AJ, Abramson CI. Conspecific and interspecific stimuli reduce initial performance in an aversive learning task in honey bees (Apis mellifera). PLoS One 2020; 15:e0228161. PMID: 32097420; PMCID: PMC7041878; DOI: 10.1371/journal.pone.0228161.
Abstract
The purpose of this experiment was to investigate whether honey bees (Apis mellifera) are able to use social discriminative stimuli in a spatial aversive conditioning paradigm. We tested bees' ability to avoid shock in a shuttle box apparatus across multiple groups when either shock, or the absence of shock, was associated with a live hive mate, a dead hive mate, a live Polistes exclamans wasp or a dead wasp. Additionally, we used several control groups common to bee shuttle box research where shock was only associated with spatial cues, or where shock was associated with a blue or yellow color. While bees were able to learn the aversive task in a simple spatial discrimination, the presence of any other stimuli (color, another bee, or a wasp) reduced initial performance. While the color biases we discovered are in line with other experiments, the finding that the presence of another animal reduces performance is novel. Generally, it appears that the use of bees or wasps as stimuli initially causes an increase in overall activity that interferes with early performance in the spatial task. During the course of the experiment, the bees habituate to the insect stimuli (bee or wasp), and begin learning the aversive task. Additionally, we found that experimental subject bees did not discriminate between bees or wasps used as stimulus animals, nor did they discriminate between live or dead stimulus animals. This may occur, in part, due to the specialized nature of the worker honey bee. Results are discussed with implications for continual research on honey bees as models of aversive learning, as well as research on insect social learning in general.
Affiliation(s)
- Christopher A. Varnon: Laboratory of Comparative Psychology and Behavioral Ecology, Department of Psychology, Converse College, Spartanburg, South Carolina, United States of America
- Christopher W. Dinges: Laboratory of Comparative Psychology and Behavioral Biology, Department of Psychology, Oklahoma State University, Stillwater, Oklahoma, United States of America
- Adam J. Vest: Laboratory of Comparative Psychology and Behavioral Biology, Department of Psychology, Oklahoma State University, Stillwater, Oklahoma, United States of America
- Charles I. Abramson: Laboratory of Comparative Psychology and Behavioral Biology, Department of Psychology, Oklahoma State University, Stillwater, Oklahoma, United States of America
11. Nouvian M, Galizia CG. Aversive Training of Honey Bees in an Automated Y-Maze. Front Physiol 2019; 10:678. PMID: 31231238; PMCID: PMC6558987; DOI: 10.3389/fphys.2019.00678.
Abstract
Honeybees have remarkable learning abilities given their small brains, and have thus been established as a powerful model organism for the study of learning and memory. Most of our current knowledge is based on appetitive paradigms, in which a previously neutral stimulus (e.g., a visual, olfactory, or tactile stimulus) is paired with a reward. Here, we present a novel apparatus, the yAPIS, for aversive training of walking honey bees. This system consists of three arms of equal length, positioned at 120° from each other. Within each arm, colored lights (λ = 375, 465 or 520 nm) or odors (here limonene or linalool) can be delivered to provide conditioned stimuli (CS). A metal grid placed on the floor and roof delivers the punishment in the form of mild electric shocks (unconditioned stimulus, US). Our training protocol followed a fully classical procedure, in which the bee was exposed sequentially to a CS paired with shocks (CS+) and to another CS not punished (CS-). Learning performance was measured during a second phase, which took advantage of the Y-shape of the apparatus and of real-time tracking to present the bee with a choice situation, e.g., between the CS+ and the CS-. Bees reliably chose the CS- over the CS+ after only a few training trials with either colors or odors, and retained this memory for at least a day, except for the shortest wavelength (λ = 375 nm), which produced mixed results. This behavior was largely the result of the bees avoiding the CS+, as no evidence was found for attraction to the CS-. Interestingly, trained bees initially placed in the CS+ spontaneously escaped to a CS- arm if given the opportunity, even though they could never do so during the training. Finally, honey bees trained with compound stimuli (color + odor) later avoided either component of the CS+. Thus, the yAPIS is a fast, versatile and high-throughput way to train honey bees in aversive paradigms. It also opens the door for controlled laboratory experiments investigating bimodal integration and learning, a field that remains in its infancy.
Affiliation(s)
- Morgane Nouvian: Department of Biology, University of Konstanz, Konstanz, Germany
- C. Giovanni Galizia: Department of Biology, University of Konstanz, Konstanz, Germany; Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Konstanz, Germany
12. Aminergic neuromodulation of associative visual learning in harnessed honey bees. Neurobiol Learn Mem 2018; 155:556-567. DOI: 10.1016/j.nlm.2018.05.014.
13. Mansur BE, Rodrigues JRV, Mota T. Bimodal Patterning Discrimination in Harnessed Honey Bees. Front Psychol 2018; 9:1529. PMID: 30197616; PMCID: PMC6117423; DOI: 10.3389/fpsyg.2018.01529.
Abstract
In natural environments, stimuli and events learned by animals usually occur in combinations of more than one sensory modality. An important problem in experimental psychology has thus been to understand how organisms learn about multimodal compounds and how they discriminate these compounds from their unimodal constituents. Here we tested the ability of honey bees to learn bimodal patterning discriminations in which a visual-olfactory compound (AB) should be differentiated from its visual (A) and olfactory (B) elements. We found that harnessed bees trained in classical conditioning of the proboscis extension reflex (PER) are able to solve bimodal positive patterning (PP) and negative patterning (NP) tasks. In PP, bees learned to respond significantly more to a bimodal reinforced compound (AB+) than to non-reinforced presentations of single visual (A-) or olfactory (B-) elements. In NP, bees learned to suppress their responses to a non-reinforced compound (AB-) and increase their responses to reinforced presentations of visual (A+) or olfactory (B+) elements alone. We compared the effect of two different inter-trial intervals (ITIs) in our conditioning approaches. Whereas an ITI of 8 min allowed solving both PP and NP, only PP could be solved with a shorter ITI of 3 min. In all successful cases of bimodal PP and NP, bees were still able to discriminate between reinforced and non-reinforced stimuli in memory tests performed one hour after conditioning. The analysis of individual performances in PP and NP revealed that different learning strategies emerged in distinct individuals. Both in PP and NP, high levels of generalization were found between elements and compound at the individual level, suggesting a similar difficulty for bees to solve these bimodal patterning tasks. We discuss our results in light of elemental and configural learning theories that may support the strategies adopted by honey bees to solve bimodal PP or NP discriminations.
Affiliation(s)
- Breno E Mansur
- Department of Physiology and Biophysics, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Jean R V Rodrigues
- Department of Physiology and Biophysics, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Theo Mota
- Department of Physiology and Biophysics, Federal University of Minas Gerais, Belo Horizonte, Brazil
14
Lichtenstein L, Lichtenstein M, Spaethe J. Length of stimulus presentation and visual angle are critical for efficient visual PER conditioning in the restrained honey bee, Apis mellifera. J Exp Biol 2018; 221:jeb179622. [DOI: 10.1242/jeb.179622]
Abstract
Learning visual cues is an essential capability of bees for vital behaviors such as orientation in space and recognition of nest sites, food sources and mating partners. To study learning and memory in bees under controlled conditions, the proboscis extension response (PER) provides a well-established behavioral paradigm. While many studies have used the PER paradigm to test olfactory learning in bees because of its robustness and reproducibility, studies on PER conditioning of visual stimuli are rare. In this study, we designed a new setup to test the learning performance of restrained honey bees and the impact of several parameters: stimulus presentation length, stimulus size (i.e. visual angle) and ambient illumination. Intact honey bee workers could successfully discriminate between two monochromatic lights when the color stimulus was presented for 4, 7 and 10 s before a sugar reward was offered, reaching similar performance levels to those for olfactory conditioning. However, bees did not learn at shorter presentation durations. Similar to free-flying honey bees, harnessed bees were able to associate a visual stimulus with a reward at small visual angles (5 deg) but failed to utilize the chromatic information to discriminate the learned stimulus from a novel color. Finally, ambient light had no effect on acquisition performance. We discuss possible reasons for the distinct differences between olfactory and visual PER conditioning.
Affiliation(s)
- Leonie Lichtenstein
- Department of Behavioral Physiology and Sociobiology, Biozentrum, University of Würzburg, 97074 Würzburg, Germany
- Matthias Lichtenstein
- Department of Behavioral Physiology and Sociobiology, Biozentrum, University of Würzburg, 97074 Würzburg, Germany
- Johannes Spaethe
- Department of Behavioral Physiology and Sociobiology, Biozentrum, University of Würzburg, 97074 Würzburg, Germany
15
Buatois A, Flumian C, Schultheiss P, Avarguès-Weber A, Giurfa M. Transfer of Visual Learning Between a Virtual and a Real Environment in Honey Bees: The Role of Active Vision. Front Behav Neurosci 2018; 12:139. [PMID: 30057530] [PMCID: PMC6053632] [DOI: 10.3389/fnbeh.2018.00139]
Abstract
To study visual learning in honey bees, we developed a virtual reality (VR) system in which the movements of a tethered bee walking stationary on a spherical treadmill update the visual panorama presented in front of it (closed-loop conditions), thus creating an experience of immersion within a virtual environment. In parallel, we developed a small Y-maze with interchangeable end-boxes, which allowed a freely walking bee to be repeatedly returned to the starting point of the maze for repeated decision recording. Using conditioning and transfer experiments between the VR setup and the Y-maze, we studied the extent to which movement freedom and active vision are crucial for learning a simple color discrimination. Approximately 57% of the bees learned the visual discrimination in both conditions. Transfer from VR to the maze significantly improved the bees’ performances: 75% of bees having chosen the CS+ continued doing so and 100% of bees having chosen the CS− reverted their choice in favor of the CS+. In contrast, no improvement was seen for these two groups of bees during the reciprocal transfer from the Y-maze to VR. In this case, bees exhibited inconsistent choices in the VR setup. The asymmetric transfer between contexts indicates that the information learned in each environment may be different despite the similar learning success. Moreover, it shows that reducing the possibility of active vision and movement freedom in the passage from the maze to the VR impairs the expression of visual learning while increasing them in the reciprocal transfer improves it. Our results underline the active nature of visual processing in bees and allow discussing the developments required for immersive VR experiences in insects.
Affiliation(s)
- Alexis Buatois
- Research Centre on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, Toulouse, France
- Clara Flumian
- Research Centre on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, Toulouse, France
- Patrick Schultheiss
- Research Centre on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, Toulouse, France
- Aurore Avarguès-Weber
- Research Centre on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, Toulouse, France
- Martin Giurfa
- Research Centre on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, Toulouse, France
16
Rodrigues Vieira A, Salles N, Borges M, Mota T. Visual discrimination transfer and modulation by biogenic amines in honeybees. J Exp Biol 2018; 221:jeb178830. [DOI: 10.1242/jeb.178830]
Abstract
For more than a century, visual learning and memory have been studied in the honeybee Apis mellifera using operant appetitive conditioning. Although honeybees show impressive visual learning capacities in this well-established protocol, operant training of free-flying animals can hardly be combined with invasive protocols for studying the neurobiological basis of visual learning. In view of that, different efforts have been made to develop new classical conditioning protocols for studying visual learning in harnessed honeybees, though learning performances remain considerably poorer than those obtained in free-flying animals. Here we investigated the ability of honeybees to use visual information acquired during classical conditioning in a new operant context. We performed differential visual conditioning of the proboscis extension reflex (PER) followed by visual orientation tests in a Y-maze. Classical conditioning and Y-maze retention tests were performed using the same pair of perceptually isoluminant monochromatic stimuli, to avoid the influence of phototaxis during free-flying orientation. Visual discrimination transfer was clearly observed, with pre-trained honeybees significantly orienting their flights towards the former positive conditioned stimulus (CS+). We thus show that visual memories acquired by honeybees are resistant to context changes between conditioning and retention test. We combined this visual discrimination approach with selective pharmacological injections to evaluate the effect of dopamine and octopamine in appetitive visual learning. Both octopaminergic and dopaminergic antagonists impaired visual discrimination performances, suggesting that both these biogenic amines modulate appetitive visual learning in honeybees. Our study brings new insights into cognitive and neurobiological mechanisms underlying visual learning in honeybees.
Affiliation(s)
- Amanda Rodrigues Vieira
- Department of Physiology and Biophysics, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Postgraduate Program in Neurosciences, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Nayara Salles
- Department of Physiology and Biophysics, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Marco Borges
- Department of Physiology and Biophysics, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Theo Mota
- Department of Physiology and Biophysics, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Postgraduate Program in Neurosciences, Federal University of Minas Gerais, Belo Horizonte, Brazil
17
Schultheiss P, Buatois A, Avarguès-Weber A, Giurfa M. Using virtual reality to study visual performances of honeybees. Curr Opin Insect Sci 2017; 24:43-50. [PMID: 29208222] [DOI: 10.1016/j.cois.2017.08.003]
Abstract
Virtual reality (VR) offers an appealing experimental framework for studying visual performances of insects under highly controlled conditions. In the case of the honeybee Apis mellifera, this possibility may fill the gap between behavioural analyses in free-flight and cellular analyses in the laboratory. Using automated, computer-controlled systems, it is possible to generate virtual stimuli or even entire environments that can be modified to test hypotheses on bee visual behaviour. The bee itself can remain tethered in place, making it possible to record neural activity while the bee is performing behavioural tasks. Recent studies have examined visual navigation and attentional processes in VR on flying or walking tethered bees, but experimental paradigms for examining visual learning and memory are only just emerging. Behavioural performances of bees under current experimental conditions are often lower in VR than in natural environments, but further improvements on current experimental protocols seem possible. Here we discuss current developments and conclude that it is essential to tailor the specifications of the VR simulation to the visual processing of honeybees to improve the success of this research endeavour.
Affiliation(s)
- Patrick Schultheiss
- Research Centre on Animal Cognition, Centre for Integrative Biology, CNRS, University of Toulouse, 118 Route de Narbonne, 31062 Toulouse cedex 09, France.
- Alexis Buatois
- Research Centre on Animal Cognition, Centre for Integrative Biology, CNRS, University of Toulouse, 118 Route de Narbonne, 31062 Toulouse cedex 09, France
- Aurore Avarguès-Weber
- Research Centre on Animal Cognition, Centre for Integrative Biology, CNRS, University of Toulouse, 118 Route de Narbonne, 31062 Toulouse cedex 09, France
- Martin Giurfa
- Research Centre on Animal Cognition, Centre for Integrative Biology, CNRS, University of Toulouse, 118 Route de Narbonne, 31062 Toulouse cedex 09, France
18
Buatois A, Pichot C, Schultheiss P, Sandoz JC, Lazzari CR, Chittka L, Avarguès-Weber A, Giurfa M. Associative visual learning by tethered bees in a controlled visual environment. Sci Rep 2017; 7:12903. [PMID: 29018218] [PMCID: PMC5635106] [DOI: 10.1038/s41598-017-12631-w]
Abstract
Free-flying honeybees exhibit remarkable cognitive capacities but the neural underpinnings of these capacities cannot be studied in flying insects. Conversely, immobilized bees are accessible to neurobiological investigation but display poor visual learning. To overcome this limitation, we aimed at establishing a controlled visual environment in which tethered bees walking on a spherical treadmill learn to discriminate visual stimuli video projected in front of them. Freely flying bees trained to walk into a miniature Y-maze displaying these stimuli in a dark environment learned the visual discrimination efficiently when one of them (CS+) was paired with sucrose and the other with quinine solution (CS−). Adapting this discrimination to the treadmill paradigm with a tethered, walking bee was successful as bees exhibited robust discrimination and preferred the CS+ to the CS− after training. As learning was better in the maze, movement freedom, active vision and behavioral context might be important for visual learning. The nature of the punishment associated with the CS− also affects learning as quinine and distilled water enhanced the proportion of learners. Thus, visual learning is amenable to a controlled environment in which tethered bees learn visual stimuli, a result that is important for future neurobiological studies in virtual reality.
Affiliation(s)
- Alexis Buatois
- Research Centre on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, 118 route de Narbonne, F-31062, Toulouse cedex 09, France
- Cécile Pichot
- Research Centre on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, 118 route de Narbonne, F-31062, Toulouse cedex 09, France
- Patrick Schultheiss
- Research Centre on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, 118 route de Narbonne, F-31062, Toulouse cedex 09, France
- Jean-Christophe Sandoz
- Laboratory Evolution Genomes Behavior and Ecology, CNRS, Univ Paris-Sud, IRD, University Paris Saclay, F-91198, Gif-sur-Yvette, France
- Claudio R Lazzari
- Institut de Recherche sur la Biologie de l'Insecte, UMR 7261 CNRS, University François Rabelais of Tours, F-37200, Tours, France
- Lars Chittka
- Queen Mary University of London, School of Biological and Chemical Sciences, Biological and Experimental Psychology, Mile End Road, London, E1 4NS, United Kingdom
- Aurore Avarguès-Weber
- Research Centre on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, 118 route de Narbonne, F-31062, Toulouse cedex 09, France
- Martin Giurfa
- Research Centre on Animal Cognition, Center for Integrative Biology, CNRS, University of Toulouse, 118 route de Narbonne, F-31062, Toulouse cedex 09, France
19
Fernandes ASD, Buckley CL, Niven JE. Visual associative learning in wood ants. J Exp Biol 2017; 221:jeb173260. [DOI: 10.1242/jeb.173260]
Abstract
Wood ants are a model system for studying visual learning and navigation. They can forage for food and navigate to their nests effectively by forming memories of visual features in their surrounding environment. Previous studies of freely behaving ants have revealed many of the behavioural strategies and environmental features necessary for successful navigation. However, little is known about the exact visual properties of the environment that animals learn or the neural mechanisms that allow them to achieve this. As a first step towards addressing this, we developed a classical conditioning paradigm for visual learning in harnessed wood ants that allows us to control precisely the learned visual cues. In this paradigm, ants are fixed and presented with a visual cue paired with an appetitive sugar reward. Using this paradigm, we found that visual cues learnt by wood ants through Pavlovian conditioning are retained for at least one hour. Furthermore, we found that memory retention is dependent upon the ants’ performance during training. Our study provides the first evidence that wood ants can form visual associative memories when restrained. This classical conditioning paradigm has the potential to permit detailed analysis of the dynamics of memory formation and retention, and the neural basis of learning in wood ants.
Affiliation(s)
- A. Sofia D. Fernandes
- Department of Informatics, University of Sussex, Falmer, Brighton BN1 9QJ, UK
- School of Life Sciences, University of Sussex, Falmer, Brighton BN1 9QG, UK
- Centre for Computational Neuroscience and Robotics, University of Sussex, Falmer, Brighton BN1 9QG, UK
- C. L. Buckley
- Department of Informatics, University of Sussex, Falmer, Brighton BN1 9QJ, UK
- Centre for Computational Neuroscience and Robotics, University of Sussex, Falmer, Brighton BN1 9QG, UK
- J. E. Niven
- School of Life Sciences, University of Sussex, Falmer, Brighton BN1 9QG, UK
- Centre for Computational Neuroscience and Robotics, University of Sussex, Falmer, Brighton BN1 9QG, UK