1. Winsor AM, Pagoti GF, Daye DJ, Cheries EW, Cave KR, Jakob EM. What gaze direction can tell us about cognitive processes in invertebrates. Biochem Biophys Res Commun 2021; 564:43-54. [PMID: 33413978] [DOI: 10.1016/j.bbrc.2020.12.001]
Abstract
Most visually guided animals shift their gaze using body movements, eye movements, or both to gather information selectively from their environments. Psychological studies of eye movements have advanced our understanding of the perceptual and cognitive processes that mediate visual attention in humans and other vertebrates. However, much less is known about how these processes operate in other organisms, particularly invertebrates. Here we make the case that studies of invertebrate cognition can benefit from adding precise measures of gaze direction. To this end, we briefly review the human visual attention literature and outline four research themes and several experimental paradigms that could be extended to invertebrates. We then review selected studies in which the measurement of gaze direction in invertebrates has provided new insights, and we suggest future areas of exploration.
Affiliation(s)
- Alex M Winsor
- Graduate Program in Organismic and Evolutionary Biology, University of Massachusetts Amherst, Amherst, MA, 01003, USA.
- Guilherme F Pagoti
- Programa de Pós-Graduação em Zoologia, Instituto de Biociências, Universidade de São Paulo, Rua do Matão, 321, Travessa 14, Cidade Universitária, São Paulo, SP, 05508-090, Brazil
- Daniel J Daye
- Department of Biology, University of Massachusetts Amherst, Amherst, MA, 01003, USA; Graduate Program in Biological and Environmental Sciences, University of Rhode Island, Kingston, RI, 02881, USA
- Erik W Cheries
- Department of Psychological and Brain Sciences, University of Massachusetts Amherst, Amherst, MA, 01003, USA
- Kyle R Cave
- Department of Psychological and Brain Sciences, University of Massachusetts Amherst, Amherst, MA, 01003, USA
- Elizabeth M Jakob
- Department of Biology, University of Massachusetts Amherst, Amherst, MA, 01003, USA
2. Different mechanisms underlie implicit visual statistical learning in honey bees and humans. Proc Natl Acad Sci U S A 2020; 117:25923-25934. [PMID: 32989162] [DOI: 10.1073/pnas.1919387117]
Abstract
The ability to develop complex internal representations of the environment is considered a crucial antecedent to the emergence of humans' higher cognitive functions. Yet it is an open question whether there is any fundamental difference in how humans and other visually adept species naturally encode aspects of novel visual scenes. Using the same modified visual statistical learning paradigm and multielement stimuli, we investigated how human adults and honey bees (Apis mellifera) spontaneously encode, without dedicated training, various statistical properties of novel visual scenes. We found that, similarly to humans, honey bees automatically develop a complex internal representation of their visual environment that evolves with the accumulation of new evidence, even without targeted reinforcement. In particular, with more experience they shift from being sensitive to the statistics of only elemental features of the scenes to relying on co-occurrence frequencies of elements, while losing their sensitivity to elemental frequencies; but they never automatically encode the predictivity of elements. In contrast, humans involuntarily develop an internal representation that includes single-element and co-occurrence statistics, as well as information about the predictivity between elements. Importantly, capturing the human visual learning results requires a probabilistic chunk-learning model, whereas a simple fragment-based memory-trace model that counts occurrence summary statistics is sufficient to replicate the honey bees' learning behavior. Thus, humans' sophisticated encoding of sensory stimuli, which provides intrinsic sensitivity to predictive information, might be one of the fundamental prerequisites for developing higher cognitive abilities.
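The fragment-based memory-trace account described in this abstract can be illustrated with a toy model. The sketch below is illustrative only (it is not the authors' implementation, and all names are hypothetical): the model counts how often single elements and adjacent element pairs occur across observed scenes, and scores a test scene by summed familiarity, without ever representing predictivity between elements.

```python
from collections import Counter

class FragmentMemory:
    """Toy fragment-based memory trace: counts how often single elements
    and adjacent element pairs occur across observed scenes."""

    def __init__(self):
        self.single = Counter()  # element -> occurrence count
        self.pair = Counter()    # (elem_a, elem_b) -> co-occurrence count

    def observe(self, scene):
        """scene: ordered tuple of element labels, e.g. ('A', 'B', 'C')."""
        self.single.update(scene)
        self.pair.update(zip(scene, scene[1:]))  # adjacent co-occurrences

    def familiarity(self, scene):
        """Summed memory strength of the scene's elements and pairs."""
        return (sum(self.single[e] for e in scene)
                + sum(self.pair[p] for p in zip(scene, scene[1:])))

mem = FragmentMemory()
for s in [('A', 'B'), ('A', 'B'), ('C', 'D')]:
    mem.observe(s)

# A frequently seen pair is more familiar than a novel recombination
# of equally familiar elements.
assert mem.familiarity(('A', 'B')) > mem.familiarity(('A', 'D'))
```

Because familiarity is just a sum of occurrence counts, such a model is sensitive to element and co-occurrence frequencies but blind to whether one element predicts another, which is the contrast the study draws with human chunk learning.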
3. Lin IR, Chiao CC. Visual Equivalence and Amodal Completion in Cuttlefish. Front Physiol 2017; 8:40. [PMID: 28220075] [PMCID: PMC5292434] [DOI: 10.3389/fphys.2017.00040]
Abstract
Modern cephalopods are notably the most intelligent invertebrates, and this intelligence is accompanied by keen vision. Despite extensive studies investigating the visual systems of cephalopods, little is known about their visual perception and object recognition. In the present study, we investigated visual processing in the cuttlefish Sepia pharaonis, including visual equivalence and amodal completion. Cuttlefish were trained to discriminate images of shrimp and fish using an operant conditioning paradigm. After the cuttlefish reached the learning criteria, a series of discrimination tasks was conducted. In the visual equivalence experiment, several transformed versions of the training images were used, such as images reduced in size, images reduced in contrast, sketches of the images, the contours of the images, and silhouettes of the images. In the amodal completion experiment, partially occluded views of the original images were used. The results showed that cuttlefish were able to treat training images reduced in size, as well as sketches, as visually equivalent to the originals. Cuttlefish were also capable of recognizing partially occluded versions of the training images. Furthermore, individual differences in performance suggest that some cuttlefish may be able to recognize objects even when visual information is partly removed. These findings support the hypothesis that visual perception in cuttlefish involves both visual equivalence and amodal completion. The results also provide insights into the visual processing mechanisms used by cephalopods.
Affiliation(s)
- I-Rong Lin
- Institute of Systems Neuroscience, National Tsing Hua University, Hsinchu, Taiwan
- Chuan-Chin Chiao
- Institute of Systems Neuroscience, National Tsing Hua University, Hsinchu, Taiwan; Department of Life Science, National Tsing Hua University, Hsinchu, Taiwan
4. McCarthy EW, Chase MW, Knapp S, Litt A, Leitch AR, Le Comber SC. Transgressive phenotypes and generalist pollination in the floral evolution of Nicotiana polyploids. Nature Plants 2016; 2:16119. [PMID: 27501400] [DOI: 10.1038/nplants.2016.119]
Abstract
Polyploidy is an important driving force in angiosperm evolution, and much research has focused on genetic, epigenetic and transcriptomic responses to allopolyploidy. Nicotiana is an excellent system in which to study allopolyploidy because half of the species are allotetraploids of different ages, allowing us to examine the trajectory of floral evolution over time. Here, we study the effects of allopolyploidy on floral morphology in Nicotiana, using corolla tube measurements and geometric morphometrics to quantify petal shape. We show that polyploid morphological divergence from the intermediate phenotype expected (based on progenitor morphology) increases with time for floral limb shape and tube length, and that most polyploids are distinct or transgressive in at least one trait. In addition, we show that polyploids tend to evolve shorter and wider corolla tubes, suggesting that allopolyploidy could provide an escape from specialist pollination via reversion to more generalist pollination strategies.
Affiliation(s)
- Elizabeth W McCarthy
- School of Biological and Chemical Sciences, Queen Mary University of London, Mile End Road, London E1 4NS, UK
- Jodrell Laboratory, Royal Botanic Gardens, Kew, Richmond TW9 3DS, UK
- Natural History Museum, London SW7 5BD, UK
- Mark W Chase
- Jodrell Laboratory, Royal Botanic Gardens, Kew, Richmond TW9 3DS, UK
- Amy Litt
- Department of Botany and Plant Sciences, University of California, Riverside, California 92521, USA
- Andrew R Leitch
- School of Biological and Chemical Sciences, Queen Mary University of London, Mile End Road, London E1 4NS, UK
- Steven C Le Comber
- School of Biological and Chemical Sciences, Queen Mary University of London, Mile End Road, London E1 4NS, UK
5. Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes? PLoS One 2016; 11:e0147106. [PMID: 26886006] [PMCID: PMC4757030] [DOI: 10.1371/journal.pone.0147106]
Abstract
Honeybees (Apis mellifera) discriminate multiple object features such as colour, pattern and 2D shape, but it remains unknown whether and how bees recover three-dimensional shape. Here we show that bees can recognize objects by their three-dimensional form, and that they employ an active strategy to uncover the depth profiles. We trained individual, free-flying honeybees to collect sugar water from small three-dimensional objects made of styrofoam (sphere, cylinder, cuboids) or folded paper (convex, concave, planar) and found that bees can easily discriminate between these stimuli. We also tested possible strategies employed by the bees to uncover the depth profiles. For the card stimuli, we excluded overall shape and pictorial features (shading, texture gradients) as cues for discrimination. Lacking sufficient stereo vision, bees are known to use speed gradients in optic flow to detect edges; could the bees apply this strategy also to recover the fine details of a surface depth profile? Analysing the bees' flight tracks in front of the stimuli revealed specific combinations of flight maneuvers (lateral translations in combination with yaw rotations), which are particularly suitable for extracting depth cues from motion parallax. We modelled the generated optic flow and found characteristic patterns of angular displacement corresponding to the depth profiles of our stimuli: optic flow patterns from pure translations successfully recovered depth relations from the magnitude of angular displacements, while additional rotation provided robust depth information based on the direction of the displacements. Thus, the bees' flight maneuvers may reflect an optimized visuo-motor strategy for extracting depth structure from motion signals. The robustness and simplicity of this strategy offers an efficient solution for 3D object recognition without stereo vision, and could be employed by other flying insects or mobile robots.
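The depth-from-parallax argument in this abstract rests on simple geometry: under a pure lateral translation, a nearby point sweeps through a larger visual angle than a distant one, while an added yaw rotation shifts all points equally and so leaves relative parallax intact. A minimal sketch of that geometry follows (hypothetical function names; this is not the authors' optic-flow model):

```python
import math

def angular_shift_deg(depth_m, lateral_step_m):
    """Angular displacement (degrees) of a point straight ahead at distance
    depth_m after the observer translates sideways by lateral_step_m.
    Geometry: shift = atan(step / depth), so nearer points shift more."""
    return math.degrees(math.atan2(lateral_step_m, depth_m))

# For a 1 cm sideways step, a point 5 cm away shifts far more than
# one 20 cm away: the magnitude of parallax encodes depth.
near = angular_shift_deg(0.05, 0.01)  # ~11.3 deg
far = angular_shift_deg(0.20, 0.01)   # ~2.9 deg
assert near > far

# An added yaw rotation displaces every point by the same angle, so the
# *difference* between points (relative parallax) still encodes depth order.
yaw = 5.0
assert abs((near + yaw) - (far + yaw) - (near - far)) < 1e-9
```

This is why the lateral-translation component of the bees' flight maneuvers is the informative one: only translation produces depth-dependent image motion.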
6. Avarguès-Weber A, Dyer AG, Ferrah N, Giurfa M. The forest or the trees: preference for global over local image processing is reversed by prior experience in honeybees. Proc Biol Sci 2015; 282:20142384. [PMID: 25473017] [DOI: 10.1098/rspb.2014.2384]
Abstract
Traditional models of insect vision have assumed that insects are only capable of low-level analysis of local cues and are incapable of global, holistic perception. However, recent studies on honeybee (Apis mellifera) vision have refuted this view by showing that this insect also processes complex visual information by using spatial configurations or relational rules. In the light of these findings, we asked whether bees prioritize global configurations or local cues by setting these two levels of image analysis in competition. We trained individual free-flying honeybees to discriminate hierarchical visual stimuli within a Y-maze and tested bees with novel stimuli in which local and/or global cues were manipulated. We demonstrate that even when local information is accessible, bees prefer global information, thus relying mainly on the object's spatial configuration rather than on elemental, local information. This preference can be reversed if bees are pre-trained to discriminate isolated local cues. In this case, bees prefer the hierarchical stimuli with the local elements previously primed even if they build an incorrect global configuration. Pre-training with local cues induces a generic attentional bias towards any local elements as local information is prioritized in the test, even if the local cues used in the test are different from the pre-trained ones. Our results thus underline the plasticity of visual processing in insects and provide new insights for the comparative analysis of visual recognition in humans and animals.
Affiliation(s)
- Aurore Avarguès-Weber
- Centre de Recherches sur la Cognition Animale, Université de Toulouse, UPS, 118 route de Narbonne, Toulouse Cedex 9 31062, France
- Centre de Recherches sur la Cognition Animale, CNRS, 118 route de Narbonne, Toulouse Cedex 9 31062, France
- Adrian G Dyer
- Department of Physiology, Monash University, Clayton, Victoria 3800, Australia
- School of Media and Communication, Royal Melbourne Institute of Technology, Melbourne, Victoria 3000, Australia
- Noha Ferrah
- Department of Physiology, Monash University, Clayton, Victoria 3800, Australia
- Martin Giurfa
- Centre de Recherches sur la Cognition Animale, Université de Toulouse, UPS, 118 route de Narbonne, Toulouse Cedex 9 31062, France
- Centre de Recherches sur la Cognition Animale, CNRS, 118 route de Narbonne, Toulouse Cedex 9 31062, France
7. Huang Y, Spelke ES. Core knowledge and the emergence of symbols: the case of maps. J Cogn Dev 2015; 16:81-96. [PMID: 25642150] [PMCID: PMC4308729] [DOI: 10.1080/15248372.2013.784975]
Abstract
Map reading is unique to humans but present in people of diverse cultures, at ages as young as 4 years. Here we explore the nature and sources of this ability, asking both what geometric information young children use in maps and what non-symbolic systems are associated with their map-reading performance. Four-year-old children were given two tests of map-based navigation (placing an object within a small 3D surface layout at a position indicated on a 2D map), one focused on distance relations and the other on angle relations. Children also were given two non-symbolic tasks, testing their use of geometry for navigation (a reorientation task) and for visual form analysis (a deviant-detection task). Although children successfully performed both map tasks, their performance on the two map tasks was uncorrelated, providing evidence for distinct abilities to represent distance and angle on 2D maps of 3D surface layouts. In contrast, performance on each map task was associated with performance on one of the two non-symbolic tasks: map-based navigation by distance correlated with sensitivity to the shape of the environment in the reorientation task, whereas map-based navigation by angle correlated with sensitivity to the shapes of 2D forms and patterns in the deviant detection task. These findings suggest links between one uniquely human, emerging symbolic ability, geometric map use, and two core systems of geometry.
Affiliation(s)
- Yi Huang
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
8. Dittmar L. Static and dynamic snapshots for goal localization in insects? Commun Integr Biol 2014. [DOI: 10.4161/cib.13763]
9. Mertes M, Dittmar L, Egelhaaf M, Boeddeker N. Visual motion-sensitive neurons in the bumblebee brain convey information about landmarks during a navigational task. Front Behav Neurosci 2014; 8:335. [PMID: 25309374] [PMCID: PMC4173878] [DOI: 10.3389/fnbeh.2014.00335]
Abstract
Bees use visual memories to find the spatial location of previously learnt food sites. Characteristic learning flights help in acquiring these memories at newly discovered foraging locations, where landmarks (salient objects in the vicinity of the goal location) can play an important role in guiding the animal's homing behavior. Although behavioral experiments have shown that bees can use a variety of visual cues to distinguish objects as landmarks, the question of how landmark features are encoded by the visual system remains open. Recently, it was shown that motion cues are sufficient to allow bees to localize their goal using landmarks that can hardly be discriminated from the background texture. Here, we tested the hypothesis that motion-sensitive neurons in the bee's visual pathway provide information about such landmarks during a learning flight and might thus play a role in goal localization. We tracked learning flights of free-flying bumblebees (Bombus terrestris) in an arena with distinct visual landmarks, reconstructed the visual input during these flights, and replayed ego-perspective movies to tethered bumblebees while recording the activity of direction-selective wide-field neurons in their optic lobe. By comparing neuronal responses during a typical learning flight with responses to targeted modifications of landmark properties in this movie, we demonstrate that these objects are indeed represented in the bee's visual motion pathway. We find that object-induced responses vary little with object texture, which is in agreement with behavioral evidence. These neurons thus convey information about landmark properties that is useful for view-based homing.
Affiliation(s)
- Marcel Mertes
- Department of Neurobiology, Center of Excellence 'Cognitive Interaction Technology' (CITEC), Bielefeld University, Bielefeld, Germany
- Laura Dittmar
- Department of Neurobiology, Center of Excellence 'Cognitive Interaction Technology' (CITEC), Bielefeld University, Bielefeld, Germany
- Martin Egelhaaf
- Department of Neurobiology, Center of Excellence 'Cognitive Interaction Technology' (CITEC), Bielefeld University, Bielefeld, Germany
- Norbert Boeddeker
- Department of Neurobiology, Center of Excellence 'Cognitive Interaction Technology' (CITEC), Bielefeld University, Bielefeld, Germany
10. Hempel de Ibarra N, Vorobyev M, Menzel R. Mechanisms, functions and ecology of colour vision in the honeybee. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2014; 200:411-33. [PMID: 24828676] [PMCID: PMC4035557] [DOI: 10.1007/s00359-014-0915-1]
Abstract
Research in the honeybee has laid the foundations for our understanding of insect colour vision. The trichromatic colour vision of honeybees shares fundamental properties with primate and human colour perception, such as colour constancy, colour opponency, and the segregation of colour and brightness coding. Laborious efforts to reconstruct the colour vision pathway in the honeybee have provided detailed descriptions of neural connectivity and the properties of photoreceptors and interneurons in the optic lobes of the bee brain. The modelling of colour perception advanced with the establishment of colour discrimination models that were based on experimental data, the Colour-Opponent Coding and Receptor Noise-Limited models, which are important tools for the quantitative assessment of bee colour vision and colour-guided behaviours. Major insights into the visual ecology of bees have been gained by combining behavioural experiments and quantitative modelling, and by asking how bee vision has influenced the evolution of flower colours and patterns. Recently, research has focussed on the discrimination and categorisation of coloured patterns, colourful scenes and various other groupings of coloured stimuli, highlighting the bees' behavioural flexibility. The identification of perceptual mechanisms remains of fundamental importance for the interpretation of their learning strategies and performance in diverse experimental tasks.
Affiliation(s)
- N Hempel de Ibarra
- Department of Psychology, Centre for Research in Animal Behaviour, University of Exeter, Exeter, UK
11. Blue colour preference in honeybees distracts visual attention for learning closed shapes. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2013; 199:817-27. [PMID: 23918312] [DOI: 10.1007/s00359-013-0843-5]
Abstract
Spatial vision is an important cue for how honeybees (Apis mellifera) find flowers, and previous work has suggested that spatial learning in free-flying bees is exclusively mediated by achromatic input to the green photoreceptor channel. However, some data suggested that bees may be able to use alternative channels for shape processing, and recent work shows that conditioning type and training length can significantly influence bee learning and cue use. We thus tested the honeybees' ability to discriminate between two closed shapes using either absolute or differential conditioning, and using eight stimuli differing in their spectral characteristics. Consistent with previous work, green contrast enabled reliable shape learning under both types of conditioning, but surprisingly, we found that bees trained with appetitive-aversive differential conditioning could additionally use colour and/or UV contrast to enable shape discrimination. Interestingly, we found that a high blue contrast initially interferes with bee shape learning, probably due to the bees' innate preference for blue colours, but with increasing experience bees can learn a variety of spectral and/or colour cues to facilitate spatial learning. Thus, the set of spatial and spectral cues that bee pollinators use to find rewarding flowers appears to be richer than previously thought.
12. Spelke ES, Lee SA. Core systems of geometry in animal minds. Philos Trans R Soc Lond B Biol Sci 2012; 367:2784-93. [PMID: 22927577] [DOI: 10.1098/rstb.2012.0210]
Abstract
Research on humans from birth to maturity converges with research on diverse animals to reveal foundational cognitive systems in human and animal minds. The present article focuses on two such systems of geometry. One system represents places in the navigable environment by recording the distance and direction of the navigator from surrounding, extended surfaces. The other system represents objects by detecting the shapes of small-scale forms. These two systems show common signatures across animals, suggesting that they evolved in distant ancestral species. As children master symbolic systems such as maps and language, they come productively to combine representations from the two core systems of geometry in uniquely human ways; these combinations may give rise to abstract geometric intuitions. Studies of the ontogenetic and phylogenetic sources of abstract geometry therefore illuminate both human and animal cognition. Research on animals brings simpler model systems and richer empirical methods to bear on the analysis of abstract concepts in human minds. In return, research on humans, relating core cognitive capacities to symbolic abilities, sheds light on the content of representations in animal minds.
Affiliation(s)
- Elizabeth S Spelke
- Department of Psychology, Harvard University, 1130 William James Hall, 33 Kirkland Street, Cambridge, MA 02138, USA.
13. Egelhaaf M, Boeddeker N, Kern R, Kurtz R, Lindemann JP. Spatial vision in insects is facilitated by shaping the dynamics of visual input through behavioral action. Front Neural Circuits 2012; 6:108. [PMID: 23269913] [PMCID: PMC3526811] [DOI: 10.3389/fncir.2012.00108]
Abstract
Insects such as flies or bees, with their miniature brains, are able to control highly aerobatic flight maneuvers and to solve spatial vision tasks, such as avoiding collisions with obstacles, landing on objects, or even localizing a previously learnt inconspicuous goal on the basis of environmental cues. With regard to solving such spatial tasks, these insects still outperform man-made autonomous flying systems. To accomplish their extraordinary performance, flies and bees have been shown by their characteristic behavioral actions to actively shape the dynamics of the image flow on their eyes ("optic flow"). The neural processing of information about the spatial layout of the environment is greatly facilitated by segregating the rotational from the translational optic flow component through a saccadic flight and gaze strategy. This active vision strategy thus enables the nervous system to solve apparently complex spatial vision tasks in a particularly efficient and parsimonious way. The key idea of this review is that biological agents, such as flies or bees, acquire at least part of their strength as autonomous systems through active interactions with their environment and not by simply processing passively gained information about the world. These agent-environment interactions lead to adaptive behavior in surroundings of a wide range of complexity. Animals with even tiny brains, such as insects, are capable of performing extraordinarily well in their behavioral contexts by making optimal use of the closed action-perception loop. Model simulations and robotic implementations show that the smart biological mechanisms of motion computation and visually guided flight control might be helpful in finding technical solutions, for example, when designing micro air vehicles carrying a miniaturized, low-weight on-board processor.
Affiliation(s)
- Martin Egelhaaf
- Neurobiology and Centre of Excellence “Cognitive Interaction Technology”, Bielefeld University, Germany
14. Nørgaard T, Gagnon YL, Warrant EJ. Nocturnal homing: learning walks in a wandering spider? PLoS One 2012; 7:e49263. [PMID: 23145137] [PMCID: PMC3492270] [DOI: 10.1371/journal.pone.0049263]
Abstract
Homing by the nocturnal Namib Desert spider Leucorchestris arenicola (Araneae: Sparassidae) is comparable to homing in diurnal bees, wasps and ants in terms of path length and layout. The spiders' homing is based on vision, but their basic navigational strategy is unclear. Diurnal homing insects use memorised views of their home in snapshot-matching strategies. The insects learn the visual scenery that identifies their nest location during learning flights (e.g., bees and wasps) or learning walks (ants). These learning flights and walks are stereotyped movement patterns clearly different from other movement behaviours. If the visual homing of L. arenicola is also based on an image-matching strategy, the spiders are likely to exhibit learning walks similar to those of diurnal insects. To explore this possibility, we recorded departures of spiders from a new burrow in an unfamiliar area with infrared cameras and analysed their paths using computer tracking techniques. We found that L. arenicola performs distinct stereotyped movement patterns during the first part of its departures in an unfamiliar area, and that it seems to learn the appearance of its home during these movement patterns. We conclude that the spiders perform learning walks, which strongly suggests that L. arenicola uses a visual memory of the burrow location when homing.
16. Honeybees can discriminate between Monet and Picasso paintings. J Comp Physiol A Neuroethol Sens Neural Behav Physiol 2012; 199:45-55. [PMID: 23076444] [DOI: 10.1007/s00359-012-0767-5]
Abstract
Honeybees (Apis mellifera) have remarkable visual learning and discrimination abilities that extend beyond learning simple colours, shapes or patterns. They can discriminate landscape scenes, types of flowers, and even human faces. This suggests that, in spite of their small brain, honeybees have a highly developed capacity for processing complex visual information, comparable in many respects to that of vertebrates. Here, we investigated whether this capacity extends to complex images that humans distinguish on the basis of artistic style: Impressionist paintings by Monet and Cubist paintings by Picasso. We show that honeybees learned to simultaneously discriminate between five different Monet and Picasso paintings, and that they do not rely on luminance, colour, or spatial frequency information for discrimination. When presented with novel paintings of the same style, the bees even demonstrated some ability to generalize. This suggests that honeybees are able to discriminate Monet paintings from Picasso ones by extracting and learning the characteristic visual information inherent in each painting style. Our study further suggests that discrimination of artistic styles is not a higher cognitive function unique to humans, but reflects the general capacity of animals, from insects to humans, to extract and categorize the visual characteristics of complex images.
17. Sheehan MJ, Tibbetts EA. Specialized face learning is associated with individual recognition in paper wasps. Science 2011; 334:1272-5. [PMID: 22144625] [DOI: 10.1126/science.1211334]
Abstract
We demonstrate that the evolution of facial recognition in wasps is associated with specialized face-learning abilities. Polistes fuscatus can differentiate among normal wasp face images more rapidly and accurately than among nonface images or manipulated faces. Polistes metricus, a close relative without individual facial recognition, shows no such specialized face learning. Similar specializations for face learning are found in primates and other mammals, although P. fuscatus represents an independent evolution of specialization. Convergence toward face specialization in distant taxa, as well as divergence among closely related taxa with different recognition behavior, suggests that specialized cognition is surprisingly labile and may be adaptively shaped by species-specific selective pressures such as face recognition.
Affiliation(s)
- Michael J Sheehan
- Department of Ecology and Evolutionary Biology, University of Michigan, Ann Arbor, MI 48109, USA.
|
18
|
Braun E, Dittmar L, Boeddeker N, Egelhaaf M. Prototypical components of honeybee homing flight behavior depend on the visual appearance of objects surrounding the goal. Front Behav Neurosci 2012; 6:1. [PMID: 22279431 PMCID: PMC3260448 DOI: 10.3389/fnbeh.2012.00001] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2011] [Accepted: 01/03/2012] [Indexed: 11/13/2022] Open
Abstract
Honeybees use visual cues to relocate profitable food sources and their hive. What bees see while navigating depends on the appearance of the cues and on the bee's current position, orientation, and movement relative to them. Here we analyze the detailed flight behavior during the localization of a goal surrounded by cylinders that are characterized either by a high contrast in luminance and texture or by mostly motion contrast relative to the background. By relating flight behavior to the nature of the information available from these landmarks, we aim to identify behavioral strategies that facilitate the processing of visual information during goal localization. We decompose flight behavior into prototypical movements using clustering algorithms in order to reduce the behavioral complexity. The determined prototypical movements reflect the honeybee's saccadic flight pattern that largely separates rotational from translational movements. During phases of translational movements between fast saccadic rotations, the bees can gain information about the 3D layout of their environment from the translational optic flow. The prototypical movements reveal the prominent role of sideways and up- or downward movements, which can help bees to gather information about objects, particularly in the frontal visual field. We find that the occurrence of specific prototypes depends on the bees' distance from the landmarks and the feeder and that changing the texture of the landmarks evokes different prototypical movements. The adaptive use of different behavioral prototypes shapes the visual input and can facilitate information processing in the bees' visual system during local navigation.
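The clustering step described in this abstract (decomposing flight into prototypical movements) can be sketched in a few lines of k-means over per-frame velocity features. Everything below, from the feature choice to the synthetic data, is an illustrative assumption, not the authors' actual pipeline:

```python
import numpy as np

def flight_prototypes(velocities, k=4, iters=50):
    """Toy k-means clustering of per-frame velocity vectors
    (e.g. yaw rate, forward, sideways, vertical speed) into
    prototypical movements. `velocities` is an (n, 4) array;
    returns per-frame labels and the k prototype centroids."""
    # farthest-point initialization: deterministic and well spread out
    centroids = [velocities[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(velocities - c, axis=1) for c in centroids], axis=0)
        centroids.append(velocities[d.argmax()])
    centroids = np.array(centroids, dtype=float)
    for _ in range(iters):
        # assign each frame to its nearest prototype
        d = np.linalg.norm(velocities[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # update prototypes as cluster means
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = velocities[labels == j].mean(axis=0)
    return labels, centroids

# synthetic example: saccadic frames (high yaw rate) vs sideways translation
rng = np.random.default_rng(1)
saccades = rng.normal([300.0, 0.0, 0.0, 0.0], 10.0, size=(100, 4))
sideways = rng.normal([0.0, 0.0, 50.0, 0.0], 5.0, size=(100, 4))
data = np.vstack([saccades, sideways])
labels, protos = flight_prototypes(data, k=2)
```

With two well-separated movement types, the two recovered prototypes correspond to the saccadic and translational frames.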
Affiliation(s)
- Elke Braun
- Department of Neurobiology and Center of Excellence 'Cognitive Interaction Technology,' Bielefeld University Bielefeld, Germany
|
19
|
|
20
|
Skorupski P, Chittka L. Photoreceptor processing speed and input resistance changes during light adaptation correlate with spectral class in the bumblebee, Bombus impatiens. PLoS One 2011; 6:e25989. [PMID: 22046251 PMCID: PMC3203109 DOI: 10.1371/journal.pone.0025989] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2011] [Accepted: 09/15/2011] [Indexed: 11/18/2022] Open
Abstract
Colour vision depends on comparison of signals from photoreceptors with different spectral sensitivities. However, response properties of photoreceptor cells may differ in ways other than spectral tuning. In insects, for example, broadband photoreceptors, with a major sensitivity peak in the green region of the spectrum (>500 nm), drive fast visual processes, which are largely blind to chromatic signals from more narrowly-tuned photoreceptors with peak sensitivities in the blue and UV regions of the spectrum. In addition, electrophysiological properties of the photoreceptor membrane may result in differences in response dynamics of photoreceptors of similar spectral class between species, and different spectral classes within a species. We used intracellular electrophysiological techniques to investigate response dynamics of the three spectral classes of photoreceptor underlying trichromatic colour vision in the bumblebee, Bombus impatiens, and we compare these with previously published data from a related species, Bombus terrestris. In both species, we found significantly faster responses in green, compared with blue- or UV-sensitive photoreceptors, although all 3 photoreceptor types are slower in B. impatiens than in B. terrestris. Integration times for light-adapted B. impatiens photoreceptors (estimated from impulse response half-width) were 11.3 ± 1.6 ms for green photoreceptors compared with 18.6 ± 4.4 ms and 15.6 ± 4.4 ms for blue and UV, respectively. We also measured photoreceptor input resistance in dark- and light-adapted conditions. All photoreceptors showed a decrease in input resistance during light adaptation, but this decrease was considerably larger (declining to about 22% of the dark value) in green photoreceptors, compared to blue and UV (41% and 49%, respectively). Our results suggest that the conductances associated with light adaptation are largest in green photoreceptors, contributing to their greater temporal processing speed. We suggest that the faster temporal processing of green photoreceptors is related to their role in driving fast achromatic visual processes.
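The "impulse response half-width" used here as an estimate of integration time is simply the full width of the response at half its peak amplitude. A minimal sketch with a synthetic, roughly log-normal impulse response (the waveform and numbers are illustrative, not recorded data):

```python
import numpy as np

def half_width(t, response):
    """Full width at half maximum of an impulse response.
    `t` is time in ms, `response` the voltage deflection."""
    half = response.max() / 2.0
    above = np.where(response >= half)[0]  # samples at or above half-max
    return t[above[-1]] - t[above[0]]

# synthetic log-normal-shaped impulse response peaking near 15 ms
t = np.linspace(0.0, 60.0, 6001)                       # 0-60 ms, 0.01 ms steps
r = np.exp(-(np.log((t + 1e-9) / 15.0)) ** 2 / (2 * 0.3 ** 2))
print(round(half_width(t, r), 1))
```

For an asymmetric response like this, the half-width is dominated by the broad falling flank, which is why it serves as a compact single-number summary of temporal resolution.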
Affiliation(s)
- Peter Skorupski
- Biological and Experimental Psychology Group, School of Biological and Chemical Sciences, Queen Mary University of London, London, United Kingdom.
|
21
|
Dittmar L. Static and dynamic snapshots for goal localization in insects? Commun Integr Biol 2011; 4:17-20. [PMID: 21509170 DOI: 10.4161/cib.4.1.13763] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2010] [Accepted: 09/27/2010] [Indexed: 11/19/2022] Open
Abstract
Bees, wasps and ants navigate successfully between feeding sites and their nest, despite the small size of their brains, which contain fewer than a million neurons. A long history of studies examining the role of visual memories in homing behavior shows that insects can localize a goal by finding a close match between a memorized view at the goal location and their current view ("snapshot matching"). However, the concept of static snapshot matching might not explain all aspects of homing behavior, as honeybees are able to use landmarks that are statically camouflaged. In this case the landmarks are only detectable by relative motion cues between the landmark and the background, which the bees generate when they perform characteristic flight maneuvers close to the landmarks. The bees' navigation performance can be explained by a matching scheme based on optic flow amplitudes ("dynamic snapshot matching"). In this article, I will discuss the concept of dynamic snapshot matching in the light of previous literature.
Affiliation(s)
- Laura Dittmar
- Department of Neurobiology & Center of Excellence 'Cognitive Interaction Technology' Bielefeld University; Bielefeld, Germany
|
22
|
Dittmar L, Egelhaaf M, Stürzl W, Boeddeker N. The behavioral relevance of landmark texture for honeybee homing. Front Behav Neurosci 2011; 5:20. [PMID: 21541258 PMCID: PMC3083717 DOI: 10.3389/fnbeh.2011.00020] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2010] [Accepted: 04/03/2011] [Indexed: 11/15/2022] Open
Abstract
Honeybees visually pinpoint the location of a food source using landmarks. Studies on the role of visual memories have suggested that bees approach the goal by finding a close match between their current view and a memorized view of the goal location. The most relevant landmark features for this matching process seem to be their retinal positions, the size as defined by their edges, and their color. Recently, we showed that honeybees can use landmarks that are statically camouflaged, suggesting that motion cues are relevant as well. Currently it is unclear how bees weight these different landmark features when accomplishing navigational tasks, and whether this depends on their saliency. Since natural objects are often distinguished by their texture, we investigated the behavioral relevance and the interplay of the spatial configuration and the texture of landmarks. We show that landmark texture is a feature that bees memorize, and being given the opportunity to identify landmarks by their texture improves the bees' navigational performance. Landmark texture is weighted more strongly than landmark configuration when it provides the bees with positional information and when the texture is salient. In the vicinity of a landmark, honeybees changed their flight behavior according to its texture.
Affiliation(s)
- Laura Dittmar
- Department of Neurobiology and Center of Excellence 'Cognitive Interaction Technology', Bielefeld University Bielefeld, Germany
|
23
|
Abstract
Visual learning admits different levels of complexity, from the formation of a simple associative link between a visual stimulus and its outcome, to more sophisticated performances, such as object categorization or rule learning, that allow flexible responses beyond simple forms of learning. Not surprisingly, higher-order forms of visual learning have been studied primarily in vertebrates with larger brains, while simple visual learning has been the focus in animals with small brains such as insects. This dichotomy has recently changed as studies on visual learning in social insects have shown that these animals can master extremely sophisticated tasks. Here we review a spectrum of visual learning forms in social insects, from color and pattern learning, visual attention, and top-down image recognition, to interindividual recognition, conditional discrimination, category learning, and rule extraction. We analyze the necessity and sufficiency of simple associations to account for complex visual learning in Hymenoptera and discuss possible neural mechanisms underlying these visual performances.
Affiliation(s)
- Aurore Avarguès-Weber
- Centre de Recherches sur la Cognition Animale, Université de Toulouse, F-31062 Toulouse Cedex 9, France
|
24
|
Dittmar L, Stürzl W, Baird E, Boeddeker N, Egelhaaf M. Goal seeking in honeybees: matching of optic flow snapshots? J Exp Biol 2010; 213:2913-23. [DOI: 10.1242/jeb.043737] [Citation(s) in RCA: 62] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
SUMMARY
Visual landmarks guide humans and animals including insects to a goal location. Insects, with their miniature brains, have evolved a simple strategy to find their nests or profitable food sources; they approach a goal by finding a close match between the current view and a memorised retinotopic representation of the landmark constellation around the goal. Recent implementations of such a matching scheme use raw panoramic images (‘image matching’) and show that it is well suited to work on robots and even in natural environments. However, this matching scheme works only if relevant landmarks can be detected by their contrast and texture. Therefore, we tested how honeybees perform in localising a goal if the landmarks can hardly be distinguished from the background by such cues. We recorded the honeybees' flight behaviour with high-speed cameras and compared the search behaviour with computer simulations. We show that honeybees are able to use landmarks that have the same contrast and texture as the background and suggest that the bees use relative motion cues between the landmark and the background. These cues are generated on the eyes when the bee moves in a characteristic way in the vicinity of the landmarks. This extraordinary navigation performance can be explained by a matching scheme that includes snapshots based on optic flow amplitudes (‘optic flow matching’). This new matching scheme provides a robust strategy for navigation, as it depends primarily on the depth structure of the environment.
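The contrast between 'image matching' and 'optic flow matching' comes down to which quantity populates the snapshot vector; the mismatch computation itself can be identical. A schematic sketch (the arrays and sampling are hypothetical, not the authors' model):

```python
import numpy as np

def snapshot_mismatch(current, memorized):
    """Root-mean-square difference between two panoramic 'snapshots',
    given as 1-D arrays sampled over viewing directions. Works the same
    whether entries are pixel intensities (image matching) or optic
    flow amplitudes (optic flow matching)."""
    current = np.asarray(current, dtype=float)
    memorized = np.asarray(memorized, dtype=float)
    return float(np.sqrt(np.mean((current - memorized) ** 2)))

def best_position(candidate_views, memorized):
    """Index of the candidate position whose view best matches the snapshot."""
    errors = [snapshot_mismatch(v, memorized) for v in candidate_views]
    return int(np.argmin(errors))

# schematic: the goal view vs. two displaced views with perturbed amplitudes
goal = np.sin(np.linspace(0, 2 * np.pi, 72))   # 72 directions, 5 deg sampling
near = goal + 0.05                              # slightly displaced position
far = goal + 0.5                                # strongly displaced position
assert best_position([far, near, goal], goal) == 2
```

A homing agent would descend this mismatch surface toward its minimum; using flow amplitudes instead of intensities makes the minimum depend on the depth structure of the scene rather than on contrast and texture.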
Affiliation(s)
- Laura Dittmar
- Department of Neurobiology and Center of Excellence ‘Cognitive Interaction Technology’, Bielefeld University, 33615 Bielefeld, Germany
| | - Wolfgang Stürzl
- Department of Neurobiology and Center of Excellence ‘Cognitive Interaction Technology’, Bielefeld University, 33615 Bielefeld, Germany
| | - Emily Baird
- Department of Neurobiology and Center of Excellence ‘Cognitive Interaction Technology’, Bielefeld University, 33615 Bielefeld, Germany
| | - Norbert Boeddeker
- Department of Neurobiology and Center of Excellence ‘Cognitive Interaction Technology’, Bielefeld University, 33615 Bielefeld, Germany
| | - Martin Egelhaaf
- Department of Neurobiology and Center of Excellence ‘Cognitive Interaction Technology’, Bielefeld University, 33615 Bielefeld, Germany
|
25
|
Skorupski P, Chittka L. Photoreceptor spectral sensitivity in the bumblebee, Bombus impatiens (Hymenoptera: Apidae). PLoS One 2010; 5:e12049. [PMID: 20711523 PMCID: PMC2919406 DOI: 10.1371/journal.pone.0012049] [Citation(s) in RCA: 50] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/06/2010] [Accepted: 06/28/2010] [Indexed: 11/18/2022] Open
Abstract
The bumblebee Bombus impatiens is increasingly used as a model in comparative studies of colour vision, or in behavioural studies relying on perceptual discrimination of colour. However, full spectral sensitivity data on the photoreceptor inputs underlying colour vision are not available for B. impatiens. Since most known bee species are trichromatic, with photoreceptor spectral sensitivity peaks in the UV, blue and green regions of the spectrum, data from a related species, where spectral sensitivity measurements have been made, are often applied to B. impatiens. Nevertheless, species differences in spectral tuning of equivalent photoreceptor classes may result in peaks that differ by several nm, which may have small but significant effects on colour discrimination ability. We therefore used intracellular recording to measure photoreceptor spectral sensitivity in B. impatiens. Spectral peaks were estimated at 347, 424 and 539 nm for UV, blue and green receptors, respectively, suggesting that this species is a UV-blue-green trichromat. Photoreceptor spectral sensitivity peaks are similar to previous measurements from Bombus terrestris, although there is a significant difference in the peak sensitivity of the blue receptor, which is shifted in the short wave direction by 12–13 nm in B. impatiens compared to B. terrestris.
Affiliation(s)
- Peter Skorupski
- Biological and Experimental Psychology Group, School of Biological and Chemical Sciences, Queen Mary University of London, London, United Kingdom.
|
26
|
Boeddeker N, Dittmar L, Stürzl W, Egelhaaf M. The fine structure of honeybee head and body yaw movements in a homing task. Proc Biol Sci 2010; 277:1899-906. [PMID: 20147329 PMCID: PMC2871881 DOI: 10.1098/rspb.2009.2326] [Citation(s) in RCA: 73] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2009] [Accepted: 01/22/2010] [Indexed: 11/12/2022] Open
Abstract
Honeybees turn their thorax and thus their flight motor to change direction or to fly sideways. If the bee's head were fixed to its thorax, such movements would have great impact on vision. Head movements independent of thorax orientation can stabilize gaze and thus play an important and active role in shaping the structure of the visual input the animal receives. Here, we investigate how gaze and flight control interact in a homing task. We use high-speed video equipment to record the head and body movements of honeybees approaching and departing from a food source that was located between three landmarks in an indoor flight arena. During these flights, the bees' trajectories consist of straight flight segments combined with rapid turns. These short and fast yaw turns ('saccades') are in most cases accompanied by even faster head yaw turns that start about 8 ms earlier than the body saccades. Between saccades, gaze stabilization leads to a behavioural elimination of rotational components from the optical flow pattern, which facilitates depth perception from motion parallax.
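Saccades of the kind analyzed here are typically segmented by thresholding yaw velocity; a toy sketch on a synthetic yaw trace (the threshold value and the trace are assumptions for illustration, not values from the paper):

```python
import numpy as np

def detect_saccades(yaw_deg, dt_ms=1.0, threshold_deg_per_s=300.0):
    """Return (start, end) sample-index intervals where absolute yaw
    velocity exceeds a threshold, i.e. candidate saccades. Assumes the
    trace starts and ends below threshold."""
    vel = np.gradient(yaw_deg, dt_ms / 1000.0)         # deg/s
    fast = np.abs(vel) > threshold_deg_per_s
    edges = np.flatnonzero(np.diff(fast.astype(int)))  # rising/falling edges
    starts = edges[::2] + 1
    ends = edges[1::2] + 1
    return list(zip(starts, ends))

# synthetic trace: straight flight, a 30-degree turn over 50 ms, straight again
yaw = np.concatenate([np.zeros(100), np.linspace(0.0, 30.0, 50), np.full(100, 30.0)])
saccades = detect_saccades(yaw)
```

The head-body lag reported in the abstract (head turns leading body saccades by about 8 ms) would then be measured as the offset between interval onsets detected separately on head-yaw and body-yaw traces.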
Affiliation(s)
- Norbert Boeddeker
- Bielefeld University, Neurobiology and Center of Excellence Cognitive Interaction Technology, Bielefeld, Germany.
|
27
|
von der Emde G, Behr K, Bouton B, Engelmann J, Fetz S, Folde C. 3-Dimensional Scene Perception during Active Electrolocation in a Weakly Electric Pulse Fish. Front Behav Neurosci 2010; 4:26. [PMID: 20577635 PMCID: PMC2889722 DOI: 10.3389/fnbeh.2010.00026] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2010] [Accepted: 05/04/2010] [Indexed: 11/17/2022] Open
Abstract
Weakly electric fish use active electrolocation for object detection and orientation in their environment even in complete darkness. The African mormyrid Gnathonemus petersii can detect object parameters, such as material, size, shape, and distance. Here, we tested whether individuals of this species can learn to identify 3-dimensional objects independently of the training conditions and independently of the object's position in space (rotation-invariance; size-constancy). Individual G. petersii were trained in a two-alternative forced-choice procedure to electrically discriminate between a 3-dimensional object (S+) and several alternative objects (S−). Fish were then tested whether they could identify the S+ among novel objects and whether single components of S+ were sufficient for recognition. Size-constancy was investigated by presenting the S+ together with a larger version at different distances. Rotation-invariance was tested by rotating S+ and/or S− in 3D. Our results show that electrolocating G. petersii could (1) recognize an object independently of the S− used during training. When only single components of a complex S+ were offered, recognition of S+ was more or less affected depending on which part was used. (2) Object-size was detected independently of object distance, i.e. fish showed size-constancy. (3) The majority of the fishes tested recognized their S+ even if it was rotated in space, i.e. these fishes showed rotation-invariance. (4) Object recognition was restricted to the near field around the fish and failed when objects were moved more than about 4 cm away from the animals. Our results indicate that even in complete darkness our G. petersii were capable of complex 3-dimensional scene perception using active electrolocation.
Affiliation(s)
- Gerhard von der Emde
- Neuroethology/Sensory Ecology, Institute of Zoology, University of Bonn Bonn, Germany
|
28
|
Differences in photoreceptor processing speed for chromatic and achromatic vision in the bumblebee, Bombus terrestris. J Neurosci 2010; 30:3896-903. [PMID: 20237260 DOI: 10.1523/jneurosci.5700-09.2010] [Citation(s) in RCA: 65] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Fast detection of visual change can be mediated by visual processes that ignore chromatic aspects of the visual signal, relying on inputs from a single photoreceptor class (or pooled input from similar classes). There is an established link between photoreceptor processing speed (in achromatic vision) and visual ecology. Highly maneuverable flies, for example, have the fastest known photoreceptors, relying on metabolically expensive membrane conductances to boost performance. Less active species forgo this investment and their photoreceptors are correspondingly slower. However, within a species, additional classes of photoreceptors are required to extract chromatic information, and the question therefore arises as to whether there might be within-species differences in processing speed between photoreceptors involved in chromatic processing compared with those feeding into fast achromatic visual systems. We used intracellular recording to compare light-adapted impulse responses in three spectral classes of photoreceptor in the bumblebee. Green-sensitive photoreceptors, which are known to provide achromatic contrast for motion detection, generated the fastest impulse responses (half-width, Δt = 7.9 ± 1.1 ms). Blue- and UV-sensitive photoreceptors (which are involved in color vision) were significantly slower (9.8 ± 1.2 and 12.3 ± 1.8 ms, respectively). The faster responses of green photoreceptors are in keeping with their role in fast achromatic vision. However, blue and UV photoreceptors are still relatively fast in comparison with many other insect species, as well as vertebrate cones, suggesting a significant investment in photoreceptor processing for color vision in bees. We discuss this finding in relation to bees' requirement for accurate learning of flower color, especially in conditions of variable luminance contrast.
|
29
|
Kulahci IG, Dornhaus A, Papaj DR. Multimodal signals enhance decision making in foraging bumble-bees. Proc Biol Sci 2008; 275:797-802. [PMID: 18198150 DOI: 10.1098/rspb.2007.1176] [Citation(s) in RCA: 114] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Multimodal signals are common in nature and have recently attracted considerable attention. Despite this interest, their function is not well understood. We test the hypothesis that multimodal signals improve decision making in receivers by influencing the speed and the accuracy of their decisions. We trained bumble-bees (Bombus impatiens) to discriminate between artificial flowers that differed either in one modality, visual (specifically, shape) or olfactory, or in two modalities, visual plus olfactory. Bees trained on multimodal flowers learned the rewarding flowers faster than those trained on flowers that differed only in the visual modality and, in extinction trials, visited the previously rewarded flowers at a higher rate than bees trained on unimodal flowers. Overall, bees showed a speed-accuracy trade-off; bees that made slower decisions achieved higher accuracy levels. Foraging on multimodal flowers did not affect the slope of the speed-accuracy relationship, but resulted in a higher intercept, indicating that multimodal signals were associated with consistently higher accuracy across a range of decision speeds. Our results suggest that bees make more effective decisions when flowers signal in more than one modality, and confirm the importance of studying signal components together rather than separately.
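The intercept comparison reported above amounts to fitting accuracy against decision time separately for each signaling condition and comparing the fitted intercepts at matched slope. A schematic with made-up numbers (the data are fabricated purely to illustrate the analysis, not taken from the study):

```python
import numpy as np

def fit_speed_accuracy(decision_time, accuracy):
    """Least-squares line: accuracy = slope * time + intercept."""
    slope, intercept = np.polyfit(decision_time, accuracy, deg=1)
    return slope, intercept

t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # decision time (s), illustrative
acc_unimodal = 0.05 * t + 0.60                 # slower decisions -> more accurate
acc_multimodal = 0.05 * t + 0.75               # same slope, higher intercept

s_uni, b_uni = fit_speed_accuracy(t, acc_unimodal)
s_multi, b_multi = fit_speed_accuracy(t, acc_multimodal)
```

A near-zero slope difference with a positive intercept difference is the signature the authors describe: multimodal signaling shifts the whole speed-accuracy curve upward rather than changing its steepness.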
Affiliation(s)
- Ipek G Kulahci
- Department of Ecology and Evolutionary Biology, University of Arizona, 1041 East Lowell Street, Tucson, AZ 85721, USA.
|
30
|
von der Emde G, Fetz S. Distance, shape and more: recognition of object features during active electrolocation in a weakly electric fish. J Exp Biol 2007; 210:3082-95. [PMID: 17704083 DOI: 10.1242/jeb.005694] [Citation(s) in RCA: 48] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
SUMMARY
In the absence of light, the weakly electric fish Gnathonemus petersii detects and distinguishes objects in the environment through active electrolocation. In order to test which features of an object the fish use under these conditions to discriminate between differently shaped objects, we trained eight individuals in a food-rewarded, two-alternative, forced-choice procedure. All fish learned to discriminate between two objects of different shapes and volumes. When new object combinations were offered in non-rewarded test trials, fish preferred those objects that resembled the one they had been trained to (S+) and avoided objects resembling the one that had not been rewarded (S–). For a decision, fish paid attention to the relative differences between the two objects they had to discriminate. For discrimination, fish used several object features, the most important ones being volume, material and shape. The importance of shape was demonstrated by reducing the objects to their 3-dimensional contours, which sufficed for the fish to distinguish differently shaped objects. Our results also showed that fish attended strongly to the feature 'volume', because all individuals tended to avoid the larger one of two objects. When confronted with metal versus plastic objects, all fish avoided metal and preferred plastic objects, irrespective of training. In addition to volume, material and shape, fish attended to additional parameters, such as corners or rounded edges. When confronted with two unknown objects, fish weighed up the positive and negative properties of these novel objects and based their decision on the outcome of this comparison. Our results suggest that fish are able to link and assemble local features of an electrolocation pattern to construct a representation of an object, suggesting that some form of a feature extraction mechanism enables them to solve a complex object recognition task.
Affiliation(s)
- Gerhard von der Emde
- Institut für Zoologie, Universität Bonn, Endenicher Allee 11-13, 53115 Bonn, Germany.
|
31
|
Affiliation(s)
- Mandyam V Srinivasan
- Centre of Excellence in Vision Science, Research School of Biological Sciences, Australian National University, PO Box 475, Canberra, ACT 2601, Australia.
| |
|