1. Demartsev V, Averly B, Johnson-Ulrich L, Sridhar VH, Leonardos L, Vining A, Thomas M, Manser MB, Strandburg-Peshkin A. Mapping vocal interactions in space and time differentiates signal broadcast versus signal exchange in meerkat groups. Philos Trans R Soc Lond B Biol Sci 2024; 379:20230188. [PMID: 38768207] [DOI: 10.1098/rstb.2023.0188]
Abstract
Animal vocal communication research traditionally focuses on acoustic and contextual features of calls, yet substantial information is also contained in response selectivity and timing during vocalization events. By examining the spatiotemporal structure of vocal interactions, we can distinguish between 'broadcast' and 'exchange' signalling modes, with the former potentially serving to transmit signallers' general state and the latter reflecting more interactive signalling behaviour. Here, we tracked the movements and vocalizations of wild meerkat (Suricata suricatta) groups simultaneously using collars to explore this distinction. We found evidence that close calls (used for maintaining group cohesion) are given as signal exchanges. They are typically given in temporally structured call-response sequences and are also strongly affected by the social environment, with individuals calling more when they have more neighbours and juveniles responding more to adults than the reverse. In contrast, short note calls appear mainly in sequences produced by single individuals and show little dependence on social surroundings, suggesting a broadcast signalling mode. Despite these differences, both call categories show similar clustering in space and time at a group level. Our results highlight how the fine-scale structure of vocal interactions can give important insights into the usage and function of signals in social groups. This article is part of the theme issue 'The power of sound: unravelling how acoustic communication shapes group dynamics.'
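The broadcast/exchange distinction drawn here can be made concrete with a toy timing rule. The sketch below labels a call an "exchange" when another individual calls shortly afterwards, and a "broadcast" otherwise; all call times, individual IDs, and the 0.5 s response window are invented for illustration and are not taken from the study.

```python
from collections import Counter

def classify_events(calls, window=0.5):
    """Label each call 'exchange' if a different individual calls within
    `window` seconds afterwards, else 'broadcast'.

    `calls` is a list of (time_s, individual_id) tuples, sorted by time.
    """
    labels = []
    for i, (t, ind) in enumerate(calls):
        answered = any(ind2 != ind and 0.0 < t2 - t <= window
                       for t2, ind2 in calls[i + 1:])
        labels.append("exchange" if answered else "broadcast")
    return labels

calls = [(0.0, "A"), (0.3, "B"), (2.0, "A"), (5.0, "C"), (5.2, "A")]
print(Counter(classify_events(calls)))
```

A real analysis would compare such response counts against a permuted-timing null rather than a fixed window, but the rule above captures the basic call-response structure the abstract describes.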
Affiliation(s)
- Vlad Demartsev
  - Department of Biology, University of Konstanz, Konstanz 78464, Germany
  - Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Konstanz 78464, Germany
  - Department for the Ecology of Animal Societies, Max Planck Institute of Animal Behavior, Konstanz 78467, Germany
  - Kalahari Research Centre, Van Zylsrus 8467, South Africa
- Baptiste Averly
  - Department of Biology, University of Konstanz, Konstanz 78464, Germany
  - Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Konstanz 78464, Germany
  - Department for the Ecology of Animal Societies, Max Planck Institute of Animal Behavior, Konstanz 78467, Germany
  - Kalahari Research Centre, Van Zylsrus 8467, South Africa
- Lily Johnson-Ulrich
  - Kalahari Research Centre, Van Zylsrus 8467, South Africa
  - Department of Evolutionary Biology and Environmental Studies, University of Zurich, Zurich 8057, Switzerland
- Vivek H Sridhar
  - Department of Biology, University of Konstanz, Konstanz 78464, Germany
  - Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Konstanz 78464, Germany
  - Department for the Ecology of Animal Societies, Max Planck Institute of Animal Behavior, Konstanz 78467, Germany
- Leonardos Leonardos
  - Department of Biology, University of Konstanz, Konstanz 78464, Germany
  - Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Konstanz 78464, Germany
  - Department for the Ecology of Animal Societies, Max Planck Institute of Animal Behavior, Konstanz 78467, Germany
- Alexander Vining
  - Department of Biology, University of Konstanz, Konstanz 78464, Germany
  - Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Konstanz 78464, Germany
  - Animal Behavior Graduate Group, University of California, Davis, CA 95616, USA
- Mara Thomas
  - Department of Biology, University of Konstanz, Konstanz 78464, Germany
  - Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Konstanz 78464, Germany
- Marta B Manser
  - Kalahari Research Centre, Van Zylsrus 8467, South Africa
  - Department of Evolutionary Biology and Environmental Studies, University of Zurich, Zurich 8057, Switzerland
  - Interdisciplinary Center for the Evolution of Language, University of Zurich, Zurich 8057, Switzerland
- Ariana Strandburg-Peshkin
  - Department of Biology, University of Konstanz, Konstanz 78464, Germany
  - Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Konstanz 78464, Germany
  - Department for the Ecology of Animal Societies, Max Planck Institute of Animal Behavior, Konstanz 78467, Germany
  - Kalahari Research Centre, Van Zylsrus 8467, South Africa
2. Xie B, Daunay V, Petersen TC, Briefer EF. Vocal repertoire and individuality in the plains zebra (Equus quagga). R Soc Open Sci 2024; 11:240477. [PMID: 39076369] [PMCID: PMC11286140] [DOI: 10.1098/rsos.240477]
Abstract
Acoustic signals are vital in animal communication, and quantifying them is fundamental for understanding animal behaviour and ecology. Vocalizations can be classified into acoustically and functionally or contextually distinct categories, but establishing these categories can be challenging. Newly developed methods, such as machine learning, can provide solutions for classification tasks. The plains zebra is known for its loud and specific vocalizations, yet limited knowledge exists on the structure and information content of its vocalizations. In this study, we employed both feature-based and spectrogram-based algorithms, incorporating supervised and unsupervised machine learning methods to enhance robustness in categorizing zebra vocalization types. Additionally, we implemented a permuted discriminant function analysis to examine the individual identity information contained in the identified vocalization types. The findings revealed at least four distinct vocalization types ('snort', 'soft snort', 'squeal' and 'quagga quagga'), with individual differences observed mostly in snorts and, to a lesser extent, in squeals. Analyses based on acoustic features outperformed those based on spectrograms, but each excelled in characterizing different vocalization types. We thus recommend the combined use of these two approaches. This study offers valuable insights into plains zebra vocalization, with implications for future comprehensive explorations in animal communication.
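The permuted discriminant function analysis mentioned above boils down to comparing classification accuracy on true individual labels against a null distribution obtained by shuffling those labels. A minimal sketch with synthetic "acoustic features" (scikit-learn's LDA stands in for the DFA; the data, the per-individual offsets, and the 99-permutation count are all illustrative, not the authors' pipeline):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Toy acoustic features: 3 "individuals", 20 calls each; an identity signal
# is injected as a per-individual offset (synthetic data for illustration).
X = rng.normal(size=(60, 4)) + np.repeat(rng.normal(scale=2.0, size=(3, 4)), 20, axis=0)
y = np.repeat([0, 1, 2], 20)

observed = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()

# Null distribution: shuffle individual labels and re-score.
null = [cross_val_score(LinearDiscriminantAnalysis(), X, rng.permutation(y), cv=5).mean()
        for _ in range(99)]
p_value = (1 + sum(n >= observed for n in null)) / 100
print(f"accuracy={observed:.2f}, p={p_value:.2f}")
```

If the observed accuracy exceeds nearly all shuffled-label accuracies, the calls carry an individual signature beyond chance, which is the logic behind the pDFA result for snorts and squeals.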
Affiliation(s)
- Bing Xie
  - Behavioural Ecology Group, Section for Ecology and Evolution, University of Copenhagen, Copenhagen, Denmark
  - Research and Conservation, Copenhagen Zoo, Roskildevej 38, 2000 Frederiksberg, Denmark
- Virgile Daunay
  - Behavioural Ecology Group, Section for Ecology and Evolution, University of Copenhagen, Copenhagen, Denmark
  - Laboratoire Dynamique du Langage, CNRS, University Lumière Lyon 2, Lyon, France
  - ENES Bioacoustics Research Lab, CRNL, CNRS, Inserm, University of Saint-Etienne, 42100 Saint-Etienne, France
- Elodie F. Briefer
  - Behavioural Ecology Group, Section for Ecology and Evolution, University of Copenhagen, Copenhagen, Denmark
3. Mielke A, Badihi G, Graham KE, Grund C, Hashimoto C, Piel AK, Safryghin A, Slocombe KE, Stewart F, Wilke C, Zuberbühler K, Hobaiter C. Many morphs: Parsing gesture signals from the noise. Behav Res Methods 2024. [PMID: 38438657] [DOI: 10.3758/s13428-024-02368-6]
Abstract
Parsing signals from noise is a general problem for signallers and recipients, and for researchers studying communicative systems. Substantial efforts have been invested in comparing how other species encode information and meaning, and how signalling is structured. However, research depends on identifying and discriminating signals that represent meaningful units of analysis. Early approaches to defining signal repertoires applied top-down approaches, classifying cases into predefined signal types. Recently, more labour-intensive methods have taken a bottom-up approach, describing detailed features of each signal and clustering cases based on patterns of similarity in multi-dimensional feature-space that were previously undetectable. Nevertheless, it remains essential to assess whether the resulting repertoires are composed of relevant units from the perspective of the species using them, and to redefine repertoires when additional data become available. In this paper, we provide a framework that takes data from the largest set of wild chimpanzee (Pan troglodytes) gestures currently available, splitting gesture types at a fine scale based on modifying features of gesture expression using latent class analysis (a model-based cluster detection algorithm for categorical variables), and then determining whether this splitting process reduces uncertainty about the goal or community of the gesture. Our method allows different features of interest to be incorporated into the splitting process, providing substantial future flexibility across, for example, species, populations, and levels of signal granularity. In doing so, we provide a powerful tool allowing researchers interested in gestural communication to establish repertoires of relevant units for subsequent analyses within and between systems of communication.
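The splitting criterion described here, whether finer morphs reduce uncertainty about the gesture's goal, can be expressed as a drop in conditional entropy. A toy sketch (gesture and goal names are invented; the paper itself uses latent class analysis to propose the splits, which this sketch does not implement):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (bits) of a list of categorical labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def conditional_entropy(morphs, goals):
    """H(goal | morph): remaining uncertainty about the goal once the
    morph is known, averaged over morph frequencies."""
    n = len(goals)
    h = 0.0
    for morph in set(morphs):
        sub = [g for m, g in zip(morphs, goals) if m == morph]
        h += len(sub) / n * entropy(sub)
    return h

goals  = ["groom", "groom", "travel", "travel"]
lumped = ["beckon"] * 4                              # one coarse gesture type
split  = ["beckon-slow"] * 2 + ["beckon-fast"] * 2   # two candidate morphs
print(conditional_entropy(lumped, goals), conditional_entropy(split, goals))
```

Here the split removes all uncertainty about the goal (1 bit down to 0), so it would be retained; a split that left H(goal | morph) unchanged would be rejected as noise.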
Affiliation(s)
- Alexander Mielke
  - Wild Minds Lab, School of Psychology and Neuroscience, University of St Andrews, St Andrews, UK
  - School of Biological and Behavioural Sciences, Queen Mary University of London, London, UK
- Gal Badihi
  - Wild Minds Lab, School of Psychology and Neuroscience, University of St Andrews, St Andrews, UK
- Kirsty E Graham
  - Wild Minds Lab, School of Psychology and Neuroscience, University of St Andrews, St Andrews, UK
- Charlotte Grund
  - Wild Minds Lab, School of Psychology and Neuroscience, University of St Andrews, St Andrews, UK
- Chie Hashimoto
  - Primate Research Institute, Kyoto University, Kyoto, Japan
- Alex K Piel
  - Department of Anthropology, University College London, London, UK
  - Department of Human Origins, Max Planck Institute of Evolutionary Anthropology, Leipzig, Germany
- Alexandra Safryghin
  - Wild Minds Lab, School of Psychology and Neuroscience, University of St Andrews, St Andrews, UK
- Fiona Stewart
  - Department of Anthropology, University College London, London, UK
  - Department of Human Origins, Max Planck Institute of Evolutionary Anthropology, Leipzig, Germany
- Claudia Wilke
  - Department of Psychology, University of York, York, UK
- Klaus Zuberbühler
  - Institute of Biology, University of Neuchâtel, Neuchâtel, Switzerland
- Catherine Hobaiter
  - Wild Minds Lab, School of Psychology and Neuroscience, University of St Andrews, St Andrews, UK
4. Martin K, Cornero FM, Clayton NS, Adam O, Obin N, Dufour V. Vocal complexity in a socially complex corvid: gradation, diversity and lack of common call repertoire in male rooks. R Soc Open Sci 2024; 11:231713. [PMID: 38204786] [PMCID: PMC10776222] [DOI: 10.1098/rsos.231713]
Abstract
Vocal communication is widespread in animals, with vocal repertoires of varying complexity. The social complexity hypothesis predicts that species may need high vocal complexity to deal with complex social organization (e.g. to maintain a variety of different interindividual relations). We quantified the vocal complexity of two geographically distant captive colonies of rooks, a corvid species with complex social organization and cognitive performance, but understudied vocal abilities. We quantified the diversity and gradation of their repertoire, as well as the inter-individual similarity at the vocal unit level. We found that males produced call units with lower diversity and gradation than females, while song units did not differ between sexes. Surprisingly, while females produced highly similar call repertoires, even between colonies, each individual male produced an almost completely different call repertoire from any other individual. These findings raise questions about how male rooks communicate with their social partners. We suggest that each male may actively seek to remain vocally distinct, which could be an asset in their frequently changing social environment. We conclude that inter-individual similarity, an understudied aspect of vocal repertoires, should also be considered as a measure of vocal complexity.
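Inter-individual repertoire similarity, the measure the authors argue deserves more attention, can be illustrated with a simple Jaccard index over each individual's set of call types. The call-type names below are invented for the sketch, not taken from the rook data:

```python
def repertoire_similarity(rep_a, rep_b):
    """Jaccard similarity between two individuals' sets of call types:
    |intersection| / |union|, ranging from 0 (disjoint) to 1 (identical)."""
    a, b = set(rep_a), set(rep_b)
    return len(a & b) / len(a | b)

female_1 = {"caw", "rattle", "chatter"}
female_2 = {"caw", "rattle", "kow"}
male_1 = {"burble", "click"}

print(repertoire_similarity(female_1, female_2))  # shared units -> high
print(repertoire_similarity(female_1, male_1))    # no shared units -> 0
```

Under this measure, the study's finding reads as high pairwise similarity among females (even across colonies) and near-zero pairwise similarity among males.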
Affiliation(s)
- Killian Martin
  - PRC, UMR 7247, Ethologie Cognitive et Sociale, CNRS-IFCE-INRAE-Université de Tours, Strasbourg, France
- Olivier Adam
  - Institut Jean Le Rond d'Alembert, UMR 7190, CNRS-Sorbonne Université, 75005 Paris, France
  - Institut des Neurosciences Paris-Saclay, UMR 9197, CNRS-Université Paris Sud, Orsay, France
- Nicolas Obin
  - STMS Lab, IRCAM, CNRS-Sorbonne Université, Paris, France
- Valérie Dufour
  - PRC, UMR 7247, Ethologie Cognitive et Sociale, CNRS-IFCE-INRAE-Université de Tours, Strasbourg, France
5. Fleishman E, Cholewiak D, Gillespie D, Helble T, Klinck H, Nosal EM, Roch MA. Ecological inferences about marine mammals from passive acoustic data. Biol Rev Camb Philos Soc 2023; 98:1633-1647. [PMID: 37142263] [DOI: 10.1111/brv.12969]
Abstract
Monitoring on the basis of sound recordings, or passive acoustic monitoring, can complement or serve as an alternative to real-time visual or aural monitoring of marine mammals and other animals by human observers. Passive acoustic data can support the estimation of common, individual-level ecological metrics, such as presence, detection-weighted occupancy, abundance and density, population viability and structure, and behaviour. Passive acoustic data also can support estimation of some community-level metrics, such as species richness and composition. The feasibility of estimation and the certainty of estimates are highly context dependent, and understanding the factors that affect the reliability of measurements is useful for those considering whether to use passive acoustic data. Here, we review basic concepts and methods of passive acoustic sampling in marine systems that often are applicable to marine mammal research and conservation. Our ultimate aim is to facilitate collaboration among ecologists, bioacousticians, and data analysts. Ecological applications of passive acoustics require one to make decisions about sampling design, which in turn requires consideration of sound propagation, sampling of signals, and data storage. One also must make decisions about signal detection and classification and evaluation of the performance of algorithms for these tasks. Investment in the research and development of systems that automate detection and classification, including machine learning, is increasing. Passive acoustic monitoring is more reliable for detection of species presence than for estimation of other species-level metrics, and its use to distinguish among individual animals remains difficult. However, information about detection probability, vocalisation or cue rate, and relations between vocalisations and the number and behaviour of animals increases the feasibility of estimating abundance or density. Most sensor deployments are fixed in space or are sporadic, making temporal turnover in species composition more tractable to estimate than spatial turnover. Collaborations between acousticians and ecologists are most likely to be successful and rewarding when all partners critically examine and share a fundamental understanding of the target variables, sampling process, and analytical methods.
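The link the review describes between cue rate, detection probability, and density can be illustrated with the basic cue-counting relationship D = n / (p · a · T · r). A back-of-envelope sketch (all numbers invented; real analyses add false-positive, edge, and availability corrections):

```python
def cue_density(n_cues, det_prob, area_km2, duration_h, cue_rate_per_h):
    """Back-of-envelope cue-counting density estimate (animals per km^2).

    Detected cues, corrected for detection probability, divided by the
    monitored area, recording time, and the per-animal cue rate.
    """
    return n_cues / (det_prob * area_km2 * duration_h * cue_rate_per_h)

# 720 detected calls over 24 h across 100 km^2, with an estimated detection
# probability of 0.5 and a cue rate of 6 calls per hour per animal.
print(cue_density(720, 0.5, 100.0, 24.0, 6.0))  # animals per km^2
```

The estimate is only as good as the detection probability and cue rate plugged in, which is the review's point about why those auxiliary quantities make abundance estimation feasible.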
Affiliation(s)
- Erica Fleishman
  - College of Earth, Ocean, and Atmospheric Sciences, Oregon State University, Corvallis, OR 97331, USA
- Danielle Cholewiak
  - Northeast Fisheries Science Center, National Marine Fisheries Service, National Oceanic and Atmospheric Administration, Woods Hole, MA 02543, USA
- Douglas Gillespie
  - Sea Mammal Research Unit, Scottish Oceans Institute, University of St Andrews, St Andrews KY16 9XL, UK
- Tyler Helble
  - Naval Information Warfare Center Pacific, San Diego, CA 92152, USA
- Holger Klinck
  - K. Lisa Yang Center for Conservation Bioacoustics, Cornell Lab of Ornithology, Cornell University, Ithaca, NY 14850, USA
- Eva-Marie Nosal
  - Department of Ocean and Resources Engineering, University of Hawai'i at Manoa, Honolulu, HI 96822, USA
- Marie A Roch
  - Department of Computer Science, San Diego State University, San Diego, CA 92182, USA
6. Best P, Paris S, Glotin H, Marxer R. Deep audio embeddings for vocalisation clustering. PLoS One 2023; 18:e0283396. [PMID: 37428759] [DOI: 10.1371/journal.pone.0283396]
Abstract
The study of non-human animals' communication systems generally relies on the transcription of vocal sequences using a finite set of discrete units. This set is referred to as a vocal repertoire, which is specific to a species or a sub-group of a species. When conducted by human experts, the formal description of vocal repertoires can be laborious and/or biased. This motivates computerised assistance for this procedure, for which machine learning algorithms represent a good opportunity. Unsupervised clustering algorithms are suited to grouping close points together, provided a relevant representation. This paper therefore studies a new method for encoding vocalisations, allowing automatic clustering to ease vocal repertoire characterisation. Borrowing from deep representation learning, we use a convolutional auto-encoder network to learn an abstract representation of vocalisations. We report on the quality of the learnt representation, as well as that of state-of-the-art methods, by quantifying their agreement with expert-labelled vocalisation types from 8 datasets of other studies across 6 species (birds and marine mammals). With this benchmark, we demonstrate that auto-encoders improve the relevance of vocalisation representations for repertoire characterisation while requiring very few settings. We also publish a Python package for the bioacoustic community to train their own vocalisation auto-encoders or use a pretrained encoder to browse vocal repertoires and ease unit-wise annotation.
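The benchmark step, quantifying how well clusters of a learnt representation agree with expert-labelled vocalisation types, can be sketched with a clustering metric such as adjusted mutual information. The embeddings below are synthetic Gaussian blobs standing in for real auto-encoder outputs; the paper learns its representation with a convolutional auto-encoder, which this sketch does not train:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_mutual_info_score

rng = np.random.default_rng(1)

# Stand-in for learnt embeddings: 3 vocalisation types as Gaussian blobs in
# a 16-d latent space, 30 examples each (synthetic, for illustration only).
expert_labels = np.repeat([0, 1, 2], 30)
centres = rng.normal(scale=5.0, size=(3, 16))
Z = centres[expert_labels] + rng.normal(size=(90, 16))

# Cluster the embeddings, then score agreement with the expert labels.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
ami = adjusted_mutual_info_score(expert_labels, clusters)
print(f"agreement with expert labels: AMI = {ami:.2f}")
```

Adjusted mutual information is invariant to cluster relabelling and corrects for chance, which makes it a reasonable agreement score when cluster IDs and expert type names do not line up.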
Affiliation(s)
- Paul Best
  - Université de Toulon, Aix Marseille Univ, CNRS, LIS, Toulon, France
- Sébastien Paris
  - Université de Toulon, Aix Marseille Univ, CNRS, LIS, Toulon, France
- Hervé Glotin
  - Université de Toulon, Aix Marseille Univ, CNRS, LIS, Toulon, France
- Ricard Marxer
  - Université de Toulon, Aix Marseille Univ, CNRS, LIS, Toulon, France
7. Arnaud V, Pellegrino F, Keenan S, St-Gelais X, Mathevon N, Levréro F, Coupé C. Improving the workflow to crack Small, Unbalanced, Noisy, but Genuine (SUNG) datasets in bioacoustics: The case of bonobo calls. PLoS Comput Biol 2023; 19:e1010325. [PMID: 37053268] [PMCID: PMC10129004] [DOI: 10.1371/journal.pcbi.1010325]
Abstract
Despite the accumulation of data and studies, deciphering animal vocal communication remains challenging. In most cases, researchers must deal with the sparse recordings composing Small, Unbalanced, Noisy, but Genuine (SUNG) datasets. SUNG datasets are characterized by a limited number of recordings, most often noisy, and unbalanced in number between the individuals or categories of vocalizations. SUNG datasets therefore offer a valuable but inevitably distorted vision of communication systems. Adopting best practices in their analysis is essential to effectively extract the available information and draw reliable conclusions. Here we show that the most recent advances in machine learning applied to a SUNG dataset succeed in unraveling the complex vocal repertoire of the bonobo, and we propose a workflow that can be effective with other animal species. We implement acoustic parameterization in three feature spaces and run a Supervised Uniform Manifold Approximation and Projection (S-UMAP) to evaluate how call types and individual signatures cluster in the bonobo acoustic space. We then implement three classification algorithms (Support Vector Machine, xgboost, neural networks) and their combination to explore the structure and variability of bonobo calls, as well as the robustness of the individual signature they encode. We underscore how classification performance is affected by the feature set and identify the most informative features. In addition, we highlight the need to address data leakage in the evaluation of classification performance to avoid misleading interpretations. Our results identify several practical approaches that generalize to other animal communication systems. To improve the reliability and replicability of vocal communication studies with SUNG datasets, we thus recommend: i) comparing several acoustic parameterizations; ii) visualizing the dataset with supervised UMAP to examine the species' acoustic space; iii) adopting Support Vector Machines as the baseline classification approach; iv) explicitly evaluating data leakage and possibly implementing a mitigation strategy.
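Recommendation iv, addressing data leakage, often amounts to splitting by individual rather than by call: if calls from the same animal appear in both train and test folds, a classifier can score well by memorising individual signatures rather than learning call types. A minimal sketch with scikit-learn's GroupKFold (the individuals, call counts, and placeholder features are all invented):

```python
import numpy as np
from sklearn.model_selection import GroupKFold

# 6 hypothetical individuals with 5 calls each. Grouping the folds by the
# emitting individual keeps all of one animal's calls on the same side of
# each split, removing the individual-signature leakage path.
individuals = np.repeat(np.arange(6), 5)
X = np.zeros((30, 4))  # stand-in acoustic feature matrix

leaked = []
for train_idx, test_idx in GroupKFold(n_splits=3).split(X, groups=individuals):
    leaked.append(set(individuals[train_idx]) & set(individuals[test_idx]))
print("individuals present in both train and test, per fold:", leaked)
```

Every fold's overlap set is empty, so any accuracy measured this way reflects generalisation to unseen individuals, which is the mitigation the authors call for.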
Affiliation(s)
- Vincent Arnaud
  - Département des arts, des lettres et du langage, Université du Québec à Chicoutimi, Chicoutimi, Canada
  - Laboratoire Dynamique du Langage, UMR 5596, Université de Lyon, CNRS, Lyon, France
- François Pellegrino
  - Laboratoire Dynamique du Langage, UMR 5596, Université de Lyon, CNRS, Lyon, France
- Sumir Keenan
  - ENES Bioacoustics Research Laboratory, University of Saint-Étienne, CRNL, CNRS UMR 5292, Inserm UMR_S 1028, Saint-Étienne, France
- Xavier St-Gelais
  - Département des arts, des lettres et du langage, Université du Québec à Chicoutimi, Chicoutimi, Canada
- Nicolas Mathevon
  - ENES Bioacoustics Research Laboratory, University of Saint-Étienne, CRNL, CNRS UMR 5292, Inserm UMR_S 1028, Saint-Étienne, France
- Florence Levréro
  - ENES Bioacoustics Research Laboratory, University of Saint-Étienne, CRNL, CNRS UMR 5292, Inserm UMR_S 1028, Saint-Étienne, France
- Christophe Coupé
  - Laboratoire Dynamique du Langage, UMR 5596, Université de Lyon, CNRS, Lyon, France
  - Department of Linguistics, The University of Hong Kong, Hong Kong, China
8. Zimmermann J, Beguet F, Guthruf D, Langbehn B, Rupp D. Finding the semantic similarity in single-particle diffraction images using self-supervised contrastive projection learning. npj Comput Mater 2023; 9:24. [PMID: 38666059] [PMCID: PMC11041688] [DOI: 10.1038/s41524-023-00966-0]
Abstract
Single-shot coherent diffraction imaging of isolated nanosized particles has seen remarkable success in recent years, yielding in-situ measurements with ultra-high spatial and temporal resolution. The progress of high-repetition-rate sources for intense X-ray pulses has further enabled recording datasets containing millions of diffraction images, which are needed for the structure determination of specimens with greater structural variety and for dynamic experiments. The size of the datasets, however, represents a monumental problem for their analysis. Here, we present an automated approach for finding semantic similarities in coherent diffraction images without relying on human expert labeling. By introducing the concept of projection learning, we extend self-supervised contrastive learning to the context of coherent diffraction imaging and achieve a dimensionality reduction producing semantically meaningful embeddings that align with physical intuition. The method yields substantial improvements compared to previous approaches, paving the way toward real-time and large-scale analysis of coherent diffraction experiments at X-ray free-electron lasers.
Affiliation(s)
- Daniela Rupp
  - ETH Zürich, Zürich, Switzerland
  - Max-Born-Institut, Berlin, Germany
9. Walsh SL, Engesser S, Townsend SW, Ridley AR. Multi-level combinatoriality in magpie non-song vocalizations. J R Soc Interface 2023; 20:20220679. [PMID: 36722171] [PMCID: PMC9890321] [DOI: 10.1098/rsif.2022.0679]
Abstract
Comparative studies conducted over the past few decades have provided important insights into the capacity for animals to combine vocal segments at either one of two levels: within- or between-calls. There remains, however, a distinct gap in knowledge as to whether animal combinatoriality can extend beyond one level. Investigating this requires a comprehensive analysis of the combinatorial features characterizing a species' vocal system. Here, we used a nonlinear dimensionality reduction analysis and sequential transition analysis to quantitatively describe the non-song combinatorial repertoire of the Western Australian magpie (Gymnorhina tibicen dorsalis). We found that (i) magpies recombine four distinct acoustic segments to create a larger number of calls, and (ii) the resultant calls are further combined into larger call combinations. Our work demonstrates two levels in the combining of magpie vocal units. These results are incongruous with the notion that a capacity for multi-level combinatoriality is unique to human language, wherein the combining of meaningless sounds and meaningful words interactively occurs across different combinatorial levels. Our study thus provides novel insights into the combinatorial capacities of a non-human species, adding to the growing evidence of analogues of language-specific traits present in the animal kingdom.
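The sequential transition analysis described above starts from a matrix of segment-to-segment transition counts within calls. A minimal sketch (the segment labels "A"–"D" are invented; the study uses four acoustically derived magpie segments):

```python
from collections import defaultdict

def transition_counts(sequences):
    """Count segment-to-segment transitions across call sequences -- the raw
    material of a sequential transition analysis."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: dict(b) for a, b in counts.items()}

# Toy calls, each a sequence of segment labels.
sequences = [["A", "B", "C"], ["A", "B"], ["B", "C", "D"]]
print(transition_counts(sequences))
```

Normalising each row of such a matrix gives transition probabilities; non-uniform rows indicate that segments combine into calls non-randomly, the first of the two combinatorial levels the study reports (the second level, calls combining into call combinations, can be analysed with the same machinery applied to call-type sequences).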
Affiliation(s)
- Sarah L. Walsh
  - Centre for Evolutionary Biology, School of Biological Sciences, University of Western Australia, Crawley, WA 6009, Australia
- Sabrina Engesser
  - Department of Biology, University of Copenhagen, 1165 København, Denmark
- Simon W. Townsend
  - Department of Comparative Language Science, University of Zurich, Zurich 8006, Switzerland
  - Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Zurich 8006, Switzerland
  - Department of Psychology, University of Warwick, Coventry CV4 7AL, UK
- Amanda R. Ridley
  - Centre for Evolutionary Biology, School of Biological Sciences, University of Western Australia, Crawley, WA 6009, Australia
10. McGinn K, Kahl S, Peery MZ, Klinck H, Wood CM. Feature embeddings from the BirdNET algorithm provide insights into avian ecology. Ecol Inform 2023. [DOI: 10.1016/j.ecoinf.2023.101995]