1.
Martin K, Cornero FM, Clayton NS, Adam O, Obin N, Dufour V. Vocal complexity in a socially complex corvid: gradation, diversity and lack of common call repertoire in male rooks. R Soc Open Sci 2024; 11:231713. PMID: 38204786; PMCID: PMC10776222; DOI: 10.1098/rsos.231713.
Abstract
Vocal communication is widespread in animals, with vocal repertoires of varying complexity. The social complexity hypothesis predicts that species may need high vocal complexity to deal with complex social organization (e.g. maintaining a variety of different interindividual relationships). We quantified the vocal complexity of two geographically distant captive colonies of rooks, a corvid species with complex social organization and cognitive performance, but understudied vocal abilities. We quantified the diversity and gradation of their repertoire, as well as the inter-individual similarity at the vocal unit level. We found that males produced call units with lower diversity and gradation than females, while song units did not differ between sexes. Surprisingly, while females produced highly similar call repertoires, even between colonies, each male produced a call repertoire almost completely different from that of any other individual. These findings raise questions about how male rooks communicate with their social partners. We suggest that each male may actively seek to remain vocally distinct, which could be an asset in their frequently changing social environment. We conclude that inter-individual similarity, an understudied aspect of vocal repertoires, should also be considered as a measure of vocal complexity.
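The diversity and similarity measures invoked in this abstract can be illustrated with a toy computation; the unit-type labels below are invented, and the study's actual metrics (gradation in particular) are more involved than this sketch:

```python
from collections import Counter
from math import log

def shannon_diversity(unit_labels):
    """Shannon entropy (nats) of a repertoire of discrete unit types."""
    counts = Counter(unit_labels)
    total = sum(counts.values())
    return -sum((n / total) * log(n / total) for n in counts.values())

def jaccard_similarity(repertoire_a, repertoire_b):
    """Inter-individual repertoire similarity as shared-type overlap."""
    a, b = set(repertoire_a), set(repertoire_b)
    return len(a & b) / len(a | b)

# Hypothetical unit-type sequences for two individuals
male_1 = ["caw", "rattle", "kow", "kow", "caw"]
male_2 = ["chuck", "rattle", "trill", "trill"]

print(round(shannon_diversity(male_1), 3))          # 1.055 nats: three types, uneven use
print(round(jaccard_similarity(male_1, male_2), 3))  # 0.2: only "rattle" is shared
```

Low pairwise overlap of this kind, repeated across all male pairs, is what the abstract describes as the males' lack of a common call repertoire.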
Affiliation(s)
- Killian Martin
- PRC, UMR 7247, Ethologie Cognitive et Sociale, CNRS-IFCE-INRAE-Université de Tours, Strasbourg, France
- Olivier Adam
- Institut Jean Le Rond d'Alembert, UMR 7190, CNRS-Sorbonne Université, 75005 Paris, France
- Institut des Neurosciences Paris-Saclay, UMR 9197, CNRS-Université Paris Sud, Orsay, France
- Nicolas Obin
- STMS Lab, IRCAM, CNRS-Sorbonne Université, Paris, France
- Valérie Dufour
- PRC, UMR 7247, Ethologie Cognitive et Sociale, CNRS-IFCE-INRAE-Université de Tours, Strasbourg, France
2.
Rio R. First acoustic evidence of signature whistle production by spinner dolphins (Stenella longirostris). Anim Cogn 2023; 26:1915-1927. PMID: 37676587; DOI: 10.1007/s10071-023-01824-8.
Abstract
A dolphin's signature whistle (SW) is a distinctive acoustic signal, issued in a bout pattern of unique frequency modulation contours; it allows individuals belonging to a given group to recognize each other and, consequently, to maintain contact and cohesion. The current study provides the first scientific evidence that spinner dolphins (Stenella longirostris) produce SWs. Acoustic data were recorded at a shallow rest bay called "Biboca", in the Fernando de Noronha Archipelago, Brazil. In total, 1902 whistles were analyzed; 40% (753/1902) of them were classified as stereotyped whistles (STWs). Based on the SIGID method, 63% (472/753) of all STWs were identified as SWs; subsequently, they were categorized into one of 18 SW types. SWs accounted for 25% (472/1902) of the acoustic repertoire. External observers showed near-perfect agreement in classifying whistles into the adopted SW categories. Most acoustic and temporal variables measured for SWs showed mean values similar to those recorded in other studies of spinner dolphins, whose authors did not differentiate SWs from non-SWs. Principal component analysis explained 78% of total SW variance and emphasized the relevance of shape/contour and frequency variables. This discovery improves bioacoustic knowledge of the investigated species. Future studies in the Fernando de Noronha Archipelago should focus on continuous investigation of SW development and use by S. longirostris, expanding individual identification (Photo ID and SW Noronha Catalog), assessing long-term whistle stability and emission rates, and making mother-offspring comparisons with sex-based differences.
Affiliation(s)
- Raul Rio
- Laboratory of Observational and Bioacoustics Technologies Applied to Biodiversity (TecBio), Department of Veterinary Medicine, Federal University of Juiz de Fora (UFJF), Juiz de Fora, Minas Gerais, Brazil.
- Ocean Sound, Non-Governmental Organization (NGO), Santos, São Paulo, Brazil.
3.
Best P, Paris S, Glotin H, Marxer R. Deep audio embeddings for vocalisation clustering. PLoS One 2023; 18:e0283396. PMID: 37428759; DOI: 10.1371/journal.pone.0283396.
Abstract
The study of non-human animals' communication systems generally relies on the transcription of vocal sequences using a finite set of discrete units. This set is referred to as a vocal repertoire, which is specific to a species or a sub-group of a species. When conducted by human experts, the formal description of vocal repertoires can be laborious and/or biased. This motivates computerised assistance for this procedure, for which machine learning algorithms represent a good opportunity. Unsupervised clustering algorithms are suited for grouping close points together, provided a relevant representation. This paper therefore studies a new method for encoding vocalisations, allowing automatic clustering to ease vocal repertoire characterisation. Borrowing from deep representation learning, we use a convolutional auto-encoder network to learn an abstract representation of vocalisations. We report on the quality of the learnt representation, as well as that of state-of-the-art methods, by quantifying their agreement with expert-labelled vocalisation types from 8 datasets of other studies across 6 species (birds and marine mammals). With this benchmark, we demonstrate that using auto-encoders improves the relevance of vocalisation representation for repertoire characterisation using a very limited number of settings. We also publish a Python package for the bioacoustic community to train their own vocalisation auto-encoders or use a pretrained encoder to browse vocal repertoires and ease unit-wise annotation.
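A benchmark like the one described, which scores a learned representation by the agreement between its cluster assignments and expert labels, typically relies on a clustering-agreement measure; normalized mutual information is one standard choice, sketched here in plain Python (the label arrays are invented, and this is not the paper's exact evaluation code):

```python
from collections import Counter
from math import log, sqrt

def entropy(labels):
    """Shannon entropy (nats) of a labeling."""
    n = len(labels)
    return -sum((c / n) * log(c / n) for c in Counter(labels).values())

def mutual_information(labels_a, labels_b):
    """Mutual information (nats) between two labelings of the same items."""
    n = len(labels_a)
    pa, pb = Counter(labels_a), Counter(labels_b)
    joint = Counter(zip(labels_a, labels_b))
    return sum((nij / n) * log((nij * n) / (pa[i] * pb[j]))
               for (i, j), nij in joint.items())

def nmi(labels_a, labels_b):
    """Normalized mutual information: 0 = independent, 1 = identical up to renaming."""
    h_a, h_b = entropy(labels_a), entropy(labels_b)
    if h_a == 0 or h_b == 0:
        return 0.0
    return mutual_information(labels_a, labels_b) / sqrt(h_a * h_b)

# Invented example: cluster ids vs expert vocalisation types
clusters = [0, 0, 1, 1, 2, 2]
expert   = ["A", "A", "B", "B", "C", "C"]
print(round(nmi(clusters, expert), 6))  # 1.0: identical partitions up to renaming
```

Scores near 1 mean the unsupervised clustering recovers the expert's vocalisation types; comparing such scores across encodings is the essence of the benchmark described above.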
Affiliation(s)
- Paul Best
- Université de Toulon, Aix Marseille Univ, CNRS, LIS, Toulon, France
- Sébastien Paris
- Université de Toulon, Aix Marseille Univ, CNRS, LIS, Toulon, France
- Hervé Glotin
- Université de Toulon, Aix Marseille Univ, CNRS, LIS, Toulon, France
- Ricard Marxer
- Université de Toulon, Aix Marseille Univ, CNRS, LIS, Toulon, France
4.
Smith-Vidaurre G, Pérez-Marrufo V, Hobson EA, Salinas-Melgoza A, Wright TF. Individual identity information persists in learned calls of introduced parrot populations. PLoS Comput Biol 2023; 19:e1011231. PMID: 37498847; PMCID: PMC10374045; DOI: 10.1371/journal.pcbi.1011231.
Abstract
Animals can actively encode different types of identity information in learned communication signals, such as group membership or individual identity. The social environments in which animals interact may favor different types of information, but whether identity information conveyed in learned signals is robust or responsive to social disruption over short evolutionary timescales is not well understood. We inferred the type of identity information that was most salient in vocal signals by combining computational tools, including supervised machine learning, with a conceptual framework of "hierarchical mapping", or patterns of relative acoustic convergence across social scales. We used populations of a vocal learning species as a natural experiment to test whether the type of identity information emphasized in learned vocalizations changed in populations that experienced the social disruption of introduction into new parts of the world. We compared the social scales with the most salient identity information among native and introduced range monk parakeet (Myiopsitta monachus) calls recorded in Uruguay and the United States, respectively. We also evaluated whether the identity information emphasized in introduced range calls changed over time. To place our findings in an evolutionary context, we compared our results with another parrot species that exhibits well-established and distinctive regional vocal dialects that are consistent with signaling group identity. We found that both native and introduced range monk parakeet calls displayed the strongest convergence at the individual scale and minimal convergence within sites. We did not identify changes in the strength of acoustic convergence within sites over time in the introduced range calls. These results indicate that the individual identity information in learned vocalizations did not change over short evolutionary timescales in populations that experienced the social disruption of introduction. 
Our findings point to exciting new research directions about the robustness or responsiveness of communication systems over different evolutionary timescales.
Affiliation(s)
- Grace Smith-Vidaurre
- Department of Biology, New Mexico State University, Las Cruces, New Mexico, United States of America
- Laboratory of Neurogenetics of Language, Rockefeller University, New York, New York, United States of America
- Rockefeller University Field Research Center, Millbrook, New York, United States of America
- Department of Biological Sciences, University of Cincinnati, Cincinnati, Ohio, United States of America
- Valeria Pérez-Marrufo
- Department of Biology, New Mexico State University, Las Cruces, New Mexico, United States of America
- Department of Biology, Syracuse University, Syracuse, New York, United States of America
- Elizabeth A. Hobson
- Department of Biological Sciences, University of Cincinnati, Cincinnati, Ohio, United States of America
- Timothy F. Wright
- Department of Biology, New Mexico State University, Las Cruces, New Mexico, United States of America
5.
Figueiredo LDD, Maciel I, Viola FM, Savi MA, Simão SM. Nonlinear features in whistles produced by the short-beaked common dolphin (Delphinus delphis) off southeastern Brazil. J Acoust Soc Am 2023; 153:2436. PMID: 37092947; DOI: 10.1121/10.0017883.
Abstract
Animal vocalizations have nonlinear characteristics responsible for features such as subharmonics, frequency jumps, biphonation, and deterministic chaos. This study describes the whistle repertoire of a short-beaked common dolphin (Delphinus delphis) group off the Brazilian coast and quantifies the nonlinear features of these whistles. Dolphins were recorded for a total of 67 min around Cabo Frio, Brazil. We identified 10 basic categories of whistle, with 75 different types, classified according to their contour shape. Most (45) of these 75 types had not been reported previously for the species. The duration of the whistles ranged from 0.04 to 3.67 s, with frequencies of 3.05-29.75 kHz. Overall, the whistle repertoire presented here has one of the widest frequency ranges and greatest levels of frequency modulation recorded in any study of D. delphis. All the nonlinear features sought during the study were confirmed, with at least one feature occurring in 38.4% of the whistles. The frequency jump was the most common feature (29.75% of the whistles), and nonlinear time series analyses confirmed deterministic chaos in the chaotic-like segments. These results indicate that nonlinearities are a relevant characteristic of these whistles and that they are important in acoustic communication.
Affiliation(s)
- Israel Maciel
- Department of Ecology, State University of Rio de Janeiro, Rio de Janeiro, Brazil
- Flavio M Viola
- Center for Nonlinear Mechanics, COPPE-Mechanical Engineering, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
- Marcelo A Savi
- Center for Nonlinear Mechanics, COPPE-Mechanical Engineering, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil
- Sheila M Simão
- Department of Environmental Science, Federal Rural University of Rio de Janeiro, Rio de Janeiro, Brazil
6.
Clink DJ, Kier I, Ahmad AH, Klinck H. A workflow for the automated detection and classification of female gibbon calls from long-term acoustic recordings. Front Ecol Evol 2023. DOI: 10.3389/fevo.2023.1071640.
Abstract
Passive acoustic monitoring (PAM) allows for the study of vocal animals on temporal and spatial scales that are difficult to achieve using only human observers. Recent improvements in recording technology, data storage, and battery capacity have led to increased use of PAM. One of the main obstacles to implementing wide-scale PAM programs is the lack of open-source programs that efficiently process terabytes of sound recordings and do not require large amounts of training data. Here we describe a workflow for detecting, classifying, and visualizing female Northern grey gibbon calls in Sabah, Malaysia. Our approach detects sound events using band-limited energy summation and performs binary classification of these events (gibbon female or not) using machine learning algorithms (support vector machine and random forest). We then applied an unsupervised approach (affinity propagation clustering) to see if we could further differentiate between true and false positives or estimate the number of gibbon females in our dataset. We used this workflow to address three questions: (1) does this automated approach provide reliable estimates of temporal patterns of gibbon calling activity; (2) can unsupervised approaches be applied as a post-processing step to improve the performance of the system; and (3) can unsupervised approaches be used to estimate how many female individuals (or clusters) there are in our study area? We found that performance plateaued with >160 clips of training data for each of our two classes. Using optimized settings, our automated approach achieved a satisfactory performance (F1 score ~ 80%). The unsupervised approach did not effectively differentiate between true and false positives or return clusters that appear to correspond to the number of females in our study area. Our results indicate that more work needs to be done before unsupervised approaches can be reliably used to estimate the number of individual animals occupying an area from PAM data.
Future work applying these methods across sites and different gibbon species and comparisons to deep learning approaches will be crucial for future gibbon conservation initiatives across Southeast Asia.
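The first stage of the workflow above, band-limited energy summation followed by thresholding, can be sketched as follows; the per-frame energies, threshold, and minimum-duration parameter are invented for illustration, and the published detector differs in detail:

```python
def detect_events(band_energy, threshold, min_frames=3):
    """Return (start, end) frame indices where band-limited energy stays
    above threshold for at least min_frames consecutive frames."""
    events, start = [], None
    for i, e in enumerate(band_energy):
        if e >= threshold and start is None:
            start = i                      # event onset
        elif e < threshold and start is not None:
            if i - start >= min_frames:    # keep only long-enough events
                events.append((start, i))
            start = None
    if start is not None and len(band_energy) - start >= min_frames:
        events.append((start, len(band_energy)))  # event runs to the end
    return events

# Invented per-frame energies, e.g. summed over the gibbon call band
energy = [0.1, 0.2, 3.0, 3.5, 4.0, 3.2, 0.2, 0.1, 5.0, 0.3]
print(detect_events(energy, threshold=1.0))  # [(2, 6)]; the spike at index 8 is too short
```

Each detected event would then be passed to the supervised classifier (SVM or random forest in the study) to decide whether it is a female gibbon call.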
7.
Lessons learned in animal acoustic cognition through comparisons with humans. Anim Cogn 2023; 26:97-116. PMID: 36574158; PMCID: PMC9877085; DOI: 10.1007/s10071-022-01735-0.
Abstract
Humans are an interesting subject of study in comparative cognition. While humans have a great deal of anecdotal and subjective knowledge about their own minds and behaviors, researchers tend not to study humans the way they study other species. Instead, comparisons between humans and other animals tend to be based either on assumptions about human behavior and cognition or on very different testing methods. Here we emphasize the importance of using insider knowledge about humans to form interesting research questions about animal cognition while simultaneously stepping back and treating humans like just another species, as if one were an alien researcher. This perspective is extremely helpful for identifying which aspects of cognitive processes may be interesting and relevant across the animal kingdom. We outline some examples of how this objective human-centric approach has helped us advance knowledge in several areas of animal acoustic cognition (rhythm, harmonicity, and vocal units). We describe how this approach works, what kinds of benefits it yields, and how it can be applied to other areas of animal cognition. While an objective human-centric approach is not useful when studying traits that do not occur in humans (e.g., magnetic spatial navigation), it can be extremely helpful when studying traits that are relevant to humans (e.g., communication). Overall, we hope to entice more researchers in animal cognition to use a similar approach, maximizing the benefits of being part of the animal kingdom while maintaining a detached and scientific perspective on the human species.
8.
Rookognise: Acoustic detection and identification of individual rooks in field recordings using multi-task neural networks. Ecol Inform 2022. DOI: 10.1016/j.ecoinf.2022.101818.
9.
Introducing the Software CASE (Cluster and Analyze Sound Events) by Comparing Different Clustering Methods and Audio Transformation Techniques Using Animal Vocalizations. Animals (Basel) 2022; 12:2020. PMID: 36009611; PMCID: PMC9404437; DOI: 10.3390/ani12162020.
Abstract
Simple Summary: Unsupervised clustering algorithms are widely used in ecology and conservation to classify animal vocalizations, but also offer various advantages in basic research, contributing to the understanding of acoustic communication. Nevertheless, there are still some challenges to overcome. For instance, the quality of the clustering result depends on the audio transformation technique previously used to adjust the audio data. Moreover, it is difficult to verify the reliability of the clustering result. To analyze bioacoustic data using a clustering algorithm, it is therefore essential to select a reasonable algorithm from the many existing algorithms and to prepare the recorded vocalizations so that the resulting values characterize a vocalization as accurately as possible. Frequency-modulated vocalizations, whose frequencies change over time, pose a particular problem. In this paper, we present the software CASE, which includes various clustering methods and provides an overview of their strengths and weaknesses concerning the classification of bioacoustic data. This software uses a multidimensional feature-extraction method to achieve better clustering results, especially for frequency-modulated vocalizations.

Abstract: Unsupervised clustering algorithms are widely used in ecology and conservation to classify animal sounds, but also offer several advantages in basic bioacoustics research. Consequently, it is important to overcome the existing challenges. A common practice is extracting the acoustic features of vocalizations one-dimensionally, only extracting an average value for a given feature for the entire vocalization. With frequency-modulated vocalizations, whose acoustic features can change over time, this can lead to insufficient characterization. Whether the necessary parameters have been set correctly and whether the obtained clustering result reliably classifies the vocalizations often remains unclear.
The presented software, CASE, is intended to overcome these challenges. Established and new unsupervised clustering methods (community detection, affinity propagation, HDBSCAN, and fuzzy clustering) are tested in combination with various classifiers (k-nearest neighbor, dynamic time-warping, and cross-correlation) using differently transformed animal vocalizations. These methods are compared with predefined clusters to determine their strengths and weaknesses. In addition, a multidimensional data transformation procedure is presented that better represents the course of multiple acoustic features. The results suggest that, especially with frequency-modulated vocalizations, clustering is more applicable with multidimensional feature extraction compared with one-dimensional feature extraction. The characterization and clustering of vocalizations in multidimensional space offer great potential for future bioacoustic studies. The software CASE includes the developed method of multidimensional feature extraction, as well as all used clustering methods. It allows quickly applying several clustering algorithms to one data set to compare their results and to verify their reliability based on their consistency. Moreover, the software CASE determines the optimal values of most of the necessary parameters automatically. To take advantage of these benefits, the software CASE is provided for free download.
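One of the classifiers CASE combines with clustering is dynamic time warping, which compares whole frequency contours rather than single averaged values and therefore suits frequency-modulated vocalizations; a minimal distance function might look like this (the contours below are invented):

```python
def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two 1-D contours
    (e.g. fundamental-frequency tracks of two vocalisations)."""
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    # cost[i][j]: minimal accumulated distance aligning seq_a[:i] with seq_b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Invented frequency contours (kHz): same shape, one time-stretched at the start
call_a = [1.0, 1.5, 2.0, 2.5, 2.0]
call_b = [1.0, 1.0, 1.5, 2.0, 2.5, 2.0]
print(dtw_distance(call_a, call_b))  # 0.0: warping absorbs the time stretch
```

A plain point-by-point (Euclidean) comparison would penalize the time stretch heavily; the warping step is what makes contour-based comparison robust for calls of unequal duration.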
10.
Wilson KC, Širović A, Semmens BX, Gittings SR, Pattengill-Semmens CV, McCoy C. Grouper source levels and aggregation dynamics inferred from passive acoustic localization at a multispecies spawning site. J Acoust Soc Am 2022; 151:3052. PMID: 35649949; DOI: 10.1121/10.0010236.
Abstract
Four species of grouper (family Epinephelidae), Red Hind (Epinephelus guttatus), Nassau Grouper (Epinephelus striatus), Black Grouper (Mycteroperca bonaci), and Yellowfin Grouper (Mycteroperca venenosa), share an aggregation site in Little Cayman, Cayman Islands, and produce sounds while aggregating. Continuous observation of these aggregations is challenging because traditional diver- or ship-based methods are limited in time and space. Passive acoustic localization can overcome this challenge for sound-producing species, allowing observations over long durations and at fine spatial scales. A hydrophone array was deployed in February 2017 over a 9-day period that included Nassau Grouper spawning. Passive acoustic localization was used to find the positions of the grouper-produced calls recorded during this time, which enabled the measurement of call source levels and the evaluation of spatiotemporal aspects of calling. Yellowfin Grouper had the lowest mean peak-to-peak (PP) call source level, and Nassau Grouper had the highest mean PP call source level (143.7 and 155.2 dB re: 1 μPa at 1 m for 70-170 Hz, respectively). During the days that Nassau Grouper spawned, calling peaked after sunset. Similarly, when Red Hind calls were abundant, calls were highest in the afternoon and evening. The measured source levels can be used to estimate communication and detection ranges and to implement passive acoustic density estimation for these fishes.
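Back-calculating a source level from a received level and a localized range is conventionally done by adding a transmission-loss term; the sketch below assumes simple spherical spreading (20 log10 r), which may be cruder than the propagation model used in the study, and the numbers are invented:

```python
from math import log10

def source_level(received_level_db, range_m):
    """Back-propagate a received level (dB re 1 uPa) to the standard 1 m
    reference distance, assuming spherical spreading transmission loss
    of 20*log10(range). Real studies may use a more refined model."""
    return received_level_db + 20 * log10(range_m)

# Invented numbers: a call received at 120 dB from a fish localized 50 m away
print(round(source_level(120.0, 50.0), 1))  # 154.0 dB re 1 uPa at 1 m
```

This is why localization matters: without the range to the calling fish, the received level alone cannot be converted into a source level.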
Affiliation(s)
- Katherine C Wilson
- Scripps Institution of Oceanography, University of California San Diego, La Jolla, California 92093, USA
- Ana Širović
- Scripps Institution of Oceanography, University of California San Diego, La Jolla, California 92093, USA
- Brice X Semmens
- Scripps Institution of Oceanography, University of California San Diego, La Jolla, California 92093, USA
- Stephen R Gittings
- National Oceanic and Atmospheric Administration, Office of National Marine Sanctuaries, Silver Spring, Maryland 20910, USA
- Croy McCoy
- Reef Environmental Education Foundation, Key Largo, Florida 33037, USA
11.
Linhart P, Mahamoud-Issa M, Stowell D, Blumstein DT. The potential for acoustic individual identification in mammals. Mamm Biol 2022. DOI: 10.1007/s42991-021-00222-2.
12.
Clink DJ, Lau AR, Kanthaswamy S, Johnson LM, Bales KL. Moderate evidence for heritability in the duet contributions of a South American primate. J Evol Biol 2022; 35:51-63. PMID: 34822207; PMCID: PMC9514391; DOI: 10.1111/jeb.13962.
Abstract
Acoustic signals are ubiquitous across mammalian taxa. They serve a myriad of functions related to the formation and maintenance of social bonds and can provide conspecifics with information about caller condition, motivation and identity. Disentangling the relative importance of the evolutionary mechanisms that shape vocal variation is difficult, and little is known about the heritability of mammalian vocalizations. Duetting, the production of coordinated vocalizations by male and female pairs, arose independently at least four times across the Primate Order. Primate duets contain individual- or pair-level signatures, but the mechanisms that shape this variation remain unclear. Here, we test for evidence of heritability in two call types (pulses and chirps) from the duets of captive coppery titi monkeys (Plecturocebus cupreus). We extracted four features (note rate, duration, and minimum and maximum fundamental frequency) from spectrograms of pulses and chirps, and estimated the heritability of these features. We also tested whether features varied with sex or body weight. We found evidence for moderate heritability in one of the features examined (chirp note rate), whereas inter-individual variance was the most important source of variance for the remaining features. We did not find evidence for sex differences in any of the features, but we did find that body weight and the fundamental frequency of chirp elements covaried. Kin recognition has been invoked as a possible explanation for heritability or kin signatures in mammalian vocalizations. Although the function of primate duets remains a topic of debate, the presence of moderate heritability in titi monkey chirp elements indicates duets may serve a kin recognition function.
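The study estimates heritability with quantitative-genetic models; as a much simpler illustration of the concept, narrow-sense heritability can be approximated by the slope of an offspring-on-midparent regression (all trait values below are invented, and this is not the paper's method):

```python
def regression_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

def heritability_midparent(midparent_values, offspring_values):
    """Narrow-sense heritability estimated as the slope of
    offspring trait values regressed on midparent values."""
    return regression_slope(midparent_values, offspring_values)

# Invented chirp note rates (notes/s) for midparents and their offspring
midparent = [2.0, 2.5, 3.0, 3.5, 4.0]
offspring = [2.2, 2.4, 2.9, 3.1, 3.4]
print(round(heritability_midparent(midparent, offspring), 2))  # 0.62 under this toy model
```

A slope near 1 would mean offspring closely track their parents' trait values; a slope near 0 would mean the trait is not heritable in this simple sense.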
Affiliation(s)
- Dena J. Clink
- K. Lisa Yang Center for Conservation Bioacoustics, Cornell Lab of Ornithology, Cornell University, Ithaca, NY 14850, USA
- Allison R. Lau
- Animal Behavior Graduate Group, University of California, Davis, Davis, CA 95616, USA
- California National Primate Research Center, University of California, Davis, Davis, CA 95616, USA
- Sreetharan Kanthaswamy
- School of Mathematics and Natural Sciences, Arizona State University (ASU) at the West Campus, Glendale, AZ, USA
- California National Primate Research Center, University of California, One Shields Ave, Davis, CA 95616, USA
- Lynn M. Johnson
- Cornell Statistical Consulting Unit, Cornell University, Ithaca, NY, USA
- Karen L. Bales
- Animal Behavior Graduate Group, University of California, Davis, Davis, CA 95616, USA
- California National Primate Research Center, University of California, Davis, Davis, CA 95616, USA
- Department of Psychology, University of California, Davis, One Shields Avenue, Davis, CA 95616, USA
- Department of Neurobiology, Physiology, and Behavior, University of California, Davis, CA 95616, USA
13.
Luís AR, May-Collado LJ, Rako-Gospić N, Gridley T, Papale E, Azevedo A, Silva MA, Buscaino G, Herzing D, dos Santos ME. Vocal universals and geographic variations in the acoustic repertoire of the common bottlenose dolphin. Sci Rep 2021; 11:11847. PMID: 34088923; PMCID: PMC8178411; DOI: 10.1038/s41598-021-90710-9.
Abstract
Acoustic geographic variation is common in widely distributed species and has already been described for several taxa, at various scales. In cetaceans, intraspecific variation in acoustic repertoires has been linked to ecological factors, geographical barriers, and social processes. For the common bottlenose dolphin (Tursiops truncatus), studies on acoustic variability are scarce, focus on a single signal type (whistles), and examine mainly the influence of environmental variables. Here, we analyze the acoustic emissions of nine bottlenose dolphin populations across the Atlantic Ocean and the Mediterranean Sea, and identify common signal types and acoustic variants to assess the (dis)similarity of their repertoires. Overall, these dolphins present a rich acoustic repertoire, with 24 distinct signal sub-types including whistles, burst-pulsed sounds, brays and bangs. Acoustic divergence was observed only in social signals, suggesting the relevance of cultural transmission in geographic variation. The repertoire dissimilarity values were remarkably low (from 0.08 to 0.4) and do not reflect the geographic distances among populations. Our findings suggest that acoustic ecology may play an important role in the occurrence of intraspecific variability, as proposed by the 'environmental adaptation hypothesis'. Further work may clarify the boundaries between neighboring populations and shed light on vocal learning and cultural transmission in bottlenose dolphin societies.
Affiliation(s)
- A. R. Luís
- MARE - Marine and Environmental Sciences Centre, ISPA - Instituto Universitário, Rua Jardim do Tabaco, 34, 1149-041 Lisboa, Portugal
- Projecto Delfim - Centro Português de Estudo dos Mamíferos Marinhos, Rua Jardim do Tabaco, 34, 1149-041 Lisboa, Portugal
- L. J. May-Collado
- Department of Biology, University of Vermont, Burlington, VT 05403, USA
- Centro de Investigacion en Ciencias del Mar y Limnologia, Universidad de Costa Rica, San Jose, Costa Rica
- N. Rako-Gospić
- Blue World Institute of Marine Research and Conservation, Kaštel 24, 51551 Veli Lošinj, Croatia
- T. Gridley
- Centre for Statistics in Ecology, Environment and Conservation, Department of Statistical Sciences, University of Cape Town, C/O Sea Search Research and Conservation NPC, Cape Town, South Africa
- E. Papale
- Institute for the Study of Anthropogenic Impacts and Sustainability in the Marine Environment, National Research Council, Capo Granitola, Via del Mare 3, 91021 Torretta Granitola (TP), Italy
- Department of Life Sciences and Systems Biology, University of Torino, Via Accademia Albertina 13, 10123 Torino, Italy
- A. Azevedo
- Laboratório de Mamíferos Aquáticos e Bioindicadores Profª Izabel Gurgel (MAQUA), Universidade do Estado do Rio de Janeiro, Rio de Janeiro, Brazil
- M. A. Silva
- OKEANOS & IMAR – Instituto do Mar, Universidade dos Açores, 9901-862 Horta, Portugal
- G. Buscaino
- Institute for the Study of Anthropogenic Impacts and Sustainability in the Marine Environment, National Research Council, Capo Granitola, Via del Mare 3, 91021 Torretta Granitola (TP), Italy
- D. Herzing
- Wild Dolphin Project, P.O. Box 8436, Jupiter, FL 33468, USA
- Department of Biological Sciences, Florida Atlantic University, Boca Raton, FL 33431, USA
- M. E. dos Santos
- MARE - Marine and Environmental Sciences Centre, ISPA - Instituto Universitário, Rua Jardim do Tabaco, 34, 1149-041 Lisboa, Portugal
- Projecto Delfim - Centro Português de Estudo dos Mamíferos Marinhos, Rua Jardim do Tabaco, 34, 1149-041 Lisboa, Portugal
14
Walb R, von Fersen L, Meijer T, Hammerschmidt K. Individual Differences in the Vocal Communication of Malayan Tapirs ( Tapirus indicus) Considering Familiarity and Relatedness. Animals (Basel) 2021; 11:1026. [PMID: 33916401 PMCID: PMC8065771 DOI: 10.3390/ani11041026] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2021] [Revised: 03/31/2021] [Accepted: 04/02/2021] [Indexed: 11/27/2022] Open
Abstract
Studies of animal communication have shown that many species have individually distinct calls. Such vocalizations can play an important role in communication because they carry information about the age, sex, personality, or social role of the signaler. Although the importance of individually distinct vocalizations in social mammals is well established, it is less clear to what extent solitary mammals possess them. To answer this question, we recorded and analyzed the vocalizations of 14 captive adult Malayan tapirs (Tapirus indicus; six females and eight males). We investigated whether familiarity or relatedness influenced call similarity. In addition to sex-related differences, we found significant differences between all subjects, comparable to the individual differences found in highly social species. Surprisingly, kinship appeared to have no influence on call similarity, whereas familiar subjects exhibited significantly higher similarity in their harmonic calls than unfamiliar or related subjects did. The results support the view that solitary animals can have individually distinct calls, like highly social animals, making it likely that non-social factors, such as low visibility, influence call individuality. Increasing knowledge of their behavior will help to protect this endangered species.
Affiliation(s)
- Robin Walb
- Department of Wildlife Management, University of Applied Sciences Van Hall-Larenstein, Agora 1, 8934 CJ Leeuwarden, The Netherlands;
- Cognitive Ethology Laboratory, German Primate Center, Kellnerweg 4, 37077 Göttingen, Germany;
- Theo Meijer
- Department of Wildlife Management, University of Applied Sciences Van Hall-Larenstein, Agora 1, 8934 CJ Leeuwarden, The Netherlands;
- Kurt Hammerschmidt
- Cognitive Ethology Laboratory, German Primate Center, Kellnerweg 4, 37077 Göttingen, Germany;
15
Clink DJ, Klinck H. Unsupervised acoustic classification of individual gibbon females and the implications for passive acoustic monitoring. Methods Ecol Evol 2020. [DOI: 10.1111/2041-210x.13520] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]
Affiliation(s)
- Dena J. Clink
- Center for Conservation Bioacoustics, Cornell Laboratory of Ornithology, Cornell University, Ithaca, NY, USA
- Holger Klinck
- Center for Conservation Bioacoustics, Cornell Laboratory of Ornithology, Cornell University, Ithaca, NY, USA
16
Morrison EL, DeLong CM, Wilcox KT. How humans discriminate acoustically among bottlenose dolphin signature whistles with and without masking by boat noise. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2020; 147:4162. [PMID: 32611182 DOI: 10.1121/10.0001450] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/09/2019] [Accepted: 05/31/2020] [Indexed: 06/11/2023]
Abstract
Anthropogenic noise in the world's oceans is known to impede many species' ability to perceive acoustic signals, but little research has addressed how this noise affects the perception of bioacoustic signals used for communication in marine mammals. Bottlenose dolphins (Tursiops truncatus) use signature whistles containing identification information. Past studies have used human participants to gain insight into dolphin perception, but most previous research investigated echolocation. In Experiment 1, human participants were tested on their ability to discriminate among signature whistles from three dolphins. Participants' performance was nearly errorless. In Experiment 2, participants identified signature whistles masked by five different samples of boat noise utilizing different signal-to-noise ratios. Lower signal-to-noise ratio and proximity in frequency between the whistle and noise both significantly decreased performance. Like dolphins, human participants primarily identified whistles using frequency contour. Participants reported greater use of amplitude in noise-present vs noise-absent trials, but otherwise did not vary cue usage. These findings can be used to generate hypotheses about dolphins' performance and auditory cue use for future research. This study may provide insight into how specific characteristics of boat noise affect dolphin whistle perception and may have implications for conservation and regulations.
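The signal-to-noise ratio manipulated in Experiment 2 is conventionally expressed in decibels from root-mean-square amplitudes. A minimal sketch (the synthetic tone and flat noise here are illustrative stand-ins, not the study's stimuli):

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB, computed from RMS amplitudes."""
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20 * np.log10(rms(signal) / rms(noise))

t = np.linspace(0, 1, 8000, endpoint=False)
whistle = np.sin(2 * np.pi * 440 * t)   # pure tone standing in for a whistle
noise = 0.0707 * np.ones_like(t)        # flat "boat noise" at one tenth the tone's RMS
print(round(snr_db(whistle, noise)))  # 20
```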
Affiliation(s)
- Evan L Morrison
- Department of Psychology, College of Liberal Arts, Rochester Institute of Technology, 18 Lomb Memorial Drive, Rochester, New York 14623, USA
- Caroline M DeLong
- Department of Psychology, College of Liberal Arts, Rochester Institute of Technology, 18 Lomb Memorial Drive, Rochester, New York 14623, USA
- Kenneth Tyler Wilcox
- Department of Psychology, College of Arts and Letters, University of Notre Dame, 390 Corbett Family Hall, Notre Dame, Indiana 46556, USA
17
Coffey KR, Marx RE, Neumaier JF. DeepSqueak: a deep learning-based system for detection and analysis of ultrasonic vocalizations. Neuropsychopharmacology 2019; 44:859-868. [PMID: 30610191 PMCID: PMC6461910 DOI: 10.1038/s41386-018-0303-6] [Citation(s) in RCA: 130] [Impact Index Per Article: 26.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/22/2018] [Revised: 12/04/2018] [Accepted: 12/16/2018] [Indexed: 01/11/2023]
Abstract
Rodents engage in social communication through a rich repertoire of ultrasonic vocalizations (USVs). Recording and analysis of USVs has broad utility during diverse behavioral tests and can be performed noninvasively in almost any rodent behavioral model, providing rich insights into the emotional state and motor function of the test animal. Despite strong evidence that USVs serve an array of communicative functions, technical and financial limitations have been barriers to adopting vocalization analysis in most laboratories. Recently, deep learning has revolutionized machine hearing and vision by allowing computers to perform human-like activities including seeing, listening, and speaking; such systems are constructed from biomimetic, "deep" artificial neural networks. Here, we present DeepSqueak, a USV detection and analysis software suite that performs human-quality USV detection and classification automatically, rapidly, and reliably using a state-of-the-art region-based convolutional neural network architecture (Faster R-CNN). DeepSqueak was engineered to allow non-experts easy entry into USV detection and analysis, yet is flexible and adaptable, with a graphical user interface and numerous input and analysis features. Compared to other modern programs and manual analysis, DeepSqueak reduced false positives, increased detection recall, dramatically reduced analysis time, optimized automatic syllable classification, and performed automatic syntax analysis on arbitrarily large numbers of syllables, all while maintaining manual selection review and supervised classification. DeepSqueak allows USV recording and analysis to be added easily to existing rodent behavioral procedures, hopefully revealing a wide range of innate responses that provide another dimension of insight into behavior when combined with conventional outcome measures.
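DeepSqueak itself relies on a trained Faster R-CNN detector, but the underlying detection task, finding time spans where spectrogram energy rises above the noise floor, can be sketched with a simple threshold detector. This stand-in is illustrative only and is not DeepSqueak's algorithm:

```python
import numpy as np

def detect_calls(spec, thresh, min_len=3):
    """Return (start, end) frame indices where per-frame peak energy exceeds thresh."""
    active = spec.max(axis=0) > thresh  # peak energy across frequency bins, per frame
    calls, start = [], None
    for i, on in enumerate(active):
        if on and start is None:
            start = i
        elif not on and start is not None:
            if i - start >= min_len:
                calls.append((start, i))
            start = None
    if start is not None and len(active) - start >= min_len:
        calls.append((start, len(active)))
    return calls

# synthetic spectrogram: two "calls" of elevated energy on a flat noise floor
spec = np.full((64, 100), 0.1)
spec[30:35, 10:20] = 1.0
spec[40:45, 60:75] = 1.0
print(detect_calls(spec, thresh=0.5))  # [(10, 20), (60, 75)]
```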
Affiliation(s)
- Kevin R. Coffey
- Psychiatry & Behavioral Sciences, University of Washington, Seattle, WA 98104, USA
- Ruby E. Marx
- Psychiatry & Behavioral Sciences, University of Washington, Seattle, WA 98104, USA
- John F. Neumaier
- Psychiatry & Behavioral Sciences, University of Washington, Seattle, WA 98104, USA
18
Luís AR, Alves IS, Sobreira FV, Couchinho MN, dos Santos ME. Brays and bits: information theory applied to acoustic communication sequences of bottlenose dolphins. BIOACOUSTICS 2018. [DOI: 10.1080/09524622.2018.1443285] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Affiliation(s)
- A. R. Luís
- MARE – Marine and Environmental Sciences Centre, ISPA – Instituto Universitário, Lisboa, Portugal
- Projecto Delfim – Centro Português de Estudo dos Mamíferos Marinhos, Lisboa, Portugal
- I. S. Alves
- MARE – Marine and Environmental Sciences Centre, ISPA – Instituto Universitário, Lisboa, Portugal
- F. V. Sobreira
- MARE – Marine and Environmental Sciences Centre, ISPA – Instituto Universitário, Lisboa, Portugal
- M. N. Couchinho
- MARE – Marine and Environmental Sciences Centre, ISPA – Instituto Universitário, Lisboa, Portugal
- Projecto Delfim – Centro Português de Estudo dos Mamíferos Marinhos, Lisboa, Portugal
- M. E. dos Santos
- MARE – Marine and Environmental Sciences Centre, ISPA – Instituto Universitário, Lisboa, Portugal
- Projecto Delfim – Centro Português de Estudo dos Mamíferos Marinhos, Lisboa, Portugal
19

20
Kershenbaum A, Déaux ÉC, Habib B, Mitchell B, Palacios V, Root-Gutteridge H, Waller S. Measuring acoustic complexity in continuously varying signals: how complex is a wolf howl? BIOACOUSTICS 2017. [DOI: 10.1080/09524622.2017.1317287] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Affiliation(s)
- Éloïse C. Déaux
- Department of Biological Sciences, Macquarie University, Sydney, Australia
- Bilal Habib
- Department of Animal Ecology and Conservation Biology, Wildlife Institute of India, Dehradun, India
- Brian Mitchell
- The Rubenstein School of Environment and Natural Resources, University of Vermont, Burlington, VT, USA
- Vicente Palacios
- Instituto Cavanilles de Biodiversidad y Biología Evolutiva, University of Valencia, Valencia, Spain
- Sara Waller
- Department of Philosophy, Montana State University, Bozeman, MT, USA
21
Everyday bat vocalizations contain information about emitter, addressee, context, and behavior. Sci Rep 2016; 6:39419. [PMID: 28005079 PMCID: PMC5178335 DOI: 10.1038/srep39419] [Citation(s) in RCA: 55] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2016] [Accepted: 11/22/2016] [Indexed: 11/09/2022] Open
Abstract
Animal vocal communication is often diverse and structured, yet the information contained in animal vocalizations remains elusive. Several studies have shown that animal calls convey information about their emitter and the context. Often, these studies focus on specific types of calls, as it is rarely possible to probe an entire vocal repertoire at once. In this study, we continuously monitored Egyptian fruit bats for months, recording audio and video around the clock. We analyzed almost 15,000 vocalizations, which accompanied the everyday interactions of the bats and were all directed toward specific individuals rather than broadcast. We found that bat vocalizations carry ample information about the identity of the emitter, the context of the call, the behavioral response to the call, and even the call's addressee. Our results underline the importance of studying the mundane, pairwise, directed vocal interactions of animals.
22
Moore RK, Marxer R, Thill S. Vocal Interactivity in-and-between Humans, Animals, and Robots. Front Robot AI 2016. [DOI: 10.3389/frobt.2016.00061] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/01/2023] Open
23
Anderson R, Waayers R, Knight A. Orca Behavior and Subsequent Aggression Associated with Oceanarium Confinement. Animals (Basel) 2016; 6:ani6080049. [PMID: 27548232 PMCID: PMC4997274 DOI: 10.3390/ani6080049] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2016] [Revised: 07/31/2016] [Accepted: 08/11/2016] [Indexed: 11/16/2022] Open
Abstract
Based on neuroanatomical indices such as brain size and encephalization quotient, orcas are among the most intelligent animals on Earth. They display a range of complex behaviors indicative of social intelligence, but these are difficult to study in the open ocean, where protective laws may apply, or in captivity, where access is constrained for commercial and safety reasons. From 1979 to 1980, however, we were able to interact with juvenile orcas in an unstructured way at San Diego's SeaWorld facility. We observed in the animals what appeared to be pranks, tests of trust, limited use of tactical deception, emotional self-control, and empathetic behaviors. Our observations were consistent with those of a former SeaWorld trainer, and provide important insights into orca cognition, communication, and social intelligence. However, after being trained as performers within SeaWorld's commercial entertainment program, a number of orcas began to exhibit aggressive behaviors. The orcas who had previously established apparent friendships with humans were most affected, although significant aggression also occurred in some of their descendants and among the orcas they lived with. Such oceanarium confinement and commercial use can no longer be considered ethically defensible, given the current understanding of orcas' advanced cognitive, social, and communicative capacities, and of their behavioral needs.
Affiliation(s)
- Robert Anderson
- Retired, Space Dynamics Laboratory, Utah State University Research Foundation, Logan, UT 84341, USA.
- Robyn Waayers
- Palomar College, 1140 West Mission Road, San Marcos, CA 92069, USA.
- Andrew Knight
- Centre for Animal Welfare, Faculty of Humanities and Social Sciences, University of Winchester, Sparkford Road, Winchester SO22 4NR, UK.
24
Disentangling canid howls across multiple species and subspecies: Structure in a complex communication channel. Behav Processes 2016; 124:149-57. [DOI: 10.1016/j.beproc.2016.01.006] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2015] [Revised: 01/15/2016] [Accepted: 01/20/2016] [Indexed: 11/22/2022]
25
Kershenbaum A, Blumstein DT, Roch MA, Akçay Ç, Backus G, Bee MA, Bohn K, Cao Y, Carter G, Cäsar C, Coen M, DeRuiter SL, Doyle L, Edelman S, Ferrer-i-Cancho R, Freeberg TM, Garland EC, Gustison M, Harley HE, Huetz C, Hughes M, Bruno JH, Ilany A, Jin DZ, Johnson M, Ju C, Karnowski J, Lohr B, Manser MB, McCowan B, Mercado E, Narins PM, Piel A, Rice M, Salmi R, Sasahara K, Sayigh L, Shiu Y, Taylor C, Vallejo EE, Waller S, Zamora-Gutierrez V. Acoustic sequences in non-human animals: a tutorial review and prospectus. Biol Rev Camb Philos Soc 2016; 91:13-52. [PMID: 25428267 PMCID: PMC4444413 DOI: 10.1111/brv.12160] [Citation(s) in RCA: 132] [Impact Index Per Article: 16.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2014] [Revised: 10/02/2014] [Accepted: 10/15/2014] [Indexed: 11/30/2022]
Abstract
Animal acoustic communication often takes the form of complex sequences, made up of multiple distinct acoustic units. Apart from the well-known example of birdsong, other animals such as insects, amphibians, and mammals (including bats, rodents, primates, and cetaceans) also generate complex acoustic sequences. Occasionally, as with birdsong, the adaptive role of these sequences seems clear (e.g. mate attraction and territorial defence). More often, however, researchers have only begun to characterise - let alone understand - the significance and meaning of acoustic sequences. Hypotheses abound, but there is little agreement as to how sequences should be defined and analysed. Our review aims to outline suitable methods for testing these hypotheses, and to describe the major limitations to our current and near-future knowledge on questions of acoustic sequences. This review and prospectus is the result of a collaborative effort between 43 scientists from the fields of animal behaviour, ecology and evolution, signal processing, machine learning, quantitative linguistics, and information theory, who gathered for a 2013 workshop entitled 'Analysing vocal sequences in animals'. Our goal is to present not just a review of the state of the art, but to propose a methodological framework that summarises what we suggest are the best practices for research in this field, across taxa and across disciplines. We also provide a tutorial-style introduction to some of the most promising algorithmic approaches for analysing sequences. We divide our review into three sections: identifying the distinct units of an acoustic sequence, describing the different ways that information can be contained within a sequence, and analysing the structure of that sequence. Each of these sections is further subdivided to address the key questions and approaches in that area. We propose a uniform, systematic, and comprehensive approach to studying sequences, with the goal of clarifying research terms used in different fields and facilitating collaboration and comparative studies. Enabling greater interdisciplinary collaboration will facilitate the investigation of many important questions in the evolution of communication and sociality.
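Among the sequence-structure analyses the review introduces is first-order Markov modelling of transitions between acoustic units. A minimal sketch over an invented unit sequence:

```python
from collections import Counter

def transition_probs(sequence):
    """First-order Markov transition probabilities between unit types."""
    pairs = Counter(zip(sequence, sequence[1:]))
    totals = Counter(sequence[:-1])
    return {(a, b): n / totals[a] for (a, b), n in pairs.items()}

seq = ["A", "B", "A", "B", "C", "A", "B"]
probs = transition_probs(seq)
print(probs[("A", "B")])  # 1.0: in this toy sequence, unit A is always followed by B
```

Higher-order models and information-theoretic measures of sequence structure build on the same transition counts.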
Affiliation(s)
- Arik Kershenbaum
- National Institute for Mathematical and Biological Synthesis, 1122 Volunteer Blvd., Suite 106, University of Tennessee, Knoxville, TN 37996-3410, USA
- Department of Zoology, University of Cambridge, Downing Street, Cambridge, CB2 3EJ, UK
- Daniel T. Blumstein
- Department of Ecology and Evolutionary Biology, University of California Los Angeles, 621 Charles E. Young Drive South, Los Angeles, CA 90095-1606, USA
- Marie A. Roch
- Department of Computer Science, San Diego State University, 5500 Campanile Dr, San Diego, CA 92182, USA
- Çağlar Akçay
- Lab of Ornithology, Cornell University, 159 Sapsucker Woods Rd, Ithaca, NY 14850, USA
- Gregory Backus
- Department of Biomathematics, North Carolina State University, Raleigh, NC 27607, USA
- Mark A. Bee
- Department of Ecology, Evolution and Behavior, University of Minnesota, 100 Ecology Building, 1987 Upper Buford Cir, Falcon Heights, MN 55108, USA
- Kirsten Bohn
- Integrated Science, Florida International University, Modesto Maidique Campus, 11200 SW 8th Street, AHC-4, 351, Miami, FL 33199, USA
- Yan Cao
- Department of Mathematical Sciences, University of Texas at Dallas, 800 W Campbell Rd, Richardson, TX 75080, USA
- Gerald Carter
- Biological Sciences Graduate Program, University of Maryland, College Park, MD 20742, USA
- Cristiane Cäsar
- Department of Psychology & Neuroscience, University of St. Andrews, St Mary’s Quad South Street, St Andrews, Fife, KY16 9JP, UK
- Michael Coen
- Department of Biostatistics and Medical Informatics, University of Wisconsin, K6/446 Clinical Sciences Center, 600 Highland Avenue, Madison, WI 53792-4675, USA
- Stacy L. DeRuiter
- School of Mathematics and Statistics, University of St. Andrews, St Andrews, KY16 9SS, UK
- Laurance Doyle
- Carl Sagan Center for the Study of Life in the Universe, SETI Institute, 189 Bernardo Ave, Suite 100, Mountain View, CA 94043, USA
- Shimon Edelman
- Department of Psychology, Cornell University, 211 Uris Hall, Ithaca, NY 14853-7601, USA
- Ramon Ferrer-i-Cancho
- Department of Computer Science, Universitat Politecnica de Catalunya, (Catalonia), Calle Jordi Girona, 31, 08034 Barcelona, Spain
- Todd M. Freeberg
- Department of Psychology, University of Tennessee, Austin Peay Building, Knoxville, Tennessee 37996, USA
- Ellen C. Garland
- National Marine Mammal Laboratory, AFSC/NOAA, 7600 Sand Point Way N.E., Seattle, Washington 98115, USA
- Morgan Gustison
- Department of Psychology, University of Michigan, 530 Church St, Ann Arbor, MI 48109, USA
- Heidi E. Harley
- Division of Social Sciences, New College of Florida, 5800 Bay Shore Rd, Sarasota, FL 34243, USA
- Chloé Huetz
- CNPS, CNRS UMR 8195, Université Paris-Sud, UMR 8195, Batiments 440-447, Rue Claude Bernard, 91405 Orsay, France
- Melissa Hughes
- Department of Biology, College of Charleston, 66 George St, Charleston, SC 29424, USA
- Julia Hyland Bruno
- Department of Psychology, Hunter College and the Graduate Center, The City University of New York, 365 Fifth Avenue, New York, NY 10016, USA
- Amiyaal Ilany
- National Institute for Mathematical and Biological Synthesis, 1122 Volunteer Blvd., Suite 106, University of Tennessee, Knoxville, TN 37996-3410, USA
- Dezhe Z. Jin
- Department of Physics, Pennsylvania State University, 104 Davey Lab, University Park, PA 16802-6300, USA
- Michael Johnson
- Department of Electrical and Computer Engineering, Marquette University, 1515 W. Wisconsin Ave., Milwaukee, WI 53233, USA
- Chenghui Ju
- Department of Biology, Queen College, The City Univ. of New York, 65-30 Kissena Blvd., Flushing, New York 11367, USA
- Jeremy Karnowski
- Department of Cognitive Science, University of California San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0515, USA
- Bernard Lohr
- Department of Biological Sciences, University of Maryland Baltimore County, 1000 Hilltop Circle, Baltimore, MD 21250, USA
- Marta B. Manser
- Institute of Evolutionary Biology and Environmental Studies, University of Zurich, Winterthurerstrasse 190, CH-8057 Zurich, Switzerland
- Brenda McCowan
- Department of Veterinary Medicine, University of California Davis, 1 Peter J Shields Ave, Davis, CA 95616, USA
- Eduardo Mercado
- Department of Psychology; Evolution, Ecology, & Behavior, University at Buffalo, The State University of New York, Park Hall Room 204, Buffalo, NY 14260-4110, USA
- Peter M. Narins
- Department of Integrative Biology & Physiology, University of California Los Angeles, 612 Charles E. Young Drive East, Los Angeles, CA 90095-7246, USA
- Alex Piel
- Division of Biological Anthropology, University of Cambridge, Pembroke Street Cambridge, CB2 3QG, UK
- Megan Rice
- Department of Psychology, California State University San Marcos, 333 S. Twin Oaks Valley Rd., San Marcos, CA 92096-0001, USA
- Roberta Salmi
- Department of Anthropology, University of Georgia at Athens, 355 S Jackson St, Athens, GA 30602, USA
- Kazutoshi Sasahara
- Graduate School of Information Science, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan
- Laela Sayigh
- Biology Department, Woods Hole Oceanographic Institution, 86 Water St, Woods Hole, MA 02543, USA
- Yu Shiu
- Lab of Ornithology, Cornell University, 159 Sapsucker Woods Rd, Ithaca, NY 14850, USA
- Charles Taylor
- Department of Ecology and Evolutionary Biology, University of California Los Angeles, 621 Charles E. Young Drive South, Los Angeles, CA 90095-1606, USA
- Edgar E. Vallejo
- Department of Computer Science, Monterrey Institute of Technology, Ave. Eugenio Garza Sada 2501 Sur Col. Tecnológico C.P. 64849, Monterrey, Nuevo León, Mexico
- Sara Waller
- Department of Philosophy, Montana State University, 2-155 Wilson Hall, Bozeman, Montana 59717, USA
- Veronica Zamora-Gutierrez
- Department of Zoology, University of Cambridge, Downing Street, Cambridge, CB2 3EJ, UK
- Centre for Biodiversity and Environmental Research, University College London, London WC1H 0AG, UK
26

27
Favaro L, Gamba M, Alfieri C, Pessani D, McElligott AG. Vocal individuality cues in the African penguin (Spheniscus demersus): a source-filter theory approach. Sci Rep 2015; 5:17255. [PMID: 26602001 PMCID: PMC4658557 DOI: 10.1038/srep17255] [Citation(s) in RCA: 41] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2015] [Accepted: 10/26/2015] [Indexed: 11/29/2022] Open
Abstract
The African penguin is a nesting seabird endemic to southern Africa. In penguins of the genus Spheniscus, vocalisations are important for social recognition, but it is not clear which acoustic features of calls encode individual identity information. We recorded contact calls and ecstatic display songs of 12 adult birds from a captive colony. For each vocalisation, we measured 31 spectral and temporal acoustic parameters related to both the source and filter components of calls, and for each parameter we calculated the Potential of Individual Coding (PIC). The acoustic parameters showing PIC ≥ 1.1 were used to perform a stepwise cross-validated discriminant function analysis (DFA). The DFA assigned 66.1% of the contact calls and 62.5% of the display songs to the correct individual, and further selected 10 acoustic features for contact calls and 9 for display songs that were important for vocal individuality. Our results suggest that studying the anatomical constraints that influence nesting penguin vocalisations from a source-filter perspective can lead to a much better understanding of the acoustic cues of individuality contained in their calls. This approach could be further extended to study and understand vocal communication in other bird species.
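The PIC used here is commonly computed as the ratio of the between-individual coefficient of variation to the mean within-individual coefficient of variation, with values above roughly 1 indicating that a parameter varies more between than within individuals. The sketch below assumes that formulation; the measurements are invented:

```python
import numpy as np

def pic(values_per_individual):
    """Potential of Individual Coding: between-individual CV / mean within-individual CV."""
    all_vals = np.concatenate(values_per_individual)
    cv_between = all_vals.std(ddof=1) / all_vals.mean()
    cv_within = np.mean([v.std(ddof=1) / v.mean() for v in values_per_individual])
    return cv_between / cv_within

# toy data: three birds whose calls differ clearly in one acoustic parameter
birds = [np.array([100.0, 102, 101]),
         np.array([150.0, 151, 149]),
         np.array([200.0, 198, 202])]
print(pic(birds) > 1.1)  # True: the parameter is individually distinctive
```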
Affiliation(s)
- Livio Favaro
- Department of Life Sciences and Systems Biology, University of Turin, Via Accademia Albertina 13, 10123 Turin, Italy
| | - Marco Gamba
- Department of Life Sciences and Systems Biology, University of Turin, Via Accademia Albertina 13, 10123 Turin, Italy
| | - Chiara Alfieri
- Department of Life Sciences and Systems Biology, University of Turin, Via Accademia Albertina 13, 10123 Turin, Italy
| | - Daniela Pessani
- Department of Life Sciences and Systems Biology, University of Turin, Via Accademia Albertina 13, 10123 Turin, Italy
| | - Alan G. McElligott
- Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, Mile End Road, London E1 4NS, UK
28
Pimm SL, Alibhai S, Bergl R, Dehgan A, Giri C, Jewell Z, Joppa L, Kays R, Loarie S. Emerging Technologies to Conserve Biodiversity. Trends Ecol Evol 2015; 30:685-696. [PMID: 26437636 DOI: 10.1016/j.tree.2015.08.008] [Citation(s) in RCA: 143] [Impact Index Per Article: 15.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2015] [Revised: 08/15/2015] [Accepted: 08/18/2015] [Indexed: 10/23/2022]
Abstract
Technologies to identify individual animals, follow their movements, identify and locate animal and plant species, and assess the status of their habitats remotely have become better, faster, and cheaper just as threats to the survival of species are increasing. New technologies alone do not save species, however, and new data create new problems. For example, improving technologies alone cannot prevent poaching: solutions require providing appropriate tools to the right people. Habitat loss is another driver of species decline: the challenge here is to connect existing sophisticated remote sensing with species occurrence data to predict where species remain. Other challenges include assembling a wider public to crowdsource data, managing the massive quantities of data generated, and developing solutions to rapidly emerging threats.
Affiliation(s)
- Stuart L Pimm
- Nicholas School of the Environment, Duke University, Box 90328, Durham, NC 27708, USA.
- Sky Alibhai
- WildTrack Inc., JMP Division, SAS Institute, SAS Campus Drive, Cary, NC 27513, USA
- Richard Bergl
- North Carolina Zoological Park, 4401 Zoo Parkway, Asheboro, NC 27401, USA
- Alex Dehgan
- Conservation X Labs, 2380 Champlain Street NW, Washington, DC 20009, USA
- Chandra Giri
- US Geological Survey/Earth Resources Observation and Science (EROS), Center/Nicholas School of the Environment, Duke University, Box 90328, Durham, NC 27708, USA
- Zoë Jewell
- WildTrack Inc., JMP Division, SAS Institute, SAS Campus Drive, Cary, NC 27513, USA
- Lucas Joppa
- Microsoft Research 14820 NE 36th Street, Redmond, WA 98052, USA
- Roland Kays
- North Carolina Museum of Natural Sciences, 11 West Jones Street, Raleigh, NC 27601, USA; Department of Forestry and Environmental Resources, North Carolina State University, Raleigh, NC 27695, USA
- Scott Loarie
- iNaturalist Department, California Academy of Sciences, San Francisco, CA 94118, USA
29
Kershenbaum A, Garland EC. Quantifying similarity in animal vocal sequences: which metric performs best? Methods Ecol Evol 2015. [DOI: 10.1111/2041-210x.12433] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Affiliation(s)
- Ellen C. Garland
- School of Biology, University of St Andrews, St Andrews, Fife KY16 9TH, UK
30
Wadewitz P, Hammerschmidt K, Battaglia D, Witt A, Wolf F, Fischer J. Characterizing Vocal Repertoires--Hard vs. Soft Classification Approaches. PLoS One 2015; 10:e0125785. [PMID: 25915039 PMCID: PMC4411004 DOI: 10.1371/journal.pone.0125785] [Citation(s) in RCA: 45] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2014] [Accepted: 03/24/2015] [Indexed: 11/18/2022] Open
Abstract
To understand the proximate and ultimate causes that shape acoustic communication in animals, objective characterizations of the vocal repertoire of a given species are critical, as they provide the foundation for comparative analyses among individuals, populations and taxa. Progress in this field, however, has been hampered by a lack of methodological standardization. One problem is that researchers may settle on different variables to characterize the calls, which may affect how the calls are classified. More importantly, there is no agreement on how best to characterize the overall structure of the repertoire in terms of the amount of gradation within and between call types. Here, we address these challenges by examining 912 calls recorded from wild chacma baboons (Papio ursinus). We extracted 118 acoustic variables from spectrograms, from which we constructed different sets of acoustic features containing 9, 38, and 118 variables, as well as 19 factors derived from principal component analysis. We compared and validated the resulting classifications of k-means and hierarchical clustering. Datasets with a higher number of acoustic features led to better clustering results than datasets with only a few features. The use of factors in the cluster analysis resulted in an extremely poor resolution of emerging call types. Another important finding is that none of the applied clustering methods gave strong support to a specific cluster solution. Instead, the cluster analysis revealed that subtypes may exist within distinct call types. Because hard clustering methods are not well suited to capturing such gradation within call types, we applied a fuzzy clustering algorithm. We found that this algorithm provides a detailed and quantitative description of the gradation within and between chacma baboon call types. In conclusion, we suggest that fuzzy clustering be used in future studies to analyze the graded structure of vocal repertoires. Moreover, the use of factor analyses to reduce the number of acoustic variables should be discouraged.
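The abstract above contrasts hard clustering (k-means, hierarchical), which forces each call into exactly one type, with fuzzy clustering, which assigns each call a graded membership in every cluster. As a minimal sketch of the idea (not the authors' actual analysis pipeline), a standard fuzzy c-means update over a matrix of acoustic feature vectors looks like this; the function name, fuzzifier value and iteration count are illustrative assumptions:

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
    """Fuzzy c-means on rows of X: returns cluster centers and a graded
    membership matrix U (n_samples x c), where each row sums to 1."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per call
    for _ in range(n_iter):
        W = U ** m                               # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                    # guard against zero distances
        # u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        U = 1.0 / ratio.sum(axis=2)
    return centers, U
```

Graded calls between two types then show intermediate memberships (e.g. 0.6/0.4) rather than being forced into one cluster, which is exactly the property the abstract argues makes fuzzy clustering suitable for graded repertoires.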
Collapse
Affiliation(s)
- Philip Wadewitz
- Cognitive Ethology Laboratory, German Primate Center, Göttingen, Germany
- Theoretical Neurophysics, Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Kurt Hammerschmidt
- Cognitive Ethology Laboratory, German Primate Center, Göttingen, Germany
- Demian Battaglia
- Theoretical Neurophysics, Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Theoretical Neurosciences Group, Institute for Systems Neuroscience, Marseille, France
- Annette Witt
- Theoretical Neurophysics, Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Fred Wolf
- Theoretical Neurophysics, Max Planck Institute for Dynamics and Self-Organization, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
- Julia Fischer
- Cognitive Ethology Laboratory, German Primate Center, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, Göttingen, Germany
Collapse
|
31
|
Stowell D, Plumbley MD. Large-scale analysis of frequency modulation in birdsong data bases. Methods Ecol Evol 2014. [DOI: 10.1111/2041-210x.12223] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Affiliation(s)
- Dan Stowell
- Centre for Digital Music, Queen Mary University of London, Mile End Road, London E1 4NS, UK
- Mark D. Plumbley
- Centre for Digital Music, Queen Mary University of London, Mile End Road, London E1 4NS, UK
Collapse
|
32
|
Kershenbaum A, Roch MA. An image processing based paradigm for the extraction of tonal sounds in cetacean communications. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2013; 134:4435. [PMID: 25669255 PMCID: PMC3874055 DOI: 10.1121/1.4828821] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/24/2013] [Revised: 10/04/2013] [Accepted: 10/14/2013] [Indexed: 05/29/2023]
Abstract
Dolphins and whales use tonal whistles for communication, and frequency modulation is known to encode contextual information. An automated mathematical algorithm could characterize the frequency modulation of tonal calls for use in clustering and classification. Most automatic cetacean whistle-processing techniques are based on peak or edge detection, or require analyst assistance in verifying detections. An alternative paradigm is introduced using techniques from image processing. Frequency information is extracted as ridges in whistle spectrograms. Spectral ridges are the fundamental structure of tonal vocalizations, and ridge detection is a well-established image-processing technique that is easily applied to vocalization spectrograms. This paradigm is implemented as freely available MATLAB scripts, coined IPRiT (image processing ridge tracker). Its fidelity in reconstructing synthesized whistles is compared to another published whistle-detection software package, silbido. Both algorithms are also applied to real-world recordings of bottlenose dolphin (Tursiops truncatus) signature whistles and tested for the ability to identify whistles belonging to different individuals. IPRiT gave higher fidelity and fewer false detections than silbido on synthesized whistles, and reconstructed dolphin identity groups from signature whistles, whereas silbido could not. IPRiT appears to be superior to silbido for extracting the precise frequency variation of a whistle.
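The core idea above is that a tonal whistle appears as a ridge of high energy in the spectrogram, and tracing that ridge frame by frame recovers the frequency-modulation contour. IPRiT itself is a MATLAB package; the following is a hypothetical minimal Python analogue (function name, window length, and SNR threshold are illustrative assumptions, and it only picks the single strongest bin per frame rather than performing true 2-D ridge detection):

```python
import numpy as np
from scipy.signal import spectrogram

def trace_ridge(x, fs, nperseg=256, snr_db=10.0):
    """Trace the dominant spectral ridge of a tonal sound, frame by frame.
    Returns frame times and the ridge frequency per frame (NaN where the
    frame has no bin sufficiently above the noise floor)."""
    f, t, S = spectrogram(x, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    power_db = 10 * np.log10(S + 1e-12)
    noise_floor = np.median(power_db)            # crude global noise estimate
    ridge = np.full(t.size, np.nan)
    for j in range(t.size):
        k = np.argmax(power_db[:, j])            # strongest bin = ridge point
        if power_db[k, j] > noise_floor + snr_db:
            ridge[j] = f[k]                      # keep only high-SNR points
    return t, ridge
```

Running this on a synthesized upward chirp yields a rising frequency contour, which is the kind of contour that would then feed clustering or classification of whistle types.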
Collapse
Affiliation(s)
- Arik Kershenbaum
- National Institute for Mathematical and Biological Synthesis, Knoxville, Tennessee 37996
- Marie A Roch
- Department of Computer Science, San Diego State University, San Diego, California 92182
Collapse
|