1. Bennitt E. Automated identification of African carnivores: conservation applications. Trends Ecol Evol 2024;39:125-127. PMID: 38185582. DOI: 10.1016/j.tree.2023.12.007.
Abstract
Photographic images taken by tourists and uploaded to the African Carnivore Wildbook have been used by Cozzi et al. to identify individual African wild dogs and study their dispersal behavior. Collaborations among citizen scientists, computer scientists, and researchers can expand the reach of conservation efforts spatially and temporally.
Affiliation(s)
- Emily Bennitt
- Okavango Research Institute, University of Botswana, Shorobe Road, Sexaxa, Maun, Botswana.
2. Brickson L, Zhang L, Vollrath F, Douglas-Hamilton I, Titus AJ. Elephants and algorithms: a review of the current and future role of AI in elephant monitoring. J R Soc Interface 2023;20:20230367. PMID: 37963556. PMCID: PMC10645515. DOI: 10.1098/rsif.2023.0367.
Abstract
Artificial intelligence (AI) and machine learning (ML) present revolutionary opportunities to enhance our understanding of animal behaviour and conservation strategies. Using elephants, a crucial species in the protected areas of Africa and Asia, as our focal point, we delve into the role of AI and ML in their conservation. Given the increasing amounts of data gathered from a variety of sensors such as cameras, microphones, geophones, drones and satellites, the challenge lies in managing and interpreting this vast volume of data. New AI and ML techniques offer solutions to streamline this process, helping us extract vital information that might otherwise be overlooked. This paper focuses on the different AI-driven monitoring methods and their potential for improving elephant conservation. Collaborative efforts between AI experts and ecological researchers are essential in leveraging these innovative technologies for enhanced wildlife conservation, setting a precedent for numerous other species.
Affiliation(s)
- Fritz Vollrath
- Save the Elephants, Nairobi, Kenya
- Department of Biology, University of Oxford, Oxford, UK
- Alexander J. Titus
- Colossal Biosciences, Dallas, TX, USA
- Information Sciences Institute, University of Southern California, Los Angeles, USA
3. Phaniraj N, Wierucka K, Zürcher Y, Burkart JM. Who is calling? Optimizing source identification from marmoset vocalizations with hierarchical machine learning classifiers. J R Soc Interface 2023;20:20230399. PMID: 37848054. PMCID: PMC10581777. DOI: 10.1098/rsif.2023.0399.
Abstract
With their highly social nature and complex vocal communication system, marmosets are important models for comparative studies of vocal communication and, eventually, language evolution. However, our knowledge about marmoset vocalizations predominantly originates from playback studies or vocal interactions between dyads, and there is a need to move towards studying group-level communication dynamics. Efficient source identification from marmoset vocalizations is essential for this challenge, and machine learning algorithms (MLAs) can aid it. Here we built a pipeline capable of plentiful feature extraction, meaningful feature selection, and supervised classification of vocalizations of up to 18 marmosets. We optimized the classifier by building a hierarchical MLA that first learned to determine the sex of the source, narrowed down the possible source individuals based on their sex, and then determined the source identity. We were able to correctly identify the source individual with high precision (87.21%-94.42%, depending on call type, and up to 97.79% after the removal of twins from the dataset). We also examined the robustness of identification across varying sample sizes. Our pipeline is a promising tool not only for source identification from marmoset vocalizations but also for analysing vocalizations of other species.
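The two-stage scheme described above can be sketched as follows; this is a minimal illustration in which synthetic features and scikit-learn classifiers stand in for the authors' actual feature-extraction and modelling pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for acoustic call features: 8 individuals (4 per sex),
# 40 calls each, 10 features per call, with individual-specific offsets.
n_ind, n_calls, n_feat = 8, 40, 10
ids = np.repeat(np.arange(n_ind), n_calls)
sexes = ids % 2                          # even ids "female", odd ids "male"
X = rng.normal(size=(n_ind * n_calls, n_feat)) + ids[:, None] * 1.5

# Stage 1: classify the sex of the caller.
sex_clf = RandomForestClassifier(random_state=0).fit(X, sexes)

# Stage 2: one identity classifier per sex, trained only on that sex's
# calls, so each model discriminates among fewer candidate individuals.
id_clfs = {s: RandomForestClassifier(random_state=0).fit(X[sexes == s], ids[sexes == s])
           for s in (0, 1)}

def identify(x):
    """Hierarchical prediction: sex first, then identity within that sex."""
    s = sex_clf.predict(x.reshape(1, -1))[0]
    return id_clfs[s].predict(x.reshape(1, -1))[0]

preds = np.array([identify(x) for x in X])
print("training accuracy:", (preds == ids).mean())
```

Splitting the candidate pool by sex halves the number of individuals each identity classifier must separate, which is the intuition behind the hierarchy.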
Affiliation(s)
- Nikhil Phaniraj
- Institute of Evolutionary Anthropology (IEA), University of Zurich, Winterthurerstrasse 190, 8057 Zürich, Switzerland
- Neuroscience Center Zurich (ZNZ), University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057 Zürich, Switzerland
- Department of Biology, Indian Institute of Science Education and Research (IISER) Pune, Dr. Homi Bhabha Road, Pune 411008, India
- Kaja Wierucka
- Institute of Evolutionary Anthropology (IEA), University of Zurich, Winterthurerstrasse 190, 8057 Zürich, Switzerland
- Behavioral Ecology & Sociobiology Unit, German Primate Center, Leibniz Institute for Primate Research, Kellnerweg 4, 37077 Göttingen, Germany
- Yvonne Zürcher
- Institute of Evolutionary Anthropology (IEA), University of Zurich, Winterthurerstrasse 190, 8057 Zürich, Switzerland
- Judith M. Burkart
- Institute of Evolutionary Anthropology (IEA), University of Zurich, Winterthurerstrasse 190, 8057 Zürich, Switzerland
- Neuroscience Center Zurich (ZNZ), University of Zurich and ETH Zurich, Winterthurerstrasse 190, 8057 Zürich, Switzerland
- Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Affolternstrasse 56, 8050 Zürich, Switzerland
4. Yu Y, Niu Q, Li X, Xue J, Liu W, Lin D. A Review of Fingerprint Sensors: Mechanism, Characteristics, and Applications. Micromachines 2023;14:1253. PMID: 37374839. DOI: 10.3390/mi14061253.
Abstract
Identification technology based on biometrics is a branch of research that employs the unique individual traits of humans to authenticate identity; it is regarded as a highly secure method of identification because of the exceptional dependability and stability of human biometrics. Common biometric identifiers include fingerprints, irises, faces, and voices, among others. In the realm of biometric recognition, fingerprint recognition has gained success with its convenient operation and fast identification speed. Fingerprint collection techniques, which supply the fingerprint information for fingerprint identification systems, have attracted a significant deal of interest in authentication technology. This work presents several fingerprint acquisition techniques, such as optical, capacitive, and ultrasonic sensing, and analyzes acquisition types and structures. In addition, the pros and cons of the various sensor types, as well as the limits and benefits of the optical, capacitive, and ultrasonic kinds, are discussed. Fingerprint sensing is also a necessary stage for applications of the Internet of Things (IoT).
Affiliation(s)
- Yirong Yu
- School of Optoelectronic Engineering, Xi'an Technological University, Xi'an 710032, China
- Qiming Niu
- School of Optoelectronic Engineering, Xi'an Technological University, Xi'an 710032, China
- Xuyang Li
- School of Optoelectronic Engineering, Xi'an Technological University, Xi'an 710032, China
- Jianshe Xue
- BOE Display Technology Co., Ltd., Beijing 100176, China
- Weiguo Liu
- School of Optoelectronic Engineering, Xi'an Technological University, Xi'an 710032, China
- Dabin Lin
- School of Optoelectronic Engineering, Xi'an Technological University, Xi'an 710032, China
5. Chelysheva EV, Klenova AV, Volodin IA, Volodina EV. Advertising sex and individual identity by long-distance chirps in wild-living mature cheetahs (Acinonyx jubatus). Ethology 2023. DOI: 10.1111/eth.13366.
Affiliation(s)
- Anna V. Klenova
- Department of Vertebrate Zoology, Faculty of Biology Lomonosov Moscow State University Moscow Russia
- Ilya A. Volodin
- Department of Vertebrate Zoology, Faculty of Biology Lomonosov Moscow State University Moscow Russia
- Department of Behaviour and Behavioural Ecology of Mammals, Severtsov Institute of Ecology and Evolution Russian Academy of Sciences Moscow Russia
- Elena V. Volodina
- Department of Behaviour and Behavioural Ecology of Mammals, Severtsov Institute of Ecology and Evolution Russian Academy of Sciences Moscow Russia
6. Selection levels on vocal individuality: strategic use or byproduct. Curr Opin Behav Sci 2022. DOI: 10.1016/j.cobeha.2022.101140.
7. Lehmann KDS, Jensen FH, Gersick AS, Strandburg-Peshkin A, Holekamp KE. Long-distance vocalizations of spotted hyenas contain individual, but not group, signatures. Proc Biol Sci 2022;289:20220548. PMID: 35855604. PMCID: PMC9297016. DOI: 10.1098/rspb.2022.0548.
Abstract
In animal societies, identity signals are common, mediate interactions within groups, and allow individuals to discriminate group-mates from out-group competitors. However, individual recognition becomes increasingly challenging as group size increases and as signals must be transmitted over greater distances. Group vocal signatures may evolve when successful in-group/out-group distinctions are at the crux of fitness-relevant decisions, but group signatures alone are insufficient when differentiated within-group relationships are important for decision-making. Spotted hyenas are social carnivores that live in stable clans of fewer than 125 individuals composed of multiple unrelated matrilines. Clan members cooperate to defend resources and communal territories from neighbouring clans and other mega-carnivores; this collective defence is mediated by long-range recruitment vocalizations (audible up to 5 km away), called whoops. Here, we use machine learning to determine that spotted hyena whoops contain individual but not group signatures, and that fundamental frequency features which propagate well are critical for individual discrimination. For effective clan-level cooperation, hyenas face the cognitive challenge of remembering and recognizing individual voices at long range. We show that serial redundancy in whoop bouts increases individual classification accuracy, and thus the extended call bouts used by hyenas probably evolved to overcome the challenges of communicating individual identity at long distance.
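The serial-redundancy effect reported above can be illustrated with a toy simulation: if each call in a bout is classified independently, pooling the decisions by majority vote raises bout-level accuracy. The single-call accuracy, candidate-pool size, and uniform error model are assumptions for illustration, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ind, p_single = 10, 0.6    # assumed: 10 candidate hyenas, 60% single-call accuracy

def bout_accuracy(n_calls, trials=5000):
    """Accuracy of majority-vote identification over a bout of n_calls, when
    each call is classified correctly with probability p_single and errors are
    spread uniformly over the other individuals. (Ties favour the true
    individual here, which slightly flatters very short bouts.)"""
    correct = 0
    for _ in range(trials):
        votes = np.where(rng.random(n_calls) < p_single,
                         0,                                   # true individual = 0
                         rng.integers(1, n_ind, n_calls))     # some wrong individual
        counts = np.bincount(votes, minlength=n_ind)
        correct += counts.argmax() == 0
    return correct / trials

for n in (1, 3, 5, 9):
    print(f"bout of {n} call(s): accuracy {bout_accuracy(n):.3f}")
```

Accuracy climbs steadily with bout length, mirroring the paper's argument that repeating the whoop gives listeners more chances to recover the caller's identity.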
Affiliation(s)
- Kenna D. S. Lehmann
- School of Biological Sciences, University of Nebraska-Lincoln, 1101 T Street, Lincoln, NE 68588, USA
- Frants H. Jensen
- Department of Biology, Syracuse University, 107 College Place, Syracuse, NY 13244, USA
- Biology Department, Woods Hole Oceanographic Institution, Woods Hole, MA 02543, USA
- Andrew S. Gersick
- Department of Ecology and Evolutionary Biology, Princeton University, 106A Guyot Hall, Princeton, NJ 08544, USA
- Ariana Strandburg-Peshkin
- Biology Department, University of Konstanz, Universitätsstrasse 10, 78464 Konstanz, Germany
- Centre for the Advanced Study of Collective Behaviour, University of Konstanz, Universitätsstrasse 10, 78464 Konstanz, Germany
- Department for the Ecology of Animal Societies, Max Planck Institute of Animal Behaviour, Bücklestrasse 5a, 78467 Konstanz, Germany
- Kay E. Holekamp
- Department of Integrative Biology, Michigan State University, 288 Farm Lane, East Lansing, MI 48824, USA
- Ecology, Evolution, and Behavior Program, Michigan State University, 293 Farm Lane, East Lansing, MI 48824, USA
8. Trapanotto M, Nanni L, Brahnam S, Guo X. Convolutional Neural Networks for the Identification of African Lions from Individual Vocalizations. J Imaging 2022;8(4):96. PMID: 35448223. PMCID: PMC9029749. DOI: 10.3390/jimaging8040096.
Abstract
The classification of vocal individuality for passive acoustic monitoring (PAM) and census of animals is becoming an increasingly popular area of research. Nearly all studies in this field have relied on classic audio representations and classifiers, such as Support Vector Machines (SVMs) trained on spectrograms or Mel-Frequency Cepstral Coefficients (MFCCs). In contrast, most current bioacoustic species classification exploits the power of deep learners and more cutting-edge audio representations. A significant reason for avoiding deep learning in vocal identity classification is the tiny sample size of the available collections of labeled individual vocalizations; as is well known, deep learners require large datasets to avoid overfitting. One way to handle small datasets with deep learning methods is transfer learning. In this work, we evaluate the performance of three pretrained CNNs (VGG16, ResNet50, and AlexNet) on a small, publicly available lion roar dataset containing approximately 150 samples taken from five male lions. Each of these networks is retrained on eight representations of the samples: MFCCs, spectrogram, and Mel spectrogram, along with several new ones, such as VGGish and Stockwell, and those based on the recently proposed LM spectrogram. The performance of these networks, both individually and in ensembles, is analyzed and corroborated using the Equal Error Rate, and is shown to surpass previous classification attempts on this dataset; the best single network achieved over 95% accuracy and the best ensembles over 98% accuracy. This study demonstrates that it is valuable and possible, with caution, to use transfer learning with single pretrained CNNs on the small datasets available for this problem domain. We also contribute to bioacoustics generally by comparing the performance of many state-of-the-art audio representations, including, for the first time, the LM spectrogram and Stockwell representations. All source code for this study is available on GitHub.
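The ensembling step can be sketched as score-level fusion (the "sum rule"): average the class-probability matrices produced by networks trained on different audio representations, then take the argmax. The synthetic score matrices below are stand-ins for real CNN outputs; the noise levels are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n_clips, n_lions = 300, 5
true_ids = rng.integers(0, n_lions, n_clips)

def noisy_scores(noise):
    """Fake per-clip class-probability matrix from one network: a logit bump
    on the true lion plus representation-specific Gaussian noise, softmaxed."""
    logits = rng.normal(scale=noise, size=(n_clips, n_lions))
    logits[np.arange(n_clips), true_ids] += 1.0
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

# Three "networks", e.g. trained on MFCCs, spectrogram, and Mel spectrogram.
scores = [noisy_scores(noise) for noise in (1.0, 1.2, 1.4)]

single_acc = [(s.argmax(1) == true_ids).mean() for s in scores]
fused = np.mean(scores, axis=0)              # sum rule: average the score matrices
fused_acc = (fused.argmax(1) == true_ids).mean()
print("single-network accuracies:", [round(a, 3) for a in single_acc])
print("fused accuracy:", round(fused_acc, 3))
```

Because each representation's errors are partly independent, averaging the scores suppresses the noise and the fused decision beats the typical single network, matching the ensemble gains reported above.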
Affiliation(s)
- Martino Trapanotto
- Department of Information Engineering, University of Padua, Via Gradenigo 6, 35131 Padova, Italy; (M.T.); (L.N.)
- Loris Nanni
- Department of Information Engineering, University of Padua, Via Gradenigo 6, 35131 Padova, Italy; (M.T.); (L.N.)
- Sheryl Brahnam
- Information Technology and Cybersecurity, Missouri State University, 901 S. National, Springfield, MO 65897, USA;
- Correspondence: Tel.: +1-417-873-9979
- Xiang Guo
- Information Technology and Cybersecurity, Missouri State University, 901 S. National, Springfield, MO 65897, USA;
9. Eisenring E, Eens M, Pradervand J, Jacot A, Baert J, Ulenaers E, Lathouwers M, Evens R. Quantifying song behavior in a free-living, light-weight, mobile bird using accelerometers. Ecol Evol 2022;12:e8446. PMID: 35127007. PMCID: PMC8803288. DOI: 10.1002/ece3.8446.
Abstract
To acquire a fundamental understanding of animal communication, continuous observations in a natural setting and at an individual level are required. Whereas the use of animal-borne acoustic recorders in vocal studies remains challenging, light-weight accelerometers can potentially register individuals' vocal output when this coincides with body vibrations. We collected one-dimensional accelerometer data using light-weight tags on a free-living, crepuscular bird species, the European Nightjar (Caprimulgus europaeus). We developed a classification model to identify four behaviors (rest, sing, fly, and leap) from accelerometer data and, for the purpose of this study, validated the classification of song behavior. Male nightjars produce a distinctive "churring" song while they rest on a stationary song post. We expected churring to be associated with body vibrations (i.e., medium-amplitude body acceleration), which we assumed would be easy to distinguish from resting (i.e., low-amplitude body acceleration). We validated the classification of song behavior using simultaneous GPS tracking data (i.e., information on individuals' movement and proximity to audio recorders) and vocal recordings from stationary audio recorders at known song posts of one tracked individual. Song activity was detected by the classification model with an accuracy of 92%. Beyond a threshold of 20 m from the audio recorders, only 8% of the classified song bouts were recorded. The duration of the detected song activity (i.e., acceleration data) was highly correlated with the duration of the simultaneously recorded song bouts (correlation coefficient = 0.87, N = 10, S = 21.7, p = .001). We show that accelerometer-based identification of vocalizations could serve as a promising tool to study communication in free-living, small-sized birds and demonstrate possible limitations of audio recorders to investigate individual-based variation in song behavior.
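The core idea above, that churring produces medium-amplitude body acceleration distinguishable from near-still rest and high-amplitude flight, can be sketched with a toy amplitude-threshold classifier. The signal model and thresholds are illustrative assumptions, not the study's fitted model:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(behavior, n=500):
    """Toy 1-D accelerometer trace (arbitrary units): rest is near-still,
    churring adds a medium-amplitude body vibration, flight large wingbeats."""
    amp = {"rest": 0.02, "sing": 0.2, "fly": 1.0}[behavior]
    return amp * np.sin(np.linspace(0, 60 * np.pi, n)) + rng.normal(0.0, 0.01, n)

def classify(window, sing_thr=0.05, fly_thr=0.5):
    """Label a window by the standard deviation of its dynamic (mean-removed)
    acceleration. Thresholds are illustrative, not fitted to nightjar data."""
    s = np.std(window - window.mean())
    if s < sing_thr:
        return "rest"
    return "sing" if s < fly_thr else "fly"

for b in ("rest", "sing", "fly"):
    print(b, "->", classify(simulate(b)))
```

In practice the authors trained a classification model on labelled data rather than hand-set thresholds, but the separability of the three amplitude regimes is what makes song detectable from acceleration at all.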
Affiliation(s)
- Elena Eisenring
- Department of Biology, Behavioural Ecology and Ecophysiology Group, University of Antwerp, Wilrijk, Belgium
- Marcel Eens
- Department of Biology, Behavioural Ecology and Ecophysiology Group, University of Antwerp, Wilrijk, Belgium
- Alain Jacot
- Swiss Ornithological Institute, Field Station Valais, Sion, Switzerland
- Jan Baert
- Department of Biology, Behavioural Ecology and Ecophysiology Group, University of Antwerp, Wilrijk, Belgium
- Terrestrial Ecology Unit, Department of Biology, Ghent University, Ghent, Belgium
- Eddy Ulenaers
- Agentschap Natuur en Bos, Regio Noord-Limburg, Brussels, Belgium
- Michiel Lathouwers
- Research Group: Zoology, Biodiversity and Toxicology, Centre for Environmental Sciences, Hasselt University, Diepenbeek, Belgium
- Department of Geography, Institute of Life, Earth and Environment (ILEE), University of Namur, Namur, Belgium
- Ruben Evens
- Department of Biology, Behavioural Ecology and Ecophysiology Group, University of Antwerp, Wilrijk, Belgium
- Max Planck Institute for Ornithology, Seewiesen, Germany
10. Reinwald M, Moseley B, Szenicer A, Nissen-Meyer T, Oduor S, Vollrath F, Markham A, Mortimer B. Seismic localization of elephant rumbles as a monitoring approach. J R Soc Interface 2021;18:20210264. PMID: 34255988. PMCID: PMC8277467. DOI: 10.1098/rsif.2021.0264.
Abstract
African elephants (Loxodonta africana) are sentient and intelligent animals that use a variety of vocalizations to greet, warn or communicate with each other. Their low-frequency rumbles propagate through the air as well as through the ground and the physical properties of both media cause differences in frequency filtering and propagation distances of the respective wave. However, it is not well understood how each mode contributes to the animals' abilities to detect these rumbles and extract behavioural or spatial information. In this study, we recorded seismic and co-generated acoustic rumbles in Kenya and compared their potential use to localize the vocalizing animal using the same multi-lateration algorithms. For our experimental set-up, seismic localization has higher accuracy than acoustic, and bimodal localization does not improve results. We conclude that seismic rumbles can be used to remotely monitor and even decipher elephant social interactions, presenting us with a tool for far-reaching, non-intrusive and surprisingly informative wildlife monitoring.
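The multi-lateration step can be sketched as a nonlinear least-squares fit of source position and emission time to the arrival times recorded across a sensor array. The array geometry and surface-wave speed below are illustrative assumptions, not those of the Kenyan deployment:

```python
import numpy as np
from scipy.optimize import least_squares

# Geophone positions (m) and an assumed seismic surface-wave speed.
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0],
                    [100.0, 100.0], [50.0, 130.0]])
v = 250.0                                  # m/s, assumed Rayleigh-wave speed

# Synthesize noiseless arrival times for a known source and emission time.
true_src = np.array([70.0, 40.0])
t0 = 0.3
arrivals = t0 + np.linalg.norm(sensors - true_src, axis=1) / v

def residuals(params):
    """Mismatch between predicted and observed arrival times for a candidate
    source position (x, y) and emission time t."""
    x, y, t = params
    pred = t + np.linalg.norm(sensors - np.array([x, y]), axis=1) / v
    return pred - arrivals

fit = least_squares(residuals, x0=[50.0, 50.0, 0.0])
print("estimated source (m):", fit.x[:2])
```

With real recordings the arrival times come from picking the rumble onset in each seismic trace, and picking noise (not solver error) dominates the localization accuracy.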
Affiliation(s)
- Ben Moseley
- Department of Computer Science, University of Oxford, Oxford, UK
- Fritz Vollrath
- Department of Zoology, University of Oxford, Oxford, UK
- Save the Elephants, Marula Manor, Karen, Nairobi, Kenya
- Andrew Markham
- Department of Computer Science, University of Oxford, Oxford, UK
- Beth Mortimer
- Department of Zoology, University of Oxford, Oxford, UK