1. Gavojdian D, Mincu M, Lazebnik T, Oren A, Nicolae I, Zamansky A. BovineTalk: machine learning for vocalization analysis of dairy cattle under the negative affective state of isolation. Front Vet Sci 2024;11:1357109. PMID: 38362300; PMCID: PMC10867142; DOI: 10.3389/fvets.2024.1357109.
Abstract
There is a critical need to develop and validate non-invasive, animal-based indicators of affective states in livestock species, in order to integrate them into on-farm assessment protocols, potentially via precision livestock farming (PLF) tools. One promising approach is the use of vocal indicators. The acoustic structure of vocalizations and their functions have been extensively studied in important livestock species such as pigs, horses, poultry, and goats, yet cattle remain understudied in this context. Cows have been shown to produce two types of vocalizations: low-frequency (LF) calls, produced with the mouth closed or partially closed for close-distance contact, and high-frequency (HF) calls, emitted with the mouth open for long-distance communication, with the latter considered to be largely associated with negative affective states. Moreover, cattle vocalizations have been shown to contain information on individuality across a wide range of contexts, both negative and positive. Dairy cows face a series of negative challenges and stressors in a typical production cycle, making vocalizations during negative affective states of special interest for research. One contribution of this study is the largest pre-processed (noise-cleaned) dataset to date of lactating adult multiparous dairy cows during negative affective states induced by visual isolation challenges. We present two computational frameworks, one based on deep learning and one on explainable machine learning, for classifying high- and low-frequency cattle calls and for individual cow voice recognition. Our models in these two frameworks reached 87.2 and 89.4% accuracy for LF and HF classification, and 68.9 and 72.5% accuracy for individual cow identification, respectively.
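The LF/HF distinction described in the abstract can be illustrated with a toy spectral rule. This is emphatically not the paper's deep-learning or explainable-ML pipeline; it is a minimal NumPy sketch in which the 200 Hz threshold, the synthetic test tones, and all function names are illustrative assumptions.

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Frequency (Hz) of the largest-magnitude bin in the spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

def classify_call(signal, sample_rate, threshold_hz=200.0):
    """Crude LF/HF split: closed-mouth calls sit lower in frequency."""
    return "HF" if dominant_frequency(signal, sample_rate) >= threshold_hz else "LF"

# Synthetic one-second tones standing in for real cattle calls.
sr = 8000
t = np.arange(sr) / sr
low_call = np.sin(2 * np.pi * 90 * t)    # 90 Hz fundamental -> LF
high_call = np.sin(2 * np.pi * 600 * t)  # 600 Hz fundamental -> HF
print(classify_call(low_call, sr), classify_call(high_call, sr))  # LF HF
```

A real system would use learned features (e.g. spectrograms fed to a network) rather than a single peak-frequency threshold; the sketch only shows why the two call types are spectrally separable at all.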
Affiliation(s)
- Dinu Gavojdian: Cattle Production Systems Laboratory, Research and Development Institute for Bovine, Balotesti, Romania
- Madalina Mincu: Cattle Production Systems Laboratory, Research and Development Institute for Bovine, Balotesti, Romania
- Teddy Lazebnik: Department of Mathematics, Ariel University, Ariel, Israel; Department of Cancer Biology, University College London, London, United Kingdom
- Ariel Oren: Tech4Animals Laboratory, Information Systems Department, University of Haifa, Haifa, Israel
- Ioana Nicolae: Cattle Production Systems Laboratory, Research and Development Institute for Bovine, Balotesti, Romania
- Anna Zamansky: Tech4Animals Laboratory, Information Systems Department, University of Haifa, Haifa, Israel

2. Wang J, Chen H, Wang J, Zhao K, Li X, Liu B, Zhou Y. Identification of oestrus cows based on vocalisation characteristics and machine learning technique using a dual-channel-equipped acoustic tag. Animal 2023;17:100811. PMID: 37150135; DOI: 10.1016/j.animal.2023.100811.
Abstract
Timely and accurate detection of oestrus in cows is an essential element of good dairy-farm management. At present, the detection of cows in oestrus by acoustic means is impeded by problems of filtering, incomplete feature selection, and poor recognition accuracy. To overcome these difficulties, this study proposes a sound-based detection method for cows in oestrus built on machine learning with an optimal feature combination and an optimal time window. Firstly, a dual-channel sound detection tag consisting of a unidirectional microphone and an omnidirectional microphone (OM) was developed. A Least Mean Squares adaptive algorithm based on wavelet thresholds was used to filter the signals from the OM, and a dual-channel endpoint detection algorithm was used to identify the lowing of individual cows. Friedman analysis was then used to select the sound features showing significant differences before and after oestrus in the time, frequency, and cepstral domains, and these were used to determine the most suitable feature combination. The effects of Back Propagation Neural Network (BPNN), Classification and Regression Tree, Support Vector Machine, and Random Forest classifiers on the accuracy, precision, sensitivity, specificity, and F1 score of oestrus discrimination were then analysed. Different time windows were used, and the discrimination performance of these algorithms was evaluated using the area under the receiver operating characteristic curve to find the best match between time window and recognition algorithm. The dual-channel acoustic tag's accuracy, precision, sensitivity, and specificity were 91.25, 98.83, 91.75, and 83.68%, respectively. BPNN with a 70 ms time window and the feature combination of spectral roll-off, spectral flatness, and Mel-frequency cepstrum coefficients was confirmed as the most suitable oestrus recognition method. The average accuracy, precision, sensitivity, specificity, and F1 score of this method were 97.62, 98.07, 97.17, 97.19, and 97.63%, respectively. Based on these results, the approach was shown to be a feasible means of oestrus detection in dairy cows, and, given its ability to differentiate individual cows and its consistency, sound has the potential to replace accelerometers as an early indicator of oestrus.
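The accuracy, precision, sensitivity, specificity, and F1 figures quoted above all derive from the four confusion-matrix counts. A short sketch of those definitions follows; the counts in the example are invented for illustration and are not the study's data.

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)   # recall / true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "precision": precision,
            "sensitivity": sensitivity, "specificity": specificity, "f1": f1}

# Illustrative counts only (not taken from the study).
m = binary_metrics(tp=90, fp=10, tn=80, fn=20)
print({k: round(v, 3) for k, v in m.items()})
```

Note how precision and specificity can diverge sharply when positives are rare, which is why the study reports all five figures rather than accuracy alone.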
Affiliation(s)
- Jun Wang: School of Information Engineering, Henan University of Science and Technology, Luoyang, Henan 471003, PR China
- Haoran Chen: School of Information Engineering, Henan University of Science and Technology, Luoyang, Henan 471003, PR China
- Jianping Wang: School of Animal Science and Technology, Henan University of Science and Technology, Luoyang, Henan 471003, PR China
- Kaixuan Zhao: School of Agricultural Equipment Engineering, Henan University of Science and Technology, Luoyang, Henan 471003, PR China
- Xiaoxia Li: School of Animal Science and Technology, Henan University of Science and Technology, Luoyang, Henan 471003, PR China
- Bo Liu: School of Information Engineering, Henan University of Science and Technology, Luoyang, Henan 471003, PR China
- Yu Zhou: School of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang, Henan 471003, PR China

3. Schnaider MA, Heidemann MS, Silva AHP, Taconeli CA, Molento CFM. Vocalization and other behaviors as indicators of emotional valence: The case of cow-calf separation and reunion in beef cattle. J Vet Behav 2022. DOI: 10.1016/j.jveb.2021.11.011.

4. Schnaider MA, Heidemann MS, Silva AHP, Taconeli CA, Molento CFM. Vocalization and other behaviors indicating pain in beef calves during the ear tagging procedure. J Vet Behav 2022. DOI: 10.1016/j.jveb.2021.10.005.

5. Devi I, Singh P, Lathwal SS, Dudi K, Singh Y, Ruhil AP, Kumar A, Dash S, Malhotra R. Threshold values of acoustic features to assess estrous cycle phases in water buffaloes (Bubalus bubalis). Appl Anim Behav Sci 2019. DOI: 10.1016/j.applanim.2019.104838.

6. Automatic recording of individual oestrus vocalisation in group-housed dairy cattle: development of a cattle call monitor. Animal 2019;14:198-205. PMID: 31368424; DOI: 10.1017/s1751731119001733.
Abstract
Oestrus detection remains a problem in the dairy cattle industry. Therefore, automatic detection systems have been developed to detect specific behavioural changes at oestrus. Vocal behaviour has not been considered in such automatic oestrus detection systems in cattle, though the vocalisation rate is known to increase during oestrus. The main challenge in using vocalisation to detect oestrus is correctly identifying the calling individual when animals are moving freely in large groups, as oestrus needs to be detected at an individual level. Therefore, we aimed to automate vocalisation recording and caller identification in group-housed dairy cows. This paper first presents the details of such a system and then presents the results of a pilot study validating its functionality, in which the automatic detection of calls from individual heifers was compared to video-based assessment of these calls by a trained human observer, a technique that has, until now, been considered the 'gold standard'. We developed a collar-based cattle call monitor (CCM) with structure-borne and airborne sound microphones and a recording unit, and developed a post-processing algorithm to identify the caller by matching the information from both microphones. Five group-housed heifers, each in the perioestrus or oestrus period, were equipped with a CCM prototype for 5 days. The recorded audio data were subsequently analysed and compared with audiovisual recordings. Overall, 1404 vocalisations from the focus heifers and 721 vocalisations from group mates were obtained. Vocalisations during collar changes or malfunctions of the CCM were omitted from the evaluation. The results showed that the CCM had a sensitivity of 87% and a specificity of 94%. The negative and positive predictive values were 80% and 96%, respectively. These results show that the detection of individual vocalisations and the correct identification of callers are possible, even in freely moving group-housed cattle. The results are promising for the future use of vocalisation in automatic oestrus detection systems.
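The abstract describes matching structure-borne and airborne recordings to attribute a call to the collar wearer, but gives no algorithmic detail. One plausible matching rule is time-interval overlap between the two channels; the sketch below assumes exactly that, and the function names, intervals, and 50 ms overlap threshold are all illustrative, not the CCM's actual post-processing.

```python
def attribute_calls(airborne, structure_borne, min_overlap_s=0.05):
    """Label each airborne call (start, end), in seconds, as 'focal' if it
    overlaps a structure-borne detection (vibration sensed through the
    wearer's own body), else 'other' (a group mate's call)."""
    labels = []
    for a_start, a_end in airborne:
        overlap = any(
            min(a_end, s_end) - max(a_start, s_start) >= min_overlap_s
            for s_start, s_end in structure_borne
        )
        labels.append("focal" if overlap else "other")
    return labels

airborne = [(0.0, 0.8), (2.0, 2.5), (5.0, 5.6)]   # all calls the mic heard
structure_borne = [(0.1, 0.7), (5.1, 5.5)]        # wearer's own calls
print(attribute_calls(airborne, structure_borne))  # ['focal', 'other', 'focal']
```

Sensitivity and specificity of such a rule then follow directly from comparing these labels against the video-based gold standard.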

7. Mcloughlin MP, Stewart R, McElligott AG. Automated bioacoustics: methods in ecology and conservation and their potential for animal welfare monitoring. J R Soc Interface 2019;16:20190225. PMID: 31213168; PMCID: PMC6597774; DOI: 10.1098/rsif.2019.0225.
Abstract
Vocalizations carry emotional, physiological and individual information. This suggests that they may serve as potentially useful indicators for inferring animal welfare. At the same time, automated methods for analysing and classifying sound have developed rapidly, particularly in the fields of ecology, conservation and sound scene classification. These methods are already used to automatically classify animal vocalizations, for example, in identifying animal species and estimating numbers of individuals. Despite this potential, they have not yet found widespread application in animal welfare monitoring. In this review, we first discuss current trends in sound analysis for ecology, conservation and sound classification. Following this, we detail the vocalizations produced by three of the most important farm livestock species: chickens (Gallus gallus domesticus), pigs (Sus scrofa domesticus) and cattle (Bos taurus). Finally, we describe how these methods can be applied to monitor animal welfare with new potential for developing automated methods for large-scale farming.
Affiliation(s)
- Michael P. Mcloughlin: Centre for Digital Music, School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Campus, London, UK
- Rebecca Stewart: Centre for Digital Music, School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Campus, London, UK
- Alan G. McElligott: Centre for Research in Ecology, Evolution and Behaviour, Department of Life Sciences, University of Roehampton, London, UK

8. Fearey J, Elwen SH, James BS, Gridley T. Identification of potential signature whistles from free-ranging common dolphins (Delphinus delphis) in South Africa. Anim Cogn 2019;22:777-789. PMID: 31177344; DOI: 10.1007/s10071-019-01274-1.
Abstract
Conveying identity is important for social animals that maintain individually based relationships. Communication of identity information relies on both signal encoding and perception. Several delphinid species use individually distinctive signature whistles to transmit identity information, best described for the common bottlenose dolphin (Tursiops truncatus). In this study, we investigate signature whistle use in wild common dolphins (Delphinus delphis). Acoustic recordings were analysed from 11 encounters at three locations in South Africa (Hout Bay, False Bay, and Plettenberg Bay) during 2009, 2016 and 2017. The frequency contours of whistles were visually categorised, with 29 signature whistle types (SWTs) identified through contour categorisation and a bout analysis approach developed specifically to identify signature whistles in bottlenose dolphins (SIGID). Categorisation was verified using an unsupervised neural network (ARTwarp) at both 91% and 96% vigilance parameters. For this, individual SWTs were analysed type by type and then in a 'global' analysis in which all 497 whistle contours were categorised simultaneously. Overall, the analysis demonstrated high stereotypy in the structure and temporal production of whistles, consistent with signature whistle use. We suggest that individual identity information may be encoded in these whistle contours. However, the large group sizes and high degree of vocal activity characteristic of this dolphin species generate a cluttered acoustic environment with high potential for masking from conspecific vocalisations. Therefore, further investigation into the mechanisms of identity perception in such acoustically cluttered environments is required to demonstrate the function of these stereotyped whistle types in common dolphins.
Affiliation(s)
- J Fearey: Sea Search Research and Conservation NPC, 4 Bath Rd, Muizenberg, Cape Town, 7945, South Africa; Department of Statistical Sciences, Centre for Statistics in Ecology, Environment and Conservation, University of Cape Town, Rondebosch, Cape Town, 7700, South Africa
- S H Elwen: Sea Search Research and Conservation NPC, 4 Bath Rd, Muizenberg, Cape Town, 7945, South Africa; Department of Zoology and Entomology, Mammal Research Institute, University of Pretoria, Hatfield, Pretoria, 0002, South Africa
- B S James: Sea Search Research and Conservation NPC, 4 Bath Rd, Muizenberg, Cape Town, 7945, South Africa
- T Gridley: Sea Search Research and Conservation NPC, 4 Bath Rd, Muizenberg, Cape Town, 7945, South Africa; Department of Statistical Sciences, Centre for Statistics in Ecology, Environment and Conservation, University of Cape Town, Rondebosch, Cape Town, 7700, South Africa

9. Röttgen V, Becker F, Tuchscherer A, Wrenzycki C, Düpjan S, Schön PC, Puppe B. Vocalization as an indicator of estrus climax in Holstein heifers during natural estrus and superovulation. J Dairy Sci 2018;101:2383-2394. PMID: 29331456; DOI: 10.3168/jds.2017-13412.
Abstract
The reliable detection of estrus is an important scientific and practical challenge in dairy cattle farming. Female vocalization may indicate reproductive status, and preliminary evidence suggests that this information can be used to detect estrus in dairy cattle. The aim of this study was to associate changes in the vocalization rate of dairy heifers with behavioral estrus indicators, as well as to test the influence of the type of estrus (natural estrus vs. superovulation-induced estrus). We analyzed 6 predefined estrus-related behavior patterns (standing to be mounted, head-side mounting, active mounting, chin resting, being mounted while not standing, and active sniffing in the anogenital region) and vocalization rates in the peri-estrus period (day of estrus ± 1 d) of 12 German Holstein heifers using audio-visual recordings. Each heifer was observed under natural estrus and a consecutive superovulation induced by FSH and cloprostenol. Estrus was determined by behavioral patterns and confirmed by clinical examination (vaginoscopy and ultrasound imaging of the ovaries) as well as by the concentration of peripheral progesterone. Estrus behavior and vocalization rates were analyzed in 3-h intervals (an average of 19 intervals per heifer), and an estrus score was calculated based on the 6 behaviors. The interval with the highest estrus score (I0) was considered the estrus climax. We demonstrated similar time courses for the estrus score and vocalization rate independent of estrus type. However, in natural estrus, the maximum vocalization rate (±SE) occurred in the interval before estrus climax (I-1; 42.58 ± 21.89) and was significantly higher than that in any other interval except estrus climax (I0; 27.58 ± 9.76). During natural estrus, the vocalization rate in the interval before estrus climax was significantly higher than under superovulation (I-1; 42.58 ± 21.89 vs. 11.58 ± 5.51). The results underscore the potential of vocalization rate as a suitable indicator of estrus climax in automated estrus detection devices. Further studies and technical development are required to record and process individual vocalization rates.
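The interval analysis above (binning calls into 3-h windows and locating the peak window) can be sketched as follows. The timestamps, window length defaults, and function names are illustrative assumptions, not the study's data or code.

```python
def calls_per_interval(call_times_h, interval_h=3.0, total_h=9.0):
    """Count call timestamps (in hours) per fixed-width time window."""
    n_bins = int(total_h // interval_h)
    counts = [0] * n_bins
    for t in call_times_h:
        idx = int(t // interval_h)
        if 0 <= idx < n_bins:
            counts[idx] += 1
    return counts

# Illustrative call timestamps over a 9-h observation window.
calls = [0.5, 1.0, 4.0, 4.5, 5.9, 7.0]
counts = calls_per_interval(calls)
peak_interval = counts.index(max(counts))  # window with the highest call rate
print(counts, peak_interval)  # [2, 3, 1] 1
```

In the study, the peak vocalization window was then compared against the behaviorally defined climax interval I0; here the peak would simply be the window with index 1.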
Affiliation(s)
- Volker Röttgen: Institute of Behavioural Physiology, Leibniz Institute for Farm Animal Biology (FBN), D-18196 Dummerstorf, Germany; Institute of Reproductive Biology, Leibniz Institute for Farm Animal Biology (FBN), D-18196 Dummerstorf, Germany
- Frank Becker: Institute of Reproductive Biology, Leibniz Institute for Farm Animal Biology (FBN), D-18196 Dummerstorf, Germany
- Armin Tuchscherer: Institute of Genetics and Biometry, Leibniz Institute for Farm Animal Biology (FBN), D-18196 Dummerstorf, Germany
- Christine Wrenzycki: Chair of Molecular Reproductive Medicine, Clinic for Veterinary Obstetrics, Gynecology and Andrology, Faculty of Veterinary Medicine, Justus-Liebig-University Giessen, D-35392 Giessen, Germany
- Sandra Düpjan: Institute of Behavioural Physiology, Leibniz Institute for Farm Animal Biology (FBN), D-18196 Dummerstorf, Germany
- Peter C Schön: Institute of Behavioural Physiology, Leibniz Institute for Farm Animal Biology (FBN), D-18196 Dummerstorf, Germany
- Birger Puppe: Institute of Behavioural Physiology, Leibniz Institute for Farm Animal Biology (FBN), D-18196 Dummerstorf, Germany; Behavioural Sciences, Faculty of Agricultural and Environmental Sciences, University of Rostock, D-18059 Rostock, Germany

10. Invited review: The evolution of cattle bioacoustics and application for advanced dairy systems. Animal 2018;12:1250-1259. DOI: 10.1017/s1751731117002646.