1
Mluba HS, Atif O, Lee J, Park D, Chung Y. Pattern Mining-Based Pig Behavior Analysis for Health and Welfare Monitoring. Sensors (Basel) 2024; 24:2185. [PMID: 38610396] [PMCID: PMC11013991] [DOI: 10.3390/s24072185] [Received: 02/02/2024] [Revised: 03/13/2024] [Accepted: 03/26/2024]
Abstract
The increasing popularity of pigs has prompted farmers to increase pig production to meet the growing demand. However, while the number of pigs is increasing, the number of farm workers has been declining, making it challenging to perform various farm tasks, the most important among them being managing the pigs' health and welfare. This study proposes a pattern mining-based pig behavior analysis system to provide visualized information and behavioral patterns, assisting farmers in effectively monitoring and assessing pigs' health and welfare. The system consists of four modules: (1) a data acquisition module for collecting pig videos; (2) a detection and tracking module for localizing and uniquely identifying pigs, using tracking information to crop pig images; (3) a pig behavior recognition module for recognizing pig behaviors from sequences of cropped images; and (4) a pig behavior analysis module for providing visualized information and behavioral patterns to help farmers understand and manage pigs effectively. In the second module, we utilize ByteTrack, which comprises YOLOX as the detector and the BYTE algorithm as the tracker, while MnasNet and LSTM serve as appearance feature and temporal information extractors in the third module. The experimental results show that the system achieved a multi-object tracking accuracy of 0.971 for tracking and an F1 score of 0.931 for behavior recognition, while also highlighting the effectiveness of visualization and pattern mining in helping farmers comprehend and manage pigs' health and welfare.
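The two headline numbers in this abstract come from standard metrics, and a minimal sketch may help readers interpret them. This is illustrative code, not the authors'; the formulas are the usual CLEAR-MOT multi-object tracking accuracy and the F1 score, and the example counts are invented.

```python
# Illustrative only (not the authors' code): the two metrics reported in
# the abstract above. MOTA follows the CLEAR-MOT definition; F1 is the
# harmonic mean of precision and recall. Example counts are invented.

def mota(misses, false_positives, id_switches, num_gt_objects):
    """Multi-object tracking accuracy: 1 minus the total error rate."""
    errors = misses + false_positives + id_switches
    return 1 - errors / num_gt_objects

def f1_score(tp, fp, fn):
    """F1 for a behavior class from true/false positives and false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g., 10 misses, 15 false positives, 4 ID switches over 1000 GT objects
print(mota(10, 15, 4, 1000))   # 0.971, matching the reported tracking score
print(f1_score(93, 7, 7))      # 0.93 when precision = recall = 0.93
```

A MOTA of 0.971 thus means the combined miss, false-positive, and identity-switch errors amount to under 3% of all ground-truth objects across the sequence.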
Affiliation(s)
- Hassan Seif Mluba: Department of Computer and Information Science, Korea University, Sejong City 30019, Republic of Korea
- Othmane Atif: Department of Computer and Information Science, Korea University, Sejong City 30019, Republic of Korea
- Jonguk Lee: Department of Computer Convergence Software, Sejong Campus, Korea University, Sejong City 30019, Republic of Korea
- Daihee Park: Department of Computer Convergence Software, Sejong Campus, Korea University, Sejong City 30019, Republic of Korea
- Yongwha Chung: Department of Computer Convergence Software, Sejong Campus, Korea University, Sejong City 30019, Republic of Korea
2
Li G, Shi G, Jiao J. YOLOv5-KCB: A New Method for Individual Pig Detection Using Optimized K-Means, CA Attention Mechanism and a Bi-Directional Feature Pyramid Network. Sensors (Basel) 2023; 23:5242. [PMID: 37299967] [DOI: 10.3390/s23115242] [Received: 04/29/2023] [Revised: 05/26/2023] [Accepted: 05/29/2023]
Abstract
Individual identification of pigs is a critical component of intelligent pig farming. Traditional pig ear-tagging requires significant human resources and suffers from issues such as difficulty in recognition and low accuracy. This paper proposes the YOLOv5-KCB algorithm for non-invasive identification of individual pigs. Specifically, the algorithm utilizes two datasets (pig faces and pig necks), which are divided into nine categories. Following data augmentation, the total sample size was increased to 19,680. The distance metric used for K-means clustering is changed from the original algorithm to 1-IOU, which improves the adaptability of the model's target anchor boxes. Furthermore, the algorithm introduces the SE, CBAM, and CA attention mechanisms, with the CA attention mechanism selected for its superior performance in feature extraction. Finally, CARAFE, ASFF, and BiFPN are compared for feature fusion, with BiFPN selected for its superior ability to improve the detection performance of the algorithm. The experimental results indicate that the YOLOv5-KCB algorithm achieved the highest accuracy in individual pig recognition, surpassing all other improved algorithms in average accuracy (IOU = 0.5). The accuracy of pig head and neck recognition was 98.4%, while that of pig face recognition was 95.1%, improvements of 4.8% and 13.8%, respectively, over the original YOLOv5 algorithm. Notably, the average accuracy of identifying the pig head and neck was consistently higher than that of pig face recognition across all algorithms, with YOLOv5-KCB demonstrating an impressive 2.9% improvement. These results emphasize the potential of the YOLOv5-KCB algorithm for precise individual pig identification, facilitating subsequent intelligent management practices.
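The 1-IOU distance mentioned in the abstract replaces Euclidean distance so that K-means groups bounding boxes by shape overlap rather than raw size. A minimal sketch under that reading follows; it is illustrative, not the paper's implementation, and the names `iou_wh` and `kmeans_anchors` are hypothetical.

```python
# Illustrative sketch (not the authors' code): K-means over bounding-box
# shapes using d(box, centroid) = 1 - IoU(box, centroid) as the distance,
# as described in the abstract above. Boxes are (width, height) pairs.
import random

def iou_wh(a, b):
    """IoU of two boxes aligned at a common corner, given (w, h) only."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    union = a[0] * a[1] + b[0] * b[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    random.seed(seed)
    centroids = random.sample(boxes, k)
    for _ in range(iters):
        # assign each box to the centroid with the smallest 1 - IoU distance
        clusters = [[] for _ in range(k)]
        for box in boxes:
            i = min(range(k), key=lambda i: 1 - iou_wh(box, centroids[i]))
            clusters[i].append(box)
        # recompute each centroid as the mean (w, h) of its cluster
        new = [
            (sum(b[0] for b in c) / len(c), sum(b[1] for b in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
        if new == centroids:
            break
        centroids = new
    return centroids
```

With a mixed set of small and large boxes, the 1-IOU distance reliably separates the two shape groups even when their absolute sizes differ by an order of magnitude, which is the property that makes the resulting anchors fit the data better.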
Affiliation(s)
- Guangbo Li: School of Information and Computer, Anhui Agricultural University, Hefei 230036, China
- Guolong Shi: School of Information and Computer, Anhui Agricultural University, Hefei 230036, China; Key Laboratory of Agricultural Sensors, Ministry of Agriculture and Rural Affairs, Hefei 230036, China; Anhui Provincial Key Laboratory of Smart Agricultural Technology and Equipment, Hefei 230036, China
- Jun Jiao: School of Information and Computer, Anhui Agricultural University, Hefei 230036, China
3
GAN-Based Video Denoising with Attention Mechanism for Field-Applicable Pig Detection System. Sensors (Basel) 2022; 22:3917. [PMID: 35632328] [PMCID: PMC9143193] [DOI: 10.3390/s22103917] [Received: 04/05/2022] [Revised: 05/17/2022] [Accepted: 05/20/2022]
Abstract
Infrared cameras allow non-invasive and continuous 24 h monitoring. Thus, they are widely used in automatic pig monitoring, which is essential to maintain the profitability and sustainability of intensive pig farms. However, in practice, impurities such as insect secretions continuously pollute camera lenses. This causes problems with IR reflections, which can seriously degrade pig detection performance. In this study, we propose a noise-robust, real-time pig detection system that can improve accuracy in pig farms where infrared cameras suffer from the IR reflection problem. The system consists of a data collector to collect infrared images, a preprocessor to transform noisy images into clean images, and a detector to detect pigs. The preprocessor embeds a multi-scale spatial attention module in U-Net and generative adversarial network (GAN) models, enabling the model to pay more attention to noisy areas. The GAN model was trained on paired sets of clean data and data with simulated noise, and it can operate in a real-time, end-to-end manner. Experimental results show that the proposed preprocessor significantly improved the average precision of pig detection from 0.766 to 0.906, with an additional execution time of only 4.8 ms in a PC environment.
4
The Application of Cameras in Precision Pig Farming: An Overview for Swine-Keeping Professionals. Animals (Basel) 2021; 11:2343. [PMID: 34438800] [PMCID: PMC8388688] [DOI: 10.3390/ani11082343] [Received: 06/07/2021] [Revised: 07/19/2021] [Accepted: 08/06/2021]
Abstract
Simple Summary
The preeminent purpose of precision livestock farming (PLF) is to provide affordable and straightforward solutions to severe problems with certainty. Some data collection techniques in PLF such as RFID are accurate but not affordable for small- and medium-sized farms. On the other hand, camera sensors are cheap, commonly available, and easily used to collect information compared to other sensor systems in precision pig farming. Cameras have ample chance to monitor pigs with high precision at an affordable cost. However, the lack of targeted information about the application of cameras in the pig industry is a shortcoming for swine farmers and researchers. This review describes the state of the art in 3D imaging systems (i.e., depth sensors and time-of-flight cameras), along with 2D cameras, for effectively identifying pig behaviors, and presents automated approaches for monitoring and investigating pigs’ feeding, drinking, lying, locomotion, aggressive, and reproductive behaviors. In addition, the review summarizes the related literature and points out limitations to open up new dimensions for future researchers to explore.
Abstract
Pork is the meat with the second-largest overall consumption, and chicken, pork, and beef together account for 92% of global meat production. Therefore, it is necessary to adopt more progressive methodologies such as precision livestock farming (PLF) rather than conventional methods to improve production. In recent years, image-based studies have become an efficient solution in various fields such as navigation for unmanned vehicles, human–machine-based systems, agricultural surveying, livestock, etc. So far, several studies have been conducted to identify, track, and classify the behaviors of pigs and achieve early detection of disease, using 2D/3D cameras.
This review describes the state of the art in 3D imaging systems (i.e., depth sensors and time-of-flight cameras), along with 2D cameras, for effectively identifying pig behaviors and presents automated approaches for the monitoring and investigation of pigs’ feeding, drinking, lying, locomotion, aggressive, and reproductive behaviors.
5
Tzanidakis C, Simitzis P, Arvanitis K, Panagakis P. An overview of the current trends in precision pig farming technologies. Livest Sci 2021. [DOI: 10.1016/j.livsci.2021.104530]
6
Li G, Huang Y, Chen Z, Chesser GD, Purswell JL, Linhoss J, Zhao Y. Practices and Applications of Convolutional Neural Network-Based Computer Vision Systems in Animal Farming: A Review. Sensors (Basel) 2021; 21:1492. [PMID: 33670030] [PMCID: PMC7926480] [DOI: 10.3390/s21041492] [Received: 01/07/2021] [Revised: 02/03/2021] [Accepted: 02/19/2021]
Abstract
Convolutional neural network (CNN)-based computer vision systems have been increasingly applied in animal farming to improve animal management, but current knowledge, practices, limitations, and solutions of these applications remain to be expanded and explored. The objective of this study is to systematically review applications of CNN-based computer vision systems in animal farming in terms of the five deep learning computer vision tasks: image classification, object detection, semantic/instance segmentation, pose estimation, and tracking. Cattle, sheep/goats, pigs, and poultry were the major farm animal species of concern. In this research, preparations for system development, including camera settings, inclusion of variations in data recordings, choices of graphics processing units, image preprocessing, and data labeling, were summarized. CNN architectures were reviewed based on the computer vision tasks in animal farming. Strategies for algorithm development included distribution of development data, data augmentation, hyperparameter tuning, and selection of evaluation metrics. Judgment of model performance and performance as a function of architecture were discussed. Besides practices for optimizing CNN-based computer vision systems, system applications were also organized by year, country, animal species, and purpose. Finally, recommendations for future research were provided to develop and improve CNN-based computer vision systems for improved welfare, environment, engineering, genetics, and management of farm animals.
Affiliation(s)
- Guoming Li: Department of Agricultural and Biological Engineering, Mississippi State University, Starkville, MS 39762, USA
- Yanbo Huang: Agricultural Research Service, Genetics and Sustainable Agriculture Unit, United States Department of Agriculture, Starkville, MS 39762, USA
- Zhiqian Chen: Department of Computer Science and Engineering, Mississippi State University, Starkville, MS 39762, USA
- Gary D. Chesser: Department of Agricultural and Biological Engineering, Mississippi State University, Starkville, MS 39762, USA
- Joseph L. Purswell: Agricultural Research Service, Poultry Research Unit, United States Department of Agriculture, Starkville, MS 39762, USA
- John Linhoss: Department of Agricultural and Biological Engineering, Mississippi State University, Starkville, MS 39762, USA
- Yang Zhao: Department of Animal Science, The University of Tennessee, Knoxville, TN 37996, USA
7
Alameer A, Kyriazakis I, Bacardit J. Automated recognition of postures and drinking behaviour for the detection of compromised health in pigs. Sci Rep 2020; 10:13665. [PMID: 32788633] [PMCID: PMC7423952] [DOI: 10.1038/s41598-020-70688-6] [Received: 05/04/2020] [Accepted: 07/30/2020]
Abstract
Changes in pig behaviours are a useful aid in detecting early signs of compromised health and welfare. In commercial settings, automatic detection of pig behaviours through visual imaging remains a challenge due to demanding farm conditions, e.g., occlusion of one pig by another. Here, two deep learning-based detector methods were developed to identify pig postures and drinking behaviours of group-housed pigs. We first tested the system's ability to detect changes in these measures at group level during routine management. We then demonstrated the ability of our automated methods to identify behaviours of individual animals with a mean average precision of 0.989 ± 0.009, under a variety of settings. When the pig feeding regime was disrupted, we automatically detected the expected deviations from the daily feeding routine in standing, lateral lying and drinking behaviours. These experiments demonstrate that the method is capable of robustly and accurately monitoring individual pig behaviours under commercial conditions, without the need for additional sensors or individual pig identification, hence providing a scalable technology to improve the health and well-being of farm animals. The method has the potential to transform how livestock are monitored and address issues in livestock farming, such as targeted treatment of individuals with medication.
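The mean average precision reported in this abstract averages per-class average precision (AP), where AP is the area under a precision-recall curve built from detections ranked by confidence. A minimal sketch of that computation (illustrative, not the paper's evaluation code; `average_precision` is a hypothetical name):

```python
# Illustrative computation of average precision (AP): step-integrated area
# under the precision-recall curve of confidence-ranked detections.
def average_precision(scored, n_positives):
    """scored: list of (confidence, is_true_positive) detections."""
    scored = sorted(scored, key=lambda s: -s[0])  # highest confidence first
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for _, is_tp in scored:
        if is_tp:
            tp += 1
        else:
            fp += 1
        recall = tp / n_positives
        precision = tp / (tp + fp)
        ap += precision * (recall - prev_recall)  # step integration
        prev_recall = recall
    return ap
```

A perfect detector (every detection a true positive, all positives found) scores AP = 1.0, so a mean AP of 0.989 indicates near-perfect ranked detection across the evaluated behaviours.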
Affiliation(s)
- Ali Alameer: School of Natural and Environmental Sciences, Newcastle University, Newcastle Upon Tyne, NE1 7RU, UK; School of Computing, Newcastle University, Newcastle Upon Tyne, NE4 5TG, UK
- Ilias Kyriazakis: Institute for Global Food Security, Queen's University, Belfast, BT9 5DL, UK
- Jaume Bacardit: School of Computing, Newcastle University, Newcastle Upon Tyne, NE4 5TG, UK
8
EmbeddedPigDet—Fast and Accurate Pig Detection for Embedded Board Implementations. Appl Sci (Basel) 2020. [DOI: 10.3390/app10082878]
Abstract
Automated pig monitoring is an important issue in the surveillance environment of a pig farm. For a large-scale pig farm in particular, practical issues such as monitoring cost should be considered but such consideration based on low-cost embedded boards has not yet been reported. Since low-cost embedded boards have more limited computing power than typical PCs and have tradeoffs between execution speed and accuracy, achieving fast and accurate detection of individual pigs for “on-device” pig monitoring applications is very challenging. Therefore, in this paper, we propose a method for the fast detection of individual pigs by reducing the computational workload of 3 × 3 convolution in widely-used, deep learning-based object detectors. Then, in order to recover the accuracy of the “light-weight” deep learning-based object detector, we generate a three-channel composite image as its input image, through “simple” image preprocessing techniques. Our experimental results on an NVIDIA Jetson Nano embedded board show that the proposed method can improve the integrated performance of both execution speed and accuracy of widely-used, deep learning-based object detectors, by a factor of up to 8.7.
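The abstract does not spell out how the 3 × 3 convolution workload is reduced. One common light-weighting technique, assumed here for illustration and not necessarily the paper's exact method, is the depthwise-separable factorization; a small arithmetic sketch shows the scale of savings it offers:

```python
# Hypothetical illustration of why reducing 3x3 convolution pays off:
# multiply-accumulate (MAC) counts for a standard 3x3 conv layer versus a
# depthwise-separable replacement (one common light-weighting technique;
# the paper's exact method may differ).
def conv3x3_macs(h, w, c_in, c_out):
    return h * w * c_in * c_out * 9          # 3x3 kernel per output pixel

def depthwise_separable_macs(h, w, c_in, c_out):
    depthwise = h * w * c_in * 9             # one 3x3 filter per channel
    pointwise = h * w * c_in * c_out         # 1x1 conv to mix channels
    return depthwise + pointwise

# example layer size, chosen arbitrarily for illustration
h, w, c_in, c_out = 56, 56, 128, 128
standard = conv3x3_macs(h, w, c_in, c_out)
separable = depthwise_separable_macs(h, w, c_in, c_out)
print(f"standard:  {standard:,} MACs")
print(f"separable: {separable:,} MACs ({standard / separable:.1f}x fewer)")
```

For this layer the factorization cuts the MAC count by roughly 8x, which is the kind of headroom that makes real-time detection feasible on a board like the Jetson Nano.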