1
Shao X, Liu C, Zhou Z, Xue W, Zhang G, Liu J, Yan H. Research on Dynamic Pig Counting Method Based on Improved YOLOv7 Combined with DeepSORT. Animals (Basel) 2024; 14:1227. PMID: 38672375; PMCID: PMC11047650; DOI: 10.3390/ani14081227. Received 15 Mar 2024; accepted 15 Apr 2024.
Abstract
A pig inventory is a crucial component of precise, large-scale farming. In complex pigsty environments, pigs' stress reactions and frequent occlusions make accurate automatic counting challenging, in contrast to most current deep learning studies, which rely on overhead views or static images for counting. This research proposes a video-based dynamic counting method combining YOLOv7 with DeepSORT. Building on the YOLOv7 network structure, the second and third 3 × 3 convolutions in the head network's ELAN-W modules are replaced with PConv, reducing computational demand and improving inference speed without sacrificing accuracy. To ensure that the network acquires accurate position information at oblique angles and extracts rich semantic information, the coordinate attention (CA) mechanism is introduced before the three re-parameterized convolution (RepConv) paths in the head network, enhancing robustness in complex scenarios. Experimental results show that, compared to the original model, the improved model increases mAP by 3.24, 0.05, and 1.00 percentage points on the oblique-view, overhead-view, and combined pig counting datasets, respectively, while reducing the computational cost by 3.6 GFLOPS. The enhanced YOLOv7 outperforms YOLOv5, YOLOv4, YOLOv3, Faster RCNN, and SSD in target detection, with mAP improvements of 2.07, 5.20, 2.16, 7.05, and 19.73 percentage points, respectively. In dynamic counting experiments, the improved YOLOv7 combined with DeepSORT was tested on videos with total pig counts of 144, 201, 285, and 295, yielding errors of -3, -3, -4, and -26, respectively, with an average accuracy of 96.58% at 22 FPS. This demonstrates the model's capability to count pigs in real time across various scenes, providing valuable data and references for automated pig counting research.
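The counting logic described here, accumulating distinct DeepSORT track IDs over a video and measuring accuracy from the signed error against the true herd size, can be sketched roughly as follows. This is an illustrative reconstruction, not the paper's code; the function names are assumptions.

```python
# Illustrative sketch: dynamic counting as the number of distinct DeepSORT
# track IDs seen across a video, plus the accuracy measure implied by the
# reported errors (e.g. an error of -3 on a true count of 144).

def count_unique_ids(per_frame_track_ids):
    """per_frame_track_ids: iterable of per-frame lists of tracker IDs."""
    seen = set()
    for ids in per_frame_track_ids:
        seen.update(ids)
    return len(seen)

def counting_accuracy(predicted, ground_truth):
    """1 - |error| / ground truth, as a fraction."""
    return 1.0 - abs(predicted - ground_truth) / ground_truth
```

For a video whose frames yielded track IDs {1, 2}, {1, 2, 3}, {3, 4}, the count would be 4; a prediction of 141 against a true count of 144 gives an accuracy of roughly 97.9%.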
Affiliations
- Xiaobao Shao, Chengcheng Liu, Zhixuan Zhou, Wenjing Xue, Guoye Zhang, Hongwen Yan: College of Information Science and Engineering, Shanxi Agricultural University, Jinzhong 030801, China
- Jianyu Liu: Science & Technology Information and Strategy Research Center of Shanxi, Taiyuan 030024, China
2
Mluba HS, Atif O, Lee J, Park D, Chung Y. Pattern Mining-Based Pig Behavior Analysis for Health and Welfare Monitoring. Sensors (Basel) 2024; 24:2185. PMID: 38610396; PMCID: PMC11013991; DOI: 10.3390/s24072185. Received 2 Feb 2024; accepted 26 Mar 2024.
Abstract
The increasing popularity of pigs has prompted farmers to increase pig production to meet the growing demand. However, while the number of pigs is increasing, the number of farm workers has been declining, making it challenging to perform various farm tasks, the most important being managing the pigs' health and welfare. This study proposes a pattern mining-based pig behavior analysis system that provides visualized information and behavioral patterns, assisting farmers in effectively monitoring and assessing pigs' health and welfare. The system consists of four modules: (1) a data acquisition module for collecting pig videos; (2) a detection and tracking module for localizing and uniquely identifying pigs, using the tracking information to crop pig images; (3) a pig behavior recognition module for recognizing pig behaviors from sequences of cropped images; and (4) a pig behavior analysis module for providing visualized information and behavioral patterns to help farmers understand and manage pigs. The second module uses ByteTrack, which comprises YOLOX as the detector and the BYTE algorithm as the tracker, while MnasNet and LSTM serve as the appearance feature and temporal information extractors, respectively, in the third module. The experimental results show that the system achieved a multi-object tracking accuracy of 0.971 and an F1 score of 0.931 for behavior recognition, while also highlighting the effectiveness of visualization and pattern mining in helping farmers comprehend and manage pigs' health and welfare.
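A minimal sketch of the hand-off between modules (2) and (3): crops of each tracked pig are grouped by ID and sliced into fixed-length sequences for a CNN + LSTM behavior recognizer. The function name and sequence length are assumptions, not details from the paper.

```python
# Hypothetical sketch: build per-pig, fixed-length crop sequences from
# tracker output, the kind of input an appearance extractor (e.g. MnasNet)
# followed by an LSTM would consume.

from collections import defaultdict

def build_sequences(tracked_crops, seq_len=3):
    """tracked_crops: list of (frame_idx, pig_id, crop) tuples.
    Returns {pig_id: list of length-seq_len crop sequences} in frame order."""
    per_pig = defaultdict(list)
    for _frame_idx, pig_id, crop in sorted(tracked_crops):
        per_pig[pig_id].append(crop)
    return {pid: [crops[i:i + seq_len]
                  for i in range(len(crops) - seq_len + 1)]
            for pid, crops in per_pig.items()}
```

A pig tracked over four frames with seq_len=3 yields two overlapping sequences, giving the LSTM a sliding temporal window.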
Affiliations
- Hassan Seif Mluba, Othmane Atif: Department of Computer and Information Science, Korea University, Sejong City 30019, Republic of Korea
- Jonguk Lee, Daihee Park, Yongwha Chung: Department of Computer Convergence Software, Sejong Campus, Korea University, Sejong City 30019, Republic of Korea
3
Wang L, Hu B, Hou Y, Wu H. Lightweight Sheep Head Detection and Dynamic Counting Method Based on Neural Network. Animals (Basel) 2023; 13:3459. PMID: 38003075; PMCID: PMC10668793; DOI: 10.3390/ani13223459. Received 18 Sep 2023; accepted 7 Nov 2023.
Abstract
To achieve rapid and precise target counting, the quality of target detection is a pivotal factor. This study introduces the Sheep's Head-Single Shot MultiBox Detector (SH-SSD) as a solution. In the network's backbone, the Triple Attention mechanism is incorporated into MobileNetV3, significantly reducing network parameters and improving detection speed. The network's neck combines the Spatial Pyramid Pooling module with the Triple Attention Bottleneck module, enhancing the extraction of semantic information and the preservation of detailed feature-map information at the cost of a slight increase in parameters. The network's head uses a Decoupled Head module, improving the network's prediction capability. Experimental findings show that the SH-SSD model attains an average detection accuracy of 96.11%, effectively detecting sheep's heads in the samples. SH-SSD improves on various detection metrics while significantly reducing model parameters, and, combined with the DeepSort tracking algorithm, achieves high-precision counting statistics. The SH-SSD model performs well in sheep's head detection, is simple to deploy, and thus provides technical support for intelligent animal husbandry.
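The Spatial Pyramid Pooling idea used in the SH-SSD neck (pooling the same features at several scales and concatenating the results) can be illustrated with a 1D toy version. The real module operates on 2D feature maps with padding so that outputs align; the window sizes below are assumptions for illustration only.

```python
# Toy 1D Spatial Pyramid Pooling: max-pool the same input at several window
# sizes (stride = window size) and concatenate, mixing local and global
# context into one feature vector.

def spp1d(x, sizes=(1, 2, 4)):
    """Concatenate non-overlapping max-pooled views of x at each window size."""
    out = []
    for k in sizes:
        out.extend(max(x[i:i + k]) for i in range(0, len(x) - k + 1, k))
    return out
```

For `[1, 3, 2, 4]` this concatenates the raw values, the pairwise maxima, and the global maximum into a single list.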
Affiliations
- Liang Wang: Department of Electronic Engineering, School of Information Science and Engineering, Fudan University, Shanghai 200438, China; College of Electronic Information Engineering, Inner Mongolia University, Hohhot 010021, China
- Bo Hu: Department of Electronic Engineering, School of Information Science and Engineering, Fudan University, Shanghai 200438, China
- Yuecheng Hou, Huijuan Wu: College of Electronic Information Engineering, Inner Mongolia University, Hohhot 010021, China
4
Zha W, Li H, Wu G, Zhang L, Pan W, Gu L, Jiao J, Zhang Q. Research on the Recognition and Tracking of Group-Housed Pigs' Posture Based on Edge Computing. Sensors (Basel) 2023; 23:8952. PMID: 37960652; PMCID: PMC10649120; DOI: 10.3390/s23218952. Received 21 Sep 2023; accepted 1 Nov 2023.
Abstract
The existing algorithms for identifying and tracking pigs in barns generally have many parameters, relatively complex networks, and high computational-resource demands, making them unsuitable for deployment on embedded edge nodes on farms. A lightweight multi-object identification and tracking algorithm based on improved YOLOv5s and DeepSort was developed for group-housed pigs in this study. The identification algorithm was optimized by: (i) using dilated convolution in the YOLOv5s backbone network to reduce the number of model parameters and the computational power requirements; (ii) adding a coordinate attention mechanism to improve model precision; and (iii) pruning the BN layers to reduce the computational requirements. The optimized identification model was combined with DeepSort to form the final tracking-by-detection algorithm and ported to a Jetson AGX Xavier edge computing node. The algorithm reduced the model size by 65.3% compared to the original YOLOv5s and achieved a recognition precision of 96.6%, a tracking time of 46 ms, and a tracking frame rate of 21.7 FPS, with a tracking-statistics precision greater than 90%. The model size and performance meet the requirements for stable real-time operation on embedded edge computing nodes for monitoring group-housed pigs.
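Step (iii), pruning the BN layers, is commonly done by ranking the BatchNorm scale factors (gamma) of all channels and dropping those below a global percentile. A hedged sketch of that idea on plain lists; a real implementation would read `nn.BatchNorm2d` weights from the YOLOv5s model, and `prune_ratio` is an assumed hyperparameter.

```python
# Hedged sketch of gamma-based BN pruning: channels whose absolute BatchNorm
# scale falls below a global percentile threshold are marked for removal.

def prune_mask(gammas, prune_ratio=0.5):
    """Return a per-channel keep (True) / drop (False) mask."""
    ranked = sorted(abs(g) for g in gammas)
    threshold = ranked[int(len(ranked) * prune_ratio)]
    return [abs(g) >= threshold for g in gammas]
```

With a 0.5 prune ratio, the half of the channels with the smallest gamma magnitudes are dropped; the kept channels define the slimmed network.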
Affiliations
- Wenwen Zha, Guodong Wu, Weihao Pan, Lichuan Gu, Jun Jiao: School of Information and Computer, Anhui Agricultural University, Hefei 230036, China
- Hualong Li: Institute of Intelligent Machines, Chinese Academy of Sciences, Hefei 230031, China
- Liping Zhang: Institute of Agricultural Economy and Information, Anhui Academy of Agricultural Sciences, Hefei 230031, China
- Qiang Zhang: Department of Biosystems Engineering, University of Manitoba, Winnipeg, MB R3T 5V6, Canada
5
Wang S, Jiang H, Qiao Y, Jiang S. A Method for Obtaining 3D Point Cloud Data by Combining 2D Image Segmentation and Depth Information of Pigs. Animals (Basel) 2023; 13:2472. PMID: 37570282; PMCID: PMC10417003; DOI: 10.3390/ani13152472. Received 8 Jun 2023; accepted 25 Jul 2023.
Abstract
This paper proposes a method for automatic pig detection and segmentation using RGB-D data for precision livestock farming. The method combines an enhanced YOLOv5s model with the Res2Net bottleneck structure, improving fine-grained feature extraction and ultimately the precision of pig detection and segmentation in 2D images. The method also acquires 3D point cloud data of pigs in a simpler and more efficient way, by combining the pig mask obtained from 2D detection and segmentation with depth information. To evaluate the method, two datasets were constructed: the first consists of 5400 images captured in various pig pens under diverse lighting conditions, while the second was obtained from the UK. The experimental results show that the improved YOLOv5s_Res2Net achieved a mAP@0.5:0.95 of 89.6% and 84.8% for the pig detection and segmentation tasks, respectively, on the first dataset, and 93.4% and 89.4% on the Edinburgh pig behaviour dataset. This approach provides valuable insights for improving pig management, conducting welfare assessments, and estimating weight accurately.
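The mask-plus-depth step can be illustrated with a standard pinhole back-projection: every depth pixel inside the segmentation mask is lifted to a 3D point using the camera intrinsics. The paper does not publish code, so the function name and intrinsics (fx, fy, cx, cy) here are assumptions.

```python
# Illustrative pinhole back-projection: keep only depth pixels covered by the
# 2D pig mask and convert each (u, v, z) to a 3D point (X, Y, Z).

def mask_depth_to_points(mask, depth, fx, fy, cx, cy):
    """mask/depth: 2D row-major lists of equal shape; returns (X, Y, Z) tuples."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if mask[v][u] and z > 0:
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points
```

The zero-depth check discards invalid sensor readings, a common artifact of consumer RGB-D cameras.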
Affiliations
- Shunli Wang, Honghua Jiang: College of Information Science and Engineering, Shandong Agricultural University, Tai’an 271018, China
- Yongliang Qiao: Australian Institute for Machine Learning (AIML), The University of Adelaide, Adelaide, SA 5005, Australia
- Shuzhen Jiang: Key Laboratory of Efficient Utilisation of Non-Grain Feed Resources (Co-Construction by Ministry and Province), Ministry of Agriculture and Rural Affairs, Department of Animal Science and Technology, Shandong Agricultural University, Tai’an 271018, China
6
Huang Y, Xiao D, Liu J, Tan Z, Liu K, Chen M. An Improved Pig Counting Algorithm Based on YOLOv5 and DeepSORT Model. Sensors (Basel) 2023; 23:6309. PMID: 37514604; PMCID: PMC10383308; DOI: 10.3390/s23146309. Received 6 Jun 2023; accepted 1 Jul 2023.
Abstract
Pig counting is an important task in pig sales and breeding supervision. Manual counting is inefficient and costly and complicates statistical analysis. In response to the difficulties of detecting pig part features, losing tracks due to rapid movement, and large counting deviations in pig video tracking and counting research, this paper proposes an improved pig counting algorithm, the Mobile Pig Counting Algorithm with YOLOv5xpig and DeepSORTPig (MPC-YD), based on a YOLOv5 + DeepSORT model. The algorithm improves the detection rate of pig body parts by adding two different sizes of SPP networks and replacing MaxPool with SoftPool operations in YOLOv5x. It also adds a pig re-identification network, a pig-tracking method based on spatial state correction, and a pig counting method based on frame-number judgment to the DeepSORT algorithm to improve tracking accuracy. Experimental analysis shows that the MPC-YD algorithm achieves an average precision of 99.24% in pig object detection and an accuracy of 85.32% in multi-target pig tracking. In the aisle environment of a slaughterhouse, the MPC-YD algorithm achieves a correlation coefficient (R2) of 98.14% for pig counting from video, and it counts pigs stably in a breeding environment. The algorithm has wide application prospects.
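The SoftPool substitution is worth a small sketch: instead of keeping only the window maximum, SoftPool weights each activation by its softmax, preserving more detail. A minimal 1D pure-Python version, not the paper's CUDA implementation; window size and non-overlapping stride are simplifying assumptions.

```python
# Minimal 1D SoftPool sketch: each non-overlapping window is reduced to the
# exponentially-weighted average of its values, sum(e^x_i * x_i) / sum(e^x_i),
# a softer alternative to taking the hard max.

import math

def softpool1d(x, k=2):
    """Non-overlapping SoftPool over windows of size k."""
    out = []
    for i in range(0, len(x) - k + 1, k):
        window = x[i:i + k]
        weights = [math.exp(v) for v in window]
        out.append(sum(w * v for w, v in zip(weights, window)) / sum(weights))
    return out
```

A window of equal values pools to that value, and larger activations dominate smoothly rather than winner-take-all as in MaxPool.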
Affiliations
- Yigui Huang, Deqin Xiao, Junbin Liu, Zhujie Tan, Kejian Liu, Miaobin Chen: College of Mathematics Informatics, South China Agricultural University, Guangzhou 510642, China; Key Laboratory of Smart Agricultural Technology in Tropical South China, Ministry of Agriculture and Rural Affairs, Guangzhou 510642, China
7
Myat Noe S, Zin TT, Tin P, Kobayashi I. Comparing State-of-the-Art Deep Learning Algorithms for the Automated Detection and Tracking of Black Cattle. Sensors (Basel) 2023; 23:532. PMID: 36617130; PMCID: PMC9824081; DOI: 10.3390/s23010532. Received 1 Dec 2022; accepted 29 Dec 2022.
Abstract
Effective livestock management is critical for cattle farms in today's competitive era of smart modern farming. For farm management solutions to be efficient, affordable, and scalable, manual identification and detection of cattle are not feasible in today's farming systems. Fortunately, automatic tracking and identification systems have greatly improved in recent years. Moreover, correctly identifying individual cows is integral to predicting behavior during estrus: by monitoring a cow's behavior, we can pinpoint the right time for artificial insemination. However, most previous techniques have relied on direct observation, increasing the human workload. To overcome this problem, this paper proposes state-of-the-art deep learning-based Multi-Object Tracking (MOT) algorithms for a complete system that can automatically and continuously detect and track cattle using an RGB camera. This study compares state-of-the-art MOT methods, such as Deep-SORT, Strong-SORT, and customized lightweight tracking algorithms, and, to improve their tracking accuracy, presents an enhanced re-identification approach for a black cattle dataset in Strong-SORT. For tracking by detection, the system used YOLOv5 and YOLOv7, compared against the instance segmentation model Detectron2, to detect and classify the cattle. The system achieved a high cattle-tracking accuracy, with a Multi-Object Tracking Accuracy (MOTA) of 96.88%. The findings demonstrate a highly accurate and robust cattle tracking system that can be applied to innovative monitoring systems for agricultural applications; its effectiveness and efficiency were demonstrated by analyzing a sample of video footage. The proposed method was developed to balance the trade-off between costs and management, thereby improving the productivity and profitability of dairy farms; it can also be adapted to other domestic species.
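The MOTA figure reported above follows the standard CLEAR-MOT definition: one minus the sum of false negatives, false positives, and identity switches over the total ground-truth objects. A one-line sketch with illustrative numbers (the actual per-error breakdown is not given in the abstract):

```python
# Standard MOTA metric: 1 - (FN + FP + IDSW) / GT, with all quantities
# summed over every frame of the sequence. Example values are illustrative.

def mota(fn, fp, idsw, gt):
    """fn: missed detections, fp: false alarms, idsw: ID switches,
    gt: total ground-truth objects across all frames."""
    return 1.0 - (fn + fp + idsw) / gt
```

For example, 5 misses, 3 false alarms, and 2 ID switches over 320 ground-truth objects give a MOTA of 0.96875, close to the paper's 96.88%.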
Affiliations
- Su Myat Noe: Interdisciplinary Graduate School of Agriculture and Engineering, University of Miyazaki, Miyazaki 889-2192, Japan
- Thi Thi Zin, Pyke Tin: Graduate School of Engineering, University of Miyazaki, Miyazaki 889-2192, Japan
- Ikuo Kobayashi: Field Science Center, Faculty of Agriculture, University of Miyazaki, Miyazaki 889-2192, Japan
8
Son S, Ahn H, Baek H, Yu S, Suh Y, Lee S, Chung Y, Park D. StaticPigDet: Accuracy Improvement of Static Camera-Based Pig Monitoring Using Background and Facility Information. Sensors (Basel) 2022; 22:8315. PMID: 36366013; PMCID: PMC9655159; DOI: 10.3390/s22218315. Received 4 Oct 2022; accepted 27 Oct 2022.
Abstract
The automatic detection of individual pigs can improve the overall management of pig farms. The accuracy of single-image object detection has improved significantly over the years with advances in deep learning. However, differences in pig sizes and complex structures within the pig pens of a commercial pig farm, such as feeding facilities, challenge detection accuracy for pig monitoring. To implement such detection in practice, these differences should be analyzed from video recorded by a static camera. To accurately detect individual pigs that may differ in size or be occluded by complex structures, we present a deep-learning-based object detection method that utilizes background and facility information generated from image sequences (i.e., video) recorded by a static camera, which contain the relevant information. All images are first preprocessed to reduce differences in pig sizes. We then use the extracted background and facility information to create different combinations of gray images, which are combined into three-channel composite images used as training datasets to improve detection accuracy. Using the proposed method as an image-processing component improved overall accuracy from 84% to 94%. The study showed that an accurate facility and background image could be generated after long-term updating, which helped detection accuracy. Further studies could consider improving detection accuracy for overlapping pigs.
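The composite-input idea can be sketched as stacking three grayscale planes (current frame, generated background, facility map) into one three-channel training image. The channel ordering and function name are assumptions; the paper evaluates several combinations rather than one fixed stacking.

```python
# Illustrative channel stacking: three H x W grayscale planes become one
# H x W x 3 composite image, letting a standard RGB detector consume the
# frame together with background and facility context.

def compose_channels(frame_gray, background_gray, facility_gray):
    """Each input: 2D list (H x W); returns H x W x 3 nested lists."""
    return [[[f, b, s] for f, b, s in zip(fr, br, sr)]
            for fr, br, sr in zip(frame_gray, background_gray, facility_gray)]
```

This keeps the detector architecture unchanged: it still sees a three-channel image, only the channels now carry scene context instead of color.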
Affiliations
- Seungwook Son, Hanse Ahn, Hwapyeong Baek, Seunghyun Yu, Yongwha Chung, Daihee Park: Department of Computer Convergence Software, Korea University, Sejong 30019, Korea
- Yooil Suh: Info Valley Korea Co., Ltd., Anyang 14067, Korea
- Sungju Lee: Department of Software, Sangmyung University, Cheonan 31066, Korea
9
Wang S, Jiang H, Qiao Y, Jiang S, Lin H, Sun Q. The Research Progress of Vision-Based Artificial Intelligence in Smart Pig Farming. Sensors (Basel) 2022; 22:6541. PMID: 36080994; PMCID: PMC9460267; DOI: 10.3390/s22176541. Received 27 Jul 2022; accepted 27 Aug 2022.
Abstract
Pork accounts for an important proportion of livestock products, and pig farming requires considerable manpower, material resources, and time to monitor pig health and welfare. As the number of pigs in farming increases, the continued use of traditional monitoring methods may cause stress and harm to pigs and farmers and affect pig health and welfare as well as the economic output of farming. The application of artificial intelligence has accordingly become a core part of smart pig farming. A precision pig farming system uses sensors such as cameras and radio frequency identification to monitor biometric information such as pig sounds and pig behavior in real time and convert them into key indicators of pig health and welfare. By analyzing these key indicators, problems in pig health and welfare can be detected early and timely intervention and treatment provided, which helps to improve the production and economic efficiency of pig farming. This paper reviews more than 150 papers on precision pig farming and summarizes and evaluates the application of artificial intelligence technologies to pig detection, tracking, behavior recognition, and sound recognition. Finally, we summarize and discuss the opportunities and challenges of precision pig farming.
Affiliations
- Shunli Wang, Honghua Jiang, Huaiqin Lin, Qian Sun: College of Information Science and Engineering, Shandong Agricultural University, Tai’an 271018, China
- Yongliang Qiao (corresponding author): Australian Centre for Field Robotics (ACFR), Faculty of Engineering, The University of Sydney, Sydney, NSW 2006, Australia
- Shuzhen Jiang: College of Animal Science and Veterinary Medicine, Shandong Agricultural University, Tai’an 271018, China
10
An Integrated Goat Head Detection and Automatic Counting Method Based on Deep Learning. Animals (Basel) 2022; 12:1810. PMID: 35883357; PMCID: PMC9312201; DOI: 10.3390/ani12141810. Received 7 Jun 2022; accepted 14 Jul 2022.
Simple Summary
To achieve precision and intelligence in farming, automatic recognition and counting of goats are important and fundamental parts of large-scale goat farming. Currently, many farms with low modernization count manually, which is inefficient and, given the large population base and frequent counting needs, makes duplication and omission hard to avoid. To solve this problem in the farming process, an efficient and accurate goat counting method is urgently needed. In this study, we address the problem by constructing an integrated deep learning model for the automatic detection and counting of goats based on computer vision, with the Chengdu Ma goat as the research object. Notably, we improved the model using a series of advanced and effective strategies to enhance its performance. Experiments show that our method can achieve accurate automatic counting of goats in a practical breeding environment. The method is beneficial for the regionalized management of goat barns and can be applied to different goat species with high practicality.
Abstract
Goat farming is one of the pillar industries for the sustainable development of national economies in some countries and plays an active role in social and economic development. To realize precision and intelligence in goat breeding, this paper describes an integrated goat detection and counting method based on deep learning. First, we constructed a new dataset of video images of goats for the object tracking task. Then, we took YOLOv5 as the baseline object detector and improved it using a series of advanced methods: RandAugment to explore suitable data augmentation strategies in a real goat barn environment, AF-FPN to improve the network's ability to represent multi-scale objects, and the Dynamic Head framework to unify attention mechanisms with the detector's heads to improve performance. The improved detector achieved 92.19% mAP, a significant improvement over the 84.26% mAP of the original YOLOv5. We also fed the detector's output into DeepSORT for goat tracking and counting. The average overlap rate of our proposed method is 89.69%, significantly higher than the 82.78% of the original combination of YOLOv5 and DeepSORT. To avoid double counting as much as possible, goats were counted using single-line counting based on the goat head tracking results, which supports practical applications.
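Single-line counting from head tracks can be sketched as follows: a goat is counted the first time its tracked head centroid crosses a fixed counting line, here a horizontal line with downward motion. The line position, direction, and data layout are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of single-line counting: each track ID is counted at most
# once, on its first crossing of the line y = line_y between consecutive
# frames, which is what prevents double counting.

def count_line_crossings(tracks, line_y):
    """tracks: {track_id: [y0, y1, ...]} head-centroid y per frame."""
    counted = set()
    for tid, ys in tracks.items():
        for prev, cur in zip(ys, ys[1:]):
            if prev < line_y <= cur:   # crossed the line going down
                counted.add(tid)
                break
    return len(counted)
```

A goat that lingers near the line or is re-detected on later frames still contributes only one count, since its track ID is already in the counted set.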