1
Reza MN, Lee KH, Habineza E, Samsuzzaman, Kyoung H, Choi YK, Kim G, Chung SO. RGB-based machine vision for enhanced pig disease symptoms monitoring and health management: a review. Journal of Animal Science and Technology 2025;67:17-42. PMID: 39974778; PMCID: PMC11833201; DOI: 10.5187/jast.2024.e111.
Abstract
The growing demand for sustainable, efficient, and welfare-conscious pig husbandry has necessitated the adoption of advanced technologies. Among these, RGB imaging and machine vision technology may offer a promising solution for early disease detection and proactive disease management in advanced pig husbandry practices. This review explores innovative applications for monitoring disease symptoms by assessing features that directly or indirectly indicate disease risk, as well as for tracking body weight and overall health. Machine vision and image processing algorithms enable the real-time detection of subtle changes in pig appearance and behavior that may signify potential health issues. Key indicators include skin lesions, inflammation, ocular and nasal discharge, and deviations in posture and gait, each of which can be detected non-invasively using RGB cameras. Moreover, when integrated with thermal imaging, RGB systems can detect fever, a reliable indicator of infection, while behavioral monitoring systems can track abnormal posture, reduced activity, and altered feeding and drinking habits, which are often precursors to illness. The technology also facilitates the analysis of respiratory symptoms, such as coughing or sneezing (enabling early identification of respiratory diseases, one of the most significant challenges in pig farming), and the assessment of fecal consistency and color (providing valuable insights into digestive health). Early detection of disease or poor health supports proactive interventions, reducing mortality and improving treatment outcomes. Beyond direct symptom monitoring, RGB imaging and machine vision can indirectly assess disease risk by monitoring body weight, feeding behavior, and environmental factors such as overcrowding and temperature. However, further research is needed to refine the accuracy and robustness of algorithms in diverse farming environments. Ultimately, integrating RGB-based machine vision into existing farm management systems could provide continuous, automated surveillance, generating real-time alerts and actionable insights; these can support data-driven disease prevention strategies, reducing the need for mass medication and the development of antimicrobial resistance.
Affiliation(s)
- Md Nasim Reza: Department of Agricultural Machinery Engineering and Department of Smart Agricultural Systems, Graduate School, Chungnam National University, Daejeon 34134, Korea
- Kyu-Ho Lee: Department of Agricultural Machinery Engineering and Department of Smart Agricultural Systems, Graduate School, Chungnam National University, Daejeon 34134, Korea
- Eliezel Habineza: Department of Smart Agricultural Systems, Graduate School, Chungnam National University, Daejeon 34134, Korea
- Samsuzzaman: Department of Agricultural Machinery Engineering, Graduate School, Chungnam National University, Daejeon 34134, Korea
- Hyunjin Kyoung: Division of Animal and Dairy Science, Chungnam National University, Daejeon 34134, Korea
- Gookhwan Kim: National Institute of Agricultural Sciences, Rural Development Administration, Jeonju 54875, Korea
- Sun-Ok Chung: Department of Agricultural Machinery Engineering and Department of Smart Agricultural Systems, Graduate School, Chungnam National University, Daejeon 34134, Korea
2
Shao X, Liu C, Zhou Z, Xue W, Zhang G, Liu J, Yan H. Research on Dynamic Pig Counting Method Based on Improved YOLOv7 Combined with DeepSORT. Animals (Basel) 2024;14:1227. PMID: 38672375; PMCID: PMC11047650; DOI: 10.3390/ani14081227.
Abstract
A pig inventory is a crucial component of precise, large-scale farming. In complex pigsty environments, pigs' stress reactions and frequent obstructions make accurate, automatic counting challenging; most current deep learning studies, by contrast, rely on overhead views or static images for counting. This research proposes a video-based dynamic counting method combining YOLOv7 with DeepSORT. By utilizing the YOLOv7 network structure and replacing the second and third 3 × 3 convolution operations in the ELAN-W head modules with PConv, the model reduces the computational demand and improves inference speed without sacrificing accuracy. To ensure that the network acquires accurate position information at oblique angles and extracts rich semantic information, the coordinate attention (CA) mechanism is introduced before the three re-parameterized convolution (RepConv) paths in the head network, enhancing robustness in complex scenarios. Experimental results show that, compared to the original model, the improved model increases the mAP by 3.24, 0.05, and 1.00 percentage points for the oblique, overhead, and combined pig counting datasets, respectively, while reducing the computational cost by 3.6 GFLOPS. The enhanced YOLOv7 outperforms YOLOv5, YOLOv4, YOLOv3, Faster R-CNN, and SSD in target detection with mAP improvements of 2.07, 5.20, 2.16, 7.05, and 19.73 percentage points, respectively. In dynamic counting experiments, the improved YOLOv7 combined with DeepSORT was tested on videos with total pig counts of 144, 201, 285, and 295, yielding errors of -3, -3, -4, and -26, respectively, with an average accuracy of 96.58% at 22 FPS. This demonstrates the model's capability of counting pigs in real time across various scenes, providing valuable data and references for automated pig counting research.
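The counting principle here, tracking-by-detection with unique track identities, can be sketched compactly. The following is a minimal stand-in, assuming a hypothetical detector() callable that returns per-frame bounding boxes; a greedy IoU matcher replaces DeepSORT's appearance-based association and track survival logic, so this illustrates only the counting idea, not the paper's full pipeline.

```python
# Count pigs as the number of distinct track IDs ever assigned across a video.
# The greedy IoU tracker below is a self-contained stand-in for DeepSORT;
# detector(frame) is a hypothetical callable returning (x1, y1, x2, y2) boxes.
from itertools import count

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def count_pigs(frames, detector, iou_thresh=0.3):
    next_id = count()
    tracks = {}          # track_id -> last matched box
    seen_ids = set()     # every ID ever created = the running count
    for frame in frames:
        unmatched = dict(tracks)
        new_tracks = {}
        for box in detector(frame):
            # Greedily match each detection to the best surviving track.
            best = max(unmatched, key=lambda t: iou(unmatched[t], box), default=None)
            if best is not None and iou(unmatched[best], box) >= iou_thresh:
                new_tracks[best] = box
                del unmatched[best]
            else:
                tid = next(next_id)       # no match: start a new track
                new_tracks[tid] = box
                seen_ids.add(tid)
        tracks = new_tracks               # unmatched tracks die immediately here;
    return len(seen_ids)                  # DeepSORT instead ages them out
```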
Affiliation(s)
- Xiaobao Shao: College of Information Science and Engineering, Shanxi Agricultural University, Jinzhong 030801, China
- Chengcheng Liu: College of Information Science and Engineering, Shanxi Agricultural University, Jinzhong 030801, China
- Zhixuan Zhou: College of Information Science and Engineering, Shanxi Agricultural University, Jinzhong 030801, China
- Wenjing Xue: College of Information Science and Engineering, Shanxi Agricultural University, Jinzhong 030801, China
- Guoye Zhang: College of Information Science and Engineering, Shanxi Agricultural University, Jinzhong 030801, China
- Jianyu Liu: Science & Technology Information and Strategy Research Center of Shanxi, Taiyuan 030024, China
- Hongwen Yan: College of Information Science and Engineering, Shanxi Agricultural University, Jinzhong 030801, China
3
Automatic Detection and Segmentation for Group-Housed Pigs Based on PigMS R-CNN. Sensors 2021;21:3251. PMID: 34067210; PMCID: PMC8125813; DOI: 10.3390/s21093251.
Abstract
Instance segmentation is an accurate and reliable method for segmenting images of adhesive (touching) pigs, and is critical for providing health and welfare information on individual pigs, such as body condition score, live weight, and activity behaviors in group-housed environments. In this paper, a PigMS R-CNN framework based on mask scoring R-CNN (MS R-CNN) is explored to segment adhesive pig areas in group-pig images, enabling the separate identification and localization of group-housed pigs. The PigMS R-CNN consists of three processes. First, a 101-layer residual network, combined with a feature pyramid network (FPN), is used as the feature extraction network to obtain feature maps for input images. Then, from these feature maps, the region candidate network generates the regions of interest (RoIs). Finally, for each RoI, the location, classification, and segmentation results of detected pigs are obtained through the regression, classification, and mask branches of the PigMS R-CNN head network. To avoid missed pigs and erroneous detections in overlapping or stuck areas of group-housed pigs, the PigMS R-CNN framework replaces traditional non-maximum suppression (NMS) with soft-NMS in the post-processing selection step. The MS R-CNN framework with traditional NMS obtains an F1 of 0.9228. With the soft-NMS threshold set to 0.7 in PigMS R-CNN, detection of the target pigs achieves an F1 of 0.9374. This work explores a new instance segmentation method for images of adhesive group-housed pigs, providing a valuable foundation for vision-based, real-time automatic pig monitoring and welfare evaluation.
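Soft-NMS itself is a small, well-defined procedure (Bodla et al., 2017): rather than deleting every box that strongly overlaps a higher-scoring one, it decays the overlapping box's score, so partially occluded pigs are less likely to be suppressed outright. A minimal NumPy sketch of the Gaussian variant follows; the sigma and score-threshold values are illustrative assumptions, not the paper's settings (the paper reports a soft-NMS threshold of 0.7).

```python
# Minimal Gaussian soft-NMS sketch: boxes are (N, 4) arrays of (x1, y1, x2, y2),
# scores are (N,). Returns the indices kept, in descending score order.
import numpy as np

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    boxes = boxes.astype(float)
    scores = scores.astype(float).copy()
    idxs = np.arange(len(scores))
    keep = []
    while len(idxs) > 0:
        top = np.argmax(scores[idxs])
        best = idxs[top]
        keep.append(best)
        idxs = np.delete(idxs, top)
        if len(idxs) == 0:
            break
        # IoU of the remaining boxes with the current best box.
        x1 = np.maximum(boxes[best, 0], boxes[idxs, 0])
        y1 = np.maximum(boxes[best, 1], boxes[idxs, 1])
        x2 = np.minimum(boxes[best, 2], boxes[idxs, 2])
        y2 = np.minimum(boxes[best, 3], boxes[idxs, 3])
        inter = np.maximum(0, x2 - x1) * np.maximum(0, y2 - y1)
        area_best = np.prod(boxes[best, 2:] - boxes[best, :2])
        areas = (boxes[idxs, 2] - boxes[idxs, 0]) * (boxes[idxs, 3] - boxes[idxs, 1])
        iou = inter / (area_best + areas - inter + 1e-9)
        # Gaussian decay: heavily overlapping boxes lose score but may survive,
        # unlike hard NMS, which would delete them outright.
        scores[idxs] *= np.exp(-(iou ** 2) / sigma)
        idxs = idxs[scores[idxs] > score_thresh]
    return keep
```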
4
Li G, Huang Y, Chen Z, Chesser GD, Purswell JL, Linhoss J, Zhao Y. Practices and Applications of Convolutional Neural Network-Based Computer Vision Systems in Animal Farming: A Review. Sensors (Basel) 2021;21:1492. PMID: 33670030; PMCID: PMC7926480; DOI: 10.3390/s21041492.
Abstract
Convolutional neural network (CNN)-based computer vision systems have been increasingly applied in animal farming to improve animal management, but current knowledge, practices, limitations, and solutions remain to be expanded and explored. The objective of this study is to systematically review applications of CNN-based computer vision systems in animal farming in terms of five deep learning computer vision tasks: image classification, object detection, semantic/instance segmentation, pose estimation, and tracking. Cattle, sheep/goats, pigs, and poultry were the major farm animal species of concern. Preparations for system development, including camera settings, inclusion of variations in data recordings, choices of graphics processing units, image preprocessing, and data labeling, were summarized. CNN architectures were reviewed for each of the computer vision tasks in animal farming. Strategies of algorithm development included distribution of development data, data augmentation, hyperparameter tuning, and selection of evaluation metrics. Model performance, and how it varies with architecture, was discussed. Besides practices in optimizing CNN-based computer vision systems, system applications were also organized by year, country, animal species, and purpose. Finally, recommendations for future research were provided to develop and improve CNN-based computer vision systems for improved welfare, environment, engineering, genetics, and management of farm animals.
Affiliation(s)
- Guoming Li: Department of Agricultural and Biological Engineering, Mississippi State University, Starkville, MS 39762, USA
- Yanbo Huang: Agricultural Research Service, Genetics and Sustainable Agriculture Unit, United States Department of Agriculture, Starkville, MS 39762, USA
- Zhiqian Chen: Department of Computer Science and Engineering, Mississippi State University, Starkville, MS 39762, USA
- Gary D. Chesser: Department of Agricultural and Biological Engineering, Mississippi State University, Starkville, MS 39762, USA
- Joseph L. Purswell: Agricultural Research Service, Poultry Research Unit, United States Department of Agriculture, Starkville, MS 39762, USA
- John Linhoss: Department of Agricultural and Biological Engineering, Mississippi State University, Starkville, MS 39762, USA
- Yang Zhao: Department of Animal Science, The University of Tennessee, Knoxville, TN 37996, USA
5
6
Brünger J, Gentz M, Traulsen I, Koch R. Panoptic Segmentation of Individual Pigs for Posture Recognition. Sensors (Basel) 2020;20:3710. PMID: 32630794; PMCID: PMC7374502; DOI: 10.3390/s20133710.
Abstract
Behavioural research on pigs can be greatly simplified if automatic recognition systems are used. Systems based on computer vision have the particular advantage of allowing evaluation without affecting the animals' normal behaviour. In recent years, methods based on deep learning have been introduced and have shown excellent results. Object and keypoint detectors have frequently been used to detect individual animals. Despite promising results, bounding boxes and sparse keypoints do not trace the contours of the animals, so a lot of information is lost. This paper therefore follows the relatively new approach of panoptic segmentation and aims at pixel-accurate segmentation of individual pigs. A framework consisting of a neural network for semantic segmentation, together with different network heads and postprocessing methods, is discussed. The method was tested on a dataset of 1000 hand-labeled images created specifically for this experiment and achieves detection rates of around 95% (F1 score) despite disturbances such as occlusions and dirty lenses.
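The simplest postprocessing route from a semantic "pig" mask to individual animals is connected-component labelling; the sketch below, assuming a SciPy environment, shows that baseline. It cannot split touching pigs, which is precisely the case the paper's additional network heads and postprocessing are designed to handle.

```python
# Baseline instance extraction from a semantic segmentation: label connected
# components of the binary pig mask and drop tiny speckle regions. Touching
# animals merge into one component here; separating them needs the learned
# heads (or a watershed step) described in the paper.
import numpy as np
from scipy import ndimage

def mask_to_instances(pig_mask, min_area=500):
    """pig_mask: 2D boolean array from a semantic segmentation network.
    Returns a 2D int array where 0 is background and 1..K are pig instances."""
    labelled, num = ndimage.label(pig_mask)
    instances = np.zeros_like(labelled)
    next_id = 1
    for k in range(1, num + 1):
        component = labelled == k
        if component.sum() >= min_area:   # min_area is an assumed noise cutoff
            instances[component] = next_id
            next_id += 1
    return instances
```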
Affiliation(s)
- Johannes Brünger: Department of Computer Science, Kiel University, 24118 Kiel, Germany
- Maria Gentz: Department of Animal Sciences, Livestock Systems, Georg-August-University Göttingen, 37075 Göttingen, Germany
- Imke Traulsen: Department of Animal Sciences, Livestock Systems, Georg-August-University Göttingen, 37075 Göttingen, Germany
- Reinhard Koch: Department of Computer Science, Kiel University, 24118 Kiel, Germany
7
Psota ET, Schmidt T, Mote B, Pérez LC. Long-Term Tracking of Group-Housed Livestock Using Keypoint Detection and MAP Estimation for Individual Animal Identification. Sensors 2020;20:3670. PMID: 32630011; PMCID: PMC7374513; DOI: 10.3390/s20133670.
Abstract
Tracking individual animals in a group setting is an exigent task for computer vision and animal science researchers. When the objective is months of uninterrupted tracking and the targeted animals lack discernible differences in their physical characteristics, the task introduces significant challenges. To address these challenges, a probabilistic tracking-by-detection method is proposed. The tracking method uses, as input, visible keypoints of individual animals provided by a fully-convolutional detector. Individual animals are also equipped with ear tags that a classification network uses to assign unique identities to instances. The fixed cardinality of the targets is leveraged to create a continuous set of tracks, and the forward-backward algorithm is used to assign ear-tag identification probabilities to each detected instance. Tracking achieves real-time performance on consumer-grade hardware, in part because it does not rely on complex, costly, graph-based optimizations. A publicly available, human-annotated dataset is introduced to evaluate tracking performance. This dataset contains 15 half-hour videos of pigs with various ages/sizes, facility environments, and activity levels. Results demonstrate that the proposed method achieves an average precision and recall greater than 95% across the entire dataset. Analysis of the error events reveals the environmental conditions and social interactions most likely to cause errors in real-world deployments.
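The forward-backward step admits a compact illustration. The sketch below is a standard HMM smoother in NumPy: the K possible ear-tag identities are hidden states along one track, emissions[t, k] is the classifier's per-frame probability that the detection at time t shows tag k, and the self-transition probability is an assumed parameter (identity rarely changes along a track), not a value taken from the paper.

```python
# Standard forward-backward smoothing over one track (assumes K >= 2 identities).
# Returns gamma[t, k], the posterior probability that the track has identity k
# at time t, given all per-frame ear-tag classifier outputs.
import numpy as np

def forward_backward(emissions, stay=0.99):
    T, K = emissions.shape
    trans = np.full((K, K), (1 - stay) / (K - 1))   # small chance of ID switch
    np.fill_diagonal(trans, stay)
    alpha = np.zeros((T, K))
    beta = np.zeros((T, K))
    alpha[0] = emissions[0] / K                     # uniform prior over identities
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                           # forward recursion
        alpha[t] = emissions[t] * (alpha[t - 1] @ trans)
        alpha[t] /= alpha[t].sum()                  # normalize for stability
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):                  # backward recursion
        beta[t] = trans @ (emissions[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta                            # combine both passes
    return gamma / gamma.sum(axis=1, keepdims=True)
```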
Affiliation(s)
- Eric T. Psota (corresponding author): Department of Electrical and Computer Engineering, University of Nebraska–Lincoln, Lincoln, NE 68505, USA
- Ty Schmidt: Department of Animal Science, University of Nebraska–Lincoln, Lincoln, NE 68588, USA
- Benny Mote: Department of Animal Science, University of Nebraska–Lincoln, Lincoln, NE 68588, USA
- Lance C. Pérez: Department of Electrical and Computer Engineering, University of Nebraska–Lincoln, Lincoln, NE 68505, USA
8
Nasirahmadi A, Sturm B, Edwards S, Jeppsson KH, Olsson AC, Müller S, Hensel O. Deep Learning and Machine Vision Approaches for Posture Detection of Individual Pigs. Sensors (Basel) 2019;19:3738. PMID: 31470571; PMCID: PMC6749226; DOI: 10.3390/s19173738.
Abstract
Posture detection for monitoring the health and welfare of pigs has been of great interest to researchers from different disciplines. Existing studies applying machine vision techniques are mostly based on three-dimensional imaging systems, or on two-dimensional systems limited to monitoring under controlled conditions. Thus, the main goal of this study was to determine whether a two-dimensional imaging system, along with deep learning approaches, could be used to detect standing and lying (belly and side) postures of pigs under commercial farm conditions. Three deep learning-based detectors, namely faster regions with convolutional neural network features (Faster R-CNN), single shot multibox detector (SSD), and region-based fully convolutional network (R-FCN), combined with Inception V2, Residual Network (ResNet), and Inception ResNet V2 feature extractors applied to RGB images, were proposed. Data from different commercial farms were used for training and validation of the proposed models. The experimental results demonstrated that the R-FCN ResNet101 method detected lying and standing postures with average precision (AP) of 0.93, 0.95, and 0.92 for standing, lying on side, and lying on belly postures, respectively, and a mean average precision (mAP) of more than 0.93.
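The AP figures quoted here are areas under precision-recall curves; a minimal sketch of the standard computation follows, assuming detections are pre-sorted by descending score and tp marks which of them matched a ground-truth pig at the chosen IoU threshold (e.g., 0.5). mAP is then the mean of this value over classes.

```python
# All-points average precision from a ranked list of detections.
# tp: 0/1 array, in descending score order, of whether each detection matched
# an unclaimed ground-truth instance; n_gt: total ground-truth instances (> 0).
import numpy as np

def average_precision(tp, n_gt):
    tp = np.asarray(tp, dtype=float)
    cum_tp = np.cumsum(tp)
    recall = cum_tp / n_gt
    precision = cum_tp / np.arange(1, len(tp) + 1)
    # Interpolate: make precision non-increasing from right to left.
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    # Integrate precision over recall increments.
    prev_r, ap = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```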
Affiliation(s)
- Abozar Nasirahmadi: Department of Agricultural and Biosystems Engineering, University of Kassel, 37213 Witzenhausen, Germany
- Barbara Sturm: Department of Agricultural and Biosystems Engineering, University of Kassel, 37213 Witzenhausen, Germany
- Sandra Edwards: School of Natural and Environmental Sciences, Newcastle University, Newcastle upon Tyne NE1 7RU, UK
- Knut-Håkan Jeppsson: Department of Biosystems and Technology, Swedish University of Agricultural Sciences, 23053 Alnarp, Sweden
- Anne-Charlotte Olsson: Department of Biosystems and Technology, Swedish University of Agricultural Sciences, 23053 Alnarp, Sweden
- Simone Müller: Department of Animal Husbandry, Thuringian State Institute for Agriculture and Rural Development, 07743 Jena, Germany
- Oliver Hensel: Department of Agricultural and Biosystems Engineering, University of Kassel, 37213 Witzenhausen, Germany
9
Abstract
Fast detection of pigs is a crucial component of a surveillance environment whose ultimate purpose is the 24-hour tracking of individual pigs. In particular, a realistic pig farm environment involves various illumination conditions, such as direct sunlight, which previous work has not yet considered. We propose a fast method to detect pigs under various illumination conditions by exploiting the complementary information from depth and infrared images. By applying spatiotemporal interpolation, we first remove the noise caused by sunlight. Then, we carefully analyze the characteristics of both the depth and infrared information and detect pigs using only simple image processing techniques. Rather than relying on highly time-consuming techniques, such as frequency-, optimization-, or deep learning-based detection, our image processing-based method guarantees a fast execution time for the final goal, i.e., intelligent pig monitoring applications. In the experimental results, pigs were detected effectively with the proposed method in both accuracy (0.79) and execution time (8.71 ms), even under various illumination conditions.
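The spatiotemporal interpolation idea can be sketched simply: a sunlight dropout at one pixel is filled from the same pixel in adjacent frames, and failing that, from its spatial neighbourhood. The zero-means-invalid convention and the window sizes below are illustrative assumptions, not the paper's exact procedure.

```python
# Fill sunlight-induced dropouts in a depth video: temporal pass first
# (same pixel in neighbouring frames), then a spatial median-filter pass
# for whatever remains unfilled.
import numpy as np
from scipy import ndimage

def fill_depth(frames):
    """frames: (T, H, W) depth array with 0 marking invalid pixels."""
    filled = frames.astype(float)
    for t in range(len(frames)):
        invalid = frames[t] == 0
        # Temporal pass: average valid values at the same pixel in t-1 / t+1.
        neighbours = [frames[i] for i in (t - 1, t + 1) if 0 <= i < len(frames)]
        if neighbours:
            stack = np.stack(neighbours).astype(float)
            stack[stack == 0] = np.nan
            temporal = np.nanmean(stack, axis=0)   # NaN where no frame had data
            ok = invalid & ~np.isnan(temporal)
            filled[t][ok] = temporal[ok]
            invalid &= ~ok
        # Spatial pass: a median over the partially filled frame approximates
        # spatial interpolation for pixels the temporal pass could not recover.
        if invalid.any():
            med = ndimage.median_filter(filled[t], size=5)
            filled[t][invalid] = med[invalid]
    return filled
```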
10
Multi-Pig Part Detection and Association with a Fully-Convolutional Network. Sensors 2019;19:852. PMID: 30791377; PMCID: PMC6413214; DOI: 10.3390/s19040852.
Abstract
Computer vision systems have the potential to provide automated, non-invasive monitoring of livestock; however, the lack of public datasets with well-defined targets and evaluation metrics presents a significant challenge for researchers. Consequently, existing solutions often focus on achieving task-specific objectives using relatively small, private datasets. This work introduces a new dataset and method for instance-level detection of multiple pigs in group-housed environments. The method uses a single fully-convolutional neural network to detect the location and orientation of each animal, where both body part locations and pairwise associations are represented in the image space. Accompanying this method is a new dataset containing 2000 annotated images with 24,842 individually annotated pigs from 17 different locations. The proposed method achieves over 99% precision and over 96% recall when detecting pigs in environments previously seen by the network during training. To evaluate the robustness of the trained network, it is also tested on environments and lighting conditions unseen in the training set, where it achieves 91% precision and 67% recall. The dataset is publicly available for download.
11
Lee J, Kim DW, Won CS, Jung SW. Graph Cut-Based Human Body Segmentation in Color Images Using Skeleton Information from the Depth Sensor. Sensors (Basel) 2019;19:393. PMID: 30669363; PMCID: PMC6358916; DOI: 10.3390/s19020393.
Abstract
Segmentation of human bodies in images is useful for a variety of applications, including background substitution, human activity recognition, security, and video surveillance. However, human body segmentation has been a challenging problem due to the complicated shape and motion of a non-rigid human body. Meanwhile, depth sensors with advanced pattern recognition algorithms provide human body skeletons in real time with reasonable accuracy. In this study, we propose an algorithm that projects the human body skeleton from a depth image to a color image, where the human body region is segmented in the color image using the projected skeleton as a segmentation cue. Experimental results using the Kinect sensor demonstrate that the proposed method provides high-quality segmentation results and outperforms conventional methods.
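A close, minimal analogue of this skeleton-seeded graph cut can be built with OpenCV's GrabCut in mask-initialized mode, marking pixels near the projected joints as definite foreground. The joints argument and seed radius below are assumptions for illustration; the paper's own graph cut formulation differs in its details.

```python
# Skeleton-seeded graph cut sketch: seed GrabCut's mask with definite
# foreground near projected skeleton joints, probable background elsewhere.
import cv2
import numpy as np

def segment_body(image_bgr, joints, radius=15):
    """image_bgr: H x W x 3 uint8 image; joints: iterable of (x, y) pixel
    coordinates of skeleton joints projected from the depth sensor."""
    mask = np.full(image_bgr.shape[:2], cv2.GC_PR_BGD, dtype=np.uint8)
    for (x, y) in joints:
        cv2.circle(mask, (int(x), int(y)), radius, cv2.GC_FGD, thickness=-1)
    bgd_model = np.zeros((1, 65), np.float64)   # GrabCut's internal GMM buffers
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, None, bgd_model, fgd_model, 5,
                cv2.GC_INIT_WITH_MASK)
    # Definite and probable foreground together form the body mask.
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
```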
Affiliation(s)
- Jonha Lee: Department of Multimedia Engineering, Dongguk University, Pildong-ro 1gil 30, Jung-gu, Seoul 100-715, Korea
- Dong-Wook Kim: Department of Multimedia Engineering, Dongguk University, Pildong-ro 1gil 30, Jung-gu, Seoul 100-715, Korea
- Chee Sun Won: Department of Electronics and Electrical Engineering, Dongguk University, Pildong-ro 1gil 30, Jung-gu, Seoul 100-715, Korea
- Seung-Won Jung: Department of Multimedia Engineering, Dongguk University, Pildong-ro 1gil 30, Jung-gu, Seoul 100-715, Korea
12
Non-Contact Body Measurement for Qinchuan Cattle with LiDAR Sensor. Sensors 2018;18:3014. PMID: 30205607; PMCID: PMC6164280; DOI: 10.3390/s18093014.
Abstract
Body dimension measurement of large animals plays a significant role in quality improvement and genetic breeding, and non-contact measurement by computer vision-based remote sensing represents great progress over manual measurement, which is time-consuming and can trigger dangerous stress responses. This paper presents a novel approach to three-dimensional digital modeling of live adult Qinchuan cattle for body size measurement. After the original point data series of live cattle are captured by a Light Detection and Ranging (LiDAR) sensor, conditional, statistical-outlier, and voxel-grid filtering methods are fused to remove the background and outliers. After segmentation by K-means clustering extraction and the RANdom SAmple Consensus (RANSAC) algorithm, the Fast Point Feature Histogram (FPFH) is used to extract the cattle data automatically. The cattle surface is reconstructed into a 3D model using fast Iterative Closest Point (ICP) matching with bi-directional random K-D trees and a Greedy Projection Triangulation (GPT) reconstruction method, from which feature points on the cattle silhouette can be selected and measured. Finally, five body parameters (withers height, chest depth, back height, body length, and waist height) were measured in the field and verified to an accuracy of 2 mm and an error close to 2%. The experimental results show that this approach can be considered a new, feasible method for non-contact body measurement of large-physique livestock.
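The filtering chain described, voxel-grid downsampling, statistical outlier removal, and RANSAC segmentation to strip the dominant ground plane, maps directly onto the Open3D API; a minimal sketch under assumed file names and thresholds follows.

```python
# Minimal Open3D sketch of the preprocessing chain: downsample, denoise,
# remove the ground plane, keep the animal. File names and all numeric
# thresholds are illustrative assumptions, not the paper's settings.
import open3d as o3d

pcd = o3d.io.read_point_cloud("cattle_scan.pcd")

# Voxel-grid filter: thin the raw LiDAR cloud to a uniform density.
pcd = pcd.voxel_down_sample(voxel_size=0.02)

# Statistical outlier removal: drop points far from their neighbourhood mean.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# RANSAC plane fit: the dominant plane is taken to be the ground.
plane_model, inliers = pcd.segment_plane(distance_threshold=0.02,
                                         ransac_n=3, num_iterations=1000)
animal = pcd.select_by_index(inliers, invert=True)   # everything off the plane
o3d.io.write_point_cloud("cattle_no_ground.pcd", animal)
```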